| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1509.03374 | 2232794761 | QoS-aware networking applications such as real-time streaming and video surveillance systems require a nearly fixed average end-to-end delay over long periods to communicate efficiently, although they may tolerate some delay variations over short periods. This variability exhibits complex dynamics that make rate control of such applications a formidable task. This paper addresses rate allocation for heterogeneous QoS-aware applications that preserves a long-term end-to-end delay constraint while, similar to Dynamic Network Utility Maximization (DNUM), striving to achieve the maximum network utility aggregated over a fixed time interval. Since our system model allows capturing temporal dynamics in the QoS requirements of sources, we incorporate a novel time-coupling constraint that accounts for the delay sensitivity of sources, such that a certain end-to-end average delay is satisfied for each source over a pre-specified time interval. We propose the DA-DNUM algorithm, a dual-based solution that allocates source rates for the next time interval in a distributed fashion, given knowledge of the network parameters in advance. Through numerical experiments, we show that DA-DNUM attains higher average link utilization and a wider range of feasible scenarios in comparison with the best rate control schemes known to us that can guarantee such constraints on delay. | In another set of works @cite_3 @cite_8 @cite_14 @cite_22 , the source delay is incorporated as a constraint of the optimization problem. By introducing the Virtual Link Capacity Margin (VLCM) to characterize source delay as a constraint of the problem, the authors in @cite_14 @cite_4 have proposed a joint rate allocation and scheduling scheme for multi-hop wireless networks. Taking a different approach, @cite_8 formulates another variant of the NUM problem to address joint power and rate control. 
Moreover, in @cite_3 , using an elegant fluid model of multi-class flows with different delay requirements, another distributed and stable delay-aware algorithm is proposed. In contrast to these single-period NUM-based studies, the NUM framework is intrinsically incapable of capturing temporal variations in network characteristics, especially when those characteristics evolve on time scales comparable to those of the underlying dual-based algorithms. Generally speaking, (single-period) NUM with delay constraints suffers from limited degrees of freedom, and as a result one may face a broad range of infeasible problems. We investigate this phenomenon in detail in our experiments. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_3"
],
"mid": [
"2075343554",
"",
"2154594768",
"2054629214",
"2097878592"
],
"abstract": [
"This paper investigates the optimal rate allocation problem with end-to-end delay constraints in multi-hop wireless networks. We introduce Virtual Link Capacity Margin (VLCM), which is the gap between the schedulable link capacity and the maximum allowable flow rate over a link, for link delay control. We formulate the problem as a utility maximization framework with two sets of constraints: 1) capacity and schedulability constraints and 2) end-to-end delay constraints. By dual decomposition of the original optimization problem, we present a control algorithm that jointly tunes the flow rates and VLCMs, through a double-price scheme derived with regard to the constraints: the link congestion price reflecting the traffic load of a link, and the flow delay price reflecting the margin between the average packet delay and the delay requirement of a flow. We prove that the algorithm converges to a global optimum where the aggregate network utility defined over the flow rate set is maximized, while the delay constraints are satisfied. A key feature of our algorithm is it does not rely on a specific traffic model. The algorithm is implemented distributedly via joint wireless link scheduling and congestion control. Simulation results show that our algorithm outperforms heuristic rate allocation algorithms while satisfying the end-to-end delay constraints.",
"",
"This paper studies the problem of scheduling in single-hop wireless networks with real-time traffic, where every packet arrival has an associated deadline and a minimum fraction of packets must be transmitted before the end of the deadline. Using optimization and stochastic network theory we study the problem of scheduling to meet quality of service (QoS) requirements under heterogeneous delay constraints and time-varying channel conditions. Our analysis results in an optimal scheduling algorithm which fairly allocates data rates to all flows while meeting long-term delay demands. We also prove that under a simplified scenario our solution translates into a greedy strategy that makes optimal decisions with low complexity.",
"Allocating limited resources such as bandwidth and power in a multi-hop wireless network can be formulated as a Network Utility Maximization (NUM) problem. In this approach, both transmitting source nodes and relaying link nodes exchange information allowing for the NUM problem to be solved in an iterative distributed manner. Some previous NUM formulations of wireless network problems have considered the parameters of data rate, reliability, and transmitter powers either in the source utility function which measures an application's performance or as constraints. However, delay is also an important factor in the performance of many applications. In this paper, we consider an additional constraint based on the average queueing delay requirements of the sources. In particular, we examine an augmented NUM formulation in which rate and power control in a wireless network are balanced to achieve bounded average queueing delays for sources. With the additional delay constraints, the augmented NUM problem is non-convex. Therefore, we present a change of variable to transform the problem to a convex problem and we develop a solution which results in a distributed rate and power control algorithm tailored to achieving bounded average queueing delays. Simulation results demonstrate the efficacy of the distributed algorithm.",
"A fluid model of multi-class flows with priority packet scheduling is considered for controlling the flow rates, their end-to-end delays and their packet losses. We derive a globally stable distributed rate and delay combined control when no information time lags are present. By properly sizing the router buffers, the stable rates attain the end-to-end delay requirements without any packet loss. By further enhancing the network with bandwidth reservation and admission control, we also show that minimum rate is guaranteed. The stability properties of the discrete time version of our control are also derived when no information time lags are present. The stability in the presence of information time lags is studied numerically by computing the delay and rate trajectories for a real test-bed network."
]
} |
1509.03374 | 2232794761 | QoS-aware networking applications such as real-time streaming and video surveillance systems require a nearly fixed average end-to-end delay over long periods to communicate efficiently, although they may tolerate some delay variations over short periods. This variability exhibits complex dynamics that make rate control of such applications a formidable task. This paper addresses rate allocation for heterogeneous QoS-aware applications that preserves a long-term end-to-end delay constraint while, similar to Dynamic Network Utility Maximization (DNUM), striving to achieve the maximum network utility aggregated over a fixed time interval. Since our system model allows capturing temporal dynamics in the QoS requirements of sources, we incorporate a novel time-coupling constraint that accounts for the delay sensitivity of sources, such that a certain end-to-end average delay is satisfied for each source over a pre-specified time interval. We propose the DA-DNUM algorithm, a dual-based solution that allocates source rates for the next time interval in a distributed fashion, given knowledge of the network parameters in advance. Through numerical experiments, we show that DA-DNUM attains higher average link utilization and a wider range of feasible scenarios in comparison with the best rate control schemes known to us that can guarantee such constraints on delay. | To capture dynamics in the network and sources, the NUM framework has been extended to the DNUM framework @cite_7 , which supports time-varying network model parameters such as flow utilities, link capacities and the routing matrix. The DNUM framework has since been applied in different research areas @cite_2 @cite_16 . In @cite_2 , this time-varying nature is exploited to model temporal variations in the utility functions of sources with video streaming applications. The authors in @cite_16 have proposed another solution to DNUM based on distributed Newton methods. 
To the best of our knowledge, this is the first work that extends the DNUM framework to characterize the average delay requirements of sources in a general and well-structured way. | {
"cite_N": [
"@cite_16",
"@cite_7",
"@cite_2"
],
"mid": [
"2118317131",
"1988044997",
"1996704710"
],
"abstract": [
"The standard Network Utility Maximization (NUM) problem has a static formulation, which fails to capture the temporal dynamics in modern networks. This work considers a dynamic version of the NUM problem by introducing additional constraints, referred to as delivery contracts. Each delivery contract specifies the amount of information that needs to be delivered over a certain time interval for a particular source and is motivated by applications such as video streaming or webpage loading. The existing distributed algorithms for the Network Utility Maximization problems are either only applicable for the static version of the problem or rely on dual decomposition and first-order (gradient or subgradient) methods, which are slow in convergence. In this work, we develop a distributed Newton-type algorithm for the dynamic problem, which is implemented in the primal space and involves computing the dual variables at each primal step. We propose a novel distributed iterative approach for calculating the dual variables with finite termination based on matrix splitting techniques. It can be shown that if the error level in the Newton direction (resulting from finite termination of dual iterations) is below a certain threshold, then the algorithm achieves local quadratic convergence rate to an error neighborhood of the optimal solution in the primal space. Simulation results demonstrate significant convergence rate improvement of our algorithm, relative to the existing first-order methods based on dual decomposition.",
"We consider a multi-period variation on the network utility maximization problem that includes delivery constraints. We allow the flow utilities, link capacities and routing matrices to vary over time, and we introduce the concept of delivery contracts, which couple the flow rates across time. We describe a distributed algorithm, based on dual decomposition, that solves this problem when all data is known ahead of time. We briefly describe a heuristic, based on model predictive control, for approximately solving a variation on the problem, in which the data are not known ahead of time. The formulation and algorithms are illustrated with numerical examples.",
"Nowadays it is vital to design robust mechanisms to provide QoS for multimedia applications as an integral part of the network traffic. The main goal of this paper is to provide an efficient rate control scheme to support content-aware video transmission mechanism with buffer underflow avoidance at the receiver in congested networks. Towards this, we introduce a content-aware time-varying utility function, in which the quality impact of video content is incorporated into its mathematical expression. Moreover, we analytically model the buffer requirements of video sources in two ways: first as constraints of the optimization problem to guarantee a minimum rate demand for each source, and second as a penalty function embedded as part of the objective function attempting to achieve the highest possible rate for each source. Then, using the proposed analytical model, we formulate a dynamic network utility maximization problem, which aims to maximize the aggregate hybrid objective function of sources subject to capacity and buffer constraints. Finally, using primal-dual method, we solve DNUM problem and propose a distributed algorithm called CA-DNUM that optimally allocates the shared bandwidth to video streams. The experimental results demonstrate the efficacy and performance improvement of the proposed content-aware rate allocation algorithm for video sources in different scenarios."
]
} |
1509.02596 | 2951693551 | China has experienced a spectacular economic growth in recent decades. Its economy grew more than 48 times from 1980 to 2013. How are the other countries reacting to China's rise? Do they see it as an economic opportunity or a security threat? In this paper, we answer this question by analyzing online news reports about China published in Australia, France, Germany, Japan, Russia, South Korea, the UK and the US. More specifically, we first analyze the frequency with which China has appeared in news headlines, which is a measure of China's influence in the world. Second, we build a Naive Bayes classifier to study the evolving nature of the news reports, i.e., whether they are economic or political. We then evaluate the friendliness of the news coverage based on sentiment analysis. Empirical results indicate that there has been increasing news coverage of China in all the countries under study. We also find that the emphasis of the reports is generally shifting towards China's economy. Here Japan and South Korea are exceptions: they are reporting more on Chinese politics. In terms of global sentiment, the picture is quite gloomy. With the exception of Australia and, to some extent, France, all the other countries under examination are becoming less positive towards China. | In order to understand and measure foreign perceptions, we examine foreign news media, which have long drawn the attention of IR scholars. For example, Ramos, Ron and Thoms study the reports of leading Western newspapers to answer the question of what influences the Northern media's coverage of events and abuses in explicit human rights terms @cite_6 . Emilie Hafner-Burton constructs an autoregressive model to analyze the effects of naming and shaming on political terror and political rights abuses @cite_10 . 
More recently, Alastair Johnston analyzes Chinese publications and argues that China has not become more assertive despite all the Western suspicion and accusations @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_10",
"@cite_6"
],
"mid": [
"2112332950",
"2123359946",
""
],
"abstract": [
"There has been a rapidly spreading meme in U.S. pundit and academic circles since 2010 that describes China's recent diplomacy as \"newly assertive.\" This \"new assertiveness\" meme suffers from two problems. First, it underestimates the complexity of key episodes in Chinese diplomacy in 2010 and overestimates the amount of change. Second, the explanations for the new assertiveness claim suffer from unclear causal mechanisms and lack comparative rigor that would better contextualize China's diplomacy in 2010. An examination of seven cases in Chinese diplomacy at the heart of the new assertiveness meme finds that, in some instances, China's policy has not changed; in others, it is actually more moderate; and in still others, it is a predictable reaction to changed external conditions. In only one case—maritime disputes—does one see more assertive Chinese rhetoric and behavior. The speed and extent with which the newly assertive meme has emerged point to an understudied issue in international relations—namely, the role that online media and the blogosphere play in the creation of conventional wisdoms that might, in turn, constrain policy debates. The assertive China discourse may be a harbinger of this effect as a Sino-U.S. security dilemma emerges.",
"“Naming and shaming” is a popular strategy to enforce international human rights norms and laws. Nongovernmental organizations, news media, and international organizations publicize countries' violations and urge reform. Evidence that these spotlights are followed by improvements is anecdotal. This article analyzes the relationship between global naming and shaming efforts and governments' human rights practices for 145 countries from 1975 to 2000. The statistics show that governments put in the spotlight for abuses continue or even ramp up some violations afterward, while reducing others. One reason is that governments' capacities for human rights improvements vary across types of violations. Another is that governments are strategically using some violations to offset other improvements they make in response to international pressure to stop violations.",
""
]
} |
1509.02596 | 2951693551 | China has experienced a spectacular economic growth in recent decades. Its economy grew more than 48 times from 1980 to 2013. How are the other countries reacting to China's rise? Do they see it as an economic opportunity or a security threat? In this paper, we answer this question by analyzing online news reports about China published in Australia, France, Germany, Japan, Russia, South Korea, the UK and the US. More specifically, we first analyze the frequency with which China has appeared in news headlines, which is a measure of China's influence in the world. Second, we build a Naive Bayes classifier to study the evolving nature of the news reports, i.e., whether they are economic or political. We then evaluate the friendliness of the news coverage based on sentiment analysis. Empirical results indicate that there has been increasing news coverage of China in all the countries under study. We also find that the emphasis of the reports is generally shifting towards China's economy. Here Japan and South Korea are exceptions: they are reporting more on Chinese politics. In terms of global sentiment, the picture is quite gloomy. With the exception of Australia and, to some extent, France, all the other countries under examination are becoming less positive towards China. | Methodologically, our work uses text classification and sentiment analysis. First, we build a Naive Bayes classifier to investigate the evolving nature of foreign reports on China. This is related to many studies in text classification @cite_11 @cite_27 . In a recent study, a Naive Bayes classifier is used to classify SMS messages received by UNICEF Uganda into eleven classes @cite_19 . The fact that their messages often contain various abbreviations and spelling errors seriously affects their initial results. The same data noise problem is not present here, as we are examining news articles published by leading news groups in the respective countries. 
We test our classifier with 800 manually labeled articles, randomly chosen from the target newspapers. The test results, reported in Section 3, show that the Naive Bayes classifier is adequate for our task. | {
"cite_N": [
"@cite_19",
"@cite_27",
"@cite_11"
],
"mid": [
"1975700576",
"",
"2095655043"
],
"abstract": [
"U-report is an open-source SMS platform operated by UNICEF Uganda, designed to give community members a voice on issues that impact them. Data received by the system are either SMS responses to a poll conducted by UNICEF, or unsolicited reports of a problem occurring within the community. There are currently 200,000 U-report participants, and they send up to 10,000 unsolicited text messages a week. The objective of the program in Uganda is to understand the data in real-time, and have issues addressed by the appropriate department in UNICEF in a timely manner. Given the high volume and velocity of the data streams, manual inspection of all messages is no longer sustainable. This paper describes an automated message-understanding and routing system deployed by IBM at UNICEF. We employ recent advances in data mining to get the most out of labeled training data, while incorporating domain knowledge from experts. We discuss the trade-offs, design choices and challenges in applying such techniques in a real-world deployment.",
"",
"Politics and political conflict often occur in the written and spoken word. Scholars have long recognized this, but the massive costs of analyzing even moderately sized collections of texts have hindered their use in political science research. Here lies the promise of automated text analysis: it substantially reduces the costs of analyzing large collections of text. We provide a guide to this exciting new area of research and show how, in many instances, the methods have already obtained part of their promise. But there are pitfalls to using automated methods—they are no substitute for careful thought and close reading and require extensive and problem-specific validation. We survey a wide range of new methods, provide guidance on how to validate the output of the models, and clarify misconceptions and errors in the literature. To conclude, we argue that for automated text methods to become a standard tool for political scientists, methodologists must contribute new methods and new methods of validation. Language is the medium for politics and political conflict. Candidates debate and state policy positions during a campaign. Once elected, representatives write and debate legislation. After laws are passed, bureaucrats solicit comments before they issue regulations. Nations regularly negotiate and then sign peace treaties, with language that signals the motivations and relative power of the countries involved. News reports document the day-to-day affairs of international relations that provide a detailed picture of conflict and cooperation. Individual candidates and political parties articulate their views through party platforms and manifestos. Terrorist groups even reveal their preferences and goals through recruiting materials, magazines, and public statements. These examples, and many others throughout political science, show that to understand what politics is about we need to know what political actors are saying and writing. 
Recognizing that language is central to the study of politics is not new. To the contrary, scholars of politics have long recognized that much of politics is expressed in words. But scholars have struggled when using texts to make inferences about politics. The primary problem is volume: there are simply too many political texts. Rarely are scholars able to manually read all the texts in even moderately sized corpora. And hiring coders to manually read all documents is still very expensive. The result is that"
]
} |
1509.02596 | 2951693551 | China has experienced a spectacular economic growth in recent decades. Its economy grew more than 48 times from 1980 to 2013. How are the other countries reacting to China's rise? Do they see it as an economic opportunity or a security threat? In this paper, we answer this question by analyzing online news reports about China published in Australia, France, Germany, Japan, Russia, South Korea, the UK and the US. More specifically, we first analyze the frequency with which China has appeared in news headlines, which is a measure of China's influence in the world. Second, we build a Naive Bayes classifier to study the evolving nature of the news reports, i.e., whether they are economic or political. We then evaluate the friendliness of the news coverage based on sentiment analysis. Empirical results indicate that there has been increasing news coverage of China in all the countries under study. We also find that the emphasis of the reports is generally shifting towards China's economy. Here Japan and South Korea are exceptions: they are reporting more on Chinese politics. In terms of global sentiment, the picture is quite gloomy. With the exception of Australia and, to some extent, France, all the other countries under examination are becoming less positive towards China. | In addition to classifying news articles, our work also evaluates the sentiments of these articles. In related work, Pang et al. evaluate the sentiments in movie reviews @cite_20 . Agarwal et al. analyze sentiments in Twitter messages @cite_4 . Joo et al. study the communicative intents of images @cite_23 . Compared with movie reviews and tweets, the news articles in this study are much longer. As will be detailed later, for each newspaper we will concatenate all the articles published within a specific year into one super article. This makes our text long and rich. The extraordinary lengths of these texts make them ideal candidates for sentiment evaluation based on discriminant words. 
As we only have access to an English dictionary of negative words, when processing news articles in French and German, we first use Google Translate to translate that dictionary into French and German. | {
"cite_N": [
"@cite_23",
"@cite_4",
"@cite_20"
],
"mid": [
"2073775149",
"1743243001",
"2166706824"
],
"abstract": [
"In this paper we introduce the novel problem of understanding visual persuasion. Modern mass media make extensive use of images to persuade people to make commercial and political decisions. These effects and techniques are widely studied in the social sciences, but behavioral studies do not scale to massive datasets. Computer vision has made great strides in building syntactical representations of images, such as detection and identification of objects. However, the pervasive use of images for communicative purposes has been largely ignored. We extend the significant advances in syntactic analysis in computer vision to the higher-level challenge of understanding the underlying communicative intent implied in images. We begin by identifying nine dimensions of persuasive intent latent in images of politicians, such as \"socially dominant, \" \"energetic, \" and \"trustworthy, \" and propose a hierarchical model that builds on the layer of syntactical attributes, such as \"smile\" and \"waving hand, \" to predict the intents presented in the images. To facilitate progress, we introduce a new dataset of 1, 124 images of politicians labeled with ground-truth intents in the form of rankings. This study demonstrates that a systematic focus on visual persuasion opens up the field of computer vision to a new class of investigations around mediated images, intersecting with media analysis, psychology, and political communication.",
"We examine sentiment analysis on Twitter data. The contributions of this paper are: (1) We introduce POS-specific prior polarity features. (2) We explore the use of a tree kernel to obviate the need for tedious feature engineering. The new features (in conjunction with previously proposed features) and the tree kernel perform approximately at the same level, both outperforming the state-of-the-art baseline.",
"We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging."
]
} |
1509.02317 | 2108105996 | Object Proposals is a recent computer vision technique receiving increasing interest from the research community. Its main objective is to generate a relatively small set of bounding box proposals that are most likely to contain objects of interest. The use of Object Proposals techniques in the scene text understanding field is innovative. Motivated by the success of powerful yet expensive techniques to recognize words in a holistic way, Object Proposals techniques emerge as an alternative to traditional text detectors. In this paper we study to what extent the existing generic Object Proposals methods may be useful for scene text understanding. We also propose a new Object Proposals algorithm that is specifically designed for text and compare it with other generic methods in the state of the art. Experiments show that our proposal is superior in its ability to produce good-quality word proposals efficiently. The source code of our method is made publicly available. | The use of Object Proposals methods to generate candidate class-independent object locations has become a popular trend in computer vision in recent times. A comprehensive survey can be found in Hosang et al. @cite_2 . In general terms, we can distinguish between two major types of Object Proposals methods: those that make use of exhaustive search to evaluate a fast-to-compute objectness measure @cite_0 @cite_22 @cite_23 , and those where the search is segmentation-driven @cite_1 @cite_16 @cite_5 . | {
"cite_N": [
"@cite_22",
"@cite_1",
"@cite_0",
"@cite_23",
"@cite_2",
"@cite_5",
"@cite_16"
],
"mid": [
"2010181071",
"2088049833",
"2066624635",
"7746136",
"",
"",
"2121660792"
],
"abstract": [
"Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows into a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2% object detection rate (DR) with 1,000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5% DR.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99% recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http://disi.unitn.it/~uijlings/SelectiveSearch.html ).",
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"",
"",
"Generic object detection is the challenging task of proposing windows that localize all the objects in an image, regardless of their classes. Such detectors have recently been shown to benefit many applications such as speeding-up class-specific object detection, weakly supervised learning of object detectors and object discovery. In this paper, we introduce a novel and very efficient method for generic object detection based on a randomized version of Prim's algorithm. Using the connectivity graph of an image's super pixels, with weights modelling the probability that neighbouring super pixels belong to the same object, the algorithm generates random partial spanning trees with large expected sum of edge weights. Object localizations are proposed as bounding-boxes of those partial trees. Our method has several benefits compared to the state-of-the-art. Thanks to the efficiency of Prim's algorithm, it samples proposals very quickly: 1000 proposals are obtained in about 0.7s. With proposals bound to super pixel boundaries yet diversified by randomization, it yields very high detection rates and windows that tightly fit objects. In extensive experiments on the challenging PASCAL VOC 2007 and 2012 and SUN2012 benchmark datasets, we show that our method improves over state-of-the-art competitors for a wide range of evaluation scenarios."
]
} |
1509.02317 | 2108105996 | Object Proposals is a recent computer vision technique receiving increasing interest from the research community. Its main objective is to generate a relatively small set of bounding box proposals that are most likely to contain objects of interest. The use of Object Proposals techniques in the scene text understanding field is innovative. Motivated by the success of powerful while expensive techniques to recognize words in a holistic way, Object Proposals techniques emerge as an alternative to the traditional text detectors. In this paper we study to what extent the existing generic Object Proposals methods may be useful for scene text understanding. Also, we propose a new Object Proposals algorithm that is specifically designed for text and compare it with other generic methods in the state of the art. Experiments show that our proposal is superior in its ability of producing good quality word proposals in an efficient way. The source code of our method is made publicly available. | In the first category, Alexe al @cite_0 propose a generic objectness measure for a given image window that combines several image cues, such as a saliency score , the color contrast to its immediate surrounding area, the edge density, and the number of straddling contours. Computation of these features is made efficient by using integral images. Cheng al @cite_22 propose a very fast objectness score using the norm of image gradients in a sliding window, with a suitable resizing of windows into a small fixed size. A different sliding window driven approach is given by Zitnick al @cite_23 , where a box objectness score is measured as the number of edges @cite_10 that are wholly contained in the box minus those that are members of contours that overlap the box's boundary. Using efficient data structures they manage to evaluate millions of candidate boxes in a fraction of second. | {
"cite_N": [
"@cite_0",
"@cite_10",
"@cite_22",
"@cite_23"
],
"mid": [
"2066624635",
"2129587342",
"2010181071",
"7746136"
],
"abstract": [
"We present a generic objectness measure, quantifying how likely it is for an image window to contain an object of any class. We explicitly train it to distinguish objects with a well-defined boundary in space, such as cows and telephones, from amorphous background elements, such as grass and road. The measure combines in a Bayesian framework several image cues measuring characteristics of objects, such as appearing different from their surroundings and having a closed boundary. These include an innovative cue to measure the closed boundary characteristic. In experiments on the challenging PASCAL VOC 07 dataset, we show this new cue to outperform a state-of-the-art saliency measure, and the combined objectness measure to perform better than any cue alone. We also compare to interest point operators, a HOG detector, and three recent works aiming at automatic object segmentation. Finally, we present two applications of objectness. In the first, we sample a small numberof windows according to their objectness probability and give an algorithm to employ them as location priors for modern class-specific object detectors. As we show experimentally, this greatly reduces the number of windows evaluated by the expensive class-specific model. In the second application, we use objectness as a complementary score in addition to the class-specific model, which leads to fewer false positives. As shown in several recent papers, objectness can act as a valuable focus of attention mechanism in many other applications operating on image windows, including weakly supervised learning of object categories, unsupervised pixelwise segmentation, and object tracking in video. Computing objectness is very efficient and takes only about 4 sec. per image.",
"Edge detection is a critical component of many vision systems, including object detectors and image segmentation algorithms. Patches of edges exhibit well-known forms of local structure, such as straight lines or T-junctions. In this paper we take advantage of the structure present in local image patches to learn both an accurate and computationally efficient edge detector. We formulate the problem of predicting local edge masks in a structured learning framework applied to random decision forests. Our novel approach to learning decision trees robustly maps the structured labels to a discrete space on which standard information gain measures may be evaluated. The result is an approach that obtains real time performance that is orders of magnitude faster than many competing state-of-the-art approaches, while also achieving state-of-the-art edge detection results on the BSDS500 Segmentation dataset and NYU Depth dataset. Finally, we show the potential of our approach as a general purpose edge detector by showing our learned edge models generalize well across datasets.",
"Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows in to a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2 object detection rate (DR) with 1, 000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5 DR.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy."
]
} |
1509.02317 | 2108105996 | Object Proposals is a recent computer vision technique receiving increasing interest from the research community. Its main objective is to generate a relatively small set of bounding box proposals that are most likely to contain objects of interest. The use of Object Proposals techniques in the scene text understanding field is innovative. Motivated by the success of powerful while expensive techniques to recognize words in a holistic way, Object Proposals techniques emerge as an alternative to the traditional text detectors. In this paper we study to what extent the existing generic Object Proposals methods may be useful for scene text understanding. Also, we propose a new Object Proposals algorithm that is specifically designed for text and compare it with other generic methods in the state of the art. Experiments show that our proposal is superior in its ability of producing good quality word proposals in an efficient way. The source code of our method is made publicly available. | The use of Object Proposals techniques in scene text understanding has been exploited very recently in two state-of-the-art word-spotting methods @cite_6 @cite_19 while in a distinct manner. In our previous work @cite_6 we propose a text specific selective search method adopting a similar strategy to the selective search of Uijlings al @cite_1 and a holistic word recognition method based on Fisher Vector representations. On the other hand, Jaderberg al @cite_19 opt for the use of a generic Object Proposals algorithm @cite_23 and deep convolutional neural networks for recognition. | {
"cite_N": [
"@cite_19",
"@cite_23",
"@cite_1",
"@cite_6"
],
"mid": [
"2952250911",
"7746136",
"2088049833",
""
],
"abstract": [
"In this work we present an end-to-end system for text spotting -- localising and recognising text in natural scene images -- and text based image retrieval. This system is based on a region proposal mechanism for detection and deep convolutional neural networks for recognition. Our pipeline uses a novel combination of complementary proposal generation techniques to ensure high recall, and a fast subsequent filtering stage for improving precision. For the recognition and ranking of proposals, we train very large convolutional neural networks to perform word recognition on the whole proposal region at the same time, departing from the character classifier based systems of the past. These networks are trained solely on data produced by a synthetic text generation engine, requiring no human labelled data. Analysing the stages of our pipeline, we show state-of-the-art performance throughout. We perform rigorous experiments across a number of standard end-to-end text spotting benchmarks and text-based image retrieval datasets, showing a large improvement over all previous methods. Finally, we demonstrate a real-world application of our text spotting system to allow thousands of hours of news footage to be instantly searchable via a text query.",
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html ).",
""
]
} |
1509.02317 | 2108105996 | Object Proposals is a recent computer vision technique receiving increasing interest from the research community. Its main objective is to generate a relatively small set of bounding box proposals that are most likely to contain objects of interest. The use of Object Proposals techniques in the scene text understanding field is innovative. Motivated by the success of powerful while expensive techniques to recognize words in a holistic way, Object Proposals techniques emerge as an alternative to the traditional text detectors. In this paper we study to what extent the existing generic Object Proposals methods may be useful for scene text understanding. Also, we propose a new Object Proposals algorithm that is specifically designed for text and compare it with other generic methods in the state of the art. Experiments show that our proposal is superior in its ability of producing good quality word proposals in an efficient way. The source code of our method is made publicly available. | The method proposed in this paper builds on top of our previous work @cite_18 @cite_4 @cite_6 , where initial regions in the image are grouped by agglomerative clustering, using complementary similarity measures, in hierarchies where each node defines a possible word hypothesis. But differs from it in two main aspects: First, we do not rely in a classifier to make strong decisions to discriminate text groups from not-text groups, second, we do not combine the different cues in any way. | {
"cite_N": [
"@cite_18",
"@cite_4",
"@cite_6"
],
"mid": [
"1966693245",
"2950124142",
""
],
"abstract": [
"Scene text extraction methodologies are usually based in classification of individual regions or patches, using a priori knowledge for a given script or language. Human perception of text, on the other hand, is based on perceptual organisation through which text emerges as a perceptually significant group of atomic objects. Therefore humans are able to detect text even in languages and scripts never seen before. In this paper, we argue that the text extraction problem could be posed as the detection of meaningful groups of regions. We present a method built around a perceptual organisation framework that exploits collaboration of proximity and similarity laws to create text-group hypotheses. Experiments demonstrate that our algorithm is competitive with state of the art approaches on a standard dataset covering text in variable orientations and two languages.",
"Typography and layout lead to the hierarchical organisation of text in words, text lines, paragraphs. This inherent structure is a key property of text in any script and language, which has nonetheless been minimally leveraged by existing text detection methods. This paper addresses the problem of text segmentation in natural scenes from a hierarchical perspective. Contrary to existing methods, we make explicit use of text structure, aiming directly to the detection of region groupings corresponding to text within a hierarchy produced by an agglomerative similarity clustering process over individual regions. We propose an optimal way to construct such an hierarchy introducing a feature space designed to produce text group hypotheses with high recall and a novel stopping rule combining a discriminative classifier and a probabilistic measure of group meaningfulness based in perceptual organization. Results obtained over four standard datasets, covering text in variable orientations and different languages, demonstrate that our algorithm, while being trained in a single mixed dataset, outperforms state of the art methods in unconstrained scenarios.",
""
]
} |
1509.02454 | 2145121200 | We consider the trajectories of points on under sequences of certain folding maps associated with reflections. The main result characterizes collections of folding maps that produce dense trajectories. The minimal number of maps in such a collection is d+1. | A standard technique for establishing geometric inequalities is to approximate full radial symmetrization by a sequence of simpler symmetrizations. Two-point symmetrization has been used in this way to prove the isoperimetric inequality on spheres @cite_12 , and sharp inequalities for path integrals @cite_8 @cite_3 @cite_2 . The convergence of random sequences of symmetrizations has received some attention in the literature, most notably in the work of Klartag @cite_4 on Steiner symmetrizations of convex sets. Convergence of two-point symmetrizations is less well studied. Very recently, De Keyser and Van Schaftingen have considered random symmetrization processes with time correlations @cite_14 . Open questions in the area include precise conditions for convergence, and bounds on the rate of convergence. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_8",
"@cite_3",
"@cite_2",
"@cite_12"
],
"mid": [
"2101606546",
"2074528182",
"2162337698",
"2085996269",
"2109103292",
"1999441752"
],
"abstract": [
"Under continuity and recurrence assumptions, we prove that the iteration of successive partial symmetrizations that form a time-homogeneous Markov process, converges to a symmetrization. We cover several settings, including the approximation of the spherical nonincreasing rearrangement by Steiner symmetrizations, polarizations and cap symmetrizations. A key tool in our analysis is a quantitative measure of the asymmetry.",
"It is a classical fact, that given an arbitrary convex body (K R ^n , ) there exists an appropriate sequence of Minkowski symmetrizations (or Steiner symmetrizations), that converges in Hausdorff metric to a Euclidean ball. Here we provide quantitative estimates regarding this convergence, for both Minkowski and Steiner symmetrizations. Our estimates are polynomial in the dimension and in the logarithm of the desired distance to a Euclidean ball, improving previously known exponential estimates. Inspired by a method of Diaconis [D], our technique involves spherical harmonics. We also make use of an earlier result by the author regarding “isomorphic Minkowski symmetrization”.",
"We study bounds on the exit time of Brownian motion from a set in terms of its size and shape, and the relation of such bounds with isoperimetric inequalities. The first result is an upper bound for the distribution function of the exit time from a subset of a sphere or hyperbolic space of constant curvature in terms of the exit time from a disc of the same volume. This amounts to a rearrangement inequality for the Dirichlet heat kernel. To connect this inequality with the classical isoperimetric inequality, we derive a formula for the perimeter of a set in terms of the heat flow over the boundary. An auxiliary result generalizes Riesz' rearrangement inequality to multiple integrals.",
"",
"Let (ξ(s)) s ≥ 0 be a standard Brownian motion in d ≥ 1 dimensions and let (D s ) s ≥ 0 be a collection of open sets in ( R ^d ). For each s, let B s be a ball centered at 0 with vol(B s ) = vol(D s ). We show that ( E [ vol ( s t ( (s) + D_s))] E [ vol ( s t ( (s) + B_s))] ), for all t. In particular, this implies that the expected volume of the Wiener sausage increases when a drift is added to the Brownian motion.",
"where de denotes normalized surface measure, V is the conformal gradient and q = (2n) (n 2). A modern folklore theorem is that by taking the infinitedimensional limit of this inequality, one obtains the Gross logarithmic Sobolev inequality for Gaussian measure, which determines Nelson's hypercontractive estimates for the Hermite semigroup (see [8]). One observes using conformal invariance that the above inequality is equivalent to the sharp Sobolev inequality on Rn for which boundedness and extremal functions can be easily calculated using dilation invariance and geometric symmetrization. The roots here go back to Hardy and Littlewood. The advantage of casting the problem on the sphere is that the role of the constants is evident, and one is led immediately to the conjecture that this inequality should hold whenever possible (for example, 2 < q < 0o if n = 2). This is in fact true and will be demonstrated in Section 2. A clear question at this point is \"What is the situation in dimension 2?\" Two important arguments ([25], [26], [27]) dealt with this issue, both motivated by geometric variational problems. Because q goes to infinity for dimension 2, the appropriate function space is the exponential class. Responding in part"
]
} |
1509.02454 | 2145121200 | We consider the trajectories of points on under sequences of certain folding maps associated with reflections. The main result characterizes collections of folding maps that produce dense trajectories. The minimal number of maps in such a collection is d+1. | There are many known results analogous to Theorem for other collections of maps associated with linear isometries of spheres. For example, Crouch and Silva Leite @cite_1 found pairs of one-parameter subgroups of @math which generate all of @math , and produced upper bounds on the number of elements from each subgroup required to generate any element (see also Levitt and Sussmann @cite_13 for related results). An example of this phenomenon is the Euler angles decomposition, a formula for writing any element of @math as a product of three rotations about the @math and @math axes. Rosenthal @cite_5 has established bounds on the rate of convergence to the steady state for random walks generated by conjugacy classes of planar rotations on @math . Porod @cite_9 has obtained analogous results for random walks on @math , @math and @math generated by the conjugacy class of reflections. | {
"cite_N": [
"@cite_13",
"@cite_5",
"@cite_9",
"@cite_1"
],
"mid": [
"2087420634",
"2011211004",
"2043396021",
"2025216121"
],
"abstract": [
"A set S of vector fields on a differentiable manifold M is said to be completely controllable if for every pair @math of points of M there exists a trajectory of S from m to @math . Here a trajectory of S is a curve which is an integral curve of some @math or a finite concatenation of such curves so that, in general, a trajectory of S run in reverse is no longer a trajectory. Our main theorem is: on every connected paracompact manifold of class @math , @math , or @math , there exists a completely controllable set S consisting of two vector fields of class @math .",
"We analyze a random walk on the orthogonal group SO(N) given by repeatedly rotating by a fixed angle through randomly chosen planes of R . We derive estimates of the rate at which this random walk will converge to Haar measure on SO(N), using character theory and the Upper Bound Lemma of Diaconis and Shashahani. In some cases we are able to establish the existence of a “cut-off phenomenon” for the random walk. This is the first interesting such result on a non-finite group.",
"For many random walks on sufficiently large finite groups the so-called cut-off phenomenon occurs: roughly stated, there exists a number k 0 , depending on the size of the group, such that k 0 steps are necessary and sufficient for the random walk to closely approximate uniformity. As a first example on a continuous group, Rosenthal recently proved the occurrence of this cut-off phenomenon for a specific random walk on SO(N). Here we present and [for the case of O(N)] prove results for random walks on O(N), U(N) and Sp(N), where the one-step distribution is a suitable probability measure concentrated on reflections. In all three cases the cut-off phenomenon occurs at k 0 = 1 2N log N.",
"We present closed forms for the exponential of some infinitesimal generators of Lie groups which play an important role in physics and engineering applications. These explicit forms are based on the Putzer’s method. We also compare this methodology and results with related work by other authors."
]
} |
1509.02454 | 2145121200 | We consider the trajectories of points on under sequences of certain folding maps associated with reflections. The main result characterizes collections of folding maps that produce dense trajectories. The minimal number of maps in such a collection is d+1. | On an abstract level, our results are motivated by classical theorems about directed graphs, and more generally Markov chains on discrete state spaces. A graph is transitive if every vertex @math can be connected to every other vertex @math by a path of directed edges. Finding a minimal subset of edges that still connects all vertices is known to be a hard problem, particularly if the graph has cycles. As a practical alternative, Aho, Garey, and Ullmann developed the theory of transitive reductions, which are graphs on the same vertex set with the smallest possible number of edges @cite_10 . In our setting, the points @math play the role of vertices, and a pair @math with @math plays the role of a directed edge connecting @math to @math . Edge paths correspond to trajectories, and transitivity means that all trajectories are dense. Theorem gives necessary and sufficient conditions for transitivity, Proposition constructs a transitive reduction, and Proposition establishes the presence of cycles. The random walk defined in Eq. corresponds to the Markov chain defined by a weighted directed graph, with @math playing the role of the edge weights. Theorem and Corollary concern the steady-state of the Markov chain. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2110920860"
],
"abstract": [
"We consider economical representations for the path information in a directed graph. A directed graph @math is said to be a transitive reduction of the directed graph G provided that (i) @math has a directed path from vertex u to vertex v if and only if G has a directed path from vertex u to vertex v, and (ii) there is no graph with fewer arcs than @math satisfying condition (i). Though directed graphs with cycles may have more than one such representation, we select a natural canonical representative as the transitive reduction for such graphs. It is shown that the time complexity of the best algorithm for finding the transitive reduction of a graph is the same as the time to compute the transitive closure of a graph or to perform Boolean matrix multiplication."
]
} |
1509.02094 | 2234538948 | This paper presents a method for future localization: to predict a set of plausible trajectories of ego-motion given a depth image. We predict paths avoiding obstacles, between objects, even paths turning around a corner into space behind objects. As a byproduct of the predicted trajectories of ego-motion, we discover in the image the empty space occluded by foreground objects. We use no image based features such as semantic labeling segmentation or object detection recognition for this algorithm. Inspired by proxemics, we represent the space around a person using an EgoSpace map, akin to an illustrated tourist map, that measures a likelihood of occlusion at the egocentric coordinate system. A future trajectory of ego-motion is modeled by a linear combination of compact trajectory bases allowing us to constrain the predicted trajectory. We learn the relationship between the EgoSpace map and trajectory from the EgoMotion dataset providing in-situ measurements of the future trajectory. A cost function that takes into account partial occlusion due to foreground objects is minimized to predict a trajectory. This cost function generates a trajectory that passes through the occluded space, which allows us to discover the empty space behind the foreground objects. We quantitatively evaluate our method to show predictive validity and apply to various real world scenes including walking, shopping, and social interactions. | In computer vision, Ali and Shah @cite_34 developed a flow field model that predicts spatial crowd behaviors for tracking extremely cluttered crowd scenes. Inspired by the social force model @cite_16 , @cite_28 predicted pedestrian behaviors in a crowd scene to detect abnormal behaviors, and @cite_4 used a modified model to track multiple agents. Ryoo @cite_7 presented a bag-of-word approach to recognize social activities at the early stage of videos. 
@cite_1 predicted plausible activities from a static scene by associating scene statistics with labeled actions. In terms of the trajectory prediction task, our work is closely related to three path planning frameworks: @cite_9 , @cite_10 , and @cite_11 . @cite_9 presented a method to generate multiple plausible trajectories of each agent in the scene constructed by homotopy classes, which allows them to produce a long-term trajectory for visual tracking in crowd scenes. @cite_10 leveraged inverse optimal control theory to learn human preference with respect to the scene semantic labels, which enables them to predict the paths an agent follows. @cite_11 introduced a geometric feature, the social affinity model, that captures a spatial relationship of neighboring agents to predict destinations of a crowd. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_1",
"@cite_16",
"@cite_34",
"@cite_10",
"@cite_11"
],
"mid": [
"1535008136",
"2147615062",
"2164489414",
"2090229683",
"259727857",
"2167052694",
"",
"",
"2134944993"
],
"abstract": [
"This paper introduces a novel framework for modeling interacting humans in a multi-stage game environment by combining concepts from game theory and reinforcement learning. The proposed model has the following desirable characteristics: (1) Bounded rational players, (2) strategic (i.e., players account for one another’s reward functions), and (3) is computationally feasible even on moderately large real-world systems. To do this we extend level-K reasoning to policy space to, for the first time, be able to handle multiple time steps. This allows us to decompose the problem into a series of smaller ones where we can apply standard reinforcement learning algorithms. We investigate these ideas in a cyber-battle scenario over a smart power grid and discuss the relationship between the behavior predicted by our model and what one might expect of real human defenders and attackers.",
"In this paper, we present a novel approach of human activity prediction. Human activity prediction is a probabilistic process of inferring ongoing activities from videos only containing onsets (i.e. the beginning part) of the activities. The goal is to enable early recognition of unfinished activities as opposed to the after-the-fact classification of completed activities. Activity prediction methodologies are particularly necessary for surveillance systems which are required to prevent crimes and dangerous activities from occurring. We probabilistically formulate the activity prediction problem, and introduce new methodologies designed for the prediction. We represent an activity as an integral histogram of spatio-temporal features, efficiently modeling how feature distributions change over time. The new recognition methodology named dynamic bag-of-words is developed, which considers sequential nature of human activities while maintaining advantages of the bag-of-words to handle noisy observations. Our experiments confirm that our approach reliably recognizes ongoing activities from streaming videos with a high accuracy.",
"In this paper we introduce a novel method to detect and localize abnormal behaviors in crowd videos using Social Force model. For this purpose, a grid of particles is placed over the image and it is advected with the space-time average of optical flow. By treating the moving particles as individuals, their interaction forces are estimated using social force model. The interaction force is then mapped into the image plane to obtain Force Flow for every pixel in every frame. Randomly selected spatio-temporal volumes of Force Flow are used to model the normal behavior of the crowd. We classify frames as normal and abnormal by using a bag of words approach. The regions of anomalies in the abnormal frames are localized using interaction forces. The experiments are conducted on a publicly available dataset from University of Minnesota for escape panic scenarios and a challenging dataset of crowd videos taken from the web. The experiments show that the proposed method captures the dynamics of the crowd behavior successfully. In addition, we have shown that the social force approach outperforms similar approaches based on pure optical flow.",
"In this paper, we propose a long-term motion model for visual object tracking. In crowded street scenes, persistent occlusions are a frequent challenge for tracking algorithm and a robust, long-term motion model could help in these situations. Motivated by progresses in robot motion planning, we propose to construct a set of ‘plausible’ plans for each person, which are composed of multiple long-term motion prediction hypotheses that do not include redundancies, unnecessary loops or collisions with other objects. Constructing plausible plan is the key step in utilizing motion planning in object tracking, which has not been fully investigate in robot motion planning. We propose a novel method of efficiently constructing disjoint plans in different homotopy classes, based on winding numbers and winding angles of planned paths around all obstacles. As the goals can be specified by winding numbers and winding angles, we can avoid redundant plans in the same homotopy class and multiple whirls or loops around a single obstacle. We test our algorithm on a challenging, real-world dataset, and compare our algorithm with Linear Trajectory Avoidance and a simplified linear planning model. We find that our algorithm outperforms both algorithms in most sequences.",
"Human actions naturally co-occur with scenes. In this work we aim to discover action-scene correlation for a large number of scene categories and to use such correlation for action prediction. Towards this goal, we collect a new SUN Action dataset with manual annotations of typical human actions for 397 scenes. We next discover action-scene associations and demonstrate that scene categories can be well identified from their associated actions. Using discovered associations, we address a new task of predicting human actions for images of static scenes. We evaluate prediction of 23 and 38 action classes for images of indoor and outdoor scenes respectively and show promising results. We also propose a new application of geo-localized action prediction and demonstrate ability of our method to automatically answer queries such as “Where is a good place for a picnic?” or “Can I cycle along this path?”.",
"It is suggested that the motion of pedestrians can be described as if they would be subject to social forces.'' These forces'' are not directly exerted by the pedestrians' personal environment, but they are a measure for the internal motivations of the individuals to perform certain actions (movements). The corresponding force concept is discussed in more detail and can also be applied to the description of other behaviors. In the presented model of pedestrian behavior several force terms are essential: first, a term describing the acceleration towards the desired velocity of motion; second, terms reflecting that a pedestrian keeps a certain distance from other pedestrians and borders; and third, a term modeling attractive effects. The resulting equations of motion of nonlinearly coupled Langevin equations. Computer simulations of crowds of interacting pedestrians show that the social force model is capable of describing the self-organization of several observed collective effects of pedestrian behavior very realistically.",
"",
"",
"In crowded spaces such as city centers or train stations, human mobility looks complex, but is often influenced only by a few causes. We propose to quantitatively study crowded environments by introducing a dataset of 42 million trajectories collected in train stations. Given this dataset, we address the problem of forecasting pedestrians' destinations, a central problem in understanding large-scale crowd mobility. We need to overcome the challenges posed by a limited number of observations (e.g. sparse cameras), and change in pedestrian appearance cues across different cameras. In addition, we often have restrictions in the way pedestrians can move in a scene, encoded as priors over origin and destination (OD) preferences. We propose a new descriptor coined as Social Affinity Maps (SAM) to link broken or unobserved trajectories of individuals in the crowd, while using the OD-prior in our framework. Our experiments show improvement in performance through the use of SAM features and OD prior. To the best of our knowledge, our work is one of the first studies that provides encouraging results towards a better understanding of crowd behavior at the scale of million pedestrians."
]
} |
1509.02094 | 2234538948 | This paper presents a method for future localization: to predict a set of plausible trajectories of ego-motion given a depth image. We predict paths avoiding obstacles, between objects, even paths turning around a corner into space behind objects. As a byproduct of the predicted trajectories of ego-motion, we discover in the image the empty space occluded by foreground objects. We use no image based features such as semantic labeling segmentation or object detection recognition for this algorithm. Inspired by proxemics, we represent the space around a person using an EgoSpace map, akin to an illustrated tourist map, that measures a likelihood of occlusion at the egocentric coordinate system. A future trajectory of ego-motion is modeled by a linear combination of compact trajectory bases allowing us to constrain the predicted trajectory. We learn the relationship between the EgoSpace map and trajectory from the EgoMotion dataset providing in-situ measurements of the future trajectory. A cost function that takes into account partial occlusion due to foreground objects is minimized to predict a trajectory. This cost function generates a trajectory that passes through the occluded space, which allows us to discover the empty space behind the foreground objects. We quantitatively evaluate our method to show predictive validity and apply to various real world scenes including walking, shopping, and social interactions. | @cite_29 used scene statistics produced by camera ego-motion to recognize sport activities from a first person camera. Traditional vision frameworks such as object detection, recognition, and segmentation have been successfully integrated with first person data: Pirsiavash and Ramanan @cite_32 recognized daily activities using deformable part models, @cite_22 found important persons and objects, @cite_19 discovered objects, and @cite_26 @cite_24 segmented pixels corresponding to hands.
In a social setting, @cite_20 presented a method to recognize social interactions by detecting the gaze directions of people, and @cite_0 introduced an algorithm to reconstruct joint attention in 3D by leveraging 3D reconstruction of camera ego-motion. This reconstruction makes prediction of joint attention possible by learning the spatial relationship between a social formation and joint attention @cite_13 . | {
"cite_N": [
"@cite_26",
"@cite_22",
"@cite_29",
"@cite_32",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_13",
"@cite_20"
],
"mid": [
"2136668269",
"2106229755",
"",
"",
"",
"2149276562",
"2113510982",
"1912797782",
""
],
"abstract": [
"We present a model for gaze prediction in egocentric video by leveraging the implicit cues that exist in camera wearer's behaviors. Specifically, we compute the camera wearer's head motion and hand location from the video and combine them to estimate where the eyes look. We further model the dynamic behavior of the gaze, in particular fixations, as latent variables to improve the gaze prediction. Our gaze prediction results outperform the state-of-the-art algorithms by a large margin on publicly available egocentric vision datasets. In addition, we demonstrate that we get a significant performance boost in recognizing daily actions and segmenting foreground objects by plugging in our gaze predictions into state-of-the-art methods.",
"We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.",
"",
"",
"",
"We present a method to analyze daily activities, such as meal preparation, using video from an egocentric camera. Our method performs inference about activities, actions, hands, and objects. Daily activities are a challenging domain for activity recognition which are well-suited to an egocentric approach. In contrast to previous activity recognition methods, our approach does not require pre-trained detectors for objects and hands. Instead we demonstrate the ability to learn a hierarchical model of an activity by exploiting the consistent appearance of objects, hands, and actions that results from the egocentric context. We show that joint modeling of activities, actions, and objects leads to superior performance in comparison to the case where they are considered independently. We introduce a novel representation of actions based on object-hand interactions and experimentally demonstrate the superior performance of our representation in comparison to standard activity representations such as bag of words.",
"A gaze concurrence is a point in 3D where the gaze directions of two or more people intersect. It is a strong indicator of social saliency because the attention of the participating group is focused on that point. In scenes occupied by large groups of people, multiple concurrences may occur and transition over time. In this paper, we present a method to construct a 3D social saliency field and locate multiple gaze concurrences that occur in a social scene from videos taken by head-mounted cameras. We model the gaze as a cone-shaped distribution emanating from the center of the eyes, capturing the variation of eye-in-head motion. We calibrate the parameters of this distribution by exploiting the fixed relationship between the primary gaze ray and the head-mounted camera pose. The resulting gaze model enables us to build a social saliency field in 3D. We estimate the number and 3D locations of the gaze concurrences via provably convergent mode-seeking in the social saliency field. Our algorithm is applied to reconstruct multiple gaze concurrences in several real world scenes and evaluated quantitatively against motion-captured ground truth.",
"This paper presents a method to predict social saliency, the likelihood of joint attention, given an input image or video by leveraging the social interaction data captured by first person cameras. Inspired by electric dipole moments, we introduce a social formation feature that encodes the geometric relationship between joint attention and its social formation. We learn this feature from the first person social interaction data where we can precisely measure the locations of joint attention and its associated members in 3D. An ensemble classifier is trained to learn the geometric relationship. Using the trained classifier, we predict social saliency in real-world scenes with multiple social groups including scenes from team sports captured in a third person view. Our representation does not require directional measurements such as gaze directions. A geometric analysis of social interactions in terms of the F-formation theory is also presented.",
""
]
} |
1509.02094 | 2234538948 | This paper presents a method for future localization: to predict a set of plausible trajectories of ego-motion given a depth image. We predict paths avoiding obstacles, between objects, even paths turning around a corner into space behind objects. As a byproduct of the predicted trajectories of ego-motion, we discover in the image the empty space occluded by foreground objects. We use no image based features such as semantic labeling segmentation or object detection recognition for this algorithm. Inspired by proxemics, we represent the space around a person using an EgoSpace map, akin to an illustrated tourist map, that measures a likelihood of occlusion at the egocentric coordinate system. A future trajectory of ego-motion is modeled by a linear combination of compact trajectory bases allowing us to constrain the predicted trajectory. We learn the relationship between the EgoSpace map and trajectory from the EgoMotion dataset providing in-situ measurements of the future trajectory. A cost function that takes into account partial occlusion due to foreground objects is minimized to predict a trajectory. This cost function generates a trajectory that passes through the occluded space, which allows us to discover the empty space behind the foreground objects. We quantitatively evaluate our method to show predictive validity and apply to various real world scenes including walking, shopping, and social interactions. | Such characteristics of first person cameras have been used to generate interesting applications in vision, graphics, and robotics. @cite_22 summarized a life logging video, and @cite_3 detected iconic images using a web image prior. @cite_21 used 3D joint attention to edit social video footage, and @cite_23 used 3D camera motion to generate a hyperlapse first person video. In robotics, @cite_8 predicted human activities for human-robot interactions. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_3",
"@cite_23"
],
"mid": [
"2106229755",
"2102813107",
"2074520446",
"13223599",
""
],
"abstract": [
"We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.",
"Existing approaches to nonrigid structure from motion assume that the instantaneous 3D shape of a deforming object is a linear combination of basis shapes. These bases are object dependent and therefore have to be estimated anew for each video sequence. In contrast, we propose a dual approach to describe the evolving 3D structure in trajectory space by a linear combination of basis trajectories. We describe the dual relationship between the two approaches, showing that they both have equal power for representing 3D structure. We further show that the temporal smoothness in 3D trajectories alone can be used for recovering nonrigid structure from a moving camera. The principal advantage of expressing deforming 3D structure in trajectory space is that we can define an object independent basis. This results in a significant reduction in unknowns and corresponding stability in estimation. We propose the use of the Discrete Cosine Transform (DCT) as the object independent basis and empirically demonstrate that it approaches Principal Component Analysis (PCA) for natural motions. We report the performance of the proposed method, quantitatively using motion capture data, and qualitatively on several video sequences exhibiting nonrigid motions, including piecewise rigid motion, partially nonrigid motion (such as a facial expressions), and highly nonrigid motion (such as a person walking or dancing).",
"We present an approach that takes multiple videos captured by social cameras---cameras that are carried or worn by members of the group involved in an activity---and produces a coherent \"cut\" video of the activity. Footage from social cameras contains an intimate, personalized view that reflects the part of an event that was of importance to the camera operator (or wearer). We leverage the insight that social cameras share the focus of attention of the people carrying them. We use this insight to determine where the important \"content\" in a scene is taking place, and use it in conjunction with cinematographic guidelines to select which cameras to cut to and to determine the timing of those cuts. A trellis graph representation is used to optimize an objective function that maximizes coverage of the important content in the scene, while respecting cinematographic guidelines such as the 180-degree rule and avoiding jump cuts. We demonstrate cuts of the videos in various styles and lengths for a number of scenarios, including sports games, street performances, family activities, and social get-togethers. We evaluate our results through an in-depth analysis of the cuts in the resulting videos and through comparison with videos produced by a professional editor and existing commercial solutions.",
"Wearable cameras capture a first-person view of the world, and offer a hands-free way to record daily experiences or special events. Yet, not every frame is worthy of being captured and stored. We propose to automatically predict “snap points” in unedited egocentric video—that is, those frames that look like they could have been intentionally taken photos. We develop a generative model for snap points that relies on a Web photo prior together with domain-adapted features. Critically, our approach avoids strong assumptions about the particular content of snap points, focusing instead on their composition. Using 17 hours of egocentric video from both human and mobile robot camera wearers, we show that the approach accurately isolates those frames that human judges would believe to be intentionally snapped photos. In addition, we demonstrate the utility of snap point detection for improving object detection and keyframe selection in egocentric video.",
""
]
} |
1509.02470 | 2120176405 | Recently, many researches employ middle-layer output of convolutional neural network models (CNN) as features for different visual recognition tasks. Although promising results have been achieved in some empirical studies, such type of representations still suffer from the well-known issue of semantic gap. This paper proposes so-called deep attribute framework to alleviate this issue from three aspects. First, we introduce object region proposals as intermedia to represent target images, and extract features from region proposals. Second, we study aggregating features from different CNN layers for all region proposals. The aggregation yields a holistic yet compact representation of input images. Results show that cross-region max-pooling of soft-max layer output outperform all other layers. As soft-max layer directly corresponds to semantic concepts, this representation is named "deep attributes". Third, we observe that only a small portion of generated regions by object proposals algorithm are correlated to classification target. Therefore, we introduce context-aware region refining algorithm to pick out contextual regions and build context-aware classifiers. We apply the proposed deep attributes framework for various vision tasks. Extensive experiments are conducted on standard benchmarks for three visual recognition tasks, i.e., image classification, fine-grained recognition and visual instance retrieval. Results show that deep attribute approaches achieve state-of-the-art results, and outperforms existing peer methods with a significant margin, even though some benchmarks have little overlap of concepts with the pre-trained CNN models. | Since the breakthrough success of CNN models on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 @cite_2 , applying CNN models to other vision tasks has become popular in the computer vision community. 
Razavian et al. @cite_10 evaluate the performance of CNN features on several vision tasks, including object recognition, fine-grained object recognition, and image retrieval. Meanwhile, DeCAF @cite_35 also shows that CNN features work surprisingly well on image classification. Subsequently, @cite_22 present a similar idea on image retrieval, with fine-tuning on self-collected datasets to further improve retrieval accuracy. In addition, they adopt PCA to compress neural codes for efficient search. All these methods adopt the neural code activations from the first fully-connected layer. | {
"cite_N": [
"@cite_35",
"@cite_10",
"@cite_22",
"@cite_2"
],
"mid": [
"2953360861",
"2953391683",
"2950252392",
""
],
"abstract": [
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.",
"It has been shown that the activations invoked by an image within the top layers of a large convolutional neural network provide a high-level descriptor of the visual content of the image. In this paper, we investigate the use of such descriptors (neural codes) within the image retrieval application. In the experiments with several standard retrieval benchmarks, we establish that neural codes perform competitively even when the convolutional neural network has been trained for an unrelated classification task (e.g. Image-Net). We also evaluate the improvement in the retrieval performance of neural codes, when the network is retrained on a dataset of images that are similar to images encountered at test time. We further evaluate the performance of the compressed neural codes and show that a simple PCA compression provides very good short codes that give state-of-the-art accuracy on a number of datasets. In general, neural codes turn out to be much more resilient to such compression in comparison other state-of-the-art descriptors. Finally, we show that discriminative dimensionality reduction trained on a dataset of pairs of matched photographs improves the performance of PCA-compressed neural codes even further. Overall, our quantitative experiments demonstrate the promise of neural codes as visual descriptors for image retrieval.",
""
]
} |
1509.02470 | 2120176405 | Recently, many researches employ middle-layer output of convolutional neural network models (CNN) as features for different visual recognition tasks. Although promising results have been achieved in some empirical studies, such type of representations still suffer from the well-known issue of semantic gap. This paper proposes so-called deep attribute framework to alleviate this issue from three aspects. First, we introduce object region proposals as intermedia to represent target images, and extract features from region proposals. Second, we study aggregating features from different CNN layers for all region proposals. The aggregation yields a holistic yet compact representation of input images. Results show that cross-region max-pooling of soft-max layer output outperform all other layers. As soft-max layer directly corresponds to semantic concepts, this representation is named "deep attributes". Third, we observe that only a small portion of generated regions by object proposals algorithm are correlated to classification target. Therefore, we introduce context-aware region refining algorithm to pick out contextual regions and build context-aware classifiers. We apply the proposed deep attributes framework for various vision tasks. Extensive experiments are conducted on standard benchmarks for three visual recognition tasks, i.e., image classification, fine-grained recognition and visual instance retrieval. Results show that deep attribute approaches achieve state-of-the-art results, and outperforms existing peer methods with a significant margin, even though some benchmarks have little overlap of concepts with the pre-trained CNN models. | Pooling is a general strategy to augment features. As one of the most well-known works, spatial pyramid matching performs pooling over a pyramid of regular grids @cite_16 @cite_0 .
Gong et al. @cite_19 encode the activations of the CNN fully-connected layer with VLAD @cite_12 , and then concatenate the encoded features over windows at three scale levels. Most of these pooling methods simply concatenate features from grids at different scales. In contrast, decision-level cross-region pooling has been applied when there are multiple region patch candidates @cite_26 @cite_3 . In our work, since we use the semantic output of CNNs as regional features, it is fairly straightforward to perform pooling across different region proposals. | {
"cite_N": [
"@cite_26",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_16",
"@cite_12"
],
"mid": [
"1487583988",
"",
"2097018403",
"1524680991",
"2162915993",
"2012592962"
],
"abstract": [
"We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.",
"",
"Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.",
"Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets.",
"This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralbas \"gist\" and Lowes SIFT descriptors.",
"We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms."
]
} |
1509.02470 | 2120176405 | Recently, many researches employ middle-layer output of convolutional neural network models (CNN) as features for different visual recognition tasks. Although promising results have been achieved in some empirical studies, such type of representations still suffer from the well-known issue of semantic gap. This paper proposes so-called deep attribute framework to alleviate this issue from three aspects. First, we introduce object region proposals as intermedia to represent target images, and extract features from region proposals. Second, we study aggregating features from different CNN layers for all region proposals. The aggregation yields a holistic yet compact representation of input images. Results show that cross-region max-pooling of soft-max layer output outperform all other layers. As soft-max layer directly corresponds to semantic concepts, this representation is named "deep attributes". Third, we observe that only a small portion of generated regions by object proposals algorithm are correlated to classification target. Therefore, we introduce context-aware region refining algorithm to pick out contextual regions and build context-aware classifiers. We apply the proposed deep attributes framework for various vision tasks. Extensive experiments are conducted on standard benchmarks for three visual recognition tasks, i.e., image classification, fine-grained recognition and visual instance retrieval. Results show that deep attribute approaches achieve state-of-the-art results, and outperforms existing peer methods with a significant margin, even though some benchmarks have little overlap of concepts with the pre-trained CNN models. | Methods for detecting region proposals are used in object detection to avoid an exhaustive sliding-window search across images and to speed up detection without noticeable loss of recall @cite_21 . 
In general, region proposal detection relies on low-level features and visual cues to measure the objectness of local regions and generate relatively few candidate windows. In the past few years, there have been extensive studies on this topic and many techniques have been invented, including selective search @cite_17 , edge-boxes @cite_18 , BING @cite_5 , multiscale combinatorial grouping (MCG) @cite_33 , and so on. Recently, Jan @cite_23 evaluated ten region proposal methods, among which selective search and edge-boxes achieved consistently better performance in terms of ground-truth recall, repeatability, and detection speed. Hence, we may employ them to produce region proposals as the first step of our deep attribute method. | {
"cite_N": [
"@cite_18",
"@cite_33",
"@cite_21",
"@cite_23",
"@cite_5",
"@cite_17"
],
"mid": [
"7746136",
"1991367009",
"2102605133",
"",
"2010181071",
"2088049833"
],
"abstract": [
"The use of object proposals is an effective recent approach for increasing the computational efficiency of object detection. We propose a novel method for generating object bounding box proposals using edges. Edges provide a sparse yet informative representation of an image. Our main observation is that the number of contours that are wholly contained in a bounding box is indicative of the likelihood of the box containing an object. We propose a simple box objectness score that measures the number of edges that exist in the box minus those that are members of contours that overlap the box’s boundary. Using efficient data structures, millions of candidate boxes can be evaluated in a fraction of a second, returning a ranked set of a few thousand top-scoring proposals. Using standard metrics, we show results that are significantly more accurate than the current state-of-the-art while being faster to compute. In particular, given just 1000 proposals we achieve over 96 object recall at overlap threshold of 0.5 and over 75 recall at the more challenging overlap of 0.7. Our approach runs in 0.25 seconds and we additionally demonstrate a near real-time variant with only minor loss in accuracy.",
"We propose a unified approach for bottom-up hierarchical image segmentation and object candidate generation for recognition, called Multiscale Combinatorial Grouping (MCG). For this purpose, we first develop a fast normalized cuts algorithm. We then propose a high-performance hierarchical segmenter that makes effective use of multiscale information. Finally, we propose a grouping strategy that combines our multiscale regions into highly-accurate object candidates by exploring efficiently their combinatorial space. We conduct extensive experiments on both the BSDS500 and on the PASCAL 2012 segmentation datasets, showing that MCG produces state-of-the-art contours, hierarchical regions and object candidates.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"Training a generic objectness measure to produce a small set of candidate object windows, has been shown to speed up the classical sliding window object detection paradigm. We observe that generic objects with well-defined closed boundary can be discriminated by looking at the norm of gradients, with a suitable resizing of their corresponding image windows in to a small fixed size. Based on this observation and computational reasons, we propose to resize the window to 8 × 8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g. ADD, BITWISE SHIFT, etc.). Experiments on the challenging PASCAL VOC 2007 dataset show that our method efficiently (300fps on a single laptop CPU) generates a small set of category-independent, high quality object windows, yielding 96.2 object detection rate (DR) with 1, 000 proposals. Increasing the numbers of proposals and color spaces for computing BING features, our performance can be further improved to 99.5 DR.",
"This paper addresses the problem of generating possible object locations for use in object recognition. We introduce selective search which combines the strength of both an exhaustive search and segmentation. Like segmentation, we use the image structure to guide our sampling process. Like exhaustive search, we aim to capture all possible object locations. Instead of a single technique to generate possible object locations, we diversify our search and use a variety of complementary image partitionings to deal with as many image conditions as possible. Our selective search results in a small set of data-driven, class-independent, high quality locations, yielding 99 recall and a Mean Average Best Overlap of 0.879 at 10,097 locations. The reduced number of locations compared to an exhaustive search enables the use of stronger machine learning techniques and stronger appearance models for object recognition. In this paper we show that our selective search enables the use of the powerful Bag-of-Words model for recognition. The selective search software is made publicly available (Software: http: disi.unitn.it uijlings SelectiveSearch.html )."
]
} |
1509.01676 | 2226886538 | The energy demands of Ethernet links have been an active focus of research in the recent years. This work has enabled a new generation of energy-efficient Ethernet (EEE) interfaces able to adapt their power consumption to the actual traffic demands, thus yielding significant energy savings. With the energy consumption of single network connections being a solved problem, in this paper, we focus on the energy demands of link aggregates that are commonly used to increase the capacity of a network connection. We build on known energy models of single EEE links to derive the energy demands of the whole aggregate as a function on how the traffic load is spread among its powered links. We then provide a practical method to share the load that minimizes overall energy consumption with controlled packet delay and prove that it is valid for a wide range of EEE links. Finally, we validate our method with both synthetic and real traffic traces captured in Internet backbones. | Several areas where energy can be saved in the current Internet were first identified in @cite_31 . The existence of spare installed capacity was one of the identified aspects. Several works proposed to power off unused links during low-load periods, concentrating traffic on just a few network paths @cite_19 @cite_29 @cite_23 @cite_6 @cite_15 . Of all these proposals, @cite_23 @cite_6 also take into consideration aggregated links between two network devices. However, all these works focus on long timescales, usually hours, while we are interested in much shorter timescales; as such, both approaches can be seen as complementary. Links (and network paths) can be powered off when the long-term traffic load is low enough, while, at short timescales, another approach should be used to reduce the energy usage of those links in the aggregate that remain active. | {
"cite_N": [
"@cite_15",
"@cite_29",
"@cite_6",
"@cite_19",
"@cite_23",
"@cite_31"
],
"mid": [
"2029168532",
"2048599615",
"",
"2078426348",
"2104463108",
"2169741605"
],
"abstract": [
"Rapid growth of ICT (Information Communication Technologies) energy consumption involves the need for proposing new mechanisms to enhance their energy efficiency. Focusing on energy consumption of networking equipment, this paper presents a study to achieve a tradeoff between the amount of energy that could be saved in wired networks and the discrete number of energy levels to be implemented by line cards. We use bio-inspired computing based on GA (Genetic Algorithms) and PSO (Particle Swarm Optimization) in order to assess the most suitable network configurations in terms of energy savings for different-sized networks such as NSFNet, Geant and AT&T. Results show a comparison between both bio-inspired algorithms in which, although GA produces better results, PSO achieves a reduction in computation time with an optimality gap below 1.7 . From a practical point of view, a limited number, such as four energy levels, is enough to achieve significant reductions in energy consumption.",
"Recent data confirm that the power consumption of the information and communications technologies (ICT) and of the Internet itself can no longer be ignored, considering the increasing pervasiveness and the importance of the sector on productivity and economic growth. Although the traffic load of communication networks varies greatly over time and rarely reaches capacity limits, its energy consumption is almost constant. Based on this observation, energy management strategies are being considered with the goal of minimizing the energy consumption, so that consumption becomes proportional to the traffic load either at the individual-device level or for the whole network. The focus of this paper is to minimize the energy consumption of the network through a management strategy that selectively switches off devices according to the traffic level. We consider a set of traffic scenarios and jointly optimize their energy consumption assuming a per-flow routing. We propose a traffic engineering mathematical programming formulation based on integer linear programming that includes constraints on the changes of the device states and routing paths to limit the impact on quality of service and the signaling overhead. We show a set of numerical results obtained using the energy consumption of real routers and study the impact of the different parameters and constraints on the optimal energy management strategy. We also present heuristic results to compare the optimal operational planning with online energy management operation .",
"",
"According to several studies, the power consumption of the Internet accounts for up to 10 of the worldwide energy consumption and is constantly increasing. The global consciousness on this problem has also grown, and several initiatives are being put into place to reduce the power consumption of the ICT sector in general. In this paper, we face the problem of minimizing power consumption for Internet service provider (ISP) networks. In particular, we propose and assess strategies to concentrate network traffic on a minimal subset of network resources. Given a telecommunication infrastructure, our aim is to turn off network nodes and links while still guaranteeing full connectivity and maximum link utilization constraints. We first derive a simple and complete formulation, which results into an NP-hard problem that can be solved only for trivial cases. We then derive more complex formulations that can scale up to middle-sized networks. Finally, we provide efficient heuristics that can be used for large networks. We test the effectiveness of our algorithms on both real and synthetic topologies, considering the daily fluctuations of Internet traffic and different classes of users. Results show that the power savings can be significant, e.g., larger than 35 .",
"The paper copes with the reduction of network power consumption by the definition of new routing algorithms, able to take into account the energy consumed by the network devices. In particular, based on the power consumption characterization of the network devices obtained using the Energy Profile (EP) concept, the paper presents the analysis of the exact solution of the Energy Aware Routing (EAR) problem solved with a Mixed Integer Programming solver. The analysis is aimed at evaluating the impact on the performance of three relevant aspects of the problem: the approximation of the actual EP, the traffic load and the topology of the network. Furthermore, the paper proposes a heuristic solution of the EAR, denoted as Dijkstra-based Power Aware Routing Algorithm (DPRA), defined in order to cope with the complexity of the exact solution.",
"In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment."
]
} |
1509.01676 | 2226886538 | The energy demands of Ethernet links have been an active focus of research in the recent years. This work has enabled a new generation of energy-efficient Ethernet (EEE) interfaces able to adapt their power consumption to the actual traffic demands, thus yielding significant energy savings. With the energy consumption of single network connections being a solved problem, in this paper, we focus on the energy demands of link aggregates that are commonly used to increase the capacity of a network connection. We build on known energy models of single EEE links to derive the energy demands of the whole aggregate as a function on how the traffic load is spread among its powered links. We then provide a practical method to share the load that minimizes overall energy consumption with controlled packet delay and prove that it is valid for a wide range of EEE links. Finally, we validate our method with both synthetic and real traffic traces captured in Internet backbones. | Another source of inefficiency identified in @cite_31 was the physical interfaces of network devices. At that time, physical interfaces drew a constant amount of power, regardless of the actual traffic load. Preliminary works tried to mitigate this either by adapting the transmission speed @cite_30 , with lower speeds demanding less power, or by briefly switching off the physical interfaces when there was little or no traffic to send @cite_2 @cite_18 . Finally, the IEEE 802.3az @cite_20 standard was sanctioned, providing a new low-power mode for physical Ethernet interfaces that could be used when there was no need to send traffic. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_2",
"@cite_31",
"@cite_20"
],
"mid": [
"1985591879",
"2057704341",
"2115538900",
"2169741605",
""
],
"abstract": [
"There is a growing research interest in improving the energy efficiency of communication networks. In order to assess the impact of introducing new energy efficient technologies, an up-to-date estimate for the global electricity consumption in communication networks is needed. In this paper we consider the use phase electricity consumption of telecom operator networks, office networks and customer premises equipment. Our results show that the network electricity consumption is growing fast, at a rate of 10 per year, and its relative contribution to the total worldwide electricity consumption has increased from 1.3 in 2007 to 1.8 in 2012. We estimate the worldwide electricity consumption of communication networks will exceed 350 TWh in 2012.",
"Network interfaces in most LAN computing devices are usually severely under-utilized, wasting energy while waiting for new packets to arrive. In this paper, we present two algorithms for opportunistically powering down unused network interfaces in order to save some of that wasted energy. We compare our proposals to the best known opportunistic method, and show that they provide much greater power savings inflicting even lower delays to Internet traffic.",
"Most Ethernet interfaces available for deployment in switches and hosts today can operate in a variety of different low power modes. However, currently these modes have very limited usage models. They do not take advantage of periods of inactivity, when the links remain idle or under-utilized. In this study, we propose methods that allow for detection of such periods to obtain energy savings with little impact on loss or delay. We evaluate our methods on a wide range of real-time traffic traces collected at a high-speed backbone switch within our campus LAN. Our results show that Ethernet interfaces at both ends can be put in extremely low power modes anywhere from 40 -98 of the time observed. In addition, we found that approximately 37 of interfaces studied (on the same switch) can be put in low power modes simultaneously which opens the potential for further energy savings in the switching fabric within the switch.",
"In this paper we examine the somewhat controversial subject of energy consumption of networking devices in the Internet, motivated by data collected by the U.S. Department of Commerce. We discuss the impact on network protocols of saving energy by putting network interfaces and other router & switch components to sleep. Using sample packet traces, we first show that it is indeed reasonable to do this and then we discuss the changes that may need to be made to current Internet protocols to support a more aggressive strategy for sleeping. Since this is a position paper, we do not present results but rather suggest interesting directions for core networking research. The impact of saving energy is huge, particularly in the developing world where energy is a precious resource whose scarcity hinders widespread Internet deployment.",
""
]
} |
1509.01676 | 2226886538 | The energy demands of Ethernet links have been an active focus of research in the recent years. This work has enabled a new generation of energy-efficient Ethernet (EEE) interfaces able to adapt their power consumption to the actual traffic demands, thus yielding significant energy savings. With the energy consumption of single network connections being a solved problem, in this paper, we focus on the energy demands of link aggregates that are commonly used to increase the capacity of a network connection. We build on known energy models of single EEE links to derive the energy demands of the whole aggregate as a function on how the traffic load is spread among its powered links. We then provide a practical method to share the load that minimizes overall energy consumption with controlled packet delay and prove that it is valid for a wide range of EEE links. Finally, we validate our method with both synthetic and real traffic traces captured in Internet backbones. | New research then focused on the best way to use this new low-power mode. The straightforward solution, entering low-power mode as soon as all traffic has been transmitted and returning to the normal mode upon the first packet arrival, called , was experimentally studied in @cite_4 . A first analytic study for Poisson traffic appeared in @cite_28 , while another analysis, which considers packet-train arrivals to account for bursty traffic, was presented in @cite_27 . | {
"cite_N": [
"@cite_28",
"@cite_27",
"@cite_4"
],
"mid": [
"2100524862",
"2124322240",
"2130450172"
],
"abstract": [
"The new IEEE 802.3az Energy Efficient Ethernet (EEE) standard will improve significantly the energy efficiency of 10 Gbps copper transceivers by the introduction of a sleep mode for idle transmission times. The next step towards energy saving seems to be the application of similar concepts to Optical Ethernet, both for short and long range links. To this aim, this paper starts by proposing an analytical model to estimate the energy consumption of a link that uses a sleep-mode power saving mechanism. This model can be useful to answer a number of questions that need to be carefully studied. Otherwise, the complexity of optical components could be increased for the sake of an energy saving that could turn out negligible. In the rest of the paper we analyze three key questions to try to shed some light on this design decision: (a) is the new copper EEE actually outperforming the current regular optical Ethernet in terms of energy saving in such a way that optical PHYs (transceivers) actually need a green upgrade to remain more energy efficient than their copper counterparts? (b) How much energy saving could be actually achieved by EE optical Ethernet? (c) What is the transition time required to achieve a substantial energy saving at medium traffic loads on EE 10 Gb s optical Ethernet links? The answer to the latter question sets a concrete goal for short-term research in fast on-off laser technology.",
"The recently approved Energy Efficient Ethernet standard IEEE 802.3az achieves energy savings by using a low power mode when the link is idle. However, those savings heavily depend on the traffic patterns, due to the overhead inherent in transitions between active and low power modes. This makes it impractical to estimate energy savings through measurements or simulations in all relevant scenarios. In this letter we present an analytical model to estimate the energy consumption of an Energy Efficient Ethernet link, based on simple traffic parameters. The model is validated through simulation and experimental data.",
"In September 2010, the Energy Efficient Ethernet (IEEE 802.3az) standard was officially approved. This new standard introduces a low power mode for the most common Ethernet physical layer standards and is expected to provide large energy savings. In this letter, for the first time, Network Interface Cards (NICs) that implement Energy Efficient Ethernet (EEE) are used to measure energy savings with real traffic. The data presented will be useful to better estimate the energy savings that can be achieved when EEE is deployed. Existing analysis of EEE based on simulations predict a large overhead due to mode transitions between active and low power modes. The experimental results confirm that transition overheads can be significant, leading to almost full energy consumption even at low utilization levels. Therefore traffic patterns will play a key role in the energy savings achieved by EEE as it becomes deployed in the field."
]
} |
1509.01846 | 2949368738 | We present a data-driven optimal control framework that can be viewed as a generalization of the path integral (PI) control approach. We find iterative feedback control laws without parameterization based on probabilistic representation of learned dynamics model. The proposed algorithm operates in a forward-backward manner which differentiate from other PI-related methods that perform forward sampling to find optimal controls. Our method uses significantly less samples to find optimal controls compared to other approaches within the PI control family that relies on extensive sampling from given dynamics models or trials on physical systems in model-free fashions. In addition, the learned controllers can be generalized to new tasks without re-sampling based on the compositionality theory for the linearly-solvable optimal control framework. We provide experimental results on three different systems and comparisons with state-of-the-art model-based methods to demonstrate the efficiency and generalizability of the proposed framework. | Another class of PI-related methods is based on policy parameterization. Notable approaches include PI @math @cite_5 , PI @math -CMA , PI-REPS @cite_14 , and the recently developed state-dependent PI @cite_16 . These methods have three limitations: 1) they do not take into account model uncertainty in the passive dynamics @math ; 2) the imposed policy parameterizations restrict the optimal control solutions; 3) the optimized policy parameters cannot be generalized to new tasks. A brief comparison of some of these methods can be found in Table 1. Motivated by the challenge of combining sample efficiency and generalizability, we next introduce a probabilistic model-based approach to compute the optimal control analytically. | {
"cite_N": [
"@cite_5",
"@cite_14",
"@cite_16"
],
"mid": [
"1925816294",
"91905023",
"2023095801"
],
"abstract": [
"With the goal to generate more scalable algorithms with higher efficiency and fewer open parameters, reinforcement learning (RL) has recently moved towards combining classical techniques from optimal control and dynamic programming with modern learning techniques from statistical estimation theory. In this vein, this paper suggests to use the framework of stochastic optimal control with path integrals to derive a novel approach to RL with parameterized policies. While solidly grounded in value function estimation and optimal control based on the stochastic Hamilton-Jacobi-Bellman (HJB) equations, policy improvements can be transformed into an approximation problem of a path integral which has no open algorithmic parameters other than the exploration noise. The resulting algorithm can be conceived of as model-based, semi-model-based, or even model free, depending on how the learning problem is structured. The update equations have no danger of numerical instabilities as neither matrix inversions nor gradient learning rates are required. Our new algorithm demonstrates interesting similarities with previous RL research in the framework of probability matching and provides intuition why the slightly heuristically motivated probability matching approach can actually perform well. Empirical evaluations demonstrate significant performance improvements over gradient-based policy learning and scalability to high-dimensional control problems. Finally, a learning experiment on a simulated 12 degree-of-freedom robot dog illustrates the functionality of our algorithm in a complex robot learning scenario. We believe that Policy Improvement with Path Integrals (PI2) offers currently one of the most efficient, numerically robust, and easy to implement algorithms for RL based on trajectory roll-outs.",
"Path integral (PI) control defines a general class of control problems for which the optimal control computation is equivalent to an inference problem that can be solved by evaluation of a path integral over state trajectories. However, this potential is mostly unused in real-world problems because of two main limitations: first, current approaches can typically only be applied to learn open-loop controllers and second, current sampling procedures are inefficient and not scalable to high dimensional systems. We introduce the efficient Path Integral Relative-Entropy Policy Search (PI-REPS) algorithm for learning feedback policies with PI control. Our algorithm is inspired by information theoretic policy updates that are often used in policy search. We use these updates to approximate the state trajectory distribution that is known to be optimal from the PI control theory. Our approach allows for a principled treatment of different sampling distributions and can be used to estimate many types of parametric or non-parametric feedback controllers. We show that PI-REPS significantly outperforms current methods and is able to solve tasks that are out of reach for current methods.",
"In this paper we address the problem to compute state dependent feedback controls for path integral control problems. To this end we generalize the path integral control formula and utilize this to construct parameterized state dependent feedback controllers. In addition, we show a novel relation between control and importance sampling: better control, in terms of control cost, yields more efficient importance sampling, in terms of effective sample size. The optimal control provides a zero-variance estimate."
]
} |
1509.01546 | 2770308127 | We study the problem of determining the optimal low dimensional projection for maximising the separability of a binary partition of an unlabelled dataset, as measured by spectral graph theory. This is achieved by finding projections which minimise the second eigenvalue of the graph Laplacian of the projected data, which corresponds to a non-convex, non-smooth optimisation problem. We show that the optimal univariate projection based on spectral connectivity converges to the vector normal to the maximum margin hyperplane through the data, as the scaling parameter is reduced to zero. This establishes a connection between connectivity as measured by spectral graph theory and maximal Euclidean separation. The computational cost associated with each eigen-problem is quadratic in the number of data. To mitigate this issue, we propose an approximation method using microclusters with provable approximation error bounds. Combining multiple binary partitions within a divisive hierarchical model allows us to construct clustering solutions admitting clusters with varying scales and lying within different subspaces. We evaluate the performance of the proposed method on a large collection of benchmark datasets and find that it compares favourably with existing methods for projection pursuit and dimension reduction for data clustering. | Principal component analysis and independent component analysis have been used in the context of clustering, however their objectives do not correspond exactly with those of the clustering task and the justification of their use is based more on common-sense reasoning. Nonetheless, these methods have shown good empirical performance on a number of applications . Some recent approaches to projection pursuit for clustering rely on the non-parametric statistical notion clusters, i.e., that clusters are regions of high density in a probability distribution from which the data are assumed to have arisen. 
@cite_9 proposed using as projection index the dip of the projected data. The dip is a measure of departure from unimodality, and so maximising the dip tends towards projections which have multimodal marginal density, and therefore separate high density clusters. The authors establish that the dip is differentiable for any projection vector onto which the projected data are unique, and use a simple gradient ascent method to find local optima. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1997817740"
],
"abstract": [
"On determine la distribution de la statistique de test, asymptotiquement et empiriquement, pour un echantillonnage a partir de la distribution uniforme"
]
} |
1509.01644 | 2256420211 | We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions: discrete actions with continuous parameters. At each step the agent must select both which action to use and which parameters to use with that action. We introduce the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goal-scoring and Platform domains. | Rachelson encountered parameterized actions in the form of an action to wait for a given period of time in his research on time-dependent, continuous-time MDPs (TMDPs). He developed XMDPs, which are TMDPs with a parameterized action space @cite_9 . He developed a Bellman operator for this domain, and in a later paper mentions that the TiMDP @math algorithm can work with parameterized actions, although this specifically refers to the parameterized wait action @cite_1 . This research also takes a planning perspective, and only considers a time-dependent domain. Additionally, the size of the parameter space for the parameterized actions is the same for all actions. | {
"cite_N": [
"@cite_9",
"@cite_1"
],
"mid": [
"2784847637",
"2169997910"
],
"abstract": [
"This thesis addresses the question of planning under uncertainty within a time-dependent changing environment. Original motivation for this work came from the problem of building an autonomous agent able to coordinate with its uncertain environment; this environment being composed of other agents communicating their intentions or non-controllable processes for which some discrete-event model is available. We investigate several approaches for modeling continuous time-dependency in the framework of Markov Decision Processes (MDPs), leading us to a definition of Temporal Markov Decision Problems. Then our approach focuses on two separate paradigms. First, we investigate time-dependent problems as processes and describe them through the formalism of Time-dependent MDPs (TMDPs). We extend the existing results concerning optimality equations and present a new Value Iteration algorithm based on piecewise polynomial function representations in order to solve a more general class of TMDPs. This paves the way to a more general discussion on parametric actions in hybrid state and action spaces MDPs with continuous time. In a second time, we investigate the option of separately modeling the concurrent contributions of exogenous events. This approach of modeling leads to the use of Generalized Semi-Markov Decision Processes (GSMDP). We establish a link between the general framework of Discrete Events Systems Specification (DEVS) and the formalism of GSMDP, allowing us to build sound discrete-event compatible simulators. Then we introduce a simulation-based Policy Iteration approach for explicit-event Temporal Markov Decision Problems. This algorithmic contribution brings together results from simulation theory, forward search in MDPs, and statistical learning theory. 
The implicit-event approach was tested on a specific version of the Mars rover planning problem and on a drone patrol mission planning problem while the explicit-event approach was evaluated on a subway network control problem.",
"Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods cannot adequately address these problems. We present the first framework that can exploit problem structure for modeling and solving hybrid problems efficiently. We formulate these problems as hybrid Markov decision processes (MDPs with continuous and discrete state and action variables), which we assume can be represented in a factored way using a hybrid dynamic Bayesian network (hybrid DBN). This formulation also allows us to apply our methods to collaborative multiagent settings. We present a new linear program approximation method that exploits the structure of the hybrid MDP and lets us compute approximate value functions more efficiently. In particular, we describe a new factored discretization of continuous variables that avoids the exponential blow-up of traditional approaches. We provide theoretical bounds on the quality of such an approximation and on its scale-up potential. We support our theoretical arguments with experiments on a set of control problems with up to 28-dimensional continuous state space and 22-dimensional action space."
]
} |
1509.01644 | 2256420211 | We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions: discrete actions with continuous parameters. At each step the agent must select both which action to use and which parameters to use with that action. We introduce the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goal-scoring and Platform domains. | Hoey considered mixed discrete-continuous actions in their work on Bayesian affect control theory. To approach this problem they use a form of POMCP, a Monte Carlo sampling algorithm, using domain-specific adjustments to compute the continuous action components @cite_5 . They note that the discrete and continuous components of the action space reflect different control aspects: the discrete control provides the "what", while the continuous control describes the "how" @cite_11 . | {
"cite_N": [
"@cite_5",
"@cite_11"
],
"mid": [
"2171084228",
"1969878909"
],
"abstract": [
"This paper introduces a Monte-Carlo algorithm for online planning in large POMDPs. The algorithm combines a Monte-Carlo update of the agent's belief state with a Monte-Carlo tree search from the current belief state. The new algorithm, POMCP, has two important properties. First, Monte-Carlo sampling is used to break the curse of dimensionality both during belief state updates and during planning. Second, only a black box simulator of the POMDP is required, rather than explicit probability distributions. These properties enable POMCP to plan effectively in significantly larger POMDPs than has previously been possible. We demonstrate its effectiveness in three large POMDPs. We scale up a well-known benchmark problem, rocksample, by several orders of magnitude. We also introduce two challenging new POMDPs: 10 x 10 battleship and partially observable PacMan, with approximately 1018 and 1056 states respectively. Our Monte-Carlo planning algorithm achieved a high level of performance with no prior knowledge, and was also able to exploit simple domain knowledge to achieve better results with less search. POMCP is the first general purpose planner to achieve high performance in such large and unfactored POMDPs.",
"Affect Control Theory is a mathematical representation of the interactions between two persons, in which it is posited that people behave in a way so as to minimize the amount of deflection between their cultural emotional sentiments and the transient emotional sentiments that are created by each situation. Affect Control Theory presents a maximum likelihood solution in which optimal behaviours or identities can be predicted based on past interactions. Here, we formulate a probabilistic and decision theoretic model of the same underlying principles, and show this to be a generalisation of the basic theory. The model is more expressive than the original theory, as it can maintain multiple hypotheses about behaviours and identities simultaneously as a probability distribution. This allows the model to generate affectively believable interactions with people by learning about their identity and predicting their behaviours. We demonstrate this generalisation with a set of simulations. We then show how our model can be used as an emotional \"plug-in\" for systems that interact with humans. We demonstrate human-interactive capability by building a simple intelligent tutoring application and pilot-testing it in an experiment with 20 participants."
]
} |
1509.01644 | 2256420211 | We introduce a model-free algorithm for learning in Markov decision processes with parameterized actions: discrete actions with continuous parameters. At each step the agent must select both which action to use and which parameters to use with that action. We introduce the Q-PAMDP algorithm for learning in these domains, show that it converges to a local optimum, and compare it to direct policy search in the goal-scoring and Platform domains. | A hierarchical MDP is an MDP where each action has subtasks. A subtask is itself an MDP with its own states and actions which may have their own subtasks. Hierarchical MDPs are well-suited for representing parameterized actions, as we could consider selecting the parameters for a discrete action as a subtask. MAXQ is a method for value function decomposition of hierarchical MDPs @cite_10 . One possibility is to use MAXQ for learning the action-values in a parameterized action problem. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2121517924"
],
"abstract": [
"This paper presents a new approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The decomposition, known as the MAXQ decomposition, has both a procedural semantics--as a subroutine hierarchy--and a declarative semantics--as a representation of the value function of a hierarchical policy. MAXQ unifies and extends previous work on hierarchical reinforcement learning by Singh, Kaelbling, and Dayan and Hinton. It is based on the assumption that the programmer can identify useful subgoals and define subtasks that achieve these subgoals. By defining such subgoals, the programmer constrains the set of policies that need to be considered during reinforcement learning. The MAXQ value function decomposition can represent the value function of any policy that is consistent with the given hierarchy. The decomposition also creates opportunities to exploit state abstractions, so that individual MDPs within the hierarchy can ignore large parts of the state space. This is important for the practical application of the method. This paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges with probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. 
The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this nonhierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning."
]
} |
1509.01763 | 2217189062 | Recent compilers allow a general-purpose program (written in a conventional programming language) that handles private data to be translated into secure distributed implementation of the corresponding functionality. The resulting program is then guaranteed to provably protect private data using secure multi-party computation techniques. The goals of such compilers are generality, usability, and efficiency, but the complete set of features of a modern programming language has not been supported to date by the existing compilers. In particular, recent compilers PICCO and the two-party ANSI C compiler strive to translate any C program into its secure multi-party implementation, but currently lack support for pointers and dynamic memory allocation, which are important components of many C programs. In this work, we mitigate the limitation and add support for pointers to private data and consequently dynamic memory allocation to the PICCO compiler, enabling it to handle a more diverse set of programs over private data. Because doing so opens up a new design space, we investigate the use of pointers to private data (with known as well as private locations stored in them) in programs and report our findings. Besides dynamic memory allocation, we examine other important topics associated with common pointer use such as reference by pointer address, casting, and building various data structures in the context of secure multi-party computation. This results in enabling the compiler to automatically translate a user program that uses pointers to private data into its distributed implementation that provably protects private data throughout the computation. We empirically evaluate the constructions and report on performance of representative programs. | To support data structures in the SMC framework, several solutions @cite_6 @cite_8 @cite_0 @cite_9 have been proposed. 
The main motivation of this line of work is the need to store and manipulate private data in an efficient and flexible manner. Toft @cite_6 proposed a private priority queue that has a deterministic access pattern as opposed to randomized ones in ORAM-based data structures. On the other hand, Keller and Scholl @cite_8 introduced implementations of arrays, dictionaries, and priority queues based on various flavors of ORAM implementations. Mitchell and Zimmerman @cite_0 also provide implementations of stacks, queues, and priority queues based on oblivious data compaction and an offline variant of ORAM. @cite_9 proposed implementations of maps, sets, priority queues, stacks, and deques based on ORAM techniques modified for specific data access patterns. Different from all of these publications, our work includes extending the PICCO compiler to support dynamic data structures in a generic way as found in general purpose programming languages. That is, the programmer has the basic tools and primitives that enable her to build any desired data structure. | {
"cite_N": [
"@cite_0",
"@cite_9",
"@cite_6",
"@cite_8"
],
"mid": [
"2165262642",
"",
"2033349877",
"2181401201"
],
"abstract": [
"An algorithm is called data-oblivious if its control flow and memory access pattern do not depend on its input data. Data-oblivious algorithms play a significant role in secure cloud computing, since programs that are run on secret data—as in fully homomorphic encryption or secure multiparty computation—must be data-oblivious. In this paper, we formalize three definitions of data-obliviousness that have appeared implicitly in the literature, explore their implications, and show separations. We observe that data-oblivious algorithms often compose well when viewed as data structures. Using this approach, we construct data-oblivious stacks, queues, and priority queues that are considerably simpler than existing constructions, as well as improving constant factors. We also establish a new upper bound for oblivious data compaction, and use this result to show that an “oine” variant of the Oblivious RAM problem can be solved with O(logn log logn) expected amortized time per operation— as compared with O(log 2 n log logn), the best known upper bound for the standard online formulation. 1998 ACM Subject Classification D.4.6 Security and Protection, E.1 Data Structures, F.1.1 Models of Computation, F.1.2 Modes of Computation",
"",
"This work considers data structures based on multi-party computation (MPC) primitives: structuring secret (e.g. secret shared and potentially unknown) data such that it can both be queried and updated efficiently. Implementing an oblivious RAM (ORAM) using MPC allows any existing data structure to be realized using MPC primitives, however, by focusing on a specific example -- a priority queue -- it is shown that it is possible to achieve much better results than the generic solutions can provide. Moreover, the techniques differ significantly from existing ORAM constructions. Indeed it has recently been shown that any information theoretically secure ORAM with n memory locations requires at least log n random bits per read write to hide the access pattern. In contrast, the present construction achieves security with a completely deterministic access pattern.",
"We present oblivious implementations of several data structures for secure multiparty computation (MPC) such as arrays, dictionaries, and priority queues. The resulting oblivious data structures have only polylogarithmic overhead compared with their classical counterparts. To achieve this, we give secure multiparty protocols for the ORAM of (Asiacrypt ‘11) and the Path ORAM scheme of (CCS ‘13), and we compare the resulting implementations. We subsequently use our oblivious priority queue for secure computation of Dijkstra’s shortest path algorithm on general graphs, where the graph structure is secret. To the best of our knowledge, this is the first implementation of a non-trivial graph algorithm in multiparty computation with polylogarithmic overhead."
]
} |
1509.01763 | 2217189062 | Recent compilers allow a general-purpose program (written in a conventional programming language) that handles private data to be translated into secure distributed implementation of the corresponding functionality. The resulting program is then guaranteed to provably protect private data using secure multi-party computation techniques. The goals of such compilers are generality, usability, and efficiency, but the complete set of features of a modern programming language has not been supported to date by the existing compilers. In particular, recent compilers PICCO and the two-party ANSI C compiler strive to translate any C program into its secure multi-party implementation, but currently lack support for pointers and dynamic memory allocation, which are important components of many C programs. In this work, we mitigate the limitation and add support for pointers to private data and consequently dynamic memory allocation to the PICCO compiler, enabling it to handle a more diverse set of programs over private data. Because doing so opens up a new design space, we investigate the use of pointers to private data (with known as well as private locations stored in them) in programs and report our findings. Besides dynamic memory allocation, we examine other important topics associated with common pointer use such as reference by pointer address, casting, and building various data structures in the context of secure multi-party computation. This results in enabling the compiler to automatically translate a user program that uses pointers to private data into its distributed implementation that provably protects private data throughout the computation. We empirically evaluate the constructions and report on performance of representative programs. 
| Once support for pointers to private data is in place, one application for which the compiler can naturally be used is evaluation of a context-free grammar on private data (implemented as a shift-reduce parser using a stack). The grammar can be either public or private, and in the latter case execution will correspond to evaluation of private expressions/programs on private data. Techniques for evaluation of private programs (on private data) are a separate area of research, discussion of which is beyond the scope of this work, but the reader may refer to recent results in this area such as those in @cite_18 @cite_12 . | {
"cite_N": [
"@cite_18",
"@cite_12"
],
"mid": [
"2483852738",
"2397486072"
],
"abstract": [
"Universal circuits UCs can be programmed to evaluate any circuit of a given size k. They provide elegant solutions in various application scenarios, e.g. for private function evaluationi¾?PFE and for improving the flexibility of attribute-based encryptioni¾?ABE schemes. The optimal size of a universal circuit is proven to be @math Ωklogk. Valianti¾?STOC'76 proposed a size-optimized UC construction, which has not been put in practice ever since. The only implementation of universal circuits was provided by Kolesnikov and Schneider FC'08, with sizei¾? @math Oklog2k. In this paper, we refine the size of Valiant's UC and further improve the construction by at least 2k. We show that due to recent optimizations and our improvements, it is the best solution to apply in the case for circuits with a constant number of inputs and outputs. When the number of inputs or outputs is linear in the number of gates, we propose a more efficient hybrid solution based on the two existing constructions. We validate the practicality of Valiant's UC, by giving an example implementation for PFE using these size-optimized UCs.",
"We present GarbledCPU, the first framework that realizes a hardware-based general purpose sequential processor for secure computation. Our MIPS-based implementation enables development of applications (functions) in a high-level language while performing secure function evaluation (SFE) using Yao's garbled circuit protocol in hardware. GarbledCPU provides three degrees of freedom for SFE which allow leveraging the trade-off between privacy and performance: public functions, private functions, and semi-private functions. We synthesize GarbledCPU on a Virtex-7 FPGA as a proof-of-concept implementation and evaluate it on various benchmarks including Hamming distance, private set intersection and AES. Our results indicate that our pipelined hardware framework outperforms the fastest available software implementation."
]
} |
1509.01506 | 2163685309 | Rapid analysis of DNA sequences is important in preventing the evolution of different viruses and bacteria during an early phase, early diagnosis of genetic predispositions to certain diseases (cancer, cardiovascular diseases), and in DNA forensics. However, real-world DNA sequences may comprise several Gigabytes and the process of DNA analysis demands adequate computational resources to be completed within a reasonable time. In this paper we present a scalable approach for parallel DNA analysis that is based on Finite Automata, and which is suitable for analyzing very large DNA segments. We evaluate our approach for real-world DNA segments of mouse (2.7GB), cat (2.4GB), dog (2.4GB), chicken (1GB), human (3.2GB) and turkey (0.2GB). Experimental results on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 17.6x. Our approach is up to 3x faster than a pattern-based parallel approach that uses the RE2 library. | Existing approaches use both hardware and software to accelerate the process of regular expression matching. In comparison to hardware based state machines, which are faster, less flexible and more expensive, software based acceleration techniques are flexible in terms of updating or adding new patterns @cite_29 . | {
"cite_N": [
"@cite_29"
],
"mid": [
"2011905634"
],
"abstract": [
"Pattern matching has been one of the major operations in modern bioengineering especially in Bioinformatics. Prior work on this area have focus on either pursuing mathematically efficient matching algorithms or hardwired approach. As multicore processor are becoming mainstream, developers need to determine how to take advantage of multicore technology for pattern matching. In this paper, we propose a methodology to evaluate pattern search algorithms for DNA on Multiprocessor. Our evaluation methodology is an automatic simulation framework. Starting from a uniprocessor profiling, the framework constructs task graphs for string matching algorithms. Then task graphs are mapped onto multiprocessor. The system's performance is determined by the analytical performance model. With this framework, we can evaluate the performance of different algorithms on multiprocessor. Our case studies show that finite automaton based (Aho-Corasick) is more efficient than shift table based algorithms (SFKSearch and Wu-Manber) on uniprocessor, however, Wu-Manber is 3 times efficient than Aho-Corasick on multiprocessor due to its inherent parallelism."
]
} |
1509.01506 | 2163685309 | Rapid analysis of DNA sequences is important in preventing the evolution of different viruses and bacteria during an early phase, early diagnosis of genetic predispositions to certain diseases (cancer, cardiovascular diseases), and in DNA forensics. However, real-world DNA sequences may comprise several Gigabytes and the process of DNA analysis demands adequate computational resources to be completed within a reasonable time. In this paper we present a scalable approach for parallel DNA analysis that is based on Finite Automata, and which is suitable for analyzing very large DNA segments. We evaluate our approach for real-world DNA segments of mouse (2.7GB), cat (2.4GB), dog (2.4GB), chicken (1GB), human (3.2GB) and turkey (0.2GB). Experimental results on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 17.6x. Our approach is up to 3x faster than a pattern-based parallel approach that uses the RE2 library. | presented in @cite_3 an implementation of the Aho-Corasick string matching algorithm using POSIX threads, which is based on the pattern partitioning approach. A replication of Herath's study with the intention to improve the software implementation of the Aho-Corasick algorithm was conducted by @cite_26 . | {
"cite_N": [
"@cite_26",
"@cite_3"
],
"mid": [
"2107670443",
"1979854742"
],
"abstract": [
"Multiple string matching is known as locating all the occurrences of a given number of patterns in an arbitrary string. It is used in bio-computing applications where the algorithms are commonly used for retrieval of information such as sequence analysis and gene protein identification. Extremely large amount of data in the form of strings has to be processed in such bio-computing applications. Therefore, improving the performance of multiple string matching algorithms is always desirable. Multicore architectures are capable of providing better performance by parallelizing the multiple string matching algorithms. The Aho-Corasick algorithm is the one that is commonly used in exact multiple string matching algorithms. The focus of this paper is the acceleration of Aho-Corasick algorithm through a multicore CPU based software implementation. Through our implementation and evaluation of results, we prove that our method performs better compared to the state of the art.",
"Huge amounts of data in the form of strings are being handled in bio-computing applications, and searching algorithms are quite frequently used in them. Many methods utilizing both software and hardware have been proposed to accelerate the processing of such data. Typical hardware-based acceleration techniques either require special hardware such as general-purpose graphics processing units (GPGPUs) or need a new hardware design such as an FPGA-based one. On the other hand, software-based acceleration techniques are easier since they only require some changes in the software code or the software architecture. Typical software-based techniques make use of computers connected over a network, also known as a network grid, to accelerate the processing. In this paper, we test the hypothesis that multi-core architectures should provide better performance in this kind of computation, though the outcome still depends on the algorithm selected as well as the programming model being utilized. We present the acceleration of a string-searching algorithm on a multi-core CPU via a POSIX-thread-based implementation. Our implementation on an 8-core processor (that supports 16 threads) resulted in a 9x throughput improvement compared to a single-thread implementation."
]
} |
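The Aho-Corasick automaton that both cited works parallelize can be sketched compactly. This is an illustrative single-threaded version, not the authors' implementation; under their pattern-partitioning approach, each POSIX thread would simply build and search its own automaton over a disjoint subset of the patterns, so no synchronization on the shared text is needed.

```python
from collections import deque

def build_automaton(patterns):
    # trie as a list of per-state transition dicts, plus failure links
    # and the set of patterns that end at each state
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    # BFS to set failure links; parents are processed before children
    q = deque(goto[0].values())
    while q:
        r = q.popleft()
        for ch, s in goto[r].items():
            q.append(s)
            f = fail[r]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[s] = goto[f].get(ch, 0)
            out[s] |= out[fail[s]]   # inherit matches ending at the fail state
    return goto, fail, out

def search(text, goto, fail, out):
    # single left-to-right scan; reports (start_index, pattern) pairs
    state, hits = 0, []
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            hits.append((i - len(pat) + 1, pat))
    return hits
```

For example, searching "ushers" against {"he", "she", "his", "hers"} reports "she", "he", and "hers" in one pass over the text.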
1509.01506 | 2163685309 | Rapid analysis of DNA sequences is important in preventing the evolution of different viruses and bacteria during an early phase, early diagnosis of genetic predispositions to certain diseases (cancer, cardiovascular diseases), and in DNA forensics. However, real-world DNA sequences may comprise several Gigabytes and the process of DNA analysis demands adequate computational resources to be completed within a reasonable time. In this paper we present a scalable approach for parallel DNA analysis that is based on Finite Automata, and which is suitable for analyzing very large DNA segments. We evaluate our approach for real-world DNA segments of mouse (2.7GB), cat (2.4GB), dog (2.4GB), chicken (1GB), human (3.2GB) and turkey (0.2GB). Experimental results on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 17.6x. Our approach is up to 3x faster than a pattern-based parallel approach that uses the RE2 library. | Marçais and Kingsford @cite_30 present the Jellyfish tool, which is based on a lock-free hash table optimized for counting @math -mers of length up to 31 bases. @cite_18 present an approach similar to Jellyfish @cite_30 , called DSK, which is designed for small-memory servers. The @math -mers are counted by traversing the hash tables. Using hash tables for the internal representation proved to be memory inefficient @cite_20 . As described in @cite_20 , a sequence corresponding to a human chromosome with 24-230MB of input data would require gigabytes of memory to store the @math -mer information.
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_20"
],
"mid": [
"2096128575",
"2057253402",
"2009491526"
],
"abstract": [
"Motivation: Counting the number of occurrences of every k-mer (substring of length k) in a long string is a central subproblem in many applications, including genome assembly, error correction of sequencing reads, fast multiple sequence alignment and repeat detection. Recently, the deep sequence coverage generated by next-generation sequencing technologies has caused the amount of sequence to be processed during a genome project to grow rapidly, and has rendered current k-mer counting tools too slow and memory intensive. At the same time, large multicore computers have become commonplace in research facilities allowing for a new parallel computational paradigm. Results: We propose a new k-mer counting algorithm and associated implementation, called Jellyfish, which is fast and memory efficient. It is based on a multithreaded, lock-free hash table optimized for counting k-mers up to 31 bases in length. Due to their flexibility, suffix arrays have been the data structure of choice for solving many string problems. For the task of k-mer counting, important in many biological applications, Jellyfish offers a much faster and more memory-efficient solution. Availability: The Jellyfish software is written in C++ and is GPL licensed. It is available for download at http: www.cbcb.umd.edu software jellyfish. Contact: [email protected] Supplementary information:Supplementary data are available at Bioinformatics online.",
"Summary: Counting all the k-mers (substrings of length k) in DNA/RNA sequencing reads is the preliminary step of many bioinformatics applications. However, state-of-the-art k-mer counting methods require that a large data structure resides in memory. Such a structure typically grows with the number of distinct k-mers to count. We present a new streaming algorithm for k-mer counting, called DSK (disk streaming of k-mers), which only requires a fixed user-defined amount of memory and disk space. This approach realizes a memory, time and disk trade-off. The multi-set of all k-mers present in the reads is partitioned, and partitions are saved to disk. Then, each partition is separately loaded in memory in a temporary hash table. The k-mer counts are returned by traversing each hash table. Low-abundance k-mers are optionally filtered. DSK is the first approach that is able to count all the 27-mers of a human genome dataset using only 4.0 GB of memory and moderate disk space (160 GB), in 17.9 h. DSK can replace a popular k-mer counting software (Jellyfish) on small-memory servers. Availability: http://minia.genouest.org/dsk Contact: rayan.chikhi@ens-cachan.org",
"This paper presents a parallel algorithm for fast word search to determine the set of biological words of an input DNA sequence. The algorithm is designed to scale well on state-of-the-art multiprocessor multicore systems for large inputs and large maximum word sizes. The pattern exhibited by many sequential solutions to this problem is a repetitive execution over a large input DNA sequence, and the generation of large amounts of output data to store and retrieve the words determined by the algorithm. As we show, this pattern does not lend itself to straightforward standard parallelization techniques. The proposed algorithm aims to achieve three major goals to overcome the drawbacks of embarrassingly parallel solution techniques: (i) to impose a high degree of cache locality on a problem that, by nature, tends to exhibit nonlocal access patterns, (ii) to be lock free or largely reduce the need for data access locking, and (iii) to enable an even distribution of the overall processing load among multiple threads. We present an implementation and performance evaluation of the proposed algorithm on DNA sequences of various sizes for different organisms on a dual processor quad-core system with a total of 8 cores. We compare the performance of the parallel word search implementation with a sequential implementation and with an embarrassingly parallel implementation. The results show that the proposed algorithm far outperforms the embarrassingly parallel strategy and achieves speed-ups of up to 6.9 on our 8-core test system."
]
} |
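The two counting strategies contrasted above can be sketched in a few lines: a single in-memory hash table (conceptually what Jellyfish does, though without its lock-free multithreaded table) versus a DSK-style partitioning that bounds how many distinct k-mers any one table holds at a time. The toy partition hash below is an assumption for illustration.

```python
from collections import defaultdict

def count_kmers(seq, k):
    # one hash table holding every distinct k-mer at once
    counts = defaultdict(int)
    for i in range(len(seq) - k + 1):
        counts[seq[i:i + k]] += 1
    return dict(counts)

def count_kmers_partitioned(seq, k, n_parts=4):
    # DSK-style: split the k-mer multiset into partitions (written to
    # disk by the real tool), then count one partition at a time so only
    # a fraction of the distinct k-mers is ever resident in memory
    def part_of(kmer):
        return sum(map(ord, kmer)) % n_parts  # deterministic toy hash
    merged = {}
    for p in range(n_parts):
        table = defaultdict(int)
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            if part_of(kmer) == p:
                table[kmer] += 1
        merged.update(table)
    return merged
```

Both return identical counts; the partitioned variant trades extra passes over the input (or disk traffic, in DSK) for a fixed memory budget.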
1509.01506 | 2163685309 | Rapid analysis of DNA sequences is important in preventing the evolution of different viruses and bacteria during an early phase, early diagnosis of genetic predispositions to certain diseases (cancer, cardiovascular diseases), and in DNA forensics. However, real-world DNA sequences may comprise several Gigabytes and the process of DNA analysis demands adequate computational resources to be completed within a reasonable time. In this paper we present a scalable approach for parallel DNA analysis that is based on Finite Automata, and which is suitable for analyzing very large DNA segments. We evaluate our approach for real-world DNA segments of mouse (2.7GB), cat (2.4GB), dog (2.4GB), chicken (1GB), human (3.2GB) and turkey (0.2GB). Experimental results on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 17.6x. Our approach is up to 3x faster than a pattern-based parallel approach that uses the RE2 library. | @cite_20 achieved significant speedup by partitioning the input string among the threads in such a way that each thread processes only sequences starting with a specified prefix used to divide the radix tree among the threads. They achieved up to 6.9 @math speedup on a shared memory system with 8 cores. | {
"cite_N": [
"@cite_20"
],
"mid": [
"2009491526"
],
"abstract": [
"This paper presents a parallel algorithm for fast word search to determine the set of biological words of an input DNA sequence. The algorithm is designed to scale well on state-of-the-art multiprocessor multicore systems for large inputs and large maximum word sizes. The pattern exhibited by many sequential solutions to this problem is a repetitive execution over a large input DNA sequence, and the generation of large amounts of output data to store and retrieve the words determined by the algorithm. As we show, this pattern does not lend itself to straightforward standard parallelization techniques. The proposed algorithm aims to achieve three major goals to overcome the drawbacks of embarrassingly parallel solution techniques: (i) to impose a high degree of cache locality on a problem that, by nature, tends to exhibit nonlocal access patterns, (ii) to be lock free or largely reduce the need for data access locking, and (iii) to enable an even distribution of the overall processing load among multiple threads. We present an implementation and performance evaluation of the proposed algorithm on DNA sequences of various sizes for different organisms on a dual processor quad-core system with a total of 8 cores. We compare the performance of the parallel word search implementation with a sequential implementation and with an embarrassingly parallel implementation. The results show that the proposed algorithm far outperforms the embarrassingly parallel strategy and achieves speed-ups of up to 6.9 on our 8-core test system."
]
} |
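The prefix-partitioning idea in @cite_20 can be illustrated as follows: each worker handles only the k-mers that begin with its assigned base, so the workers' tables have disjoint key sets and no locking is needed. This is a simplified sketch of the partitioning logic only, not the paper's radix-tree implementation (and CPython threads will not show the paper's speedup).

```python
from concurrent.futures import ThreadPoolExecutor

def count_with_prefix(seq, k, prefix):
    # count only k-mers starting with `prefix`; different prefixes yield
    # disjoint key sets, so workers never contend on a shared table
    table = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer.startswith(prefix):
            table[kmer] = table.get(kmer, 0) + 1
    return table

def parallel_count(seq, k, alphabet="ACGT"):
    # one worker per first-base prefix, as in the prefix-partitioned scheme
    with ThreadPoolExecutor(max_workers=len(alphabet)) as pool:
        tables = pool.map(lambda p: count_with_prefix(seq, k, p), alphabet)
    merged = {}
    for t in tables:
        merged.update(t)  # keys are disjoint across prefixes: no conflicts
    return merged
```

Every worker scans the whole input, but each writes only to its own table; with a larger prefix length the work splits into 4^p partitions for finer load balancing.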
1509.01506 | 2163685309 | Rapid analysis of DNA sequences is important in preventing the evolution of different viruses and bacteria during an early phase, early diagnosis of genetic predispositions to certain diseases (cancer, cardiovascular diseases), and in DNA forensics. However, real-world DNA sequences may comprise several Gigabytes and the process of DNA analysis demands adequate computational resources to be completed within a reasonable time. In this paper we present a scalable approach for parallel DNA analysis that is based on Finite Automata, and which is suitable for analyzing very large DNA segments. We evaluate our approach for real-world DNA segments of mouse (2.7GB), cat (2.4GB), dog (2.4GB), chicken (1GB), human (3.2GB) and turkey (0.2GB). Experimental results on a dual-socket shared-memory system with 24 physical cores show speed-ups of up to 17.6x. Our approach is up to 3x faster than a pattern-based parallel approach that uses the RE2 library. | A method for searching arbitrary regular expressions using speculation is proposed by @cite_13 . The drawback is that if an REM performed by a thread does not converge on its sub-input, then the next thread has to start from a new state that breaks the serialization and limits the scalability. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2110199304"
],
"abstract": [
"Intrusion prevention systems (IPSs) determine whether incoming traffic matches a database of signatures, where each signature is a regular expression and represents an attack or a vulnerability. IPSs need to keep up with ever-increasing line speeds, which has lead to the use of custom hardware. A major bottleneck that IPSs face is that they scan incoming packets one byte at a time, which limits their throughput and latency. In this paper, we present a method to search for arbitrary regular expressions by scanning multiple bytes in parallel using speculation. We break the packet in several chunks, opportunistically scan them in parallel, and if the speculation is wrong, correct it later. We present algorithms that apply speculation in single-threaded software running on commodity processors as well as algorithms for parallel hardware. Experimental results show that speculation leads to improvements in latency and throughput in both cases."
]
} |
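The speculation-and-verify idea, and the convergence problem just described, can be sketched with a toy DFA: a second worker scans its chunk from a guessed start state, and its result is kept only if the guess equals the true state at the chunk boundary; otherwise the chunk is rescanned. Helper names here are illustrative, not from the paper, which targets IPS hardware and multi-byte scanning.

```python
def run_dfa(trans, accept, state, chunk):
    # run the DFA over `chunk` from `state`; record accepting positions
    hits = []
    for i, ch in enumerate(chunk):
        state = trans[state].get(ch, 0)   # missing edge: back to start state 0
        if state in accept:
            hits.append(i)
    return state, hits

def speculative_scan(trans, accept, text, split, guess=0):
    c1, c2 = text[:split], text[split:]
    # the two calls are independent, so a second thread could run the
    # speculative one concurrently with the first
    s1, h1 = run_dfa(trans, accept, 0, c1)
    s2, h2 = run_dfa(trans, accept, guess, c2)       # speculative chunk
    if s1 != guess:
        # misspeculation: rescan chunk 2 from the true boundary state,
        # which is exactly the serialization the text says limits scaling
        s2, h2 = run_dfa(trans, accept, s1, c2)
    return h1 + [split + i for i in h2]
```

With a DFA recognizing occurrences of "ab" (states 0/1/2, accept state 2), scanning "xabxab" split at position 3 reports match end positions 2 and 5 whether or not the speculation was correct.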
1509.01719 | 2262071337 | This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaptation methods which rely on a global domain shift for all classes between source and target domain, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption -- "the data samples from the same class should lay on a low-dimensional subspace, even if they come from different domains", the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the compact joint subspaces of source and target domain. Specifically, given labeled samples in source domain, we construct subspaces for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and highly likely all fall into the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across source and target domains, and within anchor subspaces, respectively. We further combine the anchor subspaces with the corresponding source subspaces to construct the compact joint subspaces. Subsequently, one-vs-rest SVM classifiers are trained in the compact joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: an object recognition dataset for computer vision tasks, and a sentiment classification dataset for natural language processing tasks. Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets. | For the cross-domain recognition problem, Domain Adaptation is the most closely related line of work, comprising a family of fundamental methods in machine learning and computer vision.
Here, we give a brief review of this topic. Please refer to @cite_19 for a comprehensive survey. | {
"cite_N": [
"@cite_19"
],
"mid": [
"1982696459"
],
"abstract": [
"In pattern recognition and computer vision, one is often faced with scenarios where the training data used to learn a model have different distribution from the data on which the model is applied. Regardless of the cause, any distributional change that occurs after learning a classifier can degrade its performance at test time. Domain adaptation tries to mitigate this degradation. In this article, we provide a survey of domain adaptation methods for visual recognition. We discuss the merits and drawbacks of existing domain adaptation approaches and identify promising avenues for research in this rapidly evolving field."
]
} |
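The low-dimensional-subspace assumption quoted above can be made concrete: represent each source class by an orthonormal basis from an SVD of its samples, then assign a target sample to the class whose subspace reconstructs it with the smallest residual. This is an illustrative sketch only; the paper's actual cost function additionally couples subspace overlap with topological consistency of the anchor subspaces.

```python
import numpy as np

def class_basis(X, dim):
    # columns of X are samples of one class; keep the top `dim`
    # left singular vectors as an orthonormal basis for its subspace
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :dim]

def assign_label(x, bases):
    # label = index of the subspace with the smallest reconstruction
    # residual || x - U U^T x ||
    residuals = [np.linalg.norm(x - U @ (U.T @ x)) for U in bases]
    return int(np.argmin(residuals))
```

In 3-D, two classes lying along the first and second coordinate axes are separated perfectly by this residual rule, which is the intuition behind grouping nearby unlabeled target samples into anchor subspaces before matching them to source classes.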
1509.01653 | 2170783091 | The millimeter wave (mmWave) band, a prime candidate for 5G cellular networks, seems attractive for wireless energy harvesting since it will feature large antenna arrays and extremely dense base station (BS) deployments. The viability of mmWave for energy harvesting though is unclear, due to the differences in propagation characteristics, such as extreme sensitivity to building blockages. This paper considers a scenario where low-power devices extract energy and or information from the mmWave signals. Using stochastic geometry, analytical expressions are derived for the energy coverage probability, the average harvested power, and the overall (energy-and-information) coverage probability at a typical wireless-powered device in terms of the BS density, the antenna geometry parameters, and the channel parameters. Numerical results reveal several network and device level design insights. At the BSs, optimizing the antenna geometry parameters, such as beamwidth, can maximize the network-wide energy coverage for a given user population. At the device level, the performance can be substantially improved by optimally splitting the received signal for energy and information extraction, and by deploying multi-antenna arrays. For the latter, an efficient low-power multi-antenna mmWave receiver architecture is proposed for simultaneous energy and information transfer. Overall, simulation results suggest that mmWave energy harvesting generally outperforms lower frequency solutions. | Wireless energy harvesting is becoming increasingly feasible due to the reduction in the power consumption requirements of wireless sensors and the improvements in energy harvesting technologies @cite_22 @cite_21 @cite_16 @cite_31 . This has also led to considerable research in advancing the theoretical understanding of wireless-powered systems (see @cite_2 @cite_8 for a comprehensive overview). 
For example, wireless energy and information transfer has been studied for different information-theoretic setups such as a broadcast channel @cite_17 , a fading channel @cite_7 , and an interference channel @cite_28 . Many of these papers highlight the fundamental trade-off between energy and information transfer efficiency and characterize the achievable rate-energy regions for different practical receiver architectures @cite_8 . | {
"cite_N": [
"@cite_22",
"@cite_7",
"@cite_8",
"@cite_28",
"@cite_21",
"@cite_2",
"@cite_31",
"@cite_16",
"@cite_17"
],
"mid": [
"1563362697",
"2111844221",
"2118588134",
"2110098571",
"2027750090",
"1969461599",
"1922569449",
"2120345201",
"2032372805"
],
"abstract": [
"We present the first power over Wi-Fi system that delivers power to low-power sensors and devices and works with existing Wi-Fi chipsets. Specifically, we show that a ubiquitous part of wireless communication infrastructure, the Wi-Fi router, can provide far field wireless power without significantly compromising the network's communication performance. Building on our design, we prototype battery-free temperature and camera sensors that we power with Wi-Fi at ranges of 20 and 17 feet respectively. We also demonstrate the ability to wirelessly trickle-charge nickel---met al hydride and lithium-ion coin-cell batteries at distances of up to 28 feet. We deploy our system in six homes in a metropolitan area and show that it can successfully deliver power via Wi-Fi under real-world network conditions without significantly degrading network performance.",
"Energy harvesting is a promising solution to prolong the operation of energy-constrained wireless networks. In particular, scavenging energy from ambient radio signals, namely wireless energy harvesting (WEH), has recently drawn significant attention. In this paper, we consider a point-to-point wireless link over the narrowband flat-fading channel subject to time-varying co-channel interference. It is assumed that the receiver has no fixed power supplies and thus needs to replenish energy opportunistically via WEH from the unintended interference and or the intended signal sent by the transmitter. We further assume a single-antenna receiver that can only decode information or harvest energy at any time due to the practical circuit limitation. Therefore, it is important to investigate when the receiver should switch between the two modes of information decoding (ID) and energy harvesting (EH), based on the instantaneous channel and interference condition. In this paper, we derive the optimal mode switching rule at the receiver to achieve various trade-offs between wireless information transfer and energy harvesting. Specifically, we determine the minimum transmission outage probability for delay-limited information transfer and the maximum ergodic capacity for no-delay-limited information transfer versus the maximum average energy harvested at the receiver, which are characterized by the boundary of so-called \"outage-energy\" region and \"rate-energy\" region, respectively. Moreover, for the case when the channel state information (CSI) is known at the transmitter, we investigate the joint optimization of transmit power control, information and energy transfer scheduling, and the receiver's mode switching. The effects of circuit energy consumption at the receiver on the achievable rate-energy trade-offs are also characterized. Our results provide useful guidelines for the efficient design of emerging wireless communication systems powered by opportunistic WEH.",
"The performance of wireless communication is fundamentally constrained by the limited battery life of wireless devices, the operations of which are frequently disrupted due to the need of manual battery replacement recharging. The recent advance in RF-enabled wireless energy transfer (WET) technology provides an attractive solution named wireless powered communication (WPC), where the wireless devices are powered by dedicated wireless power transmitters to provide continuous and stable microwave energy over the air. As a key enabling technology for truly perpetual communications, WPC opens up the potential to build a network with larger throughput, higher robustness, and increased flexibility compared to its battery-powered counterpart. However, the combination of wireless energy and information transmissions also raises many new research problems and implementation issues that need to be addressed. In this article, we provide an overview of stateof- the-art RF-enabled WET technologies and their applications to wireless communications, highlighting the key design challenges, solutions, and opportunities ahead.",
"This paper investigates joint wireless information and energy transfer in a two-user MIMO interference channel, in which each receiver either decodes the incoming information data (information decoding, ID) or harvests the RF energy (energy harvesting, EH) to operate with a potentially perpetual energy supply. In the two-user interference channel, we have four different scenarios according to the receiver mode - (ID1, ID2), (EH1, EH2), (EH1, ID2), and (ID1, EH2). While the maximum information bit rate is unknown and finding the optimal transmission strategy is still open for (ID1, ID2), we have derived the optimal transmission strategy achieving the maximum harvested energy for (EH1, EH2). For (EH1, ID2), and (ID1, EH2), we find a necessary condition of the optimal transmission strategy and, accordingly, identify the achievable rate-energy (R-E) tradeoff region for two transmission strategies that satisfy the necessary condition - maximum energy beamforming (MEB) and minimum leakage beamforming (MLB). Furthermore, a new transmission strategy satisfying the necessary condition - signal-to-leakage-and-energy ratio (SLER) maximization beamforming - is proposed and shown to exhibit a better R-E region than the MEB and the MLB strategies. Finally, we propose a mode scheduling method to switch between (EH1, ID2) and (ID1, EH2) based on the SLER.",
"Over the past decade, personal computers have been transformed into small, often mobile devices that are rapidly multiplying. Aside from the ever-present smartphone, a growing set of computing devices has become part of our everyday world, from thermostats and wristwatches, to picture frames, personal activity monitors, and even implantable devices such as pacemakers. All of these devices bring us closer to an “Internet of Things,” but supplying power to sustain this future is a growing burden. Technological advances have so far largely failed to improve power delivery to these machines. Power cords tie devices down, prohibiting their free movement, while batteries add weight, bulk, cost, the need for maintenance, and an undesirable environmental footprint. Fortunately, running small computing devices using only incident RF signals as the power source is increasingly possible. We call such devices RF-powered computers. As might be expected, the amount of power that can be harvested from typical RF signals is small. However, the energy efficiency of the computers themselves has improved exponentially for decades-a lesser-known consequence of Moore's law. This relentless improvement has recently brought the power requirements of small computational workloads into the microwatt realm, roughly equal to the power available from RF sources in practical settings.",
"This paper summarizes recent contributions in the broad area of energy harvesting wireless communications. In particular, we provide the current state of the art for wireless networks composed of energy harvesting nodes, starting from the information-theoretic performance limits to transmission scheduling policies and resource allocation, medium access, and networking issues. The emerging related area of energy transfer for self-sustaining energy harvesting wireless networks is considered in detail covering both energy cooperation aspects and simultaneous energy and information transfer. Various potential models with energy harvesting nodes at different network scales are reviewed, as well as models for energy consumption at the nodes.",
"The idea of wireless power transfer (WPT) has been around since the inception of electricity. In the late 19th century, Nikola Tesla described the freedom to transfer energy between two points without the need for a physical connection to a power source as an "all-surpassing importance to man" [1]. A truly wireless device, capable of being remotely powered, not only allows the obvious freedom of movement but also enables devices to be more compact by removing the necessity of a large battery. Applications could leverage this reduction in size and weight to increase the feasibility of concepts such as paper-thin, flexible displays [2], contact-lens-based augmented reality [3], and smart dust [4], among traditional point-to-point power transfer applications. While several methods of wireless power have been introduced since Tesla's work, including near-field magnetic resonance and inductive coupling, laser-based optical power transmission, and far-field RF microwave energy transmission, only RF microwave and laser-based systems are truly long-range methods. While optical power transmission certainly has merit, its mechanisms are outside of the scope of this article and will not be discussed.",
"RF harvesting circuits have been demonstrated for more than 50 years, but only a few have been able to harvest energy from freely available ambient (i.e., non-dedicated) RF sources. In this paper, our objectives were to realize harvester operation at typical ambient RF power levels found within urban and semi-urban environments. To explore the potential for ambient RF energy harvesting, a city-wide RF spectral survey was undertaken from outside all of the 270 London Underground stations at street level. Using the results from this survey, four harvesters (comprising antenna, impedance-matching network, rectifier, maximum power point tracking interface, and storage element) were designed to cover four frequency bands from the largest RF contributors (DTV, GSM900, GSM1800, and 3G) within the ultrahigh frequency (0.3-3 GHz) part of the frequency spectrum. Prototypes were designed and fabricated for each band. The overall end-to-end efficiency of the prototypes using realistic input RF power sources is measured; with our first GSM900 prototype giving an efficiency of 40%. Approximately half of the London Underground stations were found to be suitable locations for harvesting ambient RF energy using our four prototypes. Furthermore, multiband array architectures were designed and fabricated to provide a broader freedom of operation. Finally, an output dc power density comparison was made between all the ambient RF energy harvesters, as well as alternative energy harvesting technologies, and for the first time, it is shown that ambient RF harvesting can be competitive with the other technologies.",
"Wireless power transfer (WPT) is a promising new solution to provide convenient and perpetual energy supplies to wireless networks. In practice, WPT is implementable by various technologies such as inductive coupling, magnetic resonate coupling, and electromagnetic (EM) radiation, for short- mid- long-range applications, respectively. In this paper, we consider the EM or radio signal enabled WPT in particular. Since radio signals can carry energy as well as information at the same time, a unified study on simultaneous wireless information and power transfer (SWIPT) is pursued. Specifically, this paper studies a multiple-input multiple-output (MIMO) wireless broadcast system consisting of three nodes, where one receiver harvests energy and another receiver decodes information separately from the signals sent by a common transmitter, and all the transmitter and receivers may be equipped with multiple antennas. Two scenarios are examined, in which the information receiver and energy receiver are separated and see different MIMO channels from the transmitter, or co-located and see the identical MIMO channel from the transmitter. For the case of separated receivers, we derive the optimal transmission strategy to achieve different tradeoffs for maximal information rate versus energy transfer, which are characterized by the boundary of a so-called rate-energy (R-E) region. For the case of co-located receivers, we show an outer bound for the achievable R-E region due to the potential limitation that practical energy harvesting receivers are not yet able to decode information directly. Under this constraint, we investigate two practical designs for the co-located receiver case, namely time switching and power splitting, and characterize their achievable R-E regions in comparison to the outer bound."
]
} |
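The power-splitting trade-off that recurs across these receiver architectures can be written down directly: a fraction ρ of the received power feeds the information decoder and the remainder feeds the energy harvester. The symbols and the RF-to-DC conversion efficiency η below are illustrative assumptions, not tied to any one cited model.

```python
import math

def rate_energy(P_rx, rho, noise=1.0, eta=0.5):
    # P_rx: received signal power; rho: power-splitting ratio in [0, 1];
    # eta: RF-to-DC conversion efficiency of the harvester
    rate = math.log2(1 + rho * P_rx / noise)   # bits/s/Hz to the decoder
    harvested = eta * (1 - rho) * P_rx         # DC power to the harvester
    return rate, harvested
```

Sweeping rho from 0 to 1 traces the boundary of the achievable rate-energy region for this receiver: rho = 1 maximizes the rate with zero harvested power, rho = 0 the opposite, which is exactly the trade-off these papers characterize.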
1509.01653 | 2170783091 | The millimeter wave (mmWave) band, a prime candidate for 5G cellular networks, seems attractive for wireless energy harvesting since it will feature large antenna arrays and extremely dense base station (BS) deployments. The viability of mmWave for energy harvesting though is unclear, due to the differences in propagation characteristics, such as extreme sensitivity to building blockages. This paper considers a scenario where low-power devices extract energy and or information from the mmWave signals. Using stochastic geometry, analytical expressions are derived for the energy coverage probability, the average harvested power, and the overall (energy-and-information) coverage probability at a typical wireless-powered device in terms of the BS density, the antenna geometry parameters, and the channel parameters. Numerical results reveal several network and device level design insights. At the BSs, optimizing the antenna geometry parameters, such as beamwidth, can maximize the network-wide energy coverage for a given user population. At the device level, the performance can be substantially improved by optimally splitting the received signal for energy and information extraction, and by deploying multi-antenna arrays. For the latter, an efficient low-power multi-antenna mmWave receiver architecture is proposed for simultaneous energy and information transfer. Overall, simulation results suggest that mmWave energy harvesting generally outperforms lower frequency solutions. | Wireless energy and or information transfer in large-scale networks has also been investigated @cite_4 @cite_19 @cite_12 @cite_9 @cite_6 @cite_30 . In @cite_4 , the performance of ambient RF energy harvesting was characterized using tools from stochastic geometry. Using a repulsive point process to model RF transmitters, it was shown that more repulsion helps improve the performance at an energy harvester for a given transmitter density. 
In @cite_19 @cite_12 , cognitive radio networks were considered, and opportunistic wireless energy harvesting was proposed and analyzed. In @cite_9 , a hybrid cellular network architecture was proposed to enable wireless power transfer for mobiles. In particular, an uplink cellular network was overlaid with power beacons and trade-offs between the transmit power and deployment densities were investigated under an outage constraint on the data links. A broadband wireless network with transmit beamforming was considered in @cite_6 , where optimal power control algorithms were devised for improving the throughput and power transfer efficiency. Simultaneous information and energy transfer in a relay-aided network was considered in @cite_30 . Under a random relay selection strategy, the network-level performance was characterized in terms of the relay density and the relay selection area. | {
"cite_N": [
"@cite_30",
"@cite_4",
"@cite_9",
"@cite_6",
"@cite_19",
"@cite_12"
],
"mid": [
"2015786472",
"1670890661",
"2068499153",
"2103829092",
"2122489958",
"2076579100"
],
"abstract": [
"Energy harvesting (EH) from ambient radio-frequency (RF) electromagnetic waves is an efficient solution for fully autonomous and sustainable communication networks. Most of the related works presented in the literature are based on specific (and small-scale) network structures, which although give useful insights on the potential benefits of the RF-EH technology, cannot characterize the performance of general networks. In this paper, we adopt a large-scale approach of the RF-EH technology and we characterize the performance of a network with random number of transmitter-receiver pairs by using stochastic-geometry tools. Specifically, we analyze the outage probability performance and the average harvested energy, when receivers employ power splitting (PS) technique for \"simultaneous\" information and energy transfer. A non-cooperative scheme, where information energy are conveyed only via direct links, is firstly considered and the outage performance of the system as well as the average harvested energy are derived in closed form in function of the power splitting. For this protocol, an interesting optimization problem which minimizes the transmitted power under outage probability and harvesting constraints, is formulated and solved in closed form. In addition, we study a cooperative protocol where sources' transmissions are supported by a random number of potential relays that are randomly distributed into the network. In this case, information energy can be received at each destination via two independent and orthogonal paths (in case of relaying). We characterize both performance metrics, when a selection combining scheme is applied at the receivers and a single relay is randomly selected for cooperative diversity.",
"Ambient radio frequency (RF) energy harvesting technique has recently been proposed as a potential solution for providing proactive energy replenishment for wireless devices. This paper aims to analyze the performance of a battery-free wireless sensor powered by ambient RF energy harvesting using a stochastic geometry approach. Specifically, we consider the point-to-point uplink transmission of a wireless sensor in a stochastic geometry network, where ambient RF sources, such as mobile transmit devices, access points and base stations, are distributed as a Ginibre @math -determinantal point process (DPP). The DPP is able to capture repulsion among points, and hence, it is more general than the Poisson point process (PPP). We analyze two common receiver architectures: separated receiver and time-switching architectures. For each architecture, we consider the scenarios with and without co-channel interference for information transmission. We derive the expectation of the RF energy harvesting rate in closed form and also compute its variance. Moreover, we perform a worst-case study which derives the upper bound of both power and transmission outage probabilities. Additionally, we provide guidelines on the setting of optimal time-switching coefficient in the case of the time-switching architecture. Numerical results verify the correctness of the analysis and show various tradeoffs between parameter setting. Lastly, we prove that the RF-powered sensor performs better when the distribution of the ambient sources exhibits stronger repulsion.",
"Microwave power transfer (MPT) delivers energy wirelessly from stations called power beacons (PBs) to mobile devices by microwave radiation. This provides mobiles practically infinite battery lives and eliminates the need of power cords and chargers. To enable MPT for mobile recharging, this paper proposes a new network architecture that overlays an uplink cellular network with randomly deployed PBs for powering mobiles, called a hybrid network. The deployment of the hybrid network under an outage constraint on data links is investigated based on a stochastic-geometry model where single-antenna base stations (BSs) and PBs form independent homogeneous Poisson point processes (PPPs) with densities λb and λp, respectively, and single-antenna mobiles are uniformly distributed in Voronoi cells generated by BSs. In this model, mobiles and PBs fix their transmission power at p and q, respectively; a PB either radiates isotropically, called isotropic MPT, or directs energy towards target mobiles by beamforming, called directed MPT. The model is used to derive the tradeoffs between the network parameters (p, λb, q, λp) under the outage constraint. First, consider the deployment of the cellular network. It is proved that the outage constraint is satisfied so long as the product pλbα 2 is above a given threshold where α is the path-loss exponent. Next, consider the deployment of the hybrid network assuming infinite energy storage at mobiles. It is shown that for isotropic MPT, the product qλpλbα 2 has to be above a given threshold so that PBs are sufficiently dense; for directed MPT, zmqλpλbα 2 with zm denoting the array gain should exceed a different threshold to ensure short distances between PBs and their target mobiles. Furthermore, similar results are derived for the case of mobiles having small energy storage.",
"Far-field microwave power transfer (MPT) will free wireless sensors and other mobile devices from the constraints imposed by finite battery capacities. Integrating MPT with wireless communications to support simultaneous wireless information and power transfer (SWIPT) allows the same spectrum to be used for dual purposes without compromising the quality of service. A novel approach is presented in this paper for realizing SWIPT in a broadband system where orthogonal frequency division multiplexing and transmit beamforming are deployed to create a set of parallel sub-channels for SWIPT, which simplifies resource allocation. Based on a proposed reconfigurable mobile architecture, different system configurations are considered by combining single-user multi-user systems, downlink uplink information transfer, and variable fixed coding rates. Optimizing the power control for these configurations results in a new class of multi-user power-control problems featuring the circuit-power constraints, specifying that the transferred power must be sufficiently large to support the operation of the receiver circuitry. Solving these problems gives a set of power-control algorithms that exploit channel diversity in frequency for simultaneously enhancing the throughput and the MPT efficiency. For the system configurations with variable coding rates, the algorithms are variants of water-filling that account for the circuit-power constraints. The optimal algorithms for those configurations with fixed coding rates are shown to sequentially allocate mobiles their required power for decoding in ascending order until the entire budgeted power is spent. The required power for a mobile is derived as simple functions of the minimum signal-to-noise ratio for correct decoding, the circuit power and sub-channel gains.",
"Wireless networks can be self-sustaining by harvesting energy from ambient radio-frequency (RF) signals. Recently, researchers have made progress on designing efficient circuits and devices for RF energy harvesting suitable for low-power wireless applications. Motivated by this and building upon the classic cognitive radio (CR) network model, this paper proposes a novel method for wireless networks coexisting where low-power mobiles in a secondary network, called secondary transmitters (STs), harvest ambient RF energy from transmissions by nearby active transmitters in a primary network, called primary transmitters (PTs), while opportunistically accessing the spectrum licensed to the primary network. We consider a stochastic-geometry model in which PTs and STs are distributed as independent homogeneous Poisson point processes (HPPPs) and communicate with their intended receivers at fixed distances. Each PT is associated with a guard zone to protect its intended receiver from ST's interference, and at the same time delivers RF energy to STs located in its harvesting zone. Based on the proposed model, we analyze the transmission probability of STs and the resulting spatial throughput of the secondary network. The optimal transmission power and density of STs are derived for maximizing the secondary network throughput under the given outage-probability constraints in the two coexisting networks, which reveal key insights to the optimal network design. Finally, we show that our analytical result can be generally applied to a non-CR setup, where distributed wireless power chargers are deployed to power coexisting wireless transmitters in a sensor network.",
"While cognitive radio enables spectrum-efficient wireless communication, radio frequency (RF) energy harvesting from ambient interference is an enabler for energy-efficient wireless communication. In this paper, we model and analyze cognitive and energy harvesting-based device-to-device (D2D) communication in cellular networks. The cognitive D2D transmitters harvest energy from ambient interference and use one of the channels allocated to cellular users (in uplink or downlink), which is referred to as the D2D channel, to communicate with the corresponding receivers. We investigate two spectrum access policies for cellular communication in the uplink or downlink, namely, random spectrum access (RSA) policy and prioritized spectrum access (PSA) policy. In RSA, any of the available channels including the channel used by the D2D transmitters can be selected randomly for cellular communication, while in PSA the D2D channel is used only when all of the other channels are occupied. A D2D transmitter can communicate successfully with its receiver only when it harvests enough energy to perform channel inversion toward the receiver, the D2D channel is free, and the signal-to-interference-plus-noise ratio @math at the receiver is above the required threshold; otherwise, an outage occurs for the D2D communication. We use tools from stochastic geometry to evaluate the performance of the proposed communication system model with general path-loss exponent in terms of outage probability for D2D and cellular users. We show that energy harvesting can be a reliable alternative to power cognitive D2D transmitters while achieving acceptable performance. Under the same @math outage requirements as for the non-cognitive case, cognitive channel access improves the outage probability for D2D users for both the spectrum access policies. When compared with the RSA policy, the PSA policy provides a better performance to the D2D users. 
Also, using an uplink channel provides improved performance to the D2D users in dense networks when compared to a downlink channel. For cellular users, the PSA policy provides almost the same outage performance as the RSA policy."
]
} |
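The stochastic-geometry analyses summarized above all reduce to coverage-type probabilities over a Poisson point process (PPP) of transmitters. The sketch below estimates one such quantity by Monte Carlo: the energy coverage probability P(harvested power > threshold) at a receiver at the origin, with RF transmitters drawn as a homogeneous PPP on a disk. The density, path-loss exponent, powers, and threshold are illustrative assumptions, not values from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy_coverage(lam=1e-3, alpha=4.0, P_tx=1.0, eta=0.3,
                    thresh=1e-6, R=500.0, trials=2000):
    """Monte Carlo estimate of P(harvested power > thresh) at the origin."""
    covered = 0
    area = np.pi * R**2
    for _ in range(trials):
        n = rng.poisson(lam * area)         # number of PPP transmitters
        r = R * np.sqrt(rng.random(n))      # uniform radii on the disk
        r = np.maximum(r, 1.0)              # avoid the path-loss singularity
        harvested = eta * P_tx * np.sum(r ** (-alpha))
        covered += harvested > thresh
    return covered / trials

print("energy coverage probability:", energy_coverage())
```

Denser transmitter deployments raise the coverage probability, mirroring the density trade-offs derived analytically in the cited work.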
1509.01653 | 2170783091 | The millimeter wave (mmWave) band, a prime candidate for 5G cellular networks, seems attractive for wireless energy harvesting since it will feature large antenna arrays and extremely dense base station (BS) deployments. The viability of mmWave for energy harvesting though is unclear, due to the differences in propagation characteristics, such as extreme sensitivity to building blockages. This paper considers a scenario where low-power devices extract energy and/or information from the mmWave signals. Using stochastic geometry, analytical expressions are derived for the energy coverage probability, the average harvested power, and the overall (energy-and-information) coverage probability at a typical wireless-powered device in terms of the BS density, the antenna geometry parameters, and the channel parameters. Numerical results reveal several network and device level design insights. At the BSs, optimizing the antenna geometry parameters, such as beamwidth, can maximize the network-wide energy coverage for a given user population. At the device level, the performance can be substantially improved by optimally splitting the received signal for energy and information extraction, and by deploying multi-antenna arrays. For the latter, an efficient low-power multi-antenna mmWave receiver architecture is proposed for simultaneous energy and information transfer. Overall, simulation results suggest that mmWave energy harvesting generally outperforms lower frequency solutions. | Our work differs from the prior work in that we investigate wireless energy and information transfer in a large-scale cellular network. Due to the different physical characteristics and design features at mmWave, prior work on energy/information transfer in lower frequency networks does not directly apply to mmWave networks. 
In another line of work, the performance of mmWave cellular networks in terms of signal-to-interference-and-noise ratio (SINR) coverage and rate has also been analyzed using stochastic geometry @cite_3 @cite_27 . None of this work on mmWave networks, however, provides a performance characterization from the perspective of wireless energy and information transfer. | {
"cite_N": [
"@cite_27",
"@cite_3"
],
"mid": [
"1953553238",
"2031858701"
],
"abstract": [
"Millimeter wave (mmWave) cellular systems will require high-gain directional antennas and dense base station (BS) deployments to overcome a high near-field path loss and poor diffraction. As a desirable side effect, high-gain antennas offer interference isolation, providing an opportunity to incorporate self-backhauling , i.e., BSs backhauling among themselves in a mesh architecture without significant loss in the throughput, to enable the requisite large BS densities. The use of directional antennas and resource sharing between access and backhaul links leads to coverage and rate trends that significantly differ from conventional UHF cellular systems. In this paper, we propose a general and tractable mmWave cellular model capturing these key trends and characterize the associated rate distribution. The developed model and analysis are validated using actual building locations from dense urban settings and empirically derived path loss models. The analysis shows that, in sharp contrast to the interference-limited nature of UHF cellular networks, the spectral efficiency of mmWave networks (besides the total rate) also increases with the BS density, particularly at the cell edge. Increasing the system bandwidth does not significantly influence the cell edge rate, although it boosts the median and peak rates. With self-backhauling, different combinations of the wired backhaul fraction (i.e., the fraction of BSs with a wired connection) and the BS density are shown to guarantee the same median rate (QoS).",
"Millimeter wave (mmWave) holds promise as a carrier frequency for fifth generation cellular networks. Because mmWave signals are sensitive to blockage, prior models for cellular networks operated in the ultra high frequency (UHF) band do not apply to analyze mmWave cellular networks directly. Leveraging concepts from stochastic geometry, this paper proposes a general framework to evaluate the coverage and rate performance in mmWave cellular networks. Using a distance-dependent line-of-site (LOS) probability function, the locations of the LOS and non-LOS base stations are modeled as two independent non-homogeneous Poisson point processes, to which different path loss laws are applied. Based on the proposed framework, expressions for the signal-to-noise-and-interference ratio (SINR) and rate coverage probability are derived. The mmWave coverage and rate performance are examined as a function of the antenna geometry and base station density. The case of dense networks is further analyzed by applying a simplified system model, in which the LOS region of a user is approximated as a fixed LOS ball. The results show that dense mmWave networks can achieve comparable coverage and much higher data rates than conventional UHF cellular systems, despite the presence of blockages. The results suggest that the cell size to achieve the optimal SINR scales with the average size of the area that is LOS to a user."
]
} |
1509.01354 | 1756600408 | Along with data on the web increasing dramatically, hashing is becoming more and more popular as a method of approximate nearest neighbor search. Previous supervised hashing methods utilized a similarity/dissimilarity matrix to get semantic information. But the matrix is not easy to construct for a new dataset. Rather than to reconstruct the matrix, we proposed a straightforward CNN-based hashing method, i.e. binarizing the activations of a fully connected layer with threshold 0 and taking the binary result as hash codes. This method achieved the best performance on CIFAR-10 and was comparable with the state-of-the-art on MNIST. And our experiments on CIFAR-10 suggested that the signs of activations may carry more information than the relative values of activations between samples, and that the co-adaptation between feature extractor and hash functions is important for hashing. | Recently, as the ever-growing web data makes information retrieval and other problems more challenging, hashing has become a popular solution @cite_28 @cite_2 . The short binary codes generated by hashing make retrieval efficient both on storage and computation. In many cases, search in millions of data will only consume constant time via tens-of-bit representations mapped from the query by hashing. | {
"cite_N": [
"@cite_28",
"@cite_2"
],
"mid": [
"2293824885",
"2074668987"
],
"abstract": [
"Hashing is a popular approximate nearest neighbor search approach for large-scale image retrieval. Supervised hashing, which incorporates similarity dissimilarity information on entity pairs to improve the quality of hashing function learning, has recently received increasing attention. However, in the existing supervised hashing methods for images, an input image is usually encoded by a vector of handcrafted visual features. Such hand-crafted feature vectors do not necessarily preserve the accurate semantic similarities of images pairs, which may often degrade the performance of hashing function learning. In this paper, we propose a supervised hashing method for image retrieval, in which we automatically learn a good image representation tailored to hashing as well as a set of hash functions. The proposed method has two stages. In the first stage, given the pairwise similarity matrix S over training images, we propose a scalable coordinate descent method to decompose S into a product of HHT where H is a matrix with each of its rows being the approximate hash code associated to a training image. In the second stage, we propose to simultaneously learn a good feature representation for the input images as well as a set of hash functions, via a deep convolutional network tailored to the learned hash codes in H and optionally the discrete class labels of the images. Extensive empirical evaluations on three benchmark datasets with different kinds of images show that the proposed method has superior performance gains over several state-of-the-art supervised and unsupervised hashing methods.",
"Hashing-based approximate nearest neighbor (ANN) search in huge databases has become popular due to its computational and memory efficiency. The popular hashing methods, e.g., Locality Sensitive Hashing and Spectral Hashing, construct hash functions based on random or principal projections. The resulting hashes are either not very accurate or are inefficient. Moreover, these methods are designed for a given metric similarity. On the contrary, semantic similarity is usually given in terms of pairwise labels of samples. There exist supervised hashing methods that can handle such semantic similarity, but they are prone to overfitting when labeled data are small or noisy. In this work, we propose a semi-supervised hashing (SSH) framework that minimizes empirical error over the labeled set and an information theoretic regularizer over both labeled and unlabeled sets. Based on this framework, we present three different semi-supervised hashing methods, including orthogonal hashing, nonorthogonal hashing, and sequential hashing. Particularly, the sequential hashing method generates robust codes in which each hash function is designed to correct the errors made by the previous ones. We further show that the sequential learning paradigm can be extended to unsupervised domains where no labeled pairs are available. Extensive experiments on four large datasets (up to 80 million samples) demonstrate the superior performance of the proposed SSH methods over state-of-the-art supervised and unsupervised hashing techniques."
]
} |
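The hashing scheme described in the 1509.01354 record above is simple enough to sketch end-to-end: binarize fully connected layer activations at threshold 0, use the bits as hash codes, and retrieve by Hamming distance. In this sketch random Gaussian vectors stand in for real FC-layer activations (an illustrative assumption, not the paper's network).

```python
import numpy as np

rng = np.random.default_rng(0)

def to_hash(activations):
    """Binarize FC-layer activations at threshold 0 -> boolean hash codes."""
    return activations > 0

def hamming(a, b):
    """Hamming distance between two boolean code vectors."""
    return np.count_nonzero(a != b)

database = rng.standard_normal((1000, 48))  # 1000 items, 48-bit codes
db_codes = to_hash(database)

# Query with a slightly perturbed copy of item 42: only activations near 0
# flip sign, so its code stays close in Hamming distance.
query = database[42] + 0.05 * rng.standard_normal(48)
q_code = to_hash(query)

dists = np.array([hamming(q_code, c) for c in db_codes])
print("nearest item:", int(np.argmin(dists)))
```

Because unrelated random items differ in roughly half their bits while the perturbed copy flips only a few, the binary codes preserve neighborhood structure despite the drastic quantization.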
1509.01329 | 2144102811 | Common visual recognition tasks such as classification, object detection, and semantic segmentation are rapidly reaching maturity, and given the recent rate of progress, it is not unreasonable to conjecture that techniques for many of these problems will approach human levels of performance in the next few years. In this paper we look to the future: what is the next frontier in visual recognition? We offer one possible answer to this question. We propose a detailed image annotation that captures information beyond the visible pixels and requires complex reasoning about full scene structure. Specifically, we create an amodal segmentation of each image: the full extent of each region is marked, not just the visible pixels. Annotators outline and name all salient regions in the image and specify a partial depth order. The result is a rich scene structure, including visible and occluded portions of each region, figure-ground edge information, semantic labels, and object overlap. We create two datasets for semantic amodal segmentation. First, we label 500 images in the BSDS dataset with multiple annotators per image, allowing us to study the statistics of human annotations. We show that the proposed full scene annotation is surprisingly consistent between annotators, including for regions and edges. Second, we annotate 5000 images from COCO. This larger dataset allows us to explore a number of algorithmic ideas for amodal segmentation and depth ordering. We introduce novel metrics for these tasks, and along with our strong baselines, define concrete new challenges for the community. | Compared to datasets @cite_26 @cite_3 @cite_45 , our annotation is dense, amodal, and covers both objects and regions. Related datasets such as LabelMe @cite_38 and SUN @cite_18 also have objects annotated modally. Only for pedestrian detection @cite_21 are objects often annotated amodally (with both visible and amodal bounding boxes). | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_26",
"@cite_21",
"@cite_3",
"@cite_45"
],
"mid": [
"2110764733",
"2017814585",
"2031489346",
"2031454541",
"",
""
],
"abstract": [
"We seek to build a large collection of images with ground truth labels to be used for object detection and recognition research. Such data is useful for supervised learning and quantitative evaluation. To achieve this, we developed a web-based tool that allows easy image annotation and instant sharing of such annotations. Using this annotation tool, we have collected a large dataset that spans many object categories, often containing multiple instances over a wide variety of images. We quantify the contents of the dataset and compare against existing state of the art datasets used for object recognition and detection. Also, we show how to extend the dataset to automatically enhance object labels with WordNet, discover object parts, recover a depth ordering of objects in a scene, and increase the number of labels using minimal user supervision and images from the web.",
"Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes.",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"Pedestrian detection is a key problem in computer vision, with several applications that have the potential to positively impact quality of life. In recent years, the number of approaches to detecting pedestrians in monocular images has grown steadily. However, multiple data sets and widely varying evaluation protocols are used, making direct comparisons difficult. To address these shortcomings, we perform an extensive evaluation of the state of the art in a unified framework. We make three primary contributions: 1) We put together a large, well-annotated, and realistic monocular pedestrian detection data set and study the statistics of the size, position, and occlusion patterns of pedestrians in urban scenes, 2) we propose a refined per-frame evaluation methodology that allows us to carry out probing and informative comparisons, including measuring performance in relation to scale and occlusion, and 3) we evaluate the performance of sixteen pretrained state-of-the-art detectors across six data sets. Our study allows us to assess the state of the art and provides a framework for gauging future efforts. Our experiments show that despite significant progress, performance still has much room for improvement. In particular, detection is disappointing at low resolutions and for partially occluded pedestrians.",
"",
""
]
} |
1509.01329 | 2144102811 | Common visual recognition tasks such as classification, object detection, and semantic segmentation are rapidly reaching maturity, and given the recent rate of progress, it is not unreasonable to conjecture that techniques for many of these problems will approach human levels of performance in the next few years. In this paper we look to the future: what is the next frontier in visual recognition? We offer one possible answer to this question. We propose a detailed image annotation that captures information beyond the visible pixels and requires complex reasoning about full scene structure. Specifically, we create an amodal segmentation of each image: the full extent of each region is marked, not just the visible pixels. Annotators outline and name all salient regions in the image and specify a partial depth order. The result is a rich scene structure, including visible and occluded portions of each region, figure-ground edge information, semantic labels, and object overlap. We create two datasets for semantic amodal segmentation. First, we label 500 images in the BSDS dataset with multiple annotators per image, allowing us to study the statistics of human annotations. We show that the proposed full scene annotation is surprisingly consistent between annotators, including for regions and edges. Second, we annotate 5000 images from COCO. This larger dataset allows us to explore a number of algorithmic ideas for amodal segmentation and depth ordering. We introduce novel metrics for these tasks, and along with our strong baselines, define concrete new challenges for the community. | We note that our annotation scheme subsumes modal segmentation @cite_52 , edge detection @cite_52 , and figure-ground edge labeling @cite_25 . As our COCO annotations (5000 images) are an order of magnitude larger than BSDS (500 images) @cite_52 , the previous de-facto dataset for these tasks, we expect our data to be quite useful for these classic tasks. | {
"cite_N": [
"@cite_25",
"@cite_52"
],
"mid": [
"2157652882",
"2110158442"
],
"abstract": [
"Figure-ground organization refers to the visual perception that a contour separating two regions belongs to one of the regions. Recent studies have found neural correlates of figure-ground assignment in V2 as early as 10-25 ms after response onset, providing strong support for the role of local bottom-up processing. How much information about figure-ground assignment is available from locally computed cues? Using a large collection of natural images, in which neighboring regions were assigned a figure-ground relation by human observers, we quantified the extent to which figural regions locally tend to be smaller, more convex, and lie below ground regions. Our results suggest that these Gestalt cues are ecologically valid, and we quantify their relative power. We have also developed a simple bottom-up computational model of figure-ground assignment that takes image contours as input. Using parameters fit to natural image statistics, the model is capable of matching human-level performance when scene context limited.",
"This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications."
]
} |
1509.01329 | 2144102811 | Common visual recognition tasks such as classification, object detection, and semantic segmentation are rapidly reaching maturity, and given the recent rate of progress, it is not unreasonable to conjecture that techniques for many of these problems will approach human levels of performance in the next few years. In this paper we look to the future: what is the next frontier in visual recognition? We offer one possible answer to this question. We propose a detailed image annotation that captures information beyond the visible pixels and requires complex reasoning about full scene structure. Specifically, we create an amodal segmentation of each image: the full extent of each region is marked, not just the visible pixels. Annotators outline and name all salient regions in the image and specify a partial depth order. The result is a rich scene structure, including visible and occluded portions of each region, figure-ground edge information, semantic labels, and object overlap. We create two datasets for semantic amodal segmentation. First, we label 500 images in the BSDS dataset with multiple annotators per image, allowing us to study the statistics of human annotations. We show that the proposed full scene annotation is surprisingly consistent between annotators, including for regions and edges. Second, we annotate 5000 images from COCO. This larger dataset allows us to explore a number of algorithmic ideas for amodal segmentation and depth ordering. We introduce novel metrics for these tasks, and along with our strong baselines, define concrete new challenges for the community. | Finally, there has been some algorithmic work on amodal completion @cite_43 @cite_5 @cite_4 @cite_20 . Of particular interest, Ke et al. @cite_42 recently proposed a general approach for amodal segmentation that serves as the foundation for one of our baselines (see ). 
Most existing recognition systems, however, operate on a per-patch or per-window basis, or with a limited receptive field, including for object detection @cite_11 @cite_13 @cite_36 , edge detection @cite_49 @cite_51 @cite_9 , and semantic segmentation @cite_40 @cite_8 @cite_22 . Our dataset will present challenges to such methods as amodal segmentation requires reasoning about object interactions. | {
"cite_N": [
"@cite_4",
"@cite_22",
"@cite_8",
"@cite_36",
"@cite_9",
"@cite_42",
"@cite_43",
"@cite_40",
"@cite_49",
"@cite_51",
"@cite_5",
"@cite_13",
"@cite_20",
"@cite_11"
],
"mid": [
"",
"",
"",
"",
"",
"2951289157",
"331766535",
"1528789833",
"2165914352",
"",
"",
"",
"",
"2168356304"
],
"abstract": [
"",
"",
"",
"",
"",
"We consider the problem of amodal instance segmentation, the objective of which is to predict the region encompassing both visible and occluded parts of each object. Thus far, the lack of publicly available amodal segmentation annotations has stymied the development of amodal segmentation methods. In this paper, we sidestep this issue by relying solely on standard modal instance segmentation annotations to train our model. The result is a new method for amodal instance segmentation, which represents the first such method to the best of our knowledge. We demonstrate the proposed method's effectiveness both qualitatively and quantitatively.",
"Scene understanding requires reasoning about both what we can see and what is occluded. We offer a simple and general approach to infer labels of occluded background regions. Our approach incorporates estimates of visible surrounding background, detected objects, and shape priors from transferred training regions. We demonstrate the ability to infer the labels of occluded background regions in both the outdoor StreetScenes dataset and an indoor scene dataset using the same approach. Our experiments show that our method outperforms competent baselines.",
"This paper proposes a new approach to learning a discriminative model of object classes, incorporating appearance, shape and context information efficiently. The learned model is used for automatic visual recognition and semantic segmentation of photographs. Our discriminative model exploits novel features, based on textons, which jointly model shape and texture. Unary classification and feature selection is achieved using shared boosting to give an efficient classifier which can be applied to a large number of classes. Accurate image segmentation is achieved by incorporating these classifiers in a conditional random field. Efficient training of the model on very large datasets is achieved by exploiting both random feature selection and piecewise training methods. High classification and segmentation accuracy are demonstrated on three different databases: i) our own 21-object class database of photographs of real objects viewed under general lighting conditions, poses and viewpoints, ii) the 7-class Corel subset and iii) the 7-class Sowerby database used in [1]. The proposed algorithm gives competitive results both for highly textured (e.g. grass, trees), highly structured (e.g. cars, faces, bikes, aeroplanes) and articulated objects (e.g. body, cow).",
"Edge detection is one of the most studied problems in computer vision, yet it remains a very challenging task. It is difficult since often the decision for an edge cannot be made purely based on low level cues such as gradient, instead we need to engage all levels of information, low, middle, and high, in order to decide where to put edges. In this paper we propose a novel supervised learning algorithm for edge and object boundary detection which we refer to as Boosted Edge Learning or BEL for short. A decision of an edge point is made independently at each location in the image; a very large aperture is used providing significant context for each decision. In the learning stage, the algorithm selects and combines a large number of features across different scales in order to learn a discriminative model using an extended version of the Probabilistic Boosting Tree classification algorithm. The learning based framework is highly adaptive and there are no parameters to tune. We show applications for edge detection in a number of specific image domains as well as on natural images. We test on various datasets including the Berkeley dataset and the results obtained are very good.",
"",
"",
"",
"",
"We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI-SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function."
]
} |
1509.01288 | 2212495078 | Applications that learn from opinionated documents, like tweets or product reviews, face two challenges. First, the opinionated documents constitute an evolving stream, where both the author's attitude and the vocabulary itself may change. Second, labels of documents are scarce and labels of words are unreliable, because the sentiment of a word depends on the (unknown) context in the author's mind. Most of the research on mining over opinionated streams focuses on the first aspect of the problem, whereas for the second a continuous supply of labels from the stream is assumed. Such an assumption though is utopian as the stream is infinite and the labeling cost is prohibitive. To this end, we investigate the potential of active stream learning algorithms that ask for labels on demand. Our proposed ACOSTREAM 1 approach works with limited labels: it uses an initial seed of labeled documents, occasionally requests additional labels for documents from the human expert and incrementally adapts to the underlying stream while exploiting the available labeled documents. In its core, ACOSTREAM consists of a MNB classifier coupled with "sampling" strategies for requesting class labels for new unlabeled documents. In the experiments, we evaluate the classifier performance over time by varying: (a) the class distribution of the opinionated stream, while assuming that the set of the words in the vocabulary is fixed but their polarities may change with the class distribution; and (b) the number of unknown words arriving at each moment, while the class polarity may also change. Our results show that active learning on a stream of opinionated documents, delivers good performance while requiring a small selection of labels | Active learning is a prominent choice when dealing with problems where labeled data are expensive to obtain, such as in polarity classification or computational biology applications. 
There exist various active learning approaches, surveyed in recent works such as @cite_30 @cite_8 . They differ in their heuristics for selecting the instances for which the true label is requested. Garnett et al. @cite_11 use the most likely or the most pessimistic posterior @math made by the current model. In contrast, Krempl et al. @cite_5 weight the posteriors by their likelihood, while Ho et al. @cite_3 use hypothesis testing, to include the reliability of the posterior when selecting the next instance. All these approaches follow the same framework: they select the next instance and relearn the classifier with the new instance. Relearning is expensive in terms of runtime when dealing with large streams, as we do. Our approach works incrementally; thus it does not require relearning but rather expands the current model with new instances. | {
"cite_N": [
"@cite_30",
"@cite_8",
"@cite_3",
"@cite_5",
"@cite_11"
],
"mid": [
"2021732807",
"2903158431",
"2145452316",
"94418491",
"2952562730"
],
"abstract": [
"Active learning aims to train an accurate prediction model with minimum cost by labeling most informative instances. In this paper, we survey existing works on active learning from an instance-selection perspective and classify them into two categories with a progressive relationship: (1) active learning merely based on uncertainty of independent and identically distributed (IID) instances, and (2) active learning by further taking into account instance correlations. Using the above categorization, we summarize major approaches in the field, along with their technical strengths/weaknesses, followed by a simple runtime performance comparison, and discussion about emerging active learning applications and instance-selection challenges therein. This survey intends to provide a high-level summarization for active learning and motivates interested readers to consider instance-selection approaches for designing effective active learning solutions.",
"",
"There has been recently a growing interest in the use of transductive inference for learning. We expand here the scope of transductive inference to active learning in a stream-based setting. Towards that end this paper proposes Query-by-Transduction (QBT) as a novel active learning algorithm. QBT queries the label of an example based on the p-values obtained using transduction. We show that QBT is closely related to Query-by-Committee (QBC) using relations between transduction, Bayesian statistical testing, Kullback-Leibler divergence, and Shannon information. The feasibility and utility of QBT is shown on both binary and multi-class classification tasks using SVM as the choice classifier. Our experimental results show that QBT compares favorably, in terms of mean generalization, against random sampling, committee-based active learning, margin-based active learning, and QBC in the stream-based setting.",
"Mining data with minimal annotation costs requires efficient active approaches, that ideally select the optimal candidate for labelling under a user-specified classification performance measure. Common generic approaches, that are usable with any classifier and any performance measure, are either slow like error reduction, or heuristics like uncertainty sampling. In contrast, our Probabilistic Active Learning (PAL) approach offers versatility, direct optimisation of a performance measure and computational efficiency. Given a labelling candidate from a pool, PAL models both the candidate’s label and the true posterior in its neighbourhood as random variables. By computing the expectation of the gain in classification performance over both random variables, PAL then selects the candidate that in expectation will improve the classification performance the most. Extending our recent poster, we discuss the properties of PAL and perform a thorough experimental evaluation on several synthetic and real-world data sets of different sizes. Results show comparable or better classification performance than error reduction and uncertainty sampling, yet PAL has the same asymptotic time complexity as uncertainty sampling and is faster than error reduction.",
"We consider two active binary-classification problems with atypical objectives. In the first, active search, our goal is to actively uncover as many members of a given class as possible. In the second, active surveying, our goal is to actively query points to ultimately predict the proportion of a given class. Numerous real-world problems can be framed in these terms, and in either case typical model-based concerns such as generalization error are only of secondary importance. We approach these problems via Bayesian decision theory; after choosing natural utility functions, we derive the optimal policies. We provide three contributions. In addition to introducing the active surveying problem, we extend previous work on active search in two ways. First, we prove a novel theoretical result, that less-myopic approximations to the optimal policy can outperform more-myopic approximations by any arbitrary degree. We then derive bounds that for certain models allow us to reduce (in practice dramatically) the exponential search space required by a naive implementation of the optimal policy, enabling further lookahead while still ensuring that optimal decisions are always made."
]
} |
1509.01288 | 2212495078 | Applications that learn from opinionated documents, like tweets or product reviews, face two challenges. First, the opinionated documents constitute an evolving stream, where both the author's attitude and the vocabulary itself may change. Second, labels of documents are scarce and labels of words are unreliable, because the sentiment of a word depends on the (unknown) context in the author's mind. Most of the research on mining over opinionated streams focuses on the first aspect of the problem, whereas for the second a continuous supply of labels from the stream is assumed. Such an assumption though is utopian as the stream is infinite and the labeling cost is prohibitive. To this end, we investigate the potential of active stream learning algorithms that ask for labels on demand. Our proposed ACOSTREAM 1 approach works with limited labels: it uses an initial seed of labeled documents, occasionally requests additional labels for documents from the human expert and incrementally adapts to the underlying stream while exploiting the available labeled documents. In its core, ACOSTREAM consists of a MNB classifier coupled with "sampling" strategies for requesting class labels for new unlabeled documents. In the experiments, we evaluate the classifier performance over time by varying: (a) the class distribution of the opinionated stream, while assuming that the set of the words in the vocabulary is fixed but their polarities may change with the class distribution; and (b) the number of unknown words arriving at each moment, while the class polarity may also change. Our results show that active learning on a stream of opinionated documents, delivers good performance while requiring a small selection of labels | Zliobaite et al. @cite_26 propose two sampling strategies that are flexible towards a growing collection and also consider concept change. 
The latter is covered by allowing the learner to also select samples which are not close to the decision boundary, i.e., for which the classifier is very certain, so that the classifier will not miss concept change. Boiy et al. @cite_19 test uncertainty and relevance sampling (relevance sampling requests labels for those examples which are most likely to be class members @cite_17 ; it is used to acquire more examples from a scarce class) with different classifiers. Their results show that the Multinomial Naive Bayes (MNB) classifier performs best for both sampling techniques on polarity classification. We also use MNB as our classifier. | {
"cite_N": [
"@cite_19",
"@cite_26",
"@cite_17"
],
"mid": [
"2034090215",
"3540556",
"2085989833"
],
"abstract": [
"Sentiment analysis, also called opinion mining, is a form of information extraction from text of growing research and commercial interest. In this paper we present our machine learning experiments with regard to sentiment analysis in blog, review and forum texts found on the World Wide Web and written in English, Dutch and French. We train from a set of example sentences or statements that are manually annotated as positive, negative or neutral with regard to a certain entity. We are interested in the feelings that people express with regard to certain consumption products. We learn and evaluate several classification models that can be configured in a cascaded pipeline. We have to deal with several problems, being the noisy character of the input texts, the attribution of the sentiment to a particular entity and the small size of the training set. We succeed to identify positive, negative and neutral feelings to the entity under consideration with ca. 83% accuracy for English texts based on unigram features augmented with linguistic features. The accuracy results of processing the Dutch and French texts are ca. 70% and 68% respectively due to the larger variety of the linguistic expressions that more often diverge from standard language, thus demanding more training patterns. In addition, our experiments give us insights into the portability of the learned models across domains and languages. A substantial part of the article investigates the role of active learning techniques for reducing the number of examples to be manually annotated.",
"In learning to classify streaming data, obtaining the true labels may require major effort and may incur excessive cost. Active learning focuses on learning an accurate model with as few labels as possible. Streaming data poses additional challenges for active learning, since the data distribution may change over time (concept drift) and classifiers need to adapt. Conventional active learning strategies concentrate on querying the most uncertain instances, which are typically concentrated around the decision boundary. If changes do not occur close to the boundary, they will be missed and classifiers will fail to adapt. In this paper we develop two active learning strategies for streaming data that explicitly handle concept drift. They are based on uncertainty, dynamic allocation of labeling efforts over time and randomization of the search space. We empirically demonstrate that these strategies react well to changes that can occur anywhere in the instance space and unexpectedly.",
"The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness."
]
} |
1509.01288 | 2212495078 | Applications that learn from opinionated documents, like tweets or product reviews, face two challenges. First, the opinionated documents constitute an evolving stream, where both the author's attitude and the vocabulary itself may change. Second, labels of documents are scarce and labels of words are unreliable, because the sentiment of a word depends on the (unknown) context in the author's mind. Most of the research on mining over opinionated streams focuses on the first aspect of the problem, whereas for the second a continuous supply of labels from the stream is assumed. Such an assumption though is utopian as the stream is infinite and the labeling cost is prohibitive. To this end, we investigate the potential of active stream learning algorithms that ask for labels on demand. Our proposed ACOSTREAM 1 approach works with limited labels: it uses an initial seed of labeled documents, occasionally requests additional labels for documents from the human expert and incrementally adapts to the underlying stream while exploiting the available labeled documents. In its core, ACOSTREAM consists of a MNB classifier coupled with "sampling" strategies for requesting class labels for new unlabeled documents. In the experiments, we evaluate the classifier performance over time by varying: (a) the class distribution of the opinionated stream, while assuming that the set of the words in the vocabulary is fixed but their polarities may change with the class distribution; and (b) the number of unknown words arriving at each moment, while the class polarity may also change. Our results show that active learning on a stream of opinionated documents, delivers good performance while requiring a small selection of labels | @cite_23 propose an active stream learning based classifier for classifying tweets into relevant or irrelevant for a given company. 
Their idea is to build a company profile of positive and negative words and test each tweet against the profile to decide on its class. The profile is maintained online over the stream; initially a small set of words is included, but the seed set is expanded by also including words that often co-occur in the stream with words in the seed set. We also expand on a word basis; however, our approaches are broader rather than topic-specific. | {
"cite_N": [
"@cite_23"
],
"mid": [
"44890233"
],
"abstract": [
"Twitter is a popular micro-blogging service on the Web, where people can enter short messages, which then become visible to some other users of the service. While the topics of these messages vary, there are a lot of messages where the users express their opinions about some companies or their products. These messages are a rich source of information for companies for sentiment analysis or opinion mining. There is however a great obstacle for analyzing the messages directly: as the company names are often ambiguous (e.g. apple, the fruit vs. Apple Inc.), one needs first to identify, which messages are related to the company. In this paper we address this question. We present various techniques for classifying tweet messages containing a given keyword, whether they are related to a particular company with that name or not. We first present simple techniques, which make use of company profiles, which we created semi-automatically from external Web sources. Our advanced techniques take ambiguity estimations into account and also automatically extend the company profiles from the twitter stream itself. We demonstrate the effectiveness of our methods through an extensive set of experiments. Moreover, we extensively analyze the sources of errors in the classification. The analysis not only brings further improvement, but also enables to use the human input more efficiently."
]
} |
1509.01288 | 2212495078 | Applications that learn from opinionated documents, like tweets or product reviews, face two challenges. First, the opinionated documents constitute an evolving stream, where both the author's attitude and the vocabulary itself may change. Second, labels of documents are scarce and labels of words are unreliable, because the sentiment of a word depends on the (unknown) context in the author's mind. Most of the research on mining over opinionated streams focuses on the first aspect of the problem, whereas for the second a continuous supply of labels from the stream is assumed. Such an assumption though is utopian as the stream is infinite and the labeling cost is prohibitive. To this end, we investigate the potential of active stream learning algorithms that ask for labels on demand. Our proposed ACOSTREAM 1 approach works with limited labels: it uses an initial seed of labeled documents, occasionally requests additional labels for documents from the human expert and incrementally adapts to the underlying stream while exploiting the available labeled documents. In its core, ACOSTREAM consists of a MNB classifier coupled with "sampling" strategies for requesting class labels for new unlabeled documents. In the experiments, we evaluate the classifier performance over time by varying: (a) the class distribution of the opinionated stream, while assuming that the set of the words in the vocabulary is fixed but their polarities may change with the class distribution; and (b) the number of unknown words arriving at each moment, while the class polarity may also change. Our results show that active learning on a stream of opinionated documents, delivers good performance while requiring a small selection of labels | Recently, @cite_16 present an active learning framework for selecting the most suitable tweets for an initially trained classification model. 
They use a Support Vector Machine (SVM) and rebuild the model as soon as new suitable tweets are selected. They select suitable tweets based on uncertainty and random sampling. Similarly, @cite_31 contribute an active learning approach distinguishing opinionated (positive and negative) from non-opinionated (neutral) tweets in financial Twitter data streams. Based on an SVM classifier, they determine a query strategy for active learning, combining advantages from uncertainty and random sampling. | {
"cite_N": [
"@cite_31",
"@cite_16"
],
"mid": [
"2078309899",
"1995068038"
],
"abstract": [
"Studying the relationship between public sentiment and stock prices has been the focus of several studies. This paper analyzes whether the sentiment expressed in Twitter feeds, which discuss selected companies and their products, can indicate their stock price changes. To address this problem, an active learning approach was developed and applied to sentiment analysis of tweet streams in the stock market domain. The paper first presents a static Twitter data analysis problem, explored in order to determine the best Twitter-specific text preprocessing setting for training the Support Vector Machine (SVM) sentiment classifier. In the static setting, the Granger causality test shows that sentiments in stock-related tweets can be used as indicators of stock price movements a few days in advance, where improved results were achieved by adapting the SVM classifier to categorize Twitter posts into three sentiment categories of positive, negative and neutral (instead of positive and negative only). These findings were adopted in the development of a new stream-based active learning approach to sentiment analysis, applicable in incremental learning from continuously changing financial tweet streams. To this end, a series of experiments was conducted to determine the best querying strategy for active learning of the SVM classifier adapted to sentiment analysis of financial tweet streams. The experiments in analyzing stock market sentiments of a particular company show that changes in positive sentiment probability can be used as indicators of the changes in stock closing prices.",
"Abstract Sentiment analysis from data streams is aimed at detecting authors’ attitude, emotions and opinions from texts in real-time. To reduce the labeling effort needed in the data collection phase, active learning is often applied in streaming scenarios, where a learning algorithm is allowed to select new examples to be manually labeled in order to improve the learner’s performance. Even though there are many on-line platforms which perform sentiment analysis, there is no publicly available interactive on-line platform for dynamic adaptive sentiment analysis, which would be able to handle changes in data streams and adapt its behavior over time. This paper describes ClowdFlows, a cloud-based scientific workflow platform, and its extensions enabling the analysis of data streams and active learning. Moreover, by utilizing the data and workflow sharing in ClowdFlows, the labeling of examples can be distributed through crowdsourcing. The advanced features of ClowdFlows are demonstrated on a sentiment analysis use case, using active learning with a linear Support Vector Machine for learning sentiment classification models to be applied to microblogging data streams."
]
} |
1509.01288 | 2212495078 | Applications that learn from opinionated documents, like tweets or product reviews, face two challenges. First, the opinionated documents constitute an evolving stream, where both the author's attitude and the vocabulary itself may change. Second, labels of documents are scarce and labels of words are unreliable, because the sentiment of a word depends on the (unknown) context in the author's mind. Most of the research on mining over opinionated streams focuses on the first aspect of the problem, whereas for the second a continuous supply of labels from the stream is assumed. Such an assumption though is utopian as the stream is infinite and the labeling cost is prohibitive. To this end, we investigate the potential of active stream learning algorithms that ask for labels on demand. Our proposed ACOSTREAM 1 approach works with limited labels: it uses an initial seed of labeled documents, occasionally requests additional labels for documents from the human expert and incrementally adapts to the underlying stream while exploiting the available labeled documents. In its core, ACOSTREAM consists of a MNB classifier coupled with "sampling" strategies for requesting class labels for new unlabeled documents. In the experiments, we evaluate the classifier performance over time by varying: (a) the class distribution of the opinionated stream, while assuming that the set of the words in the vocabulary is fixed but their polarities may change with the class distribution; and (b) the number of unknown words arriving at each moment, while the class polarity may also change. Our results show that active learning on a stream of opinionated documents, delivers good performance while requiring a small selection of labels | We skip a discussion on the most recent polarity classification algorithms such as @cite_24 as the contribution of our work is towards active learning strategies for polarity classification rather than pure polarity classification. | {
"cite_N": [
"@cite_24"
],
"mid": [
"2251939518"
],
"abstract": [
"Semantic word spaces have been very useful but cannot express the meaning of longer phrases in a principled way. Further progress towards understanding compositionality in tasks such as sentiment detection requires richer supervised training and evaluation resources and more powerful models of composition. To remedy this, we introduce a Sentiment Treebank. It includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and presents new challenges for sentiment compositionality. To address them, we introduce the Recursive Neural Tensor Network. When trained on the new treebank, this model outperforms all previous methods on several metrics. It pushes the state of the art in single sentence positive/negative classification from 80% up to 85.4%. The accuracy of predicting fine-grained sentiment labels for all phrases reaches 80.7%, an improvement of 9.7% over bag of features baselines. Lastly, it is the only model that can accurately capture the effects of negation and its scope at various tree levels for both positive and negative phrases."
]
} |
1509.01094 | 2279275868 | Concerns about the rising energy consumption of IT infrastructure have spurred the development of more power-efficient networking equipment and algorithms. When old equipment just drew an almost constant amount of power regardless of the traffic load, there were some efforts to minimize the total energy usage by modifying routing decisions to aggregate traffic in a minimal set of links, creating the opportunity to power off some unused equipment during low-traffic periods. New equipment, with power profile functions depending on the offered load, presents new challenges for optimal routing. The goal now is not just to power some links down, but to aggregate and/or spread the traffic so that devices operate in their sweet spot with regard to network usage. In this paper we present an algorithm that, making use of the ant colonization algorithm, computes, in a decentralized manner, the routing tables so as to minimize global energy consumption. Moreover, the resulting algorithm is also able to track changes in the offered load and react to them in real time. Highlights: New network links show load-dependent energy consumption. Energy-saving routing algorithm for load-dependent links. Simply powering down links can increase energy consumption. Obtained power savings in the 10-20% interval for real networks. | Research on new routing procedures that save power on communication networks has been ongoing for a few years already. The first proposals focused on concentrating the traffic on a reduced set of network elements so that unused resources could be powered off during low-load periods, decreasing power consumption. @cite_26 belongs to this first family of proposals. It tries to concentrate traffic flows on a reduced set of links so that the rest can be powered off. Other proposals in the same vein are @cite_16 and @cite_25 . 
The first formulates an energy-consumption minimization problem, considering that powered nodes and links need a constant amount of power, and the second treats the problem of maximizing the number of powered-off links. As both problems are intractable (NP-complete) @cite_26 @cite_16 @cite_30 @cite_25 , both articles provide heuristics to approximate the solution. None of these proposals, however, takes into account the different power profiles that new power-aware networking equipment exhibits, and they may even cause more harm than good when these profiles are super-linear, as the increased power consumption caused by traffic aggregation can surpass any power savings obtained by the reduced consumption of the powered-down resources. | {
"cite_N": [
"@cite_30",
"@cite_26",
"@cite_25",
"@cite_16"
],
"mid": [
"2022055436",
"2094542421",
"2037133245",
"2078426348"
],
"abstract": [
"According to recent research, the current Internet wastes energy due to an un-optimized network design, which does not consider the energy consumption of network elements such as routers and switches. Looking toward energy saving networks, a generalized problem called the energy consumption minimized network (EMN) had been proposed. However, due to the NP-completeness of this problem, it requires a considerable amount of time to obtain the solution, making it practically intractable for large-scale networks. In this paper, we re-formulate the NP-complete EMN problem into a simpler one using a newly defined concept called 'traffic centrality'. We then propose a new ant colony-based self-adaptive energy saving routing scheme, referred to as A-ESR, which exploits the ant colony optimization (ACO) method to make the Internet more energy efficient. The proposed A-ESR algorithm heuristically solves the re-formulated problem without any supervised control by allowing the incoming flows to be autonomously aggregated on specific heavily-loaded links and switching off the other lightly-loaded links. Additionally, the A-ESR algorithm adjusts the energy consumption by tuning the aggregation parameter s, which can dramatically reduce the energy consumption during nighttime hours (at the expense of tolerable network delay performance). Another promising capability of this algorithm is that it provides a high degree of self-organizing capabilities due to the amazing advantages of the swarm intelligence of artificial ants. The simulation results in real IP networks show that the proposed A-ESR algorithm performs better than previous algorithms in terms of its energy efficiency. The results also show that this efficiency can be adjusted by tuning s.",
"This paper deals with an energy saving routing solution, called Energy Saving IP Routing (ESIR), to be applied in an IP network. ESIR operation is integrated with Open Shortest Path First (OSPF) protocol and allows the selection of the links to be switched off so that the negative effects of the IP topology reconfiguration procedures are avoided. The basic mechanisms which ESIR is based on are the concepts of SPT exportation and move. These mechanisms allow to share a Shortest Path Tree (SPT) between neighbor routers, so that the overall set of active network links can be reduced. Properties of moves are defined and the energy saving problem in an IP network is formulated as the problem of finding the Maximum Set of Compatible Moves (MSCM). The MSCM problem is investigated in two steps: firstly, a relaxed version of the problem, named basic MSCM problem, is considered in which QoS requirements are neglected; in the second step, the solution of the full problem, named QoS-aware MSCM problem, is faced. We prove that the basic MSCM problem can be formulated as the well-known Maximum Clique Problem in a graph; instead the QoS-aware MSCM introduces a condition equivalent to the Knapsack problem. ILP formulations to solve both the problems are given and heuristics to solve them in practical cases are proposed. The performance evaluation shows that in a real ISP network scenario ESIR is able to switch off up to 30% of network links by exploiting over-provisioning adopted by operators in the network resource planning phase and typical daily traffic trend.",
"The inefficiency of energy usage on the Internet has become a critical problem with its rapid growth, as all network devices operate at full capacity in spite of the real traffic load. Existing studies try to develop energy efficient routings by aggregating traffic and switching underutilized devices into sleep mode. However, most existing approaches do not address the problem of routing convergence well. Since traffic changes frequently in a network, routing convergence may be triggered frequently for an energy efficient routing, which may induce routing loops and black holes, resulting in severe packet loss. In this paper, we present a fast rerouting-based (FRR-based) energy efficient routing scheme, namely GreenFRR, which leverages the technique of fast rerouting to reduce the convergence time. We first study typical fast rerouting techniques and address the challenge of guaranteeing loop-free routing. Then, we formalize the FRR-based energy efficient routing problem and prove that the problem is NP-hard. In order to solve this problem, we design heuristic algorithms to maximize the number of sleeping links. In particular, we consider link utilization ratio and path stretch in our algorithms. We evaluate our scheme by simulations on real and synthetic topologies with real and synthetic traffic traces. The results show that the power consumed by line cards achieves a saving of 40% and the convergence time can be reduced by 95%.",
"According to several studies, the power consumption of the Internet accounts for up to 10% of the worldwide energy consumption and is constantly increasing. The global consciousness on this problem has also grown, and several initiatives are being put into place to reduce the power consumption of the ICT sector in general. In this paper, we face the problem of minimizing power consumption for Internet service provider (ISP) networks. In particular, we propose and assess strategies to concentrate network traffic on a minimal subset of network resources. Given a telecommunication infrastructure, our aim is to turn off network nodes and links while still guaranteeing full connectivity and maximum link utilization constraints. We first derive a simple and complete formulation, which results into an NP-hard problem that can be solved only for trivial cases. We then derive more complex formulations that can scale up to middle-sized networks. Finally, we provide efficient heuristics that can be used for large networks. We test the effectiveness of our algorithms on both real and synthetic topologies, considering the daily fluctuations of Internet traffic and different classes of users. Results show that the power savings can be significant, e.g., larger than 35%."
]
} |
1509.01094 | 2279275868 | Concerns over the rising energy consumption of IT infrastructure have spurred the development of more power-efficient networking equipment and algorithms. When old equipment just drew an almost constant amount of power regardless of the traffic load, there were some efforts to minimize the total energy usage by modifying routing decisions to aggregate traffic in a minimal set of links, creating the opportunity to power off some unused equipment during low-traffic periods. New equipment, with power profile functions depending on the offered load, presents new challenges for optimal routing. The goal now is not just to power some links down, but to aggregate and/or spread the traffic so that devices operate in their sweet spot with regard to network usage. In this paper we present an algorithm that, making use of the ant colonization algorithm, computes, in a decentralized manner, the routing tables so as to minimize global energy consumption. Moreover, the resulting algorithm is also able to track changes in the offered load and react to them in real time. Highlights: New network links show load-dependent energy consumption. Energy-saving routing algorithm for load-dependent links. Simply powering down links can increase energy consumption. Obtained power savings in the 10-20% interval for real networks. | New proposals that take into account the different power profiles are also known in the literature. For instance, @cite_10 considers super-linear energy cost functions in the analysis of the maximum power savings attainable by powering down part of the network. In @cite_5 the authors formulate a minimization problem considering networks formed by IEEE 802.3az links. Similarly, the authors of @cite_8 address a similar problem and compare the results obtained with both super- and sub-linear power profiles. The same problem is also studied in @cite_22 @cite_6 , this time considering bundle links between adjacent routers.
The authors of @cite_6 find that traffic consolidation does not always lead to energy savings. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_6",
"@cite_5",
"@cite_10"
],
"mid": [
"2104463108",
"2131414071",
"2031127175",
"1526016459",
"2121291271"
],
"abstract": [
"The paper copes with the reduction of network power consumption by the definition of new routing algorithms, able to take into account the energy consumed by the network devices. In particular, based on the power consumption characterization of the network devices obtained using the Energy Profile (EP) concept, the paper presents the analysis of the exact solution of the Energy Aware Routing (EAR) problem solved with a Mixed Integer Programming solver. The analysis is aimed at evaluating the impact on the performance of three relevant aspects of the problem: the approximation of the actual EP, the traffic load and the topology of the network. Furthermore, the paper proposes a heuristic solution of the EAR, denoted as Dijkstra-based Power Aware Routing Algorithm (DPRA), defined in order to cope with the complexity of the exact solution.",
"Energy efficient communication devices are essential to minimize the operational cost of future networks and to reduce the negative effects of global warming. In this paper we propose a novel energy reduction approach on network level that takes load-dependent energy consumption information of communication equipment into account. Case study calculation results show that energy savings of more than 35%, and with it operational cost, can be achieved by applying energy profile aware routing.",
"In this paper we study the behavior of a general optimization model for reducing the power consumption of core networks employing energy-aware network equipment. Specifically, we assess how the energy profiles of the devices affect the outcome of the optimization model and hence determine the general power saving policy. The computational analysis performed on several real topologies shows that the widespread traffic consolidation strategy does not always provide the best results. In fact, for devices presenting a cubic (convex in general) energy profile, the highest energy savings are achieved by spreading the traffic on the network.",
"Energy Efficient Ethernet, as defined by the IEEE 802.3az standard, has been shown not to be as efficient as originally expected, given the large values of the transition times between the active and sleep power modes. In fact, EEE performs nearly optimal only when the link load is either very low or very high, but never at medium loads. So, in order to achieve large power savings, it is necessary to design a flow allocation algorithm that allocates traffic demands so as to avoid medium traffic loads on links, since these are far from optimal. This work defines EEE-FA, an energy-aware flow allocation algorithm that computes the best possible route in terms of energy consumption for a given network load condition. Essentially, EEE-FA computes the K-shortest paths for a given traffic demand and evaluates the consumption impact of allocating the traffic demand on each of them, in order to further select that route with minimum energy consumption impact for a given network status. This algorithm is compared with shortest path routing and it is shown that important energy savings may be achieved, however at the expense of increasing the global network traffic load and the average number of hops per demand as a consequence of using sub-optimal (in terms of distance) routes.",
"Nowadays two main approaches are being pursued to reduce energy consumption of networks: the use of sleep modes in which devices enter a low-power state during inactivity periods, and the adoption of energy proportional mechanisms where the device architecture is designed to make energy consumption proportional to the actual load. Common to all the proposals is the evaluation of energy saving performance by means of simulation or experimental evidence, which typically consider a limited set of benchmarking scenarios. In this paper, we do not focus on a particular algorithm or procedure to offer energy saving capabilities in networks, but rather we formulate a theoretical model based on random graph theory that allows to estimate the potential gains achievable by adopting sleep modes in networks where energy proportional devices are deployed. Intuitively, when some devices enter sleep modes some energy is saved. However, this saving could vanish because of the additional load (and power consumption) induced onto the active devices. The impact of this effect changes based on the degree of load proportionality. As such, it is not simple to foresee which are the scenarios that make sleep mode or energy proportionality more convenient. Instead of conducting detailed simulations, we consider simple models of networks in which devices (i.e., nodes and links) consume energy proportionally to the handled traffic, and in which a given fraction of nodes are put into sleep mode. Our model allows to predict how much energy can be saved in different scenarios. The results show that sleep modes can be successfully combined with load proportional solutions. However, if the static power consumption component is one order of magnitude less than the load proportional component, then sleep modes become not convenient anymore. Thanks to random graph theory, our model gauges the impact of different properties of the network topology. For instance, highly connected networks tend to make the use of sleep modes more convenient."
]
} |
1509.00511 | 2103117377 | Pinboard on Pinterest is an emerging media to engage online social media users, on which users post online images for specific topics. Regardless of its significance, there is little previous work specifically to facilitate information discovery based on pinboards. This paper proposes a novel pinboard recommendation system for Twitter users. In order to associate contents from the two social media platforms, we propose to use MultiLabel classification to map Twitter user followees to pinboard topics and visual diversification to recommend pinboards given user interested topics. A preliminary experiment on a dataset with 2000 users validated our proposed system. | Cross-network collaboration aims to merge social signals from different networks to increase online social media platform engagement. For example, in @cite_4 , Yan et al. proposed to identify the best Twitter accounts to promote YouTube videos, by mining the associations between topics learned from user tweets and their favorite YouTube videos. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2020120010"
],
"abstract": [
"We introduce a novel cross-network collaborative problem in this work: given YouTube videos, to find optimal Twitter followees that can maximize the video promotion on Twitter. Since YouTube videos and Twitter followees distribute on heterogeneous spaces, we present a cross-network association-based solution framework. Three stages are addressed: (1) heterogeneous topic modeling, where YouTube videos and Twitter followees are modeled in topic level; (2) cross-network topic association, where the overlapped users are exploited to conduct cross-network topic distribution transfer; and (3) referrer identification, where the query YouTube video and candidate Twitter followees are matched in the same topic space. Different methods in each stage are designed and compared by qualitative as well as quantitative experiments. Based on the proposed framework, we also discuss the potential applications, extensions, and suggest some principles for future heterogeneous social media utilization and cross-network collaborative applications."
]
} |
1509.00511 | 2103117377 | Pinboard on Pinterest is an emerging media to engage online social media users, on which users post online images for specific topics. Regardless of its significance, there is little previous work specifically to facilitate information discovery based on pinboards. This paper proposes a novel pinboard recommendation system for Twitter users. In order to associate contents from the two social media platforms, we propose to use MultiLabel classification to map Twitter user followees to pinboard topics and visual diversification to recommend pinboards given user interested topics. A preliminary experiment on a dataset with 2000 users validated our proposed system. | User profiling is the foundation of personalized services, such as personalized recommendation, search engine reranking and advertisement targeting. One component of our Pinterest board recommendation system is based on Pinterest board ontology mapping, which is inspired by @cite_8 . In this work, Geng et al. proposed a multi-task CNN to map Pinterest images to a fashion ontology, and the classification results were stored as user profiles for image recommendation. We adapt the idea that user interests can be represented by multiple nodes on an ontology and further extend the ontology domain to 20 categories, including "Fashion", "Food" and "Wedding". In addition, instead of performing single-label classification on images and aggregating the classification results to form a user profile, we perform hierarchical multi-label classification on aggregated user tweets to infer the user interests directly. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1995343386"
],
"abstract": [
"Social Curation Service (SCS) is a new type of emerging social media platform, where users can select, organize and keep track of multimedia contents they like. In this paper, we take advantage of this great opportunity and target at the very starting point in social media: user profiling, which supports fundamental applications such as personalized search and recommendation. As compared to other profiling methods in conventional Social Network Services (SNS), our work benefits from the two distinguishable characteristics of SCS: a) organized multimedia user-generated contents, and b) content-centric social network. Based on these two characteristics, we are able to deploy the state-of-the-art multimedia analysis techniques to establish content-based user profiles by extracting user preferences and their social relations. First, we automatically construct a content-based user preference ontology and learn the ontological models to generate comprehensive user profiles. In particular, we propose a new deep learning strategy called multi-task convolutional neural network (mtCNN) to learn profile models and profile-related visual features simultaneously. Second, we propose to model the multi-level social relations offered by SCS to refine the user profiles in a low-rank recovery framework. To the best of our knowledge, our work is the first that explores how social curation can help in content-based social media technologies, taking user profiling as an example. Extensive experiments on 1,293 users and 1.5 million images collected from Pinterest in fashion domain demonstrate that recommendation methods based on the proposed user profiles are considerably more effective than other state-of-the-art recommendation strategies."
]
} |
1509.00511 | 2103117377 | Pinboard on Pinterest is an emerging media to engage online social media users, on which users post online images for specific topics. Regardless of its significance, there is little previous work specifically to facilitate information discovery based on pinboards. This paper proposes a novel pinboard recommendation system for Twitter users. In order to associate contents from the two social media platforms, we propose to use MultiLabel classification to map Twitter user followees to pinboard topics and visual diversification to recommend pinboards given user interested topics. A preliminary experiment on a dataset with 2000 users validated our proposed system. | Hierarchical multi-label classification is the problem of classifying data instances into multiple labels or attributes, in which the labels are structured in a hierarchical taxonomy. @cite_2 proposed a hierarchical multi-label system to classify short texts (e.g., tweets). In this work, Ren et al. proposed to use text expansion, e.g., entity linking, to deal with the shortness and concept-drift problems in short text classification. Our Twitter user modeling is also based on hierarchical multi-label classification. However, instead of classifying single tweets, we deal with the entire timeline of users, so that each user is modeled by a large number of tweets. In this paper, we adapt Randomized Labelsets @cite_6 to efficiently model the hierarchical dependency automatically. | {
"cite_N": [
"@cite_6",
"@cite_2"
],
"mid": [
"1953606363",
"1975719446"
],
"abstract": [
"This paper proposes an ensemble method for multilabel classification. The RAndom k-labELsets (RAKEL) algorithm constructs each member of the ensemble by considering a small random subset of labels and learning a single-label classifier for the prediction of each element in the powerset of this subset. In this way, the proposed algorithm aims to take into account label correlations using single-label classifiers that are applied on subtasks with manageable number of labels and adequate number of examples per label. Experimental results on common multilabel domains involving protein, document and scene classification show that better performance can be achieved compared to popular multilabel classification approaches.",
"Hierarchical multi-label classification assigns a document to multiple hierarchical classes. In this paper we focus on hierarchical multi-label classification of social text streams. Concept drift, complicated relations among classes, and the limited length of documents in social text streams make this a challenging problem. Our approach includes three core ingredients: short document expansion, time-aware topic tracking, and chunk-based structural learning. We extend each short document in social text streams to a more comprehensive representation via state-of-the-art entity linking and sentence ranking strategies. From documents extended in this manner, we infer dynamic probabilistic distributions over topics by dividing topics into dynamic \"global\" topics and \"local\" topics. For the third and final phase we propose a chunk-based structural optimization strategy to classify each document into multiple classes. Extensive experiments conducted on a large real-world dataset show the effectiveness of our proposed method for hierarchical multi-label classification of social text streams."
]
} |
1509.00643 | 2949542761 | Recently security researchers have started to look into automated generation of attack trees from socio-technical system models. The obvious next step in this trend of automated risk analysis is automating the selection of security controls to treat the detected threats. However, the existing socio-technical models are too abstract to represent all security controls recommended by practitioners and standards. In this paper we propose an attack-defence model, consisting of a set of attack-defence bundles, to be generated and maintained with the socio-technical model. The attack-defence bundles can be used to synthesise attack-defence trees directly from the model to offer basic attack-defence analysis, but also they can be used to select and maintain the security controls that cannot be handled by the model itself. | The question of attack tree generation from system models has been tackled in @cite_7 . Similarly, @cite_9 and @cite_16 worked on generating attack models from a system model. While we follow the same approach for the attacker's view, our main focus is on keeping both the attacker's and the defender's views consistent with the main socio-technical model. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_7"
],
"mid": [
"",
"2110908300",
"1848531833"
],
"abstract": [
"",
"Attack graphs are important tools for analyzing security vulnerabilities in enterprise networks. Previous work on attack graphs has not provided an account of the scalability of the graph generating process, and there is often a lack of logical formalism in the representation of attack graphs, which results in the attack graph being difficult to use and understand by human beings. Pioneer work by Sheyner, et al is the first attack-graph tool based on formal logical techniques, namely model-checking. However, when applied to moderate-sized networks, Sheyner's tool encountered a significant exponential explosion problem. This paper describes a new approach to represent and generate attack graphs. We propose logical attack graphs, which directly illustrate logical dependencies among attack goals and configuration information. A logical attack graph always has size polynomial to the network being analyzed. Our attack graph generation tool builds upon MulVAL, a network security analyzer based on logical programming. We demonstrate how to produce a derivation trace in the MulVAL logic-programming engine, and how to use the trace to generate a logical attack graph in quadratic time. We show experimental evidence that our logical attack graph generation algorithm is very efficient. We have generated logical attack graphs for fully connected networks of 1000 machines using a Pentium 4 CPU with 1GB of RAM.",
"Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations that helps in structuring attack identification and can integrate physical, virtual, and social components. These models form a solid basis for guiding the manual identification of attack scenarios. Their main benefit, however, is in the analytic generation of attacks. In this work we present a systematic approach to transforming graphical system models to graphical attack models in the form of attack trees. Based on an asset in the model, our transformations result in an attack tree that represents attacks by all possible actors in the model, after which the actor in question has obtained the asset."
]
} |
1509.00643 | 2949542761 | Recently security researchers have started to look into automated generation of attack trees from socio-technical system models. The obvious next step in this trend of automated risk analysis is automating the selection of security controls to treat the detected threats. However, the existing socio-technical models are too abstract to represent all security controls recommended by practitioners and standards. In this paper we propose an attack-defence model, consisting of a set of attack-defence bundles, to be generated and maintained with the socio-technical model. The attack-defence bundles can be used to synthesise attack-defence trees directly from the model to offer basic attack-defence analysis, but also they can be used to select and maintain the security controls that cannot be handled by the model itself. | In @cite_5 the authors work on directly applying model checking to a socio-technical model in order to evaluate some reachability-based security properties. | {
"cite_N": [
"@cite_5"
],
"mid": [
"2041582099"
],
"abstract": [
"Recent initiatives that evaluate the security of physical systems with objects as assets and people as agents - here called socio-technical physical systems - have limitations: their agent behavior is too simple, they just estimate feasibility and not the likelihood of attacks, or they do estimate likelihood but on explicitly provided attacks only. We propose a model that can detect and quantify attacks. It has a rich set of agent actions with associated probability and cost. We also propose a threat model, an intruder that can misbehave and that competes with honest agents. The intruder's actions have an associated cost and are constrained to be realistic. We map our model to a probabilistic symbolic model checker and we express templates of security properties in the Probabilistic Computation Tree Logic, thus supporting automatic analysis of security properties. A use case shows the effectiveness of our approach."
]
} |
1509.00519 | 1779483307 | The variational autoencoder (VAE; Kingma, Welling (2014)) is a recently proposed generative model pairing a top-down generative network with a bottom-up recognition network which approximates posterior inference. It typically makes strong assumptions about posterior inference, for instance that the posterior distribution is approximately factorial, and that its parameters can be approximated with nonlinear regression from the observations. As we show empirically, the VAE objective can lead to overly simplified representations which fail to use the network's entire modeling capacity. We present the importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting. In the IWAE, the recognition network uses multiple samples to approximate the posterior, giving it increased flexibility to model complex posteriors which do not fit the VAE modeling assumptions. We show empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks. | Other researchers have derived log-probability lower bounds by way of importance sampling. Some avoided recognition networks entirely, instead performing inference using importance sampling from the prior, while others presented a variety of graphical model inference algorithms based on importance weighting. Reweighted wake-sleep (RWS) of @cite_2 is another recognition network approach which combines the original wake-sleep algorithm with updates to the generative network equivalent to gradient ascent on our bound @math . However, its authors interpret this update as following a biased estimate of @math , whereas we interpret it as following an unbiased estimate of @math . The IWAE also differs from RWS in that the generative and recognition networks are trained to maximize a single objective, @math .
By contrast, the @math -wake and sleep steps of RWS do not appear to be related to @math . Finally, the IWAE differs from RWS in that it makes use of the reparameterization trick. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2963848397"
],
"abstract": [
"Markov random fields (MRFs) are difficult to evaluate as generative models because computing the test log-probabilities requires the intractable partition function. Annealed importance sampling (AIS) is widely used to estimate MRF partition functions, and often yields quite accurate results. However, AIS is prone to overestimate the log-likelihood with little indication that anything is wrong. We present the Reverse AIS Estimator (RAISE), a stochastic lower bound on the log-likelihood of an approximation to the original MRF model. RAISE requires only the same MCMC transition operators as standard AIS. Experimental results indicate that RAISE agrees closely with AIS log-probability estimates for RBMs, DBMs, and DBNs, but typically errs on the side of underestimating, rather than overestimating, the log-likelihood."
]
} |
1509.00773 | 2204311365 | Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents Big Data analyzer (BiDAl), a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. 
| With the public availability of the two cluster traces @cite_24 generated by the Borg system at Google @cite_29 , numerous analyses of different aspects of the data have been reported. These provide general statistics about the workload and node state for such clusters @cite_25 @cite_19 @cite_28 and identify high levels of heterogeneity and dynamicity of the system, especially in comparison to grid workloads @cite_0 . Heterogeneity at user level -- large variations between workload submitted by the different users -- is also observed @cite_20 . Prediction is attempted for job @cite_15 and machine @cite_1 failures and also for host load @cite_36 . However, no unified tool for studying the different traces was introduced. is one of the first such tools facilitating Big Data analysis of trace data, which underlines similar properties of the public Google traces as the previous studies. Other traces have been analyzed in the past @cite_11 @cite_30 @cite_32 , but again without a general-purpose tool available for further study. | {
"cite_N": [
"@cite_30",
"@cite_28",
"@cite_36",
"@cite_29",
"@cite_1",
"@cite_32",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_15",
"@cite_25",
"@cite_20",
"@cite_11"
],
"mid": [
"2163291889",
"",
"2075168211",
"2141992894",
"2963330992",
"228898923",
"",
"2129542763",
"2136510202",
"2007286423",
"2060331550",
"2034467200",
"2132353061"
],
"abstract": [
"MapReduce systems face enormous challenges due to increasing growth, diversity, and consolidation of the data and computation involved. Provisioning, configuring, and managing large-scale MapReduce clusters require realistic, workload-specific performance insights that existing MapReduce benchmarks are ill-equipped to supply. In this paper, we build the case for going beyond benchmarks for MapReduce performance evaluations. We analyze and compare two production MapReduce traces to develop a vocabulary for describing MapReduce workloads. We show that existing benchmarks fail to capture rich workload characteristics observed in traces, and propose a framework to synthesize and execute representative workloads. We demonstrate that performance evaluations using realistic workloads gives cluster operator new ways to identify workload-specific resource bottlenecks, and workload-specific choice of MapReduce task schedulers. We expect that once available, workload suites would allow cluster operators to accomplish previously challenging tasks beyond what we can now imagine, thus serving as a useful tool to help design and manage MapReduce systems.",
"",
"Prediction of host load in Cloud systems is critical for achieving service-level agreements. However, accurate prediction of host load in Clouds is extremely challenging because it fluctuates drastically at small timescales. We design a prediction method based on Bayes model to predict the mean load over a long-term time interval, as well as the mean load in consecutive future time intervals. We identify novel predictive features of host load that capture the expectation, predictability, trends and patterns of host load. We also determine the most effective combinations of these features for prediction. We evaluate our method using a detailed one-month trace of a Google data center with thousands of machines. Experiments show that the Bayes method achieves high accuracy with a mean squared error of 0.0014. Moreover, the Bayes method improves the load prediction accuracy by 5.6%--50% compared to other state-of-the-art methods based on moving averages, auto-regression, and/or noise filters.",
"Google's Borg system is a cluster manager that runs hundreds of thousands of jobs, from many thousands of different applications, across a number of clusters each with up to tens of thousands of machines. It achieves high utilization by combining admission control, efficient task-packing, over-commitment, and machine sharing with process-level performance isolation. It supports high-availability applications with runtime features that minimize fault-recovery time, and scheduling policies that reduce the probability of correlated failures. Borg simplifies life for its users by offering a declarative job specification language, name service integration, real-time job monitoring, and tools to analyze and simulate system behavior. We present a summary of the Borg system architecture and features, important design decisions, a quantitative analysis of some of its policy decisions, and a qualitative examination of lessons learned from a decade of operational experience with it.",
"Continued reliance on human operators for managing data centers is a major impediment for them from ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers each trained on these features, to predict if machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5%, we can achieve true positive rates between 27% and 88% with precision varying between 50% and 72%. We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for BigQuery and classification analyses are publicly available from the authors' website.",
"In this paper, we analyze seven MapReduce workload traces from production clusters at Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Cumulatively, these traces comprise over a year's worth of data logged from over 5000 machines, and contain over two million jobs that perform 1.6 exabytes of I/O. Key observations include input data forms up to 77% of all bytes, 90% of jobs access KB to GB sized files that make up less than 16% of stored bytes, up to 60% of jobs re-access data that has been touched within the past 6 hours, peak-to-median job submission rates are 9:1 or greater, an average of 68% of all compute time is spent in map, task-seconds-per-byte is a key metric for balancing compute and data bandwidth, task durations range from seconds to hours, and five out of seven workloads contain map-only jobs. We have also deployed a public workload repository with workload replay tools so that researchers can systematically assess design priorities and compare performance across diverse MapReduce workloads.",
"",
"To better understand the challenges in developing effective cloud-based resource schedulers, we analyze the first publicly available trace data from a sizable multi-purpose cluster. The most notable workload characteristic is heterogeneity: in resource types (e.g., cores:RAM per machine) and their usage (e.g., duration and resources needed). Such heterogeneity reduces the effectiveness of traditional slot- and core-based scheduling. Furthermore, some tasks are constrained as to the kind of machine types they can use, increasing the complexity of resource assignment and complicating task migration. The workload is also highly dynamic, varying over time and most workload features, and is driven by many short jobs that demand quick scheduling decisions. While few simplifying assumptions apply, we find that many longer-running jobs have relatively stable resource utilizations, which can help adaptive resource schedulers.",
"A new era of Cloud Computing has emerged, but the characteristics of Cloud load in data centers is not perfectly clear. Yet this characterization is critical for the design of novel Cloud job and resource management systems. In this paper, we comprehensively characterize the job/task load and host load in a real-world production data center at Google Inc. We use a detailed trace of over 25 million tasks across over 12,500 hosts. We study the differences between a Google data center and other Grid/HPC systems, from the perspective of both workload (w.r.t. jobs and tasks) and host load (w.r.t. machines). In particular, we study the job length, job submission frequency, and the resource utilization of jobs in the different systems, and also investigate valuable statistics of machine's maximum load, queue state and relative usage levels, with different job priorities and resource attributes. We find that the Google data center exhibits finer resource allocation with respect to CPU and memory than that of Grid/HPC systems. Google jobs are always submitted with much higher frequency and they are much shorter than Grid jobs. As such, Google host load exhibits higher variance and noise.",
"Cloud computing has become increasingly popular by obviating the need for users to own and maintain complex computing infrastructures. However, due to their inherent complexity and large scale, production cloud computing systems are prone to various runtime problems caused by hardware and software faults and environmental factors. Autonomic anomaly detection is a crucial technique for understanding emergent, cloud-wide phenomena and self-managing cloud resources for system-level dependability assurance. To detect anomalous cloud behaviors, we need to monitor the cloud execution and collect runtime cloud performance data. These data consist of values of performance metrics for different types of failures, which display different correlations with the performance metrics. In this paper, we present an adaptive anomaly identification mechanism that explores the most relevant principal components of different failure types in cloud computing infrastructures. It integrates the cloud performance metric analysis with filtering techniques to achieve automated, efficient, and accurate anomaly identification. The proposed mechanism adapts itself by recursively learning from the newly verified detection results to refine future detections. We have implemented a prototype of the anomaly identification system and conducted experiments in an on-campus cloud computing environment and by using the Google data center traces. Our experimental results show that our mechanism can achieve more efficient and accurate anomaly detection than other existing schemes.",
"Cloud computing offers high scalability, flexibility and cost-effectiveness to meet emerging computing requirements. Understanding the characteristics of real workloads on a large production cloud cluster benefits not only cloud service providers but also researchers and daily users. This paper studies a large-scale Google cluster usage trace dataset and characterizes how the machines in the cluster are managed and the workloads submitted during a 29-day period behave. We focus on the frequency and pattern of machine maintenance events, job- and task-level workload behavior, and how the overall cluster resources are utilized.",
"In the era of cloud computing, users encounter the challenging task of effectively composing and running their applications on the cloud. In an attempt to understand user behavior in constructing applications and interacting with typical cloud infrastructures, we analyzed a large utilization dataset of a Google cluster. In the present paper, we consider user behavior in composing applications from the perspective of topology, maximum requested computational resources, and workload type. We model user dynamic behavior around the user's session view. Mass-Count disparity metrics are used to investigate the characteristics of underlying statistical models and to characterize users into distinct groups according to their composition and behavioral classes and patterns. The present study reveals interesting insight into the heterogeneous structure of the Google cloud workload.",
"MapReduce is a programming paradigm for parallel processing that is increasingly being used for data-intensive applications in cloud computing environments. An understanding of the characteristics of workloads running in MapReduce environments benefits both the service providers in the cloud and users: the service provider can use this knowledge to make better scheduling decisions, while the user can learn what aspects of their jobs impact performance. This paper analyzes 10 months of MapReduce logs from the M45 supercomputing cluster which Yahoo! made freely available to select universities for academic research. We characterize resource utilization patterns, job patterns, and sources of failures. We use an instance-based learning technique that exploits temporal locality to predict job completion times from historical data and identify potential performance problems in our dataset."
]
} |
1509.00773 | 2204311365 | Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents Big Data analyzer (BiDAl), a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. | can be very useful in generating synthetic trace data. 
In general, synthesizing traces involves two phases: characterizing the process by analyzing historical data, and generating new data. The aforementioned Google traces and log data from other sources have been successfully used for workload characterization. In terms of resource usage, classes of jobs and their prevalence can be used to characterize workloads and generate new ones @cite_9 @cite_16 , or real usage patterns can be replaced by the average utilization @cite_7 . Placement constraints have also been synthesized using clustering for characterization @cite_4 . Our tool enables workload and cloud structure characterization through fitting of distributions that can be further used for trace synthesis. The analysis is not restricted to one particular aspect, but the flexibility of our tool allows the user to decide what phenomenon to characterize and then simulate. | {
"cite_N": [
"@cite_9",
"@cite_16",
"@cite_4",
"@cite_7"
],
"mid": [
"2111556044",
"2143492785",
"2028617807",
"2182419557"
],
"abstract": [
"The advent of cloud computing promises highly available, efficient, and flexible computing services for applications such as web search, email, voice over IP, and web search alerts. Our experience at Google is that realizing the promises of cloud computing requires an extremely scalable backend consisting of many large compute clusters that are shared by application tasks with diverse service level requirements for throughput, latency, and jitter. These considerations impact (a) capacity planning to determine which machine resources must grow and by how much and (b) task scheduling to achieve high machine utilization and to meet service level objectives. Both capacity planning and task scheduling require a good understanding of task resource consumption (e.g., CPU and memory usage). This in turn demands simple and accurate approaches to workload classification-determining how to form groups of tasks (workloads) with similar resource demands. One approach to workload classification is to make each task its own workload. However, this approach scales poorly since tens of thousands of tasks execute daily on Google compute clusters. Another approach to workload classification is to view all tasks as belonging to a single workload. Unfortunately, applying such a coarse-grain workload classification to the diversity of tasks running on Google compute clusters results in large variances in predicted resource consumptions. This paper describes an approach to workload classification and its application to the Google Cloud Backend, arguably the largest cloud backend on the planet. Our methodology for workload classification consists of: (1) identifying the workload dimensions; (2) constructing task classes using an off-the-shelf algorithm such as k-means; (3) determining the break points for qualitative coordinates within the workload dimensions; and (4) merging adjacent task classes to reduce the number of workloads. 
We use the foregoing, especially the notion of qualitative coordinates, to glean several insights about the Google Cloud Backend: (a) the duration of task executions is bimodal in that tasks either have a short duration or a long duration; (b) most tasks have short durations; and (c) most resources are consumed by a few tasks with long duration that have large demands for CPU and memory.",
"Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters ranging from cluster configuration, partitioning, networking characteristics, and the targeted applications' behavior. The design space, and the scale of the clusters, make it cumbersome and error-prone to test different cluster configurations using real setups. Thus, the community is increasingly relying on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on the accuracy and realistic nature of the workload traces employed. Unfortunately, few cloud workload traces are available (in the public domain). In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., from Google, and inferring lessons that can be used to design realistic cloud workloads as well as enable thorough quantitative studies of Hadoop design. Moreover, we leverage the lessons learned from the traces to undertake two case studies: (i) Evaluating Hadoop job schedulers, and (ii) Quantifying the impact of shared storage on Hadoop system performance.",
"Evaluating the performance of large compute clusters requires benchmarks with representative workloads. At Google, performance benchmarks are used to obtain performance metrics such as task scheduling delays and machine resource utilizations to assess changes in application codes, machine configurations, and scheduling algorithms. Existing approaches to workload characterization for high performance computing and grids focus on task resource requirements for CPU, memory, disk, I O, network, etc. Such resource requirements address how much resource is consumed by a task. However, in addition to resource requirements, Google workloads commonly include task placement constraints that determine which machine resources are consumed by tasks. Task placement constraints arise because of task dependencies such as those related to hardware architecture and kernel version. This paper develops methodologies for incorporating task placement constraints and machine properties into performance benchmarks of large compute clusters. Our studies of Google compute clusters show that constraints increase average task scheduling delays by a factor of 2 to 6, which often results in tens of minutes of additional task wait time. To understand why, we extend the concept of resource utilization to include constraints by introducing a new metric, the Utilization Multiplier (UM). UM is the ratio of the resource utilization seen by tasks with a constraint to the average utilization of the resource. UM provides a simple model of the performance impact of constraints in that task scheduling delays increase with UM. Last, we describe how to synthesize representative task constraints and machine properties, and how to incorporate this synthesis into existing performance benchmarks. 
Using synthetic task constraints and machine properties generated by our methodology, we accurately reproduce performance metrics for benchmarks of Google compute clusters with a discrepancy of only 13% in task scheduling delay and 5% in resource utilization.",
"The increase in scale and complexity of large compute clusters motivates a need for representative workload benchmarks to evaluate the performance impact of system changes, so as to assist in designing better scheduling algorithms and in carrying out management activities. To achieve this goal, it is necessary to construct workload characterizations from which realistic performance benchmarks can be created. In this paper, we focus on characterizing run-time task resource usage for CPU, memory and disk. The goal is to find an accurate characterization that can faithfully reproduce the performance of historical workload traces in terms of key performance metrics, such as task wait time and machine resource utilization. Through experiments using workload traces from Google production clusters, we find that simply using the mean of task usage can generate synthetic workload traces that accurately reproduce resource utilizations and task waiting time. This seemingly surprising result can be justified by the fact that resource usage for CPU, memory and disk are relatively stable over time for the majority of the tasks. Our work not only presents a simple technique for constructing realistic workload benchmarks, but also provides insights into understanding workload performance in production compute clusters."
]
} |
1509.00773 | 2204311365 | Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents Big Data analyzer (BiDAl), a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. | Traces (either synthetic or the exact events) can be used for validation of various workload management algorithms. 
The Google trace has been used recently in @cite_34 to evaluate consolidation strategies, in @cite_22 @cite_12 to validate over-committing (overbooking), in @cite_38 to perform provisioning for heterogeneous systems and in @cite_5 to investigate checkpointing algorithms. Again, data analysis is performed individually by the research groups and no specific tool was published. is very suitable for extending these analyses to synthetic traces, to evaluate algorithms beyond the exact timeline of the Google dataset. | {
"cite_N": [
"@cite_38",
"@cite_22",
"@cite_5",
"@cite_34",
"@cite_12"
],
"mid": [
"2072362295",
"2002566704",
"1966771895",
"2052179907",
"2031679758"
],
"abstract": [
"Data centers consume tremendous amounts of energy in terms of power distribution and cooling. Dynamic capacity provisioning is a promising approach for reducing energy consumption by dynamically adjusting the number of active machines to match resource demands. However, despite extensive studies of the problem, existing solutions have not fully considered the heterogeneity of both workload and machine hardware found in production environments. In particular, production data centers often comprise heterogeneous machines with different capacities and energy consumption characteristics. Meanwhile, the production cloud workloads typically consist of diverse applications with different priorities, performance and resource requirements. Failure to consider the heterogeneity of both machines and workloads will lead to both sub-optimal energy-savings and long scheduling delays, due to incompatibility between workload requirements and the resources offered by the provisioned machines. To address this limitation, we present Harmony, a Heterogeneity-Aware dynamic capacity provisioning scheme for cloud data centers. Specifically, we first use the K-means clustering algorithm to divide workload into distinct task classes with similar characteristics in terms of resource and performance requirements. Then we present a technique that dynamically adjusts the number of machines to minimize total energy consumption and scheduling delay. Simulations using traces from a Google compute cluster demonstrate Harmony can reduce energy by 28 percent compared to heterogeneity-oblivious solutions.",
"One of the key enablers of a cloud provider's competitiveness is the ability to over-commit shared infrastructure at ratios that are higher than those of other competitors, without compromising non-functional requirements, such as performance. A widely recognized impediment to achieving this goal is the so-called \"Virtual Machines sprawl\", a phenomenon referring to the situation when customers order Virtual Machines (VM) on the cloud, use them extensively and then leave them inactive for prolonged periods of time. Since a typical cloud provisioning system treats new VM provision requests according to the nominal virtual hardware specification, an often occurring situation is that the nominal resources of a cloud pool become exhausted fast while the physical hosts utilization remains low. We present a novel cloud resources scheduler called Pulsar that extends OpenStack Nova Filter Scheduler. The key design principle of Pulsar is adaptivity. It recognises that effective safely attainable over-commit ratio varies with time due to workloads' variability and dynamically adapts the effective over-commit ratio to these changes. We evaluate Pulsar via extensive simulations and demonstrate its performance on the actual OpenStack based testbed running popular workloads.",
"In this paper, we aim at optimizing fault-tolerance techniques based on a checkpoint/restart mechanism, in the context of cloud computing. Our contribution is three-fold. (1) We derive a fresh formula to compute the optimal number of checkpoints for cloud jobs with varied distributions of failure events. Our analysis is not only generic with no assumption on failure probability distribution, but also attractively simple to apply in practice. (2) We design an adaptive algorithm to optimize the impact of checkpointing regarding various costs like checkpoint/restart overhead. (3) We evaluate our optimized solution in a real cluster environment with hundreds of virtual machines and the Berkeley Lab Checkpoint/Restart tool. Task failure events are emulated via a production trace produced on a large-scale Google data center. Experiments confirm that our solution is fairly suitable for Google systems. Our optimized formula outperforms Young's formula by 3-10 percent, reducing wall-clock lengths by 50-100 seconds per job on average.",
"Cloud providers aim to provide computing services for a wide range of applications, such as web applications, emails, web searches, map reduce jobs. These applications are commonly scheduled to run on multi-purpose clusters that nowadays are becoming larger and more heterogeneous. A major challenge is to efficiently utilize the cluster's available resources, in particular to maximize the machines' utilization level while minimizing the applications' waiting time. We studied a publicly available trace from a large Google cluster (~12,000 machines) and observed that users generally request more resources than required for running their tasks, leading to low levels of utilization. In this paper, we propose a methodology for achieving an efficient utilization of the cluster's resources while providing the users with fast and reliable computing services. The methodology consists of three main modules: i) a prediction module that forecasts the maximum resource requirement of a task, ii) a scalable scheduling module that efficiently allocates tasks to machines, and iii) a monitoring module that tracks the levels of utilization of the machines and tasks. We present results that show that the impact of more accurate resource estimations for the scheduling of tasks can lead to an increase in the average utilization of the cluster, a reduction in the number of tasks being evicted, and a reduction in the tasks' waiting time.",
"Cloud service providers (CSPs) often overbook their resources with user applications despite having to maintain service-level agreements with their customers. Overbooking is attractive to CSPs because it helps to reduce power consumption in the data center by packing more user jobs in less number of resources while improving their profits. Overbooking becomes feasible because user applications tend to overestimate their resource requirements utilizing only a fraction of the allocated resources. Arbitrary resource overbooking ratios, however, may be detrimental to soft real-time applications, such as airline reservations or Netflix video streaming, which are increasingly hosted in the cloud. The changing dynamics of the cloud preclude an offline determination of overbooking ratios. To address these concerns, this paper presents iOverbook, which uses a machine learning approach to make systematic and online determination of overbooking ratios such that the quality of service needs of soft real-time systems can be met while still benefiting from overbooking. Specifically, iOverbook utilizes historic data of tasks and host machines in the cloud to extract their resource usage patterns and predict future resource usage along with the expected mean performance of host machines. To evaluate our approach, we have used a large usage trace made available by Google of one of its production data centers. In the context of the traces, our experiments show that iOverbook can help CSPs improve their resource utilization by an average of 12.5% and save 32% power in the data center."
]
} |
1509.00773 | 2204311365 | Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents Big Data analyzer (BiDAl), a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. | Recently, the Failure Trace Archive (FTA) has published a toolkit for analysis of failure trace data @cite_2 . 
This toolkit is implemented in Matlab and enables analysis of traces from the FTA repository, which consists of about 20 public traces. It is, to our knowledge, the only other tool for large-scale trace data analysis. However, the analysis is only possible if traces are stored in the FTA format in a relational database, and is only available for traces containing failure information. BiDAl, on the other hand, provides two different storage options, including HDFS, with transfer among them transparent to the user, and is available for any trace data, regardless of what process it describes. Additionally, usage of FTA on new data requires publication of the data in their repository, while BiDAl can also be used for sensitive data that cannot be made public. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2158197021"
],
"abstract": [
"With the increasing presence, scale, and complexity of distributed systems, resource failures are becoming an important and practical topic of computer science research. While numerous failure models and failure-aware algorithms exist, their comparison has been hampered by the lack of public failure data sets and data processing tools. To facilitate the design, validation, and comparison of fault-tolerant models and algorithms, we have created the Failure Trace Archive (FTA)-an online, public repository of failure traces collected from diverse parallel and distributed systems. In this work, we first describe the design of the archive, in particular of the standard FTA data format, and the design of a toolbox that facilitates automated analysis of trace data sets. We also discuss the use of the FTA for various current and future purposes. Second, after applying the toolbox to nine failure traces collected from distributed systems used in various application domains (e.g., HPC, Internet operation, and various online applications), we present a comparative analysis of failures in various distributed systems. Our analysis presents various statistical insights and typical statistical modeling results for the availability of individual resources in various distributed systems. The analysis results underline the need for public availability of trace data from different distributed systems. Last, we show how different interpretations of the meaning of failure data can result in different conclusions for failure modeling and job scheduling in distributed systems. Our results for different interpretations show evidence that there may be a need for further revisiting existing failure-aware algorithms, when applied for general rather than for domain-specific distributed systems."
]
} |
1509.00773 | 2204311365 | Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents Big Data analyzer (BiDAl), a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. 
| Although public tools for analysis of general trace data are scarce, several large corporations reported to have built in-house custom applications for analysis of logs. These are, in general, used for live monitoring of the system, and analyze in real time large amounts of data to provide visualization that help operators make administrative decisions. While Facebook use Scuba @cite_13 , mentioned before, Microsoft have developed the Autopilot system @cite_17 , which helps with the administration of their clusters. Autopilot has a component (Cockpit) that analyzes logs and provides real time statistics to operators. An example from Google is CPI2 @cite_37 which monitors Cycles per Instruction (CPI) for running tasks to determine job performance interference; this helps in deciding task migration or throttling to maintain high performance of production jobs. All these tools are, however, not open, apply only to data of the corresponding company and sometimes require very large computational resources (e.g., Scuba). Our aim in this paper is to provide an open research tool that can be used also by smaller research groups that have more limited resources. | {
"cite_N": [
"@cite_37",
"@cite_13",
"@cite_17"
],
"mid": [
"2093941454",
"2024463287",
"2001276096"
],
"abstract": [
"Performance isolation is a key challenge in cloud computing. Unfortunately, Linux has few defenses against performance interference in shared resources such as processor caches and memory buses, so applications in a cloud can experience unpredictable performance caused by other programs' behavior. Our solution, CPI2, uses cycles-per-instruction (CPI) data obtained by hardware performance counters to identify problems, select the likely perpetrators, and then optionally throttle them so that the victims can return to their expected behavior. It automatically learns normal and anomalous behaviors by aggregating data from multiple tasks in the same job. We have rolled out CPI2 to all of Google's shared compute clusters. The paper presents the analysis that lead us to that outcome, including both case studies and a large-scale evaluation of its ability to solve real production issues.",
"Facebook takes performance monitoring seriously. Performance issues can impact over one billion users so we track thousands of servers, hundreds of PB of daily network traffic, hundreds of daily code changes, and many other metrics. We require latencies of under a minute from events occuring (a client request on a phone, a bug report filed, a code change checked in) to graphs showing those events on developers' monitors. Scuba is the data management system Facebook uses for most real-time analysis. Scuba is a fast, scalable, distributed, in-memory database built at Facebook. It currently ingests millions of rows (events) per second and expires data at the same rate. Scuba stores data completely in memory on hundreds of servers each with 144 GB RAM. To process each query, Scuba aggregates data from all servers. Scuba processes almost a million queries per day. Scuba is used extensively for interactive, ad hoc, analysis queries that run in under a second over live data. In addition, Scuba is the workhorse behind Facebook's code regression analysis, bug report monitoring, ads revenue monitoring, and performance debugging.",
"Microsoft is rapidly increasing the number of large-scale web services that it operates. Services such as Windows Live Search and Windows Live Mail operate from data centers that contain tens or hundreds of thousands of computers, and it is essential that these data centers function reliably with minimal human intervention. This paper describes the first version of Autopilot, the automatic data center management infrastructure developed within Microsoft over the last few years. Autopilot is responsible for automating software provisioning and deployment; system monitoring; and carrying out repair actions to deal with faulty software and hardware. A key assumption underlying Autopilot is that the services built on it must be designed to be manageable. We also therefore outline the best practices adopted by applications that run on Autopilot."
]
} |
1509.00773 | 2204311365 | Current generation of Internet-based services are typically hosted on large data centers that take the form of warehouse-size structures housing tens of thousands of servers. Continued availability of a modern data center is the result of a complex orchestration among many internal and external actors including computing hardware, multiple layers of intricate software, networking and storage devices, electrical power and cooling plants. During the course of their operation, many of these components produce large amounts of data in the form of event and error logs that are essential not only for identifying and resolving problems but also for improving data center efficiency and management. Most of these activities would benefit significantly from data analytics techniques to exploit hidden statistical patterns and correlations that may be present in the data. The sheer volume of data to be analyzed makes uncovering these correlations and patterns a challenging task. This paper presents Big Data analyzer (BiDAl), a prototype Java tool for log-data analysis that incorporates several Big Data technologies in order to simplify the task of extracting information from data traces produced by large clusters and server farms. BiDAl provides the user with several analysis languages (SQL, R and Hadoop MapReduce) and storage backends (HDFS and SQLite) that can be freely mixed and matched so that a custom tool for a specific task can be easily constructed. BiDAl has a modular architecture so that it can be extended with other backends and analysis languages in the future. In this paper we present the design of BiDAl and describe our experience using it to analyze publicly-available traces from Google data clusters, with the goal of building a realistic model of a complex data center. | In terms of simulation, numerous modeling tools for computer systems have been introduced, ranging from queuing models to agent-based and other statistical models. 
The systems modeled range from clusters to grids, and more recently, to clouds and data centers @cite_26 . CloudSim is a recent discrete event simulator that allows simulation of virtualized environments @cite_8 . More specialized simulators such as MRPerf have been designed for MapReduce environments @cite_27 . In general, these simulators are used to analyze the behavior of different workload processing algorithms (e.g., schedulers) and different networking infrastructures. A comprehensive model is GDCSim (Green Data Centre Simulator), a very detailed simulator that takes into account computing equipment and its layout, data center physical structure (such as raised floors), resource management and cooling strategies @cite_31 . However the level of detail limits scalability of the system. Our simulator is more similar to the former examples and allows for large scale simulations of workload management (experiments with 12k nodes). | {
"cite_N": [
"@cite_27",
"@cite_26",
"@cite_31",
"@cite_8"
],
"mid": [
"2048554864",
"2088943669",
"1994995820",
"2045287414"
],
"abstract": [
"MapReduce has emerged as a model of choice for supporting modern data-intensive applications. The model is easy-to-use and promising in reducing time-to-solution. It is also a key enabler for cloud computing, which provides transparent and flexible access to a large number of compute, storage and networking resources. Setting up and operating a large MapReduce cluster entails careful evaluation of various design choices and run-time parameters to achieve high efficiency. However, this design space has not been explored in detail. In this paper, we adopt a simulation approach to systematically understanding the performance of MapReduce setups. The resulting simulator, MRPerf, captures such aspects of these setups as node, rack and network configurations, disk parameters and performance, data layout and application I O characteristics, among others, and uses this information to predict expected application performance. Specifically, we use MRPerf to explore the effect of several component inter-connect topologies, data locality, and software and hardware failures on overall application performance. MR-Perf allows us to quantify the effect of these factors, and thus can serve as a tool for optimizing existing MapReduce setups as well as designing new ones.",
"Cloud computing provides computing resources as a service over a network. As rapid application of this emerging technology in real world, it becomes more and more important how to evaluate the performance and security problems that cloud computing confronts. Currently, modeling and simulation technology has become a useful and powerful tool in cloud computing research community to deal with these issues. In this paper, to the best of our knowledge, we review the existing results on modeling and simulation of cloud computing. We start from reviewing the basic concepts of cloud computing and its security issues, and subsequently review the existing cloud computing simulators. Furthermore, we indicate that there exist two types of cloud computing simulators, that is, simulators just based on software and simulators based on both software and hardware. Finally, we analyze and compare features of the existing cloud computing simulators.",
"Energy consumption in data centers can be reduced by efficient design of the data centers and efficient management of computing resources and cooling units. A major obstacle in the analysis of data centers is the lack of a holistic simulator, where the impact of new computing resource (or cooling) management techniques can be tested with different designs (i.e., layouts and configurations) of data centers. To fill this gap, this paper proposes Green Data Center Simulator (GDCSim) for studying the energy efficiency of data centers under various data center geometries, workload characteristics, platform power management schemes, and scheduling algorithms. GDCSim is used to iteratively design green data centers. Further, it is validated against established CFD simulators. GDCSim is developed as a part of the BlueTool infrastructure project at Impact Lab.",
"Cloud computing is a recent advancement wherein IT infrastructure and applications are provided as ‘services’ to end-users under a usage-based payment model. It can leverage virtualized services even on the fly based on requirements (workload patterns and QoS) varying with time. The application services hosted under Cloud computing model have complex provisioning, composition, configuration, and deployment requirements. Evaluating the performance of Cloud provisioning policies, application workload models, and resources performance models in a repeatable manner under varying system and user configurations and requirements is difficult to achieve. To overcome this challenge, we propose CloudSim: an extensible simulation toolkit that enables modeling and simulation of Cloud computing systems and application provisioning environments. The CloudSim toolkit supports both system and behavior modeling of Cloud system components such as data centers, virtual machines (VMs) and resource provisioning policies. It implements generic application provisioning techniques that can be extended with ease and limited effort. Currently, it supports modeling and simulation of Cloud computing environments consisting of both single and inter-networked clouds (federation of clouds). Moreover, it exposes custom interfaces for implementing policies and provisioning techniques for allocation of VMs under inter-networked Cloud computing scenarios. Several researchers from organizations, such as HP Labs in U.S.A., are using CloudSim in their investigation on Cloud resource provisioning and energy-efficient management of data center resources. The usefulness of CloudSim is demonstrated by a case study involving dynamic provisioning of application services in the hybrid federated clouds environment. The result of this case study proves that the federated Cloud computing model significantly improves the application QoS requirements under fluctuating resource and service demand patterns. 
"
]
} |
1509.00190 | 1922742869 | The integration of Linked Open Data (LOD) content in Web pages is a challenging and sometimes tedious task for Web developers. At the same time, most software packages for blogs, content management systems (CMS), and shop applications support the consumption of feed formats, namely RSS and Atom. In this technical report, we demonstrate an on-line tool that fetches e-commerce data from a SPARQL endpoint and syndicates obtained results as RSS or Atom feeds. Our approach combines (1) the popularity and broad tooling support of existing feed formats, (2) the precision of queries against structured data built upon common Web vocabularies like schema.org, GoodRelations, FOAF, VCard, and WGS 84, and (3) the ease of integrating content from a large number of Web sites and other data sources in RDF in general. | We compared our work with existing approaches, namely (1) single feed definition dialogs, as offered by major sites (e.g. eBay and Amazon), and (2) feed aggregation services, i.e. Yahoo Pipes http://pipes.yahoo.com and DERI Pipes @cite_6 . The former approaches typically fail at integrating different data sources, whereas aggregation services are limited to filtering results by brittle regex-based expressions and lack simple unit conversion (cf. @cite_0 ). | {
"cite_N": [
"@cite_0",
"@cite_6"
],
"mid": [
"2169970536",
"2168236902"
],
"abstract": [
"For typical Web developers, it is complicated to integrate content from the Semantic Web to an existing Web site. On the contrary, most software packages for blogs, content management, and shop applications support the simple syndication of content from external sources via data feed formats, namely RSS and Atom. In this paper, we describe a novel technique for consuming useful data from the Semantic Web in the form of RSS or Atom feeds. Our approach combines (1) the simplicity and broad tooling support of existing feed formats, (2) the precision of queries against structured data built upon common Web vocabularies like schema.org, GoodRelations, FOAF, SIOC, or VCard, and (3) the ease of integrating content from a large number of Web sites and other data sources of RDF in general. We also (4) provide a pattern for embedding RDFa into the feed content in a \"viral\" way so that the original URIs of entities are included in all Web pages that republish the original content and that those pages will link back to the original content. This helps prevent the proliferation of identifiers for entities and provides a simple means for tracking the document URI at which particular content reappears.",
"The use of RDF data published on the Web for applications is still a cumbersome and resource-intensive task due to the limited software support and the lack of standard programming paradigms to deal with everyday problems such as combination of RDF data from different sources, object identifier consolidation, ontology alignment and mediation, or plain querying and filtering tasks. In this paper we present a framework, Semantic Web Pipes, that supports fast implementation of Semantic data mash-ups while preserving desirable properties such as abstraction, encapsulation, component-orientation, code re-usability and maintainability which are common and well supported in other application areas."
]
} |
1509.00144 | 2133297674 | A few works address the challenge of automating software diversification, and they all share one core idea: using automated test suites to drive diversification. However, there is a lack of solid understanding of how test suites, programs and transformations interact one with another in this process. We explore this intricate interplay in the context of a specific diversification technique called "sosiefication". Sosiefication generates sosie programs, i.e., variants of a program in which some statements are deleted, added or replaced but still pass the test suite of the original program. Our investigation of the influence of test suites on sosiefication exploits the following observation: test suites cover the different regions of programs in very unequal ways. Hence, we hypothesize that sosie synthesis has different performances on a statement that is covered by one hundred test cases and on a statement that is covered by a single test case. We synthesize 24583 sosies on 6 popular open-source Java programs. Our results show that there are two dimensions for diversification. The first one lies in the specification: the more test cases cover a statement, the more difficult it is to synthesize sosies. Yet, to our surprise, we are also able to synthesize sosies on highly tested statements (up to 600 test cases), which indicates an intrinsic property of the programs we study. The second dimension is in the code: we manually explore dozens of sosies and characterize new types of forgiving code regions that are prone to diversification. | Mutational robustness @cite_2 is the ability of software to resist mutations. The essential difference between the two works lies in the definition of program transformations: the authors of @cite_2 use only random operations, while we use a heuristic based on types and variable renaming. 
Also, while they say that software is robust to mutations, we say that we can synthesize diversity, and that this indicates the presence of true plasticity in the code. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1980199047"
],
"abstract": [
"Neutral landscapes and mutational robustness are believed to be important enablers of evolvability in biology. We apply these concepts to software, defining mutational robustness to be the fraction of random mutations to program code that leave a program's behavior unchanged. Test cases are used to measure program behavior and mutation operators are taken from earlier work on genetic programming. Although software is often viewed as brittle, with small changes leading to catastrophic changes in behavior, our results show surprising robustness in the face of random software mutations. The paper describes empirical studies of the mutational robustness of 22 programs, including 14 production software projects, the Siemens benchmarks, and four specially constructed programs. We find that over 30% of random mutations are neutral with respect to their test suite. The results hold across all classes of programs, for mutations at both the source code and assembly instruction levels, across various programming languages, and bear only a limited relation to test suite coverage. We conclude that mutational robustness is an inherent property of software, and that neutral variants (i.e., those that pass the test suite) often fulfill the program's original purpose or specification. Based on these results, we conjecture that neutral mutations can be leveraged as a mechanism for generating software diversity. We demonstrate this idea by generating a population of neutral program variants and showing that the variants automatically repair latent bugs. Neutral landscapes also provide a partial explanation for recent results that use evolutionary computation to automatically repair software bugs."
]
} |
1509.00144 | 2133297674 | A few works address the challenge of automating software diversification, and they all share one core idea: using automated test suites to drive diversification. However, there is a lack of solid understanding of how test suites, programs and transformations interact one with another in this process. We explore this intricate interplay in the context of a specific diversification technique called "sosiefication". Sosiefication generates sosie programs, i.e., variants of a program in which some statements are deleted, added or replaced but still pass the test suite of the original program. Our investigation of the influence of test suites on sosiefication exploits the following observation: test suites cover the different regions of programs in very unequal ways. Hence, we hypothesize that sosie synthesis has different performances on a statement that is covered by one hundred test cases and on a statement that is covered by a single test case. We synthesize 24583 sosies on 6 popular open-source Java programs. Our results show that there are two dimensions for diversification. The first one lies in the specification: the more test cases cover a statement, the more difficult it is to synthesize sosies. Yet, to our surprise, we are also able to synthesize sosies on highly tested statements (up to 600 test cases), which indicates an intrinsic property of the programs we study. The second dimension is in the code: we manually explore dozens of sosies and characterize new types of forgiving code regions that are prone to diversification. | The work of Langdon and Harman @cite_5 defines an iterative process of code transformations and testing in order to speed up program execution. Schulte and colleagues use a similar process to reduce energy consumption of embedded programs @cite_7 . 
Work in the area of genetic improvement of programs is related to ours, since it also relies on code transformations and test suites to automatically produce different versions of a program. Our analysis of statement execution signatures could also improve such approaches. | {
"cite_N": [
"@cite_5",
"@cite_7"
],
"mid": [
"1984074188",
"2069265488"
],
"abstract": [
"We show that the genetic improvement of programs (GIP) can scale by evolving increased performance in a widely-used and highly complex 50000 line system. Genetic improvement of software for multiple objective exploration (GISMOE) found code that is 70 times faster (on average) and yet is at least as good functionally. Indeed, it even gives a small semantic gain.",
"Modern compilers typically optimize for executable size and speed, rarely exploring non-functional properties such as power efficiency. These properties are often hardware-specific, time-intensive to optimize, and may not be amenable to standard dataflow optimizations. We present a general post-compilation approach called Genetic Optimization Algorithm (GOA), which targets measurable non-functional aspects of software execution in programs that compile to x86 assembly. GOA combines insights from profile-guided optimization, superoptimization, evolutionary computation and mutational robustness. GOA searches for program variants that retain required functional behavior while improving non-functional behavior, using characteristic workloads and predictive modeling to guide the search. The resulting optimizations are validated using physical performance measurements and a larger held-out test suite. Our experimental results on PARSEC benchmark programs show average energy reductions of 20 , both for a large AMD system and a small Intel system, while maintaining program functionality on target workloads."
]
} |
1509.00144 | 2133297674 | A few works address the challenge of automating software diversification, and they all share one core idea: using automated test suites to drive diversification. However, there is a lack of solid understanding of how test suites, programs and transformations interact one with another in this process. We explore this intricate interplay in the context of a specific diversification technique called "sosiefication". Sosiefication generates sosie programs, i.e., variants of a program in which some statements are deleted, added or replaced but still pass the test suite of the original program. Our investigation of the influence of test suites on sosiefication exploits the following observation: test suites cover the different regions of programs in very unequal ways. Hence, we hypothesize that sosie synthesis has different performances on a statement that is covered by one hundred test cases and on a statement that is covered by a single test case. We synthesize 24583 sosies on 6 popular open-source Java programs. Our results show that there are two dimensions for diversification. The first one lies in the specification: the more test cases cover a statement, the more difficult it is to synthesize sosies. Yet, to our surprise, we are also able to synthesize sosies on highly tested statements (up to 600 test cases), which indicates an intrinsic property of the programs we study. The second dimension is in the code: we manually explore dozens of sosies and characterize new types of forgiving code regions that are prone to diversification. | Our investigations of software plasticity at the edge of correctness tradeoffs directly relate to seminal works that advocate for novel ways of building software that is more approximate and evolvable, but also less brittle. In particular, our work is very much inspired by the work of Richard Gabriel @cite_11 , Gerald Sussman @cite_20 and Mary Shaw @cite_22 . 
They all warn against the desire to build perfectly correct systems, which can only be correct in very specific conditions and are consequently very brittle outside those conditions. They advocate for new approaches that would support the construction of software systems that have the ability to evolve and adapt, in exchange for certain tradeoffs with respect to correctness. We foresee our investigations into automatic diversification of application source code as a contribution towards the design of such new approaches. | {
"cite_N": [
"@cite_22",
"@cite_20",
"@cite_11"
],
"mid": [
"2033272857",
"2097058024",
""
],
"abstract": [
"Modern practical computing systems are much more complex than the simple programs on which we developed our models of dependability. These dependability models depend on precise specifications, but it is often impractical to obtain precise specifications of practical software-intensive systems. Furthermore, the criteria for acceptable behavior vary from time to time and from one user to another. When development methods are based on the classic models that assume precise specifications, the resulting systems are often brittle --- they are vulnerable to unexpected conditions and hard to tune to changing expectations. Practical systems would be better served by development models that recognize the variability and unpredictability of the environment in which the systems are used. Such development methods should pursue not the absolute criterion of correctness, but rather the goal of fitness for the intended task, or sufficient correctness. They should accommodate environmental unpredictability not only by reactive mechanisms, but also by design that produces resilience to environmental change, or homeostasis. In many cases, this resilience may be achievable by relaxing tolerances in the specifications, thereby enlarging the envelope of acceptable operation.",
"It is hard to build robust systems: systems that have acceptable behavior over a larger class of situations than was anticipated by their designers. The most robust systems are evolvable: they can be easily adapted to new situations with only minor modification. How can we design systems that are flexible in this way? Observations of biological systems tell us a great deal about how to make robust and evolvable systems. Techniques originally developed in support of symbolic Artificial Intelligence can be viewed as ways of enhancing robustness and evolvability in programs and other engineered systems. By contrast, common practice of computer science actively discourages the construction of robust systems.",
""
]
} |
1508.07647 | 2950325406 | Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically; in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata. | Our work falls in the broad area of image annotation and search @cite_44 . Harvesting images from the web to train visual classifiers without human annotation is an idea that has been explored many times in the past decade @cite_21 @cite_49 @cite_34 @cite_29 @cite_2 @cite_11 @cite_46 @cite_22 . Early work on image annotation used voting to transfer labels between visually similar images, often using simple nonparametric models @cite_41 @cite_30 . This strategy is well suited for multimodal data and large vocabularies of weak labels, but is very sensitive to the metric used to find visual neighbors. Extensions use learnable metrics and weighted voting schemes @cite_20 @cite_1 , or more carefully select the training images used for voting @cite_43 . Our method differs from this work because we do not transfer labels from the training set; instead we compute nearest-neighbors between images using metadata. | {
"cite_N": [
"@cite_30",
"@cite_22",
"@cite_41",
"@cite_29",
"@cite_21",
"@cite_1",
"@cite_44",
"@cite_43",
"@cite_49",
"@cite_2",
"@cite_46",
"@cite_34",
"@cite_20",
"@cite_11"
],
"mid": [
"2161258050",
"2950931866",
"1877469910",
"2107250100",
"2172191903",
"2146024151",
"1510632636",
"2096995231",
"",
"2127292559",
"2081613070",
"2018573225",
"2536305071",
"1964763677"
],
"abstract": [
"Social image analysis and retrieval is important for helping people organize and access the increasing amount of user tagged multimedia. Since user tagging is known to be uncontrolled, ambiguous, and overly personalized, a fundamental problem is how to interpret the relevance of a user-contributed tag with respect to the visual content the tag is describing. Intuitively, if different persons label visually similar images using the same tags, these tags are likely to reflect objective aspects of the visual content. Starting from this intuition, we propose in this paper a neighbor voting algorithm which accurately and efficiently learns tag relevance by accumulating votes from visual neighbors. Under a set of well-defined and realistic assumptions, we prove that our algorithm is a good tag relevance measurement for both image ranking and tag ranking. Three experiments on 3.5 million Flickr photos demonstrate the general applicability of our algorithm in both social image retrieval and image tag suggestion. Our tag relevance learning algorithm substantially improves upon baselines for all the experiments. The results suggest that the proposed algorithm is promising for real-world applications.",
"We present an approach to utilize large amounts of web data for learning CNNs. Specifically inspired by curriculum learning, we present a two-step approach for CNN training. First, we use easy images to train an initial visual representation. We then use this initial CNN and adapt it to harder, more realistic images by leveraging the structure of data and categories. We demonstrate that our two-stage CNN outperforms a fine-tuned CNN trained on ImageNet on Pascal VOC 2012. We also demonstrate the strength of webly supervised learning by localizing objects in web images and training a R-CNN style detector. It achieves the best performance on VOC 2007 where no VOC training data is used. Finally, we show our approach is quite robust to noise and performs comparably even when we use image search results from March 2013 (pre-CNN image search era).",
"Automatically assigning keywords to images is of great interest as it allows one to index, retrieve, and understand large collections of image data. Many techniques have been proposed for image annotation in the last decade that give reasonable performance on standard datasets. However, most of these works fail to compare their methods with simple baseline techniques to justify the need for complex models and subsequent training. In this work, we introduce a new baseline technique for image annotation that treats annotation as a retrieval problem. The proposed technique utilizes low-level image features and a simple combination of basic distances to find nearest neighbors of a given image. The keywords are then assigned using a greedy label transfer mechanism. The proposed baseline outperforms the current state-of-the-art methods on two standard and one large Web dataset. We believe that such a baseline measure will provide a strong platform to compare and better understand future annotation techniques.",
"Most current image categorization methods require large collections of manually annotated training examples to learn accurate visual recognition models. The time-consuming human labeling effort effectively limits these approaches to recognition problems involving a small number of different object classes. In order to address this shortcoming, in recent years several authors have proposed to learn object classifiers from weakly-labeled Internet images, such as photos retrieved by keyword-based image search engines. While this strategy eliminates the need for human supervision, the recognition accuracies of these methods are considerably lower than those obtained with fully-supervised approaches, because of the noisy nature of the labels associated to Web data. In this paper we investigate and compare methods that learn image classifiers by combining very few manually annotated examples (e.g., 1-10 images per class) and a large number of weakly-labeled Web photos retrieved using keyword-based image search. We cast this as a domain adaptation problem: given a few strongly-labeled examples in a target domain (the manually annotated examples) and many source domain examples (the weakly-labeled Web photos), learn classifiers yielding small generalization error on the target domain. Our experiments demonstrate that, for the same number of strongly-labeled examples, our domain adaptation approach produces significant recognition rate improvements over the best published results (e.g., 65% better when using 5 labeled training examples per class) and that our classifiers are one order of magnitude faster to learn and to evaluate than the best competing method, despite our use of large weakly-labeled data sets.",
"Current approaches to object category recognition require datasets of training images to be manually prepared, with varying degrees of supervision. We present an approach that can learn an object category from just its name, by utilizing the raw output of image search engines available on the Internet. We develop a new model, TSI-pLSA, which extends pLSA (as applied to visual words) to include spatial information in a translation and scale invariant manner. Our approach can handle the high intra-class variability and large proportion of unrelated images returned by search engines. We evaluate the models on standard test sets, showing performance competitive with existing methods trained on hand prepared datasets",
"Automatic image annotation aims at predicting a set of textual labels for an image that describe its semantics. These are usually taken from an annotation vocabulary of few hundred labels. Because of the large vocabulary, there is a high variance in the number of images corresponding to different labels (\"class-imbalance\"). Additionally, due to the limitations of manual annotation, a significant number of available images are not annotated with all the relevant labels (\"weak-labelling\"). These two issues badly affect the performance of most of the existing image annotation models. In this work, we propose 2PKNN, a two-step variant of the classical K-nearest neighbour algorithm, that addresses these two issues in the image annotation task. The first step of 2PKNN uses \"image-to-label\" similarities, while the second step uses \"image-to-image\" similarities; thus combining the benefits of both. Since the performance of nearest-neighbour based methods greatly depends on how features are compared, we also propose a metric learning framework over 2PKNN that learns weights for multiple features as well as distances together. This is done in a large margin set-up by generalizing a well-known (single-label) classification metric learning algorithm for multi-label prediction. For scalability, we implement it by alternating between stochastic sub-gradient descent and projection steps. Extensive experiments demonstrate that, though conceptually simple, 2PKNN alone performs comparable to the current state-of-the-art on three challenging image annotation datasets, and shows significant improvements after metric learning.",
"Where previous reviews on content-based image retrieval emphasize what can be seen in an image to bridge the semantic gap, this survey considers what people tag about an image. A comprehensive treatise of three closely linked problems (i.e., image tag assignment, refinement, and tag-based image retrieval) is presented. While existing works vary in terms of their targeted tasks and methodology, they rely on the key functionality of tag relevance, that is, estimating the relevance of a specific tag with respect to the visual content of a given image and its social context. By analyzing what information a specific method exploits to construct its tag relevance function and how such information is exploited, this article introduces a two-dimensional taxonomy to structure the growing literature, understand the ingredients of the main works, clarify their connections and difference, and recognize their merits and limitations. For a head-to-head comparison with the state of the art, a new experimental protocol is presented, with training sets containing 10,000, 100,000, and 1 million images, and an evaluation on three test sets, contributed by various research groups. Eleven representative works are implemented and evaluated. Putting all this together, the survey aims to provide an overview of the past and foster progress for the near future.",
"Lazy local learning methods train a classifier \"on the fly\" at test time, using only a subset of the training instances that are most relevant to the novel test example. The goal is to tailor the classifier to the properties of the data surrounding the test example. Existing methods assume that the instances most useful for building the local model are strictly those closest to the test example. However, this fails to account for the fact that the success of the resulting classifier depends on the full distribution of selected training instances. Rather than simply gathering the test example's nearest neighbors, we propose to predict the subset of training data that is jointly relevant to training its local model. We develop an approach to discover patterns between queries and their \"good\" neighborhoods using large-scale multi-label classification with compressed sensing. Given a novel test point, we estimate both the composition and size of the training subset likely to yield an accurate local model. We demonstrate the approach on image classification tasks on SUN and aPascal and show its advantages over traditional global and local approaches.",
"",
"We address the problem of large-scale annotation of web images. Our approach is based on the concept of visual synset, which is an organization of images which are visually-similar and semantically-related. Each visual synset represents a single prototypical visual concept, and has an associated set of weighted annotations. Linear SVM's are utilized to predict the visual synset membership for unseen image examples, and a weighted voting rule is used to construct a ranked list of predicted annotations from a set of visual synsets. We demonstrate that visual synsets lead to better performance than standard methods on a new annotation database containing more than 200 million im- ages and 300 thousand annotations, which is the largest ever reported",
"Recognition is graduating from labs to real-world applications. While it is encouraging to see its potential being tapped, it brings forth a fundamental challenge to the vision researcher: scalability. How can we learn a model for any concept that exhaustively covers all its appearance variations, while requiring minimal or no human supervision for compiling the vocabulary of visual variance, gathering the training images and annotations, and learning the models? In this paper, we introduce a fully-automated approach for learning extensive models for a wide range of variations (e.g. actions, interactions, attributes and beyond) within any concept. Our approach leverages vast resources of online books to discover the vocabulary of variance, and intertwines the data collection and modeling steps to alleviate the need for explicit human supervision in training the models. Our approach organizes the visual knowledge about a concept in a convenient and useful way, enabling a variety of applications across vision and NLP. Our online system has been queried by users to learn models for several interesting concepts including breakfast, Gandhi, beautiful, etc. To date, our system has models available for over 50, 000 variations within 150 concepts, and has annotated more than 10 million images with bounding boxes.",
"The explosion of the Internet provides us with a tremendous resource of images shared online. It also confronts vision researchers with the problem of finding effective methods to navigate the vast amount of visual information. Semantic image understanding plays a vital role towards solving this problem. One important task in image understanding is object recognition, in particular, generic object categorization. Critical to this problem are the issues of learning and dataset. Abundant data helps to train a robust recognition system, while a good object classifier can help to collect a large amount of images. This paper presents a novel object recognition algorithm that performs automatic dataset collecting and incremental model learning simultaneously. The goal of this work is to use the tremendous resources of the web to learn robust object category models for detecting and searching for objects in real-world cluttered scenes. Humans continuously update the knowledge of objects when new examples are observed. Our framework emulates this human learning process by iteratively accumulating model knowledge and image examples. We adapt a non-parametric latent topic model and propose an incremental learning framework. Our algorithm is capable of automatically collecting much larger object category datasets for 22 randomly selected classes from the Caltech 101 dataset. Furthermore, our system offers not only more images in each object category but also a robust object category model and meaningful image annotation. Our experiments show that OPTIMOL is capable of collecting image datasets that are superior to the well known manually collected object datasets Caltech 101 and LabelMe.",
"Image auto-annotation is an important open problem in computer vision. For this task we propose TagProp, a discriminatively trained nearest neighbor model. Tags of test images are predicted using a weighted nearest-neighbor model to exploit labeled training images. Neighbor weights are based on neighbor rank or distance. TagProp allows the integration of metric learning by directly maximizing the log-likelihood of the tag predictions in the training set. In this manner, we can optimally combine a collection of image similarity metrics that cover different aspects of image content, such as local shape descriptors, or global color histograms. We also introduce a word specific sigmoidal modulation of the weighted neighbor tag predictions to boost the recall of rare words. We investigate the performance of different variants of our model and compare to existing work. We present experimental results for three challenging data sets. On all three, TagProp makes a marked improvement as compared to the current state-of-the-art.",
"We propose NEIL (Never Ending Image Learner), a computer program that runs 24 hours per day and 7 days per week to automatically extract visual knowledge from Internet data. NEIL uses a semi-supervised learning algorithm that jointly discovers common sense relationships (e.g., \"Corolla is a kind of looks similar to Car\", \"Wheel is a part of Car\") and labels instances of the given visual categories. It is an attempt to develop the world's largest visual structured knowledge base with minimum human labeling effort. As of 10th October 2013, NEIL has been continuously running for 2.5 months on 200 core cluster (more than 350K CPU hours) and has an ontology of 1152 object categories, 1034 scene categories and 87 attributes. During this period, NEIL has discovered more than 1700 relationships and has labeled more than 400K visual instances."
]
} |
1508.07647 | 2950325406 | Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically; in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata. | These approaches have shown good results, but are limited because they treat tags and visual features separately, and may be biased towards common labels. Some authors instead tackle multilabel image annotation by learning parametric models over visual features that can make predictions @cite_10 @cite_49 @cite_14 @cite_23 or rank tags @cite_32 . Gong et al. @cite_23 recently showed state of the art results on NUS-WIDE @cite_7 using CNNs with multilabel ranking losses. These methods typically do not take advantage of image metadata. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_32",
"@cite_23",
"@cite_49",
"@cite_10"
],
"mid": [
"2025051526",
"2007972815",
"",
"1514027499",
"",
"2162867699"
],
"abstract": [
"Automatically assigning relevant text keywords to images is an important problem. Many algorithms have been proposed in the past decade and achieved good performance. Efforts have focused upon model representations of keywords, but properties of features have not been well investigated. In most cases, a group of features is preselected, yet important feature properties are not well used to select features. In this paper, we introduce a regularization based feature selection algorithm to leverage both the sparsity and clustering properties of features, and incorporate it into the image annotation task. A novel approach is also proposed to iteratively obtain similar and dissimilar pairs from both the keyword similarity and the relevance feedback. Thus keyword similarity is modeled in the annotation framework. Numerous experiments are designed to compare the performance between features, feature combinations and regularization based feature selection methods applied on the image annotation task, which gives insight into the properties of features in the image annotation task. The experimental results demonstrate that the group sparsity based method is more accurate and stable than others.",
"This paper introduces a web image dataset created by NUS's Lab for Media Search. The dataset includes: (1) 269,648 images and the associated tags from Flickr, with a total of 5,018 unique tags; (2) six types of low-level features extracted from these images, including 64-D color histogram, 144-D color correlogram, 73-D edge direction histogram, 128-D wavelet texture, 225-D block-wise color moments extracted over 5x5 fixed grid partitions, and 500-D bag of words based on SIFT descriptions; and (3) ground-truth for 81 concepts that can be used for evaluation. Based on this dataset, we highlight characteristics of Web image collections and identify four research issues on web image annotation and retrieval. We also provide the baseline results for web image annotation by learning from the tags using the traditional k-NN algorithm. The benchmark results indicate that it is possible to learn effective models from sufficiently large image dataset to facilitate general image retrieval.",
"",
"Multilabel image annotation is one of the most important challenges in computer vision with many real-world applications. While existing work usually uses conventional visual features for multilabel annotation, features based on Deep Neural Networks have shown potential to significantly boost performance. In this work, we propose to leverage the advantage of such features and analyze key components that lead to better performances. Specifically, we show that a significant performance gain could be obtained by combining convolutional architectures with approximate top- @math ranking objectives, as they naturally fit the multilabel tagging problem. Our experiments on the NUS-WIDE dataset outperform the conventional visual features by about 10%, obtaining the best reported performance in the literature.",
"",
"This paper introduces a discriminative model for the retrieval of images from text queries. Our approach formalizes the retrieval task as a ranking problem, and introduces a learning procedure optimizing a criterion related to the ranking performance. The proposed model hence addresses the retrieval problem directly and does not rely on an intermediate image annotation task, which contrasts with previous research. Moreover, our learning procedure builds upon recent work on the online learning of kernel-based classifiers. This yields an efficient, scalable algorithm, which can benefit from recent kernels developed for image comparison. The experiments performed over stock photography data show the advantage of our discriminative ranking approach over state-of-the-art alternatives (e.g. our model yields 26.3% average precision over the Corel dataset, which should be compared to 22.0% for the best alternative model evaluated). Further analysis of the results shows that our model is especially advantageous over difficult queries such as queries with few relevant pictures or multiple-word queries."
]
} |
1508.07647 | 2950325406 | Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically; in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata. | A common approach for utilizing image metadata is to learn a joint representation of image and tags. To this end, prior work generatively models the association between visual data and tags or labels @cite_12 @cite_8 @cite_5 @cite_27 or applies non-negative matrix factorization to model this latent structure @cite_24 @cite_47 @cite_25 . Similarly, Niu et al. @cite_18 encode the text tags as relations among the images, and define a semi-supervised relational topic model for image classification. Another popular approach maps images and tags to a common semantic space, using CCA or kCCA @cite_16 @cite_40 @cite_17 @cite_38 . This line of work is closely related to our task; however, these approaches only model user tags and assume static vocabularies; in contrast we show that our model can generalize to new types of metadata. | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_8",
"@cite_24",
"@cite_27",
"@cite_40",
"@cite_5",
"@cite_47",
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_17"
],
"mid": [
"2087501015",
"2074486097",
"2137471889",
"",
"154472438",
"2029163572",
"2125238156",
"2236279451",
"21006490",
"2047779269",
"2127411609",
"2464688717"
],
"abstract": [
"Automatic image annotation is still an important open problem in multimedia and computer vision. The success of media sharing websites has led to the availability of large collections of images tagged with human-provided labels. Many approaches previously proposed in the literature do not accurately capture the intricate dependencies between image content and annotations. We propose a learning procedure based on Kernel Canonical Correlation Analysis which finds a mapping between visual and textual words by projecting them into a latent meaning space. The learned mapping is then used to annotate new images using advanced nearest-neighbor voting methods. We evaluate our approach on three popular datasets, and show clear improvements over several approaches relying on more standard representations.",
"In this paper, we address the problem of recognizing images with weakly annotated text tags. Most previous work either cannot be applied to the scenarios where the tags are loosely related to the images, or simply takes a pre-fusion at the feature level or a post-fusion at the decision level to combine the visual and textual content. Instead, we first encode the text tags as the relations among the images, and then propose a semi-supervised relational topic model (ss-RTM) to explicitly model the image content and their relations. In such a way, we can efficiently leverage the loosely related tags, and build an intermediate level representation for a collection of weakly annotated images. The intermediate level representation can be regarded as a mid-level fusion of the visual and textual content, which is able to explicitly model their intrinsic relationships. Moreover, image category labels are also modeled in the ss-RTM, and recognition can be conducted without training an additional discriminative classifier. Our extensive experiments on social multimedia datasets (images+tags) demonstrated the advantages of the proposed model.",
"We present a new approach for modeling multi-modal data sets, focusing on the specific case of segmented images with associated text. Learning the joint distribution of image regions and words has many applications. We consider in detail predicting words associated with whole images (auto-annotation) and corresponding to particular image regions (region naming). Auto-annotation might help organize and access large collections of images. Region naming is a model of object recognition as a process of translating image regions to words, much as one might translate from one language to another. Learning the relationships between image regions and semantic correlates (words) is an interesting example of multi-modal data mining, particularly because it is typically hard to apply data mining techniques to collections of images. We develop a number of models for the joint distribution of image regions and words, including several which explicitly learn the correspondence between regions and words. We study multi-modal and correspondence extensions to Hofmann's hierarchical clustering aspect model, a translation model adapted from statistical machine translation (), and a multi-modal extension to mixture of latent Dirichlet allocation (MoM-LDA). All models are assessed using a large collection of annotated images of real scenes. We study in depth the difficult problem of measuring performance. For the annotation task, we look at prediction performance on held out data. We present three alternative measures, oriented toward different types of task. Measuring the performance of correspondence methods is harder, because one must determine whether a word has been placed on the right region of an image. We can use annotation performance as a proxy measure, but accurate measurement requires hand labeled data, and thus must occur on a smaller scale. We show results using both an annotation proxy, and manually labeled data.",
"",
"Data often consists of multiple diverse modalities. For example, images are tagged with textual information and videos are accompanied by audio. Each modality is characterized by having distinct statistical properties. We propose a Deep Boltzmann Machine for learning a generative model of such multimodal data. We show that the model can be used to create fused representations by combining features across modalities. These learned representations are useful for classification and information retrieval. By sampling from the conditional distributions over each data modality, it is possible to create these representations even when some data modalities are missing. We conduct experiments on bimodal image-text and audio-video data. The fused representation achieves good classification results on the MIR-Flickr data set matching or outperforming other deep models as well as SVM based models that use Multiple Kernel Learning. We further demonstrate that this multimodal model helps classification and retrieval even when only unimodal data is available at test time.",
"We introduce an approach to image retrieval and auto-tagging that leverages the implicit information about object importance conveyed by the list of keyword tags a person supplies for an image. We propose an unsupervised learning procedure based on Kernel Canonical Correlation Analysis that discovers the relationship between how humans tag images (e.g., the order in which words are mentioned) and the relative importance of objects and their layout in the scene. Using this discovered connection, we show how to boost accuracy for novel queries, such that the search results better preserve the aspects a human may find most worth mentioning. We evaluate our approach on three datasets using either keyword tags or natural language descriptions, and quantify results with both ground truth parameters as well as direct tests with human subjects. Our results show clear improvements over approaches that either rely on image features alone, or that use words and image features but ignore the implied importance cues. Overall, our work provides a novel way to incorporate high-level human perception of scenes into visual representations for enhanced image search.",
"A probabilistic formulation for semantic image annotation and retrieval is proposed. Annotation and retrieval are posed as classification problems where each class is defined as the group of database images labeled with a common semantic label. It is shown that, by establishing this one-to-one correspondence between semantic labels and semantic classes, a minimum probability of error annotation and retrieval are feasible with algorithms that are 1) conceptually simple, 2) computationally efficient, and 3) do not require prior semantic segmentation of training images. In particular, images are represented as bags of localized feature vectors, a mixture density estimated for each image, and the mixtures associated with all images annotated with a common semantic label pooled into a density estimate for the corresponding semantic class. This pooling is justified by a multiple instance learning argument and performed efficiently with a hierarchical extension of expectation-maximization. The benefits of the supervised formulation over the more complex, and currently popular, joint modeling of semantic label and visual feature distributions are illustrated through theoretical arguments and extensive experiments. The supervised formulation is shown to achieve higher accuracy than various previously published methods at a fraction of their computational cost. Finally, the proposed method is shown to be fairly robust to parameter tuning",
"It is now generally recognized that user-provided image tags are incomplete and noisy. In this study, we focus on the problem of tag completion that aims to simultaneously enrich the missing tags and remove noisy tags. The novel component of the proposed framework is a noisy matrix recovery algorithm. It assumes that the observed tags are independently sampled from an unknown tag matrix and our goal is to recover the tag matrix based on the sampled tags. We show theoretically that the proposed noisy tag matrix recovery algorithm is able to simultaneously recover the missing tags and de-emphasize the noisy tags even with a limited number of observations. In addition, a graph Laplacian based component is introduced to combine the noisy matrix recovery component with visual features. Our empirical study with multiple benchmark datasets for image tagging shows that the proposed algorithm outperforms state-of-the-art approaches in terms of both effectiveness and efficiency when handling missing and noisy tags.",
"Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at the top of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method, called WSABIE, both outperforms several baseline methods and is faster and consumes less memory.",
"The real world image databases such as Flickr are characterized by continuous addition of new images. The recent approaches for image annotation, i.e. the problem of assigning tags to images, have two major drawbacks. First, either models are learned using the entire training data, or to handle the issue of dataset imbalance, tag-specific discriminative models are trained. Such models become obsolete and require relearning when new images and tags are added to database. Second, the task of feature-fusion is typically dealt using ad-hoc approaches. In this paper, we present a weighted extension of Multi-view Non-negative Matrix Factorization (NMF) to address the aforementioned drawbacks. The key idea is to learn query-specific generative model on the features of nearest-neighbors and tags using the proposed NMF-KNN approach which imposes consensus constraint on the coefficient matrices across different features. This results in coefficient vectors across features to be consistent and, thus, naturally solves the problem of feature fusion, while the weight matrices introduced in the proposed formulation alleviate the issue of dataset imbalance. Furthermore, our approach, being query-specific, is unaffected by addition of images and tags in a database. We tested our method on two datasets used for evaluation of image annotation and obtained competitive results.",
"We propose an approach to learning the semantics of images which allows us to automatically annotate an image with keywords and to retrieve images based on text queries. We do this using a formalism that models the generation of annotated images. We assume that every image is divided into regions, each described by a continuous-valued feature vector. Given a training set of images with annotations, we compute a joint probabilistic model of image features and words which allow us to predict the probability of generating a word given the image regions. This may be used to automatically annotate and retrieve images given a word as a query. Experiments show that our model significantly outperforms the best of the previously reported results on the tasks of automatic image annotation and retrieval.",
""
]
} |
1508.07647 | 2950325406 | Some images that are difficult to recognize on their own may become more clear in the context of a neighborhood of related images with similar social-network metadata. We build on this intuition to improve multilabel image annotation. Our model uses image metadata nonparametrically to generate neighborhoods of related images using Jaccard similarities, then uses a deep neural network to blend visual information from the image and its neighbors. Prior work typically models image metadata parametrically, in contrast, our nonparametric treatment allows our model to perform well even when the vocabulary of metadata changes between training and testing. We perform comprehensive experiments on the NUS-WIDE dataset, where we show that our model outperforms state-of-the-art methods for multilabel image annotation even when our model is forced to generalize to new types of metadata. | Besides user tags, previous work uses GPS and timestamps @cite_6 @cite_33 @cite_19 @cite_9 to improve classification performance in specific tasks such as landmark classification. Some authors model the relations between images using multiple metadata @cite_39 @cite_15 @cite_3 @cite_28 @cite_42 . Duan al @cite_3 present a latent CRF model in which tags, visual features and GPS-tags are used jointly for image clustering. McAuley and Leskovec model pairwise social relations between images and then apply a structural learning approach for image classification and labeling @cite_15 . They use this model to analyze the utility of different types of metadata for image labeling. Our work is similarly motivated, but their method does not use any visual representation. In contrast, we use a deep neural network to blend the visual information of images that share similar metadata. | {
"cite_N": [
"@cite_33",
"@cite_28",
"@cite_9",
"@cite_42",
"@cite_6",
"@cite_39",
"@cite_3",
"@cite_19",
"@cite_15"
],
"mid": [
"",
"1605620159",
"",
"2949861479",
"2103163130",
"2113227937",
"1970244761",
"1499644441",
"2125204570"
],
"abstract": [
"",
"We describe a system for searching your personal photos using an extremely wide range of text queries, including dates and holidays (\"Halloween\"), named and categorical places (\"Empire State Building\" or \"park\"), events and occasions (\"Radiohead concert\" or \"wedding\"), activities (\"skiing\"), object categories (\"whales\"), attributes (\"outdoors\"), and object instances (\"Mona Lisa\"), and any combination of these -- all with no manual labeling required. We accomplish this by correlating information in your photos -- the timestamps, GPS locations, and image pixels -- to information mined from the Internet. This includes matching dates to holidays listed on Wikipedia, GPS coordinates to places listed on Wikimapia, places and dates to find named events using Google, visual categories using classifiers either pre-trained on ImageNet or trained on-the-fly using results from Google Image Search, and object instances using interest point-based matching, again using results from Google Images. We tie all of these disparate sources of information together in a unified way, allowing for fast and accurate searches using whatever information you remember about a photo.",
"",
"Image feature representation plays an essential role in image recognition and related tasks. The current state-of-the-art feature learning paradigm is supervised learning from labeled data. However, this paradigm requires large-scale category labels, which limits its applicability to domains where labels are hard to obtain. In this paper, we propose a new data-driven feature learning paradigm which does not rely on category labels. Instead, we learn from user behavior data collected on social media. Concretely, we use the image relationship discovered in the latent space from the user behavior data to guide the image feature learning. We collect a large-scale image and user behavior dataset from Behance.net. The dataset consists of 1.9 million images and over 300 million view records from 1.9 million users. We validate our feature learning paradigm on this dataset and find that the learned feature significantly outperforms the state-of-the-art image features in learning better image similarities. We also show that the learned feature performs competitively on various recognition benchmarks.",
"Estimating geographic information from an image is an excellent, difficult high-level computer vision problem whose time has come. The emergence of vast amounts of geographically-calibrated image data is a great reason for computer vision to start looking globally - on the scale of the entire planet! In this paper, we propose a simple algorithm for estimating a distribution over geographic locations from a single image using a purely data-driven scene matching approach. For this task, we leverage a dataset of over 6 million GPS-tagged images from the Internet. We represent the estimated image location as a probability distribution over the Earth's surface. We quantitatively evaluate our approach in several geolocation tasks and demonstrate encouraging performance (up to 30 times better than chance). We show that geolocation estimates can provide the basis for numerous other image understanding tasks such as population density estimation, land cover estimation or urban rural classification.",
"Most personal photos that are shared online are embedded in some form of social network, and these social networks are a potent source of contextual information that can be leveraged for automatic image understanding. In this paper, we investigate the utility of social network context for the task of automatic face recognition in personal photographs. We combine face recognition scores with social context in a conditional random field (CRF) model and apply this model to label faces in photos from the popular online social network Facebook, which is now the top photo-sharing site on the Web with billions of photos in total. We demonstrate that our simple method of enhancing face recognition with social network context substantially increases recognition performance beyond that of a baseline face recognition system.",
"Photo-sharing websites have become very popular in the last few years, leading to huge collections of online images. In addition to image data, these websites collect a variety of multimodal metadata about photos including text tags, captions, GPS coordinates, camera metadata, user profiles, etc. However, this metadata is not well constrained and is often noisy, sparse, or missing altogether. In this paper, we propose a framework to model these \"loosely organized\" multimodal datasets, and show how to perform loosely-supervised learning using a novel latent Conditional Random Field framework. We learn parameters of the LCRF automatically from a small set of validation data, using Information Theoretic Metric Learning (ITML) to learn distance functions and a structural SVM formulation to learn the potential functions. We apply our framework on four datasets of images from Flickr, evaluating both qualitatively and quantitatively against several baselines.",
"Can we model the temporal evolution of topics in Web image collections? If so, can we exploit the understanding of dynamics to solve novel visual problems or improve recognition performance? These two challenging questions are the motivation for this work. We propose a nonparametric approach to modeling and analysis of topical evolution in image sets. A scalable and parallelizable sequential Monte Carlo based method is developed to construct the similarity network of a large-scale dataset that provides a base representation for wide ranges of dynamics analysis. In this paper, we provide several experimental results to support the usefulness of image dynamics with the datasets of 47 topics gathered from Flickr. First, we produce some interesting observations such as tracking of subtopic evolution and outbreak detection, which cannot be achieved with conventional image sets. Second, we also present the complementary benefits that the images can introduce over the associated text analysis. Finally, we show that the training using the temporal association significantly improves the recognition performance.",
"Large-scale image retrieval benchmarks invariably consist of images from the Web. Many of these benchmarks are derived from online photo sharing networks, like Flickr, which in addition to hosting images also provide a highly interactive social community. Such communities generate rich metadata that can naturally be harnessed for image classification and retrieval. Here we study four popular benchmark datasets, extending them with social-network metadata, such as the groups to which each image belongs, the comment thread associated with the image, who uploaded it, their location, and their network of friends. Since these types of data are inherently relational, we propose a model that explicitly accounts for the interdependencies between images sharing common properties. We model the task as a binary labeling problem on a network, and use structured learning techniques to learn model parameters. We find that social-network metadata are useful in a variety of classification tasks, in many cases outperforming methods based on image content."
]
} |
1508.07654 | 2953379097 | Realistic videos of human actions exhibit rich spatiotemporal structures at multiple levels of granularity: an action can always be decomposed into multiple finer-grained elements in both space and time. To capture this intuition, we propose to represent videos by a hierarchy of mid-level action elements (MAEs), where each MAE corresponds to an action-related spatiotemporal segment in the video. We introduce an unsupervised method to generate this representation from videos. Our method is capable of distinguishing action-related segments from background segments and representing actions at multiple spatiotemporal resolutions. Given a set of spatiotemporal segments generated from the training data, we introduce a discriminative clustering algorithm that automatically discovers MAEs at multiple levels of granularity. We develop structured models that capture a rich set of spatial, temporal and hierarchical relations among the segments, where the action label and multiple levels of MAE labels are jointly inferred. The proposed model achieves state-of-the-art performance in multiple action recognition benchmarks. Moreover, we demonstrate the effectiveness of our model in real-world applications such as action recognition in large-scale untrimmed videos and action parsing. | The literature on human action recognition is immense. We refer the readers to the recent survey @cite_7 . In the following, we only review the work most closely related to ours. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1983705368"
],
"abstract": [
"Human activity recognition is an important area of computer vision research. Its applications include surveillance systems, patient monitoring systems, and a variety of systems that involve interactions between persons and electronic devices such as human-computer interfaces. Most of these applications require an automated recognition of high-level activities, composed of multiple simple (or atomic) actions of persons. This article provides a detailed overview of various state-of-the-art research papers on human activity recognition. We discuss both the methodologies developed for simple human actions and those for high-level activities. An approach-based taxonomy is chosen that compares the advantages and limitations of each approach. Recognition methodologies for an analysis of the simple actions of a single person are first presented in the article. Space-time volume approaches and sequential approaches that represent and recognize activities directly from input images are discussed. Next, hierarchical recognition methodologies for high-level activities are presented and compared. Statistical approaches, syntactic approaches, and description-based approaches for hierarchical recognition are discussed in the article. In addition, we further discuss the papers on the recognition of human-object interactions and group activities. Public datasets designed for the evaluation of the recognition methodologies are illustrated in our article as well, comparing the methodologies' performances. This review will provide the impetus for future research in more productive areas."
]
} |
1508.07876 | 2949411737 | Online social systems are multiplex in nature as multiple links may exist between the same two users across different social networks. In this work, we introduce a framework for studying links and interactions between users beyond the individual social network. Exploring the cross-section of two popular online platforms - Twitter and location-based social network Foursquare - we represent the two together as a composite multilayer online social network. Through this paradigm we study the interactions of pairs of users differentiating between those with links on one or both networks. We find that users with multiplex links, who are connected on both networks, interact more and have greater neighbourhood overlap on both platforms, in comparison with pairs who are connected on just one of the social networks. In particular, the most frequented locations of users are considerably closer, and similarity is considerably greater among multiplex links. We present a number of structural and interaction features, such as the multilayer Adamic Adar coefficient, which are based on the extension of the concept of the node neighbourhood beyond the single network. Our evaluation, which aims to shed light on the implications of multiplexity for the link generation process, shows that multilayer features, constructed from properties across social networks, perform better than their single network counterparts in predicting links across networks. We propose that combining information from multiple networks in a multilayer configuration can provide new insights into user interactions on online social networks, and can significantly improve link prediction overall with valuable applications to social bootstrapping and friend recommendations. | Multi-relational or multilayer networks have been explored in the context of a wide range of systems from global air transportation @cite_24 to massive online multiplayer games @cite_14 . 
A comprehensive review of multilayer network models can be found in @cite_1 . In the context of social networks, it is generally accepted that the more information we can obtain about the relationship between people, the more insight we can gain. A recent large-scale study on the subject has demonstrated the need for multi-channel data when comprehensively studying social networks @cite_19 . Despite the observable multilayer nature of the composite OSNs of users @cite_1 @cite_5 @cite_9 , most research efforts have been focused on theoretical modelling @cite_1 , with little to no empirical work exploiting data-driven applications in the domain of multilayer OSNs, especially with respect to how location-based and social interactions are coupled in the online social space. We attempt to fill these gaps in the present work by presenting a generalisable online multilayer framework applied to classic problems such as link prediction in OSNs. Our framework is strongly motivated by the theory of media multiplexity, which we review next. | {
"cite_N": [
"@cite_14",
"@cite_9",
"@cite_1",
"@cite_24",
"@cite_19",
"@cite_5"
],
"mid": [
"2072701859",
"",
"",
"2066132582",
"1987835826",
"2012308226"
],
"abstract": [
"The capacity to collect fingerprints of individuals in online media has revolutionized the way researchers explore human society. Social systems can be seen as a nonlinear superposition of a multitude of complex social networks, where nodes represent individuals and links capture a variety of different social relations. Much emphasis has been put on the network topology of social interactions, however, the multidimensional nature of these interactions has largely been ignored, mostly because of lack of data. Here, for the first time, we analyze a complete, multirelational, large social network of a society consisting of the 300,000 odd players of a massive multiplayer online game. We extract networks of six different types of one-to-one interactions between the players. Three of them carry a positive connotation (friendship, communication, trade), three a negative (enmity, armed aggression, punishment). We first analyze these types of networks as separate entities and find that negative interactions differ from positive interactions by their lower reciprocity, weaker clustering, and fatter-tail degree distribution. We then explore how the interdependence of different network types determines the organization of the social system. In particular, we study correlations and overlap between different types of links and demonstrate the tendency of individuals to play different roles in different networks. As a demonstration of the power of the approach, we present the first empirical large-scale verification of the long-standing structural balance theory, by focusing on the specific multiplex network of friendship and enmity relations.",
"",
"",
"We study the dynamics of the European Air Transport Network by using a multiplex network formalism. We will consider the set of flights of each airline as an interdependent network and we analyze the resilience of the system against random flight failures in the passenger’s rescheduling problem. A comparison between the single-plex approach and the corresponding multiplex one is presented illustrating that the multiplexity strongly affects the robustness of the European Air Network.",
"This paper describes the deployment of a large-scale study designed to measure human interactions across a variety of communication channels, with high temporal resolution and spanning multiple years—the Copenhagen Networks Study. Specifically, we collect data on face-to-face interactions, telecommunication, social networks, location, and background information (personality, demographics, health, politics) for a densely connected population of 1 000 individuals, using state-of-the-art smartphones as social sensors. Here we provide an overview of the related work and describe the motivation and research agenda driving the study. Additionally, the paper details the data-types measured, and the technical infrastructure in terms of both backend and phone software, as well as an outline of the deployment procedures. We document the participant privacy procedures and their underlying principles. The paper is concluded with early results from data analysis, illustrating the importance of multi-channel high-resolution approach to data collection.",
"A method for extraction of the multi-layered social network based on the data about human collaborative achievements, in particular scientific papers, is presented in the paper. The objects linking people form a hierarchy, which is flattened in the pre-processing stage. Only one level of the hierarchy remains together with new activities moved from its other levels. Separate layers of the multi-layered social network are created based on these pre-processed activities."
]
} |
1508.07876 | 2949411737 | Online social systems are multiplex in nature as multiple links may exist between the same two users across different social networks. In this work, we introduce a framework for studying links and interactions between users beyond the individual social network. Exploring the cross-section of two popular online platforms - Twitter and location-based social network Foursquare - we represent the two together as a composite multilayer online social network. Through this paradigm we study the interactions of pairs of users differentiating between those with links on one or both networks. We find that users with multiplex links, who are connected on both networks, interact more and have greater neighbourhood overlap on both platforms, in comparison with pairs who are connected on just one of the social networks. In particular, the most frequented locations of users are considerably closer, and similarity is considerably greater among multiplex links. We present a number of structural and interaction features, such as the multilayer Adamic Adar coefficient, which are based on the extension of the concept of the node neighbourhood beyond the single network. Our evaluation, which aims to shed light on the implications of multiplexity for the link generation process, shows that multilayer features, constructed from properties across social networks, perform better than their single network counterparts in predicting links across networks. We propose that combining information from multiple networks in a multilayer configuration can provide new insights into user interactions on online social networks, and can significantly improve link prediction overall with valuable applications to social bootstrapping and friend recommendations. | Media multiplexity @cite_13 is the principle that tie strength is observed to be greater when the number of media channels used to communicate between two people is greater (higher multiplexity). 
@cite_29 the authors studied the effects of media use on relationships in an academic organisation and found that those pairs of participants who utilised more types of media (including email and videoconferencing) interacted more frequently and therefore had a closer relationship, such as friendship. More recently, multiplexity has been studied in light of multilayer communication networks, where the intersection of the layers was found to indicate a strong tie, while single-layer links were found to denote a weaker relationship @cite_23 . The strength of social ties is an important consideration in friend recommendations and link prediction @cite_3 , and we employ the previously understudied multiplex properties of OSNs to such ends in this work. | {
"cite_N": [
"@cite_29",
"@cite_3",
"@cite_13",
"@cite_23"
],
"mid": [
"2088793533",
"2124142520",
"2039688301",
"1780950168"
],
"abstract": [
"We use a social network approach to examine how work and friendship ties in a university research group were associated with the kinds of media used for different kinds of information exchange. The use of electronic mail, unscheduled face-to-face encounters, and scheduled face-to-face meetings predominated for the exchange of six kinds of information: Receiving Work, Giving Work, Collaborative Writing, Computer Programming, Sociability, and Major Emotional Support. Few pairs used synchronous desktop videoconferencing or the telephone. E-mail was used in similar ways as face-to-face communication. The more frequent the contact, the more “multiplex” the tie: A larger number of media was used to exchange a greater variety of information. The closeness of work ties and of friendship ties were each independently associated with more interaction: A greater frequency of communication, the exchange of more kinds of information, and the use of more media. © 1998 John Wiley & Sons, Inc.",
"Social media treats all users the same: trusted friend or total stranger, with little or nothing in between. In reality, relationships fall everywhere along this spectrum, a topic social science has investigated for decades under the theme of tie strength. Our work bridges this gap between theory and practice. In this paper, we present a predictive model that maps social media data to tie strength. The model builds on a dataset of over 2,000 social media ties and performs quite well, distinguishing between strong and weak ties with over 85% accuracy. We complement these quantitative findings with interviews that unpack the relationships we could not predict. The paper concludes by illustrating how modeling tie strength can improve social media design elements, including privacy controls, message routing, friend introductions and information prioritization.",
"This paper explores the impact of communication media and the Internet on connectivity between people. Results from a series of social network studies of media use are used as background for exploration of these impacts. These studies explored the use of all available media among members of an academic research group and among distance learners. Asking about media use as well as about the strength of the tie between communicating pairs revealed that those more strongly tied used more media to communicate than weak ties, and that media use within groups conformed to a unidimensional scale, showing a configuration of different tiers of media use supporting social networks of different ties strengths. These results lead to a number of implications regarding media and Internet connectivity, including: how media use can be added to characteristics of social network ties; how introducing a medium can create latent tie connectivity among group members that provides the technical means for activating weak ties, a...",
"Social media allow for an unprecedented amount of interaction between people online. A fundamental aspect of human social behavior, however, is the tendency of people to associate themselves with like-minded individuals, forming homogeneous social circles both online and offline. In this work, we apply a new model that allows us to distinguish between social ties of varying strength, and to observe evidence of homophily with regards to politics, music, health, residential sector & year in college, within the online and offline social network of 74 college students. We present a multiplex network approach to social tie strength, here applied to mobile communication data - calls, text messages, and co-location, allowing us to dimensionally identify relationships by considering the number of communication channels utilized between students. We find that strong social ties are characterized by maximal use of communication channels, while weak ties by minimal use. We are able to identify 75% of close friendships, 90% of weaker ties, and 90% of Facebook friendships as compared to reported ground truth. We then show that stronger ties exhibit greater profile similarity than weaker ones. Apart from high homogeneity in social circles with respect to political and health aspects, we observe strong homophily driven by music, residential sector and year in college. Despite Facebook friendship being highly dependent on residence and year, exposure to less homogeneous content can be found in the online rather than the offline social circles of students, most notably in political and music aspects."
]
} |
1508.07876 | 2949411737 | Online social systems are multiplex in nature as multiple links may exist between the same two users across different social networks. In this work, we introduce a framework for studying links and interactions between users beyond the individual social network. Exploring the cross-section of two popular online platforms - Twitter and location-based social network Foursquare - we represent the two together as a composite multilayer online social network. Through this paradigm we study the interactions of pairs of users differentiating between those with links on one or both networks. We find that users with multiplex links, who are connected on both networks, interact more and have greater neighbourhood overlap on both platforms, in comparison with pairs who are connected on just one of the social networks. In particular, the most frequented locations of users are considerably closer, and similarity is considerably greater among multiplex links. We present a number of structural and interaction features, such as the multilayer Adamic Adar coefficient, which are based on the extension of the concept of the node neighbourhood beyond the single network. Our evaluation, which aims to shed light on the implications of multiplexity for the link generation process, shows that multilayer features, constructed from properties across social networks, perform better than their single network counterparts in predicting links across networks. We propose that combining information from multiple networks in a multilayer configuration can provide new insights into user interactions on online social networks, and can significantly improve link prediction overall with valuable applications to social bootstrapping and friend recommendations. | The problem of link prediction was first introduced in the seminal work of @cite_2 and since then, has been applied in various network domains. 
For instance, in @cite_10 the authors exploit place features in location-based services to recommend friendships, and in @cite_6 a new model based on supervised random walks is proposed to predict new links in Facebook. Most of these works build on features that are endogenous to the system that hosts the social network of users. In our evaluation, however, we train and test on heterogeneous networks. In a similar spirit, the authors in @cite_25 show how using both location and social information from the same network significantly improves link prediction. Our approach differs in that it frames the link prediction task in the context of multilayer networks and empirically shows the relationship between two different systems - Foursquare and Twitter - by mining features from both. Before presenting our framework and analysis, we will next state the research questions we are interested in answering through this work. | {
"cite_N": [
"@cite_10",
"@cite_25",
"@cite_6",
"@cite_2"
],
"mid": [
"2001344462",
"2069090820",
"2952696519",
"2148847267"
],
"abstract": [
"Link prediction systems have been largely adopted to recommend new friends in online social networks using data about social interactions. With the soaring adoption of location-based social services it becomes possible to take advantage of an additional source of information: the places people visit. In this paper we study the problem of designing a link prediction system for online location-based social networks. We have gathered extensive data about one of these services, Gowalla, with periodic snapshots to capture its temporal evolution. We study the link prediction space, finding that about 30% of new links are added among \"place-friends\", i.e., among users who visit the same places. We show how this prediction space can be made 15 times smaller, while still 66% of future connections can be discovered. Thus, we define new prediction features based on the properties of the places visited by users which are able to discriminate potential future links among them. Building on these findings, we describe a supervised learning framework which exploits these prediction features to predict new links among friends-of-friends and place-friends. Our evaluation shows how the inclusion of information about places and related user activity offers high link prediction performance. These results open new directions for real-world link recommendation systems on location-based social networks.",
"Location plays an essential role in our lives, bridging our online and offline worlds. This paper explores the interplay between people's location, interactions, and their social ties within a large real-world dataset. We present and evaluate Flap, a system that solves two intimately related tasks: link and location prediction in online social networks. For link prediction, Flap infers social ties by considering patterns in friendship formation, the content of people's messages, and user location. We show that while each component is a weak predictor of friendship alone, combining them results in a strong model, accurately identifying the majority of friendships. For location prediction, Flap implements a scalable probabilistic model of human mobility, where we treat users with known GPS positions as noisy sensors of the location of their friends. We explore supervised and unsupervised learning scenarios, and focus on the efficiency of both learning and inference. We evaluate Flap on a large sample of highly active users from two distinct geographical areas and show that it (1) reconstructs the entire friendship graph with high accuracy even when no edges are given; and (2) infers people's fine-grained location, even when they keep their data private and we can only access the location of their friends. Our models significantly outperform current comparable approaches to either task.",
"Predicting the occurrence of links is a fundamental problem in networks. In the link prediction problem we are given a snapshot of a network and would like to infer which interactions among existing members are likely to occur in the near future or which existing interactions are we missing. Although this problem has been extensively studied, the challenge of how to effectively combine the information from the network structure with rich node and edge attribute data remains largely open. We develop an algorithm based on Supervised Random Walks that naturally combines the information from the network structure with node and edge level attributes. We achieve this by using these attributes to guide a random walk on the graph. We formulate a supervised learning task where the goal is to learn a function that assigns strengths to edges in the network such that a random walker is more likely to visit the nodes to which new links will be created in the future. We develop an efficient training algorithm to directly learn the edge strength estimation function. Our experiments on the Facebook social graph and large collaboration networks show that our approach outperforms state-of-the-art unsupervised approaches as well as approaches that are based on feature extraction.",
"Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc."
]
} |
1508.07680 | 2953039697 | The problem of domain generalization is to take knowledge acquired from a number of related domains where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. Our algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization. | Other works in object recognition exist that address a similar problem, in the sense of having unknown targets, where the unseen dataset contains noisy images that are not in the training set @cite_4 @cite_7 . However, these were designed to be noise-specific and may suffer from dataset bias when observing objects with different types of noise. | {
"cite_N": [
"@cite_4",
"@cite_7"
],
"mid": [
"2004984348",
"1963079441"
],
"abstract": [
"We introduce Deep Hybrid Networks that are robust to the recognition of out-of-sample objects, i.e., ones that are drawn from a different probability distribution from the training data distribution. The networks are based on a particular combination of an auto-encoder and stacked Restricted Boltzmann Machines (RBMs). The autoencoder is used to extract sparse features, which are expected to be noise invariant in the observations. The stacked RBMs then observe the sparse features as inputs to learn the top hierarchical features. The use of RBMs is motivated by the fact that the stacked RBMs typically provide good performance when dealing with in-sample observations, as proven in the previous works. To improve the robustness against local noise, we propose a variant of our hybrid network by the usage of a mixture of sparse features and sparse connections in the auto-encoder layer. The experiments show that our proposed deep networks provide good performance in both the in-sample and out-of-sample situations, particularly when the number of training examples is small.",
"Deep Belief Networks (DBNs) are hierarchical generative models which have been used successfully to model high dimensional visual data. However, they are not robust to common variations such as occlusion and random noise. We explore two strategies for improving the robustness of DBNs. First, we show that a DBN with sparse connections in the first layer is more robust to variations that are not in the training set. Second, we develop a probabilistic denoising algorithm to determine a subset of the hidden layer nodes to unclamp. We show that this can be applied to any feedforward network classifier with localized first layer connections. Recognition results after denoising are significantly better over the standard DBN implementations for various sources of noise."
]
} |
1508.07680 | 2953039697 | The problem of domain generalization is to take knowledge acquired from a number of related domains where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. Our algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization. | Our proposed algorithm is based on the feature learning approach. Feature learning has been of great interest in the machine learning community since the emergence of deep learning (see @cite_22 and references therein). Some feature learning methods have been successfully applied to domain adaptation or transfer learning applications @cite_23 @cite_34 . To the best of our knowledge, there is no prior work along these lines on the more difficult problem of domain generalization, i.e., to create useful representations without observing the target domain. | {
"cite_N": [
"@cite_34",
"@cite_22",
"@cite_23"
],
"mid": [
"2953360861",
"2163922914",
"2949821452"
],
"abstract": [
"We evaluate whether features extracted from the activation of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be re-purposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.",
"The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.",
"Stacked denoising autoencoders (SDAs) have been successfully used to learn new representations for domain adaptation. Recently, they have attained record accuracy on standard benchmark tasks of sentiment analysis across different text domains. SDAs learn robust data representations by reconstruction, recovering original features from data that are artificially corrupted with noise. In this paper, we propose marginalized SDA (mSDA) that addresses two crucial limitations of SDAs: high computational cost and lack of scalability to high-dimensional features. In contrast to SDAs, our approach of mSDA marginalizes noise and thus does not require stochastic gradient descent or other optimization algorithms to learn parameters; in fact, they are computed in closed-form. Consequently, mSDA, which can be implemented in only 20 lines of MATLAB™, significantly speeds up SDAs by two orders of magnitude. Furthermore, the representations learnt by mSDA are as effective as the traditional SDAs, attaining almost identical accuracies in benchmark tasks."
]
} |
1508.07953 | 2952850130 | We introduce RIANN (Ring Intersection Approximate Nearest Neighbor search), an algorithm for matching patches of a video to a set of reference patches in real-time. For each query, RIANN finds potential matches by intersecting rings around key points in appearance space. Its search complexity is reversely correlated to the amount of temporal change, making it a good fit for videos, where typically most patches change slowly with time. Experiments show that RIANN is up to two orders of magnitude faster than previous ANN methods, and is the only solution that operates in real-time. We further demonstrate how RIANN can be used for real-time video processing and provide examples for a range of real-time video applications, including colorization, denoising, and several artistic effects. | The general problem of Approximate Nearest Neighbor matching received several excellent solutions that have become highly popular @cite_22 @cite_1 @cite_34 @cite_19 @cite_2 . None of these, however, reach real-time computation of ANN Fields in video. Image-specific methods for computing the ANN Field between a pair of images achieve shorter run-times by further exploiting properties of natural images @cite_31 @cite_14 @cite_4 @cite_25 @cite_17 @cite_18 . In particular, they rely on spatial coherency in images to propagate good matches between neighboring patches in the image plane. While sufficiently fast for most interactive image-editing applications, these methods are far from running at conventional video frame rates. It is only fair to say that these methods were not designed for video and do not leverage statistical properties of video. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_22",
"@cite_1",
"@cite_19",
"@cite_2",
"@cite_31",
"@cite_34",
"@cite_25",
"@cite_17"
],
"mid": [
"1627400044",
"1763426478",
"2145940484",
"2147717514",
"",
"2427881153",
"",
"1763426478",
"2050749090",
"2163292664",
"1644552752"
],
"abstract": [
"For many computer vision problems, the most time consuming component consists of nearest neighbor matching in high-dimensional spaces. There are no known exact algorithms for solving these high-dimensional problems that are faster than linear search. Approximate algorithms are known to provide large speedups with only minor loss in accuracy, but many such algorithms have been published with only minimal guidance on selecting an algorithm and its parameters for any given problem. In this paper, we describe a system that answers the question, “What is the fastest approximate nearest-neighbor algorithm for my data?” Our system will take any given dataset and desired degree of precision and use these to automatically determine the best algorithm and parameter values. We also describe a new algorithm that applies priority search on hierarchical k-means trees, which we have found to provide the best known performance on many datasets. After testing a range of alternatives, we have found that multiple randomized k-d trees provide the best performance for other datasets. We are releasing public domain code that implements these approaches. This library provides about one order of magnitude improvement in query time over the best previously available software and provides fully automated parameter selection.",
"PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection.",
"Coherency Sensitive Hashing (CSH) extends Locality Sensitivity Hashing (LSH) and PatchMatch to quickly find matching patches between two images. LSH relies on hashing, which maps similar patches to the same bin, in order to find matching patches. PatchMatch, on the other hand, relies on the observation that images are coherent, to propagate good matches to their neighbors, in the image plane. It uses random patch assignment to seed the initial matching. CSH relies on hashing to seed the initial patch matching and on image coherence to propagate good matches. In addition, hashing lets it propagate information between patches with similar appearance (i.e., map to the same bin). This way, information is propagated much faster because it can use similarity in appearance space or neighborhood in the image plane. As a result, CSH is at least three to four times faster than PatchMatch and more accurate, especially in textured regions, where reconstruction artifacts are most noticeable to the human eye. We verified CSH on a new, large scale, data set of 133 image pairs.",
"We present two algorithms for the approximate nearest neighbor problem in high-dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree. The article is based on the material from the authors' STOC'98 and FOCS'01 papers. It unifies, generalizes and simplifies the results from those papers.",
"",
"Consider a set S of n data points in real d-dimensional space, R^d, where distances are measured using any Minkowski metric. In nearest neighbor searching, we preprocess S into a data structure, so that given any query point q ∈ R^d, the closest point of S to q can be reported quickly. Given any positive real e, data point p is a (1 + e)-approximate nearest neighbor of q if its distance from q is within a factor of (1 + e) of the distance to the true nearest neighbor. We show that it is possible to preprocess a set of n points in R^d in O(dn log n) time and O(dn) space, so that given a query point q ∈ R^d and e > 0, a (1 + e)-approximate nearest neighbor of q can be computed in O(c_{d,e} log n) time, where c_{d,e} ≤ d⌈1 + 6d/e⌉^d is a factor depending only on dimension and e. In general, we show that given an integer k ≥ 1, (1 + e)-approximations to the k nearest neighbors of q can be computed in additional O(kd log n) time.",
"",
"PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection.",
"We address the problem of designing data structures that allow efficient search for approximate nearest neighbors. More specifically, given a database consisting of a set of vectors in some high dimensional Euclidean space, we want to construct a space-efficient data structure that would allow us to search, given a query vector, for the closest or nearly closest vector in the database. We also address this problem when distances are measured by the L1 norm and in the Hamming cube. Significantly improving and extending recent results of Kleinberg, we construct data structures whose size is polynomial in the size of the database and search algorithms that run in time nearly linear or nearly quadratic in the dimension. (Depending on the case, the extra factors are polylogarithmic in the size of the database.)",
"This paper exploits the context of natural dynamic scenes for human action recognition in video. Human actions are frequently constrained by the purpose and the physical properties of scenes and demonstrate high correlation with particular scene classes. For example, eating often happens in a kitchen while running is more common outdoors. The contribution of this paper is three-fold: (a) we automatically discover relevant scene classes and their correlation with human actions, (b) we show how to learn selected scene classes from video without manual supervision and (c) we develop a joint framework for action and scene recognition and demonstrate improved recognition of both in natural video. We use movie scripts as a means of automatic supervision for training. For selected action classes we identify correlated scene classes in text and then retrieve video samples of actions and scenes for training using script-to-video alignment. Our visual models for scenes and actions are formulated within the bag-of-features framework and are combined in a joint scene-action SVM-based classifier. We report experimental results and validate the method on a new large dataset with twelve action classes and ten scene classes acquired from 69 movies.",
"TreeCANN is a fast algorithm for approximately matching all patches between two images. It does so by following the established convention of finding an initial set of matching patch candidates between the two images and then propagating good matches to neighboring patches in the image plane. TreeCANN accelerates each of these components substantially leading to an algorithm that is ×3 to ×5 faster than existing methods. Seed matching is achieved using a properly tuned k-d tree on a sparse grid of patches. In particular, we show that a sequence of key design decisions can make k-d trees run as fast as recently proposed state-of-the-art methods, and because of image coherency it is enough to consider only a sparse grid of patches across the image plane. We then develop a novel propagation step that is based on the integral image, which drastically reduces the computational load that is dominated by the need to repeatedly measure similarity between pairs of patches. As a by-product we give an optimal algorithm for exact matching that is based on the integral image. The proposed exact algorithm is faster than previously reported results and depends only on the size of the images and not on the size of the patches. We report results on large and varied data sets and show that TreeCANN is orders of magnitude faster than exact NN search yet produces matches that are within 1% error, compared to the exact NN search."
]
} |
1508.07953 | 2952850130 | We introduce RIANN (Ring Intersection Approximate Nearest Neighbor search), an algorithm for matching patches of a video to a set of reference patches in real-time. For each query, RIANN finds potential matches by intersecting rings around key points in appearance space. Its search complexity is reversely correlated to the amount of temporal change, making it a good fit for videos, where typically most patches change slowly with time. Experiments show that RIANN is up to two orders of magnitude faster than previous ANN methods, and is the only solution that operates in real-time. We further demonstrate how RIANN can be used for real-time video processing and provide examples for a range of real-time video applications, including colorization, denoising, and several artistic effects. | An extension from images to video was proposed by Liu & Freeman @cite_26 for the purpose of video denoising through non-local-means. For each patch in the video they search for @math Approximate Nearest Neighbors within the same frame or in nearby frames. This is done by propagating candidate matches both temporally, using optical flow, and spatially in a similar manner to PatchMatch @cite_31 . One can think of this problem setup as similar to ours, but with a varying reference set. While we keep a fixed reference set for the entire video, @cite_26 use a different set of reference patches for each video frame. | {
"cite_N": [
"@cite_31",
"@cite_26"
],
"mid": [
"1763426478",
"1512782336"
],
"abstract": [
"PatchMatch is a fast algorithm for computing dense approximate nearest neighbor correspondences between patches of two image regions [1]. This paper generalizes PatchMatch in three ways: (1) to find k nearest neighbors, as opposed to just one, (2) to search across scales and rotations, in addition to just translations, and (3) to match using arbitrary descriptors and distances, not just sum-of-squared-differences on patch colors. In addition, we offer new search and parallelization strategies that further accelerate the method, and we show performance improvements over standard kd-tree techniques across a variety of inputs. In contrast to many previous matching algorithms, which for efficiency reasons have restricted matching to sparse interest points, or spatially proximate matches, our algorithm can efficiently find global, dense matches, even while matching across all scales and rotations. This is especially useful for computer vision applications, where our algorithm can be used as an efficient general-purpose component. We explore a variety of vision applications: denoising, finding forgeries by detecting cloned regions, symmetry detection, and object detection.",
"Although the recent advances in the sparse representations of images have achieved outstanding denoising results, removing real, structured noise in digital videos remains a challenging problem. We show the utility of reliable motion estimation to establish temporal correspondence across frames in order to achieve high-quality video denoising. In this paper, we propose an adaptive video denoising framework that integrates robust optical flow into a nonlocal means (NLM) framework with noise level estimation. The spatial regularization in optical flow is the key to ensure temporal coherence in removing structured noise. Furthermore, we introduce approximate K-nearest neighbor matching to significantly reduce the complexity of classical NLM methods. Experimental results show that our system is comparable with the state of the art in removing AWGN, and significantly outperforms the state of the art in removing real, structured noise."
]
} |
1508.07953 | 2952850130 | We introduce RIANN (Ring Intersection Approximate Nearest Neighbor search), an algorithm for matching patches of a video to a set of reference patches in real-time. For each query, RIANN finds potential matches by intersecting rings around key points in appearance space. Its search complexity is reversely correlated to the amount of temporal change, making it a good fit for videos, where typically most patches change slowly with time. Experiments show that RIANN is up to two orders of magnitude faster than previous ANN methods, and is the only solution that operates in real-time. We further demonstrate how RIANN can be used for real-time video processing and provide examples for a range of real-time video applications, including colorization, denoising, and several artistic effects. | We show how our ANNF framework can be used for video processing. The idea of using ANNF for video processing has been proposed before, and several works make use of it. Sun & Liu @cite_5 suggest an approach to video deblocking that considers both optical flow estimation and ANNs found using a kd-tree. An approach that utilizes temporal propagation for video super-resolution is proposed in @cite_12 . They too rely on optical flow estimation. The quality of the results obtained by these methods is high, but this comes at the price of very long runtimes, often measured in hours. | {
"cite_N": [
"@cite_5",
"@cite_12"
],
"mid": [
"204827909",
"1981990039"
],
"abstract": [
"Real-world video sequences coded at low bit rates suffer from compression artifacts, which are visually disruptive and can cause problems to computer vision algorithms. Unlike the denoising problem where the high frequency components of the signal are present in the noisy observation, most high frequency details are lost during compression and artificial discontinuities arise across the coding block boundaries. In addition to sparse spatial priors that can reduce the blocking artifacts for a single frame, temporal information is needed to recover the lost spatial details. However, establishing accurate temporal correspondences from the compressed videos is challenging because of the loss of high frequency details and the increase of false blocking artifacts. In this paper, we propose a non-causal temporal prior model to reduce video compression artifacts by propagating information from adjacent frames and iterating between image reconstruction and motion estimation. Experimental results on real-world sequences demonstrate that the deblocked videos by the proposed system have marginal statistics of high frequency components closer to those of the original ones, and are better input for standard edge and corner detectors than the coded ones.",
"Although multi-frame super resolution has been extensively studied in past decades, super resolving real-world video sequences still remains challenging. In existing systems, either the motion models are oversimplified, or important factors such as blur kernel and noise level are assumed to be known. Such models cannot deal with the scene and imaging conditions that vary from one sequence to another. In this paper, we propose a Bayesian approach to adaptive video super resolution via simultaneously estimating underlying motion, blur kernel and noise level while reconstructing the original high-res frames. As a result, our system not only produces very promising super resolution results that outperform the state of the art, but also adapts to a variety of noise levels and blur kernels. Theoretical analysis of the relationship between blur kernel, noise level and frequency-wise reconstruction rate is also provided, consistent with our experimental results."
]
} |
1508.07753 | 2949938158 | Bayesian networks, and especially their structures, are powerful tools for representing conditional independencies and dependencies between random variables. In applications where related variables form a priori known groups, chosen to represent different "views" to or aspects of the same entities, one may be more interested in modeling dependencies between groups of variables rather than between individual variables. Motivated by this, we study prospects of representing relationships between variable groups using Bayesian network structures. We show that for dependency structures between groups to be learnable, the data have to satisfy the so-called groupwise faithfulness assumption. We also show that one cannot learn causal relations between groups using only groupwise conditional independencies, but also variable-wise relations are needed. Additionally, we present algorithms for finding the groupwise dependency structures. | Burge and Lane @cite_2 have presented Bayesian networks for aggregation hierarchies which are related to hierarchical Bayesian networks. Groups of variables are aggregated by, for example, taking a maximum or mean and then networks are learned between the aggregated variables. From our point of view, the downside of this approach is that conditional independencies between aggregated variables do not necessarily correspond to conditional independencies between groups. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1608889008"
],
"abstract": [
"Bayesian network structure identification is known to be NP-Hard in the general case. We demonstrate a heuristic search for structure identification based on aggregation hierarchies. The basic idea is to perform initial exhaustive searches on composite “high-level” random variables (RVs) that are created via aggregations of atomic RVs. The results of the high-level searches then constrain a refined search on the atomic RVs. We demonstrate our methods on a challenging real-world neuroimaging domain and show that they consistently yield higher scoring networks when compared to traditional searches, provided sufficient topological complexity is permitted. On simulated data, where ground truth is known and controllable, our methods yield improved classification accuracy and structural precision, but can also result in reduced structural recall on particularly noisy datasets."
]
} |
1508.07753 | 2949938158 | Bayesian networks, and especially their structures, are powerful tools for representing conditional independencies and dependencies between random variables. In applications where related variables form a priori known groups, chosen to represent different "views" to or aspects of the same entities, one may be more interested in modeling dependencies between groups of variables rather than between individual variables. Motivated by this, we study prospects of representing relationships between variable groups using Bayesian network structures. We show that for dependency structures between groups to be learnable, the data have to satisfy the so-called groupwise faithfulness assumption. We also show that one cannot learn causal relations between groups using only groupwise conditional independencies, but also variable-wise relations are needed. Additionally, we present algorithms for finding the groupwise dependency structures. | Entner and Hoyer @cite_6 have presented an algorithm for finding causal structures among groups of continuous variables. Their model works under the assumptions that variables are linearly related and associated with non-Gaussian noise. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1526951520"
],
"abstract": [
"The machine learning community has recently devoted much attention to the problem of inferring causal relationships from statistical data. Most of this work has focused on uncovering connections among scalar random variables. We generalize existing methods to apply to collections of multi-dimensional random vectors, focusing on techniques applicable to linear models. The performance of the resulting algorithms is evaluated and compared in simulations, which show that our methods can, in many cases, provide useful information on causal relationships even for relatively small sample sizes."
]
} |
1508.07753 | 2949938158 | Bayesian networks, and especially their structures, are powerful tools for representing conditional independencies and dependencies between random variables. In applications where related variables form a priori known groups, chosen to represent different "views" to or aspects of the same entities, one may be more interested in modeling dependencies between groups of variables rather than between individual variables. Motivated by this, we study prospects of representing relationships between variable groups using Bayesian network structures. We show that for dependency structures between groups to be learnable, the data have to satisfy the so-called groupwise faithfulness assumption. We also show that one cannot learn causal relations between groups using only groupwise conditional independencies, but also variable-wise relations are needed. Additionally, we present algorithms for finding the groupwise dependency structures. | An earlier version of this paper @cite_1 appeared in the proceedings of the PGM 2016 conference. New contents of this paper include an analysis of the relationship between faithfulness and groupwise faithfulness (Theorems and ), an alternative definition of causality for variable groups and an analysis of it (Definition and Theorem ), a new algorithm for learning group DAGs (), and more thorough experiments (). | {
"cite_N": [
"@cite_1"
],
"mid": [
"2226727257"
],
"abstract": [
"Bayesian networks, and especially their structures, are powerful tools for representing conditional independencies and dependencies between random variables. In applications where related variables form a priori known groups, chosen to represent different “views” to or aspects of the same entities, one may be more interested in modeling dependencies between groups of variables rather than between individual variables. Motivated by this, we study prospects of representing relationships between variable groups using Bayesian network structures. We show that for dependency structures between groups to be expressible exactly, the data have to satisfy the so-called groupwise faithfulness assumption. We also show that one cannot learn causal relations between groups using only groupwise conditional independencies, but also variable-wise relations are needed. Additionally, we present algorithms for finding the groupwise dependency structures."
]
} |
1508.06950 | 2285630400 | We build a model of information cascades on feed-based networks, taking into account the finite attention span of users, message generation rates and message forwarding rates. Through simulation of this model, we study the effect of the extent of user attention on the probability that the cascade becomes viral. In analogy with a branching process, we estimate the branching factor associated with the cascade process for different attention spans and different forwarding probabilities, and demonstrate that beyond a certain attention span, cascades tend to become viral. The critical forwarding probabilities have an inverse relationship with the attention span. Next, we develop an analytical and numerical approach that allows us to determine the branching factor for given values of message generation rates, message forwarding rates and attention spans. The branching factors obtained using this analytical approach show good agreement with those obtained through simulations. Finally, we analyze an event-specific dataset obtained from Twitter, and show that estimated branching factors correlate well with the cascade-size distributions associated with distinct hashtags. | Using attributes of the underlying graph as explanatory variables is a common approach to studying information cascades on networks. Typically, these networks have either directional links, through follower and followee relationships, or bi-directional links, through friendship statuses. @cite_0 consider a model to predict spikes of activity that takes into account various graph-theoretic characteristics of these networks. In contrast, our analysis is based on abstracting the cascade as a branching process, and we apply this approach to data generated from simulations on scale-free networks, and to data collected from Twitter. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2963914481"
],
"abstract": [
"We propose a novel mathematical model for the activity of microbloggers during an external, event-driven spike. The model leads to a testable prediction of who would become most active if a spike were to take place. This type of insight into human behaviour has many applications, as it identifies key players who can be targeted with information in real time when the network is most receptive. The model takes account of the fact that dynamic interactions evolve over an underlying, static network that records \"who listens to whom\". Our fundamental assumption is that, in the case where the entire community has become aware of an external news event, a key driver of activity is the motivation to participate by responding to incoming messages. We validate the resulting algorithm on a large scale Twitter conversation concerning the appointment of a UK Premier League football club manager. We also find that the half-life of a spike in activity can be quantified in terms of the network size and the typical response rate."
]
} |
1508.06950 | 2285630400 | We build a model of information cascades on feed-based networks, taking into account the finite attention span of users, message generation rates and message forwarding rates. Through simulation of this model, we study the effect of the extent of user attention on the probability that the cascade becomes viral. In analogy with a branching process, we estimate the branching factor associated with the cascade process for different attention spans and different forwarding probabilities, and demonstrate that beyond a certain attention span, cascades tend to become viral. The critical forwarding probabilities have an inverse relationship with the attention span. Next, we develop an analytical and numerical approach that allows us to determine the branching factor for given values of message generation rates, message forwarding rates and attention spans. The branching factors obtained using this analytical approach show good agreement with those obtained through simulations. Finally, we analyze an event-specific dataset obtained from Twitter, and show that estimated branching factors correlate well with the cascade-size distributions associated with distinct hashtags. | Another approach common in studying information cascades in social networks is to consider community structure. @cite_14 @cite_13 investigate the impact of community structure on the spreading of memes. The dynamics of these cascades are studied with respect to various complex contagion models. Others have used epidemic models to investigate the dynamics of information cascades. @cite_8 considers epidemic models with four states and attempts to fit empirical data to parameterize them. The analogs to the states in this type of model describe the user when open to viewing messages of a particular topic; while a message is in its queue; after the message is forwarded; and when it can return to the susceptible state. Our approach does not allow for such a recovered state. | {
"cite_N": [
"@cite_14",
"@cite_13",
"@cite_8"
],
"mid": [
"2156716308",
"",
"2164082612"
],
"abstract": [
"How does network structure affect diffusion? Recent studies suggest that the answer depends on the type of contagion. Complex contagions, unlike infectious diseases (simple contagions), are affected by social reinforcement and homophily. Hence, the spread within highly clustered communities is enhanced, while diffusion across communities is hampered. A common hypothesis is that memes and behaviors are complex contagions. We show that, while most memes indeed spread like complex contagions, a few viral memes spread across many communities, like diseases. We demonstrate that the future popularity of a meme can be predicted by quantifying its early spreading pattern in terms of community concentration. The more communities a meme permeates, the more viral it is. We present a practical method to translate data about community structure into predictive knowledge about what information will spread widely. This connection contributes to our understanding in computational social science, social media analytics, and marketing applications.",
"",
"Characterizing information diffusion on social platforms like Twitter enables us to understand the properties of underlying media and model communication patterns. As Twitter gains in popularity, it has also become a venue to broadcast rumors and misinformation. We use epidemiological models to characterize information cascades in twitter resulting from both news and rumors. Specifically, we use the SEIZ enhanced epidemic model that explicitly recognizes skeptics to characterize eight events across the world and spanning a range of event types. We demonstrate that our approach is accurate at capturing diffusion in these events. Our approach can be fruitfully combined with other strategies that use content modeling and graph theoretic features to detect (and possibly disrupt) rumors."
]
} |
1508.06950 | 2285630400 | We build a model of information cascades on feed-based networks, taking into account the finite attention span of users, message generation rates and message forwarding rates. Through simulation of this model, we study the effect of the extent of user attention on the probability that the cascade becomes viral. In analogy with a branching process, we estimate the branching factor associated with the cascade process for different attention spans and different forwarding probabilities, and demonstrate that beyond a certain attention span, cascades tend to become viral. The critical forwarding probabilities have an inverse relationship with the attention span. Next, we develop an analytical and numerical approach that allows us to determine the branching factor for given values of message generation rates, message forwarding rates and attention spans. The branching factors obtained using this analytical approach show good agreement with those obtained through simulations. Finally, we analyze an event-specific dataset obtained from Twitter, and show that estimated branching factors correlate well with the cascade-size distributions associated with distinct hashtags. | In work related to biases in data collection, @cite_2 compares the full Twitter Firehose feed with the sampled Gardenhose Twitter Streaming API, to which the majority of researchers have access. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1845748792"
],
"abstract": [
"Twitter is a social media giant famous for the exchange of short, 140-character messages called \"tweets\". In the scientific community, the microblogging site is known for openness in sharing its data. It provides a glance into its millions of users and billions of tweets through a \"Streaming API\" which provides a sample of all tweets matching some parameters preset by the API user. The API service has been used by many researchers, companies, and governmental institutions that want to extract knowledge in accordance with a diverse array of questions pertaining to social media. The essential drawback of the Twitter API is the lack of documentation concerning what and how much data users get. This leads researchers to question whether the sampled data is a valid representation of the overall activity on Twitter. In this work we embark on answering this question by comparing data collected using Twitter's sampled API service with data collected using the full, albeit costly, Firehose stream that includes every single published tweet. We compare both datasets using common statistical metrics as well as metrics that allow us to compare topics, networks, and locations of tweets. The results of our work will help researchers and practitioners understand the implications of using the Streaming API."
]
} |
1508.06976 | 2210149738 | This paper addresses the problem of predicting the k events that are most likely to occur next, over historical real-time event streams. Existing approaches to causal prediction queries have a number of limitations. First, they exhaustively search over an acyclic causal network to find the most likely k effect events; however, data from real event streams frequently reflect cyclic causality. Second, they contain conservative assumptions intended to exclude all possible non-causal links in the causal network; it leads to the omission of many less-frequent but important causal links. We overcome these limitations by proposing a novel event precedence model and a run-time causal inference mechanism. The event precedence model constructs a first order absorbing Markov chain incrementally over event streams, where an edge between two events signifies a temporal precedence relationship between them, which is a necessary condition for causality. Then, the run-time causal inference mechanism learns causal relationships dynamically during query processing. This is done by removing some of the temporal precedence relationships that do not exhibit causality in the presence of other events in the event precedence model. This paper presents two query processing algorithms -- one performs exhaustive search on the model and the other performs a more efficient reduced search with early termination. Experiments using two real datasets (cascading blackouts in power systems and web page views) verify the effectiveness of the probabilistic top-k prediction queries and the efficiency of the algorithms. Specifically, the reduced search algorithm reduced runtime, relative to exhaustive search, by 25-80% (depending on the application) with only a small reduction in accuracy. | In addition, there has been some work (e.g., @cite_40 @cite_34 @cite_19 ) on Bayesian networks that aims to handle cyclic causality. This work, however, still carries the drawbacks inherent in the Bayesian network approach -- that is, the ambiguity of equivalence classes and the inability to meet the requirement of a causal network that the parent node in the network should always represent the direct cause -- and hence is not useful in our work. | {
"cite_N": [
"@cite_19",
"@cite_40",
"@cite_34"
],
"mid": [
"1586497982",
"372138008",
"878863653"
],
"abstract": [
"Although undirected cycles in directed graphs of Bayesian belief networks have been thoroughly studied, little attention has so far been given to a systematic analysis of directed (feedback) cycles. In this paper we propose a way of looking at those cycles; namely, we suggest that a feedback cycle represents a family of probabilistic distributions rather than a single distribution (as a regular Bayesian belief network does). A non-empty family of distributions can be explicitly represented by an ideal of conjunctions with interval estimates on the probabilities of its elements. This ideal can serve as a probabilistic model of an expert's uncertain knowledge pattern; such models are studied in the theory of algebraic Bayesian networks. The family of probabilistic distributions may also be empty; in this case, the probabilistic assignment over cycle nodes is inconsistent. We propose a simple way of explicating the probabilistic relationships an isolated directed cycle contains, give an algorithm (based on linear programming) of its consistency checking, and establish a lower bound of the complexity of this checking.",
"The first comprehensive overview of preprocessing, mining, and postprocessing of biological dataMolecular biology is undergoing exponential growth in both the volume and complexity of biological dataand knowledge discovery offers the capacity to automate complex search and data analysis tasks. This book presents a vast overview of the most recent developments on techniques and approaches in the field of biological knowledge discovery and data mining (KDD)providing in-depth fundamental and technical field information on the most important topics encountered.Written by top experts, Biological Knowledge Discovery Handbook: Preprocessing, Mining, and Postprocessing of Biological Data covers the three main phases of knowledge discovery (data preprocessing, data processingalso known as data miningand data postprocessing) and analyzes both verification systems and discovery systems.BIOLOGICAL DATA PREPROCESSINGPart A: Biological Data ManagementPart B: Biological Data ModelingPart C: Biological Feature ExtractionPart D Biological Feature SelectionBIOLOGICAL DATA MININGPart E: Regression Analysis of Biological DataPart F Biological Data ClusteringPart G: Biological Data ClassificationPart H: Association Rules Learning from Biological DataPart I: Text Mining and Application to Biological DataPart J: High-Performance Computing for Biological Data MiningCombining sound theory with practical applications in molecular biology, Biological Knowledge Discovery Handbook is ideal for courses in bioinformatics and biological KDD as well as for practitioners and professional researchers in computer science, life science, and mathematics.",
""
]
} |
1508.06976 | 2210149738 | This paper addresses the problem of predicting the k events that are most likely to occur next, over historical real-time event streams. Existing approaches to causal prediction queries have a number of limitations. First, they exhaustively search over an acyclic causal network to find the most likely k effect events; however, data from real event streams frequently reflect cyclic causality. Second, they contain conservative assumptions intended to exclude all possible non-causal links in the causal network; it leads to the omission of many less-frequent but important causal links. We overcome these limitations by proposing a novel event precedence model and a run-time causal inference mechanism. The event precedence model constructs a first order absorbing Markov chain incrementally over event streams, where an edge between two events signifies a temporal precedence relationship between them, which is a necessary condition for causality. Then, the run-time causal inference mechanism learns causal relationships dynamically during query processing. This is done by removing some of the temporal precedence relationships that do not exhibit causality in the presence of other events in the event precedence model. This paper presents two query processing algorithms -- one performs exhaustive search on the model and the other performs a more efficient reduced search with early termination. Experiments using two real datasets (cascading blackouts in power systems and web page views) verify the effectiveness of the probabilistic top-k prediction queries and the efficiency of the algorithms. Specifically, the reduced search algorithm reduced runtime, relative to exhaustive search, by 25-80% (depending on the application) with only a small reduction in accuracy. | The existing body of work on prediction only addresses inference of the likelihood of occurrence of an effect variable given a cause variable (e.g., @cite_38 @cite_6 @cite_29 @cite_26 @cite_27 @cite_36 ), while the prediction of top-k effects requires finding the most likely effects among all possible effect variables. Therefore, the only way to find the top-k next effects is to construct a traditional causal network over event streams, which ignores cyclic causality and suffers from causal information loss, and then infer the top-k effects of the cause exhaustively (e.g., @cite_2 @cite_37 @cite_31 ). To the best of our knowledge, there is no solution to address cyclic causality, mitigate the causal information loss, and perform only the necessary partial search to find the top-k effects of the given causes over event streams. | {
"cite_N": [
"@cite_38",
"@cite_37",
"@cite_26",
"@cite_36",
"@cite_29",
"@cite_6",
"@cite_27",
"@cite_2",
"@cite_31"
],
"mid": [
"2150344704",
"1510302714",
"2135267747",
"1496450597",
"1489119587",
"",
"1548084629",
"2128293149",
"2126653228"
],
"abstract": [
"Stochastic sampling algorithms, while an attractive alternative to exact algorithms in very large Bayesian network models, have been observed to perform poorly in evidential reasoning with extremely unlikely evidence. To address this problem, we propose an adaptive importance sampling algorithm, AIS-BN, that shows promising convergence rates even under extreme conditions and seems to outperform the existing sampling algorithms consistently. Three sources of this performance improvement are (1) two heuristics for initialization of the importance function that are based on the theoretical properties of importance sampling in finite-dimensional integrals and the structural advantages of Bayesian networks, (2) a smooth learning method for the importance function, and (3) a dynamic weighting function for combining samples from different stages of the algorithm. We tested the performance of the AIS-BN algorithm along with two state of the art general purpose sampling algorithms, likelihood weighting (Fung & Chang, 1989; Shachter & Peot, 1989) and self-importance sampling (Shachter & Peot, 1989). We used in our tests three large real Bayesian network models available to the scientific community: the CPCS network (, 1994), the PATHFINDER network (Heckerman, Horvitz, & Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, & Druzdzel, 1997), with evidence as unlikely as 10^-41. While the AIS-BN algorithm always performed better than the other two algorithms, in the majority of the test cases it achieved orders of magnitude improvement in precision of the results. Improvement in speed given a desired precision is even more dramatic, although we are unable to report numerical results here, as the other algorithms almost never achieved the precision reached even by the first few iterations of the AIS-BN algorithm.",
"Since exact probabilistic inference is intractable in general for large multiply connected belief nets, approximate methods are required. A promising approach is to use heuristic search among hypotheses (instantiations of the network) to find the most probable ones, as in the TopN algorithm. Search is based on the relative probabilities of hypotheses which are efficient to compute. Given upper and lower bounds on the relative probability of partial hypotheses, it is possible to obtain bounds on the absolute probabilities of hypotheses. Best-first search aimed at reducing the maximum error progressively narrows the bounds as more hypotheses are examined. Here, qualitative probabilistic analysis is employed to obtain bounds on the relative probability of partial hypotheses for the BN20 class of networks and a generalization replacing the noisy OR assumption by negative synergy. The approach is illustrated by application to a very large belief network, QMR-BN, which is a reformulation of the Internist-1 system for diagnosis in internal medicine.",
"Approximate Bayesian computation (ABC) is a popular approach to address inference problems where the likelihood function is intractable, or expensive to calculate. To improve over Markov chain Monte Carlo (MCMC) implementations of ABC, the use of sequential Monte Carlo (SMC) methods has recently been suggested. Most effective SMC algorithms that are currently available for ABC have a computational complexity that is quadratic in the number of Monte Carlo samples (, Biometrika 86:983---990, 2009; , Technical report, 2008; , J. Roy. Soc. Interface 6:187---202, 2009) and require the careful choice of simulation parameters. In this article an adaptive SMC algorithm is proposed which admits a computational complexity that is linear in the number of samples and adaptively determines the simulation parameters. We demonstrate our algorithm on a toy example and on a birth-death-mutation model arising in epidemiology.",
"The general problem of computing posterior probabilities in Bayesian networks is NP-hard (Cooper). However, efficient algorithms are often possible for particular applications by exploiting problem structures. It is well understood that the key to the materialization of such a possibility is to make use of conditional independence and work with factorizations of joint probabilities rather than joint probabilities themselves. Different exact approaches can be characterized in terms of their choices of factorizations. We propose a new approach which adopts a straightforward way for factorizing joint probabilities. In comparison with the clique tree propagation approach, our approach is very simple. It allows the pruning of irrelevant variables, it accommodates changes to the knowledge base more easily, and it is easier to implement. More importantly, it can be adapted to utilize both intercausal independence and conditional independence in one uniform framework. On the other hand, clique tree propagation is better in terms of facilitating precomputations.",
"This chapter describes a sequence of Monte Carlo methods: importance sampling, rejection sampling, the Metropolis method, and Gibbs sampling. For each method, we discuss whether the method is expected to be useful for high-dimensional problems such as arise in inference with graphical models. After the methods have been described, the terminology of Markov chain Monte Carlo methods is presented. The chapter concludes with a discussion of advanced methods, including methods for reducing random walk behaviour.",
"",
"The arc reversal node reduction approach to probabilistic inference is extended to include the case of instantiated evidence by an operation called “evidence reversal.” This not only provides a technique for computing posterior joint distributions on general belief networks, but also provides insight into the methods of Pearl [1986b] and Lauritzen and Spiegelhalter [1988]. Although it is well understood that the latter two algorithms are closely related, in fact all three algorithms are identical whenever the belief network is a forest.",
"Prediction is emerging as an essential ingredient for real-time monitoring, planning and decision support applications such as intrusion detection, e-commerce pricing and automated resource management. This paper presents a system that efficiently supports continuous prediction queries (CPQs) over streaming data using seamlessly-integrated probabilistic models. Specifically, we describe how to execute and optimize CPQs using discrete (Dynamic) Bayesian Networks as the underlying predictive model. Our primary contribution is a novel cost-based optimization framework that employs materialization, sharing, and model-specific optimization techniques to enable highly-efficient point- and range-based CPQ execution. Furthermore, we support efficient execution of top-k and threshold-based high probability queries. We characterize the behavior of our system and demonstrate significant performance gains using a prototype implementation operating on real-world network intrusion data and deployed as part of a real-time software-performance monitoring system.",
"A new method is proposed for exploiting causal independencies in exact Bayesian network inference. A Bayesian network can be viewed as representing a factorization of a joint probability into the multiplication of a set of conditional probabilities. We present a notion of causal independence that enables one to further factorize the conditional probabilities into a combination of even smaller factors and consequently obtain a finer-grain factorization of the joint probability. The new formulation of causal independence lets us specify the conditional probability of a variable given its parents in terms of an associative and commutative operator, such as \"or\", \"sum\" or \"max\", on the contribution of each parent. We start with a simple algorithm VE for Bayesian network inference that, given evidence and a query variable, uses the factorization to find the posterior distribution of the query. We show how this algorithm can be extended to exploit causal independence. Empirical studies, based on the CPCS networks for medical diagnosis, show that this method is more efficient than previous methods and allows for inference in larger networks than previous algorithms."
]
} |
1508.06976 | 2210149738 | This paper addresses the problem of predicting the k events that are most likely to occur next, over historical real-time event streams. Existing approaches to causal prediction queries have a number of limitations. First, they exhaustively search over an acyclic causal network to find the most likely k effect events; however, data from real event streams frequently reflect cyclic causality. Second, they contain conservative assumptions intended to exclude all possible non-causal links in the causal network; it leads to the omission of many less-frequent but important causal links. We overcome these limitations by proposing a novel event precedence model and a run-time causal inference mechanism. The event precedence model constructs a first order absorbing Markov chain incrementally over event streams, where an edge between two events signifies a temporal precedence relationship between them, which is a necessary condition for causality. Then, the run-time causal inference mechanism learns causal relationships dynamically during query processing. This is done by removing some of the temporal precedence relationships that do not exhibit causality in the presence of other events in the event precedence model. This paper presents two query processing algorithms -- one performs exhaustive search on the model and the other performs a more efficient reduced search with early termination. Experiments using two real datasets (cascading blackouts in power systems and web page views) verify the effectiveness of the probabilistic top-k prediction queries and the efficiency of the algorithms. Specifically, the reduced search algorithm reduced runtime, relative to exhaustive search, by 25-80% (depending on the application) with only a small reduction in accuracy. | The well-established association rule mining algorithms (e.g., @cite_35 @cite_24 @cite_3 ) are extensively used for prediction and recommendation. 
However, association does not necessarily imply causation (e.g., @cite_22 @cite_15 @cite_10 @cite_39 @cite_28 @cite_45 ). Therefore, these algorithms are not useful for our problem, as they exclude the fundamental concept of causality. That is, two associated variables must satisfy stronger conditions, such as temporality and strength, to be considered causally related. A few works on top-k query processing in the Internet domain, such as over social-tagging networks @cite_16 and over web 2.0 streams @cite_12 , have been published. Unlike our work, however, these works do not address causal prediction in an event-based environment at all. | {
"cite_N": [
"@cite_35",
"@cite_22",
"@cite_28",
"@cite_3",
"@cite_39",
"@cite_24",
"@cite_45",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"2110893740",
"1492644044",
"1961009203",
"2085419825",
"2112544758",
"",
"1811660475",
"",
"2156496568",
"2321919568",
""
],
"abstract": [
"In this paper we propose to use association rules to mine the association relationships among different genes under the same experimental conditions. These kinds of relations may also exist across many different experiments with various experimental conditions. In this paper, a new approach, called FIS-tree mining, is proposed for mining the microarray data. Our approach uses two new data structures, BSC-tree and FIS-tree, and a data partition format for gene expression level data. Based on these two new data structures it is possible to mine the association rules efficiently and quickly from the gene expression database. Our algorithm was tested using the two real-life gene expression databases available at Stanford University and Harvard Medical School and was shown to perform better than the two existing algorithms, Apriori and FP-Growth.",
"Association rules discovered through attribute-oriented induction are commonly used in data mining tools to express relationships between variables. However, causal inference algorithms discover more concise relationships between variables, namely, relations of direct cause. These algorithms produce regressive structured equation models for continuous linear data and Bayes networks for discrete data. This work compares the effectiveness of causal inference algorithms with association rule induction for discovering patterns in discrete data.",
"Mining for association rules in market basket data has proved a fruitful area of research. Measures such as conditional probability (confidence) and correlation have been used to infer rules of the form “the existence of item A implies the existence of item B.” However, such rules indicate only a statistical relationship between A and B. They do not specify the nature of the relationship: whether the presence of A causes the presence of B, or the converse, or some other attribute or phenomenon causes both to appear together. In applications, knowing such causal relationships is extremely useful for enhancing understanding and effecting change. While distinguishing causality from correlation is a truly difficult problem, recent work in statistics and Bayesian learning provide some avenues of attack. In these fields, the goal has generally been to learn complete causal models, which are essentially impossible to learn in large-scale data mining applications with a large number of variables. In this paper, we consider the problem of determining casual relationships, instead of mere associations, when mining market basket data. We identify some problems with the direct application of Bayesian learning ideas to mining large databases, concerning both the scalability of algorithms and the appropriateness of the statistical techniques, and introduce some initial ideas for dealing with these problems. We present experimental results from applying our algorithms on several large, real-world data sets. The results indicate that the approach proposed here is both computationally feasible and successful in identifying interesting causal structures. An interesting outcome is that it is perhaps easier to infer the lack of causality than to infer causality, information that is useful in preventing erroneous decision making.",
"Some applications have to present their results in the form of ranked lists. This is the case of many information retrieval applications, in which documents must be sorted according to their relevance to a given query. This has led the interest of the information retrieval community in methods that automatically learn effective ranking functions. In this paper we propose a novel method which uncovers patterns (or rules) in the training data associating features of the document with its relevance to the query, and then uses the discovered rules to rank documents. To address typical problems that are inherent to the utilization of association rules (such as missing rules and rule explosion), the proposed method generates rules on a demand-driven basis, at query-time. The result is an extremely fast and effective ranking method. We conducted a systematic evaluation of the proposed method using the LETOR benchmark collections. We show that generating rules on a demand-driven basis can boost ranking performance, providing gains ranging from 12% to 123%, outperforming the state-of-the-art methods that learn to rank, with no need of time-consuming and laborious pre-processing. As a highlight, we also show that additional information, such as query terms, can make the generated rules more discriminative, further improving ranking performance.",
"Time series are ubiquitous in all domains of human endeavor. They are generated, stored, and manipulated during any kind of activity. The goal of this chapter is to introduce a novel approach to mine multidimensional time-series data for causal relationships. The main feature of the proposed system is supporting discovery of causal relations based on automatically discovered recurring patterns in the input time series. This is achieved by integrating a variety of data mining techniques.",
"",
"“Any claim coming from an observational study is most likely to be wrong.” Startling, but true. Coffee causes pancreatic cancer. Type A personality causes heart attacks. Trans-fat is a killer. Women who eat breakfast cereal give birth to more boys. All these claims come from observational studies; yet when the studies are carefully examined, the claimed links appear to be incorrect. What is going wrong? Some have suggested that the scientific method is failing, that nature itself is playing tricks on us. But it is our way of studying nature that is broken and that urgently needs mending, say S. Stanley Young and Alan Karr; and they propose a strategy to fix it.",
"",
"Web 2.0 portals have made content generation easier than ever with millions of users contributing news stories in form of posts in weblogs or short textual snippets as in Twitter. Efficient and effective filtering solutions are key to allow users stay tuned to this ever-growing ocean of information, releasing only relevant trickles of personal interest. In classical information filtering systems, user interests are formulated using standard IR techniques and data from all available information sources is filtered based on a predefined absolute quality-based threshold. In contrast to this restrictive approach which may still overwhelm the user with the returned stream of data, we envision a system which continuously keeps the user updated with only the top-k relevant new information. Freshness of data is guaranteed by considering it valid for a particular time interval, controlled by a sliding window. Considering relevance as relative to the existing pool of new information creates a highly dynamic setting. We present POL-filter which together with our maintenance module constitute an efficient solution to this kind of problem. We show by comprehensive performance evaluations using real world data, obtained from a weblog crawl, that our approach brings performance gains compared to state-of-the-art.",
"Causal reasoning plays an essential role in both informal and formal human decision-making. Causality itself as well as human understanding of causality is imprecise, sometimes necessarily so. A common sense understanding of the world tells us that we have to deal with imprecision, uncertainty and imperfect knowledge. A difficulty is striking a good balance between precise formalism and commonsense imprecise reality. An algorithmic method of accommodating imprecision in causality is needed. Today, data mining holds the promise of extracting unsuspected information from very large databases. However, the most common data mining rule forms do not express a causal relationship. Without understanding the underlying causality, a naive use of data mining rules can lead to undesirable actions.",
""
]
} |
1508.06721 | 2950733490 | In this paper, we study the problem of distributing a real-time video sequence to a group of partially connected cooperative wireless devices using instantly decodable network coding (IDNC). In such a scenario, the coding conflicts occur to service multiple devices with an immediately decodable packet and the transmission conflicts occur from simultaneous transmissions of multiple devices. To avoid these conflicts, we introduce a novel IDNC graph that represents all feasible coding and transmission conflict-free decisions in one unified framework. Moreover, a real-time video sequence has a hard deadline and unequal importance of video packets. Using these video characteristics and the new IDNC graph, we formulate the problem of minimizing the mean video distortion before the deadline as a finite horizon Markov decision process (MDP) problem. However, the backward induction algorithm that finds the optimal policy of the MDP formulation has high modelling and computational complexities. To reduce these complexities, we further design a two-stage maximal independent set selection algorithm, which can efficiently reduce the mean video distortion before the deadline. Simulation results over a real video sequence show that our proposed IDNC algorithms improve the received video quality compared to the existing IDNC algorithms. | Numerous IDNC schemes have been developed to meet different requirements of video streaming applications @cite_20 @cite_28 @cite_8 @cite_13 @cite_15 @cite_26 @cite_29 . In particular, the authors in @cite_20 @cite_28 considered IDNC for wireless broadcast of a set of packets and serviced the maximum number of devices with any new packet in each time slot. Moreover, the authors in @cite_8 addressed the problem of minimizing the number of time slots required for broadcasting a set of packets in IDNC systems and formulated the problem within a stochastic shortest path (SSP) framework. 
However, the works in @cite_20 @cite_28 @cite_8 considered neither explicit packet delivery deadlines nor the unequal importance of video packets. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"2150167333",
"1975409994",
"2140832808",
"1972328812",
"2140379242",
"2123530434",
"2083123304"
],
"abstract": [
"In this paper, we study video streaming over wireless networks with network coding capabilities. We build upon recent work, which demonstrated that network coding can increase throughput over a broadcast medium, by mixing packets from different flows into a single packet, thus increasing the information content per transmission. Our key insight is that, when the transmitted flows are video streams, network codes should be selected so as to maximize not only the network throughput but also the video quality. We propose video-aware opportunistic network coding schemes that take into account both the decodability of network codes by several receivers and the importance and deadlines of video packets. Simulation results show that our schemes significantly improve both video quality and throughput. This work is a first step towards content-aware network coding.",
"In this paper, we consider the problem of minimizing the completion delay for instantly decodable network coding (IDNC) in wireless multicast and broadcast scenarios. We are interested in this class of network coding due to its numerous benefits, such as low decoding delay, low coding and decoding complexities, and simple receiver requirements. We first extend the IDNC graph, which represents all feasible IDNC coding opportunities, to efficiently operate in both multicast and broadcast scenarios. We then formulate the minimum completion delay problem for IDNC as a stochastic shortest path (SSP) problem. Although finding the optimal policy using SSP is intractable, we use this formulation to draw the theoretical guidelines for the policies that can minimize the completion delay in IDNC. Based on these guidelines, we design a maximum weight clique selection algorithm, which can efficiently reduce the IDNC completion delay in polynomial time. We also design a quadratic-time heuristic clique selection algorithm, which can operate in real-time applications. Simulation results show that our proposed algorithms significantly reduce the IDNC completion delay compared to the random and maximum-rate algorithms, and almost achieve the global optimal completion delay performance over all network codes in broadcast scenarios.",
"We consider the scenario of broadcasting for realtime applications and loss recovery via instantly decodable network coding. Past work focused on minimizing the completion delay, which is not the right objective for real-time applications that have strict deadlines. In this work, we are interested in finding a code that is instantly decodable by the maximum number of users. First, we prove that this problem is NP-Hard in the general case. Then we consider the practical probabilistic scenario, where users have i.i.d. loss probability, and the number of packets is linear or polynomial in the number of users. In this case, we provide a polynomial-time (in the number of users) algorithm that finds the optimal coded packet. Simulation results show that the proposed coding scheme significantly outperforms an optimal repetition code and a COPE-like greedy scheme.",
"This work aims at introducing two novel packet retransmission techniques for reliable multicast in the framework of Instantly Decodable Network Coding (IDNC). These methods are suitable for order- and delay-sensitive applications, where some information is of high importance for an earlier gain at the receiver's side. We introduce hence an Unequal Error Protection (UEP) scheme, showing by simulations that the Quality of Experience (QoE) for the end-users is improved even without complex encoding and decoding.",
"In recent years, Wireless Local Area Networks (WLAN) have become the premier choice for many homes and enterprises. WiMAX (Worldwide Interoperability for Microwave Access) has also emerged as the wireless standard that aims to deliver data over long distances, and can potentially provide wireless broadband access as an alternative to the wired cable and DSL networks. Parallel with the surge of wireless networks is the explosive growth of multimedia applications. Therefore, it is important to explore efficient methods for delivering multimedia data in such wireless settings. In this paper, we propose a network coding based scheduling policy to be used at WLAN-like Access Point (AP) or at a WiMAX-like broadcast station that optimizes the multimedia transmission in both broadcast and unicast settings. In particular, the contributions of this paper include (a) a framework for increasing the bandwidth efficiency of broadcast and unicast sessions in a wireless network based on network coding techniques and (b) an optimized scheduling algorithm based on the Markov Decision Process (MDP) to maximize the quality of multimedia applications. Simulations and theoretical results demonstrate the advantages of our approach over the conventional techniques.",
"Multimedia streaming applications have stringent Quality-of-Service (QoS) requirements. Typically, each packet is associated with a packet delivery deadline. This work models and considers streaming broadcast of stored video over the downlink of a single cell. We first generalize the existing class of immediately-decodable network coding (IDNC) schemes to take into account the deadline constraints. The performance analysis of IDNC schemes are significantly complicated by the packet deadline constraints (from the application layer) and the immediate-decodability requirement (from the network layer). Despite this difficulty, we prove that for independent channels, the IDNC schemes are asymptotically throughput-optimal subject to the deadline constraints when there are no more than three users and when the video file size is sufficiently large. The deadline-constrained throughput gain of IDNC schemes over non-coding scheme is also explicitly quantified. Numerical results show that IDNC schemes strictly outperform the non-coding scheme not only in the asymptotic regime of large files but also for small files. Our results show that the IDNC schemes do not suffer from the substantial decoding delay that is inherent to existing generation-based network coding protocols.",
"Consider a source broadcasting M packets to N receivers over independent erasure channels, where perfect feedback is available from the receivers to the source, and the source is allowed to use coding. We investigate offline and online algorithms that optimize delay, both through theoretical analysis as well as simulation results."
]
} |
1508.06721 | 2950733490 | In this paper, we study the problem of distributing a real-time video sequence to a group of partially connected cooperative wireless devices using instantly decodable network coding (IDNC). In such a scenario, the coding conflicts occur to service multiple devices with an immediately decodable packet and the transmission conflicts occur from simultaneous transmissions of multiple devices. To avoid these conflicts, we introduce a novel IDNC graph that represents all feasible coding and transmission conflict-free decisions in one unified framework. Moreover, a real-time video sequence has a hard deadline and unequal importance of video packets. Using these video characteristics and the new IDNC graph, we formulate the problem of minimizing the mean video distortion before the deadline as a finite horizon Markov decision process (MDP) problem. However, the backward induction algorithm that finds the optimal policy of the MDP formulation has high modelling and computational complexities. To reduce these complexities, we further design a two-stage maximal independent set selection algorithm, which can efficiently reduce the mean video distortion before the deadline. Simulation results over a real video sequence show that our proposed IDNC algorithms improve the received video quality compared to the existing IDNC algorithms. | Several other works including @cite_13 @cite_15 @cite_26 @cite_29 considered video streaming applications with unequally important packets. The work in @cite_13 proposed an IDNC scheme that is asymptotically throughput optimal for the three-device system subject to sequential packet delivery deadline constraints. Moreover, the works in @cite_15 @cite_26 determined the importance of each video packet based on its contribution to the video quality and proposed IDNC schemes to maximize the overall video quality at the devices. 
The aforementioned works @cite_20 @cite_28 @cite_8 @cite_13 @cite_15 @cite_26 @cite_29 developed IDNC schemes for conventional PMP networks, which are fundamentally different from the partially connected D2D networks considered in this paper. | {
"cite_N": [
"@cite_26",
"@cite_8",
"@cite_28",
"@cite_29",
"@cite_15",
"@cite_13",
"@cite_20"
],
"mid": [
"2150167333",
"1975409994",
"2140832808",
"1972328812",
"2140379242",
"2123530434",
"2083123304"
],
"abstract": [
"In this paper, we study video streaming over wireless networks with network coding capabilities. We build upon recent work, which demonstrated that network coding can increase throughput over a broadcast medium, by mixing packets from different flows into a single packet, thus increasing the information content per transmission. Our key insight is that, when the transmitted flows are video streams, network codes should be selected so as to maximize not only the network throughput but also the video quality. We propose video-aware opportunistic network coding schemes that take into account both the decodability of network codes by several receivers and the importance and deadlines of video packets. Simulation results show that our schemes significantly improve both video quality and throughput. This work is a first step towards content-aware network coding.",
"In this paper, we consider the problem of minimizing the completion delay for instantly decodable network coding (IDNC) in wireless multicast and broadcast scenarios. We are interested in this class of network coding due to its numerous benefits, such as low decoding delay, low coding and decoding complexities, and simple receiver requirements. We first extend the IDNC graph, which represents all feasible IDNC coding opportunities, to efficiently operate in both multicast and broadcast scenarios. We then formulate the minimum completion delay problem for IDNC as a stochastic shortest path (SSP) problem. Although finding the optimal policy using SSP is intractable, we use this formulation to draw the theoretical guidelines for the policies that can minimize the completion delay in IDNC. Based on these guidelines, we design a maximum weight clique selection algorithm, which can efficiently reduce the IDNC completion delay in polynomial time. We also design a quadratic-time heuristic clique selection algorithm, which can operate in real-time applications. Simulation results show that our proposed algorithms significantly reduce the IDNC completion delay compared to the random and maximum-rate algorithms, and almost achieve the global optimal completion delay performance over all network codes in broadcast scenarios.",
"We consider the scenario of broadcasting for realtime applications and loss recovery via instantly decodable network coding. Past work focused on minimizing the completion delay, which is not the right objective for real-time applications that have strict deadlines. In this work, we are interested in finding a code that is instantly decodable by the maximum number of users. First, we prove that this problem is NP-Hard in the general case. Then we consider the practical probabilistic scenario, where users have i.i.d. loss probability, and the number of packets is linear or polynomial in the number of users. In this case, we provide a polynomial-time (in the number of users) algorithm that finds the optimal coded packet. Simulation results show that the proposed coding scheme significantly outperforms an optimal repetition code and a COPE-like greedy scheme.",
"This work aims at introducing two novel packet retransmission techniques for reliable multicast in the framework of Instantly Decodable Network Coding (IDNC). These methods are suitable for order- and delay-sensitive applications, where some information is of high importance for an earlier gain at the receiver's side. We introduce hence an Unequal Error Protection (UEP) scheme, showing by simulations that the Quality of Experience (QoE) for the end-users is improved even without complex encoding and decoding.",
"In recent years, Wireless Local Area Networks (WLAN) have become the premier choice for many homes and enterprises. WiMAX (Worldwide Interoperability for Microwave Access) has also emerged as the wireless standard that aims to deliver data over long distances, and can potentially provide wireless broadband access as an alternative to the wired cable and DSL networks. Parallel with the surge of wireless networks is the explosive growth of multimedia applications. Therefore, it is important to explore efficient methods for delivering multimedia data in such wireless settings. In this paper, we propose a network coding based scheduling policy to be used at WLAN-like Access Point (AP) or at a WiMAX-like broadcast station that optimizes the multimedia transmission in both broadcast and unicast settings. In particular, the contributions of this paper include (a) a framework for increasing the bandwidth efficiency of broadcast and unicast sessions in a wireless network based on network coding techniques and (b) an optimized scheduling algorithm based on the Markov Decision Process (MDP) to maximize the quality of multimedia applications. Simulations and theoretical results demonstrate the advantages of our approach over the conventional techniques.",
"Multimedia streaming applications have stringent Quality-of-Service (QoS) requirements. Typically, each packet is associated with a packet delivery deadline. This work models and considers streaming broadcast of stored video over the downlink of a single cell. We first generalize the existing class of immediately-decodable network coding (IDNC) schemes to take into account the deadline constraints. The performance analysis of IDNC schemes are significantly complicated by the packet deadline constraints (from the application layer) and the immediate-decodability requirement (from the network layer). Despite this difficulty, we prove that for independent channels, the IDNC schemes are asymptotically throughput-optimal subject to the deadline constraints when there are no more than three users and when the video file size is sufficiently large. The deadline-constrained throughput gain of IDNC schemes over non-coding scheme is also explicitly quantified. Numerical results show that IDNC schemes strictly outperform the non-coding scheme not only in the asymptotic regime of large files but also for small files. Our results show that the IDNC schemes do not suffer from the substantial decoding delay that is inherent to existing generation-based network coding protocols.",
"Consider a source broadcasting M packets to N receivers over independent erasure channels, where perfect feedback is available from the receivers to the source, and the source is allowed to use coding. We investigate offline and online algorithms that optimize delay, both through theoretical analysis as well as simulation results."
]
} |
1508.06721 | 2950733490 | In this paper, we study the problem of distributing a real-time video sequence to a group of partially connected cooperative wireless devices using instantly decodable network coding (IDNC). In such a scenario, the coding conflicts occur to service multiple devices with an immediately decodable packet and the transmission conflicts occur from simultaneous transmissions of multiple devices. To avoid these conflicts, we introduce a novel IDNC graph that represents all feasible coding and transmission conflict-free decisions in one unified framework. Moreover, a real-time video sequence has a hard deadline and unequal importance of video packets. Using these video characteristics and the new IDNC graph, we formulate the problem of minimizing the mean video distortion before the deadline as a finite horizon Markov decision process (MDP) problem. However, the backward induction algorithm that finds the optimal policy of the MDP formulation has high modelling and computational complexities. To reduce these complexities, we further design a two-stage maximal independent set selection algorithm, which can efficiently reduce the mean video distortion before the deadline. Simulation results over a real video sequence show that our proposed IDNC algorithms improve the received video quality compared to the existing IDNC algorithms. | Network-coded D2D communications have drawn significant attention over the past several years to take advantage of both network coding and devices' cooperation. The works in @cite_21 @cite_14 @cite_4 incorporated algebraic network coding for D2D communications at the packet level. In particular, the authors in @cite_21 provided upper and lower bounds on the number of time slots required for recovering all the missing packets at the devices. Furthermore, the authors in @cite_14 proposed a randomized algorithm that has a high probability of achieving the minimum number of time slots. 
However, the works in @cite_21 @cite_14 @cite_4 neither considered erasure channels nor addressed the hard deadline for high-importance video packets. | {
"cite_N": [
"@cite_14",
"@cite_21",
"@cite_4"
],
"mid": [
"2153984637",
"2033259439",
"2150287160"
],
"abstract": [
"We consider scenarios where wireless clients are missing some packets, but they collectively know every packet. The clients collaborate to exchange missing packets over an error-free broadcast channel with capacity of one packet per channel use. First, we present an algorithm that allows each client to obtain missing packets, with minimum number of transmissions. The algorithm employs random linear coding over a sufficiently large field. Next, we show that the field size can be reduced while maintaining the same number of transmissions. Finally, we establish lower and upper bounds on the minimum number of transmissions that are easily computable and often tight as demonstrated by numerical simulations.",
"We consider the problem of data exchange by a group of closely-located wireless nodes. In this problem each node holds a set of packets and needs to obtain all the packets held by other nodes. Each of the nodes can broadcast the packets in its possession (or a combination thereof) via a noiseless broadcast channel of capacity one packet per channel use. The goal is to minimize the total number of transmissions needed to satisfy the demands of all the nodes, assuming that they can cooperate with each other and are fully aware of the packet sets available to other nodes. This problem arises in several practical settings, such as peer-to-peer systems and wireless data broadcast. In this paper, we establish upper and lower bounds on the optimal number of transmissions and present an efficient algorithm with provable performance guarantees. The effectiveness of our algorithms is established through numerical simulations.",
"In this paper we study the problem of data exchange, where each node in the system has a number of linear combinations of the data packets. Communicating over a public channel, the goal is for all nodes to reconstruct the entire set of the data packets in minimal total number of bits exchanged over the channel. We present a novel divide and conquer based architecture that determines the number of bits each node should transmit. This along with the well known fact, that it is sufficient for the nodes to broadcast linear combinations of their local information, provides a polynomial time deterministic algorithm for reconstructing the entire set of the data packets at all nodes in minimal amount of total communication."
]
} |
1508.06721 | 2950733490 | In this paper, we study the problem of distributing a real-time video sequence to a group of partially connected cooperative wireless devices using instantly decodable network coding (IDNC). In such a scenario, the coding conflicts occur to service multiple devices with an immediately decodable packet and the transmission conflicts occur from simultaneous transmissions of multiple devices. To avoid these conflicts, we introduce a novel IDNC graph that represents all feasible coding and transmission conflict-free decisions in one unified framework. Moreover, a real-time video sequence has a hard deadline and unequal importance of video packets. Using these video characteristics and the new IDNC graph, we formulate the problem of minimizing the mean video distortion before the deadline as a finite horizon Markov decision process (MDP) problem. However, the backward induction algorithm that finds the optimal policy of the MDP formulation has high modelling and computational complexities. To reduce these complexities, we further design a two-stage maximal independent set selection algorithm, which can efficiently reduce the mean video distortion before the deadline. Simulation results over a real video sequence show that our proposed IDNC algorithms improve the received video quality compared to the existing IDNC algorithms. | Several other works including @cite_33 @cite_41 @cite_19 adopted IDNC for D2D communications. In @cite_33 @cite_41 , the authors selected a transmitting device and its XOR packet combination to service a large number of devices with any new packet in each time slot. Moreover, the authors in @cite_19 prioritized packets based on their contributions to the video quality as in @cite_15 @cite_26 and proposed a joint device and packet selection algorithm that maximizes the overall video quality after the current time slot. 
The aforementioned works @cite_21 @cite_14 @cite_4 @cite_33 @cite_41 @cite_19 developed network coding schemes for a fully connected D2D network. Such full connectivity, however, is not always practical due to the limited transmission range of devices. Consequently, in this paper, we consider a partially connected D2D network, which is more general and includes the fully connected D2D network as a special case. Moreover, unlike the single transmitting device in a fully connected D2D network, multiple devices can transmit simultaneously in a partially connected D2D network without causing transmission conflicts. | {
"cite_N": [
"@cite_26",
"@cite_14",
"@cite_33",
"@cite_4",
"@cite_41",
"@cite_21",
"@cite_19",
"@cite_15"
],
"mid": [
"2150167333",
"2153984637",
"2030775967",
"2150287160",
"1963566798",
"2033259439",
"2950479988",
"2140379242"
],
"abstract": [
"In this paper, we study video streaming over wireless networks with network coding capabilities. We build upon recent work, which demonstrated that network coding can increase throughput over a broadcast medium, by mixing packets from different flows into a single packet, thus increasing the information content per transmission. Our key insight is that, when the transmitted flows are video streams, network codes should be selected so as to maximize not only the network throughput but also the video quality. We propose video-aware opportunistic network coding schemes that take into account both the decodability of network codes by several receivers and the importance and deadlines of video packets. Simulation results show that our schemes significantly improve both video quality and throughput. This work is a first step towards content-aware network coding.",
"We consider scenarios where wireless clients are missing some packets, but they collectively know every packet. The clients collaborate to exchange missing packets over an error-free broadcast channel with capacity of one packet per channel use. First, we present an algorithm that allows each client to obtain missing packets, with minimum number of transmissions. The algorithm employs random linear coding over a sufficiently large field. Next, we show that the field size can be reduced while maintaining the same number of transmissions. Finally, we establish lower and upper bounds on the minimum number of transmissions that are easily computable and often tight as demonstrated by numerical simulations.",
"This paper investigates the use of instantly decodable network coding (IDNC) for minimizing the mean decoding delay in multicast cooperative data exchange systems, where the clients cooperate with each other to obtain their missing packets. Here, IDNC is used to reduce the decoding delay of each transmission across all clients. We first introduce a new framework to find the optimum client and coded packet that result in the minimum mean decoding delay. However, since finding the optimum solution of the proposed framework is NP-hard, we further propose a heuristic algorithm that aims to minimize the lower bound on the expected decoding delay in each transmission. The effectiveness of the proposed algorithm is assessed through simulations.",
"In this paper we study the problem of data exchange, where each node in the system has a number of linear combinations of the data packets. Communicating over a public channel, the goal is for all nodes to reconstruct the entire set of the data packets in minimal total number of bits exchanged over the channel. We present a novel divide and conquer based architecture that determines the number of bits each node should transmit. This along with the well known fact, that it is sufficient for the nodes to broadcast linear combinations of their local information, provides a polynomial time deterministic algorithm for reconstructing the entire set of the data packets at all nodes in minimal amount of total communication.",
"",
"We consider the problem of data exchange by a group of closely-located wireless nodes. In this problem each node holds a set of packets and needs to obtain all the packets held by other nodes. Each of the nodes can broadcast the packets in its possession (or a combination thereof) via a noiseless broadcast channel of capacity one packet per channel use. The goal is to minimize the total number of transmissions needed to satisfy the demands of all the nodes, assuming that they can cooperate with each other and are fully aware of the packet sets available to other nodes. This problem arises in several practical settings, such as peer-to-peer systems and wireless data broadcast. In this paper, we establish upper and lower bounds on the optimal number of transmissions and present an efficient algorithm with provable performance guarantees. The effectiveness of our algorithms is established through numerical simulations.",
"Consider a scenario of broadcasting a common content to a group of cooperating wireless nodes that are within proximity of each other. Nodes in this group may receive partial content from the source due to packet losses over wireless broadcast links. We further consider that packet losses are different for different nodes. The remaining missing content at each node can then be recovered, thanks to cooperation among the nodes. In this context, the minimum amount of time that can guarantee a complete acquisition of the common content at every node is referred to as the completion time. It has been shown that instantly decodable network coding (IDNC) reduces the completion time as compared to no network coding in this scenario. Yet, for applications such as video streaming, not all packets have the same importance and not all users are interested in the same quality of content. This problem is even more interesting when additional, but realistic constraints, such as strict deadline, bandwidth, or limited energy are added to the problem formulation. We assert that direct application of IDNC in such a scenario yields poor performance in terms of content quality and completion time. In this paper, we propose a novel Content-Aware IDNC scheme that improves content quality and network coding opportunities jointly by taking into account significance of each packet towards the desired quality of service (QoS). Our proposed Content-Aware IDNC maximizes the quality under the completion time constraint, and minimizes the completion time under the quality constraint.We demonstrate the benefits of Content-Aware IDNC through simulations.",
"In recent years, Wireless Local Area Networks (WLAN) have become the premier choice for many homes and enterprises. WiMAX (Worldwide Interoperability for Microwave Access) has also emerged as the wireless standard that aims to deliver data over long distances, and can potentially provide wireless broadband access as an alternative to the wired cable and DSL networks. Parallel with the surge of wireless networks is the explosive growth of multimedia applications. Therefore, it is important to explore efficient methods for delivering multimedia data in such wireless settings. In this paper, we propose a network coding based scheduling policy to be used at WLAN-like Access Point (AP) or at a WiMAX-like broadcast station that optimizes the multimedia transmission in both broadcast and unicast settings. In particular, the contributions of this paper include (a) a framework for increasing the bandwidth efficiency of broadcast and unicast sessions in a wireless network based on network coding techniques and (b) an optimized scheduling algorithm based on the Markov Decision Process (MDP) to maximize the quality of multimedia applications. Simulations and theoretical results demonstrate the advantages of our approach over the conventional techniques."
]
} |
1508.06721 | 2950733490 | In this paper, we study the problem of distributing a real-time video sequence to a group of partially connected cooperative wireless devices using instantly decodable network coding (IDNC). In such a scenario, the coding conflicts occur to service multiple devices with an immediately decodable packet and the transmission conflicts occur from simultaneous transmissions of multiple devices. To avoid these conflicts, we introduce a novel IDNC graph that represents all feasible coding and transmission conflict-free decisions in one unified framework. Moreover, a real-time video sequence has a hard deadline and unequal importance of video packets. Using these video characteristics and the new IDNC graph, we formulate the problem of minimizing the mean video distortion before the deadline as a finite horizon Markov decision process (MDP) problem. However, the backward induction algorithm that finds the optimal policy of the MDP formulation has high modelling and computational complexities. To reduce these complexities, we further design a two-stage maximal independent set selection algorithm, which can efficiently reduce the mean video distortion before the deadline. Simulation results over a real video sequence show that our proposed IDNC algorithms improve the received video quality compared to the existing IDNC algorithms. | In the context of partially connected networks, the works most closely related to ours are @cite_7 @cite_1 @cite_37 @cite_22 . In particular, the authors in @cite_7 provided various necessary and sufficient conditions that characterize the number of transmissions required to recover all missing packets at all devices. The authors in @cite_1 continued the work in @cite_7 and showed that solving the minimum number of transmissions problem exactly or even approximately is computationally intractable. Moreover, the authors in @cite_7 @cite_1 adopted algebraic network coding in large finite fields. 
Unlike the works in @cite_7 @cite_1 , we consider erasure channels, XOR-based network coding, an explicit packet delivery deadline and unequal importance of video packets. | {
"cite_N": [
"@cite_37",
"@cite_1",
"@cite_22",
"@cite_7"
],
"mid": [
"1594194634",
"2952721294",
"2089001139",
"2024301368"
],
"abstract": [
"This paper considers the problem of reducing the broadcast delay of wireless networks using instantly decodable network coding (IDNC) based device-to-device (D2D) communications. In D2D-enabled networks, devices help hasten the recovery of the lost packets of devices in their transmission range by sending network coded packets. To solve the problem, the different events occurring at each device are identified so as to derive an expression for the probability distribution of the decoding delay. The joint optimization problem over the set of transmitting devices and the packet combinations of each is formulated. Due to the high complexity of finding the optimal solution, this paper focuses on cooperation without interference between the transmitting users. The optimal solution, in such interference-less scenario, is expressed using a graph theory approach by introducing the cooperation graph. Extensive simulations compare the decoding delay experienced in the Point to Multi-Point (PMP), the fully connected D2D (FC-D2D) and the more practical partially connected D2D (PC-D2D) configurations and suggest that the PC-D2D outperforms the FC-D2D in all situations and provides an enormous gain for poorly connected networks.",
"We consider the \"coded cooperative data exchange problem\" for general graphs. In this problem, given a graph G=(V,E) representing clients in a broadcast network, each of which initially hold a (not necessarily disjoint) set of information packets; one wishes to design a communication scheme in which eventually all clients will hold all the packets of the network. Communication is performed in rounds, where in each round a single client broadcasts a single (possibly encoded) information packet to its neighbors in G. The objective is to design a broadcast scheme that satisfies all clients with the minimum number of broadcast rounds. The coded cooperative data exchange problem has seen significant research over the last few years; mostly when the graph G is the complete broadcast graph in which each client is adjacent to all other clients in the network, but also on general topologies, both in the fractional and integral setting. In this work we focus on the integral setting in general undirected topologies G. We tie the data exchange problem on G to certain well studied combinatorial properties of G and in such show that solving the problem exactly or even approximately within a multiplicative factor of |V| is intractable (i.e., NP-Hard). We then turn to study efficient data exchange schemes yielding a number of communication rounds comparable to our intractability result. Our communication schemes do not involve encoding, and in such yield bounds on the \"coding advantage\" in the setting at hand.",
"We consider a group of n wireless clients and a set of k messages. Each client initially holds a subset of messages and is interested in an arbitrary subset of messages. Each client cooperates with other clients to obtain the set of messages it wants by exchanging instantly decodable network coded (IDNC) packets. This problem setting is known as the cooperative index coding problem. Clients are assumed to be connected through an arbitrary topology. In the absence of any known algorithm to complete the exchange of packets for general network topologies, we propose a greedy algorithm to satisfy the demands of all the clients with the aim of reducing the mean completion time. Our algorithm, in a completely distributed fashion, decides which subset of clients should transmit at each round of transmission and which messages should be coded together by each transmitting client to generate an IDNC packet. The algorithm encourages transmissions which are decodable for a larger number of clients and attempts to avoid collisions. We evaluate the performance of our algorithm via numerical experiments.",
"Consider a connected network of n nodes that all wish to recover k desired packets. Each node begins with a subset of the desired packets and exchanges coded packets with its neighbors. This paper provides necessary and sufficient conditions that characterize the set of all transmission strategies that permit every node to ultimately learn (recover) all k packets. When the network satisfies certain regularity conditions and packets are randomly distributed, this paper provides tight concentration results on the number of transmissions required to achieve universal recovery. For the case of a fully connected network, a polynomial-time algorithm for computing an optimal transmission strategy is derived. An application to secrecy generation is discussed."
]
} |
1508.06721 | 2950733490 | In this paper, we study the problem of distributing a real-time video sequence to a group of partially connected cooperative wireless devices using instantly decodable network coding (IDNC). In such a scenario, the coding conflicts occur to service multiple devices with an immediately decodable packet and the transmission conflicts occur from simultaneous transmissions of multiple devices. To avoid these conflicts, we introduce a novel IDNC graph that represents all feasible coding and transmission conflict-free decisions in one unified framework. Moreover, a real-time video sequence has a hard deadline and unequal importance of video packets. Using these video characteristics and the new IDNC graph, we formulate the problem of minimizing the mean video distortion before the deadline as a finite horizon Markov decision process (MDP) problem. However, the backward induction algorithm that finds the optimal policy of the MDP formulation has high modelling and computational complexities. To reduce these complexities, we further design a two-stage maximal independent set selection algorithm, which can efficiently reduce the mean video distortion before the deadline. Simulation results over a real video sequence show that our proposed IDNC algorithms improve the received video quality compared to the existing IDNC algorithms. | The works in @cite_22 @cite_37 adopted IDNC for a partially connected D2D network and addressed the problem of servicing a large number of devices with any new packet in each time slot. However, these works are not readily applicable to real-time video sequences, which have a hard deadline and unequally important video packets. 
In contrast to @cite_22 @cite_37 , we introduce a novel IDNC graph that represents all feasible coding and transmission conflict-free decisions in one unified framework and develop an efficient IDNC framework that prioritizes the distribution of high-importance video packets to all devices before the deadline. | {
"cite_N": [
"@cite_37",
"@cite_22"
],
"mid": [
"1594194634",
"2089001139"
],
"abstract": [
"This paper considers the problem of reducing the broadcast delay of wireless networks using instantly decodable network coding (IDNC) based device-to-device (D2D) communications. In D2D-enabled networks, devices help hasten the recovery of the lost packets of devices in their transmission range by sending network coded packets. To solve the problem, the different events occurring at each device are identified so as to derive an expression for the probability distribution of the decoding delay. The joint optimization problem over the set of transmitting devices and the packet combinations of each is formulated. Due to the high complexity of finding the optimal solution, this paper focuses on cooperation without interference between the transmitting users. The optimal solution, in such interference-less scenario, is expressed using a graph theory approach by introducing the cooperation graph. Extensive simulations compare the decoding delay experienced in the Point to Multi-Point (PMP), the fully connected D2D (FC-D2D) and the more practical partially connected D2D (PC-D2D) configurations and suggest that the PC-D2D outperforms the FC-D2D in all situations and provides an enormous gain for poorly connected networks.",
"We consider a group of n wireless clients and a set of k messages. Each client initially holds a subset of messages and is interested in an arbitrary subset of messages. Each client cooperates with other clients to obtain the set of messages it wants by exchanging instantly decodable network coded (IDNC) packets. This problem setting is known as the cooperative index coding problem. Clients are assumed to be connected through an arbitrary topology. In the absence of any known algorithm to complete the exchange of packets for general network topologies, we propose a greedy algorithm to satisfy the demands of all the clients with the aim of reducing the mean completion time. Our algorithm, in a completely distributed fashion, decides which subset of clients should transmit at each round of transmission and which messages should be coded together by each transmitting client to generate an IDNC packet. The algorithm encourages transmissions which are decodable for a larger number of clients and attempts to avoid collisions. We evaluate the performance of our algorithm via numerical experiments."
]
} |
1508.06600 | 2267078894 | A finite ergodic Markov chain exhibits cutoff if its distance to equilibrium remains close to its initial value over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Originally discovered in the context of card shuffling (Aldous-Diaconis, 1986), this remarkable phenomenon is now rigorously established for many reversible chains. Here we consider the non-reversible case of random walks on sparse directed graphs, for which even the equilibrium measure is far from being understood. We work under the configuration model, allowing both the in-degrees and the out-degrees to be freely specified. We establish the cutoff phenomenon, determine its precise window and prove that the cutoff profile approaches a universal shape. We also provide a detailed description of the equilibrium measure. | Motivated by applications to real-world networks (see, e.g., the survey by Cooper @cite_17 and the references therein), the mixing properties of random walks on large but finite random graphs have recently become the subject of many investigations. The attention has been mostly restricted to the undirected setting, where the walk is reversible with respect to the degree distribution. In particular, Frieze and Cooper have studied the cover time (i.e., the expected time needed for the chain to visit all states) of various random graphs @cite_28 @cite_14 @cite_34 @cite_33 , and analyzed the precise component structures induced by the walk on random regular graphs @cite_15 . Bounds for the mixing time on the largest component of the popular Erdős--Rényi model have also been obtained by various authors, in both the critical and super-critical connectivity regime @cite_18 @cite_1 @cite_5 @cite_13 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_33",
"@cite_28",
"@cite_1",
"@cite_5",
"@cite_15",
"@cite_34",
"@cite_13",
"@cite_17"
],
"mid": [
"2014075779",
"1997989470",
"1984070713",
"1994392346",
"1488446430",
"1973616368",
"2949304946",
"2087887302",
"2087384099",
"1714396858"
],
"abstract": [
"Let C_1 denote the largest connected component of the critical Erdős-Rényi random graph G(n, 1/n). We show that, typically, the diameter of C_1 is of order n^{1/3} and the mixing time of the lazy simple random walk on C_1 is of order n. The latter answers a question of Benjamini, Kozma and Wormald. These results extend to clusters of size n^{2/3} of p-bond percolation on any d-regular n-vertex graph where such clusters exist, provided that p(d-1) ≤ 1 + O(n^{-1/3}).",
"We study the cover time of a random walk on graphs G ∈ Gn,p when @math . We prove that whp, the cover time is asymptotic to @math . ©Wiley Periodicals, Inc. Random Struct. Alg., 2007",
"We study the cover time of a random walk on the largest component of the random graph Gn,p. We determine its value up to a factor 1 + o(1) whenever np = c > 1, c = O(ln n). In particular, we show that the cover time is not monotone for c = Θ(ln n). We also determine the cover time of the k-cores, k ≥ 2. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008",
"Let @math be constant, and let @math denote the set of r-regular graphs with vertex set V = 1,2,...,n . Let G be chosen randomly from @math . We prove that with high probability ( ) the cover time of a random walk on G is asymptotic to @math .",
"We show that the total variation mixing time of the simple random walk on the giant component of supercritical Gn,p and Gn,m is Θ(log^2 n). This statement was proved, independently, by Fountoulakis and Reed. Our proof follows from a structure result for these graphs which is interesting in its own right. We show that these graphs are "decorated expanders" - an expander glued to graphs whose size has constant expectation and exponential tail, and such that each vertex in the expander is glued to no more than a constant number of decorations. © 2014 Wiley Periodicals, Inc. Random Struct. Alg., 45, 383-407, 2014",
"In this article we present a study of the mixing time of a random walk on the largest component of a supercritical random graph, also known as the giant component. We identify local obstructions that slow down the random walk, when the average degree d is at most O( @math ), proving that the mixing time in this case is Θ((ln n/ln d)^2) asymptotically almost surely. As the average degree grows these become negligible and it is the diameter of the largest component that takes over, yielding mixing time Θ(ln n/ln d) a.a.s. We proved these results during the 2003–04 academic year. Similar results but for constant d were later proved independently in [3]. © 2008 Wiley Periodicals, Inc. Random Struct. Alg., 2008 Most of this work was completed while the author was a research fellow at the School of Computer Science, McGill University.",
"Given a discrete random walk on a finite graph @math , the vacant set and vacant net are, respectively, the sets of vertices and edges which remain unvisited by the walk at a given step @math . These sets induce subgraphs of the underlying graph. Let @math be the subgraph of @math induced by the vacant set of the walk at step @math . Similarly, let @math be the subgraph of @math induced by the edges of the vacant net. For random @math -regular graphs @math , it was previously established that for a simple random walk, the graph @math of the vacant set undergoes a phase transition in the sense of the phase transition on Erdős–Rényi graphs @math . Thus, for @math there is an explicit value @math of the walk, such that for @math , @math has a unique giant component, plus components of size @math , whereas for @math all the components of @math are of size @math . We establish the threshold value @math for a phase transition in the graph @math of the vacant net of a simple random walk on a random @math -regular graph. We obtain the corresponding threshold results for the vacant set and vacant net of two modified random walks. These are a non-backtracking random walk, and, for @math even, a random walk which chooses unvisited edges whenever available. This allows a direct comparison of thresholds between simple and modified walks on random @math -regular graphs. The main findings are the following: As @math increases the threshold for the vacant set converges to @math in all three walks. For the vacant net, the threshold converges to @math for both the simple random walk and non-backtracking random walk. When @math is even, the threshold for the vacant net of the unvisited edge process converges to @math , which is also the vertex cover time of the process.",
"The preferential attachment graph Gm(n) is a random graph formed by adding a new vertex at each time step, with m edges which point to vertices selected at random with probability proportional to their degree. Thus at time n there are n vertices and mn edges. This process yields a graph which has been proposed as a simple model of the world wide web [A. Barabási, R. Albert, Emergence of scaling in random networks, Science 286 (1999) 509-512]. In this paper we show that if m>=2 then whp the cover time of a simple random walk on Gm(n) is asymptotic to (2m/(m-1))nlogn.",
"Let @math be the largest component of the Erdős–Rényi random graph @math . The mixing time of random walk on @math in the strictly supercritical regime, p = c/n with fixed c > 1, was shown to have order log2n by Fountoulakis and Reed, and independently by Benjamini, Kozma and Wormald. In the critical window, p = (1 + ε)/n where λ = ε3n is bounded, Nachmias and Peres proved that the mixing time on @math is of order n. However, it was unclear how to interpolate between these results, and estimate the mixing time as the giant component emerges from the critical window. Indeed, even the asymptotics of the diameter of @math in this regime were only recently obtained by Riordan and Wormald, as well as the present authors and Kim. In this paper, we show that for p = (1 + ε)/n with λ = ε3n → ∞ and λ = o(n), the mixing time on @math is with high probability of order (n/λ)log2λ. In addition, we show that this is the order of the largest mixing time over all components, both in the slightly supercritical and in the slightly subcritical regime [i.e., p = (1 − ε)/n with λ as above].",
"The aim of this article is to discuss some applications of random processes in searching and reaching consensus on finite graphs. The topics covered are: Why random walks?, Speeding up random walks, Random and deterministic walks, Interacting particles and voting, Searching changing graphs."
]
} |
1508.06600 | 2267078894 | A finite ergodic Markov chain exhibits cutoff if its distance to equilibrium remains close to its initial value over a certain number of iterations and then abruptly drops to near 0 on a much shorter time scale. Originally discovered in the context of card shuffling (Aldous-Diaconis, 1986), this remarkable phenomenon is now rigorously established for many reversible chains. Here we consider the non-reversible case of random walks on sparse directed graphs, for which even the equilibrium measure is far from being understood. We work under the configuration model, allowing both the in-degrees and the out-degrees to be freely specified. We establish the cutoff phenomenon, determine its precise window and prove that the cutoff profile approaches a universal shape. We also provide a detailed description of the equilibrium measure. | In contrast, very little is known about random walks on random directed graphs. The failure of the crucial property makes many of the ingredients used in the above works unavailable. Even understanding the equilibrium measure constitutes an important theoretical challenge, with applications to link-based ranking in large databases (see, e.g., @cite_7 and the references therein). In @cite_0 , Cooper and Frieze consider the random digraph on @math vertices formed by independently placing an arc between every pair of vertices with probability @math , where @math is fixed while @math . In this regime, they prove that the equilibrium measure is asymptotically close to the in-degree distribution. The recent work @cite_8 by Addario-Berry, Balle and Perarnau provides precise estimates on the extrema of the equilibrium measure in a sparse random digraph where all out-degrees are equal. To the best of our knowledge, the present paper provides the first proof of the cutoff phenomenon in the non-reversible setting of random directed graphs. | {
"cite_N": [
"@cite_0",
"@cite_7",
"@cite_8"
],
"mid": [
"2158978530",
"",
"2146999606"
],
"abstract": [
"We study properties of a simple random walk on the random digraph Dn,p when np=dlogn, d>1. We prove that whp the value πv of the stationary distribution at vertex v is asymptotic to deg^-(v)/m where deg^-(v) is the in-degree of v and m=n(n-1)p is the expected number of edges of Dn,p. If d=d(n)→∞ with n, the stationary distribution is asymptotically uniform whp. Using this result we prove that, for d>1, whp the cover time of Dn,p is asymptotic to dlog(d/(d-1))nlogn. If d=d(n)→∞ with n, then the cover time is asymptotic to nlogn.",
"",
"Let D(n,r) be a random r-out regular directed multigraph on the set of vertices 1,...,n . In this work, we establish that for every r ≥ 2, there exists ηr > 0 such that diam(D(n,r)) = (1 + ηr + o(1))logr n. Our techniques also allow us to bound some extremal quantities related to the stationary distribution of a simple random walk on D(n,r). In particular, we determine the asymptotic behaviour of πmax and πmin, the maximum and the minimum values of the stationary distribution. We show that with"
]
} |
1508.06583 | 2952739619 | Consensus is one of the fundamental tasks studied in distributed computing. Processors have input values from some set @math and they have to decide the same value from this set. If all processors have the same input value, then they must all decide this value. We study the task of consensus in a Multiple Access Channel (MAC) prone to faults, under a very weak communication model called the @math . Communication proceeds in synchronous rounds. Some processors wake up spontaneously, in possibly different rounds decided by an adversary. In each round, an awake processor can either listen, i.e., stay silent, or beep, i.e., emit a signal. In each round, a fault can occur in the channel independently with constant probability @math . In a fault-free round, an awake processor hears a beep if it listens in this round and if one or more other processors beep in this round. A processor still dormant in a fault-free round in which some other processor beeps is woken up by this beep and hears it. In a faulty round nothing is heard, regardless of the behaviour of the processors. An algorithm working with error probability at most @math , for a given @math , is called @math - @math . Our main result is the design and analysis, for any constant @math , of a deterministic @math -safe consensus algorithm that works in time @math in a fault-prone MAC, where @math is the smallest input value of all participating processors. We show that this time cannot be improved, even when the MAC is fault-free. The main algorithmic tool that we develop to achieve our goal, and that might be of independent interest, is a deterministic algorithm that, with arbitrarily small constant error probability, establishes a global clock in a fault-prone MAC in constant time. | The Multiple Access Channel (MAC) is a popular and well-studied medium of communication. 
Most research concerning the MAC has been done under the radio communication model in which processors can send an entire message in a single round, and this message is heard by other processors if exactly one processor transmits, and all others listen in this round. This communication model is incomparable to the beeping model: on the one hand it is much stronger, as large messages (and not only beeps) can be sent in a single round, but on the other hand it is weaker, as it requires a unique transmitter in a round to make the transmission successful, while in the beeping model many beeps may be heard simultaneously. Leader election was studied in a MAC under the radio model, both in the deterministic @cite_3 @cite_12 and in the randomized setting @cite_16 @cite_6 . | {
"cite_N": [
"@cite_16",
"@cite_12",
"@cite_3",
"@cite_6"
],
"mid": [
"2080763277",
"2036925455",
"1965299092",
"2033487809"
],
"abstract": [
"",
"A problem related to the decentralized control of a multiple access channel is considered: Suppose k stations from an ensemble of n simultaneously transmit to a multiple access channel that provides the feedback 0, 1, or 2+, denoting k = 0, k = 1, or k ≥ 2, respectively. If k = 1, then the transmission succeeds. But if k ≥ 2, as a result of the conflict, none of the transmissions succeed. An algorithm to resolve a conflict determines how to schedule retransmissions so that each of the conflicting stations eventually transmits singly to the channel. In this paper, a general model of deterministic algorithms to resolve conflicts is introduced, and it is established that, for all k and n (2 ≤ k ≤ n ), Ω(k(log n)/(log k)) time must elapse in the worst case before all k transmissions succeed.",
"Selective families, a weaker variant of superimposed codes [KS64, F92, I97, CR96], have been recently used to design Deterministic Distributed Broadcast (DDB) protocols for unknown radio networks (a radio network is said to be unknown when the nodes know nothing about the network but their own label) [CGGPR00, CGOR00]. We first provide a general almost tight lower bound on the size of selective families. Then, by reverting the selective families - DDB protocols connection, we exploit our lower bound to construct a family of “hard” radio networks (i.e. directed graphs). These networks yield an Ω(n log D) lower bound on the completion time of DDB protocols that is superlinear (in the size n of the network) even for very small maximum eccentricity D of the network, while all the previous lower bounds (e.g. Ω(D log n) [CGGPR00]) are superlinear only when D is almost linear. On the other hand, the previous upper bounds are all superlinear in n independently of the eccentricity D and the maximum in-degree d of the network. We introduce a broadcast technique that exploits selective families in a new way. Then, by combining selective families of almost optimal size with our new broadcast technique, we obtain an O(Dd log^3 n) upper bound that we prove to be almost optimal when d = O(n/D). This exponentially improves over the best known upper bound [CGR00] when D, d = O(polylogn). Furthermore, by comparing our deterministic upper bound with the best known randomized one [BGI87] we obtain a new, rather surprising insight into the real gap between deterministic and randomized protocols. It turns out that this gap is exponential (as discovered in [BGI87]), but only when the network has large maximum in-degree (i.e. d = Θ(n^α), for some constant α > 0). We then look at the multibroadcast problem on unknown radio networks. A similar connection to that between selective families and (single) broadcast also holds between superimposed codes and multibroadcast. 
We in fact combine a variant of our (single) broadcast technique with superimposed codes of almost optimal size available in literature [EFF85, HS87, I97, CHI99]. This yields a multibroadcast protocol having completion time O(Dd^2 log^3 n). Finally, in order to determine the limits of our multibroadcast technique, we generalize (and improve) the best known lower bound [CR96] on the size of superimposed codes.",
"We propose two selection protols that run on multiple access channels in log-logarithmic expected time, and establish a complementary lower bound showing that the first protocols falls within an additive constant of optimality and that the second differs from optimality by less than any multiplicative factor infinitesimally greater than 1 as the size of the problem approaches infinity. It is difficult to second-guess the fast-changing electronics industry, but our mathematical analysis could be relevant outside the traditional interests of communications protocols to semaphore-like problems."
]
} |
1508.06583 | 2952739619 | Consensus is one of the fundamental tasks studied in distributed computing. Processors have input values from some set @math and they have to decide the same value from this set. If all processors have the same input value, then they must all decide this value. We study the task of consensus in a Multiple Access Channel (MAC) prone to faults, under a very weak communication model called the @math . Communication proceeds in synchronous rounds. Some processors wake up spontaneously, in possibly different rounds decided by an adversary. In each round, an awake processor can either listen, i.e., stay silent, or beep, i.e., emit a signal. In each round, a fault can occur in the channel independently with constant probability @math . In a fault-free round, an awake processor hears a beep if it listens in this round and if one or more other processors beep in this round. A processor still dormant in a fault-free round in which some other processor beeps is woken up by this beep and hears it. In a faulty round nothing is heard, regardless of the behaviour of the processors. An algorithm working with error probability at most @math , for a given @math , is called @math - @math . Our main result is the design and analysis, for any constant @math , of a deterministic @math -safe consensus algorithm that works in time @math in a fault-prone MAC, where @math is the smallest input value of all participating processors. We show that this time cannot be improved, even when the MAC is fault-free. The main algorithmic tool that we develop to achieve our goal, and that might be of independent interest, is a deterministic algorithm that, with arbitrarily small constant error probability, establishes a global clock in a fault-prone MAC in constant time. | The differences between local and global clocks for the wake-up problem were first studied in @cite_7 and then in @cite_14 @cite_0 @cite_18 . 
The communication model used in these papers was that of radio networks in which the main challenge is the occurrence of collisions between simultaneously received messages. A global clock is often used in the study of broadcasting in radio networks (cf. @cite_18 ). | {
"cite_N": [
"@cite_0",
"@cite_18",
"@cite_14",
"@cite_7"
],
"mid": [
"2951682038",
"",
"2116344546",
"2093347633"
],
"abstract": [
"We present the communication model, which assumes nodes have minimal knowledge about their environment and severely limited communication capabilities. Specifically, nodes have no information regarding the local or global structure of the network, don't have access to synchronized clocks and are woken up by an adversary. Moreover, instead on communicating through messages they rely solely on carrier sensing to exchange information. We study the problem of , a variant of vertex coloring specially suited for the studied beeping model. Given a set of resources, the goal of interval coloring is to assign every node a large contiguous fraction of the resources, such that neighboring nodes share no resources. To highlight the importance of the discreteness of the model, we contrast it against a continuous variant described in [17]. We present an O(1 @math (T ) @math T @math O( n) @math (T ) @math O( n) @math ( n)$ on the time required to solve interval coloring for this model against randomized algorithms. This lower bound implies that our algorithm is asymptotically optimal for constant degree graphs.",
"",
"Radio networks model wireless communication when processing units communicate using one wave frequency. This is captured by the property that multiple messages arriving simultaneously to a node interfere with one another and none of them can be read reliably. We present improved solutions to the problem of waking up such a network. This requires activating all nodes in a scenario when some nodes start to be active spontaneously, while every sleeping node needs to be awaken by receiving successfully a message from a neighbor. Our contributions concern the existence and efficient construction of universal radio synchronizers, which are combinatorial structures introduced in [6] as building blocks of efficient wake-up algorithms. First we show by counting that there are (n,g)-universal synchronizers for @math . Next we show an explicit construction of (n,g)-universal-synchronizers for @math . By way of applications, we obtain an existential wake-up algorithm which works in time @math and an explicitly instantiated algorithm that works in time @math , where n is the number of nodes and @math is the maximum in-degree in the network. Algorithms for leader-election and synchronization can be developed on top of wake-up ones, as shown in [7], such that they work in time slower by a factor of @math than the underlying wake-up ones.",
"This paper studies the differences between two levels of synchronization in a distributed broadcast system (or a multiple-access channel). In the globally synchronous model, all processors have access to a global clock. In the locally synchronous model, processors have local clocks ticking at the same rate, but each clock starts individually when the processor wakes up. We consider the fundamental problem of waking up all n processors of a completely connected broadcast system. Some processors wake up spontaneously, while others have to be woken up. Only awake processors can send messages; a sleeping processor is woken up upon hearing a message. The processors hear a message in a given round if and only if exactly one processor sends a message in that round. Our goal is to wake up all processors as fast as possible in the worst case, assuming an adversary controls which processors wake up and when. We analyze the problem in both the globally synchronous and locally synchronous models with or without the assumption that n is known to the processors. We propose randomized and deterministic algorithms for the problem, as well as lower bounds in some of the cases. These bounds establish a gap between the globally synchronous and locally synchronous models."
]
} |
1508.06717 | 2950226878 | Anomaly detection is an important task in many real world applications such as fraud detection, suspicious activity detection, health care monitoring etc. In this paper, we tackle this problem from supervised learning perspective in online learning setting. We maximize well known metric for class-imbalance learning in online learning framework. Specifically, we show that maximizing is equivalent to minimizing a convex surrogate loss function and based on that we propose novel online learning algorithm for anomaly detection. We then show, by extensive experiments, that the performance of the proposed algorithm with respect to @math metric is as good as a recently proposed Cost-Sensitive Online Classification(CSOC) algorithm for class-imbalance learning over various benchmarked data sets while keeping running time close to the perception algorithm. Our another conclusion is that other competitive online algorithms do not perform consistently over data sets of varying size. This shows the potential applicability of our proposed approach. | Work presented in this paper spans two main themes in data mining and machine learning: Online learning and class-imbalance learning. Although there have been many works in both domain separately @cite_14 @cite_16 , little work has been done that jointly solves online learning and class-imbalance learning. Below we briefly describe work in each domain that closely matches our work. | {
"cite_N": [
"@cite_14",
"@cite_16"
],
"mid": [
"2148143831",
"1563938718"
],
"abstract": [
"An approach to the construction of classifiers from imbalanced datasets is described. A dataset is imbalanced if the classification categories are not approximately equally represented. Often real-world data sets are predominately composed of \"normal\" examples with only a small percentage of \"abnormal\" or \"interesting\" examples. It is also the case that the cost of misclassifying an abnormal (interesting) example as a normal example is often much higher than the cost of the reverse error. Under-sampling of the majority (normal) class has been proposed as a good means of increasing the sensitivity of a classifier to the minority class. This paper shows that a combination of our method of over-sampling the minority (abnormal) class and under-sampling the majority (normal) class can achieve better classifier performance (in ROC space) than only under-sampling the majority class. This paper also shows that a combination of our method of over-sampling the minority class and under-sampling the majority class can achieve better classifier performance (in ROC space) than varying the loss ratios in Ripper or class priors in Naive Bayes. Our method of over-sampling the minority class involves creating synthetic minority class examples. Experiments are performed using C4.5, Ripper and a Naive Bayes classifier. The method is evaluated using the area under the Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.",
"Many real world data mining applications involve learning from imbalanced data sets. Learning from data sets that contain very few instances of the minority (or interesting) class usually produces biased classifiers that have a higher predictive accuracy over the majority class(es), but poorer predictive accuracy over the minority class. SMOTE (Synthetic Minority Over-sampling TEchnique) is specifically designed for learning from imbalanced data sets. This paper presents a novel approach for learning from imbalanced data sets, based on a combination of the SMOTE algorithm and the boosting procedure. Unlike standard boosting where all misclassified examples are given equal weights, SMOTEBoost creates synthetic examples from the rare or minority class, thus indirectly changing the updating weights and compensating for skewed distributions. SMOTEBoost applied to several highly and moderately imbalanced data sets shows improvement in prediction performance on the minority class and overall improved F-values."
]
} |
1508.06708 | 2949812103 | This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration. | Traditional pictorial structure models usually apply linear filters on hand-crafted features, e.g., HoG and SIFT, to calculate the probability of the presence of body parts or adjacent body-joint pairs. As shown in @cite_33 , the quality of the features is critical to the performance, and, while successful for other tasks, these hand-crafted features may not necessarily be optimal for pose estimation. Alternatively, with sufficient data, it is possible to learn the features directly from training data. In recent years, deep neural networks, especially convolutional neural networks (CNN), have been shown to be effective in learning rich features @cite_17 @cite_30 .
Jain et al. @cite_15 train a CNN as a sliding-window detector for each body part, and the resulting body-joint detection maps are smoothed using a learned pairwise relationship between joints. Tompson et al. @cite_11 extend @cite_15 by feeding the body-joint detection maps into a modified convolutional layer that performs pairwise smoothing, allowing feature extraction and pairwise relationships to be jointly optimized. Chen et al. @cite_28 use a deep CNN to predict the presence of joints and the pairwise relationships between joints, and the CNN output is then used as the input into a pictorial structure model for 2D pose estimation. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_28",
"@cite_17",
"@cite_15",
"@cite_11"
],
"mid": [
"",
"2009647132",
"2155394491",
"2953391683",
"2952504680",
"2952422028"
],
"abstract": [
"",
"",
"We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.",
"Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the network which was trained to perform object classification on ILSVRC13. We use features extracted from the network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or @math distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.",
"This paper introduces a new architecture for human pose estimation using a multi-layer convolutional network architecture and a modified learning technique that learns low-level features and higher-level weak spatial models. Unconstrained human pose estimation is one of the hardest problems in computer vision, and our new architecture and learning schema shows significant improvement over the current state-of-the-art results. The main contribution of this paper is showing, for the first time, that a specific variation of deep learning is able to outperform all existing traditional architectures on this task. The paper also discusses several lessons learned while researching alternatives, most notably, that it is possible to learn strong low-level feature detectors on features that might even just cover a few pixels in the image. Higher-level spatial models improve somewhat the overall result, but to a much lesser extent than expected. Many researchers previously argued that the kinematic structure and top-down information is crucial for this domain, but with our purely bottom up, and weak spatial model, we could improve other more complicated architectures that currently produce the best results. This mirrors what many other researchers, like those in the speech recognition, object recognition, and other domains have experienced.",
"This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques."
]
} |
1508.06708 | 2949812103 | This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration. | The advantage of these approaches is that the features extracted by deep networks usually lead to better performance. However, the detection-based methods for 2D pose estimation are not directly applicable to 3D pose estimation due to the need to discretize a large pose space -- the number of joint positions grows cubically with the resolution of the discretization, making inference computationally expensive @cite_13 . In addition, it is difficult to predict 3D coordinates from only a local window around a joint, without any other contextual information. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2171125807"
],
"abstract": [
"We consider the problem of automatically estimating the 3D pose of humans from images, taken from multiple calibrated views. We show that it is possible and tractable to extend the pictorial structures framework, popular for 2D pose estimation, to 3D. We discuss how to use this framework to impose view, skeleton, joint angle and intersection constraints in 3D. The 3D pictorial structures are evaluated on multiple view data from a professional football game. The evaluation is focused on computational tractability, but we also demonstrate how a simple 2D part detector can be plugged into the framework."
]
} |
1508.06708 | 2949812103 | This paper focuses on structured-output learning using deep neural networks for 3D human pose estimation from monocular images. Our network takes an image and 3D pose as inputs and outputs a score value, which is high when the image-pose pair matches and low otherwise. The network structure consists of a convolutional neural network for image feature extraction, followed by two sub-networks for transforming the image features and pose into a joint embedding. The score function is then the dot-product between the image and pose embeddings. The image-pose embedding and score function are jointly trained using a maximum-margin cost function. Our proposed framework can be interpreted as a special form of structured support vector machines where the joint feature space is discriminatively learned using deep neural networks. We test our framework on the Human3.6m dataset and obtain state-of-the-art results compared to other recent methods. Finally, we present visualizations of the image-pose embedding space, demonstrating the network has learned a high-level embedding of body-orientation and pose-configuration. | In contrast to detection-based methods, regression-based methods aim to directly predict the coordinates of the body-joints in the image. Toshev et al. @cite_0 train a cascade CNN to predict the 2D coordinates of joints in the image, where the CNN inputs are the image patches centered at the coordinates predicted from the previous stage. Li et al. @cite_22 use a multi-task framework to train a CNN to directly predict a 2D human pose, where auxiliary tasks consisting of body-part detection guide the feature learning. This work was later extended for 3D pose estimation from single 2D images @cite_27. | {
"cite_N": [
"@cite_0",
"@cite_27",
"@cite_22"
],
"mid": [
"2113325037",
"2293220651",
"2052678124"
],
"abstract": [
"We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.",
"In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations.",
"We propose a heterogeneous multi-task learning framework for human pose estimation from monocular images using a deep convolutional neural network. In particular, we simultaneously learn a human pose regressor and sliding-window body-part and joint-point detectors in a deep network architecture. We show that including the detection tasks helps to regularize the network, directing it to converge to a good solution. We report competitive and state-of-art results on several datasets. We also empirically show that the learned neurons in the middle layer of our network are tuned to localized body parts."
]
} |