Columns: aid (string, length 9–15) · mid (string, length 7–10) · abstract (string, length 78–2.56k) · related_work (string, length 92–1.77k) · ref_abstract (dict)
1410.7074
2255503823
Methods for automated collection and annotation are changing the cost-structures of sampling surveys for a wide range of applications. Digital samples in the form of images or audio recordings can be collected rapidly, and annotated by computer programs or crowd workers. We consider the problem of estimating a population mean under these new cost-structures, and propose a Hybrid-Offset sampling design. This design utilizes two annotators: a primary, which is accurate but costly (e.g. a human expert) and an auxiliary which is noisy but cheap (e.g. a computer program), in order to minimize total sampling expenditures. Our analysis gives necessary conditions for the Hybrid-Offset design and specifies optimal sample sizes for both annotators. Simulations on data from a coral reef survey program indicate that the Hybrid-Offset design outperforms several alternative sampling designs. In particular, sampling expenditures are reduced by 50% compared to the Conventional design currently deployed by the coral ecologists.
Our work is also related to active learning and transfer learning. It is related, in particular, to recent work on active transfer learning where labels are queried to optimize classifier performance in a target domain @cite_7. A key difference between that work and ours is that active learning methods optimize the labeling effort to create the best classifier (which can then, presumably, be used to label more data and estimate the desired data-products). In contrast, we directly optimize the labeling effort to derive the desired data-product (i.e. the population mean).
{ "cite_N": [ "@cite_7" ], "mid": [ "2149464712" ], "abstract": [ "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source task) but only very limited training data for a second task (the target task) that is similar but not identical to the first. These algorithms use varying assumptions about the similarity between the tasks to carry information from the source to the target task. Common assumptions are that only certain specific marginal or conditional distributions have changed while all else remains the same. Alternatively, if one has only the target task, but also has the ability to choose a limited amount of additional training data to collect, then active learning algorithms are used to make choices which will most improve performance on the target task. These algorithms may be combined into active transfer learning, but previous efforts have had to apply the two methods in sequence or use restrictive transfer assumptions. We propose two transfer learning algorithms that allow changes in all marginal and conditional distributions but assume the changes are smooth in order to achieve transfer between the tasks. We then propose an active learning algorithm for the second method that yields a combined active transfer learning algorithm. We demonstrate the algorithms on synthetic functions and a real-world task on estimating the yield of vineyards from images of the grapes." ] }
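The Hybrid-Offset row above describes combining a large sample from a cheap, noisy annotator with a small sample from an accurate, costly one. As a rough sketch of the underlying idea (a standard difference estimator; the paper's exact design and optimal sample sizes are not reproduced here), the offset between primary and auxiliary labels can be estimated on a small doubly-annotated subsample and used to debias the cheap estimate:

```python
import random

def hybrid_offset_estimate(aux_labels, pairs):
    """Estimate a population mean from cheap auxiliary labels plus an
    offset correction measured on a small doubly-annotated subsample.

    aux_labels : auxiliary (noisy, cheap) annotations on a large sample
    pairs      : (primary, auxiliary) label pairs on a small subsample
    """
    aux_mean = sum(aux_labels) / len(aux_labels)
    # Average primary-minus-auxiliary disagreement on the subsample
    offset = sum(p - a for p, a in pairs) / len(pairs)
    return aux_mean + offset

# Toy population with true mean near 0.30; the auxiliary annotator
# systematically over-counts absent items by 0.10.
random.seed(0)
truth = [1 if random.random() < 0.30 else 0 for _ in range(5000)]
aux = [min(1.0, t + 0.10) for t in truth]
est = hybrid_offset_estimate(aux, list(zip(truth[:200], aux[:200])))
```

Here 200 expensive annotations correct the bias of 5,000 cheap ones; the raw auxiliary mean is off by about 0.07, while the offset-corrected estimate lands within a point or two of the true mean.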
1410.7172
1531831816
Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stationarity in the objective function. Consequently, heteroscedasticity negatively affects performance of traditional Bayesian methods. In this paper, we propose a novel prior model with hierarchical parameter learning that tackles the problem of non-stationarity in Bayesian optimisation. Our results demonstrate substantial improvements in a wide range of applications, including automatic machine learning and mining exploration.
Several approaches have been proposed to manage heteroscedasticity with Gaussian processes. @cite_20 attempted to project inputs into a latent space that is stationary. This approach was later extended by @cite_9. A latent space representation in higher dimensions was also proposed by @cite_3. Others such as @cite_12 and @cite_11 have tried to model heteroscedasticity directly with the choice of covariance function. In 2005, Gramacy proposed a treed GP model to attack non-stationarity. While this work and the work of @cite_2 are the closest to ours, both were developed for modelling functions rather than for global optimisation under a limited number of observations.
{ "cite_N": [ "@cite_9", "@cite_3", "@cite_2", "@cite_20", "@cite_12", "@cite_11" ], "mid": [ "2040850257", "2004777611", "", "2050497240", "", "1746819321" ], "abstract": [ "In geostatistics it is common practice to assume that the underlying spatial process is stationary and isotropic, i.e. the spatial distribution is unchanged when the origin of the index set is translated and under rotation about the origin. However, in environmental problems, such assumptions are not realistic since local influences in the correlation structure of the spatial process may be found in the data. The paper proposes a Bayesian model to address the anisotropy problem. Following Sampson and Guttorp, we define the correlation function of the spatial process by reference to a latent space, denoted by \"D\", where stationarity and isotropy hold. The space where the gauged monitoring sites lie is denoted by \"G\". We adopt a Bayesian approach in which the mapping between \"G\" and \"D\" is represented by an unknown function d (·). A Gaussian process prior distribution is defined for d (·). Unlike the Sampson-Guttorp approach, the mapping of both gauged and ungauged sites is handled in a single framework, and predictive inferences take explicit account of uncertainty in the mapping. Markov chain Monte Carlo methods are used to obtain samples from the posterior distributions. Two examples are discussed: a simulated data set and the solar radiation data set that also was analysed by Sampson and Guttorp. Copyright 2003 Royal Statistical Society.", "In this article, we propose a novel approach to modeling nonstationary spatial fields. The proposed method works by expanding the geographic plane over which these processes evolve into higher-dimensional spaces, transforming and clarifying complex patterns in the physical plane. 
By combining aspects of multidimensional scaling, group lasso, and latent variable models, a dimensionally sparse projection is found in which the originally nonstationary field exhibits stationarity. Following a comparison with existing methods in a simulated environment, dimension expansion is studied on a classic test-bed dataset historically used to study nonstationary models. Following this, we explore the use of dimension expansion in modeling air pollution in the United Kingdom, a process known to be strongly influenced by rural urban effects, amongst others, which gives rise to a nonstationary field.", "", "Abstract Estimation of the covariance structure of spatial processes is a fundamental prerequisite for problems of spatial interpolation and the design of monitoring networks. We introduce a nonparametric approach to global estimation of the spatial covariance structure of a random function Z(x, t) observed repeatedly at times ti (i = 1, …, T) at a finite number of sampling stations xi (i = 1, 2, …, N) in the plane. Our analyses assume temporal stationarity but do not assume spatial stationarity (or isotropy). We analyze the spatial dispersions var(Z(xi, t) − Z(xj, t)) as a natural metric for the spatial covariance structure and model these as a general smooth function of the geographic coordinates of station pairs (xi, xj ). The model is constructed in two steps. First, using nonmetric multidimensional scaling (MDS) we compute a two-dimensional representation of the sampling stations for which a monotone function of interpoint distances δij approximates the spatial dispersions. MDS transforms the problem...", "", "Gaussian processes (GPs) provide a principled, practical, probabilistic approach to learning in kernel machines. GPs have received increased attention in the machine-learning community over the past decade, and this book provides a long-needed systematic and unified treatment of theoretical and practical aspects of GPs in machine learning. 
The treatment is comprehensive and self-contained, targeted at researchers and students in machine learning and applied statistics.The book deals with the supervised-learning problem for both regression and classification, and includes detailed algorithms. A wide variety of covariance (kernel) functions are presented and their properties discussed. Model selection is discussed both from a Bayesian and a classical perspective. Many connections to other well-known techniques from machine learning and statistics are discussed, including support-vector machines, neural networks, splines, regularization networks, relevance vector machines and others. Theoretical issues including learning curves and the PAC-Bayesian framework are treated, and several approximation methods for learning with large datasets are discussed. The book contains illustrative examples and exercises, and code and datasets are available on the Web. Appendixes provide mathematical background and a discussion of Gaussian Markov processes." ] }
1410.7172
1531831816
Optimising black-box functions is important in many disciplines, such as tuning machine learning models, robotics, finance and mining exploration. Bayesian optimisation is a state-of-the-art technique for the global optimisation of black-box functions which are expensive to evaluate. At the core of this approach is a Gaussian process prior that captures our belief about the distribution over functions. However, in many cases a single Gaussian process is not flexible enough to capture non-stationarity in the objective function. Consequently, heteroscedasticity negatively affects performance of traditional Bayesian methods. In this paper, we propose a novel prior model with hierarchical parameter learning that tackles the problem of non-stationarity in Bayesian optimisation. Our results demonstrate substantial improvements in a wide range of applications, including automatic machine learning and mining exploration.
Warping is another popular approach for dealing with non-stationarity. Recently, @cite_0 proposed an input warping technique, using a parameterised Beta cumulative distribution function (CDF) as the warping function. The goal of input warping is to transform non-stationary functions to stationary ones by applying a Beta CDF mapping @math to each dimension @math. The new covariance becomes @math. We have found that input warping can lead to remarkable improvements in automatic algorithm configuration. However, the Beta CDF transformation has limitations, which we address in this paper by using a treed approach.
{ "cite_N": [ "@cite_0" ], "mid": [ "1533803232" ], "abstract": [ "Bayesian optimization has proven to be a highly effective methodology for the global optimization of unknown, expensive and multimodal functions. The ability to accurately model distributions over functions is critical to the effectiveness of Bayesian optimization. Although Gaussian processes provide a flexible prior over functions, there are various classes of functions that remain difficult to model. One of the most frequently occurring of these is the class of non-stationary functions. The optimization of the hyperparameters of machine learning algorithms is a problem domain in which parameters are often manually transformed a priori, for example by optimizing in \"log-space,\" to mitigate the effects of spatially-varying length scale. We develop a methodology for automatically learning a wide family of bijective transformations or warpings of the input space using the Beta cumulative distribution function. We further extend the warping framework to multi-task Bayesian optimization so that multiple tasks can be warped into a jointly stationary space. On a set of challenging benchmark optimization tasks, we observe that the inclusion of warping greatly improves on the state-of-the-art, producing better results faster and more reliably." ] }
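The input-warping idea in the row above, applying a monotone CDF to each input dimension and evaluating a stationary kernel on the warped points, can be sketched as follows. For a self-contained example this uses the closed-form Kumaraswamy CDF as a stand-in for the parameterised Beta CDF (the two families behave similarly as warps); the RBF kernel and parameter values are illustrative, not those of @cite_0:

```python
import math

def kumaraswamy_cdf(x, a, b):
    """Closed-form CDF on [0, 1]; stands in for the Beta CDF warp."""
    return 1.0 - (1.0 - x ** a) ** b

def warped_rbf(x1, x2, params, length=0.2):
    """Stationary RBF kernel evaluated on warped inputs: k(w(x), w(x')).
    params holds one (a, b) warp pair per input dimension."""
    d2 = sum((kumaraswamy_cdf(u, a, b) - kumaraswamy_cdf(v, a, b)) ** 2
             for (u, v), (a, b) in zip(zip(x1, x2), params))
    return math.exp(-0.5 * d2 / length ** 2)

# Two 1-D points close in the raw input space but pushed apart by a
# convex warp (a > 1, b = 1 stretches the right end of [0, 1]).
k_plain = warped_rbf([0.8], [0.9], [(1.0, 1.0)])  # identity warp
k_warp = warped_rbf([0.8], [0.9], [(4.0, 1.0)])
```

Under the identity warp the points are highly correlated; after warping their correlation drops, which is how a stationary kernel can mimic a spatially varying length scale.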
1410.6629
1929261594
One of the ways in which attackers try to steal sensitive information from corporations is by sending spearphishing emails. These emails typically appear to be sent by one of the victim's coworkers, but have instead been crafted by an attacker. A particularly insidious type of spearphishing email is one that not only claims to come from a trusted party, but was actually sent from that party's legitimate email account, which had been compromised in the first place. In this paper, we propose a radical change of focus in the techniques used for detecting such malicious emails: instead of looking for particular features that are indicative of attack emails, we look for possible indicators of impersonation of the legitimate owners. We present IdentityMailer, a system that validates the authorship of emails by learning the typical email-sending behavior of users over time, and comparing any subsequent email sent from their accounts against this model. Our experiments on real-world email datasets demonstrate that our system can effectively block advanced email attacks sent from genuine email accounts, which traditional protection systems are unable to detect. Moreover, we show that it is resilient to an attacker willing to evade the system. To the best of our knowledge, IdentityMailer is the first system able to identify spearphishing emails that are sent from within an organization by a skilled attacker who has access to a compromised email account.
Content-analysis techniques look at the words in the message itself to determine whether it is spam. Proposed methods include Naïve Bayes, Support Vector Machines, and other machine learning algorithms @cite_20 @cite_39 @cite_23 @cite_17. Other systems detect spam by looking at malicious URLs in the email @cite_44 @cite_3. Content-analysis techniques work well in detecting spam; however, they are too computationally intensive to be applied to every email that a busy mail server receives @cite_35. In IdentityMailer, we solve this problem by analyzing emails as they get sent. We claim that this analysis is feasible because the number of emails that a mail server sends is lower than the number that it receives. Another problem of traditional content-analysis techniques is that they look for words that are indicative of spam. In the presence of a targeted attack, there might be no such words, since an attack email will use language that is similar to the one used in everyday business emails. This is why, in IdentityMailer, we learn the typical sending behavior of a user and match it against the emails she sends.
{ "cite_N": [ "@cite_35", "@cite_3", "@cite_39", "@cite_44", "@cite_23", "@cite_20", "@cite_17" ], "mid": [ "6401868", "1481472066", "1543155826", "", "1648885110", "2169384781", "2041597619" ], "abstract": [ "PCT No. PCT EP97 00301 Sec. 371 Date Aug. 20, 1997 Sec. 102(e) Date Aug. 20, 1997 PCT Filed Jan. 23, 1997 PCT Pub. No. WO97 27421 PCT Pub. Date Jul. 31, 1997An operating room light with ceiling suspension is provided with a shaft protruding from the ceiling and containing internal electrical conductors, the shaft being enveloped in its end region by a support body provided on its outer circumference with slip rings for establishing electrical connections between the shaft and the conductors contained in a swivel arm; the swivel arm is provided with a swivel arm head as part of a rotary joint, which envelops the support body and which contains on its barrel-shaped inside slip ring pickups for making contact with the slip rings of the support body; viewed in axial direction, the support body is provided at both of its ends with bearing bushes functioning as sliding bearings, the upper bearing bush supporting an axial bearing ring that bears thereon to hold the swivel arm head; the support body together with the bearing bushes is secured against radial and axial shifting by means of threaded pins passed through openings in the pivot as well as by a locking ring held by means of annular slot in the end region of the shaft. By virtue of conical transition regions, it is possible to slip the swivel arm head with its slip ring pickups onto the support body in simple manner, thus providing advantages in particular for assembly and maintenance and for cases in which swivel arms are mounted one above the other.", "In this paper we present Botlab, a platform that continually monitors and analyzes the behavior of spam-oriented botnets. Botlab gathers multiple real-time streams of information about botnets taken from distinct perspectives. 
By combining and analyzing these streams, Botlab can produce accurate, timely, and comprehensive data about spam botnet behavior. Our prototype system integrates information about spam arriving at the University of Washington, outgoing spam generated by captive botnet nodes, and information gleaned from DNS about URLs found within these spam messages. We describe the design and implementation of Botlab, including the challenges we had to overcome, such as preventing captive nodes from causing harm or thwarting virtual machine detection. Next, we present the results of a detailed measurement study of the behavior of the most active spam botnets. We find that six botnets are responsible for 79% of spam messages arriving at the UW campus. Finally, we present defensive tools that take advantage of the Botlab platform to improve spam filtering and protect users from harmful web sites advertised within botnet-generated spam.", "", "", "In addressing the growing problem of junk E-mail on the Internet, we examine methods for the automated construction of filters to eliminate such unwanted messages from a user’s mail stream. By casting this problem in a decision theoretic framework, we are able to make use of probabilistic learning methods in conjunction with a notion of differential misclassification cost to produce filters which are especially appropriate for the nuances of this task. While this may appear, at first, to be a straight-forward text classification problem, we show that by considering domain-specific features of this problem in addition to the raw text of E-mail messages, we can produce much more accurate filters. Finally, we show the efficacy of such filters in a real world usage scenario, arguing that this technology is mature enough for deployment.", "We study the use of support vector machines (SVM) in classifying e-mail as spam or nonspam by comparing it to three other classification algorithms: Ripper, Rocchio, and boosting decision trees. 
These four algorithms were tested on two different data sets: one data set where the number of features were constrained to the 1000 best features and another data set where the dimensionality was over 7000. SVM performed best when using binary features. For both data sets, boosting trees and SVM had acceptable test performance in terms of accuracy and speed. However, SVM had significantly less training time.", "Spam is a key problem in electronic communication, including large-scale email systems and the growing number of blogs. Content-based filtering is one reliable method of combating this threat in its various forms, but some academic researchers and industrial practitioners disagree on how best to filter spam. The former have advocated the use of Support Vector Machines (SVMs) for content-based filtering, as this machine learning methodology gives state-of-the-art performance for text classification. However, similar performance gains have yet to be demonstrated for online spam filtering. Additionally, practitioners cite the high cost of SVMs as reason to prefer faster (if less statistically robust) Bayesian methods. In this paper, we offer a resolution to this controversy. First, we show that online SVMs indeed give state-of-the-art classification performance on online spam filtering on large benchmark data sets. Second, we show that nearly equivalent performance may be achieved by a Relaxed Online SVM (ROSVM) at greatly reduced computational cost. Our results are experimentally verified on email spam, blog spam, and splog detection tasks." ] }
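Content-analysis filtering of the kind cited in this row (Naïve Bayes over a bag of words) can be sketched in a few lines. This toy model with add-one smoothing is illustrative, not the implementation of any cited system:

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a bag-of-words Naive Bayes spam filter with add-one smoothing.
    docs: list of (tokens, label) pairs, label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for tokens, label in docs:
        counts[label].update(tokens)
        totals[label] += 1
    vocab = set(counts["spam"]) | set(counts["ham"])
    return counts, totals, vocab

def classify(model, tokens):
    """Return the label with the highest (smoothed) log posterior."""
    counts, totals, vocab = model
    n = sum(totals.values())
    best, best_lp = None, float("-inf")
    for label in ("spam", "ham"):
        lp = math.log(totals[label] / n)  # class prior
        denom = sum(counts[label].values()) + len(vocab)
        for t in tokens:
            lp += math.log((counts[label][t] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train = [(["win", "cash", "now"], "spam"),
         (["cheap", "pills", "now"], "spam"),
         (["meeting", "agenda", "today"], "ham"),
         (["project", "report", "today"], "ham")]
model = train_nb(train)
label = classify(model, ["win", "pills"])  # -> "spam"
```

The per-message cost is one log-probability lookup per token per class, which illustrates the computational-intensity point in the related work: even this cheap model must touch every token of every message, and production filters do far more.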
1410.6629
1929261594
One of the ways in which attackers try to steal sensitive information from corporations is by sending spearphishing emails. These emails typically appear to be sent by one of the victim's coworkers, but have instead been crafted by an attacker. A particularly insidious type of spearphishing email is one that not only claims to come from a trusted party, but was actually sent from that party's legitimate email account, which had been compromised in the first place. In this paper, we propose a radical change of focus in the techniques used for detecting such malicious emails: instead of looking for particular features that are indicative of attack emails, we look for possible indicators of impersonation of the legitimate owners. We present IdentityMailer, a system that validates the authorship of emails by learning the typical email-sending behavior of users over time, and comparing any subsequent email sent from their accounts against this model. Our experiments on real-world email datasets demonstrate that our system can effectively block advanced email attacks sent from genuine email accounts, which traditional protection systems are unable to detect. Moreover, we show that it is resilient to an attacker willing to evade the system. To the best of our knowledge, IdentityMailer is the first system able to identify spearphishing emails that are sent from within an organization by a skilled attacker who has access to a compromised email account.
A number of systems have been proposed to counter specific types of spam, such as phishing. Such systems either look at features in the attack emails that are indicative of phishing content @cite_10, or at characteristics of the web page that the links in the email point to @cite_15. IdentityMailer is more general, since it can detect any type of attack email sent from compromised accounts. In addition, existing phishing-detection techniques fail to detect emails that rely on advanced social engineering tactics instead of redirecting the user to a phony login page.
{ "cite_N": [ "@cite_15", "@cite_10" ], "mid": [ "2139565456", "2134750673" ], "abstract": [ "Phishing is a significant problem involving fraudulent email and web sites that trick unsuspecting users into revealing private information. In this paper, we present the design, implementation, and evaluation of CANTINA, a novel, content-based approach to detecting phishing web sites, based on the TF-IDF information retrieval algorithm. We also discuss the design and evaluation of several heuristics we developed to reduce false positives. Our experiments show that CANTINA is good at detecting phishing sites, correctly labeling approximately 95% of phishing sites.", "Each month, more attacks are launched with the aim of making web users believe that they are communicating with a trusted entity for the purpose of stealing account information, logon credentials, and identity information in general. This attack method, commonly known as \"phishing,\" is most commonly initiated by sending out emails with links to spoofed websites that harvest information. We present a method for detecting these attacks, which in its most general form is an application of machine learning on a feature set designed to highlight user-targeted deception in electronic communication. This method is applicable, with slight modification, to detection of phishing websites, or the emails used to direct victims to these sites. We evaluate this method on a set of approximately 860 such phishing emails, and 6950 non-phishing emails, and correctly identify over 96% of the phishing emails while only mis-classifying on the order of 0.1% of the legitimate emails. We conclude with thoughts on the future for such techniques to specifically identify deception, specifically with respect to the evolutionary nature of the attacks and information available." ] }
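Feature-based phishing detection of the sort this row describes scores a message on indicators such as embedded URLs and urgency cues. The extractor below is a minimal, hypothetical sketch; its three features are illustrative and are neither CANTINA's TF-IDF features nor the cited machine-learning feature set:

```python
import re

def phishing_features(email_text):
    """Extract a few illustrative phishing indicators from an email body.
    Feature names and thresholds are hypothetical, not from any cited system."""
    urls = re.findall(r'https?://[^\s">]+', email_text)
    lowered = email_text.lower()
    return {
        # Count of URLs embedded in the message body
        "n_urls": len(urls),
        # Raw-IP URLs are a classic indicator of a spoofed site
        "has_ip_url": any(re.match(r'https?://\d{1,3}(\.\d{1,3}){3}', u)
                          for u in urls),
        # Urgency/credential language often accompanies phishing
        "urgent_words": sum(w in lowered
                            for w in ("verify", "suspended", "urgent")),
    }

f = phishing_features(
    "URGENT: your account is suspended. Verify at http://192.168.0.1/login")
```

Such feature vectors would then feed a classifier; the related-work paragraph's point is that purely feature-based detectors miss attacks that use plain social engineering and contain no link at all.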
1410.6629
1929261594
One of the ways in which attackers try to steal sensitive information from corporations is by sending spearphishing emails. These emails typically appear to be sent by one of the victim's coworkers, but have instead been crafted by an attacker. A particularly insidious type of spearphishing email is one that not only claims to come from a trusted party, but was actually sent from that party's legitimate email account, which had been compromised in the first place. In this paper, we propose a radical change of focus in the techniques used for detecting such malicious emails: instead of looking for particular features that are indicative of attack emails, we look for possible indicators of impersonation of the legitimate owners. We present IdentityMailer, a system that validates the authorship of emails by learning the typical email-sending behavior of users over time, and comparing any subsequent email sent from their accounts against this model. Our experiments on real-world email datasets demonstrate that our system can effectively block advanced email attacks sent from genuine email accounts, which traditional protection systems are unable to detect. Moreover, we show that it is resilient to an attacker willing to evade the system. To the best of our knowledge, IdentityMailer is the first system able to identify spearphishing emails that are sent from within an organization by a skilled attacker who has access to a compromised email account.
Another category of spam detection techniques looks at the way in which spammers use the TCP or SMTP protocols @cite_40 @cite_33. These techniques work well in practice against most spam, but they are focused on detecting hosts that belong to a botnet, and are therefore useless in detecting the type of attacks that IdentityMailer is designed to prevent.
{ "cite_N": [ "@cite_40", "@cite_33" ], "mid": [ "1537207920", "9904157" ], "abstract": [ "Botnets are a significant source of abusive messaging (spam, phishing, etc) and other types of malicious traffic. A promising approach to help mitigate botnet-generated traffic is signal analysis of transport-layer (i.e. TCP IP) characteristics, e.g. timing, packet reordering, congestion, and flow-control. Prior work [4] shows that machine learning analysis of such traffic features on an SMTP MTA can accurately differentiate between botnet and legitimate sources. We make two contributions toward the real-world deployment of such techniques: i) an architecture for real-time on-line operation; and ii) auto-learning of the unsupervised model across different environments without human labeling (i.e. training). We present a \"SpamFlow\" SpamAssassin plugin and the requisite auxiliary daemons to integrate transport-layer signal analysis with a popular open-source spam filter. Using our system, we detail results from a production deployment where our auto-learning technique achieves better than 95 percent accuracy, precision, and recall after reception of ≈ 1,000 emails.", "Traditional spam detection systems either rely on content analysis to detect spam emails, or attempt to detect spammers before they send a message, (i.e., they rely on the origin of the message). In this paper, we introduce a third approach: we present a system for filtering spam that takes into account how messages are sent by spammers. More precisely, we focus on the email delivery mechanism, and analyze the communication at the SMTP protocol level. We introduce two complementary techniques as concrete instances of our new approach. First, we leverage the insight that different mail clients (and bots) implement the SMTP protocol in slightly different ways. We automatically learn these SMTP dialects and use them to detect bots during an SMTP transaction. 
Empirical results demonstrate that this technique is successful in identifying (and rejecting) bots that attempt to send emails. Second, we observe that spammers also take into account server feedback (for example to detect and remove non-existent recipients from email address lists). We can take advantage of this observation by returning fake information, thereby poisoning the server feedback on which the spammers rely. The results of our experiments show that by sending misleading information to a spammer, it is possible to prevent recipients from receiving subsequent spam emails from that same spammer." ] }
1410.6629
1929261594
One of the ways in which attackers try to steal sensitive information from corporations is by sending spearphishing emails. These emails typically appear to be sent by one of the victim's coworkers, but have instead been crafted by an attacker. A particularly insidious type of spearphishing email is one that not only claims to come from a trusted party, but was actually sent from that party's legitimate email account, which had been compromised in the first place. In this paper, we propose a radical change of focus in the techniques used for detecting such malicious emails: instead of looking for particular features that are indicative of attack emails, we look for possible indicators of impersonation of the legitimate owners. We present IdentityMailer, a system that validates the authorship of emails by learning the typical email-sending behavior of users over time, and comparing any subsequent email sent from their accounts against this model. Our experiments on real-world email datasets demonstrate that our system can effectively block advanced email attacks sent from genuine email accounts, which traditional protection systems are unable to detect. Moreover, we show that it is resilient to an attacker willing to evade the system. To the best of our knowledge, IdentityMailer is the first system able to identify spearphishing emails that are sent from within an organization by a skilled attacker who has access to a compromised email account.
A large corpus of research has been performed on determining the authorship of written text. These techniques typically leverage stylometry and machine learning, and return the most probable author among a set of candidates @cite_1 @cite_8 @cite_5 @cite_19 @cite_36. From our point of view, these approaches suffer from two major problems: the first is that they typically need a set of possible authors, which in our case we do not have. The second is that email bodies are often too short to reliably determine the author by looking at stylometry alone @cite_24. @cite_7 proposed a system that looks at the writing style of an email and can tell whether that email was written by a given author or not. This approach may solve the first problem, but not the second, in which we have emails that are too short to make a meaningful decision. To mitigate this problem, in IdentityMailer we leverage many features other than stylometry, such as the times at which a user sends emails, or her social network.
{ "cite_N": [ "@cite_7", "@cite_8", "@cite_36", "@cite_1", "@cite_24", "@cite_19", "@cite_5" ], "mid": [ "2234522956", "51398944", "1529790664", "2117013562", "2044794736", "2081912971", "2123407642" ], "abstract": [ "We examine a related, but distinct, problem to spam detection. Instead of trying to decide if email is spam or ham, we try to determine if email purporting to be from a known correspondent actually comes from that person – this may be seen as a way to address a class of targeted email attacks. We propose two methods, geolocation and stylometry analysis. The efficacy of geolocation was evaluated using over 73,000 emails collected from real users; stylometry, for comparison with related work from the area of computer forensics, was evaluated using selections from the Enron corpus. Both methods show promise for addressing the problem, and are complementary to existing anti-spam techniques. Neither requires global changes to email infrastructure, and both are done on the email client side, a practical means to empower end users with respect to security. Furthermore, both methods are lightweight in the sense that they leverage existing information and software in new ways, instead of needing massive deployments of untried applications.", "E-mail has become the most popular Internet application and with its rise in use has come an inevitable increase in the use of e-mail for criminal purposes. It is possible for an e-mail message to be sent anonymously or through spoofed servers. Computer forensics analysts need a tool that can be used to identify the author of such e-mail messages. This thesis describes the development of such a tool using techniques from the fields of stylometry and machine learning. An author's style can be reduced to a pattern by making measurements of various stylometric features from the text. E-mail messages also contain macro-structural features that can be measured. 
These features together can be used with the Support Vector Machine learning algorithm to classify or attribute authorship of e-mail messages to an author providing a suitable sample of messages is available for comparison. In an investigation, the set of authors may need to be reduced from an initial large list of possible suspects. This research has trialled authorship characterisation based on sociolinguistic cohorts, such as gender and language background, as a technique for profiling the anonymous message so that the suspect list can be reduced.", "Source code author identification deals with identifying the most likely author of a computer program, given a set of predefined author candidates. There are several scenarios where digital evidence of this kind plays a role in investigation and adjudication, such as code authorship disputes, intellectual property infringement, tracing the source of code left in the system after a cyber attack, and so forth. As in any identification task, the disputed program is compared to undisputed, known programming samples by the predefined author candidates. We present a new approach, called the SCAP (Source Code Author Profiles) approach, based on byte-level n-gram profiles representing the source code author’s style. The SCAP method extends a method originally applied to natural language text authorship attribution; we show that an n-gram approach also suits the characteristics of source code analysis. The methodological extension includes a simplified profile and a less complicated, but more effective, similarity measure. Experiments on data sets of different programming-language (Java or C++) and commented commentless code demonstrate the effectiveness of these extensions. The SCAP approach is programming-language independent. Moreover, the SCAP approach deals surprisingly well with cases where only a limited amount of very short programs per programmer is available for training. 
Finally, it is also demonstrated that SCAP effectiveness persists even in the absence of comments in the source code, a condition usually met in cyber-crime cases. 1. The Forensic Significance of Source Code Nowadays, in a wide variety of legal cases it is important to identify the author of a usually limited piece of programming code. Such situations include cyber attacks in the form of viruses, Trojan horses, logic bombs, fraud, and credit card cloning, code authorship disputes, and intellectual property infringement. Identifying the authorship of malicious or stolen source code in a reliable way has become a primary goal for digital investigators (Spafford and Weeber 1993). Please see Appendix 1 for a legal analysis of the forensic significance of source code.", "The identification of the authorship of e-mail messages is of increasing importance due to an increase in the use of e-mail for criminal purposes. An author’s unique writing style can be reduced to a pattern by making measurements of various stylometric features from the written text. This paper reports on work to optimize and extend an existing C# based stylometry system that identifies the author of an arbitrary e-mail by using fifty-five writing style features. The program has been extended to provide feature vector data in a format appropriate for distribution to other project teams for subsequent data mining and classification experiments.", "Stylometrists have proposed and used a wide variety of textual features or markers, but until recently very little attention has been focused on the question: where do textual features come from ? In many text-categorization tasks the choice of textual features is a crucial determinant of success, yet is typically left to the intuition of the analyst. We argue that it would be desirable, at least in some cases, if this part of the process were less dependent on subjective judgement. 
Accordingly, this paper compares five different methods of textual feature finding that do not need background knowledge external to the texts being analysed (three proposed by previous stylometers, two devised for this study). As these methods do not rely on parsing or semantic analysis, they are not tied to the English language only. Results of a benchmark test on ten representative text-classification problems suggest that the technique here designated Monte-Carlo Feature-Finding has certain advantages that deserve consideration by future workers in this area", "Online reputation systems are intended to facilitate the propagation of word of mouth as a credibility scoring mechanism for improved trust in electronic marketplaces. However, they experience two problems attributable to anonymity abuse-easy identity changes and reputation manipulation. In this study, we propose the use of stylometric analysis to help identify online traders based on the writing style traces inherent in their posted feedback comments. We incorporated a rich stylistic feature set and developed the Writeprint technique for detection of anonymous trader identities. The technique and extended feature set were evaluated on a test bed encompassing thousands of feedback comments posted by 200 eBay traders. Experiments conducted to assess the scalability (number of traders) and robustness (against intentional obfuscation) of the proposed approach found it to significantly outperform benchmark stylometric techniques. The results indicate that the proposed method may help militate against easy identity changes and reputation manipulation in electronic markets.", "There is an alarming increase in the number of cybercrime incidents through anonymous e-mails. The problem of e-mail authorship attribution is to identify the most plausible author of an anonymous e-mail from a group of potential suspects. 
Most previous contributions employed a traditional classification approach, such as decision tree and Support Vector Machine (SVM), to identify the author and studied the effects of different writing style features on the classification accuracy. However, little attention has been given on ensuring the quality of the evidence. In this paper, we introduce an innovative data mining method to capture the write-print of every suspect and model it as combinations of features that occurred frequently in the suspect's e-mails. This notion is called frequent pattern, which has proven to be effective in many data mining applications, but it is the first time to be applied to the problem of authorship attribution. Unlike the traditional approach, the extracted write-print by our method is unique among the suspects and, therefore, provides convincing and credible evidence for presenting it in a court of law. Experiments on real-life e-mails suggest that the proposed method can effectively identify the author and the results are supported by a strong evidence." ] }
1410.6629
1929261594
One of the ways in which attackers try to steal sensitive information from corporations is by sending spearphishing emails. This type of email typically appears to be sent by one of the victim's coworkers, but has instead been crafted by an attacker. A particularly insidious type of spearphishing email is one that not only claims to come from a trusted party, but was actually sent from that party's legitimate email account, which was compromised in the first place. In this paper, we propose a radical change of focus in the techniques used for detecting such malicious emails: instead of looking for particular features that are indicative of attack emails, we look for possible indicators of impersonation of the legitimate owners. We present IdentityMailer, a system that validates the authorship of emails by learning the typical email-sending behavior of users over time, and comparing any subsequent email sent from their accounts against this model. Our experiments on real-world email datasets demonstrate that our system can effectively block advanced email attacks sent from genuine email accounts, which traditional protection systems are unable to detect. Moreover, we show that it is resilient to an attacker willing to evade the system. To the best of our knowledge, IdentityMailer is the first system able to identify spearphishing emails sent from within an organization by a skilled attacker with access to a compromised email account.
presented ASCAI @cite_42 , a system that detects whether the author of an email has likely been forged. ASCAI looks at the most common n-grams in a user's emails, and flags as anomalous any email containing words that the user rarely uses. Unlike , ASCAI considers any word, instead of focusing on writeprint features (such as function words). For this reason, this system would fail to detect spearphishing emails whose content covers the same topics that the user typically discusses, but that were authored by a different person. , on the other hand, has been designed to detect this type of stealthy spearphishing email, and is therefore effective in blocking it.
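ASCAI's actual profiles are SCAP byte-level n-gram profiles; a much-simplified sketch of the underlying idea (flag emails dominated by words the profiled user rarely uses) might look like the following, where the tokenization, `min_count`, and `threshold` values are illustrative assumptions:

```python
from collections import Counter

def build_profile(past_emails):
    """Word-frequency profile built from a user's sent history."""
    counts = Counter()
    for body in past_emails:
        counts.update(body.lower().split())
    return counts

def anomaly_score(profile, email_body, min_count=2):
    """Fraction of words in the new email that the profiled user
    has used fewer than `min_count` times before."""
    words = email_body.lower().split()
    if not words:
        return 0.0
    rare = sum(1 for w in words if profile[w] < min_count)
    return rare / len(words)

def is_forged(profile, email_body, threshold=0.5):
    """Flag the email as likely forged if most of its vocabulary
    is outside the user's usual word choices."""
    return anomaly_score(profile, email_body) > threshold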
{ "cite_N": [ "@cite_42" ], "mid": [ "1533307491" ], "abstract": [ "Phishing is a semantic attack that takes advantage of the naivety of the human behind electronic systems (e.g. e-banking). Educating end-users can minimize the impact of phishing attacks, however it remains relatively expensive and time consuming. Thus, many software-based solutions, such as classifiers, are being proposed by researchers. However, no software solutions have been proposed to minimize the impact of spear phishing attacks, which are the targeted form of phishing, and have a higher success rate than generic bulk phishing attacks. In this paper, we describe a novel framework to mitigate spear phishing attacks via the use of document authorship techniques — Anti-Spear phishing Content-based Authorship Identification (ASCAI). ASCAI informs the user of possible mismatches between the writing styles of a received email body and of trusted authors by studying the email body itself (i.e. the writeprint), as opposed to traditional user ID-based authentication techniques which can be spoofed or abused. As a proof of concept, we implemented the proposed framework using Source Code Author Profiles (SCAP), and the evaluation results are presented." ] }
1410.6629
1929261594
One of the ways in which attackers try to steal sensitive information from corporations is by sending spearphishing emails. This type of email typically appears to be sent by one of the victim's coworkers, but has instead been crafted by an attacker. A particularly insidious type of spearphishing email is one that not only claims to come from a trusted party, but was actually sent from that party's legitimate email account, which was compromised in the first place. In this paper, we propose a radical change of focus in the techniques used for detecting such malicious emails: instead of looking for particular features that are indicative of attack emails, we look for possible indicators of impersonation of the legitimate owners. We present IdentityMailer, a system that validates the authorship of emails by learning the typical email-sending behavior of users over time, and comparing any subsequent email sent from their accounts against this model. Our experiments on real-world email datasets demonstrate that our system can effectively block advanced email attacks sent from genuine email accounts, which traditional protection systems are unable to detect. Moreover, we show that it is resilient to an attacker willing to evade the system. To the best of our knowledge, IdentityMailer is the first system able to identify spearphishing emails sent from within an organization by a skilled attacker with access to a compromised email account.
presented the Email Mining Toolkit (EMT) @cite_37 @cite_30 . This tool mines email logs to find communities of users who frequently interact with each other. After learning the communities, the system flags as anomalous any email addressed to people outside them. Although EMT leverages an idea similar to 's interaction features, it is tailored to detecting large-scale threats, such as worms spreading through email. The fact that leverages other types of features allows our system to detect more subtle, one-of-a-kind attack emails.
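EMT's clique analysis is richer than this, but the core anomaly check (flagging recipients who fall outside a sender's learned communities) can be sketched as below. This simplification treats each sender's whole historical contact set as a single community, which real clique mining would split further:

```python
def learn_contacts(send_log):
    """Map each sender to the set of recipients observed in their
    email history; `send_log` is a list of (sender, recipients) pairs."""
    contacts = {}
    for sender, recipients in send_log:
        contacts.setdefault(sender, set()).update(recipients)
    return contacts

def flag_email(contacts, sender, recipients):
    """Return the recipients outside the sender's learned contact set;
    a non-empty result marks the email as anomalous."""
    known = contacts.get(sender, set())
    return [r for r in recipients if r not in known]
```

A worm blasting email to many unusual recipients trips this check immediately, whereas a single targeted email to a known contact does not, which is why richer features are needed for one-of-a-kind attacks.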
{ "cite_N": [ "@cite_30", "@cite_37" ], "mid": [ "2169300738", "1497249636" ], "abstract": [ "The Email Mining Toolkit (EMT) is a data mining system that computes behavior profiles or models of user email accounts. These models may be used for a multitude of tasks including forensic analyses and detection tasks of value to law enforcement and intelligence agencies, as well for as other typical tasks such as virus and spam detection. To demonstrate the power of the methods, we focus on the application of these models to detect the early onset of a viral propagation without “content-base ” (or signature-based) analysis in common use in virus scanners. We present several experiments using real email from 15 users with injected simulated viral emails and describe how the combination of different behavior models improves overall detection rates. The performance results vary depending upon parameter settings, approaching 99 p true positive (TP) (percentage of viral emails caught) in general cases and with 0.38 p false positive (FP) (percentage of emails with attachments that are mislabeled as viral). The models used for this study are based upon volume and velocity statistics of a user's email rate and an analysis of the user's (social) cliques revealed in the person's email behavior. We show by way of simulation that virus propagations are detectable since viruses may emit emails at rates different than human behavior suggests is normal, and email is directed to groups of recipients in ways that violate the users' typical communications with their social groups.", "This paper describes the forensic and intelligence analysis capabilities of the Email Mining Toolkit (EMT) under development at the Columbia Intrusion Detection (IDS) Lab. EMT provides the means of loading, parsing and analyzing email logs, including content, in a wide range of formats. 
Many tools and techniques have been available from the fields of Information Retrieval (IR) and Natural Language Processing (NLP) for analyzing documents of various sorts, including emails. EMT, however, extends these kinds of analyses with an entirely new set of analyses that model \"user behavior\". EMT thus models the behavior of individual user email accounts, or groups of accounts, including the \"social cliques\" revealed by a user's email behavior." ] }
1410.6629
1929261594
One of the ways in which attackers try to steal sensitive information from corporations is by sending spearphishing emails. This type of email typically appears to be sent by one of the victim's coworkers, but has instead been crafted by an attacker. A particularly insidious type of spearphishing email is one that not only claims to come from a trusted party, but was actually sent from that party's legitimate email account, which was compromised in the first place. In this paper, we propose a radical change of focus in the techniques used for detecting such malicious emails: instead of looking for particular features that are indicative of attack emails, we look for possible indicators of impersonation of the legitimate owners. We present IdentityMailer, a system that validates the authorship of emails by learning the typical email-sending behavior of users over time, and comparing any subsequent email sent from their accounts against this model. Our experiments on real-world email datasets demonstrate that our system can effectively block advanced email attacks sent from genuine email accounts, which traditional protection systems are unable to detect. Moreover, we show that it is resilient to an attacker willing to evade the system. To the best of our knowledge, IdentityMailer is the first system able to identify spearphishing emails sent from within an organization by a skilled attacker with access to a compromised email account.
proposed a system that learns the behavior of users on Online Social Networks (OSNs) and flags anomalous messages as possible account compromises @cite_16 . Because of its high number of false positives, their system can only detect large-scale campaigns, by aggregating similar anomalous messages. As we have shown, is able to detect attacks that consist of a single, previously unseen email.
{ "cite_N": [ "@cite_16" ], "mid": [ "2397135192" ], "abstract": [ "As social networking sites have risen in popularity, cyber-criminals started to exploit these sites to spread malware and to carry out scams. Previous work has extensively studied the use of fake (Sybil) accounts that attackers set up to distribute spam messages (mostly messages that contain links to scam pages or drive-by download sites). Fake accounts typically exhibit highly anomalous behavior, and hence, are relatively easy to detect. As a response, attackers have started to compromise and abuse legitimate accounts. Compromising legitimate accounts is very effective, as attackers can leverage the trust relationships that the account owners have established in the past. Moreover, compromised accounts are more difficult to clean up because a social network provider cannot simply delete the correspond-" ] }
1410.6447
1509350468
Region search is widely used for object localisation in computer vision. After projecting the score of an image classifier into an image plane, region search aims to find regions that precisely localise desired objects. The recently proposed region search methods, such as efficient subwindow search and efficient region search, usually find regions with maximal score. For some classifiers and scenarios, the projected scores are nearly all positive or very noisy, then maximising the score of a region results in localising nearly the entire images as objects, or causes localisation results unstable. In this study, the authors observe that the projected scores with large magnitudes are mainly concentrated on or around objects. On the basis of this observation, they propose a region search method for object localisation, named level set maximum-weight connected subgraph (LS-MWCS). It localises objects by searching regions by graph mode-seeking rather than the maximal score. The score density by localised region can be controlled by a parameter flexibly. They also prove an interesting property of the proposed LS-MWCS, which guarantees that the region with desired density can be found. Moreover, the LS-MWCS can be efficiently solved by the belief propagation scheme. The effectiveness of the author's method is validated on the problem of weakly-supervised object localisation. Quantitative results on synthetic and real data demonstrate the superiorities of their method compared to other state-of-the-art methods.
Region search is a key technique in weakly supervised localization (WSL). WSL is usually modeled as multiple instance learning (MIL). In the MIL setting, each image is modeled as a bag of regions, and each region is an instance. In the binary case, a negative bag contains only negative instances, while a positive bag contains at least one positive instance. The goal of MIL is to label the positive instances in the positive bags. Region search thus corresponds to finding the region (instance) in a positive image (bag) that triggers the positive label. In the past few years, many MIL algorithms have been successfully applied to weakly supervised learning, such as MILBoost @cite_21 and MI-SVM @cite_1 . In @cite_19 , a region weighting method is proposed for WSL, customized for bag-of-words feature representations and non-linear SVM classifiers. Region search is also closely related to common-pattern discovery from images that share content, such as co-segmentation and image feature matching @cite_18 @cite_8 @cite_15 .
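As a toy illustration of the MIL setting described above (not MILBoost or MI-SVM themselves), the following perceptron-style learner alternates between selecting a "witness" instance in each positive bag and updating a linear scorer against all negative instances; this is the same alternation that MI-SVM performs with an SVM, here reduced to a minimal sketch with placeholder learning-rate and epoch settings:

```python
def dot(w, x):
    """Linear instance score under weight vector w."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_mil_perceptron(pos_bags, neg_bags, dim, epochs=20, lr=0.1):
    """MI-SVM-style alternation with a perceptron update: pick the
    top-scoring instance of each positive bag as its witness (the
    region assumed to trigger the positive label), then push all
    instances of negative bags toward negative scores."""
    w = [0.0] * dim
    for _ in range(epochs):
        for bag in pos_bags:
            witness = max(bag, key=lambda x: dot(w, x))
            if dot(w, witness) <= 0:           # witness should score positive
                w = [wi + lr * xi for wi, xi in zip(w, witness)]
        for bag in neg_bags:
            for x in bag:                      # every negative instance
                if dot(w, x) >= 0:             # should score negative
                    w = [wi - lr * xi for wi, xi in zip(w, x)]
    return w
```

After training, region search amounts to returning the argmax-scoring instance of a positive bag, which is precisely the step the maximal-score methods criticized above perform.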
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_21", "@cite_1", "@cite_19", "@cite_15" ], "mid": [ "2124404372", "2071730188", "1570918423", "2112343299", "1614115966", "2040908311" ], "abstract": [ "Abstract The wide-baseline stereo problem, i.e. the problem of establishing correspondences between a pair of images taken from different viewpoints is studied. A new set of image elements that are put into correspondence, the so called extremal regions , is introduced. Extremal regions possess highly desirable properties: the set is closed under (1) continuous (and thus projective) transformation of image coordinates and (2) monotonic transformation of image intensities. An efficient (near linear complexity) and practically fast detection algorithm (near frame rate) is presented for an affinely invariant stable subset of extremal regions, the maximally stable extremal regions (MSER). A new robust similarity measure for establishing tentative correspondences is proposed. The robustness ensures that invariants from multiple measurement regions (regions obtained by invariant constructions from extremal regions), some that are significantly larger (and hence discriminative) than the MSERs, may be used to establish tentative correspondences. The high utility of MSERs, multiple measurement regions and the robust metric is demonstrated in wide-baseline experiments on image pairs from both indoor and outdoor scenes. Significant change of scale (3.5×), illumination conditions, out-of-plane rotation, occlusion, locally anisotropic scale change and 3D translation of the viewpoint are all present in the test problems. Good estimates of epipolar geometry (average distance from corresponding points to the epipolar line below 0.09 of the inter-pixel distance) are obtained.", "In this paper, we propose an efficient algorithm, called vector field consensus, for establishing robust point correspondences between two sets of points. 
Our algorithm starts by creating a set of putative correspondences which can contain a very large number of false correspondences, or outliers, in addition to a limited number of true correspondences (inliers). Next, we solve for correspondence by interpolating a vector field between the two point sets, which involves estimating a consensus of inlier points whose matching follows a nonparametric geometrical constraint. We formulate this a maximum a posteriori (MAP) estimation of a Bayesian model with hidden latent variables indicating whether matches in the putative set are outliers or inliers. We impose nonparametric geometrical constraints on the correspondence, as a prior distribution, using Tikhonov regularizers in a reproducing kernel Hilbert space. MAP estimation is performed by the EM algorithm which by also estimating the variance of the prior model (initialized to a large value) is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We illustrate this method on data sets in 2D and 3D and demonstrate that it is robust to a very large number of outliers (even up to 90 ). We also show that in the special case where there is an underlying parametric geometrical model (e.g., the epipolar line constraint) that we obtain better results than standard alternatives like RANSAC if a large number of outliers are present. This suggests a two-stage strategy, where we use our nonparametric model to reduce the size of the putative set and then apply a parametric variant of our approach to estimate the geometric parameters. Our algorithm is computationally efficient and we provide code for others to use it. In addition, our approach is general and can be applied to other problems, such as learning with a badly corrupted training data set.", "Multiple Instance Learning (MIL) provides a framework for training a discriminative classifier from data with ambiguous labels. 
This framework is well suited for the task of learning object classifiers from weakly labeled image data, where only the presence of an object in an image is known, but not its location. Some recent work has explored the application of MIL algorithms to the tasks of image categorization and natural scene classification. In this paper we extend these ideas in a framework that uses MIL to recognize and localizeobjects in images. To achieve this we employ state of the art image descriptors and multiple stable segmentations. These components, combined with a powerful MIL algorithm, form our object recognition system called MILSS. We show highly competitive object categorization results on the Caltech dataset. To evaluate the performance of our algorithm further, we introduce the challenging Landmarks-18 dataset, a collection of photographs of famous landmarks from around the world. The results on this new dataset show the great potential of our proposed algorithm.", "Visual categorization problems, such as object classification or action recognition, are increasingly often approached using a detection strategy: a classifier function is first applied to candidate subwindows of the image or the video, and then the maximum classifier score is used for class decision. Traditionally, the subwindow classifiers are trained on a large collection of examples manually annotated with masks or bounding boxes. The reliance on time-consuming human labeling effectively limits the application of these methods to problems involving very few categories. Furthermore, the human selection of the masks introduces arbitrary biases (e.g. in terms of window size and location) which may be suboptimal for classification. In this paper we propose a novel method for learning a discriminative subwindow classifier from examples annotated with binary labels indicating the presence of an object or action of interest, but not its location. 
During training, our approach simultaneously localizes the instances of the positive class and learns a subwindow SVM to recognize them. We extend our method to classification of time series by presenting an algorithm that localizes the most discriminative set of temporal segments in the signal. We evaluate our approach on several datasets for object and action recognition and show that it achieves results similar and in many cases superior to those obtained with full supervision.", "Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoWs) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ 2 and intersection kernel; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. 
Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.", "This paper proposes an efficient mixture model for establishing robust point correspondences between two sets of points under multi-layer motion. Our algorithm starts by creating a set of putative correspondences which can contain a number of false correspondences, or outliers, in addition to the true correspondences (inliers). Next we solve for correspondence by interpolating a set of spatial transformations on the putative correspondence set based on a mixture model, which involves estimating a consensus of inlier points whose matching follows a non-parametric geometrical constraint. We formulate this as a maximum a posteriori (MAP) estimation of a Bayesian model with hidden latent variables indicating whether matches in the putative set are outliers or inliers. We impose non-parametric geometrical constraints on the correspondence, as a prior distribution, in a reproducing kernel Hilbert space (RKHS). MAP estimation is performed by the EM algorithm which by also estimating the variance of the prior model (initialized to a large value) is able to obtain good estimates very quickly (e.g., avoiding many of the local minima inherent in this formulation). We further provide a fast implementation based on sparse approximation which can achieve a significant speed-up without much performance degradation. We illustrate the proposed method on 2D and 3D real images for sparse feature correspondence, as well as a public available dataset for shape matching. The quantitative results demonstrate that our method is robust to non-rigid deformation and multi-layer large discontinuous motion." ] }
1410.6516
1495664850
Two fundamental algorithm-design paradigms are Tree Search and Dynamic Programming. The techniques used therein have been shown to complement one another when solving the complete set partitioning problem, also known as the coalition structure generation problem [5]. Inspired by this observation, we develop in this paper an algorithm to solve the coalition structure generation problem on graphs, where the goal is to identify an optimal partition of a graph into connected subgraphs. More specifically, we develop a new depth-first search algorithm, and combine it with an existing dynamic programming algorithm due to [9]. The resulting hybrid algorithm is empirically shown to significantly outperform both its constituent parts when the subset-evaluation function happens to have certain intuitive properties.
The pseudocode of @math is shown in Algorithm . For a proof of the correctness of this algorithm, see @cite_1 .
{ "cite_N": [ "@cite_1" ], "mid": [ "144107075" ], "abstract": [ "We present a new Dynamic Programming (DP) formulation of the Coalition Structure Generation (CSG) problem based on imposing a hierarchical organizational structure over the agents. We show the efficiency of this formulation by deriving DyPE, a new optimal DP algorithm which significantly outperforms current DP approaches in speed and memory usage. In the classic case, in which all coalitions are feasible, DyPE has half the memory requirements of other DP approaches. On graph-restricted CSG, in which feasibility is restricted by a (synergy) graph, DyPE has either the same or lower computational complexity depending on the underlying graph structure of the problem. Our empirical evaluation shows that DyPE outperforms the state-of-the-art DP approaches by several orders of magnitude in a large range of graph structures (e.g. for certain scalefree graphs DyPE reduces the memory requirements by @math and solves problems that previously needed hours in minutes)." ] }
1410.5877
2952948352
We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources. The main challenge is how to buck the trend of diminishing returns that is commonly encountered. We present an active learning-style data solicitation algorithm to meet this challenge. We test it, gathering annotations via Amazon Mechanical Turk, and find that we get an order of magnitude increase in performance rates of improvement.
Active learning has been shown to be effective for improving NLP systems and reducing annotation burdens for a number of NLP tasks (see, e.g., @cite_4 @cite_0 @cite_10 @cite_3 @cite_11 @cite_16 ). The current paper is most closely related to previous work falling into three main areas: use of AL when large corpora already exist; cost-focused AL; and AL for SMT.
{ "cite_N": [ "@cite_4", "@cite_3", "@cite_0", "@cite_16", "@cite_10", "@cite_11" ], "mid": [ "2076083849", "1998259579", "2097388131", "2096507791", "2950905432", "2161181481" ], "abstract": [ "Corpus-based grammar induction relies on using many hand-parsed sentences as training examples. However, the construction of a training corpus with detailed syntactic analysis for every sentence is a labor-intensive task. We propose to use sample selection methods to minimize the amount of annotation needed in the training data, thereby reducing the workload of the human annotators. This paper shows that the amount of annotated training data can be reduced by 36 without degrading the quality of the induced grammars.", "Actively sampled data can have very different characteristics than passively sampled data. Therefore, it's promising to investigate using different inference procedures during AL than are used during passive learning (PL). This general idea is explored in detail for the focused case of AL with cost-weighted SVMs for imbalanced data, a situation that arises for many HLT tasks. The key idea behind the proposed InitPA method for addressing imbalance is to base cost models during AL on an estimate of overall corpus imbalance computed via a small unbiased sample rather than the imbalance in the labeled training data, which is the leading method used during PL.", "We explore how active learning with Support Vector Machines works well for a non-trivial task in natural language processing. We use Japanese word segmentation as a test case. In particular, we discuss how the size of a pool affects the learning curve. It is found that in the early stage of training with a larger pool, more labeled examples are required to achieve a given level of accuracy than those with a smaller pool. In addition, we propose a novel technique to use a large number of unlabeled examples effectively by adding them gradually to a pool. 
The experimental results show that our technique requires less labeled examples than those with the technique in previous research. To achieve 97.0 accuracy, the proposed technique needs 59.3 of labeled examples that are required when using the previous technique and only 17.4 of labeled examples with random sampling.", "While Active Learning (AL) has already been shown to markedly reduce the annotation efforts for many sequence labeling tasks compared to random selection, AL remains unconcerned about the internal structure of the selected sequences (typically, sentences). We propose a semi-supervised AL approach for sequence labeling where only highly uncertain subsequences are presented to human annotators, while all others in the selected sequences are automatically labeled. For the task of entity recognition, our experiments reveal that this approach reduces annotation efforts in terms of manually labeled tokens by up to 60 compared to the standard, fully supervised AL scheme.", "There is a broad range of BioNLP tasks for which active learning (AL) can significantly reduce annotation costs and a specific AL algorithm we have developed is particularly effective in reducing annotation costs for these tasks. We have previously developed an AL algorithm called ClosestInitPA that works best with tasks that have the following characteristics: redundancy in training material, burdensome annotation costs, Support Vector Machines (SVMs) work well for the task, and imbalanced datasets (i.e. when set up as a binary classification problem, one class is substantially rarer than the other). Many BioNLP tasks have these characteristics and thus our AL algorithm is a natural approach to apply to BioNLP tasks.", "Most previous work on trainable language generation has focused on two paradigms: (a) using a statistical model to rank a set of generated utterances, or (b) using statistics to inform the generation decision process. 
Both approaches rely on the existence of a handcrafted generator, which limits their scalability to new domains. This paper presents Bagel, a statistical language generator which uses dynamic Bayesian networks to learn from semantically-aligned data produced by 42 untrained annotators. A human evaluation shows that Bagel can generate natural and informative utterances from unseen inputs in the information presentation domain. Additionally, generation performance on sparse datasets is improved significantly by using certainty-based active learning, yielding ratings close to the human gold standard with a fraction of the data." ] }
1410.5877
2952948352
We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources. The main challenge is how to buck the trend of diminishing returns that is commonly encountered. We present an active learning-style data solicitation algorithm to meet this challenge. We test it, gathering annotations via Amazon Mechanical Turk, and find that we get an order of magnitude increase in performance rates of improvement.
On the other hand, in the current paper, we demonstrate how to apply AL in situations where we already have large corpora. Our goal is to buck the trend of diminishing returns and use AL to add data to build some of the highest-performing MT systems in the world while keeping annotation costs low. See Figure from , which contrasts where @cite_7 @cite_5 stop their investigations with where we begin our studies.
{ "cite_N": [ "@cite_5", "@cite_7" ], "mid": [ "2108126316", "2105410942" ], "abstract": [ "We report on an active learning experiment for named entity recognition in the astronomy domain. Active learning has been shown to reduce the amount of labelled data required to train a supervised learner by selectively sampling more informative data points for human annotation. We inspect double annotation data from the same domain and quantify potential problems concerning annotators' performance. For data selectively sampled according to different selection metrics, we find lower inter-annotator agreement and higher per token annotation times. However, overall results confirm the utility of active learning.", "Statistical machine translation (SMT) models need large bilingual corpora for training, which are unavailable for some language pairs. This paper provides the first serious experimental study of active learning for SMT. We use active learning to improve the quality of a phrase-based SMT system, and show significant improvements in translation compared to a random sentence selection baseline, when test and training data are taken from the same or different domains. Experimental results are shown in a simulated setting using three language pairs, and in a realistic situation for Bangla-English, a language pair with limited translation resources." ] }
1410.5877
2952948352
We explore how to improve machine translation systems by adding more translation data in situations where we already have substantial resources. The main challenge is how to buck the trend of diminishing returns that is commonly encountered. We present an active learning-style data solicitation algorithm to meet this challenge. We test it, gathering annotations via Amazon Mechanical Turk, and find that we get an order of magnitude increase in performance rates of improvement.
The other major difference is that @cite_7 @cite_5 measure annotation cost by # of sentences. In contrast, we bring to light some potential drawbacks of this practice, showing it can lead to different conclusions than if other annotation cost metrics are used, such as time and money, which are the metrics that we use.
{ "cite_N": [ "@cite_5", "@cite_7" ], "mid": [ "2108126316", "2105410942" ], "abstract": [ "We report on an active learning experiment for named entity recognition in the astronomy domain. Active learning has been shown to reduce the amount of labelled data required to train a supervised learner by selectively sampling more informative data points for human annotation. We inspect double annotation data from the same domain and quantify potential problems concerning annotators' performance. For data selectively sampled according to different selection metrics, we find lower inter-annotator agreement and higher per token annotation times. However, overall results confirm the utility of active learning.", "Statistical machine translation (SMT) models need large bilingual corpora for training, which are unavailable for some language pairs. This paper provides the first serious experimental study of active learning for SMT. We use active learning to improve the quality of a phrase-based SMT system, and show significant improvements in translation compared to a random sentence selection baseline, when test and training data are taken from the same or different domains. Experimental results are shown in a simulated setting using three language pairs, and in a realistic situation for Bangla-English, a language pair with limited translation resources." ] }
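The contrast between counting sentences and measuring real annotation cost can be made concrete with a small, hypothetical selection routine. This is not the paper's algorithm; `scores` stands in for any informativeness measure and `costs` for per-item time or money:

```python
import numpy as np

def select_batch(scores, costs, budget):
    """Greedy cost-aware selection: rank candidates by informativeness
    per unit annotation cost, then take items until the budget is spent."""
    order = np.argsort(-scores / costs)   # best value-per-cost first
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(int(i))
            spent += costs[i]
    return chosen, spent
```

Ranking by raw score alone and ranking by score per unit cost can select very different batches under the same budget, which is exactly the kind of divergence between cost metrics that the passage warns about.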
1410.5884
2271601362
The mean field algorithm is a widely used approximate inference algorithm for graphical models whose exact inference is intractable. In each iteration of mean field, the approximate marginals for each variable are updated by getting information from the neighbors. This process can be equivalently converted into a feedforward network, with each layer representing one iteration of mean field and with tied weights on all layers. This conversion enables a few natural extensions, e.g. untying the weights in the network. In this paper, we study these mean field networks (MFNs), and use them as inference tools as well as discriminative models. Preliminary experiment results show that MFNs can learn to do inference very efficiently and perform significantly better than mean field as discriminative models.
Previous work by Justin Domke @cite_5 @cite_1 and by Stoyanov et al. @cite_6 is the most closely related to ours. In @cite_5 @cite_1 , the author described the idea of truncating message passing at learning and test time to a fixed number of steps, and back-propagating through the truncated inference procedure to update the parameters of the underlying graphical model. In @cite_6 , the authors proposed to train graphical models in a discriminative fashion to directly minimize empirical risk, and used back-propagation to optimize the graphical model parameters.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_6" ], "mid": [ "1999726054", "", "2186629860" ], "abstract": [ "Training of conditional random fields often takes the form of a double-loop procedure with message-passing inference in the inner loop. This can be very expensive, as the need to solve the inner loop to high accuracy can require many message-passing iterations. This paper seeks to reduce the expense of such training, by redefining the training objective in terms of the approximate marginals obtained after message-passing is “truncated” to a fixed number of iterations. An algorithm is derived to efficiently compute the exact gradient of this objective. On a common pixel labeling benchmark, this procedure improves training speeds by an order of magnitude, and slightly improves inference accuracy if a very small number of message-passing iterations are used at test time.", "", "Graphical models are often used ,\" with approximations in the topology, inference, and prediction. Yet it is still common to train their parameters to approximately maximize training likelihood. We argue that instead, one should seek the parameters that minimize the empirical risk of the entire imperfect system. We show how to locally optimize this risk using back-propagation and stochastic metadescent. Over a range of synthetic-data problems, compared to the usual practice of choosing approximate MAP parameters, our approach signicantly reduces loss on test data, sometimes by an order of magnitude." ] }
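The MFN construction described in the abstract above, one mean field sweep per layer with weights tied across layers, can be sketched for a binary pairwise MRF. This is a minimal numpy illustration assuming a symmetric coupling matrix W with zero diagonal, not the authors' code:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_layers(W, b, n_layers=10):
    """Unrolled mean field for a binary pairwise MRF.

    Each 'layer' applies one parallel sweep of the fixed-point update
        q_i <- sigmoid(b_i + sum_j W_ij q_j),
    so running n_layers sweeps is a feedforward network whose weights
    are tied across layers -- the mean field network view."""
    q = np.full(len(b), 0.5)         # uniform initialization
    for _ in range(n_layers):
        q = sigmoid(b + W @ q)       # one mean field sweep = one layer
    return q
```

Untying the weights, as the abstract suggests, would simply mean giving each of the `n_layers` sweeps its own `(W, b)` instead of sharing one pair.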
1410.5884
2271601362
The mean field algorithm is a widely used approximate inference algorithm for graphical models whose exact inference is intractable. In each iteration of mean field, the approximate marginals for each variable are updated by getting information from the neighbors. This process can be equivalently converted into a feedforward network, with each layer representing one iteration of mean field and with tied weights on all layers. This conversion enables a few natural extensions, e.g. untying the weights in the network. In this paper, we study these mean field networks (MFNs), and use them as inference tools as well as discriminative models. Preliminary experiment results show that MFNs can learn to do inference very efficiently and perform significantly better than mean field as discriminative models.
Another work, @cite_3 , briefly draws a connection between mean field inference in a specific binary MRF and neural networks, but does not explore further variations.
{ "cite_N": [ "@cite_3" ], "mid": [ "2129981175" ], "abstract": [ "Convolutional networks have achieved a great deal of success in high-level vision problems such as object recognition. Here we show that they can also be used as a general method for low-level image processing. As an example of our approach, convolutional networks are trained using gradient learning to solve the problem of restoring noisy or degraded images. For our training data, we have used electron microscopic images of neural circuitry with ground truth restorations provided by human experts. On this dataset, Markov random field (MRF), conditional random field (CRF), and anisotropic diffusion algorithms perform about the same as simple thresholding, but superior performance is obtained with a convolutional network containing over 34,000 adjustable parameters. When restored by this convolutional network, the images are clean enough to be used for segmentation, whereas the other approaches fail in this respect. We do not believe that convolutional networks are fundamentally superior to MRFs as a representation for image processing algorithms. On the contrary, the two approaches are closely related. But in practice, it is possible to train complex convolutional networks, while even simple MRF models are hindered by problems with Bayesian learning and inference procedures. Our results suggest that high model complexity is the single most important factor for good performance, and this is possible with convolutional networks." ] }
1410.5884
2271601362
The mean field algorithm is a widely used approximate inference algorithm for graphical models whose exact inference is intractable. In each iteration of mean field, the approximate marginals for each variable are updated by getting information from the neighbors. This process can be equivalently converted into a feedforward network, with each layer representing one iteration of mean field and with tied weights on all layers. This conversion enables a few natural extensions, e.g. untying the weights in the network. In this paper, we study these mean field networks (MFNs), and use them as inference tools as well as discriminative models. Preliminary experiment results show that MFNs can learn to do inference very efficiently and perform significantly better than mean field as discriminative models.
A few papers have discussed the compatibility between learning and approximate inference algorithms theoretically. @cite_4 shows that inconsistent learning may be beneficial when approximate inference is used at test time, as long as the learning and test-time inference are properly aligned. @cite_7 , on the other hand, shows that even using the same approximate inference algorithm at training and test time can be problematic when the learning algorithm is not compatible with inference. MFNs do not have this problem, as training follows the exact gradient of the loss function.
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "2096920988", "2159992248" ], "abstract": [ "Consider the problem of joint parameter estimation and prediction in a Markov random field: that is, the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the same convex variational relaxation is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the \"wrong\" model even in the infinite data limit) is provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of convex variational methods. This stability result provides additional incentive, apart from the obvious benefit of unique global optima, for using message-passing methods based on convex variational relaxations. We show that joint estimation prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product.", "In many structured prediction problems, the highest-scoring labeling is hard to compute exactly, leading to the use of approximate inference methods. However, when inference is used in a learning algorithm, a good approximation of the score may not be sufficient. We show in particular that learning can fail even with an approximate inference method with rigorous approximation guarantees. There are two reasons for this. 
First, approximate methods can effectively reduce the expressivity of an underlying model by making it impossible to choose parameters that reliably give good predictions. Second, approximations can respond to parameter changes in such a way that standard learning algorithms are misled. In contrast, we give two positive results in the form of learning bounds for the use of LP-relaxed inference in structured perceptron and empirical risk minimization settings. We argue that without understanding combinations of inference and learning, such as these, that are appropriately compatible, learning performance under approximate inference cannot be guaranteed." ] }
1410.5884
2271601362
The mean field algorithm is a widely used approximate inference algorithm for graphical models whose exact inference is intractable. In each iteration of mean field, the approximate marginals for each variable are updated by getting information from the neighbors. This process can be equivalently converted into a feedforward network, with each layer representing one iteration of mean field and with tied weights on all layers. This conversion enables a few natural extensions, e.g. untying the weights in the network. In this paper, we study these mean field networks (MFNs), and use them as inference tools as well as discriminative models. Preliminary experiment results show that MFNs can learn to do inference very efficiently and perform significantly better than mean field as discriminative models.
On the neural networks side, people have long tried to use a neural network to approximate intractable posterior distributions, especially for learning sigmoid belief networks; see for example @cite_8 , the recent paper @cite_0 , and citations therein. As far as we know, no previous work on the neural network side has discussed the connection with the mean field or belief propagation type methods used for variational inference in graphical models.
{ "cite_N": [ "@cite_0", "@cite_8" ], "mid": [ "2122262818", "2026799324" ], "abstract": [ "Highly expressive directed latent variable models, such as sigmoid belief networks, are difficult to train on large datasets because exact inference in them is intractable and none of the approximate inference methods that have been applied to them scale well. We propose a fast non-iterative approximate inference method that uses a feedforward network to implement efficient exact sampling from the variational posterior. The model and this inference network are trained jointly by maximizing a variational lower bound on the log-likelihood. Although the naive estimator of the inference network gradient is too high-variance to be useful, we make it practical by applying several straightforward model-independent variance reduction techniques. Applying our approach to training sigmoid belief networks and deep autoregressive networks, we show that it outperforms the wake-sleep algorithm on MNIST and achieves state-of-the-art results on the Reuters RCV1 document dataset.", "Discovering the structure inherent in a set of patterns is a fundamental aim of statistical inference or learning. One fruitful approach is to build a parameterized stochastic generative model, independent draws from which are likely to produce the patterns. For all but the simplest generative models, each pattern can be generated in exponentially many ways. It is thus intractable to adjust the parameters to maximize the probability of the observed patterns. We describe a way of finessing this combinatorial explosion by maximizing an easily computed lower bound on the probability of the observations. Our method can be viewed as a form of hierarchical self-supervised learning that may relate to the function of bottom-up and top-down cortical processing pathways." ] }
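The idea of using a feedforward network for exact sampling from a factorized variational posterior, as in the abstract of @cite_0 above, can be sketched in a few lines. This is a hypothetical minimal version; real inference networks are deeper and trained jointly with the generative model:

```python
import numpy as np

rng = np.random.default_rng(0)

def inference_network(x, W, b):
    """Map an observation x to a factorized Bernoulli posterior over
    binary latent units and draw one exact sample from it."""
    p = 1.0 / (1.0 + np.exp(-(W @ x + b)))        # q(h_i = 1 | x)
    h = (rng.random(p.shape) < p).astype(float)   # one exact sample
    return p, h
```

Because the posterior factorizes, sampling costs one sigmoid and one draw per latent unit, which is what makes this non-iterative inference fast compared to MCMC in the latent space.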
1410.5884
2271601362
The mean field algorithm is a widely used approximate inference algorithm for graphical models whose exact inference is intractable. In each iteration of mean field, the approximate marginals for each variable are updated by getting information from the neighbors. This process can be equivalently converted into a feedforward network, with each layer representing one iteration of mean field and with tied weights on all layers. This conversion enables a few natural extensions, e.g. untying the weights in the network. In this paper, we study these mean field networks (MFNs), and use them as inference tools as well as discriminative models. Preliminary experiment results show that MFNs can learn to do inference very efficiently and perform significantly better than mean field as discriminative models.
A recent paper @cite_2 develops approximate MCMC methods with limited inference budget, which shares the spirit of our work.
{ "cite_N": [ "@cite_2" ], "mid": [ "2115067168" ], "abstract": [ "Can we make Bayesian posterior MCMC sampling more efficient when faced with very large datasets? We argue that computing the likelihood for N datapoints in the Metropolis-Hastings (MH) test to reach a single binary decision is computationally inefficient. We introduce an approximate MH rule based on a sequential hypothesis test that allows us to accept or reject samples with high confidence using only a fraction of the data required for the exact MH rule. While this method introduces an asymptotic bias, we show that this bias can be controlled and is more than offset by a decrease in variance due to our ability to draw more samples per unit of time." ] }
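The budgeted-MH idea in the cited abstract, deciding accept/reject from a subsample once the evidence is already clear, can be loosely sketched as follows. This is a simplified standard-error stopping rule in the same spirit, not the paper's exact sequential hypothesis test:

```python
import numpy as np

def approx_mh_accept(loglik_new, loglik_old, log_u, batch=100, eps=2.0):
    """Approximate Metropolis-Hastings test on a growing subsample.

    Instead of summing per-datapoint log-likelihood ratios over all N
    points, stop as soon as the running mean is more than `eps` standard
    errors away from the per-point acceptance threshold log_u / N."""
    diffs = loglik_new - loglik_old          # per-point log ratios
    n_total = len(diffs)
    mu0 = log_u / n_total                    # per-point threshold
    perm = np.random.permutation(n_total)
    seen = 0
    while seen < n_total:
        seen = min(seen + batch, n_total)
        sample = diffs[perm[:seen]]
        mean = sample.mean()
        se = sample.std(ddof=1) / np.sqrt(seen)
        if seen == n_total or abs(mean - mu0) > eps * se:
            return bool(mean > mu0)          # accept iff ratio clears u
```

When the loop reaches the full dataset the decision coincides with the exact MH rule; the early exits trade a small, controllable bias for far fewer likelihood evaluations, which is the point argued in the cited abstract.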
1410.5605
1712631158
In this paper, we shall consider the problem of deploying attention to the subsets of the video streams for collating the most relevant data and information of interest related to a given task. We formalize this monitoring problem as a foraging problem. We propose a probabilistic framework to model observer’s attentive behavior as the behavior of a forager. The forager, moment to moment, focuses its attention on the most informative stream camera, detects interesting objects or activities, or switches to a more profitable stream. The approach proposed here is suitable to be exploited for multistream video summarization. Meanwhile, it can serve as a preliminary step for more sophisticated video surveillance, e.g., activity and behavior analysis. Experimental results achieved on the UCR Videoweb Activities Data Set, a publicly available data set, are presented to illustrate the utility of the proposed technique.
At this point it is worth noting that, in the effort towards a general framework for stream selection and handling, all of the works above, unlike the approach we present here, are quite agnostic about the image analysis techniques to adopt. They mostly rely on basic tools (e.g., dense optical flow @cite_46 , manually initialized CamShift tracking @cite_8 , simple frame-to-frame SIFT computation @cite_20 ). However, from a general standpoint, moving object detection and recognition, tracking, and behavioral analysis are stages that deeply involve the realms of image processing and machine vision. In these research areas, one major and omnipresent concern over recent years has been how to reduce the large amount of visual data to a manageable rate @cite_27 @cite_2 .
{ "cite_N": [ "@cite_8", "@cite_27", "@cite_2", "@cite_46", "@cite_20" ], "mid": [ "2119941274", "2088298109", "2118825302", "2083545271", "2083632634" ], "abstract": [ "In this paper, an approach for camera assignment and handoff in a video network based on a set of user-supplied criteria is proposed. The approach is based on game theory, where bargaining mechanisms are considered for collaborations as well as for resolving conflicts among the available cameras. Camera utilities and person utilities are computed based on a set of user-supplied criteria, which are used in the process of developing the bargaining mechanisms. Different criteria and their combination are compared with each other to understand their effect on camera assignment. Experiments for multicamera multiperson cases are provided to corroborate the proposed approach. Intuitive evaluation measures are used to evaluate the performance of the system in real-world scenarios. The proposed approach is also compared with two recent approaches based on different principles. The experimental results show that the proposed approach is computationally more efficient, more robust and more flexible in dealing with the user-supplied criteria.", "Pan-tilt-zoom (PTZ) cameras are able to dynamically modify their field of view (FOV). This functionality introduces new capabilities to camera networks such as increasing the resolution of moving targets and adapting the sensor coverage. On the other hand, PTZ functionality requires solutions to new challenges such as controlling the PTZ parameters, estimating the ego motion of the cameras, and calibrating the moving cameras.This tutorial provides an overview of the main video processing techniques and the currents trends in this active field of research. Autonomous PTZ cameras mainly aim to detect and track targets with the largest possible resolution. 
Autonomous PTZ operation is activated once the network detects and identifies an object as sensible target and requires accurate control of the PTZ parameters and coordination among the cameras in the network. Therefore, we present cooperative localization and tracking methods, i.e., multiagentand consensus-based approaches to jointly compute the target's properties such as ground-plane position and velocity. Stereo vision exploiting wide baselines can be used to derive three-dimensional (3-D) target localization. This tutorial further presents different techniques for controlling PTZ camera handoff, configuring the network to dynamically track targets, and optimizing the network configuration to increase coverage probability. It also discusses implementation aspects for these video processing techniques on embedded smart cameras, with a special focus on data access properties.", "The development of distributed computer vision algorithms promises to significantly advance the state of the art in computer vision systems by improving their efficiency and scalability (through the efficient integration of local information with global optimality guarantees) as well as their robustness to outliers and node failures (because of the use of redundant information). However, in order for this promise to be fulfilled, a number of fundamental challenges to the existing technology in computer vision, distributed optimization, and wireless sensor networks (WSNs) must be addressed.", "Camera network systems generate large volumes of potentially useful data, but extracting value from multiple, related videos can be a daunting task for a human reviewer. Multicamera video summarization seeks to make this task more tractable by generating a reduced set of output summary videos that concisely capture important portions of the input set. 
We present a system that approaches summarization at the level of detected activity motifs and shortens the input videos by compacting the representation of individual activities. Additionally, redundancy is removed across camera views by omitting from the summary activity occurrences that can be predicted by other occurrences. The system also detects anomalous events within a unified framework and can highlight them in the summary. Our contributions are a method for selecting useful parts of an activity to present to a viewer using activity motifs and a novel framework to score the importance of activity occurrences and allow transfer of importance between temporally related activities without solving the correspondence problem. We provide summarization results for a two camera network, an eleven camera network, and data from PETS 2001. We also include results from Amazon Mechanical Turk human experiments to evaluate how our visualization decisions affect task performance.", "In this article we present an approach to object tracking handover in a network of smart cameras, based on self-interested autonomous agents, which exchange responsibility for tracking objects in a market mechanism, in order to maximise their own utility. A novel ant-colony inspired mechanism is used to learn the vision graph, that is, the camera neighbourhood relations, during runtime, which may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are on the one hand generating the vision graph online, enabling efficient deployment in unknown scenarios and camera network topologies, and on the other hand relying only on local information, increasing the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multicamera calibration can be avoided. We have evaluated our approach both in a simulation study and in network of real distributed smart cameras." ] }
1410.5605
1712631158
In this paper, we shall consider the problem of deploying attention to the subsets of the video streams for collating the most relevant data and information of interest related to a given task. We formalize this monitoring problem as a foraging problem. We propose a probabilistic framework to model observer’s attentive behavior as the behavior of a forager. The forager, moment to moment, focuses its attention on the most informative stream camera, detects interesting objects or activities, or switches to a more profitable stream. The approach proposed here is suitable to be exploited for multistream video summarization. Meanwhile, it can serve as a preliminary step for more sophisticated video surveillance, e.g., activity and behavior analysis. Experimental results achieved on the UCR Videoweb Activities Data Set, a publicly available data set, are presented to illustrate the utility of the proposed technique.
Second, the visual attention problem is formulated as a foraging problem by extending previous work on Lévy flights as a prior for sampling gaze shift amplitudes @cite_48 , which mainly relied on bottom-up salience. At the same time, task dependence is introduced, not through ad hoc procedures: it is naturally integrated within attentional mechanisms in terms of rewards experienced in the attentive stage when the stream is explored. This issue is seldom taken into account in computational models of visual attention (see @cite_25 @cite_13 but in particular Tatler @cite_42 ). A preliminary study on this challenging problem has been presented in @cite_55 , but limited to the task of searching for text in static images.
{ "cite_N": [ "@cite_48", "@cite_55", "@cite_42", "@cite_13", "@cite_25" ], "mid": [ "2071621173", "1987432420", "2081913479", "1993254594", "2164084182" ], "abstract": [ "Visual attention guides our gaze to relevant parts of the viewed scene, yet the moment-to-moment relocation of gaze can be different among observers even though the same locations are taken into account. Surprisingly, the variability of eye movements has been so far overlooked by the great majority of computational models of visual attention. In this paper we present the ecological sampling model, a stochastic model of eye guidance explaining such variability. The gaze shift mechanism is conceived as an active random sampling that the foraging eye carries out upon the visual landscape, under the constraints set by the observable features and the global complexity of the landscape. By drawing on results reported in the foraging literature, the actual gaze relocation is eventually driven by a stochastic differential equation whose noise source is sampled from a mixture of α-stable distributions. 
This way, the sampling strategy proposed here allows to mimic a fundamental property of the eye guidance mechanism: where we choose to look next at any given moment in time is not completely deterministic, but neither is it completely random. To show that the model yields gaze shift motor behaviors that exhibit statistics similar to those displayed by human observers, we compare simulation outputs with those obtained from eye-tracked subjects while viewing complex dynamic scenes.", "We introduce a model of attentional eye guidance based on the rationale that the deployment of gaze is to be considered in the context of a general action-perception loop relying on two strictly intertwined processes: sensory processing, depending on current gaze position, identifies sources of information that are most valuable under the given task; motor processing links such information with the oculomotor act by sampling the next gaze position and thus performing the gaze shift. In such a framework, the choice of where to look next is task-dependent and oriented to classes of objects embedded within pictures of complex scenes. The dependence on task is taken into account by exploiting the value and the payoff of gazing at certain image patches or proto-objects that provide a sparse representation of the scene objects. The different levels of the action-perception loop are represented in probabilistic form and eventually give rise to a stochastic process that generates the gaze sequence. This way the model also accounts for statistical properties of gaze shifts such as individual scan path variability. Results of the simulations are compared with experimental data derived both from publicly available datasets and from our own experiments.
However, salience-based schemes are poor at accounting for many aspects of picture viewing and can fail dramatically in the context of natural task performance. These failures have led to the development of new models of gaze allocation in scene viewing that address a number of these issues. However, models based on the picture-viewing paradigm are unlikely to generalize to a broader range of experimental contexts, because the stimulus context is limited, and the dynamic, task-driven nature of vision is not represented. We argue that there is a need to move away from this class of model and find the principles that govern gaze allocation in a broader range of settings. We outline the major limitations of salience-based selection schemes and highlight what we have learned from studies of gaze allocation in natural vision. Clear principles of selection are found across many instances of natural vision and these are not the principles that might be expected from picture-viewing studies. We discuss the emerging theoretical framework for gaze allocation on the basis of reward maximization and uncertainty reduction.", "", "Modeling visual attention-particularly stimulus-driven, saliency-based attention-has been a very active research area over the past 25 years. Many different models of attention are now available which, aside from lending theoretical contributions to other fields, have demonstrated successful applications in computer vision, mobile robotics, and cognitive systems. Here we review, from a computational perspective, the basic concepts of attention implemented in these models. We present a taxonomy of nearly 65 models, which provides a critical comparison of approaches, their capabilities, and shortcomings. In particular, 13 criteria derived from behavioral and computational studies are formulated for qualitative comparison of attention models. 
Furthermore, we address several challenging issues with models, including biological plausibility of the computations, correlation with eye movement datasets, bottom-up and top-down dissociation, and constructing meaningful performance measures. Finally, we highlight current research trends in attention modeling and provide insights for future." ] }
1410.5524
1921721316
Binary codes have been widely used in vision problems as a compact feature representation to achieve both space and time advantages. Various methods have been proposed to learn data-dependent hash functions which map a feature vector to a binary code. However, considerable data information is inevitably lost during the binarization step which also causes ambiguity in measuring sample similarity using Hamming distance. Besides, the learned hash functions cannot be changed after training, which makes them incapable of adapting to new data outside the training data set. To address both issues, in this paper we propose a flexible bitwise weight learning framework based on the binary codes obtained by state-of-the-art hashing methods, and incorporate the learned weights into the weighted Hamming distance computation. We then formulate the proposed framework as a ranking problem and leverage the Ranking SVM model to tackle the weight learning offline. The framework is further extended to an online mode which updates the weights each time new data arrives, thereby making it scalable to large and dynamic data sets. Extensive experimental results demonstrate significant performance gains of using binary codes with bitwise weighting in image retrieval tasks. It is appealing that the online weight learning leads to comparable accuracy with its offline counterpart, which thus makes our approach practical for realistic applications.
Given the limitation of the Hamming distance metric, some works have tried to improve upon raw binary codes by computing bitwise weights. @cite_5 proposed a query-adaptive Hamming distance by assigning dynamic class-dependent weights to hash bits. @cite_12 leverages listwise supervision to directly learn hash functions that generate binary codes encoding ranking information. However, this approach is non-convex and is sensitive to initialization. WhRank @cite_6 combines data-adaptive and query-adaptive weights in a weighted Hamming distance measure by exploiting statistical properties between similar and dissimilar codes before applying hash functions. It is applicable to different types of binary codes and showed improvement over the base codes under standard Hamming distance. This is the most similar work to ours in the sense of computing a weight for each bit. However, the weight learning method used in WhRank lacks a specific optimization goal and is largely based on observations. Most of the above-mentioned weighting methods are learned offline and remain static afterwards.
{ "cite_N": [ "@cite_5", "@cite_6", "@cite_12" ], "mid": [ "", "2170314267", "2126210882" ], "abstract": [ "", "Binary hashing has been widely used for efficient similarity search due to its query and storage efficiency. In most existing binary hashing methods, the high-dimensional data are embedded into Hamming space and the distance or similarity of two points are approximated by the Hamming distance between their binary codes. The Hamming distance calculation is efficient, however, in practice, there are often lots of results sharing the same Hamming distance to a query, which makes this distance measure ambiguous and poses a critical issue for similarity search where ranking is important. In this paper, we propose a weighted Hamming distance ranking algorithm (WhRank) to rank the binary codes of hashing methods. By assigning different bit-level weights to different hash bits, the returned binary codes are ranked at a finer-grained binary code level. We give an algorithm to learn the data-adaptive and query-sensitive weight for each hash bit. Evaluations on two large-scale image data sets demonstrate the efficacy of our weighted Hamming distance for binary code ranking.", "Hashing techniques have been intensively investigated in the design of highly efficient search engines for large-scale computer vision applications. Compared with prior approximate nearest neighbor search approaches like tree-based indexing, hashing-based search schemes have prominent advantages in terms of both storage and computational efficiencies. Moreover, the procedure of devising hash functions can be easily incorporated into sophisticated machine learning tools, leading to data-dependent and task-specific compact hash codes. Therefore, a number of learning paradigms, ranging from unsupervised to supervised, have been applied to compose appropriate hash functions. 
However, most of the existing hash function learning methods either treat hash function design as a classification problem or generate binary codes to satisfy pairwise supervision, and have not yet directly optimized the search accuracy. In this paper, we propose to leverage listwise supervision into a principled hash function learning framework. In particular, the ranking information is represented by a set of rank triplets that can be used to assess the quality of ranking. Simple linear projection-based hash functions are solved efficiently through maximizing the ranking quality over the training data. We carry out experiments on large image datasets with size up to one million and compare with the state-of-the-art hashing techniques. The extensive results corroborate that our learned hash codes via listwise supervision can provide superior search accuracy without incurring heavy computational overhead." ] }
1410.5861
1907753563
The focus of the action understanding literature has predominantly been classification; however, there are many applications demanding richer action understanding, such as mobile robotics and video search, with solutions to classification, localization and detection. In this paper, we propose a compositional model that leverages a new mid-level representation called compositional trajectories and a locally articulated spatiotemporal deformable parts model (LASTDPM) for full action understanding. Our method is advantageous in capturing the variable structure of dynamic human activity over a long range. First, the compositional trajectories capture long-ranging, frequently co-occurring groups of trajectories in space time and represent them in discriminative hierarchies, where human motion is largely separated from camera motion; second, LASTDPM learns a structured model with multi-layer deformable parts to capture multiple levels of articulated motion. We implement our method and demonstrate state-of-the-art performance on all three problems: action detection, localization, and recognition.
Recently, researchers have focused on developing better video features and representations. Representative low-level features include HoG3D @cite_37 , HOG/HOF @cite_37 , dense trajectories @cite_34 and their variants @cite_36 @cite_19 . Middle-level representations that utilize human pose @cite_29 @cite_4 @cite_22 provide a different angle on the problem and have been shown to be complementary to low-level features. High-level representations such as Action Bank @cite_1 introduce an action space and carry rich semantic meaning. More recently, deep learning @cite_7 has been applied to large-scale action recognition.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_22", "@cite_7", "@cite_36", "@cite_29", "@cite_1", "@cite_19", "@cite_34" ], "mid": [ "2024868105", "187717959", "2139857301", "", "1996904744", "", "2063153269", "2105101328", "2126574503" ], "abstract": [ "In this work, we present a novel local descriptor for video sequences. The proposed descriptor is based on histograms of oriented 3D spatio-temporal gradients. Our contribution is four-fold. (i) To compute 3D gradients for arbitrary scales, we develop a memory-efficient algorithm based on integral videos. (ii) We propose a generic 3D orientation quantization which is based on regular polyhedrons. (iii) We perform an in-depth evaluation of all descriptor parameters and optimize them for action recognition. (iv) We apply our descriptor to various action datasets (KTH, Weizmann, Hollywood) and show that we outperform the state-of-the-art.", "Recent work in human activity recognition has focused on bottom-up approaches that rely on spatiotemporal features, both dense and sparse. In contrast, articulated motion, which naturally incorporates explicit human action information, has not been heavily studied; a fact likely due to the inherent challenge in modeling and inferring articulated human motion from video. However, recent developments in data-driven human pose estimation have made it plausible. In this paper, we extend these developments with a new middle-level representation called dynamic pose that couples the local motion information directly and independently with human skeletal pose, and present an appropriate distance function on the dynamic poses. We demonstrate the representative power of dynamic pose over raw skeletal pose in an activity recognition setting, using simple codebook matching and support vector machines as the classifier. 
Our results conclusively demonstrate that dynamic pose is a more powerful representation of human action than skeletal pose.", "We address action recognition in videos by modeling the spatial-temporal structures of human poses. We start by improving a state of the art method for estimating human joint locations from videos. More precisely, we obtain the K-best estimations output by the existing method and incorporate additional segmentation cues and temporal constraints to select the \"best\" one. Then we group the estimated joints into five body parts (e.g. the left arm) and apply data mining techniques to obtain a representation for the spatial-temporal structures of human actions. This representation captures the spatial configurations of body parts in one frame (by spatial-part-sets) as well as the body part movements (by temporal-part-sets) which are characteristic of human actions. It is interpretable, compact, and also robust to errors on joint estimations. Experimental results first show that our approach is able to localize body joints more accurately than existing methods. Next we show that it outperforms state of the art action recognizers on the UCF sport, the Keck Gesture and the MSR-Action3D datasets.", "", "Several recent works on action recognition have attested the importance of explicitly integrating motion characteristics in the video description. This paper establishes that adequately decomposing visual motion into dominant and residual motions, both in the extraction of the space-time trajectories and for the computation of descriptors, significantly improves action recognition algorithms. Then, we design a new motion descriptor, the DCS descriptor, based on differential motion scalar quantities, divergence, curl and shear features. It captures additional information on the local motion patterns enhancing results. 
Finally, applying the recent VLAD coding technique proposed in image retrieval provides a substantial improvement for action recognition. Our three contributions are complementary and lead to outperforming all reported results by a significant margin on three challenging datasets, namely Hollywood 2, HMDB51 and Olympic Sports.", "", "Activity recognition in video is dominated by low- and mid-level features, and while demonstrably capable, by nature, these features carry little semantic meaning. Inspired by the recent object bank approach to image representation, we present Action Bank, a new high-level representation of video. Action bank is comprised of many individual action detectors sampled broadly in semantic space as well as viewpoint space. Our representation is constructed to be semantically rich and even when paired with simple linear SVM classifiers is capable of highly discriminative performance. We have tested action bank on four major activity recognition benchmarks. In all cases, our performance is better than the state of the art, namely 98.2% on KTH (better by 3.3%), 95.0% on UCF Sports (better by 3.7%), 57.9% on UCF50 (baseline is 47.9%), and 26.9% on HMDB51 (baseline is 23.2%). Furthermore, when we analyze the classifiers, we find strong transfer of semantics from the constituent action detectors to the bank classifier.
To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art.", "Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports." ] }
1410.5861
1907753563
The focus of the action understanding literature has predominantly been classification; however, there are many applications demanding richer action understanding, such as mobile robotics and video search, with solutions to classification, localization and detection. In this paper, we propose a compositional model that leverages a new mid-level representation called compositional trajectories and a locally articulated spatiotemporal deformable parts model (LASTDPM) for full action understanding. Our method is advantageous in capturing the variable structure of dynamic human activity over a long range. First, the compositional trajectories capture long-ranging, frequently co-occurring groups of trajectories in space time and represent them in discriminative hierarchies, where human motion is largely separated from camera motion; second, LASTDPM learns a structured model with multi-layer deformable parts to capture multiple levels of articulated motion. We implement our method and demonstrate state-of-the-art performance on all three problems: action detection, localization, and recognition.
Given a video with a human action, localization answers the question of when and where the action happens. In @cite_5 , salient spatiotemporal structures, formed from clusters of dense trajectories @cite_34 , are detected as candidates for the parts of an action; a graphical model captures spatiotemporal dependencies and is used to infer the action localization. Note that the location of salient structures is fixed before learning the graphical model, unlike in our case, which jointly learns both. A figure-centric model @cite_39 is proposed for joint action localization and recognition; its localization is based on bounding boxes from human detection, and it implicitly enforces temporal constraints between neighboring frames, but it assumes the figure is fully visible for the entire duration of the video. @cite_2 propose a new representation called hierarchical space-time segments for action recognition and localization, which leverages the power of hierarchical segmentation at the frame level.
{ "cite_N": [ "@cite_5", "@cite_34", "@cite_2", "@cite_39" ], "mid": [ "2055753778", "2126574503", "", "2131311058" ], "abstract": [ "We describe a mid-level approach for action recognition. From an input video, we extract salient spatio-temporal structures by forming clusters of trajectories that serve as candidates for the parts of an action. The assembly of these clusters into an action class is governed by a graphical model that incorporates appearance and motion constraints for the individual parts and pairwise constraints for the spatio-temporal dependencies among them. During training, we estimate the model parameters discriminatively. During classification, we efficiently match the model to a video using discrete optimization. We validate the model's classification ability in standard benchmark datasets and illustrate its potential to support a fine-grained analysis that not only gives a label to a video, but also identifies and localizes its constituent parts.", "Feature trajectories have shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or matching SIFT descriptors between frames. However, the quality as well as quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach to describe videos by dense trajectories. We sample dense points from each frame and track them based on displacement information from a dense optical flow field. Given a state-of-the-art optical flow algorithm, our trajectories are robust to fast irregular motions as well as shot boundaries. Additionally, dense trajectories cover the motion information in videos well. We, also, investigate how to design descriptors to encode the trajectory information. We introduce a novel descriptor based on motion boundary histograms, which is robust to camera motion. 
This descriptor consistently outperforms other state-of-the-art descriptors, in particular in uncontrolled realistic videos. We evaluate our video description in the context of action classification with a bag-of-features approach. Experimental results show a significant improvement over the state of the art on four datasets of varying difficulty, i.e. KTH, YouTube, Hollywood2 and UCF sports.", "", "In this paper we develop an algorithm for action recognition and localization in videos. The algorithm uses a figure-centric visual word representation. Different from previous approaches it does not require reliable human detection and tracking as input. Instead, the person location is treated as a latent variable that is inferred simultaneously with action recognition. A spatial model for an action is learned in a discriminative fashion under a figure-centric representation. Temporal smoothness over video sequences is also enforced. We present results on the UCF-Sports dataset, verifying the effectiveness of our model in situations where detection and tracking of individuals is challenging." ] }
1410.5861
1907753563
The focus of the action understanding literature has predominantly been classification; however, there are many applications demanding richer action understanding, such as mobile robotics and video search, with solutions to classification, localization and detection. In this paper, we propose a compositional model that leverages a new mid-level representation called compositional trajectories and a locally articulated spatiotemporal deformable parts model (LASTDPM) for full action understanding. Our method is advantageous in capturing the variable structure of dynamic human activity over a long range. First, the compositional trajectories capture long-ranging, frequently co-occurring groups of trajectories in space time and represent them in discriminative hierarchies, where human motion is largely separated from camera motion; second, LASTDPM learns a structured model with multi-layer deformable parts to capture multiple levels of articulated motion. We implement our method and demonstrate state-of-the-art performance on all three problems: action detection, localization, and recognition.
Action detection holds no assumptions about the given video and answers the question of whether, when and where a certain action happens. One line of work detects actions by an explicit template-matching process. The global template can be explicitly constructed @cite_18 @cite_3 @cite_28 @cite_6 @cite_10 , or estimated from many exemplars @cite_31 . These methods all have rigid templates, but recent work has emphasized non-rigid templates, such as @cite_15 , which divides the global template into independent parts and then integrates their scores for matching---note that the parts in their work are supervised, unlike in our method where they are latent---and @cite_17 , which captures an action as a sequence of frame exemplars. Another line of work explores the notion of parts: @cite_24 extends parts from spatial segments to sets of consecutive video frames, but their method can only detect actions temporally; SDPM @cite_16 directly extends DPM to the space-time domain, but the part structures in their two-layer model are initialized in a data-driven manner.
{ "cite_N": [ "@cite_18", "@cite_15", "@cite_28", "@cite_6", "@cite_3", "@cite_24", "@cite_31", "@cite_16", "@cite_10", "@cite_17" ], "mid": [ "2041941194", "2137981002", "2146634731", "2138105460", "2165715280", "1498368596", "2101194540", "2095661305", "", "2547062449" ], "abstract": [ "This paper addresses action spotting, the spatiotemporal detection and localization of human actions in video. A novel compact local descriptor of video dynamics in the context of action spotting is introduced based on visual spacetime oriented energy measurements. This descriptor is efficiently computed directly from raw image intensity data and thereby forgoes the problems typically associated with flow-based features. An important aspect of the descriptor is that it allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template across candidate video sequences. Empirical evaluation of the approach on a set of challenging natural videos suggests its efficacy.", "Real-world actions occur often in crowded, dynamic environments. This poses a difficult challenge for current approaches to video event detection because it is difficult to segment the actor from the background due to distracting motion from other objects in the scene. We propose a technique for event recognition in crowded videos that reliably identifies actions in the presence of partial occlusion and background clutter. 
Our approach is based on three key ideas: (1) we efficiently match the volumetric representation of an event against oversegmented spatio-temporal video volumes; (2) we augment our shape-based features using flow; (3) rather than treating an event template as an atomic entity, we separately match by parts (both in space and time), enabling robustness against occlusions and actor variability. Our experiments on human actions, such as picking up a dropped object or waving in a crowd show reliable detection with few false positives.", "Human action in video sequences can be seen as silhouettes of a moving torso and protruding limbs undergoing articulated motion. We regard human actions as three-dimensional shapes induced by the silhouettes in the space-time volume. We adopt a recent approach [14] for analyzing 2D shapes and generalize it to deal with volumetric space-time action shapes. Our method utilizes properties of the solution to the Poisson equation to extract space-time features such as local space-time saliency, action dynamics, shape structure, and orientation. We show that these features are useful for action recognition, detection, and clustering. The method is fast, does not require video alignment, and is applicable in (but not limited to) many scenarios where the background is known. Moreover, we demonstrate the robustness of our method to partial occlusions, nonrigid deformations, significant changes in scale and viewpoint, high irregularities in the performance of an action, and low-quality video.", "Our goal is to recognize human action at a distance, at resolutions where a whole person may be, say, 30 pixels tall. We introduce a novel motion descriptor based on optical flow measurements in a spatiotemporal volume for each stabilized human figure, and an associated similarity measure to be used in a nearest-neighbor framework. 
Making use of noisy optical flow measurements is the key challenge, which is addressed by treating optical flow not as precise pixel displacements, but rather as a spatial pattern of noisy measurements which are carefully smoothed and aggregated to form our spatiotemporal motion descriptor. To classify the action being performed by a human figure in a query sequence, we retrieve nearest neighbor(s) from a database of stored, annotated video sequences. We can also use these retrieved exemplars to transfer 2D 3D skeletons onto the figures in the query sequence, as well as two forms of data-based action synthesis \"do as I do\" and \"do as I say\". Results are demonstrated on ballet, tennis as well as football datasets.", "A view-based approach to the representation and recognition of human movement is presented. The basis of the representation is a temporal template-a static vector-image where the vector value at each point is a function of the motion properties at the corresponding spatial location in an image sequence. Using aerobics exercises as a test domain, we explore the representational power of a simple, two component version of the templates: The first value is a binary value indicating the presence of motion and the second value is a function of the recency of motion in a sequence. We then develop a recognition method matching temporal templates against stored instances of views of known actions. The method automatically performs temporal segmentation, is invariant to linear changes in speed, and runs in real-time on standard platforms.", "Much recent research in human activity recognition has focused on the problem of recognizing simple repetitive (walking, running, waving) and punctual actions (sitting up, opening a door, hugging). However, many interesting human activities are characterized by a complex temporal composition of simple actions. Automatic recognition of such complex actions can benefit from a good understanding of the temporal structures. 
We present in this paper a framework for modeling motion by exploiting the temporal structure of the human activities. In our framework, we represent activities as temporal compositions of motion segments. We train a discriminative model that encodes a temporal decomposition of video sequences, and appearance models for each motion segment. In recognition, a query video is matched to the model according to the learned appearances and motion segment decomposition. Classification is made based on the quality of matching between the motion segment classifiers and the temporal segments in the query sequence. To validate our approach, we introduce a new dataset of complex Olympic Sports activities. We show that our algorithm performs better than other state of the art methods.", "In this paper we introduce a template-based method for recognizing human actions called action MACH. Our approach is based on a maximum average correlation height (MACH) filter. A common limitation of template-based methods is their inability to generate a single template using a collection of examples. MACH is capable of capturing intra-class variability by synthesizing a single Action MACH filter for a given action class. We generalize the traditional MACH filter to video (3D spatiotemporal volume), and vector valued data. By analyzing the response of the filter in the frequency domain, we avoid the high computational cost commonly incurred in template-based approaches. Vector valued data is analyzed using the Clifford Fourier transform, a generalization of the Fourier transform intended for both scalar and vector-valued data. 
Finally, we perform an extensive set of experiments and compare our method with some of the most recent approaches in the field by using publicly available datasets, and two new annotated human action datasets which include actions performed in classic feature films and sports broadcast television.", "Deformable part models have achieved impressive performance for object detection, even on difficult image datasets. This paper explores the generalization of deformable part models from 2D images to 3D spatiotemporal volumes to better study their effectiveness for action detection in video. Actions are treated as spatiotemporal patterns and a deformable part model is generated for each action from a collection of examples. For each action model, the most discriminative 3D sub volumes are automatically selected as parts and the spatiotemporal relations between their locations are learned. By focusing on the most distinctive parts of each action, our models adapt to intra-class variation and show robustness to clutter. Extensive experiments on several video datasets demonstrate the strength of spatiotemporal DPMs for classifying and localizing actions.", "", "In this paper, we present a Deformable Action Template (DAT) model that is learnable from cluttered real-world videos with weak supervisions. In our generative model, an action template is a sequence of image templates each of which consists of a set of shape and motion primitives (Gabor wavelets and optical-flow patches) at selected orientations and locations. These primitives are allowed to slightly perturb their locations and orientations to account for spatial deformations. We use a shared pursuit algorithm to automatically discover a best set of primitives and weights by maximizing the likelihood over one or more aligned training examples. Since it is extremely hard to accurately label human actions from real-world videos, we use a three-step semi-supervised learning procedure. 
1) For each human action class, a template is initialized from a labeled (one bounding-box per frame) training video. 2) The template is used to detect actions from other training videos of the same class by a dynamic space-time warping algorithm, which searches for the best match between the template and the target video in 5D space (x, y, scale, t template and t target ) using dynamic programming. 3) The template is updated by the shared pursuit algorithm over all aligned videos. The 2nd and 3rd steps iterate several times to arrive at an optimal action template. We tested our algorithm on a cluttered action dataset (the CMU dataset) and achieved more favorable performance than [7]. Our classification performance on the KTH dataset is also comparable to the state of the art." ] }
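The temporal-template representation summarized in the second abstract of this record (a static image whose pixel values encode the presence and recency of motion) is simple enough to sketch directly. The function below is a minimal illustration of a motion history image under our own naming, not the authors' code; `tau` is the assumed maximum recency value.

```python
import numpy as np

def motion_history_image(motion_masks, tau):
    """Motion history image in the spirit of temporal templates.

    Each pixel holds a recency value: tau wherever motion occurs in the
    current frame, otherwise the previous value decayed by one (floored at 0).
    Thresholding the result at zero gives the binary motion-energy image.
    """
    mhi = np.zeros(np.shape(motion_masks[0]), dtype=float)
    for mask in motion_masks:
        mhi = np.where(mask, float(tau), np.maximum(mhi - 1.0, 0.0))
    return mhi
```

Recent motion appears bright and older motion fades linearly, so a single image summarizes where and how recently movement occurred.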
1410.5024
1578320724
In order to improve the performance of least mean square (LMS)-based adaptive filtering for identifying block-sparse systems, a new adaptive algorithm called block-sparse LMS (BS-LMS) is proposed in this paper. The basis of the proposed algorithm is to insert a penalty of block-sparsity, which is a mixed @math norm of the adaptive tap-weights with equal group partition sizes, into the cost function of the traditional LMS algorithm. To describe a block-sparse system response, we first propose a Markov-Gaussian model, which can generate system responses of arbitrary average sparsity and arbitrary average block length from given parameters. Then we present theoretical expressions for the steady-state misadjustment and transient convergence behavior of BS-LMS with an appropriate group partition size for white Gaussian input data. Based on these results, we theoretically demonstrate that BS-LMS has much better convergence behavior than @math -LMS at the same small level of misadjustment. Finally, numerical experiments verify that all of the theoretical analysis agrees well with simulation results over a large range of parameters.
The identification of an unknown system with a sparse impulse response can be accelerated and enhanced by introducing a sparsity constraint into the cost function of LMS, where the sparsity constraint can be approximated by the @math norm @cite_19 , the @math norm @cite_36 , the reweighted @math norm @cite_36 @cite_16 , the smoothed @math norm @cite_4 @cite_30 , the @math norm @cite_21 @cite_37 , or a convex sparsity penalty @cite_26 . However, literature on adaptive filtering algorithms that benefit from block-sparsity is scarce, so there is room to further improve performance by utilizing block structure. Among the above algorithms, @math -LMS @cite_19 demonstrates rather good performance in experiments and has comprehensive theoretical guarantees @cite_7 . Therefore, in this work we generalize @math -LMS to BS-LMS by utilizing block-sparsity. Part of our derivations (mainly in Section ) is based on the approach in @cite_7 . However, the main contributions of this paper, including the BS-LMS algorithm (Section ), the Markov-Gaussian block-sparse model (Section ), and the superior-performance analysis (Section ), are new relative to the above references.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_26", "@cite_4", "@cite_7", "@cite_36", "@cite_21", "@cite_19", "@cite_16" ], "mid": [ "", "", "2107806930", "2100068253", "2146725538", "2098674248", "1981035919", "2293909378", "" ], "abstract": [ "", "", "We consider adaptive system identification problems with convex constraints and propose a family of regularized Least-Mean-Square (LMS) algorithms. We show that with a properly selected regularization parameter the regularized LMS provably dominates its conventional counterpart in terms of mean square deviations. We establish simple and closed-form expressions for choosing this regularization parameter. For identifying an unknown sparse system we propose sparse and group-sparse LMS algorithms, which are special examples of the regularized LMS family. Simulation results demonstrate the advantages of the proposed filters in both convergence rate and steady-state error under sparsity assumptions on the true coefficient vector.", "In this paper, a fast algorithm for overcomplete sparse decomposition, called SL0, is proposed. The algorithm is essentially a method for obtaining sparse solutions of underdetermined systems of linear equations, and its applications include underdetermined sparse component analysis (SCA), atomic decomposition on overcomplete dictionaries, compressed sensing, and decoding real field codes. Contrary to previous methods, which usually solve this problem by minimizing the l 1 norm using linear programming (LP) techniques, our algorithm tries to directly minimize the l 0 norm.
It is experimentally shown that the proposed algorithm is about two to three orders of magnitude faster than the state-of-the-art interior-point LP solvers, while providing the same (or better) accuracy.", "As one of the recently proposed algorithms for sparse system identification, the l 0 norm constraint Least Mean Square (l0-LMS) algorithm modifies the cost function of the traditional method with a penalty of tap-weight sparsity. The performance of l0-LMS is quite attractive compared with its various precursors. However, there has been no detailed study of its performance. This paper presents a comprehensive theoretical performance analysis of l0-LMS for white Gaussian input data based on assumptions that are reasonable over a large range of parameter settings. Expressions for the steady-state mean square deviation (MSD) are derived and discussed with respect to algorithm parameters and system sparsity. A parameter selection rule is established for achieving the best performance. Approximated with a Taylor series, the instantaneous behavior is also derived. In addition, the relationship between l0-LMS and some previous works, as well as the sufficient conditions for l0-LMS to accelerate convergence, are set up. Finally, all of the theoretical results are compared with simulations and are shown to agree well in a wide range of parameters.", "We propose a new approach to adaptive system identification when the system model is sparse. The approach applies l 1 relaxation, common in compressive sensing, to improve the performance of LMS-type adaptive methods. This results in two new algorithms, the zero-attracting LMS (ZA-LMS) and the reweighted zero-attracting LMS (RZA-LMS). The ZA-LMS is derived by combining an l 1 norm penalty on the coefficients with the quadratic LMS cost function, which generates a zero attractor in the LMS iteration.
The zero attractor promotes sparsity in the taps during the filtering process, and therefore accelerates convergence when identifying sparse systems. We prove that the ZA-LMS can achieve lower mean square error than the standard LMS. To further improve the filtering performance, the RZA-LMS is developed using a reweighted zero attractor. The performance of the RZA-LMS is numerically superior to that of the ZA-LMS. Experiments demonstrate the advantages of the proposed filters in both convergence rate and steady-state behavior under sparsity assumptions on the true coefficient vector. The RZA-LMS is also shown to be robust when the number of non-zero taps increases.", "In order to improve the sparsity exploitation performance of norm constraint least mean square (LMS) algorithms, a novel adaptive algorithm is proposed by introducing a variable p-norm-like constraint into the cost function of the LMS algorithm, which exerts a zero attraction on the weight updating iterations. The parameter p of the p-norm-like constraint is adjusted iteratively along the negative gradient direction of the cost function. Numerical simulations show that the proposed algorithm has better performance than traditional l 0 and l 1 norm constraint LMS algorithms.", "In order to improve the performance of least mean square (LMS) based system identification of sparse systems, a new adaptive algorithm is proposed which utilizes the sparsity property of such systems. A general approximating approach on the l 0 norm, a typical metric of system sparsity, is proposed and integrated into the cost function of the LMS algorithm. This integration is equivalent to adding a zero attractor in the iterations, by which the convergence rate of the small coefficients that dominate the sparse system can be effectively improved. Moreover, using a partial updating method, the computational complexity is reduced.
The simulations demonstrate that the proposed algorithm can effectively improve the performance of LMS-based identification algorithms on sparse systems.", "" ] }
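The sparsity-penalized LMS variants surveyed in this record share one mechanism: a zero attractor added to the standard LMS update. The sketch below is a generic illustration of that idea using the common exponential approximation of the l 0 norm; the function name and the parameter values (mu, rho, beta) are our own illustrative choices, not taken from any cited paper.

```python
import numpy as np

def l0_lms_identify(x, d, n_taps, mu=0.01, rho=5e-4, beta=5.0):
    """LMS system identification with an approximate l 0 norm zero attractor.

    The penalty sum(1 - exp(-beta * |w_i|)) approximates the l 0 norm of the
    tap-weight vector; its gradient adds a "zero attractor" term that pulls
    small taps toward zero while barely affecting large ones.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor: x[n], x[n-1], ...
        e = d[n] - w @ u                    # a priori estimation error
        w = w + mu * e * u                  # standard LMS gradient step
        w = w - rho * beta * np.sign(w) * np.exp(-beta * np.abs(w))  # attractor
    return w
```

On a sparse impulse response the attractor speeds up convergence of the near-zero taps; setting rho = 0 recovers plain LMS.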
1410.5024
1578320724
In order to improve the performance of least mean square (LMS)-based adaptive filtering for identifying block-sparse systems, a new adaptive algorithm called block-sparse LMS (BS-LMS) is proposed in this paper. The basis of the proposed algorithm is to insert a penalty of block-sparsity, which is a mixed @math norm of the adaptive tap-weights with equal group partition sizes, into the cost function of the traditional LMS algorithm. To describe a block-sparse system response, we first propose a Markov-Gaussian model, which can generate system responses of arbitrary average sparsity and arbitrary average block length from given parameters. Then we present theoretical expressions for the steady-state misadjustment and transient convergence behavior of BS-LMS with an appropriate group partition size for white Gaussian input data. Based on these results, we theoretically demonstrate that BS-LMS has much better convergence behavior than @math -LMS at the same small level of misadjustment. Finally, numerical experiments verify that all of the theoretical analysis agrees well with simulation results over a large range of parameters.
The idea of using a mixed norm, such as the @math norm @cite_1 @cite_33 @cite_0 , the approximated @math norm @cite_28 , or the @math norm @cite_31 , to handle block-sparsity has been adopted in sparse signal recovery. By exploiting block structure, recovery becomes possible under more general conditions, demonstrating the superior performance brought about by mixed norms. Furthermore, once a mixed norm is introduced, the reconstruction error in the presence of noise becomes smaller than that of conventional algorithms. Besides mixed norms, there are other approaches to block-sparse signal recovery, including greedy algorithms @cite_32 @cite_23 @cite_24 @cite_20 , Bayesian CS framework-based algorithms @cite_6 @cite_12 , a dynamic programming-based algorithm @cite_34 , and a decoding-based algorithm @cite_2 .
{ "cite_N": [ "@cite_33", "@cite_28", "@cite_1", "@cite_32", "@cite_6", "@cite_0", "@cite_24", "@cite_23", "@cite_2", "@cite_31", "@cite_34", "@cite_12", "@cite_20" ], "mid": [ "2098996169", "2101782279", "2147276092", "2125680629", "1981157266", "2128007683", "2170844819", "2135780853", "2098149254", "2114595593", "1837471008", "2033419225", "2148527554" ], "abstract": [ "Let A be an M by N matrix (M 1 - 1 d, and d = Omega(log(1 isin) isin3) . The relaxation given in (*) can be solved in polynomial time using semi-definite programming.", "In this paper, we consider compressed sensing (CS) of block-sparse signals, i.e., sparse signals that have nonzero coefficients occurring in clusters. An efficient algorithm, called zero-point attracting projection (ZAP) algorithm, is extended to the scenario of block CS. The block version of ZAP algorithm employs an approximate l 2,0 norm as the cost function, and finds its minimum in the solution space via iterations. For block sparse signals, an analysis of the stability of the local minimums of this cost function under the perturbation of noise reveals an advantage of the proposed algorithm over its original non-block version in terms of reconstruction error. Finally, numerical experiments show that the proposed algorithm outperforms other state of the art methods for the block sparse problem in various respects, especially the stability under noise.", "Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. 
More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed l 2 l 1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.", "Compressive sensing (CS) is an alternative to Shannon Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K << N elements from an N -dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N/K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients.
This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models (wavelet trees and block sparsity) into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.", "In the traditional framework of compressive sensing (CS), only a sparse prior on the signal in the time or frequency domain is adopted to guarantee exact inverse recovery. Other than the sparse prior, structures on the sparse pattern of the signal have also been used as an additional prior, called model-based compressive sensing, such as clustered structure and tree structure on wavelet coefficients. In this paper, cluster structured sparse signals are investigated. Under the framework of Bayesian compressive sensing, a hierarchical Bayesian model is employed to model both the sparse prior and the cluster prior, and Markov Chain Monte Carlo (MCMC) sampling is implemented for the inference. Unlike the state-of-the-art algorithms which also take into account the cluster prior, the proposed algorithm solves the inverse problem automatically: prior information on the number of clusters and the size of each cluster is unknown.
The experimental results show that the proposed algorithm outperforms many state-of-the-art algorithms.", "It has been known for a while that l1-norm relaxation can in certain cases solve an under-determined system of linear equations. Recently, E. Candes (\"Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,\" IEEE Trans. Information Theory, vol. 52, no. 12, pp. 489-509, Dec. 2006) and D. Donoho (\"High-dimensional centrally symmetric polytopes with neighborliness proportional to dimension,\" Disc. Comput. Geometry, vol. 35, no. 4, pp. 617-652, 2006) proved (in a large dimensional and statistical context) that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of nonzero elements of the unknown vector) also proportional to the length of the unknown vector such that l1-norm relaxation succeeds in solving the system. In this paper, in a large dimensional and statistical context, we determine sharp lower bounds on the values of allowable sparsity for any given number (proportional to the length of the unknown vector) of equations for the case of the so-called block-sparse unknown vectors considered in \"On the reconstruction of block-sparse signals with an optimal number of measurements,\" (M. , IEEE Trans. Signal Processing, submitted for publication).", "Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered.
Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.", "We consider efficient methods for the recovery of block-sparse signals, i.e., sparse signals that have nonzero entries occurring in clusters, from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block -sparse signals in no more than steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed -optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.", "We consider the deterministic construction of a measurement matrix and a recovery method for signals that are block sparse. A signal that has dimension N = nd, which consists of n blocks of size d, is called (s, d)-block sparse if only s blocks out of n are nonzero. We construct an explicit linear mapping Phi that maps the (s, d)-block sparse signal to a measurement vector of dimension M, where sd < N(1 - (1 - M/N)^(d/(d+1))) - o(1).
We show that if the (s,d)-block sparse signal is chosen uniformly at random then the signal can almost surely be reconstructed from the measurement vector in O(N^3) computations.", "Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Motivated by signal/image processing and computer vision applications, such as face recognition, we consider the block-sparse recovery problem in the case where the number of atoms in each block is arbitrary, possibly much larger than the dimension of the underlying subspace. To find a block-sparse representation of a signal, we propose two classes of nonconvex optimization programs, which aim to minimize the number of nonzero coefficient blocks and the number of nonzero reconstructed vectors from the blocks, respectively. Since both classes of problems are NP-hard, we propose convex relaxations and derive conditions under which each class of the convex programs is equivalent to the original nonconvex formulation. Our conditions depend on the notions of mutual and cumulative subspace coherence of a dictionary, which are natural generalizations of existing notions of mutual and cumulative coherence. We evaluate the performance of the proposed convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem improves the state-of-the-art results by 10% with only 25% of the training data.
In contrast to the existing work in the sparse approximation and compressive sensing literature on block sparsity, no prior knowledge of the locations and sizes of the clusters is assumed. We prove that O(K + C log(N/C)) random projections are sufficient for (K,C)-model sparse signal recovery based on subspace enumeration. We also provide a robust polynomial-time recovery algorithm for (K,C)-model sparse signals with provable estimation guarantees.", "We examine the recovery of block sparse signals and extend the recovery framework in two important directions; one by exploiting the signals' intra-block correlation and the other by generalizing the signals' block structure. We propose two families of algorithms based on the framework of block sparse Bayesian learning (BSBL). One family, directly derived from the BSBL framework, requires knowledge of the block structure. Another family, derived from an expanded BSBL framework, is based on a weaker assumption on the block structure, and can be used when the block structure is completely unknown. Using these algorithms, we show that exploiting intra-block correlation is very helpful in improving recovery performance. These algorithms also shed light on how to modify existing algorithms or design new ones to exploit such correlation and improve performance.", "This paper examines the ability of greedy algorithms to estimate a block sparse parameter vector from noisy measurements. In particular, block sparse versions of the orthogonal matching pursuit and thresholding algorithms are analyzed under both adversarial and Gaussian noise models. In the adversarial setting, it is shown that estimation accuracy comes within a constant factor of the noise power. Under Gaussian noise, the Cramer-Rao bound is derived, and it is shown that the greedy techniques come close to this bound at high signal-to-noise ratio.
The guarantees are numerically compared with the actual performance of block and non-block algorithms, identifying situations in which block sparse techniques improve upon the scalar sparsity approach. Specifically, we show that block sparse methods are particularly successful when the atoms within each block are nearly orthogonal." ] }
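The mixed-norm penalties discussed in this record act on groups of coefficients at once. As a self-contained illustration (a textbook construction under our own naming, not code from any cited work), the following sketch implements group soft-thresholding, the proximal operator of the mixed l 2,1 norm with equal group partition sizes.

```python
import numpy as np

def group_soft_threshold(w, tau, group_size):
    """Proximal operator of the mixed l 2,1 norm with equal group sizes.

    Each group of coefficients is shrunk according to its l 2 norm; groups
    whose norm falls below tau are zeroed entirely, which promotes
    block-sparsity rather than isolated zero coefficients.
    """
    groups = np.asarray(w, dtype=float).reshape(-1, group_size)
    norms = np.linalg.norm(groups, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return (groups * scale).ravel()
```

Applying this operator after each gradient step yields a proximal-gradient recovery scheme; setting group_size = 1 recovers ordinary element-wise soft-thresholding for the l 1 case.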
1410.5024
1578320724
In order to improve the performance of least mean square (LMS)-based adaptive filtering for identifying block-sparse systems, a new adaptive algorithm called block-sparse LMS (BS-LMS) is proposed in this paper. The basis of the proposed algorithm is to insert a penalty of block-sparsity, which is a mixed @math norm of the adaptive tap-weights with equal group partition sizes, into the cost function of the traditional LMS algorithm. To describe a block-sparse system response, we first propose a Markov-Gaussian model, which can generate system responses of arbitrary average sparsity and arbitrary average block length from given parameters. Then we present theoretical expressions for the steady-state misadjustment and transient convergence behavior of BS-LMS with an appropriate group partition size for white Gaussian input data. Based on these results, we theoretically demonstrate that BS-LMS has much better convergence behavior than @math -LMS at the same small level of misadjustment. Finally, numerical experiments verify that all of the theoretical analysis agrees well with simulation results over a large range of parameters.
Recursive least squares (RLS) is another important branch of adaptive filtering. Its faster convergence rate compared to LMS makes RLS an intriguing adaptive paradigm. In @cite_18 , a group-sparsity-cognizant RLS is proposed using various mixed norms, including the @math norm, @math norm, @math norm, and @math norm. Numerical experiments show that the novel group-sparse RLS is effective and robust for the block-sparse system identification problem, and provides improved performance compared to reference algorithms that exploit only sparsity.
{ "cite_N": [ "@cite_18" ], "mid": [ "1551912413" ], "abstract": [ "SUMMARY Group sparsity is one of the important signal priors for regularization of inverse problems. Sparsity with group structure is encountered in numerous applications. However, despite the abundance of sparsity-based adaptive algorithms, attempts at group sparse adaptive methods are very scarce. In this paper, we introduce novel recursive least squares (RLS) adaptive algorithms regularized via penalty functions, which promote group sparsity. We present a new analytic approximation for lp,0 norm to utilize it as a group sparse regularizer. Simulation results confirm the improved performance of the new group sparse algorithms over regular and sparse RLS algorithms when group sparse structure is present. Copyright © 2013 John Wiley & Sons, Ltd." ] }
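For reference alongside the group-sparse RLS discussed in this record, the plain exponentially weighted RLS recursion (no sparsity penalty) can be sketched as follows. This is the standard textbook recursion under our own naming; the cited group-sparse variants add a mixed-norm penalty on top of it.

```python
import numpy as np

def rls_identify(x, d, n_taps, lam=0.999, delta=100.0):
    """Standard recursive least squares (RLS) system identification.

    P tracks the inverse of the exponentially weighted input autocorrelation
    matrix; lam is the forgetting factor and delta sets the initial P = delta*I.
    """
    w = np.zeros(n_taps)
    P = delta * np.eye(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]   # regressor vector
        k = P @ u / (lam + u @ P @ u)       # gain vector
        e = d[n] - w @ u                    # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam  # inverse-correlation update
    return w
```

The per-iteration cost is O(n_taps^2), versus O(n_taps) for LMS, which is the price paid for the faster convergence noted in the text.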
1410.5024
1578320724
In order to improve the performance of least mean square (LMS)-based adaptive filtering for identifying block-sparse systems, a new adaptive algorithm called block-sparse LMS (BS-LMS) is proposed in this paper. The basis of the proposed algorithm is to insert a penalty of block-sparsity, which is a mixed @math norm of the adaptive tap-weights with equal group partition sizes, into the cost function of the traditional LMS algorithm. To describe a block-sparse system response, we first propose a Markov-Gaussian model, which can generate system responses of arbitrary average sparsity and arbitrary average block length from given parameters. Then we present theoretical expressions for the steady-state misadjustment and transient convergence behavior of BS-LMS with an appropriate group partition size for white Gaussian input data. Based on these results, we theoretically demonstrate that BS-LMS has much better convergence behavior than @math -LMS at the same small level of misadjustment. Finally, numerical experiments verify that all of the theoretical analysis agrees well with simulation results over a large range of parameters.
In some of the above references @cite_1 @cite_33 @cite_0 @cite_28 @cite_31 @cite_32 @cite_23 @cite_20 @cite_2 @cite_18 , it is assumed that the dispersive active regions are located randomly in known partition groups (Fig. (d)). However, this assumption is impracticable in real scenarios: the location of each cluster is arbitrary and totally unknown. In this paper, we therefore utilize a mixed @math norm in which all of the group partition sizes are the same (Fig. (e, f)). Furthermore, in order to avoid confusion between the blocks in the unknown system response and the partition blocks in the adaptive tap-weights, we adopt or to indicate the system coefficient blocks and to denote the partitions in the adaptive tap-weights. Based on the theoretical analysis, we further study the optimal group partition size and demonstrate that the proposed algorithm with an appropriate group partition size achieves superior performance to @math -LMS.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_28", "@cite_1", "@cite_32", "@cite_0", "@cite_23", "@cite_2", "@cite_31", "@cite_20" ], "mid": [ "1551912413", "2098996169", "2101782279", "2147276092", "2125680629", "2128007683", "2135780853", "2098149254", "2114595593", "2148527554" ], "abstract": [ "SUMMARY Group sparsity is one of the important signal priors for regularization of inverse problems. Sparsity with group structure is encountered in numerous applications. However, despite the abundance of sparsity-based adaptive algorithms, attempts at group sparse adaptive methods are very scarce. In this paper, we introduce novel recursive least squares (RLS) adaptive algorithms regularized via penalty functions, which promote group sparsity. We present a new analytic approximation for lp,0 norm to utilize it as a group sparse regularizer. Simulation results confirm the improved performance of the new group sparse algorithms over regular and sparse RLS algorithms when group sparse structure is present. Copyright © 2013 John Wiley & Sons, Ltd.", "Let A be an M by N matrix (M 1 - 1 d, and d = Omega(log(1 isin) isin3) . The relaxation given in (*) can be solved in polynomial time using semi-definite programming.", "In this paper, we consider compressed sensing (CS) of block-sparse signals, i.e., sparse signals that have nonzero coefficients occurring in clusters. An efficient algorithm, called zero-point attracting projection (ZAP) algorithm, is extended to the scenario of block CS. The block version of ZAP algorithm employs an approximate l 2,0 norm as the cost function, and finds its minimum in the solution space via iterations. For block sparse signals, an analysis of the stability of the local minimums of this cost function under the perturbation of noise reveals an advantage of the proposed algorithm over its original non-block version in terms of reconstruction error. 
Finally, numerical experiments show that the proposed algorithm outperforms other state of the art methods for the block sparse problem in various respects, especially the stability under noise.", "Traditional sampling theories consider the problem of reconstructing an unknown signal x from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that x lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which x lies in a union of subspaces. In this paper, we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which x lies in a sum of k subspaces, chosen from a larger set of m possibilities. The samples are modeled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose nonzero elements appear in fixed blocks. We then propose a mixed lscr2 lscr1 program for block sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on RIP, we also prove stability of our approach in the presence of noise and modeling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. 
Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.", "Compressive sensing (CS) is an alternative to Shannon Nyquist sampling for the acquisition of sparse or compressible signals that can be well approximated by just K ? N elements from an N -dimensional basis. Instead of taking periodic samples, CS measures inner products with M < N random vectors and then recovers the signal via a sparsity-seeking optimization or greedy algorithm. Standard CS dictates that robust signal recovery is possible from M = O(K log(N K)) measurements. It is possible to substantially decrease M without sacrificing robustness by leveraging more realistic signal models that go beyond simple sparsity and compressibility by including structural dependencies between the values and locations of the signal coefficients. This paper introduces a model-based CS theory that parallels the conventional theory and provides concrete guidelines on how to create model-based recovery algorithms with provable performance guarantees. A highlight is the introduction of a new class of structured compressible signals along with a new sufficient condition for robust structured compressible signal recovery that we dub the restricted amplification property, which is the natural counterpart to the restricted isometry property of conventional CS. Two examples integrate two relevant signal models-wavelet trees and block sparsity-into two state-of-the-art CS recovery algorithms and prove that they offer robust recovery from just M = O(K) measurements. Extensive numerical simulations demonstrate the validity and applicability of our new theory and algorithms.", "It has been known for a while that l1-norm relaxation can in certain cases solve an under-determined system of linear equations. Recently, E. 
Candes (\"Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,\" IEEE Trans. Information Theory, vol. 52, no. 12, pp. 489-509, Dec. 2006) and D. Donoho (\"High-dimensional centrally symmetric polytopes with neighborlines proportional to dimension,\" Disc. Comput. Geometry, vol. 35, no. 4, pp. 617-652, 2006) proved (in a large dimensional and statistical context) that if the number of equations (measurements in the compressed sensing terminology) in the system is proportional to the length of the unknown vector then there is a sparsity (number of nonzero elements of the unknown vector) also proportional to the length of the unknown vector such that l1-norm relaxation succeeds in solving the system. In this paper, in a large dimensional and statistical context, we determine sharp lower bounds on the values of allowable sparsity for any given number (proportional to the length of the unknown vector) of equations for the case of the so-called block-sparse unknown vectors considered in \"On the reconstruction of block-sparse signals with an optimal number of measurements,\" (M. , IEEE Trans, Signal Processing, submitted for publication.", "We consider efficient methods for the recovery of block-sparse signals-i.e., sparse signals that have nonzero entries occurring in clusters-from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we introduce. We then show that a block-version of the orthogonal matching pursuit algorithm recovers block -sparse signals in no more than steps if the block-coherence is sufficiently small. The same condition on block-coherence is shown to guarantee successful recovery through a mixed -optimization approach. This complements previous recovery results for the block-sparse case which relied on small block-restricted isometry constants. 
The significance of the results presented in this paper lies in the fact that making explicit use of block-sparsity can provably yield better reconstruction properties than treating the signal as being sparse in the conventional sense, thereby ignoring the additional structure in the problem.", "We consider the deterministic construction of a measurement matrix and a recovery method for signals that are block sparse. A signal that has dimension N = nd, which consists of n blocks of size d, is called (s, d)-block sparse if only s blocks out of n are nonzero. We construct an explicit linear mapping Phi that maps the (s, d) -block sparse signal to a measurement vector of dimension M, where s - d < N (1- (1- M N)d d+1) - o(1). We show that if the (s,d)- block sparse signal is chosen uniformly at random then the signal can almost surely be reconstructed from the measurement vector in O(N3) computations.", "Given a dictionary that consists of multiple blocks and a signal that lives in the range space of only a few blocks, we study the problem of finding a block-sparse representation of the signal, i.e., a representation that uses the minimum number of blocks. Motivated by signal image processing and computer vision applications, such as face recognition, we consider the block-sparse recovery problem in the case where the number of atoms in each block is arbitrary, possibly much larger than the dimension of the underlying subspace. To find a block-sparse representation of a signal, we propose two classes of nonconvex optimization programs, which aim to minimize the number of nonzero coefficient blocks and the number of nonzero reconstructed vectors from the blocks, respectively. Since both classes of problems are NP-hard, we propose convex relaxations and derive conditions under which each class of the convex programs is equivalent to the original nonconvex formulation. 
Our conditions depend on the notions of mutual and cumulative subspace coherence of a dictionary, which are natural generalizations of existing notions of mutual and cumulative coherence. We evaluate the performance of the proposed convex programs through simulations as well as real experiments on face recognition. We show that treating the face recognition problem as a block-sparse recovery problem improves the state-of-the-art results by 10 with only 25 of the training data.", "This paper examines the ability of greedy algorithms to estimate a block sparse parameter vector from noisy measurements. In particular, block sparse versions of the orthogonal matching pursuit and thresholding algorithms are analyzed under both adversarial and Gaussian noise models. In the adversarial setting, it is shown that estimation accuracy comes within a constant factor of the noise power. Under Gaussian noise, the Cramer-Rao bound is derived, and it is shown that the greedy techniques come close to this bound at high signal-to-noise ratio. The guarantees are numerically compared with the actual performance of block and non-block algorithms, identifying situations in which block sparse techniques improve upon the scalar sparsity approach. Specifically, we show that block sparse methods are particularly successful when the atoms within each block are nearly orthogonal." ] }
1410.5476
2951182230
In recent years, the Metis prover, based on ordered paramodulation and model elimination, has replaced the earlier built-in methods for general-purpose proof automation in HOL4 and Isabelle/HOL. In the annual CASC competition, however, the leanCoP system based on connection tableaux has performed better than Metis. In this paper we show how leanCoP's core algorithm can be implemented inside HOL Light. leanCoP's flagship feature, namely its minimalistic core, results in a very simple proof system. This plays a crucial role in extending the MESON proof reconstruction mechanism to connection tableaux proofs, providing an implementation of leanCoP that certifies its proofs. We discuss the differences between our direct implementation, which uses an explicit Prolog stack, and the continuation-passing implementation of MESON present in HOL Light, and compare their performance on all core HOL Light goals. The resulting prover can also be used as a general-purpose TPTP prover. We compare its performance against the resolution-based Metis on TPTP and other interesting datasets.
Compact ATP calculi such as @cite_22 and @cite_17 have been used for some time in @cite_27 @cite_12 and @cite_11 as general first-order automation tactics for discharging goals that are already simple enough. With the arrival of large-theory "hammer" linkups @cite_18 @cite_24 @cite_26 @cite_29 between ITPs, state-of-the-art ATPs such as @cite_14 and @cite_0 , and premise selection methods @cite_8 , such tactics have also come to be used as a relatively cheap method for reconstructing the (minimized) proofs found by the stronger ATPs. In particular, Hurd's has been adopted as the main proof reconstruction tool used by 's linkup @cite_23 @cite_1 , while Harrison's version of could reconstruct in 1 second about 80 proofs found by in the first experiments with the linkup @cite_13 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_14", "@cite_22", "@cite_8", "@cite_29", "@cite_1", "@cite_17", "@cite_24", "@cite_0", "@cite_27", "@cite_23", "@cite_13", "@cite_12", "@cite_11" ], "mid": [ "", "", "1781094", "2040816116", "2097437276", "62264973", "1599039905", "2005090804", "2113065066", "1590233219", "1819951499", "1524804222", "49696946", "", "1664582451" ], "abstract": [ "", "", "In this paper we give a short introduction in first-order theorem proving and the use of the theorem prover Vampire. We discuss the superposition calculus and explain the key concepts of saturation and redundancy elimination, present saturation algorithms and preprocessing, and demonstrate how these concepts are implemented in Vampire. Further, we also cover more recent topics and features of Vampire designed for advanced applications, including satisfiability checking, theory reasoning, interpolation, consequence elimination, and program analysis.", "", "In this paper, an overview of state-of-the-art techniques for premise selection in large theory mathematics is provided, and new premise selection techniques are introduced. Several evaluation metrics are introduced, compared and their appropriateness is discussed in the context of automated reasoning in large theory mathematics. The methods are evaluated on the MPTP2078 benchmark, a subset of the Mizar library, and a 10 improvement is obtained over the best method so far.", "Sledgehammer integrates automatic theorem provers in the proof assistant Isabelle HOL. A key component, the relevance filter, heuristically ranks the thousands of facts available and selects a subset, based on syntactic similarity to the current goal. We introduce MaSh, an alternative that learns from successful proofs. New challenges arose from our \"zero-click\" vision: MaSh should integrate seamlessly with the users' workflow, so that they benefit from machine learning without having to install software, set up servers, or guide the learning. 
The underlying machinery draws on recent research in the context of Mizar and HOL Light, with a number of enhancements. MaSh outperforms the old relevance filter on large formalizations, and a particularly strong filter is obtained by combining the two filters.", "Sledgehammer, a component of the interactive theorem prover Isabelle, finds proofs in higher-order logic by calling the automated provers for first-order logic E, SPASS and Vampire. This paper is the largest and most detailed empirical evaluation of such a link to date. Our test data consists of 1240 proof goals arising in 7 diverse Isabelle theories, thus representing typical Isabelle proof obligations. We measure the effectiveness of Sledgehammer and many other parameters such as run time and complexity of proofs. A facility for minimizing the number of facts needed to prove a goal is presented and analyzed.", "A proof procedure based on a theorem of Herbrand and utilizing the matching technique of Prawitz is presented. In general, Herbrand-type proof procedures proceed by generating over increasing numbers of candidates for the truth-functionally contradictory statement the procedures seek. A trial is successful when some candidate is in fact a contradictory statement. In procedures to date the number of candidates developed before a contradictory statement is found (if one is found) varies roughly exponentially with the size of the contradictory statement. (“Size” might be measured by the number of clauses in the conjunctive normal form of the contradictory statement.) Although basically subject to the same rate of growth, the procedure introduced here attempts to drastically trim the number of candidates at an intermediate level of development. This is done by retaining beyond a certain level only candidates already “partially contradictory.” The major task usually is finding the partially contradictory sets. 
However, the number of candidate sets required to find these subsets of the contradictory set is generally much smaller than the number required to find the full contradictory set.", "HOL(y)Hammer is an online AI ATP service for formal (computer-understandable) mathematics encoded in the HOL Light system. The service allows its users to upload and automatically process an arbitrary formal development (project) based on HOL Light, and to attack arbitrary conjectures that use the concepts defined in some of the uploaded projects. For that, the service uses several automated reasoning systems combined with several premise selection methods trained on all the project proofs. The projects that are readily available on the server for such query answering include the recent versions of the Flyspeck, Multivariate Analysis and Complex Analysis libraries. The service runs on a 48-CPU server, currently employing in parallel for each task 7 AI ATP combinations and 4 decision procedures that contribute to its overall performance. The system is also available for local installation by interested users, who can customize it for their own proof development. An Emacs interface allowing parallel asynchronous queries to the service is also provided. The overall structure of the service is outlined, problems that arise and their solutions are discussed, and an initial account of using the system is given.", "E is an equational theorem prover for clausal logic with equality. We describe the latest version, E 0.81 Tumsong, with special emphasis on the important aspects that have changed compared to previously described versions.", "A generic tableau prover has been implemented and integrated with Isabelle (Paulson, 1994). Compared with classical first-order logic provers, it has numerous extensions that allow it to reason with any supplied set of tableau rules. It has a higherorder syntax in order to support user-defined binding operators, such as those of set theory. 
The unification algorithm is first-order instead of higher-order, but it includes modifications to handle bound variables. The proof, when found, is returned to Isabelle as a list of tactics. Because Isabelle verifies the proof, the prover can cut corners for efficiency’s sake without compromising soundness. For example, the prover can use type information to guide the search without storing type information in full. Categories: F.4, I.1", "Interactive proof assistants should verify the proofs they receive from automatic theorem provers. Normally this proof reconstruction takes place internally, forming part of the integration between the two tools. We have implemented source-level proof reconstruction: resolution proofs are automatically translated to Isabelle proof scripts. Users can insert this text into their proof development or (if they wish) examine it manually. Each step of a proof is justified by calling Hurd's Metis prover, which we have ported to Isabelle. A recurrent issue in this project is the treatment of Isabelle's axiomatic type classes.", "PRocH is a proof reconstruction tool that imports in HOL Light proofs produced by ATPs on the recently developed translation of HOL Light and Flyspeck problems to ATP formats. PRocH combines several reconstruction methods in parallel, but the core improvement over previous methods is obtained by re-playing in the HOL logic the detailed inference steps recorded in the ATP (TPTP) proofs, using several internal HOL Light inference methods. These methods range from fast variable matching and more involved rewriting, to full first-order theorem proving using the MESON tactic. The system is described and its performance is evaluated here on a large set of Flyspeck problems.", "", "Many implementations of model elimination perform proof search by iteratively increasing a bound on the total size of the proof. We propose an optimized version of this search mode using a simple divide-and-conquer refinement. 
Optimized and unoptimized modes are compared, together with depth-bounded and best-first search, over the entire TPTP problem library. The optimized size-bounded mode seems to be the overall winner, but for each strategy there are problems on which it performs best. Some attempt is made to analyze why. We emphasize that our optimization, and other implementation techniques like caching, are rather general: they are not dependent on the details of model elimination, or even that the search is concerned with theorem proving. As such, we believe that this study is a useful complement to research on extending the model elimination calculus." ] }
1410.4871
2078244221
Texture characterization is a central element in many image processing applications. Multifractal analysis is a useful signal and image processing tool, yet the accurate estimation of multifractal parameters for image texture remains a challenge. This is mainly because current estimation procedures consist of performing linear regressions across frequency scales of the 2D dyadic wavelet transform, for which only a few such scales are computable for images. The strongly non-Gaussian nature of multifractal processes, combined with their complicated dependence structure, makes it difficult to develop suitable models for parameter estimation. Here, we propose a Bayesian procedure that addresses the difficulties in the estimation of the multifractality parameter. The originality of the procedure is threefold: the construction of a generic semiparametric statistical model for the logarithm of wavelet leaders; the formulation of Bayesian estimators that are associated with this model and the set of parameter values admitted by multifractal theory; and the exploitation of a suitable Whittle approximation within the Bayesian model, which enables the otherwise infeasible evaluation of the posterior distribution associated with the model. Performance is assessed numerically for several 2D multifractal processes, for several image sizes and a large range of process parameters. The procedure yields significant benefits over current benchmark estimators in terms of estimation performance and ability to discriminate between the two most commonly used classes of multifractal process models. The gains in performance are particularly pronounced for small image sizes, notably enabling for the first time the analysis of image patches as small as @math pixels.
There are a limited number of reports in the literature that attempt to overcome the limitations of multifractal analysis for images described above. The has been proposed and studied in, e.g., @cite_14 @cite_15 @cite_54 and formulates parameter inference as the solution (in the least squares sense) of an over-determined system of equations that are derived from the moments of the data. The method depends strongly on fully parametric models and yields, to the best of our knowledge, only limited benefits in practical applications. Although classical in parameter inference, maximum likelihood (ML) and Bayesian estimation methods have mostly been formulated for a few specific self-similar and multifractal processes @cite_33 @cite_51 . The main reason for this lies in the complex statistical properties of most of these processes, which exhibit marginal distributions that are strongly non-Gaussian as well as intricate algebraically decaying dependence structures that remain poorly studied to date. The same remark is true for their wavelet coefficients and wavelet leaders, see, e.g., @cite_29 @cite_28 .
{ "cite_N": [ "@cite_14", "@cite_33", "@cite_28", "@cite_54", "@cite_29", "@cite_15", "@cite_51" ], "mid": [ "2248693534", "", "143740007", "1993817335", "1988927463", "1985058574", "2027561985" ], "abstract": [ "", "", "From a theoretical perspective, scale invariance, or simply scaling, can fruitfully be modeled with classes of multifractal stochastic processes, designed from positive multiplicative martingales (or cascades). From a practical perspective, scaling in real-world data is often analyzed by means of multiresolution quantities. The present contribution focuses on three different types of such multiresolution quantities, namely increment, wavelet and Leader coefficients, as well as on a specific multifractal processes, referred to as Infinitely Divisible Motions and fractional Brownian motion in multifractal time. It aims at studying, both analytically and by numerical simulations, the impact of varying the number of vanishing moments of the mother wavelet and the order of the increments on the decay rate of the (higher order) covariance functions of the (q-th power of the absolute values of these) multiresolution coefficients. The key result obtained here consist of the fact that, though it fastens the decay of the covariance functions, as is the case for fractional Brownian motions, increasing the number of vanishing moments of the mother wavelet or the order of the increments does not induce any faster decay for the (higher order) covariance functions", "Abstract In this paper, we make a short overview of continuous cascade models recently introduced to model asset return fluctuations. We show that these models account in a very parcimonious manner for most of ‘stylized facts’ of financial time-series. We review in more details the simplest continuous cascade namely the log-normal multifractal random walk (MRW). It can simply be considered as a stochastic volatility model where the (log-) volatility memory has a peculiar ‘logarithmic’ shape. 
This model possesses some appealing stability properties with respect to time aggregation. We describe how one can estimate it using a GMM method and we present some applications to volatility and (VaR) Value at Risk forecasting.", "The probability distribution of the cascade generators in a random multiplicative cascade represents a hidden parameter which is reflected in the fine scale limiting behavior of the scaling exponents (sample moments) of a single sample cascade realization as a.s. constants. We identify a large class of cascade generators uniquely determined by these scaling exponents. For this class we provide both asymptotic consistency and confidence intervals for two different estimators of the cumulant generating function (log Laplace transform) of the cascade generator distribution. These results are derived from investigation of the convergence properties of the fine scale sample moments of a single cascade realization.", "Multifractal processes have recently been proposed as a new formalism for modeling the time series of returns in finance. The major attraction of these processes is their ability to generate various degrees of long memory in different powers of returns—a feature that has been found in virtually all financial data. Initial difficulties stemming from nonstationarity and the combinatorial nature of the original model have been overcome by the introduction of an iterative Markov-switching multifractal model which allows for estimation of its parameters via maximum likelihood (ML) and Bayesian forecasting of volatility. However, applicability of MLE is restricted to cases with a discrete distribution of volatility components. From a practical point of view, ML also becomes computationally unfeasible for large numbers of components even if they are drawn from a discrete distribution. 
Here we propose an alternative generalized method of moments (GMM) estimator together with linear forecasts which in principle is...", "The role of the wavelet transformation as a whitening filter for 1 f processes is exploited to address problems of parameter and signal estimations for 1 f processes embedded in white background noise. Robust, computationally efficient, and consistent iterative parameter estimation algorithms are derived based on the method of maximum likelihood, and Cramer-Rao bounds are obtained. Included among these algorithms are optimal fractal dimension estimators for noisy data. Algorithms for obtaining Bayesian minimum-mean-square signal estimates are also derived together with an explicit formula for the resulting error. These smoothing algorithms find application in signal enhancement and restoration. The parameter estimation algorithms find application in signal enhancement and restoration. The parameter estimation algorithms, in addition to solving the spectrum estimation problem and to providing parameters for the smoothing process, are useful in problems of signal detection and classification. Results from simulations are presented to demonstrated the viability of the algorithms. >" ] }
1410.4871
2078244221
Texture characterization is a central element in many image processing applications. Multifractal analysis is a useful signal and image processing tool, yet the accurate estimation of multifractal parameters for image texture remains a challenge. This is mainly because current estimation procedures consist of performing linear regressions across frequency scales of the 2D dyadic wavelet transform, for which only a few such scales are computable for images. The strongly non-Gaussian nature of multifractal processes, combined with their complicated dependence structure, makes it difficult to develop suitable models for parameter estimation. Here, we propose a Bayesian procedure that addresses the difficulties in the estimation of the multifractality parameter. The originality of the procedure is threefold: the construction of a generic semiparametric statistical model for the logarithm of wavelet leaders; the formulation of Bayesian estimators that are associated with this model and the set of parameter values admitted by multifractal theory; and the exploitation of a suitable Whittle approximation within the Bayesian model, which enables the otherwise infeasible evaluation of the posterior distribution associated with the model. Performance is assessed numerically for several 2D multifractal processes, for several image sizes and a large range of process parameters. The procedure yields significant benefits over current benchmark estimators in terms of estimation performance and ability to discriminate between the two most commonly used classes of multifractal process models. The gains in performance are particularly pronounced for small image sizes, notably enabling for the first time the analysis of image patches as small as @math pixels.
One exception is given by fractional Brownian motion (in 1D) and fractional Brownian fields (in 2D) (fBm), which are jointly Gaussian self-similar (i.e., @math ) processes with a fully parametric covariance structure appropriate for ML and Bayesian estimation. Examples of ML and Bayesian estimators for 1D fBm, formulated in the spectral or wavelet domains, can be found in @cite_51 @cite_33 @cite_50 @cite_1 . For images, an ML estimator has been proposed in @cite_32 (note, however, that the estimation problem is there reduced to a univariate formulation for the rows/columns of the image).
{ "cite_N": [ "@cite_33", "@cite_1", "@cite_32", "@cite_50", "@cite_51" ], "mid": [ "", "1972784294", "2069235103", "2503326828", "2027561985" ], "abstract": [ "", "We consider a time series X = X k , k ∈ Z with memory parameter do ∈ R. This time series is either stationary or can be made stationary after differencing a finite number of times. We study the \"local Whittle wavelet estimator\" of the memory parameter d 0 . This is a wavelet-based semiparametric pseudo-likelihood maximum method estimator. The estimator may depend on a given finite range of scales or on a range which becomes infinite with the sample size. We show that the estimator is consistent and rate optimal if X is a linear process, and is asymptotically normal if X is Gaussian.", "Fractals have been shown to be useful in characterizing texture in a variety of contexts. Use of this methodology normally involves measurement of a parameter H, which is directly related to fractal dimension. In this work the basic theory of fractional Brownian motion is extended to the discrete case. It is shown that the power spectral density of such a discrete process is only approximately proportional to |f|a instead of in direct proportion as in the continuous case. An asymptotic Cramer-Rao bound is derived for the variance of an estimate of H. Subsequently, a maximum likelihood estimator (MLE) is developed to estimate H. It is shown that the variance of this estimator nearly achieves the minimum bound. A generation algorithm for discrete fractional motion is presented and used to demonstrate the capabilities of the MLE when the discrete fractional Brownian process is contaminated with additive Gaussian noise. The results show that even at signal-to-noise ratios of 30 dB, significant errors in estimation of H can result when noise is present. The MLE is then applied to X-ray images of the human calcaneus to demonstrate how the line-to-line formulation can be applied to the two-dimensional case. 
These results indicate that it has strong potential for quantifying texture.", "Since the seminal works by Granger and Joyeux (1980) and Hosking (1981), estimations of long-memory time series models have been receiving considerable attention and a number of parameter estimation procedures have been proposed. This paper gives an overview of this plethora of methodologies with special focus on likelihood-based techniques. Broadly speaking, likelihood-based techniques can be classified into the following categories: the exact maximum likelihood (ML) estimation (Sowell, 1992; Dahlhaus, 1989), ML estimates based on autoregressive approximations (Granger & Joyeux, 1980; Li & McLeod, 1986), Whittle estimates (Fox & Taqqu, 1986; Giraitis & Surgailis, 1990), Whittle estimates with autoregressive truncation (Beran, 1994a), approximate estimates based on the Durbin–Levinson algorithm (Haslett & Raftery, 1989), state-space-based maximum likelihood estimates for ARFIMA models (Chan & Palma, 1998), and estimation of stochastic volatility models (Ghysels, Harvey, & Renault, 1996; Breidt, Crato, & de Lima, 1998; Chan & Petris, 2000) among others. Given the diversified applications of these techniques in different areas, this review aims at providing a succinct survey of these methodologies as well as an overview of important related problems such as the ML estimation with missing data (Palma & Chan, 1997), influence of subsets of observations on estimates and the estimation of seasonal long-memory models (Palma & Chan, 2005). Performances and asymptotic properties of these techniques are compared and examined. Inter-connections and finite sample performances among these procedures are studied. Finally, applications to financial time series of these methodologies are discussed.", "The role of the wavelet transformation as a whitening filter for 1 f processes is exploited to address problems of parameter and signal estimations for 1 f processes embedded in white background noise. 
Robust, computationally efficient, and consistent iterative parameter estimation algorithms are derived based on the method of maximum likelihood, and Cramer-Rao bounds are obtained. Included among these algorithms are optimal fractal dimension estimators for noisy data. Algorithms for obtaining Bayesian minimum-mean-square signal estimates are also derived together with an explicit formula for the resulting error. These smoothing algorithms find application in signal enhancement and restoration. The parameter estimation algorithms find application in signal enhancement and restoration. The parameter estimation algorithms, in addition to solving the spectrum estimation problem and to providing parameters for the smoothing process, are useful in problems of signal detection and classification. Results from simulations are presented to demonstrated the viability of the algorithms. >" ] }
1410.4871
2078244221
Texture characterization is a central element in many image processing applications. Multifractal analysis is a useful signal and image processing tool, yet, the accurate estimation of multifractal parameters for image texture remains a challenge. This is due in the main to the fact that current estimation procedures consist of performing linear regressions across frequency scales of the 2D dyadic wavelet transform, for which only a few such scales are computable for images. The strongly non-Gaussian nature of multifractal processes, combined with their complicated dependence structure, makes it difficult to develop suitable models for parameter estimation. Here, we propose a Bayesian procedure that addresses the difficulties in the estimation of the multifractality parameter. The originality of the procedure is threefold. The construction of a generic semiparametric statistical model for the logarithm of wavelet leaders; the formulation of Bayesian estimators that are associated with this model and the set of parameter values admitted by multifractal theory; the exploitation of a suitable Whittle approximation within the Bayesian model which enables the otherwise infeasible evaluation of the posterior distribution associated with the model. Performance is assessed numerically for several 2D multifractal processes, for several image sizes and a large range of process parameters. The procedure yields significant benefits over current benchmark estimators in terms of estimation performance and ability to discriminate between the two most commonly used classes of multifractal process models. The gains in performance are particularly pronounced for small image sizes, notably enabling for the first time the analysis of image patches as small as @math pixels.
As far as MMC processes are concerned, @cite_8 proposes an ML approach in the time domain for one specific process. However, the method relies strongly on the particular construction of this process and cannot easily accommodate more general model classes. Moreover, the method is formulated for 1D signals only. Finally, a Bayesian estimation procedure for the parameter @math of multifractal time series has recently been proposed in @cite_3 . Unlike the methods mentioned above, it does not rely on process-specific assumptions but instead employs a heuristic semi-parametric model for the statistics of the logarithm of wavelet leaders associated with univariate MMC processes. Yet, it is designed for, and can only be applied to, univariate time series of small sample size.
{ "cite_N": [ "@cite_3", "@cite_8" ], "mid": [ "2026078231", "1542777121" ], "abstract": [ "Multifractal analysis has matured into a widely used signal and image processing tool. Due to the statistical nature of multifractal processes (strongly non-Gaussian and intricate dependence) the accurate estimation of multifractal parameters is very challenging in situations where the sample size is small (notably including a range of biomedical applications) and currently available estimators need to be improved. To overcome such limitations, the present contribution proposes a Bayesian estimation procedure for the multifractality (or intermittence) parameter. Its originality is threefold: First, the use of wavelet leaders, a recently introduced multiresolution quantity that has been shown to yield significant benefits for multifractal analysis; Second, the construction of a simple yet generic semi-parametric model for the marginals and covariance structure of wavelet leaders for the large class of multiplicative cascade based multifractal processes; Third, the construction of original Bayesian estimators associated with the model and the constraints imposed by multifractal theory. Performance are numerically assessed and illustrated for synthetic multifractal processes for a range of multifractal parameter values. The proposed procedure yields significantly improved estimation performance for small sample sizes.", "We present an approximated maximum likelihood method for the multifractal random walk processes of [E. , Phys. Rev. E 64, 026103 (2001)]. The likelihood is computed using a Laplace approximation and a truncation in the dependency structure for the latent volatility. The procedure is implemented as a package in the R computer language. Its performance is tested on synthetic data and compared to an inference approach based on the generalized method of moments. The method is applied to estimate parameters for various financial stock indices." ] }
1410.4627
2952751501
Although the human visual system can recognize many concepts under challenging conditions, it still has some biases. In this paper, we investigate whether we can extract these biases and transfer them into a machine recognition system. We introduce a novel method that, inspired by well-known tools in human psychophysics, estimates the biases that the human visual system might use for recognition, but in computer vision feature spaces. Our experiments are surprising, and suggest that classifiers from the human visual system can be transferred into a machine with some success. Since these classifiers seem to capture favorable biases in the human visual system, we further present an SVM formulation that constrains the orientation of the SVM hyperplane to agree with the bias from human visual system. Our results suggest that transferring this human bias into machines may help object recognition systems generalize across datasets and perform better when very little training data is available.
Our methods build upon work to extract mental images from a user's head for general objects @cite_3 , faces @cite_27 , and scenes @cite_10 . However, our work differs because we estimate mental images in state-of-the-art computer vision feature spaces, which allows us to integrate them into a machine recognition system.
{ "cite_N": [ "@cite_27", "@cite_10", "@cite_3" ], "mid": [ "", "4407728", "2165065544" ], "abstract": [ "", "Our perceptions are guided both by the bottom-up information entering our eyes, as well as our top-down expectations of what we will see. Although bottom-up visual processing has been extensively studied, comparatively little is known about top-down signals. Here, we describe REVEAL (Representations Envisioned Via Evolutionary ALgorithm), a method for visualizing an observer's internal representation of a complex, real-world scene, allowing us to, for the first time, visualize the top-down information in an observer's mind. REVEAL rests on two innovations for solving this high dimensional problem: visual noise that samples from natural image statistics, and a computer algorithm that collaborates with human observers to efficiently obtain a solution. In this work, we visualize observers' internal representations of a visual scene category (street) using an experiment in which the observer views the naturalistic visual noise and collaborates with the algorithm to externalize his internal representation. As no scene information was presented, observers had to use their internal knowledge of the target, matching it with the visual features in the noise. We matched reconstructed images with images of real-world street scenes to enhance visualization. Critically, we show that the visualized mental images can be used to predict rapid scene detection performance, as each observer had faster and more accurate responses to detecting real-world images that were the most similar to his reconstructed street templates. These results show that it is possible to visualize previously unobservable mental representations of real world stimuli. 
More broadly, REVEAL provides a general method for objectively examining the content of previously private, subjective mental experiences.", "Starting from a member of an image database designated the \"query image,\" traditional image retrieval techniques, for example, search by visual similarity, allow one to locate additional instances of a target category residing in the database. However, in many cases, the query image or, more generally, the target category, resides only in the mind of the user as a set of subjective visual patterns, psychological impressions, or \"mental pictures.\" Consequently, since image databases available today are often unstructured and lack reliable semantic annotations, it is often not obvious how to initiate a search session; this is the \"page zero problem.\" We propose a new statistical framework based on relevance feedback to locate an instance of a semantic category in an unstructured image database with no semantic annotation. A search session is initiated from a random sample of images. At each retrieval round, the user is asked to select one image from among a set of displayed images-the one that is closest in his opinion to the target class. The matching is then \"mental.\" Performance is measured by the number of iterations necessary to display an image which satisfies the user, at which point standard techniques can be employed to display other instances. Our core contribution is a Bayesian formulation which scales to large databases. The two key components are a response model which accounts for the user's subjective perception of similarity and a display algorithm which seeks to maximize the flow of information. Experiments with real users and two databases of 20,000 and 60,000 images demonstrate the efficiency of the search process." ] }
1410.4627
2952751501
Although the human visual system can recognize many concepts under challenging conditions, it still has some biases. In this paper, we investigate whether we can extract these biases and transfer them into a machine recognition system. We introduce a novel method that, inspired by well-known tools in human psychophysics, estimates the biases that the human visual system might use for recognition, but in computer vision feature spaces. Our experiments are surprising, and suggest that classifiers from the human visual system can be transferred into a machine with some success. Since these classifiers seem to capture favorable biases in the human visual system, we further present an SVM formulation that constrains the orientation of the SVM hyperplane to agree with the bias from human visual system. Our results suggest that transferring this human bias into machines may help object recognition systems generalize across datasets and perform better when very little training data is available.
The idea to transfer biases from the human mind into object recognition is inspired by many recent works that put a human in the computer vision loop @cite_6 @cite_14 , train recognition systems with active learning @cite_20 , and study crowdsourcing @cite_26 @cite_35 . The primary difference between these approaches and our work is that, rather than using crowds as a workforce, we want to extract biases from the workers' visual systems.
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_26", "@cite_6", "@cite_20" ], "mid": [ "2149489787", "", "2080942732", "2103490241", "2027953712" ], "abstract": [ "We show how to outsource data annotation to Amazon Mechanical Turk. Doing so has produced annotations in quite large numbers relatively cheaply. The quality is good, and can be checked and controlled. Annotations are produced quickly. We describe results for several different annotation problems. We describe some strategies for determining when the task is well specified and properly priced.", "", "We introduce Peekaboom, an entertaining web-based game that can help computers locate objects in images. People play the game because of its entertainment value, and as a side effect of them playing, we collect valuable image metadata, such as which pixels belong to which object in the image. The collected data could be applied towards constructing more accurate computer vision algorithms, which require massive amounts of training and testing data not currently available. Peekaboom has been played by thousands of people, some of whom have spent over 12 hours a day playing, and thus far has generated millions of data points. In addition to its purely utilitarian aspect, Peekaboom is an example of a new, emerging class of games, which not only bring people together for leisure purposes, but also exist to improve artificial intelligence. Such games appeal to a general audience, while providing answers to problems that computers cannot yet solve.", "We present an interactive, hybrid human-computer method for object classification. The method applies to classes of objects that are recognizable by people with appropriate expertise (e.g., animal species or airplane model), but not (in general) by people without such expertise. It can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. 
The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate our methods on Birds-200, a difficult dataset of 200 tightly-related bird species, and on the Animals With Attributes dataset. Our results demonstrate that incorporating user input drives up recognition accuracy to levels that are good enough for practical applications, while at the same time, computer vision reduces the amount of human interaction required.", "Active learning and crowdsourcing are promising ways to efficiently build up training sets for object recognition, but thus far techniques are tested in artificially controlled settings. Typically the vision researcher has already determined the dataset's scope, the labels “actively” obtained are in fact already known, and or the crowd-sourced collection process is iteratively fine-tuned. We present an approach for live learning of object detectors, in which the system autonomously refines its models by actively requesting crowd-sourced annotations on images crawled from the Web. To address the technical issues such a large-scale system entails, we introduce a novel part-based detector amenable to linear classifiers, and show how to identify its most uncertain instances in sub-linear time with a hashing-based solution. We demonstrate the approach with experiments of unprecedented scale and autonomy, and show it successfully improves the state-of-the-art for the most challenging objects in the PASCAL benchmark. In addition, we show our detector competes well with popular nonlinear classifiers that are much more expensive to train." ] }
1410.4627
2952751501
Although the human visual system can recognize many concepts under challenging conditions, it still has some biases. In this paper, we investigate whether we can extract these biases and transfer them into a machine recognition system. We introduce a novel method that, inspired by well-known tools in human psychophysics, estimates the biases that the human visual system might use for recognition, but in computer vision feature spaces. Our experiments are surprising, and suggest that classifiers from the human visual system can be transferred into a machine with some success. Since these classifiers seem to capture favorable biases in the human visual system, we further present an SVM formulation that constrains the orientation of the SVM hyperplane to agree with the bias from human visual system. Our results suggest that transferring this human bias into machines may help object recognition systems generalize across datasets and perform better when very little training data is available.
Our work explores a novel application of feature visualizations @cite_28 @cite_16 @cite_8 . Rather than using feature visualizations to diagnose computer vision systems, we use them to inspect and learn biases in the human visual system.
{ "cite_N": [ "@cite_28", "@cite_16", "@cite_8" ], "mid": [ "1976101156", "1982428585", "1915485278" ], "abstract": [ "This paper shows that an image can be approximately reconstructed based on the output of a blackbox local description software such as those classically used for image indexing. Our approach consists first in using an off-the-shelf image database to find patches that are visually similar to each region of interest of the unknown input image, according to associated local descriptors. These patches are then warped into input image domain according to interest region geometry and seamlessly stitched together. Final completion of still missing texture-free regions is obtained by smooth interpolation. As demonstrated in our experiments, visually meaningful reconstructions are obtained just based on image local descriptors like SIFT, provided the geometry of regions of interest is known. The reconstruction most often allows the clear interpretation of the semantic image content. As a result, this work raises critical issues of privacy and rights when local descriptors of photos or videos are given away for indexing and search purpose.", "We introduce algorithms to visualize feature spaces used by object detectors. The tools in this paper allow a human to put on 'HOG goggles' and perceive the visual world as a HOG based object detector sees it. We found that these visualizations allow us to analyze object detection systems in new ways and gain new insight into the detector's failures. For example, when we visualize the features for high scoring false alarms, we discovered that, although they are clearly wrong in image space, they do look deceptively similar to true positives in feature space. This result suggests that many of these false alarms are caused by our choice of feature space, and indicates that creating a better learning algorithm or building bigger datasets is unlikely to correct these errors. 
By visualizing feature spaces, we can gain a more intuitive understanding of our detection systems.", "Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of them remains limited. In this paper we conduct a direct analysis of the visual information contained in representations by asking the following question: given an encoding of an image, to which extent is it possible to reconstruct the image itself? To answer this question we contribute a general framework to invert representations. We show that this method can invert representations such as HOG more accurately than recent alternatives while being applicable to CNNs too. We then use this technique to study the inverse of recent state-of-the-art CNN image representations for the first time. Among our findings, we show that several layers in CNNs retain photographically accurate information about the image, with different degrees of geometric and photometric invariance." ] }
1410.4627
2952751501
Although the human visual system can recognize many concepts under challenging conditions, it still has some biases. In this paper, we investigate whether we can extract these biases and transfer them into a machine recognition system. We introduce a novel method that, inspired by well-known tools in human psychophysics, estimates the biases that the human visual system might use for recognition, but in computer vision feature spaces. Our experiments are surprising, and suggest that classifiers from the human visual system can be transferred into a machine with some success. Since these classifiers seem to capture favorable biases in the human visual system, we further present an SVM formulation that constrains the orientation of the SVM hyperplane to agree with the bias from human visual system. Our results suggest that transferring this human bias into machines may help object recognition systems generalize across datasets and perform better when very little training data is available.
We also build upon methods in transfer learning to incorporate priors into learning algorithms. A common transfer learning method for SVMs is to change the regularization term @math to @math , where @math is the prior @cite_15 @cite_36 . However, this imposes a prior on both the norm and the orientation of @math . In our case, since the visual bias does not provide an additional prior on the norm, we present an SVM formulation that constrains only the orientation of @math to be close to @math . Our approach extends sign constraints on SVMs @cite_19 , but instead enforces orientation constraints. Our method enforces a hard orientation constraint, which builds on soft orientation constraints @cite_31 .
{ "cite_N": [ "@cite_36", "@cite_19", "@cite_15", "@cite_31" ], "mid": [ "", "2135104113", "2010132303", "1988348003" ], "abstract": [ "", "Incorporation of prior knowledge into the learning process can significantly improve low-sample classification accuracy. We show how to introduce prior knowledge into linear support vector machines in form of constraints on the rotation of the normal to the separating hyperplane. Such knowledge frequently arises naturally, e.g., as inhibitory and excitatory influences of input variables. We demonstrate that the generalization ability of rotationally-constrained classifiers is improved by analyzing their VC and fat-shattering dimensions. Interestingly, the analysis shows that large-margin classification framework justifies the use of stronger prior knowledge than the traditional VC framework. Empirical experiments with text categorization and political party affiliation prediction confirm the usefulness of rotational prior knowledge.", "We present a hierarchical classification model that allows rare objects to borrow statistical strength from related objects that have many training examples. Unlike many of the existing object detection and recognition systems that treat different classes as unrelated entities, our model learns both a hierarchy for sharing visual appearance across 200 object categories and hierarchical parameters. Our experimental results on the challenging object localization and detection task demonstrate that the proposed model substantially improves the accuracy of the standard single object detectors that ignore hierarchical structure altogether.", "Our objective is transfer training of a discriminatively trained object category detector, in order to reduce the number of training images required. To this end we propose three transfer learning formulations where a template learnt previously for other categories is used to regularize the training of a new category. 
All the formulations result in convex optimization problems. Experiments (on PASCAL VOC) demonstrate significant performance gains by transfer learning from one class to another (e.g. motorbike to bicycle), including one-shot learning, specialization from class to a subordinate class (e.g. from quadruped to horse) and transfer using multiple components. In the case of multiple training samples it is shown that a detection performance approaching that of the state of the art can be achieved with substantially fewer training samples." ] }
1410.4521
2949358270
We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of generic dictionaries optimized for sparse coding of image patches. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image. Applying this strategy to the task of contour detection, we demonstrate performance competitive with state-of-the-art systems. Unlike almost all prior work, our approach obviates the need for any form of hand-designed features or filters. To illustrate general applicability, we also show initial results on semantic part labeling of human faces. The effectiveness of our approach opens new avenues for research on deep sparse representations. Our classifiers utilize this representation in a novel manner. Rather than acting on nodes in the deepest layer, they attach to nodes along a slice through multiple layers of the network in order to make predictions about local patches. Our flexible combination of a generatively learned sparse representation with discriminatively trained transfer classifiers extends the notion of sparse reconstruction to encompass arbitrary semantic labeling tasks.
Contour detection has long been a major research focus in computer vision. Arbeláez @cite_21 catalogue a vast set of historical and modern algorithms. Three different approaches @cite_21 @cite_5 @cite_12 appear competitive for state-of-the-art accuracy. Arbeláez @cite_21 derive pairwise pixel affinities from local color and texture gradients @cite_15 and apply spectral clustering @cite_4 followed by morphological operations to obtain a global boundary map.
{ "cite_N": [ "@cite_4", "@cite_21", "@cite_5", "@cite_15", "@cite_12" ], "mid": [ "2121947440", "2110158442", "2165140157", "2119823327", "" ], "abstract": [ "We propose a novel approach for solving the perceptual grouping problem in vision. Rather than focusing on local features and their consistencies in the image data, our approach aims at extracting the global impression of an image. We treat image segmentation as a graph partitioning problem and propose a novel global criterion, the normalized cut, for segmenting the graph. The normalized cut criterion measures both the total dissimilarity between the different groups as well as the total similarity within the groups. We show that an efficient computational technique based on a generalized eigenvalue problem can be used to optimize this criterion. We applied this approach to segmenting static images, as well as motion sequences, and found the results to be very encouraging.", "This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.", "Finding contours in natural images is a fundamental problem that serves as the basis of many tasks such as image segmentation and object recognition. 
At the core of contour detection technologies are a set of hand-designed gradient features, used by most approaches including the state-of-the-art Global Pb (gPb) operator. In this work, we show that contour detection accuracy can be significantly improved by computing Sparse Code Gradients (SCG), which measure contrast using patch representations automatically learned through sparse coding. We use K-SVD for dictionary learning and Orthogonal Matching Pursuit for computing sparse codes on oriented local neighborhoods, and apply multi-scale pooling and power transforms before classifying them with linear SVMs. By extracting rich representations from pixels and avoiding collapsing them prematurely, Sparse Code Gradients effectively learn how to measure local contrasts and find contours. We improve the F-measure metric on the BSDS500 benchmark to 0.74 (up from 0.71 of gPb contours). Moreover, our learning approach can easily adapt to novel sensor data such as Kinect-style RGB-D cameras: Sparse Code Gradients on depth maps and surface normals lead to promising contour detection using depth and depth+color, as verified on the NYU Depth Dataset.", "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. 
Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.", "" ] }
1410.4521
2949358270
We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of generic dictionaries optimized for sparse coding of image patches. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image. Applying this strategy to the task of contour detection, we demonstrate performance competitive with state-of-the-art systems. Unlike almost all prior work, our approach obviates the need for any form of hand-designed features or filters. To illustrate general applicability, we also show initial results on semantic part labeling of human faces. The effectiveness of our approach opens new avenues for research on deep sparse representations. Our classifiers utilize this representation in a novel manner. Rather than acting on nodes in the deepest layer, they attach to nodes along a slice through multiple layers of the network in order to make predictions about local patches. Our flexible combination of a generatively learned sparse representation with discriminatively trained transfer classifiers extends the notion of sparse reconstruction to encompass arbitrary semantic labeling tasks.
Ren and Bo @cite_5 adopt the same pipeline, but use gradients of sparse codes instead of the color and texture gradients developed by Martin et al. @cite_15 . Note that this is completely different from the manner in which we propose to use sparse coding for contour detection. In @cite_5 , sparse codes from a dictionary of small @math patches serve as a replacement for the textons @cite_2 used in previous work @cite_15 @cite_21 . Borrowing the hand-designed filtering scheme of @cite_15 , half-discs at multiple orientations act as regions over which codes are pooled into feature vectors and then classified using an SVM. In contrast, we use a range of patch resolutions, from @math to @math , without hand-designed gradient operations, in a reconstructive setting through application of a learned transfer dictionary. Our sparse codes assume a role different from that of serving as glorified textons.
{ "cite_N": [ "@cite_5", "@cite_15", "@cite_21", "@cite_2" ], "mid": [ "2165140157", "2119823327", "2110158442", "2141376824" ], "abstract": [ "Finding contours in natural images is a fundamental problem that serves as the basis of many tasks such as image segmentation and object recognition. At the core of contour detection technologies are a set of hand-designed gradient features, used by most approaches including the state-of-the-art Global Pb (gPb) operator. In this work, we show that contour detection accuracy can be significantly improved by computing Sparse Code Gradients (SCG), which measure contrast using patch representations automatically learned through sparse coding. We use K-SVD for dictionary learning and Orthogonal Matching Pursuit for computing sparse codes on oriented local neighborhoods, and apply multi-scale pooling and power transforms before classifying them with linear SVMs. By extracting rich representations from pixels and avoiding collapsing them prematurely, Sparse Code Gradients effectively learn how to measure local contrasts and find contours. We improve the F-measure metric on the BSDS500 benchmark to 0.74 (up from 0.71 of gPb contours). Moreover, our learning approach can easily adapt to novel sensor data such as Kinect-style RGB-D cameras: Sparse Code Gradients on depth maps and surface normals lead to promising contour detection using depth and depth+color, as verified on the NYU Depth Dataset.", "The goal of this work is to accurately detect and localize boundaries in natural scenes using local image measurements. We formulate features that respond to characteristic changes in brightness, color, and texture associated with natural boundaries. In order to combine the information from these features in an optimal way, we train a classifier using human labeled images as ground truth. The output of this classifier provides the posterior probability of a boundary at each image location and orientation. 
We present precision-recall curves showing that the resulting detector significantly outperforms existing approaches. Our two main results are 1) that cue combination can be performed adequately with a simple linear model and 2) that a proper, explicit treatment of texture is required to detect boundaries in natural images.", "This paper investigates two fundamental problems in computer vision: contour detection and image segmentation. We present state-of-the-art algorithms for both of these tasks. Our contour detector combines multiple local cues into a globalization framework based on spectral clustering. Our segmentation algorithm consists of generic machinery for transforming the output of any contour detector into a hierarchical region tree. In this manner, we reduce the problem of image segmentation to that of contour detection. Extensive experimental evaluation demonstrates that both our contour detection and segmentation methods significantly outperform competing algorithms. The automatically generated hierarchical segmentations can be interactively refined by user-specified annotations. Computation at multiple image resolutions provides a means of coupling our system to recognition applications.", "This paper provides an algorithm for partitioning grayscale images into disjoint regions of coherent brightness and texture. Natural images contain both textured and untextured regions, so the cues of contour and texture differences are exploited simultaneously. Contours are treated in the intervening contour framework, while texture is analyzed using textons. Each of these cues has a domain of applicability, so to facilitate cue combination we introduce a gating operator based on the texturedness of the neighborhood at a pixel. 
Having obtained a local measure of how likely two nearby pixels are to belong to the same region, we use the spectral graph theoretic framework of normalized cuts to find partitions of the image into regions of coherent texture and brightness. Experimental results on a wide range of images are shown." ] }
1410.4521
2949358270
We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of generic dictionaries optimized for sparse coding of image patches. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image. Applying this strategy to the task of contour detection, we demonstrate performance competitive with state-of-the-art systems. Unlike almost all prior work, our approach obviates the need for any form of hand-designed features or filters. To illustrate general applicability, we also show initial results on semantic part labeling of human faces. The effectiveness of our approach opens new avenues for research on deep sparse representations. Our classifiers utilize this representation in a novel manner. Rather than acting on nodes in the deepest layer, they attach to nodes along a slice through multiple layers of the network in order to make predictions about local patches. Our flexible combination of a generatively learned sparse representation with discriminatively trained transfer classifiers extends the notion of sparse reconstruction to encompass arbitrary semantic labeling tasks.
Dollár and Zitnick @cite_12 learn a random decision forest on feature channels consisting of image color, gradient magnitude at multiple orientations, and pairwise patch differences. They cluster ground-truth edge patches by similarity and train the random forest to predict structured output. The emphasis on describing local edge structure in both @cite_12 and previous work @cite_6 @cite_26 matches our intuition. However, sparse coding offers a more flexible methodology for achieving this goal. Unlike @cite_12 , we learn directly from image data (not predefined features), in an unsupervised manner, a generic (not contour-specific) representation, which can then be ported to many tasks via a second stage of supervised transfer learning.
{ "cite_N": [ "@cite_26", "@cite_6", "@cite_12" ], "mid": [ "2151049637", "2585809578", "" ], "abstract": [ "We propose a novel approach to both learning and detecting local contour-based representations for mid-level features. Our features, called sketch tokens, are learned using supervised mid-level information in the form of hand drawn contours in images. Patches of human generated contours are clustered to form sketch token classes and a random forest classifier is used for efficient detection in novel images. We demonstrate our approach on both top-down and bottom-up tasks. We show state-of-the-art results on the top-down task of contour detection while being over 200x faster than competing methods. We also achieve large improvements in detection accuracy for the bottom-up tasks of pedestrian and object detection as measured on INRIA and PASCAL, respectively. These gains are due to the complementary information provided by sketch tokens to low-level features such as gradient histograms.", "Figure ground assignment is a key step in perceptual organization which assigns contours to one of the two abutting regions, providing information about occlusion and allowing high-level processing to focus on non-accidental shapes of figural regions. In this paper, we develop a computational model for figure ground assignment in complex natural scenes. We utilize a large dataset of images annotated with human-marked segmentations and figure ground labels for training and quantitative evaluation. We operationalize the concept of familiar configuration by constructing prototypical local shapes, i.e. shapemes, from image data. Shapemes automatically encode mid-level visual cues to figure ground assignment such as convexity and parallelism. Based on the shapeme representation, we train a logistic classifier to locally predict figure ground labels. We also consider a global model using a conditional random field (CR.F) to enforce global figure ground consistency at T-junctions. 
We use loopy belief propagation to perform approximate inference on this model and learn maximum likelihood parameters from ground-truth labels. We find that the local shapeme model achieves an accuracy of 64 in predicting the correct figural assignment. This compares favorably to previous studies using classical figure ground cues [1]. We evaluate the global model using either a set of contours extracted from a low-level edge detector or the set of contours given by human segmentations. The global CRF model significantly improves the performance over the local model, most notably when using human-marked boundaries (78 ). These promising experimental results show that this is a feasible approach to bottom-up figure ground assignment in natural images.", "" ] }
1410.4521
2949358270
We frame the task of predicting a semantic labeling as a sparse reconstruction procedure that applies a target-specific learned transfer function to a generic deep sparse code representation of an image. This strategy partitions training into two distinct stages. First, in an unsupervised manner, we learn a set of generic dictionaries optimized for sparse coding of image patches. We train a multilayer representation via recursive sparse dictionary learning on pooled codes output by earlier layers. Second, we encode all training images with the generic dictionaries and learn a transfer function that optimizes reconstruction of patches extracted from annotated ground-truth given the sparse codes of their corresponding image patches. At test time, we encode a novel image using the generic dictionaries and then reconstruct using the transfer function. The output reconstruction is a semantic labeling of the test image. Applying this strategy to the task of contour detection, we demonstrate performance competitive with state-of-the-art systems. Unlike almost all prior work, our approach obviates the need for any form of hand-designed features or filters. To illustrate general applicability, we also show initial results on semantic part labeling of human faces. The effectiveness of our approach opens new avenues for research on deep sparse representations. Our classifiers utilize this representation in a novel manner. Rather than acting on nodes in the deepest layer, they attach to nodes along a slice through multiple layers of the network in order to make predictions about local patches. Our flexible combination of a generatively learned sparse representation with discriminatively trained transfer classifiers extends the notion of sparse reconstruction to encompass arbitrary semantic labeling tasks.
Yang et al. @cite_14 study the problem of learning dictionaries for coupled feature spaces with image super-resolution as an application. We share their motivation of utilizing sparse coding in a transfer learning context. As the following sections detail, we differ in our choice of a modular training procedure split into distinct unsupervised (generic) and supervised (transfer) phases. We are unique in targeting contour detection and face part labeling as applications.
{ "cite_N": [ "@cite_14" ], "mid": [ "1967212196" ], "abstract": [ "In this paper, we propose a bilevel sparse coding model for coupled feature spaces, where we aim to learn dictionaries for sparse modeling in both spaces while enforcing some desired relationships between the two signal spaces. We first present our new general sparse coding model that relates signals from the two spaces by their sparse representations and the corresponding dictionaries. The learning algorithm is formulated as a generic bilevel optimization problem, which is solved by a projected first-order stochastic gradient descent algorithm. This general sparse coding model can be applied to many specific applications involving coupled feature spaces in computer vision and signal processing. In this work, we tailor our general model to learning dictionaries for compressive sensing recovery and single image super-resolution to demonstrate its effectiveness. In both cases, the new sparse coding model remarkably outperforms previous approaches in terms of recovery accuracy." ] }
1410.4373
2950404207
Highly dynamic networks are characterized by frequent changes in the availability of communication links. Many of these networks are in general partitioned into several components that keep splitting and merging continuously and unpredictably. We present an algorithm that strives to maintain a forest of spanning trees in such networks, without any kind of assumption on the rate of changes. Our algorithm is the adaptation of a coarse-grain interaction algorithm (, 2013) to the synchronous message passing model (for dynamic networks). While the high-level principles of the coarse-grain variant are preserved, the new algorithm turns out to be significantly more complex. In particular, it involves a new technique that consists of maintaining a distributed permutation of the set of all node IDs throughout the execution. The algorithm also inherits the properties of its original variant: It relies on purely localized decisions, for which no global information is ever collected at the nodes, and yet it maintains a number of critical properties whatever the frequency and scale of the changes. In particular, the network always remains covered by a spanning forest in which 1) no cycle can ever appear, 2) every node belongs to a tree, and 3) after an arbitrary number of edge disappearances, all maximal subtrees immediately restore exactly one token (at their root). These properties are ensured whatever the dynamics, even if it keeps going for an arbitrarily long period of time. Optimality is not the focus here; however, the number of trees per component -- the metric of interest here -- eventually converges to one if the network stops changing (which is never expected to happen, though). The algorithm's correctness is proven and its behavior is tested through experimentation.
Another algorithm based on random walks is proposed by @cite_9 . Here, the tree is constantly redefined as the token moves (in a way reminiscent of the snake game). Since the token moves only over present edges, edges that have disappeared are naturally cleaned out of the tree as the walk proceeds. Hence, the algorithm can tolerate failures of tree edges. However, it still suffers from having to detect the disappearance of tokens using timeouts based on the cover time, which, as we have seen, suits only slow dynamics.
{ "cite_N": [ "@cite_9" ], "mid": [ "1997912726" ], "abstract": [ "In this paper, we investigate random walk based token circulation in dynamic environments subject to faults. We describe hypotheses on the dynamic environment that allow random walks to meet the important property that the token visits any node infinitely often. The randomness of this scheme allows it to work on any topology, and requires no adaptation after a topological change, which is a desirable property for applications to dynamic systems. For random walks to be a traversal scheme and to solve the concurrency problem, one needs to guarantee that exactly one token circulates in the system. In the presence of transient faults, configurations with multiple tokens or with no token can occur. The meeting property of random walks solves the cases with multiple tokens. The reloading wave mechanism we propose, together with timeouts, allows us to detect and solve cases with no token. This traversal scheme is self-stabilizing, and universal, meaning that it needs no assumption on the system topology. We describe conditions on the dynamicity (with a local detection criterion) under which the algorithm is tolerant to dynamic reconfigurations. We conclude with a study on the time between two visits of the token to a node, which we use to tune the parameters of the reloading wave mechanism according to some system characteristics." ] }
1410.4373
2950404207
Highly dynamic networks are characterized by frequent changes in the availability of communication links. Many of these networks are in general partitioned into several components that keep splitting and merging continuously and unpredictably. We present an algorithm that strives to maintain a forest of spanning trees in such networks, without any kind of assumption on the rate of changes. Our algorithm is the adaptation of a coarse-grain interaction algorithm (, 2013) to the synchronous message passing model (for dynamic networks). While the high-level principles of the coarse-grain variant are preserved, the new algorithm turns out to be significantly more complex. In particular, it involves a new technique that consists of maintaining a distributed permutation of the set of all node IDs throughout the execution. The algorithm also inherits the properties of its original variant: It relies on purely localized decisions, for which no global information is ever collected at the nodes, and yet it maintains a number of critical properties whatever the frequency and scale of the changes. In particular, the network always remains covered by a spanning forest in which 1) no cycle can ever appear, 2) every node belongs to a tree, and 3) after an arbitrary number of edge disappearances, all maximal subtrees immediately restore exactly one token (at their root). These properties are ensured whatever the dynamics, even if it keeps going for an arbitrarily long period of time. Optimality is not the focus here; however, the number of trees per component -- the metric of interest here -- eventually converges to one if the network stops changing (which is never expected to happen, though). The algorithm's correctness is proven and its behavior is tested through experimentation.
A recent work by @cite_8 addresses the maintenance of minimum spanning trees in dynamic networks. The paper shows that a solution to the problem can be updated after a topological change using @math messages (and the same time), while the @math messages of the ``blast away'' approach were thought to be optimal. (This demonstrates, incidentally, the relevance of updating a solution rather than recomputing it from scratch in the case of minimum spanning trees.) The algorithm has good properties for highly dynamic networks. For instance, it considers as natural the fact that components may split or merge perpetually. Furthermore, it tolerates new topological events while an ongoing update operation is executing. In this case, update operations are enqueued and consistently executed one after the other. While this mechanism allows for an arbitrary number of topological events at times, it still requires that such bursts of changes be only episodic and that the network eventually remains stable for (at least) a linear amount of time in the number of nodes, in order for the update operations to complete and thus for the logical tree to be consistent with physical reality.
{ "cite_N": [ "@cite_8" ], "mid": [ "2003024612" ], "abstract": [ "In this article, we show that keeping track of history enables significant improvements in the communication complexity of dynamic network protocols. We present a communication optimal maintenance of a spanning tree in a dynamic network. The amortized (on the number of topological changes) message complexity is O(V), where V is the number of nodes in the network. The message size used by the algorithm is O(log vIDv) where vIDv is the size of the name space of the nodes. Typically, log vIDv e O(log V). Previous algorithms that adapt to dynamic networks involved Ω (E) messages per topological change—inherently paying for re-computation of the tree from scratch. Spanning trees are essential components in many distributed algorithms. Some examples include broadcast (dissemination of messages to all network nodes), multicast, reset (general adaptation of static algorithms to dynamic networks), routing, termination detection, and more. Thus, our efficient maintenance of a spanning tree implies the improvement of algorithms for these tasks. Our results are obtained using a novel technique to save communication. A node uses information received in the past in order to deduce present information from the fact that certain messages were NOT sent by the node's neighbor. This technique is one of our main contributions." ] }
1410.4449
2951063013
The complexity and cost of managing high-performance computing infrastructures are on the rise. Automating management and repair through predictive models to minimize human interventions is an attempt to increase system availability and contain these costs. Building predictive models that are accurate enough to be useful in automatic management cannot be based on restricted log data from subsystems but requires a holistic approach to data analysis from disparate sources. Here we provide a detailed multi-scale characterization study based on four datasets reporting power consumption, temperature, workload, and hardware/software events for an IBM Blue Gene Q installation. We show that the system runs a rich parallel workload, with low correlation among its components in terms of temperature and power, but higher correlation in terms of events. As expected, power and temperature correlate strongly, while events display negative correlations with load and power. Power and workload show moderate correlations, and only at the scale of components. The aim of the study is a systematic, integrated characterization of the computing infrastructure and discovery of correlation sources and levels to serve as basis for future predictive modeling efforts.
Log analysis for characterization of large computing infrastructures has been the focus of numerous recent studies. The release of two Google workload traces has triggered a flurry of analysis activity. General statistics, descriptive analyses, and characterization studies @cite_12 @cite_10 have revealed higher levels of heterogeneity when compared to grid systems @cite_0 . Some modeling work has also appeared based on these data @cite_7 @cite_5 @cite_1 . While they have provided important insight into Google clusters, focusing only on workload aspects of the system has been limiting. To be effective, it is essential to integrate data from different components and sources. Other traces have also been studied in the past @cite_17 , and tools for their analysis developed @cite_2 , but again concentrating on a single data type. Here we perform similar analyses for a Blue Gene Q system, but from several viewpoints: workload, RAS, power, and temperature, providing a more complete picture of the system under study.
{ "cite_N": [ "@cite_7", "@cite_1", "@cite_0", "@cite_2", "@cite_5", "@cite_10", "@cite_12", "@cite_17" ], "mid": [ "2143492785", "1931649315", "2136510202", "2158197021", "", "2129542763", "2060331550", "228898923" ], "abstract": [ "Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters ranging from cluster configuration, partitioning, networking characteristics, and the targeted applications' behavior. The design space, and the scale of the clusters, make it cumbersome and error-prone to test different cluster configurations using real setups. Thus, the community is increasingly relying on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on the accuracy and realistic nature of the workload traces employed. Unfortunately, few cloud workload traces are available (in the public domain). In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., from Google, and inferring lessons that can be used to design realistic cloud workloads as well as enable thorough quantitative studies of Hadoop design. Moreover, we leverage the lessons learned from the traces to undertake two case studies: (i) Evaluating Hadoop job schedulers, and (ii) Quantifying the impact of shared storage on Hadoop system performance.", "Continued reliance on human operators for managing data centers is a major impediment for them from ever reaching extreme dimensions. Large computer systems in general, and data centers in particular, will ultimately be managed using predictive computational and executable models obtained through data-science tools, and at that point, the intervention of humans will be limited to setting high-level goals and policies rather than performing low-level operations. 
Data-driven autonomics, where management and control are based on holistic predictive models that are built and updated using generated data, opens one possible path towards limiting the role of operators in data centers. In this paper, we present a data-science study of a public Google dataset collected in a 12K-node cluster with the goal of building and evaluating a predictive model for node failures. We use BigQuery, the big data SQL platform from the Google Cloud suite, to process massive amounts of data and generate a rich feature set characterizing machine state over time. We describe how an ensemble classifier can be built out of many Random Forest classifiers each trained on these features, to predict if machines will fail in a future 24-hour window. Our evaluation reveals that if we limit false positive rates to 5 , we can achieve true positive rates between 27 and 88 with precision varying between 50 and 72 . We discuss the practicality of including our predictive model as the central component of a data-driven autonomic manager and operating it on-line with live data streams (rather than off-line on data logs). All of the scripts used for BigQuery and classification analyses are publicly available from the authors' website.", "A new era of Cloud Computing has emerged, but the characteristics of Cloud load in data centers is not perfectly clear. Yet this characterization is critical for the design of novel Cloud job and resource management systems. In this paper, we comprehensively characterize the job task load and host load in a real-world production data center at Google Inc. We use a detailed trace of over 25 million tasks across over 12,500 hosts. We study the differences between a Google data center and other Grid HPC systems, from the perspective of both work load (w.r.t. jobs and tasks) and host load (w.r.t. machines). 
In particular, we study the job length, job submission frequency, and the resource utilization of jobs in the different systems, and also investigate valuable statistics of machine's maximum load, queue state and relative usage levels, with different job priorities and resource attributes. We find that the Google data center exhibits finer resource allocation with respect to CPU and memory than that of Grid HPC systems. Google jobs are always submitted with much higher frequency and they are much shorter than Grid jobs. As such, Google host load exhibits higher variance and noise.", "With the increasing presence, scale, and complexity of distributed systems, resource failures are becoming an important and practical topic of computer science research. While numerous failure models and failure-aware algorithms exist, their comparison has been hampered by the lack of public failure data sets and data processing tools. To facilitate the design, validation, and comparison of fault-tolerant models and algorithms, we have created the Failure Trace Archive (FTA)-an online, public repository of failure traces collected from diverse parallel and distributed systems. In this work, we first describe the design of the archive, in particular of the standard FTA data format, and the design of a toolbox that facilitates automated analysis of trace data sets. We also discuss the use of the FTA for various current and future purposes. Second, after applying the toolbox to nine failure traces collected from distributed systems used in various application domains (e.g., HPC, Internet operation, and various online applications), we present a comparative analysis of failures in various distributed systems. Our analysis presents various statistical insights and typical statistical modeling results for the availability of individual resources in various distributed systems. The analysis results underline the need for public availability of trace data from different distributed systems. 
Last, we show how different interpretations of the meaning of failure data can result in different conclusions for failure modeling and job scheduling in distributed systems. Our results for different interpretations show evidence that there may be a need for further revisiting existing failure-aware algorithms, when applied for general rather than for domain-specific distributed systems.", "", "To better understand the challenges in developing effective cloud-based resource schedulers, we analyze the first publicly available trace data from a sizable multi-purpose cluster. The most notable workload characteristic is heterogeneity: in resource types (e.g., cores:RAM per machine) and their usage (e.g., duration and resources needed). Such heterogeneity reduces the effectiveness of traditional slot- and core-based scheduling. Furthermore, some tasks are constrained as to the kind of machine types they can use, increasing the complexity of resource assignment and complicating task migration. The workload is also highly dynamic, varying over time and most workload features, and is driven by many short jobs that demand quick scheduling decisions. While few simplifying assumptions apply, we find that many longer-running jobs have relatively stable resource utilizations, which can help adaptive resource schedulers.", "Cloud computing offers high scalability, flexibility and cost-effectiveness to meet emerging computing requirements. Understanding the characteristics of real workloads on a large production cloud cluster benefits not only cloud service providers but also researchers and daily users. This paper studies a large-scale Google cluster usage trace dataset and characterizes how the machines in the cluster are managed and the workloads submitted during a 29-day period behave. 
We focus on the frequency and pattern of machine maintenance events, job- and task-level workload behavior, and how the overall cluster resources are utilized.", "Abstract : In this paper, we analyze seven MapReduce workload traces from production clusters at Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Cumulatively, these traces comprise over a year's worth of data logged from over 5000 machines, and contain over two million jobs that perform 1.6 exabytes of I/O. Key observations include: input data forms up to 77% of all bytes, 90% of jobs access KB- to GB-sized files that make up less than 16% of stored bytes, up to 60% of jobs re-access data that has been touched within the past 6 hours, peak-to-median job submission rates are 9:1 or greater, an average of 68% of all compute time is spent in map, task-seconds-per-byte is a key metric for balancing compute and data bandwidth, task durations range from seconds to hours, and five out of seven workloads contain map-only jobs. We have also deployed a public workload repository with workload replay tools so that researchers can systematically assess design priorities and compare performance across diverse MapReduce workloads." ] }
1410.4449
2951063013
The complexity and cost of managing high-performance computing infrastructures are on the rise. Automating management and repair through predictive models to minimize human interventions is an attempt to increase system availability and contain these costs. Building predictive models that are accurate enough to be useful in automatic management cannot be based on restricted log data from subsystems but requires a holistic approach to data analysis from disparate sources. Here we provide a detailed multi-scale characterization study based on four datasets reporting power consumption, temperature, workload, and hardware software events for an IBM Blue Gene Q installation. We show that the system runs a rich parallel workload, with low correlation among its components in terms of temperature and power, but higher correlation in terms of events. As expected, power and temperature correlate strongly, while events display negative correlations with load and power. Power and workload show moderate correlations, and only at the scale of components. The aim of the study is a systematic, integrated characterization of the computing infrastructure and discovery of correlation sources and levels to serve as basis for future predictive modeling efforts.
RAS logs from IBM Blue Gene systems have been included in several earlier studies. In @cite_8, prediction of events in a Blue Gene Q machine is attempted, while an earlier study of a Blue Gene L installation is @cite_16. Both compare several classification tools (SVM, customized KNN, ANN, feature selection, rule-based models). These predictive studies look only at RAS events, while adding further data from other system components could improve prediction accuracy significantly, as noted by the authors themselves. In this paper we provide the first step towards such an analysis, performing the descriptive analytics that are mandatory before any prediction can be attempted.
{ "cite_N": [ "@cite_16", "@cite_8" ], "mid": [ "2094924503", "2182419557" ], "abstract": [ "Frequent failures are becoming a serious concern to the community of high-end computing, especially when the applications and the underlying systems rapidly grow in size and complexity. In order to develop effective fault-tolerant strategies, there is a critical need to predict failure events. To this end, we have collected detailed event logs from IBM BlueGene L, which has 128 K processors, and is currently the fastest supercomputer in the world. In this study, we first show how the event records can be converted into a data set that is appropriate for running classification techniques. Then we apply classifiers on the data, including RIPPER (a rule-based classifier), Support Vector Machines (SVMs), a traditional Nearest Neighbor method, and a customized Nearest Neighbor method. We show that the customized nearest neighbor approach can outperform RIPPER and SVMs in terms of both coverage and precision. The results suggest that the customized nearest neighbor approach can be used to alleviate the impact of failures.", "The increase in scale and complexity of large compute clusters motivates a need for representative workload benchmarks to evaluate the performance impact of system changes, so as to assist in designing better scheduling algorithms and in carrying out management activities. To achieve this goal, it is necessary to construct workload characterizations from which realistic performance benchmarks can be created. In this paper, we focus on characterizing run-time task resource usage for CPU, memory and disk. The goal is to find an accurate characterization that can faithfully reproduce the performance of historical workload traces in terms of key performance metrics, such as task wait time and machine resource utilization. 
Through experiments using workload traces from Google production clusters, we find that simply using the mean of task usage can generate synthetic workload traces that accurately reproduce resource utilizations and task waiting time. This seemingly surprising result can be justified by the fact that resource usage for CPU, memory and disk are relatively stable over time for the majority of the tasks. Our work not only presents a simple technique for constructing realistic workload benchmarks, but also provides insights into understanding workload performance in production compute clusters." ] }
1410.4207
2264653744
Since the first publication of the "Top 10" (2004), cross-site scripting (XSS) vulnerabilities have always been among the top 5 web application security bugs. Black-box vulnerability scanners are widely used in the industry to reproduce (XSS) attacks automatically. In spite of the technical sophistication and advancement, previous work showed that black-box scanners miss a non-negligible portion of vulnerabilities, and report non-existing, non-exploitable or uninteresting vulnerabilities. Unfortunately, these results hold true even for XSS vulnerabilities, which are relatively simple to trigger if compared, for instance, to logic flaws. Black-box scanners have not been studied in depth on this vertical: knowing precisely how scanners try to detect XSS can provide useful insights to understand their limitations and to design better detection methods. In this paper, we present and discuss the results of a detailed and systematic study on 6 black-box web scanners (both proprietary and open source) that we conducted in coordination with the respective vendors. To this end, we developed an automated tool to (1) extract the payloads used by each scanner, (2) distill the " that have originated each payload, (3) evaluate them according to quality indicators, and (4) perform a cross-scanner analysis. Unlike previous work, our testbed application, which contains a large set of XSS vulnerabilities, including DOM XSS, was gradually retrofitted to accommodate the payloads that triggered no vulnerabilities. Our analysis reveals a highly fragmented scenario. Scanners exhibit a wide variety of distinct payloads, a non-uniform approach to fuzzing and mutating the payloads, and a very diverse detection effectiveness. Moreover, we found remarkable discrepancies in the type and structure of payloads, from complex attack strings that tackle rare corner cases, to basic payloads able to trigger only the simplest vulnerabilities. 
Although some scanners exhibited context awareness to some extent, the majority do not optimize the choice of payloads.
Another work @cite_20 takes into consideration the limitations of interacting with complex applications, due to the presence of multiple actions that can change the state of an application. They propose a method to infer the application's internal state machine by navigating through it, observing differences in output, and incrementally producing a model representing its state. They then employ the internal state machine to drive the scanner in finding and fuzzing input vectors to discover vulnerabilities. To evaluate the approach, they ran their state-aware scanner along with three other vulnerability scanners, using as metrics real total detections, false positives, and code coverage. Like the previously cited work, @cite_20 differs from our approach in the focus of the analysis.
{ "cite_N": [ "@cite_20" ], "mid": [ "1861561811" ], "abstract": [ "Black-box web vulnerability scanners are a popular choice for finding security vulnerabilities in web applications in an automated fashion. These tools operate in a point-and-shootmanner, testing any web application-- regardless of the server-side language--for common security vulnerabilities. Unfortunately, black-box tools suffer from a number of limitations, particularly when interacting with complex applications that have multiple actions that can change the application's state. If a vulnerability analysis tool does not take into account changes in the web application's state, it might overlook vulnerabilities or completely miss entire portions of the web application. We propose a novel way of inferring the web application's internal state machine from the outside--that is, by navigating through the web application, observing differences in output, and incrementally producing a model representing the web application's state. We utilize the inferred state machine to drive a black-box web application vulnerability scanner. Our scanner traverses a web application's state machine to find and fuzz user-input vectors and discover security flaws. We implemented our technique in a prototype crawler and linked it to the fuzzing component from an open-source web vulnerability scanner. We show that our state-aware black-box web vulnerability scanner is able to not only exercise more code of the web application, but also discover vulnerabilities that other vulnerability scanners miss." ] }
1410.4307
2001357909
This paper considers a problem of distributed hypothesis testing and social learning. Individual nodes in a network receive noisy local (private) observations whose distribution is parameterized by a discrete parameter (hypotheses). The conditional distributions are known locally at the nodes, but the true parameter hypothesis is not known. An update rule is analyzed in which nodes first perform a Bayesian update of their belief (distribution estimate) of the parameter based on their local observation, communicate these updates to their neighbors, and then perform a "non-Bayesian" linear consensus using the log-beliefs of their neighbors. In this paper we show that under mild assumptions, the belief of any node in any incorrect hypothesis converges to zero exponentially fast, and we characterize the exponential rate of learning which is given in terms of the network structure and the divergences between the observations' distributions. Our main result is the concentration property established on the rate of convergence.
Several works @cite_15 @cite_6 @cite_9 @cite_37 @cite_41 consider an update rule which uses local Bayesian updating combined with a linear consensus strategy on the beliefs @cite_4 , enabling all nodes in the network to identify the true hypothesis. @cite_15 characterize the "learning rate" of the algorithm in terms of the total variational error across the network and provide an almost sure upper bound on this quantity in terms of the KL-divergences and the influence vector of the agents. In Corollary we analytically show that the learning rule proposed in this paper provides a strict improvement over linear consensus strategies @cite_15 . Simultaneous and independent works by @cite_23 and Nedić et al. @cite_20 consider a similar learning rule (with a change of order in the update steps). They obtain similar convergence and concentration results under the assumption of bounded likelihood ratios. Nedić et al. @cite_20 analyze the learning rule for time-varying graphs. Theorem strengthens these results for static networks by providing a large-deviation analysis for a broader class of likelihood functions which includes Gaussian mixtures.
{ "cite_N": [ "@cite_37", "@cite_4", "@cite_41", "@cite_9", "@cite_6", "@cite_23", "@cite_15", "@cite_20" ], "mid": [ "2124770452", "1998692453", "2501236262", "1558006055", "2011101252", "", "", "2130233156" ], "abstract": [ "In this paper, we present a model of distributed parameter estimation in networks, where agents have access to partially informative measurements over time. Each agent faces a local identification problem, in the sense that it cannot consistently estimate the parameter in isolation. We prove that, despite local identification problems, if agents update their estimates recursively as a function of their neighbors' beliefs, they can consistently estimate the true parameter provided that the communication network is strongly connected; that is, there exists an information path between any two agents in the network. We also show that the estimates of all agents are asymptotically normally distributed. Finally, we compute the asymptotic variance of the agents' estimates in terms of their observation models and the network topology, and provide conditions under which the distributed estimators are as efficient as any centralized estimator.", "Abstract Consider a group of individuals who must act together as a team or committee, and suppose that each individual in the group has his own subjective probability distribution for the unknown value of some parameter. A model is presented which describes how the group might reach agreement on a common subjective probability distribution for the parameter by pooling their individual opinions. The process leading to the consensus is explicitly described and the common distribution that is reached is explicitly determined. 
The model can also be applied to problems of reaching a consensus when the opinion of each member of the group is represented simply as a point estimate of the parameter rather than as a probability distribution.", "In this paper, we address distributed hypothesis testing (DHT) in sensor networks and Bayesian networks using the average-consensus algorithm of Olfati-Saber & Murray. As a byproduct, we obtain a novel belief propagation algorithm called Belief Consensus. This algorithm works for connected networks with loops and arbitrary degree sequence. Belief consensus allows distributed computation of products of n beliefs (or conditional probabilities) that belong to n different nodes of a network. This capability enables distributed hypothesis testing for a broad variety of applications. We show that this belief propagation admits a Lyapunov function that quantifies the collective disbelief in the network. Belief consensus benefits from scalability, robustness to link failures, convergence under variable topology, asynchronous features of average-consensus algorithm. Some connections between small-word networks and speed of convergence of belief consensus are discussed. A detailed example is provided for distributed detection of multi-target formations in a sensor network. The entire network is capable of reaching a common set of beliefs associated with correctness of different hypotheses. We demonstrate that our DHT algorithm successfully identifies a test formation in a network of sensors with self-constructed statistical models.", "This paper examines how the structure of a social network and the quality of information available to different agents determine the speed of social learning. To this end, we study a variant of the seminal model of DeGroot (1974), according to which agents linearly combine their personal experiences with the views of their neighbors. 
We show that the rate of learning has a simple analytical characterization in terms of the relative entropy of agents’ signal structures and their eigenvector centralities. Our characterization establishes that the way information is dispersed throughout the social network has non-trivial implications for the rate of learning. In particular, we show that when the informativeness of different agents’ signal structures are comparable in the sense of Blackwell (1953), then a positive assortative matching of signal qualities and eigenvector centralities maximizes the rate of learning. On the other hand, if information structures are such that each individual possesses some information crucial for learning, then the rate of learning is higher when agents with the best signals are located at the periphery of the network. Finally, we show that the extent of asymmetry in the structure of the social network plays a key role in the long-run dynamics of the beliefs.", "In this paper we present an optimization-based view of distributed parameter estimation and observational social learning in networks. Agents receive a sequence of random, independent and identically distributed (i.i.d.) signals, each of which individually may not be informative about the underlying true state, but the signals together are globally informative enough to make the true state identifiable. Using an optimization-based characterization of Bayesian learning as proximal stochastic gradient descent (with Kullback-Leibler divergence from a prior as a proximal function), we show how to efficiently use a distributed, online variant of Nesterov's dual averaging method to solve the estimation with purely local information. When the true state is globally identifiable, and the network is connected, we prove that agents eventually learn the true parameter using a randomized gossip scheme. 
We demonstrate that with high probability the convergence is exponentially fast with a rate dependent on the KL divergence of observations under the true state from observations under the second likeliest state. Furthermore, our work also highlights the possibility of learning under continuous adaptation of network which is a consequence of employing constant, unit stepsize for the algorithm.", "", "", "The problem of distributed detection and estimation in a sensor network over a multiaccess fading channel is considered. A communication scheme known as the type-based random access (TBRA) is employed and its performance is characterized with respect to the mean transmission rate and the channel coherence index. For extreme values of channel coherence index i.e., 0 and infin, we give an optimal TBRA scheme which is essentially a sensor activation strategy that achieves the optimal allocation of transmission energy to spatial and temporal domains. For channels with zero coherence index, it is shown that there exists a finite optimal mean transmission rate maximizing performance. This optimal rate can be calculated numerically or estimated using the Gaussian approximation. On the other hand, for channels with infinite coherence index (i.e., no fading) the optimal strategy is to allocate all the energy to the spatial domain. Numerical examples and simulations confirm our theory." ] }
1410.3944
2030643321
Signal processing on graphs is attracting more and more attention. For a graph signal in the low-frequency subspace, the missing data associated with unsampled vertices can be reconstructed from the sampled data by exploiting the smoothness of the graph signal. In this paper, the concept of a local set is introduced and two local-set-based iterative methods are proposed to reconstruct bandlimited graph signals from sampled data. In each iteration, one of the proposed methods reweights the sampled residuals for different vertices, while the other propagates the sampled residuals within their respective local sets. These algorithms are built on frame theory and the concept of local sets, based on which several frames and contraction operators are proposed. We then prove that the reconstruction methods converge to the original signal under certain conditions and demonstrate that the new methods lead to significantly faster convergence compared with the baseline method. Furthermore, the correspondence between graph signal sampling and time-domain irregular sampling is analyzed comprehensively, which may be helpful to future work on graph signals. Computer simulations are conducted. The experimental results demonstrate the effectiveness of the reconstruction methods under various sampling geometries, imprecise a priori knowledge of the cutoff frequency, and noisy scenarios.
Smooth or approximately smooth signals over graphs are common in practical applications @cite_15 @cite_26 @cite_31 @cite_38 , especially in cases where the graph topology is constructed to enforce the smoothness of the signals @cite_25 . By exploiting the smoothness of a graph signal, it may be reconstructed from its entries on only a subset of the vertices, i.e., from samples of the graph signal.
{ "cite_N": [ "@cite_38", "@cite_31", "@cite_26", "@cite_15", "@cite_25" ], "mid": [ "1983058494", "1972048014", "2119699244", "1991252559", "2398592859" ], "abstract": [ "We propose a novel recovery algorithm for signals with complex, irregular structure that is commonly represented by graphs. Our approach is a generalization of the signal inpainting technique from classical signal processing. We formulate corresponding minimization problems and demonstrate that in many cases they have closed-form solutions. We discuss a relation of the proposed approach to regression, provide an upper bound on the error for our algorithm and compare the proposed technique with other existing algorithms on real-world datasets.", "We present an adaptive graph filtering approach to semi-supervised classification. Adaptive graph filters combine decisions from multiple graph filters using a weighting function that is optimized in a semi-supervised manner. We also demonstrate the multiresolution property of adaptive graph filters by connecting them to the diffusion wavelets. In our experiments, we apply the adaptive graph filters to the classification of online blogs and damage identification in indirect bridge structural health monitoring.", "In this paper, we propose a novel algorithm to interpolate data defined on graphs, using signal processing concepts. The interpolation of missing values from known samples appears in various applications, such as matrix vector completion, sampling of high-dimensional data, semi-supervised learning etc. In this paper, we formulate the data interpolation problem as a signal reconstruction problem on a graph, where a graph signal is defined as the information attached to each node (scalar or vector values mapped to the set of vertices edges of the graph). We use recent results for sampling in graphs to find classes of bandlimited (BL) graph signals that can be reconstructed from their partially observed samples. 
The interpolated signal is obtained by projecting the input signal into the appropriate BL graph signal space. Additionally, we impose a "bilateral" weighting scheme on the links between known samples, which further improves accuracy. We use our proposed method for collaborative filtering in recommendation systems. Preliminary results show a very favorable trade-off between accuracy and complexity, compared to state of the art algorithms.", "In social settings, individuals interact through webs of relationships. Each individual is a node in a complex network (or graph) of interdependencies and generates data, lots of data. We label the data by its source, or formally stated, we index the data by the nodes of the graph. The resulting signals (data indexed by the nodes) are far removed from time or image signals indexed by well ordered time samples or pixels. DSP, discrete signal processing, provides a comprehensive, elegant, and efficient methodology to describe, represent, transform, analyze, process, or synthesize these well ordered time or image signals. This paper extends to signals on graphs DSP and its basic tenets, including filters, convolution, z-transform, impulse response, spectral representation, Fourier transform, frequency response, and illustrates DSP on graphs by classifying blogs, linear predicting and compressing data from irregularly located weather stations, or predicting behavior of customers of a mobile service provider.", "The construction of a meaningful graph plays a crucial role in the success of many graph-based data representations and algorithms, especially in the emerging field of signal processing on graphs. However, a meaningful graph is not always readily available from the data, nor easy to define depending on the application domain. In this paper, we address the problem of graph learning, where we are interested in learning graph topologies, namely, the relationships between data entities, that well explain the signal observations. 
In particular, we want to infer a graph such that the input data forms graph signals with smooth variations on the resulting topology. To this end, we adopt a factor analysis model for the graph signals and impose a Gaussian probabilistic prior on the latent variables that control these graph signals. We show that the Gaussian prior leads to an efficient representation that favors the smoothness property of the graph signals. We then propose an algorithm for learning graphs that enforce such smoothness property for the signal observations by minimizing the variations of the signals on the learned graph. Experiments on both synthetic and real world data demonstrate that the proposed graph learning framework can efficiently infer meaningful graph topologies from only the signal observations." ] }
1410.3944
2030643321
Signal processing on graphs is attracting more and more attention. For a graph signal in the low-frequency subspace, the missing data associated with unsampled vertices can be reconstructed from the sampled data by exploiting the smoothness of the graph signal. In this paper, the concept of a local set is introduced and two local-set-based iterative methods are proposed to reconstruct bandlimited graph signals from sampled data. In each iteration, one of the proposed methods reweights the sampled residuals for different vertices, while the other propagates the sampled residuals within their respective local sets. These algorithms are built on frame theory and the concept of local sets, based on which several frames and contraction operators are proposed. We then prove that the reconstruction methods converge to the original signal under certain conditions and demonstrate that the new methods lead to significantly faster convergence compared with the baseline method. Furthermore, the correspondence between graph signal sampling and time-domain irregular sampling is analyzed comprehensively, which may be helpful to future work on graph signals. Computer simulations are conducted. The experimental results demonstrate the effectiveness of the reconstruction methods under various sampling geometries, imprecise a priori knowledge of the cutoff frequency, and noisy scenarios.
There has been some theoretical analysis of the sampling and reconstruction of bandlimited graph signals @cite_29 @cite_13 @cite_17 @cite_35 . Some existing works focus on the theoretical conditions for the exact reconstruction of bandlimited signals. The relationships between the sampling sets that guarantee unique reconstruction and the cutoff frequency of the bandlimited signal space are established for the normalized Laplacian @cite_29 and the unnormalized Laplacian @cite_17 @cite_35 , respectively. Recently, a necessary and sufficient condition for exact reconstruction was established in @cite_0 . Several methods have been proposed to reconstruct bandlimited graph signals from sampled data. In @cite_26 , a least-squares approach is proposed to solve this problem. Furthermore, an iterative reconstruction method is proposed, and a tradeoff between smoothness and data fitting is introduced for real-world applications @cite_43 .
{ "cite_N": [ "@cite_35", "@cite_26", "@cite_29", "@cite_0", "@cite_43", "@cite_13", "@cite_17" ], "mid": [ "1986880804", "2119699244", "2024457004", "2030319053", "2095414057", "", "1974306717" ], "abstract": [ "We prove Poincare and Plancherel--Polya inequalities for weighted @math -spaces on weighted graphs in which the constants are explicitly expressed in terms of some geometric characteristics of a graph. We use a Poincare-type inequality to obtain some new relations between geometric and spectral properties of the combinatorial Laplace operator. Several well-known graphs are considered to demonstrate that our results are reasonably sharp. The Plancherel--Polya inequalities allow for application of the frame algorithm as a method for reconstruction of Paley--Wiener functions on weighted graphs from a set of samples. The results are illustrated by developing Shannon-type sampling in the case of a line graph.", "In this paper, we propose a novel algorithm to interpolate data defined on graphs, using signal processing concepts. The interpolation of missing values from known samples appears in various applications, such as matrix vector completion, sampling of high-dimensional data, semi-supervised learning etc. In this paper, we formulate the data interpolation problem as a signal reconstruction problem on a graph, where a graph signal is defined as the information attached to each node (scalar or vector values mapped to the set of vertices edges of the graph). We use recent results for sampling in graphs to find classes of bandlimited (BL) graph signals that can be reconstructed from their partially observed samples. The interpolated signal is obtained by projecting the input signal into the appropriate BL graph signal space. Additionally, we impose a bilateral' weighting scheme on the links between known samples, which further improves accuracy. We use our proposed method for collaborative filtering in recommendation systems. 
Preliminary results show a very favorable trade-off between accuracy and complexity, compared to state of the art algorithms.", "A notion of Paley-Wiener spaces on combinatorial graphs is introduced. It is shown that functions from some of these spaces are uniquely determined by their values on some sets of vertices which are called the uniqueness sets. Such uniqueness sets are described in terms of Poincare-Wirtinger-type inequalities. A reconstruction algorithm of Paley-Wiener functions from uniqueness sets which uses the idea of frames in Hilbert spaces is developed. Special consideration is given to the n-dimensional lattice, homogeneous trees, and eigenvalue and eigenfunction problems on finite graphs.", "In this paper, we extend the Nyquist-Shannon theory of sampling to signals defined on arbitrary graphs. Using spectral graph theory, we establish a cut-off frequency for all bandlimited graph signals that can be perfectly reconstructed from samples on a given subset of nodes. The result is analogous to the concept of Nyquist frequency in traditional signal processing. We consider practical ways of computing this cut-off and show that it is an improvement over previous results. We also propose a greedy algorithm to search for the smallest possible sampling set that guarantees unique recovery for a signal of given bandwidth. The efficacy of these results is verified through simple examples.", "In this paper, we present two localized graph filtering based methods for interpolating graph signals defined on the vertices of arbitrary graphs from only a partial set of samples. The first method is an extension of previous work on reconstructing bandlimited graph signals from partially observed samples. The iterative graph filtering approach very closely approximates the solution proposed in that work, while being computationally more efficient. 
As an alternative, we propose a regularization based framework in which we define the cost of reconstruction to be a combination of smoothness of the graph signal and the reconstruction error with respect to the known samples, and find solutions that minimize this cost. We provide both a closed form solution and a computationally efficient iterative solution of the optimization problem. The experimental results on the recommendation system datasets demonstrate effectiveness of the proposed methods.", "", "In this paper we address sampling and approximation of functions on combinatorial graphs. We develop filtering on graphs by using Schrodinger’s group of operators generated by combinatorial Laplace operator. Then we construct a sampling theory by proving Poincare and Plancherel-Polya-type inequalities for functions on graphs. These results lead to a theory of sparse approximations on graphs and have potential applications to filtering, denoising, data dimension reduction, image processing, image compression, computer graphics, visualization and learning theory." ] }
1410.3944
2030643321
Signal processing on graphs is attracting more and more attention. For a graph signal in the low-frequency subspace, the missing data associated with unsampled vertices can be reconstructed through the sampled data by exploiting the smoothness of the graph signal. In this paper, the concept of local set is introduced and two local-set-based iterative methods are proposed to reconstruct bandlimited graph signals from sampled data. In each iteration, one of the proposed methods reweights the sampled residuals for different vertices, while the other propagates the sampled residuals in their respective local sets. These algorithms are built on frame theory and the concept of local sets, based on which several frames and contraction operators are proposed. We then prove that the reconstruction methods converge to the original signal under certain conditions and demonstrate that the new methods lead to a significantly faster convergence compared with the baseline method. Furthermore, the correspondence between graph signal sampling and time-domain irregular sampling is analyzed comprehensively, which may be helpful to future works on graph signals. Computer simulations are conducted. The experimental results demonstrate the effectiveness of the reconstruction methods in various sampling geometries, imprecise prior knowledge of the cutoff frequency, and noisy scenarios.
The problem of signal reconstruction is closely related to frame theory, which is also involved in other areas of graph signal processing, e.g., wavelet and vertex-frequency analysis on graphs @cite_5 . Based on the windowed graph Fourier transform and vertex-frequency analysis, windowed graph Fourier frames are studied in @cite_21 . A spectrum-adapted tight vertex-frequency frame is proposed in @cite_18 via translation on the graph. These works focus on vertex-frequency frames whose elements make up over-representation dictionaries, while in the reconstruction problem the frames are always composed of elements centered at the vertices in the sampling sets.
{ "cite_N": [ "@cite_5", "@cite_18", "@cite_21" ], "mid": [ "2158787690", "2117569556", "2016423476" ], "abstract": [ "We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian L. Given a wavelet generating kernel g and a scale parameter t, we define the scaled wavelet operator Ttg = g(tL). The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on g, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing L. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.", "We consider the problem of designing spectral graph filters for the construction of dictionaries of atoms that can be used to efficiently represent signals residing on weighted graphs. While the filters used in previous spectral graph wavelet constructions are only adapted to the length of the spectrum, the filters proposed in this paper are adapted to the distribution of graph Laplacian eigenvalues, and therefore lead to atoms with better discriminatory power. Our approach is to first characterize a family of systems of uniformly translated kernels in the graph spectral domain that give rise to tight frames of atoms generated via generalized translation on the graph. We then warp the uniform translates with a function that approximates the cumulative spectral density function of the graph Laplacian eigenvalues. 
We use this approach to construct computationally efficient, spectrum-adapted, tight vertex-frequency and graph wavelet frames. We give numerous examples of the resulting spectrum-adapted graph filters, and also present an illustrative example of vertex-frequency analysis using the proposed construction.", "One of the key challenges in the area of signal processing on graphs is to design dictionaries and transform methods to identify and exploit structure in signals on weighted graphs. To do so, we need to account for the intrinsic geometric structure of the underlying graph data domain. In this paper, we generalize one of the most important signal processing tools - windowed Fourier analysis - to the graph setting. Our approach is to first define" ] }
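The baseline iterative reconstruction that the related-work paragraph above builds on can be sketched as a Papoulis-Gerchberg-style alternation: reset the known samples, then project back onto the bandlimited subspace. A toy numpy version (the cycle graph, bandwidth, and sampling set are illustrative assumptions; the local-set reweighting and propagation variants are not modeled here):

```python
import numpy as np

# Cycle graph on 10 vertices
n, k = 10, 3
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(axis=1)) - A
_, U = np.linalg.eigh(L)
P = U[:, :k] @ U[:, :k].T          # projector onto the k-bandlimited subspace

rng = np.random.default_rng(0)
x = P @ rng.standard_normal(n)     # ground-truth bandlimited signal
S = np.array([0, 2, 4, 5, 8])      # sampled vertices (a uniqueness set here)

# Alternate between enforcing the measurements and enforcing bandlimitedness
x_hat = np.zeros(n)
for _ in range(500):
    x_hat[S] = x[S]                # reset the known samples
    x_hat = P @ x_hat              # project back onto the bandlimited space
print(np.max(np.abs(x_hat - x)))   # shrinks geometrically toward zero
```

Convergence requires S to be a uniqueness set for bandwidth k; otherwise the iteration has multiple fixed points.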
1410.4296
1678604504
Datacenter networks and services are at risk in the face of disasters. Existing fault-tolerant storage services cannot even achieve a nil recovery point objective (RPO) as client-generated data may get lost before the completion of their migration across geo-replicated datacenters. SDN has proved instrumental in exploiting application-level information to optimise the routing of information. In this paper, we propose Software Defined Edge (SDE), i.e., the implementation of SDN at the network edge, to achieve a nil RPO. We illustrate our proposal with a fault-tolerant key-value store that experimentally recovers from disaster within 30s. Although SDE is inherently fault-tolerant and scalable, its deployment raises new challenges for the partnership between ISPs and CDN providers. We conclude that failure detection information at the SDN-level can effectively help applications recover from disaster.
Replication is the key to making data resilient to failures. A distributed storage uses replication to guarantee that requests issued by clients get served despite the crash of a server. In the case of a disaster that affects a whole datacenter, it is important to geo-replicate data by copying the data across datacenters located in different regions of the globe. Database solutions traditionally classify servers into a single primary and multiple backups. In such a context, there exist two ways of recovering from a disaster @cite_8 . First, the client contacts the primary and the primary exchanges messages with the backup(s) before the client gets a response. To recover the data after a disaster, at least one backup should be located in a different region from the primary, which makes this solution, called , slow as the client request latency increases with the distance between regions. Second, the client contacts only the primary to get a faster response before any message exchange with the backups, an efficient alternative called that unfortunately cannot guarantee recovery.
{ "cite_N": [ "@cite_8" ], "mid": [ "1514375659" ], "abstract": [ "Remote backup copies of databases are often maintained to ensure availability of data even in the presence of extensive failures, for which local replication mechanisms may be inadequate. We present two versions of an epoch algorithm for maintaining a consistent remote backup copy of a database. The algorithms ensure scalability, which makes them suitable for very large databases. The correctness and the performance of the algorithms are discussed, and an additional application for distributed group commit is given." ] }
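The trade-off described above — answer the client only after the remote backup has the update, versus answer first and replicate later — is the classical 2-safe vs. 1-safe distinction from the database literature. A toy sequential sketch (the function name, update value, and crash model are illustrative assumptions, not from the cited work):

```python
def run(two_safe, crash_after_ack):
    """Toy one-client, one-primary, one-remote-backup execution.

    two_safe=True : the primary replicates to the backup BEFORE acknowledging.
    two_safe=False: the primary acknowledges first (1-safe); a disaster may
                    then destroy the primary before replication happens.
    Returns the acknowledged updates that the backup never received.
    """
    backup_log, acked = [], []
    update = "x=1"
    if two_safe:
        backup_log.append(update)      # synchronous replication first ...
        acked.append(update)           # ... then the client acknowledgement
    else:
        acked.append(update)           # fast acknowledgement to the client
        if not crash_after_ack:        # replication races the disaster
            backup_log.append(update)
    return [u for u in acked if u not in backup_log]

print(run(two_safe=True,  crash_after_ack=True))   # []      -> nil RPO
print(run(two_safe=False, crash_after_ack=True))   # ['x=1'] -> data loss
```

The 2-safe run loses nothing even when the crash hits right after the acknowledgement; the 1-safe run trades that guarantee for lower client latency.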
1410.4296
1678604504
Datacenter networks and services are at risk in the face of disasters. Existing fault-tolerant storage services cannot even achieve a nil recovery point objective (RPO) as client-generated data may get lost before the completion of their migration across geo-replicated datacenters. SDN has proved instrumental in exploiting application-level information to optimise the routing of information. In this paper, we propose Software Defined Edge (SDE), i.e., the implementation of SDN at the network edge, to achieve a nil RPO. We illustrate our proposal with a fault-tolerant key-value store that experimentally recovers from disaster within 30s. Although SDE is inherently fault-tolerant and scalable, its deployment raises new challenges for the partnership between ISPs and CDN providers. We conclude that failure detection information at the SDN-level can effectively help applications recover from disaster.
An alternative to primary backups is to group servers by majorities, by simply redirecting any request issued by a client to a quorum of the servers. This strategy experiences a longer delay for read-only requests as a client must always wait for the participation of a number of servers that is linear in the total number of servers before getting an acknowledgement from the storage service. This is typically the technique used to synchronise remote servers based on the Paxos consensus protocol. Instead of considering majorities, an alternative is to exploit , or mutually interesting sets, of servers that indicate the minimum amount of servers where the data should be replicated @cite_10 . While our solution is similar to the combination of 1-safety and 2-safety approaches, it actually balances the load on quorums of servers rather than using the primary backups approach.
{ "cite_N": [ "@cite_10" ], "mid": [ "1989492148" ], "abstract": [ "This paper describes the design and implementation of SecondSite, a cloud-based service for disaster tolerance. SecondSite extends the Remus virtualization-based high availability system by allowing groups of virtual machines to be replicated across data centers over wide-area Internet links. The goal of the system is to commodify the property of availability, exposing it as a simple tick box when configuring a new virtual machine. To achieve this in the wide area, we have had to tackle the related issues of replication traffic bandwidth, reliable failure detection across geographic regions and traffic redirection over a wide-area network without compromising on transparency and consistency." ] }
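The majority-quorum alternative sketched above can be illustrated with a toy replicated register: because any two majorities of n replicas intersect, a read quorum always contains at least one replica that saw the latest write. A hypothetical minimal version (no failures, concurrency, or network modeled; class and field names are invented for illustration):

```python
import random

class QuorumStore:
    """Toy majority-quorum register: any two majorities of n replicas
    intersect, so every read quorum contains the latest committed write."""
    def __init__(self, n):
        self.replicas = [{"ts": 0, "val": None} for _ in range(n)]
        self.quorum = n // 2 + 1           # majority size

    def write(self, ts, val):
        # contact some majority of the replicas (a random one here)
        for r in random.sample(self.replicas, self.quorum):
            if ts > r["ts"]:
                r["ts"], r["val"] = ts, val

    def read(self):
        # the highest timestamp seen in any majority is the latest write
        acks = random.sample(self.replicas, self.quorum)
        return max(acks, key=lambda r: r["ts"])["val"]

store = QuorumStore(5)
store.write(1, "a")
store.write(2, "b")
print(store.read())   # always "b": quorums of size 3 out of 5 overlap
```

This also shows the latency cost the paragraph mentions: even a read must wait for a number of servers linear in the replica count.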
1410.3756
2953045841
It is common for CCTV operators to overlook interesting events taking place within the crowd due to the large number of people in the crowded scene (e.g., marathon, rally). Thus, there is a dire need to automate the detection of salient crowd regions requiring immediate attention for more effective and proactive surveillance. This paper proposes a novel framework to identify and localize salient regions in a crowd scene, by transforming low-level features extracted from the crowd motion field into a global similarity structure. The global similarity structure representation allows the discovery of the intrinsic manifold of the motion dynamics, which could not be captured by the low-level representation. Ranking is then performed on the global similarity structure to identify a set of extrema. The proposed approach is unsupervised, so the learning stage is eliminated. Experimental results on public datasets demonstrate the effectiveness of exploiting such extrema in identifying salient regions in various crowd scenarios that exhibit crowding, local irregular motion, and unique motion areas such as sources and sinks.
Existing methods can be divided into two main approaches. The first approach analyzes crowd behaviors or activities based on the motion of individuals, where tracking of their trajectories is required @cite_4 @cite_24 @cite_2 @cite_6 @cite_16 @cite_3 @cite_8 . Commonly, the tracking approaches keep track of each individual's motion and further apply a statistical model of the trajectories to identify the semantics or geometric structures of the scene, such as the walking paths, sources and sinks. Then, the learned semantics are compared to the query trajectories to detect anomalies. While in principle individuals should be tracked from the time they enter a scene till the time they exit it to infer such semantics, it is inevitable that tracking tends to fail due to occlusion, cluttered backgrounds and irregular motion in the crowded scenes. Therefore, the aforementioned methods work well, up to a certain extent, in sparse crowd scenes. They tend to fail in dense crowd scenes (Fig. ), where target tracking is extremely challenging.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_6", "@cite_3", "@cite_24", "@cite_2", "@cite_16" ], "mid": [ "2128534087", "", "2165609887", "2125095437", "2584479935", "2532363096", "" ], "abstract": [ "This paper considers the problem of automatically learning an activity-based semantic scene model from a stream of video data. A scene model is proposed that labels regions according to an identifiable activity in each region, such as entry exit zones, junctions, paths, and stop zones. We present several unsupervised methods that learn these scene elements and present results that show the efficiency of our approach. Finally, we describe how the models can be used to support the interpretation of moving objects in a visual surveillance environment.", "", "In this work we present a new crowd analysis algorithm powered by behavior priors that are learned on a large database of crowd videos gathered from the Internet. The algorithm works by first learning a set of crowd behavior priors off-line. During testing, crowd patches are matched to the database and behavior priors are transferred. We adhere to the insight that despite the fact that the entire space of possible crowd behaviors is infinite, the space of distinguishable crowd motion patterns may not be all that large. For many individuals in a crowd, we are able to find analogous crowd patches in our database which contain similar patterns of behavior that can effectively act as priors to constrain the difficult task of tracking an individual in a crowd. Our algorithm is data-driven and, unlike some crowd characterization methods, does not require us to have seen the test video beforehand. 
It performs like state-of-the-art methods for tracking people having common crowd behaviors and outperforms the methods when the tracked individual behaves in an unusual way.", "In this paper, a new Mixture model of Dynamic pedestrian-Agents (MDA) is proposed to learn the collective behavior patterns of pedestrians in crowded scenes. Collective behaviors characterize the intrinsic dynamics of the crowd. From the agent-based modeling, each pedestrian in the crowd is driven by a dynamic pedestrian-agent, which is a linear dynamic system with its initial and termination states reflecting a pedestrian's belief of the starting point and the destination. Then the whole crowd is modeled as a mixture of dynamic pedestrian-agents. Once the model is unsupervisedly learned from real data, MDA can simulate the crowd behaviors. Furthermore, MDA can well infer the past behaviors and predict the future behaviors of pedestrians given their trajectories only partially observed, and classify different pedestrian behaviors in the scene. The effectiveness of MDA and its applications are demonstrated by qualitative and quantitative experiments on the video surveillance dataset collected from the New York Grand Central Station.", "In this paper, we describe an unsupervised learning framework to segment a scene into semantic regions and to build semantic scene models from long-term observations of moving objects in the scene. First, we introduce two novel similarity measures for comparing trajectories in far-field visual surveillance. The measures simultaneously compare the spatial distribution of trajectories and other attributes, such as velocity and object size, along the trajectories. They also provide a comparison confidence measure which indicates how well the measured image-based similarity approximates true physical similarity. We also introduce novel clustering algorithms which use both similarity and comparison confidence. 
Based on the proposed similarity measures and clustering methods, a framework to learn semantic scene models by trajectory analysis is developed. Trajectories are first clustered into vehicles and pedestrians, and then further grouped based on spatial and velocity distributions. Different trajectory clusters represent different activities. The geometric and statistical models of structures in the scene, such as roads, walk paths, sources and sinks, are automatically learned from the trajectory clusters. Abnormal activities are detected using the semantic scene models. The system is robust to low-level tracking errors.", "This paper presents a target tracking framework for unstructured crowded scenes. Unstructured crowded scenes are defined as those scenes where the motion of a crowd appears to be random with different participants moving in different directions over time. This means each spatial location in such scenes supports more than one, or multi-modal, crowd behavior. The case of tracking in structured crowded scenes, where the crowd moves coherently in a common direction, and the direction of motion does not vary over time, was previously handled in [1]. In this work, we propose to model various crowd behavior (or motion) modalities at different locations of the scene by employing Correlated Topic Model (CTM) of [16]. In our construction, words correspond to low level quantized motion features and topics correspond to crowd behaviors. It is then assumed that motion at each location in an unstructured crowd scene is generated by a set of behavior proportions, where behaviors represent distributions over low-level motion features. This way any one location in the scene may support multiple crowd behavior modalities and can be used as prior information for tracking. 
Our approach enables us to model a diverse set of unstructured crowd domains, which range from cluttered time-lapse microscopy videos of cell populations in vitro, to footage of crowded sporting events.", "" ] }
1410.3764
1772249734
We present a new approach, called lazy matching, to the problem of on-line matching on bipartite graphs. Imagine that one side of a graph is given and the vertices of the other side are arriving on-line. Originally, an incoming vertex is either irrevocably matched to another element or stays forever unmatched. A lazy algorithm is allowed to match a new vertex to a group of elements (possibly empty) and afterwards, when pressed by subsequent vertices, may give up parts of the group. The restriction is that, at all times, each element is in at most one group. We present an optimal lazy algorithm (deterministic) and prove that its competitive ratio equals @math . The lazy approach allows us to break the barrier of @math , which is the best competitive ratio that can be guaranteed by any deterministic algorithm in the classical on-line matching.
Another similar approach was proposed by @cite_5 as . They consider the weighted matching problem where each incoming vertex @math may be assigned to one of its neighbors or left alone. Each vertex @math accepts at most @math vertices from @math with the highest-weighted edges. Here the roles of servers and tasks are switched. All tasks are given at once and servers arrive on-line. Each server has to be assigned to at most one task. In the end, each task chooses at most @math servers from all the servers assigned to it -- the ones with the highest-weighted edges. The main difference from the lazy approach is that once a connection between a server and a task is established, it cannot be changed until the very end. There is no such restriction in the lazy approach -- a server may drop its task and take a new one during the on-line process.
{ "cite_N": [ "@cite_5" ], "mid": [ "1578666690" ], "abstract": [ "We study an online weighted assignment problem with a set of fixed nodes corresponding to advertisers and online arrival of nodes corresponding to ad impressions. Advertiser a has a contract for n(a) impressions, and each impression has a set of weighted edges to advertisers. The problem is to assign the impressions online so that while each advertiser a gets n(a) impressions, the total weight of edges assigned is maximized. Our insight is that ad impressions allow for free disposal, that is, advertisers are indifferent to, or prefer being assigned more than n(a) impressions without changing the contract terms. This means that the value of an assignment only includes the n(a) highest-weighted items assigned to each node a. With free disposal, we provide an algorithm for this problem that achieves a competitive ratio of 1 ? 1 e against the offline optimum, and show that this is the best possible ratio. We use a primal dual framework to derive our results, applying a novel exponentially-weighted dual update rule. Furthermore, our algorithm can be applied to a general set of assignment problems including the ad words problem as a special case, matching the previously known 1 ? 1 e competitive ratio." ] }
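The free-disposal rule described above — each task finally keeps only its n(a) highest-weight assigned servers — can be sketched with a simple greedy heuristic. This is illustrative only: it is not the 1 - 1/e primal-dual algorithm of the cited paper, and all names are invented for the example.

```python
import heapq

def free_disposal_assign(tasks_capacity, arrivals):
    """Greedy online assignment with free disposal.

    tasks_capacity: {task: n(a)} -- each task finally keeps its n(a)
        highest-weight assigned servers.
    arrivals: one {task: edge_weight} dict per online server.
    Greedy rule (a simple heuristic, not the 1 - 1/e algorithm): send
    each server where it improves the kept set's total weight the most.
    """
    kept = {a: [] for a in tasks_capacity}           # min-heaps of kept weights
    for edges in arrivals:
        best, gain = None, 0.0
        for a, w in edges.items():
            heap, cap = kept[a], tasks_capacity[a]
            # marginal value of adding weight w to task a's kept set
            g = w if len(heap) < cap else w - heap[0]
            if g > gain:
                best, gain = a, g
        if best is not None:
            heap = kept[best]
            heapq.heappush(heap, edges[best])
            if len(heap) > tasks_capacity[best]:     # free disposal
                heapq.heappop(heap)                  # drop the lightest server
    return {a: sorted(h, reverse=True) for a, h in kept.items()}

result = free_disposal_assign(
    {"a": 1}, [{"a": 2.0}, {"a": 5.0}, {"a": 3.0}])
print(result)   # {'a': [5.0]}: the weight-2 server is disposed, weight-3 skipped
```

Note that, exactly as the paragraph says, a disposed assignment is never revoked into another task: disposal only discards, unlike the lazy approach.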
1410.3751
2048106635
A reliable human skin detection method that is adaptable to different human skin colors and illumination conditions is essential for better human skin segmentation. Even though different human skin-color detection solutions have been successfully applied, they are prone to false skin detection and are not able to cope with the variety of human skin colors across different ethnicities. Moreover, existing methods require a high computational cost. In this paper, we propose a novel human skin detection approach that combines a smoothed 2-D histogram and a Gaussian model, for automatic human skin detection in color image(s). In our approach, an eye detector is used to refine the skin model for a specific person. The proposed approach reduces computational costs as no training is required, and it improves the accuracy of skin detection despite wide variation in ethnicity and illumination. To the best of our knowledge, this is the first method to employ a fusion strategy for this purpose. Qualitative and quantitative results on three standard public datasets and a comparison with state-of-the-art methods have shown the effectiveness and robustness of the proposed approach.
Other approaches are multilayer perceptrons @cite_17 @cite_8 @cite_18 , Bayesian classifiers @cite_0 @cite_20 @cite_9 and random forests @cite_26 . In multilayer perceptron based skin classification, a neural network is trained to learn the complex class conditional distributions of the skin and non-skin pixels. @cite_17 proposed a Kohonen network-based skin detector where two Kohonen networks, a skin-only detector and a skin-plus-non-skin detector, were trained on a set of about 500 manually labelled images to obtain an optimal result. @cite_0 used a Bayesian network with training data of 60,000 samples for skin modelling and classification. @cite_20 proposed the use of tree-augmented Naive Bayes classifiers for skin detection. The Bayesian decision rule for minimum cost is a well-established technique in statistical pattern classification. Jones and Rehg @cite_9 used the Bayesian decision rule with a 3D @math histogram model built from 2 billion pixels collected from 18,696 web images to perform skin detection. Readers are encouraged to read @cite_12 @cite_14 for a detailed state-of-the-art review. Although these solutions have been very successful, they suffer from a tradeoff between precision and computational complexity.
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_14", "@cite_8", "@cite_9", "@cite_0", "@cite_12", "@cite_20", "@cite_17" ], "mid": [ "2114400797", "2091313930", "", "2122358773", "2153746365", "2143413293", "2078088780", "1817561967", "2008138930" ], "abstract": [ "Several types of detectors such as ultraviolet (UV), infrared (IR), visible light (VL), different pressure, flame rod, and others are employed to detect a fire flame in power generation plants. However, these flame detectors have some performance problems. Therefore, this paper describes the image-processing method of fire detection as well as the neural-network modeling. Nowadays, the image-processing technique is broadly applied in the industrial fields. An extracted image information is taken into the inputs of the neural-network model. The neural-network model has strong adaptability and learning capability; therefore, this model can be suitable for pattern classification. The Ulsan Steam Power Generation Plant in Korea is employed as the test field. If this technique can be implemented in physical plants, the boilers can be operated economically and effectively.", "Skin detection is used in applications ranging from face detection, tracking body parts and hand gesture analysis, to retrieval and blocking objectionable content. For robust skin segmentation and detection, we investigate color classification based on random forest. A random forest is a statistical framework with a very high generalization accuracy and quick training times. The random forest approach is used with the IHLS color space for raw pixel based skin detection. We evaluate random forest based skin detection and compare it to Bayesian network, Multilayer Perceptron, SVM, AdaBoost, Naive Bayes and RBF network. Results on a database of 8991 images with manually annotated pixel-level ground truth show that with the IHLS color space, the random forest approach outperforms other approaches. 
We also show the effect of increasing the number of trees grown for random forest. With fewer trees we get faster training times and with 10 trees we get the highest F-score.", "", "This paper presents a novel neural network based technique for face detection that eliminates limitations pertaining to the skin color variations among people. We propose to model the skin color in the three dimensional RGB space which is a color cube consisting of all the possible color combinations. Skin samples in images with varying lighting conditions, from the Old Dominion University skin database, are used for obtaining a skin color distribution. The primary color components of each plane of the color cube are fed to a three-layered network, trained using the backpropagation algorithm with the skin samples, to extract the skin regions from the planes and interpolate them so as to provide an optimum decision boundary and hence the positive skin samples for the skin classifier. The use of the color cube eliminates the difficulties of finding the non-skin part of training samples since the interpolated data is considered skin and the rest of the color cube is considered non-skin. Subsequent face detection is aided by the color, geometry and motion information analyses of each frame in a video sequence. The performance of the new face detection technique has been tested with real-time data of size 320 × 240 frames from video sequences captured by a surveillance camera. It is observed that the network can differentiate skin and non-skin effectively while minimizing false detections to a large extent when compared with the existing techniques. In addition, it is seen that the network is capable of performing face detection in complex lighting and background environments.", "The existence of large image datasets such as photos on the World Wide Web makes it possible to build powerful generic models for low-level image attributes like color using simple histogram learning techniques. 
We describe the construction of color models for skin and non-skin classes from a dataset of nearly 1 billion labeled pixels. These classes exhibit a surprising degree of separability which we exploit by building a skin pixel detector that achieves an equal error rate of 88 . We compare the performance of histogram and mixture models in skin detection and find histogram models to be superior in accuracy and computational cost. Using aggregate features computed from the skin detector we build a remarkably effective detector for naked people. We believe this work is the most comprehensive and detailed exploration of skin color models to date.", "The automated detection and tracking of humans in computer vision necessitates improved modeling of the human skin appearance. We propose a Bayesian network approach for skin detection. We test several classifiers and propose a methodology for incorporating unlabeled data. We apply the semi-supervised approach to skin detection and we show that learning the structure of Bayesian network classifiers enables learning good classifiers with a small labeled set and a large unlabeled set.", "Skin detection plays an important role in a wide range of image processing applications ranging from face detection, face tracking, gesture analysis, content-based image retrieval systems and to various human computer interaction domains. Recently, skin detection methodologies based on skin-color information as a cue has gained much attention as skin-color provides computationally effective yet, robust information against rotations, scaling and partial occlusions. Skin detection using color information can be a challenging task as the skin appearance in images is affected by various factors such as illumination, background, camera characteristics, and ethnicity. Numerous techniques are presented in literature for skin detection using color. 
In this paper, we provide a critical up-to-date review of the various skin modeling and classification strategies based on color information in the visual spectrum. The review is divided into three different categories: first, we present the various color spaces used for skin modeling and detection. Second, we present different skin modeling and classification approaches. However, many of these works are limited in performance due to real-world conditions such as illumination and viewing conditions. To cope up with the rapidly changing illumination conditions, illumination adaptation techniques are applied along with skin-color detection. Third, we present various approaches that use skin-color constancy and dynamic adaptation techniques to improve the skin detection performance in dynamically changing illumination and environmental conditions. Wherever available, we also indicate the various factors under which the skin detection techniques perform well.", "Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. 
We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.", "A large body of human image processing techniques use skin detection as a first primitive for subsequent feature extraction. Well established methods of colour modelling, such as histograms and Gaussian mixture models have enabled the construction of suitably accurate skin detectors. However such techniques are not ideal for use in adaptive real time environments. We describe methods of skin detection using a Self-Organising Map or SOM, and show performance comparable (94 accuracy on facial images) to conventional techniques. We also introduce the AXEON Learning Processor as the basis for a hardware skin detector, and outline the potential benefits of using this system in a demanding environment, such as filtering Internet traffic, to which conventional techniques are not best suited." ] }
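The histogram-based Bayesian decision rule attributed to Jones and Rehg above can be sketched in miniature: estimate per-bin likelihoods for skin and non-skin from labeled pixels, then threshold the likelihood ratio. The bin count, the hand-made training pixels, and the threshold below are toy assumptions for illustration (the original work used far finer histograms and billions of training pixels):

```python
import numpy as np

BINS = 4   # toy quantization: 4 bins per RGB channel

def bin_index(px):
    # map an (r, g, b) pixel in [0, 255]^3 to a flat histogram bin
    r, g, b = (c * BINS // 256 for c in px)
    return r * BINS * BINS + g * BINS + b

def fit_histogram(pixels):
    # per-bin likelihood estimate with Laplace smoothing
    h = np.zeros(BINS ** 3)
    for px in pixels:
        h[bin_index(px)] += 1
    return (h + 1) / (h.sum() + len(h))

# tiny hand-made training sets (hypothetical data, illustration only)
skin = [(220, 170, 140), (200, 150, 120), (230, 180, 150)]
non_skin = [(30, 60, 200), (20, 200, 40), (10, 10, 10)]
p_skin, p_bg = fit_histogram(skin), fit_histogram(non_skin)

def is_skin(px, theta=1.0):
    # Bayes decision rule: compare the likelihood ratio to a threshold
    i = bin_index(px)
    return p_skin[i] / p_bg[i] > theta

print(is_skin((210, 160, 130)), is_skin((25, 70, 210)))
```

The precision/complexity tradeoff mentioned in the paragraph shows up directly here: finer histograms are more precise but need far more training pixels and memory.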
1410.3438
1672939318
Most of the attention in statistical compression is given to the space used by the compressed sequence, a problem completely solved with optimal prefix codes. However, in many applications, the storage space used to represent the prefix code itself can be an issue. In this paper we introduce and compare several techniques to store prefix codes. Let @math be the sequence length and @math be the alphabet size. Then a naive storage of an optimal prefix code uses @math bits. Our first technique shows how to use @math bits to store the optimal prefix code. Then we introduce an approximate technique that, for any @math , uses @math bits to store a prefix code with average codeword length at most @math times the minimum. In all cases, our data structures allow encoding and decoding of any symbol in @math time. We experimentally compare our new techniques with the state of the art, showing that we achieve 6--8-fold space reductions, at the price of a slower encoding (2.5--8 times slower) and decoding (12--24 times slower). The approximations further reduce this space and improve the time significantly, up to recovering the speed of classical implementations, for a moderate penalty in the average code length. As a byproduct, we compare various heuristic, approximate, and optimal algorithms to generate length-restricted codes, showing that the optimal ones are clearly superior and practical enough to be implemented.
All these approximations require @math time plus the time to build the Huffman tree. A technique to obtain the optimal length-restricted prefix code, by Larmore and Hirshberg @cite_24 , performs in @math time by reducing the construction to a binary version of the coin-collector's problem.
{ "cite_N": [ "@cite_24" ], "mid": [ "2021302562" ], "abstract": [ "An O ( nL )-time algorithm is introduced for constructing an optimal Huffman code for a weighted alphabet of size n , where each code string must have length no greater than L . The algorithm uses O ( n ) space." ] }
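The coin-collector reduction by Larmore and Hirschberg @cite_24 is usually implemented as the package-merge algorithm. Below is a minimal sketch in Python, assuming integer symbol weights and a length limit L; the function name and the Counter-based package representation are ours, not taken from the cited paper:

```python
from collections import Counter

def limited_huffman_lengths(freqs, L):
    """Package-merge: codeword lengths of an optimal prefix code
    in which every codeword has length at most L."""
    n = len(freqs)
    if n > 2 ** L:
        raise ValueError("no prefix code with max length L exists")
    # one "coin" per symbol; the Counter records which symbols a package contains
    singles = sorted(((f, Counter({i: 1})) for i, f in enumerate(freqs)),
                     key=lambda t: t[0])
    prev = list(singles)
    for _ in range(L - 1):
        # package adjacent pairs, then merge with the fresh coins
        packaged = [(a[0] + b[0], a[1] + b[1])
                    for a, b in zip(prev[::2], prev[1::2])]
        prev = sorted(singles + packaged, key=lambda t: t[0])
    # take the 2n-2 cheapest items; each occurrence of symbol i deepens it by 1
    depth = Counter()
    for _, members in prev[:2 * n - 2]:
        depth += members
    return [depth[i] for i in range(n)]
```

For example, `limited_huffman_lengths([1, 1, 2, 4], 3)` yields the unrestricted Huffman lengths `[3, 3, 2, 1]`, while tightening the limit to `L = 2` forces the balanced code `[2, 2, 2, 2]`.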
1410.3438
1672939318
Most of the attention in statistical compression is given to the space used by the compressed sequence, a problem completely solved with optimal prefix codes. However, in many applications, the storage space used to represent the prefix code itself can be an issue. In this paper we introduce and compare several techniques to store prefix codes. Let @math be the sequence length and @math be the alphabet size. Then a naive storage of an optimal prefix code uses @math bits. Our first technique shows how to use @math bits to store the optimal prefix code. Then we introduce an approximate technique that, for any @math , uses @math bits to store a prefix code with average codeword length at most @math times the minimum. In all cases, our data structures allow encoding and decoding of any symbol in @math time. We experimentally compare our new techniques with the state of the art, showing that we achieve 6--8-fold space reductions, at the price of a slower encoding (2.5--8 times slower) and decoding (12--24 times slower). The approximations further reduce this space and improve the time significantly, up to recovering the speed of classical implementations, for a moderate penalty in the average code length. As a byproduct, we compare various heuristic, approximate, and optimal algorithms to generate length-restricted codes, showing that the optimal ones are clearly superior and practical enough to be implemented.
Multiplicative approximations have the potential of yielding codes that can be represented within @math bits. Adler and Maggs @cite_34 showed that it generally takes more than @math bits to store a prefix code with average codeword length at most @math . Gagie @cite_0 @cite_43 @cite_7 showed that, for any constant @math , it takes @math bits to store a prefix code with average codeword length at most @math . He also showed his upper bound is nearly optimal because, for any positive constant @math , we cannot always store a prefix code with average codeword length at most @math in @math bits. Note that our result does not have the additive term @math in addition to the multiplicative term, which is very relevant on low-entropy texts.
{ "cite_N": [ "@cite_0", "@cite_43", "@cite_34", "@cite_7" ], "mid": [ "2080686589", "2150359208", "2066209652", "2011648667" ], "abstract": [ "", "We briefly survey some concepts related to empirical entropy--normal numbers, de Bruijn sequences and Markov processes-- and investigate how well it approximates Kolmogorov complexity. Our results suggest lth-order empirical entropy stops being a reasonable complexity metric for almost all strings of length m over alphabets of size n about when nl surpasses m.", "In this paper we examine the problem of sending an n-bit data item from a client to a server across an asymmetric communication channel. We demonstrate that there are scenarios in which a high-speed link from the server to the client can be used to greatly reduce the number of bits sent from the client to the server across a slower link. In particular, we assume that the data item is drawn from a probability distribution D that is known to the server but not to the client. We present several protocols in which the expected number of bits transmitted by the server and client are O(n) and O(H(D)+1), respectively, where H(D) is the binary entropy of D (and can range from 0 to n). These protocols are within a small constant factor of optimal in terms of the number of bits sent by the client. The expected number of rounds of communication between the server and client in the simplest of our protocols is O(H(D)). We also give a protocol for which the expected number of rounds is only O(1), but which requires more computational effort on the part of the server. A third technique provides a tradeoff between the computational effort and the number of rounds. These protocols are complemented by several lower bounds and impossibility results. We prove that all of our protocols are existentially optimal in terms of the number of bits sent by the server, i.e., there are distributions for which the total number of bits exchanged has to be at least n. 
In addition, we show that there is no protocol that is optimal for every distribution (as opposed to just existentially optimal) in terms of bits sent by the server. We demonstrate this by proving that it is undecidable to compute (even approximately), for an arbitrary distribution D, the expected number of bits that must be exchanged by the server and client on the distribution D.", "We show how any dynamic instantaneous compression algorithm can be converted to an asymmetric communication protocol, with which a server with high bandwidth can help clients with low bandwidth send it messages. Unlike previous authors, we do not assume the server knows the messages' distribution, and our protocols are the first to use only one round of communication for each message." ] }
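A standard way to represent a prefix code compactly, in the spirit of the techniques discussed above, is to store only the codeword lengths and reconstruct a canonical code from them: shorter codewords come first, and codewords of equal length are consecutive binary integers. A small illustrative sketch (the function name is ours, and this is not the paper's specific data structure):

```python
def canonical_codes(lengths):
    """Assign canonical codes from codeword lengths alone: shorter
    codewords first, then lexicographic, so lengths determine the code."""
    order = sorted(range(len(lengths)), key=lambda i: (lengths[i], i))
    codes, code, prev_len = {}, 0, 0
    for i in order:
        code <<= lengths[i] - prev_len   # extend to the next codeword length
        codes[i] = format(code, "0{}b".format(lengths[i]))
        code += 1
        prev_len = lengths[i]
    return codes
```

For instance, `canonical_codes([1, 2, 3, 3])` produces the prefix-free assignment `{0: '0', 1: '10', 2: '110', 3: '111'}`.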
1410.3726
2121126594
Ambiguity or uncertainty is a pervasive element of many real-world decision-making processes. Variation in decisions is a norm in this situation when the same problem is posed to different subjects. Psychological and metaphysical research has proven that decision making by humans is subjective. It is influenced by many factors such as experience, age, background, etc. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax this problem by assuming that scene images are mutually exclusive; therefore, they focus on developing different approaches to perform the binary classification tasks. In this paper, we show that scene images are nonmutually exclusive and propose the fuzzy qualitative rank classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of binary decision. Evaluations in terms of qualitative and quantitative measurements using large numbers and challenging public scene datasets have shown the effectiveness of our proposed method in modeling the nonmutually exclusive scene images.
Scene understanding has been one of the mainstream tasks in computer vision. It differs from conventional object detection or classification tasks, to the extent that a scene is composed of several entities that are often organized in an unpredictable layout @cite_30 . Surprisingly, from our findings, very little work, if any, has tackled this problem using a fuzzy approach. The early efforts in this area were dominated by computer vision researchers focusing on machine learning techniques. These prior works treated scene understanding as the problem of assigning one of several possible classes to a scene image of unknown class.
{ "cite_N": [ "@cite_30" ], "mid": [ "2154301842" ], "abstract": [ "We present a new approach to model visual scenes in image collections, based on local invariant features and probabilistic latent space models. Our formulation provides answers to three open questions:(l) whether the invariant local features are suitable for scene (rather than object) classification; (2) whether unsupennsed latent space models can be used for feature extraction in the classification task; and (3) whether the latent space formulation can discover visual co-occurrence patterns, motivating novel approaches for image organization and segmentation. Using a 9500-image dataset, our approach is validated on each of these issues. First, we show with extensive experiments on binary and multi-class scene classification tasks, that a bag-of-visterm representation, derived from local invariant descriptors, consistently outperforms state-of-the-art approaches. Second, we show that probabilistic latent semantic analysis (PLSA) generates a compact scene representation, discriminative for accurate classification, and significantly more robust when less training data are available. Third, we have exploited the ability of PLSA to automatically extract visually meaningful aspects, to propose new algorithms for aspect-based image ranking and context-sensitive image segmentation." ] }
1410.3726
2121126594
Ambiguity or uncertainty is a pervasive element of many real-world decision-making processes. Variation in decisions is a norm in this situation when the same problem is posed to different subjects. Psychological and metaphysical research has proven that decision making by humans is subjective. It is influenced by many factors such as experience, age, background, etc. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax this problem by assuming that scene images are mutually exclusive; therefore, they focus on developing different approaches to perform the binary classification tasks. In this paper, we show that scene images are nonmutually exclusive and propose the fuzzy qualitative rank classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of binary decision. Evaluations in terms of qualitative and quantitative measurements using large numbers and challenging public scene datasets have shown the effectiveness of our proposed method in modeling the nonmutually exclusive scene images.
Oliva and Torralba @cite_23 proposed a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represents the dominant spatial structure of a scene - the spatial envelope - as the scene representation. A support vector machine (SVM) classifier with a Gaussian kernel is then employed to classify the scene classes. Fei-Fei and Perona @cite_34 proposed a Bayesian hierarchical model, extended from latent Dirichlet allocation (LDA), to learn natural scene categories. In their learning model, they represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning, and finally choose the best model as their classification result. Inspired by the previous work, @cite_22 proposed probabilistic latent semantic analysis (pLSA), incorporated with KNN, for scene classification. Vogel and Schiele @cite_33 used the occurrence frequency of different concepts (water, rocks, etc.) in an image as intermediate features for scene image classification. Their two-stage system makes use of an intermediary semantic level of block classification (the concept level) to perform retrieval based on the occurrence of such concepts in an image.
{ "cite_N": [ "@cite_34", "@cite_33", "@cite_22", "@cite_23" ], "mid": [ "2107034620", "2146022472", "1589362500", "1566135517" ], "abstract": [ "We propose a novel approach to learn and recognize natural scene categories. Unlike previous work, it does not require experts to annotate the training set. We represent the image of a scene by a collection of local regions, denoted as codewords obtained by unsupervised learning. Each region is represented as part of a \"theme\". In previous work, such themes were learnt from hand-annotations of experts, while our method learns the theme distributions as well as the codewords distribution over the themes without supervision. We report satisfactory categorization performances on a large set of 13 categories of complex scenes.", "In this paper, we present a novel image representation that renders it possible to access natural scenes by local semantic description. Our work is motivated by the continuing effort in content-based image retrieval to extract and to model the semantic content of images. The basic idea of the semantic modeling is to classify local image regions into semantic concept classes such as water, rocks, or foliage. Images are represented through the frequency of occurrence of these local concepts. Through extensive experiments, we demonstrate that the image representation is well suited for modeling the semantic content of heterogenous scene categories, and thus for categorization and retrieval. The image representation also allows us to rank natural scenes according to their semantic similarity relative to certain scene categories. Based on human ranking data, we learn a perceptually plausible distance measure that leads to a high correlation between the human and the automatically obtained typicality ranking. 
This result is especially valuable for content-based image retrieval where the goal is to present retrieval results in descending semantic similarity from the query.", "Given a set of images of scenes containing multiple object categories (e.g. grass, roads, buildings) our objective is to discover these objects in each image in an unsupervised manner, and to use this object distribution to perform scene classification. We achieve this discovery using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature, here applied to a bag of visual words representation for each image. The scene classification on the object distribution is carried out by a k-nearest neighbour classifier. We investigate the classification performance under changes in the visual vocabulary and number of latent topics learnt, and develop a novel vocabulary using colour SIFT descriptors. Classification performance is compared to the supervised approaches of Vogel & Schiele [19] and Oliva & Torralba [11], and the semi-supervised approach of Fei Fei & Perona [3] using their own datasets and testing protocols. In all cases the combination of (unsupervised) pLSA followed by (supervised) nearest neighbour classification achieves superior results. We show applications of this method to image retrieval with relevance feedback and to scene classification in videos.", "In this paper, we propose a computational model of the recognition of real world scenes that bypasses the segmentation and the processing of individual objects or regions. The procedure is based on a very low dimensional representation of the scene, that we term the Spatial Envelope. We propose a set of perceptual dimensions (naturalness, openness, roughness, expansion, ruggedness) that represent the dominant spatial structure of a scene. Then, we show that these dimensions may be reliably estimated using spectral and coarsely localized information. 
The model generates a multidimensional space in which scenes sharing membership in semantic categories (e.g., streets, highways, coasts) are projected closed together. The performance of the spatial envelope model shows that specific information about object shape or identity is not a requirement for scene categorization and that modeling a holistic representation of the scene informs about its probable semantic category." ] }
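The codeword-based pipelines surveyed above share a common skeleton: quantize local descriptors into codewords, represent each image by a normalized codeword histogram, and classify the histogram. A toy sketch of that skeleton, assuming the descriptor quantization has already been done (function names are ours, not from the cited papers):

```python
import numpy as np

def bow_histogram(codeword_ids, vocab_size):
    """Normalized bag-of-codewords histogram for one image."""
    h = np.bincount(codeword_ids, minlength=vocab_size).astype(float)
    return h / h.sum()

def knn_scene(train_hists, train_labels, test_hist, k=1):
    """Classify a histogram by majority vote among its k nearest neighbors."""
    d = np.linalg.norm(train_hists - test_hist, axis=1)
    nearest = np.argsort(d)[:k]
    vals, counts = np.unique(train_labels[nearest], return_counts=True)
    return vals[np.argmax(counts)]
```

The same skeleton covers most of the methods above: pLSA or LDA simply replaces the raw histogram with a lower-dimensional topic representation before the classifier.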
1410.3726
2121126594
Ambiguity or uncertainty is a pervasive element of many real-world decision-making processes. Variation in decisions is a norm in this situation when the same problem is posed to different subjects. Psychological and metaphysical research has proven that decision making by humans is subjective. It is influenced by many factors such as experience, age, background, etc. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax this problem by assuming that scene images are mutually exclusive; therefore, they focus on developing different approaches to perform the binary classification tasks. In this paper, we show that scene images are nonmutually exclusive and propose the fuzzy qualitative rank classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of binary decision. Evaluations in terms of qualitative and quantitative measurements using large numbers and challenging public scene datasets have shown the effectiveness of our proposed method in modeling the nonmutually exclusive scene images.
However, in the scene classification task, it is very likely that a scene image belongs to multiple classes. As a result, all the aforementioned solutions that assume scene classes are mutually exclusive are impractical and often lead to classification errors. We believe that such mutually exclusive assignments of scene images are somewhat arbitrary and possibly sub-optimal, as depicted in Fig. . To the best of our knowledge, there is a considerable body of multi-label classification research @cite_29 @cite_25 ; however, only a few studies have focused on the domain of scene understanding. @cite_7 proposed an approach using SVMs with cross-training to build a classifier for every base class. The maximum a posteriori (MAP) principle is then applied, with the aid of prior probability calculation and a gamma-fit operation on the single-label and multi-label training data, to obtain the threshold that determines whether a testing sample falls into a single-label or multi-label event.
{ "cite_N": [ "@cite_29", "@cite_25", "@cite_7" ], "mid": [ "2146241755", "", "2156935079" ], "abstract": [ "Multi-label classification methods are increasingly required by modern applications, such as protein function classification, music categorization, and semantic scene classification. This article introduces the task of multi-label classification, organizes the sparse related literature into a structured presentation and performs comparative experimental results of certain multilabel classification methods. It also contributes the definition of concepts for the quantification of the multi-label nature of a data set.", "", "In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature." ] }
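The single-versus-multi-label decision described above ultimately reduces to thresholding per-class scores. The sketch below is a deliberately simplified stand-in for the MAP and gamma-fit procedure of @cite_7 : it emits extra labels whenever a runner-up score trails the top score by at most a margin tau (both the rule and the default tau are our illustrative assumptions):

```python
def label_set(scores, tau=0.2):
    """Return indices of predicted labels: the top-scoring class,
    plus any class whose score is within tau of the top score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    top = order[0]
    return sorted([top] + [i for i in order[1:]
                           if scores[top] - scores[i] <= tau])
```

With `tau = 0.2`, `label_set([0.9, 0.8, 0.1])` returns the multi-label set `[0, 1]`, whereas `label_set([0.9, 0.3, 0.1])` stays single-label with `[0]`.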
1410.3726
2121126594
Ambiguity or uncertainty is a pervasive element of many real-world decision-making processes. Variation in decisions is a norm in this situation when the same problem is posed to different subjects. Psychological and metaphysical research has proven that decision making by humans is subjective. It is influenced by many factors such as experience, age, background, etc. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax this problem by assuming that scene images are mutually exclusive; therefore, they focus on developing different approaches to perform the binary classification tasks. In this paper, we show that scene images are nonmutually exclusive and propose the fuzzy qualitative rank classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of binary decision. Evaluations in terms of qualitative and quantitative measurements using large numbers and challenging public scene datasets have shown the effectiveness of our proposed method in modeling the nonmutually exclusive scene images.
Inspired by @cite_7 , Zhang and Zhou @cite_27 introduced the multi-label lazy learning K-nearest neighbor (ML-KNN) classification algorithm. It resolves the inefficiency of using multiple independent binary SVM classifiers, one per class. Statistical information from the training set and the MAP principle are utilized to determine the best label set for a test instance. Unfortunately, both these methods require manual human annotation of multi-label training data in order to compute the prior probabilities by frequency counting over the training set. This is an impractical solution, since human decisions are biased and inconsistent. It also leads to a large number of classes with sparse samples @cite_29 . Besides that, human reasoning does not annotate an image as multi-class. For instance, referring to Fig. , it is very rare for one to say "this is a Coast + Mountain class scene image". In general, one would rather comment "this is a Coast" or "this is a Mountain" scene.
{ "cite_N": [ "@cite_27", "@cite_29", "@cite_7" ], "mid": [ "2052684427", "2146241755", "2156935079" ], "abstract": [ "Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named ML-KNN is presented, which is derived from the traditional K-nearest neighbor (KNN) algorithm. In detail, for each unseen instance, its K nearest neighbors in the training set are firstly identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that ML-KNN achieves superior performance to some well-established multi-label learning algorithms.", "Multi-label classification methods are increasingly required by modern applications, such as protein function classification, music categorization, and semantic scene classification. This article introduces the task of multi-label classification, organizes the sparse related literature into a structured presentation and performs comparative experimental results of certain multilabel classification methods. It also contributes the definition of concepts for the quantification of the multi-label nature of a data set.", "In classic pattern recognition problems, classes are mutually exclusive by definition. 
Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature." ] }
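A compact sketch of the ML-KNN decision rule of @cite_27 : label priors and neighbor-count likelihoods are estimated from the training set by leave-one-out counting, and the MAP rule then scores each label for a query. This is a simplified O(m^2) reading of the algorithm with Laplace smoothing s; the variable names are ours:

```python
import numpy as np

def ml_knn(X, Y, x, k=3, s=1.0):
    """Score each label for query x in [0, 1]; threshold at 0.5.
    X: (m, d) training features, Y: (m, q) binary label matrix."""
    m, q = Y.shape
    prior = (s + Y.sum(0)) / (2 * s + m)      # P(instance carries label l)
    # leave-one-out: how many of each instance's k neighbors carry each label
    C = np.zeros((m, q), dtype=int)
    for i in range(m):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                          # exclude the instance itself
        C[i] = Y[np.argsort(d)[:k]].sum(0)
    # neighbor label counts for the query
    c = Y[np.argsort(np.linalg.norm(X - x, axis=1))[:k]].sum(0)
    scores = np.zeros(q)
    for l in range(q):
        # how often instances with / without label l see j neighbors with it
        kc1 = np.bincount(C[Y[:, l] == 1, l], minlength=k + 1)
        kc0 = np.bincount(C[Y[:, l] == 0, l], minlength=k + 1)
        p1 = prior[l] * (s + kc1[c[l]]) / (s * (k + 1) + kc1.sum())
        p0 = (1 - prior[l]) * (s + kc0[c[l]]) / (s * (k + 1) + kc0.sum())
        scores[l] = p1 / (p1 + p0)             # MAP posterior per label
    return scores
```

On two well-separated clusters, a query near the first cluster scores its label above 0.5 and the other label below it, which is the binary decision ML-KNN makes per label.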
1410.3726
2121126594
Ambiguity or uncertainty is a pervasive element of many real-world decision-making processes. Variation in decisions is a norm in this situation when the same problem is posed to different subjects. Psychological and metaphysical research has proven that decision making by humans is subjective. It is influenced by many factors such as experience, age, background, etc. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax this problem by assuming that scene images are mutually exclusive; therefore, they focus on developing different approaches to perform the binary classification tasks. In this paper, we show that scene images are nonmutually exclusive and propose the fuzzy qualitative rank classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of binary decision. Evaluations in terms of qualitative and quantitative measurements using large numbers and challenging public scene datasets have shown the effectiveness of our proposed method in modeling the nonmutually exclusive scene images.
In what constitutes the closest work to ours in the fuzzy domain, Lim and Chan @cite_3 proposed a fuzzy qualitative framework, and Cho and Chang @cite_17 employed simple fuzzy logic with two monocular images to understand scene images. However, their work suffered from: 1) the difficulty of finding an appropriate resolution to build the 4-tuple membership function - the model parameters are chosen manually, based on prior information and in a trial-and-error manner, which is tedious and time consuming; 2) being able to accommodate only two feature vectors as input data; 3) an undefined ranking; and finally 4) testing on a very limited and easy dataset (a dataset that contains only 2 scene images).
{ "cite_N": [ "@cite_3", "@cite_17" ], "mid": [ "2037974855", "2067691492" ], "abstract": [ "Scene classification has been studied extensively in the recent past. Most of the state-of-the-art solutions assumed that scene classes are mutually exclusive. However, this is not true as a scene image may belongs to multiple classes and different people are tend to respond inconsistently even given a same scene image. In this paper, we propose a fuzzy qualitative approach to address this problem. That is, we first adopted the fuzzy quantity space to model the training data. Secondly, we present a novel weight function, w to train a fuzzy qualitative scene model in the fuzzy qualitative states. Finally, we introduce fuzzy qualitative partition to perform the scene classification. Empirical results using a standard dataset and a comparison with K-nearest neighbour has shown the effectiveness and robustness of the proposed method.", "This paper proposes a two-stage scene analysis scheme using a combined fuzzy logic-based technique. The first stage begins with generating fuzzy rules to describe the scene. Based on these fuzzy classification rules, each image pixel is inferred and then classified to the natural object category with the largest membership degree. The second stage involves a newly derived fuzzy K-nearest neighbor algorithm that further refines the classification result obtained. With this second stage, the proposed system is robust because it is demonstrated to be insensitive to the variations of membership functions and image noise contamination. Simulations of real world images have shown that the proposed scheme is very successful and the results are visually confirmed by human observation. The satisfactory results achieved in this paper suggest the feasibility of developing similar systems for other types of images aiming at image description problems." ] }
1410.3726
2121126594
Ambiguity or uncertainty is a pervasive element of many real-world decision-making processes. Variation in decisions is a norm in this situation when the same problem is posed to different subjects. Psychological and metaphysical research has proven that decision making by humans is subjective. It is influenced by many factors such as experience, age, background, etc. Scene understanding is one of the computer vision problems that fall into this category. Conventional methods relax this problem by assuming that scene images are mutually exclusive; therefore, they focus on developing different approaches to perform the binary classification tasks. In this paper, we show that scene images are nonmutually exclusive and propose the fuzzy qualitative rank classifier (FQRC) to tackle the aforementioned problems. The proposed FQRC provides a ranking interpretation instead of binary decision. Evaluations in terms of qualitative and quantitative measurements using large numbers and challenging public scene datasets have shown the effectiveness of our proposed method in modeling the nonmutually exclusive scene images.
In this paper, we extend the work of @cite_3 by learning the 4-tuple membership function from the training data, using a histogram representation. This relaxes the difficulty of obtaining multi-label training data compared to @cite_7 @cite_27 , whose training steps require humans to manually annotate the multi-label training data. That is a daunting task, as human decisions are subjective and a huge number of participants is needed. Besides that, a ranking method to describe the relationship of an image to each scene class is introduced. In scene understanding, in particular where we model scene images as non-mutually exclusive, the idea of an inference engine with a ranking interpretation is new and largely unexplored.
{ "cite_N": [ "@cite_27", "@cite_7", "@cite_3" ], "mid": [ "2052684427", "2156935079", "2037974855" ], "abstract": [ "Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named ML-KNN is presented, which is derived from the traditional K-nearest neighbor (KNN) algorithm. In detail, for each unseen instance, its K nearest neighbors in the training set are firstly identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that ML-KNN achieves superior performance to some well-established multi-label learning algorithms.", "In classic pattern recognition problems, classes are mutually exclusive by definition. Classification errors occur when the classes overlap in the feature space. We examine a different situation, occurring when the classes are, by definition, not mutually exclusive. Such problems arise in semantic scene and document classification and in medical diagnosis. We present a framework to handle such problems and apply it to the problem of semantic scene classification, where a natural scene may contain multiple objects such that the scene can be described by multiple class labels (e.g., a field scene with a mountain in the background). 
Such a problem poses challenges to the classic pattern recognition paradigm and demands a different treatment. We discuss approaches for training and testing in this scenario and introduce new metrics for evaluating individual examples, class recall and precision, and overall accuracy. Experiments show that our methods are suitable for scene classification; furthermore, our work appears to generalize to other classification problems of the same nature.", "Scene classification has been studied extensively in the recent past. Most of the state-of-the-art solutions assumed that scene classes are mutually exclusive. However, this is not true as a scene image may belongs to multiple classes and different people are tend to respond inconsistently even given a same scene image. In this paper, we propose a fuzzy qualitative approach to address this problem. That is, we first adopted the fuzzy quantity space to model the training data. Secondly, we present a novel weight function, w to train a fuzzy qualitative scene model in the fuzzy qualitative states. Finally, we introduce fuzzy qualitative partition to perform the scene classification. Empirical results using a standard dataset and a comparison with K-nearest neighbour has shown the effectiveness and robustness of the proposed method." ] }
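The idea of learning a 4-tuple (trapezoidal) membership function from a histogram of training feature values, and then ranking scene classes by membership degree instead of forcing a binary decision, can be sketched as follows. The percentile-based choice of the core interval is our illustrative assumption, not the paper's exact fitting procedure:

```python
import numpy as np

def fit_trapezoid(samples, core=0.5):
    """Learn a 4-tuple (a, b, c, d): support from the sample range,
    core interval [b, c] covering the central `core` fraction of mass."""
    a, d = float(np.min(samples)), float(np.max(samples))
    b = float(np.percentile(samples, 50 * (1 - core)))
    c = float(np.percentile(samples, 100 - 50 * (1 - core)))
    return a, b, c, d

def membership(x, a, b, c, d):
    """Trapezoidal membership degree of x in [0, 1]."""
    if x < a or x > d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a) if b > a else 1.0
    return (d - x) / (d - c) if d > c else 1.0

def rank_classes(x, models):
    """models: {class_name: (a, b, c, d)}; classes sorted by membership."""
    mu = {name: membership(x, *tup) for name, tup in models.items()}
    return sorted(mu, key=mu.get, reverse=True)
```

The ranking output mirrors the non-mutually exclusive reading of a scene: a feature value lying in the overlap of two trapezoids yields a graded order such as Coast before Mountain, rather than a hard single-class label.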
1410.2632
1535970326
The Agent Conversation Reasoning Engine (ACRE) is intended to aid agent developers to improve the management and reliability of agent communication. To evaluate its effectiveness, a problem scenario was created that could be used to compare code written with and without the use of ACRE by groups of test subjects.
Agent toolkits with support for conversations include COOL @cite_8 , Jackal @cite_13 and KaOS @cite_9 . Other than FSMs, alternative representations for protocols include Coloured Petri Nets @cite_10 and Global Session Types @cite_6 .
{ "cite_N": [ "@cite_13", "@cite_8", "@cite_9", "@cite_6", "@cite_10" ], "mid": [ "1539133482", "65045497", "1527923609", "189296993", "1510045852" ], "abstract": [ "Jackal is a Java-based tool for communicating with the KQML agent communication language. Some features which make it extremely valuable to agent development are its conversation management facilities, flexible, blackboard style interface and ease of integration. Jackal has been developed in support of an investigation of the use of agents in shop floor information flow. This paper describes Jackal at a surface and design level, and presents an example of its use in agent construction.", "Agent interaction takes place at several levels. Current work in the ARPA Knowledge Sharing Effort has addressed the information content level by the KIF language and the intentional level by the KQML language. In this paper we address the coordination level by means of our Coordination Language (COOL) that relies on speech act based communication, but integrates it in a structured conversation framework that captures the coordination mechanisms agents use when working together. We are currently using this language (i) to represent coordination mechanisms for the supply chain of manufacturing enterprises modeled as intelligent agents and (ii) as an environment for designing and validating coordination protocols for multi-agent systems. This paper describes the basic elements of this language: conversation objects, conversation rules, error recovery rules, continuation rules, conversation nesting. The actual COOL source code and a running trace for the n-queens problem are presented in the Appendix. Topic areas: Coordination, Intelligent agents in enterprise integration", "", "Global session types are behavioral types designed for specifying in a compact way multiparty interactions between distributed components, and verifying their correctness.
We take advantage of the fact that global session types can be naturally represented as cyclic Prolog terms - which are directly supported by the Jason implementation of AgentSpeak - to allow simple automatic generation of self-monitoring MASs: given a global session type specifying an interaction protocol, and the implementation of a MAS where agents are expected to be compliant with it, we define a procedure for automatically deriving a self-monitoring MAS. Such a generated MAS ensures that agents conform to the protocol at run-time, by adding a monitor agent that checks that the ongoing conversation is correct w.r.t. the global session type.", "Several methodologies are supplied to multiagent system designers to help them define their agents and their multiagent systems. These methodologies focus mainly on agents and on multiagent systems and barely consider how to design interaction protocols. A problem can emerge from this lack, since interaction protocols are increasingly complex. The aim of this article is to present our proposal of interaction protocol engineering, which is based on communication protocol engineering [10]. Interaction protocol engineering allows designers to define protocols from scratch. Our proposal is composed of five stages: analysis, formal description, validation, protocol synthesis and conformance testing." ] }
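The FSM-based protocol representation discussed in the related-work passage above can be illustrated with a minimal conversation checker: a conversation is valid if its sequence of performatives drives the machine from the start state to a final state. The states and performatives below are invented for illustration and are not taken from ACRE, COOL, or Jackal.

```python
def make_protocol(transitions, start, finals):
    """Return a checker for conversations against an FSM protocol.

    transitions: dict mapping (state, performative) -> next state.
    A message sequence is accepted only if every step is a legal
    transition and the conversation ends in a final state.
    """
    def check(messages):
        state = start
        for performative in messages:
            key = (state, performative)
            if key not in transitions:
                return False  # message out of protocol order
            state = transitions[key]
        return state in finals
    return check

# a toy request protocol: request -> (agree -> inform | refuse)
check = make_protocol(
    transitions={
        ("start", "request"): "requested",
        ("requested", "agree"): "agreed",
        ("requested", "refuse"): "done",
        ("agreed", "inform"): "done",
    },
    start="start",
    finals={"done"},
)
assert check(["request", "agree", "inform"])
assert check(["request", "refuse"])
assert not check(["request", "inform"])  # skips the agree step
```

A monitor built this way can flag both out-of-order messages and conversations abandoned before reaching a final state, which is the kind of run-time conformance checking the session-type and ACRE work automates.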
1410.2632
1535970326
The Agent Conversation Reasoning Engine (ACRE) is intended to aid agent developers to improve the management and reliability of agent communication. To evaluate its effectiveness, a problem scenario was created that could be used to compare code written with and without the use of ACRE by groups of test subjects.
The comparative evaluation of programming toolkits, paradigms and languages is a matter of some debate within the software engineering community. One popular approach is to divide subjects into two groups, with each asked to perform the same task @cite_11 @cite_2 @cite_7 . To the greatest extent possible, objective quantitative measures are used to draw comparisons between the two groups. A common concept to evaluate is that of developer effort, which has been measured in numerous different ways, including development time @cite_11 @cite_2 , non-comment line count @cite_2 and non-commented source code statements @cite_4 . These measures are used to ensure that a new approach does not result in a greater workload being placed on the developers using it.
{ "cite_N": [ "@cite_4", "@cite_2", "@cite_7", "@cite_11" ], "mid": [ "2148679215", "", "2101545021", "2127010341" ], "abstract": [ "Several parallel programming languages, libraries and environments have been developed to ease the task of writing programs for multiprocessors. Proponents of each approach often point out various language features that are designed to provide the programmer with a simple programming interface. However, virtually no data exists that quantitatively evaluates the relative ease of use of different parallel programming languages. The paper borrows techniques from the software engineering field to quantify the complexity of three predominate programming models: shared memory, message passing and High-Performance Fortran. It is concluded that traditional software complexity metrics are effective indicators of the relative complexity of parallel programming languages. The impact of complexity on run-time performance is also discussed in the context of message passing versus HPF on an IBM SP2.", "", "Chip multi-processors (CMPs) have become ubiquitous, while tools that ease concurrent programming have not. The promise of increased performance for all applications through ever more parallel hardware requires good tools for concurrent programming, especially for average programmers. Transactional memory (TM) has enjoyed recent interest as a tool that can help programmers program concurrently. The transactional memory (TM) research community is heavily invested in the claim that programming with transactional memory is easier than alternatives (like locks), but evidence for or against the veracity of this claim is scant. In this paper, we describe a user-study in which 237 undergraduate students in an operating systems course implement the same programs using coarse and fine-grain locks, monitors, and transactions. 
We surveyed the students after the assignment, and examined their code to determine the types and frequency of programming errors for each synchronization technique. Inexperienced programmers found baroque syntax a barrier to entry for transactional programming. On average, subjective evaluation showed that students found transactions harder to use than coarse-grain locks, but slightly easier to use than fine-grained locks. Detailed examination of synchronization errors in the students' code tells a rather different story. Overwhelmingly, the number and types of programming errors the students made was much lower for transactions than for locks. On a similar programming problem, over 70% of students made errors with fine-grained locking, while less than 10% made errors with transactions.", "Context: Writing software for the current generation of parallel systems requires significant programmer effort, and the community is seeking alternatives that reduce effort while still achieving good performance. Objective: Measure the effect of parallel programming models (message-passing vs. PRAM-like) on programmer effort. Design, setting, and subjects: One group of subjects implemented sparse-matrix dense-vector multiplication using message-passing (MPI), and a second group solved the same problem using a PRAM-like model (XMTC). The subjects were students in two graduate-level classes: one class was taught MPI and the other was taught XMTC. Main outcome measures: Development time, program correctness. Results: Mean XMTC development time was 4.8h less than mean MPI development time (95% confidence interval, 2.0-7.7), a 46% reduction. XMTC programs were more likely to be correct, but the difference in correctness rates was not statistically significant (p=.16). Conclusions: XMTC solutions for this particular problem required less effort than MPI equivalents, but further studies are necessary which examine different types of problems and different levels of programmer experience."
] }
1410.3060
1506424797
The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemp...
The importance of stencil computations and the inefficient performance of their naïve implementations on modern processors motivate researchers to study them extensively. The optimizations required to achieve the desired performance depend on the properties of the stencil operator and the capabilities of different resources in the processor. This case is made by Datta @cite_22 , where the performance of several combinations of optimization techniques, processors, and stencil operators is reported.
{ "cite_N": [ "@cite_22" ], "mid": [ "2148038801" ], "abstract": [ "As clock frequencies have tapered off and the number of cores on a chip has taken off, the challenge of effectively utilizing these multicore systems has become increasingly important. However, the diversity of multicore machines in today's market compels us to individually tune for each platform. This is especially true for problems with low computational intensity, since the improvements in memory latency and bandwidth are much slower than those of computational rates. One such kernel is a stencil, a regular nearest neighbor operation over the points in a structured grid. Stencils often arise from solving partial differential equations, which are found in almost every scientific discipline. In this thesis, we analyze three common three-dimensional stencils: the 7-point stencil, the 27-point stencil, and the Gauss-Seidel Red-Black Helmholtz kernel. We examine the performance of these stencil codes over a spectrum of multicore architectures, including the Intel Clovertown, Intel Nehalem, AMD Barcelona, the highly-multithreaded Sun Victoria Falls, and the low power IBM Blue Gene P. These platforms not only have significant variations in their core architectures, but also exhibit a 32× range in available hardware threads, a 4.5× range in attained DRAM bandwidth, and a 6.3× range in peak flop rates. Clearly, designing optimal code for such a diverse set of platforms represents a serious challenge. Unfortunately, compilers alone do not achieve satisfactory stencil code performance on this varied set of platforms. Instead, we have created an automatic stencil code tuner, or auto-tuner, that incorporates several optimizations into a single software framework. These optimizations hide memory latency, account for non-uniform memory access times, reduce the volume of data transferred, and take advantage of special instructions. 
The auto-tuner then searches over the space of optimizations, thereby allowing for much greater productivity than hand-tuning. The fully auto-tuned code runs up to 5.4× faster than a straightforward implementation and is more scalable across cores. By using performance models to identify performance limits, we determined that our auto-tuner can achieve over 95% of the attainable performance for all three stencils in our study. This demonstrates that auto-tuning is an important technique for fully exploiting available multicore resources." ] }
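The 7-point stencil analyzed in the thesis abstract above is, in its naive form, a very short kernel. This toy Python sweep (grid size and the simple averaging coefficients are illustrative assumptions) makes its low arithmetic intensity concrete: each update reads seven neighboring values and performs only a handful of flops, which is why the kernel is memory-bound.

```python
def jacobi_7pt(grid, n):
    """One naive 7-point Jacobi sweep over the interior of an n*n*n grid.

    `grid` is a flat list of n**3 floats; returns a new flat list.
    Each interior point becomes the average of itself and its six
    axis-aligned neighbors -- 7 loads feed roughly 7 flops per update.
    """
    idx = lambda i, j, k: (i * n + j) * n + k
    out = list(grid)  # boundary values are carried through unchanged
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            for k in range(1, n - 1):
                out[idx(i, j, k)] = (grid[idx(i, j, k)]
                                     + grid[idx(i - 1, j, k)] + grid[idx(i + 1, j, k)]
                                     + grid[idx(i, j - 1, k)] + grid[idx(i, j + 1, k)]
                                     + grid[idx(i, j, k - 1)] + grid[idx(i, j, k + 1)]) / 7.0
    return out

# a uniform field is a fixed point of the averaging stencil
n = 4
g = [1.0] * n ** 3
assert jacobi_7pt(g, n) == g
```

Blocking and auto-tuning schemes such as those surveyed above do not change this arithmetic; they reorder the sweeps so that most of the seven loads hit cache instead of main memory.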
1410.3060
1506424797
The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemp...
Temporal blocking techniques require careful handling of data dependencies across space iterations to ensure correctness. Several tiling techniques are proposed in the literature, including parallelepiped, split, overlapped, diamond, and hexagonal tiling. These block shapes optimize for data locality, concurrency, or both. Reviews of these techniques can be found in Orozco @cite_21 and Zhou @cite_29 . We believe that diamond tiling is promising for efficiently providing both concurrency and data locality over the problems and computer architectures of our interest. Its attractiveness in recent years is evident in Orozco @cite_21 , Zhou @cite_29 , Strzodka @cite_32 , Bandishti @cite_25 , and Grosser @cite_19 , where a GPU implementation of hexagonal tiling is proposed, followed by a study of hexagonal and diamond tiling @cite_6 .
{ "cite_N": [ "@cite_29", "@cite_21", "@cite_32", "@cite_6", "@cite_19", "@cite_25" ], "mid": [ "2336353249", "1529356657", "2097757554", "2166622045", "1973532523", "2070961300" ], "abstract": [ "", "This paper proposes tiling techniques based on data dependencies and not in code structure. The work presented here leverages and expands previous work by the authors in the domain of non traditional tiling for parallel applications. The main contributions of this paper are: (1) A formal description of tiling from the point of view of the data produced and not from the source code. (2) A mathematical proof for an optimum tiling in terms of maximum reuse for stencil applications, addressing the disparity between computation power and memory bandwidth for many-core architectures. (3) A description and implementation of our tiling technique for well known stencil applications. (4) Experimental evidence that confirms the effectiveness of the tiling proposed to alleviate the disparity between computation power and memory bandwidth for many-core architectures. Our experiments, performed using one of the first Cyclops-64 many-core chips produced, confirm the effectiveness of our approach to reduce the total number of memory operations of stencil applications as well as the running time of the application.", "We present a time skewing algorithm that breaks the memory wall for certain iterative stencil computations. A stencil computation, even with constant weights, is a completely memory-bound algorithm. For example, for a large 3D domain of @math doubles and 100 iterations on a quad-core Xeon X5482 3.2GHz system, a hand-vectorized and parallelized naive 7-point stencil implementation achieves only 1.4 GFLOPS because the system memory bandwidth limits the performance. Although many efforts have been undertaken to improve the performance of such nested loops, for large data sets they still lag far behind synthetic benchmark performance. 
The state-of-the-art automatic locality optimizer PluTo achieves 3.7 GFLOPS for the above stencil, whereas a parallel benchmark executing the inner stencil computation directly on registers performs at 25.1 GFLOPS. In comparison, our algorithm achieves 13.0 GFLOPS (52% of the stencil peak benchmark). We present results for 2D and 3D domains in double precision including problems with gigabyte large data sets. The results are compared against hand-optimized naive schemes, PluTo, the stencil peak benchmark and results from literature. For constant stencils of slope one we break the dependence on the low system bandwidth and achieve at least 50% of the stencil peak, thus performing within a factor two of an ideal system with infinite bandwidth (the benchmark runs on registers without memory access). For large stencils and banded matrices the additional data transfers let the limitations of the system bandwidth come into play again, however, our algorithm still gains a large improvement over the other schemes.", "Iterative stencil computations are important in scientific computing and more and more also in the embedded and mobile domain. Recent publications have shown that tiling schemes that ensure concurrent start provide efficient ways to execute these kernels. Diamond tiling and hybrid-hexagonal tiling are two successful tiling schemes that enable concurrent start. Both have different advantages: diamond tiling is integrated in a general purpose optimization framework and uses a cost function to choose among tiling hyperplanes, whereas the more flexible tile sizes of hybrid-hexagonal tiling have proven to be effective for the generation of GPU code. We show that these two approaches are even more interesting when combined. We revisit the formalization of diamond and hexagonal tiling, present the effects of tile size and wavefront choices on tile-level parallelism, and formulate constraints for optimal diamond tile shapes.
We then extend the diamond tiling formulation into a hexagonal tiling one, combining the benefits of both. The paper closes with an outlook of hexagonal tiling in higher dimensional spaces, an important generalization suitable for massively parallel architectures.", "Time-tiling is necessary for the efficient execution of iterative stencil computations. Classical hyper-rectangular tiles cannot be used due to the combination of backward and forward dependences along space dimensions. Existing techniques trade temporal data reuse for inefficiencies in other areas, such as load imbalance, redundant computations, or increased control flow overhead, therefore making it challenging for use with GPUs. We propose a time-tiling method for iterative stencil computations on GPUs. Our method does not involve redundant computations. It favors coalesced global-memory accesses, data reuse in local shared-memory or cache, avoidance of thread divergence, and concurrency, combining hexagonal tile shapes along the time and one spatial dimension with classical tiling along the other spatial dimensions. Hexagonal tiles expose multi-level parallelism as well as data reuse. Experimental results demonstrate significant performance improvements over existing stencil compilers.", "Most stencil computations allow tile-wise concurrent start, i.e., there always exists a face of the iteration space and a set of tiling hyperplanes such that all tiles along that face can be started concurrently. This provides load balance and maximizes parallelism. However, existing automatic tiling frameworks often choose hyperplanes that lead to pipelined start-up and load imbalance. We address this issue with a new tiling technique that ensures concurrent start-up as well as perfect load-balance whenever possible. We first provide necessary and sufficient conditions on tiling hyperplanes to enable concurrent start for programs with affine data accesses. We then provide an approach to find such hyperplanes.
Experimental evaluation on a 12-core Intel Westmere shows that our code is able to outperform a tuned domain-specific stencil code generator by 4% to 27%, and previous compiler techniques by a factor of 2x to 10.14x." ] }
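A minimal 1-D sketch of diamond tiling with concurrent start, in the spirit of the tiling papers above: space-time points are grouped into diamonds via the skewed basis u = x + t, v = x - t, and every cross-tile dependence strictly decreases the wavefront level P - Q, so all tiles on one level are mutually independent. The stencil, grid size, and tile width are illustrative assumptions; the tiled schedule is checked bit-for-bit against a naive sweep.

```python
def naive_sweep(u0, T):
    """Reference: T time levels of a 3-point averaging stencil, fixed ends."""
    hist = [list(u0)]
    for t in range(1, T):
        u = hist[-1]
        nu = list(u)  # boundary points carried over unchanged
        for x in range(1, len(u) - 1):
            nu[x] = (u[x - 1] + u[x] + u[x + 1]) / 3.0
        hist.append(nu)
    return hist

def diamond_sweep(u0, T, w=3):
    """Same computation, executed tile by tile with diamond tiling."""
    N = len(u0)
    val = {(0, x): u0[x] for x in range(N)}
    # group space-time points into diamonds: tile id = (u // w, v // w)
    # in the skewed basis u = x + t, v = x - t
    tiles = {}
    for t in range(1, T):
        for x in range(N):
            tiles.setdefault(((x + t) // w, (x - t) // w), []).append((t, x))
    # wavefront schedule: dependences of (t, x) have u' <= u and v' >= v,
    # so cross-tile dependences strictly decrease P - Q; tiles sharing a
    # P - Q level are independent and could start concurrently
    for P, Q in sorted(tiles, key=lambda pq: (pq[0] - pq[1], pq[0])):
        for t, x in sorted(tiles[(P, Q)]):  # time order inside the tile
            if x == 0 or x == N - 1:
                val[(t, x)] = val[(t - 1, x)]
            else:
                val[(t, x)] = (val[(t - 1, x - 1)] + val[(t - 1, x)]
                               + val[(t - 1, x + 1)]) / 3.0
    return [[val[(t, x)] for x in range(N)] for t in range(T)]

u0 = [float(i % 5) for i in range(12)]
assert diamond_sweep(u0, 8) == naive_sweep(u0, 8)
```

Because each addition is performed in the same order as the naive sweep, the results match exactly; in a real implementation each tile would also be sized so its working set fits in cache.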
1410.3060
1506424797
The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemp...
The wavefront technique, introduced by Lamport @cite_34 (under the name ``hyperplane''), performs temporal blocking at adjacent grid points. This technique has been combined with other tiling approaches using single-threaded wavefront temporal blocking, as in Strzodka @cite_32 , Wonnacott @cite_11 , and Nguyen @cite_27 , and using multi-threaded wavefront temporal blocking, as in Wellein @cite_14 .
{ "cite_N": [ "@cite_14", "@cite_32", "@cite_27", "@cite_34", "@cite_11" ], "mid": [ "2150319905", "2097757554", "2039378765", "2164890169", "2160106616" ], "abstract": [ "We present a pipelined wavefront parallelization approach for stencil-based computations. Within a fixed spatial domain successive wavefronts are executed by threads scheduled to a multicore processor chip with a shared outer level cache. By re-using data from cache in the successive wavefronts this multicore-aware parallelization strategy employs temporal blocking in a simple and efficient way. We use the Jacobi algorithm in three dimensions as a prototype for stencil-based computations and prove the efficiency of our approach on the latest generations of Intel's x86 quad- and hexa-core processors.", "We present a time skewing algorithm that breaks the memory wall for certain iterative stencil computations. A stencil computation, even with constant weights, is a completely memory-bound algorithm. For example, for a large 3D domain of @math doubles and 100 iterations on a quad-core Xeon X5482 3.2GHz system, a hand-vectorized and parallelized naive 7-point stencil implementation achieves only 1.4 GFLOPS because the system memory bandwidth limits the performance. Although many efforts have been undertaken to improve the performance of such nested loops, for large data sets they still lag far behind synthetic benchmark performance. The state-of-the-art automatic locality optimizer PluTo achieves 3.7 GFLOPS for the above stencil, whereas a parallel benchmark executing the inner stencil computation directly on registers performs at 25.1 GFLOPS. In comparison, our algorithm achieves 13.0 GFLOPS (52% of the stencil peak benchmark). We present results for 2D and 3D domains in double precision including problems with gigabyte large data sets. The results are compared against hand-optimized naive schemes, PluTo, the stencil peak benchmark and results from literature.
For constant stencils of slope one we break the dependence on the low system bandwidth and achieve at least 50% of the stencil peak, thus performing within a factor two of an ideal system with infinite bandwidth (the benchmark runs on registers without memory access). For large stencils and banded matrices the additional data transfers let the limitations of the system bandwidth come into play again, however, our algorithm still gains a large improvement over the other schemes.", "Stencil computation sweeps over a spatial grid over multiple time steps to perform nearest-neighbor computations. The bandwidth-to-compute requirement for a large class of stencil kernels is very high, and their performance is bound by the available memory bandwidth. Since memory bandwidth grows slower than compute, the performance of stencil kernels will not scale with increasing compute density. We present a novel 3.5D-blocking algorithm that performs 2.5D-spatial and temporal blocking of the input grid into on-chip memory for both CPUs and GPUs. The resultant algorithm is amenable to both thread-level and data-level parallelism, and scales near-linearly with the SIMD width and multiple-cores. Our performance numbers are faster or comparable to state-of-the-art stencil implementations on CPUs and GPUs. Our implementation of 7-point-stencil is 1.5X faster on CPUs, and 1.8X faster on GPUs for single-precision floating point inputs than previously reported numbers. For Lattice Boltzmann methods, the corresponding speedup number on CPUs is 2.1X.", "Methods are developed for the parallel execution of different iterations of a DO loop. Both asynchronous multiprocessor computers and array computers are considered.
Practical application to the design of compilers for such computers is discussed.", "Time skewing is a compile-time optimization that can provide arbitrarily high cache hit rates for a class of iterative calculations, given a sufficient number of time steps and sufficient cache memory. Thus, it can eliminate processor idle time caused by inadequate main memory bandwidth. In this article, we give a generalization of time skewing for multiprocessor architectures, and discuss time skewing for multilevel caches. Our generalization for multiprocessors lets us eliminate processor idle time caused by any combination of inadequate main memory bandwidth, limited network bandwidth, and high network latency, given a sufficiently large problem and sufficient cache. As in the uniprocessor case, the cache requirement grows with the machine balance rather than the problem size. Our techniques for using multilevel caches reduce the LI cache requirement, which would otherwise be unacceptably high for some architectures when using arrays of high dimension." ] }
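Lamport's hyperplane observation from the last abstract above can be demonstrated on a 2-D sweep with west/north dependences: every point on an anti-diagonal i + j = d depends only on the previous diagonal, so each diagonal's points are mutually independent and could be updated in parallel. The recurrence below is an invented toy example, not taken from the paper.

```python
def sweep_rowmajor(n):
    """Reference sequential order for a[i][j] = a[i-1][j] + a[i][j-1]."""
    a = [[1] * n for _ in range(n)]
    for i in range(1, n):
        for j in range(1, n):
            a[i][j] = a[i - 1][j] + a[i][j - 1]
    return a

def sweep_hyperplane(n):
    """Same recurrence, visited along Lamport's hyperplanes i + j = d."""
    a = [[1] * n for _ in range(n)]
    for d in range(2, 2 * n - 1):
        # all (i, j) with i + j == d read only values from diagonal d - 1,
        # so this inner loop is safe to execute in parallel
        for i in range(max(1, d - n + 1), min(n, d)):
            a[i][d - i] = a[i - 1][d - i] + a[i][d - i - 1]
    return a

assert sweep_hyperplane(6) == sweep_rowmajor(6)
```

The multicore wavefront schemes cited in this record pipeline several such hyperplanes (at successive time steps) through a shared cache, which is where the temporal blocking comes from.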
1410.3060
1506424797
The importance of stencil-based algorithms in computational science has focused attention on optimized parallel implementations for multilevel cache-based processors. Temporal blocking schemes leverage the large bandwidth and low latency of caches to accelerate stencil updates and approach theoretical peak performance. A key ingredient is the reduction of data traffic across slow data paths, especially the main memory interface. In this work we combine the ideas of multicore wavefront temporal blocking and diamond tiling to arrive at stencil update schemes that show large reductions in memory pressure compared to existing approaches. The resulting schemes show performance advantages in bandwidth-starved situations, which are exacerbated by the high bytes per lattice update case of variable coefficients. Our thread groups concept provides a controllable trade-off between concurrency and memory usage, shifting the pressure between the memory interface and the CPU. We present performance results on a contemp...
Several frameworks have been developed to produce optimized stencil codes. PLUTO @cite_20 is a source-to-source transformation tool that uses the polyhedral model, CATS @cite_32 is a library, Pochoir @cite_13 uses cache-oblivious algorithms in a Domain Specific Language (DSL), PATUS @cite_33 combines auto-tuning with a DSL, and Henretty @cite_28 develops a DSL that uses split-tiling. Unat @cite_30 introduced Mint, a programming model that produces highly optimized GPU code from a user's annotated traditional C code. Physis, a DSL that generates optimized GPU codes with the necessary MPI calls for heterogeneous GPU clusters, was proposed by Maruyama @cite_16 . A recent review of stencil optimization tools that use the polyhedral model has been prepared by Wonnacott @cite_4 .
{ "cite_N": [ "@cite_30", "@cite_4", "@cite_33", "@cite_28", "@cite_32", "@cite_16", "@cite_13", "@cite_20" ], "mid": [ "2028798345", "", "2104512032", "2151764765", "2097757554", "2074833026", "1979457157", "2034761517" ], "abstract": [ "We present Mint, a programming model that enables the non-expert to enjoy the performance benefits of hand coded CUDA without becoming entangled in the details. Mint targets stencil methods, which are an important class of scientific applications. We have implemented the Mint programming model with a source-to-source translator that generates optimized CUDA C from traditional C source. The translator relies on annotations to guide translation at a high level. The set of pragmas is small, and the model is compact and simple. Yet, Mint is able to deliver performance competitive with painstakingly hand-optimized CUDA. We show that, for a set of widely used stencil kernels, Mint realized 80 of the performance obtained from aggressively optimized CUDA on the 200 series NVIDIA GPUs. Our optimizations target three dimensional kernels, which present a daunting array of optimizations.", "", "Stencil calculations comprise an important class of kernels in many scientific computing applications ranging from simple PDE solvers to constituent kernels in multigrid methods as well as image processing applications. In such types of solvers, stencil kernels are often the dominant part of the computation, and an efficient parallel implementation of the kernel is therefore crucial in order to reduce the time to solution. However, in the current complex hardware micro architectures, meticulous architecture-specific tuning is required to elicit the machine's full compute power. 
We present a code generation and auto-tuning framework for stencil computations targeted at multi- and many core processors, such as multicore CPUs and graphics processing units, which makes it possible to generate compute kernels from a specification of the stencil operation and a parallelization and optimization strategy, and leverages the auto tuning methodology to optimize strategy-dependent parameters for the given hardware architecture.", "Stencil computations are an integral component of applications in a number of scientific computing domains. Short-vector SIMD instruction sets are ubiquitous on modern processors and can be used to significantly increase the performance of stencil computations. Traditional approaches to optimizing stencils on these platforms have focused on either short-vector SIMD or data locality optimizations. In this paper, we propose a domain specific language and compiler for stencil computations that allows specification of stencils in a concise manner and automates both locality and short-vector SIMD optimizations, along with effective utilization of multi-core parallelism. Loop transformations to enhance data locality and enable load-balanced parallelism are combined with a data layout transformation to effectively increase the performance of stencil computations. Performance increases are demonstrated for a number of stencils on several modern SIMD architectures.", "We present a time skewing algorithm that breaks the memory wall for certain iterative stencil computations. A stencil computation, even with constant weights, is a completely memory-bound algorithm. For example, for a large 3D domain of @math doubles and 100 iterations on a quad-core Xeon X5482 3.2GHz system, a hand-vectorized and parallelized naive 7-point stencil implementation achieves only 1.4 GFLOPS because the system memory bandwidth limits the performance. 
Although many efforts have been undertaken to improve the performance of such nested loops, for large data sets they still lag far behind synthetic benchmark performance. The state-of-art automatic locality optimizer PluTo achieves 3.7 GFLOPS for the above stencil, whereas a parallel benchmark executing the inner stencil computation directly on registers performs at 25.1 GFLOPS. In comparison, our algorithm achieves 13.0 GFLOPS (52 of the stencil peak benchmark).We present results for 2D and 3D domains in double precision including problems with gigabyte large data sets. The results are compared against hand-optimized naive schemes, PluTo, the stencil peak benchmark and results from literature. For constant stencils of slope one we break the dependence on the low system bandwidth and achieve at least 50 of the stencil peak, thus performing within a factor two of an ideal system with infinite bandwidth (the benchmark runs on registers without memory access). For large stencils and banded matrices the additional data transfers let the limitations of the system bandwidth come into play again, however, our algorithm still gains a large improvement over the other schemes.", "This paper proposes a compiler-based programming framework that automatically translates user-written structured grid code into scalable parallel implementation code for GPU-equipped clusters. To enable such automatic translations, we design a small set of declarative constructs that allow the user to express stencil computations in a portable and implicitly parallel manner. Our framework translates the user-written code into actual implementation code in CUDA for GPU acceleration and MPI for node-level parallelization with automatic optimizations such as computation and communication overlapping. We demonstrate the feasibility of such automatic translations by implementing several structured grid applications in our framework. 
Experimental results on the TSUBAME2.0 GPU-based supercomputer show that the performance is comparable as hand-written code and good strong and weak scalability up to 256 GPUs.", "A stencil computation repeatedly updates each point of a d-dimensional grid as a function of itself and its near neighbors. Parallel cache-efficient stencil algorithms based on \"trapezoidal decompositions\" are known, but most programmers find them difficult to write. The Pochoir stencil compiler allows a programmer to write a simple specification of a stencil in a domain-specific stencil language embedded in C++ which the Pochoir compiler then translates into high-performing Cilk code that employs an efficient parallel cache-oblivious algorithm. Pochoir supports general d-dimensional stencils and handles both periodic and aperiodic boundary conditions in one unified algorithm. The Pochoir system provides a C++ template library that allows the user's stencil specification to be executed directly in C++ without the Pochoir compiler (albeit more slowly), which simplifies user debugging and greatly simplified the implementation of the Pochoir compiler itself. A host of stencil benchmarks run on a modern multicore machine demonstrates that Pochoir outperforms standard parallelloop implementations, typically running 2-10 times faster. The algorithm behind Pochoir improves on prior cache-efficient algorithms on multidimensional grids by making \"hyperspace\" cuts, which yield asymptotically more parallelism for the same cache efficiency.", "We present the design and implementation of an automatic polyhedral source-to-source transformation framework that can optimize regular programs (sequences of possibly imperfectly nested loops) for parallelism and locality simultaneously. Through this work, we show the practicality of analytical model-driven automatic transformation in the polyhedral model -- far beyond what is possible by current production compilers. 
Unlike previous works, our approach is an end-to-end fully automatic one driven by an integer linear optimization framework that takes an explicit view of finding good ways of tiling for parallelism and locality using affine transformations. The framework has been implemented into a tool to automatically generate OpenMP parallel code from C program sections. Experimental results from the tool show very high speedups for local and parallel execution on multi-cores over state-of-the-art compiler frameworks from the research community as well as the best native production compilers. The system also enables the easy use of powerful empirical iterative optimization for general arbitrarily nested loop sequences." ] }
1410.2792
1930966021
In this work, the author presents a method called Convex Model Predictive Control (CMPC) to control systems whose states are elements of the rotation matrices SO(n) for n = 2, 3. This is done without charts or any local linearization, and instead is performed by operating over the orbitope of rotation matrices. This results in a novel model predictive control (MPC) scheme without the drawbacks associated with conventional linearization techniques such as slow computation time and local minima. Of particular emphasis is the application to aeronautical and vehicular systems, wherein the method removes many of the trigonometric terms associated with these systems’ state space equations. Furthermore, the method is shown to be compatible with many existing variants of MPC, including obstacle avoidance via Mixed Integer Linear Programming (MILP).
An alternative paradigm lies in optimization, primarily based on a Model Predictive Control (MPC) framework, in which a model of the system is used to optimize a time-discretized trajectory over a finite horizon into the future. References @cite_18 and @cite_13 share many similarities with our work, adopting a Mixed Integer framework to allow for obstacles, and can be considered antecedents of this work. Where our work differs is in our use of the convex hull of @math , and @math to constrain the motion of the system, which had previously required an infinite number of Linear Program constraints and had thus only been approximated. Additionally, we develop the method further, incorporating integrator dynamics for our UAV and spacecraft examples. The work of @cite_23 also uses convex programming for model predictive control. In particular, a norm constraint on the control thrust is relaxed, with the relaxation shown to be tight. Going further in examining the Lie group structure of these systems, @cite_6 develops a functional approach to the problem that circumvents the need for time discretization and grapples with the manifold structure of the group directly.
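To illustrate the kind of constraint set involved, the convex hull of SO(2) has a particularly simple description: it is the set of matrices aI + bJ with a^2 + b^2 <= 1, where J is the 90-degree rotation. The following sketch (our own illustration, not code from the cited papers; the function name is an assumption) projects an arbitrary 2x2 matrix onto this set in the Frobenius norm, by keeping only the rotation-like component and shrinking it to the unit disk if needed:

```python
import numpy as np


def project_to_conv_so2(M):
    """Frobenius-norm projection of a 2x2 matrix onto conv(SO(2)).

    conv(SO(2)) = {a*I + b*J : a^2 + b^2 <= 1} with J = [[0,-1],[1,0]].
    The projection onto the subspace span{I, J} gives (a, b); clipping
    (a, b) to the unit disk then projects onto the convex hull itself.
    """
    a = (M[0, 0] + M[1, 1]) / 2.0  # coefficient of I
    b = (M[1, 0] - M[0, 1]) / 2.0  # coefficient of J
    r = np.hypot(a, b)
    if r > 1.0:
        a, b = a / r, b / r  # clip to the unit disk
    return np.array([[a, -b], [b, a]])
```

A rotation matrix is a fixed point of this map, and any positive multiple of a rotation projects back onto that rotation, which is the sense in which the convex hull replaces an infinite family of linear constraints.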
{ "cite_N": [ "@cite_18", "@cite_13", "@cite_6", "@cite_23" ], "mid": [ "2102488278", "2100512461", "101508493", "1974100855" ], "abstract": [ "Describes a method for finding optimal trajectories for multiple aircraft avoiding collisions. Developments in spacecraft path-planning have shown that trajectory optimization including collision avoidance can be written as a linear program subject to mixed integer constraints, known as a mixed-integer linear program (MILP). This can be solved using commercial software written for the operations research community. In the paper, an approximate model of aircraft dynamics using only linear constraints is developed, enabling the MILP approach to be applied to aircraft collision avoidance. The formulation can also be extended to include multiple waypoint path-planning, in which each vehicle is required to visit a set of points in an order chosen within the optimization.", "This paper presents a new approach to trajectory optimization for autonomous fixed-wing aerial vehicles performing large-scale maneuvers. The main result is a planner which designs nearly minimum time planar trajectories to a goal, constrained by no-fly zones and the vehicle's maximum speed and turning rate. Mixed-Integer Linear Programming (MILP) is used for the optimization, and is well suited to trajectory optimization because it can incorporate logical constraints, such as no-fly zone avoidance, and continuous constraints, such as aircraft dynamics. MILP is applied over a receding planning horizon to reduce the computational effort of the planner and to incorporate feedback. In this approach, MILP is used to plan short trajectories that extend towards the goal, but do not necessarily reach it. The cost function accounts for decisions beyond the planning horizon by estimating the time to reach the goal from the plan's end point. This time is estimated by searching a graph representation of the environment. 
This approach is shown to avoid entrapment behind obstacles, to yield near-optimal performance when comparison with the minimum arrival time found using a fixed horizon controller is possible, and to work consistently on large trajectory optimization problems that are intractable for the fixed horizon controller.", "1 Introduction and Overview.- 2 Configuration Space of a Rigid Object.- 3 Obstacles in Configuration Space.- 4 Roadmap Methods.- 5 Exact Cell Decomposition.- 6 Approximate Cell Decomposition.- 7 Potential Field Methods.- 8 Multiple Moving Objects.- 9 Kinematic Constraints.- 10 Dealing with Uncertainty.- 11 Movable Objects.- Prospects.- Appendix A Basic Mathematics.- Appendix B Computational Complexity.- Appendix C Graph Searching.- Appendix D Sweep-Line Algorithm.- References.", "Abstract In this paper we consider a class of optimal control problems that have continuous-time nonlinear dynamics and nonconvex control constraints. We propose a convex relaxation of the nonconvex control constraints, and prove that the optimal solution to the relaxed problem is the globally optimal solution to the original problem with nonconvex control constraints. This lossless convexification enables a computationally simpler problem to be solved instead of the original problem. We demonstrate the approach in simulation with a planetary soft landing problem involving a nonlinear gravity field." ] }
1410.2792
1930966021
In this work, the author presents a method called Convex Model Predictive Control (CMPC) to control systems whose states are elements of the rotation matrices SO(n) for n = 2, 3. This is done without charts or any local linearization, and instead is performed by operating over the orbitope of rotation matrices. This results in a novel model predictive control (MPC) scheme without the drawbacks associated with conventional linearization techniques such as slow computation time and local minima. Of particular emphasis is the application to aeronautical and vehicular systems, wherein the method removes many of the trigonometric terms associated with these systems’ state space equations. Furthermore, the method is shown to be compatible with many existing variants of MPC, including obstacle avoidance via Mixed Integer Linear Programming (MILP).
Research into the structure and application of the convex hulls of @math has increased recently, a body of literature that the current paper builds upon. In @cite_3 , the authors study the structure of convex bodies of Lie groups broadly. In @cite_4 , the structure of the convex hull of @math is studied in detail, yielding semidefinite descriptions of the convex hull for arbitrary @math ; these parameterizations are also used to solve Wahba's problem, a common estimation problem in aeronautical navigation. In computer vision, the authors of @cite_5 investigated convex relaxations of pose estimation problems, integrating convex penalties to create novel convex optimization problems. These techniques were also integrated with decentralization schemes in @cite_8 to create a distributed algorithm for consensus over @math .
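Wahba's problem, mentioned above, asks for the rotation that best aligns two sets of (optionally weighted) vectors. While @cite_4 solves it through a semidefinite description of the convex hull of the rotations, the classic SVD-based solution is short enough to sketch here for reference (a standard textbook method, not the cited paper's approach; the function name is ours):

```python
import numpy as np


def wahba_svd(body_vecs, ref_vecs, weights=None):
    """Solve Wahba's problem by the classic SVD method.

    Finds R in SO(3) minimising sum_i w_i * ||ref_i - R @ body_i||^2,
    given corresponding rows of body_vecs and ref_vecs (N x 3 arrays).
    """
    body_vecs = np.asarray(body_vecs, float)
    ref_vecs = np.asarray(ref_vecs, float)
    if weights is None:
        weights = np.ones(len(body_vecs))
    # Attitude profile matrix B = sum_i w_i * ref_i @ body_i^T.
    B = np.zeros((3, 3))
    for w, r, b in zip(weights, ref_vecs, body_vecs):
        B += w * np.outer(r, b)
    U, _, Vt = np.linalg.svd(B)
    # Force det(R) = +1 so the result is a proper rotation, not a reflection.
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    return U @ np.diag([1.0, 1.0, d]) @ Vt
```

The determinant correction in the last step is what keeps the solution on SO(3) rather than merely O(3), which is the same feasibility concern the convex-hull formulations address globally.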
{ "cite_N": [ "@cite_5", "@cite_4", "@cite_3", "@cite_8" ], "mid": [ "2080423978", "1488782662", "", "2950504271" ], "abstract": [ "This paper proposes a new method for rigid body pose estimation based on spectrahedral representations of the tautological orbitopes of SE(2) and SE(3). The approach can use dense point cloud data from stereo vision or an RGB-D sensor (such as the Microsoft Kinect), as well as visual appearance data as input. The method is a convex relaxation of the classical pose estimation problem, and is based on explicit linear matrix inequality (LMI) representations for the convex hulls of SE(2) and SE(3). Given these representations, the relaxed pose estimation problem can be framed as a robust least squares problem with the optimization variable constrained to these convex sets. Although this formulation is a relaxation of the original problem, numerical experiments indicates that it is indeed exact - i.e. its solution is a member of SE(2) or SE(3) - in many interesting settings. We additionally show that this method is guaranteed to be exact for a large class of pose estimation problems.", "We study the convex hull of @math , the set of @math orthogonal matrices with unit determinant, from the point of view of semidefinite programming. We show that the convex hull of @math is doubly spectrahedral, i.e., both it and its polar have a description as the intersection of a cone of positive semidefinite matrices with an affine subspace. Our spectrahedral representations are explicit and are of minimum size, in the sense that there are no smaller spectrahedral representations of these convex bodies.", "", "This paper introduces several new algorithms for consensus over the special orthogonal group. By relying on a convex relaxation of the space of rotation matrices, consensus over rotation elements is reduced to solving a convex problem with a unique global solution. 
The consensus protocol is then implemented as a distributed optimization using (i) dual decomposition, and (ii) both semi and fully distributed variants of the alternating direction method of multipliers technique -- all with strong convergence guarantees. The convex relaxation is shown to be exact at all iterations of the dual decomposition based method, and exact once consensus is reached in the case of the alternating direction method of multipliers. Further, analytic and or efficient solutions are provided for each iteration of these distributed computation schemes, allowing consensus to be reached without any online optimization. Examples in satellite attitude alignment with up to 100 agents, an estimation problem from computer vision, and a rotation averaging problem on @math validate the approach." ] }
1410.2167
1894701209
Real-time dense computer vision and SLAM offer great potential for a new level of scene modelling, tracking and real environmental interaction for many types of robot, but their high computational requirements mean that use on mass market embedded platforms is challenging. Meanwhile, trends in low-cost, low-power processing are towards massive parallelism and heterogeneity, making it difficult for robotics and vision researchers to implement their algorithms in a performance-portable way. In this paper we introduce SLAMBench, a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs in performance, accuracy and energy consumption of a dense RGB-D SLAM system. SLAMBench provides a KinectFusion implementation in C++, OpenMP, OpenCL and CUDA, and harnesses the ICL-NUIM dataset of synthetic RGB-D sequences with trajectory and scene ground truth for reliable accuracy comparison of different implementation and algorithms. We present an analysis and breakdown of the constituent algorithmic elements of KinectFusion, and experimentally investigate their execution time on a variety of multicore and GPU-accelerated platforms. For a popular embedded platform, we also present an analysis of energy efficiency for different configuration alternatives.
Computer vision research has traditionally focused on optimising the accuracy of algorithms. In autonomous driving, for example, the KITTI benchmark suite @cite_3 provides data and evaluation criteria for stereo, optical flow, visual odometry and 3D object recognition tasks. The ICL-NUIM dataset @cite_26 and the TUM RGB-D benchmark @cite_18 aim to benchmark the accuracy of visual odometry and SLAM algorithms. However, in an energy- and performance-constrained context, such as a battery-powered robot, it is important to achieve sufficient accuracy while maximising battery life. New benchmarks are needed which provide tools and techniques to investigate these constraints. An important early benchmark suite for performance evaluation entirely dedicated to computer vision is SD-VBS @cite_9 . SD-VBS provides single-threaded C and MATLAB implementations of 28 commonly used computer vision kernels that are combined to build 9 high-level vision applications; only some modules relevant to SLAM are included, notably Monte Carlo localisation. SD-VBS @cite_9 prohibits modifications to the algorithms, only allowing the implementation to be tuned to suit novel hardware architectures. This limits the use of the benchmark in the development of novel algorithms.
{ "cite_N": [ "@cite_9", "@cite_18", "@cite_26", "@cite_3" ], "mid": [ "2132511032", "2021851106", "2058535340", "2150066425" ], "abstract": [ "In the era of multi-core, computer vision has emerged as an exciting application area which promises to continue to drive the demand for both more powerful and more energy efficient processors. Although there is still a long way to go, vision has matured significantly over the last few decades, and the list of applications that are useful to end users continues to grow. The parallelism inherent in vision applications makes them a promising workload for multi-core and many-core processors.", "In this paper, we present a novel benchmark for the evaluation of RGB-D SLAM systems. We recorded a large set of image sequences from a Microsoft Kinect with highly accurate and time-synchronized ground truth camera poses from a motion capture system. The sequences contain both the color and depth images in full sensor resolution (640 × 480) at video frame rate (30 Hz). The ground-truth trajectory was obtained from a motion-capture system with eight high-speed tracking cameras (100 Hz). The dataset consists of 39 sequences that were recorded in an office environment and an industrial hall. The dataset covers a large variety of scenes and camera motions. We provide sequences for debugging with slow motions as well as longer trajectories with and without loop closures. Most sequences were recorded from a handheld Kinect with unconstrained 6-DOF motions but we also provide sequences from a Kinect mounted on a Pioneer 3 robot that was manually navigated through a cluttered indoor environment. To stimulate the comparison of different approaches, we provide automatic evaluation tools both for the evaluation of drift of visual odometry systems and the global pose error of SLAM systems. 
The benchmark website [1] contains all data, detailed descriptions of the scenes, specifications of the data formats, sample code, and evaluation tools.", "We introduce the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset for the evaluation of visual odometry, 3D reconstruction and SLAM algorithms that typically use RGB-D data. We present a collection of handheld RGB-D camera sequences within synthetically generated environments. RGB-D sequences with perfect ground truth poses are provided as well as a ground truth surface model that enables a method of quantitatively evaluating the final map or surface reconstruction accuracy. Care has been taken to simulate typically observed real-world artefacts in the synthetic imagery by modelling sensor noise in both RGB and depth data. While this dataset is useful for the evaluation of visual odometry and SLAM trajectory estimation, our main focus is on providing a method to benchmark the surface reconstruction accuracy which to date has been missing in the RGB-D community despite the plethora of ground truth RGB-D datasets available.", "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). 
Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net datasets kitti" ] }
1410.2167
1894701209
Real-time dense computer vision and SLAM offer great potential for a new level of scene modelling, tracking and real environmental interaction for many types of robot, but their high computational requirements mean that use on mass market embedded platforms is challenging. Meanwhile, trends in low-cost, low-power processing are towards massive parallelism and heterogeneity, making it difficult for robotics and vision researchers to implement their algorithms in a performance-portable way. In this paper we introduce SLAMBench, a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs in performance, accuracy and energy consumption of a dense RGB-D SLAM system. SLAMBench provides a KinectFusion implementation in C++, OpenMP, OpenCL and CUDA, and harnesses the ICL-NUIM dataset of synthetic RGB-D sequences with trajectory and scene ground truth for reliable accuracy comparison of different implementation and algorithms. We present an analysis and breakdown of the constituent algorithmic elements of KinectFusion, and experimentally investigate their execution time on a variety of multicore and GPU-accelerated platforms. For a popular embedded platform, we also present an analysis of energy efficiency for different configuration alternatives.
Another attempt at such performance evaluation is MEVBench @cite_27 , which focuses on a set of visual recognition applications including face detection, feature classification, object tracking and feature extraction. It provides single and multithreaded C++ implementations for some of the kernels with a special emphasis on low-power embedded systems. However, MEVBench only focuses on recognition algorithms and does not include a SLAM pipeline.
{ "cite_N": [ "@cite_27" ], "mid": [ "2023441626" ], "abstract": [ "The growth in mobile vision applications, coupled with the performance limitations of mobile platforms, has led to a growing need to understand computer vision applications. Computationally intensive mobile vision applications, such as augmented reality or object recognition, place significant performance and power demands on existing embedded platforms, often leading to degraded application quality. With a better understanding of this growing application space, it will be possible to more effectively optimize future embedded platforms. In this work, we introduce and evaluate a custom benchmark suite for mobile embedded vision applications named MEVBench. MEVBench provides a wide range of mobile vision applications such as face detection, feature classification, object tracking and feature extraction. To better understand mobile vision processing characteristics at the architectural level, we analyze single and multithread implementations of many algorithms to evaluate performance, scalability, and memory characteristics. We provide insights into the major areas where architecture can improve the performance of these applications in embedded systems." ] }
1410.2167
1894701209
Real-time dense computer vision and SLAM offer great potential for a new level of scene modelling, tracking and real environmental interaction for many types of robot, but their high computational requirements mean that use on mass market embedded platforms is challenging. Meanwhile, trends in low-cost, low-power processing are towards massive parallelism and heterogeneity, making it difficult for robotics and vision researchers to implement their algorithms in a performance-portable way. In this paper we introduce SLAMBench, a publicly-available software framework which represents a starting point for quantitative, comparable and validatable experimental research to investigate trade-offs in performance, accuracy and energy consumption of a dense RGB-D SLAM system. SLAMBench provides a KinectFusion implementation in C++, OpenMP, OpenCL and CUDA, and harnesses the ICL-NUIM dataset of synthetic RGB-D sequences with trajectory and scene ground truth for reliable accuracy comparison of different implementation and algorithms. We present an analysis and breakdown of the constituent algorithmic elements of KinectFusion, and experimentally investigate their execution time on a variety of multicore and GPU-accelerated platforms. For a popular embedded platform, we also present an analysis of energy efficiency for different configuration alternatives.
While such efforts are a step in the right direction, they do not provide the software tools for accuracy verification or for exploiting hardware accelerators and graphics processing units (GPUs). Nor do they enable investigation of the energy-consumption, performance and accuracy envelopes of 3D scene reconstruction algorithms across a range of hardware targets. The lack of benchmarks stems from the difficulty of systematically comparing the accuracy of the reconstruction while measuring the performance. In this work we focus specifically on SLAM and introduce a publicly-available framework for quantitative, comparable and validatable experimental research in the form of a benchmark for dense 3D scene understanding. A key feature of SLAMBench is that it is designed on top of the recently-proposed ICL-NUIM accuracy benchmark @cite_26 , and thus supports wider research in hardware and software. The integration of quantitative accuracy evaluation into SLAMBench enables algorithmic research to be performed; this is an important feature that is lacking in current performance benchmarks. A typical output of SLAMBench consists of the performance achieved and the accuracy of the result, along with the energy consumption (on platforms where such measurement is possible). These parameters capture the potential trade-offs for real-time vision platforms.
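As a concrete example of the accuracy side of such a trade-off analysis, trajectory accuracy in RGB-D SLAM benchmarks is commonly summarised as the root-mean-square of the absolute trajectory error (ATE) after aligning the estimate to ground truth. The following is a minimal translation-only sketch of that metric (the full metric used by the TUM tools and SLAMBench also aligns rotation; the simplification and function name are ours):

```python
import numpy as np


def ate_rmse(est, gt):
    """Translation-only absolute trajectory error RMSE.

    est, gt: N x 3 arrays of estimated and ground-truth positions at
    corresponding timestamps. The optimal translational offset is removed
    before computing the per-pose error norms.
    """
    est = np.asarray(est, float)
    gt = np.asarray(gt, float)
    offset = (gt - est).mean(axis=0)          # best rigid translation
    err = gt - (est + offset)                 # residual per pose
    return float(np.sqrt((err ** 2).sum(axis=1).mean()))
```

A single scalar like this, reported alongside frame rate and energy per frame, is what makes accuracy/performance/energy trade-offs directly comparable across implementations.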
{ "cite_N": [ "@cite_26" ], "mid": [ "2058535340" ], "abstract": [ "We introduce the Imperial College London and National University of Ireland Maynooth (ICL-NUIM) dataset for the evaluation of visual odometry, 3D reconstruction and SLAM algorithms that typically use RGB-D data. We present a collection of handheld RGB-D camera sequences within synthetically generated environments. RGB-D sequences with perfect ground truth poses are provided as well as a ground truth surface model that enables a method of quantitatively evaluating the final map or surface reconstruction accuracy. Care has been taken to simulate typically observed real-world artefacts in the synthetic imagery by modelling sensor noise in both RGB and depth data. While this dataset is useful for the evaluation of visual odometry and SLAM trajectory estimation, our main focus is on providing a method to benchmark the surface reconstruction accuracy which to date has been missing in the RGB-D community despite the plethora of ground truth RGB-D datasets available." ] }
1410.2466
1597843003
This paper presents two approaches to quantifying and visualizing variation in datasets of trees. The first approach localizes subtrees in which significant population differences are found through hypothesis testing and sparse classifiers on subtree features. The second approach visualizes the global metric structure of datasets through low-distortion embedding into hyperbolic planes in the style of multidimensional scaling. A case study is made on a dataset of airway trees in relation to Chronic Obstructive Pulmonary Disease.
Many data types represent entities which can be decomposed into parts or regions. Examples are graph-structured data @cite_9 @cite_26 , anatomical data which can be segmented into different organs @cite_14 @cite_1 , or even single anatomical organs where additional spatial information is relevant; for instance, in the framework of shape analysis @cite_18 @cite_15 , where local analysis is performed on correspondence points on biomedical shape surfaces. A typical problem when studying such data is the following: a classifier will often only predict a certain diagnosis or class, but in order to understand the cause of the result (and, e.g. in diagnostic settings, react to it) one also desires to know which parts of the collection caused a certain classification outcome.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_26", "@cite_9", "@cite_1", "@cite_15" ], "mid": [ "1826158161", "2165844313", "2059742622", "2124824205", "2527361485", "1536489727" ], "abstract": [ "This paper presents a new method for constructing compact statistical point-based models of ensembles of similar shapes that does not rely on any specific surface parameterization. The method requires very little preprocessing or parameter tuning, and is applicable to a wider range of problems than existing methods, including nonmanifold surfaces and objects of arbitrary topology. The proposed method is to construct a point-based sampling of the shape ensemble that simultaneously maximizes both the geometric accuracy and the statistical simplicity of the model. Surface point samples, which also define the shape-to-shape correspondences, are modeled as sets of dynamic particles that are constrained to lie on a set of implicit surfaces. Sample positions are optimized by gradient descent on an energy function that balances the negative entropy of the distribution on each shape with the positive entropy of the ensemble of shapes. We also extend the method with a curvature-adaptive sampling strategy in order to better approximate the geometry of the objects. This paper presents the formulation; several synthetic examples in two and three dimensions; and an application to the statistical shape analysis of the caudate and hippocampus brain structures from two clinical studies.", "One goal of statistical shape analysis is the discrimination between two populations of objects. Whereas traditional shape analysis was mostly concerned with single objects, analysis of multi-object complexes presents new challenges related to alignment and pose. In this paper, we present a methodology for discriminant analysis of multiple objects represented by sampled medial manifolds. 
Non-Euclidean metrics that describe geodesic distances between sets of sampled representations are used for alignment and discrimination. Our choice of discriminant method is the distance-weighted discriminant because of its generalization ability in high-dimensional, low sample size settings. Using an unbiased, soft discrimination score, we associate a statistical hypothesis test with the discrimination results. We explore the effectiveness of different choices of features as input to the discriminant analysis, using measures like volume, pose, shape, and the combination of pose and shape. Our method is applied to a longitudinal pediatric autism study with 10 subcortical brain structures in a population of 70 subjects. It is shown that the choices of type of global alignment and of intrinsic versus extrinsic shape features, the latter being sensitive to relative pose, are crucial factors for group discrimination and also for explaining the nature of shape change in terms of the application domain.", "Reverse inference, or brain reading, is a recent paradigm for analyzing functional magnetic resonance imaging (fMRI) data based on pattern recognition and statistical learning. By predicting some cognitive variables related to brain activation maps, this approach aims at decoding brain activity. Reverse inference takes into account the multivariate information between voxels and is currently the only way to assess how precisely some cognitive information is encoded by the activity of neural populations within the whole brain. However, it relies on a prediction function that is plagued by the curse of dimensionality, since there are far more features than samples, i.e., more voxels than fMRI volumes. To address this problem, different methods have been proposed, including univariate feature selection, feature agglomeration, and regularization techniques. In this paper, we consider a sparse hierarchical structured regularization. 
Specifically, the penalization we use is constructed from a tree that is obtai...", "While graphs with continuous node attributes arise in many applications, state-of-the-art graph kernels for comparing continuous-attributed graphs suffer from a high runtime complexity. For instance, the popular shortest path kernel scales as O(n4), where n is the number of nodes. In this paper, we present a class of graph kernels with computational complexity O(n2(m + log n + δ2 + d)), where δ is the graph diameter, m is the number of edges, and d is the dimension of the node attributes. Due to the sparsity and small diameter of real-world graphs, these kernels typically scale comfortably to large graphs. In our experiments, the presented kernels outperform state-of-the-art kernels in terms of speed and accuracy on classification benchmark datasets.", "", "[24] h. : il. ; 30 cm. Documento de trabajo (Universidad de San Andres. Departamento de Matematica y Ciencias) ; 44. Autores: Jorge R. Busch, Pablo A. Ferrari, Georgina Flesia, Ricardo Fraiman y Sebastian Grynberg. \"Agosto 2006.\" Incluye referencias bibliograficas (h. [22])." ] }
1410.2466
1597843003
This paper presents two approaches to quantifying and visualizing variation in datasets of trees. The first approach localizes subtrees in which significant population differences are found through hypothesis testing and sparse classifiers on subtree features. The second approach visualizes the global metric structure of datasets through low-distortion embedding into hyperbolic planes in the style of multidimensional scaling. A case study is made on a dataset of airway trees in relation to Chronic Obstructive Pulmonary Disease.
While a large body of work has been done on classifying structured data, less is known about how to identify which parts of a structure are relevant for the classification problem. Most such work has been done in settings where there is a correspondence between the parts constituting the data object: in analysis of brain connectivity @cite_26 @cite_23 , one usually has a matching between the nodes in the dataset, while in voxel-based morphometry @cite_30 or shape analysis @cite_18 , registration is used to match different images to a template. A popular approach to such problems is structured sparsity @cite_20 @cite_26 @cite_33 , which detects discriminative substructures in data described by fixed-length Euclidean vectors with a known underlying structure relating the vector coordinates. However, anatomical trees usually cannot be described by fixed-length vectors without discarding parts of the tree. Thus, these methods are not directly applicable.
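To illustrate the group-sparsity mechanism underlying such structured-sparse methods (a minimal sketch, not any specific cited algorithm), the proximal operator of the group-lasso penalty selects or discards whole groups of coordinates at once; the groups and values below are made up:

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||x_g||_2."""
    out = np.zeros_like(x)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            # Shrink the whole group toward zero by a common factor.
            out[g] = (1 - lam / norm) * x[g]
        # Groups with small norm are zeroed out entirely.
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]   # e.g., coordinates belonging to two substructures
y = group_soft_threshold(x, groups, lam=1.0)
print(y)  # first group shrunk, second group zeroed out
```

Because an entire group survives or dies together, the surviving groups mark the "discriminative substructures"; this presupposes a fixed-length vector with known group structure, which is exactly what anatomical trees lack.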
{ "cite_N": [ "@cite_30", "@cite_18", "@cite_26", "@cite_33", "@cite_23", "@cite_20" ], "mid": [ "", "1826158161", "2059742622", "", "24815514", "2951182273" ], "abstract": [ "", "This paper presents a new method for constructing compact statistical point-based models of ensembles of similar shapes that does not rely on any specific surface parameterization. The method requires very little preprocessing or parameter tuning, and is applicable to a wider range of problems than existing methods, including nonmanifold surfaces and objects of arbitrary topology. The proposed method is to construct a point-based sampling of the shape ensemble that simultaneously maximizes both the geometric accuracy and the statistical simplicity of the model. Surface point samples, which also define the shape-to-shape correspondences, are modeled as sets of dynamic particles that are constrained to lie on a set of implicit surfaces. Sample positions are optimized by gradient descent on an energy function that balances the negative entropy of the distribution on each shape with the positive entropy of the ensemble of shapes. We also extend the method with a curvature-adaptive sampling strategy in order to better approximate the geometry of the objects. This paper presents the formulation; several synthetic examples in two and three dimensions; and an application to the statistical shape analysis of the caudate and hippocampus brain structures from two clinical studies.", "Reverse inference, or brain reading, is a recent paradigm for analyzing functional magnetic resonance imaging (fMRI) data based on pattern recognition and statistical learning. By predicting some cognitive variables related to brain activation maps, this approach aims at decoding brain activity. 
Reverse inference takes into account the multivariate information between voxels and is currently the only way to assess how precisely some cognitive information is encoded by the activity of neural populations within the whole brain. However, it relies on a prediction function that is plagued by the curse of dimensionality, since there are far more features than samples, i.e., more voxels than fMRI volumes. To address this problem, different methods have been proposed, including univariate feature selection, feature agglomeration, and regularization techniques. In this paper, we consider a sparse hierarchical structured regularization. Specifically, the penalization we use is constructed from a tree that is obtai...", "", "Network representation of brain connectivity has provided a novel means of investigating brain changes arising from pathology, development or aging. The high dimensionality of these networks demands methods that are not only able to extract the patterns that highlight these sources of variation, but describe them individually. In this paper, we present a unified framework for learning subnetwork patterns of connectivity by their projective non-negative decomposition into a reconstructive basis set, as well as, additional basis sets representing development and group discrimination. In order to obtain these components, we exploit the geometrical distribution of the population in the connectivity space by using a graph-theoretical scheme that imposes locality-preserving properties. In addition, the projection of the subject networks into the basis set provides a low dimensional representation of it, that teases apart the different sources of variation in the sample, facilitating variation-specific statistical analysis. 
The proposed framework is applied to a study of diffusion-based connectivity in subjects with autism.", "This paper investigates a new learning formulation called structured sparsity, which is a natural extension of the standard sparsity concept in statistical learning and compressive sensing. By allowing arbitrary structures on the feature set, this concept generalizes the group sparsity idea that has become popular in recent years. A general theory is developed for learning with structured sparsity, based on the notion of coding complexity associated with the structure. It is shown that if the coding complexity of the target signal is small, then one can achieve improved performance by using coding complexity regularization methods, which generalize the standard sparse regularization. Moreover, a structured greedy algorithm is proposed to efficiently solve the structured sparsity problem. It is shown that the greedy algorithm approximately solves the coding complexity optimization problem under appropriate conditions. Experiments are included to demonstrate the advantage of structured sparsity over standard sparsity on some real applications." ] }
1410.2466
1597843003
This paper presents two approaches to quantifying and visualizing variation in datasets of trees. The first approach localizes subtrees in which significant population differences are found through hypothesis testing and sparse classifiers on subtree features. The second approach visualizes the global metric structure of datasets through low-distortion embedding into hyperbolic planes in the style of multidimensional scaling. A case study is made on a dataset of airway trees in relation to Chronic Obstructive Pulmonary Disease.
* Low-distortion embeddings The standard technique for visualizing population structure in high-dimensional or non-Euclidean datasets is to extract the pairwise distances between data points, and then use multidimensional scaling (MDS), which attempts to embed the points into a lower-dimensional Euclidean space such that the given distances between the points are preserved. This is expressed mathematically as minimizing the sum of squared differences between original and embedded pairwise distances @cite_7 . In a sequence of work @cite_41 @cite_5 @cite_31 , Amenta, St. John, and colleagues investigate visualization of sets of phylogenetic, or evolutionary, trees using multidimensional scaling. In this work, inter-tree distances are given by the Robinson-Foulds distance @cite_25 , which only measures topological differences in the trees. More recently, @cite_29 compare several non-linear versions of MDS on phylogenetic trees, and find that a metric that places less weight on large distances gives more meaningful visualizations. Chakerian and Holmes @cite_40 use MDS with the geodesic distance between trees @cite_22 . A different approach is that of @cite_38 , who visualize phylogenetic trees by projecting them onto a hypersphere; this approach does not consider branch lengths, only tree topology.
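A minimal numpy sketch of the MDS idea (here the classical, spectral variant rather than iterative stress minimization; the unit-square example is illustrative):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Embed points into R^dim from a pairwise-distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Distances from four corners of a unit square are reproduced exactly.
X = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
Y = classical_mds(D, dim=2)
D_hat = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)
print(np.allclose(D, D_hat, atol=1e-8))
```

For genuinely Euclidean distance matrices the embedding is exact (up to rotation); for tree distances such as Robinson-Foulds it is only a low-distortion approximation, which is what motivates the non-Euclidean embeddings discussed next.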
{ "cite_N": [ "@cite_38", "@cite_22", "@cite_7", "@cite_41", "@cite_29", "@cite_40", "@cite_5", "@cite_31", "@cite_25" ], "mid": [ "2129838374", "2003064123", "2495084345", "2155351266", "", "2592698424", "", "2111303992", "2060425093" ], "abstract": [ "Phylogenetic analysis is becoming an increasingly important tool for biological research. Applications include epidemiological studies, drug development, and evolutionary analysis. Phylogenetic search is a known NP-Hard problem. The size of the data sets which can be analyzed is limited by the exponential growth in the number of trees that must be considered as the problem size increases. A better understanding of the problem space could lead to better methods, which in turn could lead to the feasible analysis of more data sets. We present a definition of phylogenetic tree space and a visualization of this space that shows significant exploitable structure. This structure can be used to develop search methods capable of handling much larger data sets.", "We consider a continuous space which models the set of all phylogenetic trees having a fixed set of leaves. This space has a natural metric of nonpositive curvature, giving a way of measuring distance between phylogenetic trees and providing some procedures for averaging or combining several trees whose leaves are identical. This geometry also shows which trees appear within a fixed distance of a given tree and enables construction of convex hulls of a set of trees. This geometric model of tree space provides a setting in which questions that have been posed by biologists and statisticians over the last decade can be approached in a systematic fashion. 
For example, it provides a justification for disregarding portions of a collection of trees that agree, thus simplifying the space in which comparisons are to be made.", "Modern Multidimensional Scaling.", "We describe a visualization tool which allows a biologist to explore a large set of hypothetical evolutionary trees. Interacting with such a dataset allows the biologist to identify distinct hypotheses about how different species or organisms evolved, which would not have been clear from traditional analyses. Our system integrates a point-set visualization of the distribution of hypothetical trees with detail views of an individual tree, or of a consensus tree summarizing a subset of trees. Efficient algorithms were required for the key tasks of computing distances between trees, finding consensus trees, and laying out the point-set visualization.", "", "Inferential summaries of tree estimates are useful in the setting of evolutionary biology, where phylogenetic trees have been built from DNA data since the 1960s. In bioinformatics, psychometrics, and data mining, hierarchical clustering techniques output the same mathematical objects, and practitioners have similar questions about the stability and “generalizability” of these summaries. This article describes the implementation of the geometric distance between trees developed by Billera, Holmes, and Vogtmann (2001) equally applicable to phylogenetic trees and hierarchical clustering trees, and shows some of the applications in evaluating tree estimates. In particular, since Billera, Holmes, and Vogtmann (2001) have shown that the space of trees is negatively curved (called a CAT(0) space), a collection of trees can naturally be represented as a tree.
We compare this representation to the Euclidean approximations of treespace made available through both a classical multidimensional scaling and a Kernel multidimensional...", "", "We explored the use of multidimensional scaling (MDS) of tree-to-tree pairwise distances to visualize the re- lationships among sets of phylogenetic trees. We found the technique to be useful for exploring \"tree islands\" (sets of topologically related trees among larger sets of near-optimal trees), for comparing sets of trees obtained from bootstrapping and Bayesian sampling, for comparing trees obtained from the analysis of several different genes, and for comparing mul- tiple Bayesian analyses. The technique was also useful as a teaching aid for illustrating the progress of a Bayesian analysis and as an exploratory tool for examining large sets of phylogenetic trees. We also identified some limitations to the method, including distortions of the multidimensional tree space into two dimensions through the MDS technique, and the defini- tion of the MDS-defined space based on a limited sample of trees. Nonetheless, the technique is a useful approach for the analysis of large sets of phylogenetic trees. (Bayesian analysis; multidimensional scaling; phylogenetic analysis; tree space; visualization.) Systematists are often faced with the need to analyze a large collection of phylogenetic trees. These trees may represent a collection of equally parsimonious solutions to a phylogenetic problem, or a set of trees of similar likelihood, or a sampled set of trees from a Markov chain Monte Carlo (MCMC) Bayesian analysis. In any of these cases, a common approach for expressing the results is to make a consensus tree from the large col- lection of potential solutions (see Swofford, 1991, for a discussion of consensus methods). 
Consensus trees are produced to distill a large amount of information into a single summary tree, because it is often impractical to examine or display all of the individual solutions. In the case of MCMC Bayesian analysis, a consensus tree is usually used to summarize information about the pos- terior probabilities of the individual inferred branches. Although these uses of consensus trees may be ap- propriate for many purposes, a great deal of informa- tion about the individual solutions is usually lost. It is possible that two or more distinct but different bi- ological explanations are represented among different \"islands\" of solutions (e.g., see Maddison, 1991), but that a consensus of these solutions produces little or no resolution. Although many other solutions among the universe of possible trees may be excluded by the avail- able data, this information can be lost in a consensus tree.", "Abstract A metric on general phylogenetic trees is presented. This extends the work of most previous authors, who constructed metrics for binary trees. The metric presented in this paper makes possible the comparison of the many nonbinary phylogenetic trees appearing in the literature. This provides an objective procedure for comparing the different methods for constructing phylogenetic trees. The metric is based on elementary operations which transform one tree into another. Various results obtained in applying these operations are given. They enable the distance between any pair of trees to be calculated efficiently. This generalizes previous work by Bourque to the case where interior vertices can be labeled, and labels may contain more than one element or may be empty." ] }
1410.2466
1597843003
This paper presents two approaches to quantifying and visualizing variation in datasets of trees. The first approach localizes subtrees in which significant population differences are found through hypothesis testing and sparse classifiers on subtree features. The second approach visualizes the global metric structure of datasets through low-distortion embedding into hyperbolic planes in the style of multidimensional scaling. A case study is made on a dataset of airway trees in relation to Chronic Obstructive Pulmonary Disease.
All of these methods approach visualization through embedding into a Euclidean space in a low-distortion way. However, embedding spaces need not be restricted to Euclidean spaces. For instance, low-distortion embedding of a general metric into a tree has been considered for various measures of distortion @cite_28 @cite_2 . Low-distortion embedding of general metrics into hyperbolic spaces has also been considered in @cite_35 @cite_37 and by Cvetkovski and Crovella @cite_8 . In this paper, we use hyperbolic MDS for more truthful visualizations of tree variation.
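A small sketch of the ingredient that hyperbolic MDS swaps in: embedded pairwise distances are measured with the Poincaré-disk geodesic distance instead of the Euclidean one. The points below are illustrative:

```python
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance in the Poincare disk model of the hyperbolic plane."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    assert uu < 1 and vv < 1, "points must lie inside the open unit disk"
    diff = np.dot(u - v, u - v)
    return np.arccosh(1 + 2 * diff / ((1 - uu) * (1 - vv)))

o = np.zeros(2)
p = np.array([0.5, 0.0])
q = np.array([0.9, 0.0])
# Distances blow up near the disk boundary, so the space has "room"
# for exponentially growing structures such as trees.
print(poincare_distance(o, p) < poincare_distance(o, q))
print(np.isclose(poincare_distance(o, p), 2 * np.arctanh(0.5)))
```

Hyperbolic MDS then minimizes the same kind of stress objective as Euclidean MDS, but with this metric on the embedded points, which is why tree-like metrics can be embedded with much lower distortion.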
{ "cite_N": [ "@cite_35", "@cite_37", "@cite_8", "@cite_28", "@cite_2" ], "mid": [ "2020141201", "2169504875", "1834815371", "2009861473", "" ], "abstract": [ "We introduce a novel projection-based visualization method for high-dimensional data sets by combining concepts from MDS and the geometry of the hyperbolic spaces. This approach hyperbolic multi-dimensional scaling (H-MDS) is a synthesis of two important concepts for explorative data analysis and visualization: (i) multi-dimensional scaling uses proximity or pair distance data to generate a low-dimensional, spatial presentation of the data; (ii) previous work on the \"hyperbolic tree browser\" demonstrated the extraordinary advantages for an interactive display of graph-like data in the two-dimensional hyperbolic space (H2).In the new approach, H-MDS maps proximity data directly into the H2. This removes the restriction to \"quasihierarchical\", graph-based data--a major limitation of (ii). Since a suitable distance function can convert all kinds of data to proximity (or distance-based) data, this type of data can be considered the most general.We review important properties of the hyperbolic space and, in particular, the circular Poincare model of the H2. It enables effective human-computer interaction: by mouse dragging the \"focus\", the user can navigate in the data without loosing the context. In H2 the \"fish-eye\" behavior originates not simply by a non-linear view transformation but rather by extraordinary, non-Euclidean properties of the H2. Especially, the exponential growth of length and area of the underlying space makes the H2 a prime target for mapping hierarchical and (now also) high-dimensional data.Several high-dimensional mapping examples including synthetic and real-world data are presented. Since high-dimensional data produce \"ring\"-shaped displays, we present methods to enhance the display by modulating the dissimilarity contrast. 
This is demonstrated for an application for unstructured text: i.e., by using multiple film critiques from news:rec.art.movies.reviews and www.imdb.com, each movie is placed within the H2--creating a \"space of movies\" for interactive exploration.", "We propose a novel projection based visualization method for high-dimensional datasets by combining concepts from MDS and the geometry of the hyperbolic spaces. Our approach Hyperbolic Multi-Dimensional Scaling (H-MDS) extends earlier work [7] using hyperbolic spaces for visualization of tree structures data ( \"hyperbolic tree browser\" ).By borrowing concepts from multi-dimensional scaling we map proximity data directly into the 2-dimensional hyperbolic space (H2). This removes the restriction to \"quasihierarchical\", graph-based data -- limiting previous work. Since a suitable distance function can convert all kinds of data to proximity (or distance-based) data this type of data can be considered the most general.We used the circular Poincare model of the H2 which allows effective human-computer interaction: by moving the \"focus\" via mouse the user can navigate in the data without loosing the \"context\". In H2 the \"fish-eye\" behavior originates not simply by a non-linear view transformation but rather by extraordinary, non-Euclidean properties of the H2. Especially, the exponential growth of length and area of the underlying space makes the H2 a prime target for mapping hierarchical and (now also) high-dimensional data.We present several high-dimensional mapping examples including synthetic and real world data and a successful application for unstructured text. By analyzing and integrating multiple film critiques from news:rec.art.movies.reviews and the internet movie database, each movie becomes placed within the H2. Here the idea is, that related films share more words in their reviews than unrelated. Their semantic proximity leads to a closer arrangement. 
The result is a kind of high-level content structured display allowing the user to explore the \"space of movies\".", "Multidimensional scaling (MDS) is a class of projective algorithms traditionally used in Euclidean space to produce two- or three-dimensional visualizations of datasets of multidimensional points or point distances. More recently however, several authors have pointed out that for certain datasets, hyperbolic target space may provide a better fit than Euclidean space. In this paper we develop PD-MDS, a metric MDS algorithm designed specifically for the Poincare disk (PD) model of the hyperbolic plane. Emphasizing the importance of proceeding from first principles in spite of the availability of various black box optimizers, our construction is based on an elementary hyperbolic line search and reveals numerous particulars that need to be carefully addressed when implementing this as well as more sophisticated iterative optimization methods in a hyperbolic space model.", "We consider the problem of embedding general metrics into trees. We give the first non-trivial approximation algorithm for minimizing the multiplicative distortion. Our algorithm produces an embedding with distortion (c log n)O(√log Δ), where c is the optimal distortion, and Δ is the spread of the metric (i.e. the ratio of the diameter over the minimum distance). We give an improved O(1)-approximation algorithm for the case where the input is the shortest path metric over an unweighted graph. Moreover, we show that by composing our approximation algorithm for embedding general metrics into trees, with the approximation algorithm of [BCIS05] for embedding trees into the line, we obtain an improved approximation algorithm for embedding general metrics into the line. We also provide almost tight bounds for the relation between embedding into trees and embedding into spanning subtrees. 
We show that for any unweighted graph G, the ratio of the distortion required to embed G into a spanning subtree, over the distortion of an optimal tree embedding of G, is at most O(log n). We complement this bound by exhibiting a family of graphs for which the ratio is Ω(log n log log n).", "" ] }
1410.2146
1989653854
Interference Alignment is a new solution to overcome the problem of interference in multiuser wireless communication systems. Recently, the Compute-and-Forward (CF) transform has been proposed to approximate the capacity of K-user Gaussian Symmetric Interference Channel and practically perform Interference Alignment in wireless networks. However, this technique shows a random behavior in the achievable sum-rate, especially at high SNR. In this work, the origin of this random behavior is analyzed and a novel precoding technique based on the Golden Ratio is proposed to scale down the fadings experiences by the achievable sum-rate at high SNR.
From an information-theoretic perspective, this issue is modeled by the interference channel, introduced many years ago by Ahlswede and Shannon. It still remains one of the most important challenges in the domain of multiuser information theory. In the two-user interference channel, significant progress has been made for the cases of strong @cite_8 and very strong @cite_12 interference channels. Indeed, it is natural to address the achievable sum-rate problem described in @cite_6 for 2-user systems before generalizing it to the @math -user case, where @math .
{ "cite_N": [ "@cite_6", "@cite_12", "@cite_8" ], "mid": [ "2136960893", "2148402849", "2052530379" ], "abstract": [ "For a centralized encoder and decoder, a channel matrix is simply a set of linear equations that can be transformed into parallel channels. We develop a similar approach to multi-user networks: we view interference as creating linear equations of codewords and that a receiverpsilas goal is to collect a full rank set of such equations. Our new relaying technique, compute-and-forward, uses structured codes to reliably compute functions over channels. This allows the relays to efficiently recover a linear functions of codewords without recovering the individual codewords. Thus, our scheme can work with the structure of the interference while removing the effects of the noise at the relay. We apply our scheme to a Gaussian relay network with interference and achieve better rates than either compress-and-forward or decode-and-forward for certain regimes.", "This paper studies a symmetric K user Gaussian interference channel with K transmitters and K receivers. A \"very strong\" interference regime is derived for this channel setup. A \"very strong\" interference regime is one where the capacity region of the interference channel is the same as the capacity region of the channel with no interference. In this regime, the interference can be perfectly canceled by all the receivers without incurring any rate penalties. A \"very strong\" interference condition for an example symmetric K user deterministic interference channel is also presented.", "It is shown that, under certain conditions, two strongly interfering communication links with additive white Gaussian noise can achieve rates as high as would be achievable without this interference." ] }
1410.1606
1802032049
In this paper, we design a Collaborative-Hierarchical Sparse and Low-Rank (C-HiSLR) model that is natural for recognizing human emotion in visual data. Previous attempts require explicit expression components, which are often unavailable and difficult to recover. Instead, our model exploits the low-rank property to subtract neutral faces from expressive facial frames as well as performs sparse representation on the expression components with group sparsity enforced. For the CK+ dataset, C-HiSLR on raw expressive faces performs as competitive as the Sparse Representation based Classification (SRC) applied on manually prepared emotions. Our C-HiSLR performs even better than SRC in terms of true positive rate.
In practice, we care more about how to recover @math @cite_20 . Enforcing sparsity is feasible since @math can be exactly recovered from @math under conditions on @math @cite_13 . However, finding the sparsest solution is NP-hard and thus difficult to solve exactly @cite_22 . It is now well known that the @math norm is a good convex relaxation of sparsity: minimizing the @math norm induces the sparsest solution under mild conditions @cite_21 . Exact recovery is also guaranteed by @math -minimization under suitable conditions @cite_31 . A typical iterative greedy algorithm is Orthogonal Matching Pursuit (OMP) @cite_20 .
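A minimal numpy sketch of OMP as described above; the matrix sizes and the 3-sparse signal are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Greedily select k columns of A to approximate y = A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit least squares on the chosen support (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 100))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x_true = np.zeros(100)
x_true[[3, 40, 77]] = [1.0, -2.0, 1.5]  # a 3-sparse signal
y = A @ x_true
x_hat = omp(A, y, k=3)
print(np.allclose(x_hat, x_true, atol=1e-6))
```

With 50 random Gaussian measurements and only 3 nonzeros, the incoherence conditions mentioned above hold comfortably, so the greedy selection finds the true support and the final least-squares fit recovers the coefficients.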
{ "cite_N": [ "@cite_22", "@cite_21", "@cite_31", "@cite_13", "@cite_20" ], "mid": [ "2075779886", "2164452299", "2129131372", "2145096794", "2127271355" ], "abstract": [ "We investigate the computational complexity of two closely related classes of combinatorial optimization problems for linear systems which arise in various fields such as machine learning, operations research and pattern recognition. In the first class (Min ULR) one wishes, given a possibly infeasible system of linear relations, to find a solution that violates as few relations as possible while satisfying all the others. In the second class (Min RVLS) the linear system is supposed to be feasible and one looks for a solution with as few nonzero variables as possible. For both Min ULR and Min RVLS the four basic types of relational operators =, ⩾, > and ≠ are considered. While Min RVLS with equations was mentioned to be NP-hard in (Garey and Johnson, 1979), we established in (Amaldi; 1992; Amaldi and Kann, 1995) that min ULR with equalities and inequalities are NP-hard even when restricted to homogeneous systems with bipolar coefficients. The latter problems have been shown hard to approximate in (, 1993). In this paper we determine strong bounds on the approximability of various variants of Min RVLS and min ULR, including constrained ones where the variables are restricted to take binary values or where some relations are mandatory while others are optional. The various NP-hard versions turn out to have different approximability properties depending on the type of relations and the additional constraints, but none of them can be approximated within any constant factor, unless P = NP. Particular attention is devoted to two interesting special cases that occur in discriminant analysis and machine learning. 
In particular, we disprove a conjecture of van Horn and Martinez (1992) regarding the existence of a polynomial-time algorithm to design linear classifiers (or perceptrons) that involve a close-to-minimum number of features.", "Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m", "This paper considers a natural error correcting problem with real valued input/output. We wish to recover an input vector f ∈ R^n from corrupted measurements y = Af + e. Here, A is an m by n (coding) matrix and e is an arbitrary and unknown vector of errors. Is it possible to recover f exactly from the data y? We prove that under suitable conditions on the coding matrix A, the input f is the unique solution to the ℓ1-minimization problem (‖x‖_ℓ1 := Σ_i |x_i|): min_{g ∈ R^n} ‖y − Ag‖_ℓ1, provided that the support of the vector of errors is not too large, ‖e‖_ℓ0 := |{i : e_i ≠ 0}| ≤ ρ·m for some ρ > 0. In short, f can be recovered exactly by solving a simple convex optimization problem (which one can recast as a linear program). In addition, numerical experiments suggest that this recovery procedure works unreasonably well; f is recovered exactly even in situations where a significant fraction of the output is corrupted. This work is related to the problem of finding sparse solutions to vastly underdetermined systems of linear equations. There are also significant connections with the problem of recovering signals from highly incomplete measurements. In fact, the results introduced in this paper improve on our earlier work. 
Finally, underlying the success of ℓ1 is a crucial property we call the uniform uncertainty principle that we shall describe in detail.", "This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ ∈ T} f(τ) δ(t − τ) obeying |T| ≤ C_M · (log N)^{-1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{-M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem. In short, exact recovery may be obtained by solving a convex optimization problem. We give numerical values for C_M which depend on the desired probability of success. Our result may be interpreted as a novel kind of nonlinear sampling theorem. In effect, it says that any signal made out of |T| spikes may be recovered by convex programming from almost every set of frequencies of size O(|T| · log N). Moreover, this is nearly optimal in the sense that any method succeeding with probability 1 − O(N^{-M}) would in general require a number of frequency samples at least proportional to |T| · log N. The methodology extends to a variety of other situations and higher dimensions. 
For example, we show how one can reconstruct a piecewise constant (one- or two-dimensional) object from incomplete frequency samples - provided that the number of jumps (discontinuities) obeys the condition above - by minimizing other convex functionals such as the total variation of f.", "This paper demonstrates theoretically and empirically that a greedy algorithm called orthogonal matching pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results, which require O(m^2) measurements. The new results for OMP are comparable with recent results for another approach called basis pursuit (BP). In some settings, the OMP algorithm is faster and easier to implement, so it is an attractive alternative to BP for signal recovery problems." ] }
1410.1606
1802032049
In this paper, we design a Collaborative-Hierarchical Sparse and Low-Rank (C-HiSLR) model that is natural for recognizing human emotion in visual data. Previous attempts require explicit expression components, which are often unavailable and difficult to recover. Instead, our model exploits the low-rank property to subtract neutral faces from expressive facial frames as well as performs sparse representation on the expression components with group sparsity enforced. For the CK+ dataset, C-HiSLR on raw expressive faces performs as competitive as the Sparse Representation based Classification (SRC) applied on manually prepared emotions. Our C-HiSLR performs even better than SRC in terms of true positive rate.
For a multichannel @math with dependent coefficients across channels @cite_3 , @math where @math is low-rank. In a similar manner, Sparse Subspace Clustering @cite_17 of @math solves @math where @math is sparse, and Principal Component Analysis is @math where @math is a projection matrix.
{ "cite_N": [ "@cite_3", "@cite_17" ], "mid": [ "1997201895", "1993962865" ], "abstract": [ "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way.", "Many real-world problems deal with collections of high-dimensional data, such as images, videos, text, and web documents, DNA microarray data, and more. Often, such high-dimensional data lie close to low-dimensional structures corresponding to several classes or categories to which the data belong. In this paper, we propose and study an algorithm, called sparse subspace clustering, to cluster data points that lie in a union of low-dimensional subspaces. The key idea is that, among the infinitely many possible representations of a data point in terms of other points, a sparse representation corresponds to selecting a few points from the same subspace. 
This motivates solving a sparse optimization program whose solution is used in a spectral clustering framework to infer the clustering of the data into subspaces. Since solving the sparse optimization program is in general NP-hard, we consider a convex relaxation and show that, under appropriate conditions on the arrangement of the subspaces and the distribution of the data, the proposed minimization program succeeds in recovering the desired sparse representations. The proposed algorithm is efficient and can handle data points near the intersections of subspaces. Another key advantage of the proposed algorithm with respect to the state of the art is that it can deal directly with data nuisances, such as noise, sparse outlying entries, and missing entries, by incorporating the model of the data into the sparse optimization program. We demonstrate the effectiveness of the proposed algorithm through experiments on synthetic data as well as the two real-world problems of motion segmentation and face clustering." ] }
1410.1090
2159243025
In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
Deep neural network architectures have developed rapidly in recent years in both computer vision and natural language processing. For computer vision, Krizhevsky et al. @cite_0 proposed a deep convolutional neural network with 8 layers (denoted as AlexNet) for image classification tasks and outperformed previous methods by a large margin. Recently, Girshick et al. @cite_4 proposed an object detection framework based on AlexNet. For natural language, Recurrent Neural Networks show state-of-the-art performance in many tasks, such as speech recognition and word embedding learning @cite_5 @cite_12 @cite_10 .
{ "cite_N": [ "@cite_4", "@cite_0", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2102605133", "1686810756", "179875071", "2950133940", "2171928131" ], "abstract": [ "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. 
We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition", "The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". 
Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.", "We present several modifications of the original recurrent neural network language model (RNN LM).While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is the computational complexity. In this work, we show approaches that lead to more than 15 times speedup for both training and testing phases. Next, we show importance of using a backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. In the end, we discuss possibilities how to reduce the amount of parameters in the model. The resulting RNN model can thus be smaller, faster both during training and testing, and more accurate than the basic one." ] }
1410.1090
2159243025
In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.
Many works treat the task of describing images as a retrieval task and formulate the problem as a ranking or embedding learning problem @cite_23 @cite_1 @cite_15 . They first extract word and sentence features (e.g. Socher et al. @cite_15 use a dependency-tree Recursive Neural Network to extract sentence features) as well as image features. Then they optimize a ranking cost to learn an embedding model that maps both the language features and the image features to a common semantic feature space, in which the distance between images and sentences can be calculated directly. Most recently, Karpathy et al. @cite_11 showed that object-level image features based on object detection results generate better results than image features extracted at the global level.
{ "cite_N": [ "@cite_15", "@cite_1", "@cite_23", "@cite_11" ], "mid": [ "2149557440", "2123024445", "68733909", "2953276893" ], "abstract": [ "Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.", "Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. 
We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.", "The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. 
Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.", "We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit." ] }
1410.1309
2949427583
Modern data centers that provide Internet-scale services are stadium-size structures housing tens of thousands of heterogeneous devices (server clusters, networking equipment, power and cooling infrastructures) that must operate continuously and reliably. As part of their operation, these devices produce large amounts of data in the form of event and error logs that are essential not only for identifying problems but also for improving data center efficiency and management. These activities employ data analytics and often exploit hidden statistical patterns and correlations among different factors present in the data. Uncovering these patterns and correlations is challenging due to the sheer volume of data to be analyzed. This paper presents BiDAl, a prototype "log-data analysis framework" that incorporates various Big Data technologies to simplify the analysis of data traces from large clusters. BiDAl is written in Java with a modular and extensible architecture so that different storage backends (currently, HDFS and SQLite are supported), as well as different analysis languages (current implementation supports SQL, R and Hadoop MapReduce) can be easily selected as appropriate. We present the design of BiDAl and describe our experience using it to analyze several public traces of Google data clusters for building a simulation model capable of reproducing observed behavior.
With the public availability of the two Google cluster traces @cite_15 , numerous analyses of different aspects of the data have been reported. These provide general statistics about the workload and node state for such clusters @cite_16 @cite_13 @cite_17 and identify high levels of heterogeneity and dynamicity of the system, especially in comparison to grid workloads @cite_0 . However, no unified tool for studying the different traces was introduced. BiDAl is one of the first such tools facilitating Big Data analysis of trace data; our analysis with it confirms properties of the public Google traces similar to those reported in previous studies. Other traces have been analyzed in the past @cite_8 @cite_18 @cite_19 , but again without a dedicated tool being made available for further study.
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_0", "@cite_19", "@cite_15", "@cite_16", "@cite_13", "@cite_17" ], "mid": [ "2163291889", "2132353061", "2136510202", "228898923", "", "2060331550", "2129542763", "" ], "abstract": [ "MapReduce systems face enormous challenges due to increasing growth, diversity, and consolidation of the data and computation involved. Provisioning, configuring, and managing large-scale MapReduce clusters require realistic, workload-specific performance insights that existing MapReduce benchmarks are ill-equipped to supply. In this paper, we build the case for going beyond benchmarks for MapReduce performance evaluations. We analyze and compare two production MapReduce traces to develop a vocabulary for describing MapReduce workloads. We show that existing benchmarks fail to capture rich workload characteristics observed in traces, and propose a framework to synthesize and execute representative workloads. We demonstrate that performance evaluations using realistic workloads gives cluster operator new ways to identify workload-specific resource bottlenecks, and workload-specific choice of MapReduce task schedulers. We expect that once available, workload suites would allow cluster operators to accomplish previously challenging tasks beyond what we can now imagine, thus serving as a useful tool to help design and manage MapReduce systems.", "MapReduce is a programming paradigm for parallel processing that is increasingly being used for data-intensive applications in cloud computing environments. An understanding of the characteristics of workloads running in MapReduce environments benefits both the service providers in the cloud and users: the service provider can use this knowledge to make better scheduling decisions, while the user can learn what aspects of their jobs impact performance. This paper analyzes 10-months of MapReduce logs from the M45 supercomputing cluster which Yahoo! 
made freely available to select universities for academic research. We characterize resource utilization patterns, job patterns, and sources of failures. We use an instance-based learning technique that exploits temporal locality to predict job completion times from historical data and identify potential performance problems in our dataset.", "A new era of Cloud Computing has emerged, but the characteristics of Cloud load in data centers is not perfectly clear. Yet this characterization is critical for the design of novel Cloud job and resource management systems. In this paper, we comprehensively characterize the job/task load and host load in a real-world production data center at Google Inc. We use a detailed trace of over 25 million tasks across over 12,500 hosts. We study the differences between a Google data center and other Grid/HPC systems, from the perspective of both work load (w.r.t. jobs and tasks) and host load (w.r.t. machines). In particular, we study the job length, job submission frequency, and the resource utilization of jobs in the different systems, and also investigate valuable statistics of machine's maximum load, queue state and relative usage levels, with different job priorities and resource attributes. We find that the Google data center exhibits finer resource allocation with respect to CPU and memory than that of Grid/HPC systems. Google jobs are always submitted with much higher frequency and they are much shorter than Grid jobs. As such, Google host load exhibits higher variance and noise.", "Abstract : In this paper, we analyze seven MapReduce workload traces from production clusters at Facebook and at Cloudera customers in e-commerce, telecommunications/media, and retail. Cumulatively, these traces comprise over a year's worth of data logged from over 5000 machines, and contain over two million jobs that perform 1.6 exabytes of I/O. 
Key observations include input data forms up to 77% of all bytes, 90% of jobs access KB to GB sized files that make up less than 16% of stored bytes, up to 60% of jobs re-access data that has been touched within the past 6 hours, peak-to-median job submission rates are 9:1 or greater, an average of 68% of all compute time is spent in map, task-seconds-per-byte is a key metric for balancing compute and data bandwidth, task durations range from seconds to hours, and five out of seven workloads contain map-only jobs. We have also deployed a public workload repository with workload replay tools so that the researchers can systematically assess design priorities and compare performance across diverse MapReduce workloads.", "", "Cloud computing offers high scalability, flexibility and cost-effectiveness to meet emerging computing requirements. Understanding the characteristics of real workloads on a large production cloud cluster benefits not only cloud service providers but also researchers and daily users. This paper studies a large-scale Google cluster usage trace dataset and characterizes how the machines in the cluster are managed and the workloads submitted during a 29-day period behave. We focus on the frequency and pattern of machine maintenance events, job- and task-level workload behavior, and how the overall cluster resources are utilized.", "To better understand the challenges in developing effective cloud-based resource schedulers, we analyze the first publicly available trace data from a sizable multi-purpose cluster. The most notable workload characteristic is heterogeneity: in resource types (e.g., cores:RAM per machine) and their usage (e.g., duration and resources needed). Such heterogeneity reduces the effectiveness of traditional slot- and core-based scheduling. Furthermore, some tasks are constrained as to the kind of machine types they can use, increasing the complexity of resource assignment and complicating task migration. 
The workload is also highly dynamic, varying over time and most workload features, and is driven by many short jobs that demand quick scheduling decisions. While few simplifying assumptions apply, we find that many longer-running jobs have relatively stable resource utilizations, which can help adaptive resource schedulers.", "" ] }
1410.1309
2949427583
Modern data centers that provide Internet-scale services are stadium-size structures housing tens of thousands of heterogeneous devices (server clusters, networking equipment, power and cooling infrastructures) that must operate continuously and reliably. As part of their operation, these devices produce large amounts of data in the form of event and error logs that are essential not only for identifying problems but also for improving data center efficiency and management. These activities employ data analytics and often exploit hidden statistical patterns and correlations among different factors present in the data. Uncovering these patterns and correlations is challenging due to the sheer volume of data to be analyzed. This paper presents BiDAl, a prototype "log-data analysis framework" that incorporates various Big Data technologies to simplify the analysis of data traces from large clusters. BiDAl is written in Java with a modular and extensible architecture so that different storage backends (currently, HDFS and SQLite are supported), as well as different analysis languages (current implementation supports SQL, R and Hadoop MapReduce) can be easily selected as appropriate. We present the design of BiDAl and describe our experience using it to analyze several public traces of Google data clusters for building a simulation model capable of reproducing observed behavior.
BiDAl can be very useful in generating synthetic trace data. In general, synthesising traces involves two phases: characterising the process by analysing historical data, and generating new data. The aforementioned Google traces and log data from other sources have been successfully used for workload characterisation. In terms of resource usage, classes of jobs and their prevalence can be used to characterise workloads and generate new ones @cite_6 @cite_11 , or real usage patterns can be replaced by the average utilisation @cite_5 . Placement constraints have also been synthesised using clustering for characterisation @cite_3 . Our tool enables workload and cloud structure characterisation through fitting of distributions that can be further used for trace synthesis. The analysis is not restricted to one particular aspect; the flexibility of our tool allows the user to decide which phenomenon to characterise and then simulate.
{ "cite_N": [ "@cite_5", "@cite_3", "@cite_6", "@cite_11" ], "mid": [ "2182419557", "2028617807", "", "2143492785" ], "abstract": [ "The increase in scale and complexity of large compute clusters motivates a need for representative workload benchmarks to evaluate the performance impact of system changes, so as to assist in designing better scheduling algorithms and in carrying out management activities. To achieve this goal, it is necessary to construct workload characterizations from which realistic performance benchmarks can be created. In this paper, we focus on characterizing run-time task resource usage for CPU, memory and disk. The goal is to find an accurate characterization that can faithfully reproduce the performance of historical workload traces in terms of key performance metrics, such as task wait time and machine resource utilization. Through experiments using workload traces from Google production clusters, we find that simply using the mean of task usage can generate synthetic workload traces that accurately reproduce resource utilizations and task waiting time. This seemingly surprising result can be justified by the fact that resource usage for CPU, memory and disk are relatively stable over time for the majority of the tasks. Our work not only presents a simple technique for constructing realistic workload benchmarks, but also provides insights into understanding workload performance in production compute clusters.", "Evaluating the performance of large compute clusters requires benchmarks with representative workloads. At Google, performance benchmarks are used to obtain performance metrics such as task scheduling delays and machine resource utilizations to assess changes in application codes, machine configurations, and scheduling algorithms. Existing approaches to workload characterization for high performance computing and grids focus on task resource requirements for CPU, memory, disk, I/O, network, etc. 
Such resource requirements address how much resource is consumed by a task. However, in addition to resource requirements, Google workloads commonly include task placement constraints that determine which machine resources are consumed by tasks. Task placement constraints arise because of task dependencies such as those related to hardware architecture and kernel version. This paper develops methodologies for incorporating task placement constraints and machine properties into performance benchmarks of large compute clusters. Our studies of Google compute clusters show that constraints increase average task scheduling delays by a factor of 2 to 6, which often results in tens of minutes of additional task wait time. To understand why, we extend the concept of resource utilization to include constraints by introducing a new metric, the Utilization Multiplier (UM). UM is the ratio of the resource utilization seen by tasks with a constraint to the average utilization of the resource. UM provides a simple model of the performance impact of constraints in that task scheduling delays increase with UM. Last, we describe how to synthesize representative task constraints and machine properties, and how to incorporate this synthesis into existing performance benchmarks. Using synthetic task constraints and machine properties generated by our methodology, we accurately reproduce performance metrics for benchmarks of Google compute clusters with a discrepancy of only 13% in task scheduling delay and 5% in resource utilization.", "", "Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters ranging from cluster configuration, partitioning, networking characteristics, and the targeted applications' behavior. The design space, and the scale of the clusters, make it cumbersome and error-prone to test different cluster configurations using real setups.
Thus, the community is increasingly relying on simulations and models of cloud setups to infer system behavior and the impact of design choices. The accuracy of the results from such approaches depends on the accuracy and realistic nature of the workload traces employed. Unfortunately, few cloud workload traces are available (in the public domain). In this paper, we present the key steps towards analyzing the traces that have been made public, e.g., from Google, and inferring lessons that can be used to design realistic cloud workloads as well as enable thorough quantitative studies of Hadoop design. Moreover, we leverage the lessons learned from the traces to undertake two case studies: (i) Evaluating Hadoop job schedulers, and (ii) Quantifying the impact of shared storage on Hadoop system performance." ] }
1410.1309
2949427583
Modern data centers that provide Internet-scale services are stadium-size structures housing tens of thousands of heterogeneous devices (server clusters, networking equipment, power and cooling infrastructures) that must operate continuously and reliably. As part of their operation, these devices produce large amounts of data in the form of event and error logs that are essential not only for identifying problems but also for improving data center efficiency and management. These activities employ data analytics and often exploit hidden statistical patterns and correlations among different factors present in the data. Uncovering these patterns and correlations is challenging due to the sheer volume of data to be analyzed. This paper presents BiDAl, a prototype "log-data analysis framework" that incorporates various Big Data technologies to simplify the analysis of data traces from large clusters. BiDAl is written in Java with a modular and extensible architecture so that different storage backends (currently, HDFS and SQLite are supported), as well as different analysis languages (current implementation supports SQL, R and Hadoop MapReduce) can be easily selected as appropriate. We present the design of BiDAl and describe our experience using it to analyze several public traces of Google data clusters for building a simulation model capable of reproducing observed behavior.
Recently, the Failure Trace Archive (FTA) has published a toolkit for analysis of failure trace data @cite_1 . This toolkit is implemented in Matlab and enables analysis of traces from the FTA repository, which consists of about 20 public traces. It is, to our knowledge, the only other tool for large-scale trace data analysis. However, analysis is possible only if traces are stored in the FTA format in a relational database, and only for traces containing failure information. BiDAl, on the other hand, provides two different storage options, including HDFS, with transparent transfer between them, and works with any trace data, regardless of the process it describes. Additionally, using the FTA toolkit on new data requires publishing the data in their repository, whereas BiDAl can also be used for sensitive data that cannot be made public.
{ "cite_N": [ "@cite_1" ], "mid": [ "2158197021" ], "abstract": [ "With the increasing presence, scale, and complexity of distributed systems, resource failures are becoming an important and practical topic of computer science research. While numerous failure models and failure-aware algorithms exist, their comparison has been hampered by the lack of public failure data sets and data processing tools. To facilitate the design, validation, and comparison of fault-tolerant models and algorithms, we have created the Failure Trace Archive (FTA)-an online, public repository of failure traces collected from diverse parallel and distributed systems. In this work, we first describe the design of the archive, in particular of the standard FTA data format, and the design of a toolbox that facilitates automated analysis of trace data sets. We also discuss the use of the FTA for various current and future purposes. Second, after applying the toolbox to nine failure traces collected from distributed systems used in various application domains (e.g., HPC, Internet operation, and various online applications), we present a comparative analysis of failures in various distributed systems. Our analysis presents various statistical insights and typical statistical modeling results for the availability of individual resources in various distributed systems. The analysis results underline the need for public availability of trace data from different distributed systems. Last, we show how different interpretations of the meaning of failure data can result in different conclusions for failure modeling and job scheduling in distributed systems. Our results for different interpretations show evidence that there may be a need for further revisiting existing failure-aware algorithms, when applied for general rather than for domain-specific distributed systems." ] }
1410.1309
2949427583
Modern data centers that provide Internet-scale services are stadium-size structures housing tens of thousands of heterogeneous devices (server clusters, networking equipment, power and cooling infrastructures) that must operate continuously and reliably. As part of their operation, these devices produce large amounts of data in the form of event and error logs that are essential not only for identifying problems but also for improving data center efficiency and management. These activities employ data analytics and often exploit hidden statistical patterns and correlations among different factors present in the data. Uncovering these patterns and correlations is challenging due to the sheer volume of data to be analyzed. This paper presents BiDAl, a prototype "log-data analysis framework" that incorporates various Big Data technologies to simplify the analysis of data traces from large clusters. BiDAl is written in Java with a modular and extensible architecture so that different storage backends (currently, HDFS and SQLite are supported), as well as different analysis languages (current implementation supports SQL, R and Hadoop MapReduce) can be easily selected as appropriate. We present the design of BiDAl and describe our experience using it to analyze several public traces of Google data clusters for building a simulation model capable of reproducing observed behavior.
Although public tools for analysis of general trace data are scarce, several large corporations have reported building in-house applications for log analysis. These are generally used for live monitoring of the system, analyzing large amounts of data in real time to provide visualisations that help operators make administrative decisions. While Facebook uses Scuba @cite_9 , mentioned before, Microsoft has developed the Autopilot system @cite_12 , which helps administer their clusters. It includes a component (Cockpit) that analyzes logs and provides real-time statistics to operators. An example from Google is CPI2 @cite_21 , which monitors cycles per instruction of running tasks to detect job performance interference; this informs decisions on task migration or throttling so as to maintain high performance of production jobs. All these tools are, however, not open, apply only to data of the corresponding company, and sometimes require very large computational resources (e.g. Scuba). Our aim in this paper is to provide an open research tool that can also be used by smaller research groups with more limited resources.
{ "cite_N": [ "@cite_9", "@cite_21", "@cite_12" ], "mid": [ "2024463287", "2093941454", "2001276096" ], "abstract": [ "Facebook takes performance monitoring seriously. Performance issues can impact over one billion users so we track thousands of servers, hundreds of PB of daily network traffic, hundreds of daily code changes, and many other metrics. We require latencies of under a minute from events occuring (a client request on a phone, a bug report filed, a code change checked in) to graphs showing those events on developers' monitors. Scuba is the data management system Facebook uses for most real-time analysis. Scuba is a fast, scalable, distributed, in-memory database built at Facebook. It currently ingests millions of rows (events) per second and expires data at the same rate. Scuba stores data completely in memory on hundreds of servers each with 144 GB RAM. To process each query, Scuba aggregates data from all servers. Scuba processes almost a million queries per day. Scuba is used extensively for interactive, ad hoc, analysis queries that run in under a second over live data. In addition, Scuba is the workhorse behind Facebook's code regression analysis, bug report monitoring, ads revenue monitoring, and performance debugging.", "Performance isolation is a key challenge in cloud computing. Unfortunately, Linux has few defenses against performance interference in shared resources such as processor caches and memory buses, so applications in a cloud can experience unpredictable performance caused by other programs' behavior. Our solution, CPI2, uses cycles-per-instruction (CPI) data obtained by hardware performance counters to identify problems, select the likely perpetrators, and then optionally throttle them so that the victims can return to their expected behavior. It automatically learns normal and anomalous behaviors by aggregating data from multiple tasks in the same job. We have rolled out CPI2 to all of Google's shared compute clusters. 
The paper presents the analysis that lead us to that outcome, including both case studies and a large-scale evaluation of its ability to solve real production issues.", "Microsoft is rapidly increasing the number of large-scale web services that it operates. Services such as Windows Live Search and Windows Live Mail operate from data centers that contain tens or hundreds of thousands of computers, and it is essential that these data centers function reliably with minimal human intervention. This paper describes the first version of Autopilot, the automatic data center management infrastructure developed within Microsoft over the last few years. Autopilot is responsible for automating software provisioning and deployment; system monitoring; and carrying out repair actions to deal with faulty software and hardware. A key assumption underlying Autopilot is that the services built on it must be designed to be manageable. We also therefore outline the best practices adopted by applications that run on Autopilot." ] }
1410.1282
1949468118
Due to various green initiatives, renewable energy will be massively incorporated into the future smart grid. However, the intermittency of the renewables may result in power imbalance, thus adversely affecting the stability of a power system. Frequency regulation may be used to maintain the power balance at all times. As electric vehicles (EVs) become popular, they may be connected to the grid to form a vehicle-to-grid (V2G) system. An aggregation of EVs can be coordinated to provide frequency regulation services. However, V2G is a dynamic system where the participating EVs come and go independently. Thus, it is not easy to estimate the regulation capacities for V2G. In a preliminary study, we modeled an aggregation of EVs with a queueing network, whose structure allows us to estimate the capacities for regulation-up and regulation-down separately. The estimated capacities from the V2G system can be used for establishing a regulation contract between an aggregator and the grid operator, and facilitating a new business model for V2G. In this paper, we extend our previous development by designing a smart charging mechanism that can adapt to given characteristics of the EVs and make the performance of the actual system follow the analytical model.
The preliminary version of this work can be found in @cite_7 . In @cite_7 , we defined a queueing network model to estimate the RU and RD capacities. However, we assumed the existence of a smart charging mechanism that makes the service times at the various queues exponentially distributed. This exponential property is one of the keys to deriving mathematically tractable closed-form solutions for the capacities. In this paper, we relax this assumption by explaining how such a smart charging mechanism works. This allows the model to function even when the attributes of the EVs follow unknown distributions. We also perform simulations to verify the behavior of this mechanism when applied to the various queues in the model.
{ "cite_N": [ "@cite_7" ], "mid": [ "2161715775" ], "abstract": [ "Due to green initiatives adopted in many countries, renewable energy will be massively incorporated into the future smart grid. However, the intermittency of the renewables may result in power imbalance, thus adversely affecting the stability of a power system. Voltage regulation may be used to maintain the power balance at all times. As electric vehicles (EVs) become popular, they may be connected to the grid to form a vehicle-to-grid (V2G) system. An aggregation of EVs can be coordinated to provide voltage regulation services. However, V2G is a dynamic system where EVs are connected to the grid according to the owners' habits. In this paper, we model an aggregation of EVs with a queueing network, whose structure allows us to estimate the capacities for regulation up and regulation down, separately. The estimated capacities from the V2G system can be used for establishing a regulation contract between an aggregator and the grid operator, and facilitate a new business model for V2G." ] }
1410.1282
1949468118
Due to various green initiatives, renewable energy will be massively incorporated into the future smart grid. However, the intermittency of the renewables may result in power imbalance, thus adversely affecting the stability of a power system. Frequency regulation may be used to maintain the power balance at all times. As electric vehicles (EVs) become popular, they may be connected to the grid to form a vehicle-to-grid (V2G) system. An aggregation of EVs can be coordinated to provide frequency regulation services. However, V2G is a dynamic system where the participating EVs come and go independently. Thus, it is not easy to estimate the regulation capacities for V2G. In a preliminary study, we modeled an aggregation of EVs with a queueing network, whose structure allows us to estimate the capacities for regulation-up and regulation-down separately. The estimated capacities from the V2G system can be used for establishing a regulation contract between an aggregator and the grid operator, and facilitating a new business model for V2G. In this paper, we extend our previous development by designing a smart charging mechanism that can adapt to given characteristics of the EVs and make the performance of the actual system follow the analytical model.
There are many studies on V2G, since it is expected to be a major component of the future smart grid. In @cite_23 and @cite_34 , V2G was systematically introduced, together with studies on its business model. They provided information on different kinds of EVs and different power markets, including baseload power, peak power, spinning reserves, and regulation. The merits of V2G are quick response and high-value services with low capital costs, but V2G has shorter lifespans and higher operating costs per kWh. They also gave a rough idea of the scale of V2G, making it comparable with traditional regulation from generators. V2G energy trading was studied as an auction in @cite_30 . Interested readers may refer to @cite_37 for a comprehensive review of the impact of V2G on distribution systems and utility interfaces of power systems.
{ "cite_N": [ "@cite_30", "@cite_37", "@cite_34", "@cite_23" ], "mid": [ "2043922680", "2058606879", "2063996928", "2142871505" ], "abstract": [ "In this paper, we propose a novel multi-layer market for analyzing the energy exchange process between electric vehicles and the smart grid. The proposed market consists essentially of two layers: a macro layer and a micro layer. At the macro layer, we propose a double auction mechanism using which the aggregators, acting as sellers, and the smart grid elements, acting as buyers, interact so as to trade energy. We show that this double auction mechanism is strategy-proof and converges asymptotically. At the micro layer, the aggregators, which are the sellers in the macro layer, are given monetary incentives so as to sell the energy of associated plug-in hybrid electric vehicles (PHEVs) and to maximize their revenues. We analyze the interaction between the macro and micro layers and study some representative cases. Depending on the elasticity of the supply and demand, the utility functions are analyzed under different scenarios. Simulation results show that the proposed approach can significantly increase the utility of PHEVs, compared to a classical greedy approach.", "Plug-in vehicles can behave either as loads or as a distributed energy and power resource in a concept known as vehicle-to-grid (V2G) connection. This paper reviews the current status and implementation impact of V2G grid-to-vehicle (G2V) technologies on distributed systems, requirements, benefits, challenges, and strategies for V2G interfaces of both individual vehicles and fleets. The V2G concept can improve the performance of the electricity grid in areas such as efficiency, stability, and reliability. A V2G-capable vehicle offers reactive power support, active power regulation, tracking of variable renewable energy sources, load balancing, and current harmonic filtering. 
These technologies can enable ancillary services, such as voltage and frequency control and spinning reserve. Costs of V2G include battery degradation, the need for intensive communication between the vehicles and the grid, effects on grid distribution equipment, infrastructure changes, and social, political, cultural, and technical obstacles. Although V2G operation can reduce the lifetime of vehicle batteries, it is projected to become economical for vehicle owners and grid operators. Components and unidirectional bidirectional power flow technologies of V2G systems, individual and aggregated structures, and charging recharging frequency and strategies (uncoordinated coordinated smart) are addressed. Three elements are required for successful V2G operation: power connection to the grid, control and communication between vehicles and the grid operator, and on-board off-board intelligent metering. Success of the V2G concept depends on standardization of requirements and infrastructure decisions, battery technology, and efficient and smart scheduling of limited fast-charge infrastructure. A charging discharging infrastructure must be deployed. Economic benefits of V2G technologies depend on vehicle aggregation and charging recharging frequency and strategies. The benefits will receive increased attention from grid operators and vehicle owners in the future.", "Abstract Vehicle-to-grid power (V2G) uses electric-drive vehicles (battery, fuel cell, or hybrid) to provide power for specific electric markets. This article examines the systems and processes needed to tap energy in vehicles and implement V2G. It quantitatively compares today's light vehicle fleet with the electric power system. The vehicle fleet has 20 times the power capacity, less than one-tenth the utilization, and one-tenth the capital cost per prime mover kW. Conversely, utility generators have 10–50 times longer operating life and lower operating costs per kWh. 
To tap V2G is to synergistically use these complementary strengths and to reconcile the complementary needs of the driver and grid manager. This article suggests strategies and business models for doing so, and the steps necessary for the implementation of V2G. After the initial high-value, V2G markets saturate and production costs drop, V2G can provide storage for renewable energy generation. Our calculations suggest that V2G could stabilize large-scale (one-half of US electricity) wind power with 3 of the fleet dedicated to regulation for wind, plus 8–38 of the fleet providing operating reserves or storage for wind. Jurisdictions more likely to take the lead in adopting V2G are identified.", "As the light vehicle fleet moves to electric drive (hybrid, battery, and fuel cell vehicles), an opportunity opens for “vehicle-to-grid” (V2G) power. This article defines the three vehicle types that can produce V2G power, and the power markets they can sell into. V2G only makes sense if the vehicle and power market are matched. For example, V2G appears to be unsuitable for baseload power—the constant round-theclock electricity supply—because baseload power can be provided more cheaply by large generators, as it is today. Rather, V2G’s greatest near-term promise is for quick-response, high-value electric services. These quick-response electric services are purchased to balance constant fluctuations in load and to adapt to unexpected equipment failures; they account for 5–10 of electric cost—$ 12 billion per year in the US. This article develops equations to calculate the capacity for grid power from three types of electric drive vehicles. These equations are applied to evaluate revenue and costs for these vehicles to supply electricity to three electric markets (peak power, spinning reserves, and regulation). The results suggest that the engineering rationale and economic motivation for V2G power are compelling. 
The societal advantages of developing V2G include an additional revenue stream for cleaner vehicles, increased stability and reliability of the electric grid, lower electric system costs, and eventually, inexpensive storage and backup for renewable electricity. © 2005 Elsevier B.V. All rights reserved." ] }
1410.1282
1949468118
Due to various green initiatives, renewable energy will be massively incorporated into the future smart grid. However, the intermittency of the renewables may result in power imbalance, thus adversely affecting the stability of a power system. Frequency regulation may be used to maintain the power balance at all times. As electric vehicles (EVs) become popular, they may be connected to the grid to form a vehicle-to-grid (V2G) system. An aggregation of EVs can be coordinated to provide frequency regulation services. However, V2G is a dynamic system where the participating EVs come and go independently. Thus, it is not easy to estimate the regulation capacities for V2G. In a preliminary study, we modeled an aggregation of EVs with a queueing network, whose structure allows us to estimate the capacities for regulation-up and regulation-down separately. The estimated capacities from the V2G system can be used for establishing a regulation contract between an aggregator and the grid operator, and facilitating a new business model for V2G. In this paper, we extend our previous development by designing a smart charging mechanism that can adapt to given characteristics of the EVs and make the performance of the actual system follow the analytical model.
Queueing theory has been used to study the aggregate behavior of EVs. In @cite_4 , a simple @math queueing model for EV charging was devised, and a similar idea was adopted in @cite_14 to determine V2G capacity. Ref. @cite_19 suggested an @math queue with random interruptions to model the EV charging process and analyzed its dynamics with time-scale decomposition. The "service" process was assumed to be exponential, but this may not be practical unless there is a special arrangement to conform with the exponential property. In this work, we design a smart charging mechanism to overcome this problem.
{ "cite_N": [ "@cite_19", "@cite_14", "@cite_4" ], "mid": [ "2029268553", "2063915800", "2142022337" ], "abstract": [ "We consider a queuing model with applications to electric vehicle (EV) charging systems in smart grids. We adopt a scheme where an Electric Service Company (ESCo) broadcasts a one bit signal to EVs, possibly indicating ‘on-peak’ periods during which electricity cost is high. EVs randomly suspend resume charging based on the signal. To model the dynamics of EVs we propose an M M ∞ queue with random interruptions, and analyze the dynamics using time-scale decomposition. There exists a trade-off: one may postpone charging activity to ‘off-peak’ periods during which electricity cost is cheaper, however this incurs extra delay in completion of charging. Using our model we characterize achievable trade-offs between the mean cost and delay perceived by users. Next we consider a scenario where EVs respond to the signal based on the individual loads. Simulation results show that peak electricity demand can be reduced if EVs carrying higher loads are less sensitive to the signal.", "Vehicle-to-grid (V2G) units are gaining prominence and may dominate the auto-market in the near future. The V2G batteries require corporate parking lots for charging and discharging operations. The electric power capacity of an existing parking lot can be increased by the installation of photovoltaic (PV) rooftops. This paper describes mathematical models for estimating the electric power capacity of a V2G parking lot (VPL) system with PV canopy. The electric vehicle (EV) demand supply model was formulated as a queuing theory problem, exhibiting stochastic characteristics. New formulae were developed to address the impacts of battery charger efficiency on the amount of power demand during battery charging, and also how the latter is effected by inverter efficiency during discharging. Mathematical models for grid gain factor were developed. 
The proposed models were tested using Tesla Roadster EV and Nissan leaf EV. Promising simulation results are gained leading to a conclusion that vehicle parking lots with PV facilities can potentially relieve and enhance the grid capacity. Results show that 60 gain factor is possible. The effect of weather uncertainties and energy market price were studied. The study could be useful in battery-charger control studies.", "Abstract This article introduces a specific and simple model for electric vehicles suitable for load flow studies. The electric vehicles demand system is modeled as a PQ bus with stochastic characteristics based on the concept of the queuing theory. All appropriate variables of stochastic PQ buses are given with closed formulas as a function of charging time. A specific manufacturer model of electric vehicles is used as study case." ] }
1410.1471
2265687946
The Springer correspondence makes a link between the characters of a Weyl group and the geometry of the nilpotent cone of the corresponding semisimple Lie algebra. In this article, we consider a modular version of the theory, and show that the decomposition numbers of a Weyl group are particular cases of decomposition numbers for equivariant perverse sheaves on the nilpotent cone. We give some decomposition numbers which can be obtained geometrically. In the case of the symmetric group, we show that James' row and column removal rule for the symmetric group can be derived from a smooth equivalence between nilpotent singularities proved by Kraft and Procesi. We give the complete structure of the Springer and Grothendieck sheaves in the case of @math . Finally, we determine explicitly the modular Springer correspondence for exceptional types.
In @cite_26 , Mautner proves that, for @math , the category of @math -equivariant perverse sheaves with @math coefficients on the nilpotent cone of @math is equivalent to the category of polynomial representations of @math over @math of degree @math , using Lusztig's embedding of the nilpotent cone in the affine Grassmannian @cite_40 , a map in the other direction at the level of stacks, and the geometric Satake correspondence @cite_32 . Using the modular Springer correspondence, he then provides a geometric proof of Schur-Weyl duality, the Schur functor being described by homomorphisms from the Springer sheaf.
{ "cite_N": [ "@cite_40", "@cite_26", "@cite_32" ], "mid": [ "2037099729", "2950339729", "2056647821" ], "abstract": [ "", "We give geometric descriptions of the category C_k(n,d) of rational polynomial representations of GL_n over a field k of degree d for d less than or equal to n, the Schur functor and Schur-Weyl duality. The descriptions and proofs use a modular version of Springer theory and relationships between the equivariant geometry of the affine Grassmannian and the nilpotent cone for the general linear groups. Motivated by this description, we propose generalizations for an arbitrary connected complex reductive group of the category C_k(n,d) and the Schur functor.", "As such, it can be viewed as a first step in the geometric Langlands program. The connected complex reductive groups have a combinatorial classification by their root data. In the root datum the roots and the co-roots appear in a symmetric manner and so the connected reductive algebraic groups come in pairs. If G is a reductive group, we write G for its companion and call it the dual group G. The notion of the dual group itself does not appear in Satake's paper, but was introduced by Langlands, together with its various elaborations, in [LI], [L2] and is a cornerstone of the Langlands program. It also appeared later in physics [MO], [GNO]. In this paper we discuss the basic relationship between G and G. We begin with a reductive G and consider the affine Grassmannian Qx, the Grassmannian for the loop group of G. For technical reasons we work with formal algebraic loops. The affine Grassmannian is an infinite dimen sional complex space. We consider a certain category of sheaves, the spherical perverse sheaves, on ?r. These sheaves can be multiplied using a convolution product and this leads to a rather explicit construction of a Hopf algebra, by what has come to be known as Tannakian formalism. The resulting Hopf algebra turns out to be the ring of functions on G. 
In this interpretation, the spherical perverse sheaves on the affine Grassman nian correspond to finite dimensional complex representations of G. Thus, instead of defining G in terms of the classification of reductive groups, we pro vide a canonical construction of G, starting from G. We can carry out our construction over the integers. The spherical perverse sheaves are then those with integral coefficients, but the Grassmannian remains a complex algebraic object." ] }
1410.1471
2265687946
The Springer correspondence makes a link between the characters of a Weyl group and the geometry of the nilpotent cone of the corresponding semisimple Lie algebra. In this article, we consider a modular version of the theory, and show that the decomposition numbers of a Weyl group are particular cases of decomposition numbers for equivariant perverse sheaves on the nilpotent cone. We give some decomposition numbers which can be obtained geometrically. In the case of the symmetric group, we show that James' row and column removal rule for the symmetric group can be derived from a smooth equivalence between nilpotent singularities proved by Kraft and Procesi. We give the complete structure of the Springer and Grothendieck sheaves in the case of @math . Finally, we determine explicitly the modular Springer correspondence for exceptional types.
We should also mention @cite_12 which, apart from being a continuation of @cite_14 , contains many compatibilities, notably of induction and restriction functors with respect to a modular Springer functor defined in terms of restriction to the nilpotent cone. By @cite_15 , those compatibilities are also valid for the Fourier transform construction of the present paper.
{ "cite_N": [ "@cite_15", "@cite_14", "@cite_12" ], "mid": [ "2028153807", "", "2950941433" ], "abstract": [ "We show that two Weyl group actions on the Springer sheaf with arbitrary coefficients, one defined by Fourier transform and one by restriction, agree up to a twist by the sign character. This generalizes a familiar result from the setting of l-adic cohomology, making it applicable to modular representation theory. We use the Weyl group actions to define a Springer correspondence in this generality, and identify the zero weight spaces of small representations in terms of this Springer correspondence.", "", "For a simply-connected simple algebraic group @math over @math , we exhibit a subvariety of its affine Grassmannian that is closely related to the nilpotent cone of @math , generalizing a well-known fact about @math . Using this variety, we construct a sheaf-theoretic functor that, when combined with the geometric Satake equivalence and the Springer correspondence, leads to a geometric explanation for a number of known facts (mostly due to Broer and Reeder) about small representations of the dual group." ] }
1410.0706
2953295021
Efficient and flexible information matching over wireless networks has become increasingly important and challenging with the popularity of smart devices and the growth of social-network-based applications. Some existing approaches designed for wired networks are not applicable to wireless networks, due to their overwhelming control overheads. In this paper, we propose a reliable and scalable binary range vector summary tree (BRVST) infrastructure for flexible information expression support, effective content matching and timely information dissemination over the dynamic wireless network. A novel attribute range vector structure has been introduced for efficient and accurate content representation and a summary tree structure to facilitate information aggregation. For robust and scalable operations over a dynamic wireless network, the proposed overlay system exploits a virtual hierarchical geographic management framework. Extensive simulations demonstrate that BRVST has a significantly faster event matching speed, while incurring very low storage and traffic overhead, as compared with the peer schemes tested.
Other types of systems, such as @cite_7 , assume tree-based topologies, which are hard to maintain and vulnerable to network topology changes. To avoid this drawback, the wireless network can be divided into regions for more efficient management and information distribution. DRIP @cite_4 groups nodes registered to different broker nodes into Voronoi regions whose shape and size could change over time. However, it may involve a high overhead to maintain the region topology, especially over a mobile network. Based on a virtual infrastructure, our design avoids the high overhead of region maintenance and also facilitates information aggregation to minimize information update changes.
{ "cite_N": [ "@cite_4", "@cite_7" ], "mid": [ "2103856529", "2132871830" ], "abstract": [ "The publish subscribe (pub sub for short) paradigm is used to deliver events from a source to interested clients in an asynchronous way. Recently, extending a pub sub system in wireless networks has become a promising topic. However, most existing works focus on pub sub systems in infrastructured wireless networks. To adapt pub sub systems to mobile ad hoc networks, we propose DRIP, a dynamic Voronoi region-based pub sub protocol. In our design, the network is dynamically divided into several Voronoi regions after choosing proper nodes as broker nodes. Each broker node is used to collect subscriptions and detected events, as well as efficiently notify subscribers with matched events in its Voronoi region. Other nodes join their nearest broker nodes to submit subscriptions, publish events, and wait for notifications of their requested events. Broker nodes cooperate with each other for sharing subscriptions and useful events. Our proposal includes two major components: a Voronoi regions construction protocol, and a delivery mechanism that implements the pub sub paradigm. The effectiveness of DRIP is demonstrated through comprehensive simulation studies.", "Distributed content-based publish-subscribe middleware provides the decoupling, flexibility, expressiveness, and scalability required by highly dynamic distributed applications, e.g., mobile ones. Nevertheless, the available systems exploiting a distributed event dispatcher are unable to rearrange dynamically their behavior to adapt to changes in the topology of the dispatching infrastructure. In this work, we first define a strawman solution based on ideas proposed (but never precisely characterized) in existing work. We then analyze this solution and achieve a deeper understanding of how the event dispatching information is reconfigured. Based on this analysis, we modify the strawman approach to reduce its overhead. 
Simulations show that the reduction is significant (up to 50%), and yet the algorithm is resilient to concurrent reconfigurations." ] }
1410.0471
2950774424
This paper describes PinView, a content-based image retrieval system that exploits implicit relevance feedback collected during a search session. PinView contains several novel methods to infer the intent of the user. From relevance feedback, such as eye movements or pointer clicks, and visual features of images, PinView learns a similarity metric between images which depends on the current interests of the user. It then retrieves images with a specialized online learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user. We have integrated PinView to the content-based image retrieval system PicSOM, which enables applying PinView to real-world image databases. With the new algorithms PinView outperforms the original PicSOM, and in online experiments with real users the combination of implicit and explicit feedback gives the best results.
Since the mid-1990's, relevance feedback has been used for incorporating the user's preferences and understanding of the semantic similarity of images in the retrieval process @cite_50 @cite_56 . Research on relevance feedback techniques constitutes a subfield of CBIR research in its own right and the early works on the topic have been summarized in @cite_52 . The forms of explicit user interaction and giving of relevance feedback in interactive CBIR vary. In retrieval systems with multiple feature representations of the images, a straightforward approach could be to ask the user to tune the relative weights of the features in order to be able to find more relevant images @cite_56 . The weight tuning method and other approaches where the user is required to be able to modify the internal parameters of the CBIR system are, however, impractical for non-professional use.
{ "cite_N": [ "@cite_52", "@cite_50", "@cite_56" ], "mid": [ "", "1762020163", "2101498401" ], "abstract": [ "", "In addition to the problem of which image analysis models to use in digital libraries, e.g. wavelet, Wold, color histograms, is the problem of how to combine these models with their different strengths. Most present systems place the burden of combination on the user, e.g. the user specifies 50 texture features, 20 color features, etc. This is a problem since most users do not know how to best pick the settings for the given data and search problem. The paper addresses this problem, describing research in progress for a system that: (1) automatically infers which combination of models best represents the data of interest to the user; and (2) learns continuously during interaction with each user. In particular, these two components-inference and learning-provide a solution that adapts to the subjective and hard to predict behaviors frequently seen when people query or browse image libraries.", "Content-based image retrieval (CBIR) has become one of the most active research areas in the past few years. Many visual feature representations have been explored and many systems built. While these research efforts establish the basis of CBIR, the usefulness of the proposed approaches is limited. Specifically, these efforts have relatively ignored two distinct characteristics of CBIR systems: (1) the gap between high-level concepts and low-level features, and (2) the subjectivity of human perception of visual content. This paper proposes a relevance feedback based interactive retrieval approach, which effectively takes into account the above two characteristics in CBIR. During the retrieval process, the user's high-level query and perception subjectivity are captured by dynamically updated weights based on the user's feedback. 
The experimental results over more than 70000 images show that the proposed approach greatly reduces the user's effort of composing a query, and captures the user's information need more precisely." ] }
1410.0471
2950774424
This paper describes PinView, a content-based image retrieval system that exploits implicit relevance feedback collected during a search session. PinView contains several novel methods to infer the intent of the user. From relevance feedback, such as eye movements or pointer clicks, and visual features of images, PinView learns a similarity metric between images which depends on the current interests of the user. It then retrieves images with a specialized online learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user. We have integrated PinView to the content-based image retrieval system PicSOM, which enables applying PinView to real-world image databases. With the new algorithms PinView outperforms the original PicSOM, and in online experiments with real users the combination of implicit and explicit feedback gives the best results.
The more traditional implicit feedback approaches rely on feedback obtained from the control devices. @cite_2 studied the use of mouse and keyboard activity, as well as time spent on the page and scrolling, and @cite_24 compared the amount of information between such implicit channels and explicit feedback. The most consistent finding in these kinds of works has been that the time spent on the page and the way the user exits the page are good indicators of relevance. More advanced approaches, while still using the regular control devices, exploit click-through data, typically on the search result page @cite_13 . While these sources of implicit information are readily available for all search tools, they provide a rather limited view of the actions and intents of the user.
{ "cite_N": [ "@cite_24", "@cite_13", "@cite_2" ], "mid": [ "2123937625", "2152314154", "2074680184" ], "abstract": [ "Of growing interest in the area of improving the search experience is the collection of implicit user behavior measures (implicit measures) as indications of user interest and user satisfaction. Rather than having to submit explicit user feedback, which can be costly in time and resources and alter the pattern of use within the search experience, some research has explored the collection of implicit measures as an efficient and useful alternative to collecting explicit measure of interest from users.This research article describes a recent study with two main objectives. The first was to test whether there is an association between explicit ratings of user satisfaction and implicit measures of user interest. The second was to understand what implicit measures were most strongly associated with user satisfaction. The domain of interest was Web search. We developed an instrumented browser to collect a variety of measures of user activity and also to ask for explicit judgments of the relevance of individual pages visited and entire search sessions. The data was collected in a workplace setting to improve the generalizability of the results.Results were analyzed using traditional methods (e.g., Bayesian modeling and decision trees) as well as a new usage behavior pattern analysis (“gene analysis”). We found that there was an association between implicit measures of user activity and the user's explicit satisfaction ratings. The best models for individual pages combined clickthrough, time spent on the search result page, and how a user exited a result or ended a search session (exit type end action). Behavioral patterns (through the gene analysis) can also be used to predict user satisfaction for search sessions.", "This paper examines the reliability of implicit feedback generated from clickthrough data in WWW search. 
Analyzing the users' decision process using eyetracking and comparing implicit feedback against manual relevance judgments, we conclude that clicks are informative but biased. While this makes the interpretation of clicks as absolute relevance judgments difficult, we show that relative preferences derived from clicks are reasonably accurate on average.", "Recommender systems provide personalized suggestions about items that users will find interesting. Typically, recommender systems require a user interface that can intelligently'' determine the interest of a user and use this information to make suggestions. The common solution, explicit ratings'', where users tell the system what they think about a piece of information, is well-understood and fairly precise. However, having to stop to enter explicit ratings can alter normal patterns of browsing and reading. A more intelligent'' method is to use implicit ratings , where a rating is obtained by a method other than obtaining it directly from the user. These implicit interest indicators have obvious advantages, including removing the cost of the user rating, and that every user interaction with the system can contribute to an implicit rating. Current recommender systems mostly do not use implicit ratings, nor is the ability of implicit ratings to predict actual user interest well-understood. This research studies the correlation between various implicit ratings and the explicit rating for a single Web page. A Web browser was developed to record the user's actions (implicit ratings) and the explicit rating of a page. Actions included mouse clicks, mouse movement, scrolling and elapsed time. This browser was used by over 80 people that browsed more than 2500 Web pages. Using the data collected by the browser, the individual implicit ratings and some combinations of implicit ratings were analyzed and compared with the explicit rating. 
We found that the time spent on a page, the amount of scrolling on a page and the combination of time and scrolling had a strong correlation with explicit interest, while individual scrolling methods and mouse-clicks were ineffective in predicting explicit interest." ] }
1410.0471
2950774424
This paper describes PinView, a content-based image retrieval system that exploits implicit relevance feedback collected during a search session. PinView contains several novel methods to infer the intent of the user. From relevance feedback, such as eye movements or pointer clicks, and visual features of images, PinView learns a similarity metric between images which depends on the current interests of the user. It then retrieves images with a specialized online learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user. We have integrated PinView to the content-based image retrieval system PicSOM, which enables applying PinView to real-world image databases. With the new algorithms PinView outperforms the original PicSOM, and in online experiments with real users the combination of implicit and explicit feedback gives the best results.
At the other extreme, a number of approaches have used brain computer interfaces for IR or related tasks. The C3Vision system @cite_6 and a human-aided computing approach by @cite_20 infer image categories or the presence of distinct objects in images from EEG measurements, and @cite_15 @cite_38 use fMRI techniques for image categorization. @cite_25 built a prototype image annotation system using these ideas; the relevance of images is inferred from EEG and visual pattern mining is used to retrieve similar images. They do not, however, consider a full relevance feedback procedure for retrieval, but only study a single iteration and measure the performance as annotation accuracy. Brain activity measurements provide the most accurate picture of the intents of the user, but are clearly not yet practically feasible for real retrieval tools. Notable instrumentation and modeling challenges remain to be solved before such devices become applicable for daily use.
{ "cite_N": [ "@cite_38", "@cite_6", "@cite_15", "@cite_25", "@cite_20" ], "mid": [ "2112532472", "2163027455", "", "2041346309", "2110849321" ], "abstract": [ "Over the past decade, functional Magnetic Resonance Imaging (fMRI) has emerged as a powerful new instrument to collect vast quantities of data about activity in the human brain. A typical fMRI experiment can produce a three-dimensional image related to the human subject's brain activity every half second, at a spatial resolution of a few millimeters. As in other modern empirical sciences, this new instrumentation has led to a flood of new data, and a corresponding need for new data analysis methods. We describe recent research applying machine learning methods to the problem of classifying the cognitive state of a human subject based on fRMI data observed over a single time interval. In particular, we present case studies in which we have successfully trained classifiers to distinguish cognitive states such as (1) whether the human subject is looking at a picture or a sentence, (2) whether the subject is reading an ambiguous or non-ambiguous sentence, and (3) whether the word the subject is viewing is a word describing food, people, buildings, etc. This learning problem provides an interesting case study of classifier learning from extremely high dimensional (105 features), extremely sparse (tens of training examples), noisy data. This paper summarizes the results obtained in these three case studies, as well as lessons learned about how to successfully apply machine learning methods to train classifiers in such settings.", "We describe a real-time electroencephalography (EEG)-based brain-computer interface system for triaging imagery presented using rapid serial visual presentation. A target image in a sequence of nontarget distractor images elicits in the EEG a stereotypical spatiotemporal response, which can be detected. 
A pattern classifier uses this response to reprioritize the image sequence, placing detected targets in the front of an image stack. We use single-trial analysis based on linear discrimination to recover spatial components that reflect differences in EEG activity evoked by target versus nontarget images. We find an optimal set of spatial weights for 59 EEG sensors within a sliding 50-ms time window. Using this simple classifier allows us to process EEG in real time. The detection accuracy across five subjects is on average 92%, i.e., in a sequence of 2500 images, resorting images based on detector output results in 92% of target images being moved from a random position in the sequence to one of the first 250 images (first 10% of the sequence). The approach leverages the highly robust and invariant object recognition capabilities of the human visual system, using single-trial EEG analysis to efficiently detect neural signatures correlated with the recognition event.", "", "Human visual perception is able to recognize a wide range of targets under challenging conditions, but has limited throughput. Machine vision and automatic content analytics can process images at a high speed, but suffer from inadequate recognition accuracy for general target classes. In this paper, we propose a new paradigm to explore and combine the strengths of both systems. A single trial EEG-based brain machine interface (BCI) subsystem is used to detect objects of interest of arbitrary classes from an initial subset of images. The EEG detection outcomes are used as input to a graph-based pattern mining subsystem to identify, refine, and propagate the labels to retrieve relevant images from a much larger pool. The combined strategy is unique in its generality, robustness, and high throughput. It has great potential for advancing the state of the art in media retrieval applications. 
We have evaluated and demonstrated significant performance gains of the proposed system with multiple and diverse image classes over several data sets, including those from Internet (Caltech 101) and remote sensing images. In this paper, we will also present insights learned from the experiments and discuss future research directions.", "In this paper, we present Human-Aided Computing, an approach that uses an electroencephalograph (EEG) device to measure the presence and outcomes of implicit cognitive processing, processing that users perform automatically and may not even be aware of. We describe a classification system and present results from two experiments as proof-of-concept. Results from the first experiment showed that our system could classify whether a user was looking at an image of a face or not, even when the user was not explicitly trying to make this determination. Results from the second experiment extended this to animals and inanimate object categories as well, suggesting generality beyond face recognition. We further show that we can improve classification accuracies if we show images multiple times, potentially to multiple people, attaining well above 90% classification accuracies with even just ten presentations." ] }
1410.0471
2950774424
This paper describes PinView, a content-based image retrieval system that exploits implicit relevance feedback collected during a search session. PinView contains several novel methods to infer the intent of the user. From relevance feedback, such as eye movements or pointer clicks, and visual features of images, PinView learns a similarity metric between images which depends on the current interests of the user. It then retrieves images with a specialized online learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user. We have integrated PinView to the content-based image retrieval system PicSOM, which enables applying PinView to real-world image databases. With the new algorithms PinView outperforms the original PicSOM, and in online experiments with real users the combination of implicit and explicit feedback gives the best results.
The most interesting implicit feedback modalities fall between these two extremes. Various information signals can be captured by microphones, cameras or other easily wearable sensors, and they are likely to contain more information on the intentions of the user than what can be observed through the traditional control devices. Both speech and gestures have been extensively used as explicit control modalities, but there are also a few studies on their implicit use. For example, @cite_10 infers tags for images from implicit speech and @cite_31 considers facial expressions as indicators of topical relevance. In addition, various physiological measurements are extensively used for inferring the affective state of the user, which can in turn be used as a feedback source @cite_51 @cite_53 . However, to our knowledge there are no fully fledged image retrieval systems that use these input modalities as implicit feedback.
{ "cite_N": [ "@cite_51", "@cite_31", "@cite_10", "@cite_53" ], "mid": [ "2096131645", "2099703387", "2171907858", "2041347087" ], "abstract": [ "User feedback is considered to be a critical element in the information seeking process. An important aspect of the feedback cycle is relevance assessment that has progressively become a popular practice in web searching activities and interactive information retrieval (IR). The value of relevance assessment lies in the disambiguation of the user's information need, which is achieved by applying various feedback techniques. Such techniques vary from explicit to implicit and help determine the relevance of the retrieved documents. The former type of feedback is usually obtained through the explicit and intended indication of documents as relevant (positive feedback) or irrelevant (negative feedback). Explicit feedback is a robust method for improving a system's overall retrieval performance and producing better query reformulations [1], at the expense of users' cognitive resources. On the other hand, implicit feedback techniques tend to collect information on search behavior in a more intelligent and unobtrusive manner. By doing so, they disengage the users from the cognitive burden of document rating and relevance judgments. Information-seeking activities such as reading time, saving, printing, selecting and referencing have been all treated as indicators of relevance, despite the lack of sufficient evidence to support their effectiveness [2]. Besides their apparent differences, both categories of feedback techniques determine document relevance with respect to the cognitive and situational levels of the interactive dialogue that occurs between the user and the retrieval system [5]. However, this approach does not account for the dynamic interplay and adaptation that takes place between the different dialogue levels, but most importantly it does not consider the affective dimension of interaction. 
Users interact with intentions, motivations and feelings apart from real-life problems and information objects, which are all critical aspects of cognition and decision-making [3][4]. By evaluating users' affective response towards an information object (e.g. a document), prior and post to their exposure to it, a more accurate understanding of the object's properties and degree of relevance to the current information need may be facilitated. Furthermore, systems that can detect and respond accordingly to user emotions could potentially improve the naturalness of human-computer interaction and progressively optimize their retrieval strategy. The current study investigates the role of emotions in the information seeking process, as the latter are communicated through multi-modal interaction, and reconsiders relevance feedback with respect to what occurs on the affective level of interaction as well.", "Multimedia search systems face a number of challenges, emanating mainly from the semantic gap problem. Implicit feedback is considered a useful technique in addressing many of the semantic-related issues. By analysing implicit feedback information search systems can tailor the search criteria to address more effectively users' information needs. In this paper we examine whether we could employ affective feedback as an implicit source of evidence, through the aggregation of information from various sensory channels. These channels range between facial expressions to neuro-physiological signals and are regarded as indicative of the user's affective states. The end-goal is to model user affective responses and predict with reasonable accuracy the topical relevance of information items without the help of explicit judgements. For modelling relevance we extract a set of features from the acquired signals and apply different classification techniques, such as Support Vector Machines and K-Nearest Neighbours. 
The results of our evaluation suggest that the prediction of topical relevance, using the above approach, is feasible and, to a certain extent, implicit feedback models can benefit from incorporating such affective features.", "This paper provides a general introduction to the concept of Implicit Human-Centered Tagging (IHCT) — the automatic extraction of tags from nonverbal behavioral feedback of media users. The main idea behind IHCT is that nonverbal behaviors displayed when interacting with multimedia data (e.g., facial expressions, head nods, etc.) provide information useful for improving the tag sets associated with the data. As such behaviors are displayed naturally and spontaneously, no effort is required from the users, and this is why the resulting tagging process is said to be “implicit”. Tags obtained through IHCT are expected to be more robust than tags associated with the data explicitly, at least in terms of: generality (they make sense to everybody) and statistical reliability (all tags will be sufficiently represented). The paper discusses these issues in detail and provides an overview of pioneering efforts in the field.", "In this paper, we propose an approach for affective ranking of movie scenes based on the emotions that are actually felt by spectators. Such a ranking can be used for characterizing the affective, or emotional, content of video clips. The ranking can for instance help determine which video clip from a database elicits, for a given user, the most joy. This in turn will permit video indexing and retrieval based on affective criteria corresponding to a personalized user affective profile. A dataset of 64 different scenes from 8 movies was shown to eight participants. While watching, their physiological responses were recorded; namely, five peripheral physiological signals (GSR - galvanic skin resistance, EMG - electromyograms, blood pressure, respiration pattern, skin temperature) were acquired. 
After watching each scene, the participants were asked to self-assess their felt arousal and valence for that scene. In addition, movie scenes were analyzed in order to characterize each with various audio- and video-based features capturing the key elements of the events occurring within that scene. Arousal and valence levels were estimated by a linear combination of features from physiological signals, as well as by a linear combination of content-based audio and video features. We show that a correlation exists between arousal- and valence-based rankings provided by the spectator's self-assessments, and rankings obtained automatically from either physiological signals or audio-video features. This demonstrates the ability of using physiological responses of participants to characterize video scenes and to rank them according to their emotional content. This further shows that audio-visual features, either individually or combined, can fairly reliably be used to predict the spectator's felt emotion for a given scene. The results also confirm that participants exhibit different affective responses to movie scenes, which emphasizes the need for the emotional profiles to be user-dependant." ] }
1410.0471
2950774424
This paper describes PinView, a content-based image retrieval system that exploits implicit relevance feedback collected during a search session. PinView contains several novel methods to infer the intent of the user. From relevance feedback, such as eye movements or pointer clicks, and visual features of images, PinView learns a similarity metric between images which depends on the current interests of the user. It then retrieves images with a specialized online learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user. We have integrated PinView to the content-based image retrieval system PicSOM, which enables applying PinView to real-world image databases. With the new algorithms PinView outperforms the original PicSOM, and in online experiments with real users the combination of implicit and explicit feedback gives the best results.
Two decisive characteristics common to the setups of @cite_7 @cite_1 @cite_5 should, however, be noted. First, the user is expected to always explicitly select exactly one relevant image, by either eye fixation or mouse clicking. Second, the user interface has in the experiments been such that the target or query image is continuously visible on the screen, which is not plausible in real CBIR applications. Showing the target will also facilitate and even encourage the use of gaze for image comparison, which will certainly have an effect on the gaze patterns.
{ "cite_N": [ "@cite_5", "@cite_1", "@cite_7" ], "mid": [ "2181062612", "33281282", "" ], "abstract": [ "This paper explores the feasibility of using an eye tracker as an image retrieval interface. A database of image similarity values between 1000 Corel images is used in the study. Results from participants performing image search tasks show that eye tracking data can be used to reach target images in fewer steps than by random selection. The effects of the intrinsic difficulty of finding images and the time allowed for successive selections were also investigated.", "In this thesis visual search experiments are devised to explore the feasibility of an eye gaze driven search mechanism. The thesis first explores gaze behaviour on images possessing different levels of saliency. Eye behaviour was predominantly attracted by salient locations, but appears to also require frequent reference to non-salient background regions which indicated that information from scan paths might prove useful for image search. The thesis then specifically investigates the benefits of eye tracking as an image retrieval interface in terms of speed relative to selection by mouse, and in terms of the efficiency of eye tracking mechanisms in the task of retrieving target images. Results are analysed using ANOVA and significant findings are discussed. Results show that eye selection was faster than a computer mouse and experience gained during visual tasks carried out using a mouse would benefit users if they were subsequently transferred to an eye tracking system. Results on the image retrieval experiments show that users are able to navigate to a target image within a database confirming the feasibility of an eye gaze driven search mechanism. 
Additional histogram analysis of the fixations, saccades and pupil diameters in the human eye movement data revealed a new method of extracting intentions from gaze behaviour for image search, of which the user was not aware and which promises even quicker search performance. The research has two implications for Content Based Image Retrieval: (i) improvements in query formulation for visual search and (ii) new methods for visual search using attentional weighting. Furthermore it was demonstrated that users are able to find target images at sufficient speeds indicating that pre-attentive activity is playing a role in visual search. A current review of eye tracking technology, current applications, visual perception research, and models of visual attention is discussed. A review of the potential of the technology for commercial exploitation is also presented.", "" ] }
1410.0471
2950774424
This paper describes PinView, a content-based image retrieval system that exploits implicit relevance feedback collected during a search session. PinView contains several novel methods to infer the intent of the user. From relevance feedback, such as eye movements or pointer clicks, and visual features of images, PinView learns a similarity metric between images which depends on the current interests of the user. It then retrieves images with a specialized online learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user. We have integrated PinView into the content-based image retrieval system PicSOM, which enables applying PinView to real-world image databases. With the new algorithms PinView outperforms the original PicSOM, and in online experiments with real users the combination of implicit and explicit feedback gives the best results.
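The exploration–exploitation tradeoff described above can be illustrated with a minimal epsilon-greedy selection sketch. This is an illustrative assumption, not PinView's actual algorithm (which uses a specialized online learner over a learned similarity metric); the function name, the `epsilon` parameter, and the score dictionary are all hypothetical:

```python
import random

def select_images(candidates, relevance, k=10, epsilon=0.2, seed=0):
    """Pick k images to show next: mostly the top predicted-relevance
    candidates (exploitation), but with probability epsilon per slot a
    random remaining candidate (exploration).

    relevance: dict mapping image id -> predicted relevance score
    (unscored images default to 0.0)."""
    rng = random.Random(seed)
    # Candidates sorted by predicted relevance, best first.
    remaining = sorted(candidates,
                       key=lambda i: relevance.get(i, 0.0),
                       reverse=True)
    shown = []
    while remaining and len(shown) < k:
        if rng.random() < epsilon:
            pick = rng.choice(remaining)   # explore: random candidate
        else:
            pick = remaining[0]            # exploit: best remaining score
        remaining.remove(pick)
        shown.append(pick)
    return shown
```

With `epsilon=0.0` this degenerates to plain top-k ranking; raising `epsilon` injects unscored images so the relevance model can gather feedback on them.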
Later, @cite_36 , @cite_14 @cite_60 , and @cite_44 also introduced image retrieval systems driven by eye movements. The first @cite_36 was based on a conceptual interface designed to be controlled entirely by implicit gaze, combining a browsing and a search tool. A small-scale online experiment was reported, but it does not allow strong conclusions about the accuracy of the retrieval results. The second line of work concentrated mostly on the accuracy of inferring relevance in @cite_60 and on fixation-weighted region matching between the query and database images in @cite_14 . The last one @cite_44 used gaze data as genuinely implicit relevance feedback, reranking the results of Google Image Search. However, that system was not yet fully functional, as the reported experimental evaluation was carried out in a non-interactive mode.
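The fixation-based relevance inference referred to above can be sketched with the measures the cited study evaluated (fixation duration, fixation count, and number of revisits) combined into a heuristic reranking score. The linear weights and function names here are illustrative assumptions; the cited work trained a decision tree rather than a fixed linear rule:

```python
def fixation_features(fixations):
    """fixations: list of (image_id, duration_ms) in viewing order.
    Returns per-image (total_duration_ms, fixation_count, revisits).
    A revisit is a return to an image after fixating elsewhere;
    consecutive fixations on the same image do not count."""
    feats = {}
    last = None
    for img, dur in fixations:
        total, count, revisits = feats.get(img, (0.0, 0, 0))
        if img in feats and last != img:
            revisits += 1
        feats[img] = (total + dur, count + 1, revisits)
        last = img
    return feats

def relevance_rank(fixations, w=(0.001, 0.5, 1.0)):
    """Rank images by a heuristic linear surrogate for the relevance
    predictor: weighted total duration (ms), count, and revisits.
    The weights w are arbitrary illustrative values."""
    feats = fixation_features(fixations)
    score = {img: w[0] * t + w[1] * c + w[2] * r
             for img, (t, c, r) in feats.items()}
    return sorted(score, key=score.get, reverse=True)
```

In a reranking setting, the resulting order would replace (or be blended with) the original search-engine order of the viewed images.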
{ "cite_N": [ "@cite_36", "@cite_44", "@cite_14", "@cite_60" ], "mid": [ "2172091789", "2049997905", "2089964152", "2004820586" ], "abstract": [ "We introduce GaZIR, a gaze-based interface for browsing and searching for images. The system computes on-line predictions of relevance of images based on implicit feedback, and when the user zooms in, the images predicted to be the most relevant are brought out. The key novelty is that the relevance feedback is inferred from implicit cues obtained in real-time from the gaze pattern, using an estimator learned during a separate training phase. The natural zooming interface can be connected to any content-based information retrieval engine operating on user feedback. We show with experiments on one engine that there is sufficient amount of information in the gaze patterns to make the estimated relevance feedback a viable choice to complement or even replace explicit feedback by pointing-and-clicking.", "In this paper we propose an implicit relevance feedback method with the aim to improve the performance of known Content Based Image Retrieval (CBIR) systems by re-ranking the retrieved images according to users' eye gaze data. This represents a new mechanism for implicit relevance feedback, in fact usually the sources taken into account for image retrieval are based on the natural behavior of the user in his her environment estimated by analyzing mouse and keyboard interactions. In detail, after the retrieval of the images by querying CBIRs with a keyword, our system computes the most salient regions (where users look with a greater interest) of the retrieved images by gathering data from an unobtrusive eye tracker, such as Tobii T60. According to the features, in terms of color, texture, of these relevant regions our system is able to re-rank the images, initially, retrieved by the CBIR. 
Performance evaluation, carried out on a set of 30 users by using Google Images and \"pyramid\" like keyword, shows that about 87% of the users are more satisfied with the output images when the re-ranking is applied.", "Image retrieval technology has been developed for more than twenty years. However, the current image retrieval techniques cannot achieve a satisfactory recall and precision. To improve the effectiveness and efficiency of an image retrieval system, a novel content-based image retrieval method with a combination of image segmentation and eye tracking data is proposed in this paper. In the method, eye tracking data is collected by a non-intrusive table mounted eye tracker at a sampling rate of 120 Hz, and the corresponding fixation data is used to locate the human's Regions of Interest (hROIs) on the segmentation result from the JSEG algorithm. The hROIs are treated as important informative segments objects and used in the image matching. In addition, the relative gaze duration of each hROI is used to weigh the similarity measure for image retrieval. The similarity measure proposed in this paper is based on a retrieval strategy emphasizing the most important regions. Experiments on 7346 Hemera color images annotated manually show that the retrieval results from our proposed approach compare favorably with conventional content-based image retrieval methods, especially when the important regions are difficult to be located based on visual features.", "Relevance feedback (RF) mechanisms are widely adopted in Content-Based Image Retrieval (CBIR) systems to improve image retrieval performance. However, there exist some intrinsic problems: (1) the semantic gap between high-level concepts and low-level features and (2) the subjectivity of human perception of visual contents. The primary focus of this paper is to evaluate the possibility of inferring the relevance of images based on eye movement data. 
In total, 882 images from 101 categories are viewed by 10 subjects to test the usefulness of implicit RF, where the relevance of each image is known beforehand. A set of measures based on fixations is thoroughly evaluated, including fixation duration, fixation count, and the number of revisits. Finally, the paper proposes a decision tree to predict the user's input during the image searching tasks. The prediction precision of the decision tree is over 87%, which sheds light on a promising integration of natural eye movement into CBIR systems in the future." ] }