| aid | mid | abstract | related_work | ref_abstract |
|---|---|---|---|---|
1501.04292 | 1599611047 | This paper presents a new framework for visual bag-of-words (BOW) refinement and reduction to overcome the drawbacks associated with the visual BOW model, which has been widely used for image classification. Although very influential in the literature, the traditional visual BOW model has two distinct drawbacks. Firstly, for efficiency purposes, the visual vocabulary is commonly constructed by directly clustering the low-level visual feature vectors extracted from local keypoints, without considering the high-level semantics of images. That is, the visual BOW model still suffers from the semantic gap, and thus may lead to significant performance degradation in more challenging tasks (e.g. social image classification). Secondly, typically thousands of visual words are generated to obtain better performance on a relatively large image dataset. Due to such a large vocabulary size, the subsequent image classification may take an enormous amount of time. To overcome the first drawback, we develop a graph-based method for visual BOW refinement by exploiting the tags (easy to access although noisy) of social images. More notably, for efficient image classification, we further reduce the refined visual BOW model to a much smaller size through semantic spectral clustering. Extensive experimental results show the promising performance of the proposed framework for visual BOW refinement and reduction. | In this paper, we formulate visual BOW refinement as a multi-class graph-based SSL problem. Considering that graph construction is the key step of graph-based SSL, we develop a new @math -graph construction method using structured sparse representation, rather than the traditional @math -graph construction method, which uses only sparse representation. 
Compared with sparse representation, our structured sparse representation has a distinct advantage: extra structured sparsity can be induced into @math -graph construction, and thus the noise in the data can be suppressed to the greatest extent. In fact, the structured sparsity penalty used in this paper is defined as @math -norm Laplacian regularization, which is formulated directly over all the eigenvectors of the normalized Laplacian matrix. Hence, our new @math -norm Laplacian regularization differs from the @math -Laplacian regularization @cite_34 , an ordinary @math -generalization (with @math ) of the traditional Laplacian regularization (see further comparison in Section ). In this paper, we focus on exploiting the manifold structure of the data for @math -graph construction with structured sparse representation, leaving aside other types of structured sparsity @cite_45 @cite_14 used in the literature. | {
"cite_N": [
"@cite_14",
"@cite_34",
"@cite_45"
],
"mid": [
"1539012881",
"1486280163",
"2038964396"
],
"abstract": [
"Sparse coding consists in representing signals as sparse linear combinations of atoms selected from a dictionary. We consider an extension of this framework where the atoms are further assumed to be embedded in a tree. This is achieved using a recently introduced tree-structured sparse regularization norm, which has proven useful in several applications. This norm leads to regularized problems that are difficult to optimize, and in this paper, we propose efficient algorithms for solving them. More precisely, we show that the proximal operator associated with this norm is computable exactly via a dual approach that can be viewed as the composition of elementary proximal operators. Our procedure has a complexity linear, or close to linear, in the number of atoms, and allows the use of accelerated gradient techniques to solve the tree-structured sparse approximation problem at the same computational cost as traditional ones using the l1-norm. Our method is efficient and scales gracefully to millions of variables, which we illustrate in two types of applications: first, we consider fixed hierarchical dictionaries of wavelets to denoise natural images. Then, we apply our optimization tools in the context of dictionary learning, where learned dictionary elements naturally self-organize in a prespecified arborescent structure, leading to better performance in reconstruction of natural image patches. When applied to text documents, our method learns hierarchies of topics, thus providing a competitive alternative to probabilistic topic models.",
"We consider the classification problem on a finite set of objects. Some of them are labeled, and the task is to predict the labels of the remaining unlabeled ones. Such an estimation problem is generally referred to as transductive inference. It is well-known that many meaningful inductive or supervised methods can be derived from a regularization framework, which minimizes a loss function plus a regularization term. In the same spirit, we propose a general discrete regularization framework defined on finite object sets, which can be thought of as discrete analogue of classical regularization theory. A family of transductive inference schemes is then systemically derived from the framework, including our earlier algorithm for transductive inference, with which we obtained encouraging results on many practical classification problems. The discrete regularization framework is built on discrete analysis and geometry developed by ourselves, in which a number of discrete differential operators of various orders are constructed, which can be thought of as discrete analogues of their counterparts in the continuous case.",
"In many problems in computer vision, data in multiple classes lie in multiple low-dimensional subspaces of a high-dimensional ambient space. However, most of the existing classification methods do not explicitly take this structure into account. In this paper, we consider the problem of classification in the multi-subspace setting using sparse representation techniques. We exploit the fact that the dictionary of all the training data has a block structure where the training data in each class form few blocks of the dictionary. We cast the classification as a structured sparse recovery problem where our goal is to find a representation of a test example that uses the minimum number of blocks from the dictionary. We formulate this problem using two different classes of non-convex optimization programs. We propose convex relaxations for these two non-convex programs and study conditions under which the relaxations are equivalent to the original problems. In addition, we show that the proposed optimization programs can be modified properly to also deal with corrupted data. To evaluate the proposed algorithms, we consider the problem of automatic face recognition. We show that casting the face recognition problem as a structured sparse recovery problem can improve the results of the state-of-the-art face recognition algorithms, especially when we have a relatively small number of training data for each class. In particular, we show that the new class of convex programs can improve the state-of-the-art face recognition results by 10% with only 25% of the training data. In addition, we show that the algorithms are robust to occlusion, corruption, and disguise."
]
} |
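The plain @math -graph (l1-graph) construction that this row's related-work text contrasts against -- sparsely coding each sample over all the other samples and using the absolute coefficients as edge weights -- can be sketched in a few lines. This is a hedged illustration only: the paper's structured-sparse variant adds an eigenvector-based Laplacian penalty that is not reproduced here, and the use of scikit-learn's `Lasso` and the `alpha` value are assumptions made for the sketch.

```python
import numpy as np
from sklearn.linear_model import Lasso

def l1_graph(X, alpha=0.1):
    """Build a plain l1-graph: each sample is sparsely coded over all the
    others, and the absolute coefficients become edge weights. (The paper's
    structured-sparse variant adds a Laplacian-eigenvector penalty on top.)"""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = [j for j in range(n) if j != i]
        D = X[idx].T                        # dictionary: other samples as columns
        coder = Lasso(alpha=alpha, max_iter=5000)
        coder.fit(D, X[i])                  # sparse reconstruction of sample i
        W[i, idx] = np.abs(coder.coef_)     # coefficients -> edge weights
    return 0.5 * (W + W.T)                  # symmetrize for graph-based SSL

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))                # 20 samples, 8-dimensional features
W = l1_graph(X)
```

The symmetrized weight matrix `W` would then serve as the affinity graph for multi-class graph-based SSL, e.g. as input to a label-propagation step.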
1501.04343 | 2951237788 | Due to the ubiquitous batch data processing in cloud computing, the fundamental model of scheduling malleable batch tasks and its extensions have received significant attention recently. In this model, a set of n tasks is to be scheduled on C identical machines, and each task is specified by a value, a workload, a deadline and a parallelism bound. Within the parallelism bound, the number of machines allocated to a task can vary over time while its total workload remains unchanged. The two core results of this paper are to quantitatively characterize a sufficient and necessary condition under which a set of malleable batch tasks with deadlines can be feasibly scheduled on C machines, and to propose a polynomial-time algorithm that produces such a feasible schedule. These core results provide a conceptual tool and an optimal scheduling algorithm that enable new analysis and design of algorithms, or improvements to existing algorithms, for a wide range of scheduling objectives. | The authors of @cite_15 consider an extension of our task model, i.e., DAG-structured malleable tasks, and, based on randomized rounding of linear programming, they propose an algorithm with an expected approximation ratio of @math for every @math , where @math . The online version of our task model is considered in @cite_2 @cite_9 ; there, based on the dual-fitting technique, two weighted greedy algorithms are proposed for non-committed and committed scheduling, respectively, achieving competitive ratios of @math where @math @cite_3 , and @math where @math and @math . | {
"cite_N": [
"@cite_9",
"@cite_15",
"@cite_3",
"@cite_2"
],
"mid": [
"2092461546",
"2076885483",
"2122985136",
""
],
"abstract": [
"We study online mechanisms for preemptive scheduling with deadlines, with the goal of maximizing the total value of completed jobs. This problem is fundamental to deadline-aware cloud scheduling, but there are strong lower bounds even for the algorithmic problem without incentive constraints. However, these lower bounds can be circumvented under the natural assumption of deadline slackness, i.e., that there is a guaranteed lower bound s > 1 on the ratio between a job's size and the time window in which it can be executed. In this paper, we construct a truthful scheduling mechanism with a constant competitive ratio, given slackness s > 1. Furthermore, we show that if s is large enough then we can construct a mechanism that also satisfies a commitment property: it can be determined whether or not a job will finish, and the requisite payment if so, well in advance of each job's deadline. This is notable because, in practice, users with strict deadlines may find it unacceptable to discover only very close to their deadline that their job has been rejected.",
"This paper presents a novel algorithm for scheduling big data jobs on large compute clusters. In our model, each job is represented by a DAG consisting of several stages linked by precedence constraints. The resource allocation per stage is malleable, in the sense that the processing time of a stage depends on the resources allocated to it (the dependency can be arbitrary in general).The goal of the scheduler is to maximize the total value of completed jobs, where the value for each job depends on its completion time. We design an algorithm for the problem which guarantees an expected constant approximation factor when the cluster capacity is sufficiently high. To the best of our knowledge, this is the first constant-factor approximation algorithm for the problem. The algorithm is based on formulating the problem as a linear program and then rounding an optimal (fractional) solution into a feasible (integral) schedule using randomized rounding.",
"We consider a market-based resource allocation model for batch jobs in cloud computing clusters. In our model, we incorporate the importance of the due date of a job rather than the number of servers allocated to it at any given time. Each batch job is characterized by the work volume of total computing units (e.g., CPU hours) along with a bound on maximum degree of parallelism. Users specify, along with these job characteristics, their desired due date and a value for finishing the job by its deadline. Given this specification, the primary goal is to determine the scheduling of cloud computing instances under capacity constraints in order to maximize the social welfare (i.e., sum of values gained by allocated users). Our main result is a new (C/(C-k) · s/(s-1))-approximation algorithm for this objective, where C denotes cloud capacity, k is the maximal bound on parallelized execution (in practical settings, k ≪ C) and s is the slackness on the job completion time, i.e., the minimal ratio between a specified deadline and the earliest finish time of a job. Our algorithm is based on utilizing dual fitting arguments over a strengthened linear program to the problem. Based on the new approximation algorithm, we construct truthful allocation and pricing mechanisms, in which reporting the job's true value and properties (deadline, work volume and the parallelism bound) is a dominant strategy for all users. To that end, we provide a general framework for transforming allocation algorithms into truthful mechanisms in domains of single-value and multi-properties. We then show that the basic mechanism can be extended under proper Bayesian assumptions to the objective of maximizing revenues, which is important for public clouds. 
We empirically evaluate the benefits of our approach through simulations on data-center job traces, and show that the revenues obtained under our mechanism are comparable with an ideal fixed-price mechanism, which sets an on-demand price using oracle knowledge of users' valuations. Finally, we discuss how our model can be extended to accommodate uncertainties in job work volumes, which is a practical challenge in cloud settings.",
""
]
} |
1501.04343 | 2951237788 | Due to the ubiquitous batch data processing in cloud computing, the fundamental model of scheduling malleable batch tasks and its extensions have received significant attention recently. In this model, a set of n tasks is to be scheduled on C identical machines, and each task is specified by a value, a workload, a deadline and a parallelism bound. Within the parallelism bound, the number of machines allocated to a task can vary over time while its total workload remains unchanged. The two core results of this paper are to quantitatively characterize a sufficient and necessary condition under which a set of malleable batch tasks with deadlines can be feasibly scheduled on C machines, and to propose a polynomial-time algorithm that produces such a feasible schedule. These core results provide a conceptual tool and an optimal scheduling algorithm that enable new analysis and design of algorithms, or improvements to existing algorithms, for a wide range of scheduling objectives. | In addition, the authors of @cite_8 consider DAG-structured malleable tasks and propose two algorithms with approximation ratios of 6 and 2, respectively, for the objectives of minimizing the total weighted completion time and the maximum weighted lateness of tasks. The conclusions in @cite_8 also show that scheduling deadline-sensitive malleable tasks is key to solving both of their scheduling objectives. In particular, seeking a schedule for DAG tasks can be transformed into seeking a schedule for tasks with simpler chain-precedence constraints; then, whenever there is a feasible schedule that completes a set of tasks by their deadlines, they propose an algorithm in which each task is completed by at most 2 times its deadline, and they give two procedures to obtain near-optimal completion times of tasks in terms of the two scheduling objectives. | {
"cite_N": [
"@cite_8"
],
"mid": [
"236893983"
],
"abstract": [
"We introduce FlowFlex, a highly generic and effective scheduler for flows of MapReduce jobs connected by precedence constraints. Such a flow can result, for example, from a single user-level Pig, Hive or Jaql query. Each flow is associated with an arbitrary function describing the cost incurred in completing the flow at a particular time. The overall objective is to minimize either the total cost (minisum) or the maximum cost (minimax) of the flows. Our contributions are both theoretical and practical. Theoretically, we advance the state of the art in malleable parallel scheduling with precedence constraints. We employ resource augmentation analysis to provide bicriteria approximation algorithms for both minisum and minimax objective functions. As corollaries, we obtain approximation algorithms for total weighted completion time (and thus average completion time and average stretch), and for maximum weighted completion time (and thus makespan and maximum stretch). Practically, the average case performance of the FlowFlex scheduler is excellent, significantly better than other approaches. Specifically, we demonstrate via extensive experiments the overall performance of FlowFlex relative to optimal and also relative to other, standard MapReduce scheduling schemes. All told, FlowFlex dramatically extends the capabilities of the earlier Flex scheduler for singleton MapReduce jobs while simultaneously providing a solid theoretical foundation for both."
]
} |
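The feasibility question at the heart of the two rows above -- can a set of malleable tasks, each with a workload, a deadline and a parallelism bound, be completed on C machines? -- can be illustrated with a toy discrete-time check. The earliest-deadline-first greedy below is only a heuristic sketch, not the paper's optimal feasibility algorithm; the `Task` fields and the unit-time-slot model are assumptions made for the illustration.

```python
from dataclasses import dataclass

@dataclass
class Task:
    workload: int    # total machine-time-units required
    deadline: int    # must finish before this time slot
    par_bound: int   # max machines usable in any single slot

def greedy_schedule(tasks, C, horizon):
    """Illustrative EDF-style check: in each unit slot, hand out the C
    machines to unfinished tasks in deadline order, capped by each task's
    parallelism bound. A heuristic sketch, not the paper's exact algorithm."""
    remaining = [t.workload for t in tasks]
    for slot in range(horizon):
        free = C
        # consider tasks whose deadline has not passed, earliest deadline first
        order = sorted(range(len(tasks)), key=lambda i: tasks[i].deadline)
        for i in order:
            if remaining[i] > 0 and tasks[i].deadline > slot and free > 0:
                alloc = min(tasks[i].par_bound, remaining[i], free)
                remaining[i] -= alloc
                free -= alloc
    return all(r == 0 for r in remaining)

tasks = [Task(workload=4, deadline=2, par_bound=2),
         Task(workload=3, deadline=3, par_bound=3)]
print(greedy_schedule(tasks, C=3, horizon=3))   # prints True
```

Note the role of the parallelism bound: a single task with workload 5, deadline 2 and bound 2 is infeasible on any number of machines, since at most 2 units of work can be done per slot.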
1501.04434 | 2045513608 | To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions -- especially those adding a token -- to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience. | 2FA usability. The authors of @cite_11 were among the first to investigate the usability of 2FA and suggested that, by increasing redundancy, 2FA strengthens security but reduces usability. However, they did not conduct any actual user study. The authors of @cite_13 analysed the effect on productivity of the Common Access Card (CAC), a smart card and photo ID used by US Department of Defense employees. They found that employees were often locked out after leaving the card in the reader and almost stopped answering emails from home; the authors concluded that the DoD lost more than $10M worth of time. 
The authors of @cite_1 investigated authentication usability in the context of automated telephone banking, asking 62 participants to rate their experience via a 22-item questionnaire: when second factors of authentication were enforced, users felt more secure than when using only passwords or PINs, but at the expense of usability. | {
"cite_N": [
"@cite_1",
"@cite_13",
"@cite_11"
],
"mid": [
"2019380907",
"2121371744",
"2072694646"
],
"abstract": [
"This paper describes an experiment to investigate user perceptions of the usability and security of single-factor and two-factor authentication methods in automated telephone banking. In a controlled experiment with 62 banking customers a knowledge-based, single-factor authentication procedure, based on those commonly used in the financial services industry, was compared with a two-factor approach where in addition to the knowledge-based step, a one-time passcode was generated using a hardware security token. Results were gathered on the usability and perceived security of the two methods described, together with call completion rates and call durations for the two methods. Significant differences were found between the two methods, with the two-factor version being perceived as offering higher levels of security than the single-factor authentication version; however, this gain was offset by significantly lower perceptions of usability, and lower ratings for convenience and ease of use for the two-factor version. In addition, the two-factor authentication version took longer for participants to complete. This research provides valuable empirical evidence of the trade-off between security and usability in automated systems.",
"The Department of Defense has mandated the use of a two-factor security system for access and authentication. The increased security of such a system has been extensively researched by the military. This research uses a survey to examine the effects on productivity and usability of implementing such a system.",
"The usability of security systems has become a major issue in research on the efficiency and user acceptance of security systems. The authentication process is essential for controlling the access to various resources and facilities. The design of usable yet secure user authentication methods raises crucial questions concerning how to solve conflicts between security and usability goals."
]
} |
1501.04434 | 2045513608 | To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions -- especially those adding a token -- to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience. | Online banking. Just and Aspinall @cite_7 analysed the use of dual credential authentication for online banking from both security and usability points of view. They considered granularity and time of feedback given to users during the authentication steps as main usability properties, and found that some banks delayed feedback by not providing it at screen change, or provided granular feedback too late in the authentication process. They concluded that these issues are likely to confuse users, but did not conduct an actual user study. Our work complements @cite_7 as we conduct an in-depth user study aiming to understand authentication for online banking from the users' point of view. 
Also, while @cite_7 looked at dual credentials (e.g., two passwords, two PINs, or two challenge questions), we focus on actual 2FA. | {
"cite_N": [
"@cite_7"
],
"mid": [
"1635927852"
],
"abstract": [
"This paper presents the results of a security and usability review of the authentication implementations used by more than 10 UK banks. Our focus is on their use of dual text credentials that combine two passwords, PINs, or challenge questions (and some “partial selection” variations). We model the authentication protocols based upon several deployment choices, such as the credential rules, and use the model to compare the security and usability properties of the implementations. Our results indicate some variation and inconsistency across the UK banking industry, from which we offer some suggestions for improved authentication protocol design."
]
} |
1501.04434 | 2045513608 | To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions -- especially those adding a token -- to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience. | Authentication diaries. Other studies have also used authentication diaries to understand authentication habits and usability issues. Inglesant and Sasse @cite_6 introduced password diaries to capture the details of authentication events happening in the wild and found that frequent password changes are perceived as troublesome, that users do not change passwords unless forced to, and that it is difficult for them to create memorable passwords adhering to the policy. Hayashi and Hong @cite_15 also analysed two-week diaries to derive the average number of passwords, frequency of use, recall strategies, etc., across users. 
Finally, the authors of @cite_0 asked 23 employees of a US government organisation to keep a diary of every authentication event in one day, and later interviewed them about their authentication experience: the study highlights users' frustration with authentication processes that disrupt their primary task and hinder their productivity, and uncovers a number of coping strategies aimed at minimising the negative impact of security on employees' work. The authors also found that the requirement to use 2FA in the form of an RSA SecurID token made employees log in remotely less often than they would normally do. | {
"cite_N": [
"@cite_0",
"@cite_15",
"@cite_6"
],
"mid": [
"",
"2018696390",
"2150341374"
],
"abstract": [
"",
"While past work has examined password usage on a specific computer, web site, or organization, there is little work examining overall password usage in daily life. Through a diary study, we examine all usage of passwords, and offer some new findings based on quantitative analyses regarding how often people log in, where they log in, and how frequently people use foreign computers. Our analysis also confirms or updates existing statistics about password usage patterns. We also discuss some implications for design as well as security education.",
"HCI research published 10 years ago pointed out that many users cannot cope with the number and complexity of passwords, and resort to insecure workarounds as a consequence. We present a study which re-examined password policies and password practice in the workplace today. 32 staff members in two organisations kept a password diary for 1 week, which produced a sample of 196 passwords. The diary was followed by an interview which covered details of each password, in its context of use. We find that users are in general concerned to maintain security, but that existing security policies are too inflexible to match their capabilities, and the tasks and contexts in which they operate. As a result, these password policies can place demands on users which impact negatively on their productivity and, ultimately, that of the organisation. We conclude that, rather than focussing password policies on maximizing password strength and enforcing frequency alone, policies should be designed using HCI principles to help the user to set an appropriately strong password in a specific context of use."
]
} |
1501.04434 | 2045513608 | To prevent password breaches and guessing attacks, banks increasingly turn to two-factor authentication (2FA), requiring users to present at least one more factor, such as a one-time password generated by a hardware token or received via SMS, besides a password. We can expect some solutions -- especially those adding a token -- to create extra work for users, but little research has investigated usability, user acceptance, and perceived security of deployed 2FA. This paper presents an in-depth study of 2FA usability with 21 UK online banking customers, 16 of whom had accounts with more than one bank. We collected a rich set of qualitative and quantitative data through two rounds of semi-structured interviews, and an authentication diary over an average of 11 days. Our participants reported a wide range of usability issues, especially with the use of hardware tokens, showing that the mental and physical workload involved shapes how they use online banking. Key targets for improvements are (i) the reduction in the number of authentication steps, and (ii) removing features that do not add any security but negatively affect the user experience. | Summary. Prior work on 2FA usability presented expert assessments, survey-based studies, and experiments on prototypes, each with a number of shortcomings. Expert assessments did not involve users, yielding findings that rely only on researchers' judgement, often without the benefit of a structured usability assessment technique such as GOMS, heuristic evaluation or cognitive walkthrough @cite_9 . Survey-based studies asked participants to make hypothetical choices or report behaviours based on what they could remember. Finally, studies with prototypes were performed in the absence of real-life constraints: without reference to a primary task -- such as paying a bill -- or context of use -- paying a bill from your office during lunch break or in a hotel room while traveling. 
This highlights the lack of studies focused on actual users of 2FA and online banking, which is crucial to understanding how customers use different 2FA technologies for online banking and how these fit into their everyday activities. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2152309982"
],
"abstract": [
"Usability inspection is the generic name for a set of cost-effective ways of evaluating user interfaces to find usability problems. They are fairly informal methods and easy to use."
]
} |
1501.03719 | 2950814836 | We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram-based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at increased risk from noise and local appearance variations. This is because they compare the values of pixel pairs: changes to either pixel can easily lead to changes in descriptor values, damaging performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements. | The development of local image descriptors has been the subject of immense research, and a comprehensive review of related methods is beyond the scope of this work. For a recent survey and evaluation of alternative binary interest point descriptors, we refer the reader to @cite_17 . Here, we only briefly review these and other related representations. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1953691509"
],
"abstract": [
"Performance evaluation of salient features has a long-standing tradition in computer vision. In this paper, we fill the gap of evaluation for the recent wave of binary feature descriptors, which aim to provide robustness while achieving high computational efficiency. We use established metrics to embed our assessment into the body of existing evaluations, allowing us to provide a novel taxonomy unifying both traditional and novel binary features. Moreover, we analyze the performance of different detector and descriptor pairings, which are often used in practice but have been infrequently analyzed. Additionally, we complement existing datasets with novel data testing for illumination change, pure camera rotation, pure scale change, and the variety present in photo-collections. Our performance analysis clearly demonstrates the power of the new class of features. To benefit the community, we also provide a website for the automatic testing of new description methods using our provided metrics and datasets www.cs.unc.edu feature-evaluation."
]
} |
1501.03719 | 2950814836 | We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs; changes to either of the pixels can easily lead to changes in descriptor values, hence damaging its performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements. | Binary descriptors. Binary key-point descriptors were recently introduced in answer to the rapidly expanding sizes of image data sets and the pressing need for compact representations which can be efficiently matched. One of the first of this family of descriptors was the Binary Robust Independent Elementary Features (BRIEF) @cite_16 . BRIEF is based on intensity comparisons of random pixel pairs in a patch centered around a detected image key point. These comparisons result in binary strings that can be matched very quickly using a simple XOR operation. 
As BRIEF is based on intensity comparisons instead of image gradient computations and histogram pooling of values, it is much faster to extract than SIFT-like descriptors @cite_8 . Furthermore, by using no more than 512 bits, a single BRIEF descriptor requires far less memory than its floating-point alternatives. | {
"cite_N": [
"@cite_16",
"@cite_8"
],
"mid": [
"1491719799",
"2151103935"
],
"abstract": [
"We propose to use binary strings as an efficient feature point descriptor, which we call BRIEF. We show that it is highly discriminative even when using relatively few bits and can be computed using simple intensity difference tests. Furthermore, the descriptor similarity can be evaluated using the Hamming distance, which is very efficient to compute, instead of the L2 norm as is usually done. As a result, BRIEF is very fast both to build and to match. We compare it against SURF and U-SURF on standard benchmarks and show that it yields a similar or better recognition performance, while running in a fraction of the time required by either.",
"This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance."
]
} |
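The BRIEF scheme summarized in the row above reduces to two operations: a bit string built from pairwise intensity comparisons, and matching via XOR plus popcount (Hamming distance). A minimal illustrative sketch follows; the pair-sampling pattern and patch layout here are simplified stand-ins, not the published implementation.

```python
def brief_descriptor(patch, pairs):
    """BRIEF-style bit string: one intensity comparison per sampled pixel pair.

    `patch` is a 2D list of intensities; `pairs` is a list of
    ((y1, x1), (y2, x2)) coordinate pairs (random in the original BRIEF).
    """
    bits = 0
    for i, ((y1, x1), (y2, x2)) in enumerate(pairs):
        if patch[y1][x1] < patch[y2][x2]:
            bits |= 1 << i
    return bits


def hamming(d1, d2):
    """Descriptor distance via XOR + popcount -- the fast matching step."""
    return bin(d1 ^ d2).count("1")
```

Matching a database of such descriptors then amounts to integer XORs, which is what makes BRIEF-style representations so much cheaper to compare than floating-point histograms.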
1501.03719 | 2950814836 | We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs; changes to either of the pixels can easily lead to changes in descriptor values, hence damaging its performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements. | Rather than random sampling or unsupervised learning of pairs, the Binary Robust Invariant Scalable Keypoints (BRISK) @cite_7 use hand-crafted, concentric ring-based sampling patterns. BRISK uses pixel pairs with large distances between them to compute the patch orientation, and pixel pairs separated by short distances to compute the values of the descriptor itself, again, by performing binary intensity comparisons on pixel pairs. More recently, inspired by the retinal patterns of the human eye, the Fast REtinA Keypoint descriptor (FREAK) was proposed. Similarly to BRISK, FREAK also uses a concentric rings arrangement, but unlike it, FREAK samples exponentially more points in the inner rings. 
Of all the possible pairs which may be sampled under these guidelines, FREAK, following ORB, uses unsupervised learning to choose an optimal set of point pairs. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2141584146"
],
"abstract": [
"Effective and efficient generation of keypoints from an image is a well-studied problem in the literature and forms the basis of numerous Computer Vision applications. Established leaders in the field are the SIFT and SURF algorithms which exhibit great performance under a variety of image transformations, with SURF in particular considered as the most computationally efficient amongst the high-performance methods to date. In this paper we propose BRISK1, a novel method for keypoint detection, description and matching. A comprehensive evaluation on benchmark datasets reveals BRISK's adaptive, high quality performance as in state-of-the-art algorithms, albeit at a dramatically lower computational cost (an order of magnitude faster than SURF in cases). The key to speed lies in the application of a novel scale-space FAST-based detector in combination with the assembly of a bit-string descriptor from intensity comparisons retrieved by dedicated sampling of each keypoint neighborhood."
]
} |
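The BRISK row above notes that long-distance pixel pairs are used for orientation and short-distance pairs for the descriptor bits. The orientation step can be sketched as an average of local gradient estimates over the long pairs; the sampling positions and intensity accessor below are hypothetical, chosen only to illustrate the idea.

```python
import math


def brisk_orientation(intensity, long_pairs):
    """BRISK-style orientation: average local gradient over long-distance pairs.

    `intensity` maps an (x, y) point to an intensity value; each pair
    (p1, p2) contributes a gradient estimate (I(p2) - I(p1)) * (p2 - p1) / |p2 - p1|^2.
    """
    gx = gy = 0.0
    for p1, p2 in long_pairs:
        d_i = intensity(p2) - intensity(p1)
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        norm2 = dx * dx + dy * dy
        gx += d_i * dx / norm2
        gy += d_i * dy / norm2
    return math.atan2(gy, gx)
```

The descriptor bits themselves are then produced exactly as in BRIEF, but on short-distance pairs rotated by this estimated angle.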
1501.03719 | 2950814836 | We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs; changes to either of the pixels can easily lead to changes in descriptor values, hence damaging its performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements. | Similar to BRIEF, the Local Difference Binary (LDB) descriptor was proposed in @cite_14 @cite_20 where instead of comparing smoothed intensities, mean intensities in grids of @math , @math or @math were compared. Also, in addition to the mean intensity values, LDB also compares the mean values of horizontal and vertical derivatives, amounting to 3 bits per comparison. Building upon LDB, the Accelerated-KAZE (A-KAZE) descriptor was suggested in @cite_4 where in addition to presenting a feature detector, the authors also suggest the Modified Local Difference Binary (M-LDB) descriptor. 
M-LDB rotates the LDB grid using the A-KAZE detector's orientation estimate to achieve rotation invariance, and sub-samples the grid in steps that are a function of the detector's feature-scale estimate. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_20"
],
"mid": [
"2095260419",
"2048710758",
"1988703599"
],
"abstract": [
"The efficiency, robustness and distinctiveness of a feature descriptor are critical to the user experience and scalability of a mobile Augmented Reality (AR) system. However, existing descriptors are either too compute-expensive to achieve real-time performance on a mobile device such as a smartphone or tablet, or not sufficiently robust and distinctive to identify correct matches from a large database. As a result, current mobile AR systems still only have limited capabilities, which greatly restrict their deployment in practice. In this paper, we propose a highly efficient, robust and distinctive binary descriptor, called Local Difference Binary (LDB). LDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. A multiple gridding strategy is applied to capture the distinct patterns of the patch at different spatial granularities. Experimental results demonstrate that LDB is extremely fast to compute and to match against a large database due to its high robustness and distinctiveness. Comparing to the state-of-the-art binary descriptor BRIEF, primarily designed for speed, LDB has similar computational efficiency, while achieves a greater accuracy and 5x faster matching speed when matching over a large database with 1.7M+ descriptors.",
"We propose a novel and fast multiscale feature detection and description approach that exploits the benefits of nonlinear scale spaces. Previous attempts to detect and describe features in nonlinear scale spaces such as KAZE [1] and BFSIFT [6] are highly time consuming due to the computational burden of creating the nonlinear scale space. In this paper we propose to use recent numerical schemes called Fast Explicit Diffusion (FED) [3, 4] embedded in a pyramidal framework to dramatically speed-up feature detection in nonlinear scale spaces. In addition, we introduce a Modified-Local Difference Binary (M-LDB) descriptor that is highly efficient, exploits gradient information from the nonlinear scale space, is scale and rotation invariant and has low storage requirements. Our features are called Accelerated-KAZE (A-KAZE) due to the dramatic speed-up introduced by FED schemes embedded in a pyramidal framework.",
"The efficiency and quality of a feature descriptor are critical to the user experience of many computer vision applications. However, the existing descriptors are either too computationally expensive to achieve real-time performance, or not sufficiently distinctive to identify correct matches from a large database with various transformations. In this paper, we propose a highly efficient and distinctive binary descriptor, called local difference binary (LDB). LDB directly computes a binary string for an image patch using simple intensity and gradient difference tests on pairwise grid cells within the patch. A multiple-gridding strategy and a salient bit-selection method are applied to capture the distinct patterns of the patch at different spatial granularities. Experimental results demonstrate that compared to the existing state-of-the-art binary descriptors, primarily designed for speed, LDB has similar construction efficiency, while achieving a greater accuracy and faster speed for mobile object recognition and tracking tasks."
]
} |
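The LDB row above describes comparing mean intensities and mean horizontal/vertical derivatives of grid cells, yielding 3 bits per cell pair. A simplified sketch of that idea follows (the grid handling and derivative estimates are illustrative; the published descriptor also uses multiple grid sizes and bit selection).

```python
def cell_stats(patch, gy, gx, n):
    """Mean intensity and mean horizontal/vertical derivative of one grid cell."""
    h, w = len(patch) // n, len(patch[0]) // n
    ys, xs = range(gy * h, (gy + 1) * h), range(gx * w, (gx + 1) * w)
    vals = [patch[y][x] for y in ys for x in xs]
    dx = [patch[y][x + 1] - patch[y][x] for y in ys for x in xs if x + 1 < len(patch[0])]
    dy = [patch[y + 1][x] - patch[y][x] for y in ys for x in xs if y + 1 < len(patch)]
    mean = lambda v: sum(v) / len(v) if v else 0.0
    return mean(vals), mean(dx), mean(dy)


def ldb_bits(patch, n=2):
    """LDB-style code: 3 comparison bits (intensity, dx, dy) per pair of cells."""
    cells = [cell_stats(patch, gy, gx, n) for gy in range(n) for gx in range(n)]
    bits = []
    for i in range(len(cells)):
        for j in range(i + 1, len(cells)):
            bits.extend(int(a > b) for a, b in zip(cells[i], cells[j]))
    return bits
```

With an n-by-n grid there are n²(n² - 1)/2 cell pairs, so a 2x2 grid already yields 18 bits; LDB concatenates several grid granularities.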
1501.03719 | 2950814836 | We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs; changes to either of the pixels can easily lead to changes in descriptor values, hence damaging its performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements. | A somewhat different descriptor design approach was proposed by @cite_19 . Their LDA-Hash representation extracts SIFT descriptors from the image, projects them to a more discriminant space and then thresholds the projected descriptors to obtain binary representations. Though the final representation is a binary descriptor, producing it requires extracting SIFT descriptors, making the representation slower than its pure binary alternatives. To alleviate some of this computational cost, the DBRIEF @cite_24 representation projects patch intensities directly. The projections are further computed as a linear combination of a small number of simple filters from a given dictionary. 
Finally, the BinBoost representation of @cite_1 @cite_26 also learns a set of hash functions that correspond to each bit in the final descriptor. Hash functions are learned using boosting and implemented as a sign operation on a linear combination of nonlinear weak classifiers, which are gradient-based image features. | {
"cite_N": [
"@cite_24",
"@cite_19",
"@cite_26",
"@cite_1"
],
"mid": [
"1689727326",
"2134514757",
"2170066474",
"2126338221"
],
"abstract": [
"Binary descriptors of image patches are increasingly popular given that they require less storage and enable faster processing. This, however, comes at a price of lower recognition performances. To boost these performances, we project the image patches to a more discriminative subspace, and threshold their coordinates to build our binary descriptor. However, applying complex projections to the patches is slow, which negates some of the advantages of binary descriptors. Hence, our key idea is to learn the discriminative projections so that they can be decomposed into a small number of simple filters for which the responses can be computed fast. We show that with as few as 32 bits per descriptor we outperform the state-of-the-art binary descriptors in terms of both accuracy and efficiency.",
"SIFT-like local feature descriptors are ubiquitously employed in computer vision applications such as content-based retrieval, video analysis, copy detection, object recognition, photo tourism, and 3D reconstruction. Feature descriptors can be designed to be invariant to certain classes of photometric and geometric transformations, in particular, affine and intensity scale transformations. However, real transformations that an image can undergo can only be approximately modeled in this way, and thus most descriptors are only approximately invariant in practice. Second, descriptors are usually high dimensional (e.g., SIFT is represented as a 128-dimensional vector). In large-scale retrieval and matching problems, this can pose challenges in storing and retrieving descriptor data. We map the descriptor vectors into the Hamming space in which the Hamming metric is used to compare the resulting representations. This way, we reduce the size of the descriptors by representing them as short binary strings and learn descriptor invariance from examples. We show extensive experimental validation, demonstrating the advantage of the proposed approach.",
"We propose a novel and general framework to learn compact but highly discriminative floating-point and binary local feature descriptors. By leveraging the boosting-trick we first show how to efficiently train a compact floating-point descriptor that is very robust to illumination and viewpoint changes. We then present the main contribution of this paper—a binary extension of the framework that demonstrates the real advantage of our approach and allows us to compress the descriptor even further. Each bit of the resulting binary descriptor, which we call BinBoost, is computed with a boosted binary hash function, and we show how to efficiently optimize the hash functions so that they are complementary, which is key to compactness and robustness. As we do not put any constraints on the weak learner configuration underlying each hash function, our general framework allows us to optimize the sampling patterns of recently proposed hand-crafted descriptors and significantly improve their performance. Moreover, our boosting scheme can easily adapt to new applications and generalize to other types of image data, such as faces, while providing state-of-the-art results at a fraction of the matching time and memory footprint.",
"Binary key point descriptors provide an efficient alternative to their floating-point competitors as they enable faster processing while requiring less memory. In this paper, we propose a novel framework to learn an extremely compact binary descriptor we call Bin Boost that is very robust to illumination and viewpoint changes. Each bit of our descriptor is computed with a boosted binary hash function, and we show how to efficiently optimize the different hash functions so that they complement each other, which is key to compactness and robustness. The hash functions rely on weak learners that are applied directly to the image patches, which frees us from any intermediate representation and lets us automatically learn the image gradient pooling configuration of the final descriptor. Our resulting descriptor significantly outperforms the state-of-the-art binary descriptors and performs similarly to the best floating-point descriptors at a fraction of the matching time and memory footprint."
]
} |
1501.03719 | 2950814836 | We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs; changes to either of the pixels can easily lead to changes in descriptor values, hence damaging its performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements. | Local binary patterns. In a separate line of work, the Local Binary Patterns (LBP) were proposed as global (whole image) representation by @cite_18 @cite_6 . Since their original release, they have been successfully applied to many image classification problems, most notably of texture and face images (e.g., @cite_3 and @cite_27 ). | {
"cite_N": [
"@cite_27",
"@cite_18",
"@cite_3",
"@cite_6"
],
"mid": [
"",
"1866173756",
"2163808566",
"2163352848"
],
"abstract": [
"",
"This paper presents generalizations to the gray scale and rotation invariant texture classification method based on local binary patterns that we have recently introduced. We derive a generalized presentation that allows for realizing a gray scale and rotation invariant LBP operator for any quantization of the angular space and for any spatial resolution, and present a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray scale variations, since the operator is by definition invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity, as the operator can be realized with a few operations in a small neighborhood and a lookup table. Excellent experimental results obtained in a true problem of rotation invariance, where the classifier is trained at one particular rotation angle and tested with samples from other rotation angles, demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns. These operators characterize the spatial configuration of local image texture and the performance can be further improved by combining them with rotation invariant variance measures that characterize the contrast of local image texture. The joint distributions of these orthogonal measures are shown to be very powerful tools for rotation invariant texture analysis.",
"This paper presents a novel and efficient facial image representation based on local binary pattern (LBP) texture features. The face image is divided into several regions from which the LBP feature distributions are extracted and concatenated into an enhanced feature vector to be used as a face descriptor. The performance of the proposed method is assessed in the face recognition problem under different challenges. Other applications and several extensions are also discussed",
"Presents a theoretically very simple, yet efficient, multiresolution approach to gray-scale and rotation invariant texture classification based on local binary patterns and nonparametric discrimination of sample and prototype distributions. The method is based on recognizing that certain local binary patterns, termed \"uniform,\" are fundamental properties of local image texture and their occurrence histogram is proven to be a very powerful texture feature. We derive a generalized gray-scale and rotation invariant operator presentation that allows for detecting the \"uniform\" patterns for any quantization of the angular space and for any spatial resolution and presents a method for combining multiple operators for multiresolution analysis. The proposed approach is very robust in terms of gray-scale variations since the operator is, by definition, invariant against any monotonic transformation of the gray scale. Another advantage is computational simplicity as the operator can be realized with a few operations in a small neighborhood and a lookup table. Experimental results demonstrate that good discrimination can be achieved with the occurrence statistics of simple rotation invariant local binary patterns."
]
} |
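The classic LBP operator referenced in the row above assigns each pixel an 8-bit code by thresholding its eight neighbors at the center intensity. A minimal sketch (fixed 3x3 neighborhood, no interpolation or multiresolution variants):

```python
def lbp_code(img, y, x):
    """Classic 8-neighbor LBP: threshold each neighbor at the center intensity."""
    c = img[y][x]
    nbrs = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1], img[y][x + 1],
            img[y + 1][x + 1], img[y + 1][x], img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for i, v in enumerate(nbrs):
        if v >= c:
            code |= 1 << i
    return code
```

The global representation then histograms these codes over image regions, which is what makes LBP invariant to any monotonic gray-scale transformation: only the ordering of intensities matters.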
1501.03719 | 2950814836 | We present a novel means of describing local image appearances using binary strings. Binary descriptors have drawn increasing interest in recent years due to their speed and low memory footprint. A known shortcoming of these representations is their inferior performance compared to larger, histogram based descriptors such as the SIFT. Our goal is to close this performance gap while maintaining the benefits attributed to binary representations. To this end we propose the Learned Arrangements of Three Patch Codes descriptors, or LATCH. Our key observation is that existing binary descriptors are at an increased risk from noise and local appearance variations. This, as they compare the values of pixel pairs; changes to either of the pixels can easily lead to changes in descriptor values, hence damaging its performance. In order to provide more robustness, we instead propose a novel means of comparing pixel patches. This ostensibly small change, requires a substantial redesign of the descriptors themselves and how they are produced. Our resulting LATCH representation is rigorously compared to state-of-the-art binary descriptors and shown to provide far better performance for similar computation and space requirements. | Our work is related to a particular LBP variant, the Three-Patch LBP (TPLBP) @cite_9 @cite_2 , which was shown to be an exceptionally potent global representation for face images @cite_13 . Unlike previous LBP code schemes, TPLBP computes 8-bit value codes by comparing not the intensities of pixel pairs, but rather the similarity of three pixel patches. Specifically, for every pixel in the image, TPLBP compares the pixel patch centered on the pixel, with eight pixel patches, evenly distributed on a ring at radius @math around the pixel. A single binary value is set following a comparison of the center patch to two patches, spaced @math degrees away from each other along the circle. 
A bit is set to 1 if the central patch is closer (in the SSD sense) to the first of these two patches, and to 0 otherwise. | {
"cite_N": [
"@cite_9",
"@cite_13",
"@cite_2"
],
"mid": [
"2098017479",
"",
"2652751060"
],
"abstract": [
"Computer vision systems have demonstrated considerable improvement in recognizing and verifying faces in digital images. Still, recognizing faces appearing in unconstrained, natural conditions remains a challenging task. In this paper, we present a face-image, pair-matching approach primarily developed and tested on the “Labeled Faces in the Wild” (LFW) benchmark that reflects the challenges of face recognition from unconstrained images. The approach we propose makes the following contributions. 1) We present a family of novel face-image descriptors designed to capture statistics of local patch similarities. 2) We demonstrate how unlabeled background samples may be used to better evaluate image similarities. To this end, we describe a number of novel, effective similarity measures. 3) We show how labeled background samples, when available, may further improve classification performance, by employing a unique pair-matching pipeline. We present state-of-the-art results on the LFW pair-matching benchmarks. In addition, we show our system to be well suited for multilabel face classification (recognition) problem, on both the LFW images and on images from the laboratory controlled multi-PIE database.",
"",
"Recent methods for learning similarity between images have presented impressive results in the problem of pair matching (same notsame classification) of face images. In this paper we explore how well this performance carries over to the related task of multi-option face identification, specifically on the Labeled Faces in the Wild (LFW) image set. In addition, we seek to compare the performance of similarity learning methods to descriptor based methods. We present the following results: (1) Descriptor-Based approaches that efficiently encode the appearance of each face image as a vector outperform the leading similarity based method in the task of multi-option face identification. (2) Straightforward use of Euclidean distance on the descriptor vectors performs somewhat worse than the similarity learning methods on the task of pair matching. (3) Adding a learning stage, the performance of descriptor based methods matches and exceeds that of similarity methods on the pair matching task. (4) A novel patch based descriptor we propose is able to improve the performance of the successful Local Binary Pattern (LBP) descriptor in both multi-option identification and same not-same classification."
]
} |
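The TPLBP construction in the row above can be sketched as follows. This is a simplified illustration: the angular spacing @math is expressed here as an index step along the ring, and the published descriptor additionally applies a small threshold to the SSD difference, which is omitted.

```python
import math


def ssd(p, q):
    """Sum of squared differences between two flattened patches."""
    return sum((a - b) ** 2 for a, b in zip(p, q))


def patch(img, y, x, w=3):
    """Flattened w-by-w patch centered on (y, x)."""
    h = w // 2
    return [img[y + dy][x + dx] for dy in range(-h, h + 1) for dx in range(-h, h + 1)]


def tplbp_code(img, y, x, r=4, s=8, alpha=2, w=3):
    """TPLBP-style 8-bit code: compare the center patch with pairs of ring
    patches spaced `alpha` ring positions apart (a stand-in for the
    alpha-degree spacing of the original descriptor)."""
    ring = []
    for i in range(s):
        t = 2 * math.pi * i / s
        ring.append(patch(img, y + round(r * math.sin(t)), x + round(r * math.cos(t)), w))
    cp = patch(img, y, x, w)
    code = 0
    for i in range(s):
        if ssd(cp, ring[i]) < ssd(cp, ring[(i + alpha) % s]):
            code |= 1 << i
    return code
```

Comparing patches rather than single pixels is exactly what gives the three-patch codes their robustness: a noisy pixel perturbs an SSD over w² values instead of flipping a bit outright.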
1501.03641 | 2949699615 | The concept of well group in a special but important case captures homological properties of the zero set of a continuous map @math on a compact space K that are invariant with respect to perturbations of f. The perturbations are arbitrary continuous maps within @math distance r from f for a given r>0. The main drawback of the approach is that the computability of well groups was shown only when dim K=n or n=1. Our contribution to the theory of well groups is twofold: on the one hand we improve on the computability issue, but on the other hand we present a range of examples where the well groups are incomplete invariants, that is, fail to capture certain important robust properties of the zero set. For the first part, we identify a computable subgroup of the well group that is obtained by cap product with the pullback of the orientation of R^n by f. In other words, well groups can be algorithmically approximated from below. When f is smooth and dim K<2n-2, our approximation of the (dim K-n)th well group is exact. For the second part, we find examples of maps @math with all well groups isomorphic but whose perturbations have different zero sets. We discuss on a possible replacement of the well groups of vector valued maps by an invariant of a better descriptive power and computability status. | Verification of zeros. An important topic in the interval computation community is the verification of the (non)existence of zeros of a given function @cite_3 . While the nonexistence can be often verified by interval arithmetic alone, a proof of existence requires additional methods which often include topological considerations. In the case of continuous maps @math , Miranda's or Borsuk's theorem can be used for zero verification @cite_22 @cite_15 , or the computation of the topological degree @cite_18 @cite_6 @cite_20 . 
When the assumptions of these tests are fulfilled, they not only yield a zero in @math but also a "robust" zero and a nontrivial @math th well group @math for some @math . Recently, topological degree has been used for simplification of vector fields @cite_5 . | {
"cite_N": [
"@cite_18",
"@cite_22",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_15",
"@cite_20"
],
"mid": [
"",
"2041977310",
"1483874187",
"2143247392",
"",
"154568157",
"2952554977"
],
"abstract": [
"",
"We show how interval arithmetic can be used in connection with Borsuk's theorem to computationally prove the existence of a solution of a system of nonlinear equations. It turns out that this new test, which can be checked computationally in several different ways, is more general than an existing test based on Miranda's theorem in the sense that it is successful for a larger set of situations. A numerical example is included.",
"Preface Symbol index 1. Basic properties of interval arithmetic 2. Enclosures for the range of a function 3. Matrices and sublinear mappings 4. The solution of square linear systems of equations 5. Nonlinear systems of equations 6. Hull computation References Author index Subject index.",
"In this note we give a new representation for closed sets under which the robust zero set of a function is computable. We call this representation the component cover representation. The computation of the zero set is based on topological index theory, the most powerful tool for finding robust solutions of equations.",
"",
"We show that the assumptions of the well-known Kantorovich theorem imply the assumptions of Miranda’s theorem, but not vice versa.",
"In this paper we consider a fragment of the first-order theory of the real numbers that includes systems of equations of continuous functions in bounded domains, and for which all functions are computable in the sense that it is possible to compute arbitrarily close piece-wise interval approximations. Even though this fragment is undecidable, we prove that there is a (possibly non-terminating) algorithm for checking satisfiability such that (1) whenever it terminates, it computes a correct answer, and (2) it always terminates when the input is robust. A formula is robust, if its satisfiability does not change under small perturbations. As a basic tool for our algorithm we use the notion of degree from the field of (differential) topology."
]
} |
1501.03208 | 2952050240 | Consider the problem of recovering an unknown signal from undersampled measurements, given the knowledge that the signal has a sparse representation in a specified dictionary @math . This problem is now understood to be well-posed and efficiently solvable under suitable assumptions on the measurements and dictionary, if the number of measurements scales roughly with the sparsity level. One sufficient condition for this is the @math -restricted isometry property ( @math -RIP), which asks that the sampling matrix approximately preserve the norm of all signals which are sufficiently sparse in @math . While many classes of random matrices are known to satisfy such conditions, such matrices are not representative of the structural constraints imposed by practical sensing systems. We close this gap in the theory by demonstrating that one can subsample a fixed orthogonal matrix in such a way that the @math -RIP will hold, provided this basis is sufficiently incoherent with the sparsifying dictionary @math . We also extend this analysis to allow for weighted sparse expansions. Consequently, we arrive at compressive sensing recovery guarantees for structured measurements and redundant dictionaries, opening the door to a wide array of practical applications. | Due to the abundance of relevant applications, a number of works have studied compressive sensing for overcomplete frames. The first work on this topic aimed to recover the coefficient vector @math directly, and thus required strong incoherence assumptions on the dictionary @math @cite_45 . More recently, it was noted that if one instead aims to recover @math rather than @math , recovery guarantees can be obtained under weaker assumptions. Namely, one only needs the measurement matrix @math to respect the norms of signals which are sparse in the dictionary @math . To quantify this, Candès et al. @cite_25 define the @math -restricted isometry property ( @math -RIP in short, see Definition below).
For measurement matrices that have this property, a number of algorithms have been shown to guarantee recovery under certain assumptions. Optimization approaches such as @math -analysis @cite_2 @cite_25 @cite_14 @cite_52 @cite_0 @cite_35 @cite_28 and greedy approaches @cite_56 @cite_14 @cite_11 @cite_71 @cite_69 have been studied. | {
"cite_N": [
"@cite_35",
"@cite_69",
"@cite_14",
"@cite_28",
"@cite_52",
"@cite_56",
"@cite_0",
"@cite_45",
"@cite_71",
"@cite_2",
"@cite_25",
"@cite_11"
],
"mid": [
"2953076353",
"2949964595",
"",
"",
"",
"",
"2137959902",
"2105877514",
"",
"",
"",
"2962817023"
],
"abstract": [
"This paper provides novel results for the recovery of signals from undersampled measurements based on analysis @math -minimization, when the analysis operator is given by a frame. We both provide so-called uniform and nonuniform recovery guarantees for cosparse (analysis-sparse) signals using Gaussian random measurement matrices. The nonuniform result relies on a recovery condition via tangent cones and the uniform recovery guarantee is based on an analysis version of the null space property. Examining these conditions for Gaussian random matrices leads to precise bounds on the number of measurements required for successful recovery. In the special case of standard sparsity, our result improves a bound due to Rudelson and Vershynin concerning the exact reconstruction of sparse signals from Gaussian measurements with respect to the constant and extends it to stability under passing to approximately sparse signals and to robustness under noise on the measurements.",
"Compressive sampling (CoSa) is a new methodology which demonstrates that sparse signals can be recovered from a small number of linear measurements. Greedy algorithms like CoSaMP have been designed for this recovery, and variants of these methods have been adapted to the case where sparsity is with respect to some arbitrary dictionary rather than an orthonormal basis. In this work we present an analysis of the so-called Signal Space CoSaMP method when the measurements are corrupted with mean-zero white Gaussian noise. We establish near-oracle performance for recovery of signals sparse in some arbitrary dictionary. In addition, we analyze the block variant of the method for signals whose supports obey a block structure, extending the method into the model-based compressed sensing framework. Numerical experiments confirm that the block method significantly outperforms the standard method in these settings.",
"",
"",
"",
"",
"Compressed sensing with sparse frame representations is seen to have a much greater range of practical applications than that with orthonormal bases. In such settings, one approach to recover the signal is known as l1-analysis. In this paper, we expand the performance analysis of this approach by providing a weaker recovery condition than existing results in the literature. Our analysis is also broadly based on general frames and alternative dual frames (as analysis operators). As one application of such a general-dual-based approach and performance analysis, an optimal-dual-based technique is proposed to demonstrate the effectiveness of using alternative dual frames as l1-analysis operators. An iterative algorithm is outlined for solving the optimal-dual-based l1-analysis problem. The effectiveness of the proposed method and algorithm is demonstrated through several experiments.",
"This paper extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix, which is a composition of a random matrix of a certain type and a deterministic dictionary, has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via basis pursuit (BP) from a small number of random measurements. Further, thresholding is investigated as a recovery algorithm for compressed sensing, and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared by numerical experiments.",
"",
"",
"",
"Abstract Compressive sampling (CoSa) has provided many methods for signal recovery of signals compressible with respect to an orthonormal basis. However, modern applications have sparked the emergence of approaches for signals not sparse in an orthonormal basis but in some arbitrary, perhaps highly overcomplete, dictionary. Recently, several “signal-space” greedy methods have been proposed to address signal recovery in this setting. However, such methods inherently rely on the existence of fast and accurate projections which allow one to identify the most relevant atoms in a dictionary for any given signal, up to a very strict accuracy. When the dictionary is highly overcomplete, no such projections are currently known; the requirements on such projections do not even hold for incoherent or well-behaved dictionaries. In this work, we provide an alternate analysis for signal space greedy methods which enforce assumptions on these projections which hold in several settings including those when the dictionary is incoherent or structurally coherent. These results align more closely with traditional results in the standard CoSa literature and improve upon previous work in the signal space setting."
]
} |
1501.03069 | 2949071195 | Many visual surveillance tasks, e.g. video summarisation, are conventionally accomplished through analysing imagery-based features. Relying solely on visual cues for public surveillance video understanding is unreliable, since visual observations obtained from public space CCTV video data are often not sufficiently trustworthy and events of interest can be subtle. On the other hand, non-visual data sources such as weather reports and traffic sensory signals are readily accessible but are not explored jointly to complement visual data for video content analysis and summarisation. In this paper, we present a novel unsupervised framework to learn jointly from both visual and independently-drawn non-visual data sources for discovering meaningful latent structure of surveillance video data. In particular, we investigate ways to cope with discrepant dimension and representation whilst associating these heterogeneous data sources, and derive an effective mechanism to tolerate missing and incomplete data from different sources. We show that the proposed multi-source learning framework not only achieves better video content clustering than state-of-the-art methods, but is also capable of accurately inferring missing non-visual semantics from previously unseen videos. In addition, a comprehensive user study is conducted to validate the quality of video summarisation generated using the proposed multi-source model. | - There exist studies that exploit different sensory or information modalities from a single source for data structure mining. For example, @cite_6 propose to perform multi-modal image clustering by learning a commonly shared graph-Laplacian matrix from different visual feature modalities. Heer and Chi @cite_17 linearly combine individual similarity matrices derived from multi-modal webpages for web user grouping. @cite_20 present a tensor-based model to cluster music items with additional tags.
In terms of video analysis, the auditory channel and/or transcripts have been widely explored for detecting semantic concepts from multimedia videos @cite_52 @cite_23 , summarising highlights in news and broadcast programs @cite_9 @cite_36 , or locating speakers @cite_30 . User tags associated with web videos (e.g. YouTube) have also been utilised @cite_16 @cite_28 @cite_53 . In contrast, surveillance videos captured from public spaces typically come without auditory signals, synchronised transcripts, or user tags. Instead, we wish to explore alternative non-visual data drawn independently from multiple sources, with the inherent challenges of being inaccurate, incomplete, unsynchronised with, and possibly in conflict with the observed visual data. | {
"cite_N": [
"@cite_30",
"@cite_36",
"@cite_28",
"@cite_9",
"@cite_53",
"@cite_52",
"@cite_6",
"@cite_23",
"@cite_16",
"@cite_20",
"@cite_17"
],
"mid": [
"2011699475",
"2072197653",
"2038219201",
"",
"",
"2150938184",
"2142109962",
"2003723718",
"1994694952",
"2294140729",
""
],
"abstract": [
"The problem of multimodal clustering arises whenever the data are gathered with several physically different sensors. Observations from different modalities are not necessarily aligned in the sense that there is no obvious way to associate or compare them in some common space. A solution may consist in considering multiple clustering tasks independently for each modality. The main difficulty with such an approach is to guarantee that the unimodal clusterings are mutually consistent. In this letter, we show that multimodal clustering can be addressed within a novel framework: conjugate mixture models. These models exploit the explicit transformations that are often available between an unobserved parameter space (objects) and each of the observation spaces (sensors). We formulate the problem as a likelihood maximization task and derive the associated conjugate expectation-maximization algorithm. The convergence properties of the proposed algorithm are thoroughly investigated. Several local and global optimization techniques are proposed in order to increase its convergence speed. Two initialization strategies are proposed and compared. A consistent model selection criterion is proposed. The algorithm and its variants are tested and evaluated within the task of 3D localization of several speakers using both auditory and visual data.",
"In this paper, we focus on video programs that are intended to disseminate information and knowledge such as news, documentaries, seminars, etc., and present an audiovisual summarization system that summarizes the audio and visual contents of the given video separately, and then integrates the two summaries with a partial alignment. The audio summary is created by selecting spoken sentences that best present the main content of the audio speech while the visual summary is created by eliminating duplicates and redundancies and preserving visually rich contents in the image stream. The alignment operation aims to synchronize each spoken sentence in the audio summary with its corresponding speaker's face and to preserve the rich content in the visual summary. A Bipartite Graph-based audiovisual alignment algorithm is developed to efficiently find the best alignment solution that satisfies these alignment requirements. With the proposed system, we strive to produce a video summary that: (1) provides a natural visual and audio content overview, and (2) maximizes the coverage for both audio and visual contents of the original video without having to sacrifice either of them.",
"We present a system that automatically recommends tags for YouTube videos solely based on their audiovisual content. We also propose a novel framework for unsupervised discovery of video categories that exploits knowledge mined from World-Wide Web text document searches. First, video-content-to-tag association is learned by training classifiers that map audiovisual content-based features from millions of videos on YouTube.com to existing uploader-supplied tags for these videos. When a new video is uploaded, the labels provided by these classifiers are used to automatically suggest tags deemed relevant to the video. Our system has learned a vocabulary of over 20,000 tags. Secondly, we mined large volumes of Web pages and search queries to discover a set of possible text entity categories and a set of associated is-A relationships that map individual text entities to categories. Finally, we apply these is-A relationships mined from web text on the tags learned from audiovisual content of videos to automatically synthesize a reliable set of categories most relevant to videos – along with a mechanism to predict these categories for new uploads. We then present rigorous rating studies that establish that: (a) the average relevance of tags automatically recommended by our system matches the average relevance of the uploader-supplied tags at the same or better coverage and (b) the average precision@K of video categories discovered by our system is 70% with K=5.",
"",
"",
"Data clustering is an important technique for visual data management. Most previous work focuses on clustering video data within single sources. We address the problem of clustering across sources, and propose novel spectral clustering algorithms for multisource clustering problems. Spectral clustering is a new discriminative method realizing clustering by partitioning data graphs. We represent multi-source data as bipartite or K-partite graphs, and investigate the spectral clustering algorithm under these representations. The algorithms are evaluated using the TRECVID-2003 corpus with semantic features extracted from speech transcripts and visual concept recognition results from videos. The experiments show that the proposed bipartite clustering algorithm significantly outperforms the regular spectral clustering algorithm in capturing cross-source associations.",
"In recent years, more and more visual descriptors have been proposed to describe objects and scenes appearing in images. Different features describe different aspects of the visual characteristics. How to combine these heterogeneous features has become an increasingly critical problem. In this paper, we propose a novel approach to integrate such heterogeneous features in an unsupervised manner by performing multi-modal spectral clustering on unlabeled and unsegmented images. Considering each type of feature as one modality, our new multi-modal spectral clustering (MMSC) algorithm learns a commonly shared graph Laplacian matrix by unifying the different modalities (image features). A non-negative relaxation is also added in our method to improve the robustness and efficiency of image clustering. We applied our MMSC method to integrate five types of popularly used image features, including SIFT, HOG, GIST, LBP, and CENTRIST, and evaluated the performance on two benchmark data sets: Caltech-101 and MSRC-v1. Compared with existing unsupervised scene and object categorization methods, our approach always achieves superior performance measured by three standard clustering evaluation metrics.",
"The rapid development of social media sharing has created a huge demand for automatic media classification and annotation techniques. Attribute learning has emerged as a promising paradigm for bridging the semantic gap and addressing data sparsity via transferring attribute knowledge in object recognition and relatively simple action classification. In this paper, we address the task of attribute learning for understanding multimedia data with sparse and incomplete labels. In particular, we focus on videos of social group activities, which are particularly challenging and topical examples of this task because of their multimodal content and complex and unstructured nature relative to the density of annotations. To solve this problem, we 1) introduce a concept of semilatent attribute space, expressing user-defined and latent attributes in a unified framework, and 2) propose a novel scalable probabilistic topic model for learning multimodal semilatent attributes, which dramatically reduces requirements for an exhaustive accurate attribute ontology and expensive annotation effort. We show that our framework is able to exploit latent attributes to outperform contemporary approaches for addressing a variety of realistic multimedia sparse data learning tasks including: multitask learning, learning with label noise, N-shot transfer learning, and importantly zero-shot learning.",
"Automatic categorization of videos in a Web-scale unconstrained collection such as YouTube is a challenging task. A key issue is how to build an effective training set in the presence of missing, sparse or noisy labels. We propose to achieve this by first manually creating a small labeled set and then extending it using additional sources such as related videos, searched videos, and text-based webpages. The data from such disparate sources has different properties and labeling quality, and thus fusing them in a coherent fashion is another practical challenge. We propose a fusion framework in which each data source is first combined with the manually-labeled set independently. Then, using the hierarchical taxonomy of the categories, a Conditional Random Field (CRF) based fusion strategy is designed. Based on the final fused classifier, category labels are predicted for the new videos. Extensive experiments on about 80K videos from the 29 most frequent categories in YouTube show the effectiveness of the proposed method for categorizing large-scale wild Web videos.",
"Social tagging is an increasingly popular phenomenon with substantial impact on Music Information Retrieval (MIR). Tags express the personal perspectives of the user on the music items (such as songs, artists, or albums) they tagged. These personal perspectives should be taken into account in MIR tasks that assess the similarity between music items. In this paper, we propose a novel approach for clustering music items represented in social tagging systems. Its characteristic is that it determines similarity between items by preserving the 3-way relationships among the inherent dimensions of the data, i.e., users, items, and tags. In contrast to existing approaches that use reductions to 2-way relationships (between items-users or items-tags), this characteristic allows the proposed algorithm to consider the personal perspectives of tags and to improve the clustering quality. Due to the complexity of social tagging data, we focus on spectral clustering, which has been proven effective in addressing complex data. However, existing spectral clustering algorithms work with 2-way relationships. To overcome this problem, we develop a novel data-modeling scheme and a tag-aware spectral clustering procedure that uses tensors (high-dimensional arrays) to store the multigraph structures that capture the personalised aspects of similarity. Experimental results with data from Last.fm indicate the superiority of the proposed method in terms of clustering quality over conventional spectral clustering approaches that consider only 2-way relationships.",
""
]
} |
1501.03069 | 2949071195 | Many visual surveillance tasks, e.g. video summarisation, are conventionally accomplished through analysing imagery-based features. Relying solely on visual cues for public surveillance video understanding is unreliable, since visual observations obtained from public space CCTV video data are often not sufficiently trustworthy and events of interest can be subtle. On the other hand, non-visual data sources such as weather reports and traffic sensory signals are readily accessible but are not explored jointly to complement visual data for video content analysis and summarisation. In this paper, we present a novel unsupervised framework to learn jointly from both visual and independently-drawn non-visual data sources for discovering meaningful latent structure of surveillance video data. In particular, we investigate ways to cope with discrepant dimension and representation whilst associating these heterogeneous data sources, and derive an effective mechanism to tolerate missing and incomplete data from different sources. We show that the proposed multi-source learning framework not only achieves better video content clustering than state-of-the-art methods, but is also capable of accurately inferring missing non-visual semantics from previously unseen videos. In addition, a comprehensive user study is conducted to validate the quality of video summarisation generated using the proposed multi-source model. | - Contemporary video summarisation methods can be broadly classified into two paradigms: key-frame-based @cite_32 @cite_39 @cite_29 @cite_19 @cite_44 and object-based @cite_49 @cite_27 @cite_26 methods. The key-frame-based approaches select representative key-frames by analysing low-level imagery properties, e.g. optical flow @cite_39 , image differences @cite_29 , or object appearance and motion @cite_32 , to form a storyboard of still images.
Object-based techniques @cite_49 @cite_27 , on the other hand, rely on object segmentation and tracking to extract object-centric trajectory tubes, and compress those tubes to reduce spatiotemporal redundancy. | {
"cite_N": [
"@cite_26",
"@cite_29",
"@cite_32",
"@cite_39",
"@cite_44",
"@cite_19",
"@cite_27",
"@cite_49"
],
"mid": [
"2126802797",
"",
"2106229755",
"",
"2117051369",
"",
"2145037218",
"2115060048"
],
"abstract": [
"The world is covered with millions of Webcams, many of which transmit everything in their field of view over the Internet 24 hours a day. A Web search finds public webcams in airports, intersections, classrooms, parks, shops, ski resorts, and more. Even more private surveillance cameras cover many private and public facilities. Webcams are an endless resource, but most of the video broadcast will be of little interest due to lack of activity. We propose to generate a short video that will be a synopsis of endless video streams, generated by webcams or surveillance cameras. We would like to address queries like "I would like to watch in one minute the highlights of this camera broadcast during the past day". The process includes two major phases: (i) An online conversion of the video stream into a database of objects and activities (rather than frames), (ii) A response phase, generating the video synopsis as a response to the user's query. To include maximum information in a short synopsis we simultaneously show activities that may have happened at different times. The synopsis video can also be used as an index into the original video stream.",
"",
"We present a video summarization approach for egocentric or “wearable” camera data. Given hours of video, the proposed method produces a compact storyboard summary of the camera wearer's day. In contrast to traditional keyframe selection techniques, the resulting summary focuses on the most important objects and people with which the camera wearer interacts. To accomplish this, we develop region cues indicative of high-level saliency in egocentric video — such as the nearness to hands, gaze, and frequency of occurrence — and learn a regressor to predict the relative importance of any new region based on these cues. Using these predictions and a simple form of temporal event detection, our method selects frames for the storyboard that reflect the key object-driven happenings. Critically, the approach is neither camera-wearer-specific nor object-specific; that means the learned importance metric need not be trained for a given user or context, and it can predict the importance of objects and people that have never been seen previously. Our results with 17 hours of egocentric data show the method's promise relative to existing techniques for saliency and summarization.",
"",
"Video summaries provide condensed and succinct representations of the content of a video stream through a combination of still images, video segments, graphical representations and textual descriptors. This paper presents a conceptual framework for video summarisation derived from the research literature and used as a means for surveying the research literature. The framework distinguishes between video summarisation techniques (the methods used to process content from a source video stream to achieve a summarisation of that stream) and video summaries (outputs of video summarisation techniques). Video summarisation techniques are considered within three broad categories: internal (analyse information sourced directly from the video stream), external (analyse information not sourced directly from the video stream) and hybrid (analyse a combination of internal and external information). Video summaries are considered as a function of the type of content they are derived from (object, event, perception or feature based) and the functionality offered to the user for their consumption (interactive or static, personalised or generic). It is argued that video summarisation would benefit from greater incorporation of external information, particularly user based information that is unobtrusively sourced, in order to overcome longstanding challenges such as the semantic gap and providing video summaries that have greater relevance to individual users.",
"",
"Explosive growth of surveillance video data presents formidable challenges to its browsing, retrieval and storage. Video synopsis, an innovation proposed by Peleg and his colleagues, is aimed at fast browsing by shortening the video into a synopsis while keeping activities in video captured by a camera. However, the current techniques are offline methods requiring that all the video data be ready for processing, and are expensive in time and space. In this paper, we propose an online and efficient solution, and its supporting algorithms to overcome the problems. The method adopts an online content-aware approach in a step-wise manner, hence applicable to endless video, with less computational cost. Moreover, we propose a novel tracking method, called sticky tracking, to achieve high-quality visualization. The system can achieve a faster-than-real-time speed with a multi-core CPU implementation. The advantages are demonstrated by extensive experiments with a wide variety of videos. The proposed solution and algorithms could be integrated with surveillance cameras, and impact the way that surveillance videos are recorded.",
"The amount of captured video is growing with the increased numbers of video cameras, especially the increase of millions of surveillance cameras that operate 24 hours a day. Since video browsing and retrieval is time consuming, most captured video is never watched or examined. Video synopsis is an effective tool for browsing and indexing of such a video. It provides a short video representation, while preserving the essential activities of the original video. The activity in the video is condensed into a shorter period by simultaneously showing multiple activities, even when they originally occurred at different times. The synopsis video is also an index into the original video by pointing to the original time of each activity. Video synopsis can be applied to create a synopsis of an endless video streams, as generated by Webcams and by surveillance cameras. It can address queries like \"show in one minute the synopsis of this camera broadcast during the past day''. This process includes two major phases: (i) an online conversion of the endless video stream into a database of objects and activities (rather than frames). (ii) A response phase, generating the video synopsis as a response to the user's query."
]
} |
1501.03069 | 2949071195 | Many visual surveillance tasks, e.g. video summarisation, are conventionally accomplished through analysing imagery-based features. Relying solely on visual cues for public surveillance video understanding is unreliable, since visual observations obtained from public space CCTV video data are often not sufficiently trustworthy and events of interest can be subtle. On the other hand, non-visual data sources such as weather reports and traffic sensory signals are readily accessible but are not explored jointly to complement visual data for video content analysis and summarisation. In this paper, we present a novel unsupervised framework to learn jointly from both visual and independently-drawn non-visual data sources for discovering meaningful latent structure of surveillance video data. In particular, we investigate ways to cope with discrepant dimension and representation whilst associating these heterogeneous data sources, and derive an effective mechanism to tolerate missing and incomplete data from different sources. We show that the proposed multi-source learning framework not only achieves better video content clustering than state-of-the-art methods, but is also capable of accurately inferring missing non-visual semantics from previously unseen videos. In addition, a comprehensive user study is conducted to validate the quality of video summarisation generated using the proposed multi-source model. | - Random forests @cite_45 @cite_51 have proven to be powerful models in the literature. Different variants of random forests have been devised, either supervised @cite_4 @cite_13 @cite_40 @cite_14 @cite_46 , or unsupervised @cite_21 @cite_41 @cite_34 @cite_33 @cite_54 . Supervised models are not suitable for our problem since we do not assume the availability of ground-truth labels during model training. Existing clustering forest models, on the other hand, assume only homogeneous data sources such as pure imagery-based features.
No principled way of combining multiple heterogeneous and independent data sources in forest models is available. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_33",
"@cite_41",
"@cite_54",
"@cite_21",
"@cite_40",
"@cite_45",
"@cite_51",
"@cite_46",
"@cite_34",
"@cite_13"
],
"mid": [
"",
"2060280062",
"",
"2021833436",
"",
"2072343647",
"1994635760",
"",
"",
"2128302979",
"2078863887",
"2125337786"
],
"abstract": [
"",
"We propose a new method to quickly and accurately predict human pose---the 3D positions of body joints---from a single depth image, without depending on information from preceding frames. Our approach is strongly rooted in current object recognition strategies. By designing an intermediate representation in terms of body parts, the difficult pose estimation problem is transformed into a simpler per-pixel classification problem, for which efficient machine learning techniques exist. By using computer graphics to synthesize a very large dataset of training image pairs, one can train a classifier that estimates body part labels from test images invariant to pose, body shape, clothing, and other irrelevances. Finally, we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding local modes. The system runs in under 5ms on the Xbox 360. Our evaluation shows high accuracy on both synthetic and real test sets, and investigates the effect of several training parameters. We achieve state-of-the-art accuracy in our comparison with related work and demonstrate improved generalization over exact whole-skeleton nearest neighbor matching.",
"",
"A random forest (RF) predictor is an ensemble of individual tree predictors. As part of their construction, RF predictors naturally lead to a dissimilarity measure between the observations. One can also define an RF dissimilarity measure between unlabeled data: the idea is to construct an RF predictor that distinguishes the “observed” data from suitably generated synthetic data. The observed data are the original unlabeled data and the synthetic data are drawn from a reference distribution. Here we describe the properties of the RF dissimilarity and make recommendations on how to use it in practice. An RF dissimilarity can be attractive because it handles mixed variable types well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. The RF dissimilarity easily deals with a large number of variables due to its intrinsic variable selection; for example, the Addcl 1 RF dissimilarity weighs the contribution of each variable according to how dependent it is ...",
"",
"",
"We present Alternating Regression Forests (ARFs), a novel regression algorithm that learns a Random Forest by optimizing a global loss function over all trees. This interrelates the information of single trees during the training phase and results in more accurate predictions. ARFs can minimize any differentiable regression loss without sacrificing the appealing properties of Random Forests, like low computational complexity during both, training and testing. Inspired by recent developments for classification [19], we derive a new algorithm capable of dealing with different regression loss functions, discuss its properties and investigate the relations to other methods like Boosted Trees. We evaluate ARFs on standard machine learning benchmarks, where we observe better generalization power compared to both standard Random Forests and Boosted Trees. Moreover, we apply the proposed regressor to two computer vision applications: object detection and head pose estimation from depth images. ARFs outperform the Random Forest baselines in both tasks, illustrating the importance of optimizing a common loss function for all trees.",
"",
"",
"In this paper we perform an empirical evaluation of supervised learning on high-dimensional data. We evaluate performance on three metrics: accuracy, AUC, and squared loss and study the effect of increasing dimensionality on the performance of the learning algorithms. Our findings are consistent with previous studies for problems of relatively low dimension, but suggest that as dimensionality increases the relative performance of the learning algorithms changes. To our surprise, the method that performs consistently well across all dimensions is random forests, followed by neural nets, boosted trees, and SVMs.",
"This paper considers the problem of clustering large data sets in a high-dimensional space. Using a random forest, we first generate multiple partitions of the same input space, one per tree. The partitions from all trees are merged by intersecting them, resulting in a partition of higher resolution. A graph is then constructed by assigning a node to each region and linking adjacent nodes. This Graph of Superimposed Partitions (GSP) represents a remapped space of the input data where regions of high density are mapped to a larger number of nodes. Generating such a graph turns the clustering problem in the feature space into a graph clustering task which we solve with the Markov cluster algorithm (MCL). The proposed algorithm is able to capture non-convex structure while being computationally efficient, capable of dealing with large data sets. We show the clustering performance on synthetic data and apply the method to the task of video segmentation.",
"The paper introduces Hough forests, which are random forests adapted to perform a generalized Hough transform in an efficient way. Compared to previous Hough-based systems such as implicit shape models, Hough forests improve the performance of the generalized Hough transform for object detection on a categorical level. At the same time, their flexibility permits extensions of the Hough transform to new domains such as object tracking and action recognition. Hough forests can be regarded as task-adapted codebooks of local appearance that allow fast supervised training and fast matching at test time. They achieve high detection accuracy since the entries of such codebooks are optimized to cast Hough votes with small variance and since their efficiency permits dense sampling of local image patches or video cuboids during detection. The efficacy of Hough forests for a set of computer vision tasks is validated through experiments on a large set of publicly available benchmark data sets and comparisons with the state-of-the-art."
]
} |
1501.03336 | 1668953478 | In recent processor development, we have witnessed the integration of GPUs and CPUs into a single chip. The result of this integration is a reduction of the data communication overheads. This enables an efficient collaboration of both devices in the execution of parallel workloads. In this work, we focus on the problem of efficiently scheduling chunks of iterations of parallel loops among the computing devices on the chip (the GPU and the CPU cores) in the context of irregular applications. In particular, we analyze the sources of overhead that the host thread experiences when a chunk of iterations is offloaded to the GPU while other threads are concurrently executing other chunks on the CPU cores. We carefully study these overheads on different processor architectures and operating systems using Barnes Hut as a case study representative of irregular applications. We also propose a set of optimizations to mitigate the overheads that arise in the presence of oversubscription and take advantage of the different features of the heterogeneous architectures. Thanks to these optimizations we reduce the Energy-Delay Product (EDP) by 18% and 84% on Intel Ivy Bridge and Haswell architectures, respectively, and by 57% on the Exynos big.LITTLE. | The closest work to ours is that of @cite_15 , which addresses the problem of performance degradation when several independent OpenCL programs run at the same time (co-run) on the CPU and on the GPU of an Ivy Bridge using the Windows OS. The programs running on the CPU use all cores, so they are in a situation similar to our 4+1 configurations (oversubscription). To avoid degradation of the GPU kernel, they also propose increasing the priority of the thread that launches the GPU kernel. Our study differs from theirs because we do not run two different programs; instead, we partition the iteration space of a single program to exploit both the CPU and the GPU.
Our study also shows that increasing the priority of the host thread is not necessary when there is no oversubscription (i.e. 3+1) or when the underlying OS is Linux. We also assess the use of a big.LITTLE architecture. | {
"cite_N": [
"@cite_15"
],
"mid": [
"813884746"
],
"abstract": [
"Co-runs of independent applications on systems with heterogeneous processors are common (data centers, mobile devices, etc.). There has been limited understanding of the influence of co-runners on such systems. Previous studies on this topic are on simulators with limited settings."
]
} |
1501.03336 | 1668953478 | In recent processor development, we have witnessed the integration of GPUs and CPUs into a single chip. The result of this integration is a reduction of the data communication overheads. This enables an efficient collaboration of both devices in the execution of parallel workloads. In this work, we focus on the problem of efficiently scheduling chunks of iterations of parallel loops among the computing devices on the chip (the GPU and the CPU cores) in the context of irregular applications. In particular, we analyze the sources of overhead that the host thread experiences when a chunk of iterations is offloaded to the GPU while other threads are concurrently executing other chunks on the CPU cores. We carefully study these overheads on different processor architectures and operating systems using Barnes Hut as a case study representative of irregular applications. We also propose a set of optimizations to mitigate the overheads that arise in the presence of oversubscription and take advantage of the different features of the heterogeneous architectures. Thanks to these optimizations we reduce the Energy-Delay Product (EDP) by 18% and 84% on Intel Ivy Bridge and Haswell architectures, respectively, and by 57% on the Exynos big.LITTLE. | Other works such as @cite_0 @cite_12 also address the overhead problems while offloading computation to GPUs. The work of Lustig and Martonosi @cite_0 presents a GPU hardware extension coupled with a software API that aims at reducing two sources of overhead: data transfers and kernel launching. They use a Full/Empty Bits technique to improve data staging and synchronization in CPU-GPU communication. This technique allows subsets of data results to be transferred to the CPU proactively, rather than waiting for the entire kernel to finish. @cite_4 propose several host code optimizations (use of zero-copy buffers, global work size equal to multiples of #EUs) in order to reduce the GPU's computation overheads on embedded GPUs.
They present a comparison in terms of performance and energy consumption between a legacy OpenCL version and an optimized OpenCL one. Our work focuses on reducing the sources of overhead as well, but, in contrast, we target CPU-GPU collaborative computation instead of only the integrated GPU. | {
"cite_N": [
"@cite_0",
"@cite_4",
"@cite_12"
],
"mid": [
"1979717209",
"2142805836",
""
],
"abstract": [
"GPUs are seeing increasingly widespread use for general purpose computation due to their excellent performance for highly-parallel, throughput-oriented applications. For many workloads, however, the performance benefits of offloading are hindered by the large and unpredictable overheads of launching GPU kernels and of transferring data between CPU and GPU.",
"A lot of effort from academia and industry has been invested in exploring the suitability of low-power embedded technologies for HPC. Although state-of-the-art embedded systems-on-chip (SoCs) inherently contain GPUs that could be used for HPC, their performance and energy capabilities have never been evaluated. Two reasons contribute to the above. Primarily, embedded GPUs, until now, have not supported 64-bit floating point arithmetic - a requirement for HPC. Secondly, embedded GPUs did not provide support for parallel programming languages such as OpenCL and CUDA. However, the situation is changing, and the latest GPUs integrated in embedded SoCs do support 64-bit floating point precision and parallel programming models. In this paper, we analyze performance and energy advantages of embedded GPUs for HPC. In particular, we analyze the ARM Mali-T604 GPU - the first embedded GPU with OpenCL Full Profile support. We identify, implement and evaluate software optimization techniques for efficient utilization of the ARM Mali GPU Compute Architecture. Our results show that HPC benchmarks running on the ARM Mali-T604 GPU integrated into the Exynos 5250 SoC, on average, achieve a speed-up of 8.7X over a single Cortex-A15 core, while consuming only 32% of the energy. Overall results show that embedded GPUs have performance and energy qualities that make them candidates for future HPC systems.",
""
]
} |
1501.03336 | 1668953478 | In recent processor development, we have witnessed the integration of GPUs and CPUs into a single chip. The result of this integration is a reduction of the data communication overheads. This enables an efficient collaboration of both devices in the execution of parallel workloads. In this work, we focus on the problem of efficiently scheduling chunks of iterations of parallel loops among the computing devices on the chip (the GPU and the CPU cores) in the context of irregular applications. In particular, we analyze the sources of overhead that the host thread experiences when a chunk of iterations is offloaded to the GPU while other threads are concurrently executing other chunks on the CPU cores. We carefully study these overheads on different processor architectures and operating systems using Barnes Hut as a case study representative of irregular applications. We also propose a set of optimizations to mitigate the overheads that arise in the presence of oversubscription and take advantage of the different features of the heterogeneous architectures. Thanks to these optimizations we reduce the Energy-Delay Product (EDP) by 18% and 84% on Intel Ivy Bridge and Haswell architectures, respectively, and by 57% on the Exynos big.LITTLE. | Several previous works study the problem of automatically scheduling on heterogeneous platforms with a multicore and an integrated or discrete GPU @cite_11 @cite_13 @cite_3 @cite_1 @cite_16 @cite_7 @cite_5 . Among those works, the only one that also uses chips with integrated GPUs is Concord @cite_5 . However, Concord does not analyze the overheads incurred by offloading a chunk of iterations to the GPU.
"cite_N": [
"@cite_7",
"@cite_1",
"@cite_3",
"@cite_5",
"@cite_16",
"@cite_13",
"@cite_11"
],
"mid": [
"1993338334",
"1996632297",
"2095668076",
"2169049902",
"2104008467",
"2031553682",
"2150476673"
],
"abstract": [
"Today's heterogeneous architectures bring together multiple general-purpose CPUs and multiple domain-specific GPUs and FPGAs to provide dramatic speedup for many applications. However, the challenge lies in utilizing these heterogeneous processors to optimize overall application performance by minimizing workload completion time. Operating system and application development for these systems are in their infancy. In this article, we propose a new scheduling and workload balancing scheme, HDSS, for execution of loops having dependent or independent iterations on heterogeneous multiprocessor systems. The new algorithm dynamically learns the computational power of each processor during an adaptive phase and then schedules the remainder of the workload using a weighted self-scheduling scheme during the completion phase. Different from previous studies, our scheme uniquely considers the runtime effects of block sizes on the performance for heterogeneous multiprocessors. It finds the right trade-off between large and small block sizes to maintain balanced workload while keeping the accelerator utilization at maximum. Our algorithm does not require offline training or architecture-specific parameters. We have evaluated our scheme on two different heterogeneous architectures: AMD 64-core Bulldozer system with nVidia Fermi C2050 GPU and Intel Xeon 32-core SGI Altix 4700 supercomputer with Xilinx Virtex 4 FPGAs. The experimental results show that our new scheduling algorithm can achieve performance improvements up to over 200% when compared to the closest existing load balancing scheme. Our algorithm also achieves full processor utilization with all processors completing at nearly the same time which is significantly better than alternative current approaches.",
"The race for Exascale computing has naturally led the current technologies to converge to multi-CPU multi-GPU computers, based on thousands of CPUs and GPUs interconnected by PCI-Express buses or interconnection networks. To exploit this high computing power, programmers have to solve the issue of scheduling parallel programs on hybrid architectures. And, since the performance of a GPU increases at a much faster rate than the throughput of a PCI bus, data transfers must be managed efficiently by the scheduler. This paper targets multi-GPU compute nodes, where several GPUs are connected to the same machine. To overcome the data transfer limitations on such platforms, the available software computes, usually before the execution, a mapping of the tasks that respects their dependencies and minimizes the global data transfers. Such an approach is too rigid and it cannot adapt the execution to possible variations of the system or to the application's load. We propose a solution that is orthogonal to the above mentioned: extensions of the Xkaapi software stack that enable exploiting the full performance of a multi-GPU system through asynchronous GPU tasks. Xkaapi schedules tasks by using a standard Work Stealing algorithm and the runtime efficiently exploits concurrent GPU operations. The runtime extensions make it possible to overlap the data transfers and the task executions on the current generation of GPUs. We demonstrate that the overlapping capability is at least as important as computing a scheduling decision to reduce the completion time of a parallel program. Our experiments on two dense linear algebra problems (Matrix Product and Cholesky factorization) show that our solution is highly competitive with other software based on static scheduling. Moreover, we are able to sustain the peak performance (approx. 310 GFlop/s) on DGEMM, even for matrices that cannot be stored entirely in one GPU memory. With eight GPUs, we achieve a speed-up of 6.74 with respect to a single GPU. The performance of our Cholesky factorization, with more complex dependencies between tasks, outperforms the state-of-the-art single-GPU MAGMA code.",
"To fully tap into the potential of heterogeneous machines composed of multicore processors and multiple accelerators, simple offloading approaches in which the main trunk of the application runs on regular cores while only specific parts are offloaded on accelerators are not sufficient. The real challenge is to build systems where the application would permanently spread across the entire machine, that is, where parallel tasks would be dynamically scheduled over the full set of available processing units. To face this challenge, we previously proposed StarPU, a runtime system capable of scheduling tasks over multicore machines equipped with GPU accelerators. StarPU uses a software virtual shared memory (VSM) that provides a highlevel programming interface and automates data transfers between processing units so as to enable a dynamic scheduling of tasks. We now present how we have extended StarPU to minimize the cost of transfers between processing units in order to efficiently cope with multi-GPU hardware configurations. To this end, our runtime system implements data prefetching based on asynchronous data transfers, and uses data transfer cost prediction to influence the decisions taken by the task scheduler. We demonstrate the relevance of our approach by benchmarking two parallel numerical algorithms using our runtime system. We obtain significant speedups and high efficiency over multicore machines equipped with multiple accelerators. We also evaluate the behaviour of these applications over clusters featuring multiple GPUs per node, showing how our runtime system can combine with MPI.",
"Many processors today integrate a CPU and GPU on the same die, which allows them to share resources like physical memory and lowers the cost of CPU-GPU communication. As a consequence, programmers can effectively utilize both the CPU and GPU to execute a single application. This paper presents novel adaptive scheduling techniques for integrated CPU-GPU processors. We present two online profiling-based scheduling algorithms: naive and asymmetric. Our asymmetric scheduling algorithm uses low-overhead online profiling to automatically partition the work of data-parallel kernels between the CPU and GPU without input from application developers. It does profiling on the CPU and GPU in a way that it doesn't penalize GPU-centric workloads that run significantly faster on the GPU. It adapts to application characteristics by addressing: 1) load imbalance via irregularity caused by, e.g., data-dependent control flow, 2) different amounts of work on each kernel call, and 3) multiple kernels with different characteristics. Unlike many existing approaches primarily targeting NVIDIA discrete GPUs, our scheduling algorithm does not require offline processing. We evaluate our asymmetric scheduling algorithm on a desktop system with an Intel 4th generation Core processor using a set of sixteen regular and irregular workloads from diverse application areas. On average, our asymmetric scheduling algorithm performs within 3.2% of the maximum throughput with a perfect CPU-and-GPU oracle that always chooses the ideal work partitioning between the CPU and GPU. These results underscore the feasibility of online profile-based heterogeneous scheduling on integrated CPU-GPU processors.",
"Clusters of GPUs are emerging as a new computational scenario. Programming them requires the use of hybrid models that increase the complexity of the applications, reducing the productivity of programmers. We present the implementation of OmpSs for clusters of GPUs, which supports asynchrony and heterogeneity for task parallelism. It is based on annotating a serial application with directives that are translated by the compiler. With it, the same program that runs sequentially in a node with a single GPU can run in parallel in multiple GPUs either local (single node) or remote (cluster of GPUs). Besides performing a task-based parallelization, the runtime system moves the data as needed between the different nodes and GPUs minimizing the impact of communication by using affinity scheduling, caching, and by overlapping communication with the computational task. We show several applications programmed with OmpSs and their performance with multiple GPUs in a local node and in remote nodes. The results show good tradeoff between performance and effort from the programmer.",
"A trend that has materialized, and has given rise to much attention, is of the increasingly heterogeneous computing platforms. Recently, it has become very common for a desktop or a notebook computer to be equipped with both a multi-core CPU and a GPU. Application development for exploiting the aggregate computing power of such an environment is a major challenge today. Particularly, we need dynamic work distribution schemes that are adaptable to different computation and communication patterns in applications, and to various heterogeneous configurations. This paper describes a general dynamic scheduling framework for mapping applications with different communication patterns to heterogeneous architectures. We first make key observations about the architectural tradeoffs among heterogeneous resources and the communication pattern of an application, and then infer constraints for the dynamic scheduler. We then present a novel cost model for choosing the optimal chunk size in a heterogeneous configuration. Finally, based on general framework and cost model we provide optimized work distribution schemes to further improve the performance.",
"Heterogeneous multiprocessors are increasingly important in the multi-core era due to their potential for high performance and energy efficiency. In order for software to fully realize this potential, the step that maps computations to processing elements must be as automated as possible. However, the state-of-the-art approach is to rely on the programmer to specify this mapping manually and statically. This approach is not only labor intensive but also not adaptable to changes in runtime environments like problem sizes and hardware/software configurations. In this study, we propose adaptive mapping, a fully automatic technique to map computations to processing elements on a CPU+GPU machine. We have implemented it in our experimental heterogeneous programming system called Qilin. Our results show that, by judiciously distributing work over the CPU and GPU, automatic adaptive mapping achieves a 25% reduction in execution time and a 20% reduction in energy consumption compared to static mappings on average for a set of important computation benchmarks. We also demonstrate that our technique is able to adapt to changes in the input problem size and system configuration."
]
} |
1501.02330 | 1576483002 | Job scheduling for a MapReduce cluster has been an active research topic in recent years. However, measurement traces from real-world production environments show that the durations of tasks within a job vary widely. The overall elapsed time of a job, i.e. the so-called flowtime, is often dictated by one or a few slowly-running tasks within a job, generally referred to as the "stragglers". The causes of stragglers include tasks running on partially/intermittently failing machines or the existence of some localized resource bottleneck(s) within a MapReduce cluster. To tackle this online job scheduling challenge, we adopt the task cloning approach and design the corresponding scheduling algorithms, which aim at minimizing the weighted sum of job flowtimes in a MapReduce cluster based on the Shortest Remaining Processing Time scheduler (SRPT). To be more specific, we first design a 2-competitive offline algorithm when the variance of task-duration is negligible. We then extend this offline algorithm to yield the so-called SRPTMS+C algorithm for the online case and show that SRPTMS+C is @math @math in reducing the weighted sum of job flowtimes within a cluster. Both of the algorithms explicitly consider the precedence constraints between the two phases within the MapReduce framework. We also demonstrate via trace-driven simulations that SRPTMS+C can significantly reduce the weighted/unweighted sum of job flowtimes by cutting down the elapsed time of small jobs substantially. In particular, SRPTMS+C beats the Microsoft Mantri scheme by nearly 25% according to this metric. | The straggler problem was first identified in the original MapReduce paper @cite_19 . Since then, various solutions have been proposed to deal with it using the Straggler-Detection-based speculative execution strategy @cite_24 @cite_13 @cite_30 @cite_17 . These solutions mainly focus on promptly identifying stragglers and accurately predicting the performance of running tasks.
One fundamental limitation is that detection may be too late to help small jobs, as it needs to wait to collect enough samples while monitoring the progress of tasks. To avoid the extra delay caused by straggler detection, a cloning approach was proposed in @cite_25 . This approach relies on cloning every small job in a greedy manner to mitigate the straggler effect and is based on simple heuristics. In contrast, we develop an optimization framework to make clones for each arriving job. Recently, @cite_0 presents GRASS, which carefully adopts the Detection-based approach to trim stragglers for approximation jobs. GRASS also provides a unified solution for normal jobs. However, one limitation is that it only prioritizes the tasks within a job, and it remains a problem to prioritize different jobs (i.e., the scheduler is not optimized and is unknown to the readers). | {
"cite_N": [
"@cite_30",
"@cite_24",
"@cite_19",
"@cite_0",
"@cite_13",
"@cite_25",
"@cite_17"
],
"mid": [
"",
"2100830825",
"2173213060",
"1750643891",
"1861377444",
"1903497807",
"2040722314"
],
"abstract": [
"",
"Dryad is a general-purpose distributed execution engine for coarse-grain data-parallel applications. A Dryad application combines computational \"vertices\" with communication \"channels\" to form a dataflow graph. Dryad runs the application by executing the vertices of this graph on a set of available computers, communicating as appropriate through files, TCP pipes, and shared-memory FIFOs. The vertices provided by the application developer are quite simple and are usually written as sequential programs with no thread creation or locking. Concurrency arises from Dryad scheduling vertices to run simultaneously on multiple computers, or on multiple CPU cores within a computer. The application can discover the size and placement of data at run time, and modify the graph as the computation progresses to make efficient use of the available resources. Dryad is designed to scale from powerful multi-core single computers, through small clusters of computers, to data centers with thousands of computers. The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.",
"MapReduce is a programming model and an associated implementation for processing and generating large datasets that is amenable to a broad variety of real-world tasks. Users specify the computation in terms of a map and a reduce function, and the underlying runtime system automatically parallelizes the computation across large-scale clusters of machines, handles machine failures, and schedules inter-machine communication to make efficient use of the network and disks. Programmers find the system easy to use: more than ten thousand distinct MapReduce programs have been implemented internally at Google over the past four years, and an average of one hundred thousand MapReduce jobs are executed on Google's clusters every day, processing a total of more than twenty petabytes of data per day.",
"In big data analytics, timely results, even if based on only part of the data, are often good enough. For this reason, approximation jobs, which have deadline or error bounds and require only a subset of their tasks to complete, are projected to dominate big data workloads. Straggler tasks are an important hurdle when designing approximate data analytic frameworks, and the widely adopted approach to deal with them is speculative execution. In this paper, we present GRASS, which carefully uses speculation to mitigate the impact of stragglers in approximation jobs. GRASS's design is based on first principles analysis of the impact of speculation. GRASS delicately balances immediacy of improving the approximation goal with the long term implications of using extra resources for speculation. Evaluations with production workloads from Facebook and Microsoft Bing in an EC2 cluster of 200 nodes show that GRASS increases accuracy of deadline-bound jobs by 47% and speeds up error-bound jobs by 38%. GRASS's design also speeds up exact computations (zero error-bound), making it a unified solution for straggler mitigation.",
"MapReduce is emerging as an important programming model for large-scale data-parallel applications such as web indexing, data mining, and scientific simulation. Hadoop is an open-source implementation of MapReduce enjoying wide adoption and is often used for short jobs where low response time is critical. Hadoop's performance is closely tied to its task scheduler, which implicitly assumes that cluster nodes are homogeneous and tasks make progress linearly, and uses these assumptions to decide when to speculatively re-execute tasks that appear to be stragglers. In practice, the homogeneity assumptions do not always hold. An especially compelling setting where this occurs is a virtualized data center, such as Amazon's Elastic Compute Cloud (EC2). We show that Hadoop's scheduler can cause severe performance degradation in heterogeneous environments. We design a new scheduling algorithm, Longest Approximate Time to End (LATE), that is highly robust to heterogeneity. LATE can improve Hadoop response times by a factor of 2 in clusters of 200 virtual machines on EC2.",
"Small jobs, that are typically run for interactive data analyses in datacenters, continue to be plagued by disproportionately long-running tasks called stragglers. In the production clusters at Facebook and Microsoft Bing, even after applying state-of-the-art straggler mitigation techniques, these latency sensitive jobs have stragglers that are on average 8 times slower than themedian task in that job. Such stragglers increase the average job duration by 47 . This is because current mitigation techniques all involve an element of waiting and speculation. We instead propose full cloning of small jobs, avoiding waiting and speculation altogether. Cloning of small jobs only marginally increases utilization because workloads show that while the majority of jobs are small, they only consume a small fraction of the resources. The main challenge of cloning is, however, that extra clones can cause contention for intermediate data. We use a technique, delay assignment, which efficiently avoids such contention. Evaluation of our system, Dolly, using production workloads shows that the small jobs speedup by 34 to 46 after state-of-the-artmitigation techniques have been applied, using just 5 extra resources for cloning.",
"MapReduce is a widely used parallel computing framework for large scale data processing. The two major performance metrics in MapReduce are job execution time and cluster throughput. They can be seriously impacted by straggler machines-machines on which tasks take an unusually long time to finish. Speculative execution is a common approach for dealing with the straggler problem by simply backing up those slow running tasks on alternative machines. Multiple speculative execution strategies have been proposed, but they have some pitfalls: (i) Use average progress rate to identify slow tasks while in reality the progress rate can be unstable and misleading, (ii) Cannot appropriately handle the situation when there exists data skew among the tasks, (iii) Do not consider whether backup tasks can finish earlier when choosing backup worker nodes. In this paper, we first present a detailed analysis of scenarios where existing strategies cannot work well. Then we develop a new strategy, maximum cost performance (MCP), which improves the effectiveness of speculative execution significantly. To accurately and promptly identify stragglers, we provide the following methods in MCP: (i) Use both the progress rate and the process bandwidth within a phase to select slow tasks, (ii) Use exponentially weighted moving average (EWMA) to predict process speed and calculate a task's remaining time, (iii) Determine which task to backup based on the load of a cluster using a cost-benefit model. To choose proper worker nodes for backup tasks, we take both data locality and data skew into consideration. We evaluate MCP in a cluster of 101 virtual machines running a variety of applications on 30 physical servers. Experiment results show that MCP can run jobs up to 39 percent faster and improve the cluster throughput by up to 44 percent compared to Hadoop-0.21."
]
} |
1501.02330 | 1576483002 | Job scheduling for a MapReduce cluster has been an active research topic in recent years. However, measurement traces from real-world production environments show that the duration of tasks within a job vary widely. The overall elapsed time of a job, i.e. the so-called flowtime, is often dictated by one or few slowly-running tasks within a job, generally referred to as the "stragglers". The causes of stragglers include tasks running on partially/intermittently failing machines or the existence of some localized resource bottleneck(s) within a MapReduce cluster. To tackle this online job scheduling challenge, we adopt the task cloning approach and design the corresponding scheduling algorithms which aim at minimizing the weighted sum of job flowtimes in a MapReduce cluster based on the Shortest Remaining Processing Time scheduler (SRPT). To be more specific, we first design a 2-competitive offline algorithm when the variance of task-duration is negligible. We then extend this offline algorithm to yield the so-called SRPTMS+C algorithm for the online case and show that SRPTMS+C is @math @math in reducing the weighted sum of job flowtimes within a cluster. Both of the algorithms explicitly consider the precedence constraints between the two phases within the MapReduce framework. We also demonstrate via trace-driven simulations that SRPTMS+C can significantly reduce the weighted/unweighted sum of job flowtimes by cutting down the elapsed time of small jobs substantially. In particular, SRPTMS+C beats the Microsoft Mantri scheme by nearly 25% according to this metric. | Finally, the SRPT scheduler has been studied extensively in traditional parallel scheduling literature. In particular, SRPT has proven to be @math @math for total flowtime on @math identical machines under the single task case @cite_10 . In this paper, we extend the SRPT scheduler to yield an online scheduler which can mitigate stragglers as well. | {
"cite_N": [
"@cite_10"
],
"mid": [
"2951082421"
],
"abstract": [
"Due to its optimality on a single machine for the problem of minimizing average flow time, Shortest-Remaining-Processing-Time ( ) appears to be the most natural algorithm to consider for the problem of minimizing average flow time on multiple identical machines. It is known that @math achieves the best possible competitive ratio on multiple machines up to a constant factor. Using resource augmentation, @math is known to achieve total flow time at most that of the optimal solution when given machines of speed @math . Further, it is known that @math 's competitive ratio improves as the speed increases; @math is @math -speed @math -competitive when @math . However, a gap has persisted in our understanding of @math . Before this work, the performance of @math was not known when @math is given @math -speed when @math . We complement this by showing that @math is @math -speed @math -competitive for the objective of minimizing the @math -norms of flow time on @math identical machines. Both of our results rely on new potential functions that capture the structure of . Our results, combined with previous work, show that @math is the best possible online algorithm in essentially every aspect when migration is permissible."
]
} |
1501.02223 | 347197115 | The exploitation of the mm-wave bands is one of the most promising solutions for 5G mobile radio networks. However, the use of mm-wave technologies in cellular networks is not straightforward due to mm-wave harsh propagation conditions that limit access availability. In order to overcome this obstacle, hybrid network architectures are being considered where mm-wave small cells can exploit an overlay coverage layer based on legacy technology. The additional mm-wave layer can also take advantage of a functional split between control and user plane, that allows to delegate most of the signaling functions to legacy base stations and to gather context information from users for resource optimization. However, mm-wave technology requires high gain antenna systems to compensate for high path loss and limited power, e.g., through the use of multiple antennas for high directivity. Directional transmissions must be also used for the cell discovery and synchronization process, and this can lead to a non-negligible delay due to the need to scan the cell area with multiple transmissions at different directions. In this paper, we propose to exploit the context information related to user position, provided by the separated control plane, to improve the cell discovery procedure and minimize delay. We investigate the fundamental trade-offs of the cell discovery process with directional antennas and the effects of the context information accuracy on its performance. Numerical results are provided to validate our observations. | The challenges brought in by the use of directional antennas for C-plane functions have been studied in the past at lower frequency in ad-hoc wireless network scenarios @cite_3 . Considering mm-wave access for Wireless Personal Area Networks (WPANs), devices are assumed to have omnidirectional sensing capabilities, while increasing their directivity towards incoming signals @cite_9 . 
In addition to that, only 360-degree scanning is used to discover neighbors. In this context, algorithms for BF tracking @cite_6 and route deviation to get around obstacles have been proposed @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_6",
"@cite_3"
],
"mid": [
"2157725608",
"2166486216",
"2163516538"
],
"abstract": [
"We present a cross-layer modeling and design approach for multigigabit indoor wireless personal area networks (WPANs) utilizing the unlicensed millimeter (mm) wave spectrum in the 60 GHz band. Our approach accounts for the following two characteristics that sharply distinguish mm wave networking from that at lower carrier frequencies. First, mm wave links are inherently directional: directivity is required to overcome the higher path loss at smaller wavelengths, and it is feasible with compact, low-cost circuit board antenna arrays. Second, indoor mm wave links are highly susceptible to blockage because of the limited ability to diffract around obstacles such as the human body and furniture. We develop a diffraction-based model to determine network link connectivity as a function of the locations of stationary and moving obstacles. For a centralized WPAN controlled by an access point, it is shown that multihop communication, with the introduction of a small number of relay nodes, is effective in maintaining network connectivity in scenarios where single-hop communication would suffer unacceptable outages. The proposed multihop MAC protocol accounts for the fact that every link in the WPAN is highly directional, and is shown, using packet level simulations, to maintain high network utilization with low overhead.",
"In order to realize high speed, long range, reliable transmission in millimeter-wave 60 GHz wireless personal area networks (60 GHz WPANs), we propose a beamforming (BF) protocol realized in media access control (MAC) layer on top of multiple physical layer (PHY) designs. The proposed BF protocol targets to minimize the BF set-up time and to mitigate the high path loss of 60 GHz WPAN systems. It consists of 3 stages, namely the device (DEV) to DEV linking, sector-level searching and beam-level searching. The division of the stages facilitates significant reduction in setup time as compared to BF protocols with exhaustive searching mechanisms. The proposed BF protocol employs discrete phase-shifters, which significantly simplifies the structure of DEVs as compared to the conventional BF with phase-and-amplitude adjustment, at the expense of a gain degradation of less than 1 dB. The proposed BF protocol is a complete design and PHY-independent, it is applicable to different antenna configurations. Simulation results show that the setup time of the proposed BF protocol is as small as 2 when compared to the exhaustive searching protocol. Furthermore, based on the codebooks with four phases per element, around 15.1 dB gain is achieved by using eight antenna elements at both transmitter and receiver, thereby enabling 1.6 Gbps-data-streaming over a range of three meters. Due to the flexibility in supporting multiple PHY layer designs, the proposed protocol has been adopted by the IEEE 802.15.3c as an optional functionality to realize Gbps communication systems.",
"Many MAC sub-layer protocols for supporting the usage of directional antennas in ad hoc networks have been proposed in literature. However, there remain two open issues that are yet to be resolved completely. First, in order to fully exploit the spatial diversity gains possible due to the use of directional antennas, it is essential to shift to the exclusive usage of directional antennas for the transmission and reception of all the MAC layer frames. This would facilitate maximal spatial reuse and will efface the phenomena of asymmetry in gain. Second, in the presence of mobility the MAC protocol should incorporate mechanisms by which a node can efficiently discover and track its neighbors. In this paper we propose PMAC, a new MAC protocol that addresses both the issues in an integrated way. PMAC incorporates an efficient mechanism for neighbor discovery, and a scheduling based medium sharing that allows for exclusive directional transmissions and receptions. We perform analysis and simulations to understand the performance of our scheme. We find that each node, on average, can achieve a per node utilization of about 80 in static and about 45 in mobile scenarios. In terms of throughput, our protocol is seen to outperform both the traditional IEEE 802.11 and previously proposed MAC protocols for use with directional antennas in ad hoc networks"
]
} |
1501.02223 | 347197115 | The exploitation of the mm-wave bands is one of the most promising solutions for 5G mobile radio networks. However, the use of mm-wave technologies in cellular networks is not straightforward due to mm-wave harsh propagation conditions that limit access availability. In order to overcome this obstacle, hybrid network architectures are being considered where mm-wave small cells can exploit an overlay coverage layer based on legacy technology. The additional mm-wave layer can also take advantage of a functional split between control and user plane, that allows to delegate most of the signaling functions to legacy base stations and to gather context information from users for resource optimization. However, mm-wave technology requires high gain antenna systems to compensate for high path loss and limited power, e.g., through the use of multiple antennas for high directivity. Directional transmissions must be also used for the cell discovery and synchronization process, and this can lead to a non-negligible delay due to the need to scan the cell area with multiple transmissions at different directions. In this paper, we propose to exploit the context information related to user position, provided by the separated control plane, to improve the cell discovery procedure and minimize delay. We investigate the fundamental trade-offs of the cell discovery process with directional antennas and the effects of the context information accuracy on its performance. Numerical results are provided to validate our observations. | The cell discovery problem in mm-wave cellular networks has been considered in @cite_12 and @cite_2 . In @cite_2 , authors show that there is a mismatch between the area where the network is discoverable (C-plane range) and the area where the mm-wave service is available (U-plane range). 
In @cite_12 , the cell discovery in directional mm-wave cellular networks is addressed from the physical layer point of view, and the problem of designing the best detector is investigated. | {
"cite_N": [
"@cite_12",
"@cite_2"
],
"mid": [
"779733492",
"2049501010"
],
"abstract": [
"The acute disparity between increasing bandwidth demand and available spectrum has brought millimeter wave (mmWave) bands to the forefront of candidate solutions for the next-generation cellular networks. Highly directional transmissions are essential for cellular communication in these frequencies to compensate for higher isotropic path loss. This reliance on directional beamforming, however, complicates initial cell search since mobiles and base stations must jointly search over a potentially large angular directional space to locate a suitable path to initiate communication. To address this problem, this paper proposes a directional cell discovery procedure where base stations periodically transmit synchronization signals, potentially in time-varying random directions, to scan the angular space. Detectors for these signals are derived based on a Generalized Likelihood Ratio Test (GLRT) under various signal and receiver assumptions. The detectors are then simulated under realistic design parameters and channels based on actual experimental measurements at 28 GHz in New York City. The study reveals two key findings: 1) digital beamforming can significantly outperform analog beamforming even when digital beamforming uses very low quantization to compensate for the additional power requirements and 2) omnidirectional transmissions of the synchronization signals from the base station generally outperform random directional scanning.",
"Communication in millimeter wave (mmWave) spectrum has gained an increasing interests for tackling the spectrum crunch problem and meeting the high network capacity demand in 4G and beyond. Considering the channel characteristics of mmWave bands, it can be fit into heterogeneous networks (HetNet) for boosting local-area data rate. In this paper, we investigate the challenges in deploying an anchor-booster based HetNet with mmWave capable booster cells. We show that due to the channel characteristics of mmWave bands, there could be a mismatch between the discoverable coverage area of booster cell at mmWave band and the actual supportable coverage area. Numerical results are provided in validating the observation. We suggest possible ways in addressing the coverage mismatch problem. This work provides insights on the deployment and implementation challenges in mmWave capable HetNets."
]
} |
1501.02134 | 1982914465 | Human computation is a computing approach that draws upon human cognitive abilities to solve computational tasks for which there are so far no satisfactory fully automated solutions even when using the most advanced computing technologies available. Human computation for citizen science projects consists in designing systems that allow large crowds of volunteers to contribute to scientific research by executing human computation tasks. Examples of successful projects are Galaxy Zoo and FoldIt. A key feature of this kind of project is its capacity to engage volunteers. An important requirement for the proposal and evaluation of new engagement strategies is having a clear understanding of the typical engagement of the volunteers; however, even though several projects of this kind have already been completed, little is known about this issue. In this paper, we investigate the engagement pattern of the volunteers in their interactions in human computation for citizen science projects, how they differ among themselves in terms of engagement, and how those volunteer engagement features should be taken into account for establishing the engagement encouragement strategies that should be brought into play in a given project. To this end, we define four quantitative engagement metrics to measure different aspects of volunteer engagement, and use data mining algorithms to identify the different volunteer profiles in terms of the engagement metrics. Our study is based on data collected from two projects: Galaxy Zoo and The Milky Way Project. The results show that the volunteers in such projects can be grouped into five distinct engagement profiles that we label as follows: hardworking, spasmodic, persistent, lasting, and moderate. The analysis of these profiles provides a deeper understanding of the nature of volunteers' engagement in human computation for citizen science projects. 
| The subject of human engagement has been studied within a variety of disciplines, such as education @cite_8 , management science @cite_6 and computer science @cite_51 . Some studies make an attempt to conceptualize the term engagement from an interdisciplinary perspective @cite_45 @cite_5 @cite_51 @cite_6 @cite_31 @cite_3 . A consensus that emerges from these studies is that engagement means to participate in any enterprise by self-investing personal resources, such as time, physical energy, and cognitive power. | {
"cite_N": [
"@cite_8",
"@cite_6",
"@cite_3",
"@cite_45",
"@cite_5",
"@cite_31",
"@cite_51"
],
"mid": [
"2048191365",
"2112232916",
"2029389635",
"",
"1997858829",
"1969110260",
"2048395990"
],
"abstract": [
"",
"Abstract Objectives Engagement at work has emerged as a potentially important employee performance and organizational management topic, however, the definition and measurement of engagement at work, and more specifically, nurse engagement, is poorly understood. The objective of this paper is to examine the current state of knowledge about engagement at work through a review of the literature. This review highlights the four lines of engagement research and focuses on the determinants and consequences of engagement at work. Methodological issues, as identified in the current research, and recommendations for future nurse-based engagement research are provided. Design A systematic review of the business, organizational psychology, and health sciences and health administration literature about engagement at work (1990–2007) was performed. Data sources The electronic databases for Health Sciences and Health Administration (CINAHL, MEDLINE), Business (ABI INFORM), and Psychology (PsycINFO) were systematically searched. Review methods Due to the limited amount of research that has examined engagement among the nursing workforce, published research that included varying employee types were included in this review. The selection criteria for this review include those studies that were: (1) written in English and (2) examined engagement at work in employee populations of any type within any work setting. Results The literature review identified four distinct lines of research that has focused on engagement within the employee work role. Of the 32 engagement-based articles referenced in this paper, a sample of 20 studies report on the examination of antecedents and or consequences of engagement at work among varying employee types and work settings. Key findings suggest organizational factors versus individual contributors significantly impact engagement at work. A common implication in this body of research was that of the performance-based impact. 
Conclusions The study of nurses' work engagement and its relationship to nurses' organizational behavior, including work performance and healthcare organizational outcomes can be achieved by first building upon a conceptually consistent definition and measurement of work engagement. Future research is needed to provide nurse leaders with a better understanding of how nurse work engagement impacts organizational outcomes, including quality of care indicators.",
"We study how the visual catchiness (saliency) of relevant information impacts user engagement metrics such as focused attention and emotion (affect). Participants completed tasks in one of two conditions, where the task-relevant information either appeared salient or non-salient. Our analysis provides insights into relationships between saliency, focused attention, and affect. Participants reported more distraction in the non-salient condition, and non-salient information was slower to find than salient. Lack-of-saliency led to a negative impact on affect, while saliency maintained positive affect, suggesting its helpfulness. Participants reported that it was easier to focus in the salient condition, although there was no significant improvement in the focused attention scale rating. Finally, this study suggests user interest in the topic is a good predictor of focused attention, which in turn is a good predictor of positive affect. These results suggest that enhancing saliency of user-interested topics seems a good strategy for boosting user engagement.",
"",
"Purpose - This paper aims to provide an overview of the recently introduced concept of work engagement. Design methodology approach - Qualitative and quantitative studies on work engagement are reviewed to uncover the manifestation of engagement, and reveal its antecedents and consequences. Findings - Work engagement can be defined as a state including vigor, dedication, and absorption. Job and personal resources are the main predictors of engagement; these resources gain their salience in the context of high job demands. Engaged workers are more creative, more productive, and more willing to go the extra mile. Originality value - The findings of previous studies are integrated in an overall model that can be used to develop work engagement and advance career development in today's workplace.",
"Our research goal is to provide a better understanding of how users engage with online services, and how to measure this engagement. We should not speak of one main approach to measure user engagement --- e.g. through one fixed set of metrics --- because engagement depends on the online services at hand. Instead, we should be talking of models of user engagement. As a first step, we analysed a number of online services, and show that it is possible to derive effectively simple models of user engagement, for example, accounting for user types and temporal aspects. This paper provides initial insights into engagement patterns, allowing for a better understanding of the important characteristics of how users repeatedly interact with a service or group of services.",
"The purpose of this article is to critically deconstruct the term engagement as it applies to peoples' experiences with technology. Through an extensive, critical multidisciplinary literature review and exploratory study of users of Web searching, online shopping, Webcasting, and gaming applications, we conceptually and operationally defined engagement. Building on past research, we conducted semistructured interviews with the users of four applications to explore their perception of being engaged with the technology. Results indicate that engagement is a process comprised of four distinct stages: point of engagement, period of sustained engagement, disengagement, and reengagement. Furthermore, the process is characterized by attributes of engagement that pertain to the user, the system, and user-system interaction. We also found evidence of the factors that contribute to nonengagement. Emerging from this research is a definition of engagement—a term not defined consistently in past work—as a quality of user experience characterized by attributes of challenge, positive affect, endurability, aesthetic and sensory appeal, attention, feedback, variety-novelty, interactivity, and perceived user control. This exploratory work provides the foundation for future work to test the conceptual model in various application areas, and to develop methods to measure engaging user experiences. © 2008 Wiley Periodicals, Inc."
]
} |
1501.02134 | 1982914465 | Human computation is a computing approach that draws upon human cognitive abilities to solve computational tasks for which there are so far no satisfactory fully automated solutions even when using the most advanced computing technologies available. Human computation for citizen science projects consists in designing systems that allow large crowds of volunteers to contribute to scientific research by executing human computation tasks. Examples of successful projects are Galaxy Zoo and FoldIt. A key feature of this kind of project is its capacity to engage volunteers. An important requirement for the proposal and evaluation of new engagement strategies is having a clear understanding of the typical engagement of the volunteers; however, even though several projects of this kind have already been completed, little is known about this issue. In this paper, we investigate the engagement pattern of the volunteers in their interactions in human computation for citizen science projects, how they differ among themselves in terms of engagement, and how those volunteer engagement features should be taken into account for establishing the engagement encouragement strategies that should be brought into play in a given project. To this end, we define four quantitative engagement metrics to measure different aspects of volunteer engagement, and use data mining algorithms to identify the different volunteer profiles in terms of the engagement metrics. Our study is based on data collected from two projects: Galaxy Zoo and The Milky Way Project. The results show that the volunteers in such projects can be grouped into five distinct engagement profiles that we label as follows: hardworking, spasmodic, persistent, lasting, and moderate. The analysis of these profiles provides a deeper understanding of the nature of volunteers' engagement in human computation for citizen science projects. 
| The type of engagement is defined by the kind of personal resources and skills that humans invest in performing an activity. Examples of types of engagement are social engagement @cite_23 and cognitive engagement @cite_25 . Social engagement refers to actions that require humans to interact with others. It is widely studied in areas such as online social networks and communities @cite_52 @cite_42 . Cognitive engagement refers to actions that require mainly human cognitive effort. It has been widely addressed in educational psychology and work engagement @cite_8 @cite_6 . | {
"cite_N": [
"@cite_8",
"@cite_42",
"@cite_52",
"@cite_6",
"@cite_23",
"@cite_25"
],
"mid": [
"2048191365",
"2129500531",
"323338930",
"2112232916",
"2070085990",
"2076532645"
],
"abstract": [
"",
"One of the most challenging problems facing builders and facilitators of community networks is to create and sustain social engagement among members. In this paper, we investigate the drivers of social engagement in a community network through the analysis of three data sources: activity logs, a member survey, and the content analysis of the conversation archives. We describe three important ways to encourage and support social engagement in online communities: through system design elements such as conversation channeling and event notification, by various selection criteria for community members, and through facilitation of specific kinds of discussion topics.",
"Online communities are among the most popular destinations on the Internet, but not all online communities are equally successful. For every flourishing Facebook, there is a moribund Friendster--not to mention the scores of smaller social networking sites that never attracted enough members to be viable. This book offers lessons from theory and empirical research in the social sciences that can help improve the design of online communities. The social sciences can tell us much about how to make online communities thrive, offering theories of individual motivation and human behavior that, properly interpreted, can inform particular design choices for online communities. The authors draw on the literature in psychology, economics, and other social sciences, as well as their own research, translating general findings into useful design claims. They explain, for example, how to encourage information contributions based on the theory of public goods, and how to build members' commitment based on theories of interpersonal bond formation. For each design claim, they offer supporting evidence from theory, experiments, or observational studies.The book focuses on five high-level design challenges: starting a new community, attracting new members, encouraging commitment, encouraging contribution, and regulating misbehavior and conflict. By organizing their presentation around these fundamental design features, the authors encourage practitioners to consider alternatives rather than simply adapting a feature seen on other sites.",
"Abstract Objectives Engagement at work has emerged as a potentially important employee performance and organizational management topic, however, the definition and measurement of engagement at work, and more specifically, nurse engagement, is poorly understood. The objective of this paper is to examine the current state of knowledge about engagement at work through a review of the literature. This review highlights the four lines of engagement research and focuses on the determinants and consequences of engagement at work. Methodological issues, as identified in the current research, and recommendations for future nurse-based engagement research are provided. Design A systematic review of the business, organizational psychology, and health sciences and health administration literature about engagement at work (1990–2007) was performed. Data sources The electronic databases for Health Sciences and Health Administration (CINAHL, MEDLINE), Business (ABI INFORM), and Psychology (PsycINFO) were systematically searched. Review methods Due to the limited amount of research that has examined engagement among the nursing workforce, published research that included varying employee types were included in this review. The selection criteria for this review include those studies that were: (1) written in English and (2) examined engagement at work in employee populations of any type within any work setting. Results The literature review identified four distinct lines of research that has focused on engagement within the employee work role. Of the 32 engagement-based articles referenced in this paper, a sample of 20 studies report on the examination of antecedents and or consequences of engagement at work among varying employee types and work settings. Key findings suggest organizational factors versus individual contributors significantly impact engagement at work. A common implication in this body of research was that of the performance-based impact. 
Conclusions The study of nurses' work engagement and its relationship to nurses' organizational behavior, including work performance and healthcare organizational outcomes can be achieved by first building upon a conceptually consistent definition and measurement of work engagement. Future research is needed to provide nurse leaders with a better understanding of how nurse work engagement impacts organizational outcomes, including quality of care indicators.",
"Abstract: This article focuses on the importance of social engagement and the behavioral and neurophysiological mechanisms that allow individuals to reduce psychological and physical distance. A model of social engagement derived from the Polyvagal Theory is presented. The model emphasizes phylogeny as an organizing principle and includes the following points: (1) there are well-defined neural circuits to support social engagement behaviors and the defensive strategies of fight, flight, and freeze; (2) these neural circuits form a phylogenetically organized hierarchy; (3) without being dependent on conscious awareness, the nervous system evaluates risk in the environment and regulates the expression of adaptive behavior to match the neuroception of a safe, dangerous, or life-threatening environment; (4) social engagement behaviors and the benefits of the physiological states associated with social support require a neuroception of safety; (5) social behaviors associated with nursing, reproduction, and the formation of strong pair bonds require immobilization without fear; and (6) immobilization without fear is mediated by a co-opting of the neural circuit regulating defensive freezing behaviors through the involvement of oxytocin, a neuropeptide in mammals involved in the formation of social bonds. The model provides a phylogenetic interpretation of the neural mechanisms mediating the behavioral and physiological features associated with stress and several psychiatric disorders.",
"The article analyzes the concept of student cognitive engagement, and the manner in which classroom instruction may develop self‐regulated learners. Since theory and research on academic motivation, to date only vaguely define the role of learning processes, and since studies of learning strategies rarely assess motivational outcomes, our analysis integrates these two streams of literature. We also identify specific features of instruction and discuss how they might influence the complex of student interpretive processes focal to classroom learning and motivation. Measurement issues and research strategies peculiar to the investigation of cognitive engagement are addressed."
]
} |
1501.02134 | 1982914465 | Human computation is a computing approach that draws upon human cognitive abilities to solve computational tasks for which there are so far no satisfactory fully automated solutions even when using the most advanced computing technologies available. Human computation for citizen science projects consists in designing systems that allow large crowds of volunteers to contribute to scientific research by executing human computation tasks. Examples of successful projects are Galaxy Zoo and FoldIt. A key feature of this kind of project is its capacity to engage volunteers. An important requirement for the proposal and evaluation of new engagement strategies is having a clear understanding of the typical engagement of the volunteers; however, even though several projects of this kind have already been completed, little is known about this issue. In this paper, we investigate the engagement pattern of the volunteers in their interactions in human computation for citizen science projects, how they differ among themselves in terms of engagement, and how those volunteer engagement features should be taken into account for establishing the engagement encouragement strategies that should be brought into play in a given project. To this end, we define four quantitative engagement metrics to measure different aspects of volunteer engagement, and use data mining algorithms to identify the different volunteer profiles in terms of the engagement metrics. Our study is based on data collected from two projects: Galaxy Zoo and The Milky Way Project. The results show that the volunteers in such projects can be grouped into five distinct engagement profiles that we label as follows: hardworking, spasmodic, persistent, lasting, and moderate. The analysis of these profiles provides a deeper understanding of the nature of volunteers' engagement in human computation for citizen science projects. 
| The dimensions of engagement presented in the last section are helpful for framing the previous studies of engagement. There is an extensive body of work dealing with engagement in technology-mediated social participation systems @cite_36 such as wiki-based systems @cite_11 @cite_33 @cite_34 @cite_57 @cite_0 @cite_29 @cite_58 @cite_22 @cite_39 , open source software projects @cite_4 @cite_29 , and human computation for citizen science projects @cite_12 @cite_32 @cite_43 @cite_18 @cite_38 . | {
"cite_N": [
"@cite_38",
"@cite_18",
"@cite_4",
"@cite_33",
"@cite_22",
"@cite_36",
"@cite_29",
"@cite_58",
"@cite_32",
"@cite_39",
"@cite_57",
"@cite_0",
"@cite_43",
"@cite_34",
"@cite_12",
"@cite_11"
],
"mid": [
"2033354470",
"1558395318",
"",
"",
"",
"",
"2103152872",
"1967964230",
"276677300",
"",
"2001534825",
"2150461333",
"",
"",
"",
""
],
"abstract": [
"In most online citizen science projects, a large proportion of participants contribute in small quantities. To investigate how low contributors differ from committed volunteers, we distributed a survey to members of the Old Weather project, followed by interviews with respondents selected according to a range of contribution levels. The studies reveal a complex relationship between motivations and contribution. Whilst high contributors were deeply engaged by social or competitive features, low contributors described a solitary experience of 'dabbling' in projects for short periods. Since the majority of participants exhibit this small-scale contribution pattern, there is great potential value in designing interfaces to tempt lone workers to complete 'just another page', or to lure early drop-outs back into participation. This includes breaking the work into components which can be tackled without a major commitment of time and effort, and providing feedback on the quality and value of these contributions.",
"We present studies of the attention and time, or engagement, invested by crowd workers on tasks. Consideration of worker engagement is especially important in volunteer settings such as online citizen science. Using data from Galaxy Zoo, a prominent citizen science project, we design and construct statistical models that provide predictions about the forthcoming engagement of volunteers. We characterize the accuracy of predictions with respect to different sets of features that describe user behavior and study the sensitivity of predictions to variations in the amount of data and retraining. We design our model for guiding system actions in real-time settings, and discuss the prospect for harnessing predictive models of engagement to enhance user attention and effort on volunteer tasks.",
"",
"",
"",
"",
"Wikipedia is often considered as an example of ‘collaborative knowledge’. Researchers have contested the value of Wikipedia content on various accounts. Some have disputed the ability of anonymous amateurs to produce quality information, while others have contested Wikipedia’s claim to accuracy and neutrality. Even if these concerns about Wikipedia as an encyclopaedic genre are relevant, they misguidedly focus on human agents only. Wikipedia’s advance is not only enabled by its human resources, but is equally defined by the technological tools and managerial dynamics that structure and maintain its content. This article analyses the sociotechnical system — the intricate collaboration between human users and automated content agents — that defines Wikipedia as a knowledge instrument.",
"The quality of Wikipedia articles is debatable. On the one hand, existing research indicates that not only are people willing to contribute articles but the quality of these articles is close to that found in conventional encyclopedias. On the other hand, the public has never stopped criticizing the quality of Wikipedia articles, and critics never have trouble finding low-quality Wikipedia articles. Why do Wikipedia articles vary widely in quality? We investigate the relationship between collaboration and Wikipedia article quality. We show that the quality of Wikipedia articles is not only dependent on the different types of contributors but also on how they collaborate. Based on an empirical study, we classify contributors based on their roles in editing individual Wikipedia articles. We identify various patterns of collaboration based on the provenance or, more specifically, who does what to Wikipedia articles. Our research helps identify collaboration patterns that are preferable or detrimental for article quality, thus providing insights for designing tools and mechanisms to improve the quality of Wikipedia articles.",
"Reliance on volunteer participation for citizen science has become extremely popular. Cutting across disciplines, locations, and participation practices, hundreds of thousands of volunteers throughout the world are helping scientists accomplish tasks they could not otherwise perform. Although existing projects have demonstrated the value of involving volunteers in data collection, relatively few projects have been successful in maintaining volunteers’ continued involvement over long periods of time. Therefore, it is important to understand the temporal nature of volunteers’ motivations and their effect on participation practices, so that effective partnerships between volunteers and scientists can be established. This paper presents case studies of longitudinal participation practices in citizen science in three countries—the United States, India, and Costa Rica. The findings reveal a temporal process of participation, in which initial participation stems in most cases from self-directed motivations, such as personal interest. In contrast, long-term participation is more complex and includes both self-directed motivations and collaborative motivations.",
"",
"The online encyclopedia Wikipedia is a highly successful “open content” project, written and maintained completely by volunteers. Little is known, however, about the motivation of these volunteers. Results from an online survey among 106 contributors to the German Wikipedia project are presented. Both motives derived from social sciences (perceived benefits, identification with Wikipedia, etc.) as well as perceived task characteristics (autonomy, skill variety, etc.) were assessed as potential predictors of contributors' satisfaction and self-reported engagement. Satisfaction ratings were particularly determined by perceived benefits, identification with the Wikipedia community, and task characteristics. Engagement was particularly determined by high tolerance for opportunity costs and by task characteristics, the latter effect being partially mediated by intrinsic motivation. Relevant task characteristics for contributors' engagement and satisfaction were perceived autonomy, task significance, skill vari...",
"",
"",
"",
"",
""
]
} |
1501.01242 | 1573782941 | Learning a kernel matrix from relative comparison human feedback is an important problem with applications in collaborative filtering, object retrieval, and search. For learning a kernel over a large number of objects, existing methods face significant scalability issues inhibiting the application of these methods to settings where a kernel is learned in an online and timely fashion. In this paper we propose a novel framework called Efficient online Relative comparison Kernel LEarning (ERKLE), for efficiently learning the similarity of a large set of objects in an online manner. We learn a kernel from relative comparisons via stochastic gradient descent, one query response at a time, by taking advantage of the sparse and low-rank properties of the gradient to efficiently restrict the kernel to lie in the space of positive semidefinite matrices. In addition, we derive a passive-aggressive online update for minimally satisfying new relative comparisons as to not disrupt the influence of previously obtained comparisons. Experimentally, we demonstrate a considerable improvement in speed while obtaining improved or comparable accuracy compared to current methods in the online learning setting. | The problem of learning a kernel matrix, driven by relative comparison feedback, has been the focus of much recent work. Most recent techniques primarily differ by the choice of loss function. For instance, Generalized Non-metric Multidimensional Scaling @cite_23 employs hinge loss, Crowd Kernel Learning @cite_19 uses a scale-invariant loss, and Stochastic Triplet Embedding @cite_0 uses a logistic loss function. | {
"cite_N": [
"@cite_0",
"@cite_19",
"@cite_23"
],
"mid": [
"2088247287",
"2951342632",
"85455704"
],
"abstract": [
"This paper considers the problem of learning an embedding of data based on similarity triplets of the form “A is more similar to B than to C”. This learning setting is of relevance to scenarios in which we wish to model human judgements on the similarity of objects. We argue that in order to obtain a truthful embedding of the underlying data, it is insufficient for the embedding to satisfy the constraints encoded by the similarity triplets. In particular, we introduce a new technique called t-Distributed Stochastic Triplet Embedding (t-STE) that collapses similar points and repels dissimilar points in the embedding — even when all triplet constraints are satisfied. Our experimental evaluation on three data sets shows that as a result, t-STE is much better than existing techniques at revealing the underlying data structure.",
"We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form \"is object 'a' more similar to 'b' or to 'c'?\" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the \"crowd kernel.\" SVMs reveal that the crowd kernel captures prominent and subtle features across a number of domains, such as \"is striped\" among neckties and \"vowel vs. consonant\" among letters.",
"We consider the non-metric multidimensional scaling problem: given a set of dissimilarities ∆, find an embedding whose inter-point Euclidean distances have the same ordering as ∆. In this paper, we look at a generalization of this problem in which only a set of order relations of the form dij < dkl are provided. Unlike the original problem, these order relations can be contradictory and need not be specified for all pairs of dissimilarities. We argue that this setting is more natural in some experimental settings and propose an algorithm based on convex optimization techniques to solve this problem. We apply this algorithm to human subject data from a psychophysics experiment concerning how reflectance properties are perceived. We also look at the standard NMDS problem, where a dissimilarity matrix ∆ is provided as input, and show that we can always find an order-respecting embedding of ∆."
]
} |
1501.01242 | 1573782941 | Learning a kernel matrix from relative comparison human feedback is an important problem with applications in collaborative filtering, object retrieval, and search. For learning a kernel over a large number of objects, existing methods face significant scalability issues inhibiting the application of these methods to settings where a kernel is learned in an online and timely fashion. In this paper we propose a novel framework called Efficient online Relative comparison Kernel LEarning (ERKLE), for efficiently learning the similarity of a large set of objects in an online manner. We learn a kernel from relative comparisons via stochastic gradient descent, one query response at a time, by taking advantage of the sparse and low-rank properties of the gradient to efficiently restrict the kernel to lie in the space of positive semidefinite matrices. In addition, we derive a passive-aggressive online update for minimally satisfying new relative comparisons as to not disrupt the influence of previously obtained comparisons. Experimentally, we demonstrate a considerable improvement in speed while obtaining improved or comparable accuracy compared to current methods in the online learning setting. | The aforementioned RCKL methods can be viewed as solving a kernelized special case of the classic non-metric multidimensional scaling problem @cite_21 , where the goal is to find an embedding of objects in @math such that they satisfy given Euclidean distance constraints. In contrast to many of the kernel-learning formulations, their analogous embedding-learning counterparts are non-convex optimization problems, which only guarantee convergence to a local minimum. In the typical non-convex batch setting, multiple solutions are found with different initializations and the best is chosen among them. 
This strategy is poorly suited for the online setting where triplets are being observed sequentially, and which solution is best may change as feedback is received. | {
"cite_N": [
"@cite_21"
],
"mid": [
"1973192023"
],
"abstract": [
"We describe the numerical methods required in our approach to multi-dimensional scaling. The rationale of this approach has appeared previously."
]
} |
1501.01242 | 1573782941 | Learning a kernel matrix from relative comparison human feedback is an important problem with applications in collaborative filtering, object retrieval, and search. For learning a kernel over a large number of objects, existing methods face significant scalability issues inhibiting the application of these methods to settings where a kernel is learned in an online and timely fashion. In this paper we propose a novel framework called Efficient online Relative comparison Kernel LEarning (ERKLE), for efficiently learning the similarity of a large set of objects in an online manner. We learn a kernel from relative comparisons via stochastic gradient descent, one query response at a time, by taking advantage of the sparse and low-rank properties of the gradient to efficiently restrict the kernel to lie in the space of positive semidefinite matrices. In addition, we derive a passive-aggressive online update for minimally satisfying new relative comparisons as to not disrupt the influence of previously obtained comparisons. Experimentally, we demonstrate a considerable improvement in speed while obtaining improved or comparable accuracy compared to current methods in the online learning setting. | In this work we consider the RCKL problem, where one is sequentially acquiring relative comparisons among a large collection of objects. Stochastic gradient descent techniques @cite_4 are a popular class of methods for online learning of high-dimensional data for a very general class of functions, where recent methods @cite_25 @cite_24 have demonstrated competitive performance with batch techniques. In particular, recent works @cite_11 @cite_17 have developed efficient methods to solve SDPs in an online fashion. The work of @cite_27 shows how to devise efficient update schemes for solving SDPs when the gradient of the objective function is low-rank. 
We build upon and improve the efficiency of this work, by taking advantage of the sparse and low-rank structure of the gradient common in convex RCKL formulations. | {
"cite_N": [
"@cite_11",
"@cite_4",
"@cite_24",
"@cite_27",
"@cite_25",
"@cite_17"
],
"mid": [
"2949177896",
"",
"2105875671",
"2124379907",
"2205628031",
"2170262790"
],
"abstract": [
"The computational bottleneck in applying online learning to massive data sets is usually the projection step. We present efficient online learning algorithms that eschew projections in favor of much more efficient linear optimization steps using the Frank-Wolfe technique. We obtain a range of regret bounds for online convex optimization, with better bounds for specific cases such as stochastic online smooth convex optimization. Besides the computational advantage, other desirable features of our algorithms are that they are parameter-free in the stochastic case and produce sparse decisions. We apply our algorithms to computationally intensive applications of collaborative filtering, and show the theoretical improvements to be clearly visible on standard datasets.",
"",
"We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.",
"The measurement of rank correlation introduction to the general theory of rank correlation tied ranks tests of significance proof of the results of chapter 4 the problem of m ranking proof of the result of chapter 6 partial rank correlation ranks and variate values proof of the result of chapter 9 paired comparisons proof of the results of chapter 11 some further applications.",
"We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop extensions of Nesterov's dual averaging method, that can exploit the regularization structure in an online setting. At each iteration of these methods, the learning variables are adjusted by solving a simple minimization problem that involves the running average of all past subgradients of the loss function and the whole regularization term, not just its subgradient. In the case of l1-regularization, our method is particularly effective in obtaining sparse solutions. We show that these methods achieve the optimal convergence rates or regret bounds that are standard in the literature on stochastic and online convex optimization. For stochastic learning problems in which the loss functions have Lipschitz continuous gradients, we also present an accelerated version of the dual averaging method.",
"Although many variants of stochastic gradient descent have been proposed for large-scale convex optimization, most of them require projecting the solution at each iteration to ensure that the obtained solution stays within the feasible domain. For complex domains (e.g., positive semidefinite cone), the projection step can be computationally expensive, making stochastic gradient descent unattractive for large-scale optimization problems. We address this limitation by developing novel stochastic optimization algorithms that do not need intermediate projections. Instead, only one projection at the last iteration is needed to obtain a feasible solution in the given domain. Our theoretical analysis shows that with a high probability, the proposed algorithms achieve an O(1 √T) convergence rate for general convex optimization, and an O(ln T T) rate for strongly convex optimization under mild conditions about the domain and the objective function."
]
} |
1501.01549 | 2024111042 | We study quantum protocols among two distrustful parties. By adopting a rather strict definition of correctness — guaranteeing that honest players obtain their correct outcomes only — we can show that every strictly correct quantum protocol implementing a non-trivial classical primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer. | Our framework allows us to quantify the minimum amount of leakage, whereas standard impossibility proofs such as the ones of @cite_21 @cite_18 @cite_3 @cite_31 @cite_25 do not in general provide such quantification, since they usually assume privacy for one player in order to show that the protocol must be totally insecure for the other player. (Trade-offs between the security for one player and the security for the other have been considered before, but either the relaxation of security has to be very small @cite_3 or the trade-offs are restricted to particular primitives such as commitments @cite_41 @cite_27 or oblivious transfer @cite_17.) By contrast, we derive lower bounds for the leakage of any implementation. 
At first glance, our approach seems to contradict standard impossibility proofs, since embeddings leak the same amount towards both parties. To resolve this apparent paradox it suffices to observe that in previous approaches only the adversary purified its actions, whereas in our case both parties do. If an honest player does not purify his actions then some leakage may be lost by the act of irreversibly and unnecessarily measuring some of his quantum registers. | {
"cite_N": [
"@cite_18",
"@cite_41",
"@cite_21",
"@cite_3",
"@cite_27",
"@cite_31",
"@cite_25",
"@cite_17"
],
"mid": [
"2075695434",
"2033439890",
"1782172301",
"1508636262",
"2101060409",
"2140466544",
"2012772813",
"2106426971"
],
"abstract": [
"The claim of quantum cryptography has always been that it can provide protocols that are unconditionally secure, that is, for which the security does not depend on any restriction on the time, space, or technology available to the cheaters. We show that this claim does not hold for any quantum bit commitment protocol. Since many cryptographic tasks use bit commitment as a basic primitive, this result implies a severe setback for quantum cryptography. The model used encompasses all reasonable implementations of quantum bit commitment protocols in which the participants have not met before, including those that make use of the theory of special relativity.",
"Although it is impossible for a bit commitment protocol to be both arbitrarily concealing and arbitrarily binding, it is possible for it to be both partially concealing and partially binding. This means that Bob cannot, prior to the beginning of the unveiling phase, find out everything about the bit committed, and Alice cannot, through actions taken after the end of the commitment phase, unveil whatever bit she desires. We determine upper bounds on the degrees of concealment and bindingness that can be achieved simultaneously in any bit commitment protocol, although it is unknown whether these can be saturated. We do, however, determine the maxima of these quantities in a restricted class of bit commitment protocols, namely, those wherein all the systems that play a role in the commitment phase are supplied by Alice. We show that these maxima can be achieved using a protocol that requires Alice to prepare a pair of systems in an entangled state, submit one of the pair to Bob at the commitment phase, and the other at the unveiling phase. Finally, we determine the form of the trade-off that exists between the degree of concealment and the degree of bindingness given various assumptions about the purity and dimensionality of the states used in the protocol.",
"Work on quantum cryptography was started by S. J. Wiesner in a paper written in about 1970, but remained unpublished until 1983 [1]. Recently, there have been lots of renewed activities in the subject. The most well-known application of quantum cryptography is the so-called quantum key distribution (QKD) [2–4], which is useful for making communications between two users totally unintelligible to an eavesdropper. QKD takes advantage of the uncertainty principle of quantum mechanics: Measuring a quantum system in general disturbs it. Therefore, eavesdropping on a quantum communication channel will generally leave unavoidable disturbance in the transmitted signal which can be detected by the legitimate users. Besides QKD, other quantum cryptographic protocols [5] have also been proposed. In particular, it is generally believed [4] that quantum mechanics can protect private information while it is being used for public decision. Suppose Alice has a secret x and Bob a secret y. In a “two-party secure computation” (TPSC), Alice and Bob compute a prescribed function f(x,y) in such a way that nothing about each party’s input is disclosed to the other, except for what follows logically from one’s private input and the function’s output. An example of the TPSC is the millionaires’ problem: Two persons would like to know who is richer, but neither wishes the other to know the exact amount of money he/she has. In classical cryptography, TPSC can be achieved either through trusted intermediaries or by invoking some unproven computational assumptions such as the hardness of factoring large integers. The great expectation is that quantum cryptography can get rid of those requirements and achieve the same goal using the laws of physics alone. At the heart of such optimism has been the widespread belief that unconditionally secure quantum bit commitment (QBC) schemes exist [6]. Here we put such optimism into very serious doubt by showing",
"It had been widely claimed that quantum mechanics can protect private information during public decision in, for example, the so-called two-party secure computation. If this were the case, quantum smart-cards, storing confidential information accessible only to a proper reader, could prevent fake teller machines from learning the PIN (personal identification number) from the customers' input. Although such optimism has been challenged by the recent surprising discovery of the insecurity of the so-called quantum bit commitment, the security of quantum two-party computation itself remains unaddressed. Here I answer this question directly by showing that all one-sided two-party computations (which allow only one of the two parties to learn the result) are necessarily insecure. As corollaries to my results, quantum one-way oblivious password identification and the so-called quantum one-out-of-two oblivious transfer are impossible. I also construct a class of functions that cannot be computed securely in any two-sided two-party computation. Nevertheless, quantum cryptography remains useful in key distribution and can still provide partial security in “quantum money” proposed by Wiesner.",
"Unconditionally secure nonrelativistic bit commitment is known to be impossible in both the classical and the quantum worlds. But when committing to a string of n bits at once, how far can we stretch the quantum limits? In this paper, we introduce a framework for quantum schemes where Alice commits a string of n bits to Bob in such a way that she can only cheat on a bits and Bob can learn at most b bits of information before the reveal phase. Our results are twofold: we show by an explicit construction that in the traditional approach, where the reveal and guess probabilities form the security criteria, no good schemes can exist: a+b is at least n. If, however, we use a more liberal criterion of security, the accessible information, we construct schemes where a=4 log2 n+O(1) and b=4, which is impossible classically. We furthermore present a cheat-sensitive quantum bit string commitment protocol for which we give an explicit tradeoff between Bob's ability to gain information about the committed string, and the probability of him being detected cheating.",
"Bit commitment protocols whose security is based on the laws of quantum mechanics alone are generally held to be impossible. We give a strengthened and explicit proof of this result. We extend its scope to a much larger variety of protocols, which may have an arbitrary number of rounds, in which both classical and quantum information is exchanged, and which may include aborts and resets. Moreover, we do not consider the receiver to be bound to a fixed 'honest' strategy, so that 'anonymous state protocols', which were recently suggested as a possible way to beat the known no-go results, are also covered. We show that any concealing protocol allows the sender to find a cheating strategy, which is universal in the sense that it works against any strategy of the receiver. Moreover, if the concealing property holds only approximately, the cheat goes undetected with a high probability, which we explicitly estimate. The proof uses an explicit formalization of general two-party protocols, which is applicable to more general situations, and an estimate about the continuity of the Stinespring dilation of a general quantum channel. The result also provides a natural characterization of protocols that fall outside the standard setting of unlimitedmore » available technology and thus may allow secure bit commitment. We present such a protocol whose security, perhaps surprisingly, relies on decoherence in the receiver's laboratory.« less",
"A fundamental task in modern cryptography is the joint computation of a function which has two inputs, one from Alice and one from Bob, such that neither of the two can learn more about the other's input than what is implied by the value of the function. In this Letter, we show that any quantum protocol for the computation of a classical deterministic function that outputs the result to both parties (two-sided computation) and that is secure against a cheating Bob can be completely broken by a cheating Alice. Whereas it is known that quantum protocols for this task cannot be completely secure, our result implies that security for one party implies complete insecurity for the other. Our findings stand in stark contrast to recent protocols for weak coin tossing and highlight the limits of cryptography within quantum mechanics. We remark that our conclusions remain valid, even if security is only required to be approximate and if the function that is computed for Bob is different from that of Alice.",
"Oblivious transfer is a fundamental primitive in cryptography. While perfect information theoretic security is impossible, quantum oblivious transfer protocols can limit the dishonest player's cheating. Finding the optimal security parameters in such protocols is an important open question. In this paper we show that every 1-out-of-2 oblivious transfer protocol allows a dishonest party to cheat with probability bounded below by a constant strictly larger than 1 2. Alice's cheating is defined as her probability of guessing Bob's index, and Bob's cheating is defined as his probability of guessing both input bits of Alice. In our proof, we relate these cheating probabilities to the cheating probabilities of a bit commitment protocol and conclude by using lower bounds on quantum bit commitment. Then, we present an oblivious transfer protocol with two messages and cheating probabilities at most 3 4. Last, we extend Kitaev's semidefinite programming formulation to more general primitives, where the security is against a dishonest player trying to force the outcome of the other player, and prove optimal lower and upper bounds for them."
]
} |
1501.01549 | 2024111042 | We study quantum protocols among two distrustful parties. By adopting a rather strict definition of correctness — guaranteeing that honest players obtain their correct outcomes only — we can show that every strictly correct quantum protocol implementing a non-trivial classical primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer. | Our results complement the ones obtained by Colbeck in @cite_2 for the setting where Alice and Bob have inputs and obtain identical outcomes (called single-function computations). @cite_2 shows that in any implementation of primitives of a certain form, an honest-but-curious player can access more information about the other party's input than is available through the ideal functionality. Unlike @cite_2 , we deal in our work with the case where Alice and Bob do not have inputs but might receive different outputs according to a joint probability distribution. We show that only trivial distributions can be implemented securely in the QHBC model. Furthermore, we introduce a quantitative measure of protocol-insecurity that lets us answer which embeddings allow the least effective cheating. | {
"cite_N": [
"@cite_2"
],
"mid": [
"1978814996"
],
"abstract": [
"We present attacks that show that unconditionally secure two-party classical computation is impossible for many classes of function. Our analysis applies to both quantum and relativistic protocols. We illustrate our results by showing the impossibility of oblivious transfer."
]
} |
1501.01549 | 2024111042 | We study quantum protocols among two distrustful parties. By adopting a rather strict definition of correctness — guaranteeing that honest players obtain their correct outcomes only — we can show that every strictly correct quantum protocol implementing a non-trivial classical primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer. | A result in @cite_5 shows that two-party functions that are securely computable against active quantum adversaries form a strict subset of the set of functions which are securely computable in the classical HBC model. This complements our result that the sets of securely computable functions in both HBC and QHBC models are the same. | {
"cite_N": [
"@cite_5"
],
"mid": [
"1779849045"
],
"abstract": [
"While general secure function evaluation (SFE) with information-theoretical (IT) security is infeasible in presence of a corrupted majority in the standard model, there are SFE protocols ( [STOC'87]) that are computationally secure (without fairness) in presence of an actively corrupted majority of the participants. Now, computational assumptions can usually be well justified at the time of protocol execution. The concern is rather a potential violation of the privacy of sensitive data by an attacker whose power increases over time. Therefore, we ask which functions can be computed with long-term security, where we admit computational assumptions for the duration of a computation, but require IT security (privacy) once the computation is concluded. Towards a combinatorial characterization of this class of functions, we also characterize the classes of functions that can be computed IT securely in the authenticated channels model in presence of passive, semi-honest, active, and quantum adversaries."
]
} |
1501.01744 | 2953061944 | We present a novel method for communicating between a camera and display by embedding and recovering hidden and dynamic information within a displayed image. A handheld camera pointed at the display can receive not only the display image, but also the underlying message. These active scenes are fundamentally different from traditional passive scenes like QR codes because image formation is based on display emittance, not surface reflectance. Detecting and decoding the message requires careful photometric modeling for computational message recovery. Unlike standard watermarking and steganography methods that lie outside the domain of computer vision, our message recovery algorithm uses illumination to optically communicate hidden messages in real world scenes. The key innovation of our approach is an algorithm that performs simultaneous radiometric calibration and message recovery in one convex optimization problem. By modeling the photometry of the system using a camera-display transfer function (CDTF), we derive a physics-based kernel function for support vector machine classification. We demonstrate that our method of optimal online radiometric calibration (OORC) leads to an efficient and robust algorithm for computational messaging between nine commercial cameras and displays. | In developing a system where cameras and displays can communicate under real world conditions, the initial expectation was that existing watermarking techniques could be used directly. Certainly the work in this field is extensive and has a long history, with numerous surveys compiled @cite_1 @cite_21 @cite_27 @cite_40 @cite_41 @cite_10 . Surprisingly, existing methods are not directly applicable to our problem. In the field of watermarking, a fixed image or mark is embedded in an image, often with the goal of identifying fraudulent copies of a video, image or document. 
Existing work emphasizes almost exclusively the digital domain and does not account for the effect of illumination in the image formation process in real world scenes. In the digital domain, neglecting the physics of illumination is quite reasonable; however, for camera-display messaging, illumination plays a central role. | {
"cite_N": [
"@cite_41",
"@cite_21",
"@cite_1",
"@cite_40",
"@cite_27",
"@cite_10"
],
"mid": [
"2152240488",
"1978773133",
"2028197392",
"",
"1542157244",
""
],
"abstract": [
"Information hiding is a rapidly developing research area with a potentially enormous number of practical applications. There is already a considerable amount of commercial interest in information hiding applications. However, there is a lack of a good theoretical understanding on the subject, without which, commercial applications will face significant problems that would only hinder their success.",
"Cryptology is the practice of hiding digital information by means of various obfuscatory and steganographic techniques. The application of said techniques facilitates message confidentiality and sender receiver identity authentication, and helps to ensure the integrity and security of computer passwords, ATM card information, digital signatures, DVD and HDDVD content, and electronic commerce. Cryptography is also central to digital rights management (DRM), a group of techniques for technologically controlling the use of copyrighted material that is being widely implemented and deployed at the behest of corporations that own and create revenue from the hundreds of thousands of mini-transactions that take place daily on programs like iTunes. This new edition of our best-selling book on cryptography and information hiding delineates a number of different methods to hide information in all types of digital media files. These methods include encryption, compression, data embedding and watermarking, data mimicry, and scrambling. During the last 5 years, the continued advancement and exponential increase of computer processing power have enhanced the efficacy and scope of electronic espionage and content appropriation. Therefore, this edition has amended and expanded outdated sections in accordance with new dangers, and includes 5 completely new chapters that introduce newer more sophisticated and refined cryptographic algorithms and techniques (such as fingerprinting, synchronization, and quantization) capable of withstanding the evolved forms of attack. Each chapter is divided into sections, first providing an introduction and high-level summary for those who wish to understand the concepts without wading through technical explanations, and then presenting concrete examples and greater detail for those who want to write their own programs. 
This combination of practicality and theory allows programmers and system designers to not only implement tried and true encryption procedures, but also consider probable future developments in their designs, thus fulfilling the need for preemptive caution that is becoming ever more explicit as the transference of digital media escalates. * Includes 5 completely new chapters that delineate the most current and sophisticated cryptographic algorithms, allowing readers to protect their information against even the most evolved electronic attacks. * Conceptual tutelage in conjunction with detailed mathematical directives allows the reader to not only understand encryption procedures, but also to write programs which anticipate future security developments in their design. * Grants the reader access to online source code which can be used to directly implement proven cryptographic procedures such as data mimicry and reversible grammar generation into their own work.",
"Steganography is the science that involves communicating secret data in an appropriate multimedia carrier, e.g., image, audio, and video files. It comes under the assumption that if the feature is visible, the point of attack is evident, thus the goal here is always to conceal the very existence of the embedded data. Steganography has various useful applications. However, like any other science it can be used for ill intentions. It has been propelled to the forefront of current security techniques by the remarkable growth in computational power, the increase in security awareness by, e.g., individuals, groups, agencies, government and through intellectual pursuit. Steganography's ultimate objectives, which are undetectability, robustness (resistance to various image processing methods and compression) and capacity of the hidden data, are the main factors that separate it from related techniques such as watermarking and cryptography. This paper provides a state-of-the-art review and analysis of the different existing methods of steganography along with some common standards and guidelines drawn from the literature. This paper concludes with some recommendations and advocates for the object-oriented embedding mechanism. Steganalysis, which is the science of attacking steganography, is not the focus of this survey but nonetheless will be briefly discussed.",
"",
"Watermarking, which belong to the information hiding field, has seen a lot of research interest. There is a lot of work begin conducted in different branches in this field. Steganography is used for secret communication, whereas watermarking is used for content protection, copyright management, content authentication and tamper detection. In this paper we present a detailed survey of existing and newly proposed steganographic and watermarking techniques. We classify the techniques based on different domains in which data is embedded. We limit the survey to images only.",
""
]
} |
1501.01744 | 2953061944 | We present a novel method for communicating between a camera and display by embedding and recovering hidden and dynamic information within a displayed image. A handheld camera pointed at the display can receive not only the display image, but also the underlying message. These active scenes are fundamentally different from traditional passive scenes like QR codes because image formation is based on display emittance, not surface reflectance. Detecting and decoding the message requires careful photometric modeling for computational message recovery. Unlike standard watermarking and steganography methods that lie outside the domain of computer vision, our message recovery algorithm uses illumination to optically communicate hidden messages in real world scenes. The key innovation of our approach is an algorithm that performs simultaneous radiometric calibration and message recovery in one convex optimization problem. By modeling the photometry of the system using a camera-display transfer function (CDTF), we derive a physics-based kernel function for support vector machine classification. We demonstrate that our method of optimal online radiometric calibration (OORC) leads to an efficient and robust algorithm for computational messaging between nine commercial cameras and displays. | From a computer vision point of view, the imaging process can be divided into two main components: photometry and geometry. The geometric aspects of image formation have been addressed to some extent in the watermarking community, and many techniques have been developed for robustness to geometric changes during the imaging process such as scaling, rotations, translations and general homography transformations @cite_9 @cite_33 @cite_30 @cite_13 @cite_14 @cite_27 @cite_15 . However, the photometry of imaging has largely been ignored. 
The rare mention of photometric effects @cite_16 @cite_39 in the watermarking literature doesn't define photometry with respect to illumination; instead photometric effects are defined as "lossy compression, denoising, noise addition and lowpass filtering". In fact, photometric attacks are sometimes defined as JPEG compression @cite_30 . | {
"cite_N": [
"@cite_30",
"@cite_14",
"@cite_33",
"@cite_9",
"@cite_39",
"@cite_27",
"@cite_15",
"@cite_16",
"@cite_13"
],
"mid": [
"",
"2129002694",
"2096599197",
"2155766055",
"2102639506",
"1542157244",
"2136596536",
"2106608297",
"2088226635"
],
"abstract": [
"",
"Traditional watermarking schemes are sensitive to geometric distortions, in which synchronisation for recovering embedded information is a challenging task because of the disorder caused by rotation, scaling or translation (RST). The existing RST-resistant watermarking methods still have limitations with respect to robustness, capacity or fidelity. In this study, the authors address several major problems in RST-invariant watermarking. The first point is how to take advantage of the high RST resilience of scale-invariant feature transform (SIFT) features, which show good performance in terms of RST-resistant pattern recognition. Since many keypoint-based watermarking methods do not discuss cropping attacks, the second issue discussed in this study is how to resist cropping using a human visual system (HVS), which also helps us to eliminate computational complexity. The third issue is the investigation of an HVS-based watermarking strategy for extracting only feature points in the human attentive area. Lastly, a variable-length watermark synchronisation algorithm using dynamic programming is proposed. Experimental results show that the proposed algorithms are practical and show superior performance in comparison with many existing works in terms of watermark capacity, watermark transparency, and the resistance to RST attacks.",
"Here we propose a new technique for the Watermarking to withstand the geometric attacks, which may occur during the transmission of the watermarked image. The underlying system was based on Direct Sequence Code Division Multiple Access (DS-CDMA). The algorithm for the normalization had been formulated for use in black and white images. The watermark was spreaded across the carrier image by using the pseudo-random noise sequences of optimal period and retrieval was made by the use of correlation. Private Key technique is used so the transmission is very secure. Matlab was used to implement the algorithm discussed here.",
"In this paper, we present two watermarking approaches that are robust to geometric distortions. The first approach is based on image normalization, in which both watermark embedding and extraction are carried out with respect to an image normalized to meet a set of predefined moment criteria. We propose a new normalization procedure, which is invariant to affine transform attacks. The resulting watermarking scheme is suitable for public watermarking applications, where the original image is not available for watermark extraction. The second approach is based on a watermark resynchronization scheme aimed to alleviate the effects of random bending attacks. In this scheme, a deformable mesh is used to correct the distortion caused by the attack. The watermark is then extracted from the corrected image. In contrast to the first scheme, the latter is suitable for private watermarking applications, where the original image is necessary for watermark detection. In both schemes, we employ a direct-sequence code division multiple access approach to embed a multibit watermark in the discrete cosine transform domain of the image. Numerical experiments demonstrate that the proposed watermarking schemes are robust to a wide range of geometric attacks.",
"A robust video watermarking scheme resilient to spatial desynchronization and photometric distortion is presented. The scheme is based on geometrically approximate invariant watermarking and adaptive watermarking, and it embeds watermark by modifying middle frequency component adaptively according to persistence of vision and Watson's DCT based visual model. Experimental results show that the proposed watermarking scheme is not only robust against spatial desynchronization and photometric distortion, but also robust against their combination. Furthermore, it can resist randomly frame inserting or frame dropping to some extent",
"Watermarking, which belong to the information hiding field, has seen a lot of research interest. There is a lot of work begin conducted in different branches in this field. Steganography is used for secret communication, whereas watermarking is used for content protection, copyright management, content authentication and tamper detection. In this paper we present a detailed survey of existing and newly proposed steganographic and watermarking techniques. We classify the techniques based on different domains in which data is embedded. We limit the survey to images only.",
"This paper proposes a novel content-based image watermarking method based on invariant regions of an image. The invariant regions are self-adaptive image patches that deform with geometric transformations. Three different invariant-region detection methods based on the scale-space representation of an image were considered for watermarking. At each invariant region, the watermark is embedded after geometric normalization according to the shape of the region. By binding watermarking with invariant regions, resilience against geometric transformations can be readily obtained. Experimental results show that the proposed method is robust against various image processing steps, including geometric transformations, cropping, filtering, and JPEG compression.",
"This paper proposes a novel geometric distortion resilient image copy detection scheme based on Scale Invariant Feature Transform (SIFT) detector. By using the SIFT detector, the proposed copy detection scheme first construct a series of robust, homogenous, and larger size circular patches. And then, the cirque track division strategy and ordinal measure concept are introduced to generate a cirque-based ordinal measure feature vector for each circular patch. Besides, the ROC graph and MAP probability are utilized to estimate the two parameters (vector dimension and detection threshold) respectively. Experimental results and the related analysis show that the proposed scheme is robust to most of geometric and photometric distortions.",
"Based on scale space theory and an image normalization technique, a new feature-based image watermarking scheme robust to general geometric attacks is proposed in this paper. First, the Harris-Laplace detector is utilized to extract steady feature points from the host image; then, the local feature regions (LFR) are ascertained adaptively according to the characteristic scale theory, and they are normalized by an image normalization technique; finally, according to the predistortion compensation theory, several copies of the digital watermark are embedded into the nonoverlapped normalized LFR by comparing the DFT mid-frequency magnitudes. Experimental results show that the proposed scheme is not only invisible and robust against common signals processing methods such as median filtering, sharpening, noise adding, and JPEG compression etc., but also robust against the general geometric attacks such as rotation, translation, scaling, row or column removal, shearing, local geometric distortion and combination attacks etc."
]
} |
1501.01409 | 1983002424 | This work addresses the inverse problem of electrocardiography from a new perspective, by combining electrical and mechanical measurements. Our strategy relies on the definition of a model of the electromechanical contraction which is registered on ECG data but also on measured mechanical displacements of the heart tissue typically extracted from medical images. In this respect, we establish in this work the convergence of a sequential estimator which combines for such coupled problems various state of the art sequential data assimilation methods in a unified consistent and efficient framework. Indeed, we aggregate a Luenberger observer for the mechanical state and a Reduced-Order Unscented Kalman Filter applied on the parameters to be identified and a POD projection of the electrical state. Then using synthetic data we show the benefits of our approach for the estimation of the electrical state of the ventricles along the heart beat compared with more classical strategies which only consider an electrophysiological model with ECG measurements. Our numerical results actually show that the mechanical measurements improve the identifiability of the electrical problem allowing to reconstruct the electrical state of the coupled system more precisely. Therefore, this work is intended to be a first proof of concept, with theoretical justifications and numerical investigations, of the advantage of using available multi-modal observations for the estimation and identification of an electromechanical model of the heart. | The different strategies can be distinguished by the cardiac electrical source models they rely on. One of the first approaches was to estimate equivalent electrical dipoles @cite_45 @cite_13 . Another popular approach is to estimate the heart surface potential, usually called epicardial potential (even though pericardial potential would be more appropriate as noted in @cite_8 ). 
The potential @math within the torso @math is assumed to be the solution of the Poisson problem: where @math denotes the boundary of the heart and @math is the electrical conductivity of the torso. The inverse problem then consists in estimating @math on @math (see e.g. @cite_1 @cite_22 @cite_41 ). This problem being notoriously ill-posed, various regularizations have been proposed: Tikhonov @cite_20 , the use of temporal information @cite_62 @cite_18 , truncated Singular Value Decomposition or truncated Total Least Squares @cite_21 . | {
"cite_N": [
"@cite_18",
"@cite_62",
"@cite_22",
"@cite_8",
"@cite_41",
"@cite_21",
"@cite_1",
"@cite_45",
"@cite_13",
"@cite_20"
],
"mid": [
"2106903293",
"2162981237",
"2121879623",
"2112280747",
"2105969869",
"592974461",
"2010040374",
"2032449521",
"2145203582",
"2079961943"
],
"abstract": [
"The inverse problem of electrocardiography is solved in order to reconstruct electrical events within the heart from information measured noninvasively on the body surface. These electrical events can be deduced from measured epicardial potentials; therefore, a noninvasive method of recovering epicardial potentials from body surface data is useful in clinical and experimental work. The ill-posed nature of this problem necessitates the use of regularization in the solution procedure. Inversion using Tihonov zero-order regularization, a quasi-static method, had been employed previously and was able to reconstruct, with relatively good accuracy, important events in cardiac excitation (maxima, minima, etc.). Taking advantage of the fact that the process of cardiac excitation is continuous in time, one can incorporate information from the time progression of excitation in the regularization procedure using the Twomey technique. Methods of this type were tested on data obtained from a human-torso tank in which a beating canine heart was placed. The results show a marked improvement in the inverse solution when these temporal methods are used. >",
"The authors present a new method for regularizing the ill-posed problem of computing epicardial potentials from body surface potentials. The method simultaneously regularizes the equations associated with all time points, and relies on a new theorem which states that a solution based on optimal regularization of each integral equation associated with each principal component of the data will be more accurate than a solution based on optimal regularization of each integral equation associated with each time point. The theorem is illustrated with simulations mimicking the complexity of the inverse electrocardiography problem. As must be expected from a method which imposes no additional a priori constraints, the new approach addresses uncorrelated noise only, and in the presence of dominating correlated noise it is only successful in producing a \"cleaner\" version of a necessarily compromised solution. Nevertheless, in principle, the new method is always preferred to the standard approach, since it (without penalty) eliminates pure noise that would otherwise be present in the solution estimate.",
"Background—The last decade witnessed an explosion of information regarding the genetic, molecular, and mechanistic basis of heart disease. Translating this information into clinical practice requires the development of novel functional imaging modalities for diagnosis, localization, and guided intervention. A noninvasive modality for imaging cardiac arrhythmias is not yet available. Present electrocardiographic methods cannot precisely localize a ventricular tachycardia (VT) or its key reentrant circuit components. Recently, we developed a noninvasive electrocardiographic imaging modality (ECGI) that can reconstruct epicardial electrophysiological information from body surface potentials. Here, we extend its application to image reentrant arrhythmias. Methods and Results—Epicardial potentials were recorded during VT with a 490 electrode sock during an open chest procedure in 2 dogs with 4-day-old myocardial infarctions. Body surface potentials were generated from these epicardial potentials in a human tor...",
"We compare two source formulations for the electrocardiographic forward problem in consideration of their implications for regularizing the ill-posed inverse problem. The established epicardial potential source model is compared with a bidomain-theory-based transmembrane potential source formulation. The epicardial source approach is extended to the whole heart surface including the endocardial surfaces. We introduce the concept of the numerical null and signal space to draw attention to the problems associated with the nonuniqueness of the inverse solution and show that reconstruction of null-space components is an important issue for physiologically meaningful inverse solutions. Both formulations were tested with simulated data generated with an anisotropic heart model and with clinically measured data of two patients. A linear and a recently proposed quasi-linear inverse algorithm were applied for reconstructions of the epicardial and transmembrane potential, respectively. A direct comparison of both formulations was performed in terms of computed activation times. We found the transmembrane potential-based formulation is a more promising source formulation as stronger regularization by incorporation of biophysical a priori information is permitted.",
"We consider the inverse electrocardiographic problem of computing epicardial potentials from a body-surface potential map. We study how to improve numerical approximation of the inverse problem when the finite-element method is used. Being ill-posed, the inverse problem requires different discretization strategies from its corresponding forward problem. We propose refinement guidelines that specifically address the ill-posedness of the problem. The resulting guidelines necessitate the use of hybrid finite elements composed of tetrahedra and prism elements. Also, in order to maintain consistent numerical quality when the inverse problem is discretized into different scales, we propose a new family of regularizers using the variational principle underlying finite-element methods. These variational-formed regularizers serve as an alternative to the traditional Tikhonov regularizers, but preserves the L2 norm and thereby achieves consistent regularization in multiscale simulations. The variational formulation also enables a simple construction of the discrete gradient operator over irregular meshes, which is difficult to define in traditional discretization schemes. We validated our hybrid element technique and the variational regularizers by simulations on a realistic 3-D torso heart model with empirical heart data. Results show that discretization based on our proposed strategies mitigates the ill-conditioning and improves the inverse solution, and that the variational formulation may benefit a broader range of potential-based bioelectric problems.",
"# Geometric Modelling # Cell Modelling # Tissue Modelling # Whole-Heart Modelling # Organ in the Body -- The Forward Problem of Electrocardiology # The Inverse Problem of Electrocardiology # Modelling Other Cardiac Processes",
"Although it has been known throughout this century that a complex sequence of electrical events is produced on the body surface by the electrophysiological properties of the heart, the question of how well these body surface events can be explained mathematically on the basis of experimental measurements of cardiac geometry and electrical activity remains unanswered. Recent advances in experimental capabilities have made possible the near simultaneous measurement of both cardiac epicardial and corresponding body surface potential distributions from in vivo animal preparations using chronically implanted electrodes to keep the volume conductor intact. This report provides a method for finding transfer coefficients that relate the epicardial and body surface potential distributions to each other. The method is based on knowing the geometric location of each electrode, and on having enough electrodes to establish the geometric shape and the potential distribution of closed epicardial and body surfaces. However, the method does not require that either the heart or body surfaces have any special shape, such as that of a sphere, or that any electrical quantities, such as voltage gradients, be known in addition to the potentials. The use of potential distributions to represent heart electrical activity is advantageous since such distributions can be directly measured experimentally, without a transformation to any other form, such as multiple current-generating dipoles, being required. This report includes a statement of the underlying integral equations, the procedure for finding the equations' coefficients from geometry measurements, some considerations for computer algorithms, and an example.",
"This paper reviews and updates the single moving dipole (SMD) and two moving dipole (TMD) inverse electrocardiographic and electroencephalographic solutions. These inverse solutions are particularly appropriate when the electrical activity of the heart or brain may be represented by one or two well-localized foci. They attempt to match the measured body surface or evoked scalp potentials to potentials generated on the surface of a model of the intervening volume conductor by one or two moving current dipoles, respectively. The two alternative methods of solution are discussed initially. The first is a direct least-squares error match of measured and model-generated surface potentials, the second an indirect solution based on the least-squares error match of the potentials due to equivalent multipole series representations of the real and model sources, respectively. Next, brief reviews of moving dipole inverse solutions in the EEG and ECG fields are presented. Simulation studies, as well as experimental and clinical studies in animals and humans, are described. The Discussion section summarizes the optimum solution approaches that should be used in clinical EEG and ECG studies in man. It also cautions against the temptation to translate the numerical adequacy of inverse SMD and TMD solutions into physiological validity, without independent knowledge as to the nature of the underlying sources.",
"The ability of a numerical procedure to detect and to localize two experimentally induced, epicardial dipolar generators was tested in 24 isolated, perfused rabbit heart preparations, suspended in an electrolyte-filled spherical tank. Electrocardiograms were recorded from 32 electrodes on the surface of the test chamber before and after placement of each of two epicardial burns. The second lesion was located either 180 degrees, 90 degrees, or 45 degrees from the first. Signals were processed by iterative routines that computed the location of one or two independent dipoles that best reconstructed the observed surface potentials. The computed single dipole accounting for 99.68% of root mean square (RMS) surface potential recorded after the first burn was located 0.26 ± 0.10 cm from the centroid of the lesion. Potentials recorded after the second lesion were fit with two dipoles that accounted for 99.36 ± 1.51% of RMS surface potentials and that were located 0.42 ± 0.26 cm and 0.57 ± 0.49 cm from the centers of the corresponding burn. Seventy-one percent of computed dipoles were located within the visible perimeter of the burn. Thus, two simultaneously active dipolar sources can be detected and accurately localized by rigorous study of the generated electrical field.",
"An analytic, eccentric-spheres model was used to test the efficacy of different regularization techniques based on the Tikhonov family of regularizers. The model, although simple, retains the relative size and position of the heart within the body and may incorporate all the inhomogeneities of the human torso. The boundary-element method was used to construct a transfer matrix relating the body surface potentials to the epicardial potentials, for the homogeneous form of the model. Different regularization techniques were compared in the presence of surface potential noise and in the presence of errors in estimating the conductivities, the heart size and the heart position. Results indicate that the relative error in the inverse-recovered epicardial potential with regularization does not rise proportionally to the noise level. The relative error (RE) with a 5% Gaussian noise level is 0.17; with 20% it is 0.29. Additionally, the regularized inverse procedure is shown to restore smoothness and accuracy to the inverse-recovered epicardial potentials in the presence of errors in estimating the heart position and heart size, which, using an unregularized inversion, would lead to large-amplitude oscillations in the solution."
]
} |
1501.01409 | 1983002424 | This work addresses the inverse problem of electrocardiography from a new perspective, by combining electrical and mechanical measurements. Our strategy relies on the definition of a model of the electromechanical contraction which is registered on ECG data but also on measured mechanical displacements of the heart tissue typically extracted from medical images. In this respect, we establish in this work the convergence of a sequential estimator which combines, for such coupled problems, various state-of-the-art sequential data assimilation methods in a unified, consistent, and efficient framework. Indeed, we aggregate a Luenberger observer for the mechanical state and a Reduced-Order Unscented Kalman Filter applied on the parameters to be identified and a POD projection of the electrical state. Then using synthetic data we show the benefits of our approach for the estimation of the electrical state of the ventricles along the heart beat compared with more classical strategies which only consider an electrophysiological model with ECG measurements. Our numerical results actually show that the mechanical measurements improve the identifiability of the electrical problem, allowing us to reconstruct the electrical state of the coupled system more precisely. Therefore, this work is intended to be a first proof of concept, with theoretical justifications and numerical investigations, of the advantage of using available multi-modal observations for the estimation and identification of an electromechanical model of the heart. | In @cite_9 , the authors note that the usual regularization techniques have no physical grounding. Instead, they propose to regularize the inverse solution with the monodomain equations coupled to the Fenton-Karma ionic model. The strategy proposed in the present paper has some similarities with this approach.
We rely on a full electrophysiological model of the action potential coupled to the Poisson problem to estimate the solution of the heart electrical activation. This physical model is personalized on the fly with respect to its parameters in order to adapt it to a specific patient. Furthermore, we propose an additional step of modeling by considering the mechanical response to the electrophysiological activation, so that we are also able to integrate mechanical measurements. Indeed, we believe that multimodal observations improve the identifiability of the complete model and therefore improve the quality of the electrical and mechanical state reconstruction. Another originality of our work is the use of a sequential data assimilation strategy that is adapted to a coupled electromechanical evolution model. Here we demonstrate how state-of-the-art gain filters on the electrophysiological model and on the mechanical model can be aggregated to propose a joint gain filter for the coupled problem. | {
"cite_N": [
"@cite_9"
],
"mid": [
"2081699235"
],
"abstract": [
"The inverse problem in electrocardiography is to reconstruct the voltage in the surface of the heart, using a high density electrocardiogram. This problem is usually solved using regularization techniques, which tend to give the minimum energy response in a static scheme. In our work, we propose to calculate a dynamic inverse solution using the Monodomain as a model of electrical heart activity, thus constraining the family of solutions to one that satisfies the model."
]
} |
1501.01252 | 2075173141 | This paper proposes a new method to provide personalized tour recommendations for museum visits. It combines an optimization of preference criteria of visitors with an automatic extraction of artwork importance from museum information based on Natural Language Processing using textual energy. This project brings together researchers from computer science and the social sciences. Some results are obtained with numerical experiments. They show that our model clearly improves the satisfaction of the visitor who follows the proposed tour. This work foreshadows some interesting outcomes and applications for on-demand personalized museum visits in the very near future. | A first model developed in 2010 @cite_10 proposes to formulate the visitor routing problem as an extension of the open shop scheduling problem (in which each visitor group is a job and each interesting room is a machine). Each visitor group has to pass through all rooms, but it is impossible for two groups of visitors to be simultaneously in the same room. This restriction can lead to non-optimal or infeasible solutions if there are more visitor groups than rooms in the museum (which is the case if we consider each single visitor as a group). | {
"cite_N": [
"@cite_10"
],
"mid": [
"2040552073"
],
"abstract": [
"In the museum visitor routing problem (MVRP), each visitor group has some exhibit rooms of interest. The visiting route of a certain visitor group requires going through all the exhibit rooms that the group wants to visit. Routes need to be scheduled based on certain criteria to avoid congestion and/or prolonged touring time. In this study, the MVRP is formulated as a mixed integer program which is an extension of the open shop scheduling (OSS) problem in which visitor groups and exhibit rooms are treated as jobs and machines, respectively. The time each visitor group spends in an exhibit room is analogous to the processing time required for each job on a particular machine. The travel time required from one exhibit room to another is modeled as the sequence-dependent setup time on a machine, which is not considered in the OSS problem. Due to the intrinsic complexity of the MVRP, a simulated annealing (SA) approach is proposed to solve the problem. Computational results indicate that the proposed SA approach is capable of obtaining high-quality MVRP solutions within a reasonable amount of computational time and enables the approach to be used in practice."
]
} |
1501.01252 | 2075173141 | This paper proposes a new method to provide personalized tour recommendations for museum visits. It combines an optimization of preference criteria of visitors with an automatic extraction of artwork importance from museum information based on Natural Language Processing using textual energy. This project brings together researchers from computer science and the social sciences. Some results are obtained with numerical experiments. They show that our model clearly improves the satisfaction of the visitor who follows the proposed tour. This work foreshadows some interesting outcomes and applications for on-demand personalized museum visits in the very near future. | Relying on the constraint programming model @cite_13 , we propose to reduce the number of variables used. In @cite_13 , they generate a route by calculating the smallest number @math of steps required to cross the museum (to visit all the rooms). This model requires that each artwork is represented as @math variables (one per step). Since museums often have several thousand artworks, this leads to a huge number of variables. Moreover, they use mathematical distributions to simulate a visitor profile, which does not necessarily reflect reality (in museums, artworks are often grouped in a room because they are related to each other, a configuration that the random distributions they used cannot represent). | {
"cite_N": [
"@cite_13"
],
"mid": [
"2232622878"
],
"abstract": [
"In this paper, we consider the problem of designing personalised museum visits. Given a set of preferences and constraints a visitor might express on her visit, the aim is to compute the tour that best matches her requirements. The museum visits problem can be expressed as a planning problem, with cost optimization. We show how to bound the number of steps required to find an optimal solution, via the resolution of an instance of the shortest complete walk problem. We also point out an alternative encoding of the museum visits problem as an optimization problem with pseudo-Boolean constraints and a linear objective function. We have evaluated several constraints solvers, a planner and a tailored solver on a number of benchmarks, representing various instances of the museum visits problem corresponding to real museums. Our empirical results show the feasibility of both the planning and the constraint programming approaches. Optimal solutions can be computed for short visits and \"practically good\" solutions for much longer visits."
]
} |
1501.00624 | 2950864125 | We provide tight upper and lower bounds on the noise resilience of interactive communication over noisy channels with feedback. In this setting, we show that the maximal fraction of noise that any robust protocol can resist is 1/3. Additionally, we provide a simple and efficient robust protocol that succeeds as long as the fraction of noise is at most 1/3 - ε. Surprisingly, both bounds hold regardless of whether the parties send bits or symbols from an arbitrarily large alphabet. We also consider interactive communication over erasure channels. We provide a protocol that matches the optimal tolerable erasure rate of 1/2 - ε of previous protocols (, CRYPTO '13) but operates in a much simpler and more efficient way. Our protocol works with an alphabet of size 4, in contrast to prior protocols in which the alphabet size grows as ε goes to zero. Building on the above algorithm with a fixed alphabet size, we are able to devise a protocol for binary erasure channels that tolerates erasure rates of up to 1/3 - ε. | As mentioned above, the question of interactive communication over a noisy channel was initiated by Schulman @cite_0 @cite_5 @cite_7 , who mainly focused on the case of random bit flips but also showed that his scheme resists an adversarial noise rate of up to @math . Braverman and Rao @cite_19 proved that @math is a tight bound on the noise (for large alphabets), and Braverman and Efremenko @cite_20 gave a refinement of this bound, looking at the noise rate separately in each direction of the channel (i.e., from Alice to Bob and from Bob to Alice). For each pair of noise rates, they determine whether or not a coding scheme with a exists. Another line of work improved the efficiency of coding schemes for the interactive setting, either for random noise @cite_9 @cite_4 , or for adversarial noise @cite_3 @cite_12 @cite_16 . | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_3",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_16",
"@cite_12",
"@cite_20"
],
"mid": [
"",
"2115551261",
"2026670837",
"2072979474",
"2117696850",
"2003497308",
"2020278159",
"",
"2407991844",
""
],
"abstract": [
"",
"Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol π be known by which, on any input, the processors can solve the problem using no more than T transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably? Technologically this concern is motivated by the increasing importance of communication as a resource in computing, and by the tradeoff in communications equipment between bandwidth, reliability, and expense. We treat a model with random channel noise. We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slowdown. This is an analog for general, interactive protocols of Shannon's coding theorem, which deals only with data transmission, i.e., one-way protocols. We cannot use Shannon's block coding method because the bits exchanged in the protocol are determined only one at a time, dynamically, in the course of the interaction. Instead, we describe a simulation protocol using a new kind of code, explicit tree codes.",
"We revisit the problem of reliable interactive communication over a noisy channel, and obtain the first fully explicit (randomized) efficient constant-rate emulation procedure for reliable interactive communication. Our protocol works for any discrete memoryless noisy channel with constant capacity, and fails with exponentially small probability in the total length of the protocol. Following a work by Schulman [Schulman 1993] our simulation uses a tree-code, yet as opposed to the non-constructive absolute tree-code used by Schulman, we introduce a relaxation in the notion of goodness for a tree code and define a potent tree code. This relaxation allows us to construct an explicit emulation procedure for any two-party protocol. Our results also extend to the case of interactive multiparty communication. We show that a randomly generated tree code (with suitable constant alphabet size) is an efficiently decodable potent tree code with overwhelming probability. Furthermore we are able to partially derandomize this result by means of epsilon-biased distributions using only O(N) random bits, where N is the depth of the tree.",
"In this work, we study the problem of constructing interactive protocols that are robust to noise, a problem that was originally considered in the seminal works of Schulman (FOCS '92, STOC '93), and has recently regained popularity. Robust interactive communication is the interactive analogue of error correcting codes: Given an interactive protocol which is designed to run on an error-free channel, construct a protocol that evaluates the same function (or, more generally, simulates the execution of the original protocol) over a noisy channel. As in (non-interactive) error correcting codes, the noise can be either stochastic, i.e. drawn from some distribution, or adversarial, i.e. arbitrary subject only to a global bound on the number of errors. We show how to simulate any interactive protocol in the presence of constant-rate noise, while incurring only a constant blow-up in the communication complexity (CC). Our simulator is randomized, and succeeds in simulating the original protocol with probability at least @math .",
"Communication is critical to distributed computing, parallel computing, or any situation in which automata interact; hence its significance as a resource in computation. In view of the likelihood of errors occurring in a lengthy interaction, it is desirable to incorporate this possibility in the model of communication. The author relates the noisy channel and the standard (noiseless channel) complexities of a communication problem by establishing a 'two-way' or interactive analogue of Shannon's coding theorem: every noiseless channel protocol can be simulated by a private-coin noisy channel protocol whose time bound is proportional to the original (noiseless) time bound and inversely proportional to the capacity of the channel, while the protocol errs with vanishing probability. The method involves simulating the original protocol while implementing a hierarchical system of progress checks which ensure that errors of any magnitude in the simulation are, with high probability, rapidly eliminated.",
"We show that it is possible to encode any communication protocol between two parties so that the protocol succeeds even if a (1/4 - ε) fraction of all symbols transmitted by the parties are corrupted adversarially, at a cost of increasing the communication in the protocol by a constant factor (the constant depends on ε). This encoding uses a constant-sized alphabet. This improves on an earlier result of Schulman, who showed how to recover when the fraction of errors is bounded by 1/240. We also show how to simulate an arbitrary protocol with a protocol using the binary alphabet, a constant factor increase in communication and tolerating a (1/8 - ε) fraction of errors.",
"",
"",
"Consider two parties who wish to communicate in order to execute some interactive protocol π. However, the communication channel between them is noisy: An adversary sees everything that is transmitted over the channel and can change a constant fraction of the bits as he pleases, thus interrupting the execution of π (which was designed for an errorless channel). If π only contained one message, then a good error correcting code would have overcome the noise with only a constant overhead in communication, but this solution is not applicable to interactive protocols with many short messages. Schulman (FOCS 92, STOC 93) presented the notion of interactive coding: A simulator that, given any protocol π, is able to simulate it (i.e. produce its intended transcript) even with constant rate adversarial channel errors, and with only constant (multiplicative) communication overhead. Until recently, however, the running time of all known simulators was exponential (or sub-exponential) in the communication complexity of π (denoted N in this work). Brakerski and Kalai (FOCS 12) recently presented a simulator that runs in time poly (N). Their simulator is randomized (each party flips private coins) and has failure probability roughly 2^{-N}. In this work, we improve the computational complexity of interactive coding. While at least N computational steps are required (even just to output the transcript of π), the BK simulator runs in time",
""
]
} |
1501.00624 | 2950864125 | We provide tight upper and lower bounds on the noise resilience of interactive communication over noisy channels with feedback. In this setting, we show that the maximal fraction of noise that any robust protocol can resist is 1/3. Additionally, we provide a simple and efficient robust protocol that succeeds as long as the fraction of noise is at most 1/3 - ε. Surprisingly, both bounds hold regardless of whether the parties send bits or symbols from an arbitrarily large alphabet. We also consider interactive communication over erasure channels. We provide a protocol that matches the optimal tolerable erasure rate of 1/2 - ε of previous protocols (, CRYPTO '13) but operates in a much simpler and more efficient way. Our protocol works with an alphabet of size 4, in contrast to prior protocols in which the alphabet size grows as ε goes to zero. Building on the above algorithm with a fixed alphabet size, we are able to devise a protocol for binary erasure channels that tolerates erasure rates of up to 1/3 - ε. | Protocols in the above works are all robust. The discussion about non-robust, or adaptive, protocols was initiated by Ghaffari, Haeupler and Sudan @cite_15 @cite_16 and concurrently by Agrawal, Gelles and Sahai @cite_6 , giving various notions of adaptive protocols and analyzing their noise resilience. Both the adaptive notion of @cite_15 @cite_16 and of @cite_6 are capable of resisting a higher amount of noise than the maximal @math allowed for robust protocols. Specifically, a tight bound of @math was shown in @cite_15 @cite_16 for protocols of fixed length; when the length of the protocol may adaptively change as well, a coding scheme that achieves a noise rate of @math is given in @cite_6 , yet that scheme does not have a . | {
"cite_N": [
"@cite_15",
"@cite_16",
"@cite_6"
],
"mid": [
"2040355376",
"",
"1963163055"
],
"abstract": [
"We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error-rates in a number of different settings. Most significantly, we explore adaptive interactive communication where the communicating parties decide who should speak next based on the history of the interaction. In particular, this decision can depend on estimates of the amount of errors that have occurred so far. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4 but not more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): Our adaptive coding scheme tolerates any error rate below 2/7 and we show that tolerating a higher error rate is impossible. We also show that in the setting of [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communications, where each party outputs a constant size list of possible outcomes, the tight tolerable error rate is 1/2. Our negative results hold even if the communication and computation are unbounded, whereas for our positive results communication and computations are polynomially bounded. Most prior work considered coding schemes with linear communication bounds, while allowing unbounded computations. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. We set forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communications.",
"",
"How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of \"robust\" protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of non-robust protocols for Interactive Communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel in the sense that both the order of speaking, and the length of the protocol may vary depending on observed noise. We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to @math . When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to @math . Hence, adaptivity circumvents an impossibility result of @math on the fraction of tolerable noise (Braverman and Rao, 2014)."
]
} |
1501.00624 | 2950864125 | We provide tight upper and lower bounds on the noise resilience of interactive communication over noisy channels with feedback. In this setting, we show that the maximal fraction of noise that any robust protocol can resist is 1/3. Additionally, we provide a simple and efficient robust protocol that succeeds as long as the fraction of noise is at most 1/3 - ε. Surprisingly, both bounds hold regardless of whether the parties send bits or symbols from an arbitrarily large alphabet. We also consider interactive communication over erasure channels. We provide a protocol that matches the optimal tolerable erasure rate of 1/2 - ε of previous protocols (, CRYPTO '13) but operates in a much simpler and more efficient way. Our protocol works with an alphabet of size 4, in contrast to prior protocols in which the alphabet size grows as ε goes to zero. Building on the above algorithm with a fixed alphabet size, we are able to devise a protocol for binary erasure channels that tolerates erasure rates of up to 1/3 - ε. | For erasure channels, a tight bound of @math on the erasure rate of robust protocols was given in @cite_11 . For the case of adaptive protocols, @cite_6 provided a coding scheme with a that resists a relative erasure rate of up to @math in a setting that allows parties to remain silent in an adaptive way. The case where the parties share a memoryless erasure channel with noiseless feedback was considered by Schulman @cite_7 , who showed that for any function @math , the communication complexity of solving @math in that setting equals the distributional complexity of @math (over noiseless channels), up to a factor of the channel's capacity. | {
"cite_N": [
"@cite_7",
"@cite_6",
"@cite_11"
],
"mid": [
"2115551261",
"1963163055",
"2216201412"
],
"abstract": [
"Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol π be known by which, on any input, the processors can solve the problem using no more than T transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably? Technologically this concern is motivated by the increasing importance of communication as a resource in computing, and by the tradeoff in communications equipment between bandwidth, reliability, and expense. We treat a model with random channel noise. We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slowdown. This is an analog for general, interactive protocols of Shannon's coding theorem, which deals only with data transmission, i.e., one-way protocols. We cannot use Shannon's block coding method because the bits exchanged in the protocol are determined only one at a time, dynamically, in the course of the interaction. Instead, we describe a simulation protocol using a new kind of code, explicit tree codes.",
"How much adversarial noise can protocols for interactive communication tolerate? This question was examined by Braverman and Rao (IEEE Trans. Inf. Theory, 2014) for the case of \"robust\" protocols, where each party sends messages only in fixed and predetermined rounds. We consider a new class of non-robust protocols for Interactive Communication, which we call adaptive protocols. Such protocols adapt structurally to the noise induced by the channel in the sense that both the order of speaking, and the length of the protocol may vary depending on observed noise. We define models that capture adaptive protocols and study upper and lower bounds on the permissible noise rate in these models. When the length of the protocol may adaptively change according to the noise, we demonstrate a protocol that tolerates noise rates up to @math . When the order of speaking may adaptively change as well, we demonstrate a protocol that tolerates noise rates up to @math . Hence, adaptivity circumvents an impossibility result of @math on the fraction of tolerable noise (Braverman and Rao, 2014).",
"Error correction and message authentication are well studied in the literature, and various efficient solutions have been suggested and analyzed. This is however not the case for data streams in which the message is very long, possibly infinite, and not known in advance to the sender. Trivial solutions for error-correcting and authenticating data streams either suffer from a long delay at the receiver’s end or cannot perform well when the communication channel is noisy."
]
} |
1501.01178 | 2952112270 | The goal of constraint-based sequence mining is to find sequences of symbols that are included in a large number of input sequences and that satisfy some constraints specified by the user. Many constraints have been proposed in the literature, but a general framework is still missing. We investigate the use of constraint programming as general framework for this task. We first identify four categories of constraints that are applicable to sequence mining. We then propose two constraint programming formulations. The first formulation introduces a new global constraint called exists-embedding. This formulation is the most efficient but does not support one type of constraint. To support such constraints, we develop a second formulation that is more general but incurs more overhead. Both formulations can use the projected database technique used in specialised algorithms. Experiments demonstrate the flexibility towards constraint-based settings and compare the approach to existing methods. | Different flavors of sequence mining have been studied in the context of a generic framework, and constraint programming in particular. They all study constraints of type 1, 2 and 4. In @cite_3 the setting of sequence patterns with explicit wildcards in a single sequence is studied: such a pattern has a linear number of embeddings. As only a single sequence is considered, frequency is defined as the number of embeddings in that sequence, leading to a similar encoding to itemsets. This is extended in @cite_10 to sequences of itemsets (with explicit wildcards over a single sequence). @cite_13 also studies patterns with explicit wildcards, but in a database of sequences. Finally, @cite_21 considers standard sequences in a database, just like this paper; they also support constraints of type 3. 
The main difference is in the use of a costly encoding of the inclusion relation using non-deterministic automata and the inherent inability to use projected frequency. | {
"cite_N": [
"@cite_13",
"@cite_21",
"@cite_10",
"@cite_3"
],
"mid": [
"1981486868",
"1589351983",
"",
"2407211045"
],
"abstract": [
"Sequential pattern mining under various constraints is a challenging data mining task. The paper provides a generic framework based on constraint programming to discover sequence patterns defined by constraints on local patterns (e.g., Gap, regular expressions) or constraints on patterns involving combination of local patterns such as relevant subgroups and top-k patterns. This framework enables the user to mine in a declarative way both kinds of patterns. The solving step is done by exploiting the machinery of Constraint Programming. For complex patterns involving combination of local patterns, we improve the mining step by using dynamic CSP. Finally, we present two case studies in biomedical information extraction and stylistic analysis in linguistics.",
"Constraint-based pattern discovery is at the core of numerous data mining tasks. Patterns are extracted with respect to a given set of constraints (frequency, closedness, size, etc). In the context of sequential pattern mining, a large number of devoted techniques have been developed for solving particular classes of constraints. The aim of this paper is to investigate the use of Constraint Programming (CP) to model and mine sequential patterns in a sequence database. Our CP approach offers a natural way to simultaneously combine in a same framework a large set of constraints coming from various origins. Experiments show the feasibility and the interest of our approach.",
"",
"In this paper we propose a satisfiability-based approach for enumerating all frequent, closed and maximal patterns with wildcards in a given sequence. In this context, since frequency is the most used criterion, we introduce a new polynomial inductive formulation of the cardinality constraint as a Boolean formula. A nogood-based formulation of the anti-monotonicity property is proposed and dynamically used for pruning. This declarative framework allows us to exploit the efficiency of modern SAT solvers and particularly their clause learning component. The experimental evaluation on real world data shows the feasibility of our proposed approach in practice."
]
} |
1501.00333 | 274064621 | We prove a conjecture of Stembridge concerning stability of Kronecker coefficients that vastly generalizes Murnaghan's theorem. The main idea is to identify the sequences of Kronecker coefficients in question with Hilbert functions of modules over finitely generated algebras. The proof only uses Schur-Weyl duality and the Borel-Weil theorem and does not rely on any existing work on Kronecker coefficients. | Vallejo introduces a notion of additive stability in @cite_5 and proves that it implies stability in Stembridge's sense. Additive stability is provided by the existence of a certain additive matrix, and hence is easier to apply, but it is less general (see Example 6.3 of @cite_5). | {
"cite_N": [
"@cite_5"
],
"mid": [
"2258512292"
],
"abstract": [
"In this paper we give a new sufficient condition for a general stability of Kronecker coefficients, which we call it additive stability. It was motivated by a recent talk of J. Stembridge at the conference in honor of Richard P. Stanley's 70th birthday, and it is based on work of the author on discrete tomography along the years. The main contribution of this paper is the discovery of the connection between additivity of integer matrices and stability of Kronecker coefficients. Additivity, in our context, is a concept from discrete tomography. Its advantage is that it is very easy to produce lots of examples of additive matrices and therefore of new instances of stability properties. We also show that Stembridge's hypothesis and additivity are closely related, and prove that all stability properties of Kronecker coefficients discovered before fit into additive stability."
]
} |
1501.00311 | 1549951060 | In this paper, we motivate the need for a publicly available, generic software framework for question-answering (QA) systems. We present an open-source QA framework QANUS which researchers can leverage on to build new QA systems easily and rapidly. The framework implements much of the code that will otherwise have been repeated across different QA systems. To demonstrate the utility and practicality of the framework, we further present a fully functioning factoid QA system QA-SYS built on top of QANUS. | The architecture of QANUS is slanted towards QA systems based on current state-of-the-art information retrieval (IR) techniques. These techniques typically involve manipulating the lexical and syntactic form of natural language text and do not attempt to comprehend the semantics expressed by the text. Systems which make use of these techniques @cite_2 @cite_3 have been able to perform ahead of their peers in the Text Retrieval Conference (TREC) QA tracks @cite_6 . | {
"cite_N": [
"@cite_6",
"@cite_3",
"@cite_2"
],
"mid": [
"79070924",
"2148488947",
"2070246124"
],
"abstract": [
"An electrically operated power steering device which is fitted to the steering system of a vehicle to provide supplementary steering power by means of an electric motor, wherein the electrically operated power steering device controls the motor current by the signal corresponding to the torsional torque in the steering system and the signal corresponding to the restoring torque in accordance with the angular displacement of the wheel, so that the restoring force takes effect immediately to restore the steering wheel when the input torque on the steering wheel is reduced.",
"Question Answering (QA) is retrieving answers to natural language questions from a collection of documents rather than retrieving relevant documents containing the keywords of the query which is performed by search engines. What a user usually wants is often a precise answer to a question. For example, given the question “Who won the nobel prize in peace in 2006?” what a user really wants is the answer “Dr. Muhammad Yunus”, instead of reading through lots of documents that contain the words “win”, “nobel”, “prize”, “peace” and “2006” etc. This means that question answering systems will possibly be integral to the next generation of search engines. The Text Retrieval Conference (TREC) QA track is the major large-scale evaluation environment for open-domain question answering systems. The questions in the TREC-2007 QA track are clustered by target, which is the overall theme or topic of the questions. The track has three types of questions: 1. factoid questions that require only one correct response, 2. list questions that require a non-redundant list of correct responses and 3. other questions that require a non-redundant list of facts about the target that has not already been discovered by a previous answer. We took the approach of designing a question answering system that is based on document tagging and question classification. Question classification extracts useful information (i.e. answer type) from the question about how to answer the question. Document tagging extracts useful information from the documents, which will be used in finding the answer to the question. We used different available tools to tag the documents. Our system classifies the questions using manually developed rules.",
"In order to respond correctly to a free form factual question given a large collection of texts, one needs to understand the question to a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought after answer and may even suggest using different strategies when looking for and verifying a candidate answer.This paper presents a machine learning approach to question classification. We learn a hierarchical classifier that is guided by a layered semantic hierarchy of answer types, and eventually classifies questions into fine-grained classes. We show accurate results on a large collection of free-form questions used in TREC 10."
]
} |
1501.00311 | 1549951060 | In this paper, we motivate the need for a publicly available, generic software framework for question-answering (QA) systems. We present an open-source QA framework QANUS which researchers can leverage on to build new QA systems easily and rapidly. The framework implements much of the code that will otherwise have been repeated across different QA systems. To demonstrate the utility and practicality of the framework, we further present a fully functioning factoid QA system QA-SYS built on top of QANUS. | Though few in number, some QA systems have previously been made available to the community. One such system is Aranea (available for download at http://www.umiacs.umd.edu/~jimmylin/downloads/index.html) @cite_1 . Aranea is a factoid QA system which seeks to exploit the redundancy of data on the web and has achieved credible performances at past TREC evaluations. Aranea is not, however, designed as a generic QA platform. We argue that a framework such as QANUS, which is designed from the start with extensibility and flexibility in mind, will greatly reduce the effort needed for any such customisation. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2042719229"
],
"abstract": [
"The so-called “redundancy-based” approach to question answering represents a successful strategy for mining answers to factoid questions such as “Who shot Abraham Lincoln?” from the World Wide Web. Through contrastive and ablation experiments with Aranea, a system that has performed well in several TREC QA evaluations, this work examines the underlying assumptions and principles behind redundancy-based techniques. Specifically, we develop two theses: that stable characteristics of data redundancy allow factoid systems to rely on external “black box” components, and that despite embodying a data-driven approach, redundancy-based methods encode a substantial amount of knowledge in the form of heuristics. Overall, this work attempts to address the broader question of “what really matters” and to provide guidance for future researchers."
]
} |
1501.00153 | 1690358018 | This paper provides a link between matroid theory and locally repairable codes (LRCs) that are either linear or more generally almost affine. Using this link, new results on both LRCs and matroid theory are derived. The parameters @math of LRCs are generalized to matroids, and the matroid analogue of the generalized Singleton bound in [P. , "On the locality of codeword symbols," IEEE Trans. Inf. Theory] for linear LRCs is given for matroids. It is shown that the given bound is not tight for certain classes of parameters, implying a nonexistence result for the corresponding locally repairable almost affine codes, that are coined perfect in this paper. Constructions of classes of matroids with a large span of the parameters @math and the corresponding local repair sets are given. Using these matroid constructions, new LRCs are constructed with prescribed parameters. The existence results on linear LRCs and the nonexistence results on almost affine LRCs given in this paper strengthen the nonexistence and existence results on perfect linear LRCs given in [W. , "Optimal locally repairable codes," IEEE J. Sel. Areas Comm.]. | Recently, the present authors have studied locally repairable codes with all-symbol locality @cite_20 . Methods to modify already existing codes were presented and it was shown that with high probability, a certain random matrix will be a generator matrix for a locally repairable code with a good minimum distance. Constructions were given for three infinite classes of optimal vector-linear locally repairable codes over an alphabet of small size. The present paper extends and deviates from this work by studying the combinatorics of LRCs in general and relating LRCs to matroid theory. This allows for the derivation of fundamental bounds for matroids and linear and almost affine LRCs, as well as for the characterization of the matroids achieving this bound. | {
"cite_N": [
"@cite_20"
],
"mid": [
"1902314736"
],
"abstract": [
"In this paper, locally repairable codes with all-symbol locality are studied. Methods to modify already existing codes are presented. It is also shown that, with high probability, a random matrix with a few extra columns guaranteeing the locality property is a generator matrix for a locally repairable code with a good minimum distance. The proof of the result provides a constructive method to find locally repairable codes. Finally, constructions of three infinite classes of optimal vector-linear locally repairable codes over a small alphabet independent of the code size are given."
]
} |
1501.00405 | 2952616410 | While analyzing vehicular sensor data, we found that frequently occurring waveforms could serve as features for further analysis, such as rule mining, classification, and anomaly detection. The discovery of waveform patterns, also known as time-series motifs, has been studied extensively; however, available techniques for discovering frequently occurring time-series motifs were found lacking in either efficiency or quality: Standard subsequence clustering results in poor quality, to the extent that it has even been termed 'meaningless'. Variants of hierarchical clustering using techniques for efficient discovery of 'exact pair motifs' find high-quality frequent motifs, but at the cost of high computational complexity, making such techniques unusable for our voluminous vehicular sensor data. We show that good quality frequent motifs can be discovered using bounded spherical clustering of time-series subsequences, which we refer to as COIN clustering, with near linear complexity in time-series size. COIN clustering addresses many of the challenges that previously led to subsequence clustering being viewed as meaningless. We describe an end-to-end motif-discovery procedure using a sequence of pre and post-processing techniques that remove trivial-matches and shifted-motifs, which also plagued previous subsequence-clustering approaches. We demonstrate that our technique efficiently discovers frequent motifs in voluminous vehicular sensor data as well as in publicly available data sets. | Motif discovery from time-series data has been an active topic of research in the last decade or so @cite_8 @cite_21 @cite_15 @cite_5 @cite_22 . It is also evident from past work that there are many aspects of this problem which need to be addressed; for example, @cite_5 @cite_21 @cite_22 focus on finding out what should be the appropriate width of the time-series motifs, and find the motifs of multiple lengths.
Most of these approaches use the MK motif-discovery algorithm @cite_15 underneath, which discovers pairs of subsequences that are similar. The authors in @cite_6 focus on finding another subsequence that is similar to a given subsequence. | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_21",
"@cite_6",
"@cite_5",
"@cite_15"
],
"mid": [
"1999306137",
"2006761268",
"2054978951",
"2086506616",
"2083236658",
"1513731586"
],
"abstract": [
"As one of the most essential data mining tasks, finding frequently occurring patterns, i.e., motif discovery, has drawn a lot of attention in the past decade. Despite successes in speedup of motif discovery algorithms, most of the existing algorithms still require predefined parameters. The critical and most cumbersome one is time series motif length since it is difficult to manually determine the proper length of the motifs-even for the domain experts. In addition, with variability in the motif lengths, ranking among these motifs becomes another major problem. In this work, we propose a novel algorithm using compression ratio as a heuristic to discover meaningful motifs in proper lengths. The ranking of these various length motifs relies on an ability to compress time series by its own motif as a hypothesis. Furthermore, other than being an anytime algorithm, our experimental evaluation also demonstrates that our proposed method outperforms existing works in various domains both in terms of speed and accuracy.",
"Several important time series data mining problems reduce to the core task of finding approximately repeated subsequences in a longer time series. In an earlier work, we formalized the idea of approximately repeated subsequences by introducing the notion of time series motifs. Two limitations of this work were the poor scalability of the motif discovery algorithm, and the inability to discover motifs in the presence of noise.Here we address these limitations by introducing a novel algorithm inspired by recent advances in the problem of pattern discovery in biosequences. Our algorithm is probabilistic in nature, but as we show empirically and theoretically, it can find time series motifs with very high probability even in the presence of noise or \"don't care\" symbols. Not only is the algorithm fast, but it is an anytime algorithm, producing likely candidate motifs almost immediately, and gradually improving the quality of results over time.",
"Time series motifs are repeated patterns in long and noisy time series. Motifs are typically used to understand the dynamics of the source because repeated patterns with high similarity evidentially rule out the presence of noise. Recently, time series motifs have also been used for clustering, summarization, rule discovery and compression as features. For all such purposes, many high quality motifs of various lengths are desirable and thus, originates the problem of enumerating motifs for a wide range of lengths. Existing algorithms find motifs for a given length. A trivial way to enumerate motifs is to run one of the algorithms for the whole range of lengths. However, such parameter sweep is computationally infeasible for large real datasets. In this paper, we describe an exact algorithm, called MOEN, to enumerate motifs. The algorithm is an order of magnitude faster than the naive algorithm. The algorithm frees us from re-discovering the same motif at different lengths and tuning multiple data-dependent parameters. The speedup comes from using a novel bound on the similarity function across lengths and the algorithm uses only linear space unlike other motif discovery algorithms. We describe three case studies in entomology and activity recognition where MOEN enumerates several high quality motifs.",
"The problem of efficiently finding images that are similar to a target image has attracted much attention in the image processing community and is rightly considered an information retrieval task. However, the problem of finding structure and regularities in large image datasets is an area in which data mining is beginning to make fundamental contributions. In this work, we consider the new problem of discovering shape motifs, which are approximately repeated shapes within (or between) image collections. As we shall show, shape motifs can have applications in tasks as diverse as anthropology, law enforcement, and historical manuscript mining. Brute force discovery of shape motifs could be untenably slow, especially as many domains may require an expensive rotation invariant distance measure. We introduce an algorithm that is two to three orders of magnitude faster than brute force search, and demonstrate the utility of our approach with several real world datasets from diverse domains.",
"Given the pervasiveness of time series data in all human endeavors, and the ubiquity of clustering as a data mining application, it is somewhat surprising that the problem of time series clustering from a single stream remains largely unsolved. Most work on time series clustering considers the clustering of individual time series, e.g., gene expression profiles, individual heartbeats or individual gait cycles. The few attempts at clustering time series streams have been shown to be objectively incorrect in some cases, and in other cases shown to work only on the most contrived datasets by carefully adjusting a large set of parameters. In this work, we make two fundamental contributions. First, we show that the problem definition for time series clustering from streams currently used is inherently flawed, and a new definition is necessary. Second, we show that the Minimum Description Length (MDL) framework offers an efficient, effective and essentially parameter-free method for time series clustering. We show that our method produces objectively correct results on a wide variety of datasets from medicine, zoology and industrial process analyses.",
"Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this similarity hints at structure which has been conserved for some reason and may therefore be of interest. Since the formalism of time series motifs in 2002, dozens of researchers have used them for diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and entomological telemetry data mining."
]
} |
1501.00405 | 2952616410 | While analyzing vehicular sensor data, we found that frequently occurring waveforms could serve as features for further analysis, such as rule mining, classification, and anomaly detection. The discovery of waveform patterns, also known as time-series motifs, has been studied extensively; however, available techniques for discovering frequently occurring time-series motifs were found lacking in either efficiency or quality: Standard subsequence clustering results in poor quality, to the extent that it has even been termed 'meaningless'. Variants of hierarchical clustering using techniques for efficient discovery of 'exact pair motifs' find high-quality frequent motifs, but at the cost of high computational complexity, making such techniques unusable for our voluminous vehicular sensor data. We show that good quality frequent motifs can be discovered using bounded spherical clustering of time-series subsequences, which we refer to as COIN clustering, with near linear complexity in time-series size. COIN clustering addresses many of the challenges that previously led to subsequence clustering being viewed as meaningless. We describe an end-to-end motif-discovery procedure using a sequence of pre and post-processing techniques that remove trivial-matches and shifted-motifs, which also plagued previous subsequence-clustering approaches. We demonstrate that our technique efficiently discovers frequent motifs in voluminous vehicular sensor data as well as in publicly available data sets. | Our definition of frequent time-series motifs is similar to that of @cite_8 @cite_5 ; however, they do not focus on efficiency. These approaches have quadratic @cite_8 or cubic @cite_5 time-complexity in the size of the series ( @math ) (recently, @cite_22 brings @cite_5 down to quadratic complexity; its source code is not published).
While @cite_8 exploits a symbolic representation of subsequences using the SAX scheme @cite_2 , we directly use subsequences in @math space after z-normalization and level-merging. The authors of @cite_5 exploit the pair-motif discovery algorithm of @cite_15 , followed by a search for other similar subsequences in the time-series. They choose the member subsequences of a frequent motif based on the bits saved through MDL encoding of the subsequences of the motif. We improve on these approaches, empirically achieving near linear performance; see Section and . | {
"cite_N": [
"@cite_22",
"@cite_8",
"@cite_2",
"@cite_5",
"@cite_15"
],
"mid": [
"1999306137",
"2006761268",
"1989037929",
"2083236658",
"1513731586"
],
"abstract": [
"As one of the most essential data mining tasks, finding frequently occurring patterns, i.e., motif discovery, has drawn a lot of attention in the past decade. Despite successes in speedup of motif discovery algorithms, most of the existing algorithms still require predefined parameters. The critical and most cumbersome one is time series motif length since it is difficult to manually determine the proper length of the motifs-even for the domain experts. In addition, with variability in the motif lengths, ranking among these motifs becomes another major problem. In this work, we propose a novel algorithm using compression ratio as a heuristic to discover meaningful motifs in proper lengths. The ranking of these various length motifs relies on an ability to compress time series by its own motif as a hypothesis. Furthermore, other than being an anytime algorithm, our experimental evaluation also demonstrates that our proposed method outperforms existing works in various domains both in terms of speed and accuracy.",
"Several important time series data mining problems reduce to the core task of finding approximately repeated subsequences in a longer time series. In an earlier work, we formalized the idea of approximately repeated subsequences by introducing the notion of time series motifs. Two limitations of this work were the poor scalability of the motif discovery algorithm, and the inability to discover motifs in the presence of noise.Here we address these limitations by introducing a novel algorithm inspired by recent advances in the problem of pattern discovery in biosequences. Our algorithm is probabilistic in nature, but as we show empirically and theoretically, it can find time series motifs with very high probability even in the presence of noise or \"don't care\" symbols. Not only is the algorithm fast, but it is an anytime algorithm, producing likely candidate motifs almost immediately, and gradually improving the quality of results over time.",
"The parallel explosions of interest in streaming data, and data mining of time series have had surprisingly little intersection. This is in spite of the fact that time series data are typically streaming data. The main reason for this apparent paradox is the fact that the vast majority of work on streaming data explicitly assumes that the data is discrete, whereas the vast majority of time series data is real valued. Many researchers have also considered transforming real valued time series into symbolic representations, noting that such representations would potentially allow researchers to avail of the wealth of data structures and algorithms from the text processing and bioinformatics communities, in addition to allowing formerly \"batch-only\" problems to be tackled by the streaming community. While many symbolic representations of time series have been introduced over the past decades, they all suffer from three fatal flaws. Firstly, the dimensionality of the symbolic representation is the same as the original data, and virtually all data mining algorithms scale poorly with dimensionality. Secondly, although distance measures can be defined on the symbolic approaches, these distance measures have little correlation with distance measures defined on the original time series. Finally, most of these symbolic approaches require one to have access to all the data, before creating the symbolic representation. This last feature explicitly thwarts efforts to use the representations with streaming algorithms. In this work we introduce a new symbolic representation of time series. Our representation is unique in that it allows dimensionality/numerosity reduction, and it also allows distance measures to be defined on the symbolic approach that lower bound corresponding distance measures defined on the original series. 
As we shall demonstrate, this latter feature is particularly exciting because it allows one to run certain data mining algorithms on the efficiently manipulated symbolic representation, while producing identical results to the algorithms that operate on the original data. Finally, our representation allows the real valued data to be converted in a streaming fashion, with only an infinitesimal time and space overhead. We will demonstrate the utility of our representation on the classic data mining tasks of clustering, classification, query by content and anomaly detection.",
"Given the pervasiveness of time series data in all human endeavors, and the ubiquity of clustering as a data mining application, it is somewhat surprising that the problem of time series clustering from a single stream remains largely unsolved. Most work on time series clustering considers the clustering of individual time series, e.g., gene expression profiles, individual heartbeats or individual gait cycles. The few attempts at clustering time series streams have been shown to be objectively incorrect in some cases, and in other cases shown to work only on the most contrived datasets by carefully adjusting a large set of parameters. In this work, we make two fundamental contributions. First, we show that the problem definition for time series clustering from streams currently used is inherently flawed, and a new definition is necessary. Second, we show that the Minimum Description Length (MDL) framework offers an efficient, effective and essentially parameter-free method for time series clustering. We show that our method produces objectively correct results on a wide variety of datasets from medicine, zoology and industrial process analyses.",
"Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this similarity hints at structure which has been conserved for some reason and may therefore be of interest. Since the formalism of time series motifs in 2002, dozens of researchers have used them for diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and entomological telemetry data mining."
]
} |
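The first abstract in the entry above describes a symbolic time-series representation built from z-normalization, piecewise aggregation, and discretization against equiprobable Gaussian breakpoints. A minimal sketch of that pipeline is below; it is an illustration, not the cited authors' code, and the segment count, alphabet size, and breakpoint values are standard but assumed choices.

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise Aggregate Approximation: mean of each roughly equal-width segment."""
    series = np.asarray(series, dtype=float)
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

def sax(series, n_segments=8, alphabet="abcd"):
    """Convert a real-valued series to a short symbolic word.

    Z-normalize, reduce dimensionality with PAA, then map each segment mean
    to a symbol using breakpoints that split a standard normal into
    equiprobable regions (hard-coded here for an alphabet of size 4).
    """
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)          # z-normalize
    approx = paa(x, n_segments)
    breakpoints = np.array([-0.6745, 0.0, 0.6745])  # equiprobable cuts for |alphabet| = 4
    symbols = np.searchsorted(breakpoints, approx)  # region index per segment
    return "".join(alphabet[i] for i in symbols)

word = sax(np.sin(np.linspace(0, 2 * np.pi, 64)))
```

Because the word is much shorter than the raw series, downstream algorithms from the text-processing world can operate on it, which is the key point the abstract makes.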
1501.00405 | 2952616410 | While analyzing vehicular sensor data, we found that frequently occurring waveforms could serve as features for further analysis, such as rule mining, classification, and anomaly detection. The discovery of waveform patterns, also known as time-series motifs, has been studied extensively; however, available techniques for discovering frequently occurring time-series motifs were found lacking in either efficiency or quality: Standard subsequence clustering results in poor quality, to the extent that it has even been termed 'meaningless'. Variants of hierarchical clustering using techniques for efficient discovery of 'exact pair motifs' find high-quality frequent motifs, but at the cost of high computational complexity, making such techniques unusable for our voluminous vehicular sensor data. We show that good quality frequent motifs can be discovered using bounded spherical clustering of time-series subsequences, which we refer to as COIN clustering, with near linear complexity in time-series size. COIN clustering addresses many of the challenges that previously led to subsequence clustering being viewed as meaningless. We describe an end-to-end motif-discovery procedure using a sequence of pre and post-processing techniques that remove trivial-matches and shifted-motifs, which also plagued previous subsequence-clustering approaches. We demonstrate that our technique efficiently discovers frequent motifs in voluminous vehicular sensor data as well as in publicly available data sets. | Subsequence time-series (STS) clustering was identified as a research challenge in @cite_0 . The authors demonstrated that a) the output of STS clustering is independent of the dataset used to generate it and that b) subsequences contained in a cluster don't share the same waveform and therefore lead to a smoothing effect, resulting in sinusoidal motifs being detected for all time-series. 
This was demonstrated through the use of k-means and hierarchical agglomerative clustering. However, @cite_24 demonstrated (through the use of another distance measure) that the output of STS clustering does correlate with the datasets used. It was further shown (through the use of kernel-density based clustering) that in 7 out of 10 cases there is a correlation between the clusters and the datasets used. The work in @cite_1 also uses density-based clustering of non-overlapping subsequences. | {
"cite_N": [
"@cite_0",
"@cite_1",
"@cite_24"
],
"mid": [
"",
"34835763",
"2000729461"
],
"abstract": [
"",
"The problem of locating motifs in real-valued, multivariate time series data involves the discovery of sets of recurring patterns embedded in the time series. Each set is composed of several non-overlapping subsequences and constitutes a motif because all of the included subsequences are similar. The ability to automatically discover such motifs allows intelligent systems to form endogenously meaningful representations of their environment through unsupervised sensor analysis. In this paper, we formulate a unifying view of motif discovery as a problem of locating regions of high density in the space of all time series subsequences. Our approach is efficient (sub-quadratic in the length of the data), requires fewer user-specified parameters than previous methods, and naturally allows variable length motif occurrences and non-linear temporal warping. We evaluate the performance of our approach using four data sets from different domains including on-body inertial sensors and speech.",
"Recent papers have claimed that the result of K-means clustering for time series subsequences (STS clustering) is independent of the time series that created it. Our paper revisits this claim. In particular, we consider the following question: Given several time series sequences and a set of STS cluster centroids from one of them (generated by the K-means algorithm), is it possible to reliably determine which of the sequences produced these cluster centroids? While recent results suggest that the answer should be NO, we answer this question in the affirmative. We present cluster shape distance, an alternate distance measure for time series subsequence clusters, based on cluster shapes. Given a set of clusters, its shape is the sorted list of the pairwise Euclidean distances between their centroids. We then present two algorithms based on this distance measure, which match a set of STS cluster centroids with the time series that produced it. While the first algorithm creates and reuses smaller \"fingerprints\" for the sequences, the second is more accurate. In our experiments with a dataset of 10 sequences, it produced a correct match 100% of the time. Furthermore, we offer an analysis that explains why our cluster shape distance provides a reliable way to match STS clusters to the original sequences, whereas cluster set distance fails to do so. Our work establishes for the first time a strong relation between the result of K-means STS clustering and the time series sequence that created it, despite earlier predictions that this is not possible."
]
} |
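The last abstract in the entry above defines the "shape" of a clustering as the sorted list of pairwise Euclidean distances between cluster centroids. A minimal sketch of that definition follows; how two shapes are compared is our assumption (element-wise Euclidean distance over equally sized clusterings), since the abstract only defines the shape itself.

```python
import numpy as np
from itertools import combinations

def cluster_shape(centroids):
    """Shape of a clustering: sorted pairwise Euclidean distances between centroids."""
    centroids = np.asarray(centroids, dtype=float)
    dists = [np.linalg.norm(a - b) for a, b in combinations(centroids, 2)]
    return np.sort(dists)

def cluster_shape_distance(centroids_a, centroids_b):
    """Compare two clusterings by the distance between their shapes.

    Assumes both clusterings have the same number of centroids, so the
    sorted distance lists can be compared element-wise.
    """
    return float(np.linalg.norm(cluster_shape(centroids_a) - cluster_shape(centroids_b)))
```

Because the shape depends only on inter-centroid distances, it is invariant to translating or rotating the whole centroid set, which is what lets it act as a fingerprint of the originating series.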
1501.00405 | 2952616410 | While analyzing vehicular sensor data, we found that frequently occurring waveforms could serve as features for further analysis, such as rule mining, classification, and anomaly detection. The discovery of waveform patterns, also known as time-series motifs, has been studied extensively; however, available techniques for discovering frequently occurring time-series motifs were found lacking in either efficiency or quality: Standard subsequence clustering results in poor quality, to the extent that it has even been termed 'meaningless'. Variants of hierarchical clustering using techniques for efficient discovery of 'exact pair motifs' find high-quality frequent motifs, but at the cost of high computational complexity, making such techniques unusable for our voluminous vehicular sensor data. We show that good quality frequent motifs can be discovered using bounded spherical clustering of time-series subsequences, which we refer to as COIN clustering, with near linear complexity in time-series size. COIN clustering addresses many of the challenges that previously led to subsequence clustering being viewed as meaningless. We describe an end-to-end motif-discovery procedure using a sequence of pre and post-processing techniques that remove trivial-matches and shifted-motifs, which also plagued previous subsequence-clustering approaches. We demonstrate that our technique efficiently discovers frequent motifs in voluminous vehicular sensor data as well as in publicly available data sets. | Detailed analysis of the challenges involved in STS clustering was presented by Chen @cite_4 @cite_23 . He proposed an alternate distance measure to solve this issue. We submit that the use of bounded spherical, i.e., COIN clustering for discovery of frequent motifs from time-series works in practice, so STS clustering is meaningful, at least to us, as we found it useful as well as highly efficient for our practical application scenario. 
Code for these techniques was not available; however, they did not focus on efficiency per se, and used standard clustering techniques such as k-means, which is clearly outperformed by BIRCH as shown in @cite_10 . Of course, unlike the Epenthesis approach of @cite_5 , which is parameter-free, approaches based on subsequence clustering all rely on at least the motif width being an input parameter. | {
"cite_N": [
"@cite_10",
"@cite_5",
"@cite_4",
"@cite_23"
],
"mid": [
"2097747115",
"2083236658",
"2125501392",
"2010294100"
],
"abstract": [
"Time series clustering has been shown effective in providing useful information in various domains. There seems to be an increased interest in time series clustering as part of the effort in temporal data mining research. To provide an overview, this paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains. The basics of time series clustering are presented, including general-purpose clustering algorithms commonly used in time series clustering studies, the criteria for evaluating the performance of the clustering results, and the measures to determine the similarity/dissimilarity between two time series being compared, either in the forms of raw data, extracted features, or some model parameters. The past research is organized into three groups depending upon whether they work directly with the raw data either in the time or frequency domain, indirectly with features extracted from the raw data, or indirectly with models built from the raw data. The uniqueness and limitation of previous research are discussed and several possible topics for future research are identified. Moreover, the areas that time series clustering has been applied to are also summarized, including the sources of data used. It is hoped that this review will serve as the steppingstone for those interested in advancing this area of research.",
"Given the pervasiveness of time series data in all human endeavors, and the ubiquity of clustering as a data mining application, it is somewhat surprising that the problem of time series clustering from a single stream remains largely unsolved. Most work on time series clustering considers the clustering of individual time series, e.g., gene expression profiles, individual heartbeats or individual gait cycles. The few attempts at clustering time series streams have been shown to be objectively incorrect in some cases, and in other cases shown to work only on the most contrived datasets by carefully adjusting a large set of parameters. In this work, we make two fundamental contributions. First, we show that the problem definition for time series clustering from streams currently used is inherently flawed, and a new definition is necessary. Second, we show that the Minimum Description Length (MDL) framework offers an efficient, effective and essentially parameter-free method for time series clustering. We show that our method produces objectively correct results on a wide variety of datasets from medicine, zoology and industrial process analyses.",
"The startling claim was made that sequential time series clustering is meaningless. This has important consequences for a significant amount of work in the literature, since such a claim invalidates this work's contribution. In this paper, we show that sequential time series clustering is not meaningless, and that the problem highlighted in these works stem from their use of the Euclidean distance metric as the distance measure in the subsequence vector space. As a solution, we consider quite a general class of time series, and propose a regime based on two types of similarity that can exist between subsequence vectors, which give rise naturally to an alternative distance measure to Euclidean distance in the subsequence vector space. We show that, using this alternative distance measure, sequential time series clustering can indeed be meaningful. We repeat a key experiment in the work on which the \"meaningless\" claim was based, and show that our method leads to a successful clustering outcome.",
"Sequential time series clustering is a technique used to extract important features from time series data. The method can be shown to be the process of clustering in the delay-vector space formalism used in the Dynamical Systems literature. Recently, the startling claim was made that sequential time series clustering is meaningless. This has important consequences for a significant amount of work in the literature, since such a claim invalidates these work’s contribution. In this paper, we show that sequential time series clustering is not meaningless, and that the problem highlighted in these works stem from their use of the Euclidean distance metric as the distance measure in the delay-vector space. As a solution, we consider quite a general class of time series, and propose a regime based on two types of similarity that can exist between delay vectors, giving rise naturally to an alternative distance measure to Euclidean distance in the delay-vector space. We show that, using this alternative distance measure, sequential time series clustering can indeed be meaningful. We repeat a key experiment in the work on which the “meaningless” claim was based, and show that our method leads to a successful clustering outcome."
]
} |
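The row above describes COIN clustering only as bounded spherical clustering of subsequences with near-linear complexity. A single-pass, leader-style sketch consistent with that description is below; the radius parameter, first-fit assignment rule, and running-mean centroids are our assumptions, not details taken from the paper.

```python
import numpy as np

def coin_cluster(subsequences, radius):
    """Single-pass bounded spherical clustering (a leader-style sketch).

    Each subsequence joins the first cluster whose centroid lies within
    `radius`; otherwise it seeds a new cluster. Centroids are updated as
    running means, so one scan over the data suffices, giving near-linear
    behavior when the number of clusters stays small.
    """
    centroids, members, counts = [], [], []
    for i, s in enumerate(np.asarray(subsequences, dtype=float)):
        for k, c in enumerate(centroids):
            if np.linalg.norm(s - c) <= radius:
                counts[k] += 1
                centroids[k] = c + (s - c) / counts[k]  # incremental mean update
                members[k].append(i)
                break
        else:
            centroids.append(s.copy())  # no cluster close enough: seed a new one
            counts.append(1)
            members.append([i])
    return centroids, members

centroids, members = coin_cluster(
    [[0, 0], [0.1, 0], [5, 5], [5.1, 5], [0, 0.1]], radius=1.0
)
```

The bounded radius is what keeps each cluster spherical and prevents the chaining that plagues unconstrained centroid-based subsequence clustering.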
1501.00405 | 2952616410 | While analyzing vehicular sensor data, we found that frequently occurring waveforms could serve as features for further analysis, such as rule mining, classification, and anomaly detection. The discovery of waveform patterns, also known as time-series motifs, has been studied extensively; however, available techniques for discovering frequently occurring time-series motifs were found lacking in either efficiency or quality: Standard subsequence clustering results in poor quality, to the extent that it has even been termed 'meaningless'. Variants of hierarchical clustering using techniques for efficient discovery of 'exact pair motifs' find high-quality frequent motifs, but at the cost of high computational complexity, making such techniques unusable for our voluminous vehicular sensor data. We show that good quality frequent motifs can be discovered using bounded spherical clustering of time-series subsequences, which we refer to as COIN clustering, with near linear complexity in time-series size. COIN clustering addresses many of the challenges that previously led to subsequence clustering being viewed as meaningless. We describe an end-to-end motif-discovery procedure using a sequence of pre and post-processing techniques that remove trivial-matches and shifted-motifs, which also plagued previous subsequence-clustering approaches. We demonstrate that our technique efficiently discovers frequent motifs in voluminous vehicular sensor data as well as in publicly available data sets. | To the best of our knowledge @cite_10 , little has been written regarding the use of bounded spherical (COIN) clustering techniques, especially for motif discovery, e.g., using BIRCH @cite_26 . Our COIN-LSH approach improves on quality of motifs discovered using BIRCH while showing similar performance, and is also parallelizable using techniques such as in @cite_3 . 
A concept similar to LSH was used in @cite_6 for the discovery of pair motifs in images, but on a discrete symbolic representation of time-series, and the hash functions were chosen by omitting specific dimensions. In contrast, we use subsequences in their original form and hashing based on random hyperplanes in d-dimensional space. A concept similar to COIN has been used in @cite_15 , but for pair motifs rather than frequent motifs. | {
"cite_N": [
"@cite_26",
"@cite_6",
"@cite_3",
"@cite_15",
"@cite_10"
],
"mid": [
"2095897464",
"2086506616",
"2293892294",
"1513731586",
"2097747115"
],
"abstract": [
"Finding useful patterns in large datasets has attracted considerable interest recently, and one of the most widely studied problems in this area is the identification of clusters, or densely populated regions, in a multi-dimensional dataset. Prior work does not adequately address the problem of large datasets and minimization of I/O costs. This paper presents a data clustering method named BIRCH (Balanced Iterative Reducing and Clustering using Hierarchies), and demonstrates that it is especially suitable for very large databases. BIRCH incrementally and dynamically clusters incoming multi-dimensional metric data points to try to produce the best quality clustering with the available resources (i.e., available memory and time constraints). BIRCH can typically find a good clustering with a single scan of the data, and improve the quality further with a few additional scans. BIRCH is also the first clustering algorithm proposed in the database area to handle \"noise\" (data points that are not part of the underlying pattern) effectively. We evaluate BIRCH's time/space efficiency, data input order sensitivity, and clustering quality through several experiments. We also present a performance comparison of BIRCH versus CLARANS, a clustering method proposed recently for large datasets, and show that BIRCH is consistently superior.",
"The problem of efficiently finding images that are similar to a target image has attracted much attention in the image processing community and is rightly considered an information retrieval task. However, the problem of finding structure and regularities in large image datasets is an area in which data mining is beginning to make fundamental contributions. In this work, we consider the new problem of discovering shape motifs, which are approximately repeated shapes within (or between) image collections. As we shall show, shape motifs can have applications in tasks as diverse as anthropology, law enforcement, and historical manuscript mining. Brute force discovery of shape motifs could be untenably slow, especially as many domains may require an expensive rotation invariant distance measure. We introduce an algorithm that is two to three orders of magnitude faster than brute force search, and demonstrate the utility of our approach with several real world datasets from diverse domains.",
"In this paper we describe graph-based parallel algorithms for entity resolution that improve over the map-reduce approach. We compare two approaches to parallelize a Locality Sensitive Hashing (LSH) accelerated,IterativeMatch-Merge (IMM) entity resolution technique: BCP, where records hashed together are compared at a single node reducer, vs an alternative mechanism (RCP) where comparison load is better distributed across processors especially in the presence of severely skewed bucket sizes. We analyze the BCP and RCP approaches analytically as well as empirically using a large synthetically generated dataset. We generalize the lessons learned from our experience and submit that the RCP approach is also applicable in many similar applications that rely on LSH or related grouping strategies to minimize pair-wise comparisons.",
"Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this similarity hints at structure which has been conserved for some reason and may therefore be of interest. Since the formalism of time series motifs in 2002, dozens of researchers have used them for diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and entomological telemetry data mining.",
"Time series clustering has been shown effective in providing useful information in various domains. There seems to be an increased interest in time series clustering as part of the effort in temporal data mining research. To provide an overview, this paper surveys and summarizes previous works that investigated the clustering of time series data in various application domains. The basics of time series clustering are presented, including general-purpose clustering algorithms commonly used in time series clustering studies, the criteria for evaluating the performance of the clustering results, and the measures to determine the similarity/dissimilarity between two time series being compared, either in the forms of raw data, extracted features, or some model parameters. The past research is organized into three groups depending upon whether they work directly with the raw data either in the time or frequency domain, indirectly with features extracted from the raw data, or indirectly with models built from the raw data. The uniqueness and limitation of previous research are discussed and several possible topics for future research are identified. Moreover, the areas that time series clustering has been applied to are also summarized, including the sources of data used. It is hoped that this review will serve as the steppingstone for those interested in advancing this area of research."
]
} |
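The related work above contrasts hashing raw subsequences with random hyperplanes against hashing a symbolic representation by omitting dimensions. A minimal random-hyperplane (SimHash-style) bucketing sketch is below; the number of hyperplanes and the seeding are illustrative choices, not parameters from the paper.

```python
import numpy as np

def lsh_buckets(subsequences, n_planes=8, seed=0):
    """Bucket d-dimensional subsequences with random-hyperplane LSH.

    Each hyperplane contributes one bit (which side of the plane the
    vector falls on); subsequences sharing all bits land in the same
    bucket, so vectors pointing in similar directions collide with
    high probability while dissimilar ones rarely do.
    """
    X = np.asarray(subsequences, dtype=float)
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_planes, X.shape[1]))  # random plane normals
    bits = (X @ planes.T) >= 0                            # one sign bit per plane
    buckets = {}
    for i, row in enumerate(bits):
        buckets.setdefault(tuple(row), []).append(i)
    return buckets

buckets = lsh_buckets([[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [-1.0, -2.0, -3.0]])
```

Only subsequences within a bucket need pairwise comparison, which is how LSH cuts the quadratic cost of candidate generation.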
1501.00405 | 2952616410 | While analyzing vehicular sensor data, we found that frequently occurring waveforms could serve as features for further analysis, such as rule mining, classification, and anomaly detection. The discovery of waveform patterns, also known as time-series motifs, has been studied extensively; however, available techniques for discovering frequently occurring time-series motifs were found lacking in either efficiency or quality: Standard subsequence clustering results in poor quality, to the extent that it has even been termed 'meaningless'. Variants of hierarchical clustering using techniques for efficient discovery of 'exact pair motifs' find high-quality frequent motifs, but at the cost of high computational complexity, making such techniques unusable for our voluminous vehicular sensor data. We show that good quality frequent motifs can be discovered using bounded spherical clustering of time-series subsequences, which we refer to as COIN clustering, with near linear complexity in time-series size. COIN clustering addresses many of the challenges that previously led to subsequence clustering being viewed as meaningless. We describe an end-to-end motif-discovery procedure using a sequence of pre and post-processing techniques that remove trivial-matches and shifted-motifs, which also plagued previous subsequence-clustering approaches. We demonstrate that our technique efficiently discovers frequent motifs in voluminous vehicular sensor data as well as in publicly available data sets. | The problem of trivially matching subsequences has been identified in the research literature related to STS clustering @cite_4 @cite_23 @cite_12 @cite_8 @cite_0 @cite_1 . Most of these approaches @cite_4 @cite_23 @cite_1 focus on non-overlapping subsequences at the outset; therefore, such approaches may altogether miss some of the motifs due to their lower support. 
Further, Chen has also argued in @cite_12 that removing trivially matching subsequences before clustering does not completely solve the issue of smoothing of subsequence clusters. Not enough attention has been given to an approach for removing such subsequences after clustering, primarily because of the absence of a suitable clustering method itself. Similar to the above publications, we also remove trivially matching subsequences before clustering; however, one of the key contributions of our work is the removal of trivially matching subsequences through post-processing, see , as well as highlighting the importance of level-splitting so that the discovered motifs are useful in practice. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_23",
"@cite_12"
],
"mid": [
"2125501392",
"2006761268",
"34835763",
"",
"2010294100",
"1516722970"
],
"abstract": [
"The startling claim was made that sequential time series clustering is meaningless. This has important consequences for a significant amount of work in the literature, since such a claim invalidates this work's contribution. In this paper, we show that sequential time series clustering is not meaningless, and that the problem highlighted in these works stem from their use of the Euclidean distance metric as the distance measure in the subsequence vector space. As a solution, we consider quite a general class of time series, and propose a regime based on two types of similarity that can exist between subsequence vectors, which give rise naturally to an alternative distance measure to Euclidean distance in the subsequence vector space. We show that, using this alternative distance measure, sequential time series clustering can indeed be meaningful. We repeat a key experiment in the work on which the \"meaningless\" claim was based, and show that our method leads to a successful clustering outcome.",
"Several important time series data mining problems reduce to the core task of finding approximately repeated subsequences in a longer time series. In an earlier work, we formalized the idea of approximately repeated subsequences by introducing the notion of time series motifs. Two limitations of this work were the poor scalability of the motif discovery algorithm, and the inability to discover motifs in the presence of noise.Here we address these limitations by introducing a novel algorithm inspired by recent advances in the problem of pattern discovery in biosequences. Our algorithm is probabilistic in nature, but as we show empirically and theoretically, it can find time series motifs with very high probability even in the presence of noise or \"don't care\" symbols. Not only is the algorithm fast, but it is an anytime algorithm, producing likely candidate motifs almost immediately, and gradually improving the quality of results over time.",
"The problem of locating motifs in real-valued, multivariate time series data involves the discovery of sets of recurring patterns embedded in the time series. Each set is composed of several non-overlapping subsequences and constitutes a motif because all of the included subsequences are similar. The ability to automatically discover such motifs allows intelligent systems to form endogenously meaningful representations of their environment through unsupervised sensor analysis. In this paper, we formulate a unifying view of motif discovery as a problem of locating regions of high density in the space of all time series subsequences. Our approach is efficient (sub-quadratic in the length of the data), requires fewer user-specified parameters than previous methods, and naturally allows variable length motif occurrences and non-linear temporal warping. We evaluate the performance of our approach using four data sets from different domains including on-body inertial sensors and speech.",
"",
"Sequential time series clustering is a technique used to extract important features from time series data. The method can be shown to be the process of clustering in the delay-vector space formalism used in the Dynamical Systems literature. Recently, the startling claim was made that sequential time series clustering is meaningless. This has important consequences for a significant amount of work in the literature, since such a claim invalidates these work’s contribution. In this paper, we show that sequential time series clustering is not meaningless, and that the problem highlighted in these works stem from their use of the Euclidean distance metric as the distance measure in the delay-vector space. As a solution, we consider quite a general class of time series, and propose a regime based on two types of similarity that can exist between delay vectors, giving rise naturally to an alternative distance measure to Euclidean distance in the delay-vector space. We show that, using this alternative distance measure, sequential time series clustering can indeed be meaningful. We repeat a key experiment in the work on which the “meaningless” claim was based, and show that our method leads to a successful clustering outcome.",
"Clustering time series data using the popular subsequence (STS) technique has been widely used in the data mining and wider communities. Recently the conclusion was made that it is meaningless, based on the findings that it produces (a) clustering outcomes for distinct time series that are not distinguishable from one another, and (b) cluster centroids that are smoothed. More recent work has since showed that (a) could be solved by introducing a lag in the subsequence vector construction process, however we show in this paper that such an approach does not solve (b). Motivating the terminology that a clustering method which overcomes (a) is meaningful, while one which overcomes (a) and (b) is useful, we propose an approach that produces useful time series clustering. The approach is based on restricting the clustering space to extend only over the region visited by the time series in the subsequence vector space. We test the approach on a set of 12 diverse real-world and synthetic data sets and find that (a) one can distinguish between the clusterings of these time series, and (b) that the centroids produced in each case retain the character of the underlying series from which they came."
]
} |
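The row above argues for removing trivially matching subsequences as a post-processing step after clustering. One way to sketch this is to keep, within each cluster, only member occurrences whose start offsets are at least one window length apart; the greedy earliest-first rule below is our assumption about how such filtering could work.

```python
def remove_trivial_matches(offsets, window):
    """Greedily keep cluster-member start offsets at least `window` apart.

    Subsequences overlapping an already-kept occurrence are trivial
    matches of it and are dropped, so the surviving count reflects the
    motif's true frequency rather than inflated overlap counts.
    """
    kept = []
    for off in sorted(offsets):
        if not kept or off - kept[-1] >= window:
            kept.append(off)
    return kept

survivors = remove_trivial_matches([0, 1, 2, 10, 11, 25], window=5)
```

Running this per cluster after clustering, rather than discarding overlapping subsequences up front, avoids losing low-support motifs before they have a chance to form a cluster.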
1501.00405 | 2952616410 | While analyzing vehicular sensor data, we found that frequently occurring waveforms could serve as features for further analysis, such as rule mining, classification, and anomaly detection. The discovery of waveform patterns, also known as time-series motifs, has been studied extensively; however, available techniques for discovering frequently occurring time-series motifs were found lacking in either efficiency or quality: Standard subsequence clustering results in poor quality, to the extent that it has even been termed 'meaningless'. Variants of hierarchical clustering using techniques for efficient discovery of 'exact pair motifs' find high-quality frequent motifs, but at the cost of high computational complexity, making such techniques unusable for our voluminous vehicular sensor data. We show that good quality frequent motifs can be discovered using bounded spherical clustering of time-series subsequences, which we refer to as COIN clustering, with near linear complexity in time-series size. COIN clustering addresses many of the challenges that previously led to subsequence clustering being viewed as meaningless. We describe an end-to-end motif-discovery procedure using a sequence of pre and post-processing techniques that remove trivial-matches and shifted-motifs, which also plagued previous subsequence-clustering approaches. We demonstrate that our technique efficiently discovers frequent motifs in voluminous vehicular sensor data as well as in publicly available data sets. | Finally, note that our definition of frequent motifs is very different from those presented in @cite_6 @cite_15 @cite_11 as they either focus on finding pairs of similar subsequences or clustering different time-series rather than subsequences of the same series (so they do not face the problem of trivial matches). | {
"cite_N": [
"@cite_15",
"@cite_6",
"@cite_11"
],
"mid": [
"1513731586",
"2086506616",
"2078559879"
],
"abstract": [
"Time series motifs are pairs of individual time series, or subsequences of a longer time series, which are very similar to each other. As with their discrete analogues in computational biology, this similarity hints at structure which has been conserved for some reason and may therefore be of interest. Since the formalism of time series motifs in 2002, dozens of researchers have used them for diverse applications in many different domains. Because the obvious algorithm for computing motifs is quadratic in the number of items, more than a dozen approximate algorithms to discover motifs have been proposed in the literature. In this work, for the first time, we show a tractable exact algorithm to find time series motifs. As we shall show through extensive experiments, our algorithm is up to three orders of magnitude faster than brute-force search in large datasets. We further show that our algorithm is fast enough to be used as a subroutine in higher level data mining algorithms for anytime classification, near-duplicate detection and summarization, and we consider detailed case studies in domains as diverse as electroencephalograph interpretation and entomological telemetry data mining.",
"The problem of efficiently finding images that are similar to a target image has attracted much attention in the image processing community and is rightly considered an information retrieval task. However, the problem of finding structure and regularities in large image datasets is an area in which data mining is beginning to make fundamental contributions. In this work, we consider the new problem of discovering shape motifs, which are approximately repeated shapes within (or between) image collections. As we shall show, shape motifs can have applications in tasks as diverse as anthropology, law enforcement, and historical manuscript mining. Brute force discovery of shape motifs could be untenably slow, especially as many domains may require an expensive rotation invariant distance measure. We introduce an algorithm that is two to three orders of magnitude faster than brute force search, and demonstrate the utility of our approach with several real world datasets from diverse domains.",
"Time series clustering has become an increasingly important research topic over the past decade. Most existing methods for time series clustering rely on distances calculated from the entire raw data using the Euclidean distance or Dynamic Time Warping distance as the distance measure. However, the presence of significant noise, dropouts, or extraneous data can greatly limit the accuracy of clustering in this domain. Moreover, for most real world problems, we cannot expect objects from the same class to be equal in length. As a consequence, most work on time series clustering only considers the clustering of individual time series \"behaviors,\" e.g., individual heart beats or individual gait cycles, and contrives the time series in some way to make them all equal in length. However, contriving the data in such a way is often a harder problem than the clustering itself. In this work, we show that by using only some local patterns and deliberately ignoring the rest of the data, we can mitigate the above problems and cluster time series of different lengths, i.e., cluster one heartbeat with multiple heartbeats. To achieve this we exploit and extend a recently introduced concept in time series data mining called shapelets. Unlike existing work, our work demonstrates for the first time the unintuitive fact that shapelets can be learned from unlabeled time series. We show, with extensive empirical evaluation in diverse domains, that our method is more accurate than existing methods. Moreover, in addition to accurate clustering results, we show that our work also has the potential to give insights into the domains to which it is applied."
]
} |
1501.00199 | 2950904389 | Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Following Occam's razor principle suggests that the simple structure induced by our model better captures the latent preferences and decision making processes present in the real world than classic co-clustering or matrix factorization. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity. | A co-clustering approach to recommendation was proposed by @cite_9 . This model uses co-clustering to allow for sharing of strength within each group. However, it does not overcome the rank- @math problem, i.e. while clustering reduces intra-cluster variance and improves generalization, it does not increase the rank beyond what a simple factorization model is capable of doing. Finally, @cite_5 proposed a factorization model based on a Dirichlet process over users and columns. All these models are closely related to the mixed-membership stochastic blockmodels of @cite_10 . | {
"cite_N": [
"@cite_5",
"@cite_9",
"@cite_10"
],
"mid": [
"2156338064",
"",
"2107107106"
],
"abstract": [
"Matrix factorization algorithms are frequently used in the machine learning community to find low dimensional representations of data. We introduce a novel generative Bayesian probabilistic model for unsupervised matrix and tensor factorization. The model consists of several interacting LDA models, one for each modality. We describe an efficient collapsed Gibbs sampler for inference. We also derive the non-parametric form of the model where interacting LDA models are replaced with interacting HDP models. Experiments demonstrate that the model is useful for prediction of missing data with two or more modalities as well as learning the latent structure in the data.",
"",
"Consider data consisting of pairwise measurements, such as presence or absence of links between pairs of objects. These data arise, for instance, in the analysis of protein interactions and gene regulatory networks, collections of author-recipient email, and social networks. Analyzing pairwise measurements with probabilistic models requires special assumptions, since the usual independence or exchangeability assumptions no longer hold. Here we introduce a class of variance allocation models for pairwise measurements: mixed membership stochastic blockmodels. These models combine global parameters that instantiate dense patches of connectivity (blockmodel) with local parameters that instantiate node-specific variability in the connections (mixed membership). We develop a general variational inference algorithm for fast approximate posterior inference. We demonstrate the advantages of mixed membership stochastic blockmodels with applications to social networks and protein interaction networks."
]
} |
1501.00199 | 2950904389 | Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Following Occam's razor principle suggests that the simple structure induced by our model better captures the latent preferences and decision making processes present in the real world than classic co-clustering or matrix factorization. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity. | Co-clustering It was originally used primarily for understanding the clustering of rows and columns of a matrix rather than for matrix approximation or completion @cite_11 . This formulation was well suited for biological tasks, but it computationally evolved to cover a wider variety of objectives @cite_7 . @cite_0 defined a soft co-clustering objective akin to a factorization model. Recent work has defined a Bayesian model for co-clustering focused on matrix modeling @cite_21 . @cite_8 focuses on exploiting co-clustering ensembles, but does so by finding a single consensus co-clustering. As far as we know, ours is the first work to use an additive combination of co-clusterings. | {
"cite_N": [
"@cite_7",
"@cite_8",
"@cite_21",
"@cite_0",
"@cite_11"
],
"mid": [
"2138072998",
"1500243278",
"",
"2009685860",
"2036328877"
],
"abstract": [
"Co-clustering is a powerful data mining technique with varied applications such as text clustering, microarray analysis and recommender systems. Recently, an information-theoretic co-clustering approach applicable to empirical joint probability distributions was proposed. In many situations, co-clustering of more general matrices is desired. In this paper, we present a substantially generalized co-clustering framework wherein any Bregman divergence can be used in the objective function, and various conditional expectation based constraints can be considered based on the statistics that need to be preserved. Analysis of the co-clustering problem leads to the minimum Bregman information principle, which generalizes the maximum entropy principle, and yields an elegant meta algorithm that is guaranteed to achieve local optimality. Our methodology yields new algorithms and also encompasses several previously known clustering and co-clustering algorithms based on alternate minimization.",
"Forming consensus clusters from multiple input clusterings can improve accuracy and robustness. Current clustering ensemble methods require specifying the number of consensus clusters. A poor choice can lead to under or over fitting. This paper proposes a nonparametric Bayesian clustering ensemble (NBCE) method, which can discover the number of clusters in the consensus clustering. Three inference methods are considered: collapsed Gibbs sampling, variational Bayesian inference, and collapsed variational Bayesian inference. Comparison of NBCE with several other algorithms demonstrates its versatility and superior stability.",
"",
"Co-clustering is a generalization of unsupervised clustering that has recently drawn renewed attention, driven by emerging data mining applications in diverse areas. Whereas clustering groups entire columns of a data matrix, co-clustering groups columns over select rows only, i.e., it simultaneously groups rows and columns. The concept generalizes to data “boxes” and higher-way tensors, for simultaneous grouping along multiple modes. Various co-clustering formulations have been proposed, but no workhorse analogous to K-means has emerged. This paper starts from K-means and shows how co-clustering can be formulated as a constrained multilinear decomposition with sparse latent factors. For three- and higher-way data, uniqueness of the multilinear decomposition implies that, unlike matrix co-clustering, it is possible to unravel a large number of possibly overlapping co-clusters. A basic multi-way co-clustering algorithm is proposed that exploits multilinearity using Lasso-type coordinate updates. Various line search schemes are then introduced to speed up convergence, and suitable modifications are proposed to deal with missing values. The imposition of latent sparsity pays a collateral dividend: it turns out that sequentially extracting one co-cluster at a time is almost optimal, hence the approach scales well for large datasets. The resulting algorithms are benchmarked against the state-of-art in pertinent simulations, and applied to measured data, including the ENRON e-mail corpus.",
"Abstract Clustering algorithms are now in widespread use for sorting heterogeneous data into homogeneous blocks. If the data consist of a number of variables taking values over a number of cases, these algorithms may be used either to construct clusters of variables (using, say, correlation as a measure of distance between variables) or clusters of cases. This article presents a model, and a technique, for clustering cases and variables simultaneously. The principal advantage in this approach is the direct interpretation of the clusters on the data."
]
} |
1501.00199 | 2950904389 | Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Following Occam's razor principle suggests that the simple structure induced by our model better captures the latent preferences and decision making processes present in the real world than classic co-clustering or matrix factorization. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity. | Matrix Approximation There exists a large body of work on matrix approximation in the theoretical computer science community. They focus mainly on efficient low-rank approximations, e.g. by projection or by interpolation. Examples of the projection based strategy are @cite_3 @cite_14 . Essentially one aims to find a general low-rank approximation of the matrix, as is common in most recommender models. | {
"cite_N": [
"@cite_14",
"@cite_3"
],
"mid": [
"2949526110",
"2117756735"
],
"abstract": [
"We reconsider randomized algorithms for the low-rank approximation of symmetric positive semi-definite (SPSD) matrices such as Laplacian and kernel matrices that arise in data analysis and machine learning applications. Our main results consist of an empirical evaluation of the performance quality and running time of sampling and projection methods on a diverse suite of SPSD matrices. Our results highlight complementary aspects of sampling versus projection methods; they characterize the effects of common data preprocessing steps on the performance of these algorithms; and they point to important differences between uniform sampling and nonuniform sampling methods based on leverage scores. In addition, our empirical results illustrate that existing theory is so weak that it does not provide even a qualitative guide to practice. Thus, we complement our empirical results with a suite of worst-case theoretical bounds for both random sampling and random projection methods. These bounds are qualitatively superior to existing bounds---e.g. improved additive-error bounds for spectral and Frobenius norm error and relative-error bounds for trace norm error---and they point to future directions to make these algorithms useful in even larger-scale machine learning applications.",
"Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, robustness, and/or speed. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the @math dominant components of the singular value decomposition of an @math matrix. (i) For a dense input matrix, randomized algorithms require @math floating-point operations (flops) in contrast to @math for classical algorithms. (ii) For a sparse input matrix, the flop count matches classical Krylov subspace methods, but the randomized approach is more robust and can easily be reorganized to exploit multiprocessor architectures. (iii) For a matrix that is too large to fit in fast memory, the randomized techniques require only a constant number of passes over the data, as opposed to @math passes for classical algorithms. In fact, it is sometimes possible to perform matrix approximation with a single pass over the data."
]
} |
1501.00199 | 2950904389 | Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Following Occam's razor principle suggests that the simple structure induced by our model better captures the latent preferences and decision making processes present in the real world than classic co-clustering or matrix factorization. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity. | A more parsimonious strategy is to seek decompositions. There one aims to approximate columns of a matrix by a linear combination of a subset of other columns @cite_6 . Nonetheless this requires us to store at least one, possibly more scaling coefficients per column. Also note the focus on column interpolations --- this can easily be extended to row and column interpolations, simply by first performing a row interpolation and then interpolating the columns. To the best of our knowledge, the problem of approximating matrices with piecewise constant block matrices as we propose here is not the focus of research in TCS. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2951907999"
],
"abstract": [
"There has been significant interest and progress recently in algorithms that solve regression problems involving tall and thin matrices in input sparsity time. These algorithms find a shorter equivalent of an n*d matrix where n >> d, which allows one to solve a poly(d) sized problem instead. In practice, the best performances are often obtained by invoking these routines in an iterative fashion. We show these iterative methods can be adapted to give theoretical guarantees comparable and better than the current state of the art. Our approaches are based on computing the importances of the rows, known as leverage scores, in an iterative manner. We show that alternating between computing a short matrix estimate and finding more accurate approximate leverage scores leads to a series of geometrically smaller instances. This gives an algorithm that runs in @math time for any @math , where the @math term is comparable to the cost of solving a regression problem on the small approximation. Our results are built upon the close connection between randomized matrix algorithms, iterative methods, and graph sparsification."
]
} |
1501.00199 | 2950904389 | Matrix completion and approximation are popular tools to capture a user's preferences for recommendation and to approximate missing data. Instead of using low-rank factorization we take a drastically different approach, based on the simple insight that an additive model of co-clusterings allows one to approximate matrices efficiently. This allows us to build a concise model that, per bit of model learned, significantly beats all factorization approaches to matrix approximation. Even more surprisingly, we find that summing over small co-clusterings is more effective in modeling matrices than classic co-clustering, which uses just one large partitioning of the matrix. Following Occam's razor principle suggests that the simple structure induced by our model better captures the latent preferences and decision making processes present in the real world than classic co-clustering or matrix factorization. We provide an iterative minimization algorithm, a collapsed Gibbs sampler, theoretical guarantees for matrix approximation, and excellent empirical evidence for the efficacy of our approach. We achieve state-of-the-art results on the Netflix problem with a fraction of the model complexity. | Succinct modeling The data mining community has focused on finding succinct models of data, often directly optimizing the model size described by the minimum description length (MDL) principle @cite_15 . Finding effective ways to compress real world data allows for better modeling and understanding of the datasets. This approach has led to valuable results in pattern and item-set mining @cite_22 @cite_12 as well as graph summarization @cite_19 . However, these approaches typically focus on modeling databases of discrete items rather than real-valued datasets with missing values. | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_22",
"@cite_12"
],
"mid": [
"",
"2054658115",
"2124066753",
"2118262358"
],
"abstract": [
"",
"The number of digits it takes to write down an observed sequence x_1, ..., x_N of a time series depends on the model with its parameters that one assumes to have generated the observed data. Accordingly, by finding the model which minimizes the description length one obtains estimates of both the integer-valued structure parameters and the real-valued system parameters.",
"One of the major problems in pattern mining is the explosion of the number of results. Tight constraints reveal only common knowledge, while loose constraints lead to an explosion in the number of returned patterns. This is caused by large groups of patterns essentially describing the same set of transactions. In this paper we approach this problem using the MDL principle: the best set of patterns is that set that compresses the database best. For this task we introduce the Krimp algorithm. Experimental evaluation shows that typically only hundreds of itemsets are returned; a dramatic reduction, up to seven orders of magnitude, in the number of frequent item sets. These selections, called code tables, are of high quality. This is shown with compression ratios, swap-randomisation, and the accuracies of the code table-based Krimp classifier, all obtained on a wide range of datasets. Further, we extensively evaluate the heuristic choices made in the design of the algorithm.",
"Most, if not all, databases are mixtures of samples from different distributions. Transactional data is no exception. For the prototypical example, supermarket basket analysis, one also expects a mixture of different buying patterns. Households of retired people buy different collections of items than households with young children. Models that take such underlying distributions into account are in general superior to those that do not. In this paper we introduce two MDL-based algorithms that follow orthogonal approaches to identify the components in a transaction database. The first follows a model-based approach, while the second is data-driven. Both are parameter-free: the number of components and the components themselves are chosen such that the combined complexity of data and models is minimised. Further, neither prior knowledge on the distributions nor a distance metric on the data is required. Experiments with both methods show that highly characteristic components are identified."
]
} |
1501.00287 | 247584736 | We study consistency of learning algorithms for a multi-class performance metric that is a non-decomposable function of the confusion matrix of a classifier and cannot be expressed as a sum of losses on individual data points; examples of such performance metrics include the macro F-measure popular in information retrieval and the G-mean metric used in class-imbalanced problems. While there has been much work in recent years in understanding the consistency properties of learning algorithms for binary non-decomposable metrics, little is known either about the form of the optimal classifier for a general multi-class non-decomposable metric, or about how these learning algorithms generalize to the multi-class case. In this paper, we provide a unified framework for analysing a multi-class non-decomposable performance metric, where the problem of finding the optimal classifier for the performance metric is viewed as an optimization problem over the space of all confusion matrices achievable under the given distribution. Using this framework, we show that (under a continuous distribution) the optimal classifier for a multi-class performance metric can be obtained as the solution of a cost-sensitive classification problem, thus generalizing several previous results on specific binary non-decomposable metrics. We then design a consistent learning algorithm for concave multi-class performance metrics that proceeds via a sequence of cost-sensitive classification problems, and can be seen as applying the conditional gradient (CG) optimization method over the space of feasible confusion matrices. To our knowledge, this is the first efficient learning algorithm (whose running time is polynomial in the number of classes) that is consistent for a large family of multi-class non-decomposable metrics. Our consistency proof uses a novel technique based on the convergence analysis of the CG method. | There have been several algorithms designed to optimize non-decomposable classification metrics, particularly in the binary classification setting; these include the binary plug-in algorithm that applies an empirical threshold to a class probability estimate @cite_0 @cite_30 @cite_16 @cite_9 @cite_28 , cost-sensitive risk minimization based approaches @cite_4 @cite_9 @cite_2 , methods that optimize convex and non-convex approximations to the given performance metric @cite_35 @cite_10 @cite_36 @cite_14 @cite_5 , and decision-theoretic methods that learn a class probability estimate and compute predictions that maximize the expected value of the performance metric on a test set @cite_23 @cite_45 @cite_42 . Of these, the plug-in method is known to be consistent for any binary performance metric for which the optimal classifier is threshold-based @cite_39 , while the cost-sensitive approach is shown to be consistent for the class of fractional-linear performance metrics @cite_41 . There have also been results characterizing the optimal classifier for several binary non-decomposable metrics @cite_27 @cite_11 @cite_16 @cite_9 , with the specific form of the classifier available in closed-form for fractional-linear metrics (i.e., metrics that are ratios of linear functions) @cite_41 . | {
"cite_N": [
"@cite_30",
"@cite_35",
"@cite_36",
"@cite_41",
"@cite_42",
"@cite_2",
"@cite_5",
"@cite_10",
"@cite_4",
"@cite_39",
"@cite_23",
"@cite_28",
"@cite_27",
"@cite_16",
"@cite_14",
"@cite_9",
"@cite_0",
"@cite_45",
"@cite_11"
],
"mid": [
"",
"",
"",
"",
"",
"",
"",
"",
"13608894",
"2109689843",
"2152089394",
"",
"2951709405",
"",
"",
"",
"1979495886",
"",
""
],
"abstract": [
"",
"",
"",
"",
"",
"",
"",
"",
"Support vector machines (SVMs) are regularly used for classification of unbalanced data by weighting more heavily the error contribution from the rare class. This heuristic technique is often used to learn classifiers with high F-measure, although this particular application of SVMs has not been rigorously examined. We provide significant and new theoretical results that support this popular heuristic. Specifically, we demonstrate that with the right parameter settings SVMs approximately optimize F-measure in the same way that SVMs have already been known to approximately optimize accuracy. This finding has a number of theoretical and practical implications for using SVMs in F-measure optimization.",
"We study consistency properties of algorithms for non-decomposable performance measures that cannot be expressed as a sum of losses on individual data points, such as the F-measure used in text retrieval and several other performance measures used in class imbalanced settings. While there has been much work on designing algorithms for such performance measures, there is limited understanding of the theoretical properties of these algorithms. Recently, (2012) showed consistency results for two algorithms that optimize the F-measure, but their results apply only to an idealized setting, where precise knowledge of the underlying probability distribution (in the form of the 'true' posterior class probability) is available to a learning algorithm. In this work, we consider plug-in algorithms that learn a classifier by applying an empirically determined threshold to a suitable 'estimate' of the class probability, and provide a general methodology to show consistency of these methods for any non-decomposable measure that can be expressed as a continuous function of true positive rate (TPR) and true negative rate (TNR), and for which the Bayes optimal classifier is the class probability function thresholded suitably. We use this template to derive consistency results for plug-in algorithms for the F-measure and for the geometric mean of TPR and precision; to our knowledge, these are the first such results for these measures. In addition, for continuous distributions, we show consistency of plug-in algorithms for any performance measure that is a continuous and monotonically increasing function of TPR and TNR. Experimental results confirm our theoretical findings.",
"We compare the plug-in rule approach for optimizing the Fβ-measure in multilabel classification with an approach based on structured loss minimization, such as the structured support vector machine (SSVM). Whereas the former derives an optimal prediction from a probabilistic model in a separate inference step, the latter seeks to optimize the Fβ-measure directly during the training phase. We introduce a novel plug-in rule algorithm that estimates all parameters required for a Bayes-optimal prediction via a set of multinomial regression models, and we compare this algorithm with SSVMs in terms of computational complexity and statistical consistency. As a main theoretical result, we show that our plug-in rule algorithm is consistent, whereas the SSVM approaches are not. Finally, we present results of a large experimental study showing the benefits of the introduced algorithm.",
"",
"F-measures are popular performance metrics, particularly for tasks with imbalanced data sets. Algorithms for learning to maximize F-measures follow two approaches: the empirical utility maximization (EUM) approach learns a classifier having optimal performance on training data, while the decision-theoretic approach learns a probabilistic model and then predicts labels with maximum expected F-measure. In this paper, we investigate the theoretical justifications and connections for these two approaches, and we study the conditions under which one approach is preferable to the other using synthetic and real datasets. Given accurate models, our results suggest that the two approaches are asymptotically equivalent given large training and test sets. Nevertheless, empirically, the EUM approach appears to be more robust against model misspecification, and given a good model, the decision-theoretic approach appears to be better for handling rare classes and a common domain adaptation scenario.",
"",
"",
"",
"Thresholding strategies in automated text categorization are an underexplored area of research. This paper presents an examination of the effect of thresholding strategies on the performance of a classifier under various conditions. Using k-Nearest Neighbor (kNN) as the classifier and five evaluation benchmark collections as the testbeds, three common thresholding methods were investigated, including rank-based thresholding (RCut), proportion-based assignments (PCut) and score-based local optimization (SCut); in addition, new variants of these methods are proposed to overcome significant problems in the existing approaches. Experimental results show that the choice of thresholding strategy can significantly influence the performance of kNN, and that the \"optimal\" strategy may vary by application. SCut is potentially better for fine-tuning but risks overfitting. PCut copes better with rare categories and exhibits a smoother trade-off in recall versus precision, but is not suitable for online decision making. RCut is most natural for online response but is too coarse-grained for global or local optimization. RTCut, a new method combining the strength of category ranking and scoring, outperforms both PCut and RCut significantly.",
"",
""
]
} |
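The plug-in rule discussed in the abstracts above — learn a class-probability estimate, then apply an empirically determined threshold chosen to maximize the F-measure on held-out data — can be sketched as follows. The function names and toy data are illustrative, not taken from any of the cited papers.

```python
def f1_score(labels, preds):
    """F1-measure: harmonic mean of precision and recall."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_f1_threshold(probs, labels):
    """Plug-in rule: sweep candidate thresholds over the estimated class
    probabilities and keep the one maximizing F1 on held-out data."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(probs)):
        preds = [1 if p >= t else 0 for p in probs]
        f1 = f1_score(labels, preds)
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1
```

On separable toy scores such as `[0.1, 0.2, 0.8, 0.9]` with labels `[0, 0, 1, 1]`, the sweep recovers threshold 0.8 with F1 = 1.0.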
1412.8556 | 2951625587 | We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training. | A single, un-normalized cell of the "scale-invariant feature transform" (SIFT) @cite_36 and its variants @cite_25 @cite_21 @cite_14 can be written compactly as a formula @cite_4 @cite_41 : h_{\mathrm{SIFT}}(\theta \mid I, \sigma)[x] = \int N_\epsilon(\theta - \angle \nabla I(y)) \, N_\sigma(y - x) \, d\mu(y) where @math is the image restricted to a square domain, centered at a location @math with size @math in the lattice @math determined by the response to a difference-of-Gaussian (DoG) operator across all locations and scales (SIFT detector). Here @math , @math is the independent variable, ranging from @math to @math , corresponding to an orientation histogram bin of size @math , and @math is the spatial pooling scale. The kernel @math is bilinear of size @math and @math separable-bilinear of size @math @cite_41 , although they could be replaced by a Gaussian with standard deviation @math and an angular Gaussian @cite_43 with dispersion parameter @math . The SIFT descriptor is the concatenation of @math cells computed at locations @math on a @math lattice @math , and normalized. | {
"cite_N": [
"@cite_14",
"@cite_4",
"@cite_36",
"@cite_41",
"@cite_21",
"@cite_43",
"@cite_25"
],
"mid": [
"2161969291",
"1916673044",
"",
"2066941820",
"2150782236",
"",
"1677409904"
],
"abstract": [
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.",
"We frame the problem of local representation of imaging data as the computation of minimal sufficient statistics that are invariant to nuisance variability induced by viewpoint and illumination. We show that, under very stringent conditions, these are related to “feature descriptors” commonly used in Computer Vision. Such conditions can be relaxed if multiple views of the same scene are available. We propose a sampling-based and a point-estimate based approximation of such a representation, compared empirically on image-to-(multiple)image matching, for which we introduce a multi-view wide-baseline matching benchmark, consisting of a mixture of real and synthetic objects with ground truth camera motion and dense three-dimensional geometry.",
"",
"VLFeat is an open and portable library of computer vision algorithms. It aims at facilitating fast prototyping and reproducible research for computer vision scientists and students. It includes rigorous implementations of common building blocks such as feature detectors, feature extractors, (hierarchical) k-means clustering, randomized kd-tree matching, and super-pixelization. The source code and interfaces are fully documented. The library integrates directly with MATLAB, a popular language for computer vision research.",
"Establishing visual correspondences is an essential component of many computer vision problems, and is often done with robust, local feature-descriptors. Transmission and storage of these descriptors are of critical importance in the context of mobile distributed camera networks and large indexing problems. We propose a framework for computing low bit-rate feature descriptors with a 20× reduction in bit rate. The framework is low complexity and has significant speed-up in the matching stage. We represent gradient histograms as tree structures which can be efficiently compressed. We show how to efficiently compute distances between descriptors in their compressed representation eliminating the need for decoding. We perform a comprehensive performance comparison with SIFT, SURF, and other low bit-rate descriptors and show that our proposed CHoG descriptor outperforms existing schemes.",
"",
"In this paper, we present a novel scale- and rotation-invariant interest point detector and descriptor, coined SURF (Speeded Up Robust Features). It approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster. This is achieved by relying on integral images for image convolutions; by building on the strengths of the leading existing detectors and descriptors (in casu, using a Hessian matrix-based measure for the detector, and a distribution-based descriptor); and by simplifying these methods to the essential. This leads to a combination of novel detection, description, and matching steps. The paper presents experimental results on a standard evaluation set, as well as on imagery obtained in the context of a real-life object recognition application. Both show SURF's strong performance."
]
} |
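A minimal, hypothetical sketch of the histogram cell described in this row: gradient samples are accumulated into an orientation histogram with a separable bilinear (triangular) spatial weight and linear interpolation between the two nearest angular bins. All names and parameter choices are illustrative assumptions, not code from the cited papers.

```python
import math

def sift_cell(samples, center, cell_size, n_bins=8):
    """Accumulate gradient samples (x, y, angle, magnitude) into an
    orientation histogram for one cell centered at `center`.

    Spatial weighting: separable bilinear (triangular) kernel of width
    `cell_size`.  Angular weighting: linear interpolation between the
    two nearest orientation bins (a triangular kernel in angle)."""
    cx, cy = center
    bin_width = 2.0 * math.pi / n_bins
    hist = [0.0] * n_bins
    for x, y, angle, mag in samples:
        # Separable bilinear spatial weight, zero outside the cell.
        wx = max(0.0, 1.0 - abs(x - cx) / cell_size)
        wy = max(0.0, 1.0 - abs(y - cy) / cell_size)
        if wx == 0.0 or wy == 0.0:
            continue
        # Soft-assign the orientation to its two nearest bins.
        pos = (angle % (2.0 * math.pi)) / bin_width
        i0 = int(pos) % n_bins
        frac = pos - int(pos)
        w = mag * wx * wy
        hist[i0] += w * (1.0 - frac)
        hist[(i0 + 1) % n_bins] += w * frac
    return hist
```

A full descriptor would concatenate several such cells on a spatial lattice and normalize, as the row above describes.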
1412.8556 | 2951625587 | We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training. | Pooling is commonly understood as the combination of responses of feature detectors/descriptors at nearby locations, aimed at transforming the joint feature representation into a more usable one that preserves important information (intrinsic variability) while discarding irrelevant detail (nuisance variability) @cite_15 @cite_19 . However, precisely how pooling trades off these two conflicting aims is unclear and mostly addressed empirically in end-to-end comparisons with numerous confounding factors. Exceptions include @cite_15 , where intrinsic and nuisance variability are combined and abstracted into the variance and distance between the means of scalar random variables in a binary classification task. For more general settings, the goal of reducing nuisance variability while preserving intrinsic variability is elusive as a single image does not afford the ability to separate the two @cite_4 . | {
"cite_N": [
"@cite_19",
"@cite_15",
"@cite_4"
],
"mid": [
"",
"2162931300",
"1916673044"
],
"abstract": [
"",
"Many modern visual recognition algorithms incorporate a step of spatial 'pooling', where the outputs of several nearby feature detectors are combined into a local or global 'bag of features', in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.",
"We frame the problem of local representation of imaging data as the computation of minimal sufficient statistics that are invariant to nuisance variability induced by viewpoint and illumination. We show that, under very stringent conditions, these are related to “feature descriptors” commonly used in Computer Vision. Such conditions can be relaxed if multiple views of the same scene are available. We propose a sampling-based and a point-estimate based approximation of such a representation, compared empirically on image-to-(multiple)image matching, for which we introduce a multi-view wide-baseline matching benchmark, consisting of a mixture of real and synthetic objects with ground truth camera motion and dense three-dimensional geometry."
]
} |
1412.8556 | 2951625587 | We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training. | In neural network architectures, there is evidence that abstracting spatial pooling hierarchically, aggregating nearby responses in feature maps, is beneficial @cite_15 . This process could be extended by aggregating across different neighborhood sizes in feature space. To the best of our knowledge, the only architecture that performs some kind of pooling across scales is @cite_38 , although the justification provided in @cite_26 only concerns translation within each scale. The same goes for @cite_33 , where pooling (low-pass filtering) is only performed within each scale, and not across scales. Other works learn the regions for spatial pooling, for instance @cite_19 @cite_5 , but still restrict pooling to within-scale, similar to @cite_24 , rather than across scales as we advocate. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_33",
"@cite_24",
"@cite_19",
"@cite_5",
"@cite_15"
],
"mid": [
"2132172482",
"2100223191",
"",
"104211377",
"",
"2007178811",
"2162931300"
],
"abstract": [
"Primates are remarkably good at recognizing objects. The level of performance of their visual system and its robustness to image degradations still surpasses the best computer vision systems despite decades of engineering effort. In particular, the high accuracy of primates in ultra rapid object categorization and rapid serial visual presentation tasks is remarkable. Given the number of processing stages involved and typical neural latencies, such rapid visual processing is likely to be mostly feedforward. Here we show that a specific implementation of a class of feedforward theories of object recognition (that extend the Hubel and Wiesel simple-to-complex cell hierarchy and account for many anatomical and physiological constraints) can predict the level and the pattern of performance achieved by humans on a rapid masked animal vs. non-animal categorization task.",
"A goal of central importance in the study of hierarchical models for object recognition - and indeed the mammalian visual cortex - is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation useful for learning from data. In this work we provide a general group-theoretic framework for characterizing and understanding invariance in a family of hierarchical models. We show that by taking an algebraic perspective, one can provide a concise set of conditions which must be met to establish invariance, as well as a constructive prescription for meeting those conditions. Analyses in specific cases of particular relevance to computer vision and text processing are given, yielding insight into how and when invariance can be achieved. We find that the minimal intrinsic properties of a hierarchical model needed to support a particular invariance can be clearly described, thereby encouraging efficient computational implementations.",
"",
"Fast visual recognition in the mammalian cortex seems to be a hierarchical process by which the representation of the visual world is transformed in multiple stages from low-level retinotopic features to high-level, global and invariant features, and to object categories. Every single step in this hierarchy seems to be subject to learning. How does the visual cortex learn such hierarchical representations by just looking at the world? How could computers learn such representations from data? Computer vision models that are weakly inspired by the visual cortex will be described. A number of unsupervised learning algorithms to train these models will be presented, which are based on the sparse auto-encoder concept. The effectiveness of these algorithms for learning invariant feature hierarchies will be demonstrated with a number of practical tasks such as scene parsing, pedestrian detection, and object classification.",
"",
"The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints. As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of and unannotated photo collections of .",
"Many modern visual recognition algorithms incorporate a step of spatial 'pooling', where the outputs of several nearby feature detectors are combined into a local or global 'bag of features', in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks."
]
} |
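The within-scale spatial pooling this row discusses — aggregating nearby detector responses in a feature map with an operator such as max or average — can be sketched on a 1-D response map. The window and stride values below are illustrative, not taken from any cited architecture.

```python
def mean(xs):
    """Arithmetic mean, usable as an average-pooling operator."""
    return sum(xs) / len(xs)

def pool1d(responses, window, stride, op):
    """Pool a 1-D map of detector responses: apply `op` (max for max
    pooling, `mean` for average pooling) over sliding windows."""
    return [op(responses[i:i + window])
            for i in range(0, len(responses) - window + 1, stride)]
```

For example, `pool1d([1, 3, 2, 5, 4, 6], 2, 2, max)` keeps the strongest response in each window, while passing `mean` instead averages them; both aggregate only within a scale, in contrast to the cross-scale pooling advocated in this paper.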
1412.8556 | 2951625587 | We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training. | We distinguish multi-scale methods that concatenate descriptors computed independently at each scale , from cross-scale pooling , where statistics of the image at different scales are combined directly in the descriptor. Examples of the former include @cite_22 , where ordinary SIFT descriptors computed on domains of different size are assumed to belong to a linear subspace, and @cite_5 , where Fisher vectors are computed for multiple sizes and aspect ratios and spatial pooling occurs within each level. Also bag-of-words (BoW) methods @cite_34 , as mid-level representations, aggregate different low-level descriptors by counting their frequency after discretization. Typically, vector quantization or another clustering technique is used, each descriptor is associated with a cluster center ("word"), and the frequency of each word is recorded in lieu of the descriptors themselves. This can be done for domain size, by computing different descriptors at the same location, for different domain sizes, and then counting frequencies relative to a dictionary learned from a large training dataset (Sect. ). | {
"cite_N": [
"@cite_5",
"@cite_34",
"@cite_22"
],
"mid": [
"2007178811",
"2131846894",
"2016120301"
],
"abstract": [
"The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints. As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of and unannotated photo collections of .",
"We describe an approach to object and scene retrieval which searches for and localizes all the occurrences of a user outlined object in a video. The object is represented by a set of viewpoint invariant region descriptors so that recognition can proceed successfully despite changes in viewpoint, illumination and partial occlusion. The temporal continuity of the video within a shot is used to track the regions in order to reject unstable regions and reduce the effects of noise in the descriptors. The analogy with text retrieval is in the implementation where matches on descriptors are pre-computed (using vector quantization), and inverted file systems and document rankings are used. The result is that retrieval is immediate, returning a ranked list of key frames/shots in the manner of Google. The method is illustrated for matching in two full length feature films.",
"Scale invariant feature detectors often find stable scales in only a few image pixels. Consequently, methods for feature matching typically choose one of two extreme options: matching a sparse set of scale invariant features, or dense matching using arbitrary scales. In this paper we turn our attention to the overwhelming majority of pixels, those where stable scales are not found by standard techniques. We ask, is scale-selection necessary for these pixels, when dense, scale-invariant matching is required and if so, how can it be achieved? We make the following contributions: (i) We show that features computed over different scales, even in low-contrast areas, can be different; selecting a single scale, arbitrarily or otherwise, may lead to poor matches when the images have different scales. (ii) We show that representing each pixel as a set of SIFTs, extracted at multiple scales, allows for far better matches than single-scale descriptors, but at a computational price. Finally, (iii) we demonstrate that each such set may be accurately represented by a low-dimensional, linear subspace. A subspace-to-point mapping may further be used to produce a novel descriptor representation, the Scale-Less SIFT (SLS), as an alternative to single-scale descriptors. These claims are verified by quantitative and qualitative tests, demonstrating significant improvements over existing methods."
]
} |
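The bag-of-words aggregation this row describes — quantize each local descriptor to its nearest vocabulary "word" and record word frequencies in lieu of the descriptors — can be sketched as below. The toy vocabulary stands in for cluster centers that a real system would learn (e.g. by k-means) from a large training set.

```python
def nearest_word(descriptor, vocabulary):
    """Index of the vocabulary center closest to `descriptor`
    (squared Euclidean distance)."""
    best_i, best_d = 0, float("inf")
    for i, center in enumerate(vocabulary):
        d = sum((a - b) ** 2 for a, b in zip(descriptor, center))
        if d < best_d:
            best_i, best_d = i, d
    return best_i

def bow_histogram(descriptors, vocabulary):
    """Frequency of each visual word over a set of local descriptors."""
    counts = [0] * len(vocabulary)
    for desc in descriptors:
        counts[nearest_word(desc, vocabulary)] += 1
    return counts
```

As the row notes, the same counting can be applied to descriptors of a single location computed at several domain sizes, yielding a frequency representation over scale rather than space.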
1412.8556 | 2951625587 | We introduce a simple modification of local image descriptors, such as SIFT, based on pooling gradient orientations across different domain sizes, in addition to spatial locations. The resulting descriptor, which we call DSP-SIFT, outperforms other methods in wide-baseline matching benchmarks, including those based on convolutional neural networks, despite having the same dimension of SIFT and requiring no training. | Aggregation across time, which may include changes of domain size, is advocated in @cite_13 , but in the absence of formulas it is unclear how this approach relates to our work. In @cite_7 , weights are shared across scales, which is not equivalent to pooling, but still establishes some dependencies across scales. MTD @cite_37 appears to be the first instance of pooling across scales, although the aggregation is global in scale-space with consequent loss of discriminative power. Most recently, @cite_18 advocates the same but in practice space-pooled VLAD descriptors obtained at different scales are simply concatenated. Also @cite_8 can be thought of as a form of pooling, but the resulting descriptor only captures the mean of the resulting distribution. In addition, @cite_40 exploits the possibility of estimating the proper scales for nearby features via scale propagation but still no pooling is performed across scales. Additional details in related prior work are discussed in Appendix . | {
"cite_N": [
"@cite_18",
"@cite_37",
"@cite_7",
"@cite_8",
"@cite_40",
"@cite_13"
],
"mid": [
"2951925341",
"2011341357",
"2950212154",
"2127597232",
"1938156704",
"2406196141"
],
"abstract": [
"Deep convolutional neural networks (CNN) have shown their promise as a universal representation for recognition. However, global CNN activations lack geometric invariance, which limits their robustness for classification and matching of highly variable scenes. To improve the invariance of CNN activations without degrading their discriminative power, this paper presents a simple but effective scheme called multi-scale orderless pooling (MOP-CNN). This scheme extracts CNN activations for local patches at multiple scale levels, performs orderless VLAD pooling of these activations at each level separately, and concatenates the result. The resulting MOP-CNN representation can be used as a generic feature for either supervised or unsupervised recognition tasks, from image classification to instance-level retrieval; it consistently outperforms global CNN activations without requiring any joint training of prediction layers for a particular target dataset. In absolute terms, it achieves state-of-the-art results on the challenging SUN397 and MIT Indoor Scenes classification datasets, and competitive results on ILSVRC2012 2013 classification and INRIA Holidays retrieval datasets.",
"We describe a system to learn an object template from a video stream, and localize and track the corresponding object in live video. The template is decomposed into a number of local descriptors, thus enabling detection and tracking in spite of partial occlusion. Each local descriptor aggregates contrast invariant statistics (normalized intensity and gradient orientation) across scales, in a way that enables matching under significant scale variations. Low-level tracking during the training video sequence enables capturing object-specific variability due to the shape of the object, which is encapsulated in the descriptor. Salient locations on both the template and the target image are used as hypotheses to expedite matching.",
"Scene parsing, or semantic segmentation, consists in labeling each pixel in an image with the category of the object it belongs to. It is a challenging task that involves the simultaneous detection, segmentation and recognition of all the objects in the image. The scene parsing method proposed here starts by computing a tree of segments from a graph of pixel dissimilarities. Simultaneously, a set of dense feature vectors is computed which encodes regions of multiple sizes centered on each pixel. The feature extractor is a multiscale convolutional network trained from raw pixels. The feature vectors associated with the segments covered by each node in the tree are aggregated and fed to a classifier which produces an estimate of the distribution of object categories contained in the segment. A subset of tree nodes that cover the image are then selected so as to maximize the average \"purity\" of the class distributions, hence maximizing the overall likelihood that each segment will contain a single object. The convolutional network feature extractor is trained end-to-end from raw pixels, alleviating the need for engineered features. After training, the system is parameter free. The system yields record accuracies on the Stanford Background Dataset (8 classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170 classes) while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than 1 second.",
"We address the problem of finding point correspondences in images by way of an approach to template matching that is robust under affine distortions. This is achieved by applying \"geometric blur\" to both the template and the image, resulting in a fall-off in similarity that is close to linear in the norm of the distortion between the template and the image. Results in wide baseline stereo correspondence, face detection, and feature correspondence are included.",
"We seek a practical method for establishing dense correspondences between two images with similar content, but possibly different 3D scenes. One of the challenges in designing such a system is the local scale differences of objects appearing in the two images. Previous methods often considered only few image pixels; matching only pixels for which stable scales may be reliably estimated. Recently, others have considered dense correspondences, but with substantial costs associated with generating, storing and matching scale invariant descriptors. Our work is motivated by the observation that pixels in the image have contexts—the pixels around them—which may be exploited in order to reliably estimate local scales. We make the following contributions. (i) We show that scales estimated in sparse interest points may be propagated to neighboring pixels where this information cannot be reliably determined. Doing so allows scale invariant descriptors to be extracted anywhere in the image. (ii) We explore three means for propagating this information: using the scales at detected interest points, using the underlying image information to guide scale propagation in each image separately, and using both images together. Finally, (iii), we provide extensive qualitative and quantitative results, demonstrating that scale propagation allows for accurate dense correspondences to be obtained even between very different images, with little computational costs beyond those required by existing methods.",
"This paper analyzes some of the challenges in performing automatic annotation and ranking of music audio, and proposes a few improvements. First, we motivate the use of principal component analysis on the mel-scaled spectrum. Secondly, we present an analysis of the impact of the selection of pooling functions for summarization of the features over time. We show that combining several pooling functions improves the performance of the system. Finally, we introduce the idea of multiscale learning. By incorporating these ideas in our model, we obtained state-of-the-art performance on the Magnatagatune dataset."
]
} |
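The contrast drawn in this row — concatenating per-scale descriptors (multi-scale) versus pooling statistics across domain sizes into a single descriptor of unchanged dimension — can be illustrated on precomputed per-scale histograms. A hypothetical sketch: element-wise averaging is just one admissible pooling function, not necessarily the one used by the cited methods.

```python
def cross_scale_pool(per_scale_hists):
    """Pool histograms computed at several domain sizes into a single
    descriptor of the same dimension (here: element-wise mean)."""
    n = len(per_scale_hists)
    dim = len(per_scale_hists[0])
    return [sum(h[k] for h in per_scale_hists) / n for k in range(dim)]

def multi_scale_concat(per_scale_hists):
    """Concatenate per-scale histograms: the dimension grows with the
    number of scales."""
    return [v for h in per_scale_hists for v in h]
```

With three 2-bin histograms, pooling returns a 2-dimensional descriptor while concatenation returns a 6-dimensional one — the dimension preservation that the DSP-SIFT abstract above emphasizes.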
1412.8073 | 1413004681 | We investigate isoperimetric upper bounds for sums of consecutive Steklov eigenvalues of planar domains. The normalization involves the perimeter and scale-invariant geometric factors which measure deviation of the domain from roundness. We prove sharp upper bounds for both starlike and simply connected domains for a large collection of spectral functionals including partial sums of the zeta function and heat trace. The proofs rely on a special class of quasiconformal mappings. | Uniformization theory enables these results to be generalized to compact Riemann surfaces with boundary, a setting in which the upper bounds also involve the number of boundary components and the genus @cite_25 @cite_15 . In dimension @math , additional geometric upper bounds have been obtained, as follows. Brock @cite_24 considered domains with fixed volume rather than fixed perimeter, in @math , and proved that the ball minimizes the sum of reciprocals @math . On compact manifolds, an upper bound for @math was given by Fraser--Schoen @cite_25 , in terms of the volume and a quantity which they called the . For domains, methods from metric geometry were used by Colbois @cite_22 to bound each individual eigenvalue @math in terms of the perimeter and volume of the domain, and this work was recently improved by Hassannezhad @cite_37 . For compact hypersurfaces with boundary, Ilias--Makhoul @cite_8 proved upper bounds for @math in terms of various mean curvatures of the boundary. | {
"cite_N": [
"@cite_37",
"@cite_22",
"@cite_8",
"@cite_24",
"@cite_15",
"@cite_25"
],
"mid": [
"2069460639",
"2963327004",
"2053519960",
"2033983423",
"2963906339",
"2041867826"
],
"abstract": [
"In this paper, we find upper bounds for the eigenvalues of the Laplacian in the conformal class of a compact Riemannian manifold (M,g). These upper bounds depend only on the dimension and a conformal invariant that we call “min-conformal volume”. Asymptotically, these bounds are consistent with the Weyl law and improve previous results by Korevaar and Yang and Yau. The proof relies on the construction of a suitable family of disjoint domains providing supports for a family of test functions. This method is interesting for itself and powerful. As a further application of the method we obtain an upper bound for the eigenvalues of the Steklov problem in a domain with C1 boundary in a complete Riemannian manifold in terms of the isoperimetric ratio of the domain and the conformal invariant that we introduce.",
"Abstract We prove that the normalized Steklov eigenvalues of a bounded domain in a complete Riemannian manifold are bounded above in terms of the inverse of the isoperimetric ratio of the domain. Consequently, the normalized Steklov eigenvalues of a bounded domain in Euclidean space, hyperbolic space or a standard hemisphere are uniformly bounded above. On a compact surface with boundary, we obtain uniform bounds for the normalized Steklov eigenvalues in terms of the genus. We also establish a relationship between the Steklov eigenvalues of a domain and the eigenvalues of the Laplace–Beltrami operator on its boundary hypersurface.",
"Abstract Let M be a compact submanifold with boundary of a Euclidean space or a Sphere. In this paper, we derive an upper bound for the first non-zero eigenvalue p1 of the Steklov problem on M in terms of the r-th mean curvatures of its boundary ∂M. The upper bound obtained is sharp.",
"Let Ω be a bounded smooth domain in ℝn and let 0 = λ1 ≤ λ2 ≤ … denote the eigenvalues of the Stekloff problem: Δu = 0 in Ω and ∂u/∂ν = λiu on ∂Ω. We show that @math , where @math denotes the second eigenvalue of the Stekloff problem in a ball having the same measure as Ω. The proof is based on a weighted isoperimetric inequality.",
"We give explicit isoperimetric upper bounds for all Steklov eigenvalues of a compact orientable surface with boundary, in terms of the genus, the length of the boundary, and the number of boundary components. Our estimates generalize a recent result of Fraser-Schoen, as well as the classical inequalities obtained by Hersch-Payne-Schiffer, whose approach is used in the present paper.",
"We consider the relationship of the geometry of compact Riemannian manifolds with boundary to the first nonzero eigenvalue σ1 of the Dirichlet-to-Neumann map (Steklov eigenvalue). For surfaces Σ with genus γ and k boundary components we obtain the upper bound σ1L(∂Σ)⩽2(γ+k)π. For γ=0 and k=1 this result was obtained by Weinstock in 1954, and is sharp. We attempt to find the best constant in this inequality for annular surfaces (γ=0 and k=2). For rotationally symmetric metrics we show that the best constant is achieved by the induced metric on the portion of the catenoid centered at the origin which meets a sphere orthogonally and hence is a solution of the free boundary problem for the area functional in the ball. For a general class of (not necessarily rotationally symmetric) metrics on the annulus, which we call supercritical, we prove that σ1(Σ)L(∂Σ) is dominated by that of the critical catenoid with equality if and only if the annulus is conformally equivalent to the critical catenoid by a conformal transformation which is an isometry on the boundary. Motivated by the annulus case, we show that a proper submanifold of the ball is immersed by Steklov eigenfunctions if and only if it is a free boundary solution. We then prove general upper bounds for conformal metrics on manifolds of any dimension which can be properly conformally immersed into the unit ball in terms of certain conformal volume quantities. We show that these bounds are only achieved when the manifold is minimally immersed by first Steklov eigenfunctions. We also use these ideas to show that any free boundary solution in two dimensions has area at least π, and we observe that this implies the sharp isoperimetric inequality for free boundary solutions in the two-dimensional case."
]
} |
1412.8073 | 1413004681 | We investigate isoperimetric upper bounds for sums of consecutive Steklov eigenvalues of planar domains. The normalization involves the perimeter and scale-invariant geometric factors which measure deviation of the domain from roundness. We prove sharp upper bounds for both starlike and simply connected domains for a large collection of spectral functionals including partial sums of the zeta function and heat trace. The proofs rely on a special class of quasiconformal mappings. | Turning now to lower bounds, the minimum of each eigenvalue @math among domains of fixed perimeter or fixed volume is easily seen to be zero, by a pinching'' construction [Section 2.2] GP10b . Geometric lower bounds must therefore involve some other restrictions. An early result is that of Kuttler--Sigillito @cite_2 , who considered planar starlike domains and gave a bound in terms of the radius function and its derivative (see sec:starlike ). One should also mention a recent paper of Jammes @cite_17 , where a lower bound in the spirit of the classical Cheeger inequality is proved for the first nonzero Steklov eigenvalue. See also @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_17",
"@cite_2"
],
"mid": [
"2067534086",
"2280552577",
"1986464995"
],
"abstract": [
"Let (Mn, g) be a compact Riemannian manifold with boundary and dimension n⩾2. In this paper we discuss the first non-zero eigenvalue problem (1). Problem (1) is known as the Stekloff problem because it was introduced by him in 1902, for bounded domains of the plane. We discuss estimates of the eigenvalue ν1 in terms of the geometry of the manifold (Mn, g). In the two-dimensional case we generalize Payne's Theorem [P] for bounded domains in the plane to non-negative curvature manifolds. In this case we show that ν1 ⩾ k0, where kg ⩾ k0 and kg represents the geodesic curvature of the boundary. In higher dimensions n⩾3 for non-negative Ricci curvature manifolds we show that ν1 > k0/2, where k0 is a lower bound for any eigenvalue of the second fundamental form of the boundary. We introduce an isoperimetric constant and prove a Cheeger's type inequality for the Stekloff eigenvalue.",
"We prove a Cheeger inequality for the first positive Steklov eigenvalue. It involves two isoperimetric constants.",
""
]
} |
1412.8073 | 1413004681 | We investigate isoperimetric upper bounds for sums of consecutive Steklov eigenvalues of planar domains. The normalization involves the perimeter and scale-invariant geometric factors which measure deviation of the domain from roundness. We prove sharp upper bounds for both starlike and simply connected domains for a large collection of spectral functionals including partial sums of the zeta function and heat trace. The proofs rely on a special class of quasiconformal mappings. | Regarding other eigenvalue functionals, Dittmar @cite_38 proved that among simply connected planar domains with given conformal radius, the disk minimizes the infinite sum of reciprocals of all squares, @math . Henrot--Philippin--Safoui @cite_32 proved that among convex domains of fixed measure in @math , the product of the first @math nonzero Steklov eigenvalues is maximal for a ball. Their method is based on an isoperimetric inequality for moment of inertia. Edward @cite_9 proved for simply connected domains @math of perimeter @math that the relative sum of squares is minimal for the unit disk: @math . | {
"cite_N": [
"@cite_38",
"@cite_9",
"@cite_32"
],
"mid": [
"1983665969",
"2046586251",
"2963619381"
],
"abstract": [
"Let 0 = λ1 < λ2 ≤ λ3 ≤ … be the Stekloff eigenvalues of a plane domain. The paper is concerned with formulas for ∑_{j=2}^∞ λ_j^{-2} in simply and doubly connected domains. In the simply connected case it is proven that the disk minimizes this sum. (© 2004 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim)",
"Study of the zeta function associated to the Neumann operator on planar domains yields an inequality for Steklov eigenvalues for planar domains.",
"In this paper we establish isoperimetric inequalities for the product of some moments of inertia. As an application, we obtain an isoperimetric inequality for the product of the @math first nonzero eigenvalues of the Stekloff problem in @math ."
]
} |
1412.8073 | 1413004681 | We investigate isoperimetric upper bounds for sums of consecutive Steklov eigenvalues of planar domains. The normalization involves the perimeter and scale-invariant geometric factors which measure deviation of the domain from roundness. We prove sharp upper bounds for both starlike and simply connected domains for a large collection of spectral functionals including partial sums of the zeta function and heat trace. The proofs rely on a special class of quasiconformal mappings. | Incidentally, to justify the interpretation of the Steklov problem in terms of a membrane whose mass is concentrated at the boundary, one may compare the Rayleigh quotient for the Steklov problem with the usual Rayleigh quotient for the Neumann Laplacian. For spectral convergence results as the mass concentrates onto the boundary, see recent work of Lamberti and Provenzano @cite_28 . | {
"cite_N": [
"@cite_28"
],
"mid": [
"2169938562"
],
"abstract": [
"We consider the Steklov eigenvalues of the Laplace operator as limiting Neumann eigenvalues in a problem of boundary mass concentration. We discuss the asymptotic behavior of the Neumann eigenvalues in a ball and we deduce that the Steklov eigenvalues minimize the Neumann eigenvalues. Moreover, we study the dependence of the eigenvalues of the Steklov problem upon perturbation of the mass density and show that the Steklov eigenvalues violate a maximum principle in spectral optimization problems."
]
} |
1412.8073 | 1413004681 | We investigate isoperimetric upper bounds for sums of consecutive Steklov eigenvalues of planar domains. The normalization involves the perimeter and scale-invariant geometric factors which measure deviation of the domain from roundness. We prove sharp upper bounds for both starlike and simply connected domains for a large collection of spectral functionals including partial sums of the zeta function and heat trace. The proofs rely on a special class of quasiconformal mappings. | The literature on the spectral geometry of the Steklov problem is expanding rapidly, and so we had to omit many papers here. We refer to @cite_14 @cite_23 for recent surveys. | {
"cite_N": [
"@cite_14",
"@cite_23"
],
"mid": [
"2964153620",
"2284966262"
],
"abstract": [
"We give an overview of results on shape optimization for low eigenvalues of the Laplacian on bounded planar domains with Neumann and Steklov boundary conditions. These results share a common feature: they are proved using methods of complex analysis. In particular, we present modernized proofs of the classical inequalities due to Szego and Weinstock for the first nonzero Neumann and Steklov eigenvalues. We also extend the inequality for the second nonzero Neumann eigenvalue, obtained recently by Nadirashvili and the authors, to nonhomogeneous membranes with log-subharmonic densities. In the homogeneous case, we show that this inequality is strict, which implies that the maximum of the second nonzero Neumann eigenvalue is not attained in the class of simply connected membranes of a given mass. The same is true for the second nonzero Steklov eigenvalue, as follows from our results on the Hersch–Payne–Schiffer inequalities. Copyright © 2009 John Wiley & Sons, Ltd.",
"The Steklov problem is an eigenvalue problem with the spectral parameter in the boundary conditions, which has various applications. Its spectrum coincides with that of the Dirichlet-to-Neumann operator. Over the past years, there has been a growing interest in the Steklov problem from the viewpoint of spectral geometry. While this problem shares some common properties with its more familiar Dirichlet and Neumann cousins, its eigenvalues and eigenfunctions have a number of distinctive geometric features, which makes the subject especially appealing. In this survey we discuss some recent advances and open questions, particularly in the study of spectral asymptotics, spectral invariants, eigenvalue estimates, and nodal geometry."
]
} |
1412.8281 | 2289366859 | This paper presents a new user feedback mechanism based on Wikipedia concepts for interactive retrieval. In this mechanism, the system presents to the user a group of Wikipedia concepts, and the user can choose those relevant to refine his her query. To realize this mechanism, we propose methods to address two problems: 1) how to select a small number of possibly relevant Wikipedia concepts to show the user, and 2) how to re-rank retrieved documents given the user-identified Wikipedia concepts. Our methods are evaluated on three TREC data sets. The experiment results show that our methods can dramatically improve retrieval performances. | Wikipedia has been shown to be a useful resource for many intelligent tasks, including pseudo relevance feedback @cite_20 , query expansion @cite_16 , cross-language information retrieval @cite_22 @cite_12 , text classification @cite_17 , etc. @cite_20 propose a query-dependent method for selecting Wikipedia articles for pseudo relevance feedback. @cite_16 propose to use Wikipedia as an external corpus to expand difficult queries. In this paper, we explore the usage of Wikipedia in interactive retrieval and propose a new user feedback mechanism based on Wikipedia. | {
"cite_N": [
"@cite_22",
"@cite_16",
"@cite_20",
"@cite_12",
"@cite_17"
],
"mid": [
"2400374328",
"2007585013",
"2102563107",
"58646613",
"2189067371"
],
"abstract": [
"We describe a method which is able to translate queries extended by narrative information from one language to another, with help of an appropriate machine readable dictionary and the Wikipedia on-line encyclopedia. Processing occurs in three steps: first, we look up possible translations phrase by phrase using both the dictionary and the cross-lingual links provided by Wikipedia; second, improbable translations, detected by a simple language model computed over a large corpus of documents written in the target language, are eliminated; and finally, further filtering is applied by matching Wikipedia concepts against the query narrative and removing translations not related to the overall query topic. Experiments performed on the Los Angeles Times 2002 corpus, translating from Hungarian to English showed that while queries generated at the end of the second step were roughly only half as effective as original queries, primarily due to the limitations of our tools, after the third step precision improved significantly, reaching 60% of the native English level.",
"In an ad-hoc retrieval task, the query is usually short and the user expects to find the relevant documents in the first several result pages. We explored the possibilities of using Wikipedia's articles as an external corpus to expand ad-hoc queries. Results show promising improvements over measures that emphasize on weak queries.",
"Pseudo-relevance feedback (PRF) via query-expansion has been proven to be effective in many information retrieval (IR) tasks. In most existing work, the top-ranked documents from an initial search are assumed to be relevant and used for PRF. One problem with this approach is that one or more of the top retrieved documents may be non-relevant, which can introduce noise into the feedback process. Besides, existing methods generally do not take into account the significantly different types of queries that are often entered into an IR system. Intuitively, Wikipedia can be seen as a large, manually edited document collection which could be exploited to improve document retrieval effectiveness within PRF. It is not obvious how we might best utilize information from Wikipedia in PRF, and to date, the potential of Wikipedia for this task has been largely unexplored. In our work, we present a systematic exploration of the utilization of Wikipedia in PRF for query dependent expansion. Specifically, we classify TREC topics into three categories based on Wikipedia: 1) entity queries, 2) ambiguous queries, and 3) broader queries. We propose and study the effectiveness of three methods for expansion term selection, each modeling the Wikipedia based pseudo-relevance information from a different perspective. We incorporate the expansion terms into the original query and use language modeling IR to evaluate these methods. Experiments on four TREC test collections, including the large web collection GOV2, show that retrieval performance of each type of query can be improved. In addition, we demonstrate that the proposed method out-performs the baseline relevance model in terms of precision and robustness.",
"This paper introduces CL-ESA, a new multilingual retrieval model for the analysis of cross-language similarity. The retrieval model exploits the multilingual alignment of Wikipedia: given a document d written in language L we construct a concept vector d for d, where each dimension i in d quantifies the similarity of d with respect to a document di* chosen from the \"L-subset\" of Wikipedia. Likewise, for a second document d′ written in language L′, L ≠ L′, we construct a concept vector d′, using from the L′-subset of the Wikipedia the topic-aligned counterparts d′i* of our previously chosen documents. Since the two concept vectors d and d′ are collection-relative representations of d and d′ they are language-independent. I. e., their similarity can directly be computed with the cosine similarity measure, for instance. We present results of an extensive analysis that demonstrates the power of this new retrieval model: for a query document d the topically most similar documents from a corpus in another language are properly ranked. Salient property of the new retrieval model is its robustness with respect to both the size and the quality of the index document collection.",
"Most existing methods for text categorization employ induction algorithms that use the words appearing in the training documents as features. While they perform well in many categorization tasks, these methods are inherently limited when faced with more complicated tasks where external knowledge is essential. Recently, there have been efforts to augment these basic features with external knowledge, including semi-supervised learning and transfer learning. In this work, we present a new framework for automatic acquisition of world knowledge and methods for incorporating it into the text categorization process. Our approach enhances machine learning algorithms with features generated from domain-specific and common-sense knowledge. This knowledge is represented by ontologies that contain hundreds of thousands of concepts, further enriched through controlled Web crawling. Prior to text categorization, a feature generator analyzes the documents and maps them onto appropriate ontology concepts that augment the bag of words used in simple supervised learning. Feature generation is accomplished through contextual analysis of document text, thus implicitly performing word sense disambiguation. Coupled with the ability to generalize concepts using the ontology, this approach addresses two significant problems in natural language processing---synonymy and polysemy. Categorizing documents with the aid of knowledge-based features leverages information that cannot be deduced from the training documents alone. We applied our methodology using the Open Directory Project, the largest existing Web directory built by over 70,000 human editors. Experimental results over a range of data sets confirm improved performance compared to the bag of words document representation."
]
} |
1412.8281 | 2289366859 | This paper presents a new user feedback mechanism based on Wikipedia concepts for interactive retrieval. In this mechanism, the system presents to the user a group of Wikipedia concepts, and the user can choose those relevant to refine his her query. To realize this mechanism, we propose methods to address two problems: 1) how to select a small number of possibly relevant Wikipedia concepts to show the user, and 2) how to re-rank retrieved documents given the user-identified Wikipedia concepts. Our methods are evaluated on three TREC data sets. The experiment results show that our methods can dramatically improve retrieval performances. | Different types of user feedback have been shown useful for ad-hoc retrieval, including document-based relevance feedback @cite_14 @cite_7 @cite_0 , term-based feedback @cite_13 , metadata-based faceted feedback @cite_15 @cite_10 @cite_1 @cite_2 . In this paper, we study a new type of user feedback based on Wikipedia concepts and show that this type of feedback can be very useful for retrieval. | {
"cite_N": [
"@cite_14",
"@cite_7",
"@cite_10",
"@cite_1",
"@cite_0",
"@cite_2",
"@cite_15",
"@cite_13"
],
"mid": [
"2164547069",
"2000672666",
"2145227439",
"2485771844",
"2023759880",
"2134135631",
"2157726995",
"2158976269"
],
"abstract": [
"1332840 Primer compositions DOW CORNINGCORP 6 Oct 1971 [30 Dec 1970] 46462 71 Heading C3T [Also in Divisions B2 and C4] A primer composition comprises 1 pbw of tetra ethoxy or propoxy silane or poly ethyl or propyl silicate or any mixture thereof, 0A75-2A5 pbw of bis(acetylacetonyl) diisopropyl titanate, 0A75- 5 pbw of a compound CF 3 CH 2 CH 2 Si[OSi(CH 3 ) 2 - X] 3 wherein each X is H or -CH 2 CH 2 Si- (OOCCH 3 ) 3 , at least one being the latter, and 1-20 pbw of a ketone, hydrocarbon or halohydrocarbon solvent boiling not above 150‹ C. In the examples 1 pbw each of bis(acetylacetonyl)diisopropyl titanate, polyethyl silicate and are dissolved in 10 pbw of acetone or in 9 pbw of light naphtha and 1 of methylisobutylketone. The solutions are used to prime Ti panels, to which a Pt-catalysed room-temperature vulcanizable poly-trifluoropropylmethyl siloxanebased rubber is then applied.",
"Relevance feedback is an automatic process, introduced over 20 years ago, designed to produce query formulations following an initial retrieval operation. The principal relevance feedback methods described over the years are examined briefly, and evaluation data are included to demonstrate the effectiveness of the various methods. Prescriptions are given for conducting text retrieval operations iteratively using relevance feedback.",
"Existing adaptive filtering systems learn user profiles based on users' relevance judgments on documents. In some cases, users have some prior knowledge about what features are important for a document to be relevant. For example, a Spanish speaker may only want news written in Spanish, and thus a relevant document should contain the feature \"Language: Spanish\"; a researcher working on HIV knows an article with the medical subject \"Subject: AIDS\" is very likely to be interesting to him her. Semi-structured documents with rich faceted metadata are increasingly prevalent over the Internet. Motivated by the commonly used faceted search interface in e-commerce, we study whether users' prior knowledge about faceted features could be exploited for filtering semi-structured documents. We envision two faceted feedback solicitation mechanisms, and propose a novel user profile learning algorithm that can incorporate user feedback on features. To evaluate the proposed work, we use two data sets from the TREC filtering track, and conduct a user study on Amazon Mechanical Turk. Our experimental results show that user feedback on faceted features is useful for filtering. The new user profile learning algorithm can effectively learn from user feedback on faceted features and performs better than several other methods adapted from the feature-based feedback techniques proposed for retrieval and text classification tasks in previous work.",
"Unlike unstructured documents which only consist of plain text, semi-structured documents contain plenty of structured information including metadata, document fields, annotations, etc. Content-based filtering is to identify user-interested documents from a stream of documents based on the analysis of document content. When dealing with semi-structured documents, many existing filtering approaches either ignore the structured information, or simply use them as features. This dissertation focuses on the better use of document structured information for content-based filtering. We find that structured information is useful in the following problems. First, structured information is useful for user profile initialization in topic tracking tasks. At the early stage of a topic tracking task, the system performance tends to be low due to the limited number of labeled documents from the user. To deal with this problem, we propose two new user feedback mechanisms based on document structured information (facet-value pairs and Wikipedia concepts respectively). The new feedback mechanisms allow the system to quickly get some feedback from the user and refine the user profile. Our experiment results show that the new user feedback can significantly improve the filtering performance in topic tracking tasks. Second, structured information is useful for semi-structured document summarization in retrieval filtering tasks with user quries. In a retrieval filtering task where many documents are delivered, the user selects documents to read based on the short summaries of documents. In this sense, document summaries should be informative enough so that the user can make right decisions on which documents to read. To achieve this goal, we propose a new document-summarization method that can generate better summaries for semi-structured documents with rich metadata in filtering retrieval scenarios. 
Third, structured information can be easily incorporated into discriminative models for personalized recommendation. We propose two flexible Bayesian hierarchical models for joint user profile learning. The proposed models are discriminative, thus can easily incorporate various types of document structured information. They also have the advantages of being able to borrow discriminative information from similar users and capture multiple interests of individual users.",
"Relevance feedback is an effective approach to improve retrieval quality over the initial query. Typical relevance feedback methods usually select top-ranked documents for relevance judgments, then query expansion or model updating are carried out based on the feedback documents. However, the number of feedback documents is usually limited due to expensive human labeling. Thus relevant documents in the feedback set are hardly representative of all relevant documents and the feedback set is actually biased. As a result, the performance of relevance feedback will get hurt. In this paper, we first show how and where the bias problem exists through experiments. Then we study how the bias can be reduced by utilizing the unlabeled documents. After analyzing the usefulness of a document to relevance feedback, we propose an approach that extends the feedback set with carefully selected unlabeled documents by heuristics. Our experiment results show that the extended feedback set has less bias than the original feedback set and better performance can be achieved when the extended feedback set is used for relevance feedback.",
"Most existing content-based filtering approaches including Rocchio, Language Models, SVM, Logistic Regression, Neural Networks, etc. learn user profiles independently without capturing the similarity among users. The Bayesian hierarchical models learn user profiles jointly and have the advantage of being able to borrow information from other users through a Bayesian prior. The standard Bayesian hierarchical model assumes all user profiles are generated from the same prior. However, considering the diversity of user interests, this assumption might not be optimal. Besides, most existing content-based filtering approaches implicitly assume that each user profile corresponds to exactly one user interest and fail to capture a user's multiple interests (information needs). In this paper, we present a flexible Bayesian hierarchical modeling approach to model both commonality and diversity among users as well as individual users' multiple interests. We propose two models each with different assumptions, and the proposed models are called Discriminative Factored Prior Models (DFPM). In our models, each user profile is modeled as a discriminative classifier with a factored model as its prior, and different factors contribute in different levels to each user profile. Compared with existing content-based filtering models, DFPM are interesting because they can 1) borrow discriminative criteria of other users while learning a particular user profile through the factored prior; 2) trade off well between diversity and commonality among users; and 3) handle the challenging classification situation where each class contains multiple concepts. The experimental results on a dataset collected from real users on digg.com show that our models significantly outperform the baseline models of L-2 regularized logistic regression and the standard Bayesian hierarchical model with logistic regression",
"Motivated by the commonly used faceted search interface in e-commerce, this paper investigates interactive relevance feedback mechanism based on faceted document metadata. In this mechanism, the system recommends a group of document facet-value pairs, and lets users select relevant ones to restrict the returned documents. We propose four facet-value pair recommendation approaches and two retrieval models that incorporate user feedback on document facets. Evaluated based on user feedback collected through Amazon Mechanical Turk, our experimental results show that the Boolean filtering approach, which is widely used in faceted search in e-commerce, doesn't work well for text document retrieval, due to the incompleteness (low recall) of metadata assignment in semi-structured text documents. Instead, a soft model performs more effectively. The faceted feedback mechanism can also be combined with document-based relevance feedback and pseudo relevance feedback to further improve the retrieval performance.",
"In this paper we study term-based feedback for information retrieval in the language modeling approach. With term feedback a user directly judges the relevance of individual terms without interaction with feedback documents, taking full control of the query expansion process. We propose a cluster-based method for selecting terms to present to the user for judgment, as well as effective algorithms for constructing refined query language models from user term feedback. Our algorithms are shown to bring significant improvement in retrieval accuracy over a non-feedback baseline, and achieve comparable performance to relevance feedback. They are helpful even when there are no relevant documents in the top."
]
} |
1412.8281 | 2289366859 | This paper presents a new user feedback mechanism based on Wikipedia concepts for interactive retrieval. In this mechanism, the system presents to the user a group of Wikipedia concepts, and the user can choose those relevant to refine his her query. To realize this mechanism, we propose methods to address two problems: 1) how to select a small number of possibly relevant Wikipedia concepts to show the user, and 2) how to re-rank retrieved documents given the user-identified Wikipedia concepts. Our methods are evaluated on three TREC data sets. The experiment results show that our methods can dramatically improve retrieval performances. | Query expansion is a fundamental technique for dealing with the term mismatch problem in information retrieval. The basic idea is to find additional terms that are related to the underlying information need to expand the user query. Many methods for term selection have been studied @cite_4 @cite_21 @cite_11 @cite_6 @cite_3 @cite_9 @cite_19 . In this paper, our document ranking methods are based on query expansion. We rely on user-identified Wikipedia concepts to select high-quality terms for query expansion. | {
"cite_N": [
"@cite_4",
"@cite_9",
"@cite_21",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_11"
],
"mid": [
"1979459060",
"2065096648",
"2105106523",
"1999817920",
"1525341925",
"301977886",
"1987996059"
],
"abstract": [
"Automatic query expansion has long been suggested as a technique for dealing with the fundamental issue of word mismatch in information retrieval. A number of approaches to expansion have been studied and, more recently, attention has focused on techniques that analyze the corpus to discover word relationships (global techniques) and those that analyze documents retrieved by the initial query (local feedback). In this paper, we compare the effectiveness of these approaches and show that, although global analysis has some advantages, local analysis is generally more effective. We also show that using global analysis techniques.",
"In the framework of a relevance feedback system, term values or term weights may be used to (a) select new terms for inclusion in a query, and or (b) weight the terms for retrieval purposes once selected. It has sometimes been assumed that the same weighting formula should be used for both purposes. This paper sketches a quantitative argument which suggests that the two purposes require different weighting formulae.",
"Applications such as office automation, news filtering, help facilities in complex systems, and the like require the ability to retrieve documents from full-text databases where vocabulary problems can be particularly severe. Experiments performed on small collections with single-domain thesauri suggest that expanding query vectors with words that are lexically related to the original query words can ameliorate some of the problems of mismatched vocabularies. This paper examines the utility of lexical query expansion in the large, diverse TREC collection. Concepts are represented by WordNet synonym sets and are expanded by following the typed links included in WordNet. Experimental results show this query expansion technique makes little difference in retrieval effectiveness if the original queries are relatively complete descriptions of the information being sought even when the concepts to be expanded are selected by hand. Less well developed queries can be significantly improved by expansion of hand-chosen concepts. However, an automatic procedure that can approximate the set of hand picked synonym sets has yet to be devised, and expanding by the synonym sets that are automatically generated can degrade retrieval performance.",
"Most casual users of IR systems type short queries. Recent research has shown that adding new words to these queries via ad hoc feedback improves the retrieval effectiveness of such queries. We investigate ways to improve this query expansion process by refining the set of documents used in feedback. We start by using manually formulated Boolean filters along with proximity constraints. Our approach is similar to the one proposed by Hearst [12]. Next, we investigate a completely automatic method that makes use of term cooccurrence information to estimate word correlation. Experimental results show that refining the set of documents used in query expansion often prevents the query drift caused by blind expansion and yields substantial improvements in retrieval effectiveness, both in terms of average precision and precision in the top twenty documents. More importantly, the fully automatic approach developed in this study performs competitively with the best manual approach and requires little computational overhead.",
"The Smart information retrieval project emphasizes completely automatic approaches to the understanding and retrieval of large quantities of text. We continue our work in TREC 3, performing runs in the routing, ad-hoc, and foreign language environments. Our major focus is massive query expansion: adding from 300 to 530 terms to each query. These terms come from known relevant documents in the case of routing, and from just the top retrieved documents in the case of ad-hoc and Spanish. This approach improves effectiveness from 7% to 25% in the various experiments. Other ad-hoc work extends our investigations into combining global similarities, giving an overall indication of how a document matches a query, with local similarities identifying a smaller part of the document which matches the query. Using an overlapping text window definition of local, we achieve a 16% improvement.",
"Abstract : The relevance feedback track in TREC 2009 focuses on two sub tasks: actively selecting good documents for users to provide relevance feedback and retrieving documents based on user relevance feedback. For the first task, we tried a clustering based method and the Transductive Experimental Design (TED) method proposed by . For clustering based method, we use the K-means algorithm to cluster the top retrieved documents and choose the most representative document of each cluster. The TED method aims to find documents that are hard-to-predict and representative of the unlabeled documents. For the second task, we did query expansion based on a relevance model learned on the relevant documents.",
"Query expansion methods have been studied for a long time - with debatable success in many instances. In this paper we present a probabilistic query expansion model based on a similarity thesaurus which was constructed automatically. A similarity thesaurus reflects domain knowledge about the particular collection from which it is constructed. We address the two important issues with query expansion: the selection and the weighting of additional search terms. In contrast to earlier methods, our queries are expanded by adding those terms that are most similar to the concept of the query, rather than selecting terms that are similar to the query terms. Our experiments show that this kind of query expansion results in a notable improvement in the retrieval effectiveness when measured using both recall-precision and usefulness."
]
} |
1412.8534 | 1772613325 | Artificial neural networks are powerful pattern classifiers; however, they have been surpassed in accuracy by methods such as support vector machines and random forests that are also easier to use and faster to train. Backpropagation, which is used to train artificial neural networks, suffers from the herd effect problem, which leads to long training times and limits classification accuracy. We use the disjunctive normal form and approximate the boolean conjunction operations with products to construct a novel network architecture. The proposed model can be trained by minimizing an error function, and it allows an effective and intuitive initialization which solves the herd-effect problem associated with backpropagation. This leads to state-of-the-art classification accuracy and fast training times. In addition, our model can be jointly optimized with convolutional features in a unified structure, leading to state-of-the-art results on computer vision problems with fast convergence rates. A GPU implementation of LDNN with optional convolutional features is also available | Extensive research has been performed on variants of the backpropagation algorithm, including batch vs. stochastic learning @cite_53 @cite_15, squared error vs. cross-entropy @cite_68, and optimal learning rates @cite_5 @cite_26. Many other practical choices, including normalization of inputs, initialization of weights, stopping criteria, activation functions, target output values that will not saturate the activation functions, shuffling training examples, momentum terms in optimization, and optimization techniques that make use of the second-order derivatives of the error, are summarized in @cite_51. More recently, Hinton proposed a Dropout scheme for backpropagation which helps prevent co-adaptation of feature detectors @cite_9.
Despite the extensive effort devoted to making learning MLPs as efficient as possible, the fundamental problems outlined in remain because they arise from the architecture of MLPs. Contrastive divergence @cite_52 @cite_59 can be used to pre-train networks in an unsupervised manner prior to backpropagation such that the herd-effect problem is alleviated. Contrastive divergence has been used successfully to train deep networks. The LDNN model proposed in this paper can be seen as an architectural alternative for supervised learning of ANNs. | {
"cite_N": [
"@cite_26",
"@cite_53",
"@cite_9",
"@cite_52",
"@cite_59",
"@cite_5",
"@cite_15",
"@cite_68",
"@cite_51"
],
"mid": [
"2131859749",
"637650731",
"",
"2136922672",
"2100495367",
"2042318263",
"21721852",
"2038111264",
"1576278180"
],
"abstract": [
"An adaptive on-line algorithm extending the learning of learning idea is proposed and theoretically motivated. Relying only on gradient flow information it can be applied to learning continuous functions or distributions, even when no explicit loss function is given and the Hessian is not available. Its efficiency is demonstrated for a non-stationary blind separation task of acoustic signals.",
"Control theory approach (P.J. Antsaklis). Computational learning theory for artificial neural networks (M. Anthony, N. Biggs). Time-summating network approach (P.C. Bressloff). The numerical analysis approach (S.W. Ellacott). Self-organizing neural networks for stable control of autonomous behaviour in a changing world (S. Grossberg). On-line learning processes in artificial neural networks (T.M. Heskes, B. Kappen). Multilayer functionals (D.S. Modha, R. Hecht-Nielsen). Neural networks: the spin glass approach (D. Sherrington). Dynamics of attractor neural networks (T. Coolen, D. Sherrington). Information theory and neural networks (J.G. Taylor, M.D. Plumbley). Mathematical analysis of a competitive network for attention (J.G. Taylor, F.N. Alavi).",
"",
"We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.",
"High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such “autoencoder” networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.",
"",
"",
"This paper demonstrates how the backpropagation algorithm (BP) and its variants can be accelerated significantly while the quality of the trained nets will increase. Two modifications were proposed: first, instead of the usual quadratic error we use the cross entropy as an error function and, second, we normalize the input patterns. The first modification eliminates the so-called sigmoid prime factor of the update rule for the output units. In order to balance the dynamic range of the inputs we use normalization. The combination of both modifications is called CEN–Optimization (Cross Entropy combined with Pattern Normalization). As our simulation results show, CEN–Optimization can not only improve online BP but also RPROP, the most sophisticated BP variant known today. Even though RPROP usually yields much better results than online BP, the performance gap between CEN–BP and CEN–RPROP is smaller than between the standard versions of those algorithms. By means of CEN–RPROP it is nearly guaranteed to achieve an error of zero (with respect to the training set). Simultaneously, the generalization performance of the trained nets can be increased, because less complex networks suffice to fit the training set. Compared to the usual SSE (summed squared error) one can yield lower training errors with fewer weights.",
"The twenty last years have been marked by an increase in available data and computing power. In parallel to this trend, the focus of neural network research and the practice of training neural networks has undergone a number of important changes, for example, use of deep learning machines. The second edition of the book augments the first edition with more tricks, which have resulted from 14 years of theory and experimentation by some of the world's most prominent neural network researchers. These tricks can make a substantial difference (in terms of speed, ease of implementation, and accuracy) when it comes to putting algorithms to work on real problems."
]
} |
1412.8379 | 326163211 | We develop a definitive physical-space scattering theory for the scalar wave equation on Kerr exterior backgrounds in the general subextremal case |a|<M. In particular, we prove results corresponding to "existence and uniqueness of scattering states" and "asymptotic completeness" and we show moreover that the resulting "scattering matrix" mapping radiation fields on the past horizon and past null infinity to radiation fields on the future horizon and future null infinity is a bounded operator. The latter allows us to give a time-domain theory of superradiant reflection. The boundedness of the scattering matrix shows in particular that the maximal amplification of solutions associated to ingoing finite-energy wave packets on past null infinity is bounded. On the frequency side, this corresponds to the novel statement that the suitably normalised reflection and transmission coefficients are uniformly bounded independently of the frequency parameters. We further complement this with a demonstration that superradiant reflection indeed amplifies the energy radiated to future null infinity of suitable wave-packets as above. The results make essential use of a refinement of our recent proof [M. Dafermos, I. Rodnianski and Y. Shlapentokh-Rothman, Decay for solutions of the wave equation on Kerr exterior spacetimes III: the full subextremal case |a|<M, arXiv:1402.6034] of boundedness and decay for solutions of the Cauchy problem so as to apply in the class of solutions where only a degenerate energy is assumed finite. We show in contrast that the analogous scattering maps cannot be defined for the class of finite non-degenerate energy solutions. This is due to the fact that the celebrated horizon red-shift effect acts as a blue-shift instability when solving the wave equation backwards. 
| Let us specifically mention here a related recent important advance by Georgescu, Gérard and Häfner @cite_32 which proves scattering results for fixed-azimuthal mode (i.e. fixed @math ) solutions of the Klein-Gordon equation in the very slowly rotating Kerr-de Sitter case @math . This is in part based on work on the Cauchy problem due to Dyatlov @cite_15 . For additional background on the Cauchy problem on other black hole spacetimes, besides references mentioned previously, we refer the reader to the lecture notes @cite_38 . | {
"cite_N": [
"@cite_38",
"@cite_15",
"@cite_32"
],
"mid": [
"2150477501",
"2018414839",
"2962737369"
],
"abstract": [
"These lecture notes, based on a course given at the Zurich Clay Summer School (June 23-July 18, 2008), review our current mathematical understanding of the global behaviour of waves on black hole exterior backgrounds. Interest in this problem stems from its relationship to the non-linear stability of the black hole spacetimes themselves as solutions to the Einstein equations, one of the central open problems of general relativity. After an introductory discussion of the Schwarzschild geometry and the black hole concept, the classical theorem of Kay and Wald on the boundedness of scalar waves on the exterior region of Schwarzschild is reviewed. The original proof is presented, followed by a new more robust proof of a stronger boundedness statement. The problem of decay of scalar waves on Schwarzschild is then addressed, and a theorem proving quantitative decay is stated and its proof sketched. This decay statement is carefully contrasted with the type of statements derived heuristically in the physics literature for the asymptotic tails of individual spherical harmonics. Following this, our recent proof of the boundedness of solutions to the wave equation on axisymmetric stationary backgrounds (including slowly-rotating Kerr and Kerr-Newman) is reviewed and a new decay result for slowly-rotating Kerr spacetimes is stated and proved. This last result was announced at the summer school and appears in print here for the first time. A discussion of the analogue of these problems for spacetimes with a positive cosmological constant follows. Finally, a general framework is given for capturing the red-shift effect for non-extremal black holes. This unifies and extends some of the analysis of the previous sections. The notes end with a collection of open problems.",
"We provide a rigorous definition of quasi-normal modes for a rotating black hole. They are given by the poles of a certain meromorphic family of operators and agree with the heuristic definition in the physics literature. If the black hole rotates slowly enough, we show that these poles form a discrete subset of ℂ. As an application we prove that the local energy of linear waves in that background decays exponentially once orthogonality to the zero resonance is imposed.",
"We show asymptotic completeness for a class of superradiant Klein-Gordon equations. Our results are applied to the Klein-Gordon equation on the De Sitter Kerr metric with small angular momentum of the black hole. For this equation we obtain asymptotic completeness for fixed angular momentum of the field."
]
} |
1412.8375 | 2952512230 | This paper considers the resource allocation problem in an Orthogonal Frequency Division Multiple Access (OFDMA) based cognitive radio (CR) network, where the CR base station adopts a full overlay scheme to transmit both private and open information to multiple users with average delay and power constraints. A stochastic optimization problem is formulated to develop flow control and radio resource allocation in order to maximize the long-term system throughput of open and private information in the CR system and ensure the stability of the primary system. The corresponding optimal condition for employing full overlay is derived in the context of concurrent transmission of open and private information. An online resource allocation scheme is designed to adapt the transmission of open and private information based on monitoring the status of the primary system as well as the channel and queue states in the CR network. The scheme is proven to be asymptotically optimal in solving the stochastic optimization problem without knowing any statistical information. Simulations are provided to verify the analytical results and the efficiency of the scheme. | There have been many works on spectrum sharing in OFDMA-based CR networks @cite_13 @cite_52 @cite_21. According to @cite_19 @cite_33, the access technology of the SUs can be divided into two categories: spectrum underlay and spectrum overlay. The first category means that SUs can access licensed spectrum during PUs' transmission, while, as is mentioned in @cite_33, this approach imposes severe constraints on the transmission power of SUs such that they operate below the noise floor of PUs, e.g., in @cite_8 @cite_0 @cite_13. The second category means that SUs can only access licensed spectrum when the PU is idle, e.g., in @cite_44 @cite_43 @cite_6 @cite_52 @cite_21.
Considering that both of these strategies suffer from drawbacks, the authors in @cite_30 propose a new cognitive overlay scheme requiring SUs to assess and control their interference impact on PUs. In general, the cognitive base station (CBS) controls the aggregate interference to the primary transmission by allowing SUs to monitor channel quality indicators (CQIs), power-control notifications, and the ACK/NAK of the primary transmission. In this paper, this novel idea is extended to an OFDMA-based CR system. | {
"cite_N": [
"@cite_30",
"@cite_33",
"@cite_8",
"@cite_21",
"@cite_52",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_44",
"@cite_43",
"@cite_13"
],
"mid": [
"2126875659",
"2153620960",
"",
"",
"",
"2138993731",
"2163042378",
"",
"",
"2028517088",
""
],
"abstract": [
"In this paper, we investigate distributed control of multiple secondary users attempting to access the channel of a high priority primary user. Our aim is to maximize the sum cognitive (secondary) user throughput under the constraint of primary user's queue stability. We consider the effect of primary user link adaptation that allows the primary transmitter (PTx) to adapt its transmission rate in response to the secondary interference-level at the primary receiver (PRx). To control the sum secondary interference to PRx beyond the traditional collision-avoidance paradigm, we propose a novel power-control algorithm for secondary nodes to function. To develop such a distributed algorithm and to improve secondary user adaptability, we allow secondary nodes to monitor primary's radio link control information on the feedback channel. We present practical schemes that approximate the optimum solution without relying on global channel information at each secondary node.",
"Compounding the confusion is the use of the broad term cognitive radio as a synonym for dynamic spectrum access. As an initial attempt at unifying the terminology, the taxonomy of dynamic spectrum access is provided. In this article, an overview of challenges and recent developments in both technological and regulatory aspects of opportunistic spectrum access (OSA) is given. The three basic components of OSA are discussed. Spectrum opportunity identification is crucial to OSA in order to achieve nonintrusive communication. The basic functions of the opportunity identification module are identified.",
"",
"",
"",
"Information flow in a telecommunication network is accomplished through the interaction of mechanisms at various design layers with the end goal of supporting the information exchange needs of the applications. In wireless networks in particular, the different layers interact in a nontrivial manner in order to support information transfer. In this text we will present abstract models that capture the cross-layer interaction from the physical to transport layer in wireless network architectures including cellular, ad-hoc and sensor networks as well as hybrid wireless-wireline. The model allows for arbitrary network topologies as well as traffic forwarding modes, including datagrams and virtual circuits. Furthermore the time varying nature of a wireless network, due either to fading channels or to changing connectivity due to mobility, is adequately captured in our model to allow for state dependent network control policies. Quantitative performance measures that capture the quality of service requirements in these systems depending on the supported applications are discussed, including throughput maximization, energy consumption minimization, rate utility function maximization as well as general performance functionals. Cross-layer control algorithms with optimal or suboptimal performance with respect to the above measures are presented and analyzed. A detailed exposition of the related analysis and design techniques is provided.",
"We consider QoS-aware spectrum sharing in cognitive wireless networks where secondary users are allowed to access the spectrum owned by a primary network provider. The interference from secondary users to primary users is constrained to be below the tolerable limit. Also, signal to interference plus noise ratio (SINR) of each secondary user is maintained higher than a desired level for QoS insurance. When network load is high, admission control needs to be performed to satisfy both QoS and interference constraints. We propose an admission control algorithm which is performed jointly with power control such that QoS requirements of all admitted secondary users are satisfied while keeping the interference to primary users below the tolerable limit. When all secondary users can be supported at minimum rates, we allow them to increase their transmission rates and share the spectrum in a fair manner. We formulate the joint power/rate allocation with max-min fairness criterion as an optimization problem. We show how to transform it into a convex optimization problem so that its globally optimal solution can be obtained. Numerical results show that the proposed admission control algorithm achieves performance very close to the optimal solution. Also, impacts of different system and QoS parameters on the network performance are investigated for both admission control and rate/power allocation problems.",
"",
"",
"We venture beyond the \"listen-before-talk\" strategy that is common in many traditional cognitive radio access schemes. We exploit the bi-directional nature of most primary communication systems. By intelligently choosing their transmission parameters based on the observation of primary user (PU) communications, secondary users (SUs) in a cognitive network can achieve higher spectrum usage while limiting their interference to the PU. Specifically, we propose that the SUs listen to the PU's feedback channel to assess their interference on the primary receiver (PU-Rx), and adjust radio power accordingly to satisfy the PU's interference constraint. We investigate both centralized and distributed power control algorithms without active PU cooperation. We show that the PU feedback information inherent in many two-way primary systems can be used as important coordination signal among multiple SUs to distributively achieve a joint performance guarantee on the primary receiver's quality of service.",
""
]
} |
1412.8375 | 2952512230 | This paper considers the resource allocation problem in an Orthogonal Frequency Division Multiple Access (OFDMA) based cognitive radio (CR) network, where the CR base station adopts a full overlay scheme to transmit both private and open information to multiple users with average delay and power constraints. A stochastic optimization problem is formulated to develop flow control and radio resource allocation in order to maximize the long-term system throughput of open and private information in the CR system and ensure the stability of the primary system. The corresponding optimal condition for employing full overlay is derived in the context of concurrent transmission of open and private information. An online resource allocation scheme is designed to adapt the transmission of open and private information based on monitoring the status of the primary system as well as the channel and queue states in the CR network. The scheme is proven to be asymptotically optimal in solving the stochastic optimization problem without knowing any statistical information. Simulations are provided to verify the analytical results and the efficiency of the scheme. | Besides the interference constraints, works on delay-aware transmission are also quite relevant to this paper. Huang and Fang in @cite_7 investigate both reliability and delay constraints in routing design for wireless sensor networks. The authors of @cite_9 summarize three approaches to deal with delay-aware resource allocation in wireless networks. A constrained predictive control strategy is proposed in @cite_27 to compensate for network-induced delays with a stability guarantee. These three approaches are based on large deviation theory, Markov decision theory, and Lyapunov optimization techniques, respectively. The first two methods require statistical information on the channel state and the random arrival data rate to design the algorithm, but such prior knowledge is expensive to obtain, or even unavailable.
To overcome this problem, many authors turn to Lyapunov optimization techniques. References @cite_40 and @cite_16 investigate scheduling in multi-hop wireless networks and resource allocation in cooperative communications, respectively, as two typical applications of Lyapunov optimization in delay-limited systems. In this paper, we utilize this tool to address the resource allocation problem in OFDMA-based CR networks. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_27",
"@cite_40",
"@cite_16"
],
"mid": [
"1998944191",
"2156006461",
"1948647671",
"2950016243",
"2169750978"
],
"abstract": [
"Sensor nodes are densely deployed to accomplish various applications because of the inexpensive cost and small size. Depending on different applications, the traffic in the wireless sensor networks may be mixed with time-sensitive packets and reliability-demanding packets. Therefore, QoS routing is an important issue in wireless sensor networks. Our goal is to provide soft-QoS to different packets as path information is not readily available in wireless networks. In this paper, we utilize the multiple paths between the source and sink pairs for QoS provisioning. Unlike E2E QoS schemes, soft-QoS mapped into links on a path is provided based on local link state information. By the estimation and approximation of path quality, traditional NP-complete QoS problem can be transformed to a modest problem. The idea is to formulate the optimization problem as a probabilistic programming, then based on some approximation technique, we convert it into a deterministic linear programming, which is much easier and convenient to solve. More importantly, the resulting solution is also one to the original probabilistic programming. Simulation results demonstrate the effectiveness of our approach.",
"In this paper, a comprehensive survey is given on several major systematic approaches in dealing with delay-aware control problems, namely the equivalent-rate constraint approach, the Lyapunov stability drift approach, and the approximate Markov decision process approach using stochastic learning. These approaches essentially embrace most of the existing literature regarding delay-aware resource control in wireless systems. They have their relative pros and cons in terms of performance, complexity, and implementation issues. For each of the approaches, the problem setup, the general solution, and the design methodology are discussed. Applications of these approaches to delay-aware resource allocation are illustrated with examples in single-hop wireless networks. Furthermore, recent results regarding delay-aware multihop routing designs in general multihop networks are elaborated. Finally, the delay performances of various approaches are compared through simulations using an example of the uplink OFDMA systems.",
"This paper investigates the problem of stabilizing predictive control for networked control systems with state and input constraints. Both sensor-to-controller and controller-to-actuator delays are considered and described by a multirate method. The control scheme is characterized as a constrained finite horizon predictive control optimization problem with a multirate network-induced delays compensation strategy. It is shown that the proposed predictive controller not only efficiently reduces the negative effects of the network-induced delays but also guarantees the closed-loop stability and constraints satisfaction. Simulation studies are used to investigate the efficiency of the derived method.",
"In this paper, we propose a cross-layer scheduling algorithm that achieves a throughput \"epsilon-close\" to the optimal throughput in multi-hop wireless networks with a tradeoff of O(1/epsilon) in delay guarantees. The algorithm aims to solve a joint congestion control, routing, and scheduling problem in a multi-hop wireless network while satisfying per-flow average end-to-end delay guarantees and minimum data rate requirements. This problem has been solved for both backlogged as well as arbitrary arrival rate systems. Moreover, we discuss the design of a class of low-complexity suboptimal algorithms, the effects of delayed feedback on the optimal algorithm, and the extensions of the proposed algorithm to different interference models with arbitrary link capacities.",
"We investigate optimal resource allocation for delay-limited cooperative communication in time varying wireless networks. Motivated by real-time applications that have stringent delay constraints, we develop a dynamic cooperation strategy that makes optimal use of network resources to achieve a target outage probability (reliability) for each user subject to average power constraints. Using the technique of Lyapunov optimization, we first present a general framework to solve this problem and then derive quasi-closed form solutions for several cooperative protocols proposed in the literature. Unlike earlier works, our scheme does not require prior knowledge of the statistical description of the packet arrival, channel state and node mobility processes and can be implemented in an online fashion."
]
} |
1412.7854 | 1812168010 | Traditional object recognition approaches apply feature extraction, part deformation handling, occlusion handling and classification sequentially while they are independent of each other. Ouyang and Wang proposed a model for jointly learning all of the mentioned processes using one deep neural network. We utilized and manipulated their toolbox in order to apply it in car detection scenarios where it had not been tested. Creating a single deep architecture from these components improves the interaction between them and can enhance the performance of the whole system. We believe that the approach can be used as a general purpose object detection toolbox. We tested the algorithm on the UIUC car dataset, and achieved an outstanding result. The accuracy of our method was 97% while the previously reported results showed an accuracy of up to 91%. We strongly believe that having an experiment on a larger dataset can show the advantage of using deep models over shallow ones. | Several approaches to object detection were proposed in the past that use some form of learning. In most such approaches, images are represented by using some features, and a learning method is used to identify regions in the feature space that correspond to the object class. There is a large variety in the types of features used and the learning methods applied @cite_9 . | {
"cite_N": [
"@cite_9"
],
"mid": [
"1508960934"
],
"abstract": [
"From the Publisher: The accessible presentation of this book gives both a general view of the entire computer vision enterprise and also offers sufficient detail to be able to build useful applications. Users learn techniques that have proven to be useful by first-hand experience and a wide range of mathematical methods. A CD-ROM with every copy of the text contains source code for programming practice, color images, and illustrative movies. Comprehensive and up-to-date, this book includes essential topics that either reflect practical significance or are of theoretical importance. Topics are discussed in substantial and increasing depth. Application surveys describe numerous important application areas such as image based rendering and digital libraries. Many important algorithms broken down and illustrated in pseudo code. Appropriate for use by engineers as a comprehensive reference to the computer vision enterprise."
]
} |
1412.7854 | 1812168010 | Traditional object recognition approaches apply feature extraction, part deformation handling, occlusion handling and classification sequentially while they are independent of each other. Ouyang and Wang proposed a model for jointly learning all of the mentioned processes using one deep neural network. We utilized and manipulated their toolbox in order to apply it in car detection scenarios where it had not been tested. Creating a single deep architecture from these components improves the interaction between them and can enhance the performance of the whole system. We believe that the approach can be used as a general purpose object detection toolbox. We tested the algorithm on the UIUC car dataset, and achieved an outstanding result. The accuracy of our method was 97% while the previously reported results showed an accuracy of up to 91%. We strongly believe that having an experiment on a larger dataset can show the advantage of using deep models over shallow ones. | Most of the traditional approaches use features such as SIFT or HOG @cite_20 to extract the overall shape of an object, and apply different learning approaches to train the system. All of these approaches have one point in common: the features are manually generated, and only used for training purposes. Some recent investigations, however, show that learning features from the training data is very helpful and improves the accuracy of the program @cite_3 . | {
"cite_N": [
"@cite_3",
"@cite_20"
],
"mid": [
"1885921277",
"2161969291"
],
"abstract": [
"We introduce a new approach for learning part-based object detection through feature synthesis. Our method consists of an iterative process of feature generation and pruning. A feature generation procedure is presented in which basic part-based features are developed into a feature hierarchy using operators for part localization, part refining and part combination. Feature pruning is done using a new feature selection algorithm for linear SVM, termed Predictive Feature Selection (PFS), which is governed by weight prediction. The algorithm makes it possible to choose from O(10^6) features in an efficient but accurate manner. We analyze the validity and behavior of PFS and empirically demonstrate its speed and accuracy advantages over relevant competitors. We present an empirical evaluation of our method on three human detection datasets including the current de-facto benchmarks (the INRIA and Caltech pedestrian datasets) and a new challenging dataset of children images in difficult poses. The evaluation suggests that our approach is on a par with the best current methods and advances the state-of-the-art on the Caltech pedestrian training dataset.",
"We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds."
]
} |
1412.7854 | 1812168010 | Traditional object recognition approaches apply feature extraction, part deformation handling, occlusion handling and classification sequentially while they are independent of each other. Ouyang and Wang proposed a model for jointly learning all of the mentioned processes using one deep neural network. We utilized and manipulated their toolbox in order to apply it in car detection scenarios where it had not been tested. Creating a single deep architecture from these components improves the interaction between them and can enhance the performance of the whole system. We believe that the approach can be used as a general purpose object detection toolbox. We tested the algorithm on the UIUC car dataset, and achieved an outstanding result. The accuracy of our method was 97% while the previously reported results showed an accuracy of up to 91%. We strongly believe that having an experiment on a larger dataset can show the advantage of using deep models over shallow ones. | We propose a model for car detection that benefits from deep learning approaches and is capable of detecting different classes of cars. The approach uses the training data to improve its low-level features. Even though we applied this approach to side view pictures of cars, it is extendable to other views of the cars too. We very closely followed the work by Ouyang @cite_6 and used the same methodology to detect a different object class. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2156547346"
],
"abstract": [
"Feature extraction, deformation handling, occlusion handling, and classification are four important components in pedestrian detection. Existing methods learn or design these components either individually or sequentially. The interaction among these components is not yet well explored. This paper proposes that they should be jointly learned in order to maximize their strengths through cooperation. We formulate these four components into a joint deep learning framework and propose a new deep network architecture. By establishing automatic, mutual interaction among components, the deep model achieves a 9% reduction in the average miss rate compared with the current best-performing pedestrian detection approaches on the largest Caltech benchmark dataset."
]
} |
1412.7978 | 1972468285 | This paper presents an information theoretic approach to the concept of intelligence in the computational sense. We introduce a probabilistic framework from which computational intelligence is shown to be an entropy minimizing process at the local level. Using this new scheme, we develop a simple data driven clustering example and discuss its applications. | Many sources claim to have computational theories of intelligence, but for the most part these "theories" merely act to describe certain aspects of intelligence @cite_6 . For example, Meyer in @cite_12 suggests that performance on multiple tasks is dependent on adaptive executive control, but makes no claim on the emergence of such characteristics. Others discuss how data is aggregated. This type of analysis is especially relevant in computer vision and image recognition @cite_9 . | {
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_6"
],
"mid": [
"1995756857",
"1974302391",
"1577491022"
],
"abstract": [
"An algorithm is proposed for solving the stereoscopic matching problem. The algorithm consists of five steps: (1) Each image is filtered at different orientations with bar masks of four sizes that increase with eccentricity; the equivalent filters are one or two octaves wide. (2) Zero-crossings in the filtered images, which roughly correspond to edges, are localized. Positions of the ends of lines and edges are also found. (3) For each mask orientation and size, matching takes place between pairs of zero-crossings or terminations of the same sign in the two images, for a range of disparities up to about the width of the mask's central region. (4) Wide masks can control vergence movements, thus causing small masks to come into correspondence. (5) When a correspondence is achieved, it is stored in a dynamic buffer, called the 2 @math -D sketch. It is shown that this proposal provides a theoretical framework for most existing psychophysical and neurophysiological data about stereopsis. Several critical experimental predictions are also made, for instance about the size of Panum's area under various conditions. The results of such experiments would tell us whether, for example, co-operativity is necessary for the matching process.",
"Abstract : Persistent controversies about human multiple task performance suggest that research on it will benefit from increased use of precise computational models. Toward this objective, the present report outlines a comprehensive theoretical framework for understanding and predicting the performance of concurrent perceptual motor and cognitive tasks. The framework involves an Executive Process Interactive Control (EPIC) architecture, which has component modules that process information at perceptual, cognitive, and motor levels. On the basis of EPIC, computational models that use a production system formalism may be constructed to simulate multiple task performance under a variety of conditions. These models account well for reaction time data from representative paradigms such as the psychological refractory period (PRP) procedure. With modest numbers of parameters, good fits between empirical and simulated reaction times support several key conclusions: (1) at a cognitive level, people can apply distinct sets of production rules simultaneously for executing the procedures of multiple tasks; (2) there is no immutable central response selection or decision bottleneck; (3) people's capacity to process information and take action at peripheral perceptual motor levels is limited; (4) to cope with such limits and to satisfy task priorities, flexible scheduling strategies are used; (5) these strategies are mediated by executive cognitive processes that coordinate concurrent tasks adaptively. The initial success of EPIC and models based on it suggest that they may help characterize multiple task performance across many domains, including ones that have substantial practical relevance.",
"Preface. Emergence of a Theory. Knowledge. Perception. Goal Seeking and Planning. A Reference Model Architecture. Behavior Generation. World Modeling, Value Judgment, and Knowledge Representation. Sensory Processing. Engineering Unmanned Ground Vehicles. Future Possibilities. References. Index."
]
} |
1412.7978 | 1972468285 | This paper presents an information theoretic approach to the concept of intelligence in the computational sense. We introduce a probabilistic framework from which computational intelligence is shown to be an entropy minimizing process at the local level. Using this new scheme, we develop a simple data driven clustering example and discuss its applications. | Inspired by physics and cosmology, Wissner-Gross asserts that autonomous agents act to maximize the entropy in their environment @cite_16 . Specifically, he proposes a path integral formulation from which he derives a gradient which can be analogized as a causal force propelling a system along a gradient of maximum entropy over time. Using this idea, he created a startup called that applies this principle in ingenious ways in a variety of different applications, ranging from teaching a robot to walk upright to maximizing profit potential in the stock market. | {
"cite_N": [
"@cite_16"
],
"mid": [
"2003624562"
],
"abstract": [
"Recent advances in fields ranging from cosmology to computer science have hinted at a possible deep connection between intelligence and entropy maximization, but no formal physical relationship between them has yet been established. Here, we explicitly propose a first step toward such a relationship in the form of a causal generalization of entropic forces that we find can cause two defining behaviors of the \"human cognitive niche\" (tool use and social cooperation) to spontaneously emerge in simple physical systems. Our results suggest a potentially general thermodynamic model of adaptive behavior as a nonequilibrium process in open systems."
]
} |
1412.7890 | 1892981132 | We investigate the problem of reconstructing signals from a subsampled convolution of their modulated versions and a known filter. The problem is studied as applies to a specific imaging architecture that relies on spatial phase modulation by randomly coded “masks.” The diversity induced by the random masks is deemed to improve the conditioning of the deconvolution problem while maintaining sampling efficiency. We analyze a linear model of the imaging system, where the joint effect of the spatial modulation, blurring, and spatial subsampling is represented concisely by a measurement matrix. We provide a bound on the conditioning of this measurement matrix in terms of the number of masks @math , the dimension (i.e., the pixel count) of the scene image @math , and certain characteristics of the blurring kernel and subsampling operator. The derived bound shows that the stable deconvolution is possible with high probability even if the number of masks (i.e., @math ) is as small as @math , meaning that the total number of (scalar) measurements is within a logarithmic factor of the image size. Furthermore, beyond a critical number of masks determined by the extent of blurring and subsampling, use of every additional mask improves the conditioning of the measurement matrix. We also consider a more interesting scenario where the target image is known to be sparse. We show that under mild conditions on the blurring kernel, with high probability the measurement matrix is a restricted isometry when the number of masks is within a logarithmic factor of the sparsity of the scene image. Therefore, the scene image can be reconstructed using any of the well-known sparse recovery algorithms such as the basis pursuit. The bound on the required number of masks grows linearly in sparsity of the scene image but logarithmically in its ambient dimension. The bound provides a quantitative view of the effect of the blurring and subsampling on the required number of masks, which is critical for designing efficient imaging systems. | Classical deconvolution techniques can be broadly categorized in two frameworks based on their approaches to regularization of the inverse problem. Methods of the first category, including Wiener filtering and a variety of Bayesian methods, assume some stochastic model for the image or the blurring kernel that is often application specific. Methods of the second category, which are essentially variants of least squares, only use the deterministic spatial or spectral structures of the image, such as smoothness, for regularization. For a comprehensive survey of classic deconvolution methods for image restoration and reconstruction we refer the interested readers to @cite_1 and @cite_4 . | {
"cite_N": [
"@cite_1",
"@cite_4"
],
"mid": [
"2150060382",
"2075041908"
],
"abstract": [
"The article introduces digital image restoration to the reader who is just beginning in this field, and provides a review and analysis for the reader who may already be well-versed in image restoration. The perspective on the topic is one that comes primarily from work done in the field of signal processing. Thus, many of the techniques and works cited relate to classical signal processing approaches to estimation theory, filtering, and numerical analysis. In particular, the emphasis is placed primarily on digital image restoration algorithms that grow out of an area known as \"regularized least squares\" methods. It should be noted, however, that digital image restoration is a very broad field, as we discuss, and thus contains many other successful approaches that have been developed from different perspectives, such as optics, astronomy, and medical imaging, just to name a few. In the process of reviewing this topic, we address a number of very important issues in this field that are not typically discussed in the technical literature.",
"▪ Abstract Digital image reconstruction is a robust means by which the underlying images hidden in blurry and noisy data can be revealed. The main challenge is sensitivity to measurement noise in the input data, which can be magnified strongly, resulting in large artifacts in the reconstructed image. The cure is to restrict the permitted images. This review summarizes image reconstruction methods in current use. Progressively more sophisticated image restrictions have been developed, including (a) filtering the input data, (b) regularization by global penalty functions, and (c) spatially adaptive methods that impose a variable degree of restriction across the image. The most reliable reconstruction is the most conservative one, which seeks the simplest underlying image consistent with the input data. Simplicity is context-dependent, but for most imaging applications, the simplest reconstructed image is the smoothest one. Imposing the maximum, spatially adaptive smoothing permitted by the data results in t..."
]
} |
1412.7890 | 1892981132 | We investigate the problem of reconstructing signals from a subsampled convolution of their modulated versions and a known filter. The problem is studied as applies to a specific imaging architecture that relies on spatial phase modulation by randomly coded “masks.” The diversity induced by the random masks is deemed to improve the conditioning of the deconvolution problem while maintaining sampling efficiency. We analyze a linear model of the imaging system, where the joint effect of the spatial modulation, blurring, and spatial subsampling is represented concisely by a measurement matrix. We provide a bound on the conditioning of this measurement matrix in terms of the number of masks @math , the dimension (i.e., the pixel count) of the scene image @math , and certain characteristics of the blurring kernel and subsampling operator. The derived bound shows that the stable deconvolution is possible with high probability even if the number of masks (i.e., @math ) is as small as @math , meaning that the total number of (scalar) measurements is within a logarithmic factor of the image size. Furthermore, beyond a critical number of masks determined by the extent of blurring and subsampling, use of every additional mask improves the conditioning of the measurement matrix. We also consider a more interesting scenario where the target image is known to be sparse. We show that under mild conditions on the blurring kernel, with high probability the measurement matrix is a restricted isometry when the number of masks is within a logarithmic factor of the sparsity of the scene image. Therefore, the scene image can be reconstructed using any of the well-known sparse recovery algorithms such as the basis pursuit. The bound on the required number of masks grows linearly in sparsity of the scene image but logarithmically in its ambient dimension. The bound provides a quantitative view of the effect of the blurring and subsampling on the required number of masks, which is critical for designing efficient imaging systems. | In recent years there has been an increasing interest in the application of CS in various imaging modalities including but not limited to holography @cite_18 , coded aperture spectral imaging @cite_3 , fluorescent microscopy @cite_16 , and sub-wavelength imaging @cite_14 . The CS-based imaging systems are particularly interesting in applications where the measurements are time-consuming or expensive. Furthermore, by exploiting the sparsity of the scene image, the CS imaging methods can operate at SNR regimes where conventional imaging methods may perform poorly. A survey of practical advantages and challenges of various CS imaging systems can be found in @cite_5 . The first CS imaging system was introduced as the "single-pixel camera" in @cite_2 where a single sensor integrates the randomly masked versions of the scene image for a few different masks. Effectively, the single-pixel camera measures the inner product of the scene image and the randomly generated masks. Using the fact that natural images are often (nearly) sparse in some basis, it is shown in @cite_2 that CS allows accurate image reconstruction in this single-pixel architecture. | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_16"
],
"mid": [
"",
"2021942939",
"2079319869",
"2122548617",
"2035657841",
"2114456237"
],
"abstract": [
"",
"We show that, in contrast to popular belief, sub-wavelength information can be recovered from the far-field of an optical image, thereby overcoming the loss of information embedded in decaying evanescent waves. The only requirement is that the image is known to be sparse, a specific but very general and wide-spread property of signals which occur almost everywhere in nature. The reconstruction method relies on newly-developed compressed sensing techniques, which we adapt to optical super-resolution and sub-wavelength imaging. Our approach exhibits robustness to noise and imperfections. We provide an experimental proof-of-principle by demonstrating image recovery at a spatial resolution 5-times higher than the finest resolution defined by a spatial filter. The technique is general, and can be extended beyond optical microscopy, for example, to atomic force microscopes, scanning-tunneling microscopes, and other imaging systems.",
"Imaging spectroscopy involves the sensing of a large amount of spatial information across a multitude of wavelengths. Conventional approaches to hyperspectral sensing scan adjacent zones of the underlying spectral scene and merge the results to construct a spectral data cube. Push broom spectral imaging sensors, for instance, capture a spectral cube with one focal plane array (FPA) measurement per spatial line of the scene [1], [2]. Spectrometers based on optical bandpass filters sequentially scan the scene by tuning the bandpass filters in steps. The disadvantage of these techniques is that they require scanning a number of zones linearly in proportion to the desired spatial and spectral resolution. This article surveys compressive coded aperture spectral imagers, also known as coded aperture snapshot spectral imagers (CASSI) [1], [3], [4], which naturally embody the principles of compressive sensing (CS) [5], [6]. The remarkable advantage of CASSI is that the entire data cube is sensed with just a few FPA measurements and, in some cases, with as little as a single FPA shot.",
"In this article, the authors present a new approach to building simpler, smaller, and cheaper digital cameras that can operate efficiently across a broader spectral range than conventional silicon-based cameras. The approach fuses a new camera architecture based on a digital micromirror device with the new mathematical theory and algorithms of compressive sampling.",
"The emerging field of compressed sensing has potentially powerful implications for the design of optical imaging devices. In particular, compressed sensing theory suggests that one can recover a scene at a higher resolution than is dictated by the pitch of the focal plane array. This rather remarkable result comes with some important caveats however, especially when practical issues associated with physical implementation are taken into account. This tutorial discusses compressed sensing in the context of optical imaging devices, emphasizing the practical hurdles related to building such devices, and offering suggestions for overcoming these hurdles. Examples and analysis specifically related to infrared imaging highlight the challenges associated with large format focal plane arrays and how these challenges can be mitigated using compressed sensing ideas.",
"The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices—especially in optics—have only started being addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines a dynamic structured wide-field illumination and a fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells, and tissues with undersampling ratios (between the number of pixels and number of measurements) up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibits extreme redundancy. Altogether, our results emphasize the interest of CS schemes for acquisition at a significantly reduced rate and point to some remaining challenges for CS fluorescence microscopy."
]
} |
1412.7059 | 2949957320 | State-of-the-art emergency navigation approaches are designed to evacuate civilians during a disaster based on real-time decisions using a pre-defined algorithm and live sensory data. Hence, casualties caused by the poor decisions and guidance are only apparent at the end of the evacuation process and cannot then be remedied. Previous research shows that the performance of routing algorithms for evacuation purposes is sensitive to the initial distribution of evacuees, the occupancy levels, the type of disaster as well as its locations. Thus an algorithm that performs well in one scenario may achieve bad results in another scenario. This problem is especially serious in heuristic-based routing algorithms for evacuees where results are affected by the choice of certain parameters. Therefore, this paper proposes a simulation-based evacuee routing algorithm that optimises evacuation by making use of the high computational power of cloud servers. Rather than guiding evacuees with a predetermined routing algorithm, a robust Cognitive Packet Network based algorithm is first evaluated via a cloud-based simulator in a faster-than-real-time manner, and any "simulated casualties" are then re-routed using a variant of Dijkstra's algorithm to obtain new safe paths for them to exits. This approach can be iterated as long as corrective action is still possible. | However, wireless sensor network based emergency navigation systems suffer from inherent disadvantages such as limited computing capability, restrained battery power and restricted storage capacity. Hence, it is difficult for this type of architecture to provide optimal solutions in a timely fashion. By using static or mobile sensors as thin clients and offloading intensive computations to remote servers, cloud-enabled emergency navigation systems have the potential to revolutionise this field. By leveraging existing public cloud services such as social network sites, Refs. @cite_12 @cite_10 present emergency warning systems to gather and disseminate multi-media emergency information among users. Refs. @cite_16 @cite_14 employ the built-in cameras on smart phones to take snapshots and upload them to servers to identify the location of evacuees. Based on the extracted data, the cloud-based emergency navigation system can provide appropriate paths for civilians. | {
"cite_N": [
"@cite_14",
"@cite_16",
"@cite_10",
"@cite_12"
],
"mid": [
"",
"1964195006",
"2106723111",
"2152721874"
],
"abstract": [
"",
"Emergency applications have recently become widely available on modern smart phones. Nearly all of these commercial applications have focused on providing simple accident information in outdoor settings. AR in indoor environments poses unique challenges, due to the unavailability of GPS indoors and WiFi-based positioning limitations. In this paper, we propose the use of Rescue Me, a novel system based on indoor mobile AR applications using personalized pedometry and one that recommends the most optimal, uncrowded exit path to users. We have developed the Rescue Me application for use within large scale buildings, with complex paths. We show how Rescue Me leverages the sensors on a smart phone, in conjunction with emergency information and daily-based user behavior, to deliver evacuation information in emergency situations.",
"Cloud Computing, Mobile Technology, and Social Networking Services such as Facebook and Twitter have become an integral part of society during the event of an emergency or disaster. In the wake and aftermath of a disaster, a tremendous number of people used social networking sites to post and share information. The Department of Health and Human Services (HHS) had sponsored a challenge for software application developers to design a Facebook application to help people prepare for emergencies and to obtain support from friends and families during its aftermath. Lockheed Martin had responded to this challenge by creating a cloud- and mobile-based Facebook application called the Personal Emergency Preparedness Plan (PEPP). This paper discusses the design and integration of the PEPP Facebook App with the intention of serving as reference architecture for developing social networking applications using cloud computing and mobile technology.",
"Natural and man-made emergencies pose an ever-present threat to the society. In response to the growing number of recent disasters, such as the Indonesian volcanic eruption, Gulf of Mexico oil spill, Haitian earthquake, Pakistani floods, and in particular, the Red River crest that causes flood almost every year here in Fargo, North Dakota, we propose a community-based scalable cloud computing infrastructure for large-scale emergency management. This infrastructure will coordinate various organizations and integrate massive amounts of heterogeneous data sources to effectively deploy personnel and logistics to aid in search and rescue. The infrastructure also will aid in damage assessment, enumeration, and coordination to support sustainable livelihood by protecting lives, properties and the environment."
]
} |
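The row above describes re-routing "simulated casualties" with a variant of Dijkstra's algorithm to find new safe paths to exits. As a minimal sketch of the underlying shortest-path step (the toy building graph, room names, and edge costs below are invented for illustration and are not taken from the paper), plain Dijkstra over a weighted graph suffices:

```python
import heapq

def dijkstra(graph, source):
    """Cheapest travel cost from source to every reachable node.

    graph: {node: [(neighbor, edge_cost), ...]} with non-negative costs.
    """
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry, already relaxed via a cheaper path
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Invented toy building: an evacuee in room_a picks the nearest exit.
building = {
    "room_a": [("hall", 1.0)],
    "hall": [("room_a", 1.0), ("exit_1", 2.0), ("exit_2", 5.0)],
    "exit_1": [],
    "exit_2": [],
}
costs = dijkstra(building, "room_a")
print(min(("exit_1", "exit_2"), key=costs.get))  # prints "exit_1"
```

In the paper's iterative loop this step would be re-run on an updated hazard graph for each simulated casualty; edge costs could encode congestion or hazard intensity.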
1412.6986 | 1808614931 | The use of local memory is important to improve the performance of OpenCL programs. However, its use may not always benefit performance, depending on various application characteristics, and there is no simple heuristic for deciding when to use it. We develop a machine learning model to decide if the optimization is beneficial or not. We train the model with millions of synthetic benchmarks and show that it can predict if the optimization should be applied for a single array, in both synthetic and real benchmarks, with high accuracy. | There has been growing interest in the use of machine learning to auto-tune the performance of GPU applications @cite_1 @cite_5 @cite_9 @cite_15 @cite_7 . For example, @cite_7 explore the use of a neural network for auto-tuning thread coarsening, and @cite_9 use a decision tree to decide if an OpenCL kernel should be executed on the CPU or the GPU. In contrast, we focus on auto-tuning the use of local memory. | {
"cite_N": [
"@cite_7",
"@cite_9",
"@cite_1",
"@cite_5",
"@cite_15"
],
"mid": [
"1989562524",
"1982020565",
"2160786443",
"2033088400",
""
],
"abstract": [
"OpenCL has been designed to achieve functional portability across multi-core devices from different vendors. However, the lack of a single cross-target optimizing compiler severely limits performance portability of OpenCL programs. Programmers need to manually tune applications for each specific device, preventing effective portability. We target a compiler transformation specific for data-parallel languages: thread-coarsening and show it can improve performance across different GPU devices. We then address the problem of selecting the best value for the coarsening factor parameter, i.e., deciding how many threads to merge together. We experimentally show that this is a hard problem to solve: good configurations are difficult to find and naive coarsening in fact leads to substantial slowdowns. We propose a solution based on a machine-learning model that predicts the best coarsening factor using kernel-function static features. The model automatically specializes to the different architectures considered. We evaluate our approach on 17 benchmarks on four devices: two Nvidia GPUs and two different generations of AMD GPUs. Using our technique, we achieve speedups between 1.11× and 1.33× on average.",
"General purpose GPU based systems are highly attractive as they give potentially massive performance at little cost. Realizing such potential is challenging due to the complexity of programming. This paper presents a compiler based approach to automatically generate optimized OpenCL code from data-parallel OpenMP programs for GPUs. Such an approach brings together the benefits of a clear high level-language (OpenMP) and an emerging standard (OpenCL) for heterogeneous multi-cores. A key feature of our scheme is that it leverages existing transformations, especially data transformations, to improve performance on GPU architectures and uses predictive modeling to automatically determine if it is worthwhile running the OpenCL code on the GPU or OpenMP code on the multi-core host. We applied our approach to the entire NAS parallel benchmark suite and evaluated it on two distinct GPU based systems: Core i7 NVIDIA GeForce GTX 580 and Core i7 AMD Radeon 7970. We achieved average (up to) speedups of 4.51× and 4.20× (143× and 67×) respectively over a sequential baseline. This is, on average, a factor 1.63 and 1.56 times faster than a hand-coded, GPU-specific OpenCL implementation developed by independent expert programmers.",
"Recent years have seen a trend in using graphic processing units (GPU) as accelerators for general-purpose computing. The inexpensive, single-chip, massively parallel architecture of GPU has evidentially brought factors of speedup to many numerical applications. However, the development of a high-quality GPU application is challenging, due to the large optimization space and complex unpredictable effects of optimizations on GPU program performance. Recently, several studies have attempted to use empirical search to help the optimization. Although those studies have shown promising results, one important factor—program inputs—in the optimization has remained unexplored. In this work, we initiate the exploration in this new dimension. By conducting a series of measurement, we find that the ability to adapt to program inputs is important for some applications to achieve their best performance on GPU. In light of the findings, we develop an input-adaptive optimization framework, namely G-ADAPT, to address the influence by constructing cross-input predictive models for automatically predicting the (near-)optimal configurations for an arbitrary input to a GPU program. The results demonstrate the promise of the framework in serving as a tool to alleviate the productivity bottleneck in GPU programming.",
"The rapidly evolving landscape of multicore architectures makes the construction of efficient libraries a daunting task. A family of methods known collectively as “auto-tuning” has emerged to address this challenge. Two major approaches to auto-tuning are empirical and model-based: empirical autotuning is a generic but slow approach that works by measuring runtimes of candidate implementations, model-based auto-tuning predicts those runtimes using simplified abstractions designed by hand. We show that machine learning methods for non-linear regression can be used to estimate timing models from data, capturing the best of both approaches. A statistically-derived model offers the speed of a model-based approach, with the generality and simplicity of empirical auto-tuning. We validate our approach using the filterbank correlation kernel described in Pinto and Cox [2012], where we find that 0.1 seconds of hill climbing on the regression model (“predictive auto-tuning”) can achieve almost the same speed-up as is brought by minutes of empirical auto-tuning. Our approach is not specific to filterbank correlation, nor even to GPU kernel auto-tuning, and can be applied to almost any templated-code optimization problem, spanning a wide variety of problem types, kernel types, and platforms.",
""
]
} |
1412.6986 | 1808614931 | The use of local memory is important to improve the performance of OpenCL programs. However, its use may not always benefit performance, depending on various application characteristics, and there is no simple heuristic for deciding when to use it. We develop a machine learning model to decide if the optimization is beneficial or not. We train the model with millions of synthetic benchmarks and show that it can predict if the optimization should be applied for a single array, in both synthetic and real benchmarks, with high accuracy. | There is work that explored auto-tuning of the use of local memory, but focused on the use of analytical modeling and empirical search @cite_6 . In contrast, we build machine learning models, which have the potential to be more accurate than analytical approaches. | {
"cite_N": [
"@cite_6"
],
"mid": [
"1993320379"
],
"abstract": [
"Contemporary many-core processors such as the GeForce 8800 GTX enable application developers to utilize various levels of parallelism to enhance the performance of their applications. However, iterative optimization for such a system may lead to a local performance maximum, due to the complexity of the system. We propose program optimization carving, a technique that begins with a complete optimization space and prunes it down to a set of configurations that is likely to contain the global maximum. The remaining configurations can then be evaluated to determine the one with the best performance. The technique can reduce the number of configurations to be evaluated by as much as 98% and is successful at finding a near-best configuration. For some applications, we show that this approach is significantly superior to random sampling of the search space."
]
} |
1412.6986 | 1808614931 | The use of local memory is important to improve the performance of OpenCL programs. However, its use may not always benefit performance, depending on various application characteristics, and there is no simple heuristic for deciding when to use it. We develop a machine learning model to decide if the optimization is beneficial or not. We train the model with millions of synthetic benchmarks and show that it can predict if the optimization should be applied for a single array, in both synthetic and real benchmarks, with high accuracy. | Finally, there is a large body of work that treats auto-tuning for platforms other than GPUs, including multi-cores @cite_8 @cite_12 and single-core processors @cite_2 @cite_3 @cite_11 . In contrast, we focus on GPUs. | {
"cite_N": [
"@cite_8",
"@cite_3",
"@cite_2",
"@cite_12",
"@cite_11"
],
"mid": [
"2461867464",
"2060533244",
"2118937112",
"2033139628",
"2156560068"
],
"abstract": [
"Multicore architectures have become so complex and diverse that there is no obvious path to achieving good performance. Hundreds of code transformations, compiler flags, architectural features and optimization parameters result in a search space that can take many machine-months to explore exhaustively. Inspired by successes in the systems community, we apply state-of-the-art machine learning techniques to explore this space more intelligently. On 7-point and 27-point stencil code, our technique takes about two hours to discover a configuration whose performance is within 1% of and up to 18% better than that achieved by a human expert. This factor of 2000 speedup over manual exploration of the auto-tuning parameter space enables us to explore optimizations that were previously off-limits. We believe the opportunity for using machine learning in multicore autotuning is even more promising than the successes to date in the systems literature.",
"Instruction scheduling is a compiler optimization that can improve program speed, sometimes by 10% or more, but it can also be expensive. Furthermore, time spent optimizing is more important in a Java just-in-time (JIT) compiler than in a traditional one because a JIT compiles code at run time, adding to the running time of the program. We found that, on any given block of code, instruction scheduling often does not produce significant benefit and sometimes degrades speed. Thus, we hoped that we could focus scheduling effort on those blocks that benefit from it. Using supervised learning we induced heuristics to predict which blocks benefit from scheduling. The induced function chooses, for each block, between list scheduling and not scheduling the block at all. Using the induced function we obtained over 90% of the improvement of scheduling every block but with less than 25% of the scheduling effort. When used in combination with profile-based adaptive optimization, the induced function remains effective but gives a smaller reduction in scheduling effort. Deciding when to optimize, and which optimization(s) to apply, is an important open problem area in compiler research. We show that supervised learning solves one of these problems well.",
"Compiler writers have crafted many heuristics over the years to approximately solve NP-hard problems efficiently. Finding a heuristic that performs well on a broad range of applications is a tedious and difficult process. This paper introduces Meta Optimization, a methodology for automatically fine-tuning compiler heuristics. Meta Optimization uses machine-learning techniques to automatically search the space of compiler heuristics. Our techniques reduce compiler design complexity by relieving compiler writers of the tedium of heuristic tuning. Our machine-learning system uses an evolutionary algorithm to automatically find effective compiler heuristics. We present promising experimental results. In one mode of operation Meta Optimization creates application-specific heuristics which often result in impressive speedups. For hyperblock formation, one optimization we present in this paper, we obtain an average speedup of 23% (up to 73%) for the applications in our suite. Furthermore, by evolving a compiler's heuristic over several benchmarks, we can create effective, general-purpose heuristics. The best general-purpose heuristic our system found for hyperblock formation improved performance by an average of 25% on our training set, and 9% on a completely unrelated test set. We demonstrate the efficacy of our techniques on three different optimizations in this paper: hyperblock formation, register allocation, and data prefetching.",
"Stream based languages are a popular approach to expressing parallelism in modern applications. The efficient mapping of streaming parallelism to multi-core processors is, however, highly dependent on the program and underlying architecture. We address this by developing a portable and automatic compiler-based approach to partitioning streaming programs using machine learning. Our technique predicts the ideal partition structure for a given streaming application using prior knowledge learned off-line. Using the predictor we rapidly search the program space (without executing any code) to generate and select a good partition. We applied this technique to standard StreamIt applications and compared against existing approaches. On a 4-core platform, our approach achieves 60% of the best performance found by iteratively compiling and executing over 3000 different partitions per program. We obtain, on average, a 1.90x speedup over the already tuned partitioning scheme of the StreamIt compiler. When compared against a state-of-the-art analytical, model-based approach, we achieve, on average, a 1.77x performance improvement. By porting our approach to an 8-core platform, we are able to obtain 1.8x improvement over the StreamIt default scheme, demonstrating the portability of our approach.",
"Tuning compiler optimizations for rapidly evolving hardware makes porting and extending an optimizing compiler for each new platform extremely challenging. Iterative optimization is a popular approach to adapting programs to a new architecture automatically using feedback-directed compilation. However, the large number of evaluations required for each program has prevented iterative compilation from widespread take-up in production compilers. Machine learning has been proposed to tune optimizations across programs systematically but is currently limited to a few transformations, long training phases and critically lacks publicly released, stable tools. Our approach is to develop a modular, extensible, self-tuning optimization infrastructure to automatically learn the best optimizations across multiple programs and architectures based on the correlation between program features, run-time behavior and optimizations. In this paper we describe Milepost GCC, the first publicly-available open-source machine learning-based compiler. It consists of an Interactive Compilation Interface (ICI) and plugins to extract program features and exchange optimization data with the cTuning.org open public repository. It automatically adapts the internal optimization heuristic at function-level granularity to improve execution time, code size and compilation time of a new program on a given architecture. Part of the MILEPOST technology together with low-level ICI-inspired plugin framework is now included in the mainline GCC. We developed machine learning plugins based on probabilistic and transductive approaches to predict good combinations of optimizations. Our preliminary experimental results show that it is possible to automatically reduce the execution time of individual MiBench programs, some by more than a factor of 2, while also improving compilation time and code size. 
On average we are able to reduce the execution time of the MiBench benchmark suite by 11% for the ARC reconfigurable processor. We also present a realistic multi-objective optimization scenario for Berkeley DB library using Milepost GCC and improve execution time by approximately 17%, while reducing compilation time and code size by 12% and 7% respectively on Intel Xeon processor."
]
} |
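Several entries above reduce to the same recipe: learn a classifier that predicts from static features whether an optimization pays off (the decision tree of @cite_9, the synthetic-benchmark training in this row's own abstract). The sketch below is a deliberately tiny stand-in: the features (a data-reuse count and an array size), the 48 KB scratchpad limit, and the synthetic labelling rule are all invented for illustration, and a one-level decision stump replaces the real model:

```python
import random

random.seed(0)  # deterministic toy data

def pays_off(reuse, size):
    # Invented ground truth: the optimization helps when data is reused
    # often and fits in an assumed 48 KB scratchpad.
    return reuse >= 4 and size <= 48 * 1024

# The paper trains on millions of synthetic benchmarks; 500 toy ones here.
samples = []
for _ in range(500):
    reuse = random.randint(1, 16)           # invented feature: reuses per element
    size = random.randint(1024, 96 * 1024)  # invented feature: array bytes
    samples.append(((reuse, size), pays_off(reuse, size)))

def train_stump(data, feature):
    """Best single-feature threshold split, scored by training accuracy."""
    best = (0.0, 0, True)  # (accuracy, threshold, predict_true_when_value_ge)
    for t in sorted({x[feature] for x, _ in data}):
        for predict_ge in (True, False):
            acc = sum(((x[feature] >= t) == predict_ge) == y
                      for x, y in data) / len(data)
            if acc > best[0]:
                best = (acc, t, predict_ge)
    return best

for name, f in (("reuse", 0), ("size", 1)):
    acc, t, ge = train_stump(samples, f)
    print(f"{name}: accuracy={acc:.2f} threshold={t} predict>=t={ge}")
```

A real model (e.g. a decision tree or neural network over many kernel features) would be validated on held-out real benchmarks rather than scored on its own training set.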
1412.6632 | 1811254738 | In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html . | Methods based on deep neural networks have developed rapidly in recent years in both computer vision and natural language processing. For computer vision, @cite_0 propose a deep Convolutional Neural Network (CNN) with 8 layers (denoted as AlexNet) and outperform previous methods by a large margin in the image classification task of the ImageNet challenge ( @cite_47 ). This network structure is widely used in computer vision, e.g. @cite_4 design an object detection framework (RCNN) based on this work. Recently, @cite_14 propose a CNN with over 16 layers (denoted as VggNet) which performs substantially better than AlexNet. For natural language, the Recurrent Neural Network (RNN) shows state-of-the-art performance in many tasks, such as speech recognition and word embedding learning ( @cite_5 @cite_19 @cite_16 ).
Recently, RNNs have been successfully applied to machine translation to extract semantic information from the source sentence and generate target sentences (e.g. @cite_35 , @cite_6 and @cite_15 ). | {
"cite_N": [
"@cite_35",
"@cite_14",
"@cite_4",
"@cite_15",
"@cite_6",
"@cite_0",
"@cite_19",
"@cite_5",
"@cite_47",
"@cite_16"
],
"mid": [
"1753482797",
"1686810756",
"2102605133",
"",
"2950635152",
"",
"2171928131",
"",
"2952020226",
"2950133940"
],
"abstract": [
"We introduce a class of probabilistic continuous translation models called Recurrent Continuous Translation Models that are purely based on continuous representations for words, phrases and sentences and do not rely on alignments or phrasal translation units. The models have a generation and a conditioning aspect. The generation of the translation is modelled with a target Recurrent Language Model, whereas the conditioning on the source sentence is modelled with a Convolutional Sentence Model. Through various experiments, we show first that our models obtain a perplexity with respect to gold translations that is > 43% lower than that of state-of-the-art alignment-based translation models. Secondly, we show that they are remarkably sensitive to the word order, syntax, and meaning of the source sentence despite lacking alignments. Finally we show that they match a state-of-the-art system when rescoring n-best lists of translations.",
"In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.",
"Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.",
"",
"In this paper, we propose a novel neural network model called RNN Encoder-Decoder that consists of two recurrent neural networks (RNN). One RNN encodes a sequence of symbols into a fixed-length vector representation, and the other decodes the representation into another sequence of symbols. The encoder and decoder of the proposed model are jointly trained to maximize the conditional probability of a target sequence given a source sequence. The performance of a statistical machine translation system is empirically found to improve by using the conditional probabilities of phrase pairs computed by the RNN Encoder-Decoder as an additional feature in the existing log-linear model. Qualitatively, we show that the proposed model learns a semantically and syntactically meaningful representation of linguistic phrases.",
"",
"We present several modifications of the original recurrent neural network language model (RNN LM).While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is the computational complexity. In this work, we show approaches that lead to more than 15 times speedup for both training and testing phases. Next, we show importance of using a backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. In the end, we discuss possibilities how to reduce the amount of parameters in the model. The resulting RNN model can thus be smaller, faster both during training and testing, and more accurate than the basic one.",
"",
"The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object category classification and detection on hundreds of object categories and millions of images. The challenge has been run annually from 2010 to present, attracting participation from more than fifty institutions. This paper describes the creation of this benchmark dataset and the advances in object recognition that have been possible as a result. We discuss the challenges of collecting large-scale ground truth annotation, highlight key breakthroughs in categorical object recognition, provide a detailed analysis of the current state of the field of large-scale image classification and object detection, and compare the state-of-the-art computer vision accuracy with human accuracy. We conclude with lessons learned in the five years of the challenge, and propose future directions and improvements.",
"The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of \"Canada\" and \"Air\" cannot be easily combined to obtain \"Air Canada\". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible."
]
} |
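The recurrent language models cited in this row (e.g. the RNN LM of @cite_19, and the recurrent layer the m-RNN itself builds on) share one core step: a hidden state updated from the previous state and the current word, followed by a softmax over the vocabulary. A minimal pure-Python sketch of that forward step — the vocabulary size, hidden width, and random weights below are illustrative, and no training is shown:

```python
import math
import random

random.seed(1)

def softmax(xs):
    """Numerically stable softmax: shift by the max before exponentiating."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# One recurrent step: h_t = tanh(U x_t + W h_{t-1}); p = softmax(V h_t).
V_SIZE, H = 5, 4  # toy vocabulary size and hidden width
U = [[random.uniform(-0.5, 0.5) for _ in range(V_SIZE)] for _ in range(H)]
W = [[random.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(H)]
V = [[random.uniform(-0.5, 0.5) for _ in range(H)] for _ in range(V_SIZE)]

def step(word_id, h_prev):
    """Consume one word id; return new hidden state and next-word distribution."""
    x = [1.0 if i == word_id else 0.0 for i in range(V_SIZE)]  # one-hot input
    h = [math.tanh(sum(U[i][j] * x[j] for j in range(V_SIZE)) +
                   sum(W[i][j] * h_prev[j] for j in range(H)))
         for i in range(H)]
    p = softmax([sum(V[k][i] * h[i] for i in range(H)) for k in range(V_SIZE)])
    return h, p

h = [0.0] * H
for w in [0, 2, 1]:  # feed a toy word sequence
    h, p = step(w, h)
print(sum(p))  # the distribution over the vocabulary sums to 1
```

The m-RNN extends this step with word-embedding layers before the recurrence and a multimodal layer that injects the image feature at every time step.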
1412.6632 | 1811254738 | In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html . | Many previous methods treat the task of describing images as a retrieval task and formulate the problem as a ranking or embedding learning problem ( @cite_37 @cite_1 @cite_25 ). They first extract the word and sentence features (e.g. @cite_25 uses a dependency-tree Recursive Neural Network to extract sentence features) as well as the image features. Then they optimize a ranking cost to learn an embedding model that maps both the sentence feature and the image feature to a common semantic feature space. In this way, they can directly calculate the distance between images and sentences. Recently, @cite_17 show that object-level image features based on object detection results can generate better results than image features extracted at the global level. | {
"cite_N": [
"@cite_37",
"@cite_1",
"@cite_25",
"@cite_17"
],
"mid": [
"68733909",
"2123024445",
"2149557440",
"2953276893"
],
"abstract": [
"The ability to associate images with natural language sentences that describe what is depicted in them is a hallmark of image understanding, and a prerequisite for applications such as sentence-based image search. In analogy to image search, we propose to frame sentence-based image annotation as the task of ranking a given pool of captions. We introduce a new benchmark collection for sentence-based image description and search, consisting of 8,000 images that are each paired with five different captions which provide clear descriptions of the salient entities and events. We introduce a number of systems that perform quite well on this task, even though they are only based on features that can be obtained with minimal supervision. Our results clearly indicate the importance of training on multiple captions per image, and of capturing syntactic (word order-based) and semantic features of these captions. We also perform an in-depth comparison of human and automatic evaluation metrics for this task, and propose strategies for collecting human judgments cheaply and on a very large scale, allowing us to augment our collection with additional relevance judgments of which captions describe which image. Our analysis shows that metrics that consider the ranked list of results for each query image or sentence are significantly more robust than metrics that are based on a single response per query. Moreover, our study suggests that the evaluation of ranking-based image description systems may be fully automated.",
"Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources - such as text data - both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.",
"Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model which uses dependency trees to embed sentences into a vector space in order to retrieve images that are described by those sentences. Unlike previous RNN-based models which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence. They are better able to abstract from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also give more similar representations to sentences that describe the same image.",
"We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit."
]
} |
1412.6632 | 1811254738 | In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel image captions. It directly models the probability distribution of generating a word given previous words and an image. Image captions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on four benchmark datasets (IAPR TC-12, Flickr 8K, Flickr 30K and MS COCO). Our model outperforms the state-of-the-art methods. In addition, we apply the m-RNN model to retrieval tasks for retrieving images or sentences, and achieve significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval. The project page of this work is: www.stat.ucla.edu junhua.mao m-RNN.html . | Shortly after @cite_18 , several papers appear with record-breaking results (e.g. @cite_20 @cite_21 @cite_7 @cite_34 @cite_31 @cite_44 ). Many of them are built on recurrent neural networks. It demonstrates the effectiveness of storing context information in a recurrent layer. Our work has two major differences from these methods. Firstly, we incorporate a two-layer word embedding system in the m-RNN network structure which learns the word representation more efficiently than the single-layer word embedding. Secondly, we do not use the recurrent layer to store the visual information. The image representation is inputted to the m-RNN model along with every word in the sentence description. It utilizes the capacity of the recurrent layer more efficiently, and allows us to achieve state-of-the-art performance using a relatively small dimensional recurrent layer. In the experiments, we show that these two strategies lead to better performance.
Our method is still the best-performing approach for almost all the evaluation metrics. | {
"cite_N": [
"@cite_18",
"@cite_7",
"@cite_21",
"@cite_44",
"@cite_31",
"@cite_34",
"@cite_20"
],
"mid": [
"2159243025",
"2951912364",
"2951805548",
"2122180654",
"2949769367",
"2951183276",
"1527575280"
],
"abstract": [
"In this paper, we present a multimodal Recurrent Neural Network (m-RNN) model for generating novel sentence descriptions to explain the content of images. It directly models the probability distribution of generating a word given previous words and the image. Image descriptions are generated by sampling from this distribution. The model consists of two sub-networks: a deep recurrent neural network for sentences and a deep convolutional network for images. These two sub-networks interact with each other in a multimodal layer to form the whole m-RNN model. The effectiveness of our model is validated on three benchmark datasets (IAPR TC-12, Flickr 8K, and Flickr 30K). Our model outperforms the state-of-the-art generative method. In addition, the m-RNN model can be applied to retrieval tasks for retrieving images or sentences, and achieves significant performance improvement over the state-of-the-art methods which directly optimize the ranking objective function for retrieval.",
"Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.",
"We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state of the art results in retrieval experiments on Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines on both full images and on a new dataset of region-level annotations.",
"In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description. We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over @math of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features.",
"This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.",
"Models based on deep convolutional networks have dominated recent image interpretation tasks; we investigate whether models which are also recurrent, or \"temporally deep\", are effective for tasks involving sequences, visual and otherwise. We develop a novel recurrent convolutional architecture suitable for large-scale visual learning which is end-to-end trainable, and demonstrate the value of these models on benchmark video recognition tasks, image description and retrieval problems, and video narration challenges. In contrast to current models which assume a fixed spatio-temporal receptive field or simple temporal averaging for sequential processing, recurrent convolutional models are \"doubly deep\"' in that they can be compositional in spatial and temporal \"layers\". Such models may have advantages when target concepts are complex and or training data are limited. Learning long-term dependencies is possible when nonlinearities are incorporated into the network state updates. Long-term RNN models are appealing in that they directly can map variable-length inputs (e.g., video frames) to variable length outputs (e.g., natural language text) and can model complex temporal dynamics; yet they can be optimized with backpropagation. Our recurrent long-term models are directly connected to modern visual convnet models and can be jointly trained to simultaneously learn temporal dynamics and convolutional perceptual representations. Our results show such models have distinct advantages over state-of-the-art models for recognition or generation which are separately defined and or optimized.",
"Inspired by recent advances in multimodal learning and machine translation, we introduce an encoder-decoder pipeline that learns (a): a multimodal joint embedding space with images and text and (b): a novel language model for decoding distributed representations from our space. Our pipeline effectively unifies joint image-text embedding models with multimodal neural language models. We introduce the structure-content neural language model that disentangles the structure of a sentence to its content, conditioned on representations produced by the encoder. The encoder allows one to rank images and sentences while the decoder can generate novel descriptions from scratch. Using LSTM to encode sentences, we match the state-of-the-art performance on Flickr8K and Flickr30K without using object detections. We also set new best results when using the 19-layer Oxford convolutional network. Furthermore we show that with linear encoders, the learned embedding space captures multimodal regularities in terms of vector space arithmetic e.g. *image of a blue car* - \"blue\" + \"red\" is near images of red cars. Sample captions generated for 800 images are made available for comparison."
]
} |
1412.6857 | 2231088175 | We address the problem of contour detection via per-pixel classifications of edge point. To facilitate the process, the proposed approach leverages with DenseNet, an efficient implementation of multiscale convolutional neural networks (CNNs), to extract an informative feature vector for each pixel and uses an SVM classifier to accomplish contour detection. The main challenge lies in adapting a pre-trained per-image CNN model for yielding per-pixel image features. We propose to base on the DenseNet architecture to achieve pixelwise fine-tuning and then consider a cost-sensitive strategy to further improve the learning with a small dataset of edge and non-edge image patches. In the experiment of contour detection, we look into the effectiveness of combining per-pixel features from different CNN layers and obtain comparable performances to the state-of-the-art on BSDS500. | The AlexNet by is perhaps the most popular implementation of CNNs for generic object classification. The model has been shown to outperform competing approaches based on traditional features in solving a number of mainstream computer vision problems. In and @cite_0 , CNNs are used for image segmentation. To extend CNNs for object detection, utilize CNNs for semantic segmentation. use CNNs to predict object locations via sliding window, while learning multi-stage features of CNNs for pedestrian detection is proposed in . also consider features from a deep CNN in a region proposal framework to achieve state-of-the-art object detection results on the PASCAL VOC dataset. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2165914352"
],
"abstract": [
"Edge detection is one of the most studied problems in computer vision, yet it remains a very challenging task. It is difficult since often the decision for an edge cannot be made purely based on low level cues such as gradient, instead we need to engage all levels of information, low, middle, and high, in order to decide where to put edges. In this paper we propose a novel supervised learning algorithm for edge and object boundary detection which we refer to as Boosted Edge Learning or BEL for short. A decision of an edge point is made independently at each location in the image; a very large aperture is used providing significant context for each decision. In the learning stage, the algorithm selects and combines a large number of features across different scales in order to learn a discriminative model using an extended version of the Probabilistic Boosting Tree classification algorithm. The learning based framework is highly adaptive and there are no parameters to tune. We show applications for edge detection in a number of specific image domains as well as on natural images. We test on various datasets including the Berkeley dataset and the results obtained are very good."
]
} |
1412.6579 | 2399029393 | In search for a foundational framework for reasoning about observable behavior of programs that may not terminate, we have previously devised a trace-based big-step semantics for While. In this semantics, both traces and evaluation (relating initial states of program runs to traces they produce) are defined coinductively. On terminating runs, this semantics agrees with the standard inductive state-based semantics. Here we present a Hoare logic counterpart of our coinductive trace-based semantics and prove it sound and complete. Our logic subsumes the standard partial-correctness state-based Hoare logic as well as the total-correctness variation: they are embeddable. In the converse direction, projections can be constructed: a derivation of a Hoare triple in our trace-based logic can be translated into a derivation in the state-based logic of a translated, weaker Hoare triple. Since we work with a constructive underlying logic, the range of program properties we can reason about has a fine structure; in particular, we can distinguish between termination and nondivergence, e.g., unbounded classically total search fails to be terminating, but is nonetheless nondivergent. Our meta-theory is entirely constructive as well, and we have formalized it in Coq. | Some other works on coinductive big-step semantics include Glesner @cite_2 and Nestra @cite_4 @cite_13 . In these it is accepted that a program evaluation can somehow continue after an infinite number of small steps. With Glesner, this seems to have been a curious unintended side-effect of the design, which she was experimenting with just for the interest of it. Nestra developed a nonstandard semantics with transfinite traces on purpose in order to obtain a soundness result for a widely used slicing transformation that is unsound standardly (can turn nonterminating runs into terminating runs). | {
"cite_N": [
"@cite_13",
"@cite_4",
"@cite_2"
],
"mid": [
"1997489695",
"",
"2108932868"
],
"abstract": [
"Abstract Transfinite semantics is a semantics according to which program executions can continue working after an infinite number of steps. Such a view of programs can be useful in the theory of program transformations. So far, transfinite semantics have been successfully defined for iterative loops. This paper provides an exhaustive definition for semantics that enable also infinitely deep recursion. The definition is actually a parametric schema that defines a family of different transfinite semantics. As standard semantics also match the same schema, our framework describes both standard and transfinite semantics in a uniform way. All semantics are expressed as greatest fixpoints of monotone operators on some complete lattices. It turns out that, for transfinite semantics, the corresponding lattice operators are cocontinuous. According to Kleene’s theorem, this shows that transfinite semantics can be expressed as a limit of iteration which is not transfinite.",
"",
"Formal semantics of programming languages needs to model the potentially infinite state transition behavior of programs as well as the computation of their final results simultaneously. This requirement is essential in correctness proofs for compilers. We show that a greatest fixed point interpretation of natural semantics is able to model both aspects equally well. Technically, we infer this interpretation of natural semantics based on an easily comprehensible introduction to the dual definition and proof principles of induction and coinduction. Furthermore, we develop a proof calculus based on it and demonstrate its application for two typical problems."
]
} |
1412.6579 | 2399029393 | In search for a foundational framework for reasoning about observable behavior of programs that may not terminate, we have previously devised a trace-based big-step semantics for While. In this semantics, both traces and evaluation (relating initial states of program runs to traces they produce) are defined coinductively. On terminating runs, this semantics agrees with the standard inductive state-based semantics. Here we present a Hoare logic counterpart of our coinductive trace-based semantics and prove it sound and complete. Our logic subsumes the standard partial-correctness state-based Hoare logic as well as the total-correctness variation: they are embeddable. In the converse direction, projections can be constructed: a derivation of a Hoare triple in our trace-based logic can be translated into a derivation in the state-based logic of a translated, weaker Hoare triple. Since we work with a constructive underlying logic, the range of program properties we can reason about has a fine structure; in particular, we can distinguish between termination and nondivergence, e.g., unbounded classically total search fails to be terminating, but is nonetheless nondivergent. Our meta-theory is entirely constructive as well, and we have formalized it in Coq. | Our trace-based coinductive big-step semantics @cite_1 was heavily inspired by Capretta's @cite_14 modelling of nontermination in a constructive setting similar to ours. Rather than using coinductive possibly infinite traces, he works with a coinductive notion of a possibly infinitely delayed value (for statements, this corresponds to delaying the final state). The categorical basis appears in Rutten's work @cite_17 . But Rutten only studied the classical setting (any program terminates or not), where a delayed state collapses to a choice of between a state or a designated token signifying nontermination. | {
"cite_N": [
"@cite_14",
"@cite_1",
"@cite_17"
],
"mid": [
"2128992690",
"2340695609",
"2126979663"
],
"abstract": [
"A fertile field of research in theoretical computer science investigates the representation of general recursive functions in intensional type theories. Among the most successful approaches are: the use of wellfounded relations, implementation of operational semantics, formalization of domain theory, and inductive definition of domain predicates. Here, a different solution is proposed: exploiting coinductive types to model infinite computations. To every type A we associate a type of partial elements A � , coinductively generated by two constructors: the first, p aq just returns an element a: A; the second, ⊲ x, adds a computation step to a recursive element x: A � . We show how this simple device is sufficient to formalize all recursive functions between two given types. It allows the definition of fixed points of finitary, that is, continuous, operators. We will compare this approach to different ones from the literature. Finally, we mention that the formalization, with appropriate structural maps, defines a strong monad.",
"We present four coinductive operational semantics for the While language accounting for both terminating and non-terminating program runs: big-step and small-step relational semantics and big-step and small-step functional semantics. The semantics employ traces (possi- bly infinite sequences of states) to record the states that program runs go through. The relational semantics relate statement-state pairs to traces, whereas the functional semantics return traces for statement-state pairs. All four semantics are equivalent. We formalize the semantics and their equivalence proofs in the constructive setting of Coq.",
"An illustration of coinduction in terms of a notion of weak bisimilarity is presented. First, an operational semantics O for while programs is defined in terms of a final automaton. It identifies any two programs that are weakly bisimilar, and induces in a canonical manner a compositional model D. Next O = D is proved by coinduction."
]
} |
1412.6579 | 2399029393 | In search for a foundational framework for reasoning about observable behavior of programs that may not terminate, we have previously devised a trace-based big-step semantics for While. In this semantics, both traces and evaluation (relating initial states of program runs to traces they produce) are defined coinductively. On terminating runs, this semantics agrees with the standard inductive state-based semantics. Here we present a Hoare logic counterpart of our coinductive trace-based semantics and prove it sound and complete. Our logic subsumes the standard partial-correctness state-based Hoare logic as well as the total-correctness variation: they are embeddable. In the converse direction, projections can be constructed: a derivation of a Hoare triple in our trace-based logic can be translated into a derivation in the state-based logic of a translated, weaker Hoare triple. Since we work with a constructive underlying logic, the range of program properties we can reason about has a fine structure; in particular, we can distinguish between termination and nondivergence, e.g., unbounded classically total search fails to be terminating, but is nonetheless nondivergent. Our meta-theory is entirely constructive as well, and we have formalized it in Coq. | Hofmann and Pavlova @cite_8 consider a VDM-style logic with finite trace assertions that are applied to all finite prefixes of the trace of a possibly nonterminating run of a program. This logic allows reasoning about safety, but not liveness. We expect that we should be able to embed a logic like this in ours. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1579441434"
],
"abstract": [
"Ghost variables are assignable variables that appear in program annotations but do not correspond to physical entities. They are used to facilitate specification and verification, e.g., by using a ghost variable to count the number of iterations of a loop, and also to express extra-functional behaviours. In this paper we give a formal model of ghost variables and show how they can be eliminated from specifications and proofs in a compositional and automatic way. Thus, with the results of this paper ghost variables can be seen as a specification pattern rather than a primitive notion."
]
} |
1412.6534 | 1896038424 | Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. | Beyond the bounds on the BER based on the divergence measures, a number of other bounds exist based on different functionals of the distributions. In @cite_25 , the authors derive a new functional based on a Gaussian-Weighted sinusoid that yields tighter bounds on the BER than other popular approaches. Avi-Itzhak proposes arbitrarily tight bounds on the BER in @cite_19 . Both of these sets of bounds are tighter than the bounds we derive here; however, these bounds cannot be estimated without at least partial knowledge of the underlying distribution. A strength of the bounds proposed in this paper is that they are empirically estimable without knowing a parametric model for the underlying distribution. | {
"cite_N": [
"@cite_19",
"@cite_25"
],
"mid": [
"2144088155",
"2040172398"
],
"abstract": [
"This paper presents new upper and lower bounds on the minimum probability of error of Bayesian decision systems for the two-class problem. These bounds can be made arbitrarily close to the exact minimum probability of error, making them tighter than any previously known bounds.",
"In this paper, we present a new upper bound on the minimum probability of error of Bayesian decision systems for statistical pattern recognition. This new bound is continuous everywhere and is shown to be tighter than several existing bounds such as the Bhattacharyya and the Bayesian bounds. Numerical results are also presented. >"
]
} |
1412.6534 | 1896038424 | Information divergence functions play a critical role in statistics and information theory. In this paper we show that a non-parametric f-divergence measure can be used to provide improved bounds on the minimum binary classification probability of error for the case when the training and test data are drawn from the same distribution and for the case where there exists some mismatch between training and test distributions. We confirm the theoretical results by designing feature selection algorithms using the criteria from these bounds and by evaluating the algorithms on a series of pathological speech classification tasks. | In addition to work on bounding the Bayes error rate, recently there have been a number of attempts to bound the error rate in classification problems for the case where the training data and test data are drawn from different distributions (an area known as domain-adaptation or transfer learning in the machine learning literature). In @cite_28 @cite_9 , Ben-David relate the expected error on the test data to the expected error on the training data, for the case when no labeled test data is available. In @cite_38 , the authors derive new bounds for the case where a small subset of labeled data from the test distribution is available. In @cite_26 , Mansour generalize these bounds to the regression problem. In @cite_3 , the authors present a new theoretical analysis of the multi-source domain adaptation problem based on the @math -divergence. In contrast to these models, we propose a general non-parametric bound that can be estimated without assuming an underlying model for the data and without restrictions on the hypothesis class. | {
"cite_N": [
"@cite_38",
"@cite_26",
"@cite_28",
"@cite_9",
"@cite_3"
],
"mid": [
"2110091014",
"2953369858",
"2131953535",
"",
"1608944489"
],
"abstract": [
"Empirical risk minimization offers well-known learning guarantees when training and test data come from the same domain. In the real world, though, we often wish to adapt a classifier from a source domain with a large amount of training data to different target domain with very little training data. In this work we give uniform convergence bounds for algorithms that minimize a convex combination of source and target empirical risk. The bounds explicitly model the inherent trade-off between training on a large but inaccurate source data set and a small but accurate target training set. Our theory also gives results when we have multiple source domains, each of which may have a different number of instances, and we exhibit cases in which minimizing a non-uniform combination of source risks can achieve much lower target error than standard empirical risk minimization.",
"This paper addresses the general problem of domain adaptation which arises in a variety of applications where the distribution of the labeled sample available somewhat differs from that of the test data. Building on previous work by Ben- (2007), we introduce a novel distance between distributions, discrepancy distance, that is tailored to adaptation problems with arbitrary loss functions. We give Rademacher complexity bounds for estimating the discrepancy distance from finite samples for different loss functions. Using this distance, we derive novel generalization bounds for domain adaptation for a wide family of loss functions. We also present a series of novel adaptation bounds for large classes of regularization-based algorithms, including support vector machines and kernel ridge regression based on the empirical discrepancy. This motivates our analysis of the problem of minimizing the empirical discrepancy for various loss functions for which we also give novel algorithms. We report the results of preliminary experiments that demonstrate the benefits of our discrepancy minimization algorithms for domain adaptation.",
"Discriminative learning methods for classification perform well when training and test data are drawn from the same distribution. In many situations, though, we have labeled training data for a source domain, and we wish to learn a classifier which performs well on a target domain with a different distribution. Under what conditions can we adapt a classifier trained on the source domain for use in the target domain? Intuitively, a good feature representation is a crucial factor in the success of domain adaptation. We formalize this intuition theoretically with a generalization bound for domain adaption. Our theory illustrates the tradeoffs inherent in designing a representation for domain adaptation and gives a new justification for a recently proposed model. It also points toward a promising new model for domain adaptation: one which explicitly minimizes the difference between the source and target domains, while at the same time maximizing the margin of the training set.",
"",
"This paper presents a novel theoretical study of the general problem of multiple source adaptation using the notion of Renyi divergence. Our results build on our previous work [12], but significantly broaden the scope of that work in several directions. We extend previous multiple source loss guarantees based on distribution weighted combinations to arbitrary target distributions P, not necessarily mixtures of the source distributions, analyze both known and unknown target distribution cases, and prove a lower bound. We further extend our bounds to deal with the case where the learner receives an approximate distribution for each source instead of the exact one, and show that similar loss guarantees can be achieved depending on the divergence between the approximate and true distributions. We also analyze the case where the labeling functions of the source domains are somewhat different. Finally, we report the results of experiments with both an artificial data set and a sentiment analysis task, showing the performance benefits of the distribution weighted combinations and the quality of our bounds based on the Renyi divergence."
]
} |
1412.6706 | 1780723536 | Abstract: Graphs change over time, and typically variations on the small multiples or animation pattern are used to convey this dynamism visually. However, both of these classical techniques have significant drawbacks, so a new approach, Storyline Visualization of Events on a Network (SVEN), is proposed. SVEN builds on storyline techniques, conveying nodes as contiguous lines over time. SVEN encodes time in a natural manner, along the horizontal axis, and optimizes the vertical placement of storylines to decrease clutter (line crossings, straightness, and bends) in the drawing. This paper demonstrates SVEN on several different flavors of real-world dynamic data, and outlines the remaining near-term future work. | Dynamic graph visualization has its origin in static graph visualization, and many modern graph drawing algorithms have their basis in the original Kamada-Kawai @cite_54 and Fruchterman-Reingold @cite_52 "force-directed" algorithms. These algorithms define a model of a physical system from the graph whose energy can be measured and consequently minimized, ideally producing an aesthetically pleasing drawing. Popular, modern, freely available graph drawing packages include GraphViz @cite_26, Gephi @cite_49, and D @math @cite_1 @cite_16. | {
"cite_N": [
"@cite_26",
"@cite_54",
"@cite_1",
"@cite_52",
"@cite_49",
"@cite_16"
],
"mid": [
"1842847600",
"2075220720",
"",
"2167482691",
"2125910575",
"2164381372"
],
"abstract": [
"Graphviz is a heterogeneous collection of graph drawing tools containing batch layout programs (dot, neato, fdp, twopi); a platform for incremental layout (Dynagraph); customizable graph editors (dotty, Grappa); a server for including graphs in Web pages (WebDot); support for graphs as COM objects (Montage); utility programs useful in graph visualization; and libraries for attributed graphs. The software is available under an Open Source license. The article [1] provides a detailed description of the package.",
"",
"",
"We present a modification of the spring-embedder model of Eades [Congressus Numerantium, 42, 149–160 (1984)] for drawing undirected graphs with straight edges. Our heuristic strives for uniform edge lengths, and we develop it in analogy to forces in natural systems, for a simple, elegant, conceptually intuitive, and efficient algorithm.",
"Gephi is an open source software for graph and network analysis. It uses a 3D render engine to display large networks in real-time and to speed up the exploration. A flexible and multi-task architecture brings new possibilities to work with complex data sets and produce valuable visual results. We present several key features of Gephi in the context of interactive exploration and interpretation of networks. It provides easy and broad access to network data and allows for spatializing, filtering, navigating, manipulating and clustering. Finally, by presenting dynamic features of Gephi, we highlight key aspects of dynamic network visualization.",
"We describe a new technique for graph layout subject to constraints. Compared to previous techniques the proposed method is much faster and scalable to much larger graphs. For a graph with n nodes, m edges and c constraints it computes incremental layout in time O(n log n + m + c) per iteration. Also, it supports a much more powerful class of constraint: inequalities or equalities over the Euclidean distance between nodes. We demonstrate the power of this technique by application to a number of diagramming conventions which previous constrained graph layout methods could not support. Further, the constraint-satisfaction method, inspired by recent work in position-based dynamics, is far simpler to implement than previous methods."
]
} |