Columns: aid (string, lengths 9-15), mid (string, lengths 7-10), abstract (string, lengths 78-2.56k), related_work (string, lengths 92-1.77k), ref_abstract (dict).
1605.02431
2354739900
The classical result of Vandermonde decomposition of positive semidefinite Toeplitz matrices dates back to the early twentieth century. It forms the basis of modern subspace and recent atomic norm methods for frequency estimation. In this paper, we study the Vandermonde decomposition in which the frequencies are restricted to lie in a given interval, referred to as frequency-selective Vandermonde decomposition. The existence and uniqueness of the decomposition are studied under explicit conditions on the Toeplitz matrix. The new result is connected by duality to the positive real lemma for trigonometric polynomials nonnegative on the same frequency interval. Its applications in the theory of moments and line spectral estimation are illustrated. In particular, it provides a solution to the truncated trigonometric K-moment problem. It is used to derive a primal semidefinite program formulation of the frequency-selective atomic norm in which the frequencies are known a priori to lie in a certain frequency band. Numerical examples are also provided.
The problem of frequency estimation with restriction on the frequency band was studied in @cite_3 @cite_7 @cite_6 . In @cite_3 , an FS atomic norm formulation (or constrained atomic norm in the language of @cite_3 ) was proposed, and a dual SDP formulation was presented by applying the theory of positive trigonometric polynomials. In contrast to this, we show in this paper that a primal SDP formulation of the FS atomic norm can be obtained by applying the new FS Vandermonde decomposition. In @cite_7 , the interval prior was interpreted as a prior distribution of the frequencies, and a weighted atomic norm approach was then devised that is an approximate but faster implementation of the FS atomic norm. Although the paper @cite_10 does not provide or imply the FS Vandermonde decomposition result, it independently obtained a primal SDP formulation of the FS atomic norm based on a different technique.
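For context, a minimal LaTeX sketch of the two classical (non-frequency-selective) objects the FS variants above build on: the Vandermonde decomposition of an n x n positive semidefinite Toeplitz matrix T of rank r < n, and the standard SDP characterization of the atomic norm. The frequency-selective formulations of @cite_3 and of this paper additionally constrain the frequencies f_k to a given interval; their exact SDPs are as given in those works.

```latex
% Classical (non-frequency-selective) building blocks, stated for context only.
% Vandermonde decomposition of an n x n PSD Toeplitz matrix T of rank r < n:
\[
  T \;=\; \sum_{k=1}^{r} p_k\, a(f_k)\, a(f_k)^{H},
  \qquad
  a(f) \;=\; \bigl[\,1,\ e^{i 2\pi f},\ \ldots,\ e^{i 2\pi (n-1) f}\,\bigr]^{T},
  \qquad p_k > 0 .
\]
% Standard SDP characterization of the atomic norm, where T(u) denotes the
% Hermitian Toeplitz matrix with first column u; the FS versions additionally
% restrict the f_k to a sub-interval of the frequency domain:
\[
  \|x\|_{\mathcal{A}}
  \;=\; \inf_{u,\,t}\Bigl\{\, \tfrac{1}{2n}\operatorname{tr}\bigl(T(u)\bigr) + \tfrac{t}{2}
  \;:\;
  \begin{bmatrix} T(u) & x \\ x^{H} & t \end{bmatrix} \succeq 0 \,\Bigr\}.
\]
```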
{ "cite_N": [ "@cite_10", "@cite_7", "@cite_3", "@cite_6" ], "mid": [ "2396154830", "2402002470", "2964118433", "" ], "abstract": [ "We present an extension of recent semidefinite programming formulations for atomic decomposition over continuous dictionaries, with applications to continuous or ‘gridless’ compressed sensing. The dictionary considered in this paper is defined in terms of a general matrix pencil and is parameterized by a complex variable that varies over a segment of a line or circle in the complex plane. The main result of the paper is the formulation as a convex semidefinite optimization problem, and a simple constructive proof of the equivalence. The techniques are illustrated with a direction of arrival estimation problem, and an example of low-rank structured matrix decomposition.", "This paper concerns the line spectral estimation problem within the recent super-resolution framework. The frequencies of interest are assumed to follow a prior probability distribution. To effectively and efficiently exploit the prior information, we devise a weighted atomic norm approach that is physically sound and can be formulated as convex programming like the standard atomic norm method. Numerical simulations are provided to demonstrate the superior performance of the proposed approach in accuracy and speed compared to the state-of-the-art.", "We address the problem of super-resolution frequency recovery using prior knowledge of the structure of a spectrally sparse, undersampled signal. In many applications of interest, some structure information about the signal spectrum is often known. The prior information might be simply knowing precisely some signal frequencies or the likelihood of a particular frequency component in the signal. We devise a general semidefinite program to recover these frequencies using theories of positive trigonometric polynomials. Our theoretical analysis shows that, given sufficient prior information, perfect signal reconstruction is possible using signal samples no more than thrice the number of signal frequencies. Numerical experiments demonstrate great performance enhancements using our method. We show that the nominal resolution necessary for the grid-free results can be improved if prior information is suitably employed.", "" ] }
1605.02431
2354739900
The classical result of Vandermonde decomposition of positive semidefinite Toeplitz matrices dates back to the early twentieth century. It forms the basis of modern subspace and recent atomic norm methods for frequency estimation. In this paper, we study the Vandermonde decomposition in which the frequencies are restricted to lie in a given interval, referred to as frequency-selective Vandermonde decomposition. The existence and uniqueness of the decomposition are studied under explicit conditions on the Toeplitz matrix. The new result is connected by duality to the positive real lemma for trigonometric polynomials nonnegative on the same frequency interval. Its applications in the theory of moments and line spectral estimation are illustrated. In particular, it provides a solution to the truncated trigonometric K-moment problem. It is used to derive a primal semidefinite program formulation of the frequency-selective atomic norm in which the frequencies are known a priori to lie in a certain frequency band. Numerical examples are also provided.
The paper @cite_5 studied the super-resolution problem on semialgebraic sets in the real domain and provided an SDP formulation of the resulting atomic norm. To do so, the key is to apply the moment theory on semialgebraic sets in the real domain (a.k.a. the truncated @math -moment problem in the real domain). In contrast to this, we provide a first solution to the truncated trigonometric @math -moment problem and then apply this result to study super-resolution on semialgebraic sets on the unit circle.
{ "cite_N": [ "@cite_5" ], "mid": [ "2170169932" ], "abstract": [ "We investigate the multi-dimensional Super Resolution problem on closed semi-algebraic domains for various sampling schemes such as Fourier or moments. We present a new semidefinite programming (SDP) formulation of the 1 -minimization in the space of Radon measures in the multi-dimensional frame on semi-algebraic sets. While standard approaches have focused on SDP relaxations of the dual program (a popular approach is based on Gram matrix representations), this paper introduces an exact formulation of the primal 1 -minimization exact recovery problem of Super Resolution that unleashes standard techniques (such as moment-sum-of-squares hier-archies) to overcome intrinsic limitations of previous works in the literature. Notably, we show that one can exactly solve the Super Resolution problem in dimension greater than 2 and for a large family of domains described by semi-algebraic sets." ] }
1605.02531
2394056850
Suppose that we are given a time series where consecutive samples are believed to come from a probabilistic source, that the source changes from time to time and that the total number of sources is fixed. Our objective is to estimate the distributions of the sources. A standard approach to this problem is to model the data as a hidden Markov model (HMM). However, since the data often lacks the Markov or the stationarity properties of an HMM, one can ask whether this approach is still suitable or perhaps another approach is required. In this paper we show that a maximum likelihood HMM estimator can be used to approximate the source distributions in a much larger class of models than HMMs. Specifically, we propose a natural and fairly general non-stationary model of the data, where the only restriction is that the sources do not change too often. Our main result shows that for this model, a maximum-likelihood HMM estimator produces the correct second moment of the data, and the results can be extended to higher moments.
Moments of the data play an important role in our approach. In recent years, moments of the data have been used for parameter estimation in various mixture models. For instance, in @cite_10 , @cite_8 , it was shown that for several types of mixture models, the underlying distributions @math can be inferred from the second moment of the data under an "anchor words" assumption on the @math s. In @cite_28 it was shown that for a sufficiently large number of samples and under lighter assumptions on @math , the third moment of the data can be used to reconstruct @math for a variety of mixtures, including the HMM. Note that the use of moments in this paper is different. Our estimator is the classical maximum likelihood estimator rather than an estimator based on moments. We use moments only as a tool to show that properties of the estimator approximate the properties of the true model.
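As a toy illustration of estimating mixture parameters from low-order moments (this is not the estimator of @cite_10 , @cite_8 or @cite_28 , just a minimal sketch): for an equal-weight mixture of two unit-variance Gaussians, the first two empirical moments already identify the two component means up to ordering, since E[X] = (mu1 + mu2)/2 and Var[X] = 1 + (mu1 - mu2)^2/4.

```python
# Toy method-of-moments recovery of the two means of an equal-weight mixture of
# unit-variance Gaussians; the component means mu1, mu2 below are arbitrary test values.
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = -1.0, 2.0
n = 200_000
z = rng.integers(0, 2, size=n)                       # latent component labels (equal weights)
x = np.where(z == 0, mu1, mu2) + rng.standard_normal(n)

m1 = x.mean()                                        # E[X]   = (mu1 + mu2) / 2
var = x.var()                                        # Var[X] = 1 + (mu1 - mu2)^2 / 4
delta = 2.0 * np.sqrt(max(var - 1.0, 0.0))           # |mu1 - mu2|
print(sorted([m1 - delta / 2.0, m1 + delta / 2.0]))  # approximately [-1.0, 2.0]
```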
{ "cite_N": [ "@cite_28", "@cite_10", "@cite_8" ], "mid": [ "2149655761", "2128521126", "2105617746" ], "abstract": [ "Mixture models are a fundamental tool in applied statistics and machine learning for treating data taken from multiple subpopulations. The current practice for estimating the parameters of such models relies on local search heuristics (e.g., the EM algorithm) which are prone to failure, and existing consistent methods are unfavorable due to their high computational and sample complexity which typically scale exponentially with the number of mixture components. This work develops an efficient method of moments approach to parameter estimation for a broad class of high-dimensional mixture models with many components, including multi-view mixtures of Gaussians (such as mixtures of axis-aligned Gaussians) and hidden Markov models. The new method leads to rigorous unsupervised learning results for mixture models that were not achieved by previous works; and, because of its simplicity, it offers a viable alternative to EM for practical deployment.", "Topic Modeling is an approach used for automatic comprehension and classification of data in a variety of settings, and perhaps the canonical application is in uncovering thematic structure in a corpus of documents. A number of foundational works both in machine learning and in theory have suggested a probabilistic model for documents, whereby documents arise as a convex combination of (i.e. distribution on) a small number of topic vectors, each topic vector being a distribution on words (i.e. a vector of word-frequencies). Similar models have since been used in a variety of application areas, the Latent Dirichlet Allocation or LDA model of is especially popular. Theoretical studies of topic modeling focus on learning the model's parameters assuming the data is actually generated from it. Existing approaches for the most part rely on Singular Value Decomposition (SVD), and consequently have one of two limitations: these works need to either assume that each document contains only one topic, or else can only recover the span of the topic vectors instead of the topic vectors themselves. This paper formally justifies Nonnegative Matrix Factorization (NMF) as a main tool in this context, which is an analog of SVD where all vectors are nonnegative. Using this tool we give the first polynomial-time algorithm for learning topic models without the above two limitations. The algorithm uses a fairly mild assumption about the underlying topic matrix called separability, which is usually found to hold in real-life data. Perhaps the most attractive feature of our algorithm is that it generalizes to yet more realistic models that incorporate topic-topic correlations, such as the Correlated Topic Model (CTM) and the Pachinko Allocation Model (PAM). We hope that this paper will motivate further theoretical results that use NMF as a replacement for SVD -- just as NMF has come to replace SVD in many applications.", "Topic models provide a useful method for dimensionality reduction and exploratory data analysis in large text corpora. Most approaches to topic model inference have been based on a maximum likelihood objective. Efficient algorithms exist that approximate this objective, but they have no provable guarantees. Recently, algorithms have been introduced that provide provable bounds, but these algorithms are not practical because they are inefficient and not robust to violations of model assumptions. 
In this paper we present an algorithm for topic model inference that is both provable and practical. The algorithm produces results comparable to the best MCMC implementations while running orders of magnitude faster." ] }
1605.02531
2394056850
Suppose that we are given a time series where consecutive samples are believed to come from a probabilistic source, that the source changes from time to time and that the total number of sources is fixed. Our objective is to estimate the distributions of the sources. A standard approach to this problem is to model the data as a hidden Markov model (HMM). However, since the data often lacks the Markov or the stationarity properties of an HMM, one can ask whether this approach is still suitable or perhaps another approach is required. In this paper we show that a maximum likelihood HMM estimator can be used to approximate the source distributions in a much larger class of models than HMMs. Specifically, we propose a natural and fairly general non-stationary model of the data, where the only restriction is that the sources do not change too often. Our main result shows that for this model, a maximum-likelihood HMM estimator produces the correct second moment of the data, and the results can be extended to higher moments.
Finally, we make essential use of type theory for Markov chains. The results we use were obtained in @cite_26 , where second-order and higher-order type theory is developed.
{ "cite_N": [ "@cite_26" ], "mid": [ "2130594164" ], "abstract": [ "Let X_ 1 ,X_ 2 , be independent identically distributed random variables taking values in a finite set X and consider the conditional joint distribution of the first m elements of the sample X_ 1 , , X_ n on the condition that X_ 1 =x_ 1 and the sliding block sample average of a function h( , ) defined on X^ 2 exceeds a threshold > Eh(X_ 1 , X_ 2 ) . For m fixed and n , this conditional joint distribution is shown to converge m the m -step joint distribution of a Markov chain started in x_ 1 which is closest to X_ l , X_ 2 , in Kullback-Leibler information divergence among all Markov chains whose two-dimensional stationary distribution P( , ) satisfies P(x, y)h(x, y) , provided some distribution P on X_ 2 having equal marginals does satisfy this constraint with strict inequality. Similar conditional limit theorems are obtained when X_ 1 , X_ 2 , is an arbitrary finite-order Markov chain and more general conditioning is allowed." ] }
1605.02464
2371978797
Person re-identification (re-id) consists of associating individuals across a camera network, which is valuable for intelligent video surveillance and has drawn wide attention. Although person re-identification research is making progress, it still faces some challenges such as varying poses, illumination and viewpoints. For feature representation in re-identification, existing works usually use low-level descriptors which do not take full advantage of body structure information, resulting in low representation ability and discrimination. To solve this problem, this paper proposes the mid-level body-structure based feature representation (BSFR) which introduces a body structure pyramid for codebook learning and feature pooling in the vertical direction of the human body. Besides, varying viewpoints in the horizontal direction of the human body usually cause the data missing problem, @math , the appearances obtained in different orientations of the identical person could vary significantly. To address this problem, the orientation driven bag of appearances (ODBoA) is proposed to utilize person orientation information extracted by an orientation estimation technique. To properly evaluate the proposed approach, we introduce a new re-identification dataset (Market-1203) based on the Market-1501 dataset and propose a new re-identification dataset (PKU-Reid). Both datasets contain multiple images captured in different body orientations for each person. Experimental results on three public datasets and two proposed datasets demonstrate the superiority of the proposed approach, indicating the effectiveness of body structure and orientation information for improving re-identification performance.
Oliver et al. @cite_47 introduce the concept of bag of appearances (BoA), which is a container of color features that fully represents a person by collecting all of his or her different appearances obtained from Kinect. They perform person matching in a probabilistic framework by accumulating the probability of pairwise matching for all of the elements in each bag, using appearance and height information. However, BoA contains much redundant data and ignores the orientation information, resulting in limited accuracy and large storage and computation costs. Inspired by the concept of BoA @cite_47 , we introduce ODBoA to store and select the candidate elements in each bag for person matching. Since mid-level feature fusion describes person appearance more comprehensively and is more robust to misalignment and background noise, a mid-level feature pooling strategy is employed to construct a single signature for each person based on BSFR.
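A minimal sketch of the bag-of-appearances matching idea described above, under assumed feature choices (per-image color histograms compared with the Bhattacharyya coefficient); the cited work @cite_47 also incorporates height information and a specific probabilistic accumulation, which are omitted here.

```python
# Toy bag-of-appearances matching: each person is a bag of appearance descriptors,
# and two bags are compared by accumulating pairwise similarity scores.
import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram of an HxWx3 uint8 image, L1-normalized."""
    h, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / max(h.sum(), 1e-9)

def pair_score(ha, hb):
    """Bhattacharyya coefficient between two normalized histograms (1.0 = identical)."""
    return float(np.sum(np.sqrt(ha * hb)))

def bag_match_score(bag_a, bag_b):
    """Accumulate pairwise matching scores over all elements of the two bags."""
    return float(np.mean([pair_score(a, b) for a in bag_a for b in bag_b]))

# usage: each bag is the list of histograms of all observed appearances of one person,
# e.g. bag_a = [color_histogram(img) for img in appearances_of_person_a]
```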
{ "cite_N": [ "@cite_47" ], "mid": [ "2046119740" ], "abstract": [ "People re-identification in uncontrolled scenarios is a difficult task since people appearance may significantly vary along time due to changes in illumination, changes in the person pose or the presence of undesired objects in the scene. In order to cope with this temporal variability in the person appearance, we introduce the concept of Bags of Appearances (BoA) to describe each person. A BoA is a container of color features that fully represents a person by collecting all their different appearances along time. Matching of bags is performed in a probabilistic framework by accumulating the probability of matching for all of the elements of each bag. Experiments have been conducted in a real shop where clients were re-identified at the entrance and exit. Results improve state-of-the art methods and confirm that our proposal successfully copes with rough changes in the people appearance." ] }
1605.02269
2378505114
The past few years have seen the rapid growth of data mining approaches for the analysis of data obtained from Massive Open Online Courses (MOOCs). The objectives of this study are to develop approaches to predict the scores a student may achieve on a given grade-related assessment based on information considered as prior performance or prior activity in the course. We develop a personalized linear multiple regression (PLMR) model to predict the grade for a student, prior to attempting the assessment activity. The developed model is real-time and tracks the participation of a student within a MOOC (via click-stream server logs) and predicts the performance of a student on the next assessment within the course offering. We perform a comprehensive set of experiments on data obtained from three openEdX MOOCs via a Stanford University initiative. Our experimental results show the promise of the proposed approach in comparison to baseline approaches and also help in the identification of key features that are associated with the study habits and learning behaviors of students.
Several researchers have focused on the analysis of education data (including MOOCs), in an effort to understand the characteristics of student learning behaviors and motivation within this education model @cite_1 . Boyer et al. @cite_14 focus on the stopout prediction problem within MOOCs by designing a set of processes that use information from previous courses and the previous weeks of the current course. Brinton et al. @cite_5 developed an approach to predict whether a student answers a question correctly on the first attempt via click-stream information and social learning networks. Kennedy et al. @cite_19 analyzed the relationship between a student's prior knowledge and end-of-MOOC performance. Sunar et al. @cite_18 developed an approach to predict the possible interactions between peers participating in a MOOC.
{ "cite_N": [ "@cite_18", "@cite_14", "@cite_1", "@cite_19", "@cite_5" ], "mid": [ "2144137092", "2168585122", "2021601071", "2081777295", "1492355514" ], "abstract": [ "High attrition rates are one of the biggest concerns in MOOCs. One of the possible causes may be learners’ lack of interactions and low levels of participations in MOOCs online discussions. Research to measure and predict recurrent interactions of learners in MOOCs online discussions has the potential to gain inside into the likely impact on the attrition rate. It is argued that personalisation in MOOCs has the potential to increase learners’ interactions and associated factors to continuous friendships. In this paper, a detailed analysis has been carried out of learners’ interactions within a MOOC. This paper investigates learners’ interaction habits and their recurrent interactions throughout the entire duration of a MOOC’s course, and consequently proposes a method to measure the interactions and predict possible interactions between peers. The findings denote that when a learner interacted with their peer, they most probably interact again in the following weeks. Moreover, our proposed prediction method also demonstrate promising results towards predicting future interactions between learners based on their previous relationships", "Data recorded while learners are interacting with Massive Open Online Courses (MOOC) platforms provide a unique opportunity to build predictive models that can help anticipate future behaviors and develop interventions. But since most of the useful predictive problems are defined for a real-time framework, using knowledge drawn from the past courses becomes crucial. To address this challenge, we designed a set of processes that take advantage of knowledge from both previous courses and previous weeks of the same course to make real time predictions on learners behavior. In particular, we evaluate multiple transfer learning methods. In this article, we present our results for the stopout prediction problem (predicting which learners are likely to stop engaging in the course). We believe this paper is a first step towards addressing the need of transferring knowledge across courses.", "This review pursues a twofold goal, the first is to preserve and enhance the chronicles of recent educational data mining (EDM) advances development; the second is to organize, analyze, and discuss the content of the review based on the outcomes produced by a data mining (DM) approach. Thus, as result of the selection and analysis of 240 EDM works, an EDM work profile was compiled to describe 222 EDM approaches and 18 tools. A profile of the EDM works was organized as a raw data base, which was transformed into an ad-hoc data base suitable to be mined. As result of the execution of statistical and clustering processes, a set of educational functionalities was found, a realistic pattern of EDM approaches was discovered, and two patterns of value-instances to depict EDM approaches based on descriptive and predictive models were identified. One key finding is: most of the EDM approaches are ground on a basic set composed by three kinds of educational systems, disciplines, tasks, methods, and algorithms each. 
The review concludes with a snapshot of the surveyed EDM works, and provides an analysis of the EDM strengths, weakness, opportunities, and threats, whose factors represent, in a sense, future work to be fulfilled.", "While MOOCs have taken the world by storm, questions remain about their pedagogical value and high rates of attrition. In this paper we argue that MOOCs which have open entry and open curriculum structures, place pressure on learners to not only have the requisite knowledge and skills to complete the course, but also the skills to traverse the course in adaptive ways that lead to success. The empirical study presented in the paper investigated the degree to which students' prior knowledge and skills, and their engagement with the MOOC as measured through learning analytics, predict end-of-MOOC performance. The findings indicate that prior knowledge is the most significant predictor of MOOC success followed by students' ability to revise and revisit their previous work.", "We study student performance prediction in Massive Open Online Courses (MOOCs), where the objective is to predict whether a user will be Correct on First Attempt (CFA) in answering a question. In doing so, we develop novel techniques that leverage behavioral data collected by MOOC platforms. Using video-watching clickstream data from one of our MOOCs, we first extract summary quantities (e.g., fraction played, number of pauses) for each user-video pair, and show how certain intervals sets of values for these behaviors quantify that a pair is more likely to be CFA or not for the corresponding question. Motivated by these findings, our methods are designed to determine suitable intervals from training data and to use the corresponding success estimates as learning features in prediction algorithms. Tested against a large set of empirical data, we find that our schemes outperform standard algorithms (i.e., without behavioral data) for all datasets and metrics tested. Moreover, the improvement is particularly pronounced when considering the first few course weeks, demonstrating the “early detection” capability of such clickstream data. We also discuss how CFA prediction can be used to depict graphs of the Social Learning Network (SLN) of students, which can help instructors manage courses more effectively." ] }
1605.02269
2378505114
The past few years have seen the rapid growth of data mining approaches for the analysis of data obtained from Massive Open Online Courses (MOOCs). The objectives of this study are to develop approaches to predict the scores a student may achieve on a given grade-related assessment based on information considered as prior performance or prior activity in the course. We develop a personalized linear multiple regression (PLMR) model to predict the grade for a student, prior to attempting the assessment activity. The developed model is real-time and tracks the participation of a student within a MOOC (via click-stream server logs) and predicts the performance of a student on the next assessment within the course offering. We perform a comprehensive set of experiments on data obtained from three openEdX MOOCs via a Stanford University initiative. Our experimental results show the promise of the proposed approach in comparison to baseline approaches and also help in the identification of key features that are associated with the study habits and learning behaviors of students.
Most similar to our proposed work, Bayesian Knowledge Tracing (BKT) @cite_2 has been adapted to predict whether a student can answer a MOOC assessment correctly or not. BKT was first developed @cite_10 for modeling the evolving knowledge states of students monitored within Intelligent Tutoring Systems (ITS). Pardos et al. proposed the Item Difficulty Effect Model (IDEM), which incorporates the difficulty levels of different questions and modifies the original BKT by adding an Item node to every question node. By identifying the challenges associated with modeling MOOC data, the IDEM approach and extensions that involve splitting questions into several sub-parts and incorporating resource (knowledge) information @cite_4 are considered state-of-the-art MOOC assessment prediction approaches and are referred to as KT-IDEM. However, this approach can only predict a binary grade. In contrast, the model proposed in this paper is able to predict both a continuous and a binary grade.
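For concreteness, a sketch of the textbook BKT update and response prediction, with guess/slip parameters that can be indexed per item in the spirit of KT-IDEM; the parameter values are placeholders, and this is not the exact model of @cite_4 or @cite_2 .

```python
def bkt_update(p_learned, correct, p_transit=0.1, p_guess=0.2, p_slip=0.1):
    """One step of Bayesian Knowledge Tracing: posterior P(skill learned) after a response."""
    if correct:
        likelihood = p_learned * (1.0 - p_slip)
        evidence = likelihood + (1.0 - p_learned) * p_guess
    else:
        likelihood = p_learned * p_slip
        evidence = likelihood + (1.0 - p_learned) * (1.0 - p_guess)
    posterior = likelihood / evidence
    return posterior + (1.0 - posterior) * p_transit   # learning transition

def p_correct(p_learned, p_guess=0.2, p_slip=0.1):
    """Predicted probability that the next response is correct (a binary-outcome prediction)."""
    return p_learned * (1.0 - p_slip) + (1.0 - p_learned) * p_guess

# KT-IDEM-style variation: index guess/slip by item instead of using one global pair, e.g.
#   p = bkt_update(p, correct, p_guess=guess[item_id], p_slip=slip[item_id])
```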
{ "cite_N": [ "@cite_10", "@cite_4", "@cite_2" ], "mid": [ "2015040676", "2183887294", "2146856593" ], "abstract": [ "This paper describes an effort to model students' changing knowledge state during skill acquisition. Students in this research are learning to write short programs with the ACT Programming Tutor (APT). APT is constructed around a production rule cognitive model of programming knowledge, called theideal student model. This model allows the tutor to solve exercises along with the student and provide assistance as necessary. As the student works, the tutor also maintains an estimate of the probability that the student has learned each of the rules in the ideal model, in a process calledknowledge tracing. The tutor presents an individualized sequence of exercises to the student based on these probability estimates until the student has ‘mastered’ each rule. The programming tutor, cognitive model and learning and performance assumptions are described. A series of studies is reviewed that examine the empirical validity of knowledge tracing and has led to modifications in the process. Currently the model is quite successful in predicting test performance. Further modifications in the modeling process are discussed that may improve performance levels.", "Open Online Courses (MOOCs) are an increasingly pervasive newcomer to the virtual landscape of higher-education, delivering a wide variety of topics in science, engineering, and the humanities. However, while technological innovation is enabling unprecedented open access to high quality educational material, these systems generally inherit similar homework, exams, and instructional resources to that of their classroom counterparts and currently lack an underlying model with which to talk about learning. In this paper we will show how existing learner modeling techniques based on Bayesian Knowledge Tracing can be adapted to the inaugural course, 6.002x: circuit design, on the edX MOOC platform. We identify three distinct challenges to modeling MOOC data and provide predictive evaluations of the respective modeling approach to each challenge. The challenges identified are; lack of an explicit knowledge component model, allowance for unpenalized multiple problem attempts, and multiple pathways through the system that allow for learning influences outside of the current assessment.", "Many models in computer education and assessment take into account difficulty. However, despite the positive results of models that take difficulty in to account, knowledge tracing is still used in its basic form due to its skill level diagnostic abilities that are very useful to teachers. This leads to the research question we address in this work: Can KT be effectively extended to capture item difficulty and improve prediction accuracy? There have been a variety of extensions to KT in recent years. One such extension was Baker's contextual guess and slip model. While this model has shown positive gains over KT in internal validation testing, it has not performed well relative to KT on unseen in-tutor data or post-test data, however, it has proven a valuable model to use alongside other models. The contextual guess and slip model increases the complexity of KT by adding regression steps and feature generation. The added complexity of feature generation across datasets may have hindered the performance of this model. 
Therefore, one of the aims of our work here is to make the most minimal of modifications to the KT model in order to add item difficulty and keep the modification limited to changing the topology of the model. We analyze datasets from two intelligent tutoring systems with KT and a model we have called KT-IDEM (Item Difficulty Effect Model) and show that substantial performance gains can be achieved with this minor modification that incorporates item difficulty." ] }
1605.02350
2376727710
Correctness of multi-threaded programs typically requires that they satisfy liveness properties. For example, a program may require that no thread is starved of a shared resource, or that all threads eventually agree on a single value. This paper presents a method for proving that such liveness properties hold. Two particular challenges addressed in this work are that (1) the correctness argument may rely on global behaviour of the system (e.g., the correctness argument may require that all threads collectively progress towards "the good thing" rather than one thread progressing while the others do not interfere), and (2) such programs are often designed to be executed by any number of threads, and the desired liveness properties must hold regardless of the number of threads that are active in the program.
There exist proof systems for verifying liveness properties of parameterized systems (for example, @cite_7 ). However, the problem of automatically constructing such proofs has not been explored. To the best of our knowledge, this paper is the first to address the topic of automatic verification of liveness properties of (infinite-state) programs with a parameterized number of threads.
{ "cite_N": [ "@cite_7" ], "mid": [ "1988987404" ], "abstract": [ "This paper introduces parametrized verification diagrams (PVDs), a formalism that allows to prove temporal properties of parametrized concurrent systems, in which a given program is executed by an unbounded number of processes. PVDs extend general verification diagrams (GVDs). GVDs encode succinctly a proof that a non-parametrized reactive system satisfies a given temporal property. Even though GVDs are known to be sound and complete for non-parametrized systems, proving temporal properties of parametrized systems potentially requires to find a different diagram for each instantiation of the parameter (number of processes). In turn, each diagram requires to discharge and prove a different collection of verification conditions. PVDs allow a emph[single] diagram to represent the proof that all instances of the parametrized system for an arbitrary number of threads running concurrently satisfy the temporal specification. Checking the proof represented by a PVD requires proving only a finite collection of quantifier-free verification conditions. The PVDs we present here exploit the symmetry assumption, under which process identifiers are interchangeable. This assumption covers a large class of concurrent systems, including concurrent data types. We illustrate the use of PVDs in the verification of an infinite state mutual exclusion protocol." ] }
1605.02350
2376727710
Correctness of multi-threaded programs typically requires that they satisfy liveness properties. For example, a program may require that no thread is starved of a shared resource, or that all threads eventually agree on a single value. This paper presents a method for proving that such liveness properties hold. Two particular challenges addressed in this work are that (1) the correctness argument may rely on global behaviour of the system (e.g., the correctness argument may require that all threads collectively progress towards "the good thing" rather than one thread progressing while the others do not interfere), and (2) such programs are often designed to be executed by any number of threads, and the desired liveness properties must hold regardless of the number of threads that are active in the program.
Parameterized model checking considers systems that consist of unboundedly many finite-state processes running in parallel @cite_32 @cite_8 @cite_21 @cite_18 @cite_25 @cite_3 . In this paper, we develop an approach to the problem of verifying liveness properties of parameterized programs, in which processes are infinite-state. This demands substantially different techniques than those used in parameterized model checking. The techniques used in this paper are more closely related to and .
{ "cite_N": [ "@cite_18", "@cite_8", "@cite_21", "@cite_32", "@cite_3", "@cite_25" ], "mid": [ "1966856316", "2055083505", "1561112849", "1853246977", "2761734916", "1515157987" ], "abstract": [ "The method of invisible invariants was developed originally in order to verify safety properties of parameterized systems in a fully automatic manner. The method is based on (1) a project&generalize heuristic to generate auxiliary constructs for parameterized systems and (2) a small-model theorem, implying that it is sufficient to check the validity of logical assertions of a certain syntactic form on small instantiations of a parameterized system. The approach can be generalized to any deductive proof rule that (1) requires auxiliary constructs that can be generated by project&generalize, and (2) the premises resulting when using the constructs are of the form covered by the small-model theorem.The method of invisible ranking, presented here, generalizes the approach to liveness properties of parameterized systems. Starting with a proof rule and cases where the method can be applied almost “as is,” the paper progresses to develop deductive proof rules for liveness and extend the small-model theorem to cover many intricate families of parameterized systems.", "Regular model checking is a form of symbolic model checking for parameterized and infinite-state systems whose states can be represented as words of arbitrary length over a finite alphabet, in which regular sets of words are used to represent sets of states. We present LTL(MSO), a combination of the logics monadic second-order logic (MSO) and LTL as a natural logic for expressing the temporal properties to be verified in regular model checking. In other words, LTL(MSO) is a natural specification language for both the system and the property under consideration. LTL(MSO) is a two-dimensional modal logic, where MSO is used for specifying properties of system states and transitions, and LTL is used for specifying temporal properties. In addition, the first-order quantification in MSO can be used to express properties parameterized on a position or process. We give a technique for model checking LTL(MSO), which is adapted from the automata-theoretic approach: a formula is translated to a buchi regular transition system with a regular set of accepting states, and regular model checking techniques are used to search for models. We have implemented the technique, and show its application to a number of parameterized algorithms from the literature.", "The paper presents a method for the automatic verification of a certain class of parameterized systems. These are bounded-data systems consisting of N processes (N being the parameter), where each process is finite-state. First, we show that if we use the standard deductive inv rule for proving invariance properties, then all the generated verification conditions can be automatically resolved by finite-state (BDD-based) methods with no need for interactive theorem proving.Next, we show how to use model-checking techniques over finite (and small) instances of the parameterized system in order to derive candidates for invariant assertions. Combining this automatic computation of invariants with the previously mentioned resolution of the VCs (verification conditions) yields a (necessarily) incomplete but fully automatic sound method for verifying bounded-data parameterized systems. 
The generated invariants can be transferred to the VC-validation phase without ever been examined by the user, which explains why we refer to them as \"invisible\".We illustrate the method on a non-trivial example of a cache protocol, provided by Steve German.", "In this paper, we develop a counterexample-guided abstraction refinement (CEGAR) framework for monotonic abstraction, an approach that is particularly useful in automatic verification of safety properties for parameterized systems. The main drawback of verification using monotonic abstraction is that it sometimes generates spurious counterexamples. Our CEGAR algorithm automatically extracts from each spurious counterexample a set of configurations called a \"Safety Zone\"and uses it to refine the abstract transition system of the next iteration. We have developed a prototype based on this idea; and our experimentation shows that the approach allows to verify many of the examples that cannot be handled by the original monotonic abstraction approach.", "We characterize the complexity of liveness verification for parameterized systems consisting of a leader process and arbitrarily many anonymous and identical contributor processes. Processes communicate through a shared, bounded-value register. While each operation on the register is atomic, there is no synchronization primitive to execute a sequence of operations atomically.", "The methods of Invisible Invariants and Invisible Ranking were developed originally in order to verify temporal properties of parameterized systems in a fully automatic manner. These methods are based on an instantiate-project-and-generalize heuristic for the automatic generation of auxiliary constructs and a small model property implying that it is sufficient to check validity of a deductive rule premises using these constructs on small instantiations of the system. The previous version of the method of Invisible Ranking was restricted to cases where the helpful assertions and ranking functions for a process depended only on the local state of this process and not on any neighboring process, which seriously restricted the applicability of the method, and often required the introduction of auxiliary variables." ] }
1605.02350
2376727710
Correctness of multi-threaded programs typically requires that they satisfy liveness properties. For example, a program may require that no thread is starved of a shared resource, or that all threads eventually agree on a single value. This paper presents a method for proving that such liveness properties hold. Two particular challenges addressed in this work are that (1) the correctness argument may rely on global behaviour of the system (e.g., the correctness argument may require that all threads collectively progress towards "the good thing" rather than one thread progressing while the others do not interfere), and (2) such programs are often designed to be executed by any number of threads, and the desired liveness properties must hold regardless of the number of threads that are active in the program.
Termination analysis of sequential programs is an active field with many effective techniques @cite_11 @cite_22 @cite_4 @cite_26 @cite_19 @cite_15 . One of the goals of the present paper is to adapt the incremental style of termination analysis pioneered by @cite_24 @cite_11 to the setting of parameterized programs. The essence of this idea is to construct a termination argument iteratively via abstraction refinement: First, sample some behaviours of the program and prove that those are terminating. Second, assemble the termination arguments for the sampled behaviours into a candidate termination argument for the whole program. Third, use a safety checker to prove that the termination argument applies to all behaviours of the program. If the safety check succeeds, the program terminates; if not, we can use the counterexample to improve the termination argument.
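A schematic sketch of this refinement loop; the helper functions inclusion_check and prove_lasso_terminates are hypothetical placeholders for a safety checker and a single-trace termination prover, not the API of any particular tool.

```python
def inclusion_check(program, argument):
    """Placeholder for a safety checker: returns (covered?, counterexample-or-None)."""
    raise NotImplementedError  # assumed external component

def prove_lasso_terminates(trace):
    """Placeholder for a single-trace (lasso) termination prover: returns a ranking component or None."""
    raise NotImplementedError  # assumed external component

def prove_termination(program, max_rounds=50):
    """Iteratively build a termination argument and check it with a safety checker."""
    argument = []                                       # candidate termination argument
    for _ in range(max_rounds):
        # Does the current argument cover every behaviour of the program?
        covered, counterexample = inclusion_check(program, argument)
        if covered:
            return "terminating", argument
        # Prove the sampled behaviour terminating in isolation and refine the argument.
        component = prove_lasso_terminates(counterexample)
        if component is None:
            return "unknown", counterexample            # possible non-termination witness
        argument.append(component)
    return "unknown", argument                          # refinement budget exhausted
```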
{ "cite_N": [ "@cite_26", "@cite_4", "@cite_22", "@cite_24", "@cite_19", "@cite_15", "@cite_11" ], "mid": [ "286513891", "2012816689", "1589106570", "1503537039", "14146387", "1102030435", "2124909257" ], "abstract": [ "We present a novel approach to termination analysis. In a first step, the analysis uses a program as a black-box which exhibits only a finite set of sample traces. Each sample trace is infinite but can be represented by a finite lasso. The analysis can \"learn\" a program from a termination proof for the lasso, a program that is terminating by construction. In a second step, the analysis checks that the set of sample traces is representative in a sense that we can make formal. An experimental evaluation indicates that the approach is a potentially useful addition to the portfolio of existing approaches to termination analysis.", "Proof, verification and analysis methods for termination all rely on two induction principles: (1) a variant function or induction on data ensuring progress towards the end and (2) some form of induction on the program structure. The abstract interpretation design principle is first illustrated for the design of new forward and backward proof, verification and analysis methods for safety. The safety collecting semantics defining the strongest safety property of programs is first expressed in a constructive fixpoint form. Safety proof and checking verification methods then immediately follow by fixpoint induction. Static analysis of abstract safety properties such as invariance are constructively designed by fixpoint abstraction (or approximation) to (automatically) infer safety properties. So far, no such clear design principle did exist for termination so that the existing approaches are scattered and largely not comparable with each other. For (1), we show that this design principle applies equally well to potential and definite termination. The trace-based termination collecting semantics is given a fixpoint definition. Its abstraction yields a fixpoint definition of the best variant function. By further abstraction of this best variant function, we derive the Floyd Turing termination proof method as well as new static analysis methods to effectively compute approximations of this best variant function. For (2), we introduce a generalization of the syntactic notion of struc- tural induction (as found in Hoare logic) into a semantic structural induction based on the new semantic concept of inductive trace cover covering execution traces by segments, a new basis for formulating program properties. Its abstractions allow for generalized recursive proof, verification and static analysis methods by induction on both program structure, control, and data. Examples of particular instances include Floyd's handling of loop cutpoints as well as nested loops, Burstall's intermittent assertion total correctness proof method, and Podelski-Rybalchenko transition invariants.", "Termination proving has traditionally been based on the search for (possibly lexicographic) ranking functions. In recent years, however, the discovery of termination proof techniques based on Ramsey's theorem have led to new automation strategies, e.g. size-change, or iterative reductions from termination to safety. In this paper we revisit the decision to use Ramsey-based termination arguments in the iterative approach. We describe a new iterative termination proving procedure that instead searches for lexicographic termination arguments. 
Using experimental evidence we show that this new method leads to dramatic speedups.", "Abstraction can often lead to spurious counterexamples. Counterexample-guided abstraction refinement is a method of strengthening abstractions based on the analysis of these spurious counterexamples. For invariance properties, a counterexample is a finite trace that violates the invariant; it is spurious if it is possible in the abstraction but not in the original system. When proving termination or other liveness properties of infinite-state systems, a useful notion of spurious counterexamples has remained an open problem. For this reason, no counterexample-guided abstraction refinement algorithm was known for termination. In this paper, we address this problem and present the first known automatic counterexample-guided abstraction refinement algorithm for termination proofs. We exploit recent results on transition invariants and transition predicate abstraction. We identify two reasons for spuriousness: abstractions that are too coarse, and candidate transition invariants that are too strong. Our counterexample-guided abstraction refinement algorithm successively weakens candidate transition invariants and refines the abstraction.", "An algorithmic-learning-based termination analysis technique is presented. The new technique combines transition predicate abstraction, algorithmic learning, and decision procedures to compute transition invariants as proofs of program termination. Compared to the previous approaches that mostly aim to find a particular form of transition invariants, our technique does not commit to any particular one. For the examples that the previous approaches simply give up and report failure our technique can still prove the termination. We compare our technique with others on several benchmarks from literature including PolyRank examples, SNU realtime benchmark, and Windows device driver examples. The result shows that our technique outperforms others both in efficiency and effectiveness.", "FuncTion is a research prototype static analyzer designed for proving conditional termination of C programs. The tool automatically infers piecewise-defined ranking functions and sufficient preconditions for termination by means of abstract interpretation. It combines a variety of abstract domains in order to balance the precision and cost of the analysis.", "Program termination is central to the process of ensuring that systems code can always react. We describe a new program termination prover that performs a path-sensitive and context-sensitive program analysis and provides capacity for large program fragments (i.e. more than 20,000 lines of code) together with support for programming language features such as arbitrarily nested loops, pointers, function-pointers, side-effects, etc.We also present experimental results on device driver dispatch routines from theWindows operating system. The most distinguishing aspect of our tool is how it shifts the balance between the two tasks of constructing and respectively checking the termination argument. Checking becomes the hard step. In this paper we show how we solve the corresponding challenge of checking with binary reachability analysis." ] }
1605.02350
2376727710
Correctness of multi-threaded programs typically requires that they satisfy liveness properties. For example, a program may require that no thread is starved of a shared resource, or that all threads eventually agree on a single value. This paper presents a method for proving that such liveness properties hold. Two particular challenges addressed in this work are that (1) the correctness argument may rely on global behaviour of the system (e.g., the correctness argument may require that all threads collectively progress towards "the good thing" rather than one thread progressing while the others do not interfere), and (2) such programs are often designed to be executed by any number of threads, and the desired liveness properties must hold regardless of the number of threads that are active in the program.
Termination analyses have been developed for the setting of concurrent programs @cite_27 @cite_28 @cite_6 . Our work differs in two respects. First, our technique handles the case that there are unboundedly many threads operating simultaneously in the system. Second, the aforementioned techniques prove termination using thread-local arguments. A thread-local termination argument expresses that each thread individually progresses towards some goal assuming that its environment (formed by the other threads) is either passive or at least does not disrupt its progress. In contrast, the technique proposed in this paper is able to reason about termination that requires coordination between all threads (that is, all threads together progress towards some goal). This enables our approach to prove liveness for programs such as the Ticket protocol (Figure ): proving that some distinguished thread will eventually enter its critical section requires showing that all threads collectively make progress on increasing the value of the service number until the distinguished thread's ticket is reached.
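For reference, a minimal sketch of a ticket-lock style protocol of the shape discussed above (an assumed reconstruction, not the exact figure from the paper): each thread draws a ticket with an atomic fetch-and-add and waits until the shared service number reaches its ticket, so a waiting thread's progress depends on the other threads repeatedly advancing the service number.

```python
import threading

ticket_lock = threading.Lock()   # models the atomic fetch-and-add on the ticket counter
next_ticket = 0                  # next ticket to hand out
serving = 0                      # service number: ticket currently allowed to enter

def acquire():
    global next_ticket
    with ticket_lock:
        my_ticket = next_ticket  # atomically draw a ticket
        next_ticket += 1
    while serving != my_ticket:  # spin until our number is called
        pass                     # busy-wait; acceptable for a sketch
    return my_ticket

def release():
    global serving
    serving += 1                 # advance the service number for the next waiter

def worker():
    acquire()
    # critical section: reached only because other threads keep calling release()
    release()

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```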
{ "cite_N": [ "@cite_28", "@cite_27", "@cite_6" ], "mid": [ "182717014", "2145062333", "2270988982" ], "abstract": [ "Automated verification of multi-threaded programs is difficult. Direct treatment of all possible thread interleavings by reasoning about the program globally is a prohibitively expensive task, even for small programs. Rely-guarantee reasoning is a promising technique to address this challenge by reducing the verification problem to reasoning about each thread individually with the help of assertions about other threads. In this paper, we propose a proof rule that uses rely-guarantee reasoning for compositional verification of termination properties. The crux of our proof rule lies in its compositionality wrt. the thread structure of the program and wrt. the applied termination arguments --- transition invariants. We present a method for automating the proof rule using an abstraction refinement procedure that is based on solving recursion-free Horn clauses. To deal with termination, we extend an existing Horn-clause solver with the capability to handle well-foundedness constraints. Finally, we present an experimental evaluation of our algorithm on a set of micro-benchmarks.", "Concurrent programs are often designed such that certain functions executing within critical threads must terminate. Examples of such cases can be found in operating systems, web servers, e-mail clients, etc. Unfortunately, no known automatic program termination prover supports a practical method of proving the termination of threads. In this paper we describe such a procedure. The procedure's scalability is achieved through the use of environment models that abstract away the surrounding threads. The procedure's accuracy is due to a novel method of incrementally constructing environment abstractions. Our method finds the conditions that a thread requires of its environment in order to establish termination by looking at the conditions necessary to prove that certain paths through the thread represent well-founded relations if executed in isolation of the other threads. The paper gives a description of experimental results using an implementation of our procedureon Windows device drivers and adescription of a previously unknown bug found withthe tool.", "We describe a method for proving termination of massively parallel GPU kernels. An implementation in KITTeL is able to show termination of 94 of the 598 kernels in our benchmark suite." ] }
1605.02350
2376727710
Correctness of multi-threaded programs typically requires that they satisfy liveness properties. For example, a program may require that no thread is starved of a shared resource, or that all threads eventually agree on a single value. This paper presents a method for proving that such liveness properties hold. Two particular challenges addressed in this work are that (1) the correctness argument may rely on global behaviour of the system (e.g., the correctness argument may require that all threads collectively progress towards "the good thing" rather than one thread progressing while the others do not interfere), and (2) such programs are often designed to be executed by any number of threads, and the desired liveness properties must hold regardless of the number of threads that are active in the program.
Parameterized safety analysis deals with proving safety properties of infinite-state concurrent programs with unboundedly many threads @cite_1 @cite_0 @cite_12 @cite_20 . Safety analysis is relevant to liveness analysis in two respects: (1) in liveness analysis based on abstraction refinement, checking the validity of a correctness argument is reduced to the verification of a safety property @cite_24 @cite_11 ; (2) an invariant is generally needed in order to establish (or to check) a ranking function. Well-founded proof spaces can be seen as an extension of @cite_2 , a proof system for parameterized safety analysis, to prove liveness properties. A more extensive comparison between proof spaces and other methods for parameterized safety analysis can be found in @cite_2 .
{ "cite_N": [ "@cite_1", "@cite_0", "@cite_24", "@cite_2", "@cite_12", "@cite_20", "@cite_11" ], "mid": [ "1243232056", "169389188", "1503537039", "2090547808", "2159835799", "1516842532", "2124909257" ], "abstract": [ "Monotonicity in concurrent systems stipulates that, in any global state, extant system actions remain executable when new processes are added to the state. This concept is not only natural and common in multi-threaded software, but also useful: if every thread’s memory is finite, monotonicity often guarantees the decidability of safety property verification even when the number of running threads is unknown. In this paper, we show that the act of obtaining finite-data thread abstractions for model checking can be at odds with monotonicity: Predicate-abstracting certain widely used monotone software results in non-monotone multi-threaded Boolean programs — the monotonicity is lost in the abstraction. As a result, well-established sound and complete safety checking algorithms become inapplicable; in fact, safety checking turns out to be undecidable for the obtained class of unbounded-thread Boolean programs. We demonstrate how the abstract programs can be modified into monotone ones, without affecting safety properties of the non-monotone abstraction. This significantly improves earlier approaches of enforcing monotonicity via overapproximations.", "We examine the problem of inferring invariants for parametrized systems. Parametrized systems are concurrent systems consisting of an a priori unbounded number of process instances running the same program. Such systems are commonly encountered in many situations including device drivers, distributed systems, and robotic swarms. In this paper we describe a technique that enables leveraging off-the-shelf invariant generators designed for sequential programs to infer invariants of parametrized systems. The central challenge in invariant inference for parametrized systems is that naively exploding the transition system with all interleavings is not just impractical but impossible. In our approach, the key enabler is the notion of a reflective abstraction that we prove has an important correspondence with inductive invariants. This correspondence naturally gives rise to an iterative invariant generation procedure that alternates between computing candidate invariants and creating reflective abstractions.", "Abstraction can often lead to spurious counterexamples. Counterexample-guided abstraction refinement is a method of strengthening abstractions based on the analysis of these spurious counterexamples. For invariance properties, a counterexample is a finite trace that violates the invariant; it is spurious if it is possible in the abstraction but not in the original system. When proving termination or other liveness properties of infinite-state systems, a useful notion of spurious counterexamples has remained an open problem. For this reason, no counterexample-guided abstraction refinement algorithm was known for termination. In this paper, we address this problem and present the first known automatic counterexample-guided abstraction refinement algorithm for termination proofs. We exploit recent results on transition invariants and transition predicate abstraction. We identify two reasons for spuriousness: abstractions that are too coarse, and candidate transition invariants that are too strong. 
Our counterexample-guided abstraction refinement algorithm successively weakens candidate transition invariants and refines the abstraction.", "In this paper, we present a new approach to automatically verify multi-threaded programs which are executed by an unbounded number of threads running in parallel. The starting point for our work is the problem of how we can leverage existing automated verification technology for sequential programs (abstract interpretation, Craig interpolation, constraint solving, etc.) for multi-threaded programs. Suppose that we are given a correctness proof for a trace of a program (or for some other program fragment). We observe that the proof can always be decomposed into a finite set of Hoare triples, and we ask what can be proved from the finite set of Hoare triples using only simple combinatorial inference rules (without access to a theorem prover and without the possibility to infer genuinely new Hoare triples)? We introduce a proof system where one proves the correctness of a multi-threaded program by showing that for each trace of the program, there exists a correctness proof in the space of proofs that are derivable from a finite set of axioms using simple combinatorial inference rules. This proof system is complete with respect to the classical proof method of establishing an inductive invariant (which uses thread quantification and control predicates). Moreover, it is possible to algorithmically check whether a given set of axioms is sufficient to prove the correctness of a multi-threaded program, using ideas from well-structured transition systems.", "We consider a language of recursively defined formulas about arrays of variables, suitable for specifying safety properties of parameterized systems. We then present an abstract interpretation framework which translates a paramerized system as a symbolic transition system which propagates such formulas as abstractions of underlying concrete states. The main contribution is a proof method for implications between the formulas, which then provides for an implementation of this abstract interpreter.", "We present a new technique for speeding up static analysis of (shared memory) concurrent programs. We focus on analyses that compute thread correlations : such analyses infer invariants that capture correlations between the local states of different threads (as well as the global state). Such invariants are required for verifying many natural properties of concurrent programs. Tracking correlations between different thread states, however, is very expensive. A significant factor that makes such analysis expensive is the cost of applying abstract transformers. In this paper, we introduce a technique that exploits the notion of footprints and memoization to compute individual abstract transformers more efficiently. We have implemented this technique in our concurrent shape analysis framework. We have used this implementation to prove properties of fine-grained concurrent programs with a shared, mutable, heap in the presence of an unbounded number of objects and threads. The properties we verified include memory safety, data structure invariants, partial correctness, and linearizability. Our empirical evaluation shows that our new technique reduces the analysis time significantly (e.g., by a factor of 35 in one case).", "Program termination is central to the process of ensuring that systems code can always react. 
We describe a new program termination prover that performs a path-sensitive and context-sensitive program analysis and provides capacity for large program fragments (i.e. more than 20,000 lines of code) together with support for programming language features such as arbitrarily nested loops, pointers, function-pointers, side-effects, etc.We also present experimental results on device driver dispatch routines from theWindows operating system. The most distinguishing aspect of our tool is how it shifts the balance between the two tasks of constructing and respectively checking the termination argument. Checking becomes the hard step. In this paper we show how we solve the corresponding challenge of checking with binary reachability analysis." ] }
1605.02305
2354576866
Depth estimation from single monocular images is a key component of scene understanding and has benefited largely from deep convolutional neural networks (CNN) recently. In this article, we take advantage of the recent deep residual networks and propose a simple yet effective approach to this problem. We formulate depth estimation as a pixel-wise classification task. Specifically, we first discretize the continuous depth values into multiple bins and label the bins according to their depth range. Then we train fully convolutional deep residual networks to predict the depth label of each pixel. Performing discrete depth label classification instead of continuous depth value regression allows us to predict a confidence in the form of probability distribution. We further apply fully-connected conditional random fields (CRF) as a post processing step to enforce local smoothness interactions, which improves the results. We evaluate our approach on both indoor and outdoor datasets and achieve state-of-the-art performance.
Previous depth estimation methods are mainly based on geometric models. For example, the works of @cite_31 @cite_17 @cite_5 rely on box-shaped models and try to fit the box edges to those observed in the image. These methods can only model particular scene structures and are therefore not applicable to general-scene depth estimation. More recently, non-parametric methods @cite_12 have been explored. These methods consist of candidate image retrieval, scene alignment, and depth inference via optimization with smoothness constraints. They are based on the assumption that scenes with semantically similar appearances should have similar depth distributions when densely aligned.
{ "cite_N": [ "@cite_5", "@cite_31", "@cite_12", "@cite_17" ], "mid": [ "1481823314", "1818727054", "2074254947", "2145567954" ], "abstract": [ "In this paper we propose the first exact solution to the problem of estimating the 3D room layout from a single image. This problem is typically formulated as inference in a Markov random field, where potentials count image features (e.g., geometric context, orientation maps, lines in accordance with vanishing points) in each face of the layout. We present a novel branch and bound approach which splits the label space in terms of candidate sets of 3D layouts, and efficiently bounds the potentials in these sets by restricting the contribution of each individual face. We employ integral geometry in order to evaluate these bounds in constant time, and as a consequence, we not only obtain the exact solution, but also in less time than approximate inference tools such as message-passing. We demonstrate the effectiveness of our approach in two benchmarks and show that our bounds are tight, and only a few evaluations are necessary.", "In this paper we show that a geometric representation of an object occurring in indoor scenes, along with rich scene structure can be used to produce a detector for that object in a single image. Using perspective cues from the global scene geometry, we first develop a 3D based object detector. This detector is competitive with an image based detector built using state-of-the-art methods; however, combining the two produces a notably improved detector, because it unifies contextual and geometric information. We then use a probabilistic model that explicitly uses constraints imposed by spatial layout - the locations of walls and floor in the image - to refine the 3D object estimates. We use an existing approach to compute spatial layout [1], and use constraints such as objects are supported by floor and can not stick through the walls. The resulting detector (a) has significantly improved accuracy when compared to the state-of-the-art 2D detectors and (b) gives a 3D interpretation of the location of the object, derived from a 2D image. We evaluate the detector on beds, for which we give extensive quantitative results derived from images of real scenes.", "We describe a technique that automatically generates plausible depth maps from videos using non-parametric depth sampling. We demonstrate our technique in cases where past methods fail (non-translating cameras and dynamic scenes). Our technique is applicable to single images as well as videos. For videos, we use local motion cues to improve the inferred depth maps, while optical flow is used to ensure temporal depth consistency. For training and evaluation, we use a Kinect-based system to collect a large data set containing stereoscopic videos with known depths. We show that our depth estimation technique outperforms the state-of-the-art on benchmark databases. Our technique can be used to automatically convert a monoscopic video into stereo for 3D visualization, and we demonstrate this through a variety of visually pleasing results for indoor and outdoor scenes, including results from the feature film Charade.", "There has been a recent push in extraction of 3D spatial layout of scenes. However, none of these approaches model the 3D interaction between objects and the spatial layout. In this paper, we argue for a parametric representation of objects in 3D, which allows us to incorporate volumetric constraints of the physical world. 
We show that augmenting current structured prediction techniques with volumetric reasoning significantly improves the performance of the state-of-the-art." ] }
1605.02305
2354576866
Depth estimation from single monocular images is a key component of scene understanding and has benefited largely from deep convolutional neural networks (CNN) recently. In this article, we take advantage of the recent deep residual networks and propose a simple yet effective approach to this problem. We formulate depth estimation as a pixel-wise classification task. Specifically, we first discretize the continuous depth values into multiple bins and label the bins according to their depth range. Then we train fully convolutional deep residual networks to predict the depth label of each pixel. Performing discrete depth label classification instead of continuous depth value regression allows us to predict a confidence in the form of probability distribution. We further apply fully-connected conditional random fields (CRF) as a post processing step to enforce local smoothness interactions, which improves the results. We evaluate our approach on both indoor and outdoor datasets and achieve state-of-the-art performance.
@cite_0 proposed a neural regression forest (NRF) architecture which combines convolutional neural networks with random forests to predict depths in the continuous domain via regression. The NRF processes a data sample with an ensemble of binary regression trees, and the final depth estimate is obtained by fusing the individual regression results. It allows for parallelizable training of all shallow CNNs and efficient enforcement of smoothness in the depth estimation results. @cite_18 applied deep residual networks to depth estimation. In order to improve the output resolution, they presented a novel way to efficiently learn feature map up-sampling within the network. For network optimization, they also presented a reverse Huber loss, which is driven by the value distributions commonly present in depth maps.
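For reference, the reverse Huber (berHu) loss mentioned above is commonly written as follows; the notation is ours, and the threshold c is typically tied to the largest per-batch residual, though the exact choice in @cite_18 may differ:

B(x) = \begin{cases} |x|, & |x| \le c, \\ \dfrac{x^2 + c^2}{2c}, & |x| > c, \end{cases}

so that the loss behaves like an L1 penalty for small residuals and like a (shifted) L2 penalty for large ones, while remaining continuous at |x| = c.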
{ "cite_N": [ "@cite_0", "@cite_18" ], "mid": [ "2436453945", "2963591054" ], "abstract": [ "This paper presents a novel deep architecture, called neural regression forest (NRF), for depth estimation from a single image. NRF combines random forests and convolutional neural networks (CNNs). Scanning windows extracted from the image represent samples which are passed down the trees of NRF for predicting their depth. At every tree node, the sample is filtered with a CNN associated with that node. Results of the convolutional filtering are passed to left and right children nodes, i.e., corresponding CNNs, with a Bernoulli probability, until the leaves, where depth estimations are made. CNNs at every node are designed to have fewer parameters than seen in recent work, but their stacked processing along a path in the tree effectively amounts to a deeper CNN. NRF allows for parallelizable training of all \"shallow\" CNNs, and efficient enforcing of smoothness in depth estimation results. Our evaluation on the benchmark Make3D and NYUv2 datasets demonstrates that NRF outperforms the state of the art, and gracefully handles gradually decreasing training datasets.", "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available." ] }
1605.02305
2354576866
Depth estimation from single monocular images is a key component of scene understanding and has benefited largely from deep convolutional neural networks (CNN) recently. In this article, we take advantage of the recent deep residual networks and propose a simple yet effective approach to this problem. We formulate depth estimation as a pixel-wise classification task. Specifically, we first discretize the continuous depth values into multiple bins and label the bins according to their depth range. Then we train fully convolutional deep residual networks to predict the depth label of each pixel. Performing discrete depth label classification instead of continuous depth value regression allows us to predict a confidence in the form of probability distribution. We further apply fully-connected conditional random fields (CRF) as a post processing step to enforce local smoothness interactions, which improves the results. We evaluate our approach on both indoor and outdoor datasets and achieve state-of-the-art performance.
Experimental results in the aforementioned works reveal that depth estimation benefits from: (a) an increased number of layers in deep networks; (b) obtaining fine-level details. In this work, we take advantage of the successful deep residual networks @cite_29 and formulate depth estimation as a dense prediction task. We also apply fully connected CRFs @cite_4 as post-processing. Although @cite_18 also applied deep residual networks to depth estimation, our method differs from @cite_18 in three ways: firstly, we formulate depth estimation as a classification task, while @cite_18 formulated it as a regression task; secondly, we obtain a confidence for each depth prediction, which can be used during training and post-processing; lastly, in order to obtain high-resolution predictions, @cite_18 applied an up-sampling scheme while we simply use bilinear interpolation.
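To make the classification formulation concrete, the following sketch shows one plausible way to discretize depth values into class labels and to turn a per-pixel class distribution back into a depth estimate. The depth range, the number of bins, and the log-space spacing are illustrative assumptions, not the exact settings used in this work.

import numpy as np

def depth_to_label(depth, d_min=0.5, d_max=10.0, num_bins=50):
    # Map continuous depths (in meters) to discrete bin labels, using bins
    # that are uniform in log-depth (an assumption made for this sketch).
    edges = np.linspace(np.log(d_min), np.log(d_max), num_bins + 1)
    labels = np.digitize(np.log(np.clip(depth, d_min, d_max)), edges) - 1
    return np.clip(labels, 0, num_bins - 1)

def probs_to_depth(probs, d_min=0.5, d_max=10.0, num_bins=50):
    # Recover a continuous depth estimate as the probability-weighted mean of
    # the bin centers; the full distribution also serves as a confidence.
    edges = np.linspace(np.log(d_min), np.log(d_max), num_bins + 1)
    centers = np.exp(0.5 * (edges[:-1] + edges[1:]))
    return probs @ centers  # probs has shape (H, W, num_bins)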
{ "cite_N": [ "@cite_29", "@cite_4", "@cite_18" ], "mid": [ "2949650786", "2161236525", "2963591054" ], "abstract": [ "Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57 error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28 relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.", "Most state-of-the-art techniques for multi-class image segmentation and labeling use conditional random fields defined over pixels or image regions. While region-level models often feature dense pairwise connectivity, pixel-level models are considerably larger and have only permitted sparse graph structures. In this paper, we consider fully connected CRF models defined on the complete set of pixels in an image. The resulting graphs have billions of edges, making traditional inference algorithms impractical. Our main contribution is a highly efficient approximate inference algorithm for fully connected CRF models in which the pairwise edge potentials are defined by a linear combination of Gaussian kernels. Our experiments demonstrate that dense connectivity at the pixel level substantially improves segmentation and labeling accuracy.", "This paper addresses the problem of estimating the depth map of a scene given a single RGB image. We propose a fully convolutional architecture, encompassing residual learning, to model the ambiguous mapping between monocular images and depth maps. In order to improve the output resolution, we present a novel way to efficiently learn feature map up-sampling within the network. For optimization, we introduce the reverse Huber loss that is particularly suited for the task at hand and driven by the value distributions commonly present in depth maps. Our model is composed of a single architecture that is trained end-to-end and does not rely on post-processing techniques, such as CRFs or other additional refinement steps. As a result, it runs in real-time on images or videos. In the evaluation, we show that the proposed model contains fewer parameters and requires fewer training data than the current state of the art, while outperforming all approaches on depth estimation. Code and models are publicly available." ] }
1605.01930
2345466534
Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming.
In the past few years, there has been increased interest in millimeter wave (mmWave) technology to fulfill the data rate requirements foreseen for fifth generation (5G) cellular communication @cite_4 . However, frequencies in mmWave bands experience high path-loss, which, in comparison to microwave bands, may result in a significant coverage reduction when omnidirectional communication is considered. To overcome these coverage issues, beamforming at mmWave is an effective solution. Due to the small wavelengths at mmWave frequencies, a large number of antennas can be packed in a small space, which makes it possible to generate high gains and highly directional beams.
{ "cite_N": [ "@cite_4" ], "mid": [ "2095843437" ], "abstract": [ "Almost all mobile communication systems today use spectrum in the range of 300 MHz-3 GHz. In this article, we reason why the wireless community should start looking at the 3-300 GHz spectrum for mobile broadband applications. We discuss propagation and device technology challenges associated with this band as well as its unique advantages for mobile communication. We introduce a millimeter-wave mobile broadband (MMB) system as a candidate next generation mobile communication system. We demonstrate the feasibility for MMB to achieve gigabit-per-second data rates at a distance up to 1 km in an urban mobile environment. A few key concepts in MMB network architecture such as the MMB base station grid, MMB interBS backhaul link, and a hybrid MMB + 4G system are described. We also discuss beamforming techniques and the frame structure of the MMB air interface." ] }
1605.01930
2345466534
Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming.
Recently, two different approaches have been considered for directional initial access cell discovery. Firstly, in @cite_15 , considering a HetNet scenario, context information (CI) regarding mobile station (MS) positioning is provided to the mmWave base station (BS) by the microwave BS. Based on this, the mmWave BS points its beam (using analog beamforming) in the desired direction. The authors also address the issue of erroneous CI, proposing that the BS, in addition to searching in the CI-based direction, also searches the rest of the angular space by forming beams in different directions and with different beamwidths (to increase coverage). Results showed that this enhanced cell discovery, where in case of positioning error the BS also searches the adjacent angular directions, outperforms the greedy search approach where the BS searches the angular space sequentially. In addition, omnidirectional reception is considered at the MS, which results in a reduced gain compared to directional reception. Recently, the authors of @cite_15 extended their work by considering a more complex channel model with multiple rays and obstacles @cite_1 .
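As a minimal illustration of such CI-driven discovery, the following sketch enumerates beam indices starting from the CI-suggested beam and then expanding outwards to adjacent directions; the exact ordering and fallback rules used in @cite_15 may differ, so this is only indicative.

def ci_search_order(num_beams, ci_beam):
    # Visit the CI-suggested beam first, then its neighbours at growing
    # angular distance, wrapping around the full angular space.
    order = [ci_beam % num_beams]
    for step in range(1, num_beams):
        for idx in (ci_beam - step, ci_beam + step):
            beam = idx % num_beams
            if beam not in order:
                order.append(beam)
        if len(order) == num_beams:
            break
    return order

For example, with 8 beams and a CI estimate pointing at beam 3, the resulting order is [3, 2, 4, 1, 5, 0, 6, 7].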
{ "cite_N": [ "@cite_15", "@cite_1" ], "mid": [ "2964273971", "2183911578" ], "abstract": [ "The exploitation of the mm-wave bands is one of the most promising solutions for 5G mobile radio networks. However, the use of mm-wave technologies in cellular networks is not straightforward due to mm-wave severe propagation conditions that limit access availability. In order to overcome this obstacle, hybrid network architectures are being considered where mm-wave small cells can exploit an overlay coverage layer based on legacy technology. The additional mm-wave layer can also take advantage of a functional split between control and user plane, that allows to delegate most of the signaling functions to legacy base stations and to gather context information from users for resource optimization. However, mm-wave technology requires multiple antennas and highly directional transmissions to compensate for high path loss and limited power. Directional transmissions must be also used for the cell discovery and synchronization process, and this can lead to a non negligible delay due to need to scan the cell area with multiple transmissions in different angles. In this paper, we propose to exploit the context information related to user position, provided by the separated control plane, to improve the cell discovery procedure and minimize delay. We investigate the fundamental trade-offs of the cell discovery process with directional antennas and the effects of the context information accuracy on its performance. Numerical results are provided to validate our observations.", "With the advent of next-generation mobile devices, wireless networks must be upgraded to fill the gap between huge user data demands and scarce channel capacity. Mm-waves technologies appear as the key-enabler for the future 5G networks design, exhibiting large bandwidth availability and high data rate. As counterpart, the small wave-length incurs in a harsh signal propagation that limits the transmission range. To overcome this limitation, array of antennas with a relatively high number of small elements are used to exploit beamforming techniques that greatly increase antenna directionality both at base station and user terminal. These very narrow beams are used during data transfer and tracking techniques dynamically adapt the direction according to terminal mobility. During cell discovery when initial synchronization must be acquired, however, directionality can delay the process since the best direction to point the beam is unknown. All space must be scanned using the tradeoff between beam width and transmission range. Some support to speed up the cell search process can come from the new architectures for 5G currently being investigated, where conventional wireless network and mm-waves technologies coexist. In these architecture a functional split between C-plane and U-plane allows to guarantee the continuous availability of a signaling channel through conventional wireless technologies with the opportunity to convey context information from users to network. In this paper, we investigate the use of position information provided by user terminals in order to improve the performance of the cell search process. We analyze mm-wave propagation environment and show how it is possible to take into account of position inaccuracy and reflected rays in presence of obstacles." ] }
1605.01930
2345466534
Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming.
In @cite_10 , exhaustive and hierarchical searches are compared while considering analog, digital and hybrid beamforming at the BS and the MS. In exhaustive search, the whole angular space is covered by sequentially transmitting beams in a time division multiplexing fashion, and initial beamforming is done by selecting the best combination of Tx-Rx beams. The hierarchical search, instead, is a multi-step process. In the first step, an MS utilizes fewer antennas to form a relatively small number of wide beams. The received signal is combined with all the beams and the best combiner beam is selected as a reference for the next step, where several narrower-beamwidth combiners are formed within the initially selected beam. Considering scenarios with limited mobility, the process finishes when the combiner beam is within the range of @math to @math . However, selecting an incorrect combiner in the initial stage can result in an initial access error in the following stage.
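The following sketch captures the essence of such a hierarchical (coarse-to-fine) combiner search. The measurement function, the two stage beamwidths, and the uniform sweep within each stage are our own simplifying assumptions rather than the exact procedure of @cite_10 .

import numpy as np

def hierarchical_beam_search(measure, sector=(0.0, 2 * np.pi), widths=(np.pi / 2, np.pi / 8)):
    # measure(center, width) is assumed to return the received power when the
    # MS combines with a beam of the given center angle and beamwidth.
    lo, hi = sector
    best_center = None
    for width in widths:  # coarse stage first, then progressively finer stages
        centers = np.arange(lo + width / 2, hi, width)
        powers = [measure(c, width) for c in centers]
        best_center = float(centers[int(np.argmax(powers))])
        # restrict the next (narrower) stage to the winning beam's sector
        lo, hi = best_center - width / 2, best_center + width / 2
    return best_center

Note that a wrong decision in the coarse stage cannot be recovered in the finer stages, which is exactly the failure mode mentioned above.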
{ "cite_N": [ "@cite_10" ], "mid": [ "1987804395" ], "abstract": [ "Cellular systems were designed for carrier frequencies in the microwave band (below 3 GHz) but will soon be operating in frequency bands up to 6 GHz. To meet the ever increasing demands for data, deployments in bands above 6 GHz, and as high as 75 GHz, are envisioned. However, as these systems migrate beyond the microwave band, certain channel characteristics can impact their deployment, especially the coverage range. To increase coverage, beamforming can be used but this role of beamforming is different than in current cellular systems, where its primary role is to improve data throughput. Because cellular procedures enable beamforming after a user establishes access with the system, new procedures are needed to enable beamforming during cell discovery and acquisition. This paper discusses several issues that must be resolved in order to use beamforming for access at millimeter wave (mmWave) frequencies, and presents solutions for initial access. Several approaches are verified by computer simulations, and it is shown that reliable network access and satisfactory coverage can be achieved in mmWave frequencies." ] }
1605.01930
2345466534
Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming.
Recently, in @cite_11 , iterative and exhaustive search schemes using analog beamforming have been studied and compared, and the authors showed that the optimal scheme depends on the target SNR regime.
{ "cite_N": [ "@cite_11" ], "mid": [ "2345176225" ], "abstract": [ "The millimeter wave frequencies (roughly above 10 GHz) offer the availability of massive bandwidth to greatly increase the capacity of fifth generation (5G) cellular wireless systems. However, to overcome the high isotropic pathloss at these frequencies, highly directional transmissions will be required at both the base station (BS) and the mobile user equipment (UE) to establish sufficient link budget in wide area networks. This reliance on directionality has important implications for control layer procedures. Initial access in particular can be significantly delayed due to the need for the BS and the UE to find the initial directions of transmission. This paper provides a survey of several recently proposed techniques. Detection probability and delay analysis is performed to compare various techniques including exhaustive and iterative search. We show that the optimal strategy depends on the target SNR regime." ] }
1605.01930
2345466534
Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming.
Specifically, in this paper we address the following issues: how ABF with CI performs in comparison to ABF with non-CI-based approaches (Random @cite_2 and Exhaustive @cite_10 search); how the angular error in the provided CI affects the performance of the initial access process; and how the number of MS antennas that results in the best access performance varies with the angular error in the available CI.
{ "cite_N": [ "@cite_10", "@cite_2" ], "mid": [ "1987804395", "779733492" ], "abstract": [ "Cellular systems were designed for carrier frequencies in the microwave band (below 3 GHz) but will soon be operating in frequency bands up to 6 GHz. To meet the ever increasing demands for data, deployments in bands above 6 GHz, and as high as 75 GHz, are envisioned. However, as these systems migrate beyond the microwave band, certain channel characteristics can impact their deployment, especially the coverage range. To increase coverage, beamforming can be used but this role of beamforming is different than in current cellular systems, where its primary role is to improve data throughput. Because cellular procedures enable beamforming after a user establishes access with the system, new procedures are needed to enable beamforming during cell discovery and acquisition. This paper discusses several issues that must be resolved in order to use beamforming for access at millimeter wave (mmWave) frequencies, and presents solutions for initial access. Several approaches are verified by computer simulations, and it is shown that reliable network access and satisfactory coverage can be achieved in mmWave frequencies.", "The acute disparity between increasing bandwidth demand and available spectrum has brought millimeter wave (mmWave) bands to the forefront of candidate solutions for the next-generation cellular networks. Highly directional transmissions are essential for cellular communication in these frequencies to compensate for higher isotropic path loss. This reliance on directional beamforming, however, complicates initial cell search since mobiles and base stations must jointly search over a potentially large angular directional space to locate a suitable path to initiate communication. To address this problem, this paper proposes a directional cell discovery procedure where base stations periodically transmit synchronization signals, potentially in time-varying random directions, to scan the angular space. Detectors for these signals are derived based on a Generalized Likelihood Ratio Test (GLRT) under various signal and receiver assumptions. The detectors are then simulated under realistic design parameters and channels based on actual experimental measurements at 28 GHz in New York City. The study reveals two key findings: 1) digital beamforming can significantly outperform analog beamforming even when digital beamforming uses very low quantization to compensate for the additional power requirements and 2) omnidirectional transmissions of the synchronization signals from the base station generally outperform random directional scanning." ] }
1605.01999
2152233525
We address the issue of visual saliency from three perspectives. First, we consider saliency detection as a frequency domain analysis problem. Second, we achieve this by employing the concept of nonsaliency. Third, we simultaneously consider the detection of salient regions of different size. The paper proposes a new bottom-up paradigm for detecting visual saliency, characterized by a scale-space analysis of the amplitude spectrum of natural images. We show that the convolution of the image amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector. The saliency map is obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency map entropy. A Hypercomplex Fourier Transform performs the analysis in the frequency domain. Using available databases, we demonstrate experimentally that the proposed model can predict human fixation data. We also introduce a new image database and use it to show that the saliency detector can highlight both small and large salient regions, as well as inhibit repeated distractors in cluttered images. In addition, we show that it is able to predict salient regions on which people focus their attention.
Recently, a simple and fast algorithm, called the Spectrum Residual (SR), was proposed in @cite_2 . This paper argues that the spectrum residual corresponds to image saliency. Given an image @math , it is first transformed into the frequency domain: @math . The amplitude @math and phase @math spectra are calculated, and then the log amplitude spectrum is obtained: @math . Given these definitions, the spectrum residual is defined as the log amplitude spectrum minus its local average (obtained by convolving the log amplitude spectrum with a local averaging filter); the saliency map is then recovered by combining this residual with the original phase and transforming back to the spatial domain.
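For concreteness, the SR computation can be written in a few lines of NumPy. This is an illustrative re-implementation: the local-average filter size, the small constant added before taking the logarithm, and the final Gaussian smoothing are our own choices rather than values taken from @cite_2 .

import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectrum_residual_saliency(img, avg_size=3, blur_sigma=2.5):
    # img: 2-D grayscale image as a float array
    F = np.fft.fft2(img)
    log_amplitude = np.log(np.abs(F) + 1e-8)   # log amplitude spectrum
    phase = np.angle(F)                        # phase spectrum
    # spectrum residual: log amplitude minus its local average
    residual = log_amplitude - uniform_filter(log_amplitude, size=avg_size)
    # reconstruct in the spatial domain using the original phase
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    # smooth the result to obtain the final saliency map
    return gaussian_filter(saliency, sigma=blur_sigma)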
{ "cite_N": [ "@cite_2" ], "mid": [ "2146103513" ], "abstract": [ "The ability of human visual system to detect visual saliency is extraordinarily fast and reliable. However, computational modeling of this basic intelligent behavior still remains a challenge. This paper presents a simple method for the visual saliency detection. Our model is independent of features, categories, or other forms of prior knowledge of the objects. By analyzing the log-spectrum of an input image, we extract the spectral residual of an image in spectral domain, and propose a fast method to construct the corresponding saliency map in spatial domain. We test this model on both natural pictures and artificial images such as psychological patterns. The result indicate fast and robust saliency detection of our method." ] }
1605.01779
2345496421
With the recent popularity of graphical clustering methods, there has been an increased focus on the information between samples. We show how learning cluster structure using edge features naturally and simultaneously determines the most likely number of clusters and addresses data scale issues. These results are particularly useful in instances where (a) there are a large number of clusters and (b) we have some labeled edges. Applications in this domain include image segmentation, community discovery and entity resolution. Our model is an extension of the planted partition model and our solution uses results of correlation clustering, which achieves a partition O(log(n))-close to the log-likelihood of the true clustering.
The work most closely related to ours extends the stochastic block model by drawing edge weights from other parametric distributions. Motivated by observations that Bernoulli random variables often do not capture the degree complexity in social networks, Karrer & Newman Karrer2011 , Mariadassou2010 and Ball2011 each used Poisson-distributed edge weights. This may also be a good choice because the Bernoulli degree distribution is asymptotically Poisson @cite_3 . Aicher considered an SBM with edge weights drawn from an exponential family distribution Aicher2013 . Like Thomas & Blitzstein Thomas2011 , he also showed better results than thresholding to binary edges. Lastly, Balakrishnan2011 consider normally distributed edge weights as a means of analyzing spectral clustering recovery under noise.
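To fix ideas, the Poisson-weighted variants mentioned above share the same basic likelihood structure. In generic notation (ours, not that of any single cited paper), with cluster assignments z and a block rate matrix \lambda, an undirected weighted adjacency matrix A has likelihood

P(A \mid z, \lambda) = \prod_{i < j} \frac{\lambda_{z_i z_j}^{A_{ij}}}{A_{ij}!} \, e^{-\lambda_{z_i z_j}},

i.e., each edge weight is an independent Poisson draw whose rate depends only on the block memberships of its endpoints (self-loops omitted); degree-corrected versions additionally scale the rate by per-node parameters.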
{ "cite_N": [ "@cite_3" ], "mid": [ "2038151181" ], "abstract": [ "The proliferation of models for networks raises challenging problems of model selection: the data are sparse and globally dependent, and models are typically high-dimensional and have large numbers of latent variables. Together, these issues mean that the usual model-selection criteria do not work properly for networks. We illustrate these challenges, and show one way to resolve them, by considering the key network-analysis problem of dividing a graph into communities or blocks of nodes with homogeneous patterns of links to the rest of the network. The standard tool for undertaking this is the stochastic block model, under which the probability of a link between two nodes is a function solely of the blocks to which they belong. This imposes a homogeneous degree distribution within each block; this can be unrealistic, so degree-corrected block models add a parameter for each node, modulating its overall degree. The choice between ordinary and degree-corrected block models matters because they make very different inferences about communities. We present the first principled and tractable approach to model selection between standard and degree-corrected block models, based on new large-graph asymptotics for the distribution of log-likelihood ratios under the stochastic block model, finding substantial departures from classical results for sparse graphs. We also develop linear-time approximations for log-likelihoods under both the stochastic block model and the degree-corrected model, using belief propagation. Applications to simulated and real networks show excellent agreement with our approximations. Our results thus both solve the practical problem of deciding on degree correction and point to a general approach to model selection in network analysis." ] }
1605.01779
2345496421
With the recent popularity of graphical clustering methods, there has been an increased focus on the information between samples. We show how learning cluster structure using edge features naturally and simultaneously determines the most likely number of clusters and addresses data scale issues. These results are particularly useful in instances where (a) there are a large number of clusters and (b) we have some labeled edges. Applications in this domain include image segmentation, community discovery and entity resolution. Our model is an extension of the planted partition model and our solution uses results of correlation clustering, which achieves a partition O(log(n))-close to the log-likelihood of the true clustering.
The original results by Bansal2004 showed a constant-factor approximation for complete graphs with binary edge labels. The current state of the art for binary edges is a 3-approximation @cite_8 , which Pan2015 recently parallelized to cluster one billion samples in 5 seconds. Ailon2008 also showed a linear-time 5-approximation on weighted probability graphs and a 2-approximation on weighted probability graphs obeying the triangle inequality. Demaine2006 showed an @math -approximation for arbitrarily weighted graphs using the results of Leighton & Rao Leighton1999 . Solving the general weighted problem is equivalent to the APX-hard minimum multicut problem @cite_4 @cite_16 .
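To give a flavor of these algorithms, the randomized pivot scheme underlying the combinatorial 3-approximation (the KwikCluster algorithm of Ailon et al.) can be sketched as follows; the function names and interface are ours, and the expected-approximation guarantee applies to complete graphs with +/- edge labels.

import random

def kwik_cluster(nodes, positive):
    # positive(u, v) -> True if the edge between u and v is labelled '+'.
    remaining = list(nodes)
    clusters = []
    while remaining:
        pivot = random.choice(remaining)
        # the pivot's cluster is the pivot plus all of its remaining '+' neighbours
        cluster = [pivot] + [v for v in remaining if v != pivot and positive(pivot, v)]
        clusters.append(cluster)
        members = set(cluster)
        remaining = [v for v in remaining if v not in members]
    return clusters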
{ "cite_N": [ "@cite_16", "@cite_4", "@cite_8" ], "mid": [ "", "1985875030", "2091858563" ], "abstract": [ "", "We consider the following general correlation-clustering problem [N. Bansal, A. Blum, S. Chawla, Correlation clustering, in: Proc. 43rd Annu. IEEE Symp. on Foundations of Computer Science, Vancouver, Canada, November 2002, pp. 238-250]: given a graph with real nonnegative edge weights and a 〈+〉 〈-〉 edge labelling, partition the vertices into clusters to minimize the total weight of cut 〈+〉 edges and uncut 〈-〉 edges. Thus, 〈+〉 edges with large weights (representing strong correlations between endpoints) encourage those endpoints to belong to a common cluster while 〈-〉 edges with large weights encourage the endpoints to belong to different clusters. In contrast to most clustering problems, correlation clustering specifies neither the desired number of clusters nor a distance threshold for clustering; both of these parameters are effectively chosen to be best possible by the problem definition.Correlation clustering was introduced by [Correlation clustering, in: Proc. 43rd Annu. IEEE Syrup. on Foundations of Computer Science, Vancouver, Canada, November 2002, pp. 238-250], motivated by both document clustering and agnostic learning. They proved NP-hardness and gave constant-factor approximation algorithms for the special case in which the graph is complete (full information) and every edge has the same weight. We give an O(log n)-approximation algorithm for the general case based on a linear-programming rounding and the \"region-growing\" technique. We also prove that this linear program has a gap of Ω(log n), and therefore our approximation is tight under this approach. We also give an O(r3)-approximation algorithm for Kr, r-minor-free graphs. On the other hand, we show that the problem is equivalent to minimum multicut, and therefore APX-hard and difficult to approximate better than Θ(log n).", "We address optimization problems in which we are given contradictory pieces of input information and the goal is to find a globally consistent solution that minimizes the extent of disagreement with the respective inputs. Specifically, the problems we address are rank aggregation, the feedback arc set problem on tournaments, and correlation and consensus clustering. We show that for all these problems (and various weighted versions of them), we can obtain improved approximation factors using essentially the same remarkably simple algorithm. Additionally, we almost settle a long-standing conjecture of Bang-Jensen and Thomassen and show that unless NP⊆BPP, there is no polynomial time algorithm for the problem of minimum feedback arc set in tournaments." ] }
1605.01825
2346111880
This paper deals with a challenging, frequently encountered, yet not properly investigated problem in two-frame optical flow estimation. That is, the input frames are compounds of two imaging layers -- one desired background layer of the scene, and one distracting, possibly moving layer due to transparency or reflection. In this situation, the conventional brightness constancy constraint -- the cornerstone of most existing optical flow methods -- will no longer be valid. In this paper, we propose a robust solution to this problem. The proposed method performs both optical flow estimation, and image layer separation. It exploits a generalized double-layer brightness consistency constraint connecting these two tasks, and utilizes the priors for both of them. Experiments on both synthetic data and real images have confirmed the efficacy of the proposed method. To the best of our knowledge, this is the first attempt towards handling generic optical flow fields of two-frame images containing transparency or reflection.
One of the first works on multiple optical flow computation is possibly due to Shizawa et al. @cite_35 @cite_46 . By assuming the two underlying flow fields to be constant (i.e., purely translating), they derived a generalized brightness constancy constraint for the multi-motion case. However, this constant motion assumption is restrictive and not applicable to general flow fields with complex motions. Nevertheless, their method, being one of the first, has inspired a number of variants and extensions @cite_14 @cite_34 @cite_23 @cite_4 . Some variants operate in the Fourier domain, e.g., @cite_25 @cite_30 @cite_24 .
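For reference, the generalized constraint for two superposed, locally constant motions (u_1, v_1) and (u_2, v_2) can be written, in one common form and with our own notation, as a cascade of two transport operators applied to the compound image I(x, y, t):

(\partial_t + u_1 \partial_x + v_1 \partial_y)(\partial_t + u_2 \partial_x + v_2 \partial_y) \, I(x, y, t) = 0.

Each layer satisfies the ordinary brightness constancy equation with its own velocity, and the two operators commute when the velocities are constant, so the composed operator annihilates the additive superposition. Recovering two velocities therefore requires second-order image derivatives, which is one reason the constant-motion assumption is hard to relax.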
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_14", "@cite_4", "@cite_24", "@cite_23", "@cite_46", "@cite_34", "@cite_25" ], "mid": [ "2107964393", "", "", "1828399793", "2116483574", "", "2149652326", "2004076731", "1890671033" ], "abstract": [ "We extend the principle of phase-based techniques for measuring optical flow and binocular disparity to multiple motion estimation. We analyse multiple optical flows by estimating phase gradients (instantaneous fre­ quencies) from a set of independent bandpass quadrature filter pairs. Our approach is similar to that of Shizawa and Mase [22], in which nth-order differential operators are required to compute n simultaneous velocity es­ timates. The approach presented here only requires a set of band-pass filters and their first derivatives.", "", "", "In this paper a mechanism for computing multiple motion models along with the corresponding regions of support for a sequence of images is presented. The mechanism is an indirect parametric motion estimation technique. Firstly, a robust technique based on the fundamental constraint equation of multiple optical flow is used to estimate a dense multiple vector field of the scene. Secondly, from the recovered low-level data, motion models and their corresponding regions of support are estimated through a variant of the expectation-maximisation (EM) algorithm. The proposed algorithm is shown to provide good motion estimates and regions of support.", "Using low-order global motion hypotheses and the assumption that there are no more than two motions at a single point, it is possible to successfully decompose motion stimuli that contain additively combined transparent layers. It is assumed that the space of flow fields is sufficiently smooth that a relatively coarse sampling of the flow parameter(s) will produce a set of vector fields that can be combined to reasonably approximate the actual motions in the scene. The definition of support from an exclusively spatial notion is extended to include the spatio-temporal energy domain. The key insight is that when processing transparent motion displays, the support of a motion hypothesis should exist over both a region of space and velocity, so that it can be isolated both spatially and in terms of local velocity. >", "", "A unified theoretical framework for motion transparency and motion boundaries by devising fundamental constraint equations of multiple optical flow is proposed. This framework can handle flow discontinuities at motion boundaries as well as flow multiplicities due to transparency of objects in a unified manner. The constraint equations are formulated by a composition of homogeneously parametrized differential operators on the space-time image. Fitting algorithms for the constraints which result in eigensystem analyses are described. To determine the number of flows, the authors use the margin energy, a measure of goodness of fit which is the difference between the first and the second lower eigenenergy of the eigensystem. They also hypothesize a criterion for multiplicity. The measure and the criterion are derived from the analogy of quantum mechanics. It is demonstrated that the margin energy can determine the transparency and discontinuities of the flow field as regions of more than one flow. >", "This paper is concerned with the estimation of the motions and the segmentation of the spatial supports of the different layers involved in transparent X-ray image sequences. 
Classical motion estimation methods fail on sequences involving transparent effects since they do not explicitly model this phenomenon. We propose a method that comprises three main steps: initial block-matching for two-layer transparent motion estimation, motion clustering with 3D Hough transform, and joint transparent layer segmentation and parametric motion estimation. It is validated on synthetic and real clinical X-ray image sequences. Secondly, we derive an original transparent motion compensation method compatible with any spatiotemporal filtering technique. A direct transparentmotion compensation method is proposed. To overcome its limitations, a novel hybrid filter is introduced which locally selects which type of motion compensation is to be carried out for optimal denoising. Convincing experiments on synthetic and real clinical images are also reported.", "The measurement of multiple velocities using phase-based methods is discussed. In particular, phase gradients (instantaneous frequency) from different bandpass channels (quadrature filter outputs) are used to estimate multiple image velocities in a single neighborhood. The approach is similar to that of M. Shizawa and K. Mase (1990) in which nth-order differential operators are required to compute n simultaneous velocity estimates. However, to use instantaneous frequency, the output of each channel must be differentiated only once. >" ] }
1605.01825
2346111880
This paper deals with a challenging, frequently encountered, yet not properly investigated problem in two-frame optical flow estimation. That is, the input frames are compounds of two imaging layers -- one desired background layer of the scene, and one distracting, possibly moving layer due to transparency or reflection. In this situation, the conventional brightness constancy constraint -- the cornerstone of most existing optical flow methods -- will no longer be valid. In this paper, we propose a robust solution to this problem. The proposed method performs both optical flow estimation, and image layer separation. It exploits a generalized double-layer brightness consistency constraint connecting these two tasks, and utilizes the priors for both of them. Experiments on both synthetic data and real images have confirmed the efficacy of the proposed method. To the best of our knowledge, this is the first attempt towards handling generic optical flow fields of two-frame images containing transparency or reflection.
The flow estimation problem for two-layer images in this paper should not be confused with work on "motion-layer segmentation", albeit the two do share some similarity and the boundary between them can sometimes be fuzzy. For example, Wang and Adelson @cite_41 proposed to segment the image into layers based on a pre-computed optical flow field. Irani et al. @cite_19 used temporal integration to track occluding or transparent moving objects with parametric motion. Black and others @cite_29 @cite_0 @cite_7 @cite_28 proposed a number of algorithms for multiple parametric motion estimation and segmentation. Yang and Li @cite_37 fit a flow field with piecewise parametric models. Weiss @cite_21 presented a nonparametric motion estimation and segmentation method to handle generic smooth motions; this method is thus more closely related to ours. However, the method of Weiss and most of the other aforementioned methods primarily focus on image and motion segmentation, whereas we decompose the whole image into two composite brightness layers and compute one generic flow field for each layer.
{ "cite_N": [ "@cite_37", "@cite_7", "@cite_41", "@cite_28", "@cite_29", "@cite_21", "@cite_0", "@cite_19" ], "mid": [ "", "2154220963", "2142912032", "159023233", "", "2097099938", "", "2041628650" ], "abstract": [ "", "Layered models are a powerful way of describing natural scenes containing smooth surfaces that may overlap and occlude each other. For image motion estimation, such models have a long history but have not achieved the wide use or accuracy of non-layered methods. We present a new probabilistic model of optical flow in layers that addresses many of the shortcomings of previous approaches. In particular, we define a probabilistic graphical model that explicitly captures: 1) occlusions and disocclusions; 2) depth ordering of the layers; 3) temporal consistency of the layer segmentation. Additionally the optical flow in each layer is modeled by a combination of a parametric model and a smooth deviation based on an MRF with a robust spatial prior; the resulting model allows roughness in layers. Finally, a key contribution is the formulation of the layers using an image-dependent hidden field prior based on recent models for static scene segmentation. The method achieves state-of-the-art results on the Middlebury benchmark and produces meaningful scene segmentations as well as detected occlusion regions.", "We describe a system for representing moving images with sets of overlapping layers. Each layer contains an intensity map that defines the additive values of each pixel, along with an alpha map that serves as a mask indicating the transparency. The layers are ordered in depth and they occlude each other in accord with the rules of compositing. Velocity maps define how the layers are to be warped over time. The layered representation is more flexible than standard image transforms and can capture many important properties of natural image sequences. We describe some methods for decomposing image sequences into layers using motion analysis, and we discuss how the representation may be used for image coding and other applications. >", "Videos contain complex spatially-varying motion blur due to the combination of object motion, camera motion, and depth variation with finite shutter speeds. Existing methods to estimate optical flow, deblur the images, and segment the scene fail in such cases. In particular, boundaries between differently moving objects cause problems, because here the blurred images are a combination of the blurred appearances of multiple surfaces. We address this with a novel layered model of scenes in motion. From a motion-blurred video sequence, we jointly estimate the layer segmentation and each layer’s appearance and motion. Since the blur is a function of the layer motion and segmentation, it is completely determined by our generative model. Given a video, we formulate the optimization problem as minimizing the pixel error between the blurred frames and images synthesized from the model, and solve it using gradient descent. We demonstrate our approach on synthetic and real sequences.", "", "Grouping based on common motion, or \"common fate\" provides a powerful cue for segmenting image sequences. Recently a number of algorithms have been developed that successfully perform motion segmentation by assuming that the motion of each group can be described by a low dimensional parametric model (e.g. affine). Typically the assumption is that motion segments correspond to planar patches in 3D undergoing rigid motion. 
Here we develop an alternative approach, where the motion of each group is described by a smooth dense flow field and the stability of the estimation is ensured by means of a prior distribution on the class of flow fields. We present a variant of the EM algorithm that can segment image sequences by fitting multiple smooth flow fields to the spatiotemporal data. Using the method of Green's functions, we show how the estimation of a single smooth flow field can be performed in closed form, thus making the multiple model estimation computationally feasible. Furthermore, the number of models is estimated automatically using similar methods to those used in the parametric approach. We illustrate the algorithm's performance on synthetic and real image sequences.", "", "Computing the motions of several moving objects in image sequences involves simultaneous motion analysis and segmentation. This task can become complicated when image motion changes significantly between frames, as with camera vibrations. Such vibrations make tracking in longer sequences harder, as temporal motion constancy cannot be assumed. The problem becomes even more difficult in the case of transparent motions." ] }
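A worked form of the double-layer brightness consistency constraint referenced in the record above may help make it concrete. Assuming additive composition of a background layer and a reflection layer (the notation below is ours, not taken from the cited papers or the abstract), the two frames satisfy

\[
I_t(\mathbf{x}) = L^B_t(\mathbf{x}) + L^R_t(\mathbf{x}), \qquad
L^B_{t+1}\big(\mathbf{x} + \mathbf{w}^B(\mathbf{x})\big) = L^B_t(\mathbf{x}), \qquad
L^R_{t+1}\big(\mathbf{x} + \mathbf{w}^R(\mathbf{x})\big) = L^R_t(\mathbf{x}),
\]

where \(\mathbf{w}^B\) and \(\mathbf{w}^R\) are the per-layer flow fields. The classical single-layer brightness constancy is recovered when \(L^R \equiv 0\), which is exactly the case that conventional optical flow methods assume.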
1605.01825
2346111880
This paper deals with a challenging, frequently encountered, yet not properly investigated problem in two-frame optical flow estimation. That is, the input frames are compounds of two imaging layers -- one desired background layer of the scene, and one distracting, possibly moving layer due to transparency or reflection. In this situation, the conventional brightness constancy constraint -- the cornerstone of most existing optical flow methods -- will no longer be valid. In this paper, we propose a robust solution to this problem. The proposed method performs both optical flow estimation, and image layer separation. It exploits a generalized double-layer brightness consistency constraint connecting these two tasks, and utilizes the priors for both of them. Experiments on both synthetic data and real images have confirmed the efficacy of the proposed method. To the best of our knowledge, this is the first attempt towards handling generic optical flow fields of two-frame images containing transparency or reflection.
The proposed method involves solving two tasks simultaneously: optical flow field estimation, and reflection/transparency layer separation. The second task has been studied extensively. For example, Levin et al. @cite_13 @cite_36 proposed methods for separating an image into two transparent layers using local statistics priors of natural images. Single-image solutions are also investigated in @cite_16 and @cite_31 . To exploit multiple frames, layer separation methods have been proposed based on aligning the frames with one layer @cite_15 @cite_39 @cite_45 or with multiple layers @cite_42 @cite_26 . Sarel and Irani @cite_32 presented an information-theoretic approach for separating transparent layers by minimizing the correlation between the layers. Chen et al. @cite_38 gave a gradient-domain approach for moving layer separation that is also based on information theory. Schechner et al. @cite_12 developed a method for layer separation using image focus as a cue. Using independent component analysis, Farid and Adelson @cite_33 proposed a layer separation method that works on multiple observations under different mixing weights. Techniques for image layer separation have also been developed in the field of intrinsic image/video extraction @cite_1 @cite_8 @cite_5 .
{ "cite_N": [ "@cite_38", "@cite_26", "@cite_33", "@cite_15", "@cite_36", "@cite_8", "@cite_42", "@cite_1", "@cite_32", "@cite_39", "@cite_45", "@cite_5", "@cite_31", "@cite_16", "@cite_13", "@cite_12" ], "mid": [ "2122397382", "2099847307", "", "1573197209", "2107530646", "2136748901", "2100140302", "2116919352", "1558369361", "", "", "1994246617", "2136095362", "1980212291", "2135473332", "1543446142" ], "abstract": [ "Multi-exposure X-ray imaging can see through objects and separate different material into transparent layers. However, layer motion makes the separation task under-determined. Instead of aligning the non-rigid motion, we address the layer separation problem in gradient domain and propose an energy optimization framework to regularize it by explicitly enforcing independence constraint. It is shown that gradient domain allows more accurate and robust independence analysis between non-stationary signal using mutual information (MI) and hence achieves better separation. Furthermore, gradient fields contain sufficient information for full reconstruction of separated layers by solving the Poisson Equation. For efficient regularization of the gradient separation, energy terms based on the Taylor expansion of MI is further derived. Evaluation on both synthesized and real datasets proves the effectiveness of our algorithm and its robustness to complex tissue motion.", "We address the problem of blind separation of multiple source layers from their linear mixtures with unknown mixing coefficients and unknown layer motions. Such mixtures can occur when one takes photos through a transparent medium, like a window glass, and the camera or the medium moves between snapshots. To understand how to achieve correct separation, we study the statistics of natural images in the Labelme data set. We not only confirm the well-known sparsity of image gradients, but also discover new joint behavior patterns of image gradients. Based on these statistical properties, we develop a sparse blind separation algorithm to estimate both layer motions and linear mixing coefficients and then recover all layers. This method can handle general parameterized motions, including translations, scalings, rotations, and other transformations. In addition, the number of layers is automatically identified, and all layers can be recovered, even in the underdetermined case where mixtures are fewer than layers. The effectiveness of this technology is shown in experiments on both simulated and real superimposed images.", "", "When estimating foreground and background layers (or equivalently an alpha matte), it is often the case that pixel measurements contain mixed colours which are a combination of foreground and background. Object boundaries, especially at thin sub-pixel structures like hair, pose a serious problem.In this paper we present a multiple view algorithm for computing the alpha matte. Using a Bayesian framework, we model each pixel as a combined sample from the foreground and background and compute a MAP estimate to factor the two. The novelties in this work include the incorporation of three different types of priors for enhancing the results in problematic scenes. The priors used are inequality constraints on colour and alpha values, spatial continuity, and the probability distribution of alpha values.The combination of these priors result in accurate and visually satisfying estimates. We demonstrate the method on real image sequences with varying degrees of geometric and photometric complexity. 
The output enables virtual objects to be added between the foreground and background layers, and we give examples of this augmentation to the original sequences.", "When we take a picture through transparent glass, the image we obtain is often a linear superposition of two images: The image of the scene beyond the glass plus the image of the scene reflected by the glass. Decomposing the single input image into two images is a massively ill-posed problem: In the absence of additional knowledge about the scene being viewed, there are an infinite number of valid decompositions. In this paper, we focus on an easier problem: user assisted separation in which the user interactively labels a small number of gradients as belonging to one of the layers. Even given labels on part of the gradients, the problem is still ill-posed and additional prior knowledge is needed. Following recent results on the statistics of natural images, we use a sparsity prior over derivative filters. This sparsity prior is optimized using the iterative reweighted least squares (IRLS) approach. Our results show that using a prior derived from the statistics of natural images gives a far superior performance compared to a Gaussian prior and it enables good separations from a modest number of labeled gradients.", "Intrinsic images are a useful midlevel description of scenes proposed by H.G. Barrow and J.M. Tenenbaum (1978). An image is de-composed into two images: a reflectance image and an illumination image. Finding such a decomposition remains a difficult problem in computer vision. We focus on a slightly, easier problem: given a sequence of T images where the reflectance is constant and the illumination changes, can we recover T illumination images and a single reflectance image? We show that this problem is still imposed and suggest approaching it as a maximum-likelihood estimation problem. Following recent work on the statistics of natural images, we use a prior that assumes that illumination images will give rise to sparse filter outputs. We show that this leads to a simple, novel algorithm for recovering reflectance images. We illustrate the algorithm's performance on real and synthetic image sequences.", "Many natural images contain reflections and transparency, i.e., they contain mixtures of reflected and transmitted light. When viewed from a moving camera, these appear as the superposition of component layer images moving relative to each other. The problem of multiple motion recovery has been previously studied by a number of researchers. However no one has yet demonstrated how to accurately recover the component images themselves. In this paper we develop an optimal approach to recovering layer images and their associated motions from an arbitrary number of composite images. We develop two different techniques for estimating the component layer images given known motion estimates. The first approach uses constrained least squares to recover the layer images. The second approach iteratively refines lower and upper bounds on the layer images using two novel compositing operations, namely minimum- and maximum-composites of aligned images. We combine these layer extraction techniques with a dominant motion estimator and a subsequent motion refinement stage. 
This results in a completely automated system that recovers transparent images and motions from a collection of input images.", "Interpreting real-world images requires the ability distinguish the different characteristics of the scene that lead to its final appearance. Two of the most important of these characteristics are the shading and reflectance of each point in the scene. We present an algorithm that uses multiple cues to recover shading and reflectance intrinsic images from a single image. Using both color information and a classifier trained to recognize gray-scale patterns, given the lighting direction, each image derivative is classified as being caused by shading or a change in the surface's reflectance. The classifiers gather local evidence about the surface's form and color, which is then propagated using the generalized belief propagation algorithm. The propagation step disambiguates areas of the image where the correct classification is not clear from local evidence. We use real-world images to demonstrate results and show how each component of the system affects the results.", "In this paper we present an approach for separating two transparent layers in images and video sequences. Given two initial unknown physical mixtures, I 1 and I 2, of real scene layers, L 1 and L 2, we seek a layer separation which minimizes the structural correlations across the two layers, at every image point. Such a separation is achieved by transferring local grayscale structure from one image to the other wherever it is highly correlated with the underlying local grayscale structure in the other image, and vice versa. This bi-directional transfer operation, which we call the “layer information exchange”, is performed on diminishing window sizes, from global image windows (i.e., the entire image), down to local image windows, thus detecting similar grayscale structures at varying scales across pixels. We show the applicability of this approach to various real-world scenarios, including image and video transparency separation. In particular, we show that this approach can be used for separating transparent layers in images obtained under different polarizations, as well as for separating complex non-rigid transparent motions in video sequences. These can be done without prior knowledge of the layer mixing model (simple additive, alpha-mated composition with an unknown alpha-map, or other), and under unknown complex temporal changes (e.g., unknown varying lighting conditions).", "", "", "We present a method to decompose a video into its intrinsic components of reflectance and shading, plus a number of related example applications in video editing such as segmentation, stylization, material editing, recolorization and color transfer. Intrinsic decomposition is an ill-posed problem, which becomes even more challenging in the case of video due to the need for temporal coherence and the potentially large memory requirements of a global approach. Additionally, user interaction should be kept to a minimum in order to ensure efficiency. We propose a probabilistic approach, formulating a Bayesian Maximum a Posteriori problem to drive the propagation of clustered reflectance values from the first frame, and defining additional constraints as priors on the reflectance and shading. We explicitly leverage temporal information in the video by building a causal-anticausal, coarse-to-fine iterative scheme, and by relying on optical flow information. 
We impose no restrictions on the input video, and show examples representing a varied range of difficult cases. Our method is the first one designed explicitly for video; moreover, it naturally ensures temporal consistency, and compares favorably against the state of the art in this regard.", "Layer decomposition from a single image is an under-constrained problem, because there are more unknowns than equations. This paper studies a slightly easier but very useful alternative where only the background layer has substantial image gradients and structures. We propose to solve this useful alternative by an expectation-maximization (EM) algorithm that employs the hidden Markov model (HMM), which maintains spatial coherency of smooth and overlapping layers, and helps to preserve image details of the textured background layer. We demonstrate that, using a small amount of user input, various seemingly unrelated problems in computational photography can be effectively addressed by solving this alternative using our EM-HMM algorithm.", "This paper addresses extracting two layers from an image where one layer is smoother than the other. This problem arises most notably in intrinsic image decomposition and reflection interference removal. Layer decomposition from a single-image is inherently ill-posed and solutions require additional constraints to be enforced. We introduce a novel strategy that regularizes the gradients of the two layers such that one has a long tail distribution and the other a short tail distribution. While imposing the long tail distribution is a common practice, our introduction of the short tail distribution on the second layer is unique. We formulate our problem in a probabilistic framework and describe an optimization scheme to solve this regularization with only a few iterations. We apply our approach to the intrinsic image and reflection removal problems and demonstrate high quality layer separation on par with other techniques but being significantly faster than prevailing methods.", "Certain simple images are known to trigger a percept of transparency: the input image I is perceived as the sum of two images I(x,y) = I1(x,y) + I2(x,y). This percept is puzzling. First, why do we choose the \"more complicated\" description with two images rather than the \"simpler\" explanation I(x,y) = I1(x,y) + 0 ? Second, given the infinite number of ways to express I as a sum of two images, how do we compute the \"best\" decomposition? Here we suggest that transparency is the rational percept of a system that is adapted to the statistics of natural scenes. We present a probabilistic model of images based on the qualitative statistics of derivative filters and \"corner detectors\" in natural scenes and use this model to find the most probable decomposition of a novel image. The optimization is performed using loopy belief propagation. We show that our model computes perceptually \"correct\" decompositions on synthetic images and discuss its application to real images.", "Consider situations where the depth at each point in the scene is multi-valued, due to the presence of a virtual image semi-reflected by a transparent surface. The semi-reflected image is linearly superimposed on the image of an object that is behind the transparent surface. A novel approach is proposed for the separation of the superimposed layers. Focusing on either of the layers yields initial separation, but crosstalk remains. The separation is enhanced by mutual blurring of the perturbing components in the images. 
However, this blurring requires the estimation of the defocus blur kernels. We thus propose a method for self calibration of the blur kernels, given the raw images. The kernels are sought to minimize the mutual information of the recovered layers. Autofocusing and depth estimation in the presence of semi-reflections are also considered. Experimental results are presented." ] }
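As a schematic summary of the gradient-sparsity separation methods surveyed above, the single-image variant can be posed as an ill-posed decomposition problem. The exact priors, weights, and user constraints differ from paper to paper, so the following is only a representative form in our own notation:

\[
\min_{L_1,\, L_2} \; \sum_{\mathbf{x}} \rho\big(\nabla L_1(\mathbf{x})\big) + \rho\big(\nabla L_2(\mathbf{x})\big)
\quad \text{s.t.} \quad L_1 + L_2 = I, \;\; L_1, L_2 \ge 0,
\]

where \(\rho(z) = |z|^p\) with \(p < 1\) models the long-tailed (sparse) gradient statistics of natural images; @cite_16 instead regularizes the two layers asymmetrically, giving one a long-tailed and the other a short-tailed gradient distribution.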
1605.01825
2346111880
This paper deals with a challenging, frequently encountered, yet not properly investigated problem in two-frame optical flow estimation. That is, the input frames are compounds of two imaging layers -- one desired background layer of the scene, and one distracting, possibly moving layer due to transparency or reflection. In this situation, the conventional brightness constancy constraint -- the cornerstone of most existing optical flow methods -- will no longer be valid. In this paper, we propose a robust solution to this problem. The proposed method performs both optical flow estimation, and image layer separation. It exploits a generalized double-layer brightness consistency constraint connecting these two tasks, and utilizes the priors for both of them. Experiments on both synthetic data and real images have confirmed the efficacy of the proposed method. To the best of our knowledge, this is the first attempt towards handling generic optical flow fields of two-frame images containing transparency or reflection.
In the context of stereo matching with transparency, Szeliski and Golland @cite_11 simultaneously recovered disparities, true colors, and opacities of visible surface elements. Tsin et al. @cite_27 estimated both the depths and the colors of the component layers. Li et al. @cite_6 proposed a simultaneous video defogging and stereo matching algorithm.
{ "cite_N": [ "@cite_27", "@cite_6", "@cite_11" ], "mid": [ "2116086749", "1923499759", "1906648922" ], "abstract": [ "In this paper, we address stereo matching in the presence of a class of non-Lambertian effects, where image formation can be modeled as the additive superposition of layers at different depths. The presence of such effects makes it impossible for traditional stereo vision algorithms to recover depths using direct color matching-based methods. We develop several techniques to estimate both depths and colors of the component layers. Depth hypotheses are enumerated in pairs, one from each layer, in a nested plane sweep. For each pair of depth hypotheses, matching is accomplished using spatial-temporal differencing. We then use graph cut optimization to solve for the depths of both layers. This is followed by an iterative color update algorithm which we proved to be convergent. Our algorithm recovers depth and color estimates for both synthetic and real image sequences.", "We present a method to jointly estimate scene depth and recover the clear latent image from a foggy video sequence. In our formulation, the depth cues from stereo matching and fog information reinforce each other, and produce superior results than conventional stereo or defogging algorithms. We first improve the photo-consistency term to explicitly model the appearance change due to the scattering effects. The prior matting Laplacian constraint on fog transmission imposes a detail-preserving smoothness constraint on the scene depth. We further enforce the ordering consistency between scene depth and fog transmission at neighboring points. These novel constraints are formulated together in an MRF framework, which is optimized iteratively by introducing auxiliary variables. The experiment results on real videos demonstrate the strength of our method.", "This paper formulates and solves a new variant of the stereo correspondence problem: simultaneously recovering the disparities, true colors, and opacities of visible surface elements. This problem arises in newer applications of stereo reconstruction, such as view interpolation and the layering of real imagery with synthetic graphics for special effects and virtual studio applications. While this problem is intrinsically more difficult than traditional stereo correspondence, where only the disparities are being recovered, it provides a principled way of dealing with commonly occurring problems such as occlusions and the handling of mixed (foreground background) pixels near depth discontinuities. It also provides a novel means for separating foreground and background objects (matting), without the use of a special blue screen. We formulate the problem as the recovery of colors and opacities in a generalized 3-D (x, y, d) disparity space, and solve the problem using a combination of initial evidence aggregation followed by iterative energy minimization." ] }
1605.01825
2346111880
This paper deals with a challenging, frequently encountered, yet not properly investigated problem in two-frame optical flow estimation. That is, the input frames are compounds of two imaging layers -- one desired background layer of the scene, and one distracting, possibly moving layer due to transparency or reflection. In this situation, the conventional brightness constancy constraint -- the cornerstone of most existing optical flow methods -- will no longer be valid. In this paper, we propose a robust solution to this problem. The proposed method performs both optical flow estimation, and image layer separation. It exploits a generalized double-layer brightness consistency constraint connecting these two tasks, and utilizes the priors for both of them. Experiments on both synthetic data and real images have confirmed the efficacy of the proposed method. To the best of our knowledge, this is the first attempt towards handling generic optical flow fields of two-frame images containing transparency or reflection.
The recent work of Xue et al. @cite_44 has a formulation very similar to ours. However, its goal and motivation, obstruction-free photography from a video sequence, differ from ours, and the underlying assumptions on the flow fields, the employed flow solvers, and the initialization techniques differ as well.
{ "cite_N": [ "@cite_44" ], "mid": [ "1978900400" ], "abstract": [ "We present a unified computational approach for taking photos through reflecting or occluding elements such as windows and fences. Rather than capturing a single image, we instruct the user to take a short image sequence while slightly moving the camera. Differences that often exist in the relative position of the background and the obstructing elements from the camera allow us to separate them based on their motions, and to recover the desired background scene as if the visual obstructions were not there. We show results on controlled experiments and many real and practical scenarios, including shooting through reflections, fences, and raindrop-covered windows." ] }
1605.01760
2346422319
We present a new method for performing Boolean operations on volumes represented as triangle meshes. In contrast to existing methods which treat meshes as 3D polyhedra and try to partition the faces at their exact intersection curves, we treat meshes as adaptive surfaces which can be arbitrarily refined. Rather than depending on computing precise face intersections, our approach refines the input meshes in the intersection regions, then discards intersecting triangles and fills the resulting holes with high-quality triangles. The original intersection curves are approximated to a user-definable precision, and our method can identify and preserve creases and sharp features. Advantages of our approach include the ability to trade speed for accuracy, support for open meshes, and the ability to incorporate tolerances to handle cases where large numbers of faces are slightly inter-penetrating or near-coincident.
BSP-based methods are highly effective for mesh Booleans. With careful design of predicates, provably robust methods have been presented @cite_4 . Campen and Kobbelt @cite_8 extended this technique, improving performance with an adaptive octree and fixed-precision arithmetic. Wang and Manocha @cite_1 presented a fast and robust technique for extracting an output mesh from a BSP tree. However, the output mesh is again completely re-tessellated, which is problematic in many contexts where the input meshes carry properties bound to geometric elements, such as UV maps.
{ "cite_N": [ "@cite_1", "@cite_4", "@cite_8" ], "mid": [ "2113997045", "", "2117442141" ], "abstract": [ "We present an efficient algorithm to extract the manifold surface that approximates the boundary of a solid represented by a Binary Space Partition (BSP) tree. Our polygonization algorithm repeatedly performs clipping operations on volumetric cells that correspond to a spatial convex partition and computes the boundary by traversing the connected cells. We use point-based representations along with finite-precision arithmetic to improve the efficiency and generate the B-rep approximation of a BSP solid. The core of our polygonization method is a novel clipping algorithm that uses a set of logical operations to make it resistant to degeneracies resulting from limited precision of floating-point arithmetic. The overall BSP to B-rep conversion algorithm can accurately generate boundaries with sharp and small features, and is faster than prior methods. At the end of this paper, we use this algorithm for a few geometric processing applications including Boolean operations, model repair, and mesh reconstruction.", "", "We present a new technique to implement operators that modify the topology of polygonal meshes at intersections and self-intersections. Depending on the modification strategy, this effectively results in operators for Boolean combinations or for the construction of outer hulls that are suited for mesh repair tasks and accurate meshbased front tracking of deformable materials that split and merge. By combining an adaptive octree with nested binary space partitions (BSP), we can guarantee exactness (= correctness) and robustness (= completeness) of the algorithm while still achieving higher performance and less memory consumption than previous approaches. The efficiency and scalability in terms of runtime and memory is obtained by an operation localization scheme. We restrict the essential computations to those cells in the adaptive octree where intersections actually occur. Within those critical cells, we convert the input geometry into a plane-based BSP-representation which allows us to perform all computations exactly even with fixed precision arithmetics. We carefully analyze the precision requirements of the involved geometric data and predicates in order to guarantee correctness and show how minimal input mesh quantization can be used to safely rely on computations with standard floating point numbers. We properly evaluate our method with respect to precision, robustness, and efficiency." ] }
1605.01760
2346422319
We present a new method for performing Boolean operations on volumes represented as triangle meshes. In contrast to existing methods which treat meshes as 3D polyhedra and try to partition the faces at their exact intersection curves, we treat meshes as adaptive surfaces which can be arbitrarily refined. Rather than depending on computing precise face intersections, our approach refines the input meshes in the intersection regions, then discards intersecting triangles and fills the resulting holes with high-quality triangles. The original intersection curves are approximated to a user-definable precision, and our method can identify and preserve creases and sharp features. Advantages of our approach include the ability to trade speed for accuracy, support for open meshes, and the ability to incorporate tolerances to handle cases where large numbers of faces are slightly inter-penetrating or near-coincident.
Various other mesh processing techniques have been developed to provide ``Boolean-like'' behavior. For example, Bernstein and Wojtan @cite_13 present a method for adaptively merging meshes as they collide. @cite_16 approximate Boolean union when intersections are detected during mesh-based fluid surface tracking. Similar to our approach, their method deletes overlaps and fills the resulting gaps. However, rather than a simple polygon fill, our method uses adaptive front marching to closely approximate the intersection curves, and it can preserve sharp features of the input.
{ "cite_N": [ "@cite_16", "@cite_13" ], "mid": [ "1974176857", "1966900315" ], "abstract": [ "We present a novel explicit surface tracking method. Its main advantage over existing approaches is the fact that it is both completely grid-free and fast which makes it ideal for the use in large unbounded domains. A further advantage is that its running time is less sensitive to temporal variations of the input mesh than existing approaches. In terms of performance, the method provides a good trade-off point between speed and quality. The main idea behind our approach to handle topological changes is to delete all overlapping triangles and to fill or join the resulting holes in a robust and efficient way while guaranteeing that the output mesh is both manifold and without boundary. We demonstrate the flexibility, speed and quality of our method in various applications such as Eulerian and Lagrangian liquid simulations and the simulation of solids under large plastic deformations.", "This paper presents a method for computing topology changes for triangle meshes in an interactive geometric modeling environment. Most triangle meshes in practice do not exhibit desirable geometric properties, so we develop a solution that is independent of standard assumptions and robust to geometric errors. Specifically, we provide the first method for topology change applicable to arbitrary non-solid, non-manifold, non-closed, self-intersecting surfaces. We prove that this new method for topology change produces the expected conventional results when applied to solid (closed, manifold, non-self-intersecting) surfaces---that is, we prove a backwards-compatibility property relative to prior work. Beyond solid surfaces, we present empirical evidence that our method remains tolerant to a variety of surface aberrations through the incorporation of a novel error correction scheme. Finally, we demonstrate how topology change applied to non-solid objects enables wholly new and useful behaviors." ] }
1605.01663
2345627551
Many verification tools come out of academic projects, whose natural constraints do not typically lead to a strong focus on usability. For widespread use, however, usability is essential. Using a well-known benchmark, the Tokeneer problem, we evaluate the usability of a recent and promising verification tool: AutoProof. The results show the efficacy of the tool in verifying a real piece of software and automatically discharging nearly two thirds of verification conditions. At the same time, the case study shows the demand for improved documentation and emphasizes the need for improvement in the tool itself and in the Eiffel IDE.
All these approaches (and others described in the literature) still leave an open issue: they are built around strict formal notations that constrain the development process from the very beginning, and thus offer little flexibility. To overcome this problem, a seamless methodological connection built on top of a portfolio of diverse notations and methods is presented in @cite_1 . Another approach is presented in @cite_22 @cite_12 using @cite_4 , where users start the development of a system in a strict formal notation (i.e., Event-B) and then automatically translate it to Java code with embedded JML @cite_21 specifications (following the Design-by-Contract methodology). Even though this approach enables users with less mathematical expertise to work on formal developments, it does not provide a seamless development methodology as presented in this paper.
{ "cite_N": [ "@cite_4", "@cite_22", "@cite_21", "@cite_1", "@cite_12" ], "mid": [ "", "2408436717", "2094160561", "2167237435", "2949510493" ], "abstract": [ "", "This paper describes a case study on the use of a formal methods tool for checking security properties of Tokeneer, a U.S. National Security Agency (NSA) project developed by Praxis, and released in 2008. We modelled Tokeneer as a series of abstract mathematical models related refinement steps in Event-B. We used the Rodin toolset for modelling Tokeneer in Event-B and for discharging associated proof obligations, and we used the EventB2Java code generator to generate Java code for the Event-B model of Tokeneer. After that, we wrote a series of JUnit tests to validate if the Java implementation of Tokeneer adhered to the security properties of Tokeneer described in the documentation provided by Praxis. To the best of our knowledge, modelling Tokeneer in Event-B and checking that its implementation adheres to those security properties is something that hasn’t been attempted before.", "JML is a behavioral interface specification language tailored to Java(TM). Besides pre- and postconditions, it also allows assertions to be intermixed with Java code; these aid verification and debugging. JML is designed to be used by working software engineers; to do this it follows Eiffel in using Java expressions in assertions. JML combines this idea from Eiffel with the model-based approach to specifications, typified by VDM and Larch, which results in greater expressiveness. Other expressiveness advantages over Eiffel include quantifiers, specification-only variables, and frame conditions.This paper discusses the goals of JML, the overall approach, and describes the basic features of the language through examples. It is intended for readers who have some familiarity with both Java and behavioral specification using pre- and postconditions.", "The success of a number of projects has been shown to be significantly improved by the use of a formalism. However, there remains an open issue: to what extent can a development process based on a singular formal notation and method succeed. The majority of approaches demonstrate a low level of flexibility by attempting to use a single notation to express all of the different aspects encountered in software development. Often, these approaches leave a number of scalability issues open. We prefer a more eclectic approach. In our experience, the use of a formalism-based toolkit with adequate notations for each development phase is a viable solution. Following this principle, any specific notation is used only where and when it is really suitable and not necessarily over the entire software lifecycle. The approach explored in this article is perhaps slowly emerging in practice - we hope to accelerate its adoption. However, the major challenge is still finding the best way to instantiate it for each specific application scenario. In this work, we describe a development process and method for automotive applications which consists of five phases. The process recognizes the need for having adequate (and tailored) notations (Problem Frames, Requirements State Machine Language, and Event-B) for each development phase as well as direct traceability between the documents produced during each phase. This allows for a stepwise verification validation of the system under development. 
The ideas for the formal development method have evolved over two significant case studies carried out in the DEPLOY project.", "Stepwise refinement and Design-by-Contract are two formal approaches for modelling systems. These approaches are widely used in the development of systems. Both approaches have (dis-)advantages. This thesis aims to answer, is it possible to combine both approaches in the development of systems, providing the user with the benefits of both? We answer this question by translating the stepwise refinement method with Event-B to Design-by-Contract with Java and JML, so users can take full advantage of both formal approaches without losing their benefits. This thesis presents a set of syntactic rules that translates Event-B to JML-annotated Java code. It also presents the implementation of the syntactic rules as the EventB2Java tool. We used the tool to translate several Event-B models. It generated JML-annotated Java code for all the considered models that serve as initial implementation. We also used EventB2Java for the development of two software applications. Additionally, we compared EventB2Java against two other tools for Event-B code generation. EventB2Java enables users to start the software development process in Event-B, where users can model the system and prove its consistency, to then transition to JML-annotated Java code, where users can continue the development process." ] }
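For readers unfamiliar with Design-by-Contract, the following is a small, hypothetical illustration of the style in Python. Note that EventB2Java actually emits Java code with JML annotations; this sketch is not its output and only conveys what machine-checkable pre- and postconditions look like:

```python
# Hypothetical Design-by-Contract illustration: a pre-/postcondition
# decorator. Real EventB2Java output is Java with JML annotations.
from functools import wraps

def contract(pre=None, post=None):
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
            return result
        return wrapper
    return deco

@contract(pre=lambda n: n >= 0,
          post=lambda r, n: r * r <= n < (r + 1) * (r + 1))
def isqrt(n: int) -> int:
    """Integer square root, contract-checked at runtime."""
    r = int(n ** 0.5)
    while r * r > n:
        r -= 1
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

print(isqrt(10))   # 3; calling isqrt(-1) would raise an AssertionError
```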
1605.01368
2346935088
We introduce a novel unsupervised loss function for learning semantic segmentation with deep convolutional neural nets (ConvNet) when densely labeled training images are not available. More specifically, the proposed loss function penalizes the L1-norm of the gradient of the label probability vector image , i.e. total variation, produced by the ConvNet. This can be seen as a regularization term that promotes piecewise smoothness of the label probability vector image produced by the ConvNet during learning. The unsupervised loss function is combined with a supervised loss in a semi-supervised setting to learn ConvNets that can achieve high semantic segmentation accuracy even when only a tiny percentage of the pixels in the training images are labeled. We demonstrate significant improvements over the purely supervised setting in the Weizmann horse, Stanford background and Sift Flow datasets. Furthermore, we show that using the proposed piecewise smoothness constraint in the learning phase significantly outperforms post-processing results from a purely supervised approach with Markov Random Fields (MRF). Finally, we note that the framework we introduce is general and can be used to learn to label other types of structures such as curvilinear structures by modifying the unsupervised loss function accordingly.
Deep learning has recently made significant advances in the field of computer vision. ConvNets @cite_12 have recently been used for semantic segmentation @cite_28 @cite_20 @cite_3 @cite_0 @cite_37 @cite_25 @cite_22 @cite_23 . Ciresan et al. @cite_0 extracted patches around each pixel and trained a ConvNet to detect cell membranes in electron microscopy images. Farabet et al. @cite_28 proposed a multi-scale ConvNet to extract features for scene labeling. Chen et al. @cite_3 incorporate fully connected CRFs to improve semantic segmentation. Long et al. @cite_20 propose the Fully Convolutional Network (FCN), which replaces the fully connected layers in the network with convolutional layers; an FCN takes an input image of any size, outputs a probability map of the same size, and has shown significant accuracy improvements for dense prediction. Liu et al. @cite_23 also use a CRF on top of an FCN and show fine-grained improvements for semantic segmentation.
{ "cite_N": [ "@cite_37", "@cite_25", "@cite_22", "@cite_28", "@cite_3", "@cite_0", "@cite_23", "@cite_12", "@cite_20" ], "mid": [ "2951277909", "1507506748", "2148850220", "2022508996", "1923697677", "2167510172", "2111077768", "2310919327", "1903029394" ], "abstract": [ "Scene parsing is a technique that consist on giving a label to all pixels in an image according to the class they belong to. To ensure a good visual coherence and a high class accuracy, it is essential for a scene parser to capture image long range dependencies. In a feed-forward architecture, this can be simply achieved by considering a sufficiently large input context patch, around each pixel to be labeled. We propose an approach consisting of a recurrent convolutional neural network which allows us to consider a large input context, while limiting the capacity of the model. Contrary to most standard approaches, our method does not rely on any segmentation methods, nor any task-specific features. The system is trained in an end-to-end manner over raw pixels, and models complex spatial dependencies with low inference cost. As the context size increases with the built-in recurrence, the system identifies and corrects its own errors. Our approach yields state-of-the-art performance on both the Stanford Background Dataset and the SIFT Flow Dataset, while remaining very fast at test time.", "We aim to detect all instances of a category in an image and, for each instance, mark the pixels that belong to it. We call this task Simultaneous Detection and Segmentation (SDS). Unlike classical bounding box detection, SDS requires a segmentation and not just a box. Unlike classical semantic segmentation, we require individual object instances. We build on recent work that uses convolutional neural networks to classify category-independent region proposals (R-CNN [16]), introducing a novel architecture tailored for SDS. We then use category-specific, top-down figure-ground predictions to refine our bottom-up proposals. We show a 7 point boost (16 relative) over our baselines on SDS, a 5 point boost (10 relative) over state-of-the-art on semantic segmentation, and state-of-the-art performance in object detection. Finally, we provide diagnostic tools that unpack performance and provide directions for future work.", "In this work, we address the problem of performing class specific unsupervised object segmentation, i.e., automatic segmentation without annotated training images. We propose a hybrid graph model (HGM) to integrate recognition and segmentation into a unified process. The vertices of a hybrid graph represent the entities associated to the object class or local image features. The vertices are connected by directed edges and or undirected ones, which represent the dependence between the shape priors of the class (for recognition) and the similarity between the color texture priors within an image (for segmentation), respectively. By simultaneously considering the Markov chain formed by the directed subgraph and the minimal cut of the undirected subgraph, the likelihood that the vertices belong to the underlying class can be computed. Given a set of images each containing objects of the same class, our HGM based method automatically identifies in each image the area that the objects occupy. Experiments on 14 sets of images show promising results.", "Scene labeling consists of labeling each pixel in an image with the category of the object it belongs to. 
We propose a method that uses a multiscale convolutional network trained from raw pixels to extract dense feature vectors that encode regions of multiple sizes centered on each pixel. The method alleviates the need for engineered features, and produces a powerful representation that captures texture, shape, and contextual information. We report results using multiple postprocessing methods to produce the final labeling. Among those, we propose a technique to automatically retrieve, from a pool of segmentation components, an optimal set of components that best explain the scene; these components are arbitrary, for example, they can be taken from a segmentation tree or from any family of oversegmentations. The system yields record accuracies on the SIFT Flow dataset (33 classes) and the Barcelona dataset (170 classes) and near-record accuracy on Stanford background dataset (eight classes), while being an order of magnitude faster than competing approaches, producing a 320×240 image labeling in less than a second, including feature extraction.", "Deep Convolutional Neural Networks (DCNNs) have recently shown state of the art performance in high level vision tasks, such as image classification and object detection. This work brings together methods from DCNNs and probabilistic graphical models for addressing the task of pixel-level classification (also called \"semantic image segmentation\"). We show that responses at the final layer of DCNNs are not sufficiently localized for accurate object segmentation. This is due to the very invariance properties that make DCNNs good for high level tasks. We overcome this poor localization property of deep networks by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF). Qualitatively, our \"DeepLab\" system is able to localize segment boundaries at a level of accuracy which is beyond previous methods. Quantitatively, our method sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 71.6 IOU accuracy in the test set. We show how these results can be obtained efficiently: Careful network re-purposing and a novel application of the 'hole' algorithm from the wavelet community allow dense computation of neural net responses at 8 frames per second on a modern GPU.", "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or non-membrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. 
For pixel error, our approach is the only one outperforming a second human observer.", "This paper addresses semantic image segmentation by incorporating rich information into Markov Random Field (MRF), including high-order relations and mixture of label contexts. Unlike previous works that optimized MRFs using iterative algorithm, we solve MRF by proposing a Convolutional Neural Network (CNN), namely Deep Parsing Network (DPN), which enables deterministic end-toend computation in a single forward pass. Specifically, DPN extends a contemporary CNN architecture to model unary terms and additional layers are carefully devised to approximate the mean field algorithm (MF) for pairwise terms. It has several appealing properties. First, different from the recent works that combined CNN and MRF, where many iterations of MF were required for each training image during back-propagation, DPN is able to achieve high performance by approximating one iteration of MF. Second, DPN represents various types of pairwise terms, making many existing works as its special cases. Third, DPN makes MF easier to be parallelized and speeded up in Graphical Processing Unit (GPU). DPN is thoroughly evaluated on the PASCAL VOC 2012 dataset, where a single DPN model yields a new state-of-the-art segmentation accuracy of 77.5 .", "", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image." ] }
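The unsupervised total-variation loss described in the abstract above, i.e. the L1 norm of the spatial gradient of the label probability image, is simple to state in code. Below is a NumPy sketch using forward finite differences; in the actual method this would be a differentiable term inside the ConvNet's training loss, and the anisotropic form chosen here is our own simplification:

```python
# Sketch of a total-variation (TV) regularizer on a label-probability image.
import numpy as np

def tv_loss(prob: np.ndarray) -> float:
    """prob: (H, W, C) per-pixel class probabilities (e.g. softmax output,
    each pixel's C values summing to 1). Returns the anisotropic TV."""
    dy = np.abs(np.diff(prob, axis=0)).sum()   # vertical finite differences
    dx = np.abs(np.diff(prob, axis=1)).sum()   # horizontal finite differences
    return float(dx + dy)

# Piecewise-constant label maps incur zero penalty; noisy ones do not.
H, W, C = 8, 8, 3
smooth = np.zeros((H, W, C)); smooth[..., 0] = 1.0
noisy = np.random.dirichlet(np.ones(C), size=(H, W))
assert tv_loss(smooth) == 0.0
assert tv_loss(noisy) > 0.0
```

Minimizing this term during semi-supervised training pushes the network toward piecewise-smooth label probability maps, which is the stated role of the unsupervised loss.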
1605.01014
2952074561
We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.
Human body pose estimation. Estimating human pose is more challenging due to its greater degree of articulation. Pictorial structures is one of the early influential models for representing human body structure @cite_22 . The deformable part model (DPM) achieved significant progress in human body detection by combining pictorial structures with strong template features and latent-SVM learning @cite_55 . Yang and Ramanan extend the model by incorporating body part patterns @cite_25 , while Wang and Li propose a tree-structured learning framework that achieves better performance than handcrafted part connections @cite_47 . @cite_57 apply poselets @cite_58 to generate mid-level features that regularize pictorial structures. Chen and Yuille @cite_21 propose dependent pairwise relations with a graphical model for articulated pose estimation. Deep neural network based methods have also achieved strong performance in this domain: Toshev and Szegedy @cite_36 propose cascaded CNN regressors, Tompson et al. @cite_44 propose joint training of a CNN and a graphical model, and Fan et al. @cite_35 propose a dual-source deep network that combines local appearance with a holistic view. In contrast, our DDN also effectively learns part relationships while being easier to train and more efficient to evaluate.
{ "cite_N": [ "@cite_35", "@cite_22", "@cite_36", "@cite_55", "@cite_21", "@cite_57", "@cite_44", "@cite_47", "@cite_58", "@cite_25" ], "mid": [ "2949447708", "2030536784", "2113325037", "2168356304", "2155394491", "2143487029", "2952422028", "2949822654", "1864464506", "" ], "abstract": [ "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.", "In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.", "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. 
While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function.", "We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact the local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state of the art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.", "Typical approaches to articulated pose estimation combine spatial modelling of the human body with appearance modelling of body parts. This paper aims to push the state-of-the-art in articulated pose estimation in two ways. First we explore various types of appearance representations aiming to substantially improve the body part hypotheses. And second, we draw on and combine several recently proposed powerful ideas such as more flexible spatial models as well as image-conditioned spatial models. In a series of experiments we draw several important conclusions: (1) we show that the proposed appearance representations are complementary, (2) we demonstrate that even a basic tree-structure spatial human body model achieves state-of-the-art performance when augmented with the proper appearance representation, and (3) we show that the combination of the best performing appearance model with a flexible image-conditioned spatial model achieves the best result, significantly improving over the state of the art, on the Leeds Sports Poses'' and Parse'' benchmarks.", "This paper proposes a new hybrid architecture that consists of a deep Convolutional Network and a Markov Random Field. We show how this architecture is successfully applied to the challenging problem of articulated human pose estimation in monocular images. The architecture can exploit structural domain constraints such as geometric relationships between body joint locations. We show that joint training of these two model paradigms improves performance and allows us to significantly outperform existing state-of-the-art techniques.", "Simple tree models for articulated objects prevails in the last decade. However, it is also believed that these simple tree models are not capable of capturing large variations in many scenarios, such as human pose estimation. 
This paper attempts to address three questions: 1) are simple tree models sufficient? more specifically, 2) how to use tree models effectively in human pose estimation? and 3) how shall we use combined parts together with single parts efficiently? Assuming we have a set of single parts and combined parts, and the goal is to estimate a joint distribution of their locations. We surprisingly find that no latent variables are introduced in the Leeds Sport Dataset (LSP) during learning latent trees for deformable model, which aims at approximating the joint distributions of body part locations using minimal tree structure. This suggests one can straightforwardly use a mixed representation of single and combined parts to approximate their joint distribution in a simple tree model. As such, one only needs to build Visual Categories of the combined parts, and then perform inference on the learned latent tree. Our method outperformed the state of the art on the LSP, both in the scenarios when the training images are from the same dataset and from the PARSE dataset. Experiments on animal images from the VOC challenge further support our findings.", "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8 and 40.5 respectively on PASCAL VOC 2009.", "" ] }
1605.01014
2952074561
We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.
Bird part localization: Birds display significant appearance variations between classes and shape variations within the same class. An early work that incorporates a probabilistic model and user responses to localize bird parts is presented in @cite_24 . Chai et al. @cite_53 apply symbiotic segmentation for part detection. The exemplar-based model of @cite_43 , similar to @cite_10 , enforces pose and subcategory consistency to localize bird parts. Recently, CNN-based methods such as part-based R-CNN @cite_0 and Deep LAC @cite_5 have demonstrated significant performance improvements.
{ "cite_N": [ "@cite_53", "@cite_24", "@cite_43", "@cite_0", "@cite_5", "@cite_10" ], "mid": [ "", "2152411181", "2167229723", "66901128", "2345945060", "2032558548" ], "abstract": [ "", "We propose a visual recognition system that is designed for fine-grained visual categorization. The system is composed of a machine and a human user. The user, who is unable to carry out the recognition task by himself, is interactively asked to provide two heterogeneous forms of information: clicking on object parts and answering binary questions. The machine intelligently selects the most informative question to pose to the user in order to identify the object's class as quickly as possible. By leveraging computer vision and analyzing the user responses, the overall amount of human effort required, measured in seconds, is minimized. We demonstrate promising results on a challenging dataset of uncropped images, achieving a significant average reduction in human effort over previous methods.", "In this paper, we propose a novel approach for bird part localization, targeting fine-grained categories with wide variations in appearance due to different poses (including aspect and orientation) and subcategories. As it is challenging to represent such variations across a large set of diverse samples with tractable parametric models, we turn to individual exemplars. Specifically, we extend the exemplar-based models in [4] by enforcing pose and subcategory consistency at the parts. During training, we build pose-specific detectors scoring part poses across subcategories, and subcategory-specific detectors scoring part appearance across poses. At the testing stage, likely exemplars are matched to the image, suggesting part locations whose pose and subcategory consistency are well-supported by the image cues. From these hypotheses, part configuration can be predicted with very high accuracy. Experimental results demonstrate significant performance gains from our method on an extensive dataset: CUB-200-2011 [30], for both localization and classification tasks.", "In this paper, we propose a novel part-pair representation for part localization. In this representation, an object is treated as a collection of part pairs to model its shape and appearance. By changing the set of pairs to be used, we are able to impose either stronger or weaker geometric constraints on the part configuration. As for the appearance, we build pair detectors for each part pair, which model the appearance of an object at different levels of granularities. Our method of part localization exploits the part-pair representation, featuring the combination of non-parametric exemplars and parametric regression models. Non-parametric exemplars help generate reliable part hypotheses from very noisy pair detections. Then, the regression models are used to group the part hypotheses in a flexible way to predict the part locations. We evaluate our method extensively on the dataset CUB-200-2011 [32], where we achieve significant improvement over the state-of-the-art method on bird part localization. We also experiment with human pose estimation, where our method produces comparable results to existing works.", "A system and method are provided. The system includes a processor. The processor is configured to generate a response map for an image, using a four stage convolutional structure. The processor is further configured to generate a plurality of landmark points for the image based on the response map, using a shape basis neural network. 
The processor is additionally configured to generate an optimal shape for the image based on the plurality of landmark points for the image and the response map, using a point deformation neural network. A recognition system configured to identify the image based on the generated optimal shape to generate a recognition result of the image. The processor is also configured to operate a hardware-based machine based on the recognition result.", "We present a novel approach to localizing parts in images of human faces. The approach combines the output of local detectors with a non-parametric set of global models for the part locations based on over one thousand hand-labeled exemplar images. By assuming that the global models generate the part locations as hidden variables, we derive a Bayesian objective function. This function is optimized using a consensus of models for these hidden variables. The resulting localizer handles a much wider range of expression, pose, lighting and occlusion than prior ones. We show excellent performance on a new dataset gathered from the internet and show that our localizer achieves state-of-the-art performance on the less challenging BioID dataset." ] }
1605.01014
2952074561
We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.
General pose estimation: While the above works focus on a specific object domain, a few methods have been proposed for pose estimation on general object categories. As a general framework, DPM has also been shown to be effective beyond human bodies, for example for facial landmark localization @cite_59 . A successful example of more general pose estimation is the regression-based framework of @cite_46 and its variants such as @cite_6 @cite_56 . However, such methods are sensitive to initialization, which our framework avoids through an effective shape basis network.
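To make the sensitivity to initialization concrete, the following is a minimal sketch of the cascaded shape-regression update underlying @cite_46 -style methods: each stage adds a learned linear correction computed from shape-indexed features, so errors in the initial shape propagate through every stage. The feature extractor and the random regressors below are placeholder stand-ins for illustration, not components of any cited method.

```python
import numpy as np

def shape_indexed_features(image, shape):
    # Hypothetical feature extractor: samples the image at the current
    # landmark estimates (shape is an (L, 2) array of (row, col) positions).
    coords = np.clip(shape.astype(int), 0, np.array(image.shape[:2]) - 1)
    return image[coords[:, 0], coords[:, 1]].ravel()

def cascaded_regression(image, init_shape, regressors):
    # Each stage t refines the shape with a learned linear map (R_t, b_t):
    #   shape <- shape + reshape(R_t @ phi(image, shape) + b_t)
    shape = init_shape.copy()
    for R, b in regressors:
        phi = shape_indexed_features(image, shape)
        shape = shape + (R @ phi + b).reshape(shape.shape)
    return shape

# Toy usage: 5 landmarks, 3 random stages (illustration only, no learning here).
rng = np.random.default_rng(0)
image = rng.random((64, 64))
init = np.full((5, 2), 32.0)                       # mean-shape initialization
regs = [(0.01 * rng.standard_normal((10, 5)), np.zeros(10)) for _ in range(3)]
print(cascaded_regression(image, init, regs))
```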
{ "cite_N": [ "@cite_46", "@cite_6", "@cite_59", "@cite_56" ], "mid": [ "", "2111372597", "2047508432", "23474907" ], "abstract": [ "", "Human faces captured in real-world conditions present large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food). Current face landmark estimation approaches struggle under such conditions since they fail to provide a principled way of handling outliers. We propose a novel method, called Robust Cascaded Pose Regression (RCPR) which reduces exposure to outliers by detecting occlusions explicitly and using robust shape-indexed features. We show that RCPR improves on previous landmark estimation methods on three popular face datasets (LFPW, LFW and HELEN). We further explore RCPR's performance by introducing a novel face dataset focused on occlusion, composed of 1,007 faces presenting a wide range of occlusion patterns. RCPR reduces failure cases by half on all four datasets, at the same time as it detects face occlusions with a 80 40 precision recall.", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "This paper explores the localization of pre-defined semantic object parts, which is much more challenging than traditional object detection and very important for applications such as face recognition, HCI and fine-grained object recognition. To address this problem, we make two critical improvements over the widely used deformable part model (DPM). The first is that we use appearance based shape regression to globally estimate the anchor location of each part and then locally refine each part according to the estimated anchor location under the constraint of DPM. The DPM with shape regression (SR-DPM) is more flexible than the traditional DPM by relaxing the fixed anchor location of each part. It enjoys the efficient dynamic programming inference as traditional DPM and can be discriminatively trained via a coordinate descent procedure. The second is that we propose to stack multiple SR-DPMs, where each layer uses the output of previous SR-DPM as the input to progressively refine the result. It provides an analogy to deep neural network while benefiting from hand-crafted feature and model. The proposed methods are applied to human pose estimation, face alignment and general object part localization tasks and achieve state-of-the-art performance." ] }
1605.01014
2952074561
We propose a novel cascaded framework, namely deep deformation network (DDN), for localizing landmarks in non-rigid objects. The hallmarks of DDN are its incorporation of geometric constraints within a convolutional neural network (CNN) framework, ease and efficiency of training, as well as generality of application. A novel shape basis network (SBN) forms the first stage of the cascade, whereby landmarks are initialized by combining the benefits of CNN features and a learned shape basis to reduce the complexity of the highly nonlinear pose manifold. In the second stage, a point transformer network (PTN) estimates local deformation parameterized as thin-plate spline transformation for a finer refinement. Our framework does not incorporate either handcrafted features or part connectivity, which enables an end-to-end shape prediction pipeline during both training and testing. In contrast to prior cascaded networks for landmark localization that learn a mapping from feature space to landmark locations, we demonstrate that the regularization induced through geometric priors in the DDN makes it easier to train, yet produces superior results. The efficacy and generality of the architecture is demonstrated through state-of-the-art performances on several benchmarks for multiple tasks such as facial landmark localization, human body pose estimation and bird part localization.
Learning transformations with CNNs: Agrawal et al. use a Siamese network to predict discretized rigid ego-motion transformations formulated as a classification problem @cite_7 . Razavian et al. @cite_45 analyze how spatial information can be generated with CNNs, of which our SBN and PTN are specific examples in the design of spatial constraints. Our point transformer network is inspired by the spatial transformer network of @cite_2 . Similar to WarpNet @cite_4 , we move beyond the motivation of the spatial transformer as an attention mechanism driven by the classification objective, and instead predict a non-rigid transformation for geometric alignment. In contrast to WarpNet, we exploit both supervised and synthesized landmarks and use the point transformer network only for finer local deformations, while using the earlier stage of the cascade (the shape basis network) for global alignment.
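Since the point transformer network parameterizes its refinement as a thin-plate spline (TPS), a minimal NumPy sketch of fitting and applying a 2D TPS warp from control-point correspondences is given below. The control points and the perturbation are made up for illustration; they are not values or code from the paper.

```python
import numpy as np

def _tps_kernel(r2):
    # TPS radial basis U(r) = r^2 log(r^2), with U(0) = 0 by convention.
    out = np.zeros_like(r2)
    nz = r2 > 0
    out[nz] = r2[nz] * np.log(r2[nz])
    return out

def fit_tps(src, dst):
    """Fit a 2D thin-plate spline mapping src control points onto dst (both (N, 2))."""
    n = src.shape[0]
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = _tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])            # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.vstack([dst, np.zeros((3, 2))])
    params = np.linalg.solve(A, b)                    # n warping weights + 3 affine rows
    return params[:n], params[n:]

def apply_tps(points, src, w, a):
    """Warp arbitrary points (M, 2) with TPS parameters (w: (N, 2), a: (3, 2))."""
    d2 = ((points[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = _tps_kernel(d2)                               # (M, N) kernel evaluations
    P = np.hstack([np.ones((points.shape[0], 1)), points])
    return U @ w + P @ a

# Toy usage with made-up control points: identity plus a local bump at the center.
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [.5, .5]])
dst = src.copy()
dst[4] += [0.1, -0.05]                                # perturb the central point only
w, a = fit_tps(src, dst)
print(apply_tps(np.array([[.5, .5], [.25, .25]]), src, w, a))
```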
{ "cite_N": [ "@cite_45", "@cite_4", "@cite_7", "@cite_2" ], "mid": [ "1747528051", "2952695679", "2951590555", "" ], "abstract": [ "Supervised training of a convolutional network for object classification should make explicit any information related to the class of objects and disregard any auxiliary information associated with the capture of the image or the variation within the object class. Does this happen in practice? Although this seems to pertain to the very final layers in the network, if we look at earlier layers we find that this is not the case. In fact, strong spatial information is implicit. This paper addresses this, in particular, exploiting the image representation at the first fully connected layer, i.e. the global image descriptor which has been recently shown to be most effective in a range of visual recognition tasks. We empirically demonstrate evidences for the finding in the contexts of four different tasks: 2d landmark detection, 2d object keypoints prediction, estimation of the RGB values of input image, and recovery of semantic label of each pixel. We base our investigation on a simple framework with ridge rigression commonly across these tasks, and show results which all support our insight. Such spatial information can be used for computing correspondence of landmarks to a good accuracy, but should potentially be useful for improving the training of the convolutional nets for classification purposes.", "We present an approach to matching images of objects in fine-grained datasets without using part annotations, with an application to the challenging problem of weakly supervised single-view reconstruction. This is in contrast to prior works that require part annotations, since matching objects across class and pose variations is challenging with appearance features alone. We overcome this challenge through a novel deep learning architecture, WarpNet, that aligns an object in one image with a different object in another. We exploit the structure of the fine-grained dataset to create artificial data for training this network in an unsupervised-discriminative learning approach. The output of the network acts as a spatial prior that allows generalization at test time to match real images across variations in appearance, viewpoint and articulation. On the CUB-200-2011 dataset of bird categories, we improve the AP over an appearance-only network by 13.6 . We further demonstrate that our WarpNet matches, together with the structure of fine-grained datasets, allow single-view reconstructions with quality comparable to using annotated point correspondences.", "The dominant paradigm for feature learning in computer vision relies on training neural networks for the task of object recognition using millions of hand labelled images. Is it possible to learn useful features for a diverse set of visual tasks using any other form of supervision? In biology, living organisms developed the ability of visual perception for the purpose of moving and acting in the world. Drawing inspiration from this observation, in this work we investigate if the awareness of egomotion can be used as a supervisory signal for feature learning. As opposed to the knowledge of class labels, information about egomotion is freely available to mobile agents. 
We show that given the same number of training images, features learnt using egomotion as supervision compare favourably to features learnt using class-label as supervision on visual tasks of scene recognition, object recognition, visual odometry and keypoint matching.", "" ] }
1605.00707
2952491602
Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. Discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures [13] and its derivatives. Our experiments on images of a hawkmoth in flight show that our proposed approach significantly improves over existing work [27] for this application, while also being more generally applicable. Our proposed approach localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.
In the context of computer vision, our approach to pose estimation combines ideas from established part-based models @cite_32 @cite_8 @cite_20 , with recent works on unsupervised or weakly supervised part discovery @cite_19 @cite_35 @cite_31 .
{ "cite_N": [ "@cite_35", "@cite_8", "@cite_32", "@cite_19", "@cite_31", "@cite_20" ], "mid": [ "", "2030536784", "2535410496", "2115628259", "2951702175", "2045798786" ], "abstract": [ "", "In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.", "We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers.", "Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. 
One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.", "The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.", "The primary problem dealt with in this paper is the following. Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of \"goodness\" of matching or detection." ] }
1605.00707
2952491602
Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. Discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures [13] and its derivatives. Our experiments on images of a hawkmoth in flight show that our proposed approach significantly improves over existing work [27] for this application, while also being more generally applicable. Our proposed approach localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.
One established part-based model is pictorial structures (PS) @cite_8 @cite_20 , which continues to be the foundation for many 2D and 3D human pose estimation works @cite_36 @cite_30 @cite_11 @cite_2 @cite_10 @cite_23 @cite_15 @cite_34 . PS is a model that integrates the appearance of individual parts (unary terms) with preferred spatial relationships between parts (pairwise terms). Many PS-based works have a one-to-one mapping between parts in the model and annotations provided with the training images @cite_36 @cite_30 @cite_11 @cite_10 @cite_15 @cite_34 . As a result, these models ignore regions of the training images that are unannotated. If unannotated regions contain useful parts, then these models cannot leverage them. In contrast, our work augments traditional PS-based models with useful parts discovered from unannotated regions.
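To make the unary/pairwise structure concrete, here is a minimal sketch of exact min-sum inference for a star-shaped pictorial structures model on a small grid. The dense pairwise cost tables and random unaries are purely illustrative; practical implementations use generalized distance transforms instead of dense tables.

```python
import numpy as np

def star_ps_inference(unary_root, unary_parts, pair_costs):
    """Exact MAP inference for a star-shaped pictorial structures model.

    unary_root  : (H, W) appearance cost of the root part at each location.
    unary_parts : list of (H, W) appearance costs, one per leaf part.
    pair_costs  : list of (H, W, H, W) deformation costs indexed by
                  (root location, leaf location).
    Returns the root location and per-leaf locations minimizing the total cost.
    """
    H, W = unary_root.shape
    total = unary_root.copy()
    best_leaf = []
    for u, d in zip(unary_parts, pair_costs):
        # Message from leaf to root: minimize over leaf locations for every
        # possible root location.
        msg = (u[None, None, :, :] + d).reshape(H, W, -1)
        total += msg.min(axis=2)
        best_leaf.append(msg.argmin(axis=2))
    r = np.unravel_index(total.argmin(), (H, W))
    leaves = [np.unravel_index(b[r], (H, W)) for b in best_leaf]
    return r, leaves

# Toy example: 2 leaves on an 8x8 grid with quadratic "spring" deformation costs.
rng = np.random.default_rng(1)
H = W = 8
ys, xs = np.mgrid[0:H, 0:W]

def spring(dy, dx):
    # Deformation cost preferring the leaf to sit at offset (dy, dx) from the root.
    return ((ys[:, :, None, None] + dy - ys[None, None, :, :]) ** 2 +
            (xs[:, :, None, None] + dx - xs[None, None, :, :]) ** 2).astype(float)

root_u = rng.random((H, W))
leaf_u = [rng.random((H, W)), rng.random((H, W))]
pairs = [0.1 * spring(2, 0), 0.1 * spring(-2, 0)]
print(star_ps_inference(root_u, leaf_u, pairs))
```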
{ "cite_N": [ "@cite_30", "@cite_8", "@cite_36", "@cite_23", "@cite_2", "@cite_15", "@cite_34", "@cite_10", "@cite_20", "@cite_11" ], "mid": [ "2084106378", "2030536784", "2036545421", "2097151019", "1989865957", "", "2047508432", "", "2045798786", "2171125807" ], "abstract": [ "In this paper we consider people detection and articulated pose estimation, two closely related and challenging problems in computer vision. Conceptually, both of these problems can be addressed within the pictorial structures framework (Felzenszwalb and Huttenlocher in Int. J. Comput. Vis. 61(1):55---79, 2005; Fischler and Elschlager in IEEE Trans. Comput. C-22(1):67---92, 1973), even though previous approaches have not shown such generality. A principal difficulty for such a general approach is to model the appearance of body parts. The model has to be discriminative enough to enable reliable detection in cluttered scenes and general enough to capture highly variable appearance. Therefore, as the first important component of our approach, we propose a discriminative appearance model based on densely sampled local descriptors and AdaBoost classifiers. Secondly, we interpret the normalized margin of each classifier as likelihood in a generative model and compute marginal posteriors for each part using belief propagation. Thirdly, non-Gaussian relationships between parts are represented as Gaussians in the coordinate system of the joint between the parts. Additionally, in order to cope with shortcomings of tree-based pictorial structures models, we augment our model with additional repulsive factors in order to discourage overcounting of image evidence. We demonstrate that the combination of these components within the pictorial structures framework results in a generic model that yields state-of-the-art performance for several datasets on a variety of tasks: people detection, upper body pose estimation, and full body pose estimation.", "In this paper we present a computationally efficient framework for part-based modeling and recognition of objects. Our work is motivated by the pictorial structure models introduced by Fischler and Elschlager. The basic idea is to represent an object by a collection of parts arranged in a deformable configuration. The appearance of each part is modeled separately, and the deformable configuration is represented by spring-like connections between pairs of parts. These models allow for qualitative descriptions of visual appearance, and are suitable for generic recognition problems. We address the problem of using pictorial structure models to find instances of an object in an image as well as the problem of learning an object model from training examples, presenting efficient algorithms in both cases. We demonstrate the techniques by learning models that represent faces and human bodies and using the resulting models to locate the corresponding objects in novel images.", "Pictorial structure models are the de facto standard for 2D human pose estimation. Numerous refinements and improvements have been proposed such as discriminatively trained body part detectors, flexible body models, and local and global mixtures. While these techniques allow to achieve state-of-the-art performance for 2D pose estimation, they have not yet been extended to enable pose estimation in 3D. This paper thus proposes a multi-view pictorial structures model that builds on recent advances in 2D pose estimation and incorporates evidence across multiple viewpoints to allow for robust 3D pose estimation. 
We evaluate our multi-view pictorial structures approach on the HumanEva-I and MPII Cooking dataset. In comparison to related work for 3D pose estimation our approach achieves similar or better results while operating on single-frames only and not relying on activity specific motion models or tracking. Notably, our approach outperforms state-of-the-art for activities with more complex motions.", "In this paper we consider the challenging problem of articulated human pose estimation in still images. We observe that despite high variability of the body articulations, human motions and activities often simultaneously constrain the positions of multiple body parts. Modelling such higher order part dependencies seemingly comes at a cost of more expensive inference, which resulted in their limited use in state-of-the-art methods. In this paper we propose a model that incorporates higher order part dependencies while remaining efficient. We achieve this by defining a conditional model in which all body parts are connected a-priori, but which becomes a tractable tree-structured pictorial structures model once the image observations are available. In order to derive a set of conditioning variables we rely on the poselet-based features that have been shown to be effective for people detection but have so far found limited application for articulated human pose estimation. We demonstrate the effectiveness of our approach on three publicly available pose estimation benchmarks improving or being on-par with state of the art in each case.", "Given an image of a person, the problem of human pose estimation can be briefly described as localizing the position and orientation of the body limbs. The complexity of the problem comes from issues like background clutter, changes in viewpoint, changes in appearance, self-occlusions of body parts, etc. Pictorial structures framework has been widely applied in human pose estimationn during the past few years [1]. Yang and Ramanan [7] proposed a simple yet efficient model that outperformed previous state of the art approaches. However, in addition to the difficulties of modelling small image patches for the body joints (see Fig. 1), the performance of their method is also compromised by the use of a tree-structured model. Although trees permit efficient and exact inference on graphical models, the restricted edge structure is insufficient for capturing all the important relations between parts. As a consequence, tree-structured pictorial structures suffer from the so-called “double-counting” phenomena.", "", "We present a unified model for face detection, pose estimation, and landmark estimation in real-world, cluttered images. Our model is based on a mixtures of trees with a shared pool of parts; we model every facial landmark as a part and use global mixtures to capture topological changes due to viewpoint. We show that tree-structured models are surprisingly effective at capturing global elastic deformation, while being easy to optimize unlike dense graph structures. We present extensive results on standard face benchmarks, as well as a new “in the wild” annotated dataset, that suggests our system advances the state-of-the-art, sometimes considerably, for all three tasks. Though our model is modestly trained with hundreds of faces, it compares favorably to commercial systems trained with billions of examples (such as Google Picasa and face.com).", "", "The primary problem dealt with in this paper is the following. 
Given some description of a visual object, find that object in an actual photograph. Part of the solution to this problem is the specification of a descriptive scheme, and a metric on which to base the decision of \"goodness\" of matching or detection.", "We consider the problem of automatically estimating the 3D pose of humans from images, taken from multiple calibrated views. We show that it is possible and tractable to extend the pictorial structures framework, popular for 2D pose estimation, to 3D. We discuss how to use this framework to impose view, skeleton, joint angle and intersection constraints in 3D. The 3D pictorial structures are evaluated on multiple view data from a professional football game. The evaluation is focused on computational tractability, but we also demonstrate how a simple 2D part detector can be plugged into the framework." ] }
1605.00707
2952491602
Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. Discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures [13] and its derivatives. Our experiments on images of a hawkmoth in flight show that our proposed approach significantly improves over existing work [27] for this application, while also being more generally applicable. Our proposed approach localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.
One exception to the reliance of part-based models on part annotations is the Deformable Part Models (DPM) work @cite_17 , which learns parts with only bounding-box-level supervision. While DPMs have shown success in object detection, they are not well suited for pose estimation applications in which specific landmarks need to be localized. There is no guarantee that parts learned by a DPM will correspond to the landmarks that need to be localized.
{ "cite_N": [ "@cite_17" ], "mid": [ "2168356304" ], "abstract": [ "We describe an object detection system based on mixtures of multiscale deformable part models. Our system is able to represent highly variable object classes and achieves state-of-the-art results in the PASCAL object detection challenges. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL data sets. Our system relies on new methods for discriminative training with partially labeled data. We combine a margin-sensitive approach for data-mining hard negative examples with a formalism we call latent SVM. A latent SVM is a reformulation of MI--SVM in terms of latent variables. A latent SVM is semiconvex, and the training problem becomes convex once latent information is specified for the positive examples. This leads to an iterative training algorithm that alternates between fixing latent values for positive examples and optimizing the latent SVM objective function." ] }
1605.00707
2952491602
Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. Discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures [13] and its derivatives. Our experiments on images of a hawkmoth in flight show that our proposed approach significantly improves over existing work [27] for this application, while also being more generally applicable. Our proposed approach localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.
Another established part-based model is the work of @cite_32 , who introduce poselets. Poselets can be thought of as mid-level parts that capture common configurations of low-level parts. Specifically, a single poselet (part) is defined by a set of visually similar image patches that contain similar configurations of annotations. This broader definition of a part has proven useful for pose estimation, as seen in the success of recent works @cite_21 @cite_22 @cite_2 @cite_23 @cite_14 . Unfortunately, like traditional parts, poselets depend on annotations and cannot capture parts from regions of training images that neither contain nor are near annotations.
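As a concrete illustration of the grouping criterion behind poselets, the sketch below measures how close two annotated patches are in configuration space via a least-squares alignment residual; patches with a small residual (and similar appearance) would be pooled into one poselet. The keypoints here are made up and the similarity criterion is a generic orthogonal Procrustes fit, not the exact procedure of any cited work.

```python
import numpy as np

def configuration_residual(kp_a, kp_b):
    """Residual of keypoints kp_b after the best similarity alignment onto kp_a.

    kp_a, kp_b: (K, 2) arrays of annotated keypoint coordinates in two patches.
    A small residual means the two patches contain a similar configuration of
    annotations, which (together with appearance similarity) defines a poselet.
    """
    a = kp_a - kp_a.mean(axis=0)
    b = kp_b - kp_b.mean(axis=0)
    U, S, Vt = np.linalg.svd(b.T @ a)                 # orthogonal Procrustes alignment
    R = U @ Vt
    s = S.sum() / (b ** 2).sum()                      # optimal isotropic scale
    return np.linalg.norm(s * b @ R - a)

# Toy example: a seed configuration versus a scaled/rotated/translated copy
# and a genuinely different configuration.
seed = np.array([[0., 0.], [-1., 1.], [1., 1.], [0., 2.]])
theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
similar = 1.4 * seed @ rot.T + [5.0, 3.0]             # same configuration, new pose
different = np.array([[0., 0.], [2., 0.], [0., -2.], [3., -1.]])
print(configuration_residual(seed, similar))           # ~0: would join the poselet
print(configuration_residual(seed, different))         # larger: a different configuration
```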
{ "cite_N": [ "@cite_14", "@cite_22", "@cite_21", "@cite_32", "@cite_23", "@cite_2" ], "mid": [ "2049768550", "2031004336", "1864464506", "2535410496", "2097151019", "1989865957" ], "abstract": [ "We consider the problem of human parsing with part-based models. Most previous work in part-based models only considers rigid parts (e.g. torso, head, half limbs) guided by human anatomy. We argue that this representation of parts is not necessarily appropriate for human parsing. In this paper, we introduce hierarchical poselets–a new representation for human parsing. Hierarchical poselets can be rigid parts, but they can also be parts that cover large portions of human bodies (e.g. torso + left arm). In the extreme case, they can be the whole bodies. We develop a structured model to organize poselets in a hierarchical way and learn the model parameters in a max-margin framework. We demonstrate the superior performance of our proposed approach on two datasets with aggressive pose variations.", "A k-poselet is a deformable part model (DPM) with k parts, where each of the parts is a poselet, aligned to a specific configuration of keypoints based on ground-truth annotations. A separate template is used to learn the appearance of each part. The parts are allowed to move with respect to each other with a deformation cost that is learned at training time. This model is richer than both the traditional version of poselets and DPMs. It enables a unified approach to person detection and keypoint prediction which, barring contemporaneous approaches based on CNN features, achieves state-of-the-art keypoint prediction while maintaining competitive detection performance.", "Bourdev and Malik (ICCV 09) introduced a new notion of parts, poselets, constructed to be tightly clustered both in the configuration space of keypoints, as well as in the appearance space of image patches. In this paper we develop a new algorithm for detecting people using poselets. Unlike that work which used 3D annotations of keypoints, we use only 2D annotations which are much easier for naive human annotators. The main algorithmic contribution is in how we use the pattern of poselet activations. Individual poselet activations are noisy, but considering the spatial context of each can provide vital disambiguating information, just as object detection can be improved by considering the detection scores of nearby objects in the scene. This can be done by training a two-layer feed-forward network with weights set using a max margin technique. The refined poselet activations are then clustered into mutually consistent hypotheses where consistency is based on empirically determined spatial keypoint distributions. Finally, bounding boxes are predicted for each person hypothesis and shape masks are aligned to edges in the image to provide a segmentation. To the best of our knowledge, the resulting system is the current best performer on the task of people detection and segmentation with an average precision of 47.8 and 40.5 respectively on PASCAL VOC 2009.", "We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. 
To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers.", "In this paper we consider the challenging problem of articulated human pose estimation in still images. We observe that despite high variability of the body articulations, human motions and activities often simultaneously constrain the positions of multiple body parts. Modelling such higher order part dependencies seemingly comes at a cost of more expensive inference, which resulted in their limited use in state-of-the-art methods. In this paper we propose a model that incorporates higher order part dependencies while remaining efficient. We achieve this by defining a conditional model in which all body parts are connected a-priori, but which becomes a tractable tree-structured pictorial structures model once the image observations are available. In order to derive a set of conditioning variables we rely on the poselet-based features that have been shown to be effective for people detection but have so far found limited application for articulated human pose estimation. We demonstrate the effectiveness of our approach on three publicly available pose estimation benchmarks improving or being on-par with state of the art in each case.", "Given an image of a person, the problem of human pose estimation can be briefly described as localizing the position and orientation of the body limbs. The complexity of the problem comes from issues like background clutter, changes in viewpoint, changes in appearance, self-occlusions of body parts, etc. Pictorial structures framework has been widely applied in human pose estimationn during the past few years [1]. Yang and Ramanan [7] proposed a simple yet efficient model that outperformed previous state of the art approaches. However, in addition to the difficulties of modelling small image patches for the body joints (see Fig. 1), the performance of their method is also compromised by the use of a tree-structured model. Although trees permit efficient and exact inference on graphical models, the restricted edge structure is insufficient for capturing all the important relations between parts. As a consequence, tree-structured pictorial structures suffer from the so-called “double-counting” phenomena." ] }
1605.00707
2952491602
Our work introduces a novel way to increase pose estimation accuracy by discovering parts from unannotated regions of training images. Discovered parts are used to generate more accurate appearance likelihoods for traditional part-based models like Pictorial Structures [13] and its derivatives. Our experiments on images of a hawkmoth in flight show that our proposed approach significantly improves over existing work [27] for this application, while also being more generally applicable. Our proposed approach localizes landmarks at least twice as accurately as a baseline based on a Mixture of Pictorial Structures (MPS) model. Our unique High-Resolution Moth Flight (HRMF) dataset is made publicly available with annotations.
Looking beyond pose estimation, there have been recent works on unsupervised and weakly supervised part discovery @cite_19 @cite_35 @cite_31 . These works showed the utility of the parts they discovered by using them as feature representations of scenes for supervised scene classification. Our work takes inspiration from these methods and uses a simpler part discovery approach for the problem of pose estimation.
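As a rough illustration of the discriminative part-discovery idea surveyed above, the sketch below alternates between assigning patch descriptors to part clusters and refitting one linear detector per cluster. Ridge regression stands in for the cross-validated SVM training used in the cited works, and the random features and cluster count are placeholders, not data from any cited paper.

```python
import numpy as np

def discover_parts(patch_features, n_parts=5, iters=5, reg=1e-2):
    """Minimal discriminative part-discovery loop: alternate between assigning
    patches to part clusters and refitting one linear detector per cluster."""
    X = np.asarray(patch_features, dtype=float)        # (num_patches, dim)
    n, d = X.shape
    rng = np.random.default_rng(0)
    assign = rng.integers(0, n_parts, size=n)           # random initial clustering
    W = np.zeros((n_parts, d))
    for _ in range(iters):
        for k in range(n_parts):
            y = (assign == k).astype(float) * 2 - 1     # one-vs-all +/-1 targets
            W[k] = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)
        assign = (X @ W.T).argmax(axis=1)               # reassign by detector score
    return W, assign

# Toy example: 60 random 16-D patch descriptors grouped into 5 "parts".
rng = np.random.default_rng(1)
feats = rng.standard_normal((60, 16))
detectors, labels = discover_parts(feats)
print(np.bincount(labels, minlength=5))
```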
{ "cite_N": [ "@cite_19", "@cite_31", "@cite_35" ], "mid": [ "2115628259", "2951702175", "" ], "abstract": [ "Recent work on mid-level visual representations aims to capture information at the level of complexity higher than typical \"visual words\", but lower than full-blown semantic objects. Several approaches [5,6,12,23] have been proposed to discover mid-level visual elements, that are both 1) representative, i.e., frequently occurring within a visual dataset, and 2) visually discriminative. However, the current approaches are rather ad hoc and difficult to analyze and evaluate. In this work, we pose visual element discovery as discriminative mode seeking, drawing connections to the the well-known and well-studied mean-shift algorithm [2, 1, 4, 8]. Given a weakly-labeled image collection, our method discovers visually-coherent patch clusters that are maximally discriminative with respect to the labels. One advantage of our formulation is that it requires only a single pass through the data. We also propose the Purity-Coverage plot as a principled way of experimentally analyzing and evaluating different visual discovery approaches, and compare our method against prior work on the Paris Street View dataset of [5]. We also evaluate our method on the task of scene classification, demonstrating state-of-the-art performance on the MIT Scene-67 dataset.", "The goal of this paper is to discover a set of discriminative patches which can serve as a fully unsupervised mid-level visual representation. The desired patches need to satisfy two requirements: 1) to be representative, they need to occur frequently enough in the visual world; 2) to be discriminative, they need to be different enough from the rest of the visual world. The patches could correspond to parts, objects, \"visual phrases\", etc. but are not restricted to be any one of them. We pose this as an unsupervised discriminative clustering problem on a huge dataset of image patches. We use an iterative procedure which alternates between clustering and training discriminative classifiers, while applying careful cross-validation at each step to prevent overfitting. The paper experimentally demonstrates the effectiveness of discriminative patches as an unsupervised mid-level visual representation, suggesting that it could be used in place of visual words for many tasks. Furthermore, discriminative patches can also be used in a supervised regime, such as scene classification, where they demonstrate state-of-the-art performance on the MIT Indoor-67 dataset.", "" ] }
1605.00876
2346203955
Plug-in electric vehicles (PEVs) are considered as flexible loads since their charging schedules can be shifted over the course of a day without impacting drivers’ mobility. This property can be exploited to reduce charging costs and adverse network impacts. The increasing number of PEVs makes the use of distributed charging coordinating strategies preferable to centralized ones. In this paper, we propose an agent-based method which enables a fully distributed solution of the PEVs’ Coordinated Charging (PEV-CC) problem. This problem aims at coordinating the charging schedules of a fleet of PEVs to minimize costs of serving demand subject to individual PEV constraints originating from battery limitations and charging infrastructure characteristics. In our proposed approach, each PEV’s charging station is considered as an agent that is equipped with communication and computation capabilities. Our multiagent approach is an iterative procedure which finds a distributed solution for the first order optimality conditions of the underlying optimization problem through local computations and limited information exchange with neighboring agents. In particular, the updates for each agent incorporate local information such as the Lagrange multipliers, as well as enforcing the local PEV’s constraints as local innovation terms. Finally, the performance of our proposed algorithm is evaluated on a fleet of 100 PEVs as a test case, and the results are compared with the centralized solution of the PEV-CC problem.
Most communication-based decentralized approaches introduced so far require the exchange of information with an aggregator, which acts as a coordinating agent @cite_26 @cite_6 @cite_23 @cite_16 @cite_10 @cite_9 . The information exchanged is, however, not sensitive (typically the charging schedule and dual variables). The approaches in @cite_26 @cite_23 consider non-cooperative agents and are based on mean-field game theory, whereas the approaches in @cite_6 @cite_16 @cite_10 @cite_9 consider cooperative agents. The charging optimization problem is decomposed using the Alternating Direction Method of Multipliers in @cite_16 @cite_10 . The decentralized approaches mentioned above require each PEV to communicate with a central agent and are therefore less robust to failures than peer-to-peer distributed schemes.
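To illustrate the aggregator-coordinated information exchange (a broadcast signal down, charging schedules up), here is a simplified price-coordinated iteration in the spirit of the decentralized valley-filling schemes above. It is not the exact ADMM or mean-field algorithm of any cited paper, and the demand profile, energy needs, and charger limit are made up for the example.

```python
import numpy as np

def local_best_response(price, energy_need, x_max):
    """Greedy local schedule for one PEV: charge in the cheapest slots first."""
    x = np.zeros_like(price)
    remaining = energy_need
    for t in np.argsort(price):
        x[t] = min(x_max, remaining)
        remaining -= x[t]
        if remaining <= 0:
            break
    return x

def coordinate_charging(base_load, needs, x_max, iters=50, step=0.1):
    """Price-coordinated iteration: the aggregator broadcasts the marginal cost
    of the aggregate load, each PEV responds, and schedules are damped."""
    T, N = len(base_load), len(needs)
    X = np.zeros((N, T))
    for _ in range(iters):
        price = base_load + X.sum(axis=0)             # marginal cost of a quadratic generation cost
        for n in range(N):
            X[n] = (1 - step) * X[n] + step * local_best_response(price, needs[n], x_max)
    return X

# Toy example: an overnight demand valley and 3 PEVs with different energy needs.
base = np.array([8., 6., 4., 3., 3., 4., 6., 8.])     # non-PEV demand over 8 time slots
X = coordinate_charging(base, needs=[4.0, 3.0, 5.0], x_max=2.0)
print(np.round(base + X.sum(axis=0), 2))               # flattened total load profile
```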
{ "cite_N": [ "@cite_26", "@cite_9", "@cite_6", "@cite_23", "@cite_16", "@cite_10" ], "mid": [ "2088077079", "2016038157", "2115594466", "2004264786", "2006918362", "1968228020" ], "abstract": [ "This paper develops a strategy to coordinate the charging of autonomous plug-in electric vehicles (PEVs) using concepts from non-cooperative games. The foundation of the paper is a model that assumes PEVs are cost-minimizing and weakly coupled via a common electricity price. At a Nash equilibrium, each PEV reacts optimally with respect to a commonly observed charging trajectory that is the average of all PEV strategies. This average is given by the solution of a fixed point problem in the limit of infinite population size. The ideal solution minimizes electricity generation costs by scheduling PEV demand to fill the overnight non-PEV demand “valley”. The paper's central theoretical result is a proof of the existence of a unique Nash equilibrium that almost satisfies that ideal. This result is accompanied by a decentralized computational algorithm and a proof that the algorithm converges to the Nash equilibrium in the infinite system limit. Several numerical examples are used to illustrate the performance of the solution strategy for finite populations. The examples demonstrate that convergence to the Nash equilibrium occurs very quickly over a broad range of parameters, and suggest this method could be useful in situations where frequent communication with PEVs is not possible. The method is useful in applications where fully centralized control is not possible, but where optimal or near-optimal charging patterns are essential to system operation.", "Efficient and reliable demand side management techniques for community charging of plug-in hybrid electrical vehicles (PHEVs) and plug-in electrical vehicles (PEVs) are needed, as large numbers of these vehicles are being introduced to the power grid. To avoid overloads and maximize customer preferences in terms of time and cost of charging, a constrained nonlinear optimization problem can be formulated. In this paper, we have developed a novel cooperative distributed algorithm for charging control of PHEVs PEVs that solves the constrained nonlinear optimization problem using Karush-Kuhn-Tucker (KKT) conditions and consensus networks in a distributed fashion. In our design, the global optimal power allocation under all local and global constraints is reached through peer-to-peer coordination of charging stations. Therefore, the need for a central control unit is eliminated. In this way, single-node congestion is avoided when the size of the problem is increased and the system gains robustness against single-link node failures. Furthermore, via Monte Carlo simulations, we have demonstrated that the proposed distributed method is scalable with the number of charging points and returns solutions, which are comparable to centralized optimization algorithms with a maximum of 2 sub-optimality. Thus, the main advantages of our approach are eliminating the need for a central energy management coordination unit, gaining robustness against single-link node failures, and being scalable in terms of single-node computations.", "We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. 
We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as “flat” as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of EVs, even if EVs do not necessarily update their charging profiles in every iteration, and use potentially outdated control signal when they update. Moreover, the algorithm only requires each EV solving its local problem, hence its implementation requires low computation capability. We also extend the algorithm to track a given load profile and to real-time implementation.", "Constrained charging control of large populations of Plug-in Electric Vehicles (PEVs) is addressed using mean field game theory. We consider PEVs as heterogeneous agents, with different charging constraints (plug-in times and deadlines). The agents minimize their own charging cost, but are weakly coupled by the common electricity price. We propose an iterative algorithm that, in the case of an infinite population, converges to the Nash equilibrium associated with a related decentralized optimization problem. In this way we approximate the centralized optimal solution, which in the unconstrained case fills the overnight power demand valley, via a decentralized procedure. The benefits of the proposed formulation in terms of convergence behavior and overall charging cost are illustrated through numerical simulations.", "The integration of Electric Vehicles (EVs) into the power grid is a challenging task. From the control perspective, one of the main challenges is the definition of a comprehensive control structure that is scalable to large EV numbers. This paper makes two key contributions: (i) It defines the EV ADMM framework for decentralized EV charging control. (ii) It evaluates EV ADMM using actual data and various EV fleet control problems. EV ADMM is a decentralized optimization algorithm based on the Alternating Direction Method of Multipliers (ADMM). It separates the centralized optimal fleet charging problem into individual optimization problems for the EVs plus one aggregator problem that optimizes fleet goals. Since the individual problems are coupled, they are solved consistently by passing incentive signals between them. The framework can be parameterized to trade-off the importance of fleet goals versus individual EV goals, such that aspects like battery lifetime can be considered. We show how EV ADMM can be applied to control an EV fleet to achieve goals such as demand valley filling and minimal-cost charging. Due to its flexibility and scalability, EV ADMM offers a practicable solution for optimal EV fleet control.", "Plug-in electric vehicles (PEVs) can be considered flexible loads, as the time when they are charged can be shifted to a certain extent without impacting the drivers' mobility. An aggregator coordinating these flexible resources aims to minimize the costs of charging, subject to individual PEV constraints, imposed by battery and charging infrastructure characteristics, as well as driving patterns. 
Since driving behavior cannot be perfectly forecasted, this problem is stochastic. In this paper, we propose a decentralized control algorithm to coordinate charging, based on the Alternating Direction Method of Multipliers (ADMM). In this setup, the aggregator and PEVs find the global solution by individually solving local optimization problems. The solution is found iteratively, whereby information between the PEVs and the aggregator is exchanged at each iteration. When the objective function of the charging optimization problem and the PEV constraints are convex, the algorithm converges to the global optimum. To take driving behavior uncertainty into account, the scheme considers several scenarios of driving patterns for each vehicle. A receding time horizon optimization is used, whereby at each new stage the representation of the fleet is updated with new scenarios consistent with the current observations. The local optimization problems, which can be solved very fast, could be solved in parallel for each PEV and scenario using decentralized computing, making the approach suitable for large-scale problems. A numerical example shows how this method can be applied to flatten the system load." ] }
1605.00876
2346203955
Plug-in electric vehicles (PEVs) are considered flexible loads since their charging schedules can be shifted over the course of a day without impacting drivers’ mobility. This property can be exploited to reduce charging costs and adverse network impacts. The increasing number of PEVs makes the use of distributed charging coordination strategies preferable to centralized ones. In this paper, we propose an agent-based method which enables a fully distributed solution of the PEVs’ Coordinated Charging (PEV-CC) problem. This problem aims at coordinating the charging schedules of a fleet of PEVs to minimize the costs of serving demand subject to individual PEV constraints originating from battery limitations and charging infrastructure characteristics. In our proposed approach, each PEV’s charging station is considered an agent equipped with communication and computation capabilities. Our multiagent approach is an iterative procedure which finds a distributed solution to the first-order optimality conditions of the underlying optimization problem through local computations and limited information exchange with neighboring agents. In particular, the updates for each agent incorporate local information, such as the Lagrange multipliers, as well as local innovation terms that enforce the local PEV’s constraints. Finally, the performance of our proposed algorithm is evaluated on a fleet of 100 PEVs as a test case, and the results are compared with the centralized solution of the PEV-CC problem.
Recently, consensus-based approaches @cite_15 have been used to provide distributed control schemes for applications in electric power systems such as solving optimal power management problems @cite_27 @cite_13 @cite_19 , the Economic Dispatch problem @cite_28 @cite_20 @cite_12 @cite_21 and Optimal Power Flow problems @cite_22 . A neighborhood consensus potential @cite_8 @cite_15 in the iterative update procedure ensures that entities reach an agreement on a common variable, usually corresponding to electricity price in the aforementioned problems.
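To make the neighborhood-consensus update concrete, below is a minimal sketch of plain average consensus over a fixed communication graph; the four-agent ring, the weight matrix, and the use of a local electricity-price estimate as the shared variable are illustrative assumptions, not details taken from the cited papers (which add innovation terms and constraints on top of this averaging step).

```python
import numpy as np

# Hypothetical 4-agent ring with self-loops; W is doubly stochastic, so
# repeated mixing drives every agent to the average of the initial values.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

price = np.array([30.0, 34.0, 28.0, 32.0])  # local electricity-price estimates
for _ in range(50):
    price = W @ price   # each agent combines only its neighbors' values
print(price)            # all entries approach the network average, 31.0
```

Methods in the consensus + innovations family add a second, local correction term to this update so that the agreed variable also satisfies a global constraint (e.g., total generation meeting the load).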
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_28", "@cite_21", "@cite_19", "@cite_27", "@cite_15", "@cite_13", "@cite_12", "@cite_20" ], "mid": [ "", "2120286056", "2075469757", "2150823352", "", "2088619210", "2160643434", "2032987408", "2123801550", "2030423628" ], "abstract": [ "", "Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This paper presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression.", "In a smart grid, effective distributed control algorithms could be embedded in distributed controllers to properly allocate electrical power among connected buses autonomously. By selecting the incremental cost of each generation unit as the consensus variable, the incremental cost consensus (ICC) algorithm is able to solve the conventional centralized economic dispatch problem in a distributed manner. The mathematical formulation of the algorithm has been presented in this paper. The results of several case studies have also been presented to show that the difference between network topologies will influence the convergence rate of the ICC algorithm.", "Economic dispatch problem (EDP) is an important class of optimization problems in the smart grid, which aims at minimizing the total cost when generating certain amount of power. In this work, a novel consensus based algorithm is proposed to solve EDP in a distributed fashion. The quadratic convex cost functions are assumed in the problem formulation, and the strongly connected communication topology is sufficient for the information exchange. Unlike centralized approaches, the proposed algorithm enables generators to collectively learn the mismatch between demand and total amount of power generation. The estimated mismatch is then used as a feedback mechanism to adjust current power generation by each generator. With a tactical initial setup, eventually, all generators can automatically minimize the total cost in a collective sense.", "", "Energy management is becoming a crucial issue in the future power grid system as more controllable energy resources and responsive loads with communications abilities are being introduced into the smart grid. This paper proposes a novel distributed approach to deal with energy management in the smart grid under dispatchable distributed generators and responsive loads using real-time pricing (RTP) and consensus networks to maximize the social welfare. In our algorithm, each distributed generation consumer unit, in response to the local price of energy, decides on its optimal power generation consumption level to maximize its benefit at the device level. 
However, the consensus-based coordination of price among local retailers drives the behavior of the overall system toward the global optimum, despite the greedy behavior of the generation and consumer units. The main features of our algorithm are computational and communicational scalability, as well as privacy of information.", "This paper provides a theoretical framework for analysis of consensus algorithms for multi-agent networked systems with an emphasis on the role of directed information flow, robustness to changes in network topology due to link node failures, time-delays, and performance guarantees. An overview of basic concepts of information consensus in networks and methods of convergence and performance analysis for the algorithms are provided. Our analysis framework is based on tools from matrix theory, algebraic graph theory, and control theory. We discuss the connections between consensus problems in networked dynamic systems and diverse applications including synchronization of coupled oscillators, flocking, formation control, fast consensus in small-world networks, Markov processes and gossip-based algorithms, load balancing in networks, rendezvous in space, distributed sensor fusion in sensor networks, and belief propagation. We establish direct connections between spectral and structural properties of complex networks and the speed of information diffusion of consensus algorithms. A brief introduction is provided on networked systems with nonlocal information flow that are considerably faster than distributed systems with lattice-type nearest neighbor interactions. Simulation results are presented that demonstrate the role of small-world effects on the speed of consensus algorithms and cooperative control of multivehicle formations", "This paper reviews signal processing research for applications in the future electric power grid, commonly referred to as smart grid. Generally, it is expected that the grid of the future would differ from the current system by the increased integration of distributed generation, distributed storage, demand response, power electronics, and communications and sensing technologies. The consequence is that the physical structure of the system becomes significantly more distributed. The existing centralized control structure is not suitable any more to operate such a highly distributed system. Hence, in this paper, we overview distributed approaches, all based on consensus @math innovations, for three common energy management functions: state estimation, economic dispatch, and optimal power flow. We survey the pertinent literature and summarize our work. Simulation results illustrate tradeoffs and the performance of consensus @math innovations for these three applications.", "This paper presents a distributed algorithm based on auction techniques and consensus protocols to solve the nonconvex economic dispatch problem. The optimization problem of the nonconvex economic dispatch includes several constraints such as valve-point loading effect, multiple fuel option, and prohibited operating zones. Each generating unit locally evaluates quantities used as bids in the auction mechanism. These units send their bids to their neighbors in a communication graph that supports the power system and which provides the required information flow. A consensus procedure is used to share the bids among the network agents and resolves the auction. As a result, the power distribution of generating units is updated and the generation cost is minimized. 
The effectiveness of this approach is demonstrated by simulations on standard test systems.", "The paper presents a fully distributed approach for economic dispatch in power systems. The approach is based on the consensus + innovations framework, in which each network agent participates in a collaborative process of neighborhood message exchange and local computation. The distributed approach is shown to converge to the optimal dispatch under rather weak assumptions on the agent communication network connectivity. Intuitively, the proposed approach includes a consensus term which achieves convergence to a common incremental cost value and an innovation term which ensures that the total generation is equal to the load to be supplied. Further, in the proposed approach each network bus only needs to be aware of its local cost parameters and the local predicted load and the neighborhood communication involves exchanging only the locally determined marginal energy price with a few neighbors. Robustness of the proposed approach and techniques for convergence rate improvement are discussed. Finally, simulation studies on the benchmark IEEE 14 bus system demonstrate the effectiveness of the approach." ] }
1605.00876
2346203955
Plug-in electric vehicles (PEVs) are considered flexible loads since their charging schedules can be shifted over the course of a day without impacting drivers’ mobility. This property can be exploited to reduce charging costs and adverse network impacts. The increasing number of PEVs makes the use of distributed charging coordination strategies preferable to centralized ones. In this paper, we propose an agent-based method which enables a fully distributed solution of the PEVs’ Coordinated Charging (PEV-CC) problem. This problem aims at coordinating the charging schedules of a fleet of PEVs to minimize the costs of serving demand subject to individual PEV constraints originating from battery limitations and charging infrastructure characteristics. In our proposed approach, each PEV’s charging station is considered an agent equipped with communication and computation capabilities. Our multiagent approach is an iterative procedure which finds a distributed solution to the first-order optimality conditions of the underlying optimization problem through local computations and limited information exchange with neighboring agents. In particular, the updates for each agent incorporate local information, such as the Lagrange multipliers, as well as local innovation terms that enforce the local PEV’s constraints. Finally, the performance of our proposed algorithm is evaluated on a fleet of 100 PEVs as a test case, and the results are compared with the centralized solution of the PEV-CC problem.
In @cite_9 , a consensus-based method to coordinate PEV charging is proposed; however, it requires one of the agents to access information on the total charging demand. Moreover, @cite_17 proposes a consensus-based distributed charging rate control strategy for a PEV fleet to minimize the total charging power loss, although it overlooks the PEVs' individual limitations.
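As a toy illustration of why those local limitations matter, the sketch below allocates a shared feeder budget by alternating a ring-consensus averaging step with a projection onto per-PEV charger limits; the budget, limits, and update rule are invented for illustration and correspond to neither cited algorithm.

```python
import numpy as np

r_max = np.array([11.0, 7.4, 11.0, 3.7, 22.0])  # hypothetical charger limits (kW)
budget = 50.0                                    # shared feeder capacity (kW)
share = np.full(5, budget / 5)                   # initial equal-share guess

for _ in range(100):
    # ring consensus: mix with the two neighbors (doubly stochastic weights),
    # which preserves the total allocated power
    share = (share + np.roll(share, 1) + np.roll(share, -1)) / 3.0
    rate = np.minimum(share, r_max)              # project onto local PEV limits
    share = rate + (budget - rate.sum()) / 5     # redistribute unused capacity

print(rate, rate.sum())  # rates respect r_max and stay within the budget
```

Dropping the projection step recovers a pure rate-consensus scheme, which is exactly the case where individual PEV limits can be violated (here, equal shares of 10 kW would exceed the 7.4 kW and 3.7 kW chargers).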
{ "cite_N": [ "@cite_9", "@cite_17" ], "mid": [ "2016038157", "2140300573" ], "abstract": [ "Efficient and reliable demand side management techniques for community charging of plug-in hybrid electrical vehicles (PHEVs) and plug-in electrical vehicles (PEVs) are needed, as large numbers of these vehicles are being introduced to the power grid. To avoid overloads and maximize customer preferences in terms of time and cost of charging, a constrained nonlinear optimization problem can be formulated. In this paper, we have developed a novel cooperative distributed algorithm for charging control of PHEVs PEVs that solves the constrained nonlinear optimization problem using Karush-Kuhn-Tucker (KKT) conditions and consensus networks in a distributed fashion. In our design, the global optimal power allocation under all local and global constraints is reached through peer-to-peer coordination of charging stations. Therefore, the need for a central control unit is eliminated. In this way, single-node congestion is avoided when the size of the problem is increased and the system gains robustness against single-link node failures. Furthermore, via Monte Carlo simulations, we have demonstrated that the proposed distributed method is scalable with the number of charging points and returns solutions, which are comparable to centralized optimization algorithms with a maximum of 2 sub-optimality. Thus, the main advantages of our approach are eliminating the need for a central energy management coordination unit, gaining robustness against single-link node failures, and being scalable in terms of single-node computations.", "Plug-in electric vehicles (PEVs) are a promising alternative to conventional fuel-based automobiles. However, a large number of PEVs connected to the grid simultaneously with poor charging coordination may impose severe stress on the power system. To allocate the available charging power, this paper proposes an optimal charging rate control of PEVs based on consensus algorithm, which aligns each PEV's interest with the system's benefit. The proposed strategy is implemented based on a multi-agent system framework, which only requires information exchanges among neighboring agents. The proposed distributed control solution enables the sharing of computational and communication burden among distributed agents, thus it is robust, scalable, and convenient for plug-and-play operation which allows PEVs to join and leave at arbitrary times. The effectiveness of the proposed algorithm is validated through simulations." ] }
1605.00807
2345399076
The blocks editor, such as the editor in Scratch, is widely used for visual programming languages (VPLs) nowadays. Although it is friendly to non-programmers, it has three main limitations when displaying block code: (1) readability, (2) program structure, and (3) re-use. To cope with these issues, we introduce a novel formatting tool, block shelves, into the editor for organizing blocks. A user can utilize shelves to constitute a user-defined structure for VPL projects. Based on the experimental results, block shelves significantly improve block code navigation and searching. Besides, to achieve code re-use, users can export and import shelves to share and re-use their block code between projects in the eXtensible Markup Language (XML) file format. All functions were demonstrated on MIT App Inventor 2, while all modifications were made in Google Blockly.
Approaches to improving code readability and usability have been researched for decades, but the techniques designed and discussed mostly target text-based programs, such as indentation @cite_9 , coding style @cite_13 , variable naming @cite_5 , modularization @cite_11 , and applying task context to improve productivity @cite_2 . However, more and more projects today are block-based. The current blocks editor design provides three functions to manage the readability, structure, and re-use of block code: block commenting, block collapsing and sorting by block category, and block duplicating. The details of these functions are explained in the following.
{ "cite_N": [ "@cite_9", "@cite_2", "@cite_5", "@cite_13", "@cite_11" ], "mid": [ "2030432320", "2130344546", "2091280632", "1969685458", "2151191515" ], "abstract": [ "", "When working on a large software system, a programmer typically spends an inordinate amount of time sifting through thousands of artifacts to find just the subset of information needed to complete an assigned task. All too often, before completing the task the programmer must switch to working on a different task. These task switches waste time as the programmer must repeatedly find and identify the information relevant to the task-at-hand. In this paper, we present a mechanism that captures, models, and persists the elements and relations relevant to a task. We show how our task context model reduces information overload and focuses a programmer's work by filtering and ranking the information presented by the development environment. A task context is created by monitoring a programmer's activity and extracting the structural relationships of program artifacts. Operations on task contexts integrate with development environment features, such as structure display, search, and change management. We have validated our approach with a longitudinal field study of Mylar, our implementation of task context for the Eclipse development environment. We report a statistically significant improvement in the productivity of 16 industry programmers who voluntarily used Mylar for their daily work.", "The question of whether the use of good naming style in programs improves program comprehension has important implications for both programming practice and theories of program comprehension. Two experiments were done based on Pennington's (Stimulus structures and mental representations in expert comprehension of computer programs, Cognitive Psychology,19, 295-341, 1987) model of programmer comprehension. According to her model, different levels of knowledge, ranging from operational to functional, are extracted during comprehension in a bottom-up fashion. It was hypothesized that poor naming style would affect comprehension of function, but would not affect the other sorts of knowledge. An expertise effect was found, as well as evidence that knowledge of program function is independent of other sorts of knowledge. However, neither novices nor experts exhibited strong evidence of bottom-up comprehension. The results are discussed in terms of emerging theories of program comprehension which include knowledge representation, comprehension strategies, and the effects of ecological factors such as task demands and the role-expressiveness of the language.", "The consensus in the programming community is that indentation aids program comprehension, although many studies do not back this up. We tested program comprehension on a Pascal program. Two styles of indentation were used--blocked and nonblocked--in addition to four passible levels of indentation (0, 2, 4, 6 spaces). Both experienced and novice subjects were used. Although the blocking style made no difference, the level of identation had a significant effect on program comprehension. (2--4 spaces had the highest mean score for program comprehension.) We recommend that a moderate level of indentation be used to increase program comprehension and user satisfaction.", "A central feature of the evolution of large software systems is that change-which is necessary to add new functionality, accommodate new hardware, and repair faults-becomes increasingly difficult over time. 
We approach this phenomenon, which we term code decay, scientifically and statistically. We define code decay and propose a number of measurements (code decay indices) on software and on the organizations that produce it, that serve as symptoms, risk factors, and predictors of decay. Using an unusually rich data set (the fifteen-plus year change history of the millions of lines of software for a telephone switching system), we find mixed, but on the whole persuasive, statistical evidence of code decay, which is corroborated by developers of the code. Suggestive indications that perfective maintenance can retard code decay are also discussed." ] }
1605.00743
2950286390
Attributes possess appealing properties and benefit many computer vision problems, such as object recognition, learning with humans in the loop, and image retrieval. Whereas the existing work mainly pursues utilizing attributes for various computer vision problems, we contend that the most basic problem---how to accurately and robustly detect attributes from images---has been left underexplored. In particular, the existing work rarely explicitly tackles the need for attribute detectors to generalize well across different categories, including those previously unseen. Noting that this is analogous to the objective of multi-source domain generalization if we treat each category as a domain, we provide a novel perspective on attribute detection and propose to gear the techniques of multi-source domain generalization toward learning cross-category generalizable attribute detectors. We validate our understanding and approach with extensive experiments on four challenging datasets and three different problems.
Domain generalization is still at an early stage of development. A feature-projection-based algorithm, Domain-Invariant Component Analysis (DICA), was introduced in @cite_79 to learn an invariant transformation by minimizing the variance across the source domains. Recently, domain generalization has been introduced into the computer vision community for object recognition @cite_81 @cite_75 and video recognition @cite_48 . We propose to gear multi-source domain generalization techniques toward learning cross-category generalizable attribute detectors. Multi-source domain adaptation @cite_27 @cite_59 @cite_0 @cite_64 @cite_73 is related to our approach if we consider a transductive setting (i.e., the learner has access to the test data). However, while it assumes a single target domain, in attribute detection the test data are often sampled from more than one unseen domain.
{ "cite_N": [ "@cite_64", "@cite_48", "@cite_0", "@cite_79", "@cite_27", "@cite_81", "@cite_59", "@cite_73", "@cite_75" ], "mid": [ "2137901802", "1943722231", "2157989183", "2949436635", "2105523772", "96659543", "1822439997", "2069057437", "2953039697" ], "abstract": [ "Discriminative learning when training and test data belong to different distributions is a challenging and complex task. Often times we have very few or no labeled data from the test or target distribution but may have plenty of labeled data from multiple related sources with different distributions. The difference in distributions may be both in marginal and conditional probabilities. Most of the existing domain adaptation work focuses on the marginal probability distribution difference between the domains, assuming that the conditional probabilities are similar. However in many real world applications, conditional probability distribution differences are as commonplace as marginal probability differences. In this paper we propose a two-stage domain adaptation methodology which combines weighted data from multiple sources based on marginal probability differences (first stage) as well as conditional probability differences (second stage), with the target domain data. The weights for minimizing the marginal probability differences are estimated independently, while the weights for minimizing conditional probability differences are computed simultaneously by exploiting the potential interaction among multiple sources. We also provide a theoretical analysis on the generalization performance of the proposed multi-source domain adaptation formulation using the weighted Rademacher complexity measure. Empirical comparisons with existing state-of-the-art domain adaptation methods using three real-world datasets demonstrate the effectiveness of the proposed approach.", "In this work, we formulate a new weakly supervised domain generalization approach for visual recognition by using loosely labeled web images videos as training data. Specifically, we aim to address two challenging issues when learning robust classifiers: 1) coping with noise in the labels of training web images videos in the source domain; and 2) enhancing generalization capability of learnt classifiers to any unseen target domain. To address the first issue, we partition the training samples in each class into multiple clusters. By treating each cluster as a “bag” and the samples in each cluster as “instances”, we formulate a multi-instance learning (MIL) problem by selecting a subset of training samples from each training bag and simultaneously learning the optimal classifiers based on the selected samples. To address the second issue, we assume the training web images videos may come from multiple hidden domains with different data distributions. We then extend our MIL formulation to learn one classifier for each class and each latent domain such that multiple classifiers from each class can be effectively integrated to achieve better generalization capability. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our new approach for visual recognition by learning from web data.", "In visual recognition problems, the common data distribution mismatches between training and testing make domain adaptation essential. 
However, image data is difficult to manually divide into the discrete domains required by adaptation algorithms, and the standard practice of equating datasets with domains is a weak proxy for all the real conditions that alter the statistics in complex ways (lighting, pose, background, resolution, etc.) We propose an approach to automatically discover latent domains in image or video datasets. Our formulation imposes two key properties on domains: maximum distinctiveness and maximum learnability. By maximum distinctiveness, we require the underlying distributions of the identified domains to be different from each other to the maximum extent; by maximum learnability, we ensure that a strong discriminative model can be learned from the domain. We devise a nonparametric formulation and efficient optimization procedure that can successfully discover domains among both training and test data. We extensively evaluate our approach on object recognition and human activity recognition tasks.", "This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.", "This paper presents a theoretical analysis of the problem of domain adaptation with multiple sources. For each source domain, the distribution over the input points as well as a hypothesis with error at most ∊ are given. The problem consists of combining these hypotheses to derive a hypothesis with small error with respect to the target domain. We present several theoretical results relating to this problem. In particular, we prove that standard convex combinations of the source hypotheses may in fact perform very poorly and that, instead, combinations weighted by the source distributions benefit from favorable theoretical guarantees. Our main result shows that, remarkably, for any fixed target function, there exists a distribution weighted combining rule that has a loss of at most ∊ with respect to any target mixture of the source distributions. We further generalize the setting from a single target function to multiple consistent target functions and show the existence of a combining rule with error at most 3∊. Finally, we report empirical results for a multiple source adaptation problem with a real-world dataset.", "In this paper, we propose a new approach for domain generalization by exploiting the low-rank structure from multiple latent source domains. Motivated by the recent work on exemplar-SVMs, we aim to train a set of exemplar classifiers with each classifier learnt by using only one positive training sample and all negative training samples. While positive samples may come from multiple latent domains, for the positive samples within the same latent domain, their likelihoods from each exemplar classifier are expected to be similar to each other. 
Based on this assumption, we formulate a new optimization problem by introducing the nuclear-norm based regularizer on the likelihood matrix to the objective function of exemplar-SVMs. We further extend Domain Adaptation Machine (DAM) to learn an optimal target classifier for domain adaptation. The comprehensive experiments for object recognition and action recognition demonstrate the effectiveness of our approach for domain generalization and domain adaptation.", "Recent domain adaptation methods successfully learn cross-domain transforms to map points between source and target domains. Yet, these methods are either restricted to a single training domain, or assume that the separation into source domains is known a priori. However, most available training data contains multiple unknown domains. In this paper, we present both a novel domain transform mixture model which outperforms a single transform model when multiple domains are present, and a novel constrained clustering method that successfully discovers latent domains. Our discovery method is based on a novel hierarchical clustering technique that uses available object category information to constrain the set of feasible domain separations. To illustrate the effectiveness of our approach we present experiments on two commonly available image datasets with and without known domain labels: in both cases our method outperforms baseline techniques which use no domain adaptation or domain adaptation methods that presume a single underlying domain shift.", "We propose a multiple source domain adaptation method, referred to as Domain Adaptation Machine (DAM), to learn a robust decision function (referred to as target classifier) for label prediction of patterns from the target domain by leveraging a set of pre-computed classifiers (referred to as auxiliary source classifiers) independently learned with the labeled patterns from multiple source domains. We introduce a new data-dependent regularizer based on smoothness assumption into Least-Squares SVM (LS-SVM), which enforces that the target classifier shares similar decision values with the auxiliary classifiers from relevant source domains on the unlabeled patterns of the target domain. In addition, we employ a sparsity regularizer to learn a sparse target classifier. Comprehensive experiments on the challenging TRECVID 2005 corpus demonstrate that DAM outperforms the existing multiple source domain adaptation methods for video concept detection in terms of effectiveness and efficiency.", "The problem of domain generalization is to take knowledge acquired from a number of related domains where training data is available, and to then successfully apply it to previously unseen domains. We propose a new feature learning algorithm, Multi-Task Autoencoder (MTAE), that provides good generalization performance for cross-domain object recognition. Our algorithm extends the standard denoising autoencoder framework by substituting artificially induced corruption with naturally occurring inter-domain variability in the appearance of objects. Instead of reconstructing images from noisy versions, MTAE learns to transform the original image into analogs in multiple related domains. It thereby learns features that are robust to variations across domains. The learnt features are then used as inputs to a classifier. 
We evaluated the performance of the algorithm on benchmark image recognition datasets, where the task is to learn features from multiple datasets and to then predict the image label from unseen datasets. We found that (denoising) MTAE outperforms alternative autoencoder-based models as well as the current state-of-the-art algorithms for domain generalization." ] }
1605.00743
2950286390
Attributes possess appealing properties and benefit many computer vision problems, such as object recognition, learning with humans in the loop, and image retrieval. Whereas the existing work mainly pursues utilizing attributes for various computer vision problems, we contend that the most basic problem---how to accurately and robustly detect attributes from images---has been left underexplored. In particular, the existing work rarely explicitly tackles the need for attribute detectors to generalize well across different categories, including those previously unseen. Noting that this is analogous to the objective of multi-source domain generalization if we treat each category as a domain, we provide a novel perspective on attribute detection and propose to gear the techniques of multi-source domain generalization toward learning cross-category generalizable attribute detectors. We validate our understanding and approach with extensive experiments on four challenging datasets and three different problems.
Denote by @math and @math , respectively, a Reproducing Kernel Hilbert Space (RKHS) and its associated kernel function. For an arbitrary distribution @math indexed by @math , the following mapping is injective if @math is a characteristic kernel @cite_58 @cite_77 @cite_7 . In other words, the kernel mean map @math in the RKHS @math preserves all the statistical information of @math .
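As a concrete illustration of the mean map, the sketch below embeds two sample sets with a Gaussian RBF kernel (a characteristic kernel on R^d) and compares their embeddings through the maximum mean discrepancy, the RKHS distance computable purely from Gram matrices; the data, bandwidth, and sample sizes are arbitrary choices for the demonstration.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    # Gaussian RBF kernel: characteristic on R^d, so the mean map
    # P -> E_{x~P}[k(x, .)] is injective and the embedding determines P.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(X, Y, gamma=1.0):
    # ||mu_P - mu_Q||^2 in the RKHS, estimated from samples via Gram means.
    return rbf(X, X, gamma).mean() - 2.0 * rbf(X, Y, gamma).mean() + rbf(Y, Y, gamma).mean()

rng = np.random.default_rng(0)
P1 = rng.normal(0.0, 1.0, size=(500, 2))   # two independent draws from P
P2 = rng.normal(0.0, 1.0, size=(500, 2))
Q = rng.normal(0.5, 1.0, size=(500, 2))    # a mean-shifted distribution Q
print(mmd2(P1, P2), mmd2(P1, Q))           # near zero vs. clearly positive
```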
{ "cite_N": [ "@cite_77", "@cite_58", "@cite_7" ], "mid": [ "2950536412", "1946137962", "2124331852" ], "abstract": [ "We propose a framework for analyzing and comparing distributions, allowing us to design statistical tests to determine if two samples are drawn from different distributions. Our test statistic is the largest difference in expectations over functions in the unit ball of a reproducing kernel Hilbert space (RKHS). We present two tests based on large deviation bounds for the test statistic, while a third is based on the asymptotic distribution of this statistic. The test statistic can be computed in quadratic time, although efficient linear time approximations are available. Several classical metrics on distributions are recovered when the function space used to compute the difference in expectations is allowed to be more general (eg. a Banach space). We apply our two-sample tests to a variety of problems, including attribute matching for databases using the Hungarian marriage method, where they perform strongly. Excellent performance is also obtained when comparing distributions over graphs, for which these are the first such tests.", "We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in two-sample tests, which are used for determining whether two sets of observations arise from the same distribution, covariate shift correction, local learning, measures of independence, and density estimation.", "A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseudometric on the space of probability measures can be defined as the distance between distribution embeddings: we denote this as γk, indexed by the kernel function k that defines the inner product in the RKHS. We present three theoretical properties of γk. First, we consider the question of determining the conditions on the kernel k for which γk is a metric: such k are denoted characteristic kernels. Unlike pseudometrics, a metric is zero only when two distributions coincide, thus ensuring the RKHS embedding maps all distributions uniquely (i.e., the embedding is injective). While previously published conditions may apply only in restricted circumstances (e.g., on compact domains), and are difficult to check, our conditions are straightforward and intuitive: integrally strictly positive definite kernels are characteristic. Alternatively, if a bounded continuous kernel is translation-invariant on ℜd, then it is characteristic if and only if the support of its Fourier transform is the entire ℜd. Second, we show that the distance between distributions under γk results from an interplay between the properties of the kernel and the distributions, by demonstrating that distributions are close in the embedding space when their differences occur at higher frequencies. Third, to understand the nature of the topology induced by γk, we relate γk to other popular metrics on probability measures, and present conditions on the kernel k under which γk metrizes the weak topology." ] }
1605.00743
2950286390
Attributes possess appealing properties and benefit many computer vision problems, such as object recognition, learning with humans in the loop, and image retrieval. Whereas the existing work mainly pursues utilizing attributes for various computer vision problems, we contend that the most basic problem---how to accurately and robustly detect attributes from images---has been left underexplored. In particular, the existing work rarely explicitly tackles the need for attribute detectors to generalize well across different categories, including those previously unseen. Noting that this is analogous to the objective of multi-source domain generalization if we treat each category as a domain, we provide a novel perspective on attribute detection and propose to gear the techniques of multi-source domain generalization toward learning cross-category generalizable attribute detectors. We validate our understanding and approach with extensive experiments on four challenging datasets and three different problems.
The distributional variance then follows naturally, where @math is the map of the mean of all the distributions in @math . In practice, we do not have access to the distributions. Instead, we observe the samples @math , each drawn from a distribution @math , and can thus empirically estimate the distributional variance by @math . Here @math is the centered kernel matrix over all the samples (all kernels discussed in this paper have been centered @cite_12 ), and @math collects the coefficients, which depend only on the numbers of samples. We refer the readers to @cite_79 for more details, including the consistency between the distributional variance @math and its estimate @math .
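A sketch of how such an estimate can be computed directly from a Gram matrix, using the definition of the distributional variance (the average squared RKHS distance between each domain's mean embedding and the grand mean) with every domain weighted equally; this follows the definition rather than reproducing the exact coefficient matrix of @cite_79 , and the toy data are invented.

```python
import numpy as np

def distributional_variance(K, groups):
    # K: (n x n) kernel matrix over all samples; groups: one index array per
    # source domain. Inner products of mean embeddings are block means:
    # <mu_i, mu_j> = mean(K[D_i, D_j]).
    P = len(groups)
    G = np.array([[K[np.ix_(a, b)].mean() for b in groups] for a in groups])
    # (1/P) sum_i ||mu_i - mu_bar||^2  =  tr(G)/P - mean(G)
    return G.trace() / P - G.mean()

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 1.0, size=(100, 3)) for m in (0.0, 0.0, 1.0)])
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
idx = [np.arange(100), np.arange(100, 200), np.arange(200, 300)]
print(distributional_variance(K, idx))  # small if domains match, larger if not
```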
{ "cite_N": [ "@cite_79", "@cite_12" ], "mid": [ "2949436635", "1920328734" ], "abstract": [ "This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.", "This paper presents new and effective algorithms for learning kernels. In particular, as shown by our empirical results, these algorithms consistently outperform the so-called uniform combination solution that has proven to be difficult to improve upon in the past, as well as other algorithms for learning kernels based on convex combinations of base kernels in both classification and regression. Our algorithms are based on the notion of centered alignment which is used as a similarity measure between kernels or kernel matrices. We present a number of novel algorithmic, theoretical, and empirical results for learning kernels based on our notion of centered alignment. In particular, we describe efficient algorithms for learning a maximum alignment kernel by showing that the problem can be reduced to a simple QP and discuss a one-stage algorithm for learning both a kernel and a hypothesis based on that kernel using an alignment-based regularization. Our theoretical results include a novel concentration bound for centered alignment between kernel matrices, the proof of the existence of effective predictors for kernels with high alignment, both for classification and for regression, and the proof of stability-based generalization bounds for a broad family of algorithms for learning kernels based on centered alignment. We also report the results of experiments with our centered alignment-based algorithms in both classification and regression." ] }
1605.00508
2345669120
Beamforming is an essential requirement to combat high path loss and to improve the signal-to-noise ratio during initial cell discovery in future millimeter wave cellular networks. The choice of an appropriate beamforming scheme is directly coupled with its energy consumption, which is of even more concern at a battery-limited mobile station (MS). In this work, we compare the energy consumption of different beamforming schemes while considering both a low-power and a high-power analog-to-digital converter (ADC) for a millimeter wave based receiver at the MS. We analyze both context information (CI) based (GPS positioning based) and non-CI-based schemes, and show that analog beamforming with CI (where the mobile station’s positioning information is already available) can result in lower energy consumption, while in all other scenarios digital beamforming consumes less energy than analog and hybrid beamforming. We also show that, under certain scenarios, the recently proposed phase shifters network architecture can result in lower energy consumption than the other beamforming schemes. Moreover, we show that the energy consumption trend among the different beamforming schemes holds irrespective of the number of ADC bits. Finally, we propose a new signaling structure which utilizes a relatively higher frequency sub-carrier for the primary synchronization signals compared to other signaling, allowing a further reduction in the initial cell search delay and the energy consumption of the MS.
Research related to directional cell discovery in mmW 5G cellular networks is very recent. The authors in @cite_7 suggested scanning the complete angular space sequentially to identify the best BF direction at both the MS and the base station (BS). In @cite_12 , directional cell discovery is studied, and the authors showed that DBF with a low-bit ADC at the MS can be preferable to ABF. A delay-based comparison for initial access in mmW cellular networks is presented in @cite_2 , where it is shown that DBF has a lower delay than ABF without any performance degradation. In @cite_6 , the advantages of HBF over ABF in terms of lower delay and better access probability are presented for the case of initial beamforming.
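For intuition, the sketch below runs the sequential sweep described above with a 16-element uniform linear array and a 32-beam codebook: the receiver measures one beam per slot and keeps the direction with the largest output power, so the search delay grows linearly with the codebook size (DBF would instead evaluate all directions from a single snapshot). Array size, codebook, and noise level are illustrative assumptions.

```python
import numpy as np

N = 16                                            # ULA elements (illustrative)
codebook = np.linspace(-np.pi / 2, np.pi / 2, 32) # candidate beam directions

def steer(theta):
    # Steering vector of a half-wavelength-spaced uniform linear array.
    return np.exp(1j * np.pi * np.arange(N) * np.sin(theta)) / np.sqrt(N)

rng = np.random.default_rng(2)
theta_true = 0.40                                 # unknown angle of arrival (rad)
y = steer(theta_true) + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

# Sequential analog sweep: one beam measurement per time slot.
powers = [abs(np.vdot(steer(a), y)) ** 2 for a in codebook]
best = codebook[int(np.argmax(powers))]
print(np.degrees(best), np.degrees(theta_true))   # best beam lies near the AoA
```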
{ "cite_N": [ "@cite_6", "@cite_12", "@cite_7", "@cite_2" ], "mid": [ "1987804395", "779733492", "2035330915", "2283435412" ], "abstract": [ "Cellular systems were designed for carrier frequencies in the microwave band (below 3 GHz) but will soon be operating in frequency bands up to 6 GHz. To meet the ever increasing demands for data, deployments in bands above 6 GHz, and as high as 75 GHz, are envisioned. However, as these systems migrate beyond the microwave band, certain channel characteristics can impact their deployment, especially the coverage range. To increase coverage, beamforming can be used but this role of beamforming is different than in current cellular systems, where its primary role is to improve data throughput. Because cellular procedures enable beamforming after a user establishes access with the system, new procedures are needed to enable beamforming during cell discovery and acquisition. This paper discusses several issues that must be resolved in order to use beamforming for access at millimeter wave (mmWave) frequencies, and presents solutions for initial access. Several approaches are verified by computer simulations, and it is shown that reliable network access and satisfactory coverage can be achieved in mmWave frequencies.", "The acute disparity between increasing bandwidth demand and available spectrum has brought millimeter wave (mmWave) bands to the forefront of candidate solutions for the next-generation cellular networks. Highly directional transmissions are essential for cellular communication in these frequencies to compensate for higher isotropic path loss. This reliance on directional beamforming, however, complicates initial cell search since mobiles and base stations must jointly search over a potentially large angular directional space to locate a suitable path to initiate communication. To address this problem, this paper proposes a directional cell discovery procedure where base stations periodically transmit synchronization signals, potentially in time-varying random directions, to scan the angular space. Detectors for these signals are derived based on a Generalized Likelihood Ratio Test (GLRT) under various signal and receiver assumptions. The detectors are then simulated under realistic design parameters and channels based on actual experimental measurements at 28 GHz in New York City. The study reveals two key findings: 1) digital beamforming can significantly outperform analog beamforming even when digital beamforming uses very low quantization to compensate for the additional power requirements and 2) omnidirectional transmissions of the synchronization signals from the base station generally outperform random directional scanning.", "With the formidable growth of various booming wireless communication services that require ever increasing data throughputs, the conventional microwave band below 10 GHz, which is currently used by almost all mobile communication systems, is going to reach its saturation point within just a few years. Therefore, the attention of radio system designers has been pushed toward ever higher segments of the frequency spectrum in a quest for increased capacity. In this article we investigate the feasibility, advantages, and challenges of future wireless communications over the Eband frequencies. We start with a brief review of the history of the E-band spectrum and its light licensing policy as well as benefits challenges. 
Then we introduce the propagation characteristics of E-band signals, based on which some potential fixed and mobile applications at the E-band are investigated. In particular, we analyze the achievability of a nontrivial multiplexing gain in fixed point-to-point E-band links, and propose an E-band mobile broadband (EMB) system as a candidate for the next generation mobile communication networks. The channelization and frame structure of the EMB system are discussed in detail.", "The millimeter wave (mmWave) bands have recently attracted considerable interest for next-generation cellular systems due to the massive available bandwidths at these frequencies. However, a key challenge in designing mmWave cellular systems is initial access -- the procedure by which a mobile establishes an initial link-layer connection to a base station cell. MmWave communication relies on highly directional transmissions and the initial access procedure must thus provide a mechanism by which initial transmission directions can be searched in a potentially large angular space. Design options are compared considering different scanning and signaling procedures to evaluate access delay and system overhead. The channel structure and multiple access issues are also considered. The analysis demonstrates significant benefits of low-resolution fully digital architectures in comparison to single stream analog beamforming." ] }
1605.00508
2345669120
Beamforming is an essential requirement to combat high path loss and to improve the signal-to-noise ratio during initial cell discovery in future millimeter wave cellular networks. The choice of an appropriate beamforming scheme is directly coupled with its energy consumption, which is of even more concern at a battery-limited mobile station (MS). In this work, we compare the energy consumption of different beamforming schemes while considering both a low-power and a high-power analog-to-digital converter (ADC) for a millimeter wave based receiver at the MS. We analyze both context information (CI) based (GPS positioning based) and non-CI-based schemes, and show that analog beamforming with CI (where the mobile station’s positioning information is already available) can result in lower energy consumption, while in all other scenarios digital beamforming consumes less energy than analog and hybrid beamforming. We also show that, under certain scenarios, the recently proposed phase shifters network architecture can result in lower energy consumption than the other beamforming schemes. Moreover, we show that the energy consumption trend among the different beamforming schemes holds irrespective of the number of ADC bits. Finally, we propose a new signaling structure which utilizes a relatively higher frequency sub-carrier for the primary synchronization signals compared to other signaling, allowing a further reduction in the initial cell search delay and the energy consumption of the MS.
To address the large search delay associated with identifying the right BF direction, a context information (CI) based directional cell search is proposed in @cite_5 . The authors consider a HetNet scenario where CI about the MS's position is forwarded to the mmW BS, and the BS then transmits the initial synchronization signals in the indicated direction. In @cite_9 , to reduce the directional search delay associated with ABF, the authors consider the availability of CI about the mmW BS's position at the MS. They further propose a phase shifters network (PSN) architecture (which results in a lower power consumption than HBF) to mitigate the effect of erroneous CI.
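A sketch of the position-based shortcut: given (for example, GPS-derived) MS and BS coordinates, the MS computes the bearing to the BS and points the single codebook beam closest to it, skipping the sweep entirely; with erroneous CI the selected beam can miss the true direction, which is the failure mode the PSN architecture is meant to tolerate. The coordinates, codebook, and error model are invented for illustration.

```python
import numpy as np

codebook = np.linspace(-np.pi / 2, np.pi / 2, 32)  # same illustrative codebook

def beam_from_position(ms_xy, bs_xy, error_std=0.0, rng=None):
    # Bearing from the MS to the BS, optionally corrupted by angular noise
    # to mimic inaccurate context information.
    bearing = np.arctan2(bs_xy[1] - ms_xy[1], bs_xy[0] - ms_xy[0])
    if error_std > 0.0:
        bearing += (rng or np.random.default_rng()).normal(0.0, error_std)
    return int(np.argmin(np.abs(codebook - bearing)))  # closest codebook beam

ms, bs = (0.0, 0.0), (80.0, 30.0)
print(beam_from_position(ms, bs))                  # direct pointing, no sweep
print(beam_from_position(ms, bs, error_std=0.10))  # erroneous CI may shift beams
```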
{ "cite_N": [ "@cite_5", "@cite_9" ], "mid": [ "2964273971", "2962877103" ], "abstract": [ "The exploitation of the mm-wave bands is one of the most promising solutions for 5G mobile radio networks. However, the use of mm-wave technologies in cellular networks is not straightforward due to mm-wave severe propagation conditions that limit access availability. In order to overcome this obstacle, hybrid network architectures are being considered where mm-wave small cells can exploit an overlay coverage layer based on legacy technology. The additional mm-wave layer can also take advantage of a functional split between control and user plane, that allows to delegate most of the signaling functions to legacy base stations and to gather context information from users for resource optimization. However, mm-wave technology requires multiple antennas and highly directional transmissions to compensate for high path loss and limited power. Directional transmissions must be also used for the cell discovery and synchronization process, and this can lead to a non negligible delay due to need to scan the cell area with multiple transmissions in different angles. In this paper, we propose to exploit the context information related to user position, provided by the separated control plane, to improve the cell discovery procedure and minimize delay. We investigate the fundamental trade-offs of the cell discovery process with directional antennas and the effects of the context information accuracy on its performance. Numerical results are provided to validate our observations.", "Millimeter wave (mmWave) communication is envisioned as a cornerstone to fulfill the data rate requirements for fifth generation (5G) cellular networks. In mmWave communication, beamforming is considered as a key technology to combat the high path-loss, and unlike in conventional microwave communication, beamforming may be necessary even during initial access cell search. Among the proposed beamforming schemes for initial cell search, analog beamforming is a power efficient approach but suffers from its inherent search delay during initial access. In this work, we argue that analog beamforming can still be a viable choice when context information about mmWave base stations (BS) is available at the mobile station (MS). We then study how the performance of analog beamforming degrades in case of angular errors in the available context information. Finally, we present an analog beamforming receiver architecture that uses multiple arrays of Phase Shifters and a single RF chain to combat the effect of angular errors, showing that it can achieve the same performance as hybrid beamforming." ] }
1605.00170
2347101263
Sparse representation has been widely studied in visual tracking and has shown promising tracking performance. Despite a lot of progress, visual tracking remains a challenging task due to appearance variations over time. In this paper, we propose a novel sparse tracking algorithm that addresses temporal appearance changes well, by enforcing template representability and temporal consistency (TRAC). By modeling temporal consistency, our algorithm addresses the issue of drifting away from the tracking target. By exploring the templates' long-term and short-term representability, the proposed method adaptively updates the dictionary using the most descriptive templates, which significantly improves the robustness to target appearance changes. We compare our TRAC algorithm against state-of-the-art approaches on 12 challenging benchmark image sequences. Both qualitative and quantitative results demonstrate that our algorithm significantly outperforms previous state-of-the-art trackers.
Visual tracking has been extensively studied over the last few decades; comprehensive surveys of tracking methods can be found in @cite_17 @cite_0 . In general, existing tracking methods can be categorized as either discriminative or generative. Discriminative tracking methods formulate tracking as a binary classification task that separates the target from the background. @cite_25 proposed a multiple-instance learning algorithm that trains a discriminative classifier in an online manner to separate the object from the background. @cite_12 used a bootstrapping binary classifier with positive and negative constraints for tracking-by-detection. An online SVM solver was extended with latent variables in @cite_22 for structured learning of the tracking target. Generative tracking techniques @cite_1 , on the other hand, are based on appearance models of target objects and search for the most similar image region. The appearance model can rely either on key points and finding correspondences on deformable objects @cite_13 or on image features extracted from a bounding box @cite_1 . We focus on appearance models relying on image features, which can be used to construct a descriptive representation of target objects.
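To make the generative, template-based formulation concrete (the setting a sparse tracker builds on), the sketch below scores a candidate patch by coding it as a sparse combination of target templates and using the reconstruction error as an inverse likelihood; the ISTA solver, dictionary size, and data are illustrative, and this is a generic l1 tracking ingredient rather than the TRAC method itself.

```python
import numpy as np

def sparse_code(D, y, lam=0.05, steps=200):
    # ISTA for min_c 0.5 * ||D c - y||^2 + lam * ||c||_1.
    L = np.linalg.norm(D, 2) ** 2             # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(steps):
        g = c - D.T @ (D @ c - y) / L         # gradient step
        c = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return c

rng = np.random.default_rng(3)
D = rng.normal(size=(64, 10))                 # 10 vectorized target templates
D /= np.linalg.norm(D, axis=0)
target = D @ np.array([0.8, 0, 0, 0.3, 0, 0, 0, 0, 0, 0])  # target-like patch
clutter = rng.normal(size=64)                 # background patch
for y in (target, clutter):
    c = sparse_code(D, y)
    print(np.linalg.norm(D @ c - y))          # low error = more target-like
```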
{ "cite_N": [ "@cite_22", "@cite_1", "@cite_0", "@cite_13", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2024029849", "1970387420", "2126302311", "1961392663", "2167089254", "2147533695", "1985560977" ], "abstract": [ "Despite many advances made in the area, deformable targets and partial occlusions continue to represent key problems in visual tracking. Structured learning has shown good results when applied to tracking whole targets, but applying this approach to a part-based target model is complicated by the need to model the relationships between parts, and to avoid lengthy initialisation processes. We thus propose a method which models the unknown parts using latent variables. In doing so we extend the online algorithm pegasos to the structured prediction case (i.e., predicting the location of the bounding boxes) with latent part variables. To better estimate the parts, and to avoid over-fitting caused by the extra model complexity capacity introduced by the parts, we propose a two-stage training process, based on the primal rather than the dual form. We then show that the method outperforms the state-of-the-art (linear and non-linear kernel) trackers.", "The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allow for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce the novel information concept, depth of interest, which we use to identify candidates for detection, and that avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), human-object and human-human interaction. We conclude with the observation that the incorporation of the depth information, together with the use of modern techniques in new ways, we are able to create an accurate system for real-time 3-D perception of humans by a mobile robot.", "There is a large variety of trackers, which have been proposed in the literature during the last two decades with some mixed success. Object tracking in realistic scenarios is a difficult problem, therefore, it remains a most active area of research in computer vision. 
A good tracker should perform well in a large number of videos involving illumination changes, occlusion, clutter, camera motion, low contrast, specularities, and at least six more aspects. However, the performance of proposed trackers have been evaluated typically on less than ten videos, or on the special purpose datasets. In this paper, we aim to evaluate trackers systematically and experimentally on 315 video fragments covering above aspects. We selected a set of nineteen trackers to include a wide variety of algorithms often cited in literature, supplemented with trackers appearing in 2010 and 2011 for which the code was publicly available. We demonstrate that trackers can be evaluated objectively by survival curves, Kaplan Meier statistics, and Grubs testing. We find that in the evaluation practice the F-score is as effective as the object tracking accuracy (OTA) score. The analysis under a large variety of circumstances provides objective insight into the strengths and weaknesses of trackers.", "We propose a novel method for establishing correspondences on deformable objects for single-target object tracking. The key ingredient is a dissimilarity measure between correspondences that takes into account their geometric compatibility, allowing us to separate inlier correspondences from outliers. We employ both static correspondences from the initial appearance of the object as well as adaptive correspondences from the previous frame to address the stability-plasticity dilemma. The geometric dissimilarity measure enables us to also disambiguate keypoints that are difficult to match. Based on these ideas we build a keypoint-based tracker that outputs rotated bounding boxes. We demonstrate in a rigorous empirical analysis that this tracker outperforms the state of the art on a dataset of 77 sequences.", "In this paper, we address the problem of learning an adaptive appearance model for object tracking. In particular, a class of tracking techniques called “tracking by detection” have been shown to give promising results at real-time speeds. These methods train a discriminative classifier in an online manner to separate the object from the background. This classifier bootstraps itself by using the current tracker state to extract positive and negative examples from the current frame. Slight inaccuracies in the tracker can therefore lead to incorrectly labeled training examples, which degrades the classifier and can cause further drift. In this paper we show that using Multiple Instance Learning (MIL) instead of traditional supervised learning avoids these problems, and can therefore lead to a more robust tracker with fewer parameter tweaks. We present a novel online MIL algorithm for object tracking that achieves superior results with real-time performance.", "This paper shows that the performance of a binary classifier can be significantly improved by the processing of structured unlabeled data, i.e. data are structured if knowing the label of one example restricts the labeling of the others. We propose a novel paradigm for training a binary classifier from labeled and unlabeled examples that we call P-N learning. The learning process is guided by positive (P) and negative (N) constraints which restrict the labeling of the unlabeled set. P-N learning evaluates the classifier on the unlabeled data, identifies examples that have been classified in contradiction with structural constraints and augments the training set with the corrected samples in an iterative process. 
We propose a theory that formulates the conditions under which P-N learning guarantees improvement of the initial classifier and validate it on synthetic and real data. P-N learning is applied to the problem of on-line learning of object detector during tracking. We show that an accurate object detector can be learned from a single example and an unlabeled video sequence where the object may occur. The algorithm is compared with related approaches and state-of-the-art is achieved on a variety of objects (faces, pedestrians, cars, motorbikes and animals).", "Long-term video tracking is of great importance for many applications in real-world scenarios. A key component for achieving long-term tracking is the tracker's capability of updating its internal representation of targets (the appearance model) to changing conditions. Given the rapid but fragmented development of this research area, we propose a unified conceptual framework for appearance model adaptation that enables a principled comparison of different approaches. Moreover, we introduce a novel evaluation methodology that enables simultaneous analysis of tracking accuracy and tracking success, without the need of setting application-dependent thresholds. Based on the proposed framework and this novel evaluation methodology, we conduct an extensive experimental comparison of trackers that perform appearance model adaptation. Theoretical and experimental analyses allow us to identify the most effective approaches as well as to highlight design choices that favor resilience to errors during the update process. We conclude the paper with a list of key open research challenges that have been singled out by means of our experimental comparison." ] }
1605.00170
2347101263
Sparse representation has been widely studied in visual tracking, which has shown promising tracking performance. Despite a lot of progress, the visual tracking problem is still a challenging task due to appearance variations over time. In this paper, we propose a novel sparse tracking algorithm that well addresses temporal appearance changes, by enforcing template representability and temporal consistency (TRAC). By modeling temporal consistency, our algorithm addresses the issue of drifting away from a tracking target. By exploring the templates' long-term-short-term representability, the proposed method adaptively updates the dictionary using the most descriptive templates, which significantly improves the robustness to target appearance changes. We compare our TRAC algorithm against the state-of-the-art approaches on 12 challenging benchmark image sequences. Both qualitative and quantitative results demonstrate that our algorithm significantly outperforms previous state-of-the-art trackers.
For accurate visual tracking, templates must be updated to account for target appearance changes and to prevent drift. Most sparsity-based trackers adopt the template update scheme of @cite_2 , which assigns an importance weight to each template based on its utilization during tracking. The template with the smallest weight is then replaced by the current tracking result. However, this scheme cannot model the templates' representability and cannot adapt to the degree of the target's appearance changes, and thus lacks discriminative power. Our TRAC algorithm addresses both issues and can robustly track targets whose appearance changes over time.
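As a rough illustration of the weight-based update scheme summarized above, the sketch below keeps one importance weight per template, reinforces templates that are used in the current sparse representation, decays the rest, and overwrites the lowest-weight template when the current result is poorly explained. The decay factor, the similarity test, and the re-initialized weight are illustrative assumptions, not the exact rule of @cite_2 or of TRAC.

```python
import numpy as np

def update_templates(templates, weights, coeffs, new_result,
                     decay=0.95, sim_threshold=0.9):
    """Weight-based template dictionary update (illustrative sketch only).

    templates  : (d, k) array, one appearance template per column
    weights    : (k,) importance weights, one per template
    coeffs     : (k,) sparse coefficients of the current result over the templates
    new_result : (d,) feature vector of the current tracking result
    """
    # Templates used in the current sparse representation gain weight; others decay.
    used = np.abs(coeffs) > 1e-6
    weights = np.where(used, weights + np.abs(coeffs), weights * decay)

    # If the current result is poorly explained by the dictionary, overwrite the
    # least important template with it (the "replace the smallest weight" rule).
    reconstruction = templates @ coeffs
    cos_sim = float(reconstruction @ new_result) / (
        np.linalg.norm(reconstruction) * np.linalg.norm(new_result) + 1e-12)
    if cos_sim < sim_threshold:
        worst = int(np.argmin(weights))
        templates = templates.copy()
        templates[:, worst] = new_result
        weights[worst] = float(np.median(weights))
    return templates, weights

# Toy usage with a 64-D feature space and 10 templates.
rng = np.random.default_rng(0)
T = rng.normal(size=(64, 10))
w = np.ones(10)
c = np.zeros(10); c[3] = 0.8; c[7] = 0.2   # a sparse code over the templates
T, w = update_templates(T, w, c, rng.normal(size=64))
```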
{ "cite_N": [ "@cite_2" ], "mid": [ "2113577207" ], "abstract": [ "In this paper, we propose a robust visual tracking method by casting tracking as a sparse approximation problem in a particle filter framework. In this framework, occlusion, noise, and other challenging issues are addressed seamlessly through a set of trivial templates. Specifically, to find the tracking target in a new frame, each target candidate is sparsely represented in the space spanned by target templates and trivial templates. The sparsity is achieved by solving an l1-regularized least-squares problem. Then, the candidate with the smallest projection error is taken as the tracking target. After that, tracking is continued using a Bayesian state inference framework. Two strategies are used to further improve the tracking performance. First, target templates are dynamically updated to capture appearance changes. Second, nonnegativity constraints are enforced to filter out clutter which negatively resembles tracking targets. We test the proposed approach on numerous sequences involving different types of challenges, including occlusion and variations in illumination, scale, and pose. The proposed approach demonstrates excellent performance in comparison with previously proposed trackers. We also extend the method for simultaneous tracking and recognition by introducing a static template set which stores target images from different classes. The recognition result at each frame is propagated to produce the final result for the whole video. The approach is validated on a vehicle tracking and classification task using outdoor infrared video sequences." ] }
1605.00316
2346432327
The modern data analyst must cope with data encoded in various forms, vectors, matrices, strings, graphs, or more. Consequently, statistical and machine learning models tailored to different data encodings are important. We focus on data encoded as normalized vectors, so that their "direction" is more important than their magnitude. Specifically, we consider high-dimensional vectors that lie either on the surface of the unit hypersphere or on the real projective plane. For such data, we briefly review common mathematical models prevalent in machine learning, while also outlining some technical aspects, software, applications, and open mathematical challenges.
Feature extraction based on correlation is studied in @cite_41 . Classical data mining applications such as topic modeling for normalized data are studied in @cite_19 @cite_35 . A semi-parametric setting using Dirichlet process mixtures for spherical data is presented in @cite_40 . Several directional-data clustering settings have been considered: depth images using Watson mixtures @cite_36 ; a k-means++ @cite_39 style procedure for mixtures of vMFs @cite_3 ; clustering on orthogonal manifolds @cite_4 ; mixtures of Gaussian and vMF distributions @cite_25 . Directional data have also been used in several biomedical (imaging) applications, for example diffusion imaging @cite_32 , fMRI @cite_34 , white matter supervoxel segmentation @cite_30 , and brain imaging @cite_37 . In signal processing there are applications to spatial fading using vMF mixtures @cite_43 and to speaker modeling @cite_8 . Finally, beyond vMF and Watson, it is worthwhile to consider the Angular Gaussian distribution @cite_33 , which has been applied to model natural images, for instance in @cite_1 .
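For readers unfamiliar with the von Mises-Fisher (vMF) model that several of the cited clustering works build on, the sketch below evaluates the standard vMF log-density on the unit hypersphere and computes mixture responsibilities (an E-step-style soft assignment). It uses SciPy's modified Bessel function and is written for readability rather than numerical robustness in high dimensions.

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def vmf_log_density(x, mu, kappa):
    """Log-density of the von Mises-Fisher distribution on the unit sphere S^{p-1}.

    x, mu : unit-norm vectors of dimension p;  kappa : concentration (> 0).
    Note: iv() may overflow for large kappa or p; fine for a small illustration.
    """
    p = x.shape[0]
    log_norm = ((p / 2 - 1) * np.log(kappa)
                - (p / 2) * np.log(2 * np.pi)
                - np.log(iv(p / 2 - 1, kappa)))
    return float(log_norm + kappa * (mu @ x))

def vmf_responsibilities(x, mus, kappas, log_weights):
    """Soft assignment of a unit vector x to the components of a vMF mixture."""
    log_post = np.array([lw + vmf_log_density(x, mu, k)
                         for lw, mu, k in zip(log_weights, mus, kappas)])
    log_post -= log_post.max()            # numerical stability
    post = np.exp(log_post)
    return post / post.sum()

# Toy usage on the 3-D sphere with two components.
def normalize(v):
    return v / np.linalg.norm(v)

mus = [normalize(np.array([1.0, 0.0, 0.0])), normalize(np.array([0.0, 1.0, 1.0]))]
kappas = [10.0, 5.0]
log_weights = np.log([0.5, 0.5])
print(vmf_responsibilities(normalize(np.array([0.9, 0.1, 0.0])), mus, kappas, log_weights))
```

A full clustering run would alternate these responsibilities with updates of the mean directions and concentrations, as done in the vMF mixture papers cited above.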
{ "cite_N": [ "@cite_30", "@cite_35", "@cite_37", "@cite_4", "@cite_33", "@cite_8", "@cite_41", "@cite_36", "@cite_1", "@cite_32", "@cite_3", "@cite_39", "@cite_19", "@cite_40", "@cite_43", "@cite_34", "@cite_25" ], "mid": [ "1451037378", "2119650157", "2113537613", "2021265981", "2021553162", "2150260301", "2127409454", "2112543631", "2186770727", "2105750764", "1617750321", "2073459066", "316065036", "318315622", "2139847864", "2164839939", "2089912168" ], "abstract": [ "A powerful aspect of diffusion MR imaging is the ability to reconstruct fiber orientations in brain white matter; however, the application of traditional learning algorithms is challenging due to the directional nature of the data. In this paper, we present an algorithmic approach to clustering such spatial and orientation data and apply it to brain white matter supervoxel segmentation. This approach is an extension of the DP-means algorithm to support axial data, and we present its theoretical connection to probabilistic models, including the Gaussian and Watson distributions. We evaluate our method with the analysis of synthetic data and an application to diffusion tensor atlas segmentation. We find our approach to be efficient and effective for the automatic extraction of regions of interest that respect the structure of brain white matter. The resulting supervoxel segmentation could be used to map regional anatomical changes in clinical studies or serve as a domain for more complex modeling.", "We introduce the Spherical Admixture Model (SAM), a Bayesian topic model for arbitrary l2 normalized data. SAM maintains the same hierarchical structure as Latent Dirichlet Allocation (LDA), but models documents as points on a high-dimensional spherical manifold, allowing a natural likelihood parameterization in terms of cosine distance. Furthermore, SAM can model word absence presence at the document level, and unlike previous models can assign explicit negative weight to topic terms. Performance is evaluated empirically, both through human ratings of topic quality and through diverse classification tasks from natural language processing and computer vision. In these experiments, SAM consistently outperforms existing models.", "Abstract Understanding the organization of the human brain requires identification of its functional subdivisions. Clustering schemes based on resting-state functional magnetic resonance imaging (fMRI) data are rapidly emerging as non-invasive alternatives to cytoarchitectonic mapping in postmortem brains. Here, we propose a novel spatio-temporal probabilistic parcellation scheme that overcomes major weaknesses of existing approaches by (i) modeling the fMRI time series of a voxel as a von Mises-Fisher distribution, which is widely used for clustering high dimensional data; (ii) modeling the latent cluster labels as a Markov random field, which provides spatial regularization on the cluster labels by penalizing neighboring voxels having different cluster labels; and (iii) introducing a prior on the number of labels, which helps in uncovering the number of clusters automatically from the data. Cluster labels and model parameters are estimated by an iterative expectation maximization procedure wherein, given the data and current estimates of model parameters, the latent cluster labels, are computed using α-expansion, a state of the art graph cut, method. In turn, given the current estimates of cluster labels, model parameters are estimated by maximizing the pseudo log-likelihood. 
The performance of the proposed method is validated using extensive computer simulations. Using novel stability analysis we examine the sensitivity of our methods to parameter initialization and demonstrate that the method is robust to a wide range of initial parameter values. We demonstrate the application of our methods by parcellating spatially contiguous as well as non-contiguous brain regions at both the individual participant and group levels. Notably, our analyses yield new data on the posterior boundaries of the supplementary motor area and provide new insights into functional organization of the insular cortex. Taken together, our findings suggest that our method is a powerful tool for investigating functional subdivisions in the human brain.", "The mean shift algorithm, which is a nonparametric density estimator for detecting the modes of a distribution on a Euclidean space, was recently extended to operate on analytic manifolds. The extension is extrinsic in the sense that the inherent optimization is performed on the tangent spaces of these manifolds. This approach specifically requires the use of the exponential map at each iteration. This paper presents an alternative mean shift formulation, which performs the iterative optimization “on” the manifold of interest and intrinsically locates the modes via consecutive evaluations of a mapping. In particular, these evaluations constitute a modified gradient ascent scheme that avoids the computation of the exponential maps for Stiefel and Grassmann manifolds. The performance of our algorithm is evaluated by conducting extensive comparative studies on synthetic data as well as experiments on object categorization and segmentation of multiple motions.", "SUMMARY The angular central Gaussian distribution is an alternative to the Bingham distribution for modeling antipodal symmetric directional data. In this paper the statistical theory for the angular central Gaussian model is presented. Some topics treated are maximum likelihood estimation of the parameters, testing for uniformity and circularity, and principal components analysis. Comparisons to methods based upon the sample second moments are made via an example.", "This paper proposes a generative model-based speaker clustering algorithm in the maximum a posteriori adapted Gaussian mixture model (GMM) mean supervector space. The algorithm can be viewed as an extension of the standard expectation maximization algorithm for fitting a mixture model to the data, which iterates between two steps - a sample re-assignment step (E-step) and a model re-estimation step (M-step) - until it converges. The directional scattering patterns of GMM mean supervectors suggest that we employ a mixture of von Mises-Fisher distributions in the model re-estimation step. In the sample re-assignment step, four sample-to-mixture assignment strategies, namely soft, hard, stochastic, and deterministic annealing assignments, are used. Our experiments on the GALE Mandarin dataset show that the use of a mixture of von Mises-Fisher distributions as the underlying model yields significantly higher speaker clustering accuracies than the use of a mixture of Gaussian distributions. 
It is further shown that deterministic annealing assignment outperforms soft assignment, that soft assignment is comparable to stochastic assignment, and that both soft and stochastic assignments outperform hard assignment.", "Beyond linear and kernel-based feature extraction, we propose in this paper the generalized feature extraction formulation based on the so-called graph embedding framework. Two novel correlation metric based algorithms are presented based on this formulation. correlation embedding analysis (CEA), which incorporates both correlational mapping and discriminating analysis, boosts the discriminating power by mapping data from a high-dimensional hypersphere onto another low-dimensional hypersphere and preserving the intrinsic neighbor relations with local graph modeling. correlational principal component analysis (CPCA) generalizes the conventional Principal Component Analysis (PCA) algorithm to the case with data distributed on a high-dimensional hypersphere. Their advantages stem from two facts: 1) tailored to normalized data, which are often the outputs from the data preprocessing step, and 2) directly designed with correlation metric, which shows to be generally better than Euclidean distance for classification purpose. Extensive comparisons with existing algorithms on visual classification experiments demonstrate the effectiveness of the proposed algorithms.", "In this paper, we propose an unsupervised clustering method for axially symmetric directional unit vectors. Our method exploits the Watson distribution and Bregman Divergence within a Model Based Clustering framework. The main objectives of our method are: (a) provide efficient solution to estimate the parameters of a Watson Mixture Model (WMM), (b) generate a set of WMMs and (b) select the optimal model. To this aim, we develop: (a) an efficient soft clustering method, (b) a hierarchical clustering approach in parameter space and (c) a model selection strategy by exploiting information criteria and an evaluation graph. We empirically validate the proposed method using synthetic data. Next, we apply the method for clustering image normals and demonstrate that the proposed method is a potential tool for analyzing the depth image.", "", "High angular resolution diffusion imaging (HARDI) permits the computation of water molecule displacement probabilities over the sphere. This probability is often referred to as the orientation distribution function (ODF). In this paper we present a novel model for representing this diffusion ODF namely, a mixture of von Mises-Fisher (vMF) distributions. Our model is compact in that it requires very few parameters to represent complicated ODF geometries which occur specifically in the presence of heterogeneous nerve fiber orientations. We present a Riemannian geometric framework for computing intrinsic distances (in closed-form) and for performing interpolation between ODFs represented by vMF mixtures. We also present closed-form equations for entropy and variance based anisotropy measures that are then computed and illustrated for real HARDI data from a rat brain.", "Von Mises-Fisher (vMF) Distribution is one of the most commonly used distributions for fitting directional data. Mixtures of vMF (MovMF) distributions have been used successfully in many applications. One of the important problems in mixture models is the problem of local minima of the objective function. Therefore, approaches to avoid local minima problem is essential in improving the performance. 
Recently, an algorithm called k-means++ was introduced in the literature and used successfully for finding initial parameters for mixtures of Gaussian (MoG) distributions. In this paper, we adopt this algorithm for finding good initializations for MovMF distributions. We show that MovMF distribution will lead to the same cost function as MoGs and therefore similar guarantee as the case of MoG distributions will also hold here. We also demonstrate the performance of the method on some real datasets.", "The k-means method is a widely used clustering technique that seeks to minimize the average squared distance between points in the same cluster. Although it offers no accuracy guarantees, its simplicity and speed are very appealing in practice. By augmenting k-means with a very simple, randomized seeding technique, we obtain an algorithm that is Θ(logk)-competitive with the optimal clustering. Preliminary experiments show that our augmentation improves both the speed and the accuracy of k-means, often quite dramatically.", "Automated unsupervised learning of topic-based clusters is used in various text data mining applications, e.g., document organization in content management, information retrieval and filtering in news aggregation services. Typically batch models are used for this purpose, which perform clustering on the document collection in aggregate. In this paper, we first analyze three batch topic models that have been recently proposed in the machine learning and data mining community – Latent Dirichlet Allocation (LDA), Dirichlet Compound Multinomial (DCM) mixtures and von-Mises Fisher (vMF) mixture models. Our discussion uses a common framework based on the particular assumptions made regarding the conditional distributions corresponding to each component and the topic priors. Our experiments on large real-world document collections demonstrate that though LDA is a good model for finding word-level topics, vMF finds better document-level topic clusters more efficiently, which is often important in text mining applications. In cases where offline clustering on complete document collections is infeasible due to resource constraints, online unsupervised clustering methods that process incoming data incrementally are necessary. To this end, we propose online variants of vMF, EDCM and LDA. Experiments on real-world streaming text illustrate the speed and performance benefits of online vMF. Finally, we propose a practical heuristic for hybrid topic modeling, which learns online topic models on streaming text data and intermittently runs batch topic models on aggregated documents offline. Such a hybrid model is useful for applications (e.g., dynamic topic-based aggregation of consumer-generated content in social networking sites) that need a good tradeoff between the performance of batch offline algorithms and efficiency of incremental online algorithms.", "", "In this paper new expressions for the Spatial Fading Correlation (SFC) functions of Antenna Arrays (AA) in a 3-dimensional (3D) multipath channel are derived. In particular the Uniform Circular Array (UCA) antenna topology is considered. The derivation of the novel SFC function uses a Probability Density Function (PDF) originating from the field of directional statistics, the Von Mises Fisher (VMF) PDF. In particular the novel SFC function is based on the concept of mixture modeling and hence uses a mixture of VMF distributions. 
Since the SFC function is dependent on the Angle of Arrival (AoA) as well as the power of each cluster, the more appropriate power azimuth colatitude spectrum term has been used. The choice of distribution is validated with the use of Multiple Input Multiple Output (MIMO) experimental data that was obtained in an outdoor drive test campaign in Germany. A mixture can be composed of any number of clusters and this is mainly dependent on the clutter type encountered in the propagation environment. The parameters of the individual clusters within the mixture are derived and an estimation of those parameters is achieved using the soft-Expectation Maximization (EM) algorithm. The results indicate that the proposed model fits well with the MIMO data.", "We present a method for discovering patterns of selectivity in fMRI data for experiments with multiple stimuli tasks. We introduce a representation of the data as profiles of selectivity using linear regression estimates, and employ mixture model density estimation to identify functional systems with distinct types of selectivity. The method characterizes these systems by their selectivity patterns and spatial maps, both estimated simultaneously via the EM algorithm. We demonstrate a corresponding method for group analysis that avoids the need for spatial correspondence among subjects. Consistency of the selectivity profiles across subjects provides a way to assess the validity of the discovered systems. We validate this model in the context of category selectivity in visual cortex, demonstrating good agreement with the findings based on prior hypothesis-driven methods.", "Mixture modelling involves explaining some observed evidence using a combination of probability distributions. The crux of the problem is the inference of an optimal number of mixture components and their corresponding parameters. This paper discusses unsupervised learning of mixture models using the Bayesian Minimum Message Length (MML) criterion. To demonstrate the effectiveness of search and inference of mixture parameters using the proposed approach, we select two key probability distributions, each handling fundamentally different types of data: the multivariate Gaussian distribution to address mixture modelling of data distributed in Euclidean space, and the multivariate von Mises-Fisher (vMF) distribution to address mixture modelling of directional data distributed on a unit hypersphere. The key contributions of this paper, in addition to the general search and inference methodology, include the derivation of MML expressions for encoding the data using multivariate Gaussian and von Mises-Fisher distributions, and the analytical derivation of the MML estimates of the parameters of the two distributions. Our approach is tested on simulated and real world data sets. For instance, we infer vMF mixtures that concisely explain experimentally determined three-dimensional protein conformations, providing an effective null model description of protein structures that is central to many inference problems in structural bioinformatics. The experimental results demonstrate that the performance of our proposed search and inference method along with the encoding schemes improve on the state of the art mixture modelling techniques." ] }
1605.00052
2950725226
An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons are still below satisfaction, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
Image classification is a fundamental problem in computer vision. In recent years, researchers have extended the conventional tasks @cite_29 @cite_8 to fine-grained @cite_12 @cite_24 @cite_48 and large-scale @cite_46 @cite_19 @cite_38 cases.
{ "cite_N": [ "@cite_38", "@cite_8", "@cite_48", "@cite_29", "@cite_24", "@cite_19", "@cite_46", "@cite_12" ], "mid": [ "2108598243", "2166049352", "", "2162915993", "", "2017814585", "1576445103", "" ], "abstract": [ "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "Current computational approaches to learning visual object categories require thousands of training images, are slow, cannot learn in an incremental manner and cannot incorporate prior information into the learning process. In addition, no algorithm presented in the literature has been tested on more than a handful of object categories. We present an method for learning object categories from just a few training images. It is quick and it uses prior information in a principled way. We test it on a dataset composed of images of objects belonging to 101 widely varied categories. Our proposed method is based on making use of prior information, assembled from (unrelated) object categories which were previously learnt. A generative probabilistic model is used, which represents the shape and appearance of a constellation of features belonging to the object. The parameters of the model are learnt incrementally in a Bayesian manner. Our incremental algorithm is compared experimentally to an earlier batch Bayesian algorithm, as well as to one based on maximum likelihood. The incremental and batch versions have comparable classification performance on small training sets, but incremental learning is significantly faster, making real-time learning feasible. Both Bayesian methods outperform maximum likelihood on small training sets.", "", "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. 
Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors.", "", "Scene categorization is a fundamental problem in computer vision. However, scene understanding research has been constrained by the limited scope of currently-used databases which do not capture the full variety of scene categories. Whereas standard databases for object categorization contain hundreds of different classes of objects, the largest available dataset of scene categories contains only 15 classes. In this paper we propose the extensive Scene UNderstanding (SUN) database that contains 899 categories and 130,519 images. We use 397 well-sampled categories to evaluate numerous state-of-the-art algorithms for scene recognition and establish new bounds of performance. We measure human scene classification performance on the SUN database and compare this with computational methods. Additionally, we study a finer-grained scene representation to detect scenes embedded inside of larger scenes.", "We introduce a challenging set of 256 object categories containing a total of 30607 images. The original Caltech-101 [1] was collected by choosing a set of object categories, downloading examples from Google Images and then manually screening out all images that did not fit the category. Caltech-256 is collected in a similar manner with several improvements: a) the number of categories is more than doubled, b) the minimum number of images in any category is increased from 31 to 80, c) artifacts due to image rotation are avoided and d) a new and larger clutter category is introduced for testing background rejection. We suggest several testing paradigms to measure classification performance, then benchmark the dataset using two simple metrics as well as a state-of-the-art spatial pyramid matching [2] algorithm. Finally we use the clutter category to train an interest detector which rejects uninformative background regions.", "" ] }
1605.00052
2950725226
An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons are still below satisfaction, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
The Bag-of-Visual-Words (BoVW) model @cite_56 represents each image with a high-dimensional vector. It typically consists of three stages, i.e., descriptor extraction, feature encoding and feature summarization. Due to the limited descriptive power of raw pixels, local descriptors such as SIFT @cite_34 and HOG @cite_52 are extracted. A visual vocabulary is then built to capture the data distribution in feature space. Descriptors are thereafter quantized onto the vocabulary as compact feature vectors @cite_53 @cite_25 @cite_22 @cite_45 , and summarized into an image-level representation @cite_29 @cite_3 @cite_6 . These feature vectors are post-processed @cite_47 and then fed into a machine learning tool @cite_11 @cite_49 @cite_2 for evaluation.
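The three-stage pipeline can be made concrete with a few lines of code. The sketch below is a generic, simplified instance (a toy k-means vocabulary, hard-assignment histogram encoding, and normalized pooling) rather than the specific encoders of the cited works, and the "descriptors" are random stand-ins for SIFT/HOG features.

```python
import numpy as np

def build_vocabulary(descriptors, k=64, iters=20, seed=0):
    """Toy k-means to build a visual vocabulary from local descriptors (n, d)."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest visual word.
        d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(1)
        # Recompute centers (keep the old one if a cluster becomes empty).
        for j in range(k):
            members = descriptors[assign == j]
            if len(members) > 0:
                centers[j] = members.mean(0)
    return centers

def encode_image(descriptors, centers):
    """Hard-assignment histogram (the simplest feature-encoding stage)."""
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(1), minlength=len(centers)).astype(float)
    return hist / (hist.sum() + 1e-12)   # summarization: normalized pooling

# Mock "SIFT-like" descriptors: 500 training descriptors, 128-D each.
rng = np.random.default_rng(1)
train_desc = rng.normal(size=(500, 128))
vocab = build_vocabulary(train_desc, k=16)
image_repr = encode_image(rng.normal(size=(200, 128)), vocab)
```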
{ "cite_N": [ "@cite_22", "@cite_53", "@cite_29", "@cite_52", "@cite_3", "@cite_56", "@cite_6", "@cite_45", "@cite_49", "@cite_2", "@cite_47", "@cite_34", "@cite_25", "@cite_11" ], "mid": [ "1606858007", "2097018403", "2162915993", "2161969291", "", "1625255723", "1968990331", "", "", "", "", "2151103935", "", "" ], "abstract": [ "The Fisher kernel (FK) is a generic framework which combines the benefits of generative and discriminative approaches. In the context of image classification the FK was shown to extend the popular bag-of-visual-words (BOV) by going beyond count statistics. However, in practice, this enriched representation has not yet shown its superiority over the BOV. In the first part we show that with several well-motivated modifications over the original framework we can boost the accuracy of the FK. On PASCAL VOC 2007 we increase the Average Precision (AP) from 47.9 to 58.3 . Similarly, we demonstrate state-of-the-art accuracy on CalTech 256. A major advantage is that these results are obtained using only SIFT descriptors and costless linear classifiers. Equipped with this representation, we can now explore image classification on a larger scale. In the second part, as an application, we compare two abundant resources of labeled images to learn classifiers: ImageNet and Flickr groups. In an evaluation involving hundreds of thousands of training images we show that classifiers learned on Flickr groups perform surprisingly well (although they were not intended for this purpose) and that they can complement classifiers learned on more carefully annotated datasets.", "Recently SVMs using spatial pyramid matching (SPM) kernel have been highly successful in image classification. Despite its popularity, these nonlinear SVMs have a complexity O(n2 n3) in training and O(n) in testing, where n is the training size, implying that it is nontrivial to scaleup the algorithms to handle more than thousands of training images. In this paper we develop an extension of the SPM method, by generalizing vector quantization to sparse coding followed by multi-scale spatial max pooling, and propose a linear SPM kernel based on SIFT sparse codes. This new approach remarkably reduces the complexity of SVMs to O(n) in training and a constant in testing. In a number of image categorization experiments, we find that, in terms of classification accuracy, the suggested linear SPM based on sparse coding of SIFT descriptors always significantly outperforms the linear SPM kernel on histograms, and is even better than the nonlinear SPM kernels, leading to state-of-the-art performance on several benchmarks by using a single type of descriptors.", "This paper presents a method for recognizing scene categories based on approximate global geometric correspondence. This technique works by partitioning the image into increasingly fine sub-regions and computing histograms of local features found inside each sub-region. The resulting \"spatial pyramid\" is a simple and computationally efficient extension of an orderless bag-of-features image representation, and it shows significantly improved performance on challenging scene categorization tasks. Specifically, our proposed method exceeds the state of the art on the Caltech-101 database and achieves high accuracy on a large database of fifteen natural scene categories. 
The spatial pyramid framework also offers insights into the success of several recently proposed image descriptions, including Torralba’s \"gist\" and Lowe’s SIFT descriptors.", "We study the question of feature sets for robust visual object recognition; adopting linear SVM based human detection as a test case. After reviewing existing edge and gradient based descriptors, we show experimentally that grids of histograms of oriented gradient (HOG) descriptors significantly outperform existing feature sets for human detection. We study the influence of each stage of the computation on performance, concluding that fine-scale gradients, fine orientation binning, relatively coarse spatial binning, and high-quality local contrast normalization in overlapping descriptor blocks are all important for good results. The new approach gives near-perfect separation on the original MIT pedestrian database, so we introduce a more challenging dataset containing over 1800 annotated human images with a large range of pose variations and backgrounds.", "", "We present a novel method for generic visual categorization: the problem of identifying the object content of natural images while generalizing across variations inherent to the object class. This bag of keypoints method is based on vector quantization of affine invariant descriptors of image patches. We propose and compare two alternative implementations using different classifiers: Naive Bayes and SVM. The main advantages of the method are that it is simple, computationally efficient and intrinsically invariant. We present results for simultaneously classifying seven semantic visual categories. These results clearly demonstrate that the method is robust to background clutter and produces good categorization accuracy even without exploiting geometric information.", "Recent coding-based image classification systems generally adopt a key step of spatial pooling operation, which characterizes the statistics of patch-level local feature codes over the regions of interest (ROI), to form the image-level representation for classification. In this paper, we present a hierarchical ROI dictionary for spatial pooling, to beyond the widely used spatial pyramid in image classification literature. By utilizing the compositionality among ROIs, it captures rich spatial statistics via an efficient pooling algorithm in deep hierarchy. On this basis, we further employ partial least squares analysis to learn a more compact and discriminative image representation. The experimental results demonstrate superiority of the proposed hierarchical pooling method relative to spatial pyramid, on three benchmark datasets for image classification.", "", "", "", "", "This paper presents a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scale and rotation, and are shown to provide robust matching across a substantial range of affine distortion, change in 3D viewpoint, addition of noise, and change in illumination. The features are highly distinctive, in the sense that a single feature can be correctly matched with high probability against a large database of features from many images. This paper also describes an approach to using these features for object recognition. 
The recognition proceeds by matching individual features to a database of features from known objects using a fast nearest-neighbor algorithm, followed by a Hough transform to identify clusters belonging to a single object, and finally performing verification through least-squares solution for consistent pose parameters. This approach to recognition can robustly identify objects among clutter and occlusion while achieving near real-time performance.", "", "" ] }
1605.00052
2950725226
An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons are still below satisfaction, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
The Convolutional Neural Network (CNN) serves as a hierarchical model for large-scale visual recognition. It is based on the observation that a network with enough neurons is able to fit any complicated data distribution. In past years, neural networks were shown to be effective for simple recognition tasks @cite_1 . More recently, the availability of large-scale training data (e.g., ImageNet @cite_38 ) and powerful GPUs has made it possible to train deep CNNs @cite_43 which significantly outperform BoVW models. A CNN is composed of several stacked layers, in each of which responses from the previous layer are convolved and then activated by a differentiable function. Hence, a CNN can be considered a composite function, and is trained by back-propagating error signals defined by the difference between the ground-truth and predicted labels at the top level. Recently, efficient methods were proposed to help CNNs converge faster @cite_43 and to prevent over-fitting @cite_10 @cite_7 @cite_27 . It is believed that deeper networks produce better recognition results @cite_18 @cite_39 .
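The composite-function view can be illustrated with a deliberately tiny network. The sketch below uses two fully connected layers with a ReLU and a squared-error loss purely to make the forward composition and the back-propagated error signal explicit; it is not the architecture of any network discussed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-layer network y = W2 * relu(W1 * x): a composite function f2(f1(x)).
W1 = rng.normal(scale=0.1, size=(32, 64))
W2 = rng.normal(scale=0.1, size=(10, 32))

def forward(x):
    h_pre = W1 @ x                 # layer 1: linear response
    h = np.maximum(h_pre, 0.0)     # layer 1: ReLU activation
    y = W2 @ h                     # layer 2: linear response
    return h_pre, h, y

def backward_step(x, target, lr=0.01):
    """One SGD step, back-propagating the error signal through both layers."""
    global W1, W2
    h_pre, h, y = forward(x)
    dy = y - target                        # gradient of 0.5 * ||y - target||^2
    dW2 = np.outer(dy, h)
    dh = W2.T @ dy
    dh_pre = dh * (h_pre > 0)              # chain rule through the ReLU
    dW1 = np.outer(dh_pre, x)
    W2 -= lr * dW2
    W1 -= lr * dW1
    return 0.5 * float(dy @ dy)            # pre-update loss, for monitoring

x, t = rng.normal(size=64), rng.normal(size=10)
losses = [backward_step(x, t) for _ in range(5)]   # loss should typically decrease
```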
{ "cite_N": [ "@cite_38", "@cite_18", "@cite_7", "@cite_1", "@cite_39", "@cite_43", "@cite_27", "@cite_10" ], "mid": [ "2108598243", "1686810756", "2949117887", "2154579312", "", "", "", "1904365287" ], "abstract": [ "The explosion of image data on the Internet has the potential to foster more sophisticated and robust models and algorithms to index, retrieve, organize and interact with images and multimedia data. But exactly how such data can be harnessed and organized remains a critical problem. We introduce here a new database called “ImageNet”, a large-scale ontology of images built upon the backbone of the WordNet structure. ImageNet aims to populate the majority of the 80,000 synsets of WordNet with an average of 500-1000 clean and full resolution images. This will result in tens of millions of annotated images organized by the semantic hierarchy of WordNet. This paper offers a detailed analysis of ImageNet in its current state: 12 subtrees with 5247 synsets and 3.2 million images in total. We show that ImageNet is much larger in scale and diversity and much more accurate than the current image datasets. Constructing such a large-scale database is a challenging task. We describe the data collection scheme with Amazon Mechanical Turk. Lastly, we illustrate the usefulness of ImageNet through three simple applications in object recognition, image classification and automatic object clustering. We hope that the scale, accuracy, diversity and hierarchical structure of ImageNet can offer unparalleled opportunities to researchers in the computer vision community and beyond.", "In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.", "Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. 
Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.9 top-5 validation error (and 4.8 test error), exceeding the accuracy of human raters.", "We present an application of back-propagation networks to handwritten digit recognition. Minimal preprocessing of the data was required, but architecture of the network was highly constrained and specifically designed for the task. The input of the network consists of normalized images of isolated digits. The method has 1 error rate and about a 9 reject rate on zipcode digits provided by the U.S. Postal Service.", "", "", "", "When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This \"overfitting\" is greatly reduced by randomly omitting half of the feature detectors on each training case. This prevents complex co-adaptations in which a feature detector is only helpful in the context of several other specific feature detectors. Instead, each neuron learns to detect a feature that is generally helpful for producing the correct answer given the combinatorially large variety of internal contexts in which it must operate. Random \"dropout\" gives big improvements on many benchmark tasks and sets new records for speech and object recognition." ] }
1605.00052
2950725226
An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons are still below satisfaction, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
Visualization is an effective method of understanding CNNs. In @cite_50 , a de-convolutional operation was designed to capture visual patterns on different layers of a pre-trained network. @cite_20 and @cite_33 show that different sets of neurons are activated when a network is used for detecting different visual concepts. The above works are based on a supervised signal at the output layer. In this paper, we define an unsupervised probabilistic distribution function on the high-level neuron responses, and back-propagate it to obtain the activeness of low-level neurons. Neuron activeness can also be visualized as spatial weighting maps. Computing neuron activeness involves finding the relevant content on each network layer @cite_14 @cite_4 , and is related to recovering low-level details from high-level visual context @cite_23 .
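To indicate what "back-propagating an unsupervised function of high-level responses" might look like mechanically, the sketch below defines a placeholder score (the squared norm of the top-layer responses, which is an assumption made here for illustration, not the function used by InterActive), pushes its gradient back through a tiny two-layer network, and uses the magnitudes of the resulting low-level signal as spatial-style weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(32, 64))   # low-level  -> mid-level
W2 = rng.normal(scale=0.1, size=(16, 32))   # mid-level  -> high-level

x = rng.normal(size=64)                     # stand-in for low-level features
h = np.maximum(W1 @ x, 0.0)                 # mid-level responses
z = np.maximum(W2 @ h, 0.0)                 # high-level responses

# Placeholder unsupervised score on the top layer (assumed: S = 0.5 * ||z||^2).
# dS/dz = z, which is then pushed back through the network by the chain rule.
dz = z
dh = (W2.T @ dz) * (h > 0)                  # activeness signal at the mid level
dx = W1.T @ dh                              # activeness signal at the low level

# Activeness-weighted low-level feature: responses re-weighted by |dx|.
weights = np.abs(dx) / (np.abs(dx).sum() + 1e-12)
weighted_feature = weights * x
```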
{ "cite_N": [ "@cite_14", "@cite_4", "@cite_33", "@cite_50", "@cite_23", "@cite_20" ], "mid": [ "2951527505", "2953022181", "", "2952186574", "1903029394", "2962851944" ], "abstract": [ "Applying convolutional neural networks to large images is computationally expensive because the amount of computation scales linearly with the number of image pixels. We present a novel recurrent neural network model that is capable of extracting information from an image or video by adaptively selecting a sequence of regions or locations and only processing the selected regions at high resolution. Like convolutional neural networks, the proposed model has a degree of translation invariance built-in, but the amount of computation it performs can be controlled independently of the input image size. While the model is non-differentiable, it can be trained using reinforcement learning methods to learn task-specific policies. We evaluate our model on several image classification tasks, where it significantly outperforms a convolutional neural network baseline on cluttered images, and on a dynamic visual control problem, where it learns to track a simple object without an explicit training signal for doing so.", "Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks in- cluding machine translation, handwriting synthesis and image caption gen- eration. We extend the attention-mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation in reaches a competitive 18.7 phoneme error rate (PER) on the TIMIT phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18 PER in single utterances and 20 in 10-times longer (repeated) utterances. Finally, we propose a change to the at- tention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to 17.6 level.", "", "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets.", "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, exceed the state-of-the-art in semantic segmentation. Our key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. 
We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet [20], the VGG net [31], and GoogLeNet [32]) into fully convolutional networks and transfer their learned representations by fine-tuning [3] to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional network achieves state-of-the-art segmentation of PASCAL VOC (20 relative improvement to 62.2 mean IU on 2012), NYUDv2, and SIFT Flow, while inference takes less than one fifth of a second for a typical image.", "This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [5], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [13]." ] }
1605.00052
2950725226
An increasing number of computer vision tasks can be tackled with deep features, which are the intermediate outputs of a pre-trained Convolutional Neural Network. Despite the astonishing performance, deep features extracted from low-level neurons are still below satisfaction, arguably because they cannot access the spatial context contained in the higher layers. In this paper, we present InterActive, a novel algorithm which computes the activeness of neurons and network connections. Activeness is propagated through a neural network in a top-down manner, carrying high-level context and improving the descriptive power of low-level and mid-level neurons. Visualization indicates that neuron activeness can be interpreted as spatial-weighted neuron responses. We achieve state-of-the-art classification performance on a wide range of image datasets.
Although our method and @cite_50 share similar ideas, they are quite different. We focus on generating better image description, while @cite_50 focuses on visualizing the network; we can visualize back-propagated neuron activeness, while @cite_50 visualizes neuron responses; we back-propagate the activeness of all neurons, while @cite_50 only chooses the neuron with maximal response; our method is unsupervised, while @cite_50 is supervised (by "guessing" the label). Being unsupervised, InterActive can be generalized to many more classification problems with a different set of image classes.
{ "cite_N": [ "@cite_50" ], "mid": [ "2952186574" ], "abstract": [ "Large Convolutional Network models have recently demonstrated impressive classification performance on the ImageNet benchmark. However there is no clear understanding of why they perform so well, or how they might be improved. In this paper we address both issues. We introduce a novel visualization technique that gives insight into the function of intermediate feature layers and the operation of the classifier. We also perform an ablation study to discover the performance contribution from different model layers. This enables us to find model architectures that outperform Krizhevsky al on the ImageNet classification benchmark. We show our ImageNet model generalizes well to other datasets: when the softmax classifier is retrained, it convincingly beats the current state-of-the-art results on Caltech-101 and Caltech-256 datasets." ] }
1605.00366
2345337169
This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.
A large number of methods designed to reduce compression artifacts exist, ranging from relatively simple and fast hand-designed filters, to fully probabilistic image restoration methods with complex priors @cite_19 , to methods which rely on advanced machine learning approaches @cite_21 .
{ "cite_N": [ "@cite_19", "@cite_21" ], "mid": [ "2128026399", "54257720" ], "abstract": [ "The JPEG standard is one of the most prevalent image compression schemes in use today. While JPEG was designed for use with natural images, it is also widely used for the encoding of raster documents. Unfortunately, JPEG's characteristic blocking and ringing artifacts can severely degrade the quality of text and graphics in complex documents. We propose a JPEG decompression algorithm which is designed to produce substantially higher quality images from the same standard JPEG encodings. The method works by incorporating a document image model into the decoding process which accounts for the wide variety of content in modern complex color documents. The method works by first segmenting the JPEG encoded document into regions corresponding to background, text, and picture content. The regions corresponding to text and background are then decoded using maximum a posteriori (MAP) estimation. Most importantly, the MAP reconstruction of the text regions uses a model which accounts for the spatial characteristics of text and graphics. Our experimental comparisons to the baseline JPEG decoding as well as to three other decoding schemes, demonstrate that our method substantially improves the quality of decoded images, both visually and as measured by PSNR.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage." ] }
1605.00366
2345337169
This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.
This work focuses on the application of convolutional networks to the reconstruction of images corrupted by JPEG compression artifacts. Convolutional networks belong to an extensively studied domain of deep learning @cite_33 . Recent results in several machine learning tasks show that deep architectures are able to learn the high-level abstractions necessary for a wide range of vision tasks including face recognition @cite_18 , object detection @cite_10 , scene classification @cite_6 , pose estimation @cite_12 , image captioning @cite_7 , and various image restoration tasks @cite_21 @cite_32 @cite_34 @cite_25 @cite_17 @cite_14 @cite_13 @cite_30 @cite_23 . Today, convolutional network based approaches show state-of-the-art results in many computer vision fields.
{ "cite_N": [ "@cite_13", "@cite_30", "@cite_18", "@cite_14", "@cite_33", "@cite_7", "@cite_21", "@cite_32", "@cite_6", "@cite_23", "@cite_34", "@cite_10", "@cite_25", "@cite_12", "@cite_17" ], "mid": [ "2319561215", "2280335824", "2145287260", "2124964692", "", "2951912364", "54257720", "2951997238", "", "1457323852", "2098477387", "2102605133", "2154815154", "2113325037", "1973567017" ], "abstract": [ "In this work we address the problem of blind deconvolution and denoising. We focus on restoration of text documents and we show that this type of highly structured data can be successfully restored by a convolutional neural network. The networks are trained to reconstruct high-quality images directly from blurry inputs without assuming any specific blur and noise models. We demonstrate the performance of the convolutional networks on a large set of text documents and on a combination of realistic de-focus and camera shake blur kernels. On this artificial data, the convolutional networks significantly outperform existing blind deconvolution methods, including those optimized for text, in terms of image quality and OCR accuracy. In fact, the networks outperform even state-of-the-art non-blind methods for anything but the lowest noise levels. The approach is validated on real photos taken by various devices.", "In this work we explore the previously proposed approach of direct blind deconvolution and denoising with convolutional neural networks (CNN) in a situation where the blur kernels are partially constrained. We focus on blurred images from a real-life traffic surveillance system, on which we, for the first time, demonstrate that neural networks trained on artificial data provide superior reconstruction quality on real images compared to traditional blind deconvolution methods. The training data is easy to obtain by blurring sharp photos from a target system with a very rough approximation of the expected blur kernels, thereby allowing custom CNNs to be trained for a specific application (image content and blur range). Additionally, we evaluate the behavior and limits of the CNNs with respect to blur direction range and length.", "In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35 on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27 , closely approaching human-level performance.", "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. 
Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.", "", "Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. In this paper, we present a generative model based on a deep recurrent architecture that combines recent advances in computer vision and machine translation and that can be used to generate natural sentences describing an image. The model is trained to maximize the likelihood of the target description sentence given the training image. Experiments on several datasets show the accuracy of the model and the fluency of the language it learns solely from image descriptions. Our model is often quite accurate, which we verify both qualitatively and quantitatively. For instance, while the current state-of-the-art BLEU-1 score (the higher the better) on the Pascal dataset is 25, our approach yields 59, to be compared to human performance around 69. We also show BLEU-1 score improvements on Flickr30k, from 56 to 66, and on SBU, from 19 to 28. Lastly, on the newly released COCO dataset, we achieve a BLEU-4 of 27.7, which is the current state-of-the-art.", "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification simonyan2015very . We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN dong2015image ) enabled by adjustable gradient clipping. 
Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable.", "", "We describe a learning-based approach to blind image deconvolution. It uses a deep layered architecture, parts of which are borrowed from recent work on neural network learning, and parts of which incorporate computations that are specific to image deconvolution. The system is trained end-to-end on a set of artificially generated training examples, enabling competitive performance in blind deconvolution, both with respect to quality and runtime.", "We present an approach to low-level vision that combines two main ideas: the use of convolutional networks as an image processing architecture and an unsupervised learning procedure that synthesizes training samples from specific noise models. We demonstrate this approach on the challenging problem of natural image denoising. Using a test set with a hundred natural images, we find that convolutional networks provide comparable and in some cases superior performance to state of the art wavelet and Markov random field (MRF) methods. Moreover, we find that a convolutional network offers similar performance in the blind de-noising setting as compared to other techniques in the non-blind setting. We also show how convolutional networks are mathematically related to MRF approaches by presenting a mean field theory for an MRF specially designed for image denoising. Although these approaches are related, convolutional networks avoid computational difficulties in MRF approaches that arise from probabilistic learning and inference. This makes it possible to learn image processing architectures that have a high degree of representational power (we train models with over 15,000 parameters), but whose computational expense is significantly less than that associated with inference in MRF approaches with even hundreds of parameters.", "Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30 relative to the previous best result on VOC 2012 -- achieving a mAP of 53.3 . Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http: www.cs.berkeley.edu rbg rcnn.", "Photographs taken through a window are often compromised by dirt or rain present on the window surface. Common cases of this include pictures taken from inside a vehicle, or outdoor security cameras mounted inside a protective enclosure. At capture time, defocus can be used to remove the artifacts, but this relies on achieving a shallow depth-of-field and placement of the camera close to the window. 
Instead, we present a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image. We collect a dataset of clean corrupted image pairs which are then used to train a specialized form of convolutional neural network. This learns how to map corrupted image patches to clean ones, implicitly capturing the characteristic appearance of dirt and water droplets in natural images. Our models demonstrate effective removal of dirt and rain in outdoor test conditions.", "We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regres- sors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formula- tion which capitalizes on recent advances in Deep Learn- ing. We present a detailed empirical analysis with state-of- art or better performance on four academic benchmarks of diverse real-world images.", "Image deconvolution is the ill-posed problem of recovering a sharp image, given a blurry one generated by a convolution. In this work, we deal with space-invariant non-blind deconvolution. Currently, the most successful methods involve a regularized inversion of the blur in Fourier domain as a first step. This step amplifies and colors the noise, and corrupts the image information. In a second (and arguably more difficult) step, one then needs to remove the colored noise, typically using a cleverly engineered algorithm. However, the methods based on this two-step approach do not properly address the fact that the image information has been corrupted. In this work, we also rely on a two-step procedure, but learn the second step on a large dataset of natural images, using a neural network. We will show that this approach outperforms the current state-of-the-art on a large dataset of artificially blurred images. We demonstrate the practical applicability of our method in a real-world example with photographic out-of-focus blur." ] }
1605.00366
2345337169
This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.
Small networks were historically used for image denoising and other tasks; deep and large fully convolutional networks have only recently become important in this field. Burger @cite_11 used a feed-forward three-layer neural network for image denoising. While there had been earlier attempts to use neural networks for denoising, Burger showed that this approach can produce state-of-the-art results when trained on a sufficiently large dataset.
{ "cite_N": [ "@cite_11" ], "mid": [ "2037642501" ], "abstract": [ "Image denoising can be described as the problem of mapping from a noisy image to a noise-free image. The best currently available denoising methods approximate this mapping with cleverly engineered algorithms. In this work we attempt to learn this mapping directly with a plain multi layer perceptron (MLP) applied to image patches. While this has been done before, we will show that by training on large image databases we are able to compete with the current state-of-the-art image denoising methods. Furthermore, our approach is easily adapted to less extensively studied types of noise (by merely exchanging the training data), for which we achieve excellent results as well." ] }
1605.00366
2345337169
This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.
Dong @cite_21 introduced the super-resolution convolutional neural network (SRCNN) to deal with the ill-posed problem of super-resolution. The SRCNN is designed according to classical sparse coding methods -- its three layers consist of a feature extraction layer, a high-dimensional mapping layer, and a final reconstruction layer. The very deep CNN-based super-resolution method proposed by Kim @cite_32 builds on the work of Dong @cite_21 and shows that deep networks for super-resolution can be trained when proper guidelines are followed. The networks are initialized properly and use so-called residual learning, in which the network predicts how the input image should be changed instead of predicting the desired image directly. Residual learning appears to be very important in super-resolution. The resulting 20-layer deep networks trained with adjustable gradient clipping significantly outperform previous approaches. However, it is unclear how effective residual learning would be in other image processing tasks where the network's inputs and outputs are not as strongly correlated as in super-resolution. We follow this approach in our work on JPEG reconstruction.
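The residual-learning idea described above can be made concrete with a short sketch; the following assumes PyTorch, and the depth, width, and single-channel input are toy choices rather than the actual VDSR or JPEG-restoration architectures. The network predicts only the correction that is added back to its degraded input.

```python
# Hedged sketch of residual learning for image restoration.
import torch
import torch.nn as nn

class ResidualRestorer(nn.Module):
    def __init__(self, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(1, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, degraded):
        residual = self.body(degraded)   # the network only learns the difference
        return degraded + residual       # skip connection adds it back to the input

# The loss is computed against the clean image, so the residual is supervised implicitly.
model = ResidualRestorer()
degraded, clean = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
loss = nn.MSELoss()(model(degraded), clean)
```

One common intuition is that the residual is close to zero over most of the image, so the mapping the network has to learn is simpler than regressing the full restored image.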
{ "cite_N": [ "@cite_21", "@cite_32" ], "mid": [ "54257720", "2951997238" ], "abstract": [ "We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.", "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification simonyan2015very . We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN dong2015image ) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable." ] }
1605.00366
2345337169
This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.
Convolutional networks have previously been used for suppressing compression artifacts by Dong @cite_15 , who proposed a compact and efficient CNN based on SRCNN -- the artifacts removing convolutional network (AR-CNN). AR-CNN extends the original architecture of SRCNN with feature enhancement layers. The network training consists of two stages -- a shallow network is trained first and used as an initialization for the final 4-layer CNN. As reported in the paper, this two-stage approach improved results due to training difficulties encountered when training the full 4-layer network from scratch. The authors also state that they aim to achieve feature enhancement instead of just making the CNN deeper. They argue that although the deeper SRCNN introduces a better regressor between the low-level features and the reconstruction, the bottleneck lies in the features. Thus the extraction layer is augmented by the enhancement layer, which together may provide a better feature extractor.
{ "cite_N": [ "@cite_15" ], "mid": [ "2142683286" ], "abstract": [ "Lossy compression introduces complex compression artifacts, particularly the blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restores sharpened images that are accompanied with ringing effects. Inspired by the deep convolutional networks (DCN) on super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar \"easy to hard\" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low level vision problems. Our method shows superior performance than the state-of-the-arts both on the benchmark datasets and the real-world use cases (i.e. Twitter)." ] }
1605.00366
2345337169
This paper shows that it is possible to train large and deep convolutional neural networks (CNN) for JPEG compression artifacts reduction, and that such networks can provide significantly better reconstruction quality compared to previously used smaller networks as well as to any other state-of-the-art methods. We were able to train networks with 8 layers in a single step and in relatively short time by combining residual learning, skip architecture, and symmetric weight initialization. We provide further insights into convolution networks for JPEG artifact reduction by evaluating three different objectives, generalization with respect to training dataset size, and generalization with respect to JPEG quality level.
We adapt the idea of residual learning @cite_32 to CNN-based JPEG compression artifact removal. We follow the assumption that "deeper is better" and train our deep residual CNNs in a single step using a new recipe that covers initialization, network architecture, and high learning rates. The resulting networks significantly outperform the classical JPEG compression artifact removal methods, as well as the AR-CNN @cite_15 , on a common dataset in terms of PSNR, the specialized deblocking assessment measure PSNR-B, and SSIM.
{ "cite_N": [ "@cite_15", "@cite_32" ], "mid": [ "2142683286", "2951997238" ], "abstract": [ "Lossy compression introduces complex compression artifacts, particularly the blocking artifacts, ringing effects and blurring. Existing algorithms either focus on removing blocking artifacts and produce blurred output, or restores sharpened images that are accompanied with ringing effects. Inspired by the deep convolutional networks (DCN) on super-resolution, we formulate a compact and efficient network for seamless attenuation of different compression artifacts. We also demonstrate that a deeper model can be effectively trained with the features learned in a shallow network. Following a similar \"easy to hard\" idea, we systematically investigate several practical transfer settings and show the effectiveness of transfer learning in low level vision problems. Our method shows superior performance than the state-of-the-arts both on the benchmark datasets and the real-world use cases (i.e. Twitter).", "We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by VGG-net used for ImageNet classification simonyan2015very . We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, however, convergence speed becomes a critical issue during training. We propose a simple yet effective training procedure. We learn residuals only and use extremely high learning rates ( @math times higher than SRCNN dong2015image ) enabled by adjustable gradient clipping. Our proposed method performs better than existing methods in accuracy and visual improvements in our results are easily noticeable." ] }
1605.00354
2346234316
We present a self-contained, soft robotic hand composed of soft pneumatic actuator modules that are equipped with strain and pressure sensing. We show how this data can be used to discern whether a grasp was successful. Colocating sensing and embedded computation with the actuators greatly simplifies control and system integration. Equipped with a small pump, the hand is self-contained and needs only power and data supplied by a single USB connection to a PC. We demonstrate its function by grasping a variety of objects ranging from very small to large and heavy objects weighing more than the hand itself. The presented system nicely illustrates the advantages of soft robotics: low cost, low weight, and intrinsic compliance. We exploit morphological computation to simplify control, which allows successful grasping via underactuation. Grasping indeed relies on morphological computation at multiple levels, ranging from the geometry of the actuator which determines the actuator’s kinematics, embedded strain sensors to measure curvature, to maximizing contact area and applied force during grasping. Morphological computation reaches its limitations, however, when objects are too bulky to self-align with the gripper or when the state of grasping is of interest. We therefore argue that efficient and reliable grasping also requires not only intrinsic compliance, but also embedded sensing and computation. In particular, we show how embedded sensing can be used to detect successful grasps and vary the force exerted onto an object based on local feedback, which is not possible using morphological computation alone.
Morphological computation @cite_19 refers to the role of shape and physical first principles in mechanism design to simplify the required computation for signal processing and control. As materials can be designed to exhibit specific non-linear responses that can be combined with each other, morphological computation is theoretically capable of universal computation @cite_12 . In this paper, we refer to morphological computation as the use of form and elasticity to simplify controller design.
{ "cite_N": [ "@cite_19", "@cite_12" ], "mid": [ "2038873597", "2151471214" ], "abstract": [ "Traditionally, in robotics, artificial intelligence and neuroscience, there has been a focus on the study of the control or the neural system itself. Recently there has been an increasing interest in the notion of embodiment not only in robotics and artificial intelligence, but also in the neurosciences, psychology and philosophy. In this paper, we introduce the notion of morphological computation, and demonstrate how it can be exploited on the one hand for designing intelligent, adaptive robotic systems, and on the other hand for understanding natural systems. While embodiment has often been used in its trivial meaning, i.e. bintelligence requires a bodyQ, the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. Morphological computation is about connecting body, brain and environment. A number of case studies are presented to illustrate the concept. We conclude with some speculations about potential lessons for neuroscience and robotics. D 2006 Elsevier B.V. All rights reserved.", "The control of compliant robots is, due to their often nonlinear and complex dynamics, inherently difficult. The vision of morphological computation proposes to view these aspects not only as problems, but rather also as parts of the solution. Non-rigid body parts are not seen anymore as imperfect realizations of rigid body parts, but rather as potential computational resources. The applicability of this vision has already been demonstrated for a variety of complex robot control problems. Nevertheless, a theoretical basis for understanding the capabilities and limitations of morphological computation has been missing so far. We present a model for morphological computation with compliant bodies, where a precise mathematical characterization of the potential computational contribution of a complex physical body is feasible. The theory suggests that complexity and nonlinearity, typically unwanted properties of robots, are desired features in order to provide computational power. We demonstrate that simple generic models of physical bodies, based on mass-spring systems, can be used to implement complex nonlinear operators. By adding a simple readout (which is static and linear) to the morphology such devices are able to emulate complex mappings of input to output streams in continuous time. Hence, by outsourcing parts of the computation to the physical body, the difficult problem of learning to control a complex body, could be reduced to a simple and perspicuous learning task, which can not get stuck in local minima of an error function." ] }
1605.00358
2345738554
We present a formal approach that exploits attacks related to SQL Injection (SQLi) searching for security flaws in a web application. We give a formal representation of web applications and databases, and show that our formalization effectively exploits SQLi attacks. We implemented our approach in a prototype tool called SQLfast and we show its efficiency on real-world case studies, including the discovery of an attack on Joomla! that no other tool can find.
In @cite_5 , the authors describe the "Chained Attacks" approach, which considers multiple attacks to compromise a web app. The idea is close to ours, but: (i) they consider a new kind of web intruder, whereas we stick with the DY intruder; (ii) we analyzed the most common techniques and proposed a formalization of a vulnerable database, whereas they only consider the behavior of the web app.
{ "cite_N": [ "@cite_5" ], "mid": [ "2483259815" ], "abstract": [ "We present the Chained Attacks approach, an automated model-based approach to test the security of web applications that does not require a background in formal methods. Starting from a set of HTTP conversations and a configuration file providing the testing surface and purpose, a model of the System Under Test (SUT) is generated and input, along with the web attacker model we defined, to a model checker acting as test oracle. The HTTP conversations, payload libraries, and a mapping created while generating the model aid the concretization of the test cases, allowing for their execution on the SUT's implementation. We applied our approach to a real-life case study and we were able to find a combination of different attacks representing the concrete chained attack performed by a bug bounty hunter." ] }
1605.00064
2345668077
In this paper, we study novel neural network structures to better model long term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8 data sets. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs.
The hierarchical recurrent neural network proposed in @cite_27 is one of the earliest attempts to improve RNNs to capture long term dependency in a better way. It proposes to add linear time-delayed connections to RNNs to help the gradient descent learning algorithm find a better solution, thereby addressing the vanishing gradient problem. However, in this early work, the idea of multi-resolution recurrent architectures was only preliminarily examined on simple, small-scale tasks. This work is relevant to our work in this paper, but the higher order RNNs proposed here differ in several aspects. Firstly, we propose to use weighted connections in the structure, instead of simple multi-resolution short-cut paths. This makes our models fall into the category of higher order models. Secondly, we have proposed to use various pooling functions in generating the feedback signals, which is critical in normalizing the dynamic ranges of gradients flowing from various paths. Our experiments have shown that the success of our models is largely attributed to this technique.
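To illustrate the kind of weighted higher-order feedback discussed here, the following NumPy sketch is a rough approximation; the parameter names, the tanh pooling, and the toy dimensions are assumptions for exposition, not the exact HORNN formulation. Each step receives the last K hidden states through separate weighted paths, with a pooling function normalizing each feedback signal.

```python
# Hedged sketch of a higher-order recurrent step with pooled feedback paths.
import numpy as np

def hornn_step(x_t, prev_states, W_in, U, pool=np.tanh):
    """x_t: input vector; prev_states: last K hidden states (most recent first);
    U: list of K recurrent weight matrices, one per feedback path."""
    pre = W_in @ x_t
    for U_k, h_k in zip(U, prev_states):
        pre += U_k @ pool(h_k)           # pooled feedback from state t-k
    return np.tanh(pre)

rng = np.random.default_rng(0)
K, d_in, d_h = 3, 5, 8                   # toy sizes: a 3rd-order model
W_in = 0.1 * rng.standard_normal((d_h, d_in))
U = [0.1 * rng.standard_normal((d_h, d_h)) for _ in range(K)]
states = [np.zeros(d_h) for _ in range(K)]
for t in range(10):
    h_t = hornn_step(rng.standard_normal(d_in), states, W_in, U)
    states = [h_t] + states[:-1]         # shift the short-term memory window
```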
{ "cite_N": [ "@cite_27" ], "mid": [ "2099257174" ], "abstract": [ "We have already shown that extracting long-term dependencies from sequential data is difficult, both for determimstic dynamical systems such as recurrent networks, and probabilistic models such as hidden Markov models (HMMs) or input output hidden Markov models (IOHMMs). In practice, to avoid this problem, researchers have used domain specific a-priori knowledge to give meaning to the hidden or state variables representing past context. In this paper, we propose to use a more general type of a-priori knowledge, namely that the temporal dependencies are structured hierarchically. This implies that long-term dependencies are represented by variables with a long time scale. This principle is applied to a recurrent network which includes delays and multiple time scales. Experiments confirm the advantages of such structures. A similar approach is proposed for HMMs and IOHMMs." ] }
1605.00064
2345668077
In this paper, we study novel neural network structures to better model long term dependency in sequential data. We propose to use more memory units to keep track of more preceding states in recurrent neural networks (RNNs), which are all recurrently fed to the hidden layers as feedback through different weighted paths. By extending the popular recurrent structure in RNNs, we provide the models with better short-term memory mechanism to learn long term dependency in sequences. Analogous to digital filters in signal processing, we call these structures as higher order RNNs (HORNNs). Similar to RNNs, HORNNs can also be learned using the back-propagation through time method. HORNNs are generally applicable to a variety of sequence modelling tasks. In this work, we have examined HORNNs for the language modeling task using two popular data sets, namely the Penn Treebank (PTB) and English text8 data sets. Experimental results have shown that the proposed HORNNs yield the state-of-the-art performance on both data sets, significantly outperforming the regular RNNs as well as the popular LSTMs.
The most successful approach so far to deal with vanishing gradients is the long short term memory (LSTM) model @cite_10 . LSTM relies on a fairly sophisticated structure made of gates to control the flow of information to the hidden neurons. The drawback of the LSTM is that it is complicated and slow to learn; the complexity of this model makes learning very time consuming and hard to scale to larger tasks. Another approach to address this issue is to add a hidden layer to RNNs @cite_21 . This layer is responsible for capturing longer term dependencies in input data by making its weight matrix close to identity. Recently, clock-work RNNs @cite_19 have been proposed to address this problem as well; they split each hidden layer into several modules running at different clocks. Each module receives signals from the input and computes its output at a predefined clock rate. Gated feedback recurrent neural networks @cite_12 implement a generalized version using gated feedback connections between layers of stacked RNNs, allowing the model to adaptively adjust the connections between consecutive hidden layers.
{ "cite_N": [ "@cite_19", "@cite_21", "@cite_10", "@cite_12" ], "mid": [ "2138660131", "2118776487", "", "2953061907" ], "abstract": [ "Sequence prediction and classification are ubiquitous and challenging problems in machine learning that can require identifying complex dependencies between temporally distant inputs. Recurrent Neural Networks (RNNs) have the ability, in theory, to cope with these temporal dependencies by virtue of the short-term memory implemented by their recurrent (feedback) connections. However, in practice they are difficult to train successfully when long-term memory is required. This paper introduces a simple, yet powerful modification to the simple RNN (SRN) architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate. Rather than making the standard RNN models more complex, CW-RNN reduces the number of SRN parameters, improves the performance significantly in the tasks tested, and speeds up the network evaluation. The network is demonstrated in preliminary experiments involving three tasks: audio signal generation, TIMIT spoken word classification, where it outperforms both SRN and LSTM networks, and online handwriting recognition, where it outperforms SRNs.", "Recurrent neural network is a powerful model that learns temporal patterns in sequential data. For a long time, it was believed that recurrent networks are difficult to train using simple optimizers, such as stochastic gradient descent, due to the so-called vanishing gradient problem. In this paper, we show that learning longer term patterns in real data, such as in natural language, is perfectly possible using gradient descent. This is achieved by using a slight structural modification of the simple recurrent neural network architecture. We encourage some of the hidden units to change their state slowly by making part of the recurrent weight matrix close to identity, thus forming kind of a longer term memory. We evaluate our model in language modeling experiments, where we obtain similar performance to the much more complex Long Short Term Memory (LSTM) networks (Hochreiter & Schmidhuber, 1997).", "", "In this work, we propose a novel recurrent neural network (RNN) architecture. The proposed RNN, gated-feedback RNN (GF-RNN), extends the existing approach of stacking multiple recurrent layers by allowing and controlling signals flowing from upper recurrent layers to lower layers using a global gating unit for each pair of layers. The recurrent signals exchanged between layers are gated adaptively based on the previous hidden states and the current input. We evaluated the proposed GF-RNN with different types of recurrent units, such as tanh, long short-term memory and gated recurrent units, on the tasks of character-level language modeling and Python program evaluation. Our empirical evaluation of different RNN units, revealed that in both tasks, the GF-RNN outperforms the conventional approaches to build deep stacked RNNs. We suggest that the improvement arises because the GF-RNN can adaptively assign different layers to different timescales and layer-to-layer interactions (including the top-down ones which are not usually present in a stacked RNN) by learning to gate these interactions." ] }
1605.00060
2345451964
Large-scale graph-structured data arising from social networks, databases, knowledge bases, web graphs, etc. is now available for analysis and mining. Graph-mining often involves 'relationship queries', which seek a ranked list of interesting interconnections among a given set of entities, corresponding to nodes in the graph. While relationship queries have been studied for many years, using various terminologies, e.g., keyword-search, Steiner-tree in a graph etc., the solutions proposed in the literature so far have not focused on scaling relationship queries to large graphs having billions of nodes and edges, such are now publicly available in the form of 'linked-open-data'. In this paper, we present an algorithm for distributed keyword search (DKS) on large graphs, based on the graph-parallel computing paradigm Pregel. We also present an analytical proof that our algorithm produces an optimally ranked list of answers if run to completion. Even if terminated early, our algorithm produces approximate answers along with bounds. We describe an optimized implementation of our DKS algorithm along with time-complexity analysis. Finally, we report and analyze experiments using an implementation of DKS on Giraph the graph-parallel computing framework based on Pregel, and demonstrate that we can efficiently process relationship queries on large-scale subsets of linked-open-data.
The Steiner Tree problem on graphs was surveyed in @cite_22 . According to this and other such surveys, most researchers have focused on heuristic solutions to this problem, such as the Shortest Path Heuristic, the Average Distance Heuristic, and the Distance Network Heuristic. Most of these have an approximation ratio @math ; here, @math is the ratio of the weight of the approximate answer found by an algorithm to the weight of the optimal answer. By and large the best solution was presented in @cite_3 with a 1.55-approximation guarantee. To the best of our understanding there has been no effort to restrict the search space of this problem, which is one of the primary contributions of our work. The authors of @cite_21 highlighted that, according to @cite_5 @cite_26 , the Steiner Tree problem is solvable with a bounded number of keywords, and they also present a heuristic approach. We also corroborate the same finding using a time-complexity analysis of our algorithm in .
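For concreteness, the following is a minimal, unoptimized sketch of the Shortest Path Heuristic mentioned above; it assumes NetworkX and is meant only to show the shape of the heuristic, not an efficient or distributed implementation. The tree is grown from one terminal by repeatedly attaching the remaining terminal that is cheapest to reach from the current tree.

```python
# Hedged sketch of the Shortest Path Heuristic for the Steiner Tree problem.
import networkx as nx

def shortest_path_heuristic(G, terminals, weight="weight"):
    terminals = list(terminals)
    tree_nodes, tree_edges = {terminals[0]}, set()
    remaining = set(terminals[1:]) - tree_nodes
    while remaining:
        best = None
        for t in remaining:
            # cheapest shortest path from the current tree to terminal t
            length, path = nx.multi_source_dijkstra(G, tree_nodes, target=t, weight=weight)
            if best is None or length < best[0]:
                best = (length, path)
        _, path = best
        tree_nodes.update(path)
        tree_edges.update(zip(path, path[1:]))
        remaining -= tree_nodes          # terminals absorbed along the path are done
    return G.edge_subgraph(tree_edges).copy()

G = nx.Graph()
G.add_weighted_edges_from([("a", "b", 1), ("b", "c", 1), ("a", "c", 3), ("c", "d", 2)])
steiner_tree = shortest_path_heuristic(G, ["a", "c", "d"])
```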
{ "cite_N": [ "@cite_26", "@cite_22", "@cite_21", "@cite_3", "@cite_5" ], "mid": [ "2125818471", "1968004989", "2107105629", "1990327548", "1996668544" ], "abstract": [ "We consider the Directed Steiner Network (DSN) problem, also called the Point-to-Point Connection problem, where given a directed graph G and p pairs (s sub 1 ,t sub 1 ), ..., (s sub p ,t sub p ) of nodes in the graph, one has to find the smallest subgraph H of G that contains paths from s sub i to t sub i for all i. The problem is NP-hard for general p, since the Directed Steiner Tree problem is a special case. Until now, the complexity was unknown for constant p spl ges 3. We prove that the problem is polynomially solvable if p is any constant number, even if nodes and edges in G are weighted and the goal is to minimize the total weight of the subgraph H. In addition, we give an efficient algorithm for the Strongly Connected Steiner Subgraph problem for any constant p, where given a directed graph and p nodes in the graph, one has to compute the smallest strongly connected subgraph containing the p nodes.", "Given a set of input points, the Steiner Tree Problem (STP) is to find a minimum-length tree that connects the input points, where it is possible to add new points to minimize the length of the tree. Solving the STP is of great importance since it is one of the fundamental problems in network design, very large scale integration routing, multicast routing, wire length estimation, computational biology, and many other areas. However, the STP is NP-hard, which shatters any hopes of finding a polynomial-time algorithm to solve the problem exactly. This is why the majority of research has looked at finding efficient heuristic algorithms. Additionally, many authors focused their work on utilizing the ever-increasing computational power and developed many parallel and distributed methods for solving the problem. In this way we are able to obtain better results in less time than ever before. Here, we present a survey of the parallel and distributed methods for solving the STP and discuss some of their applications.", "Various approaches for keyword proximity search have been implemented in relational databases, XML and the Web. Yet, in all of them, an answer is a Q-fragment, namely, a subtree T of the given data graph G, such that T contains all the keywords of the query Q and has no proper subtree with this property. The rank of an answer is inversely proportional to its weight. Three problems are of interest: finding an optimal (i.e., top-ranked) answer, computing the top-k answers and enumerating all the answers in ranked order. It is shown that, under data complexity, an efficient algorithm for solving the first problem is sufficient for solving the other two problems with polynomial delay. Similarly, an efficient algorithm for finding a θ-approximation of the optimal answer suffices for carrying out the following two tasks with polynomial delay, under query-and-data complexity. First, enumerating in a (θ+1)-approximate order. Second, computing a (θ+1)-approximation of the top-k answers. As a corollary, this paper gives the first efficient algorithms, under data complexity, for enumerating all the answers in ranked order and for computing the top-k answers. 
It also gives the first efficient algorithms, under query-and-data complexity, for enumerating in a provably approximate order and for computing an approximation of the top-k answers.", "", "An algorithm for solving the Steiner problem on a finite undirected graph is presented. This algorithm computes the set of graph arcs of minimum total length needed to connect a specified set of k graph nodes. If the entire graph contains n nodes, the algorithm requires time proportional to n3 2 + n2 (2k-1 - k - 1) + n(3k-1 - 2k + 3) 2. The time requirement above includes the term n3 2, which can be eliminated if the set of shortest paths connecting each pair of nodes in the graph is available." ] }
1605.00060
2345451964
Large-scale graph-structured data arising from social networks, databases, knowledge bases, web graphs, etc. is now available for analysis and mining. Graph-mining often involves 'relationship queries', which seek a ranked list of interesting interconnections among a given set of entities, corresponding to nodes in the graph. While relationship queries have been studied for many years, using various terminologies, e.g., keyword-search, Steiner-tree in a graph etc., the solutions proposed in the literature so far have not focused on scaling relationship queries to large graphs having billions of nodes and edges, such are now publicly available in the form of 'linked-open-data'. In this paper, we present an algorithm for distributed keyword search (DKS) on large graphs, based on the graph-parallel computing paradigm Pregel. We also present an analytical proof that our algorithm produces an optimally ranked list of answers if run to completion. Even if terminated early, our algorithm produces approximate answers along with bounds. We describe an optimized implementation of our DKS algorithm along with time-complexity analysis. Finally, we report and analyze experiments using an implementation of DKS on Giraph the graph-parallel computing framework based on Pregel, and demonstrate that we can efficiently process relationship queries on large-scale subsets of linked-open-data.
The Steiner Tree problem, or Group Steiner Tree problem, has been tackled in multiple domains, such as routing of network packets in computer networks, multiple applications in social networks @cite_13 @cite_28 , and identification of functional modules in protein networks @cite_0 . Most of these algorithms are either heuristic or apply a domain-specific constraint to solve the problem.
{ "cite_N": [ "@cite_28", "@cite_0", "@cite_13" ], "mid": [ "", "2171662214", "2145604831" ], "abstract": [ "", "Motivation: With the exponential growth of expression and protein–protein interaction (PPI) data, the frontier of research in systems biology shifts more and more to the integrated analysis of these large datasets. Of particular interest is the identification of functional modules in PPI networks, sharing common cellular function beyond the scope of classical pathways, by means of detecting differentially expressed regions in PPI networks. This requires on the one hand an adequate scoring of the nodes in the network to be identified and on the other hand the availability of an effective algorithm to find the maximally scoring network regions. Various heuristic approaches have been proposed in the literature. Results: Here we present the first exact solution for this problem, which is based on integer-linear programming and its connection to the well-known prize-collecting Steiner tree problem from Operations Research. Despite the NP-hardness of the underlying combinatorial problem, our method typically computes provably optimal subnetworks in large PPI networks in a few minutes. An essential ingredient of our approach is a scoring function defined on network nodes. We propose a new additive score with two desirable properties: (i) it is scalable by a statistically interpretable parameter and (ii) it allows a smooth integration of data from various sources. We apply our method to a well-established lymphoma microarray dataset in combination with associated survival data and the large interaction network of HPRD to identify functional modules by computing optimal-scoring subnetworks. In particular, we find a functional interaction module associated with proliferation over-expressed in the aggressive ABC subtype as well as modules derived from non-malignant by-stander cells. Availability: Our software is available freely for non-commercial purposes at http: www.planet-lisa.net. Contact: tobias.mueller@biozentrum.uni-wuerzburg.de", "Given a task T, a pool of individuals X with different skills, and a social network G that captures the compatibility among these individuals, we study the problem of finding X, a subset of X, to perform the task. We call this the T EAM F ORMATION problem. We require that members of X' not only meet the skill requirements of the task, but can also work effectively together as a team. We measure effectiveness using the communication cost incurred by the subgraph in G that only involves X'. We study two variants of the problem for two different communication-cost functions, and show that both variants are NP-hard. We explore their connections with existing combinatorial problems and give novel algorithms for their solution. To the best of our knowledge, this is the first work to consider the T EAM F ORMATION problem in the presence of a social network of individuals. Experiments on the DBLP dataset show that our framework works well in practice and gives useful and intuitive results." ] }
1604.08897
2343474118
Abstract Indexing highly repetitive collections has become a relevant problem with the emergence of large repositories of versioned documents, among other applications. These collections may reach huge sizes, but are formed mostly of documents that are near-copies of others. Traditional techniques for indexing these collections fail to properly exploit their regularities in order to reduce space. We introduce new techniques for compressing inverted indexes that exploit this near-copy regularity. They are based on run-length, Lempel–Ziv, or grammar compression of the differential inverted lists, instead of the usual practice of gap-encoding them. We show that, in this highly repetitive setting, our compression methods significantly reduce the space obtained with classical techniques, at the price of moderate slowdowns. Moreover, our best methods are universal, that is, they do not need to know the versioning structure of the collection, nor that a clear versioning structure even exists. We also introduce compressed self-indexes in the comparison. These are designed for general strings (not only natural language texts) and represent the text collection plus the index structure (not an inverted index) in integrated form. We show that these techniques can compress much further, using a small fraction of the space required by our new inverted indexes. Yet, they are orders of magnitude slower.
The most relevant previous work targeting highly repetitive collections of natural language text is by @cite_18 @cite_34 . They presented alternative compression methods for non-positional indexes on versioned collections. Their approach, called two-level indexing, merges all the versions of each document for creating the inverted lists. A secondary index stores, for each entry of the main inverted list, a bitmap indicating the versions of the document that contain the term. They convert previous "one-level" techniques @cite_17 @cite_45 into two-level methods, and also study methods for reordering the versions in order to improve compression.
{ "cite_N": [ "@cite_45", "@cite_18", "@cite_34", "@cite_17" ], "mid": [ "1556744446", "2061986359", "2022507549", "1969838114" ], "abstract": [ "Modern document collections often contain groups of documents with overlapping or shared content. However, most information retrieval systems process each document separately, causing shared content to be indexed multiple times. In this paper, we describe a new document representation model where related documents are organized as a tree, allowing shared content to be indexed just once. We show how this representation model can be encoded in an inverted index and we describe algorithms for evaluating free-text queries based on this encoding. We also show how our representation model applies to web, email, and newsgroup search. Finally, we present experimental results showing that our methods can provide a significant reduction in the size of an inverted index as well as in the time to build and query it.", "We study the problem of creating highly compressed full-text index structures for versioned document collections, that is, collections that contain multiple versions of each document. Important examples of such collections are Wikipedia or the web page archive maintained by the Internet Archive. A straightforward indexing approach would simply treat each document version as a separate document, such that index size scales linearly with the number of versions. However, several authors have recently studied approaches that exploit the significant similarities between different versions of the same document to obtain much smaller index sizes. In this paper, we propose new techniques for organizing and compressing inverted index structures for such collections. We also perform a detailed experimental comparison of new techniques and the existing techniques in the literature. Our results on an archive of the English version of Wikipedia, and on a subset of the Internet Archive collection, show significant benefits over previous approaches.", "Current Information Retrieval systems use inverted index structures for efficient query processing. Due to the extremely large size of many data sets, these index structures are usually kept in compressed form, and many techniques for optimizing compressed size and query processing speed have been proposed. In this paper, we focus on versioned document collections, that is, collections where each document is modified over time, resulting in multiple versions of the document. Consecutive versions of the same document are often similar, and several researchers have explored ideas for exploiting this similarity to decrease index size. We propose new index compression techniques for versioned document collections that achieve reductions in index size over previous methods. In particular, we first propose several bitwise compression techniques that achieve a compact index structure but that are too slow for most applications. Based on the lessons learned, we then propose additional techniques that come close to the sizes of the bitwise technique while also improving on the speed of the best previous methods.", "In this paper, we present an approach to the incorporation of object versioning into a distributed full-text information retrieval system. We propose an implementation based on “partially versioned” index sets, arguing that its space overhead and query-time performance make it suitable for full-text IR, with its heavy dependence on inverted indexing. 
We develop algorithms for computing both historical queries and time range queries and show how these algorithms can be applied to a number of problems in distributed information management, such as data replication, caching, transactional consistency, and hybrid media repositories." ] }
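The two-level indexing scheme summarized in this record (one posting per merged document in the main inverted list, plus a secondary bitmap recording which versions of that document contain the term) can be sketched in a few lines. The layout and names below are illustrative assumptions, not the exact encoding of the cited systems.

```python
from collections import defaultdict

def build_two_level_index(docs):
    """docs: dict doc_id -> list of versions, each version a list of terms.

    Returns dict term -> sorted list of (doc_id, bitmap), where bit i of the
    bitmap is set iff version i of the document contains the term. Each document
    contributes at most one posting per term, and the per-version detail is
    pushed into the secondary bitmap.
    """
    index = defaultdict(dict)
    for doc_id, versions in docs.items():
        for i, version in enumerate(versions):
            for term in set(version):
                index[term][doc_id] = index[term].get(doc_id, 0) | (1 << i)
    return {t: sorted(postings.items()) for t, postings in index.items()}

docs = {
    "D1": [["a", "b"], ["a", "b", "c"], ["a", "c"]],   # three versions of D1
    "D2": [["b", "c"], ["b", "c"]],                     # two versions of D2
}
index = build_two_level_index(docs)
print(index["c"])   # -> [('D1', 6), ('D2', 3)]; 6 == 0b110: "c" is in versions 1 and 2 of D1
```

Because consecutive versions are near-copies, such bitmaps typically consist of long runs and compress well, which is the motivation for moving the per-version detail into the secondary level.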
1604.08897
2343474118
Abstract Indexing highly repetitive collections has become a relevant problem with the emergence of large repositories of versioned documents, among other applications. These collections may reach huge sizes, but are formed mostly of documents that are near-copies of others. Traditional techniques for indexing these collections fail to properly exploit their regularities in order to reduce space. We introduce new techniques for compressing inverted indexes that exploit this near-copy regularity. They are based on run-length, Lempel–Ziv, or grammar compression of the differential inverted lists, instead of the usual practice of gap-encoding them. We show that, in this highly repetitive setting, our compression methods significantly reduce the space obtained with classical techniques, at the price of moderate slowdowns. Moreover, our best methods are universal, that is, they do not need to know the versioning structure of the collection, nor that a clear versioning structure even exists. We also introduce compressed self-indexes in the comparison. These are designed for general strings (not only natural language texts) and represent the text collection plus the index structure (not an inverted index) in integrated form. We show that these techniques can compress much further, using a small fraction of the space required by our new inverted indexes. Yet, they are orders of magnitude slower.
He and Suel @cite_51 also designed a positional inverted index for the repetitive scenario. They apply a previous technique to partition documents into fragments @cite_70 and then use their non-positional approach @cite_34 on the fragments. They focus on answering top-k queries, by first obtaining the top- @math ( @math ) documents over the non-positional index and then re-ranking them using the positional information in order to return the top- @math results. This is faster than using the whole positional information in the first stage @cite_51 @cite_44 .
{ "cite_N": [ "@cite_70", "@cite_44", "@cite_34", "@cite_51" ], "mid": [ "2170907470", "2052867877", "2022507549", "2090283421" ], "abstract": [ "Current web search engines focus on searching only themost recentsnapshot of the web. In some cases, however, it would be desirableto search over collections that include many different crawls andversions of each page. One important example of such a collectionis the Internet Archive, though there are many others. Sincethe data size of such an archive is multiple times that of a singlesnapshot, this presents us with significant performance challenges.Current engines use various techniques for index compression andoptimized query execution, but these techniques do not exploit thesignificant similarities between different versions of a page, or betweendifferent pages.In this paper, we propose a general framework for indexing andquery processing of archival collections and, more generally, anycollections with a sufficient amount of redundancy. Our approachresults in significant reductions in index size and query processingcosts on such collections, and it is orthogonal to and can be combinedwith the existing techniques. It also supports highly efficientupdates, both locally and over a network. Within this framework,we describe and evaluate different implementations that trade offindex size versus CPU cost and other factors, and discuss applicationsranging from archival web search to local search of web sites,email archives, or file systems. We present experimental resultsbased on search engine query log and a large collection consistingof multiple crawls.", "The inverted file is the most popular indexing mechanism for document search in an information retrieval system. Compressing an inverted file can greatly improve document search rate. Traditionally, the d-gap technique is used in the inverted file compression by replacing document identifiers with usually much smaller gap values. However, fluctuating gap values cannot be efficiently compressed by some well-known prefix-free codes. To smoothen and reduce the gap values, we propose a document-identifier reassignment algorithm. This reassignment is based on a similarity factor between documents. We generate a reassignment order for all documents according to the similarity to reassign closer identifiers to the documents having closer relationships. Simulation results show that the average gap values of sample inverted files can be reduced by 30 , and the compression rate of d-gapped inverted file with prefix-free codes can be improved by 15 .", "Current Information Retrieval systems use inverted index structures for efficient query processing. Due to the extremely large size of many data sets, these index structures are usually kept in compressed form, and many techniques for optimizing compressed size and query processing speed have been proposed. In this paper, we focus on versioned document collections, that is, collections where each document is modified over time, resulting in multiple versions of the document. Consecutive versions of the same document are often similar, and several researchers have explored ideas for exploiting this similarity to decrease index size. We propose new index compression techniques for versioned document collections that achieve reductions in index size over previous methods. In particular, we first propose several bitwise compression techniques that achieve a compact index structure but that are too slow for most applications. 
Based on the lessons learned, we then propose additional techniques that come close to the sizes of the bitwise technique while also improving on the speed of the best previous methods.", "Versioned document collections are collections that contain multiple versions of each document. Important examples are Web archives, Wikipedia and other wikis, or source code and documents maintained in revision control systems. Versioned document collections can become very large, due to the need to retain past versions, but there is also a lot of redundancy between versions that can be exploited. Thus, versioned document collections are usually stored using special differential (delta) compression techniques, and a number of researchers have recently studied how to exploit this redundancy to obtain more succinct full-text index structures. In this paper, we study index organization and compression techniques for such versioned full-text index structures. In particular, we focus on the case of positional index structures, while most previous work has focused on the non-positional case. Building on earlier work in [zs:redun], we propose a framework for indexing and querying in versioned document collections that integrates non-positional and positional indexes to enable fast top-k query processing. Within this framework, we define and study the problem of minimizing positional index size through optimal substring partitioning. Experiments on Wikipedia and web archive data show that our techniques achieve significant reductions in index size over previous work while supporting very fast query processing." ] }
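The two-stage top-k strategy described in this record, where the top-k' candidates (with k' > k) are first retrieved with the cheaper non-positional index and only those are re-ranked with positional information, can be illustrated with a toy sketch. The scoring functions below (term frequency for stage one, minimal covering window for stage two) are simplified stand-ins for what the cited systems actually use.

```python
import itertools

def tf_score(doc_tokens, query):
    """Cheap stage-one score: summed query-term frequencies."""
    return sum(doc_tokens.count(q) for q in query)

def min_window(positions_by_term):
    """Smallest token span covering one occurrence of every query term."""
    best = float("inf")
    for combo in itertools.product(*positions_by_term):
        best = min(best, max(combo) - min(combo) + 1)
    return best

def two_stage_topk(docs, query, k=2, k_prime=4):
    """docs maps doc_id -> list of tokens; returns the final top-k doc ids."""
    # Stage 1: rank every document with the non-positional score, keep k' > k.
    stage1 = sorted(docs, key=lambda d: -tf_score(docs[d], query))[:k_prime]
    # Stage 2: re-rank only those k' candidates with positional (proximity) evidence.
    def proximity(d):
        pos = [[i for i, t in enumerate(docs[d]) if t == q] for q in query]
        return min_window(pos) if all(pos) else float("inf")
    return sorted(stage1, key=proximity)[:k]

docs = {
    "D1": ["inverted", "index", "for", "versioned", "index", "data"],
    "D2": ["index", "data", "and", "more", "data", "about", "inverted", "lists"],
    "D3": ["inverted", "data", "index"],
}
print(two_stage_topk(docs, ["inverted", "index"]))   # -> ['D1', 'D3']
```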
1604.08823
2342721033
In their seminal paper, Frey and Osborne quantified the automation of jobs, by assigning each job in the O*NET database a probability to be automated. In this paper, we refine their results in the following way: Every O*NET job consists of a set of tasks, and these tasks can be related. We use a linear program to assign probabilities to tasks, such that related tasks have a similar probability and the tasks can explain the computerization probability of a job. Analyzing jobs on the level of tasks helps comprehending the results, as experts as well as laymen can more easily criticize and refine what parts of a job are susceptible to computerization.
The seminal paper by Frey and Osborne is the first to make quantitative claims about the future of jobs @cite_17 . Together with 70 machine learning experts, Frey and Osborne first manually labeled 70 out of 702 jobs from the O*NET database as either "automatable" or "non-automatable". This labeling was, as the authors admit, a subjective assignment based on "eye-balling" the O*NET job descriptions. Labels were only assigned to jobs where the whole job was considered to be (non-)automatable, and to jobs where the participants of the workshop were most confident. To calculate the probability for non-labeled jobs, Frey and Osborne used a probabilistic classification algorithm. They chose 9 O*NET properties as features for their classifier, namely "Finger Dexterity", "Manual Dexterity", "Cramped Work Space, Awkward Positions", "Originality", "Fine Arts", "Social Perceptiveness", "Negotiation", "Persuasion", and "Assisting and Caring for Others".
{ "cite_N": [ "@cite_17" ], "mid": [ "2526781987" ], "abstract": [ "We examine how susceptible jobs are to computerisation. To assess this, we begin by implementing a novel methodology to estimate the probability of computerisation for 702 detailed occupations, using a Gaussian process classifier. Based on these estimates, we examine expected impacts of future computerisation on US labour market outcomes, with the primary objective of analysing the number of jobs at risk and the relationship between an occupations probability of computerisation, wages and educational attainment." ] }
1604.08685
2345308174
Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.
In this paper, we choose to use a skeleton-based representation, exploiting the power of abstraction. The skeleton model can capture geometric changes of articulated objects @cite_52 @cite_1 @cite_28 , like a human body or the base of a swivel chair. Typically, researchers recovered a 3D skeleton from a single image by minimizing its projection error on the 2D image plane @cite_24 @cite_54 @cite_6 @cite_40 @cite_7 @cite_11 . Recent work in this line @cite_28 @cite_56 demonstrated state-of-the-art performance. In contrast to them, we propose to use neural networks to predict a 3D object skeleton from its 2D keypoints, which is more robust to imperfect detection results and can be jointly learned with keypoint estimators.
{ "cite_N": [ "@cite_7", "@cite_28", "@cite_54", "@cite_1", "@cite_52", "@cite_6", "@cite_56", "@cite_24", "@cite_40", "@cite_11" ], "mid": [ "2155196764", "1943191679", "2170321477", "2963013806", "2135085348", "2059704894", "2402546937", "2131806657", "2147817141", "2114111978" ], "abstract": [ "Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data.", "Estimating 3D human pose from 2D joint locations is central to the analysis of people in images and video. To address the fact that the problem is inherently ill posed, many methods impose a prior over human poses. Unfortunately these priors admit invalid poses because they do not model how joint-limits vary with pose. Here we make two key contributions. First, we collect a motion capture dataset that explores a wide range of human poses. From this we learn a pose-dependent model of joint limits that forms our prior. Both dataset and prior are available for research purposes. Second, we define a general parametrization of body pose and a new, multi-stage, method to estimate 3D pose from 2D joint locations using an over-complete dictionary of poses. Our method shows good generalization while avoiding impossible poses. We quantitatively compare our method with recent work and show state-of-the-art results on 2D to 3D pose estimation using the CMU mocap dataset. We also show superior results using manual annotations on real images and automatic detections on the Leeds sports pose dataset.", "Line drawings provide an effective means of communication about the geometry of 3D objects. An understanding of how to duplicate the way humans interpret line drawings is extremely important in enabling man-machine communication with respect to images, diagrams, and spatial constructs. In particular, such an understanding could be used to provide the human with the capability to create a line-drawing sketch of a polyhedral object that the machine can automatically convert into the intended 3D model.", "One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. 
To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results and is even competitive when the skeleton structure of the two sources differ substantially.", "This paper presents an algorithm for learning the time-varying shape of a non-rigid 3D object from uncalibrated 2D tracking data. We model shape motion as a rigid component (rotation and translation) combined with a non-rigid deformation. Reconstruction is ill-posed if arbitrary deformations are allowed. We constrain the problem by assuming that the object shape at each time instant is drawn from a Gaussian distribution. Based on this assumption, the algorithm simultaneously estimates 3D shape and motion for each time frame, learns the parameters of the Gaussian, and robustly fills-in missing data points. We then extend the algorithm to model temporal smoothness in object shape, thus allowing it to handle severe cases of missing data.", "We introduce a new approach for recognizing and reconstructing 3D objects in images. Our approach is based on an analysis by synthesis strategy. A forward synthesis model constructs possible geometric interpretations of the world, and then selects the interpretation that best agrees with the measured visual evidence. The forward model synthesizes visual templates defined on invariant (HOG) features. These visual templates are discriminatively trained to be accurate for inverse estimation. We introduce an efficient \"brute-force\" approach to inference that searches through a large number of candidate reconstructions, returning the optimal one. One benefit of such an approach is that recognition is inherently (re)constructive. We show state of the art performance for detection and reconstruction on two challenging 3D object recognition datasets of cars and cuboids.", "We investigate the problem of reconstructing the 3D shape of an object, given a set of landmarks in a single image. To alleviate the reconstruction ambiguity, a widely-used approach is to confine the unknown 3D shape within a shape space built upon existing shapes. While this approach has proven to be successful in various applications, a challenging issue remains, i.e. the joint estimation of shape parameters and camera-pose parameters requires to solve a nonconvex optimization problem. The existing methods often adopt an alternating minimization scheme to locally update the parameters, and consequently the solution is sensitive to initialization. In this paper, we propose a convex formulation to address this issue and develop an efficient algorithm to solve the proposed convex program. We demonstrate the exact recovery property of the proposed method, its merits compared to the alternative methods, and the applicability in human pose, car and face reconstruction.", "Abstract A computer vision system has been implemented that can recognize three-dimensional objects from unknown viewpoints in single gray-scale images. Unlike most other approaches, the recognition is accomplished without any attempt to reconstruct depth information bottom-up from the visual input. Instead, three other mechanisms are used that can bridge the gap between the two-dimensional image and knowledge of three-dimensional objects. First, a process of perceptual organization is used to form groupings and structures in the image that are likely to be invariant over a wide range of viewpoints. 
Second, a probabilistic ranking method is used to reduce the size of the search space during model-based matching. Finally, a process of spatial correspondence brings the projections of three-dimensional models into direct correspondence with the image by solving for unknown viewpoint and model parameters. A high level of robustness in the presence of occlusion and missing data can be achieved through full application of a viewpoint consistency constraint. It is argued that similar mechanisms and constraints form the basis for recognition in human vision.", "Recovering 3D geometry from a single 2D line drawing is an important and challenging problem in computer vision. It has wide applications in interactive 3D modeling from images, computer-aided design, and 3D object retrieval. Previous methods of 3D reconstruction from line drawings are mainly based on a set of heuristic rules. They are not robust to sketch errors and often fail for objects that do not satisfy the rules. In this paper, we propose a novel approach, called example-based 3D object reconstruction from line drawings, which is based on the observation that a natural or man-made complex 3D object normally consists of a set of basic 3D objects. Given a line drawing, a graphical model is built where each node denotes a basic object whose candidates are from a 3D model (example) database. The 3D reconstruction is solved using a maximum-a-posteriori (MAP) estimation such that the reconstructed result best fits the line drawing. Our experiments show that this approach achieves much better reconstruction accuracy and are more robust to imperfect line drawings than previous methods.", "Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching." ] }
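Several of the works cited in this record recover a 3D skeleton by minimizing the reprojection error of its joints on the 2D image plane, often over a small dictionary of base shapes. The sketch below is a hypothetical, heavily simplified version of that idea: a weak-perspective camera with a single rotation angle, one deformation mode, and scipy.optimize.least_squares fitting these parameters to observed 2D keypoints. Real methods add richer camera models, shape priors, and articulation constraints.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical 5-joint skeleton: a mean shape B0 plus one deformation mode B1.
B0 = np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 1.0]])
B1 = np.array([[0, 0, 0], [0, 0, 1], [0, 0, -1], [0, 0, 0], [0, 0, 0.0]])

def rot_y(a):                               # rotation about the vertical axis
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def project(params):
    angle, scale, c1 = params
    shape = B0 + c1 * B1                    # 3D joints in object coordinates
    return scale * (shape @ rot_y(angle).T)[:, :2]   # weak-perspective projection

def residuals(params, keypoints_2d):
    return (project(params) - keypoints_2d).ravel()

# Synthetic "observed" 2D keypoints generated from known parameters plus noise.
true_params = np.array([0.4, 1.2, 0.7])
obs = project(true_params) + 0.01 * np.random.default_rng(0).standard_normal((5, 2))

fit = least_squares(residuals, x0=[0.0, 1.0, 0.0], args=(obs,))
print("estimated (angle, scale, c1):", np.round(fit.x, 2))
print("fitted 3D skeleton:\n", np.round(B0 + fit.x[2] * B1, 2))
```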
1604.08685
2345308174
Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.
Our work also connects to the traditional field of vision as inverse graphics @cite_10 @cite_13 and analysis by synthesis @cite_12 @cite_43 @cite_57 @cite_34 , as we use neural nets to decode latent 3D structure from images, and use a projection layer for rendering. Their approaches often required supervision for the inferred representations or made over-simplified assumptions about background and occlusion in images. Our approach learns a 3D representation without using 3D supervision, and generalizes well to real images.
{ "cite_N": [ "@cite_10", "@cite_57", "@cite_43", "@cite_34", "@cite_13", "@cite_12" ], "mid": [ "1981814724", "2169153920", "1960579544", "2181623680", "1691728462", "2147336195" ], "abstract": [ "We describe a hierarchical, generative model that can be viewed as a nonlinear generalization of factor analysis and can be implemented in a neural network. The model uses bottom–up, top–down and lateral connections to perform Bayesian perceptual inference correctly. Once perceptual inference has been performed the connection strengths can be updated using a very simple learning rule that only requires locally available information. We demonstrate that the network learns to extract sparse, distributed, hierarchical representations.", "This contribution reviews (some of) the history of analysis by synthesis, an approach to perception and comprehension articulated in the 1950s. Whereas much research has focused on bottom-up, feed-forward, inductive mechanisms, analysis by synthesis as a heuristic model emphasizes a balance of bottom-up and knowledge-driven, top-down, predictive steps in speech perception and language comprehension. This idea aligns well with contemporary Bayesian approaches to perception (in language and other domains), which are illustrated with examples from different aspects of perception and comprehension. Results from psycholinguistics, the cognitive neuroscience of language, and visual object recognition suggest that analysis by synthesis can provide a productive way of structuring biolinguistic research. Current evidence suggests that such a model is theoretically well motivated, biologically sensible, and becomes computationally tractable borrowing from Bayesian formalizations.", "Recent progress on probabilistic modeling and statistical learning, coupled with the availability of large training datasets, has led to remarkable progress in computer vision. Generative probabilistic models, or “analysis-by-synthesis” approaches, can capture rich scene structure but have been less widely applied than their discriminative counterparts, as they often require considerable problem-specific engineering in modeling and inference, and inference is typically seen as requiring slow, hypothesize-and-test Monte Carlo methods. Here we present Picture, a probabilistic programming language for scene understanding that allows researchers to express complex generative vision models, while automatically solving them using fast general-purpose inference machinery. Picture provides a stochastic scene language that can express generative models for arbitrary 2D 3D scenes, as well as a hierarchy of representation layers for comparing scene hypotheses with observed images by matching not simply pixels, but also more abstract features (e.g., contours, deep neural network activations). Inference can flexibly integrate advanced Monte Carlo strategies with fast bottom-up data-driven methods. Thus both representations and inference strategies can build directly on progress in discriminatively trained systems to make generative vision more robust and efficient. We use Picture to write programs for 3D face analysis, 3D human pose estimation, and 3D object reconstruction - each competitive with specially engineered baselines.", "Humans demonstrate remarkable abilities to predict physical events in dynamic scenes, and to infer the physical properties of objects from static images. We propose a generative model for solving these problems of physical scene understanding from real-world videos and images. 
At the core of our generative model is a 3D physics engine, operating on an object-based representation of physical properties, including mass, position, 3D shape, and friction. We can infer these latent properties using relatively brief runs of MCMC, which drive simulations in the physics engine to fit key features of visual observations. We further explore directly mapping visual inputs to physical properties, inverting a part of the generative process using deep learning. We name our model Galileo, and evaluate it on a video dataset with simple yet physically rich scenarios. Results show that Galileo is able to infer the physical properties of objects and predict the outcome of a variety of physical events, with an accuracy comparable to human subjects. Our study points towards an account of human vision with generative physical knowledge at its core, and various recognition models as helpers leading to efficient inference.", "This paper presents the Deep Convolution Inverse Graphics Network (DC-IGN), a model that aims to learn an interpretable representation of images, disentangled with respect to three-dimensional scene structure and viewing transformations such as depth rotations and lighting variations. The DC-IGN model is composed of multiple layers of convolution and de-convolution operators and is trained using the Stochastic Gradient Variational Bayes (SGVB) algorithm [10]. We propose a training procedure to encourage neurons in the graphics code layer to represent a specific transformation (e.g. pose or light). Given a single input image, our model can generate new images of the same object with variations in pose and lighting. We present qualitative and quantitative tests of the model's efficacy at learning a 3D rendering engine for varied object classes including faces and chairs.", "We argue that the study of human vision should be aimed at determining how humans perform natural tasks with natural images. Attempts to understand the phenomenology of vision from artificial stimuli, although worthwhile as a starting point, can lead to faulty generalizations about visual systems, because of the enormous complexity of natural images. Dealing with this complexity is daunting, but Bayesian inference on structured probability distributions offers the ability to design theories of vision that can deal with the complexity of natural images, and that use ‘analysis by synthesis' strategies with intriguing similarities to the brain. We examine these strategies using recent examples from computer vision, and outline some important imlications for cognitive science." ] }
1604.08685
2345308174
Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.
3D viewpoint estimation: 3D viewpoint estimation seeks to estimate the 3D orientation of an object from a single image @cite_16 . Some previous methods formulated it as a classification or regression problem, and aimed to directly estimate the viewpoint from an image @cite_8 @cite_53 . Others proposed to estimate the 3D viewpoint from detected 2D keypoints or edges in the image @cite_11 @cite_22 @cite_27 . While the main focus of our work is to estimate 3D object structure, our method can also predict its 3D viewpoint.
{ "cite_N": [ "@cite_22", "@cite_8", "@cite_53", "@cite_27", "@cite_16", "@cite_11" ], "mid": [ "1519402791", "2111087635", "1591870335", "", "1991264156", "2114111978" ], "abstract": [ "We introduce a novel approach to the problem of localizing objects in an image and estimating their fine-pose. Given exact CAD models, and a few real training images with aligned models, we propose to leverage the geometric information from CAD models and appearance information from real images to learn a model that can accurately estimate fine pose in real images. Specifically, we propose FPM, a fine pose parts-based model, that combines geometric information in the form of shared 3D parts in deformable part based models, and appearance information in the form of objectness to achieve both fast and accurate fine pose estimation. Our method significantly outperforms current state-of-the-art algorithms in both accuracy and speed.", "This paper addresses the problem of category-level 3D object detection. Given a monocular image, our aim is to localize the objects in 3D by enclosing them with tight oriented 3D bounding boxes. We propose a novel approach that extends the well-acclaimed deformable part-based model [1] to reason in 3D. Our model represents an object class as a deformable 3D cuboid composed of faces and parts, which are both allowed to deform with respect to their anchors on the 3D box. We model the appearance of each face in fronto-parallel coordinates, thus effectively factoring out the appearance variation induced by viewpoint. Our model reasons about face visibility patters called aspects. We train the cuboid model jointly and discriminatively and share weights across all aspects to attain efficiency. Inference then entails sliding and rotating the box in 3D and scoring object hypotheses. While for inference we discretize the search space, the variables are continuous in our model. We demonstrate the effectiveness of our approach in indoor and outdoor scenarios, and show that our approach significantly outperforms the state-of-the-art in both 2D [1] and 3D object detection [2].", "Object viewpoint estimation from 2D images is an essential task in computer vision. However, two issues hinder its progress: scarcity of training data with viewpoint annotations, and a lack of powerful features. Inspired by the growing availability of 3D models, we propose a framework to address both issues by combining render-based image synthesis and CNNs (Convolutional Neural Networks). We believe that 3D models have the potential in generating a large number of images of high variation, which can be well exploited by deep CNN with a high learning capacity. Towards this goal, we propose a scalable and overfit-resistant image synthesis pipeline, together with a novel CNN specifically tailored for the viewpoint estimation task. Experimentally, we show that the viewpoint estimation from our pipeline can significantly outperform state-of-the-art methods on PASCAL 3D+ benchmark.", "", "3D object detection and pose estimation methods have become popular in recent years since they can handle ambiguities in 2D images and also provide a richer description for objects compared to 2D object detectors. However, most of the datasets for 3D recognition are limited to a small amount of images per category or are captured in controlled environments. In this paper, we contribute PASCAL3D+ dataset, which is a novel and challenging dataset for 3D object detection and pose estimation. 
PASCAL3D+ augments 12 rigid categories of the PASCAL VOC 2012 [4] with 3D annotations. Furthermore, more images are added for each category from ImageNet [3]. PASCAL3D+ images exhibit much more variability compared to the existing 3D datasets, and on average there are more than 3,000 object instances per category. We believe this dataset will provide a rich testbed to study 3D detection and pose estimation and will help to significantly push forward research in this area. We provide the results of variations of DPM [6] on our new dataset for object detection and viewpoint estimation in different scenarios, which can be used as baselines for the community. Our benchmark is available online at http: cvgl.stanford.edu projects pascal3d", "Geometric 3D reasoning at the level of objects has received renewed attention recently in the context of visual scene understanding. The level of geometric detail, however, is typically limited to qualitative representations or coarse boxes. This is linked to the fact that today's object class detectors are tuned toward robust 2D matching rather than accurate 3D geometry, encouraged by bounding-box-based benchmarks such as Pascal VOC. In this paper, we revisit ideas from the early days of computer vision, namely, detailed, 3D geometric object class representations for recognition. These representations can recover geometrically far more accurate object hypotheses than just bounding boxes, including continuous estimates of object pose and 3D wireframes with relative 3D positions of object parts. In combination with robust techniques for shape description and inference, we outperform state-of-the-art results in monocular 3D pose estimation. In a series of experiments, we analyze our approach in detail and demonstrate novel applications enabled by such an object class representation, such as fine-grained categorization of cars and bicycles, according to their 3D geometry, and ultrawide baseline matching." ] }
1604.08685
2345308174
Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.
Training with synthetic data: Synthetic data are often used to augment the training set @cite_35 @cite_61 @cite_26 . Su et al. @cite_35 attempted to train a 3D viewpoint estimator using a combination of real and synthetic images, while Sun et al. @cite_39 and Zhou et al. @cite_36 also used a similar strategy for object detection and matching, respectively. Huang et al. @cite_15 analyzed the invariance of convolutional neural networks using synthetic images. For image synthesis, Dosovitskiy et al. @cite_47 trained a neural network to generate new images using synthetic images.
{ "cite_N": [ "@cite_61", "@cite_35", "@cite_47", "@cite_26", "@cite_36", "@cite_39", "@cite_15" ], "mid": [ "1713526874", "2015112703", "1893585201", "2152926413", "2474531669", "2083544878", "2021261909" ], "abstract": [ "Crowdsourced 3D CAD models are becoming easily accessible online, and can potentially generate an infinite number of training images for almost any object category. We show that adapting contemporary Deep Convolutional Neural Net (DCNN) models to such data can be effective, especially in the few-shot regime where none or only a few annotated real images are available, or where the images are not well matched to the target domain. Little is known about the degree of realism necessary to train models with deep features on CAD data. In a detailed analysis, we use synthetic images to probe DCNN invariance to object-class variations caused by 3D shape, pose, and photorealism, with surprising findings. In particular, we show that DCNNs used as a fixed representation exhibit a large amount of invariance to these factors, but, if allowed to adapt, can still learn effectively from synthetic data. These findings guide us in designing a method for adaptive training of DCNNs using real and synthetic data. We show that our approach significantly outperforms previous methods on the benchmark PASCAL VOC2007 dataset when learning in the fewshot scenario, and outperform training with real data in a domain shift scenario on the Office benchmark.", "Images, while easy to acquire, view, publish, and share, they lack critical depth information. This poses a serious bottleneck for many image manipulation, editing, and retrieval tasks. In this paper we consider the problem of adding depth to an image of an object, effectively 'lifting' it back to 3D, by exploiting a collection of aligned 3D models of related objects. Our key insight is that, even when the imaged object is not contained in the shape collection, the network of shapes implicitly characterizes a shape-specific deformation subspace that regularizes the problem and enables robust diffusion of depth information from the shape collection to the input image. We evaluate our fully automatic approach on diverse and challenging input images, validate the results against Kinect depth readings, and demonstrate several imaging applications including depth-enhanced image editing and image relighting.", "We train a generative convolutional neural network which is able to generate images of objects given object type, viewpoint, and color. We train the network in a supervised manner on a dataset of rendered 3D chair models. Our experiments show that the network does not merely learn all images by heart, but rather finds a meaningful representation of a 3D chair model allowing it to assess the similarity of different chairs, interpolate between given viewpoints to generate the missing ones, or invent new chair styles by interpolating between chairs from the training set. We show that the network can be used to find correspondences between different chairs from the dataset, outperforming existing approaches on this task.", "Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. 
We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends locality-sensitive hashing, a recently developed method to find approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call parameter-sensitive hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.", "Discriminative deep learning approaches have shown impressive results for problems where human-labeled ground truth is plentiful, but what about tasks where labels are difficult or impossible to obtain? This paper tackles one such problem: establishing dense visual correspondence across different object instances. For this task, although we do not know what the ground-truth is, we know it should be consistent across instances of that category. We exploit this consistency as a supervisory signal to train a convolutional neural network to predict cross-instance correspondences between pairs of images depicting objects of the same category. For each pair of training images we find an appropriate 3D CAD model and render two synthetic views to link in with the pair, establishing a correspondence flow 4-cycle. We use ground-truth synthetic-to-synthetic correspondences, provided by the rendering engine, to train a ConvNet to predict synthetic-to-real, real-to-real and realto-synthetic correspondences that are cycle-consistent with the ground-truth. At test time, no CAD models are required. We demonstrate that our end-to-end trained ConvNet supervised by cycle-consistency outperforms stateof-the-art pairwise matching methods in correspondencerelated tasks.", "The most successful 2D object detection methods require a large number of images annotated with object bounding boxes to be collected for training. We present an alternative approach that trains on virtual data rendered from 3D models, avoiding the need for manual labeling. Growing demand for virtual reality applications is quickly bringing about an abundance of available 3D models for a large variety of object categories. While mainstream use of 3D models in vision has focused on predicting the 3D pose of objects, we investigate the use of such freely available 3D models for multicategory 2D object detection. To address the issue of dataset bias that arises from training on virtual data and testing on real images, we propose a simple and fast adaptation approach based on decorrelated features. We also compare two kinds of virtual data, one rendered with real-image textures and one without. Evaluation on a benchmark domain adaptation dataset demonstrates that our method performs comparably to existing methods trained on large-scale real image domains.", "We present an approach to automatic 3D reconstruction of objects depicted in Web images. The approach reconstructs objects from single views. The key idea is to jointly analyze a collection of images of different objects along with a smaller collection of existing 3D models. The images are analyzed and reconstructed together. Joint analysis regularizes the formulated optimization problems, stabilizes correspondence estimation, and leads to reasonable reproduction of object appearance without traditional multi-view cues." ] }
1604.08685
2345308174
Understanding 3D object structure from a single image is an important but difficult task in computer vision, mostly due to the lack of 3D object annotations in real images. Previous work tackles this problem by either solving an optimization task given 2D keypoint positions, or training on synthetic data with ground truth 3D information.
In this paper, we combine real 2D-annotated images and synthetic 3D data for training to recover a 3D skeleton. We use heatmaps of 2D keypoints, instead of (often imperfectly) rendered images, from synthetic 3D data, so that our algorithm has better generalization ability, as the effects of imperfect rendering are minimized. Yasin et al. @cite_1 also proposed to use both 2D and 3D data for training, but they use keypoint locations, instead of heatmaps, as the intermediate representation that connects 2D and 3D.
{ "cite_N": [ "@cite_1" ], "mid": [ "2963013806" ], "abstract": [ "One major challenge for 3D pose estimation from a single RGB image is the acquisition of sufficient training data. In particular, collecting large amounts of training data that contain unconstrained images and are annotated with accurate 3D poses is infeasible. We therefore propose to use two independent training sources. The first source consists of images with annotated 2D poses and the second source consists of accurate 3D motion capture data. To integrate both sources, we propose a dual-source approach that combines 2D pose estimation with efficient and robust 3D pose retrieval. In our experiments, we show that our approach achieves state-of-the-art results and is even competitive when the skeleton structure of the two sources differ substantially." ] }
1604.08633
2347114283
Recent work on word ordering has argued that syntactic structure is important, or even required, for effectively recovering the order of a sentence. We find that, in fact, an n-gram language model with a simple heuristic gives strong results on this task. Furthermore, we show that a long short-term memory (LSTM) language model is even more effective at recovering order, with our basic model outperforming a state-of-the-art syntactic model by 11.5 BLEU points. Additional data and larger beams yield further gains, at the expense of training and search time.
Recent approaches to linearization have been based on reconstructing the syntactic structure to produce the word order. Let @math represent all projective dependency parse trees over @math words. The objective is to find @math where @math is now over both the syntactic structure and the linearization. The current state of the art on the Penn Treebank (PTB) @cite_11 , without external data, uses a transition-based parser with beam search to construct a sentence and a parse tree. The scoring function is a linear model @math and is trained with an early-update structured perceptron to match both a given order and a given syntactic tree. The feature function @math includes features on the syntactic tree. This work improves upon past work which used best-first search over a similar objective @cite_12 .
{ "cite_N": [ "@cite_12", "@cite_11" ], "mid": [ "129942754", "1632114991" ], "abstract": [ "Machine-produced text often lacks grammaticality and fluency. This paper studies grammaticality improvement using a syntax-based algorithm based on ccg. The goal of the search problem is to find an optimal parse tree among all that can be constructed through selection and ordering of the input words. The search problem, which is significantly harder than parsing, is solved by guided learning for best-first search. In a standard word ordering task, our system gives a BLEU score of 40.1, higher than the previous result of 33.7 achieved by a dependency-based system.", "Abstract : As a result of this grant, the researchers have now published oil CDROM a corpus of over 4 million words of running text annotated with part-of- speech (POS) tags, with over 3 million words of that material assigned skelet al grammatical structure. This material now includes a fully hand-parsed version of the classic Brown corpus. About one half of the papers at the ACL Workshop on Using Large Text Corpora this past summer were based on the materials generated by this grant." ] }
1604.08202
2951289157
We consider the problem of amodal instance segmentation, the objective of which is to predict the region encompassing both visible and occluded parts of each object. Thus far, the lack of publicly available amodal segmentation annotations has stymied the development of amodal segmentation methods. In this paper, we sidestep this issue by relying solely on standard modal instance segmentation annotations to train our model. The result is a new method for amodal instance segmentation, which represents the first such method to the best of our knowledge. We demonstrate the proposed method's effectiveness both qualitatively and quantitatively.
There has been relatively little work exploring amodal completion. @cite_37 tackled the problem of predicting the amodal bounding box of an object. @cite_12 explored completing the occluded portions of planar surfaces given depth information. To the best of our knowledge, there has been no algorithmic work on general-purpose amodal segmentation. However, there has been work on collecting amodal segmentation annotations. @cite_5 collected amodal segmentation annotations on BSDS images, but has yet to make them publicly available. As far as we know, the proposed method represents the first method for amodal segmentation.
{ "cite_N": [ "@cite_5", "@cite_37", "@cite_12" ], "mid": [ "", "2209196558", "2067912884" ], "abstract": [ "", "We consider the problem of enriching current object detection systems with veridical object sizes and relative depth estimates from a single image. There are several technical challenges to this, such as occlusions, lack of calibration data and the scale ambiguity between object size and distance. These have not been addressed in full generality in previous work. Here we propose to tackle these issues by building upon advances in object recognition and using recently created large-scale datasets. We first introduce the task of amodal bounding box completion, which aims to infer the the full extent of the object instances in the image. We then propose a probabilistic framework for learning category-specific object size distributions from available annotations and leverage these in conjunction with amodal completions to infer veridical sizes of objects in novel images. Finally, we introduce a focal length prediction approach that exploits scene recognition to overcome inherent scale ambiguities and demonstrate qualitative results on challenging real-world scenes.", "We address the problems of contour detection, bottom-up grouping and semantic segmentation using RGB-D data. We focus on the challenging setting of cluttered indoor scenes, and evaluate our approach on the recently introduced NYU-Depth V2 (NYUD2) dataset [27]. We propose algorithms for object boundary detection and hierarchical segmentation that generalize the gPb-ucm approach of [2] by making effective use of depth information. We show that our system can label each contour with its type (depth, normal or albedo). We also propose a generic method for long-range amodal completion of surfaces and show its effectiveness in grouping. We then turn to the problem of semantic segmentation and propose a simple approach that classifies super pixels into the 40 dominant object categories in NYUD2. We use both generic and class-specific features to encode the appearance and geometry of objects. We also show how our approach can be used for scene classification, and how this contextual information in turn improves object recognition. In all of these tasks, we report significant improvements over the state-of-the-art." ] }
1604.08377
2950907599
Nowadays, more and more RDF data is becoming available on the Semantic Web. While the Semantic Web is generally incomplete by nature, on certain topics, it already contains complete information and thus, queries may return all answers that exist in reality. In this paper we develop a technique to check query completeness based on RDF data annotated with completeness information, taking into account data-specific inferences that lead to an inference problem which is @math -complete. We then identify a practically relevant fragment of completeness information, suitable for crowdsourced, entity-centric RDF data sources such as Wikidata, for which we develop an indexing technique that allows to scale completeness reasoning to Wikidata-scale data sources. We verify the applicability of our framework using Wikidata and develop COOL-WD, a completeness tool for Wikidata, used to annotate Wikidata with completeness statements and reason about the completeness of query answers over Wikidata. The tool is available at this http URL
Data completeness concerns the breadth, depth, and scope of information @cite_8 . In relational databases, Motro @cite_2 and Levy @cite_13 were among the first to investigate data completeness. Motro developed a sound technique to check query completeness based on database views, while Levy introduced the notion of local completeness statements to denote which parts of a database are complete. Razniewski and Nutt @cite_10 further extended their results by reducing completeness reasoning to containment checking, for which many algorithms are known, and by characterizing the complexity of reasoning for different classes of queries. In terms of their terminology, our completeness entailment problem is one of QC-QC entailment under bag semantics, for which it was so far only known to be in @math @cite_12 . @cite_4 proposed completeness patterns and defined a pattern algebra to check the completeness of queries. That work incorporated database instances, yet provided only a sound algorithm for completeness checking.
{ "cite_N": [ "@cite_4", "@cite_8", "@cite_10", "@cite_2", "@cite_13", "@cite_12" ], "mid": [ "2018592576", "1567491469", "2189547572", "2009285751", "1551374365", "" ], "abstract": [ "In many applications including loosely coupled cloud databases, collaborative editing and network monitoring, data from multiple sources is regularly used for query answering. For reasons such as system failures, insufficient author knowledge or network issues, data may be temporarily unavailable or generally nonexistent. Hence, not all data needed for query answering may be available. In this paper, we propose a natural class of completeness patterns, expressed by selections on database tables, to specify complete parts of database tables. We then show how to adapt the operators of relational algebra so that they manipulate these completeness patterns to compute completeness patterns pertaining to query answers. Our proposed algebra is computationally sound and complete with respect to the information that the patterns provide. We show that stronger completeness patterns can be obtained by considering not only the schema but also the database instance and we extend the algebra to take into account this additional information. We develop novel techniques to efficiently implement the computation of completeness patterns on query answers and demonstrate their scalability on real data.", "Poor data quality (DQ) can have substantial social and economic impacts. Although firms are improving data quality with practical approaches and tools, their improvement efforts tend to focus narrowly on accuracy. We believe that data consumers have a much broader data quality conceptualization than IS professionals realize. The purpose of this paper is to develop a framework that captures the aspects of data quality that are important to data consumers.A two-stage survey and a two-phase sorting study were conducted to develop a hierarchical framework for organizing data quality dimensions. This framework captures dimensions of data quality that are important to data consumers. Intrinsic DQ denotes that data have quality in their own right. Contextual DQ highlights the requirement that data quality must be considered within the context of the task at hand. Representational DQ and accessibility DQ emphasize the importance of the role of systems. These findings are consistent with our understanding that high-quality data should be intrinsically good, contextually appropriate for the task, clearly represented, and accessible to the data consumer.Our framework has been used effectively in industry and government. Using this framework, IS managers were able to better understand and meet their data consumers' data quality needs. The salient feature of this research study is that quality attributes of data are collected from data consumers instead of being defined theoretically or based on researchers' experience. Although exploratory, this research provides a basis for future studies that measure data quality along the dimensions of this framework.", "Data completeness is an important aspect of data quality as in many scenarios it is crucial to guarantee completeness of query answers. We develop techniques to conclude the completeness of query answers from information about the completeness of parts of a generally incomplete database. 
In our framework, completeness of a database can be described in two ways: by table completeness (TC) statements, which say that certain parts of a relation are complete, and by query completeness (QC) statements, which say that the set of answers of a query is complete. We identify as core problem to decide whether table completeness entails query completeness (TC-QC). We develop decision procedures and assess the complexity of TC-QC inferences depending on the languages of the TC and QC statements. We show that in important cases weakest preconditions for query completeness can be expressed in terms of table completeness statements, which means that these statements identify precisely the parts of a database that are critical for the completeness of a query. For the related problem of QC-QC entailment, we discuss its connection to query determinacy. Moreover, we show how to use the concrete state of a database to enable further completeness inferences.", "Database integrity has two complementary components: validity, which guarantees that all false information is excluded from the database, and completeness, which guarantees that all true information is included in the database. This article describes a uniform model of integrity for relational databases that considers both validity and completeness. To a large degree, this model subsumes the prevailing model of integrity (i.e., integrity constraints). One of the features of the new model is the determination of the integrity of answers issued by the database system in response to user queries. To users, answers that are accompanied with such detailed certifications of their integrity are more meaningful. First, the model is defined and discussed. Then, a specific mechanism is described that implements this model.", "We consider the problem of answering queries from databases that may be incomplete. A database is incomplete if some tuples may be missing from some relations, and only a part of each relation is known to be complete. This problem arises in several contexts. For example, systems that provide access to multiple heterogeneous information sources often encounter incomplete sources. The question we address is to determine whether the answer to a specific given query is complete even when the database is incomplete. We present a novel sound and complete algorithm for the answer-completeness problem by relating it to the problem of independence of queries from updates. We also show an important case of the independence problem (and therefore of the answer-completeness problem) that can be decided in polynomial time, whereas the best known algorithm for this case is exponential. This case involves updates that are described using a conjunction of comparison predicates. We also describe an algorithm that determines whether the answer to the query is complete in the current state of the database. Finally, we show that our treatment extends naturally to partially incorrect databases.", "" ] }
1604.08377
2950907599
Nowadays, more and more RDF data is becoming available on the Semantic Web. While the Semantic Web is generally incomplete by nature, on certain topics, it already contains complete information and thus, queries may return all answers that exist in reality. In this paper we develop a technique to check query completeness based on RDF data annotated with completeness information, taking into account data-specific inferences that lead to an inference problem which is @math -complete. We then identify a practically relevant fragment of completeness information, suitable for crowdsourced, entity-centric RDF data sources such as Wikidata, for which we develop an indexing technique that allows to scale completeness reasoning to Wikidata-scale data sources. We verify the applicability of our framework using Wikidata and develop COOL-WD, a completeness tool for Wikidata, used to annotate Wikidata with completeness statements and reason about the completeness of query answers over Wikidata. The tool is available at this http URL
Galárraga et al. @cite_9 proposed a rule mining system that is able to operate under the Open-World Assumption (OWA) by simulating negative examples using the Partial Completeness Assumption (PCA). The PCA assumes that if the dataset knows some @math -attribute of @math , then it knows all @math -attributes of @math . This heuristic was also employed by @cite_6 to develop Knowledge Vault, a Web-scale system for probabilistic knowledge fusion. In their paper, they used the term Local Closed-World Assumption (LCWA).
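As an illustration of how the PCA generates negative examples, consider the following toy sketch (our own minimal example, not AMIE's implementation; the KB and helper names are invented):

from collections import defaultdict

# Toy KB: facts[(subject, relation)] = set of known objects.
facts = defaultdict(set)
facts[("alice", "childOf")].add("bob")
# Nothing is known about carol's parents -> open world for her.

def pca_label(subject, relation, predicted_object):
    """Return True/False under the PCA, or None if the KB is silent."""
    known = facts[(subject, relation)]
    if predicted_object in known:
        return True     # confirmed positive example
    if known:
        return False    # PCA: some object known, so we assume all are known
    return None         # no object known -> not usable as a counterexample

print(pca_label("alice", "childOf", "bob"))   # True
print(pca_label("alice", "childOf", "dave"))  # False under the PCA
print(pca_label("carol", "childOf", "dave"))  # None (open world)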
{ "cite_N": [ "@cite_9", "@cite_6" ], "mid": [ "2151502664", "2016753842" ], "abstract": [ "Recent advances in information extraction have led to huge knowledge bases (KBs), which capture knowledge in a machine-readable format. Inductive Logic Programming (ILP) can be used to mine logical rules from the KB. These rules can help deduce and add missing knowledge to the KB. While ILP is a mature field, mining logical rules from KBs is different in two aspects: First, current rule mining systems are easily overwhelmed by the amount of data (state-of-the art systems cannot even run on today's KBs). Second, ILP usually requires counterexamples. KBs, however, implement the open world assumption (OWA), meaning that absent data cannot be used as counterexamples. In this paper, we develop a rule mining model that is explicitly tailored to support the OWA scenario. It is inspired by association rule mining and introduces a novel measure for confidence. Our extensive experiments show that our approach outperforms state-of-the-art approaches in terms of precision and coverage. Furthermore, our system, AMIE, mines rules orders of magnitude faster than state-of-the-art approaches.", "Recent years have witnessed a proliferation of large-scale knowledge bases, including Wikipedia, Freebase, YAGO, Microsoft's Satori, and Google's Knowledge Graph. To increase the scale even further, we need to explore automatic methods for constructing knowledge bases. Previous approaches have primarily focused on text-based extraction, which can be very noisy. Here we introduce Knowledge Vault, a Web-scale probabilistic knowledge base that combines extractions from Web content (obtained via analysis of text, tabular data, page structure, and human annotations) with prior knowledge derived from existing knowledge repositories. We employ supervised machine learning methods for fusing these distinct information sources. The Knowledge Vault is substantially bigger than any previously published structured knowledge repository, and features a probabilistic inference system that computes calibrated probabilities of fact correctness. We report the results of multiple studies that explore the relative utility of the different information sources and extraction methods." ] }
1604.08153
2344556769
In this paper we combine one method for hierarchical reinforcement learning - the options framework - with deep Q-networks (DQNs) through the use of different "option heads" on the policy network, and a supervisory network for choosing between the different options. We utilise our setup to investigate the effects of architectural constraints in subtasks with positive and negative transfer, across a range of network capacities. We empirically show that our augmented DQN has lower sample complexity when simultaneously learning subtasks with negative transfer, without degrading performance when learning subtasks with positive transfer.
Another notable success in subtask learning with multiple independent sources of reward are universal value function approximators (UVFAs) @cite_0 . UVFAs allow the generalisation of value functions across different goals, which helps the agent accomplish tasks that it has never seen before. The focus of UVFAs is on generalising between similar subtasks by sharing the representation between the different tasks. This has recently been expanded upon in the hierarchical-DQN @cite_4 ; however, these goal-based approaches have been demonstrated in domains where the different goals are highly related. From a function approximation perspective, goals should share a lot of structure with the raw states. In contrast, our approach focuses on separating out distinct subtasks, where partial independence between subpolicies can be enforced through structural constraints. In particular, we expect that separate Q-functions are less prone to negative transfer between subtasks.
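The factored form behind UVFAs, V(s, g) ≈ φ(s)·ψ(g), can be illustrated with a small numpy sketch (the table sizes and values are invented; the cited work learns the two factors from observed values, which we approximate here with a truncated SVD):

import numpy as np

rng = np.random.default_rng(0)
n_states, n_goals, k = 10, 4, 3

# Pretend V[s, g] is an observed value table; factor it into separate
# state and goal embeddings, as in the two-stage UVFA training scheme.
V = rng.normal(size=(n_states, n_goals))
U, S, Wt = np.linalg.svd(V, full_matrices=False)
phi = U[:, :k] * S[:k]   # state embeddings phi(s)
psi = Wt[:k, :].T        # goal embeddings psi(g)

V_hat = phi @ psi.T      # UVFA-style value: V(s, g) = phi(s) . psi(g)
print(np.abs(V - V_hat).max())  # small reconstruction error for rank k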
{ "cite_N": [ "@cite_0", "@cite_4" ], "mid": [ "567721252", "2963262099" ], "abstract": [ "Value functions are a core component of reinforcement learning systems. The main idea is to to construct a single function approximator V (s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V (s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals.", "Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. One of the key difficulties is insufficient exploration, resulting in an agent being unable to learn robust policies. Intrinsically motivated agents can explore new behavior for their own sake rather than to directly solve external goals. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical action-value functions, operating at different temporal scales, with goal-driven intrinsically motivated deep reinforcement learning. A top-level q-value function learns a policy over intrinsic goals, while a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse and delayed feedback: (1) a complex discrete stochastic decision process with stochastic transitions, and (2) the classic ATARI game -'Montezuma's Revenge'." ] }
1604.08335
2344259100
Outsourcing jobs to a public cloud is a cost-effective way to address the problem of satisfying the peak resource demand when the local cloud has insufficient resources. In this paper, we study on managing deadline-constrained bag-of-tasks jobs on hybrid clouds. We present a binary nonlinear programming (BNP) problem to model the hybrid cloud management where the utilization of physical machines (PMs) in the local cloud cluster is maximized when the local resources are enough to satisfy the deadline constraints of jobs, while when not, the rent cost from the public cloud is minimized. To solve this BNP problem in polynomial time, we proposed a heuristic algorithm. Its main idea is assigning the task closest to its deadline to current core until the core cannot finish any task within its deadline. When there is no available core, the algorithm adds an available PM with most capacity or rents a new VM with highest cost-performance ratio. Extensive experimental results show that our heuristic algorithm saves 16.2 -76 rent cost and improves 47.3 -182.8 resource utilizations satisfying deadline constraints, compared with first fit decreasing algorithm.
To minimize the cost of resources leased from public clouds, W. Z. Jiang and Z. Q. Sheng @cite_0 modelled the mapping of tasks to VMs as a bipartite graph. The two independent vertex sets of the bipartite graph are the task and VM collections, respectively. The weight of an edge is the VM cost of a discrete task, i.e., the product of the running time of the task and the cost of the VM per unit time. The cost-minimization problem is then to find a subset of the edge set whose weighted sum is minimal. The authors used the Hopcroft-Karp algorithm @cite_17 to solve the resulting minimum bipartite matching problem. This work does not consider whether a task can be finished within its deadline.
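The weighting scheme can be sketched as follows. Since Hopcroft-Karp computes maximum-cardinality matchings only, this illustration (our own, with invented running times and prices) uses the Hungarian method via scipy's linear_sum_assignment as a stand-in for the minimum-weight matching step:

import numpy as np
from scipy.optimize import linear_sum_assignment

# Edge weight = running time of task i on VM type j * unit-time price of j.
run_time = np.array([[2.0, 4.0, 8.0],
                     [1.0, 2.0, 4.0]])   # tasks x VM types (illustrative)
price = np.array([0.40, 0.20, 0.10])     # $ per hour for each VM type
cost = run_time * price                  # weighted bipartite graph

rows, cols = linear_sum_assignment(cost)          # minimum-weight assignment
print(list(zip(rows, cols)), cost[rows, cols].sum())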
{ "cite_N": [ "@cite_0", "@cite_17" ], "mid": [ "2533486876", "1592014028" ], "abstract": [ "Although some cloud providers look at the hybrid cloud as blasphemy, there are strong reasons for them to adopt it. Hybrid clouds offer the cost and scale benefits of public clouds while also offering the security and control of private clouds. Task scheduling, one of the most famous combinational optimization problems, plays a key role in the hybrid cloud environment. We propose a graph-based task scheduling algorithm. In order to achieve minimum cost, our algorithm takes into account not only the private resources but also the public resources. This paper also presents an extensive evaluation study and demonstrates that our proposed algorithms minimize the user's cost in a hybrid cloud environment.", "We reduce the problem of finding an augmenting path in a general graph to a reachability problem and show that a slight modification of depth-first search leads to an algorithm for finding such paths. As a consequence, we obtain a straightforward algorithm for maximum matching in general graphs of time complexity O(√nm), where n is the number of nodes and m is the number of edges in the graph." ] }
1604.08335
2344259100
Outsourcing jobs to a public cloud is a cost-effective way to address the problem of satisfying the peak resource demand when the local cloud has insufficient resources. In this paper, we study on managing deadline-constrained bag-of-tasks jobs on hybrid clouds. We present a binary nonlinear programming (BNP) problem to model the hybrid cloud management where the utilization of physical machines (PMs) in the local cloud cluster is maximized when the local resources are enough to satisfy the deadline constraints of jobs, while when not, the rent cost from the public cloud is minimized. To solve this BNP problem in polynomial time, we proposed a heuristic algorithm. Its main idea is assigning the task closest to its deadline to current core until the core cannot finish any task within its deadline. When there is no available core, the algorithm adds an available PM with most capacity or rents a new VM with highest cost-performance ratio. Extensive experimental results show that our heuristic algorithm saves 16.2 -76 rent cost and improves 47.3 -182.8 resource utilizations satisfying deadline constraints, compared with first fit decreasing algorithm.
Besides minimizing cost, some work has focused on minimizing the makespan of scientific applications through cloud bursting. FermiCloud @cite_12 dispatches a VM to the PM that has the highest utilization while still having enough resources for the VM in the private cloud. Only when all the resources in the private cloud are consumed are VMs deployed on a public cloud. A new VM is launched in a public cloud only when adding the VM can reduce the average job running time. @cite_26 @cite_30 proposed four cloud bursting schedulers whose main idea is to outsource a job to a public cloud when the estimated time between now and the start of the job's execution is greater than the estimated time needed to migrate the job to the public cloud.
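The outsourcing criterion of @cite_26 @cite_30 can be paraphrased in a few lines (a sketch with invented time estimates, not the authors' schedulers):

def should_burst(est_local_wait_s, est_migration_s):
    # Outsource only if the job would start sooner on the public cloud
    # than it would locally, migration overhead included.
    return est_local_wait_s > est_migration_s

# E.g. a 10-minute local queue vs. a 3-minute transfer -> burst the job.
print(should_burst(600, 180))  # True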
{ "cite_N": [ "@cite_30", "@cite_26", "@cite_12" ], "mid": [ "2000654328", "", "2030460158" ], "abstract": [ "The practice of computing across two or more data centers separated by the Internet is growing in popularity due to an explosion in scalable computing demands and pay-as-you-go schemes offered on the cloud. While cloud-bursting is addressing this process of scaling up and down across data centers (i.e. between private and public clouds), offering service level guarantees, is a challenge for inter-cloud computation, particularly for best-effort traffic and large files. The parallel workload we address is real-time and involves inter-cloud processing and analysis of images and documents. In our production printing domain, dedicated processing network resources are cost-prohibitive. Further, the problem is exacerbated by data intensive computing - we encounter huge file sizes atypical of intercloud parallel processing. To address these problems we propose three flavors of autonomic cloud-bursting schedulers that offer probabilistic guarantees on service levels required by customers (such as speed-up and queue sequence preservation) by adapting to changing workload characteristics, variation in bandwidth and available resources. In particular, these opportunistic schedulers use a quadratic response surface model for processing time in concert with a time-of-day dependent bandwidth predictor to increase the throughput and utilization while simultaneously reducing out-of-sequence completions for a document processing workload.", "", "Cloud computing is changing the infrastructure upon which scientific computing depends from supercomputers and distributed computing clusters to a more elastic cloud-based structure. The service-oriented focus and elasticity of clouds can not only facilitate technology needs of emerging business but also shorten response time and reduce operational costs of traditional scientific applications. Fermi National Accelerator Laboratory (Fermilab) is currently in the process of building its own private cloud, FermiCloud, which allows the existing grid infrastructure to use dynamically provisioned resources on FermiCloud to accommodate increased but dynamic computation demand from scientists in the domains of High Energy Physics (HEP) and other research areas. Cloud infrastructure also allows to increase a private cloud's resource capacity through \"bursting\" by borrowing or renting resources from other community or commercial clouds when needed. This paper introduces a joint project on building a cloud federation to support HEP applications between Fermi National Accelerator Laboratory and Korea Institution of Science and Technology Information, with technical contributions from the Illinois Institute of Technology. In particular, this paper presents two recent accomplishments of the joint project: (a) cloud bursting automation and (b) load balancer. Automatic cloud bursting allows computer resources to be dynamically reconfigured to meet users' demands. The load balance algorithm which the cloud bursting depends on decides when and where new resources need to be allocated. Our preliminary prototyping and experiments have shown promising success, yet, they also have opened new challenges to be studied." ] }
1604.08075
2344482722
Influential users play an important role in online social networks since users tend to have an impact on one other. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site is dependent on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out if a topic will be interesting or not. In this article, we propose association learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified from association rule learning were compared to the results from Degree Centrality and Page Rank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.
Online social networks and social media analysis are popular research areas in contemporary network science. The main focus in social network research is on link prediction @cite_26 and social connection prediction @cite_24 . Different teams around the world also work on: (i) personality prediction for micro blog users @cite_23 , (ii) churn prediction and its influence on the network @cite_32 @cite_4 , (iii) community evolution prediction @cite_5 @cite_27 , (iv) using social media to predict real-world outcomes @cite_18 , (v) predicting friendship intensity @cite_10 @cite_12 , (vi) affiliation recommendations @cite_29 @cite_21 , and (vii) sentiment analysis and opinion mining @cite_1 .
{ "cite_N": [ "@cite_18", "@cite_26", "@cite_4", "@cite_29", "@cite_21", "@cite_1", "@cite_32", "@cite_24", "@cite_27", "@cite_23", "@cite_5", "@cite_10", "@cite_12" ], "mid": [ "2015186536", "2148847267", "202848231", "2114353347", "1986211117", "2007257789", "2138906630", "2733547265", "2071727953", "2484033914", "2292103443", "1930399416", "1991809678" ], "abstract": [ "In recent years, social media has become ubiquitous and important for social networking and content sharing. And yet, the content that is generated from these websites remains largely untapped. In this paper, we demonstrate how social media content can be used to predict real-world outcomes. In particular, we use the chatter from Twitter.com to forecast box-office revenues for movies. We show that a simple model built from the rate at which tweets are created about particular topics can outperform market-based predictors. We further demonstrate how sentiments extracted from Twitter can be utilized to improve the forecasting power of social media.", "Given a snapshot of a social network, can we infer which new interactions among its members are likely to occur in the near future? We formalize this question as the link-prediction problem, and we develop approaches to link prediction based on measures for analyzing the “proximity” of nodes in a network. Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures. © 2007 Wiley Periodicals, Inc.", "The exponential growth of interactions in the networked society becomes gradually a reality. Social networks thrive and expand rapidly across many different interaction platforms delivered by modern telecommunication and internet services. The role and the impact of individuals on network interactions is increasingly important although rather complex and difficult to analyse in the realistic dynamic network environment. The goal of this paper is to look into the key structural changes in social networks: addition and removal of nodes and propose a methodology for temporal modelling of network response to these changes in order to assess the true network impact of the added or removed node. We propose to use the time series of the first order neighbourhood interaction as the key dynamic measure of the impact of individual node on its local network’s interaction. The proposed methodology is supported with some preliminary experimental results carried out on real voice telecommunication network over customer acquisition and churn events and it lays ground for the new network-aware estimation of customer value.", "Online information services have grown too large for users to navigate without the help of automated tools such as collaborative filtering, which makes recommendations to users based on their collective past behavior. While many similarity measures have been proposed and individually evaluated, they have not been evaluated relative to each other in a large real-world environment. We present an extensive empirical comparison of six distinct measures of similarity for recommending online communities to members of the Orkut social network. We determine the usefulness of the different recommendations by actually measuring users' propensity to visit and join recommended communities. 
We also examine how the ordering of recommendations influenced user selection, as well as interesting social issues that arise in recommending communities within a real social network.", "Social network analysis has attracted increasing attention in recent years. In many social networks, besides friendship links among users, the phenomenon of users associating themselves with groups or communities is common. Thus, two networks exist simultaneously: the friendship network among users, and the affiliation network between users and groups. In this article, we tackle the affiliation recommendation problem, where the task is to predict or suggest new affiliations between users and communities, given the current state of the friendship and affiliation networks. More generally, affiliations need not be community affiliations---they can be a user’s taste, so affiliation recommendation algorithms have applications beyond community recommendation. In this article, we show that information from the friendship network can indeed be fruitfully exploited in making affiliation recommendations. Using a simple way of combining these networks, we suggest two models of user-community affinity for the purpose of making affiliation recommendations: one based on graph proximity, and another using latent factors to model users and communities. We explore the affiliation recommendation algorithms suggested by these models and evaluate these algorithms on two real-world networks, Orkut and Youtube. In doing so, we motivate and propose a way of evaluating recommenders, by measuring how good the top 50 recommendations are for the average user, and demonstrate the importance of choosing the right evaluation strategy. The algorithms suggested by the graph proximity model turn out to be the most effective. We also introduce scalable versions of these algorithms, and demonstrate their effectiveness. This use of link prediction techniques for the purpose of affiliation recommendation is, to our knowledge, novel.", "We carry out an empirical analysis to determine characteristics of social media channels.User generated content is \"noisy\" and contains mistakes, emoticons, etc.We evaluate text preprocessing algorithms regarding user generated content.Discussion of improvements to opinion mining process. The emerging research area of opinion mining deals with computational methods in order to find, extract and systematically analyze people's opinions, attitudes and emotions towards certain topics. While providing interesting market research information, the user generated content existing on the Web 2.0 presents numerous challenges regarding systematic analysis, the differences and unique characteristics of the various social media channels being one of them. This article reports on the determination of such particularities, and deduces their impact on text preprocessing and opinion mining algorithms. The effectiveness of different algorithms is evaluated in order to determine their applicability to the various social media channels. Our research shows that text preprocessing algorithms are mandatory for mining opinions on the Web 2.0 and that part of these algorithms are sensitive to errors and mistakes contained in the user generated content.", "Classification is an important topic in data mining research. 
Given a set of data records, each of which belongs to one of a number of predefined classes, the classification problem is concerned with the discovery of classification rules that can allow records with unknown class membership to be correctly classified. Many algorithms have been developed to mine large data sets for classification models and they have been shown to be very effective. However, when it comes to determining the likelihood of each classification made, many of them are not designed with such purpose in mind. For this, they are not readily applicable to such problems as churn prediction. For such an application, the goal is not only to predict whether or not a subscriber would switch from one carrier to another, it is also important that the likelihood of the subscriber's doing so be predicted. The reason for this is that a carrier can then choose to provide a special personalized offer and services to those subscribers who are predicted with higher likelihood to churn. Given its importance, we propose a new data mining algorithm, called data mining by evolutionary learning (DMEL), to handle classification problems of which the accuracy of each predictions made has to be estimated. In performing its tasks, DMEL searches through the possible rule space using an evolutionary approach that has the following characteristics: 1) the evolutionary process begins with the generation of an initial set of first-order rules (i.e., rules with one conjunct condition) using a probabilistic induction technique and based on these rules, rules of higher order (two or more conjuncts) are obtained iteratively; 2) when identifying interesting rules, an objective interestingness measure is used; 3) the fitness of a chromosome is defined in terms of the probability that the attribute values of a record can be correctly determined using the rules it encodes; and 4) the likelihood of predictions (or classifications) made are estimated so that subscribers can be ranked according to their likelihood to churn. Experiments with different data sets showed that DMEL is able to effectively discover interesting classification rules. In particular, it is able to predict churn accurately under different churn rates when applied to real telecom subscriber data.", "Graphical virtual worlds add two new layers to the old question what determines friendship formation. First, it is possible to distinguish between off-line (player) and online (avatar) characteristics. Second, these environments offer new possibilities for studying friendship formation. By tracking friendship requests and their acceptance rate, researchers are able to distinguish between with whom players want to become friends and with whom they actually do become friends. This article examined friendship formation in Timik, a graphical virtual world targeted at Polish teenagers. Homophily, preferential attachment and status were tested as possible underlying mechanisms. Results showed that preferential attachment and status drove invitations: Players wanted to become friends with high-status players. However, high-status players were also more likely to reject offers. Homophily only played a minor role. Players preferred players of the same avatar class and similar age but of the opposite sex. 
Too simil...", "Understanding the dynamics behind group formation and evolution in social networks is considered an instrumental milestone to better describe how individuals gather and form communities, how they enjoy and share the platform contents, how they are driven by their preferences tastes, and how their behaviors are influenced by peers. In this context, the notion of compactness of a social group is particularly relevant. While the literature usually refers to compactness as a measure to merely determine how much members of a group are similar among each other, we argue that the mutual trustworthiness between the members should be considered as an important factor in defining such a term. In fact, trust has profound effects on the dynamics of group formation and their evolution: individuals are more likely to join with and stay in a group if they can trust other group members. In this paper, we propose a quantitative measure of group compactness that takes into account both the similarity and the trustworthiness among users, and we present an algorithm to optimize such a measure. We provide empirical results, obtained from the real social networks EPINIONS and CIAO, that compare our notion of compactness versus the traditional notion of user similarity, clearly proving the advantages of our approach.", "", "Nowadays, sustained development of different social media can be observed worldwide. One of the relevant research domains intensively explored recently is analysis of social communities existing in social media as well as prediction of their future evolution taking into account collected historical evolution chains. These evolution chains proposed in the paper contain group states in the previous time frames and its historical transitions that were identified using one out of two methods: Stable Group Changes Identification (SGCI) and Group Evolution Discovery (GED). Based on the observed evolution chains of various length, structural network features are extracted, validated and selected as well as used to learn classification models. The experimental studies were performed on three real datasets with different profile: DBLP, Facebook and Polish blogosphere. The process of group prediction was analysed with respect to different classifiers as well as various descriptive feature sets extracted from evolution chains of different length. The results revealed that, in general, the longer evolution chains the better predictive abilities of the classification models. However, chains of length 3 to 7 enabled the GED-based method to almost reach its maximum possible prediction quality. For SGCI, this value was at the level of 3–5 last periods.", "Researchers put in tremendous amount of time and effort in order to crawl the information from online social networks. With the variety and the vast amount of information shared on online social networks today, different crawlers have been designed to capture several types of information. We have developed a novel crawler called SINCE. This crawler differs significantly from other existing crawlers in terms of efficiency and crawling depth. We are getting all interactions related to every single post. In addition, are we able to understand interaction dynamics, enabling support for making informed decisions on what content to re-crawl in order to get the most recent snapshot of interactions. Finally we evaluate our crawler against other existing crawlers in terms of completeness and efficiency. 
Over the last years we have crawled public communities on Facebook, resulting in over 500 million unique Facebook users, 50 million posts, 500 million comments and over 6 billion likes.", "Over the past decade Online Social Networks (OSNs) have made it possible for people to stay in touch with people they already know in real life; although, they have not been able to allow users to grow their personal social network. Existence of many successful dating and friend finder applications online today show the need and importance of such applications. In this paper, we describe an application that leverages social interactions in order to suggest people to users that they may find interesting. We allow users to expand their personal social network using their own interactions with other users on public pages and groups in OSNs. We finally evaluate our application by selecting a random set of users and asking them for their honest opinion." ] }
1604.08075
2344482722
Influential users play an important role in online social networks since users tend to have an impact on one other. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site is dependent on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out if a topic will be interesting or not. In this article, we propose association learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified from association rule learning were compared to the results from Degree Centrality and Page Rank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.
Other popular areas of research focus on popularity prediction in social media based on comment mining @cite_35 , predicting information cascades on social media @cite_2 , and predicting patterns of diffusion processes in social networks @cite_14 . An important factor is often the user's role in these different processes. As such, identifying influential users is of interest for understanding and/or affecting the spread of information, e.g., in viral marketing. The ability to identify influential users might also benefit research in other areas of related work (e.g., (ii) or (iii) above).
{ "cite_N": [ "@cite_35", "@cite_14", "@cite_2" ], "mid": [ "2164273822", "1495805406", "2545587473" ], "abstract": [ "Using comment information available from Digg we define a co-participation network between users. We focus on the analysis of this implicit network, and study the behavioral characteristics of users. Using an entropy measure, we infer that users at Digg are not highly focused and participate across a wide range of topics. We also use the comment data and social network derived features to predict the popularity of online content linked at Digg using a classification and regression framework. We show promising results for predicting the popularity scores even after limiting our feature extraction to the first few hours of comment activity that follows a Digg submission.", "Viral campaigns on the Internet may follow variety of models, depending on the content, incentives, personal attitudes of sender and recipient to the content and other factors. Due to the fact that the knowledge of the campaign specifics is essential for the campaign managers, researchers are constantly evaluating models and real-world data. The goal of this article is to present the new knowledge obtained from studying two viral campaigns that took place in a virtual world which followed the branching process. The results show that it is possible to reduce the time needed to estimate the model parameters of the campaign and, moreover, some important aspects of time-generations relationship are presented.", "Twitter is one of the very popular micro-blogging platforms for people to share content and information. Information propagates through the interaction between users with many different ways, such as retweet, mention or reply. With those abilities, Twitter has become one of the medium for advertisers to perform the marketing campaign. Sometimes in their campaign, advertisers hire several buzzers to make the campaign activity running more organically. In this paper we will discuss how to predict information cascades, in term of number interaction over the network that will happen just after buzzer doing its campaign. We formulate the task into a regression problem and define a feature-set, then extract the features on initial interaction data to build a model for prediction. Our experiment shows that Support Vector Regression (SVR) is better than Linear Regression (LR) algorithm. SVR has Mean Absolute Error (MAE) ranged from 1.54 to 33.93. We also found that the optimal setting of initial interaction data is 2 hours time lag which hit the lowest MAE 1.54." ] }
1604.08075
2344482722
Influential users play an important role in online social networks since users tend to have an impact on one other. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site is dependent on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out if a topic will be interesting or not. In this article, we propose association learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified from association rule learning were compared to the results from Degree Centrality and Page Rank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.
Research into detecting influential users on Twitter indicates that, while a large number of followers is common among influential users, predicting which particular user will be influential is unreliable @cite_17 . How influence is defined depends on the social network: on Twitter, influence might be defined by retweets or mentions, while on Digg, the votes a post generates are used to measure influence @cite_15 @cite_31 @cite_28 . While some initial research has used clustering algorithms to identify top users based on influence features, e.g., likes and replies, its evaluation is lacking @cite_36 . Similarly, linear regression has been used to identify influential (categorical) users based on influence features @cite_28 .
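A minimal sketch of the regression-on-influence-features idea (our own illustration with fabricated numbers; the feature choice and target are assumptions, not those of @cite_28 ):

import numpy as np
from sklearn.linear_model import LinearRegression

# Per-user influence features: [followers, retweets, mentions] (made up).
X = np.array([[1200, 45, 30], [90, 2, 1], [5000, 10, 4], [300, 60, 55]])
y = np.array([80.0, 3.0, 20.0, 70.0])  # e.g. votes generated (illustrative)

model = LinearRegression().fit(X, y)
scores = model.predict(X)
ranking = np.argsort(-scores)          # most influential users first
print(ranking, model.coef_)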
{ "cite_N": [ "@cite_31", "@cite_28", "@cite_36", "@cite_15", "@cite_17" ], "mid": [ "1573308170", "2100504047", "1945980178", "1814023381", "1967579779" ], "abstract": [ "Who are the influential people in an online social network? The answer to this question depends not only on the structure of the network, but also on details of the dynamic processes occurring on it. We classify these processes as conservative and non-conservative. A random walk on a network is an example of a conservative dynamic process, while information spread is non-conservative. The influence models used to rank network nodes can be similarly classified, depending on the dynamic process they implicitly emulate. We claim that in order to correctly rank network nodes, the influence model has to match the details of the dynamic process. We study a real-world network on the social news aggregator Digg, which allows users to post and vote for news stories. We empirically define influence as the number of in-network votes a user's post generates. This influence measure, and the resulting ranking, arises entirely from the dynamics of voting on Digg, which represents non-conservative information flow. We then compare predictions of different influence models with this empirical estimate of influence. The results show that non-conservative models are better able to predict influential users on Digg. We find that normalized alpha-centrality metric turns out to be one of the best predictors of influence. We also present a simple algorithm for computing this metric and the associated mathematical formulation and analytical proofs.", "Community Web sites on specific topics are very popular on the Web. Some active Web communities are so huge and diverse that it becomes a challenging issue to efficiently mine meaningful knowledge from the Web communities. In this paper, we develop schemes to discover and browse power users by their activities in online communities. The novelties of this work are two-fold. 1) We define new features to describe user's social activities: statistical features to summarize userspsila activities and relationship-based features to describe interactions between individual users. And, through extensive user study and experiments to compare the performances of the ranking models based on various features, it is shown that the cross reference (CR) feature plays an unique and effective role in discovering power users in post-dominant online communities. 2) Thereafter, we develop a novel interface for effective exploration of power users based on the CR rank. Two schemes are proposed to incrementally navigate a large number of candidate power users with higher CR values: threshold-based navigation and traversal-based one. Experimental results shows that the proposed CR rank can be used for effective browsing of power users: about 70 precision is maintained while retrieving all the power users, which means that we can discover all the power users with relatively small number of false alarms.", "The Service of Facebook Fan Pages is one of the most popular social network platform for various organizations. Companies can interact with their own fans through the Fan Pages. The interactions include sending direct advertisement, gathering user meetings, and promoting electronic word of mouth (eWoM). 
For companies that use social network to gather customers' information, to identify the opinion leaders on the internet is very important, since opinion leaders are active persons and have influence on other potential customers. Based on clustering algorithm, we proposed a system that can find the opinion leaders and test our method on the Facebook Fan Pages. The data set includes 410,045 comments from 173,988 users that we gathered from October 2013 to September 2014. We also use classification methods to evaluate our system and find promising result.", "Directed links in social media could represent anything from intimate friendships to common interests, or even a passion for breaking news or celebrity gossip. Such directed links determine the flow of information and hence indicate a user's influence on others — a concept that is crucial in sociology and viral marketing. In this paper, using a large amount of data collected from Twitter, we present an in-depth comparison of three measures of influence: indegree, retweets, and mentions. Based on these measures, we investigate the dynamics of user influence across topics and time. We make several interesting observations. First, popular users who have high indegree are not necessarily influential in terms of spawning retweets or mentions. Second, most influential users can hold significant influence over a variety of topics. Third, influence is not gained spontaneously or accidentally, but through concerted effort such as limiting tweets to a single topic. We believe that these findings provide new insights for viral marketing and suggest that topological measures such as indegree alone reveals very little about the influence of a user.", "In this paper we investigate the attributes and relative influence of 1.6M Twitter users by tracking 74 million diffusion events that took place on the Twitter follower graph over a two month interval in 2009. Unsurprisingly, we find that the largest cascades tend to be generated by users who have been influential in the past and who have a large number of followers. We also find that URLs that were rated more interesting and or elicited more positive feelings by workers on Mechanical Turk were more likely to spread. In spite of these intuitive results, however, we find that predictions of which particular user or URL will generate large cascades are relatively unreliable. We conclude, therefore, that word-of-mouth diffusion can only be harnessed reliably by targeting large numbers of potential influencers, thereby capturing average effects. Finally, we consider a family of hypothetical marketing strategies, defined by the relative cost of identifying versus compensating potential \"influencers.\" We find that although under some circumstances, the most influential users are also the most cost-effective, under a wide range of plausible assumptions the most cost-effective performance can be realized using \"ordinary influencers\"---individuals who exert average or even less-than-average influence." ] }
1604.08075
2344482722
Influential users play an important role in online social networks since users tend to have an impact on one other. Therefore, the proposed work analyzes users and their behavior in order to identify influential users and predict user participation. Normally, the success of a social media site is dependent on the activity level of the participating users. For both online social networking sites and individual users, it is of interest to find out if a topic will be interesting or not. In this article, we propose association learning to detect relationships between users. In order to verify the findings, several experiments were executed based on social network analysis, in which the most influential users identified from association rule learning were compared to the results from Degree Centrality and Page Rank Centrality. The results clearly indicate that it is possible to identify the most influential users using association rule learning. In addition, the results also indicate a lower execution time compared to state-of-the-art methods.
Association rule learning has previously been used in social network and social media analysis. Initial research applied it to identify influential users and to predict user participation in online social networks @cite_33 .
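To show what rule mining over co-participation data looks like, here is a toy support/confidence computation (our own minimal sketch, not the system of @cite_33 ):

from itertools import combinations

# Each post is represented by the set of users who commented on it (toy data).
posts = [{"u1", "u2", "u3"}, {"u1", "u2"}, {"u2", "u3"}, {"u1", "u2", "u4"}]

def support(userset):
    # Fraction of posts in which all users of the set participated.
    return sum(userset <= p for p in posts) / len(posts)

# Confidence of the rule {a} -> {b}: P(b participates | a participates).
for a, b in combinations(["u1", "u2", "u3"], 2):
    conf = support({a, b}) / support({a})
    print(f"{{{a}}} -> {{{b}}}: conf={conf:.2f}")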
{ "cite_N": [ "@cite_33" ], "mid": [ "2293214727" ], "abstract": [ "Online social networking services like Facebook provides a popular way for users to participate in different communication groups and discuss relevant topics with each other. While users tend to have an impact on each other, it is important to better understand and analyze users behavior in specific online groups. For social networking sites it is of interest to know if a topic will be interesting for users or not. Therefore, this study examines the prediction of user participation in online social networks discussions, in which we argue that it is possible to predict user participation in a public group using common machine learning techniques. We are predicting user participation based on association rules built with respect to user activeness of current posts. In total, we have crawled and extracted 2,443 active users interacting on 610 posts with over 14,117 comments on Facebook. The results show that the proposed approach has a high level of accuracy and the systematic study clearly depicts the possibility to predict user participation in social networking sites." ] }
1604.08381
2745428075
Consider a distributed network on a finite simple graph @math with diameter @math and maximum degree @math , where each node has a phase oscillator revolving on @math with unit speed. Pulse-coupling is a class of distributed time evolution rule for such networked phase oscillators inspired by biological oscillators, which depends only upon event-triggered local pulse communications. In this paper, we propose a novel inhibitory pulse-coupling and prove that arbitrary phase configuration on @math synchronizes by time @math if @math is a tree and @math . We extend this pulse-coupling by letting each oscillator throttle the input according to an auxiliary state variable. We show that the resulting adaptive pulse-coupling synchronizes arbitrary initial configuration on @math by time @math if @math is a tree. As an application, we obtain a universal randomized distributed clock synchronization algorithm, which uses @math memory per node and converges on any @math with expected worst case running time of @math .
While methods based on a concentration condition can be applied once the system is nearly synchronized, or to maintain synchrony against weak fluctuations, a more fundamental question must be addressed: does the system synchronize from an arbitrary initial configuration? This is what Q1 focuses on, and it has been answered for some classes of pulse-couplings, mainly on complete (all-to-all) graphs or cycles. In their seminal work, Mirollo and Strogatz @cite_15 showed that an excitatory pulse-coupling on complete graphs synchronizes almost all initial configurations. A similar result was derived for inhibitory pulse-couplings by Klinglmayr and Bettstetter @cite_1 . For PCOs on cycle graphs, Núñez, Wang, and Doyle @cite_11 addressed Q1. More recently, these authors and Teel @cite_40 studied Q1 for PCOs on general topologies assuming a global pacemaker. In our main theorems, we give an answer to Q1 in the case of the 4-coupling and the adaptive 4-coupling on tree networks.
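For intuition, the following self-contained toy simulation implements an inhibitory pulse-coupling on a small tree, in the spirit of (but not identical to) the couplings discussed above; the phase-response factor eps and all other parameters are arbitrary choices:

import numpy as np

def simulate_pco(adj, phases, eps=0.3, steps=2000, dt=1e-3):
    """Toy inhibitory pulse-coupling: when a node's phase reaches 1 it
    fires and resets, and each neighbour's phase is pulled back by a
    multiplicative factor (1 - eps)."""
    phases = np.array(phases, dtype=float)
    for _ in range(steps):
        phases += dt                          # unit-speed phase rotation
        for i in np.flatnonzero(phases >= 1.0):
            phases[i] = 0.0                   # node i fires and resets
            for j in adj[i]:                  # neighbours receive the pulse
                if phases[j] < 1.0:
                    phases[j] *= 1.0 - eps    # inhibitory phase response
    return phases

path = {0: [1], 1: [0, 2], 2: [1]}            # a 3-node tree (path graph)
print(simulate_pco(path, [0.1, 0.5, 0.9]))    # phases drawn together over time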
{ "cite_N": [ "@cite_40", "@cite_15", "@cite_1", "@cite_11" ], "mid": [ "2218979699", "2154953441", "2058455332", "2030115949" ], "abstract": [ "Abstract Pulse-coupled oscillators (PCOs) are limit cycle oscillators coupled by exchanging pulses at discrete time instants. Their importance in biology and engineering has motivated numerous studies aiming to understand the basic synchronization properties of a network of PCOs. In this work, we study synchronization of PCOs subject to a global pacemaker (or global cue) and local interactions between slave oscillators. We characterize solutions and give synchronization conditions using the phase response curve (PRC) as the design element, which is restricted to be of the delay type in the first half of the cycle, interval ( 0 , π ) , and of the advance type in the second half of the cycle, interval ( π , 2 π ) . It is shown that global synchronization is feasible when using an advance-delay PRC if the influence of the global cue is strong enough. Numerical examples are provided to illustrate the analytical findings.", "A simple model for synchronous firing of biological oscillators based on Peskin's model of the cardiac pacemaker (Mathematical aspects of heart physiology, Courant Institute of Mathematical Sciences, New York University, New York, 1975, pp. 268-278) is studied. The model consists of a population of identical integrate-and-fire oscillators. The coupling between oscillators is pulsatile: when a given oscillator fires, it pulls the others up by a fixed amount, or brings them to the firing threshold, whichever is less. The main result is that for almost all initial conditions, the population evolves to a state in which all the oscillators are firing synchronously. The relationship between the model and real communities of biological oscillators is discussed; examples include populations of synchronously flashing fireflies, crickets that chirp in unison, electrically synchronous pacemaker cells, and groups of women whose menstrual cycles become mutually synchronized.", "Solutions for time synchronization based on coupled oscillators operate in a self-organizing and adaptive manner and can be applied to various types of dynamic networks. The basic idea was inspired by swarms of fireflies, whose flashing dynamics shows an emergent behavior. This article introduces such a synchronization technique whose main components are “inhibitory coupling” and “self-adjustment.” Based on this new technique, a number of contributions are made. First, we prove that inhibitory coupling can lead to perfect synchrony independent of initial conditions for delay-free environments and homogeneous oscillators. Second, relaxing the assumptions to systems with delays and different phase rates, we prove that such systems synchronize up to a certain precision bound. We derive this bound assuming inhomogeneous delays and show by simulations that it gives a good estimate in strongly-coupled systems. Third, we show that inhibitory coupling with self-adjustment quickly leads to synchrony with a precision comparable to that of excitatory coupling. Fourth, we analyze the robustness against faulty members performing incorrect coupling. While the specific precision-loss encountered by such disturbances depends on system parameters, the system always regains synchrony for the investigated scenarios.", "The importance of pulse-coupled oscillators (PCOs) in biology and engineering has motivated research to understand basic properties of PCO networks. 
Despite the large body of work addressing PCOs, a global synchronization result for networks that are more general than all-to-all connected is still unavailable. In this paper we address global synchronization of PCO networks described by cycle graphs. It is shown for the bidirectional cycle case that as the number of oscillators in the cycle grows, the coupling strength must be increased in order to guarantee synchronization for arbitrary initial conditions. For the unidirectional cycle case, the strongest coupling cannot ensure global synchronization yet a refractory period in the phase response curve is sufficient to enable global synchronization. Analytical findings are confirmed by numerical simulations." ] }
1604.08381
2745428075
Consider a distributed network on a finite simple graph @math with diameter @math and maximum degree @math , where each node has a phase oscillator revolving on @math with unit speed. Pulse-coupling is a class of distributed time evolution rule for such networked phase oscillators inspired by biological oscillators, which depends only upon event-triggered local pulse communications. In this paper, we propose a novel inhibitory pulse-coupling and prove that arbitrary phase configuration on @math synchronizes by time @math if @math is a tree and @math . We extend this pulse-coupling by letting each oscillator throttle the input according to an auxiliary state variable. We show that the resulting adaptive pulse-coupling synchronizes arbitrary initial configuration on @math by time @math if @math is a tree. As an application, we obtain a universal randomized distributed clock synchronization algorithm, which uses @math memory per node and converges on any @math with expected worst case running time of @math .
The question Q3 is closely related to the concept of self-stabilization in theoretical computer science. A distributed algorithm is said to be self-stabilizing if it recovers desired system configurations from an arbitrary system configuration. This notion was first proposed by Dijkstra @cite_13 as a paradigm for designing distributed algorithms that are robust under arbitrary transient faults. For convenience in the following discussions, we denote by @math and @math the diameter and maximum degree of the underlying network, respectively.
{ "cite_N": [ "@cite_13" ], "mid": [ "2170774893" ], "abstract": [ "The coordinated motion of multi-agent systems and oscillator synchronization are two important examples of networked control systems. In this technical note, we consider what effect multiple, non-commensurate (heterogeneous) communication delays can have on the consensus properties of large-scale multi-agent systems endowed with nonlinear dynamics. We show that the structure of the delayed dynamics allows functionality to be retained for arbitrary communication delays, even for switching topologies under certain connectivity conditions. The results are extended to the related problem of oscillator synchronization." ] }
1604.08381
2745428075
Consider a distributed network on a finite simple graph @math with diameter @math and maximum degree @math , where each node has a phase oscillator revolving on @math with unit speed. Pulse-coupling is a class of distributed time evolution rule for such networked phase oscillators inspired by biological oscillators, which depends only upon event-triggered local pulse communications. In this paper, we propose a novel inhibitory pulse-coupling and prove that arbitrary phase configuration on @math synchronizes by time @math if @math is a tree and @math . We extend this pulse-coupling by letting each oscillator throttle the input according to an auxiliary state variable. We show that the resulting adaptive pulse-coupling synchronizes arbitrary initial configuration on @math by time @math if @math is a tree. As an application, we obtain a universal randomized distributed clock synchronization algorithm, which uses @math memory per node and converges on any @math with expected worst case running time of @math .
A popular approach to designing a clock synchronization algorithm solving Q3 is to use an unbounded memory for each node to 'unravel' the cyclic phase space @math and construct an ever-increasing clock counter. As in the above-mentioned technique assuming a concentration condition, this effectively gives a global total ordering between local times. Then all nodes can tune toward the locally maximal time, for instance, so that the global maximum propagates and subsumes all the other nodes in @math time. This idea dates back to Lamport @cite_5 , and similar techniques have been used in different contexts: for synchronous systems @cite_33 and for asynchronous systems @cite_43 . The biggest advantages of such an approach include independence of the network topology and an optimal time complexity of @math . However, these algorithms suffer when it comes to memory efficiency, and assuming unbounded memory on each node is far from practical, especially in the presence of faulty nodes @cite_45 .
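A minimal sketch of this idea (a hypothetical illustration, not the algorithms of the cited works): every node repeatedly adopts the largest clock value in its closed neighborhood and increments it, so the global maximum propagates to all nodes within a number of synchronous rounds equal to the network diameter. The graph, the initial clock values, and the use of the networkx library are assumptions made only for this example.

```python
# Hypothetical illustration of "tune toward the locally maximal time":
# unbounded integer clocks, synchronous rounds, max over the closed
# neighborhood plus one.  After diam(G) rounds all clocks agree.
import networkx as nx

def max_propagation_sync(G, initial_clocks, rounds):
    clocks = dict(initial_clocks)
    for _ in range(rounds):
        clocks = {
            v: max(clocks[u] for u in [v, *G.neighbors(v)]) + 1
            for v in G.nodes
        }
    return clocks

if __name__ == "__main__":
    G = nx.path_graph(6)                       # a tree with diameter 5
    init = {v: (v * 7) % 13 for v in G.nodes}  # arbitrary initial clocks
    final = max_propagation_sync(G, init, rounds=nx.diameter(G))
    assert len(set(final.values())) == 1       # all nodes agree after D rounds
    print(final)
```

The counters in this sketch keep growing without bound, which is exactly the memory inefficiency pointed out above.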
{ "cite_N": [ "@cite_5", "@cite_45", "@cite_33", "@cite_43" ], "mid": [ "1973501242", "2139659159", "", "1975533944" ], "abstract": [ "The concept of one event happening before another in a distributed system is examined, and is shown to define a partial ordering of the events. A distributed algorithm is given for synchronizing a system of logical clocks which can be used to totally order the events. The use of the total ordering is illustrated with a method for solving synchronization problems. The algorithm is then specialized for synchronizing physical clocks, and a bound is derived on how far out of synchrony the clocks can become.", "Distributed systems: an algorithmic approach is an important addition to the distributed computing literature. The book offers a broad overview of important distributed computing topics, and, where relevant, a touch of networking topics as well.", "", "A synchronizer with a phase counter (sometimes called asynchronous phase clock) is an asynchronous distributed algorithm, where each node maintains a local \"pulse counter\" that simulates the global clock in a synchronous network. In this paper, we present a time-optimal self-stabilizing scheme for such a synchronizer, assuming unbounded counters. We give a simple rule by which each node can compute its pulse number as a function of its neighbors' pulse numbers. We also show that some of the popular correction functions for phase clock synchronization are not self-stabilizing in asynchronous networks. Using our rule, the counters stabilize in time bounded by the diameter of the network, without invoking global operations. We argue that the use of unbounded counters can be justified by the availability of memory for counters that are large enough to be practically unbounded and by the existence of reset protocols that can be used to restart the counters in some rare cases where faults will make this necessary." ] }
1604.08381
2745428075
Consider a distributed network on a finite simple graph @math with diameter @math and maximum degree @math , where each node has a phase oscillator revolving on @math with unit speed. Pulse-coupling is a class of distributed time evolution rule for such networked phase oscillators inspired by biological oscillators, which depends only upon event-triggered local pulse communications. In this paper, we propose a novel inhibitory pulse-coupling and prove that arbitrary phase configuration on @math synchronizes by time @math if @math is a tree and @math . We extend this pulse-coupling by letting each oscillator throttle the input according to an auxiliary state variable. We show that the resulting adaptive pulse-coupling synchronizes arbitrary initial configuration on @math by time @math if @math is a tree. As an application, we obtain a universal randomized distributed clock synchronization algorithm, which uses @math memory per node and converges on any @math with expected worst case running time of @math .
For synchronous systems with discrete phase clocks taking values from @math , a number of algorithms that are self-stabilizing on trees with constant memory per node are known: e.g., for @math by Herman and Ghosh @cite_36 , and for all odd @math by @cite_39 . Upper bounds of @math on the convergence time are known for such algorithms. More recently, the author proposed a class of @math -state inhibitory pulse-couplings called the firefly cellular automata (FCAs) @cite_12 . In that reference and in @cite_23 , we showed that the @math -color FCA is self-stabilizing on finite paths for arbitrary @math , and on finite trees if and only if @math . The 4-coupling we introduced in this work is a continuous-state, asynchronous-update generalization of the 4-color FCA. That is, if the initial phases for the 4-coupling are discretized on the 1/4 grid points on @math , i.e., @math for all @math , then the trajectory @math follows the 4-color FCA dynamics.
{ "cite_N": [ "@cite_36", "@cite_23", "@cite_12", "@cite_39" ], "mid": [ "2059435667", "2528943279", "", "1584976403" ], "abstract": [ "Abstract This note considers the problem of synchronizing a network of digital clocks: the clocks all run at the same rate, however, an initial state of the network may place the clocks in arbitrary phases. The problem is to devise a protocol to advance or retard clocks so that eventually all clocks are in phase. The solutions presented in this note are protocols in which all processes are identical and use a constant amount of space per process. One solution is a deterministic protocol for a tree network; another solution is a probabilistic protocol for a network of arbitrary topology.", "We study a one-parameter family of discrete dynamical systems called the @math -color firefly cellular automata (FCAs), which were introduced recently by the author. At each discrete time @math , each vertex in a graph has a state in @math , and a special state @math is designated as the blinking' state. At step @math , simultaneously for all vertices, the state of a vertex increments from @math to @math unless @math and at least one of its neighbors is in the state @math . A central question about this system is that on what class of network topologies synchrony is guaranteed to emerge. In a previous work, we have shown that for @math , every @math -coloring on a finite tree synchronizes iff the maximum degree is less than @math , and asked whether this behavior holds for all @math . In this paper, we answer the question positively for @math and negatively for all @math by constructing counterexamples on trees with maximum degree at most @math .", "", "We address the self-stabilizing unison problem in tree networks. We propose two self-stabilizing unison protocols without any reset correcting system. The first one, called Protocol SU_Min, being scheduled by a synchronous daemon, is self-stabilizing to synchronous unison in at most D steps, where D is the diameter of the network. The second one, Protocol WU_Min, being scheduled by an asynchronous daemon, is self-stabilizing to asynchronous unison in at most D rounds. Moreover, both are optimal in space. The amount of required space is independent of any local or global information on the tree. Furthermore, they work on dynamic trees networks, in which the topology may change during the execution." ] }
1604.08191
2345044840
Single-peakedness is one of the most important and well-known domain restrictions on preferences. The computational study of single-peaked electorates has largely been restricted to elections with tie-free votes, and recent work that studies the computational complexity of manipulative attacks for single-peaked elections for votes with ties has been restricted to nonstandard models of single-peaked preferences for top orders. We study the computational complexity of manipulation for votes with ties for the standard model of single-peaked preferences and for single-plateaued preferences. We show that these models avoid the anomalous complexity behavior exhibited by the other models. We also state a surprising result on the relation between the societal axis and the complexity of manipulation for single-peaked preferences.
Since single-peakedness is a strong restriction on preferences, in real-world scenarios it is likely that voters only have nearly single-peaked preferences, where nearness is measured by different distances to a single-peaked profile. Both the computational complexity of different manipulative attacks @cite_9 @cite_24 and the problem of detecting when a given profile is nearly single-peaked @cite_11 @cite_20 have been considered.
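As background for what "nearly" relaxes, the following minimal sketch (a hypothetical illustration, not an algorithm from the cited works) checks whether a single strict ranking is single-peaked with respect to a given societal axis, using the characterization that every prefix of the ranking must occupy a contiguous interval of the axis around the peak.

```python
# Hypothetical illustration: exact single-peakedness of one vote with respect
# to a known societal axis.  Nearly single-peaked profiles relax this test,
# e.g., by tolerating a few "maverick" votes that fail it.

def is_single_peaked(vote, axis):
    """vote: candidates from most to least preferred; axis: left-to-right order."""
    pos = {c: i for i, c in enumerate(axis)}
    lo = hi = pos[vote[0]]                 # position of the peak
    for c in vote[1:]:
        p = pos[c]
        if p == lo - 1:                    # extend the interval to the left
            lo = p
        elif p == hi + 1:                  # or to the right
            hi = p
        else:                              # gap on the axis -> not single-peaked
            return False
    return True

if __name__ == "__main__":
    axis = ["a", "b", "c", "d"]
    print(is_single_peaked(["b", "c", "a", "d"], axis))   # True
    print(is_single_peaked(["b", "d", "a", "c"], axis))   # False
```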
{ "cite_N": [ "@cite_24", "@cite_9", "@cite_20", "@cite_11" ], "mid": [ "", "1991113796", "108432896", "2153793916" ], "abstract": [ "", "Many electoral control and manipulation problems-which we will refer to in general as ''manipulative actions'' problems-are NP-hard in the general case. It has recently been noted that many of these problems fall into polynomial time if the electorate is single-peaked, i.e., is polarized along some axis issue. However, real-world electorates are not truly single-peaked. There are usually some mavericks, and so real-world electorates tend merely to be nearly single-peaked. This paper studies the complexity of manipulative-action algorithms for elections over nearly single-peaked electorates. We do this for many notions of nearness and for a broad range of election systems. We provide instances where even one maverick jumps the manipulative-action complexity up to NP-hardness, but we also provide many instances where some number of mavericks can be tolerated without increasing the manipulative-action complexity.", "Uncertainty arises in preference aggregation in several ways. There may, for example, be uncertainty in the votes or the voting rule. Such uncertainty can introduce computational complexity in determining which candidate or candidates can or must win the election. In this paper, we survey recent work in this area and give some new results. We argue, for example, that the set of possible winners can be computationally harder to compute than the necessary winner. As a second example, we show that, even if the unknown votes are assumed to be single-peaked, it remains computationally hard to compute the possible and necessary winners, or to manipulate the election.", "Manipulation, bribery, and control are well-studied ways of changing the outcome of an election. Many voting systems are, in the general case, computationally resistant to some of these manipulative actions. However when restricted to single-peaked electorates, these systems suddenly become easy to manipulate. Recently, Faliszewski, Hemaspaandra, and Hemaspaandra (2011b) studied the complexity of dishonest behavior in nearly single-peaked electorates. These are electorates that are not single-peaked but close to it according to some distance measure. In this paper we introduce several new distance measures regarding single-peakedness. We prove that determining whether a given profile is nearly single-peaked is NP-complete in many cases. For one case we present a polynomial-time algorithm. Furthermore, we explore the relations between several notions of nearly single-peakedness." ] }
1604.07547
2343710432
Can we predict the winner of Miss Universe after watching how they stride down the catwalk during the evening gown competition? Fashion gurus say they can! In our work, we study this question from the perspective of computer vision. In particular, we want to understand whether existing computer vision approaches can be used to automatically extract the qualities exhibited by the Miss Universe winners during their catwalk. This study can pave the way towards new vision-based applications for the fashion industry. To this end, we propose a novel video dataset, called the Miss Universe dataset, comprising 10 years of the evening gown competition selected between 1996–2010. We further propose two ranking-related problems: (1) Miss Universe Listwise Ranking and (2) Miss Universe Pairwise Ranking. In addition, we also develop an approach that simultaneously addresses the two proposed problems. To describe the videos we employ the recently proposed Stacked Fisher Vectors in conjunction with robust local spatio-temporal features. From our evaluation we found that although the addressed problems are extremely challenging, the proposed system is able to rank the winner in the top 3 best predicted scores for 5 out of 10 Miss Universe competitions.
Gait and walk assessments have been investigated for elderly people and people with neurological disorders @cite_7 @cite_17 . In @cite_7 , two web-cams are used to extract gait parameters including walking speed, step time, and step length; these parameters feed a fall-risk assessment tool for home monitoring of older adults. For rehabilitation and treatment of patients with neurological disorders, automatic gait analysis with a Microsoft Kinect sensor is used to quantify the gait abnormality of patients with multiple sclerosis @cite_17 . A gait analysis system consisting of two camcorders located on the right and left sides of a treadmill is employed in @cite_18 ; it fully reconstructs the skeleton model and demonstrates good accuracy compared to Kinect sensors. Although these are related problems, Kinect sensors or multi-camera setups are simply not available for our Miss Universe catwalk analysis. The assessment of the quality of actions using only visual information is still in early development. A recent work that predicts the expert judges' scores for the actions of diving and figure skating in the Olympic Games is presented in @cite_33 . The concept behind the score prediction is to learn how to assess the quality of actions in videos.
{ "cite_N": [ "@cite_18", "@cite_33", "@cite_7", "@cite_17" ], "mid": [ "28660007", "2260521078", "2161938769", "" ], "abstract": [ "Gait analysis is a domain of interest in clinical medical practice, both for neurological and non-neurological abnormal troubles. Marker-based systems are the most favored methods of human motion assessment and gait analysis, however, these systems require specific equipment and expertise and are cumbersome, costly and difficult to use. In this paper we compare two low-cost and marker-less systems that are: (1) A Kinect in front of a treadmill and (2) a set of two camcorders on the sides of the treadmill, used to reconstruct the skeleton of a subject during walk. We validated our method with ground truth data obtained with markers manually placed on the subject’s body. Finally, we present an application for asymmetric gait recognition. Our results on different subjects showed that, compared to the Kinect, the two-camcorder approach was very efficient and provided accurate measurements for gait assessment.", "While recent advances in computer vision have provided reliable methods to recognize actions in both images and videos, the problem of assessing how well people perform actions has been largely unexplored in computer vision. Since methods for assessing action quality have many real-world applications in healthcare, sports, and video retrieval, we believe the computer vision community should begin to tackle this challenging problem. To spur progress, we introduce a learning-based framework that takes steps towards assessing how well people perform actions in videos. Our approach works by training a regression model from spatiotemporal pose features to scores obtained from expert judges. Moreover, our approach can provide interpretable feedback on how people can improve their action. We evaluate our method on a new Olympic sports dataset, and our experiments suggest our framework is able to rank the athletes more accurately than a non-expert human. While promising, our method is still a long way to rivaling the performance of expert judges, indicating that there is significant opportunity in computer vision research to improve on this difficult yet important task.", "In this paper, we propose a webcam-based system for in-home gait assessment of older adults. A methodology has been developed to extract gait parameters including walking speed, step time, and step length from a 3-D voxel reconstruction, which is built from two calibrated webcam views. The gait parameters are validated with a GAITRite mat and a Vicon motion capture system in the laboratory with 13 participants and 44 tests, and again with GAITRite for 8 older adults in senior housing. Excellent agreement with intraclass correlation coefficients of 0.99 and repeatability coefficients between 0.7 and 6.6 was found for walking speed, step time, and step length given the limitation of frame rate and voxel resolution. The system was further tested with ten seniors in a scripted scenario representing everyday activities in an unstructured environment. The system results demonstrate the capability of being used as a daily gait assessment tool for fall risk assessment and other medical applications. Furthermore, we found that residents displayed different gait patterns during their clinical GAITRite tests compared to the realistic scenario, namely a mean increase of 21 in walking speed, a mean decrease of 12 in step time, and a mean increase of 6 in step length. 
These findings provide support for continuous gait assessment in the home for capturing habitual gait.", "" ] }
1604.07547
2343710432
Can we predict the winner of Miss Universe after watching how they stride down the catwalk during the evening gown competition? Fashion gurus say they can! In our work, we study this question from the perspective of computer vision. In particular, we want to understand whether existing computer vision approaches can be used to automatically extract the qualities exhibited by the Miss Universe winners during their catwalk. This study can pave the way towards new vision-based applications for the fashion industry. To this end, we propose a novel video dataset, called the Miss Universe dataset, comprising 10 years of the evening gown competition selected between 1996–2010. We further propose two ranking-related problems: (1) Miss Universe Listwise Ranking and (2) Miss Universe Pairwise Ranking. In addition, we also develop an approach that simultaneously addresses the two proposed problems. To describe the videos we employ the recently proposed Stacked Fisher Vectors in conjunction with robust local spatio-temporal features. From our evaluation we found that although the addressed problems are extremely challenging, the proposed system is able to rank the winner in the top 3 best predicted scores for 5 out of 10 Miss Universe competitions.
Catwalk analysis can also be related to fine-grained action analysis. Fine-grained action analysis has recently been investigated for action recognition @cite_10 @cite_25 @cite_31 @cite_2 @cite_1 , where it is important to recognise small differences between activities such as cutting and peeling in food preparation. This is in contrast to traditional action recognition, where the goal is to recognise full-body activities such as walking or jumping.
{ "cite_N": [ "@cite_1", "@cite_2", "@cite_31", "@cite_10", "@cite_25" ], "mid": [ "2019660985", "1511568086", "2396622734", "", "1744759976" ], "abstract": [ "While activity recognition is a current focus of research the challenging problem of fine-grained activity recognition is largely overlooked. We thus propose a novel database of 65 cooking activities, continuously recorded in a realistic setting. Activities are distinguished by fine-grained body motions that have low inter-class variability and high intra-class variability due to diverse subjects and ingredients. We benchmark two approaches on our dataset, one based on articulated pose tracks and the second using holistic video features. While the holistic approach outperforms the pose-based approach, our evaluation suggests that fine-grained activities are more difficult to detect and the body model can help in those cases. Providing high-resolution videos as well as an intermediate pose representation we hope to foster research in fine-grained activity recognition.", "Holistic methods based on dense trajectories [29, 30] are currently the de facto standard for recognition of human activities in video. Whether holistic representations will sustain or will be superseded by higher level video encoding in terms of body pose and motion is the subject of an ongoing debate [12]. In this paper we aim to clarify the underlying factors responsible for good performance of holistic and pose-based representations. To that end we build on our recent dataset [2] leveraging the existing taxonomy of human activities. This dataset includes (24,920 ) video snippets covering (410 ) human activities in total. Our analysis reveals that holistic and pose-based methods are highly complementary, and their performance varies significantly depending on the activity. We find that holistic methods are mostly affected by the number and speed of trajectories, whereas pose-based methods are mostly influenced by viewpoint of the person. We observe striking performance differences across activities: for certain activities results with pose-based features are more than twice as accurate compared to holistic features, and vice versa. The best performing approach in our comparison is based on the combination of holistic and pose-based approaches, which again underlines their complementarity.", "In this paper we propose a novel feature descriptor Extended Co-occurrence HOG (ECoHOG) and integrate it with dense point trajectories demonstrating its usefulness in fine grained activity recognition. This feature is inspired by original Co-occurrence HOG (CoHOG) that is based on histograms of occurrences of pairs of image gradients in the image. Instead relying only on pure histograms we introduce a sum of gradient magnitudes of co-occurring pairs of image gradients in the image. This results in giving the importance to the object boundaries and straightening the difference between the moving foreground and static background. We also couple ECoHOG with dense point trajectories extracted using optical flow from video sequences and demonstrate that they are extremely well suited for fine grained activity recognition. Using our feature we outperform state of the art methods in this task and provide extensive quantitative evaluation.", "", "This work targets human action recognition in video. While recent methods typically represent actions by statistics of local video features, here we argue for the importance of a representation derived from human pose. 
To this end we propose a new Pose-based Convolutional Neural Network descriptor (P-CNN) for action recognition. The descriptor aggregates motion and appearance information along tracks of human body parts. We investigate different schemes of temporal aggregation and experiment with P-CNN features obtained both for automatically estimated and manually annotated human poses. We evaluate our method on the recent and challenging JHMDB and MPII Cooking datasets. For both datasets our method shows consistent improvement over the state of the art." ] }
1604.07547
2343710432
Can we predict the winner of Miss Universe after watching how they stride down the catwalk during the evening gown competition? Fashion gurus say they can! In our work, we study this question from the perspective of computer vision. In particular, we want to understand whether existing computer vision approaches can be used to automatically extract the qualities exhibited by the Miss Universe winners during their catwalk. This study can pave the way towards new vision-based applications for the fashion industry. To this end, we propose a novel video dataset, called the Miss Universe dataset, comprising 10 years of the evening gown competition selected between 1996–2010. We further propose two ranking-related problems: (1) Miss Universe Listwise Ranking and (2) Miss Universe Pairwise Ranking. In addition, we also develop an approach that simultaneously addresses the two proposed problems. To describe the videos we employ the recently proposed Stacked Fisher Vectors in conjunction with robust local spatio-temporal features. From our evaluation we found that although the addressed problems are extremely challenging, the proposed system is able to rank the winner in the top 3 best predicted scores for 5 out of 10 Miss Universe competitions.
Improved dense trajectory (IDT) features in conjunction with the Fisher Vector representation have recently shown outstanding performance for the action recognition problem @cite_12 . This approach densely samples feature points at several spatial scales in each frame and tracks them using optical flow. For each trajectory the following descriptors are computed: Trajectory, Histogram of Gradients, Histogram of Optical Flow, and Motion Boundary Histogram. Finally, all descriptors are concatenated and normalised. IDT features are also popular for fine-grained action recognition @cite_31 @cite_15 @cite_32 . However, some disadvantages have been reported: IDT generates irrelevant trajectories that are eventually discarded, and processing such trajectories is time consuming, which makes the approach less suitable for realistic environments @cite_14 @cite_20 .
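The following is a minimal sketch (an assumption-laden toy, not the actual IDT implementation of @cite_12 ) of the dense-trajectory idea: points are sampled on a regular grid, propagated frame to frame with dense optical flow, and nearly static trajectories are discarded. OpenCV's Farneback optical flow and the grid step and track length values are assumptions made for this illustration only; descriptor computation and multi-scale sampling are omitted.

```python
# Toy sketch of dense trajectories (not the full IDT pipeline): sample points
# on a grid, follow them with dense optical flow, and drop static tracks.
import cv2
import numpy as np

def dense_trajectories(frames, step=10, track_len=15):
    """frames: list of same-sized grayscale uint8 images."""
    h, w = frames[0].shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]
    tracks = [[(float(x), float(y))] for x, y in zip(xs.ravel(), ys.ravel())]
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        for tr in tracks:
            if len(tr) >= track_len:
                continue
            x, y = tr[-1]
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < w and 0 <= yi < h:
                dx, dy = flow[yi, xi]
                tr.append((x + float(dx), y + float(dy)))
    # Discard nearly static ("irrelevant") trajectories.
    return [tr for tr in tracks
            if np.hypot(tr[-1][0] - tr[0][0], tr[-1][1] - tr[0][1]) > 1.0]
```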
{ "cite_N": [ "@cite_14", "@cite_15", "@cite_32", "@cite_31", "@cite_20", "@cite_12" ], "mid": [ "855623855", "", "", "2396622734", "2015822910", "2105101328" ], "abstract": [ "Abstract Recognizing human actions in video sequences has been a challenging problem in the last few years due to its real-world applications. A lot of action representation approaches have been proposed to improve the action recognition performance. Despite the popularity of local features-based approaches together with “Bag-of-Words” model for action representation, it fails to capture adequate spatial or temporal relationships. In an attempt to overcome this problem, a trajectory-based local representation approaches have been proposed to capture the temporal information. This paper introduces an improvement of trajectory-based human action recognition approaches to capture discriminative temporal relationships. In our approach, we extract trajectories by tracking the detected spatio-temporal interest points named “cuboid features” with matching its SIFT descriptors over the consecutive frames. We, also, propose a linking and exploring method to obtain efficient trajectories for motion representation in realistic conditions. Then the volumes around the trajectories’ points are described to represent human actions based on the Bag-of-Words (BOW) model. Finally, a support vector machine is used to classify human actions. The effectiveness of the proposed approach was evaluated on three popular datasets (KTH, Weizmann and UCF sports). Experimental results showed that the proposed approach yields considerable performance improvement over the state-of-the-art approaches.", "", "", "In this paper we propose a novel feature descriptor Extended Co-occurrence HOG (ECoHOG) and integrate it with dense point trajectories demonstrating its usefulness in fine grained activity recognition. This feature is inspired by original Co-occurrence HOG (CoHOG) that is based on histograms of occurrences of pairs of image gradients in the image. Instead relying only on pure histograms we introduce a sum of gradient magnitudes of co-occurring pairs of image gradients in the image. This results in giving the importance to the object boundaries and straightening the difference between the moving foreground and static background. We also couple ECoHOG with dense point trajectories extracted using optical flow from video sequences and demonstrate that they are extremely well suited for fine grained activity recognition. Using our feature we outperform state of the art methods in this task and provide extensive quantitative evaluation.", "In this paper, we propose the fast dense trajectories algorithm for human action recognition. Dense trajectories are robust to fast irregular motions and outperform other state-of-the-art descriptors such as KLT tracker or SIFT descriptors. However, the use of dense trajectories is time consuming. To improve the efficiency, we extract feature trajectories in the ROI rather than in the whole frames, and we use the temporal pyramids to achieve adaptable mechanism for different action speed. We evaluate the method on the dataset of Huawei 3DLife -- 3D human reconstruction and action recognition Grand Challenge in ACM Multimedia 2013. 
Experimental results show a significant improvement over the dense trajectories descriptor in real-time, and adaptable to different speed.", "Recently dense trajectories were shown to be an efficient video representation for action recognition and achieved state-of-the-art results on a variety of datasets. This paper improves their performance by taking into account camera motion to correct them. To estimate camera motion, we match feature points between frames using SURF descriptors and dense optical flow, which are shown to be complementary. These matches are, then, used to robustly estimate a homography with RANSAC. Human motion is in general different from camera motion and generates inconsistent matches. To improve the estimation, a human detector is employed to remove these matches. Given the estimated camera motion, we remove trajectories consistent with it. We also use this estimation to cancel out camera motion from the optical flow. This significantly improves motion-based descriptors, such as HOF and MBH. Experimental results on four challenging action datasets (i.e., Hollywood2, HMDB51, Olympic Sports and UCF50) significantly outperform the current state of the art." ] }
1604.07547
2343710432
Can we predict the winner of Miss Universe after watching how they stride down the catwalk during the evening gown competition? Fashion gurus say they can! In our work, we study this question from the perspective of computer vision. In particular, we want to understand whether existing computer vision approaches can be used to automatically extract the qualities exhibited by the Miss Universe winners during their catwalk. This study can pave the way towards new vision-based applications for the fashion industry. To this end, we propose a novel video dataset, called the Miss Universe dataset, comprising 10 years of the evening gown competition selected between 1996–2010. We further propose two ranking-related problems: (1) Miss Universe Listwise Ranking and (2) Miss Universe Pairwise Ranking. In addition, we also develop an approach that simultaneously addresses the two proposed problems. To describe the videos we employ the recently proposed Stacked Fisher Vectors in conjunction with robust local spatio-temporal features. From our evaluation we found that although the addressed problems are extremely challenging, the proposed system is able to rank the winner in the top 3 best predicted scores for 5 out of 10 Miss Universe competitions.
Gradients have been used as a relatively simple yet effective video representation @cite_5 . Each pixel in the gradient image helps extract relevant information, e.g., the edges of a subject. Gradients can be computed at every spatio-temporal location @math and in any direction in a video. Lastly, since the task of action recognition is based on an ordered sequence of frames, optical flow can be used as an efficient way of capturing local dynamics and motion patterns in a scene @cite_16 .
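As a small illustration of the first point (a hypothetical sketch; the array shape and values are made up for this example), spatio-temporal gradients of a video volume can be obtained with central differences at every location:

```python
# Hypothetical example: spatio-temporal gradients of a (T, H, W) video volume.
import numpy as np

video = np.random.rand(30, 120, 160).astype(np.float32)  # toy clip: 30 frames
g_t, g_y, g_x = np.gradient(video)                        # derivatives along t, y, x
edge_strength = np.sqrt(g_x ** 2 + g_y ** 2)              # spatial gradient magnitude
print(edge_strength.shape)                                 # (30, 120, 160)
```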
{ "cite_N": [ "@cite_5", "@cite_16" ], "mid": [ "1974809759", "1576762698" ], "abstract": [ "In this paper we propose a novel approach to multi-action recognition that performs joint segmentation and classification. This approach models each action using a Gaussian mixture using robust low-dimensional action features. Segmentation is achieved by performing classification on overlapping temporal windows, which are then merged to produce the final result. This approach is considerably less complicated than previous methods which use dynamic programming or computationally expensive hidden Markov models (HMMs). Initial experiments on a stitched version of the KTH dataset show that the proposed approach achieves an accuracy of 78.3 , outperforming a recent HMM-based approach which obtained 71.2 .", "Action Recognition in videos is an active research field that is fueled by an acute need, spanning several application domains. Still, existing systems fall short of the applications' needs in real-world scenarios, where the quality of the video is less than optimal and the viewpoint is uncontrolled and often not static. In this paper, we consider the key elements of motion encoding and focus on capturing local changes in motion directions. In addition, we decouple image edges from motion edges using a suppression mechanism, and compensate for global camera motion by using an especially fitted registration scheme. Combined with a standard bag-of-words technique, our methods achieves state-of-the-art performance in the most recent and challenging benchmarks." ] }
1604.07814
2607980312
We consider multi-agent, convex optimization programs subject to separable constraints, where the constraint function of each agent involves only its local decision vector, while the decision vectors of all agents are coupled via a common objective function. We focus on a regularized variant of the so called Jacobi algorithm for decentralized computation in such problems. We first consider the case where the objective function is quadratic, and provide a fixed-point theoretic analysis showing that the algorithm converges to a minimizer of the centralized problem. Moreover, we quantify the potential benefits of such an iterative scheme by comparing it against a scaled projected gradient algorithm. We then consider the general case and show that all limit points of the proposed iteration are optimal solutions of the centralized problem. The efficacy of the proposed algorithm is illustrated by applying it to the problem of optimal charging of electric vehicles, where, as opposed to earlier approaches, we show convergence to an optimal charging scheme for a finite, possibly large, number of vehicles.
The second direction for decentralized optimization mainly involves the so-called Jacobi algorithm, which serves as an alternative to gradient algorithms. The Gauss-Seidel algorithm exhibits similarities with the Jacobi one, but is not parallelizable by nature @cite_27 , unless a coloring scheme is adopted (see Section 1.2.4 in @cite_3 ). Under the Jacobi algorithmic setup, at every iteration, instead of performing a gradient step, each agent minimizes the common objective function subject to its local constraints, while keeping the decision vectors of all other agents fixed to their values at the previous iteration. A regularized version of the Jacobi algorithm has been proposed in @cite_13 @cite_30 , and more recently in @cite_16 @cite_15 . Other parallelizable iterative methods are proposed in @cite_29 @cite_31 @cite_0 , which, however, consider partially separable cost functions.
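To make the Jacobi update concrete, here is a minimal sketch of a regularized Jacobi iteration for a strictly convex quadratic objective with per-coordinate box constraints, treating each coordinate as an "agent". This is a hypothetical illustration under our own choice of regularization weight and problem data, not the exact schemes of the cited references.

```python
# Regularized Jacobi sketch for f(x) = 0.5*x'Qx + c'x with box constraints:
# agent i minimizes f plus a proximal term 0.5*alpha*(x_i - x_i_prev)^2
# with all other coordinates frozen, then projects onto [lb_i, ub_i].
import numpy as np

def regularized_jacobi(Q, c, lb, ub, alpha=1.0, iters=200):
    n = len(c)
    x = np.clip(np.zeros(n), lb, ub)
    for _ in range(iters):
        x_new = np.empty(n)
        for i in range(n):
            # Linear coefficient of x_i coming from the frozen coordinates.
            off = c[i] + Q[i] @ x - Q[i, i] * x[i]
            # Unconstrained scalar minimizer of the regularized subproblem.
            xi = (alpha * x[i] - off) / (Q[i, i] + alpha)
            x_new[i] = min(max(xi, lb[i]), ub[i])   # projection onto the box
        x = x_new
    return x

if __name__ == "__main__":
    Q = np.array([[2.0, 0.5], [0.5, 2.0]])          # symmetric positive definite
    c = np.array([-1.0, -1.0])
    x_star = regularized_jacobi(Q, c, lb=np.zeros(2), ub=np.ones(2))
    print(x_star)   # approaches the constrained minimizer of f
```

With alpha set to zero the update reduces to the plain (unregularized) Jacobi step in which each coordinate is minimized exactly with the others frozen.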
{ "cite_N": [ "@cite_30", "@cite_31", "@cite_29", "@cite_3", "@cite_0", "@cite_27", "@cite_15", "@cite_16", "@cite_13" ], "mid": [ "2047620357", "2117669906", "2115594466", "1603765807", "2592062427", "1967138577", "", "", "2171305414" ], "abstract": [ "The auxiliary problem principle allows one to find the solution of a problem (minimization problem, saddle-point problem, etc.) by solving a sequence of auxiliary problems. There is a wide range of possible choices for these problems, so that one can give special features to them in order to make them easier to solve. We introduced this principle in Ref. 1 and showed its relevance to decomposing a problem into subproblems and to coordinating the subproblems. Here, we derive several basic or abstract algorithms, already given in Ref. 1, and we study their convergence properties in the framework of i infinite-dimensional convex programming.", "In many machine learning problems such as the dual form of SVM, the objective function to be minimized is convex but not strongly convex. This fact causes difficulties in obtaining the complexity of some commonly used optimization algorithms. In this paper, we proved the global linear convergence on a wide range of algorithms when they are applied to some non-strongly convex problems. In particular, we are the first to prove O(log(1 e)) time complexity of cyclic coordinate descent methods on dual problems of support vector classification and regression.", "We propose a decentralized algorithm to optimally schedule electric vehicle (EV) charging. The algorithm exploits the elasticity of electric vehicle loads to fill the valleys in electric load profiles. We first formulate the EV charging scheduling problem as an optimal control problem, whose objective is to impose a generalized notion of valley-filling, and study properties of optimal charging profiles. We then give a decentralized algorithm to iteratively solve the optimal control problem. In each iteration, EVs update their charging profiles according to the control signal broadcast by the utility company, and the utility company alters the control signal to guide their updates. The algorithm converges to optimal charging profiles (that are as “flat” as they can possibly be) irrespective of the specifications (e.g., maximum charging rate and deadline) of EVs, even if EVs do not necessarily update their charging profiles in every iteration, and use potentially outdated control signal when they update. Moreover, the algorithm only requires each EV solving its local problem, hence its implementation requires low computation capability. We also extend the algorithm to track a given load profile and to real-time implementation.", "", "In this paper we employ a parallel version of a randomized (block) coordinate descent method for minimizing the sum of a partially separable smooth convex function and a fully separable nonsmooth convex function. Under the assumption of Lipschitz continuity of the gradient of the smooth function, this method has a sublinear convergence rate. Linear convergence rate of the method is obtained for the newly introduced class of generalized error bound functions. We prove that the new class of generalized error bound functions encompasses both global local error bound functions and smooth strongly convex functions. 
We also show that the theoretical estimates on the convergence rate depend on the number of blocks chosen randomly and a natural measure of separability of the smooth component of the objective function.", "In view of the minimization of a nonsmooth nonconvex function f, we prove an abstract convergence result for descent methods satisfying a sufficient-decrease assumption, and allowing a relative error tolerance. Our result guarantees the convergence of bounded sequences, under the assumption that the function f satisfies the Kurdyka–Łojasiewicz inequality. This assumption allows to cover a wide range of problems, including nonsmooth semi-algebraic (or more generally tame) minimization. The specialization of our result to different kinds of structured problems provides several new convergence results for inexact versions of the gradient method, the proximal method, the forward–backward splitting algorithm, the gradient projection and some proximal regularization of the Gauss–Seidel method in a nonconvex setting. Our results are illustrated through feasibility problems, or iterative thresholding procedures for compressive sensing.", "", "", "In the general framework of inifinite-dimensional convex programming, two fundamental principles are demonstrated and used to derive several basic algorithms to solve a so-called \"master\" (constrained optimization) problem. These algorithms consist in solving an infinite sequence of \"auxiliary\" problems whose solutions converge to the master's optimal one. By making particular choices for the auxiliary problems, one can recover either classical algorithms (gradient, Newton-Raphson, Uzawa) or decomposition-coordination (two-level) algorithms. The advantages of the theory are that it clearly sets the connection between classical and two-level algorithms, It provides a framework for classifying the two-level algorithms, and it gives a systematic way of deriving new algorithms." ] }
1604.07211
2952212047
We have developed reduced reference parametric models for estimating perceived quality in audiovisual multimedia services. We have created 144 unique configurations for audiovisual content including various application and network parameters such as bitrates and distortions in terms of bandwidth, packet loss rate and jitter. To generate the data needed for model training and validation we have tasked 24 subjects, in a controlled environment, to rate the overall audiovisual quality on the absolute category rating (ACR) 5-level quality scale. We have developed models using Random Forest and Neural Network based machine learning methods in order to estimate Mean Opinion Scores (MOS) values. We have used information retrieved from the packet headers and side information provided as network parameters for model training. Random Forest based models have performed better in terms of Root Mean Square Error (RMSE) and Pearson correlation coefficient. The side information proved to be very effective in developing the model. We have found that, while the model performance might be improved by replacing the side information with more accurate bit stream level measurements, they are performing well in estimating perceived quality in audiovisual multimedia services.
In 2010, the authors of @cite_0 conducted subjective experiments to explore methods for objectively predicting audiovisual quality for video calls in wireless applications. They presented subjective test results for 60 test conditions on how audio and video contribute to overall audiovisual quality, developed models to reflect this relationship, and investigated how network and application parameters affect overall audiovisual quality. In their analysis, they used a regression model to predict audiovisual quality from the packet loss rate and the frame rate.
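For illustration, a regression of that flavor can be sketched as follows. The data here are synthetic and the linear form, parameter ranges, and coefficients are assumptions for this example only; the cited work fits its own model to real subjective scores.

```python
# Hypothetical sketch: predict a MOS-like score from packet error rate (PER)
# and frame rate (FR) with an ordinary least-squares regression.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
per = rng.uniform(0.0, 0.2, 200)                  # synthetic packet error rates
fr = rng.uniform(5.0, 30.0, 200)                  # synthetic frame rates (fps)
mos = 4.5 - 8.0 * per + 0.03 * fr + rng.normal(0.0, 0.2, 200)  # synthetic MOS

X = np.column_stack([per, fr])
model = LinearRegression().fit(X, mos)
rmse = mean_squared_error(mos, model.predict(X)) ** 0.5
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("RMSE:", rmse)
```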
{ "cite_N": [ "@cite_0" ], "mid": [ "2137870642" ], "abstract": [ "Mobile wireless multimedia applications (e.g., video calls and IPTV) have gained great momentum in recent years. An important issue is to monitor predict overall audiovisual quality, instead of audio-only or video-only quality, non-intrusively for technical or commercial reasons. Previous audiovisual modeling research mainly considered application parameters (e.g., codec and send bit rate). Little attention has been paid to how network parameters, e.g., Packet Error Rate (PER) affect audiovisual quality. The aim of this paper is to explore methods to predict audiovisual quality objectively for video calls in wireless applications. The contributions of the paper are twofold. Firstly, we present subjective test results on how audio and video contribute to overall audiovisual quality and develop models to reflect this relationship. Secondly, we investigated how network parameters (e.g., PER) and application parameters, e.g., video Frame Rate (FR) affect overall audiovisual quality. We developed a regression model to predict audiovisual quality from PER and FR which can be used to monitor predict audiovisual quality non-intrusively. We also explore the possibility to predict audiovisual quality from full-reference voice and video quality metrics (i.e., PESQ and PSNR) and from combined PESQ PSNR and network application parameters. The different predication accuracy obtained from these models (accuracy from 84 to 93 ) indicates the complex attributes in audiovisual quality prediction. An extended Evalvid NS-2 platform is developed to support simulation of video calls over wireless networks." ] }