aid stringlengths 9 15 | mid stringlengths 7 10 | abstract stringlengths 78 2.56k | related_work stringlengths 92 1.77k | ref_abstract dict |
|---|---|---|---|---|
1701.03989 | 2951792378 | On modern large-scale parallel computers, the performance of Krylov subspace iterative methods is limited by global synchronization. This has inspired the development of s-step Krylov subspace method variants, in which iterations are computed in blocks of s, which can reduce the number of global synchronizations per iteration by a factor of O(s). Although the s-step variants are mathematically equivalent to their classical counterparts, they can behave quite differently in finite precision depending on the parameter s. If s is chosen too large, the s-step method can suffer a convergence delay and a decrease in attainable accuracy relative to the classical method. This makes it difficult for a potential user of such methods: the s value that minimizes the time per iteration may not be the best s for minimizing the overall time-to-solution, and further may cause an unacceptable decrease in accuracy. Towards improving the reliability and usability of s-step Krylov subspace methods, in this work we derive the adaptive s-step CG method, a variable s-step CG method where in block k, the parameter s_k is determined automatically such that a user-specified accuracy is attainable. The method for determining s_k is based on a bound on the growth of the residual gap within block k, from which we derive a constraint on the condition numbers of the computed s_k-dimensional Krylov subspace bases. The computations required for determining the block size s_k can be performed without increasing the number of global synchronizations per block. Our numerical experiments demonstrate that the adaptive s-step CG method is able to attain up to the same accuracy as classical CG while still significantly reducing the total number of global synchronizations. 
| In both s-step and classical variants of Krylov subspace methods, finite precision roundoff error in updates to the approximate solution x_i and the residual r_i in each iteration can cause the updated residual r_i and the true residual b - Ax_i to grow further and further apart as the iterations proceed. If this deviation grows large, it can limit the maximum attainable accuracy, i.e., the accuracy with which we can solve Ax = b on a computer with unit round-off ε. Analyses of maximum attainable accuracy in CG and other classical KSMs are given by Greenbaum @cite_14 , van der Vorst and Ye @cite_40 , Sleijpen, van der Vorst, and Fokkema @cite_12 , Sleijpen, van der Vorst, and Modersitzki @cite_16 , Björck, Elfving, and Strakoš @cite_3 , and Gutknecht and Strakoš @cite_33 . One important result of these analyses is the insight that loss of accuracy can be caused at a very early stage of the computation and cannot be corrected in later iterations. Analyses of the maximum attainable accuracy in s-step CG and the s-step biconjugate gradient method (BICG) can be found in @cite_19 @cite_6 . | {
"cite_N": [
"@cite_14",
"@cite_33",
"@cite_3",
"@cite_6",
"@cite_19",
"@cite_40",
"@cite_16",
"@cite_12"
],
"mid": [
"1985536181",
"2060710554",
"2033180095",
"2221817300",
"2002054778",
"2106833662",
"",
"2080114591"
],
"abstract": [
"Many conjugate gradient-like methods for solving linear systems Ax = b use recursion formulas for updating residual vectors instead of computing the residuals directly. For such methods it is shown that the difference between the actual residuals and the updated approximate residual vectors generated in finite precision arithmetic depends on the machine precision ε and on the maximum norm of an iterate divided by the norm of the true solution. It is often observed numerically, and can sometimes be proved, that the norms of the updated approximate residual vectors converge to zero or, at least, become orders of magnitude smaller than the machine precision. In such cases, the actual residual norm reaches the level ε ||A|| ||x|| times the maximum ratio of the norm of an iterate to that of the true solution. Using exact arithmetic theory to bound the size of the iterates, we give a priori estimates of the size of the final residual for a number of algorithms.",
"It has been widely observed that Krylov space solvers based on two three-term recurrences can give significantly less accurate residuals than mathematically equivalent solvers implemented with three two-term recurrences. In this paper we attempt to clarify and justify this difference theoretically by analyzing the gaps between recursively and explicitly computed residuals. It is shown that, in contrast with the two-term recurrences analyzed by Sleijpen, van der Vorst, and Fokkema [ Numer. Algorithms, 7 (1994), pp. 75--109] and Greenbaum [SIAM J. Matrix Anal. Appl., 18 (1997), pp. 535--551], in the two three-term recurrences the contributions of the local roundoff errors to the analyzed gaps may be dramatically amplified while propagating through the algorithm. This result explains, for example, the well-known behavior of three-term-based versions of the biconjugate gradient method, where large gaps between recursively and explicitly computed residuals are not uncommon. For the conjugate gradient method, however, such a devastating behavior---although possible---is not observed frequently in practical computations, and the difference between two-term and three-term implementations is usually moderate or small. This can also be explained by our results.",
"The conjugate gradient method applied to the normal equations A^T A x = A^T b (CGLS) is often used for solving large sparse linear least squares problems. The mathematically equivalent algorithm LSQR based on the Lanczos bidiagonalization process is an often recommended alternative. In this paper, the achievable accuracy of different conjugate gradient and Lanczos methods in finite precision is studied. It is shown that an implementation of algorithm CGLS in which the residual s_k = A^T(b - Ax_k) of the normal equations is recurred will not in general achieve accurate solutions. The same conclusion holds for the method based on Lanczos bidiagonalization with starting vector A^T b. For the preferred implementation of CGLS we bound the error ||r - r_k|| of the computed residual r_k. Numerical tests are given that confirm a conjecture of backward stability. The achievable accuracy of LSQR is shown to be similar. The analysis essentially also covers the preconditioned case.",
"Advancements in the field of high-performance scientific computing are necessary to address the most important challenges we face in the 21st century. From physical modeling to large-scale data analysis, engineering efficient code at the extreme scale requires a critical focus on reducing communication -- the movement of data between levels of memory hierarchy or between processors over a network -- which is the most expensive operation in terms of both time and energy at all scales of computing. Achieving scalable performance thus requires a dramatic shift in the field of algorithm design, with a key area of innovation being the development of communication-avoiding algorithms. Solvers for sparse linear algebra problems, ubiquitous throughout scientific and mathematical applications, often limit application performance due to a low computation-to-communication ratio. Among iterative methods, Krylov subspace methods are the most general and widely-used. To alleviate performance bottlenecks, much prior work has focused on the development of communication-avoiding Krylov subspace methods, which can offer asymptotic performance improvements over a set number of iterations. In finite precision, the convergence and stability properties of classical Krylov methods are not necessarily maintained by communication-avoiding Krylov methods. Depending on the parameters used and the numerical properties of the problem, these communication-avoiding variants can exhibit slower convergence and decreased accuracy compared to their classical counterparts, making it unclear when communication-avoiding Krylov subspace methods are suitable for use in practice. Until now, the literature on communication-avoiding Krylov methods lacked a detailed numerical stability analysis, as well as both theoretical and practical comparisons with the stability and convergence properties of standard implementations. 
In this thesis, we address this major challenge to the practical use of communication-avoiding Krylov subspace methods. We extend a number of theoretical results and algorithmic techniques developed for classical Krylov subspace methods to communication-avoiding Krylov subspace methods and identify constraints under which these methods are competitive in terms of both achieving asymptotic speedups and meeting application-specific numerical requirements.",
"Krylov subspace methods are a popular class of iterative methods for solving linear systems with large, sparse matrices. On modern computer architectures, both sequential and parallel performance of classical Krylov methods is limited by costly data movement, or communication, required to update the approximate solution in each iteration. This motivated communication-avoiding Krylov methods, which, based on s-step formulations, reduce data movement by a factor of O(s) by reordering the computations in classical Krylov methods to exploit locality. Studies on the finite precision behavior of communication-avoiding Krylov methods in the literature have thus far been empirical in nature; in this work, we provide the first quantitative analysis of the maximum attainable accuracy of communication-avoiding Krylov subspace methods in finite precision. Following the analysis for classical Krylov methods, we derive a bound on the deviation of the true and updated residuals in communication-avoiding conjugate gradient...",
"In this paper, a strategy is proposed for alternative computations of the residual vectors in Krylov subspace methods, which improves the agreement of the computed residuals and the true residuals to the level of O(u) ||A|| ||x||. Building on earlier ideas on residual replacement and on insights in the finite precision behavior of the Krylov subspace methods, computable error bounds are derived for iterations that involve occasionally replacing the computed residuals by the true residuals, and they are used to monitor the deviation of the two residuals and hence to select residual replacement steps, so that the recurrence relations for the computed residuals, which control the convergence of the method, are perturbed within safe bounds. Numerical examples are presented to demonstrate the effectiveness of this new residual replacement scheme.",
"",
"It is well-known that Bi-CG can be adapted so that the operations with A^T can be avoided, and hybrid methods can be constructed in which it is attempted to further improve the convergence behaviour. Examples of this are CGS, Bi-CGSTAB, and the more general BiCGstab(l) method. In this paper it is shown that BiCGstab(l) can be implemented in different ways. Each of the suggested approaches has its own advantages and disadvantages. Our implementations allow for combinations of Bi-CG with arbitrary polynomial methods. The choice for a specific implementation can also be made for reasons of numerical stability. This aspect receives much attention. Various effects have been illustrated by numerical examples."
]
} |
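The residual-gap phenomenon discussed in this row is easy to reproduce. The sketch below (a hypothetical example, not taken from the paper) runs classical CG with the standard recursive residual update and also forms the true residual b - Ax_i at every step, purely for monitoring, so the gap between the two can be observed stagnating near the ε||A||||x|| level that bounds the attainable accuracy:

```python
import numpy as np

def cg_with_gap(A, b, tol=1e-12, maxiter=1000):
    # Classical CG with the recursively updated residual r; the true
    # residual b - A x is computed at each step only to monitor the
    # "residual gap" ||(b - A x_i) - r_i||.
    x = np.zeros_like(b)
    r = b.copy()                         # recursively updated residual
    p = r.copy()
    rr = r @ r
    gaps = []
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap               # recurrence, no extra matvec
        gaps.append(np.linalg.norm((b - A @ x) - r))
        rr_new = r @ r
        if np.sqrt(rr_new) <= tol * np.linalg.norm(b):
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x, gaps

# Hypothetical SPD test problem with condition number 1e3
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(1.0, 1e3, 50)) @ Q.T
A = (A + A.T) / 2                        # enforce exact symmetry
b = rng.standard_normal(50)
x, gaps = cg_with_gap(A, b)
```

For this well-conditioned problem the gap stays tiny, so true and updated residuals agree; for harder problems (or large s in an s-step variant) the same monitor exposes the loss of attainable accuracy.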
1701.03212 | 2951614422 | Topological data analysis (TDA) has emerged as one of the most promising techniques to reconstruct the unknown shapes of high-dimensional spaces from observed data samples. TDA, thus, yields key shape descriptors in the form of persistent topological features that can be used for any supervised or unsupervised learning task, including multi-way classification. Sparse sampling, on the other hand, provides a highly efficient technique to reconstruct signals in the spatial-temporal domain from just a few carefully-chosen samples. Here, we present a new method, referred to as the Sparse-TDA algorithm, that combines favorable aspects of the two techniques. This combination is realized by selecting an optimal set of sparse pixel samples from the persistent features generated by a vector-based TDA algorithm. These sparse samples are selected from a low-rank matrix representation of persistent features using QR pivoting. We show that the Sparse-TDA method demonstrates promising performance on three benchmark problems related to human posture recognition and image texture classification. | Over the past decade or so, an increasing interest in utilizing tools from algebraic topology to extract insights from high dimensional data has given rise to the field of TDA. The successful applications of TDA have spanned a large number of areas, ranging from computer vision @cite_24 to medical imaging @cite_6 , biochemistry @cite_4 , neuroscience @cite_17 and materials science @cite_19 . A predominant tool in TDA is persistent homology, which tracks the evolution of the topological features in a multi-scale manner to avoid information loss @cite_16 @cite_7 . The multi-scale information is summarized by the persistence diagram (PD), a multiset of points in the plane R^2 that encodes the lifetime (i.e., persistence) of the features. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_6",
"@cite_24",
"@cite_19",
"@cite_16",
"@cite_17"
],
"mid": [
"2135610119",
"",
"2124496089",
"2952711884",
"2424211690",
"",
"1487729220"
],
"abstract": [
"In this paper we partially clarify the relation between the compressibility of a protein and its molecular geometric structure. To identify and understand the relevant topological features within a given protein, we model its molecule as an alpha filtration and hence obtain multi-scale insight into the structure of its tunnels and cavities. The persistence diagrams of this alpha filtration capture the sizes and robustness of such tunnels and cavities in a compact and meaningful manner. From these persistence diagrams, we extract a measure of compressibility derived from those topological features whose relevance is suggested by physical and chemical properties. Due to recent advances in combinatorial topology, this measure is efficiently and directly computable from information found in the Protein Data Bank (PDB). Our main result establishes a clear linear correlation between the topological measure and the experimentally-determined compressibility of most proteins for which both PDB information and experimental compressibility data are available. Finally, we establish that both the topological measurement and the linear correlation are stable with respect to small perturbations in the input data, such as those arising from experimental errors in compressibility and X-ray crystallography experiments.",
"",
"We introduce a novel algorithm for segmenting the high resolution CT images of the left ventricle (LV), particularly the papillary muscles and the trabeculae. High quality segmentations of these structures are necessary in order to better understand the anatomical function and geometrical properties of LV. These fine structures, however, are extremely challenging to capture due to their delicate and complex nature in both geometry and topology. Our algorithm computes the potential missing topological structures of a given initial segmentation. Using techniques from computational topology, e.g. persistent homology, our algorithm finds topological handles which are likely to be the true signal. To further increase accuracy, these proposals are measured by the saliency and confidence from a trained classifier. Handles with high scores are restored in the final segmentation, leading to high quality segmentation results of the complex structures.",
"Topological data analysis offers a rich source of valuable information to study vision problems. Yet, so far we lack a theoretically sound connection to popular kernel-based learning techniques, such as kernel SVMs or kernel PCA. In this work, we establish such a connection by designing a multi-scale kernel for persistence diagrams, a stable summary representation of topological features in data. We show that this kernel is positive definite and prove its stability with respect to the 1-Wasserstein distance. Experiments on two benchmark datasets for 3D shape classification/retrieval and texture recognition show considerable performance gains of the proposed method compared to an alternative approach that is based on the recently introduced persistence landscapes.",
"This article proposes a topological method that extracts hierarchical structures of various amorphous solids. The method is based on the persistence diagram (PD), a mathematical tool for capturing shapes of multiscale data. The input to the PDs is given by an atomic configuration and the output is expressed as 2D histograms. Then, specific distributions such as curves and islands in the PDs identify meaningful shape characteristics of the atomic configuration. Although the method can be applied to a wide variety of disordered systems, it is applied here to silica glass, the Lennard-Jones system, and Cu-Zr metallic glass as standard examples of continuous random network and random packing structures. In silica glass, the method classified the atomic rings as short-range and medium-range orders and unveiled hierarchical ring structures among them. These detailed geometric characterizations clarified a real space origin of the first sharp diffraction peak and also indicated that PDs contain information on elastic response. Even in the Lennard-Jones system and Cu-Zr metallic glass, the hierarchical structures in the atomic configurations were derived in a similar way using PDs, although the glass structures and properties substantially differ from silica glass. These results suggest that the PDs provide a unified method that extracts greater depth of geometric information in amorphous solids than conventional methods.",
"",
"We present a novel framework for characterizing signals in images using techniques from computational algebraic topology. This technique is general enough for dealing with noisy multivariate data including geometric noise. The main tool is persistent homology which can be encoded in persistence diagrams. These diagrams visually show how the number of connected components of the sublevel sets of the signal changes. The use of local critical values of a function differs from the usual statistical parametric mapping framework, which mainly uses the mean signal in quantifying imaging data. Our proposed method uses all the local critical values in characterizing the signal and by doing so offers a completely new data reduction and analysis framework for quantifying the signal. As an illustration, we apply this method to a 1D simulated signal and 2D cortical thickness data. In case of the latter, extra homological structures are evident in a control group compared with the autistic group."
]
} |
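As a minimal illustration of the multi-scale tracking a persistence diagram summarizes, the toy sketch below (an illustrative assumption, not any cited implementation) computes the 0-dimensional persistence pairs of the sublevel sets of a 1D signal with a union-find sweep and the elder rule, in the spirit of the sublevel-set analysis in @cite_17:

```python
import numpy as np

def sublevel_persistence_0d(signal):
    """0-dimensional persistence pairs (birth, death) of the sublevel
    sets of a 1D signal, computed with a union-find sweep."""
    order = np.argsort(signal)          # visit samples by increasing value
    parent, birth, pairs = {}, {}, []

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        birth[i] = signal[i]
        for j in (i - 1, i + 1):            # merge with already-born neighbors
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                if birth[ri] > birth[rj]:   # elder rule: the younger dies
                    ri, rj = rj, ri
                if birth[rj] < signal[i]:   # drop zero-persistence pairs
                    pairs.append((birth[rj], signal[i]))
                parent[rj] = ri
    pairs.append((min(signal), np.inf))     # the oldest component never dies
    return sorted(pairs)

# Each local minimum is born, then dies when absorbed by an older component
pd = sublevel_persistence_0d(np.array([0.0, 2.0, 1.0, 3.0, 0.5]))
```

Real pipelines use full persistent homology libraries, but the birth/death bookkeeping that a PD encodes is exactly this.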
1701.03212 | 2951614422 | Topological data analysis (TDA) has emerged as one of the most promising techniques to reconstruct the unknown shapes of high-dimensional spaces from observed data samples. TDA, thus, yields key shape descriptors in the form of persistent topological features that can be used for any supervised or unsupervised learning task, including multi-way classification. Sparse sampling, on the other hand, provides a highly efficient technique to reconstruct signals in the spatial-temporal domain from just a few carefully-chosen samples. Here, we present a new method, referred to as the Sparse-TDA algorithm, that combines favorable aspects of the two techniques. This combination is realized by selecting an optimal set of sparse pixel samples from the persistent features generated by a vector-based TDA algorithm. These sparse samples are selected from a low-rank matrix representation of persistent features using QR pivoting. We show that the Sparse-TDA method demonstrates promising performance on three benchmark problems related to human posture recognition and image texture classification. | In this work, we employ the vector representation from @cite_23 and integrate it with a sparse sampling method using QR pivots to identify discriminative features in the presence of noisy and redundant information, thereby improving classifier training time and, in some cases, prediction accuracy. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2964237352"
],
"abstract": [
"Many data sets can be viewed as a noisy sampling of an underlying space, and tools from topological data analysis can characterize this structure for the purpose of knowledge discovery. One such tool is persistent homology, which provides a multiscale description of the homological features within a data set. A useful representation of this homological information is a persistence diagram (PD). Efforts have been made to map PDs into spaces with additional structure valuable to machine learning tasks. We convert a PD to a finite-dimensional vector representation which we call a persistence image (PI), and prove the stability of this transformation with respect to small perturbations in the inputs. The discriminatory power of PIs is compared against existing methods, showing significant performance gains. We explore the use of PIs with vector-based machine learning tools, such as linear sparse support vector machines, which identify features containing discriminating topological information. Finally, high accuracy inference of parameter values from the dynamic output of a discrete dynamical system (the linked twist map) and a partial differential equation (the anisotropic Kuramoto-Sivashinsky equation) provide a novel application of the discriminatory power of PIs."
]
} |
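A common way to realize the QR-pivoting selection described in the abstract is column-pivoted QR applied to a low-rank (SVD) basis of the feature matrix, as in QDEIM-style sparse sensor selection. The sketch below is a numpy-only version using greedy Businger-Golub pivoting; the shapes and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def select_pixels_qr(X, r):
    """Pick r informative rows (e.g., pixel indices) of a feature matrix X
    (pixels x samples) via column-pivoted QR on its rank-r SVD basis."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    M = U[:, :r].T.copy()            # r x n_pixels; pivot its columns
    pivots = []
    for _ in range(r):               # greedy Businger-Golub pivoting
        norms = np.linalg.norm(M, axis=0)
        j = int(np.argmax(norms))    # column with largest residual norm
        pivots.append(j)
        q = M[:, j] / norms[j]
        M -= np.outer(q, q @ M)      # deflate the selected direction
    return pivots

# Toy check: only pixels 3, 7, 11 carry signal, so they should be selected
rng = np.random.default_rng(1)
X = np.zeros((20, 30))
X[[3, 7, 11], :] = rng.standard_normal((3, 30))
pivots = select_pixels_qr(X, 3)
```

`scipy.linalg.qr(M, pivoting=True)` provides the same pivot order without the explicit loop; the greedy form is shown to make the selection criterion visible.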
1701.03102 | 2951716784 | Limited annotated data available for the recognition of facial expression and action units hampers the training of deep networks, which can learn disentangled invariant features. However, a linear model with just a few parameters is normally not demanding in terms of training data. In this paper, we propose an elegant linear model to untangle confounding factors in challenging realistic multichannel signals such as 2D face videos. The simple yet powerful model does not rely on huge training data and is natural for recognizing facial actions without explicitly disentangling the identity. Based on well-understood intuitive linear models such as Sparse Representation based Classification (SRC), previous attempts require a preprocessing step of explicit decoupling which is practically inexact. Instead, we exploit the low-rank property across frames to subtract the underlying neutral faces, which are modeled jointly with sparse representation on the action components with group sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot automatic method on raw face videos performs as competitively as SRC applied on manually prepared action components, and performs even better than SRC in terms of true positive rate. We apply the model to the even more challenging task of facial action unit recognition, verified on the MPI Face Video Database (MPI-VDB), achieving a decent performance. All the programs and data have been made publicly available. | Among non-linear models, one line of work is kernel-based methods @cite_26 while another is deep learning @cite_13 @cite_24 @cite_4 @cite_6 . Similar ideas with disentangling factors have been presented in @cite_3 @cite_25 @cite_6 . By introducing extra cues, one line of work is 3D models @cite_22 while another is multi-modal models @cite_20 . 
But in the linear world, observing a random signal y for recognition, we just hope to send the classifier a representation x over a dictionary A such that y ≈ Ax. Normally x is computed by pursuing the best fit. For example, when A is under-complete (skinny), a closed-form approximate solution can be obtained by Least-Squares: x = (A^T A)^{-1} A^T y. When A is over-complete (fat), we add a Tikhonov regularizer: x = (A^T A + λI)^{-1} A^T y, where λ > 0 and the vertically stacked matrix [A; sqrt(λ) I] is under-complete. Notably, x is generally dense. | {
"cite_N": [
"@cite_26",
"@cite_4",
"@cite_22",
"@cite_6",
"@cite_3",
"@cite_24",
"@cite_13",
"@cite_25",
"@cite_20"
],
"mid": [
"2436394355",
"1974210421",
"1940910204",
"171902450",
"",
"2217426128",
"",
"",
""
],
"abstract": [
"Research in face perception and emotion theory requires very large annotated databases of images of facial expressions of emotion. Annotations should include Action Units (AUs) and their intensities as well as emotion category. This goal cannot be readily achieved manually. Herein, we present a novel computer vision algorithm to annotate a large database of one million images of facial expressions of emotion in the wild (i.e., face images downloaded from the Internet). First, we show that this newly proposed algorithm can recognize AUs and their intensities reliably across databases. To our knowledge, this is the first published algorithm to achieve highly-accurate results in the recognition of AUs and their intensities across multiple databases. Our algorithm also runs in real-time (>30 images/second), allowing it to work with large numbers of images and video sequences. Second, we use WordNet to download 1,000,000 images of facial expressions with associated emotion keywords from the Internet. These images are then automatically annotated with AUs, AU intensities and emotion categories by our algorithm. The result is a highly useful database that can be readily queried using semantic descriptions for applications in computer vision, affective computing, social and cognitive psychology and neuroscience, e.g., \"show me all the images with happy faces\" or \"all images with AU 1 at intensity c.\"",
"A training process for facial expression recognition is usually performed sequentially in three individual stages: feature learning, feature selection, and classifier construction. Extensive empirical studies are needed to search for an optimal combination of feature representation, feature set, and classifier to achieve good recognition performance. This paper presents a novel Boosted Deep Belief Network (BDBN) for performing the three training stages iteratively in a unified loopy framework. Through the proposed BDBN framework, a set of features, which is effective to characterize expression-related facial appearance and shape changes, can be learned and selected to form a boosted strong classifier in a statistical way. As learning continues, the strong classifier is improved iteratively and more importantly, the discriminative capabilities of selected features are strengthened as well according to their relative importance to the strong classifier via a joint fine-tune process in the BDBN framework. Extensive experiments on two public databases showed that the BDBN framework yielded dramatic improvements in facial expression analysis.",
"We propose a real-time 3D model-based method that continuously recognizes dimensional emotions from facial expressions in natural communications. In our method, 3D facial models are restored from 2D images, which provide crucial clues for the enhancement of robustness to overcome large changes including out-of-plane head rotations, fast head motions and partial facial occlusions. To accurately recognize the emotion, a novel random forest-based algorithm which simultaneously integrates two regressions for 3D facial tracking and continuous emotion estimation is constructed. Moreover, via the reconstructed 3D facial model, temporal information and user-independent emotion presentations are also taken into account through our image fusion process. The experimental results show that our algorithm can achieve a state-of-the-art result with a higher Pearson's correlation coefficient for continuous emotion recognition in real time.",
"We propose a semi-supervised approach to solve the task of emotion recognition in 2D face images using recent ideas in deep learning for handling the factors of variation present in data. An emotion classification algorithm should be both robust to (1) remaining variations due to the pose of the face in the image after centering and alignment, (2) the identity or morphology of the face. In order to achieve this invariance, we propose to learn a hierarchy of features in which we gradually filter the factors of variation arising from both (1) and (2). We address (1) by using a multi-scale contractive convolutional network (CCNET) in order to obtain invariance to translations of the facial traits in the image. Using the feature representation produced by the CCNET, we train a Contractive Discriminative Analysis (CDA) feature extractor, a novel variant of the Contractive Auto-Encoder (CAE), designed to learn a representation separating out the emotion-related factors from the others (which mostly capture the subject identity, and what is left of pose after the CCNET). This system beats the state-of-the-art on a recently proposed dataset for facial expression recognition, the Toronto Face Database, moving the state-of-the-art accuracy from 82.4% to 85.0%, while the CCNET and CDA improve the accuracy of a standard CAE by 8%.",
"",
"Temporal information has useful features for recognizing facial expressions. However, to manually design useful features requires a lot of effort. In this paper, to reduce this effort, a deep learning technique, which is regarded as a tool to automatically extract useful features from raw data, is adopted. Our deep network is based on two different models. The first deep network extracts temporal appearance features from image sequences, while the other deep network extracts temporal geometry features from temporal facial landmark points. These two models are combined using a new integration method in order to boost the performance of the facial expression recognition. Through several experiments, we show that the two models cooperate with each other. As a result, we achieve superior performance to other state-of-the-art methods in the CK+ and Oulu-CASIA databases. Furthermore, we show that our new integration method gives more accurate results than traditional methods, such as a weighted summation and a feature concatenation method.",
"",
"",
""
]
} |
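The closed-form representations discussed in this row's related-work text can be sketched in a few lines; the dictionaries below are random stand-ins (an illustrative assumption, not face data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Under-complete (skinny) dictionary: least-squares representation
D = rng.standard_normal((50, 10))           # 50-dim signals, 10 atoms
y = rng.standard_normal(50)
x_ls = np.linalg.solve(D.T @ D, D.T @ y)    # x = (D^T D)^{-1} D^T y

# Over-complete (fat) dictionary: Tikhonov-regularized representation
F = rng.standard_normal((10, 50))           # 10-dim signals, 50 atoms
z = rng.standard_normal(10)
lam = 0.1
x_tik = np.linalg.solve(F.T @ F + lam * np.eye(50), F.T @ z)

# As noted in the text, x_tik is generally dense: essentially every
# atom receives a non-zero weight, unlike a sparse representation.
```

The equivalent dual form x_tik = F^T (F F^T + λI)^{-1} z (the push-through identity) only requires inverting a 10x10 matrix here, which is why it is preferred when the dictionary is very fat.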
1701.03102 | 2951716784 | Limited annotated data available for the recognition of facial expression and action units hampers the training of deep networks, which can learn disentangled invariant features. However, a linear model with just a few parameters is normally not demanding in terms of training data. In this paper, we propose an elegant linear model to untangle confounding factors in challenging realistic multichannel signals such as 2D face videos. The simple yet powerful model does not rely on huge training data and is natural for recognizing facial actions without explicitly disentangling the identity. Based on well-understood intuitive linear models such as Sparse Representation based Classification (SRC), previous attempts require a preprocessing step of explicit decoupling which is practically inexact. Instead, we exploit the low-rank property across frames to subtract the underlying neutral faces, which are modeled jointly with sparse representation on the action components with group sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot automatic method on raw face videos performs as competitively as SRC applied on manually prepared action components, and performs even better than SRC in terms of true positive rate. We apply the model to the even more challenging task of facial action unit recognition, verified on the MPI Face Video Database (MPI-VDB), achieving a decent performance. All the programs and data have been made publicly available. | Alternatively, we can seek a sparse usage of A. Sparse Representation based Classification @cite_23 (SRC) expresses a test sample y as a weighted linear combination y = Ax of training samples stacked columnwise in the dictionary A. Presumably, the non-zero weight coefficients fall on the ground-truth class, which induces a sparse coefficient vector, the so-called sparse representation. In practice, non-zero coefficients also fall on other classes due to noise and correlations among classes. 
Once adding an error term @math , we can form an dictionary @math which is always over-complete: @math @math . SRC evaluates which class leads to the minimum reconstruction error, which can be seen as a max-margin classifier. | {
"cite_N": [
"@cite_23"
],
"mid": [
"2129812935"
],
"abstract": [
"We consider the problem of automatically recognizing human faces from frontal views with varying expression and illumination, as well as occlusion and disguise. We cast the recognition problem as one of classifying among multiple linear regression models and argue that new theory from sparse signal representation offers the key to addressing this problem. Based on a sparse representation computed by C1-minimization, we propose a general classification algorithm for (image-based) object recognition. This new framework provides new insights into two crucial issues in face recognition: feature extraction and robustness to occlusion. For feature extraction, we show that if sparsity in the recognition problem is properly harnessed, the choice of features is no longer critical. What is critical, however, is whether the number of features is sufficiently large and whether the sparse representation is correctly computed. Unconventional features such as downsampled images and random projections perform just as well as conventional features such as eigenfaces and Laplacianfaces, as long as the dimension of the feature space surpasses certain threshold, predicted by the theory of sparse representation. This framework can handle errors due to occlusion and corruption uniformly by exploiting the fact that these errors are often sparse with respect to the standard (pixel) basis. The theory of sparse representation helps predict how much occlusion the recognition algorithm can handle and how to choose the training images to maximize robustness to occlusion. We conduct extensive experiments on publicly available databases to verify the efficacy of the proposed algorithm and corroborate the above claims."
]
} |
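To make the SRC mechanism described above concrete, here is a minimal numpy sketch (a toy reimplementation under our own assumptions, not the cited authors' code): the sparse code is obtained with plain ISTA for the l1-regularized least-squares problem, and the class whose atoms give the smallest class-restricted reconstruction residual wins. The data, dictionary size, and regularization weight are all illustrative choices.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

def src_classify(A, labels, y, lam=0.01):
    """SRC: keep only the coefficients of each class in turn and pick the
    class whose atoms best reconstruct the test sample y."""
    x = ista(A, y, lam)
    mask = np.array(labels)
    classes = sorted(set(labels))
    residuals = [np.linalg.norm(y - A @ np.where(mask == c, x, 0.0))
                 for c in classes]
    return classes[int(np.argmin(residuals))]

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))              # 10 training samples, 20-dim features
A /= np.linalg.norm(A, axis=0)                 # unit-norm dictionary atoms
labels = [0] * 5 + [1] * 5                     # 5 atoms per class
y = A[:, 7] + 0.01 * rng.standard_normal(20)   # test sample near a class-1 atom
print(src_classify(A, labels, y))              # → 1
```

The over-complete dictionary with an explicit error term mentioned in the text corresponds to appending an identity block to `A`; it is omitted here for brevity.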
1701.03102 | 2951716784 | The limited annotated data available for recognizing facial expressions and action units hampers the training of deep networks, which can learn disentangled invariant features. However, a linear model with just several parameters is normally not demanding in terms of training data. In this paper, we propose an elegant linear model to untangle confounding factors in challenging realistic multichannel signals such as 2D face videos. The simple yet powerful model does not rely on huge training data and is natural for recognizing facial actions without explicitly disentangling the identity. Based on well-understood, intuitive linear models such as Sparse Representation based Classification (SRC), previous attempts require a preprocessing step of explicit decoupling which is practically inexact. Instead, we exploit the low-rank property across frames to subtract the underlying neutral faces, which are modeled jointly with sparse representation on the action components with group sparsity enforced. On the extended Cohn-Kanade dataset (CK+), our one-shot automatic method on raw face videos performs as competitively as SRC applied on manually prepared action components and performs even better than SRC in terms of true positive rate. We apply the model to the even more challenging task of facial action unit recognition, verified on the MPI Face Video Database (MPI-VDB), achieving decent performance. All the programs and data have been made publicly available. | Particularly for facial actions, we treat videos as multichannel signals @cite_12 @cite_14 , unlike image-based methods @cite_0 @cite_28 . @cite_0 explicitly separates the neutral face and the action component, and then exploits class-wise sparsity separately to recognize identity from neutral faces and expression from action components. 
In contrast, focusing on facial actions, we exploit the low-rank property to disentangle identity, as well as structured sparsity from inter-channel observation. Furthermore, there is a tradeoff between simplicity and performance. As videos are sequential signals, the above appearance-based methods, including ours, cannot model the dynamics captured by a temporal model @cite_27 or spatio-temporal models @cite_7 @cite_11 @cite_19 . Other linear models include ordinal regression @cite_15 @cite_9 @cite_21 and boosting @cite_18 . | {
"cite_N": [
"@cite_18",
"@cite_14",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_0",
"@cite_19",
"@cite_27",
"@cite_15",
"@cite_12",
"@cite_11"
],
"mid": [
"2172197449",
"",
"2134860945",
"",
"2083261714",
"1491036874",
"2012362967",
"",
"2197748410",
"",
"",
"1984354005"
],
"abstract": [
"Most previous work focuses on how to learn discriminating appearance features over all the face without considering the fact that each facial expression is physically composed of some relative action units (AU). However, the definition of AU is an ambiguous semantic description in Facial Action Coding System (FACS), so it makes accurate AU detection very difficult. In this paper, we adopt a scheme of compromise to avoid AU detection, and try to interpret facial expression by learning some compositional appearance features around AU areas. We first divided face image into local patches according to the locations of AUs, and then we extract local appearance features from each patch. A minimum error based optimization strategy is adopted to build compositional features based on local appearance features, and this process embedded into Boosting learning structure. Experiments on the Cohn-Kanada database show that the proposed method has a promising performance and the built compositional features are basically consistent to FACS.",
"",
"Facial expression is temporally dynamic event which can be decomposed into a set of muscle motions occurring in different facial regions over various time intervals. For dynamic expression recognition, two key issues, temporal alignment and semantics-aware dynamic representation, must be taken into account. In this paper, we attempt to solve both problems via manifold modeling of videos based on a novel mid-level representation, i.e. expressionlet. Specifically, our method contains three key components: 1) each expression video clip is modeled as a spatio-temporal manifold (STM) formed by dense low-level features, 2) a Universal Manifold Model (UMM) is learned over all low-level features and represented as a set of local ST modes to statistically unify all the STMs. 3) the local modes on each STM can be instantiated by fitting to UMM, and the corresponding expressionlet is constructed by modeling the variations in each local ST mode. With above strategy, expression videos are naturally aligned both spatially and temporally. To enhance the discriminative power, the expressionlet-based STM representation is further processed with discriminant embedding. Our method is evaluated on four public expression databases, CK+, MMI, Oulu-CASIA, and AFEW. In all cases, our method reports results better than the known state-of-the-art.",
"",
"Automated facial expression recognition has received increased attention over the past two decades. Existing works in the field usually do not encode either the temporal evolution or the intensity of the observed facial displays. They also fail to jointly model multidimensional (multi-class) continuous facial behaviour data; binary classifiers — one for each target basic-emotion class — are used instead. In this paper, intrinsic topology of multidimensional continuous facial affect data is first modeled by an ordinal manifold. This topology is then incorporated into the Hidden Conditional Ordinal Random Field (H-CORF) framework for dynamic ordinal regression by constraining H-CORF parameters to lie on the ordinal manifold. The resulting model attains simultaneous dynamic recognition and intensity estimation of facial expressions of multiple emotions. To the best of our knowledge, the proposed method is the first one to achieve this on both deliberate as well as spontaneous facial affect data.",
"We consider the task of labeling facial emotion intensities in videos, where the emotion intensities to be predicted have ordinal scales (e.g., low, medium, and high) that change in time. A significant challenge is that the rates of increase and decrease differ substantially across subjects. Moreover, the actual absolute differences of intensity values carry little information, with their relative order being more important. To solve the intensity prediction problem we propose a new dynamic ranking model that models the signal intensity at each time as a label on an ordinal scale and links the temporally proximal labels using dynamic smoothness constraints. This new model extends the successful static ordinal regression to a structured (dynamic) setting by using an analogy with Conditional Random Field (CRF) models in structured classification. We show that, although non-convex, the new model can be accurately learned using efficient gradient search. The predictions resulting from this dynamic ranking model show significant improvements over the regular CRFs, which fail to consider ordinal relationships between predicted labels. We also observe substantial improvements over static ranking models that do not exploit temporal dependencies of ordinal predictions. We demonstrate the benefits of our algorithm on the Cohn-Kanade dataset for the dynamic facial emotion intensity prediction problem and illustrate its performance in a controlled synthetic setting.",
"Most of the existing methods for the recognition of faces and expressions consider either the expression-invariant face recognition problem or the identity-independent facial expression recognition problem. In this paper, we propose joint face and facial expression recognition using a dictionary-based component separation algorithm (DCS). In this approach, the given expressive face is viewed as a superposition of a neutral face component with a facial expression component which is sparse with respect to the whole image. This assumption leads to a dictionary-based component separation algorithm which benefits from the idea of sparsity and morphological diversity. This entails building data-driven dictionaries for neutral and expressive components. The DCS algorithm then uses these dictionaries to decompose an expressive test face into its constituent components. The sparse codes we obtain as a result of this decomposition are then used for joint face and expression recognition. Experiments on publicly available expression and face data sets show the effectiveness of our method.",
"",
"Facial expression can be seen as the dynamic variation of one's appearance over time. Successful recognition thus involves finding representations of high-dimensional spatiotemporal patterns that can be generalized to unseen facial morphologies and variations of the expression dynamics. In this paper, we propose to learn Random Forests from heterogeneous derivative features (e.g. facial fiducial point movements or texture variations) upon pairs of images. Those forests are conditioned on the expression label of the first frame to reduce the variability of the ongoing expression transitions. When testing on a specific frame of a video, pairs are created between this frame and the previous ones. Predictions for each previous frame are used to draw trees from Pairwise Conditional Random Forests (PCRF) whose pairwise outputs are averaged over time to produce robust estimates. As such, PCRF appears as a natural extension of Random Forests to learn spatio-temporal patterns, that leads to significant improvements over standard Random Forests as well as state-of-the-art approaches on several facial expression benchmarks.",
"",
"",
"Spatial-temporal relations among facial muscles carry crucial information about facial expressions yet have not been thoroughly exploited. One contributing factor for this is the limited ability of the current dynamic models in capturing complex spatial and temporal relations. Existing dynamic models can only capture simple local temporal relations among sequential events, or lack the ability for incorporating uncertainties. To overcome these limitations and take full advantage of the spatio-temporal information, we propose to model the facial expression as a complex activity that consists of temporally overlapping or sequential primitive facial events. We further propose the Interval Temporal Bayesian Network to capture these complex temporal relations among primitive facial events for facial expression modeling and recognition. Experimental results on benchmark databases demonstrate the feasibility of the proposed approach in recognizing facial expressions based purely on spatio-temporal relations among facial muscles, as well as its advantage over the existing methods."
]
} |
1701.03249 | 2583152362 | Detecting anomalies of a cyber-physical system (CPS), which is a complex system consisting of both physical and software parts, is important because a CPS often operates autonomously in an unpredictable environment. However, because of the ever-changing nature of a CPS and the lack of a precise model for it, detecting anomalies is still a challenging task. To address this problem, we propose applying an outlier detection method to a CPS log. By using a log obtained from an actual aquarium management system, we evaluated the effectiveness of our proposed method by analyzing the outliers it detected. By investigating the outliers with the developer of the system, we confirmed that some outliers indicate actual faults in the system. For example, our method detected failures of mutual exclusion in the control system that were unknown to the developer. Our method also detected transient losses of functionality and unexpected reboots. On the other hand, our method did not detect anomalies that were too numerous and similar. In addition, our method reported rare but unproblematic concurrent combinations of operations as anomalies. Thus, our approach is effective at finding anomalies, but there is still room for improvement. | There is extensive literature on anomaly detection of hybrid systems @cite_2 @cite_0 @cite_1 @cite_4 @cite_8 ; all of it presupposes a model of the system. In contrast, our method does not assume a system model, because preparing a model of a CPS and its environment is a difficult task. | {
"cite_N": [
"@cite_4",
"@cite_8",
"@cite_1",
"@cite_0",
"@cite_2"
],
"mid": [
"2129039806",
"",
"1592367452",
"2109559642",
"1495862233"
],
"abstract": [
"Many networked embedded sensing and control systems can be modeled as hybrid systems with interacting continuous and discrete dynamics. These systems present significant challenges for monitoring and diagnosis. Many existing model-based approaches focus on diagnostic reasoning assuming appropriate fault signatures have been generated. However, an important missing piece is the integration of model-based techniques with the acquisition and processing of sensor signals and the modeling of faults to support diagnostic reasoning. This paper addresses key modeling and computational problems at the interface between model-based diagnosis techniques and signature analysis to enable the efficient detection and isolation of incipient and abrupt faults in hybrid systems. A hybrid automata model that parameterizes abrupt and incipient faults is introduced. Based on this model, an approach for diagnoser design is presented. The paper also develops a novel mode estimation algorithm that uses model-based prediction to focus distributed processing signal algorithms. Finally, the paper describes a diagnostic system architecture that integrates the modeling, prediction, and diagnosis components. The implemented architecture is applied to fault diagnosis of a complex electro-mechanical machine, the Xerox DC265 printer, and the experimental results presented validate the approach. A number of design trade-offs that were made to support implementation of the algorithms for online applications are also described.",
"",
"Model-based diagnosis and mode estimation capabilities excel at diagnosing systems whose symptoms are clearly distinguished from normal behavior. A strength of mode estimation, in particular, is its ability to track a system's discrete dynamics as it moves between different behavioral modes. However, often failures bury their symptoms amongst the signal noise, until their effects become catastrophic.We introduce a hybrid mode estimation system that extracts mode estimates from subtle symptoms. First, we introduce a modeling formalism, called concurrent probabilistic hybrid automata (cPHA), that merge hidden Markov models (HMM) with continuous dynamical system models. Second, we introduce hybrid estimation as a method for tracking and diagnosing cPHA, by unifying traditional continuous state observers with HMM belief update. Finally, we introduce a novel, any-time, any-space algorithm for computing approximate hybrid estimates.",
"Techniques for diagnosing faults in hybrid systems that combine digital (discrete) supervisory controllers with analog (continuous) plants need to be different from those used for discrete or continuous systems. This paper presents a methodology for online tracking and diagnosis of hybrid systems. We demonstrate the effectiveness of the approach with experiments conducted on the fuel-transfer system of fighter aircraft",
"This article presents a number of complementary algorithms for detecting faults on-board operating robots, where a fault is defined as a deviation from expected behavior. The algorithms focus on faults that cannot directly be detected from current sensor values but require inference from a sequence of time-varying sensor values. Each algorithm provides an independent improvement over the basic approach. These improvements are not mutually exclusive, and the algorithms may be combined to suit the application domain. All the approaches presented require dynamic models representing the behavior of each of the fault and operational states. These models can be built from analytical models of the robot dynamics, data from simulation, or from the real robot. All the approaches presented detect faults from a finite number of known fault conditions, although there may potentially be a very large number of these faults."
]
} |
1701.03249 | 2583152362 | Detecting anomalies of a cyber-physical system (CPS), which is a complex system consisting of both physical and software parts, is important because a CPS often operates autonomously in an unpredictable environment. However, because of the ever-changing nature of a CPS and the lack of a precise model for it, detecting anomalies is still a challenging task. To address this problem, we propose applying an outlier detection method to a CPS log. By using a log obtained from an actual aquarium management system, we evaluated the effectiveness of our proposed method by analyzing the outliers it detected. By investigating the outliers with the developer of the system, we confirmed that some outliers indicate actual faults in the system. For example, our method detected failures of mutual exclusion in the control system that were unknown to the developer. Our method also detected transient losses of functionality and unexpected reboots. On the other hand, our method did not detect anomalies that were too numerous and similar. In addition, our method reported rare but unproblematic concurrent combinations of operations as anomalies. Thus, our approach is effective at finding anomalies, but there is still room for improvement. | There is also extensive literature on anomaly detection from software logs @cite_7 @cite_11 @cite_14 @cite_17 . These papers assume that a log consists of either purely discrete or purely real-valued data. In contrast, our method handles a log containing both discrete and real-valued data. | {
"cite_N": [
"@cite_14",
"@cite_17",
"@cite_7",
"@cite_11"
],
"mid": [
"2039157918",
"2511988939",
"1963574127",
"2560021099"
],
"abstract": [
"Surprisingly, console logs rarely help operators detect problems in large-scale datacenter services, for they often consist of the voluminous intermixing of messages from many software components written by independent developers. We propose a general methodology to mine this rich source of information to automatically detect system runtime problems. We first parse console logs by combining source code analysis with information retrieval to create composite features. We then analyze these features using machine learning to detect operational problems. We show that our method enables analyses that are impossible with previous methods because of its superior ability to create sophisticated features. We also show how to distill the results of our analysis to an operator-friendly one-page decision tree showing the critical messages associated with the detected problems. We validate our approach using the Darkstar online game server and the Hadoop File System, where we detect numerous real problems with high accuracy and few false positives. In the Hadoop case, we are able to analyze 24 million lines of console logs in 3 minutes. Our methodology works on textual console logs of any size and requires no changes to the service software, no human input, and no knowledge of the software's internals.",
"Cyber-physical systems (CPS), which integrate algorithmic control with physical processes, often consist of physically distributed components communicating over a network. A malfunctioning or compromised component in such a CPS can lead to costly consequences, especially in the context of public infrastructure. In this short paper, we argue for the importance of constructing invariants (or models) of the physical behaviour exhibited by CPS, motivated by their applications to the control, monitoring, and attestation of components. To achieve this despite the inherent complexity of CPS, we propose a new technique for learning invariants that combines machine learning with ideas from mutation testing. We present a preliminary study on a water treatment system that suggests the efficacy of this approach, propose strategies for establishing confidence in the correctness of invariants, then summarise some research questions and the steps we are taking to investigate them.",
"Predicting system failures can be of great benefit to managers that get a better command over system performance. Data that systems generate in the form of logs is a valuable source of information to predict system reliability. As such, there is an increasing demand of tools to mine logs and provide accurate predictions. However, interpreting information in logs poses some challenges. This study discusses how to effectively mining sequences of logs and provide correct predictions. The approach integrates different machine learning techniques to control for data brittleness, provide accuracy of model selection and validation, and increase robustness of classification results. We apply the proposed approach to log sequences of 25 different applications of a software system for telemetry and performance of cars. On this system, we discuss the ability of three well-known support vector machines - multilayer perceptron, radial basis function and linear kernels - to fit and predict defective log sequences. Our results show that a good analysis strategy provides stable, accurate predictions. Such strategy must at least require high fitting ability of models used for prediction. We demonstrate that such models give excellent predictions both on individual applications - e.g., 1 false positive rate, 94 true positive rate, and 95 precision - and across system applications - on average, 9 false positive rate, 78 true positive rate, and 95 precision. We also show that these results are similarly achieved for different degree of sequence defectiveness. To show how good are our results, we compare them with recent studies in system log analysis. We finally provide some recommendations that we draw reflecting on our study.",
"Anomaly detection plays an important role in managementof modern large-scale distributed systems. Logs, whichrecord system runtime information, are widely used for anomalydetection. Traditionally, developers (or operators) often inspectthe logs manually with keyword search and rule matching. Theincreasing scale and complexity of modern systems, however, make the volume of logs explode, which renders the infeasibilityof manual inspection. To reduce manual effort, many anomalydetection methods based on automated log analysis are proposed. However, developers may still have no idea which anomalydetection methods they should adopt, because there is a lackof a review and comparison among these anomaly detectionmethods. Moreover, even if developers decide to employ ananomaly detection method, re-implementation requires a nontrivialeffort. To address these problems, we provide a detailedreview and evaluation of six state-of-the-art log-based anomalydetection methods, including three supervised methods and threeunsupervised methods, and also release an open-source toolkitallowing ease of reuse. These methods have been evaluated ontwo publicly-available production log datasets, with a total of15,923,592 log messages and 365,298 anomaly instances. Webelieve that our work, with the evaluation results as well asthe corresponding findings, can provide guidelines for adoptionof these methods and provide references for future development."
]
} |
1701.03249 | 2583152362 | Detecting anomalies of a cyber-physical system (CPS), which is a complex system consisting of both physical and software parts, is important because a CPS often operates autonomously in an unpredictable environment. However, because of the ever-changing nature of a CPS and the lack of a precise model for it, detecting anomalies is still a challenging task. To address this problem, we propose applying an outlier detection method to a CPS log. By using a log obtained from an actual aquarium management system, we evaluated the effectiveness of our proposed method by analyzing the outliers it detected. By investigating the outliers with the developer of the system, we confirmed that some outliers indicate actual faults in the system. For example, our method detected failures of mutual exclusion in the control system that were unknown to the developer. Our method also detected transient losses of functionality and unexpected reboots. On the other hand, our method did not detect anomalies that were too numerous and similar. In addition, our method reported rare but unproblematic concurrent combinations of operations as anomalies. Thus, our approach is effective at finding anomalies, but there is still room for improvement. | We handled vectors with very high dimensions in our experiment, pushing the outlier detection method to its limit. Outlier detection in high-dimensional space is an active research area @cite_13 . In addition to LOF, we attempted to use the high contrast subspaces (HiCS) algorithm @cite_12 and the correlation outlier probability (COP) algorithm @cite_6 ; however, their computations could not be completed within reasonable time and memory constraints. | {
"cite_N": [
"@cite_13",
"@cite_6",
"@cite_12"
],
"mid": [
"141379055",
"2045765911",
"2000661457"
],
"abstract": [
"Many real data sets are very high dimensional. In some scenarios, real data sets may contain hundreds or thousands of dimensions. With increasing dimensionality, many of the conventional outlier detection methods do not work very effectively. This is an artifact of the well-known curse of dimensionality. In high-dimensional space, the data becomes sparse, and the true outliers become masked by the noise effects of multiple irrelevant dimensions, when analyzed in full dimensionality.",
"In this paper, we propose a novel outlier detection model to find outliers that deviate from the generating mechanisms of normal instances by considering combinations of different subsets of attributes, as they occur when there are local correlations in the data set. Our model enables to search for outliers in arbitrarily oriented subspaces of the original feature space. We show how in addition to an outlier score, our model also derives an explanation of the outlierness that is useful in investigating the results. Our experiments suggest that our novel method can find different outliers than existing work and can be seen as a complement of those approaches.",
"Outlier mining is a major task in data analysis. Outliers are objects that highly deviate from regular objects in their local neighborhood. Density-based outlier ranking methods score each object based on its degree of deviation. In many applications, these ranking methods degenerate to random listings due to low contrast between outliers and regular objects. Outliers do not show up in the scattered full space, they are hidden in multiple high contrast subspace projections of the data. Measuring the contrast of such subspaces for outlier rankings is an open research challenge. In this work, we propose a novel subspace search method that selects high contrast subspaces for density-based outlier ranking. It is designed as pre-processing step to outlier ranking algorithms. It searches for high contrast subspaces with a significant amount of conditional dependence among the subspace dimensions. With our approach, we propose a first measure for the contrast of subspaces. Thus, we enhance the quality of traditional outlier rankings by computing outlier scores in high contrast projections only. The evaluation on real and synthetic data shows that our approach outperforms traditional dimensionality reduction techniques, naive random projections as well as state-of-the-art subspace search techniques and provides enhanced quality for outlier ranking."
]
} |
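For context, the LOF score that the record above relies on can be sketched in a few lines of numpy (a toy reimplementation for illustration, not the library code used in the experiment; it is O(n^2) in memory and would not scale to the high-dimensional setting just discussed):

```python
import numpy as np

def lof_scores(X, k=3):
    """Local Outlier Factor: ratio of the average neighbour density to a
    point's own local reachability density; scores well above 1 flag outliers."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)                    # exclude self-distances
    knn = np.argsort(D, axis=1)[:, :k]             # indices of k nearest neighbours
    kdist = D[np.arange(n), knn[:, -1]]            # k-distance of each point
    # reachability distance from p to neighbour o: max(d(p, o), k-distance(o))
    reach = np.maximum(D[np.arange(n)[:, None], knn], kdist[knn])
    lrd = k / reach.sum(axis=1)                    # local reachability density
    return lrd[knn].mean(axis=1) / lrd

# a tight cluster plus one far-away point
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5], [10, 10.0]])
scores = lof_scores(X, k=3)
print(scores.round(2))   # the last point scores far above 1; cluster points near 1
```

Cluster members get scores close to 1 while the isolated point's score is an order of magnitude larger, which is the property the outlier ranking exploits.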
1701.03091 | 2584112015 | The ability of the RDF data model to link data from heterogeneous domains has led to an explosive growth of RDF data. Evaluating SPARQL queries over large RDF data has therefore become crucial for the semantic web community. However, due to the graph nature of RDF data, evaluating SPARQL queries in relational databases and common data-parallel systems requires many joins and is inefficient. On the other hand, the enormity of datasets that are graph in nature, such as social network data, has led the database community to develop graph-parallel processing systems that support iterative graph computations efficiently. In this work, we take advantage of the graph representation of RDF data and exploit GraphX, a new graph processing system based on Spark. We propose a subgraph matching algorithm, compatible with the GraphX programming model, to evaluate SPARQL queries. Experiments are performed to show the system's scalability in handling large datasets. | Among the systems designed for dealing with RDF data, we focus on the graph-oriented ones. gStore @cite_1 is a graph-oriented RDF store. This system uses a graph storage model for storing RDF data. gStore transforms RDF data into a so-called signature graph. It implements an efficient subgraph matching mechanism using an index to evaluate SPARQL queries. Also, the signature graph representation enables gStore to answer wildcard queries. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2121041488"
],
"abstract": [
"We address efficient processing of SPARQL queries over RDF datasets. The proposed techniques, incorporated into the gStore system, handle, in a uniform and scalable manner, SPARQL queries with wildcards and aggregate operators over dynamic RDF datasets. Our approach is graph based. We store RDF data as a large graph and also represent a SPARQL query as a query graph. Thus, the query answering problem is converted into a subgraph matching problem. To achieve efficient and scalable query processing, we develop an index, together with effective pruning rules and efficient search algorithms. We propose techniques that use this infrastructure to answer aggregation queries. We also propose an effective maintenance algorithm to handle online updates over RDF repositories. Extensive experiments confirm the efficiency and effectiveness of our solutions."
]
} |
1701.03091 | 2584112015 | The ability of the RDF data model to link data from heterogeneous domains has led to an explosive growth of RDF data. So, evaluating SPARQL queries over large RDF data has been crucial for the semantic web community. However, due to the graph nature of RDF data, evaluating SPARQL queries in relational databases and common data-parallel systems needs a lot of joins and is inefficient. On the other hand, the enormity of datasets that are graph in nature such as social network data, has led the database community to develop graph-parallel processing systems to support iterative graph computations efficiently. In this work we take advantage of the graph representation of RDF data and exploit GraphX, a new graph processing system based on Spark. We propose a subgraph matching algorithm, compatible with the GraphX programming model to evaluate SPARQL queries. Some experiments are performed to show the system scalability to handle large datasets. | Another recent graph-based approach to answer SPARQL queries is @cite_9 . This system is based on GraphLab and employs a vertex-centric subgraph matching algorithm. This work uses the GAS mechanism for implementing the algorithm. However, it has some shortcomings in answering star queries and also it cannot evaluate queries with variables in the position of predicates. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1997856897"
],
"abstract": [
"In this paper we explore the fusion of two largely disparate but related communities, that of Big Data and the Semantic Web. Due to the rise of large real-world graph datasets, a number of graph-centric parallel platforms have been proposed and developed. Many of these platforms, notable among them Pregel, Giraph, GraphLab, GraphChi, the Graph Processing System, and GraphX, present a programming interface that is vertex-centric, a variant of Valiant's Bulk Synchronous Parallel model. These platforms seek to address growing analytical needs for very large graph datasets arising from a variety of sources, such as social, biological, and computer networks. With this growing interest in large graphs, there has also been a concomitant rise in the Semantic Web, which describes data in terms of subject-predicate-object triples, or in other words edges of a graph where the predicate is a directed labeled edge between the two vertices, the subject and object. Despite the graph-oriented nature of Semantic Web data, and the advent of an increasingly large web of data, no one has explored the usage of these maturing graph platforms to analyze Semantic Web data. In this paper we outline a method of implementing SPARQL queries within the GraphLab framework, obtaining good scaling to the size of our system, 51 nodes."
]
} |
1701.03091 | 2584112015 | The ability of the RDF data model to link data from heterogeneous domains has led to an explosive growth of RDF data. So, evaluating SPARQL queries over large RDF data has been crucial for the semantic web community. However, due to the graph nature of RDF data, evaluating SPARQL queries in relational databases and common data-parallel systems needs a lot of joins and is inefficient. On the other hand, the enormity of datasets that are graph in nature such as social network data, has led the database community to develop graph-parallel processing systems to support iterative graph computations efficiently. In this work we take advantage of the graph representation of RDF data and exploit GraphX, a new graph processing system based on Spark. We propose a subgraph matching algorithm, compatible with the GraphX programming model to evaluate SPARQL queries. Some experiments are performed to show the system scalability to handle large datasets. | Using Spark for answering SPARQL queries has been investigated briefly in a short paper @cite_12 . The paper introduces SparkRDF as a graph processing system for evaluating only SPARQL queries. However, the paper does not provide implementation details of the system. | {
"cite_N": [
"@cite_12"
],
"mid": [
"2403597024"
],
"abstract": [
"With the explosive growth of semantic data on the Web over the past years, many large-scale RDF knowledge bases with billions of facts are being generated. This poses significant challenges for the storage and retrieval of big RDF graphs. In this paper, we introduce the SparkRDF, an elastic discreted semantic graph processing engine with distributed memory. To reduce the high I/O and communication costs for distributed platforms, SparkRDF implements SPARQL query based on Spark, a novel in-memory distributed computing framework. All the intermediate results are cached in the distributed memory to accelerate the process of iterative join. To reduce the search space and memory overhead, SparkRDF splits the RDF graph into the multi-layer subgraphs based on the relations and classes. For SPARQL query optimization, SparkRDF generates an optimal execution plan for join queries, leading to effective reduction on the size of intermediate results, the number of joins and the cost of communication. Our extensive evaluation demonstrates the efficiency of our system."
]
} |
1701.03163 | 2949171418 | We propose UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of head attachment rules. It features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training, and it is competitive with a delexicalized transfer system. UDP offers a linguistically sound unsupervised alternative to cross-lingual parsing for UD, which can be used as a baseline for such systems. The parser has very few parameters and is distinctly robust to domain change across languages. | Recent years have seen exciting developments in cross-lingual linguistic structure prediction based on transfer or projection of POS and dependencies @cite_10 @cite_17 . These works mainly use supervised learning and domain adaptation techniques for the target language. | {
"cite_N": [
"@cite_10",
"@cite_17"
],
"mid": [
"2142523187",
"2152691628"
],
"abstract": [
"We describe a novel approach for inducing unsupervised part-of-speech taggers for languages that have no labeled training data, but have translated text in a resource-rich language. Our method does not assume any knowledge about the target language (in particular no tagging dictionary is assumed), making it applicable to a wide array of resource-poor languages. We use graph-based label propagation for cross-lingual knowledge transfer and use the projected labels as features in an unsupervised model (Berg-Kirkpatrick et al., 2010). Across eight European languages, our approach results in an average absolute improvement of 10.4% over a state-of-the-art baseline, and 16.7% over vanilla hidden Markov models induced with the Expectation Maximization algorithm.",
"We present a simple method for transferring dependency parsers from source languages with labeled training data to target languages without labeled training data. We first demonstrate that delexicalized parsers can be directly transferred between languages, producing significantly higher accuracies than unsupervised parsers. We then use a constraint driven learning algorithm where constraints are drawn from parallel corpora to project the final parser. Unlike previous work on projecting syntactic resources, we show that simple methods for introducing multiple source languages can significantly improve the overall quality of the resulting parsers. The projected parsers from our system result in state-of-the-art performance when compared to previously studied unsupervised and projected parsing systems across eight different languages."
]
} |
1701.03163 | 2949171418 | We propose UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of head attachment rules. It features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training, and it is competitive with a delexicalized transfer system. UDP offers a linguistically sound unsupervised alternative to cross-lingual parsing for UD, which can be used as a baseline for such systems. The parser has very few parameters and is distinctly robust to domain change across languages. | The first group of approaches deals with annotation projection @cite_5 , whereby parallel corpora are used to transfer annotations between resource-rich source languages and low-resource target languages. Projection relies on the availability and quality of parallel corpora, source-side taggers and parsers, but also tokenizers, sentence aligners, and word aligners for sources and targets. were the first to project syntactic dependencies, and improved on their projection algorithm. Current state of the art in cross-lingual dependency parsing involves leveraging parallel corpora for annotation projection @cite_21 @cite_1 . | {
"cite_N": [
"@cite_5",
"@cite_21",
"@cite_1"
],
"mid": [
"2016630033",
"2114609248",
"2250313959"
],
"abstract": [
"This paper describes a system and set of algorithms for automatically inducing stand-alone monolingual part-of-speech taggers, base noun-phrase bracketers, named-entity taggers and morphological analyzers for an arbitrary foreign language. Case studies include French, Chinese, Czech and Spanish. Existing text analysis tools for English are applied to bilingual text corpora and their output projected onto the second language via statistically derived word alignments. Simple direct annotation projection is quite noisy, however, even with optimal alignments. Thus this paper presents noise-robust tagger, bracketer and lemmatizer training procedures capable of accurate system bootstrapping from noisy and incomplete initial projections. Performance of the induced stand-alone part-of-speech tagger applied to French achieves 96% core part-of-speech (POS) tag accuracy, and the corresponding induced noun-phrase bracketer exceeds 91% F-measure. The induced morphological analyzer achieves over 99% lemmatization accuracy on the complete French verbal system. This achievement is particularly noteworthy in that it required absolutely no hand-annotated training data in the given language, and virtually no language-specific knowledge or resources beyond raw text. Performance also significantly exceeds that obtained by direct annotation projection.",
"We present a novel approach for inducing unsupervised dependency parsers for languages that have no labeled training data, but have translated text in a resource-rich language. We train probabilistic parsing models for resource-poor languages by transferring cross-lingual knowledge from resource-rich language with entropy regularization. Our method can be used as a purely monolingual dependency parser, requiring no human translations for the test data, thus making it applicable to a wide range of resource-poor languages. We perform experiments on three Data sets — Version 1.0 and version 2.0 of Google Universal Dependency Treebanks and Treebanks from CoNLL shared-tasks, across ten languages. We obtain state-of-the-art performance on all the three data sets when compared with previously studied unsupervised and projected parsing systems.",
"We present a novel method for the cross-lingual transfer of dependency parsers. Our goal is to induce a dependency parser in a target language of interest without any direct supervision: instead we assume access to parallel translations between the target and one or more source languages, and to supervised parsers in the source language(s). Our key contributions are to show the utility of dense projected structures when training the target language parser, and to introduce a novel learning algorithm that makes use of dense structures. Results on several languages show an absolute improvement of 5.51% in average dependency accuracy over the state-of-the-art method of (Ma and Xia, 2014). Our average dependency accuracy of 82.18% compares favourably to the accuracy of fully supervised methods."
]
} |
1701.03163 | 2949171418 | We propose UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of head attachment rules. It features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training, and it is competitive with a delexicalized transfer system. UDP offers a linguistically sound unsupervised alternative to cross-lingual parsing for UD, which can be used as a baseline for such systems. The parser has very few parameters and is distinctly robust to domain change across languages. | The second group of approaches deals with transferring source parsing models to target languages. were the first to introduce the idea of delexicalization: removing lexical features by training and cross-lingually applying parsers solely on POS sequences. and independently extended the approach by using multiple sources, requiring uniform POS and dependency representations @cite_2 . | {
"cite_N": [
"@cite_2"
],
"mid": [
"2258701653"
],
"abstract": [
"How do we parse the languages for which no treebanks are available? This contribution addresses the cross-lingual viewpoint on statistical dependency parsing, in which we attempt to make use of resource-rich source language treebanks to build and adapt models for the under-resourced target languages. We outline the benefits, and indicate the drawbacks of the current major approaches. We emphasize synthetic treebanking: the automatic creation of target language treebanks by means of annotation projection and machine translation. We present competitive results in cross-lingual dependency parsing using a combination of various techniques that contribute to the overall success of the method. We further include a detailed discussion about the impact of part-of-speech label accuracy on parsing results that provide guidance in practical applications of cross-lingual methods for truly under-resourced languages."
]
} |
1701.03163 | 2949171418 | We propose UDP, the first training-free parser for Universal Dependencies (UD). Our algorithm is based on PageRank and a small set of head attachment rules. It features two-step decoding to guarantee that function words are attached as leaf nodes. The parser requires no training, and it is competitive with a delexicalized transfer system. UDP offers a linguistically sound unsupervised alternative to cross-lingual parsing for UD, which can be used as a baseline for such systems. The parser has very few parameters and is distinctly robust to domain change across languages. | These two characteristics make our parser unsupervised. Data-driven unsupervised dependency parsing is now a well-established discipline @cite_19 @cite_22 @cite_16 . Still, the performance of these parsers falls far behind the approaches involving any sort of supervision. | {
"cite_N": [
"@cite_19",
"@cite_16",
"@cite_22"
],
"mid": [
"2153568660",
"",
"1527783480"
],
"abstract": [
"We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data.",
"",
"We present three approaches for unsupervised grammar induction that are sensitive to data complexity and apply them to Klein and Manning's Dependency Model with Valence. The first, Baby Steps, bootstraps itself via iterated learning of increasingly longer sentences and requires no initialization. This method substantially exceeds Klein and Manning's published scores and achieves 39.4% accuracy on Section 23 (all sentences) of the Wall Street Journal corpus. The second, Less is More, uses a low-complexity subset of the available data: sentences up to length 15. Focusing on fewer but simpler examples trades off quantity against ambiguity; it attains 44.1% accuracy, using the standard linguistically-informed prior and batch training, beating state-of-the-art. Leapfrog, our third heuristic, combines Less is More with Baby Steps by mixing their models of shorter sentences, then rapidly ramping up exposure to the full training set, driving up accuracy to 45.0%. These trends generalize to the Brown corpus; awareness of data complexity may improve other parsing models and unsupervised algorithms."
]
} |
1701.02718 | 2572772963 | Humans have rich understanding of liquid containers and their contents; for example, we can effortlessly pour water from a pitcher to a cup. Doing so requires estimating the volume of the cup, approximating the amount of water in the pitcher, and predicting the behavior of water when we tilt the pitcher. Very little attention in computer vision has been paid to liquids and their containers. In this paper, we study liquid containers and their contents, and propose methods to estimate the volume of containers, approximate the amount of liquid in them, and perform comparative volume estimations all from a single RGB image. Furthermore, we show the results of the proposed model for predicting the behavior of liquids inside containers when one tilts the containers. We also introduce a new dataset of Containers Of liQuid contEnt (COQE) that contains more than 5,000 images of 10,000 liquid containers in context labelled with volume, amount of content, bounding box annotation, and corresponding similar 3D CAD models. | In @cite_15 , a hybrid discriminative-generative approach is proposed to detect transparent objects such as bottles and glasses. @cite_4 propose a method for detection, 3D pose estimation, and 3D reconstruction of glassware. @cite_24 also propose a method for reconstruction of 3D scenes that include transparent objects. Our work goes beyond detection and reconstruction since we perform reasoning about higher-level tasks such as content estimation or pour prediction. | {
"cite_N": [
"@cite_24",
"@cite_15",
"@cite_4"
],
"mid": [
"1937507046",
"2140188952",
"2413354348"
],
"abstract": [
"We present a practical and inexpensive method to reconstruct 3D scenes that include piece-wise planar transparent objects. Our work is motivated by the need for automatically generating 3D models of interior scenes, in which glass structures are common. These large structures are often invisible to cameras or even our human visual system. Existing 3D reconstruction methods for transparent objects are usually not applicable in such a room-size reconstruction setting. Our approach augments a regular depth camera (e.g., the Microsoft Kinect camera) with a single ultrasonic sensor, which is able to measure distance to any objects, including transparent surfaces. We present a novel sensor fusion algorithm that first segments the depth map into different categories such as opaque / transparent / infinity (e.g., too far to measure) and then updates the depth map based on the segmentation outcome. Our current hardware setup can generate only one additional point measurement per frame, yet our fusion algorithm is able to generate satisfactory reconstruction results based on our probabilistic model. We highlight the performance in many challenging indoor benchmarks.",
"Existing methods for visual recognition based on quantized local features can perform poorly when local features exist on transparent surfaces, such as glass or plastic objects. There are characteristic patterns to the local appearance of transparent objects, but they may not be well captured by distances to individual examples or by a local pattern codebook obtained by vector quantization. The appearance of a transparent patch is determined in part by the refraction of a background pattern through a transparent medium: the energy from the background usually dominates the patch appearance. We model transparent local patch appearance using an additive model of latent factors: background factors due to scene content, and factors which capture a local edge energy distribution characteristic of the refraction. We implement our method using a novel LDA-SIFT formulation which performs LDA prior to any vector quantization step; we discover latent topics which are characteristic of particular transparent patches and quantize the SIFT space into transparent visual words according to the latent topic dimensions. No knowledge of the background scene is required at test time; we show examples recognizing transparent glasses in a domestic environment.",
""
]
} |
1701.02718 | 2572772963 | Humans have rich understanding of liquid containers and their contents; for example, we can effortlessly pour water from a pitcher to a cup. Doing so requires estimating the volume of the cup, approximating the amount of water in the pitcher, and predicting the behavior of water when we tilt the pitcher. Very little attention in computer vision has been paid to liquids and their containers. In this paper, we study liquid containers and their contents, and propose methods to estimate the volume of containers, approximate the amount of liquid in them, and perform comparative volume estimations all from a single RGB image. Furthermore, we show the results of the proposed model for predicting the behavior of liquids inside containers when one tilts the containers. We also introduce a new dataset of Containers Of liQuid contEnt (COQE) that contains more than 5,000 images of 10,000 liquid containers in context labelled with volume, amount of content, bounding box annotation, and corresponding similar 3D CAD models. | Object sizes are inferred by @cite_40 using a combination of visual and linguistic cues. In this paper, we focus only on visual cues. Size estimates have also been used by @cite_32 @cite_6 to better estimate the geometry of scenes. The result of 3D object detectors lin13,song14,gupta15 can be used to obtain a rough estimate of the volume of the containers. However, they are typically designed for RGBD images. Moreover, the output of these detectors cannot be used for estimation of the amount of content or pouring prediction. Depth estimation methods from single RGB images @cite_2 @cite_35 @cite_51 can also be used for computing the relative size of containers. | {
"cite_N": [
"@cite_35",
"@cite_32",
"@cite_6",
"@cite_40",
"@cite_2",
"@cite_51"
],
"mid": [
"2026203852",
"2146352414",
"",
"",
"2109443835",
"2951234442"
],
"abstract": [
"We consider the problem of estimating the depth of each pixel in a scene from a single monocular image. Unlike traditional approaches [18, 19], which attempt to map from appearance features to depth directly, we first perform a semantic segmentation of the scene and use the semantic labels to guide the 3D reconstruction. This approach provides several advantages: By knowing the semantic class of a pixel or region, depth and geometry constraints can be easily enforced (e.g., “sky” is far away and “ground” is horizontal). In addition, depth can be more readily predicted by measuring the difference in appearance with respect to a given semantic class. For example, a tree will have more uniform appearance in the distance than it does close up. Finally, the incorporation of semantic features allows us to achieve state-of-the-art results with a significantly simpler model than previous works.",
"Image understanding requires not only individually estimating elements of the visual world but also capturing the interplay among them. In this paper, we provide a framework for placing local object detection in the context of the overall 3D scene by modeling the interdependence of objects, surface orientations, and camera viewpoint. Most object detection methods consider all scales and locations in the image as equally likely. We show that with probabilistic estimates of 3D geometry, both in terms of surfaces and world coordinates, we can put objects into perspective and model the scale and location variance in the image. Our approach reflects the cyclical nature of the problem by allowing probabilistic object hypotheses to refine geometry and vice-versa. Our framework allows painless substitution of almost any object detector and is easily extended to include other aspects of image understanding. Our results confirm the benefits of our integrated approach.",
"",
"",
"When we look at a picture, our prior knowledge about the world allows us to resolve some of the ambiguities that are inherent to monocular vision, and thereby infer 3d information about the scene. We also recognize different objects, decide on their orientations, and identify how they are connected to their environment. Focusing on the problem of autonomous 3d reconstruction of indoor scenes, in this paper we present a dynamic Bayesian network model capable of resolving some of these ambiguities and recovering 3d information for many images. Our model assumes a \"floor-wall\" geometry on the scene and is trained to recognize the floor-wall boundary in each column of the image. When the image is produced under perspective geometry, we show that this model can be used for 3d reconstruction from a single image. To our knowledge, this was the first monocular approach to automatically recover 3d reconstructions from single indoor images.",
"Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation."
]
} |
1701.02718 | 2572772963 | Humans have rich understanding of liquid containers and their contents; for example, we can effortlessly pour water from a pitcher to a cup. Doing so requires estimating the volume of the cup, approximating the amount of water in the pitcher, and predicting the behavior of water when we tilt the pitcher. Very little attention in computer vision has been paid to liquids and their containers. In this paper, we study liquid containers and their contents, and propose methods to estimate the volume of containers, approximate the amount of liquid in them, and perform comparative volume estimations all from a single RGB image. Furthermore, we show the results of the proposed model for predicting the behavior of liquids inside containers when one tilts the containers. We also introduce a new dataset of Containers Of liQuid contEnt (COQE) that contains more than 5,000 images of 10,000 liquid containers in context labelled with volume, amount of content, bounding box annotation, and corresponding similar 3D CAD models. | Our pouring prediction task shares similarities with @cite_30 . In @cite_30 , they predict the sequence of movement of rigid objects for a given force. In this work, we are concerned with liquids that have different dynamics and appearance statistics than solid objects. | {
"cite_N": [
"@cite_30"
],
"mid": [
"2301880263"
],
"abstract": [
"What happens if one pushes a cup sitting on a table toward the edge of the table? How about pushing a desk against a wall? In this paper, we study the problem of understanding the movements of objects as a result of applying external forces to them. For a given force vector applied to a specific location in an image, our goal is to predict long-term sequential movements caused by that force. Doing so entails reasoning about scene geometry, objects, their attributes, and the physical rules that govern the movements of objects. We design a deep neural network model that learns long-term sequential dependencies of object movements while taking into account the geometry and appearance of the scene by combining Convolutional and Recurrent Neural Networks. Training our model requires a large-scale dataset of object movements caused by external forces. To build a dataset of forces in scenes, we reconstructed all images in SUN RGB-D dataset in a physics simulator to estimate the physical movements of objects caused by external forces applied to them. Our Forces in Scenes (ForScene) dataset contains 65,000 object movements in 3D which represent a variety of external forces applied to different types of objects. Our experimental evaluations show that the challenging task of predicting long-term movements of objects as their reaction to external forces is possible from a single image. The code and dataset are available at: http://allenai.org/plato/forces."
]
} |
1701.02718 | 2572772963 | Humans have rich understanding of liquid containers and their contents; for example, we can effortlessly pour water from a pitcher to a cup. Doing so requires estimating the volume of the cup, approximating the amount of water in the pitcher, and predicting the behavior of water when we tilt the pitcher. Very little attention in computer vision has been paid to liquids and their containers. In this paper, we study liquid containers and their contents, and propose methods to estimate the volume of containers, approximate the amount of liquid in them, and perform comparative volume estimations all from a single RGB image. Furthermore, we show the results of the proposed model for predicting the behavior of liquids inside containers when one tilts the containers. We also introduce a new dataset of Containers Of liQuid contEnt (COQE) that contains more than 5,000 images of 10,000 liquid containers in context labelled with volume, amount of content, bounding box annotation, and corresponding similar 3D CAD models. | There are a number of works in the robotics community that tackle the problem of liquid pouring @cite_8 @cite_7 @cite_38 @cite_31 @cite_3 @cite_42 . However, these approaches either have been designed for synthetic environments @cite_31 @cite_3 or they have been tested in lab settings and with additional sensors @cite_8 @cite_38 @cite_43 @cite_42 . Fluid simulation is a popular topic in computer graphics @cite_17 @cite_44 @cite_23 . Our problem is different since we predict the liquid behavior from a single image and are not concerned about rendering. | {
"cite_N": [
"@cite_38",
"@cite_7",
"@cite_8",
"@cite_42",
"@cite_3",
"@cite_44",
"@cite_43",
"@cite_23",
"@cite_31",
"@cite_17"
],
"mid": [
"",
"",
"2118459920",
"2531280530",
"2093550895",
"",
"2076235166",
"588441650",
"2215032184",
""
],
"abstract": [
"",
"",
"When describing robot motion with dynamic movement primitives (DMPs), goal (trajectory endpoint), shape and temporal scaling parameters are used. In reinforcement learning with DMPs, usually goals and temporal scaling parameters are predefined and only the weights for shaping a DMP are learned. Many tasks, however, exist where the best goal position is not a priori known, requiring to learn it. Thus, here we specifically address the question of how to simultaneously combine goal and shape parameter learning. This is a difficult problem because learning of both parameters could easily interfere in a destructive way. We apply value function approximation techniques for goal learning and direct policy search methods for shape learning. Specifically, we use ''policy improvement with path integrals'' and ''natural actor critic'' for the policy search. We solve a learning-to-pour-liquid task in simulations as well as using a Pa10 robot arm. Results for learning from scratch, learning initialized by human demonstration, as well as for modifying the tool for the learned DMPs are presented. We observe that the combination of goal and shape learning is stable and robust within large parameter regimes. Learning converges quickly even in the presence of disturbances, which makes this combined method suitable for robotic applications.",
"Pouring a specific amount of liquid is a challenging task. In this paper we develop methods for robots to use visual feedback to perform closed-loop control for pouring liquids. We propose both a model-based and a model-free method utilizing deep learning for estimating the volume of liquid in a container. Our results show that the model-free method is better able to estimate the volume. We combine this with a simple PID controller to pour specific amounts of liquid, and show that the robot is able to achieve an average 38ml deviation from the target amount. To our knowledge, this is the first use of raw visual feedback to pour liquids in robotics.",
"Abstract Autonomous robots that are to perform complex everyday tasks such as making pancakes have to understand how the effects of an action depend on the way the action is executed. Within Artificial Intelligence, classical planning reasons about whether actions are executable, but makes the assumption that the actions will succeed (with some probability). In this work, we have designed, implemented, and analyzed a framework that allows us to envision the physical effects of robot manipulation actions. We consider envisioning to be a qualitative reasoning method that reasons about actions and their effects based on simulation-based projections. Thereby it allows a robot to infer what could happen when it performs a task in a certain way. This is achieved by translating a qualitative physics problem into a parameterized simulation problem; performing a detailed physics-based simulation of a robot plan; logging the state evolution into appropriate data structures; and then translating these sub-symbolic data structures into interval-based first-order symbolic, qualitative representations, called timelines. The result of the envisioning is a set of detailed narratives represented by timelines which are then used to infer answers to qualitative reasoning problems. By envisioning the outcome of actions before committing to them, a robot is able to reason about physical phenomena and can therefore prevent itself from ending up in unwanted situations. Using this approach, robots can perform manipulation tasks more efficiently, robustly, and flexibly, and they can even successfully accomplish previously unknown variations of tasks.",
"",
"One of the key challenges for learning manipulation skills is generalizing between different objects. The robot should adapt both its actions and the task constraints to the geometry of the object being manipulated. In this paper, we propose computing geometric parameters of novel objects by warping known objects to match their shape. We refer to the parameters computed in this manner as warped parameters, as they are defined as functions of the warped object's point cloud. The warped parameters form the basis of the features for the motor skill learning process, and they are used to generalize between different objects. The proposed method was successfully evaluated on a pouring task both in simulation and on a real robot.",
"Animating fluids like water, smoke, and fire using physics-based simulation is increasingly important in visual effects, in particular in movies, like The Day After Tomorrow, and in computer games. This book provides a practical introduction to fluid simulation for graphics. The focus is on animating fully three-dimensional incompressible flow, from understanding the math and the algorithms to the actual implementation.",
"We explore a temporal decomposition of dynamics in order to enhance policy learning with unknown dynamics. There are model-free methods and model-based methods for policy learning with unknown dynamics, but both approaches have problems: in general, model-free methods have less generalization ability, while model-based methods are often limited by the assumed model structure or need to gather many samples to make models. We consider a temporal decomposition of dynamics to make learning models easier. To obtain a policy, we apply differential dynamic programming (DDP). A feature of our method is that we consider decomposed dynamics even when there is no action to be taken, which allows us to decompose dynamics more flexibly. Consequently learned dynamics become more accurate. Our DDP is a first-order gradient descent algorithm with a stochastic evaluation function. In DDP with learned models, typically there are many local maxima. In order to avoid them, we consider multiple criteria evaluation functions. In addition to the stochastic evaluation function, we use a reference value function. This method was verified with pouring simulation experiments where we created complicated dynamics. The results show that we can optimize actions with DDP while learning dynamics models.",
""
]
} |
1701.02641 | 2574004477 | Vehicle-to-vehicle (V2V) communication is a crucial component of the future autonomous driving systems since it enables improved awareness of the surrounding environment, even without extensive processing of sensory information. However, V2V communication is prone to failures and delays, so a distributed fault-tolerant approach is required for safe and efficient transportation. In this paper, we focus on the intersection crossing (IC) problem with autonomous vehicles that cooperate via V2V communications, and propose a novel distributed IC algorithm that can handle an unknown number of communication failures. Our analysis shows that both safety and liveness requirements are satisfied in all realistic situations. We also found, based on a real data set, that the crossing delay is only slightly increased even in the presence of highly correlated failures. | In @cite_13 , the authors provide a survey on vehicle detection techniques, with a focus on vision-based detection. The sensors are first classified into two groups: active (such as lasers, radars and lidars) and passive (such as cameras and acoustic sensors), and then compared to each other in terms of range, cost and other features. The radar is considered the best active sensor, since it provides long-range ( @math ) real-time detection even under bad weather (e.g., foggy, rainy) conditions. On the other hand, a radar is not able to estimate the shape of the object, which can be done with lidar, a costly alternative. These problems encouraged the authors to focus on passive sensors such as cameras. Cameras are low-cost sensors, able to provide very precise information about the objects. However, their main drawbacks are a high complexity of data processing, low range at night, and sensitivity to weather conditions. Note that the authors did not consider any kind of communication between vehicles, which would resolve some of the sensors' problems. | {
"cite_N": [
"@cite_13"
],
"mid": [
"2088173505"
],
"abstract": [
"Over the past decade, vision-based vehicle detection techniques for road safety improvement have gained an increasing amount of attention. Unfortunately, the techniques suffer from robustness due to huge variability in vehicle shape (particularly for motorcycles), cluttered environment, various illumination conditions, and driving behavior. In this paper, we provide a comprehensive survey in a systematic approach about the state-of-the-art on-road vision-based vehicle detection and tracking systems for collision avoidance systems (CASs). This paper is structured based on a vehicle detection processes starting from sensor selection to vehicle detection and tracking. Techniques in each process step are reviewed and analyzed individually. Two main contributions in this paper are the following: survey on motorcycle detection techniques and the sensor comparison in terms of cost and range parameters. Finally, the survey provides an optimal choice with a low cost and reliable CAS design in vehicle industries."
]
} |
1701.02641 | 2574004477 | Vehicle-to-vehicle (V2V) communication is a crucial component of the future autonomous driving systems since it enables improved awareness of the surrounding environment, even without extensive processing of sensory information. However, V2V communication is prone to failures and delays, so a distributed fault-tolerant approach is required for safe and efficient transportation. In this paper, we focus on the intersection crossing (IC) problem with autonomous vehicles that cooperate via V2V communications, and propose a novel distributed IC algorithm that can handle an unknown number of communication failures. Our analysis shows that both safety and liveness requirements are satisfied in all realistic situations. We also found, based on a real data set, that the crossing delay is only slightly increased even in the presence of highly correlated failures. | In @cite_6 , the authors use V2V for decentralized and cooperative collision avoidance for semi-autonomous vehicles, in which control is taken from the driver once the car enters a critical area. The algorithm is tested using vehicles equipped with a differential GPS (DGPS), an IMU, a dedicated short-range communication (DSRC) unit, and an interface with actuators. Their solution aims to compute appropriate throttle/brake control to avoid entering the capture area, in which no control action can prevent a collision. The estimation of longitudinal displacement, velocity and acceleration is performed using Kalman filtering. This estimation takes into account a bounded communication delay found experimentally. Their experimental results showed that all collisions are averted, and that the algorithm does not introduce a significant delay. | {
"cite_N": [
"@cite_6"
],
"mid": [
"2010725166"
],
"abstract": [
"In this paper, we leverage vehicle-to-vehicle (V2V) communication technology to implement computationally efficient decentralized algorithms for two-vehicle cooperative collision avoidance at intersections. Our algorithms employ formal control theoretic methods to guarantee a collision-free (safe) system, whereas overrides are only applied when necessary to prevent a crash. Model uncertainty and communication delays are explicitly accounted for by the model and by the state estimation algorithm. The main contribution of this work is to provide an experimental validation of our method on two instrumented vehicles engaged in an intersection collision avoidance scenario in a test track."
]
} |
1701.02641 | 2574004477 | Vehicle-to-vehicle (V2V) communication is a crucial component of the future autonomous driving systems since it enables improved awareness of the surrounding environment, even without extensive processing of sensory information. However, V2V communication is prone to failures and delays, so a distributed fault-tolerant approach is required for safe and efficient transportation. In this paper, we focus on the intersection crossing (IC) problem with autonomous vehicles that cooperate via V2V communications, and propose a novel distributed IC algorithm that can handle an unknown number of communication failures. Our analysis shows that both safety and liveness requirements are satisfied in all realistic situations. We also found, based on a real data set, that the crossing delay is only slightly increased even in the presence of highly correlated failures. | The work in @cite_11 develops reliable and efficient intersection protocols using V2V communication. The proposed solutions are able to avoid deadlocks and vehicle collisions at intersections. The protocols are fully distributed since they do not rely on any centralized unit such as an intersection manager. The autonomous vehicles are equipped with a similar set of sensors as in @cite_6 , and also a DSRC unit for V2V communication. The vehicles interact with each other using standardized basic safety messages (BSM) adapted for intersection crossing. The proposed protocols are tested using the AutoSim simulator/emulator, which utilizes real city topography. The results showed that the proposed protocols outperform traditional traffic-light protocols in terms of trip delay, especially under asymmetric traffic volume. | {
"cite_N": [
"@cite_6",
"@cite_11"
],
"mid": [
"2010725166",
"1969806274"
],
"abstract": [
"In this paper, we leverage vehicle-to-vehicle (V2V) communication technology to implement computationally efficient decentralized algorithms for two-vehicle cooperative collision avoidance at intersections. Our algorithms employ formal control theoretic methods to guarantee a collision-free (safe) system, whereas overrides are only applied when necessary to prevent a crash. Model uncertainty and communication delays are explicitly accounted for by the model and by the state estimation algorithm. The main contribution of this work is to provide an experimental validation of our method on two instrumented vehicles engaged in an intersection collision avoidance scenario in a test track.",
"Autonomous driving will play an important role in the future of transportation. Various autonomous vehicles have been demonstrated at the DARPA Urban Challenge [3]. General Motors has recently unveiled their Electrical-Networked Vehicles (EN-V) in Shanghai, China [5]. One of the main challenges of autonomous driving in urban areas is transition through cross-roads and intersections. In addition to safety concerns, current intersection management technologies such as stop signs and traffic lights can introduce significant traffic delays even under light traffic conditions. Our goal is to design and develop efficient and reliable intersection protocols to avoid vehicle collisions at intersections and increase the traffic throughput. The focus of this paper is investigating vehicle-to-vehicle (V2V) communications as a part of co-operative driving in the context of autonomous vehicles. We study how our proposed V2V intersection protocols can be beneficial for autonomous driving, and show significant improvements in throughput. We also prove that our protocols avoid deadlock situations inside the intersection area. The simulation results show that our new proposed V2V intersection protocols provide both safe passage through the intersection and significantly decrease the delay at the intersection and our latest V2V intersection protocol yields over 85 overall performance improvement over the common traffic light models."
]
} |
1701.02641 | 2574004477 | Vehicle-to-vehicle (V2V) communication is a crucial component of the future autonomous driving systems since it enables improved awareness of the surrounding environment, even without extensive processing of sensory information. However, V2V communication is prone to failures and delays, so a distributed fault-tolerant approach is required for safe and efficient transportation. In this paper, we focus on the intersection crossing (IC) problem with autonomous vehicles that cooperate via V2V communications, and propose a novel distributed IC algorithm that can handle an unknown number of communication failures. Our analysis shows that both safety and liveness requirements are satisfied in all realistic situations. We also found, based on a real data set, that the crossing delay is only slightly increased even in the presence of highly correlated failures. | Cooperative collision avoidance with imperfect vehicle-to-infrastructure (and vice-versa) communication is analyzed in @cite_3 . The centralized supervisor, located at the intersection, acquires the positions, velocities, and accelerations of the incoming vehicles, and then decides either to allow the vehicles' desired inputs or to override them with a safe set of inputs. The communication is subject to failures, with the probability of successful reception based on a Rayleigh fading model. According to their simulation results, the mean time between accidents is significantly increased, but a collision may still occur if the override message is lost. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1900137830"
],
"abstract": [
"Intersections remain among the most accident-prone subsystems in modern traffic. With the introduction of vehicle-to-infrastructure communication, it is possible for the intersection to become aware of the incoming stream of vehicles and issue warnings when needed. We consider an approach where vehicles can act automatically on those warnings, leaving drivers maximal freedom of manoeuvre while guaranteeing safety with minimal intervention. We also quantify the impact of imperfect communication in the uplink (from vehicle to infrastructure) and the downlink (from infrastructure to vehicle)."
]
} |
1701.02641 | 2574004477 | Vehicle-to-vehicle (V2V) communication is a crucial component of the future autonomous driving systems since it enables improved awareness of the surrounding environment, even without extensive processing of sensory information. However, V2V communication is prone to failures and delays, so a distributed fault-tolerant approach is required for safe and efficient transportation. In this paper, we focus on the intersection crossing (IC) problem with autonomous vehicles that cooperate via V2V communications, and propose a novel distributed IC algorithm that can handle an unknown number of communication failures. Our analysis shows that both safety and liveness requirements are satisfied in all realistic situations. We also found, based on a real data set, that the crossing delay is only slightly increased even in the presence of highly correlated failures. | A hybrid centralized/distributed architecture that ensures both safety (no collisions) and liveness (a finite crossing time) at intersections without stop signs or traffic lights is proposed in @cite_7 . The vehicles are equipped with a positioning unit, internal sensors, and a V2V communication unit. To cope with bounded communication delay and packet losses, the rear car needs to brake with maximum deceleration. They compared the proposed solution with stop-sign and traffic-light technologies and found that the average travel time is significantly reduced. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2120423655"
],
"abstract": [
"The automation of driving tasks is of increasing interest for highway traffic management. The emerging technologies of global positioning and intervehicular wireless communications, combined with in-vehicle computation and sensing capabilities, can potentially provide remarkable improvements in safety and efficiency. We address the problem of designing intelligent in-tersections, where traffic lights and stop signs are removed, and cars negotiate the intersection through an interaction of centralized and distributed decision making. Intelligent intersections are representative of complex hybrid systems that are increasingly of interest, where the challenge is to design tractable distributed algorithms that guarantee safety and provide good performance. Systems of automatically driven vehicles will need an under lying collision avoidance system with provable safety properties to be acceptable. This condition raises several challenges. We need to ensure perpetual collision avoidance so that cars do not get into future problematic positions to avoid an immediate collision. The architecture needs to allow distributed freedom of action to cars yet should guard against worst-case behavior of other cars to guarantee collision avoidance. The algorithms should be tractable both computationally and in information requirements and robust to uncertainties in sensing and communication. To address these challenges, we propose a hybrid architecture with an appropriate interplay between centralized coordination and distributed freedom of action. The approach is built around a core where each car has an infinite horizon contingency plan, which is updated at each sampling instant and distributed by the cars, in a computationally tractable manner. We also define a dynamically changing partial-order relation between cars, which specifies, for each car, a set of cars whose worst-case behaviors it should guard against. 
The architecture is hybrid, involving a centralized component that coordinates intersection traversals. We prove the safety and liveness of the overall scheme. The mathematical challenge of accurately quantifying performance remains as a difficult challenge; therefore, we conduct a simulation study that shows the benefits over stop signs and traffic lights. It is hoped that our effort can provide methodologies for the design of tractable solutions for complex distributed systems that require safety and liveness guarantees."
]
} |
1701.03041 | 2951176792 | Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset. | Within robotics, our work shares some similarities with the notion of (ED), where prior knowledge is used to either synthesize grasps partially (e.g. through constraints or algorithmic priors @cite_0 ) or fully, by using previously executed grasps. Our work is also similar in spirit to the concept of @cite_9 , which defines a continuous space for grasp and shape configurations. | {
"cite_N": [
"@cite_0",
"@cite_9"
],
"mid": [
"2414685554",
"294421534"
],
"abstract": [
"This paper presents the Dexterity Network (Dex-Net) 1.0, a dataset of 3D object models and a sampling-based planning algorithm to explore how Cloud Robotics can be used for robust grasp planning. The algorithm uses a Multi- Armed Bandit model with correlated rewards to leverage prior grasps and 3D object models in a growing dataset that currently includes over 10,000 unique 3D object models and 2.5 million parallel-jaw grasps. Each grasp includes an estimate of the probability of force closure under uncertainty in object and gripper pose and friction. Dex-Net 1.0 uses Multi-View Convolutional Neural Networks (MV-CNNs), a new deep learning method for 3D object classification, to provide a similarity metric between objects, and the Google Cloud Platform to simultaneously run up to 1,500 virtual cores, reducing experiment runtime by up to three orders of magnitude. Experiments suggest that correlated bandit techniques can use a cloud-based network of object models to significantly reduce the number of samples required for robust grasp planning. We report on system sensitivity to variations in similarity metrics and in uncertainty in pose and friction. Code and updated information is available at http: berkeleyautomation.github.io dex-net .",
"We present a new approach for modelling grasping using an integrated space of grasps and shapes. In particular, we introduce an infinite dimensional space, the Grasp Moduli Space, which represents shapes and grasps in a continuous manner. We define a metric on this space allowing us to formalize ‘nearby’ grasp shape configurations and we discuss continuous deformations of such configurations. We work in particular with surfaces with cylindrical coordinates and analyse the stability of a popular L1 grasp quality measure Ql under continuous deformations of shapes and grasps. We experimentally determine bounds on the maximal change of Ql in a small neighbourhood around stable grasps with grasp quality above a threshold. In the case of surfaces of revolution, we determine stable grasps which correspond to grasps used by humans and develop an efficient algorithm for generating those grasps in the case of three contact points. We show that sufficiently stable grasps stay stable under small deformations. For larger deformations, we develop a gradient-based method that can transfer stable grasps between different surfaces. Additionally, we show in experiments that our gradient method can be used to find stable grasps on arbitrary surfaces with cylindrical coordinates by deforming such surfaces towards a corresponding ‘canonical’ surface of revolution."
]
} |
1701.03041 | 2951176792 | Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset. | Motor imagery (MI) and motor execution (ME) are two different forms of motor representation. While motor execution is an external representation (physical performance of an action), motor imagery is the use of an internal representation for mentally simulating an action @cite_22 @cite_1 . | {
"cite_N": [
"@cite_1",
"@cite_22"
],
"mid": [
"2051652848",
"2022965779"
],
"abstract": [
"Rehabilitation, for a large part may be seen as a learning process where old skills have to be re-acquired and new ones have to be learned on the basis of practice. Active exercising creates a flow of sensory (afferent) information. It is known that motor recovery and motor learning have many aspects in common. Both are largely based on response-produced sensory information. In the present article it is asked whether active physical exercise is always necessary for creating this sensory flow. Numerous studies have indicated that motor imagery may result in the same plastic changes in the motor system as actual physical practice. Motor imagery is the mental execution of a movement without any overt movement or without any peripheral (muscle) activation. It has been shown that motor imagery leads to the activation of the same brain areas as actual movement. The present article discusses the role that motor imagery may play in neurological rehabilitation. Furthermore, it will be discussed to what extent the observation of a movement performed by another subject may play a similar role in learning. It is concluded that, although the clinical evidence is still meager, the use of motor imagery in neurological rehabilitation may be defended on theoretical grounds and on the basis of the results of experimental studies with healthy subjects.",
"Abstract Paradigms drawn from cognitive psychology have provided new insight into covert stages of action. These states include not only intending actions that will eventually be executed, but also imagining actions, recognizing tools, learning by observation, or even understanding the behavior of other people. Studies using techniques for mapping brain activity, probing cortical excitability, or measuring the activity of peripheral effectors in normal human subjects and in patients all provide evidence of a subliminal activation of the motor system during these cognitive states. The hypothesis that the motor system is part of a simulation network that is activated under a variety of conditions in relation to action, either self-intended or observed from other individuals, will be developed. The function of this process of simulation would be not only to shape the motor system in anticipation to execution, but also to provide the self with information on the feasibility and the meaning of potential actions."
]
} |
1701.03041 | 2951176792 | Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset. | Among many studies that have examined this phenomenon, we highlight one by Frak et al. @cite_27 , who explored it in the context of which frame of reference is adopted during implicit (unconscious) MI performance. The authors presented evidence that even though MI is an internal process, participants mentally simulating a grasp on a water container did so under real-world biomechanical constraints. That is, grasps or actions that would have been uncomfortable to perform in the real world (e.g. due to awkward joint positioning) were correlated with responses of the mentally simulated action. | {
"cite_N": [
"@cite_27"
],
"mid": [
"2133255759"
],
"abstract": [
"Five normal subjects were tested in a simulated grasping task. A cylindrical container filled with water was placed on the center of a horizontal monitor screen. Subjects used a precision grip formed by the thumb and index finger of their right hand. After a preliminary run during which the container was present, it was replaced by an image of the upper surface of the cylinder appearing on the horizontal computer screen on which the real cylinder was placed during the preliminary run. In each trial the image was marked with two contact points which defined an opposition axis in various orientations with respect to the frontal plane. The subjects' task consisted, once shown a stimulus, of judging as quickly as possible whether the previously experienced action of grasping the container full of water and pouring the water out would be easy, difficult or impossible with the fingers placed according to the opposition axis indicated on the circle. Response times were found to be longer for the grasps judged to be more difficult due to the orientation and position of the opposition axis. In a control experiment, three subjects actually performed the grasps with different orientations and positions of the opposition axis. The effects of these parameters on response time followed the same trends as during simulated movements. This result shows that simulated hand movements take into account the same biomechanical limitations as actually performed movements."
]
} |
1701.03041 | 2951176792 | Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset. | The majority of work in DL and robotic grasping has focused on the use of parallel-plate effectors with few degrees of freedom. Both Lenz et al. @cite_28 and Pinto et al. @cite_10 formulate grasping as a detection problem, and train classifiers to predict the most likely grasps through supervised learning. By posing grasping as a detection problem, different areas of the image can correspond to many different grasps, fitting with the multimodality of grasping; yet, to obtain multiple grasps for the same image patch, some form of stochastic component or knowledge of the types of grasps is required. | {
"cite_N": [
"@cite_28",
"@cite_10"
],
"mid": [
"1999156278",
"2949098821"
],
"abstract": [
"We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.",
"Current learning-based robot grasping approaches exploit human-labeled datasets for training the models. However, there are two problems with such a methodology: (a) since each object can be grasped in multiple ways, manually labeling grasp locations is not a trivial task; (b) human labeling is biased by semantics. While there have been attempts to train robots using trial-and-error experiments, the amount of data used in such experiments remains substantially low and hence makes the learner prone to over-fitting. In this paper, we take the leap of increasing the available training data to 40 times more than prior work, leading to a dataset size of 50K data points collected over 700 hours of robot grasping attempts. This allows us to train a Convolutional Neural Network (CNN) for the task of predicting grasp locations without severe overfitting. In our formulation, we recast the regression problem to an 18-way binary classification over image patches. We also present a multi-stage learning approach where a CNN trained in one stage is used to collect hard negatives in subsequent stages. Our experiments clearly show the benefit of using large-scale datasets (and multi-stage training) for the task of grasping. We also compare to several baselines and show state-of-the-art performance on generalization to unseen objects for grasping."
]
} |
1701.03041 | 2951176792 | Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset. | Mahler et al. @cite_0 approach the problem of grasping through the use of deep, multi-view CNNs to index prior knowledge of grasping an object from an experience database. Levine et al. @cite_4 work towards the full motion of grasping by linking the prediction of motor commands for moving a robotic arm with the probability that a grasp at a given pose will succeed. Other work on full-motion robotic grasping includes Levine @cite_29 and Finn @cite_23 who learn visuomotor skills using deep reinforcement learning. | {
"cite_N": [
"@cite_0",
"@cite_29",
"@cite_4",
"@cite_23"
],
"mid": [
"2414685554",
"2964161785",
"2293467699",
"2210483910"
],
"abstract": [
"This paper presents the Dexterity Network (Dex-Net) 1.0, a dataset of 3D object models and a sampling-based planning algorithm to explore how Cloud Robotics can be used for robust grasp planning. The algorithm uses a Multi- Armed Bandit model with correlated rewards to leverage prior grasps and 3D object models in a growing dataset that currently includes over 10,000 unique 3D object models and 2.5 million parallel-jaw grasps. Each grasp includes an estimate of the probability of force closure under uncertainty in object and gripper pose and friction. Dex-Net 1.0 uses Multi-View Convolutional Neural Networks (MV-CNNs), a new deep learning method for 3D object classification, to provide a similarity metric between objects, and the Google Cloud Platform to simultaneously run up to 1,500 virtual cores, reducing experiment runtime by up to three orders of magnitude. Experiments suggest that correlated bandit techniques can use a cloud-based network of object models to significantly reduce the number of samples required for robust grasp planning. We report on system sensitivity to variations in similarity metrics and in uncertainty in pose and friction. Code and updated information is available at http: berkeleyautomation.github.io dex-net .",
"Policy search methods can allow robots to learn control policies for a wide range of tasks, but practical applications of policy search often require hand-engineered components for perception, state estimation, and low-level control. In this paper, we aim to answer the following question: does training the perception and control systems jointly end-to-end provide better performance than training each component separately? To this end, we develop a method that can be used to learn policies that map raw image observations directly to torques at the robot's motors. The policies are represented by deep convolutional neural networks (CNNs) with 92,000 parameters, and are trained using a guided policy search method, which transforms policy search into supervised learning, with supervision provided by a simple trajectory-centric reinforcement learning method. We evaluate our method on a range of real-world manipulation tasks that require close coordination between vision and control, such as screwing a cap onto a bottle, and present simulated comparisons to a range of prior policy search methods.",
"We describe a learning-based approach to hand-eye coordination for robotic grasping from monocular images. To learn hand-eye coordination for grasping, we trained a large convolutional neural network to predict the probability that task-space motion of the gripper will result in successful grasps, using only monocular camera images and independently of camera calibration or the current robot pose. This requires the network to observe the spatial relationship between the gripper and objects in the scene, thus learning hand-eye coordination. We then use this network to servo the gripper in real time to achieve successful grasps. To train our network, we collected over 800,000 grasp attempts over the course of two months, using between 6 and 14 robotic manipulators at any given time, with differences in camera placement and hardware. Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous servoing.",
"Reinforcement learning provides a powerful and flexible framework for automated acquisition of robotic motion skills. However, applying reinforcement learning requires a sufficiently detailed representation of the state, including the configuration of task-relevant objects. We present an approach that automates state-space construction by learning a state representation directly from camera images. Our method uses a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects, and then learns a motion skill with these feature points using an efficient reinforcement learning method based on local linear models. The resulting controller reacts continuously to the learned feature points, allowing the robot to dynamically manipulate objects in the world with closed-loop control. We demonstrate our method with a PR2 robot on tasks that include pushing a free-standing toy block, picking up a bag of rice using a spatula, and hanging a loop of rope on a hook at various positions. In each task, our method automatically learns to track task-relevant objects and manipulate their configuration with the robot's arm."
]
} |
1701.03041 | 2951176792 | Grasping is a complex process involving knowledge of the object, the surroundings, and of oneself. While humans are able to integrate and process all of the sensory information required for performing this task, equipping machines with this capability is an extremely challenging endeavor. In this paper, we investigate how deep learning techniques can allow us to translate high-level concepts such as motor imagery to the problem of robotic grasp synthesis. We explore a paradigm based on generative models for learning integrated object-action representations, and demonstrate its capacity for capturing and generating multimodal, multi-finger grasp configurations on a simulated grasping dataset. | DL and robotic grasping have also recently extended to the domain of multi-fingered hands. Kappler et al. @cite_17 used DL to train a classifier to predict grasp stability of the Barrett hand under different quality metrics. In this work, rather than treating grasping as a classification problem, we propose to predict how to grasp an object with a multi-fingered hand, through a gripper-agnostic representation of available contact positions and contact normals. | {
"cite_N": [
"@cite_17"
],
"mid": [
"1503925285"
],
"abstract": [
"We propose a new large-scale database containing grasps that are applied to a large set of objects from numerous categories. These grasps are generated in simulation and are annotated with different grasp stability metrics. We use a descriptive and efficient representation of the local object shape at which each grasp is applied. Given this data, we present a two-fold analysis: (i) We use crowdsourcing to analyze the correlation of the metrics with grasp success as predicted by humans. The results show that the metric based on physics simulation is a more consistent predictor for grasp success than the standard ε-metric. The results also support the hypothesis that human labels are not required for good ground truth grasp data. Instead the physics-metric can be used to generate datasets in simulation that may then be used to bootstrap learning in the real world. (ii) We apply a deep learning method and show that it can better leverage the large-scale database for prediction of grasp success compared to logistic regression. Furthermore, the results suggest that labels based on the physics-metric are less noisy than those from the ε-metric and therefore lead to a better classification performance."
]
} |
1701.02485 | 2951870822 | We propose a novel image set classification technique using linear regression models. Downsampled gallery image sets are interpreted as subspaces of a high dimensional space to avoid the computationally expensive training step. We estimate regression models for each test image using the class specific gallery subspaces. Images of the test set are then reconstructed using the regression models. Based on the minimum reconstruction error between the reconstructed and the original images, a weighted voting strategy is used to classify the test set. We performed extensive evaluation on the benchmark UCSD Honda, CMU Mobo and YouTube Celebrity datasets for face classification, and ETH-80 dataset for object classification. The results demonstrate that by using only a small amount of training data, our technique achieved competitive classification accuracy and superior computational speed compared with the state-of-the-art methods. | Image set classification techniques can be categorized as parametric, non-parametric and deep learning based methods. The parametric methods @cite_3 use a statistical distribution model to approximate an image set and then use KL-divergence to measure the similarity between the two distribution models. Such methods, however, fail to produce good results in case of a weak statistical relationship between the training and the test image sets. | {
"cite_N": [
"@cite_3"
],
"mid": [
"2122691893"
],
"abstract": [
"In many automatic face recognition applications, a set of a person's face images is available rather than a single image. In this paper, we describe a novel method for face recognition using image sets. We propose a flexible, semi-parametric model for learning probability densities confined to highly non-linear but intrinsically low-dimensional manifolds. The model leads to a statistical formulation of the recognition problem in terms of minimizing the divergence between densities estimated on these manifolds. The proposed method is evaluated on a large data set, acquired in realistic imaging conditions with severe illumination variation. Our algorithm is shown to match the best and outperform other state-of-the-art algorithms in the literature, achieving 94 recognition rate on average."
]
} |
1701.02485 | 2951870822 | We propose a novel image set classification technique using linear regression models. Downsampled gallery image sets are interpreted as subspaces of a high dimensional space to avoid the computationally expensive training step. We estimate regression models for each test image using the class specific gallery subspaces. Images of the test set are then reconstructed using the regression models. Based on the minimum reconstruction error between the reconstructed and the original images, a weighted voting strategy is used to classify the test set. We performed extensive evaluation on the benchmark UCSD Honda, CMU Mobo and YouTube Celebrity datasets for face classification, and ETH-80 dataset for object classification. The results demonstrate that by using only a small amount of training data, our technique achieved competitive classification accuracy and superior computational speed compared with the state-of-the-art methods. | For non-parametric methods, several different metrics are used to determine the set to set similarity. @cite_31 use the Euclidean distance between the sets' means as the similarity metric. Cevikalp and Triggs @cite_0 present two models to learn set samples. The set to set distance using an affine hull model is called the Affine Hull Image Set Distance (AHISD), while that using a convex hull model is termed the Convex Hull Image Set Distance (CHISD). @cite_10 used the mean image of the image set and an affine hull model to calculate the Sparse Approximated Nearest Points (SANP) for image sets in order to determine the distance between the training image set and the test image set. Some non-parametric methods (e.g., @cite_15 , @cite_13 , @cite_29 , @cite_31 ) use a point on a geometric surface to represent the complete image set. The image set can also be represented either by a combination of linear subspaces or on a complex non-linear manifold. 
For linear subspaces, the cosine of the smallest angle between any vector in one subspace and any other vector in the other subspace is commonly used as the similarity metric between image sets. | {
"cite_N": [
"@cite_13",
"@cite_15",
"@cite_29",
"@cite_0",
"@cite_31",
"@cite_10"
],
"mid": [
"2120453412",
"2066986622",
"2144093206",
"1996939238",
"2110599581",
"1964470356"
],
"abstract": [
"This paper presents a novel discriminative learning method, called manifold discriminant analysis (MDA), to solve the problem of image set classification. By modeling each image set as a manifold, we formulate the problem as classification-oriented multi-manifolds learning. Aiming at maximizing “manifold margin”, MDA seeks to learn an embedding space, where manifolds with different class labels are better separated, and local data compactness within each manifold is enhanced. As a result, new testing manifold can be more reliably classified in the learned embedding space. The proposed method is evaluated on the tasks of object recognition with image sets, including face recognition and object categorization. Comprehensive comparisons and extensive experiments demonstrate the effectiveness of our method.",
"A convenient way of dealing with image sets is to represent them as points on Grassmannian manifolds. While several recent studies explored the applicability of discriminant analysis on such manifolds, the conventional formalism of discriminant analysis suffers from not considering the local structure of the data. We propose a discriminant analysis approach on Grassmannian manifolds, based on a graph-embedding framework. We show that by introducing within-class and between-class similarity graphs to characterise intra-class compactness and inter-class separability, the geometrical structure of data can be exploited. Experiments on several image datasets (PIE, BANCA, MoBo, ETH-80) show that the proposed algorithm obtains considerable improvements in discrimination accuracy, in comparison to three recent methods: Grassmann Discriminant Analysis (GDA), Kernel GDA, and the kernel version of Affine Hull Image Set Distance. We further propose a Grassmannian kernel, based on canonical correlation between subspaces, which can increase discrimination accuracy when used in combination with previous Grassmannian kernels.",
"We propose a novel discriminative learning approach to image set classification by modeling the image set with its natural second-order statistic, i.e. covariance matrix. Since nonsingular covariance matrices, a.k.a. symmetric positive definite (SPD) matrices, lie on a Riemannian manifold, classical learning algorithms cannot be directly utilized to classify points on the manifold. By exploring an efficient metric for the SPD matrices, i.e., Log-Euclidean Distance (LED), we derive a kernel function that explicitly maps the covariance matrix from the Riemannian manifold to a Euclidean space. With this explicit mapping, any learning method devoted to vector space can be exploited in either its linear or kernel formulation. Linear Discriminant Analysis (LDA) and Partial Least Squares (PLS) are considered in this paper for their feasibility for our specific problem. We further investigate the conventional linear subspace based set modeling technique and cast it in a unified framework with our covariance matrix based modeling. The proposed method is evaluated on two tasks: face recognition and object categorization. Extensive experimental results show not only the superiority of our method over state-of-the-art ones in both accuracy and efficiency, but also its stability to two real challenges: noisy set data and varying set size.",
"We introduce a novel method for face recognition from image sets. In our setting each test and training example is a set of images of an individual's face, not just a single image, so recognition decisions need to be based on comparisons of image sets. Methods for this have two main aspects: the models used to represent the individual image sets; and the similarity metric used to compare the models. Here, we represent images as points in a linear or affine feature space and characterize each image set by a convex geometric region (the affine or convex hull) spanned by its feature points. Set dissimilarity is measured by geometric distances (distances of closest approach) between convex models. To reduce the influence of outliers we use robust methods to discard input points that are far from the fitted model. The kernel trick allows the approach to be extended to implicit feature mappings, thus handling complex and nonlinear manifolds of face images. Experiments on two public face datasets show that our proposed methods outperform a number of existing state-of-the-art ones.",
"In this paper, we address the problem of classifying image sets, each of which contains images belonging to the same class but covering large variations in, for instance, viewpoint and illumination. We innovatively formulate the problem as the computation of Manifold-Manifold Distance (MMD), i.e., calculating the distance between nonlinear manifolds each representing one image set. To compute MMD, we also propose a novel manifold learning approach, which expresses a manifold by a collection of local linear models, each depicted by a subspace. MMD is then converted to integrating the distances between pair of subspaces respectively from one of the involved manifolds. The proposed MMD method is evaluated on the task of Face Recognition based on Image Set (FRIS). In FRIS, each known subject is enrolled with a set of facial images and modeled as a gallery manifold, while a testing subject is modeled as a probe manifold, which is then matched against all the gallery manifolds by MMD. Identification is achieved by seeking the minimum MMD. Experimental results on two public face databases, Honda UCSD and CMU MoBo, demonstrate that the proposed MMD method outperforms the competing methods.",
"We propose an efficient and robust solution for image set classification. A joint representation of an image set is proposed which includes the image samples of the set and their affine hull model. The model accounts for unseen appearances in the form of affine combinations of sample images. To calculate the between-set distance, we introduce the Sparse Approximated Nearest Point (SANP). SANPs are the nearest points of two image sets such that each point can be sparsely approximated by the image samples of its respective set. This novel sparse formulation enforces sparsity on the sample coefficients and jointly optimizes the nearest points as well as their sparse approximations. Unlike standard sparse coding, the data to be sparsely approximated are not fixed. A convex formulation is proposed to find the optimal SANPs between two sets and the accelerated proximal gradient method is adapted to efficiently solve this optimization. We also derive the kernel extension of the SANP and propose an algorithm for dynamically tuning the RBF kernel parameter while matching each pair of image sets. Comprehensive experiments on the UCSD Honda, CMU MoBo, and YouTube Celebrities face datasets show that our method consistently outperforms the state of the art."
]
} |
1701.02468 | 2950459049 | 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the- wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes. | The classical 2D representation of humans are 2D keypoints @cite_7 @cite_36 @cite_16 @cite_26 @cite_40 @cite_21 . While 2D keypoint prediction has seen considerable progress in the last years and could be considered close to being solved @cite_38 @cite_0 @cite_23 , 3D pose estimation from single images remains a challenge @cite_30 @cite_8 @cite_20 . | {
"cite_N": [
"@cite_30",
"@cite_38",
"@cite_26",
"@cite_7",
"@cite_8",
"@cite_36",
"@cite_21",
"@cite_0",
"@cite_40",
"@cite_23",
"@cite_16",
"@cite_20"
],
"mid": [
"2483862638",
"",
"2103015390",
"2080873731",
"2155196764",
"2282306112",
"2093949207",
"2950762923",
"2128271252",
"2255781698",
"2135533529",
"2075834168"
],
"abstract": [
"We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.",
"",
"The task of 2-D articulated human pose estimation in natural images is extremely challenging due to the high level of variation in human appearance. These variations arise from different clothing, anatomy, imaging conditions and the large number of poses it is possible for a human body to take. Recent work has shown state-of-the-art results by partitioning the pose space and using strong nonlinear classifiers such that the pose dependence and multi-modal nature of body part appearance can be captured. We propose to extend these methods to handle much larger quantities of training data, an order of magnitude larger than current datasets, and show how to utilize Amazon Mechanical Turk and a latent annotation update scheme to achieve high quality annotations at low cost. We demonstrate a significant increase in pose estimation accuracy, while simultaneously reducing computational expense by a factor of 10, and contribute a dataset of 10,000 highly articulated poses.",
"Human pose estimation has made significant progress during the last years. However current datasets are limited in their coverage of the overall pose estimation challenges. Still these serve as the common sources to evaluate, train and compare different models on. In this paper we introduce a novel benchmark \"MPII Human Pose\" that makes a significant advance in terms of diversity and difficulty, a contribution that we feel is required for future developments in human body models. This comprehensive dataset was collected using an established taxonomy of over 800 human activities [1]. The collected images cover a wider variety of human activities than previous datasets including various recreational, occupational and householding activities, and capture people from a wider range of viewpoints. We provide a rich set of labels including positions of body joints, full 3D torso and head orientation, occlusion labels for joints and body parts, and activity labels. For each image we provide adjacent video frames to facilitate the use of motion information. Given these rich annotations we perform a detailed analysis of leading human pose estimation approaches and gaining insights for the success and failures of these methods.",
"Reconstructing an arbitrary configuration of 3D points from their projection in an image is an ill-posed problem. When the points hold semantic meaning, such as anatomical landmarks on a body, human observers can often infer a plausible 3D configuration, drawing on extensive visual memory. We present an activity-independent method to recover the 3D configuration of a human figure from 2D locations of anatomical landmarks in a single image, leveraging a large motion capture corpus as a proxy for visual memory. Our method solves for anthropometrically regular body pose and explicitly estimates the camera via a matching pursuit algorithm operating on the image projections. Anthropometric regularity (i.e., that limbs obey known proportions) is a highly informative prior, but directly applying such constraints is intractable. Instead, we enforce a necessary condition on the sum of squared limb-lengths that can be solved for in closed form to discourage implausible configurations in 3D. We evaluate performance on a wide variety of human poses captured from different viewpoints and show generalization to novel 3D configurations and robustness to missing data.",
"We propose a personalized ConvNet pose estimator that automatically adapts itself to the uniqueness of a person's appearance to improve pose estimation in long videos. We make the following contributions: (i) we show that given a few high-precision pose annotations, e.g. from a generic ConvNet pose estimator, additional annotations can be generated throughout the video using a combination of image-based matching for temporally distant frames, and dense optical flow for temporally local frames; (ii) we develop an occlusion aware self-evaluation model that is able to automatically select the high-quality and reject the erroneous additional annotations; and (iii) we demonstrate that these high-quality annotations can be used to fine-tune a ConvNet pose estimator and thereby personalize it to lock on to key discriminative features of the person's appearance. The outcome is a substantial improvement in the pose estimates for the target video using the personalized ConvNet compared to the original generic ConvNet. Our method outperforms the state of the art (including top ConvNet methods) by a large margin on two standard benchmarks, as well as on a new challenging YouTube video dataset. Furthermore, we show that training from the automatically generated annotations can be used to improve the performance of a generic ConvNet on other benchmarks.",
"We address the problem of articulated human pose estimation in videos using an ensemble of tractable models with rich appearance, shape, contour and motion cues. In previous articulated pose estimation work on unconstrained videos, using temporal coupling of limb positions has made little to no difference in performance over parsing frames individually [8, 28]. One crucial reason for this is that joint parsing of multiple articulated parts over time involves intractable inference and learning problems, and previous work has resorted to approximate inference and simplified models. We overcome these computational and modeling limitations using an ensemble of tractable submodels which couple locations of body joints within and across frames using expressive cues. Each submodel is responsible for tracking a single joint through time (e.g., left elbow) and also models the spatial arrangement of all joints in a single frame. Because of the tree structure of each submodel, we can perform efficient exact inference and use rich temporal features that depend on image appearance, e.g., color tracking and optical flow contours. We propose and experimentally investigate a hierarchy of submodel combination methods, and we find that a highly efficient max-marginal combination method outperforms much slower (by orders of magnitude) approximate inference using dual decomposition. We apply our pose model on a new video dataset of highly varied and articulated poses from TV shows. We show significant quantitative and qualitative improvements over state-of-the-art single-frame pose estimation approaches.",
"This work introduces a novel convolutional network architecture for the task of human pose estimation. Features are processed across all scales and consolidated to best capture the various spatial relationships associated with the body. We show how repeated bottom-up, top-down processing used in conjunction with intermediate supervision is critical to improving the performance of the network. We refer to the architecture as a \"stacked hourglass\" network based on the successive steps of pooling and upsampling that are done to produce a final set of predictions. State-of-the-art results are achieved on the FLIC and MPII benchmarks outcompeting all recent methods.",
"We propose a multimodal, decomposable model for articulated human pose estimation in monocular images. A typical approach to this problem is to use a linear structured model, which struggles to capture the wide range of appearance present in realistic, unconstrained images. In this paper, we instead propose a model of human pose that explicitly captures a variety of pose modes. Unlike other multimodal models, our approach includes both global and local pose cues and uses a convex objective and joint training for mode selection and pose estimation. We also employ a cascaded mode selection step which controls the trade-off between speed and accuracy, yielding a 5x speedup in inference and learning. Our model outperforms state-of-the-art approaches across the accuracy-speed trade-off curve for several pose datasets. This includes our newly-collected dataset of people in movies, FLIC, which contains an order of magnitude more labeled data for training and testing than existing datasets.",
"Pose Machines provide a sequential prediction framework for learning rich implicit spatial models. In this work we show a systematic design for how convolutional networks can be incorporated into the pose machine framework for learning image features and image-dependent spatial models for the task of pose estimation. The contribution of this paper is to implicitly model long-range dependencies between variables in structured prediction tasks such as articulated pose estimation. We achieve this by designing a sequential architecture composed of convolutional networks that directly operate on belief maps from previous stages, producing increasingly refined estimates for part locations, without the need for explicit graphical model-style inference. Our approach addresses the characteristic difficulty of vanishing gradients during training by providing a natural learning objective function that enforces intermediate supervision, thereby replenishing back-propagated gradients and conditioning the learning procedure. We demonstrate state-of-the-art performance and outperform competing methods on standard benchmarks including the MPII, LSP, and FLIC datasets.",
"We investigate the task of 2D articulated human pose estimation in unconstrained still images. This is extremely challenging because of variation in pose, anatomy, clothing, and imaging conditions. Current methods use simple models of body part appearance and plausible configurations due to limitations of available training data and constraints on computational expense. We show that such models severely limit accuracy. Building on the successful pictorial structure model (PSM) we propose richer models of both appearance and pose, using state-of-the-art discriminative classifiers without introducing unacceptable computational expense. We introduce a new annotated database of challenging consumer images, an order of magnitude larger than currently available datasets, and demonstrate over 50% relative improvement in pose estimation accuracy over a state-of-the-art method.",
"We present an easy-to-use image retouching technique for realistic reshaping of human bodies in a single image. A model-based approach is taken by integrating a 3D whole-body morphable model into the reshaping process to achieve globally consistent editing effects. A novel body-aware image warping approach is introduced to reliably transfer the reshaping effects from the model to the image, even under moderate fitting errors. Thanks to the parametric nature of the model, our technique parameterizes the degree of reshaping by a small set of semantic attributes, such as weight and height. It allows easy creation of desired reshaping effects by changing the full-body attributes, while producing visually pleasing results even for loosely-dressed humans in casual photographs with a variety of poses and shapes."
]
} |
1701.02468 | 2950459049 | 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the- wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes. | Bourdev and Malik @cite_4 enhanced the H3D dataset from 20 keypoint annotations for 1,240 people in 2D with relative 3D information as well as 11 annotated body part segments. | {
"cite_N": [
"@cite_4"
],
"mid": [
"2535410496"
],
"abstract": [
"We address the classic problems of detection, segmentation and pose estimation of people in images with a novel definition of a part, a poselet. We postulate two criteria (1) It should be easy to find a poselet given an input image (2) it should be easy to localize the 3D configuration of the person conditioned on the detection of a poselet. To permit this we have built a new dataset, H3D, of annotations of humans in 2D photographs with 3D joint information, inferred using anthropometric constraints. This enables us to implement a data-driven search procedure for finding poselets that are tightly clustered in both 3D joint configuration space as well as 2D image appearance. The algorithm discovers poselets that correspond to frontal and profile faces, pedestrians, head and shoulder views, among others. Each poselet provides examples for training a linear SVM classifier which can then be run over the image in a multiscale scanning mode. The outputs of these poselet detectors can be thought of as an intermediate layer of nodes, on top of which one can run a second layer of classification or regression. We show how this permits detection and localization of torsos or keypoints such as left shoulder, nose, etc. Experimental results show that we obtain state of the art performance on people detection in the PASCAL VOC 2007 challenge, among other datasets. We are making publicly available both the H3D dataset as well as the poselet parameters for use by other researchers."
]
} |
1701.02468 | 2950459049 | 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the- wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes. | In contrast, the @cite_14 and @cite_17 datasets provide very accurate 3D labels: they are both recorded in motion capture environments. Both datasets have high fidelity but contain only a very limited level of diversity in background and person appearance. We evaluate the 3D human pose estimation performance on both. Recent approaches target 3D pose ground truth from natural scenes, but either rely on vision systems prone to failure @cite_43 or inertial suits that modify the appearance of the body and are prone to motion drift @cite_20 . | {
"cite_N": [
"@cite_43",
"@cite_14",
"@cite_20",
"@cite_17"
],
"mid": [
"2079846689",
"2099333815",
"2075834168",
"2101032778"
],
"abstract": [
"We present a novel method for accurate marker-less capture of articulated skeleton motion of several subjects in general scenes, indoors and outdoors, even from input filmed with as few as two cameras. Our approach unites a discriminative image-based joint detection method with a model-based generative motion tracking algorithm through a combined pose optimization energy. The discriminative part-based pose detection method, implemented using Convolutional Networks (ConvNet), estimates unary potentials for each joint of a kinematic skeleton model. These unary potentials are used to probabilistically extract pose constraints for tracking by using weighted sampling from a pose posterior guided by the model. In the final energy, these constraints are combined with an appearance-based model-to-image similarity term. Poses can be computed very efficiently using iterative local optimization, as ConvNet detection is fast, and our formulation yields a combined pose estimation energy with analytic derivatives. In combination, this enables to track full articulated joint angles at state-of-the-art accuracy and temporal stability with a very low number of cameras.",
"While research on articulated human motion and pose estimation has progressed rapidly in the last few years, there has been no systematic quantitative evaluation of competing methods to establish the current state of the art. We present data obtained using a hardware system that is able to capture synchronized video and ground-truth 3D motion. The resulting HumanEva datasets contain multiple subjects performing a set of predefined actions with a number of repetitions. On the order of 40,000 frames of synchronized motion capture and multi-view video (resulting in over one quarter million image frames in total) were collected at 60 Hz with an additional 37,000 time instants of pure motion capture data. A standard set of error measures is defined for evaluating both 2D and 3D pose estimation and tracking algorithms. We also describe a baseline algorithm for 3D articulated tracking that uses a relatively standard Bayesian framework with optimization in the form of Sequential Importance Resampling and Annealed Particle Filtering. In the context of this baseline algorithm we explore a variety of likelihood functions, prior models of human motion and the effects of algorithm parameters. Our experiments suggest that image observation models and motion priors play important roles in performance, and that in a multi-view laboratory environment, where initialization is available, Bayesian filtering tends to perform well. The datasets and the software are made available to the research community. This infrastructure will support the development of new articulated motion and pose estimation algorithms, will provide a baseline for the evaluation and comparison of new methods, and will help establish the current state of the art in human pose estimation and tracking.",
"We present an easy-to-use image retouching technique for realistic reshaping of human bodies in a single image. A model-based approach is taken by integrating a 3D whole-body morphable model into the reshaping process to achieve globally consistent editing effects. A novel body-aware image warping approach is introduced to reliably transfer the reshaping effects from the model to the image, even under moderate fitting errors. Thanks to the parametric nature of the model, our technique parameterizes the degree of reshaping by a small set of semantic attributes, such as weight and height. It allows easy creation of desired reshaping effects by changing the full-body attributes, while producing visually pleasing results even for loosely-dressed humans in casual photographs with a variety of poses and shapes.",
"We introduce a new dataset, Human3.6M, of 3.6 Million accurate 3D Human poses, acquired by recording the performance of 5 female and 6 male subjects, under 4 different viewpoints, for training realistic human sensing systems and for evaluating the next generation of human pose estimation models and algorithms. Besides increasing the size of the datasets in the current state-of-the-art by several orders of magnitude, we also aim to complement such datasets with a diverse set of motions and poses encountered as part of typical human activities (taking photos, talking on the phone, posing, greeting, eating, etc.), with additional synchronized image, human motion capture, and time of flight (depth) data, and with accurate 3D body scans of all the subject actors involved. We also provide controlled mixed reality evaluation scenarios where 3D human models are animated using motion capture and inserted using correct 3D geometry, in complex real environments, viewed with moving cameras, and under occlusion. Finally, we provide a set of large-scale statistical models and detailed evaluation baselines for the dataset illustrating its diversity and the scope for improvement by future work in the research community. Our experiments show that our best large-scale model can leverage our full training set to obtain a 20% improvement in performance compared to a training set of the scale of the largest existing public dataset for this problem. Yet the potential for improvement by leveraging higher capacity, more complex models with our large dataset, is substantially vaster and should stimulate future research. The dataset together with code for the associated large-scale learning models, features, visualization tools, as well as the evaluation server, is available online at http://vision.imar.ro/human3.6m."
]
} |
1701.02468 | 2950459049 | 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the- wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes. | Body representations beyond 3D skeletons have a long history in the computer vision community @cite_31 @cite_9 @cite_18 @cite_29 . More recently, these representations have taken new popularity in approaches that fit detailed surfaces of a body model to images @cite_30 @cite_15 @cite_34 @cite_10 @cite_20 . These representations are more tightly connected to the physical reality of the human body and the image formation process. | {
"cite_N": [
"@cite_30",
"@cite_18",
"@cite_15",
"@cite_9",
"@cite_29",
"@cite_31",
"@cite_34",
"@cite_10",
"@cite_20"
],
"mid": [
"2483862638",
"1852924452",
"2545173102",
"2081519360",
"1967437522",
"2030989822",
"1992475172",
"2293220651",
"2075834168"
],
"abstract": [
"We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. The problem is challenging because of the complexity of the human body, articulation, occlusion, clothing, lighting, and the inherent ambiguity in inferring 3D from 2D. To solve this, we first use a recently published CNN-based method, DeepCut, to predict (bottom-up) the 2D body joint locations. We then fit (top-down) a recently published statistical body shape model, called SMPL, to the 2D joints. We do so by minimizing an objective function that penalizes the error between the projected 3D model joints and detected 2D joints. Because SMPL captures correlations in human shape across the population, we are able to robustly fit it to very little data. We further leverage the 3D model to prevent solutions that cause interpenetration. We evaluate our method, SMPLify, on the Leeds Sports, HumanEva, and Human3.6M datasets, showing superior pose accuracy with respect to the state of the art.",
"Experimental techniques are demonstrated which generate segmented symbolic descriptions for complex objects with joints, such as a hammer or a glove. Complete descriptions with relationship of parts at joints and descriptions of joints are presented. These techniques are elements of a larger scheme for description mechanisms for hypotheses, and for visual memory and recognition.",
"We describe a solution to the challenging problem of estimating human body shape from a single photograph or painting. Our approach computes shape and pose parameters of a 3D human body model directly from monocular image cues and advances the state of the art in several directions. First, given a user-supplied estimate of the subject's height and a few clicked points on the body we estimate an initial 3D articulated body pose and shape. Second, using this initial guess we generate a tri-map of regions inside, outside and on the boundary of the human, which is used to segment the image using graph cuts. Third, we learn a low-dimensional linear model of human shape in which variations due to height are concentrated along a single dimension, enabling height-constrained estimation of body shape. Fourth, we formulate the problem of parametric human shape from shading. We estimate the body pose, shape and reflectance as well as the scene lighting that produces a synthesized body that robustly matches the image evidence. Quantitative experiments demonstrate how smooth shading provides powerful constraints on human shape. We further demonstrate a novel application in which we extract 3D human models from archival photographs and paintings.",
"The human visual process can be studied by examining the computational problems associated with deriving useful information from retinal images. In this paper, we apply this approach to the problem of representing three-dimensional shapes for the purpose of recognition. 1. Three criteria, accessibility, scope and uniqueness, and stability and sensitivity, are presented for judging the usefulness of a representation for shape recognition. 2. Three aspects of a representation's design are considered, (i) the representation's coordinate system, (ii) its primitives, which are the primary units of shape information used in the representation, and (iii) the organization the representation imposes on the information in its descriptions. 3. In terms of these design issues and the criteria presented, a shape representation for recognition should: (i) use an object-centred coordinate system, (ii) include volumetric primitives of varied sizes, and (iii) have a modular organization. A representation based on a shape's natural axes (for example the axes identified by a stick figure) follows directly from these choices. 4. The basic process for deriving a shape description in this representation must involve: (i) a means for identifying the natural axes of a shape in its image and (ii) a mechanism for transforming viewer-centred axis specifications to specifications in an object-centred coordinate system. 5. Shape recognition involves: (i) a collection of stored shape descriptions, and (ii) various indexes into the collection that allow a newly derived description to be associated with an appropriate stored description. The most important of these indexes allows shape recognition to proceed conservatively from the general to the specific based on the specificity of the information available from the image. 6. New constraints supplied by a conservative recognition process can be used to extract more information from the image. A relaxation process for carrying out this constraint analysis is described.",
"We present a new method for inferring dense data to model correspondences, focusing on the application of human pose estimation from depth images. Recent work proposed the use of regression forests to quickly predict correspondences between depth pixels and points on a 3D human mesh model. That work, however, used a proxy forest training objective based on the classification of depth pixels to body parts. In contrast, we introduce Metric Space Information Gain (MSIG), a new decision forest training objective designed to directly minimize the entropy of distributions in a metric space. When applied to a model surface, viewed as a metric space defined by geodesic distances, MSIG aims to minimize image-to-model correspondence uncertainty. A naive implementation of MSIG would scale quadratically with the number of training examples. As this is intractable for large datasets, we propose a method to compute MSIG in linear time. Our method is a principled generalization of the proxy classification objective, and does not require an extrinsic isometric embedding of the model surface in Euclidean space. Our experiments demonstrate that this leads to correspondences that are considerably more accurate than state of the art, using far fewer training images.",
"Abstract For a machine to be able to ‘see’, it must know something about the object it is ‘looking’ at. A common method in machine vision is to provide the machine with general rather than specific knowledge about the object. An alternative technique, and the one used in this paper, is a model-based approach in which particulars about the object are given and this drives the analysis. The computer program described here, the WALKER model, maps images into a description in which a person is represented by the series of hierarchical levels, i.e. a person has an arm which has a lower-arm which has a hand. The performance of the program is illustrated by superimposing the machine-generated picture over the original photographic images.",
"In this paper we propose a multilinear model of human pose and body shape which is estimated from a database of registered 3D body scans in different poses. The model is generated by factorizing the measurements into pose and shape dependent components. By combining it with an ICP based registration method, we are able to estimate pose and body shape of dressed subjects from single images. If several images of the subject are available, shape and poses can be optimized simultaneously for all input images. Additionally, while estimating pose and shape, we use the model as a virtual calibration pattern and also recover the parameters of the perspective camera model the images were created with.",
"In this paper, we propose a deep convolutional neural network for 3D human pose estimation from monocular images. We train the network using two strategies: (1) a multi-task framework that jointly trains pose regression and body part detectors; (2) a pre-training strategy where the pose regressor is initialized using a network trained for body part detection. We compare our network on a large data set and achieve significant improvement over baseline methods. Human pose estimation is a structured prediction problem, i.e., the locations of each body part are highly correlated. Although we do not add constraints about the correlations between body parts to the network, we empirically show that the network has disentangled the dependencies among different body parts, and learned their correlations.",
"We present an easy-to-use image retouching technique for realistic reshaping of human bodies in a single image. A model-based approach is taken by integrating a 3D whole-body morphable model into the reshaping process to achieve globally consistent editing effects. A novel body-aware image warping approach is introduced to reliably transfer the reshaping effects from the model to the image, even under moderate fitting errors. Thanks to the parametric nature of the model, our technique parameterizes the degree of reshaping by a small set of semantic attributes, such as weight and height. It allows easy creation of desired reshaping effects by changing the full-body attributes, while producing visually pleasing results even for loosely-dressed humans in casual photographs with a variety of poses and shapes."
]
} |
1701.02468 | 2950459049 | 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the- wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes. | One of the classic problems related to representations of the extent of the body is body part segmentation. Fine-grained part segmentation has been added to the public parts of the VOC dataset @cite_27 by @cite_42 . Annotations for 24 human body parts and also part segments for all VOC object classes, where applicable, are available. Even though hard to compare, we provide results on the dataset. The Freiburg Sitting People dataset @cite_41 consists of 200 images with 14 part segmentation and is tailored towards sitting poses. The ideas by @cite_13 for 2.5D data inspired our body part representation. 
Relatively simple methods have proven to achieve good performance in segmentation tasks with easy'' backgrounds like Human80k, a subset of Human3.6M @cite_3 . | {
"cite_N": [
"@cite_41",
"@cite_42",
"@cite_3",
"@cite_27",
"@cite_13"
],
"mid": [
"2398640840",
"2104408738",
"2052747804",
"2031489346",
"2036196300"
],
"abstract": [
"This paper addresses the problem of human body part segmentation in conventional RGB images, which has several applications in robotics, such as learning from demonstration and human-robot handovers. The proposed solution is based on Convolutional Neural Networks (CNNs). We present a network architecture that assigns each pixel to one of a predefined set of human body part classes, such as head, torso, arms, legs. After initializing weights with a very deep convolutional network for image classification, the network can be trained end-to-end and yields precise class predictions at the original input resolution. Our architecture particularly improves on over-fitting issues in the up-convolutional part of the network. Relying only on RGB rather than RGB-D images also allows us to apply the approach outdoors. The network achieves state-of-the-art performance on the PASCAL Parts dataset. Moreover, we introduce two new part segmentation datasets, the Freiburg sitting people dataset and the Freiburg people in disaster dataset. We also present results obtained with a ground robot and an unmanned aerial vehicle.",
"Detecting objects becomes difficult when we need to deal with large shape deformation, occlusion and low resolution. We propose a novel approach to i) handle large deformations and partial occlusions in animals (as examples of highly deformable objects), ii) describe them in terms of body parts, and iii) detect them when their body parts are hard to detect (e.g., animals depicted at low resolution). We represent the holistic object and body parts separately and use a fully connected model to arrange templates for the holistic object and body parts. Our model automatically decouples the holistic object or body parts from the model when they are hard to detect. This enables us to represent a large number of holistic object and body part combinations to better deal with different \"detectability\" patterns caused by deformations, occlusion and or low resolution. We apply our method to the six animal categories in the PASCAL VOC dataset and show that our method significantly improves state-of-the-art (by 4.1 AP) and provides a richer representation for objects. During training we use annotations for body parts (e.g., head, torso, etc.), making use of a new dataset of fully annotated object parts for PASCAL VOC 2010, which provides a mask for each part.",
"Recently, the emergence of Kinect systems has demonstrated the benefits of predicting an intermediate body part labeling for 3D human pose estimation, in conjunction with RGB-D imagery. The availability of depth information plays a critical role, so an important question is whether a similar representation can be developed with sufficient robustness in order to estimate 3D pose from RGB images. This paper provides evidence for a positive answer, by leveraging (a) 2D human body part labeling in images, (b) second-order label-sensitive pooling over dynamically computed regions resulting from a hierarchical decomposition of the body, and (c) iterative structured-output modeling to contextualize the process based on 3D pose estimates. For robustness and generalization, we take advantage of a recent large-scale 3D human motion capture dataset, Human3.6M[18] that also has human body part labeling annotations available with images. We provide extensive experimental studies where alternative intermediate representations are compared and report a substantial 33% error reduction over competitive discriminative baselines that regress 3D human pose against global HOG features.",
"The Pascal Visual Object Classes (VOC) challenge is a benchmark in visual object category recognition and detection, providing the vision and machine learning communities with a standard dataset of images and annotation, and standard evaluation procedures. Organised annually from 2005 to present, the challenge and its associated dataset has become accepted as the benchmark for object detection. This paper describes the dataset and evaluation procedure. We review the state-of-the-art in evaluated methods for both classification and detection, analyse whether the methods are statistically different, what they are learning from the images (e.g. the object or its context), and what the methods find easy or confuse. The paper concludes with lessons learnt in the three year history of the challenge, and proposes directions for future improvement and extension.",
"We describe two new approaches to human pose estimation. Both can quickly and accurately predict the 3D positions of body joints from a single depth image without using any temporal information. The key to both approaches is the use of a large, realistic, and highly varied synthetic set of training images. This allows us to learn models that are largely invariant to factors such as pose, body shape, field-of-view cropping, and clothing. Our first approach employs an intermediate body parts representation, designed so that an accurate per-pixel classification of the parts will localize the joints of the body. The second approach instead directly regresses the positions of body joints. By using simple depth pixel comparison features and parallelizable decision forests, both approaches can run super-real time on consumer hardware. Our evaluation investigates many aspects of our methods, and compares the approaches to each other and to the state of the art. Results on silhouettes suggest broader applicability to other imaging modalities."
]
} |
1701.02468 | 2950459049 | 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the-wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the-art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes. | Following previous work on cardboard people @cite_22 and contour people @cite_1 , an attempt to work towards an intermediate-level person representation is the JHMDB dataset and the related labeling tool @cite_6 . It relies on 'puppets' to ease the annotation task, while providing a higher level of detail than solely joint locations. | {
"cite_N": [
"@cite_1",
"@cite_22",
"@cite_6"
],
"mid": [
"1985100786",
"2133716807",
"2034014085"
],
"abstract": [
"We define a new “contour person” model of the human body that has the expressive power of a detailed 3D model and the computational benefits of a simple 2D part-based model. The contour person (CP) model is learned from a 3D SCAPE model of the human body that captures natural shape and pose variations; the projected contours of this model, along with their segmentation into parts forms the training set. The CP model factors deformations of the body into three components: shape variation, viewpoint change and part rotation. This latter model also incorporates a learned non-rigid deformation model. The result is a 2D articulated model that is compact to represent, simple to compute with and more expressive than previous models. We demonstrate the value of such a model in 2D pose estimation and segmentation. Given an initial pose from a standard pictorial-structures method, we refine the pose and shape using an objective function that segments the scene into foreground and background regions. The result is a parametric, human-specific, image segmentation.",
"We extend the work of Black and Yacoob (1995) on the tracking and recognition of human facial expressions using parametrized models of optical flow to deal with the articulated motion of human limbs. We define a \"cardboard person model\" in which a person's limbs are represented by a set of connected planar patches. The parametrized image motion of these patches is constrained to enforce articulated motion and is solved for directly using a robust estimation technique. The recovered motion parameters provide a rich and concise description of the activity that can be used for recognition. We propose a method for performing view-based recognition of human activities from the optical flow parameters that extends previous methods to cope with the cyclical nature of human motion. We illustrate the method with examples of tracking human legs of long image sequences.",
"Although action recognition in videos is widely studied, current methods often fail on real-world datasets. Many recent approaches improve accuracy and robustness to cope with challenging video sequences, but it is often unclear what affects the results most. This paper attempts to provide insights based on a systematic performance evaluation using thoroughly-annotated data of human actions. We annotate human Joints for the HMDB dataset (J-HMDB). This annotation can be used to derive ground truth optical flow and segmentation. We evaluate current methods using this dataset and systematically replace the output of various algorithms with ground truth. This enables us to discover what is important - for example, should we work on improving flow algorithms, estimating human bounding boxes, or enabling pose estimation? In summary, we find that high-level pose features greatly outperform low/mid level features, in particular, pose over time is critical, but current pose estimation algorithms are not yet reliable enough to provide this information. We also find that the accuracy of a top-performing action recognition framework can be greatly increased by refining the underlying low/mid level features; this suggests it is important to improve optical flow and human detection algorithms. Our analysis and J-HMDB dataset should facilitate a deeper understanding of action recognition algorithms."
]
} |
1701.02468 | 2950459049 | 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the-wild". However, depending on the level of detail, it can be hard to impossible to acquire labeled data for training 2D estimators on large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the-art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable on large scale. The data, code and models are available for research purposes. | The attempt to unify representations for human bodies has been made mainly in the context of human kinematics @cite_32 @cite_45 . In their work, a rich representation for 3D motion capture marker sets is used to transfer captures to different targets. The setup of markers to capture not only human motion but also shape has been explored by @cite_5 for motion capture scenarios. While they optimized the placement of markers for a 12 camera setup, we must ensure that the markers disambiguate pose and shape from a single view. Hence, we use a denser set of markers. | {
"cite_N": [
"@cite_5",
"@cite_45",
"@cite_32"
],
"mid": [
"2122633688",
"",
"1487977235"
],
"abstract": [
"Marker-based motion capture (mocap) is widely criticized as producing lifeless animations. We argue that important information about body surface motion is present in standard marker sets but is lost in extracting a skeleton. We demonstrate a new approach called MoSh (Motion and Shape capture), that automatically extracts this detail from mocap data. MoSh estimates body shape and pose together using sparse marker data by exploiting a parametric model of the human body. In contrast to previous work, MoSh solves for the marker locations relative to the body and estimates accurate body shape directly from the markers without the use of 3D scans; this effectively turns a mocap system into an approximate body scanner. MoSh is able to capture soft tissue motions directly from markers by allowing body shape to vary over time. We evaluate the effect of different marker sets on pose and shape accuracy and propose a new sparse marker set for capturing soft-tissue motion. We illustrate MoSh by recovering body shape, pose, and soft-tissue motion from archival mocap data and using this to produce animations with subtlety and realism. We also show soft-tissue motion retargeting to new characters and show how to magnify the 3D deformations of soft tissue to create animations with appealing exaggerations.",
"",
"We present a large-scale whole-body human motion database consisting of captured raw motion data as well as the corresponding post-processed motions. This database serves as a key element for a wide variety of research questions related e.g. to human motion analysis, imitation learning, action recognition and motion generation in robotics. In contrast to previous approaches, the motion data in our database considers the motions of the observed human subject as well as the objects with which the subject is interacting. The information about human-object relations is crucial for the proper understanding of human actions and their goal-directed reproduction on a robot. To facilitate the creation and processing of human motion data, we propose procedures and techniques for capturing of motion, labeling and organization of the motion capture data based on a Motion Description Tree, as well as for the normalization of human motion to an unified representation based on a reference model of the human body. We provide software tools and interfaces to the database allowing access and efficient search with the proposed motion representation."
]
} |
1701.02344 | 2952813088 | Context: Information Technology consumes up to 10% of the world's electricity generation, contributing to CO2 emissions and high energy costs. Data centers, particularly databases, use up to 23% of this energy. Therefore, building an energy-efficient (green) database engine could reduce energy consumption and CO2 emissions. Goal: To understand the factors driving databases' energy consumption and execution time throughout their evolution. Method: We conducted an empirical case study of energy consumption by two MySQL database engines, InnoDB and MyISAM, across 40 releases. We examined the relationships of four software metrics to energy consumption and execution time to determine which metrics reflect the greenness and performance of a database. Results: Our analysis shows that database engines' energy consumption and execution time increase as databases evolve. Moreover, the Lines of Code metric is correlated moderately to strongly with energy consumption and execution time in 88% of cases. Conclusions: Our findings provide insights to both practitioners and researchers. Database administrators may use them to select a fast, green release of the MySQL database engine. MySQL database-engine developers may use the software metric to assess products' greenness and performance. Researchers may use our findings to further develop new hypotheses or build models to predict greenness and performance of databases. | There are several studies about the power consumption of devices. @cite_15 produced power models for the complete system depending on processor performance events. @cite_1 measured and modeled the power consumption of hard drives. The hard disk state model provides both the quantitative data and insight necessary to design an efficient power management system. @cite_45 studied two types of optimization (namely, transport-level and application-level) of network interfaces to decrease their energy consumption. | {
"cite_N": [
"@cite_15",
"@cite_45",
"@cite_1"
],
"mid": [
"2124523382",
"1552956031",
"2110211554"
],
"abstract": [
"This paper proposes the use of microprocessor performance counters for online measurement of complete system power consumption. While past studies have demonstrated the use of performance counters for microprocessor power, to the best of our knowledge, we are the first to create power models for the entire system based on processor performance events. Our approach takes advantage of the \"trickle-down\" effect of performance events in a microprocessor. We show how well known performance-related events within a microprocessor such as cache misses and DMA transactions are highly correlated to power consumption outside of the microprocessor. Using measurement of an actual system running scientific and commercial workloads we develop and validate power models for five subsystems: memory, chipset, I/O, disk and microprocessor. These models are shown to have an average error of less than 9% per subsystem across the considered workloads. Through the use of these models and existing on-chip performance event counters, it is possible to estimate system power consumption without the need for additional power sensing hardware",
"",
"Recently, a large effort has been made to reduce the power consumed by computer-systems. Multiple power states have been defined, and mechanisms have been developed to allow system software to control transitions between these states. Unfortunately, little work has been done to determine effective times to change states. Statistical models of the power utilized by individual subsystems can provide a basis for making such decisions. The hard disk state model described provides both the quantitative data and insight necessary to design an efficient power management system."
]
} |
1701.02344 | 2952813088 | Context: Information Technology consumes up to 10% of the world's electricity generation, contributing to CO2 emissions and high energy costs. Data centers, particularly databases, use up to 23% of this energy. Therefore, building an energy-efficient (green) database engine could reduce energy consumption and CO2 emissions. Goal: To understand the factors driving databases' energy consumption and execution time throughout their evolution. Method: We conducted an empirical case study of energy consumption by two MySQL database engines, InnoDB and MyISAM, across 40 releases. We examined the relationships of four software metrics to energy consumption and execution time to determine which metrics reflect the greenness and performance of a database. Results: Our analysis shows that database engines' energy consumption and execution time increase as databases evolve. Moreover, the Lines of Code metric is correlated moderately to strongly with energy consumption and execution time in 88% of cases. Conclusions: Our findings provide insights to both practitioners and researchers. Database administrators may use them to select a fast, green release of the MySQL database engine. MySQL database-engine developers may use the software metric to assess products' greenness and performance. Researchers may use our findings to further develop new hypotheses or build models to predict greenness and performance of databases. | @cite_34 performed a quantitative analysis of the costs and benefits of spinning down a disk drive as a power management technique. The main idea behind the power consumption measurement movement is to be followed by suggestions or actions taken in order to find solutions to any undesirable outcomes. @cite_33 applied methods to analyze the relationship between global variable usage and the efforts required by software maintenance and examined the effects of optimizations upon power usage. @cite_49 employed source code change techniques to decrease the energy overheads accompanying application/OS connections and modified the source code changes and compiler optimizations in order to reduce power usage. @cite_38 introduced a framework for studying the power-performance efficiency of the NAS parallel benchmarks on a 32-node Beowulf cluster. | {
"cite_N": [
"@cite_38",
"@cite_34",
"@cite_33",
"@cite_49"
],
"mid": [
"1998208137",
"1557863631",
"1172579163",
"2083774192"
],
"abstract": [
"Software Fault Injection (SFI) is an established technique for assessing the robustness of a software under test by exposing it to faults in its operational environment. Depending on the complexity of this operational environment, the complexity of the software under test, and the number and type of faults, a thorough SFI assessment can entail (a) numerous experiments and (b) long experiment run times, which both contribute to a considerable execution time for the tests. In order to counteract this increase when dealing with complex systems, recent works propose to exploit parallel hardware to execute multiple experiments at the same time. While PArallel fault INjections (PAIN) yield higher experiment throughput, they are based on an implicit assumption of non-interference among the simultaneously executing experiments. In this paper we investigate the validity of this assumption and determine the trade-off between increased throughput and the accuracy of experimental results obtained from PAIN experiments.",
"With the advent and subsequent popularity of portable computers, power management of system components has become an important issue. Current portable computers implement a number of power reduction techniques to achieve a longer battery life. Included among these is spinning down a disk during long periods of inactivity. In this paper, we perform a quantitative analysis of the potential costs and benefits of spinning down the disk drive as a power reduction technique. Our conclusion is that almost all the energy consumed by a disk drive can be eliminated with little loss in performance. Although on current hardware, reliability can be impacted by our policies, the next generation of disk drives will use technology (such as dynamic head loading) which is virtually unaffected by repeated spinups. We found that the optimal spindown delay time, the amount of time the disk idles before it is spun down, is 2 seconds. This differs significantly from the 3-5 minutes in current practice by industry. We will show in this paper the effect of varying the spindown delay on power consumption; one conclusion is that a 3-5 minute delay results in only half of the potential benefit of spinning down a disk.",
"Previously, compiler transformations have primarily focussed on minimizing program execution time. This thesis explores some examples of applying compiler technology outside of its original scope. Specifically, we apply compiler analysis to the field of software maintenance and evolution by examining the use of global data throughout the lifetimes of many open source projects. Also, we investigate the effects of compiler optimizations on the power consumption of small battery powered devices. Finally, in an area closer to traditional compiler research we examine automatic program parallelization in the form of thread-level speculation.",
"This paper proposes four types of source code transformations for operating system (OS)-driven embedded software programs to reduce their energy consumption. Their key features include spanning of process boundaries and minimization of the energy consumed in the execution of OS services—opportunities which are beyond the reach of conventional compiler optimizations and source code transformations. We have applied the proposed transformations to several multiprocess benchmark programs in the context of an embedded Linux OS running on an Intel StrongARM processor. They achieve up to 37.9% (23.8%, on average) energy reduction compared to highly compiler-optimized implementations."
]
} |
1701.02344 | 2952813088 | Context: Information Technology consumes up to 10% of the world's electricity generation, contributing to CO2 emissions and high energy costs. Data centers, particularly databases, use up to 23% of this energy. Therefore, building an energy-efficient (green) database engine could reduce energy consumption and CO2 emissions. Goal: To understand the factors driving databases' energy consumption and execution time throughout their evolution. Method: We conducted an empirical case study of energy consumption by two MySQL database engines, InnoDB and MyISAM, across 40 releases. We examined the relationships of four software metrics to energy consumption and execution time to determine which metrics reflect the greenness and performance of a database. Results: Our analysis shows that database engines' energy consumption and execution time increase as databases evolve. Moreover, the Lines of Code metric is correlated moderately to strongly with energy consumption and execution time in 88% of cases. Conclusions: Our findings provide insights to both practitioners and researchers. Database administrators may use them to select a fast, green release of the MySQL database engine. MySQL database-engine developers may use the software metric to assess products' greenness and performance. Researchers may use our findings to further develop new hypotheses or build models to predict greenness and performance of databases. | Some researchers have concentrated on the idea of benchmarking and examining power measurement. @cite_42 described a tool that approximates the energy consumption of software in order to help concerned consumers make knowledgeable decisions about the software they use. @cite_58 introduced a complete system power simulator that represents the CPU, the hierarchy of memory and a low-power disk subsystem and calculates the power performance of both the applications and the OS. | {
"cite_N": [
"@cite_42",
"@cite_58"
],
"mid": [
"2111454275",
"2124567303"
],
"abstract": [
"The energy consumption of computers has become an important environmental issue. This paper describes the development of Green Tracker, a tool that estimates the energy consumption of software in order to help concerned users make informed decisions about the software they use. We present preliminary results gathered from this system's initial usage. Ultimately the information gathered from this tool will be used to raise awareness and help make the energy consumption of software a more central concern among software developers.",
"Power dissipation has become one of the most critical factors for the continued development of both high-end and low-end computer systems. We present a complete system power simulator, called SoftWatt, that models the CPU, memory hierarchy, and a low-power disk subsystem and quantifies the power behavior of both the application and operating system. This tool, built on top of the SimOS infrastructure, uses validated analytical energy models to identify the power hotspots in the system components, capture relative contributions of the user and kernel code to the system power profile, identify the power-hungry operating system services and characterize the variance in kernel power profile with respect to workload. Our results using Spec JVM98 benchmark suite emphasize the importance of complete system simulation to understand the power impact of architecture and operating system on application execution."
]
} |
1701.02344 | 2952813088 | Context: Information Technology consumes up to 10% of the world's electricity generation, contributing to CO2 emissions and high energy costs. Data centers, particularly databases, use up to 23% of this energy. Therefore, building an energy-efficient (green) database engine could reduce energy consumption and CO2 emissions. Goal: To understand the factors driving databases' energy consumption and execution time throughout their evolution. Method: We conducted an empirical case study of energy consumption by two MySQL database engines, InnoDB and MyISAM, across 40 releases. We examined the relationships of four software metrics to energy consumption and execution time to determine which metrics reflect the greenness and performance of a database. Results: Our analysis shows that database engines' energy consumption and execution time increase as databases evolve. Moreover, the Lines of Code metric is correlated moderately to strongly with energy consumption and execution time in 88% of cases. Conclusions: Our findings provide insights to both practitioners and researchers. Database administrators may use them to select a fast, green release of the MySQL database engine. MySQL database-engine developers may use the software metric to assess products' greenness and performance. Researchers may use our findings to further develop new hypotheses or build models to predict greenness and performance of databases. | Researchers have studied changes to the design of a database engine, but not to its energy consumption. For example, @cite_9 investigated changes to the amount of communicated information passed to system administrators over multiple versions of the PostgreSQL database engine and the Hadoop data processing framework. | {
"cite_N": [
"@cite_9"
],
"mid": [
"1886625064"
],
"abstract": [
"SUMMARY Substantial research in software engineering focuses on understanding the dynamic nature of software systems in order to improve software maintenance and program comprehension. This research typically makes use of automated instrumentation and profiling techniques after the fact, that is, without considering domain knowledge. In this paper, we examine another source of dynamic information that is generated from statements that have been inserted into the code base during development to draw the system administrators' attention to important run-time phenomena. We call this source communicated information (CI). Examples of CI include execution logs and system events. The availability of CI has sparked the development of an ecosystem of Log Processing Apps (LPAs) that surround the software system under analysis to monitor and document various run-time constraints. The dependence of LPAs on the timeliness, accuracy and granularity of the CI means that it is important to understand the nature of CI and how it evolves over time, both qualitatively and quantitatively. Yet, to our knowledge, little empirical analysis has been performed on CI and its evolution. In a case study on two large open source and one industrial software systems, we explore the evolution of CI by mining the execution logs of these systems and the logging statements in the source code. Our study illustrates the need for better traceability between CI and the LPAs that analyze the CI. In particular, we find that the CI changes at a high rate across versions, which could lead to fragile LPAs. We found that up to 70% of these changes could have been avoided and the impact of 15% to 80% of the changes can be controlled through the use of robust analysis techniques by LPAs. We also found that LPAs that track implementation-level CI (e.g. performance analysis) and the LPAs that monitor error messages (system health monitoring) are more fragile than LPAs that track domain-level CI (e.g. workload modelling), because the latter CI tends to be long-lived. Copyright © 2013 John Wiley & Sons, Ltd."
]
} |
1701.02386 | 2952533959 | Generative Adversarial Networks (GAN) (, 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. | Rosset and Segal @cite_8 proposed to use an additive mixture model in the case where the log likelihood can be computed. They derived the update rule via computing the steepest descent direction when adding a component with infinitesimal weight. This leads to an update rule which is degenerate if the generative model can produce arbitrarily concentrated distributions (indeed the optimal component is just a Dirac distribution) which is thus not suitable for the GAN setting. Moreover, their results do not apply once the weight @math becomes non-infinitesimal. In contrast, for any fixed weight of the new component our approach gives the overall optimal update (rather than just the best direction), and applies to any @math -divergence. Remarkably, in both theories, improvements of the mixture are guaranteed only if the new "weak" learner is still good enough (see Conditions &) | {
"cite_N": [
"@cite_8"
],
"mid": [
"2121805075"
],
"abstract": [
"Several authors have suggested viewing boosting as a gradient descent search for a good fit in function space. We apply gradient-based boosting methodology to the unsupervised learning problem of density estimation. We show convergence properties of the algorithm and prove that a strength of weak learnability property applies to this problem as well. We illustrate the potential of this approach through experiments with boosting Bayesian networks to learn density models."
]
} |
1701.02386 | 2952533959 | Generative Adversarial Networks (GAN) (, 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. | Similarly, Barron and Li @cite_1 studied the construction of mixtures minimizing the Kullback divergence and proposed a greedy procedure for doing so. They also proved that under certain conditions, finite mixtures can approximate arbitrary mixtures at a rate @math where @math is the number of components in the mixture when the weight of each newly added component is @math . These results are specific to the Kullback divergence but are consistent with our more general results. | {
"cite_N": [
"@cite_1"
],
"mid": [
"2168227362"
],
"abstract": [
"Gaussian mixtures (or so-called radial basis function networks) for density estimation provide a natural counterpart to sigmoidal neural networks for function fitting and approximation. In both cases, it is possible to give simple expressions for the iterative improvement of performance as components of the network are introduced one at a time. In particular, for mixture density estimation we show that a k-component mixture estimated by maximum likelihood (or by an iterative likelihood improvement that we introduce) achieves log-likelihood within order 1/k of the log-likelihood achievable by any convex combination. Consequences for approximation and estimation using Kullback-Leibler risk are also given. A Minimum Description Length principle selects the optimal number of components k that minimizes the risk bound."
]
} |
1701.02386 | 2952533959 | Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. | @cite_19 propose an additive procedure similar to ours but with a different reweighting scheme, which is not motivated by a theoretical analysis of optimality conditions. On every new iteration the authors propose to run GAN on the top @math training examples with the maximum value of the discriminator from the last iteration. Empirical results of Section show that this heuristic often fails to address the missing modes problem. | {
"cite_N": [
"@cite_19"
],
"mid": [
"2560169576"
],
"abstract": [
"Ensembles are a popular way to improve results of discriminative CNNs. The combination of several networks trained starting from different initializations improves results significantly. In this paper we investigate the usage of ensembles of GANs. The specific nature of GANs opens up several new ways to construct ensembles. The first one is based on the fact that in the minimax game which is played to optimize the GAN objective the generator network keeps on changing even after the network can be considered optimal. As such ensembles of GANs can be constructed based on the same network initialization but just taking models which have different amount of iterations. These so-called self ensembles are much faster to train than traditional ensembles. The second method, called cascade GANs, redirects part of the training data which is badly modeled by the first GAN to another GAN. In experiments on the CIFAR10 dataset we show that ensembles of GANs obtain model probability distributions which better model the data distribution. In addition, we show that these improved results can be obtained at little additional computational cost."
]
} |
1701.02386 | 2952533959 | Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. | Finally, many papers investigate completely different approaches for addressing the same issue by directly modifying the training objective of an individual GAN. For instance, @cite_10 add an autoencoding cost to the training objective of GAN, while @cite_22 allow the generator to "look a few steps ahead" when making a gradient step. | {
"cite_N": [
"@cite_10",
"@cite_22"
],
"mid": [
"2963865839",
"2554314924"
],
"abstract": [
"Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution during the early phases of training, thus providing a unified solution to the missing modes problem.",
"We introduce a method to stabilize Generative Adversarial Networks (GANs) by defining the generator objective with respect to an unrolled optimization of the discriminator. This allows training to be adjusted between using the optimal discriminator in the generator's objective, which is ideal but infeasible in practice, and using the current value of the discriminator, which is often unstable and leads to poor solutions. We show how this technique solves the common problem of mode collapse, stabilizes training of GANs with complex recurrent generators, and increases diversity and coverage of the data distribution by the generator."
]
} |
1701.02386 | 2952533959 | Generative Adversarial Networks (GAN) (Goodfellow et al., 2014) are an effective method for training generative models of complex data such as natural images. However, they are notoriously hard to train and can suffer from the problem of missing modes where the model is not able to produce examples in certain regions of the space. We propose an iterative procedure, called AdaGAN, where at every step we add a new component into a mixture model by running a GAN algorithm on a reweighted sample. This is inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. We prove that such an incremental procedure leads to convergence to the true distribution in a finite number of steps if each step is optimal, and convergence at an exponential rate otherwise. We also illustrate experimentally that this procedure addresses the problem of missing modes. | The paper is organized as follows. In Section we present our main theoretical results regarding optimization of mixture models under general @math -divergences. In particular we show that it is possible to build an optimal mixture in an incremental fashion, where each additional component is obtained by applying a GAN-style procedure with a reweighted distribution. In Section we show that if the GAN optimization at each step is perfect, the process converges to the true data distribution at an exponential rate (or even in a finite number of steps, for which we provide a necessary and sufficient condition). Then we show in Section that imperfect GAN solutions still lead to the exponential rate of convergence under certain "weak learnability" conditions. These results naturally lead us to a new boosting-style iterative procedure for constructing generative models, which is combined with GAN in , resulting in a new algorithm called AdaGAN. 
Finally, we report initial empirical results in Section , where we compare AdaGAN with several benchmarks, including the original GAN, a uniform mixture of multiple independently trained GANs, and the iterative procedure of @cite_19 . | {
"cite_N": [
"@cite_19"
],
"mid": [
"2560169576"
],
"abstract": [
"Ensembles are a popular way to improve results of discriminative CNNs. The combination of several networks trained starting from different initializations improves results significantly. In this paper we investigate the usage of ensembles of GANs. The specific nature of GANs opens up several new ways to construct ensembles. The first one is based on the fact that in the minimax game which is played to optimize the GAN objective the generator network keeps on changing even after the network can be considered optimal. As such ensembles of GANs can be constructed based on the same network initialization but just taking models which have different amount of iterations. These so-called self ensembles are much faster to train than traditional ensembles. The second method, called cascade GANs, redirects part of the training data which is badly modeled by the first GAN to another GAN. In experiments on the CIFAR10 dataset we show that ensembles of GANs obtain model probability distributions which better model the data distribution. In addition, we show that these improved results can be obtained at little additional computational cost."
]
} |
1701.02298 | 2576044318 | The focus of the current research is to identify people of interest in social networks. We are especially interested in studying dark networks, which represent illegal or covert activity. In such networks, people are unlikely to disclose accurate information when queried. We present REDLEARN, an algorithm for sampling dark networks with the goal of identifying as many nodes of interest as possible. We consider two realistic lying scenarios, which describe how individuals in a dark network may attempt to conceal their connections. We test and present our results on several real-world multilayered networks, and show that REDLEARN achieves up to a 340% improvement over the next best strategy. | There are a multitude of sampling techniques for network exploration, including random walks ( @cite_10 , @cite_9 , @cite_5 ), biased random walks ( @cite_3 ), or walks combined with reversible Markov Chains ( @cite_15 ), Bayesian methods ( @cite_12 ), or standard exhaustive search algorithms like depth-first or breadth-first searches, such as @cite_18 @cite_16 @cite_2 @cite_11 @cite_14 . However, these methods fail to use discovered knowledge, such as node attributes, effectively. Various researchers have considered the problem of sampling for specific goals, such as maximizing the number of nodes observed. For example, Avrachenkov et al. present an algorithm to sample the node with the highest estimated unobserved degree @cite_7 . Hanneke and Xing @cite_17 , and Maiya and Berger-Wolf @cite_4 examine online sampling for centrality measures. Macskassy and Provost develop a guilt-by-association method to identify suspicious individuals in a partially-known network @cite_13 . | {
"cite_N": [
"@cite_13",
"@cite_18",
"@cite_14",
"@cite_4",
"@cite_7",
"@cite_9",
"@cite_17",
"@cite_3",
"@cite_2",
"@cite_5",
"@cite_15",
"@cite_16",
"@cite_10",
"@cite_12",
"@cite_11"
],
"mid": [
"2003129826",
"",
"",
"1556756413",
"",
"1488123997",
"",
"2168586324",
"",
"1964062850",
"1586184796",
"",
"1984062978",
"2144731007",
""
],
"abstract": [
"Complex networks underlie an enormous variety of social, biological, physical, and virtual systems. A profound complication for the science of complex networks is that in most cases, observing all nodes and all network interactions is impossible. Previous work addressing the impacts of partial network data is surprisingly limited, focuses primarily on missing nodes, and suggests that network statistics derived from subsampled data are not suitable estimators for the same network statistics describing the overall network topology. We generate scaling methods to predict true network statistics, including the degree distribution, from only partial knowledge of nodes, links, or weights. Our methods are transparent and do not assume a known generating process for the network, thus enabling prediction of network statistics for a wide variety of applications. We validate analytical results on four simulated network classes and empirical data sets of various sizes. We perform subsampling experiments by varying proportions of sampled data and demonstrate that our scaling methods can provide very good estimates of true network statistics while acknowledging limits. Lastly, we apply our techniques to a set of rich and evolving large-scale social networks, Twitter reply networks. Based on 100 million tweets, we use our scaling techniques to propose a statistical characterization of the Twitter Interactome from September 2008 to November 2008. Our treatment allows us to find support for Dunbar's hypothesis in detecting an upper threshold for the number of active social contacts that individuals maintain over the course of one week.",
"",
"",
"In this work, we investigate the use of online or “crawling” algorithms to sample large social networks in order to determine the most influential or important individuals within the network (by varying definitions of network centrality). We describe a novel sampling technique based on concepts from expander graphs. We empirically evaluate this method in addition to other online sampling strategies on several real-world social networks. We find that, by sampling nodes to maximize the expansion of the sample, we are able to approximate the set of most influential individuals across multiple measures of centrality.",
"",
"",
"",
"We study the biased random-walk process in random uncorrelated networks with arbitrary degree distributions. In our model, the bias is defined by the preferential transition probability, which, in recent years, has been commonly used to study the efficiency of different routing protocols in communication networks. We derive exact expressions for the stationary occupation probability and for the mean transit time between two nodes. The effect of the cyclic search on transit times is also explored. Results presented in this paper provide the basis for a theoretical treatment of transport-related problems in complex networks, including quantitative estimation of the critical value of the packet generation rate.",
"",
"We investigate random walks on complex networks and derive an exact expression for the mean firstpassage time (MFPT) between two nodes. We introduce for each node the random walk centrality C, which is the ratio between its coordination number and a characteristic relaxation time, and show that it determines essentially the MFPT. The centrality of a node determines the relative speed by which a node can receive and spread information over the network in a random process. Numerical simulations of an ensemble of random walkers moving on paradigmatic network models confirm this analytical prediction.",
"",
"",
"We investigate network exploration by random walks defined via stationary and adaptive transition probabilities on large graphs. We derive an exact formula valid for arbitrary graphs and arbitrary walks with stationary transition probabilities (STP), for the average number of discovered edges as a function of time. We show that for STP walks site and edge exploration obey the same scaling n^λ as a function of time n. Therefore, edge exploration on graphs with many loops is always lagging compared to site exploration, the revealed graph being sparse until almost all nodes have been discovered. We then introduce the edge explorer model (EEM), which presents a novel class of adaptive walks that perform faithful network discovery even on dense networks.",
"In many multivariate domains, we are interested in analyzing the dependency structure of the underlying distribution, e.g., whether two variables are in direct interaction. We can represent dependency structures using Bayesian network models. To analyze a given data set, Bayesian model selection attempts to find the most likely (MAP) model, and uses its structure to answer these questions. However, when the amount of available data is modest, there might be many models that have non-negligible posterior. Thus, we want to compute the Bayesian posterior of a feature, i.e., the total posterior probability of all models that contain it. In this paper, we propose a new approach for this task. We first show how to efficiently compute a sum over the exponential number of networks that are consistent with a fixed order over network variables. This allows us to compute, for a given order, both the marginal probability of the data and the posterior of a feature. We then use this result as the basis for an algorithm that approximates the Bayesian posterior of a feature. Our approach uses a Markov Chain Monte Carlo (MCMC) method, but over orders rather than over network structures. The space of orders is smaller and more regular than the space of structures, and has a much smoother posterior “landscape”. We present empirical results on synthetic and real-life datasets that compare our approach to full model averaging (when possible), to MCMC over network structures, and to a non-Bayesian bootstrap approach.",
""
]
} |
1701.02368 | 2952889941 | Social networks allow rapid spread of ideas and innovations while the negative information can also propagate widely. When cascades with different opinions reach the same user, the cascade arriving first is the most likely to be taken by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget @math , the rumor blocking problem asks for @math seed users to trigger the spread of the positive cascade such that the number of the users who are not influenced by rumor can be maximized. The prior works have shown that the rumor blocking problem can be approximated within a factor of @math by a classic greedy algorithm combined with Monte Carlo simulation with the running time of @math , where @math and @math are the number of users and edges, respectively. Unfortunately, the Monte-Carlo-simulation-based methods are extremely time-consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in @math expected time and provides a @math -approximation with a high probability. The experimental results on both the real-world and synthetic social networks have shown that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and it is able to find the seed nodes which are effective in limiting the spread of rumor. | C. Budak et al. @cite_0 are among the first to study the misinformation containment problem. In particular, they consider the multi-campaign independent cascade model and investigate the problem of identifying a subset of individuals that need to be convinced to adopt the "good" campaign so as to minimize the number of people that adopt the rumor. X. He @cite_21 and L.
Fan @cite_9 further study this problem under the competitive linear threshold model and the OPOAO model, respectively. S. Li @cite_22 later formulate the @math rumor restriction problem and show a @math -approximation. As mentioned earlier, the existing approaches are time-consuming and thus cannot handle large social networks. Recently, several heuristic methods have been proposed by different works, such as @cite_1 @cite_16 , but they cannot provide performance guarantees. In this paper, we aim to design a rumor blocking algorithm which is provably effective and also efficient. | {
"cite_N": [
"@cite_22",
"@cite_9",
"@cite_21",
"@cite_1",
"@cite_0",
"@cite_16"
],
"mid": [
"1965759835",
"",
"2395439732",
"2296380453",
"",
"2135086682"
],
"abstract": [
"Online Social Networks (OSNs) have recently emerged as an effective medium for information sharing. Unfortunately, it has been frequently observed that malicious rumors being spread over an OSN are not controllable, and this is not desirable. This paper proposes a new problem, namely the γ - k rumor restriction problem, whose goal is, given a social network, to find a set S of nodes with k protectors (γ * k protectors from the contaminated set, and (1 - γ) * k protectors from the decontaminated set) to protect the network such that the number of decontaminated nodes is maximized. We show that the objective function of the γ - k rumor restriction problem is submodular, and use this result to design a greedy approximation algorithm with a performance ratio of 1 - 1/e for the problem under the linear threshold model and the independent cascade model, respectively. To verify our algorithms, we conduct experiments on real-world social networks including NetHEPT, WikiVote and Slashdot0811. The results show that our algorithm works efficiently and effectively.",
"",
"In many real-world situations, different and often opposite opinions, innovations, or products are competing with one another for their social influence in a networked society. In this paper, we study competitive influence propagation in social networks under the competitive linear threshold (CLT) model, an extension to the classic linear threshold model. Under the CLT model, we focus on the problem that one entity tries to block the influence propagation of its competing entity as much as possible by strategically selecting a number of seed nodes that could initiate its own influence propagation. We call this problem the influence blocking maximization (IBM) problem. We prove that the objective function of IBM in the CLT model is submodular, and thus a greedy algorithm could achieve a 1 − 1/e approximation ratio. However, the greedy algorithm requires Monte-Carlo simulations of competitive influence propagation, which makes the algorithm not efficient. We design an efficient algorithm CLDAG, which utilizes the properties of the CLT model, to address this issue. We conduct extensive simulations of CLDAG, the greedy algorithm, and other baseline algorithms on real-world and synthetic datasets. Our results show that CLDAG is able to provide the best accuracy, on par with the greedy algorithm and often better than other algorithms, while it is two orders of magnitude faster than the greedy algorithm.",
"In this paper, we study the Misinformation Containment (MC) problem. In particular, taking into account the faster development of misinformation detection techniques, we mainly focus on the case of limiting misinformation with known sources. We prove that under the Competitive Activation Model, the MC problem is NP-hard and show that it cannot be approximated in polynomial time within a ratio of e/(e-1) unless NP ⊆ DTIME(n^{O(log n)}). Due to its hardness, we propose an effective algorithm, exploiting the critical nodes and using the greedy approach as well as applying the CELF heuristic to achieve the goal. Comprehensive experiments on real social networks are conducted, and results show that our algorithm can effectively expand the awareness of correct information as well as limit the spread of misinformation.",
"",
"In many real-world scenarios, social network serves as a platform for information diffusion, alongside with positive information (truth) dissemination, negative information (rumor) also spread among the public. To make the social network as a reliable medium, it is necessary to have strategies to control rumor diffusion. In this article, we address the Least Cost Rumor Blocking (LCRB) problem where rumors originate from a community Cr in the network and a notion of protectors are used to limit the bad influence of rumors. The problem can be summarized as identifying a minimal subset of individuals as initial protectors to minimize the number of people infected in neighbor communities of Cr at the end of both diffusion processes. Observing the community structure property, we pay attention to a kind of vertex set, called bridge end set, in which each node has at least one direct in-neighbor in Cr and is reachable from rumors. Under the OOAO model, we study LCRB-P problem, in which α (0 <; α <; 1) fraction of bridge ends are required to be protected. We prove that the objective function of this problem is submodular and a greedy algorithm is adopted to derive a (1-1 e)-approximation. Furthermore, we study LCRB-D problem over the DOAA model, in which all the bridge ends are required to be protected, we prove that there is no polynomial time o(ln n)-approximation for the LCRB-D problem unless P = NP, and propose a Set Cover Based Greedy (SCBG) algorithm which achieves a O(ln n)-approximation ratio. Finally, to evaluate the efficiency and effectiveness of our algorithm, we conduct extensive comparison simulations in three real-world datasets, and the results show that our algorithm outperforms other heuristics."
]
} |
1701.02368 | 2952889941 | Social networks allow rapid spread of ideas and innovations while the negative information can also propagate widely. When cascades with different opinions reach the same user, the cascade arriving first is the most likely to be taken by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget @math , the rumor blocking problem asks for @math seed users to trigger the spread of the positive cascade such that the number of the users who are not influenced by rumor can be maximized. The prior works have shown that the rumor blocking problem can be approximated within a factor of @math by a classic greedy algorithm combined with Monte Carlo simulation with the running time of @math , where @math and @math are the number of users and edges, respectively. Unfortunately, the Monte-Carlo-simulation-based methods are extremely time-consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in @math expected time and provides a @math -approximation with a high probability. The experimental results on both the real-world and synthetic social networks have shown that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and it is able to find the seed nodes which are effective in limiting the spread of rumor. | Rumor source detection is another important problem in rumor control. The prior works primarily focus on the susceptible-infected-recovered (SIR) model, where the nodes can be infected by rumor and may recover later. Shah @cite_12 provide a systematic study and design a rumor source estimator based upon the concept of rumor centrality. Z. Wang @cite_4 propose a unified inference framework based on the union rumor centrality. | {
"cite_N": [
"@cite_4",
"@cite_12"
],
"mid": [
"2112515575",
"2111772797"
],
"abstract": [
"This paper addresses the problem of single rumor source detection with multiple observations, from a statistical point of view of spreading over a network, based on the susceptible-infectious model. For tree networks, multiple sequential observations for one single instance of rumor spreading cannot improve over the initial snapshot observation. The situation dramatically improves for multiple independent observations. We propose a unified inference framework based on the union rumor centrality, and provide explicit detection performance for degree-regular tree networks. Surprisingly, even with merely two observations, the detection probability at least doubles that of a single observation, and further approaches one, i.e., reliable detection, with increasing degree. This indicates that a richer diversity enhances detectability. For general graphs, a detection algorithm using a breadth-first search strategy is also proposed and evaluated. Besides rumor source detection, our results can be used in network forensics to combat recurring epidemic-like information spreading such as online anomaly and fraudulent email spams.",
"We provide a systematic study of the problem of finding the source of a rumor in a network. We model rumor spreading in a network with the popular susceptible-infected (SI) model and then construct an estimator for the rumor source. This estimator is based upon a novel topological quantity which we term rumor centrality. We establish that this is a maximum likelihood (ML) estimator for a class of graphs. We find the following surprising threshold phenomenon: on trees which grow faster than a line, the estimator always has nontrivial detection probability, whereas on trees that grow like a line, the detection probability will go to 0 as the network grows. Simulations performed on synthetic networks such as the popular small-world and scale-free networks, and on real networks such as an internet AS network and the U.S. electric power grid network, show that the estimator either finds the source exactly or within a few hops of the true source across different network topologies. We compare rumor centrality to another common network centrality notion known as distance centrality. We prove that on trees, the rumor center and distance center are equivalent, but on general networks, they may differ. Indeed, simulations show that rumor centrality outperforms distance centrality in finding rumor sources in networks which are not tree-like."
]
} |
1701.02368 | 2952889941 | Social networks allow rapid spread of ideas and innovations while the negative information can also propagate widely. When cascades with different opinions reach the same user, the cascade arriving first is the most likely to be taken by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget @math , the rumor blocking problem asks for @math seed users to trigger the spread of the positive cascade such that the number of the users who are not influenced by rumor can be maximized. The prior works have shown that the rumor blocking problem can be approximated within a factor of @math by a classic greedy algorithm combined with Monte Carlo simulation with the running time of @math , where @math and @math are the number of users and edges, respectively. Unfortunately, the Monte-Carlo-simulation-based methods are extremely time-consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in @math expected time and provides a @math -approximation with a high probability. The experimental results on both the real-world and synthetic social networks have shown that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and it is able to find the seed nodes which are effective in limiting the spread of rumor. | Rumor detection aims to distinguish rumor from genuine news. Leskovec et al. @cite_11 develop a framework for tracking the spread of misinformation and observe a set of persistent temporal patterns in the news cycle. Ratkiewicz et al. @cite_13 build a machine learning framework to detect the early stages of viral spreading of political misinformation. 
In @cite_2 , Qazvinian et al. address the rumor detection problem by exploring the effectiveness of three categories of features: content-based, network-based, and microblog-specific memes. Takahashi et al. @cite_19 study the characteristics of rumors and design a system to detect rumors on Twitter. | {
"cite_N": [
"@cite_2",
"@cite_19",
"@cite_13",
"@cite_11"
],
"mid": [
"2159981908",
"2057026632",
"202178741",
"2127492100"
],
"abstract": [
"A rumor is commonly defined as a statement whose true value is unverifiable. Rumors may spread misinformation (false information) or disinformation (deliberately false information) on a network of people. Identifying rumors is crucial in online social media where large amounts of information are easily spread across a large network by sources with unverified authority. In this paper, we address the problem of rumor detection in microblogs and explore the effectiveness of 3 categories of features: content-based, network-based, and microblog-specific memes for correctly identifying rumors. Moreover, we show how these features are also effective in identifying disinformers, users who endorse a rumor and further help it to spread. We perform our experiments on more than 10,000 manually annotated tweets collected from Twitter and show how our retrieval model achieves more than 0.95 in Mean Average Precision (MAP). Finally, we believe that our dataset is the first large-scale dataset on rumor detection. It can open new dimensions in analyzing online misinformation and other aspects of microblog conversations.",
"Twitter is useful in a situation of disaster for communication, announcements, requests for rescue and so on. On the other hand, it causes a negative by-product: spreading rumors. This paper describes how rumors spread after an earthquake disaster, and discusses how we can deal with them. We first investigated actual instances of rumors after the disaster and then attempted to disclose the characteristics of those rumors. Based on the investigation, we developed a system which detects candidate rumors from Twitter, and then evaluated it. The result of the experiment shows that the proposed algorithm can find rumors with acceptable accuracy.",
"We study astroturf political campaigns on microblogging platforms: politically-motivated individuals and organizations that use multiple centrally-controlled accounts to create the appearance of widespread support for a candidate or opinion. We describe a machine learning framework that combines topological, content-based and crowdsourced features of information diffusion networks on Twitter to detect the early stages of viral spreading of political misinformation. We present promising preliminary results with better than 96% accuracy in the detection of astroturf content in the run-up to the 2010 U.S. midterm elections.",
"Tracking new topics, ideas, and \"memes\" across the Web has been an issue of considerable interest. Recent work has developed methods for tracking topic shifts over long time scales, as well as abrupt spikes in the appearance of particular named entities. However, these approaches are less well suited to the identification of content that spreads widely and then fades over time scales on the order of days - the time scale at which we perceive news and events. We develop a framework for tracking short, distinctive phrases that travel relatively intact through on-line text; developing scalable algorithms for clustering textual variants of such phrases, we identify a broad class of memes that exhibit wide spread and rich variation on a daily basis. As our principal domain of study, we show how such a meme-tracking approach can provide a coherent representation of the news cycle - the daily rhythms in the news media that have long been the subject of qualitative interpretation but have never been captured accurately enough to permit actual quantitative analysis. We tracked 1.6 million mainstream media sites and blogs over a period of three months with the total of 90 million articles and we find a set of novel and persistent temporal patterns in the news cycle. In particular, we observe a typical lag of 2.5 hours between the peaks of attention to a phrase in the news media and in blogs respectively, with divergent behavior around the overall peak and a \"heartbeat\"-like pattern in the handoff between news and blogs. We also develop and analyze a mathematical model for the kinds of temporal variation that the system exhibits."
]
} |
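The @math -approximation that the abstract above attributes to the classic greedy algorithm is the standard marginal-gain rule for monotone submodular influence maximization. Below is a minimal sketch, not the paper's randomized algorithm: `influence` stands in for any set-function oracle (in practice estimated by Monte Carlo simulation), and all names are illustrative.

```python
def greedy_seeds(nodes, influence, k):
    """Classic greedy for monotone submodular influence maximization:
    repeatedly add the node with the largest marginal gain under the
    influence oracle. With Monte-Carlo influence estimates this is the
    (1 - 1/e)-approximation scheme the abstract refers to."""
    seeds = set()
    for _ in range(k):
        base = influence(seeds)
        best, best_gain = None, float("-inf")
        for v in nodes:
            if v in seeds:
                continue
            gain = influence(seeds | {v}) - base
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds
```

With a modular oracle (node weights that simply add up), the loop degenerates to picking the k heaviest nodes, which makes for an easy sanity check.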
1701.02490 | 2562337727 | The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods. 
| Reinforcement Learning An MDP provides a mathematical framework which is widely used for modelling the dynamics of an environment under different actions, and is useful for solving reinforcement learning problems @cite_27 . An MDP is defined by the tuple @math . The sets of all states and actions are represented by @math and @math , respectively. The reward and transition probability functions are given by @math and @math . Dynamic programming is used in cases where the environment's dynamics, i.e., the reward function and transition probabilities, are known in advance. Two popular dynamic programming algorithms are policy iteration and value iteration. For large-scale problems, it is difficult to visit the whole state space, which motivates function approximation, i.e., constructing an approximator of the entire value function @cite_18 @cite_17 . In this work, we use value iteration for small-scale situations, and further build a neural network approximator to solve the scalability problem. | {
"cite_N": [
"@cite_27",
"@cite_18",
"@cite_17"
],
"mid": [
"94772686",
"1547105496",
"2046513829"
],
"abstract": [
"",
"The success of reinforcement learning in practical problems depends on the ability to combine function approximation with temporal difference methods such as value iteration. Experiments in this area have produced mixed results; there have been both notable successes and notable disappointments. Theory has been scarce, mostly due to the difficulty of reasoning about function approximators that generalize beyond the observed data. We provide a proof of convergence for a wide class of temporal difference methods involving function approximators such as k-nearest-neighbor, and show experimentally that these methods can be useful. The proof is based on a view of function approximators as expansion or contraction mappings. In addition, we present a novel view of approximate value iteration: an approximate algorithm for one environment turns out to be an exact algorithm for a different environment.",
"A recent surge in research in kernelized approaches to reinforcement learning has sought to bring the benefits of kernelized machine learning techniques to reinforcement learning. Kernelized reinforcement learning techniques are fairly new and different authors have approached the topic with different assumptions and goals. Neither a unifying view nor an understanding of the pros and cons of different approaches has yet emerged. In this paper, we offer a unifying view of the different approaches to kernelized value function approximation for reinforcement learning. We show that, except for different approaches to regularization, Kernelized LSTD (KLSTD) is equivalent to a modelbased approach that uses kernelized regression to find an approximate reward and transition model, and that Gaussian Process Temporal Difference learning (GPTD) returns a mean value function that is equivalent to these other approaches. We also discuss the relationship between our modelbased approach and the earlier Gaussian Processes in Reinforcement Learning (GPRL). Finally, we decompose the Bellman error into the sum of transition error and reward error terms, and demonstrate through experiments that this decomposition can be helpful in choosing regularization parameters."
]
} |
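The paragraph above mentions value iteration as the dynamic programming algorithm of choice when the MDP's reward and transition functions are known. Here is a minimal tabular sketch under an assumed data layout (`P[s][a]` as a list of `(probability, next_state)` pairs, `R[s][a]` as the expected immediate reward); it illustrates the textbook algorithm, not the paper's neural-approximation variant.

```python
def value_iteration(states, actions, P, R, gamma=0.95, tol=1e-8):
    """Tabular value iteration: repeatedly apply the Bellman optimality
    backup until the value function stops changing. Converges because
    the backup is a gamma-contraction. P[s][a] lists (prob, next_state)
    pairs; R[s][a] is the expected immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            v = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                    for a in actions)
            delta = max(delta, abs(v - V[s]))
            V[s] = v
        if delta < tol:
            return V
```

For a two-state MDP where staying in the first state yields reward 1 per step, the optimal value of that state converges to 1/(1 - gamma), which is a quick correctness check.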
1701.02490 | 2562337727 | The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods. 
| Bid Landscape Forecasting Bid landscape forecasting refers to modeling the market price distribution for auctions of specific ad inventory, whose c.d.f. gives the winning probability for each specific bid price @cite_9 . The authors in @cite_13 @cite_7 @cite_9 presented some hypothetical winning functions and learned their parameters. For example, a log-normal market price distribution with the parameters estimated by gradient boosting decision trees was proposed in @cite_9 . Since advertisers only observe the winning impressions, the problem of censored data @cite_28 @cite_21 is critical. The authors in @cite_30 proposed leveraging censored linear regression to jointly model the likelihood of observed market prices in winning cases and of the censored ones with losing bids. Recently, the authors in @cite_19 proposed combining survival analysis and decision tree models, where each tree leaf maintains a non-parametric survival model to fit the censored market prices. In this paper, we follow @cite_28 @cite_21 and use a non-parametric method to model the market price distribution. | {
"cite_N": [
"@cite_30",
"@cite_7",
"@cite_28",
"@cite_9",
"@cite_21",
"@cite_19",
"@cite_13"
],
"mid": [
"2073685064",
"2149822245",
"2951545777",
"2031002853",
"2513944453",
"2515050826",
"1852264786"
],
"abstract": [
"In the aspect of a Demand-Side Platform (DSP), which is the agent of advertisers, we study how to predict the winning price such that the DSP can win the bid by placing a proper bidding value in the real-time bidding (RTB) auction. We propose to leverage the machine learning and statistical methods to train the winning price model from the bidding history. A major challenge is that a DSP usually suffers from the censoring of the winning price, especially for those lost bids in the past. To solve it, we utilize the censored regression model, which is widely used in the survival analysis and econometrics, to fit the censored bidding data. Note, however, the assumption of censored regression does not hold on the real RTB data. As a result, we further propose a mixture model, which combines linear regression on bids with observable winning prices and censored regression on bids with the censored winning prices, weighted by the winning rate of the DSP. Experiment results show that the proposed mixture model in general prominently outperforms linear regression in terms of the prediction accuracy.",
"In this paper we study bid optimisation for real-time bidding (RTB) based display advertising. RTB allows advertisers to bid on a display ad impression in real time when it is being generated. It goes beyond contextual advertising by motivating the bidding focused on user data and it is different from the sponsored search auction where the bid price is associated with keywords. For the demand side, a fundamental technical challenge is to automate the bidding process based on the budget, the campaign objective and various information gathered in runtime and in history. In this paper, the programmatic bidding is cast as a functional optimisation problem. Under certain dependency assumptions, we derive simple bidding functions that can be calculated in real time; our finding shows that the optimal bid has a non-linear relationship with the impression level evaluation such as the click-through rate and the conversion rate, which are estimated in real time from the impression level features. This is different from previous work that is mainly focused on a linear bidding function. Our mathematical derivation suggests that optimal bidding strategies should try to bid more impressions rather than focus on a small set of high valued impressions because according to the current RTB market data, compared to the higher evaluated impressions, the lower evaluated ones are more cost effective and the chances of winning them are relatively higher. Aside from the theoretical insights, offline experiments on a real dataset and online experiments on a production RTB system verify the effectiveness of our proposed optimal bidding strategies and the functional optimisation framework.",
"We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.",
"Display advertising has been a significant source of revenue for publishers and ad networks in the online advertising ecosystem. One important business model in online display advertising is the Ad Exchange marketplace, also called non-guaranteed delivery (NGD), in which advertisers buy targeted page views and audiences on a spot market through real-time auction. In this paper, we describe a bid landscape forecasting system in the NGD marketplace for any advertiser campaign specified by a variety of targeting attributes. In the system, the impressions that satisfy the campaign targeting attributes are partitioned into multiple mutually exclusive samples. Each sample is one unique combination of quantified attribute values. We develop a divide-and-conquer approach that breaks down the campaign-level forecasting problem. First, utilizing a novel star-tree data structure, we forecast the bid for each sample using non-linear regression by gradient boosting decision trees. Then we employ a mixture-of-log-normal model to generate the campaign-level bid distribution based on the sample-level forecasted distributions. The experiment results of a system developed with our approach show that it can accurately forecast the bid distributions for various campaigns running on the world's largest NGD advertising exchange system, outperforming two baseline methods in terms of forecasting errors.",
"In real-time display advertising, ad slots are sold per impression via an auction mechanism. For an advertiser, the campaign information is incomplete --- the user responses (e.g., clicks or conversions) and the market price of each ad impression are observed only if the advertiser's bid had won the corresponding ad auction. The predictions, such as bid landscape forecasting, click-through rate (CTR) estimation, and bid optimisation, are all operated in the pre-bid stage with full-volume bid request data. However, the training data is gathered in the post-bid stage with a strong bias towards the winning impressions. A common solution for learning over such censored data is to reweight data instances to correct the discrepancy between training and prediction. However, little study has been done on how to obtain the weights independent of previous bidding strategies and consequently integrate them into the final CTR prediction and bid generation steps. In this paper, we formulate CTR estimation and bid optimisation under such censored auction data. Derived from a survival model, we show that historic bid information is naturally incorporated to produce Bid-aware Gradient Descents (BGD) which controls both the importance and the direction of the gradient to achieve unbiased learning. The empirical study based on two large-scale real-world datasets demonstrates remarkable performance gains from our solution. The learning framework has been deployed on Yahoo!'s real-time bidding platform and provided 2.97% AUC lift for CTR estimation and 9.30% eCPC drop for bid optimisation in an online A/B test.",
"Real-time auction has become an important online advertising trading mechanism. A crucial issue for advertisers is to model the market competition, i.e., bid landscape forecasting. It is formulated as predicting the market price distribution for each ad auction provided by its side information. Existing solutions mainly focus on parameterized heuristic forms of the market price distribution and learn the parameters to fit the data. In this paper, we present a functional bid landscape forecasting method to automatically learn the function mapping from each ad auction features to the market price distribution without any assumption about the functional form. Specifically, to deal with the categorical feature input, we propose a novel decision tree model with a node splitting scheme by attribute value clustering. Furthermore, to deal with the problem of right-censored market price observations, we propose to incorporate a survival model into tree learning and prediction, which largely reduces the model bias. The experiments on real-world data demonstrate that our models achieve substantial performance gains over previous work in various metrics. The software related to this paper is available at https://github.com/zeromike/bid-lands.",
"A major trend in mobile advertising is the emergence of real time bidding (RTB) based marketplaces on the supply side and the corresponding programmatic impression buying on the demand side. In order to acquire the most relevant audience impression at the lowest cost, a demand side player has to accurately estimate the win rate and winning price in the auction, and incorporate that knowledge in its bid. In this paper, we describe our battle-proven techniques of predicting win rate and winning price in RTB, and the corresponding bidding strategies built on top of those predictions. We also reveal the close relationship between the win rate and winning price estimation, and demonstrate how to solve the two problems together. All of our estimation methods are developed with distributed framework and have been applied to billion order numbers of data in real business operation."
]
} |
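The non-parametric handling of censored market prices that the paragraph above adopts from @cite_28 @cite_21 is typically done with the Kaplan-Meier (product-limit) estimator named in those abstracts. Below is a hedged sketch under my own data convention (not code from the cited papers): each auction yields a pair `(price, won)`, where a win reveals the market price exactly and a loss only tells us the market price exceeded our bid (right-censoring).

```python
from collections import Counter

def km_survival(observations):
    """Kaplan-Meier estimate of S(p) = P(market price > p).
    observations: (price, won) pairs. Winning bids reveal the market
    price exactly (an 'event'); losing bids are right-censored at the
    bid value, so they only contribute to the at-risk counts."""
    events = Counter(p for p, won in observations if won)
    surv, S = {}, 1.0
    for p in sorted(events):
        at_risk = sum(1 for q, _ in observations if q >= p)
        S *= 1.0 - events[p] / at_risk
        surv[p] = S
    return surv
```

With no censoring this reduces to the empirical survival function (four wins at prices 1..4 give S(2) = 0.5); the estimated probability of winning with bid b is then 1 - S(b).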
1701.02490 | 2562337727 | The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods. 
| Bid Optimization As has been discussed above, bidding strategy optimization is the key component within the decision process for the advertisers @cite_12 . Auction theory @cite_6 shows that truthful bidding is the optimal strategy under a single second-price auction. However, truthful bidding may perform poorly when considering multiple auctions and a budget constraint @cite_7 . In real-world applications, the linear bidding function @cite_12 is widely used. The authors in @cite_7 empirically showed that there exist non-linear bidding functions better than the linear ones under varying budget constraints. When the data changes, however, the heuristic model @cite_12 or hypothetical bidding functions @cite_32 @cite_7 cannot capture the real data distribution well. The authors in @cite_28 @cite_25 proposed model-based MDPs to derive the optimal policy for bidding in sponsored search or ad selection in contextual advertising, where the decision is made at the keyword level. In our work, we investigate the most challenging impression-level bid decision problem in RTB display advertising, which is substantially different from @cite_28 @cite_25 . We also tackle the scalability problem, which remains unsolved in @cite_28 , and demonstrate the efficiency and effectiveness of our method in a variety of experiments. | {
"cite_N": [
"@cite_7",
"@cite_28",
"@cite_32",
"@cite_6",
"@cite_25",
"@cite_12"
],
"mid": [
"2149822245",
"2951545777",
"1973081445",
"",
"1992556596",
"2039842578"
],
"abstract": [
"In this paper we study bid optimisation for real-time bidding (RTB) based display advertising. RTB allows advertisers to bid on a display ad impression in real time when it is being generated. It goes beyond contextual advertising by motivating the bidding focused on user data and it is different from the sponsored search auction where the bid price is associated with keywords. For the demand side, a fundamental technical challenge is to automate the bidding process based on the budget, the campaign objective and various information gathered in runtime and in history. In this paper, the programmatic bidding is cast as a functional optimisation problem. Under certain dependency assumptions, we derive simple bidding functions that can be calculated in real time; our finding shows that the optimal bid has a non-linear relationship with the impression level evaluation such as the click-through rate and the conversion rate, which are estimated in real time from the impression level features. This is different from previous work that is mainly focused on a linear bidding function. Our mathematical derivation suggests that optimal bidding strategies should try to bid more impressions rather than focus on a small set of high valued impressions because according to the current RTB market data, compared to the higher evaluated impressions, the lower evaluated ones are more cost effective and the chances of winning them are relatively higher. Aside from the theoretical insights, offline experiments on a real dataset and online experiments on a production RTB system verify the effectiveness of our proposed optimal bidding strategies and the functional optimisation framework.",
"We consider the budget optimization problem faced by an advertiser participating in repeated sponsored search auctions, seeking to maximize the number of clicks attained under that budget. We cast the budget optimization problem as a Markov Decision Process (MDP) with censored observations, and propose a learning algorithm based on the well-known Kaplan-Meier or product-limit estimator. We validate the performance of this algorithm by comparing it to several others on a large set of search auction data from Microsoft adCenter, demonstrating fast convergence to optimal performance.",
"We study and formulate arbitrage in display advertising. Real-Time Bidding (RTB) mimics stock spot exchanges and utilises computers to algorithmically buy display ads per impression via a real-time auction. Despite the new automation, the ad markets are still informationally inefficient due to the heavily fragmented marketplaces. Two display impressions with similar or identical effectiveness (e.g., measured by conversion or click-through rates for a targeted audience) may sell for quite different prices at different market segments or pricing schemes. In this paper, we propose a novel data mining paradigm called Statistical Arbitrage Mining (SAM) focusing on mining and exploiting price discrepancies between two pricing schemes. In essence, our SAMer is a meta-bidder that hedges advertisers' risk between CPA (cost per action)-based campaigns and CPM (cost per mille impressions)-based ad inventories; it statistically assesses the potential profit and cost for an incoming CPM bid request against a portfolio of CPA campaigns based on the estimated conversion rate, bid landscape and other statistics learned from historical data. In SAM, (i) functional optimisation is utilised to seek for optimal bidding to maximise the expected arbitrage net profit, and (ii) a portfolio-based risk management solution is leveraged to reallocate bid volume and budget across the set of campaigns to make a risk and return trade-off. We propose to jointly optimise both components in an EM fashion with high efficiency to help the meta-bidder successfully catch the transient statistical arbitrage opportunities in RTB. Both the offline experiments on a real-world large-scale dataset and online A/B tests on a commercial platform demonstrate the effectiveness of our proposed solution in exploiting arbitrage in various model settings and market environments.",
"",
"Online advertising has become a key source of revenue for both web search engines and online publishers. For them, the ability of allocating right ads to right webpages is critical because any mismatched ads would not only harm web users' satisfactions but also lower the ad income. In this paper, we study how online publishers could optimally select ads to maximize their ad incomes over time. The conventional offline, content-based matching between webpages and ads is a fine start but cannot solve the problem completely because good matching does not necessarily lead to good payoff. Moreover, with the limited display impressions, we need to balance the need of selecting ads to learn true ad payoffs (exploration) with that of allocating ads to generate high immediate payoffs based on the current belief (exploitation). In this paper, we address the problem by employing Partially observable Markov decision processes (POMDPs) and discuss how to utilize the correlation of ads to improve the efficiency of the exploration and increase ad incomes in the long run. Our mathematical derivation shows that the belief states of correlated ads can be naturally updated using a formula similar to collaborative filtering. To test our model, a real world ad dataset from a major search engine is collected and categorized. Experimenting over the data, we provide an analysis of the effect of the underlying parameters, and demonstrate that our algorithms significantly outperform other strong baselines.",
"Billions of online display advertising spots are purchased on a daily basis through real time bidding exchanges (RTBs). Advertising companies bid for these spots on behalf of a company or brand in order to purchase these spots to display banner advertisements. These bidding decisions must be made in fractions of a second after the potential purchaser is informed of what location (Internet site) has a spot available and who would see the advertisement. The entire transaction must be completed in near real-time to avoid delays loading the page and maintain a good user experience. This paper presents a bid-optimization approach that is implemented in production at Media6Degrees for bidding on these advertising opportunities at an appropriate price. The approach combines several supervised learning algorithms, as well as second price auction theory, to determine the correct price to ensure that the right message is delivered to the right person, at the right time."
]
} |
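To make the contrast in the Bid Optimization paragraph concrete, here is an illustrative sketch of a second-price auction together with the widely used linear (CTR-proportional) bidding function; the function names and the exact scaling form are assumptions for illustration, not taken from the cited papers.

```python
def second_price(bids):
    """Return (winner index, price paid): in a second-price auction the
    highest bidder wins but pays the second-highest bid, which is the
    'market price' that the bid landscape models estimate."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

def linear_bid(pctr, base_bid, avg_ctr):
    """Widely used linear strategy: bid proportional to the impression's
    predicted CTR, with base_bid tuned to pace the campaign budget."""
    return base_bid * pctr / avg_ctr
```

Truthful bidding (bid = impression value) is optimal for one isolated auction, but across many budget-constrained auctions a scaled or non-linear bid can win more cost-effective impressions, which is exactly what motivates treating the bid sequence as a decision process.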
1701.02593 | 2575480513 | We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder. The semantic role labeler achieves competitive performance on English, even without any kind of syntactic information and only using local inference. However, when automatically predicted part-of-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish where our approach also achieves competitive results. Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting. Our syntax-agnostic model appears more robust, resulting in the best reported results on standard out-of-domain test sets. | Earlier approaches to SRL heavily relied on complex sets of lexico-syntactic features @cite_7 . used a support vector machine classifier and relied on two syntactic views (obtained with two different parsers), for feature extraction. In addition to hand-crafted features, enriched CRFs with an integer linear programming inference procedure in order to encode non-local constraints in SRL; employed a global reranker for dealing with structural constraint; while studied several combination strategies of local and global features obtained from several independent SRL models. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2151170651"
],
"abstract": [
"We present a system for identifying the semantic relationships, or semantic roles, filled by constituents of a sentence within a semantic frame. Given an input sentence and a target word and frame, the system labels constituents with either abstract semantic roles, such as AGENT or PATIENT, or more domain-specific semantic roles, such as SPEAKER, MESSAGE, and TOPIC.The system is based on statistical classifiers trained on roughly 50,000 sentences that were hand-annotated with semantic roles by the FrameNet semantic labeling project. We then parsed each training sentence into a syntactic tree and extracted various lexical and syntactic features, including the phrase type of each constituent, its grammatical function, and its position in the sentence. These features were combined with knowledge of the predicate verb, noun, or adjective, as well as information such as the prior probabilities of various combinations of semantic roles. We used various lexical clustering algorithms to generalize across possible fillers of roles. Test sentences were parsed, were annotated with these features, and were then passed through the classifiers.Our system achieves 82 accuracy in identifying the semantic role of presegmented constituents. At the more difficult task of simultaneously segmenting constituents and identifying their semantic role, the system achieved 65 precision and 61 recall.Our study also allowed us to compare the usefulness of different features and feature combination methods in the semantic role labeling task. We also explore the integration of role labeling with statistical syntactic parsing and attempt to generalize to predicates unseen in the training data."
]
} |
1701.02593 | 2575480513 | We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder. The semantic role labeler achieves competitive performance on English, even without any kind of syntactic information and only using local inference. However, when automatically predicted part-of-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish where our approach also achieves competitive results. Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting. Our syntax-agnostic model appears more robust, resulting in the best reported results on standard out-of-domain test sets. | In the last years there has been a flurry of work that employed neural network approaches for SRL. used hand-crafted features within an MLP for calculating potentials of a CRF model; extended the features of a non-neural SRL model with LSTM representations of syntactic paths between arguments and predicates; relied on low-rank tensor factorization that captured interactions between arguments, predicate, their syntactic path and semantic roles; while and used convolutional networks as sentence encoder and a CRF as a role classifier, both approaches employed a rich set of features as input of the convolutional encoder. Finally, jointly modeled syntactic and semantic structures; they extended one of the earliest neural approaches for SRL @cite_2 @cite_10 @cite_8 , with more sophisticated modeling techniques, for example, using LSTMs instead of vanilla RNNs. | {
"cite_N": [
"@cite_8",
"@cite_10",
"@cite_2"
],
"mid": [
"2100439229",
"2099162170",
"2144784110"
],
"abstract": [
"Motivated by the large number of languages (seven) and the short development time (two months) of the 2009 CoNLL shared task, we exploited latent variables to avoid the costly process of hand-crafted feature engineering, allowing the latent variables to induce features from the data. We took a pre-existing generative latent variable model of joint syntactic-semantic dependency parsing, developed for English, and applied it to six new languages with minimal adjustments. The parser's robustness across languages indicates that this parser has a very general feature set. The parser's high performance indicates that its latent variables succeeded in inducing effective features. This system was ranked third overall with a macro averaged F1 score of 82.14 , only 0.5 worse than the best system.",
"This demonstration presents a high-performance syntactic and semantic dependency parser. The system consists of a pipeline of modules that carry out the to-kenization, lemmatization, part-of-speech tagging, dependency parsing, and semantic role labeling of a sentence. The system's two main components draw on improved versions of a state-of-the-art dependency parser (Bohnet, 2009) and semantic role labeler (, 2009) developed independently by the authors. The system takes a sentence as input and produces a syntactic and semantic annotation using the CoNLL 2009 format. The processing time needed for a sentence typically ranges from 10 to 1000 milliseconds. The predicate--argument structures in the final output are visualized in the form of segments, which are more intuitive for a user.",
"We propose a solution to the challenge of the CoNLL 2008 shared task that uses a generative history-based latent variable model to predict the most likely derivation of a synchronous dependency parser for both syntactic and semantic dependencies. The submitted model yields 79.1 macro-average F1 performance, for the joint task, 86.9 syntactic dependencies LAS and 71.0 semantic dependencies F1. A larger model trained after the deadline achieves 80.5 macro-average F1, 87.6 syntactic dependencies LAS, and 73.1 semantic dependencies F1."
]
} |
1701.02593 | 2575480513 | We introduce a simple and accurate neural model for dependency-based semantic role labeling. Our model predicts predicate-argument dependencies relying on states of a bidirectional LSTM encoder. The semantic role labeler achieves competitive performance on English, even without any kind of syntactic information and only using local inference. However, when automatically predicted part-of-speech tags are provided as input, it substantially outperforms all previous local models and approaches the best reported results on the English CoNLL-2009 dataset. We also consider Chinese, Czech and Spanish where our approach also achieves competitive results. Syntactic parsers are unreliable on out-of-domain data, so standard (i.e., syntactically-informed) SRL models are hindered when tested in this setting. Our syntax-agnostic model appears more robust, resulting in the best reported results on standard out-of-domain test sets. | Another related line of work @cite_3 @cite_0 , instead of relying on treebank syntax, integrated grammar induction as a sub-component into their statistical model. In this way, similarly to us, they do not use treebank syntax but rather rely on the ability of their joint model to induce syntax appropriate for SRL. Their focus was primarily on the low resource setting (where syntactic annotation is not available), whereas in standard set-ups their performance was not as strong. It would be interesting to see if explicit modeling of latent syntax is also beneficial when used in conjunction with LSTMs. | {
"cite_N": [
"@cite_0",
"@cite_3"
],
"mid": [
"2038324640",
"1747312753"
],
"abstract": [
"We present a general framework for semantic role labeling. The framework combines a machine-learning technique with an integer linear programming-based inference procedure, which incorporates linguistic and structural constraints into a global decision process. Within this framework, we study the role of syntactic parsing information in semantic role labeling. We show that full syntactic parsing information is, by far, most relevant in identifying the argument, especially, in the very first stage---the pruning stage. Surprisingly, the quality of the pruning stage cannot be solely determined based on its recall and precision. Instead, it depends on the characteristics of the output candidates that determine the difficulty of the downstream problems. Motivated by this observation, we propose an effective and simple approach of combining different semantic role labeling systems through joint inference, which significantly improves its performance. Our system has been evaluated in the CoNLL-2005 shared task on semantic role labeling, and achieves the highest F1 score among 19 participants.",
"Many NLP tasks make predictions that are inherently coupled to syntactic relations, but for many languages the resources required to provide such syntactic annotations are unavailable. For others it is unclear exactly how much of the syntactic annotations can be effectively leveraged with current models, and what structures in the syntactic trees are most relevant to the current task. We propose a novel method which avoids the need for any syntactically annotated data when predicting a related NLP task. Our method couples latent syntactic representations, constrained to form valid dependency graphs or constituency parses, with the prediction task via specialized factors in a Markov random field. At both training and test time we marginalize over this hidden structure, learning the optimal latent representations for the problem. Results show that this approach provides significant gains over a syntactically un-informed baseline, outperforming models that observe syntax on an English relation extraction task, and performing comparably to them in semantic role labeling."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | Honeypots are a common measure to understand attacker activities in computer networks. The authors of @cite_1 provide a taxonomy of honeypots, and differentiate between low and high interaction honeypots. 
Both low- and high-interaction honeypots are compared in @cite_4 , and the authors conclude that high-interaction honeypots provide more insight into attacker behavior than low-interaction ones. | {
"cite_N": [
"@cite_1",
"@cite_4"
],
"mid": [
"2200970049",
"2949324006"
],
"abstract": [
"Honeynet research has become more important as a way to overcome the limitations imposed by the use of individual honeypots. A honeynet can be defined as a network of honeypots following certain topology. Although there are at present many existing honeynet solutions, no taxonomies have been proposed in order to classify them. In this paper, we propose such taxonomy, identifying the main criteria used for its classification and applying the classification scheme to some of the existing honeynet solutions, in order to quickly get a clear outline of the honeynet architecture and gain insight of the honeynet technology. The analysis of the classification scheme of the taxonomy allows getting an overview of the advantages and disadvantages of each criterion value. We later use this analysis to explore the design space of honeynet solutions for the proposal of a future optimized honeynet solution.",
"This paper presents an experimental study and the lessons learned from the observation of the attackers when logged on a compromised machine. The results are based on a six months period during which a controlled experiment has been run with a high interaction honeypot. We correlate our findings with those obtained with a worldwide distributed system of lowinteraction honeypots."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | The Honeynet Project @cite_28 is a well-established project that focuses on monitoring and analyzing attacks to complement intrusion detection tools. 
The Honeynet Project does not use emulation, instead leveraging real systems and applications; it is therefore an example of a high-interaction honeypot. | {
"cite_N": [
"@cite_28"
],
"mid": [
"2147767253"
],
"abstract": [
"What specific threats do computer networks face from hackers? Who's perpetrating these threats and how? The Honeynet Project is an organization dedicated to answering these questions. It studies the bad guys and shares the lessons learned. The group gathers information by deploying networks (called honeynets) that are designed to be compromised."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | @cite_26 , the authors describe honeynets that can be used to increase security in a large computer network. In short, honeynets are clusters of honeypots. | {
"cite_N": [
"@cite_26"
],
"mid": [
"2047108429"
],
"abstract": [
"Abstract In this paper, we address how honeynets, networks of computers intended to be compromised, can be used to increase network security in a large organizational environment. We outline the current threats Internet security is facing at present and show how honeynets can be used to learn about those threats for the future. We investigate issues researchers have to take into account before deploying or while running a honeynet. Moreover, we describe how we tied honeynet research into computer security classes at Georgia Tech to successfully train students and spark interest in computer security."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | A low interaction honeypot system is implemented in @cite_2 . The honeypot monitors behaviors and learns the advanced attacks that may not be detected by IDS tools. 
The attacker's session is redirected to the honeypot system, which then serves the attacker's requests. For that purpose, the authors provided service daemons and a fake shell so that the attacker is unable to recognize the system as a honeypot. | {
"cite_N": [
"@cite_2"
],
"mid": [
"2126363842"
],
"abstract": [
"In this paper, we implement a HoneyPot system equipped with several sub systems for their use. Obtaining the new knowledge on the access skills of intruder allows us to make a policy more precisely and quickly to protect a system from the new attacks. Our implementation presents an HoneyPot System cooperates with firewall and management server. In this system, firewall redirects a session from an abnormal user to HoneyPot to learn the advanced intrusion skills and to respond more effectively."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | In @cite_27 , a system was proposed that redirects attackers towards honeypots. The system creates so-called DecoyPorts on active computers; these ports are not used by real services.
Whenever a query or request arrives at one of these ports, the system diverts it to the honeypot. The system not only acts as a port forwarder but is also capable of controlling the network load caused by attackers. With this high-interaction honeypot deployment, the authors noted that the likelihood of attack may increase. | {
"cite_N": [
"@cite_27"
],
"mid": [
"1822351509"
],
"abstract": [
"Most of computer security systems use the signatures of well-known attacks to detect hackers' attacks. For these systems, it is very important to get the accurate signatures of new attacks as soon as possible. For this reason, there have been several researches on honeypots. However, honeypots can not collect information about hackers attacking active computers except themselves. In this paper, we propose the DecoyPort system to redirect hackers toward honeypots. The DecoyPort system creates the DecoyPorts on active computers. All interactions with the DecoyPorts are considered as suspect because the ports are not those for real services. Accordingly, every request sent to the DecoyPorts is redirected to honeypots by the DecoyPort system. Consequently, our system enables honeypots to collect information about hackers attacking active computers except themselves."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | @cite_3 , the authors described the implementation and deployment of a honeypot based on a number of real, vulnerable web applications. 
They hosted all the web applications in seven isolated virtual machines running on a VMware Server. To limit the attack surface, the authors let the exposed services run as a non-privileged user. They then analyzed the collected data to study attackers' behavior on the web applications during pre- and post-exploitation. In contrast, we work with real physical IoT devices to set up a high-interaction honeypot. | {
"cite_N": [
"@cite_3"
],
"mid": [
"1531027799"
],
"abstract": [
"Web attacks are nowadays one of the major threats on the Internet, and several studies have analyzed them, providing details on how they are performed and how they spread. However, no study seems to have sufficiently analyzed the typical behavior of an attacker after a website has been compromised. This paper presents the design, implementation, and deployment of a network of 500 fully functional honeypot websites, hosting a range of different services, whose aim is to attract attackers and collect information on what they do during and after their attacks. In 100 days of experiments, our system automatically collected, normalized, and clustered over 85,000 files that were created during approximately 6,000 attacks. Labeling the clusters allowed us to draw a general picture of the attack landscape, identifying the behavior behind each action performed both during and after the exploitation of a web application."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | Among the related work on general honeypots, we found only one that focuses on IoT @cite_13 . In that work, the authors present a low interaction honeypot for IP cameras and Digital Video Recorders (DVR). 
The authors emulated services for those devices; no real devices were used to deploy the honeypot. Their goal was to capture Telnet-based attacks and analyze them with respect to the targeted IoT devices. | {
"cite_N": [
"@cite_13"
],
"mid": [
"1669806660"
],
"abstract": [
"We analyze the increasing threats against IoT devices. We show that Telnet-based attacks that target IoT devices have rocketed since 2014. Based on this observation, we propose an IoT honeypot and sandbox, which attracts and analyzes Telnet-based attacks against various IoT devices running on different CPU architectures such as ARM, MIPS, and PPC. By analyzing the observation results of our honeypot and captured malware samples, we show that there are currently at least 4 distinct DDoS malware families targeting Telnet-enabled IoT devices and one of the families has quickly evolved to target more devices with as many as 9 different CPU architectures."
]
} |
1701.02446 | 2963814838 | In recent years, the emerging Internet-of-Things (IoT) has led to rising concerns about the security of networked embedded devices. In this work, we focus on the adaptation of Honeypots for improving the security of IoTs. Low-interaction honeypots are used so far in the context of IoT. Such honeypots are limited and easily detectable, and thus, there is a need to find ways how to develop high-interaction, reliable, IoT honeypots that will attract skilled attackers. In this work, we propose the SIPHON architecture - a Scalable high-Interaction Honeypot platform for IoT devices. Our architecture leverages IoT devices that are physically at one location and are connected to the Internet through so-called wormholes distributed around the world. The resulting architecture allows exposing few physical devices over a large number of geographically distributed IP addresses. We demonstrate the proposed architecture in a large scale experiment with 39 wormhole instances in 16 cities in 9 countries. Based on this setup, six physical IP cameras, one NVR and one IP printer are presented as 85 real IoT devices on the Internet, attracting a daily traffic of 700MB for a period of two months. A preliminary analysis of the collected traffic indicates that devices in some cities attracted significantly more traffic than others (ranging from 600 000 incoming TCP connections for the most popular destination to less than 50000 for the least popular). We recorded over 400 brute-force login attempts to the web-interface of our devices using a total of 1826 distinct credentials, from which 11 attempts were successful. Moreover, we noted login attempts to Telnet and SSH ports some of which used credentials found in the recently disclosed Mirai malware. | Another recent paper @cite_14 proposes a Bayesian game to defend against attacks in a honeypot enabled IoT network. 
In that paper, various game scenarios are described depending on the changing strategies of both the attacker and the defender. The authors perform a systematic mathematical analysis of the games and evaluate the Bayesian model through simulation. | {
"cite_N": [
"@cite_14"
],
"mid": [
"2320597598"
],
"abstract": [
"In modern days, breakthroughs in information and communications technologies lead to more and more devices of every imaginable type being connected to the Internet. This also strengthens the need for protection against cyber-attacks, as virtually any devices with a wireless connection could be vulnerable to malicious hacking attempts. Meanwhile, honeypot-based deception mechanism has been considered as one of the methods to ensure security for modern networks in the Internet of Things (IoT). In this paper, we address the problem of defending against attacks in honeypot-enabled networks by looking at a game-theoretic model of deception involving an attacker and a defender. The attacker may try to deceive the defender by employing different types of attacks ranging from a suspicious to a seemingly normal activity, while the defender in turn can make use of honeypots as a tool of deception to trap attackers. The problem is modeled as a Bayesian game of incomplete information, where equilibria are identified for both the one-shot game and the repeated game versions. Our results show that there is a threshold for the frequency of active attackers, above which both players will take deceptive actions and below which the defender can mix up his her strategy while keeping the attacker’s success rate low."
]
} |
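The Bayesian deception game described above can be illustrated with a minimal best-response sketch. This is a toy model, not the cited paper's actual game: the payoff values, action names, and the single-belief formulation are all hypothetical, chosen only to show how the defender's optimal action flips at a threshold belief about attacker activity.

```python
def defender_best_action(p_active, payoff):
    """Expected-payoff best response in a toy Bayesian deception game.

    p_active : defender's belief that the incoming connection is an
               active attacker (the attacker's private type).
    payoff   : dict mapping (defender_action, attacker_type) to the
               defender's utility.  Toy values, not from the paper.
    """
    def expected(action):
        return (p_active * payoff[(action, "attacker")]
                + (1 - p_active) * payoff[(action, "benign")])
    return max(("honeypot", "normal"), key=expected)

# Toy payoffs: honeypots catch attackers (+2) but inconvenience benign
# users (-1); normal service pleases benign users (+1) but loses to
# attackers (-2).
payoff = {("honeypot", "attacker"): 2, ("honeypot", "benign"): -1,
          ("normal", "attacker"): -2, ("normal", "benign"): 1}

assert defender_best_action(0.9, payoff) == "honeypot"
assert defender_best_action(0.1, payoff) == "normal"
```

As the belief `p_active` crosses a threshold (here 0.5, by symmetry of the toy payoffs), the defender switches to the deceptive action, mirroring the threshold result reported in the cited abstract.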
1701.02433 | 2578244715 | In this paper, we deal with the uncertainty of bidding for display advertising. Similar to the financial market trading, real-time bidding (RTB) based display advertising employs an auction mechanism to automate the impression level media buying; and running a campaign is no different than an investment of acquiring new customers in return for obtaining additional converted sales. Thus, how to optimally bid on an ad impression to drive the profit and return-on-investment becomes essential. However, the large randomness of the user behaviors and the cost uncertainty caused by the auction competition may result in a significant risk from the campaign performance estimation. In this paper, we explicitly model the uncertainty of user click-through rate estimation and auction competition to capture the risk. We borrow an idea from finance and derive the value at risk for each ad display opportunity. Our formulation results in two risk-aware bidding strategies that penalize risky ad impressions and focus more on the ones with higher expected return and lower risk. The empirical study on real-world data demonstrates the effectiveness of our proposed risk-aware bidding strategies: yielding profit gains of 15.4 in offline experiments and up to 17.5 in an online A B test on a commercial RTB platform over the widely applied bidding strategies. | Predicting the probability of a specific user response, e.g., CTR and CVR, is a key function for performance-driven online advertising @cite_12 @cite_8 @cite_18 . The applied CTR estimation models today are mostly linear. Logistic regression is the most widely used model, normally trained by stochastic gradient descent (SGD) @cite_5 @cite_3 . The authors in @cite_8 proposed to use an online learning algorithm called follow-the-regularized-leader (FTRL) to train logistic regression from the streaming data. 
The model successfully bypasses the learning-rate update problem in SGD and works effectively in practice. Bayesian probit regression @cite_12 is another linear model for online learning, in which the feature weights are modeled as distributions and learning proceeds by updating the weight posterior. Binary naive Bayes @cite_28 is also a popular linear model, which assumes the features are conditionally independent. | {
"cite_N": [
"@cite_18",
"@cite_8",
"@cite_28",
"@cite_3",
"@cite_5",
"@cite_12"
],
"mid": [
"2076618162",
"2074694452",
"2032026767",
"2090883204",
"2012905273",
"2162979096"
],
"abstract": [
"Online advertising allows advertisers to only bid and pay for measurable user responses, such as clicks on ads. As a consequence, click prediction systems are central to most online advertising systems. With over 750 million daily active users and over 1 million active advertisers, predicting clicks on Facebook ads is a challenging machine learning task. In this paper we introduce a model which combines decision trees with logistic regression, outperforming either of these methods on its own by over 3 , an improvement with significant impact to the overall system performance. We then explore how a number of fundamental parameters impact the final prediction performance of our system. Not surprisingly, the most important thing is to have the right features: those capturing historical information about the user or ad dominate other types of features. Once we have the right features and the right model (decisions trees plus logistic regression), other factors play small roles (though even small improvements are important at scale). Picking the optimal handling for data freshness, learning rate schema and data sampling improve the model slightly, though much less than adding a high-value feature, or picking the right model to begin with.",
"Predicting ad click-through rates (CTR) is a massive-scale learning problem that is central to the multi-billion dollar online advertising industry. We present a selection of case studies and topics drawn from recent experiments in the setting of a deployed CTR prediction system. These include improvements in the context of traditional supervised learning based on an FTRL-Proximal online learning algorithm (which has excellent sparsity and convergence properties) and the use of per-coordinate learning rates. We also explore some of the challenges that arise in a real-world system that may appear at first to be outside the domain of traditional machine learning research. These include useful tricks for memory savings, methods for assessing and visualizing performance, practical methods for providing confidence estimates for predicted probabilities, calibration methods, and methods for automated management of features. Finally, we also detail several directions that did not turn out to be beneficial for us, despite promising results elsewhere in the literature. The goal of this paper is to highlight the close relationship between theoretical advances and practical engineering in this industrial setting, and to show the depth of challenges that appear when applying traditional machine learning methods in a complex dynamic system.",
"Summary Folklore has it that a very simple supervised classification rule, based on the typically false assumption that the predictor variables are independent, can be highly effective, and often more effective than sophisticated rules. We examine the evidence for this, both empirical, as observed in real data applications, and theoretical, summarising explanations for why this simple rule might be effective. Resume (translated from French): Tradition has it that a very simple rule assuming the independence of the predictor variables, a false assumption in most cases, can be very effective, often even more effective than a more sophisticated method when it comes to assigning classes to a group of objects. On this subject, we examine the empirical evidence and the theoretical evidence, that is, the reasons why this simple rule could facilitate the classification process.",
"Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, it is important to be able to accurately estimate the click-through rate of ads in the system. For ads that have been displayed repeatedly, this is empirically measurable, but for new ads, other means must be used. We show that we can use features of ads, terms, and advertisers to learn a model that accurately predicts the click-though rate for new ads. We also show that using our model improves the convergence and performance of an advertising system. As a result, our model increases both revenue and user satisfaction.",
"In targeted display advertising, the goal is to identify the best opportunities to display a banner ad to an online user who is most likely to take a desired action such as purchasing a product or signing up for a newsletter. Finding the best ad impression, i.e., the opportunity to show an ad to a user, requires the ability to estimate the probability that the user who sees the ad on his or her browser will take an action, i.e., the user will convert. However, conversion probability estimation is a challenging task since there is extreme data sparsity across different data dimensions and the conversion event occurs rarely. In this paper, we present our approach to conversion rate estimation which relies on utilizing past performance observations along user, publisher and advertiser data hierarchies. More specifically, we model the conversion event at different select hierarchical levels with separate binomial distributions and estimate the distribution parameters individually. Then we demonstrate how we can combine these individual estimators using logistic regression to accurately identify conversion events. In our presentation, we also discuss main practical considerations such as data imbalance, missing data, and output probability calibration, which render this estimation problem more difficult but yet need solving for a real-world implementation of the approach. We provide results from real advertising campaigns to demonstrate the effectiveness of our proposed approach.",
"We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft's Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naive Bayes algorithm."
]
} |
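The linear CTR model discussed above can be sketched minimally as logistic regression on sparse binary features trained by plain SGD (the FTRL-Proximal per-coordinate learning rates and regularization from @cite_8 are omitted). The feature names and the synthetic click stream below are hypothetical, for illustration only.

```python
import math

def predict(w, x):
    """Predicted CTR for active binary features x under weights w."""
    z = sum(w.get(f, 0.0) for f in x)
    z = max(min(z, 35.0), -35.0)          # guard against overflow in exp
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(w, x, y, lr=0.1):
    """One SGD update of the log-loss objective for one example.

    w : dict feature id -> weight (sparse);  x : active feature ids;
    y : label in {0, 1} (click / no click); lr : learning rate.
    """
    g = predict(w, x) - y                 # gradient w.r.t. the score z
    for f in x:
        w[f] = w.get(f, 0.0) - lr * g

# Tiny synthetic stream: impressions of "ad=1" tend to be clicked.
w = {}
stream = [(["ad=1", "hour=9"], 1), (["ad=2", "hour=9"], 0)] * 200
for x, y in stream:
    sgd_step(w, x, y)

assert predict(w, ["ad=1", "hour=9"]) > 0.5 > predict(w, ["ad=2", "hour=9"])
```

In production systems the same update runs over a hashed, very high-dimensional feature space; the sparse-dict representation above mirrors that structure at toy scale.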
1701.02433 | 2578244715 | In this paper, we deal with the uncertainty of bidding for display advertising. Similar to the financial market trading, real-time bidding (RTB) based display advertising employs an auction mechanism to automate the impression level media buying; and running a campaign is no different than an investment of acquiring new customers in return for obtaining additional converted sales. Thus, how to optimally bid on an ad impression to drive the profit and return-on-investment becomes essential. However, the large randomness of the user behaviors and the cost uncertainty caused by the auction competition may result in a significant risk from the campaign performance estimation. In this paper, we explicitly model the uncertainty of user click-through rate estimation and auction competition to capture the risk. We borrow an idea from finance and derive the value at risk for each ad display opportunity. Our formulation results in two risk-aware bidding strategies that penalize risky ad impressions and focus more on the ones with higher expected return and lower risk. The empirical study on real-world data demonstrates the effectiveness of our proposed risk-aware bidding strategies: yielding profit gains of 15.4 in offline experiments and up to 17.5 in an online A B test on a commercial RTB platform over the widely applied bidding strategies. | Linear models are simple and effective in learning, but may fail to capture the interactions between the assumed (conditionally) independent raw features @cite_12 . By contrast, non-linear models are capable of learning feature interactions in various ways and could potentially improve prediction performance @cite_26 @cite_13 . Gradient boosting decision trees (GBDT) @cite_13 @cite_18 are a straightforward non-linear model to capture feature interactions. 
Moreover, latent factor models, particularly factorization machines (FMs) @cite_26 , map each binary feature into a low-dimensional continuous space, and feature interactions are automatically explored via vector inner products. | {
"cite_N": [
"@cite_18",
"@cite_26",
"@cite_13",
"@cite_12"
],
"mid": [
"2076618162",
"2055079831",
"1996976533",
"2162979096"
],
"abstract": [
"Online advertising allows advertisers to only bid and pay for measurable user responses, such as clicks on ads. As a consequence, click prediction systems are central to most online advertising systems. With over 750 million daily active users and over 1 million active advertisers, predicting clicks on Facebook ads is a challenging machine learning task. In this paper we introduce a model which combines decision trees with logistic regression, outperforming either of these methods on its own by over 3 , an improvement with significant impact to the overall system performance. We then explore how a number of fundamental parameters impact the final prediction performance of our system. Not surprisingly, the most important thing is to have the right features: those capturing historical information about the user or ad dominate other types of features. Once we have the right features and the right model (decisions trees plus logistic regression), other factors play small roles (though even small improvements are important at scale). Picking the optimal handling for data freshness, learning rate schema and data sampling improve the model slightly, though much less than adding a high-value feature, or picking the right model to begin with.",
"Mobile advertising has recently seen dramatic growth, fueled by the global proliferation of mobile phones and devices. The task of predicting ad response is thus crucial for maximizing business revenue. However, ad response data change dynamically over time, and are subject to cold-start situations in which limited history hinders reliable prediction. There is also a need for a robust regression estimation for high prediction accuracy, and good ranking to distinguish the impacts of different ads. To this end, we develop a Hierarchical Importance-aware Factorization Machine (HIFM), which provides an effective generic latent factor framework that incorporates importance weights and hierarchical learning. Comprehensive empirical studies on a real-world mobile advertising dataset show that HIFM outperforms the contemporary temporal latent factor models. The results also demonstrate the efficacy of the HIFM's importance-aware and hierarchical learning in improving the overall prediction and prediction in cold-start scenarios, respectively.",
"We describe a new approach to solving the click-through rate (CTR) prediction problem in sponsored search by means of MatrixNet, the proprietary implementation of boosted trees. This problem is of special importance for the search engine, because choosing the ads to display substantially depends on the predicted CTR and greatly affects the revenue of the search engine and user experience. We discuss different issues such as evaluating and tuning MatrixNet algorithm, feature importance, performance, accuracy and training data set size. Finally, we compare MatrixNet with several other methods and present experimental results from the production system.",
"We describe a new Bayesian click-through rate (CTR) prediction algorithm used for Sponsored Search in Microsoft's Bing search engine. The algorithm is based on a probit regression model that maps discrete or real-valued input features to probabilities. It maintains Gaussian beliefs over weights of the model and performs Gaussian online updates derived from approximate message passing. Scalability of the algorithm is ensured through a principled weight pruning procedure and an approximate parallel implementation. We discuss the challenges arising from evaluating and tuning the predictor as part of the complex system of sponsored search where the predictions made by the algorithm decide about future training sample composition. Finally, we show experimental results from the production system and compare to a calibrated Naive Bayes algorithm."
]
} |
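The FM inner-product interaction mentioned above can be sketched as follows. The O(nk) reformulation of the pairwise term is the standard one for second-order FMs; the weights and feature vector here are random synthetic data, not a trained model.

```python
import numpy as np

def fm_predict(w0, w, V, x):
    """Second-order factorization machine score.

    w0 : global bias
    w  : (n,) linear weights
    V  : (n, k) latent factors; the interaction between features i and j
         is the inner product <V[i], V[j]>
    x  : (n,) feature vector (binary one-hot features in CTR settings)
    """
    linear = w0 + w @ x
    # O(n k) identity: sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_k [ (sum_i v_ik x_i)^2 - sum_i v_ik^2 x_i^2 ]
    s = V.T @ x                      # (k,)
    s2 = (V ** 2).T @ (x ** 2)       # (k,)
    pairwise = 0.5 * np.sum(s * s - s2)
    return linear + pairwise

rng = np.random.default_rng(0)
n, k = 6, 3
w0, w, V = 0.1, rng.normal(size=n), rng.normal(size=(n, k))
x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0])

# The O(nk) formula matches the naive O(n^2) pairwise sum.
naive = w0 + w @ x + sum(V[i] @ V[j] * x[i] * x[j]
                         for i in range(n) for j in range(i + 1, n))
assert np.isclose(fm_predict(w0, w, V, x), naive)
```

Because each feature carries its own latent vector, an interaction between two features that never co-occur in training can still be estimated through their shared factor space, which is the advantage over explicit cross-features.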
1701.02433 | 2578244715 | In this paper, we deal with the uncertainty of bidding for display advertising. Similar to the financial market trading, real-time bidding (RTB) based display advertising employs an auction mechanism to automate the impression level media buying; and running a campaign is no different than an investment of acquiring new customers in return for obtaining additional converted sales. Thus, how to optimally bid on an ad impression to drive the profit and return-on-investment becomes essential. However, the large randomness of the user behaviors and the cost uncertainty caused by the auction competition may result in a significant risk from the campaign performance estimation. In this paper, we explicitly model the uncertainty of user click-through rate estimation and auction competition to capture the risk. We borrow an idea from finance and derive the value at risk for each ad display opportunity. Our formulation results in two risk-aware bidding strategies that penalize risky ad impressions and focus more on the ones with higher expected return and lower risk. The empirical study on real-world data demonstrates the effectiveness of our proposed risk-aware bidding strategies: yielding profit gains of 15.4 in offline experiments and up to 17.5 in an online A B test on a commercial RTB platform over the widely applied bidding strategies. | The emergence of ad exchanges for display advertising in 2009 @cite_16 provides an automatic trading mechanism for advertisers to buy media inventory at the impression level and determine the acceptable price via a second-price auction @cite_1 . | {
"cite_N": [
"@cite_16",
"@cite_1"
],
"mid": [
"1554763265",
"2021375049"
],
"abstract": [
"An emerging way to sell and buy display ads on the Internet is via ad exchanges. RightMedia [1], AdECN [2] and DoubleClick Ad Exchange [3] are examples of such real-time two-sided markets. We describe an abstraction of this market. Based on that abstraction, we present several research directions and discuss some insights.",
"The real-time bidding (RTB), aka programmatic buying, has recently become the fastest growing area in online advertising. Instead of bulking buying and inventory-centric buying, RTB mimics stock exchanges and utilises computer algorithms to automatically buy and sell ads in real-time; It uses per impression context and targets the ads to specific people based on data about them, and hence dramatically increases the effectiveness of display advertising. In this paper, we provide an empirical analysis and measurement of a production ad exchange. Using the data sampled from both demand and supply side, we aim to provide first-hand insights into the emerging new impression selling infrastructure and its bidding behaviours, and help identifying research and design issues in such systems. From our study, we observed that periodic patterns occur in various statistics including impressions, clicks, bids, and conversion rates (both post-view and post-click), which suggest time-dependent models would be appropriate for capturing the repeated patterns in RTB. We also found that despite the claimed second price auction, the first price payment in fact is accounted for 55.4 of total cost due to the arrangement of the soft floor price. As such, we argue that the setting of soft floor price in the current RTB systems puts advertisers in a less favourable position. Furthermore, our analysis on the conversation rates shows that the current bidding strategy is far less optimal, indicating the significant needs for optimisation algorithms incorporating the facts such as the temporal behaviours, the frequency and recency of the ad displays, which have not been well considered in the past."
]
} |
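The second-price mechanism referenced above can be sketched for a single ad slot. This is a simplified model: the `reserve` parameter here behaves as a hard floor, whereas the "soft floor" arrangement analyzed in the cited abstract (which can turn the auction effectively first-price) is more involved. Bidder names and values are hypothetical.

```python
def run_second_price(bids, reserve=0.0):
    """Single-slot second-price auction with an optional reserve price.

    The highest bidder at or above the reserve wins and pays the larger
    of the reserve and the second-highest eligible bid.
    Returns (winner, price), or (None, 0.0) if no bid clears the reserve.
    """
    eligible = {b: v for b, v in bids.items() if v >= reserve}
    if not eligible:
        return None, 0.0
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    runner_up = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, max(reserve, runner_up)

assert run_second_price({"a": 2.5, "b": 4.0, "c": 1.0}) == ("b", 2.5)
assert run_second_price({"a": 2.5, "b": 4.0}, reserve=3.0) == ("b", 3.0)
```

The key property, and the reason truthful bidding is a dominant strategy in the pure second-price case, is that the winner's payment does not depend on the winner's own bid.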
1701.02433 | 2578244715 | In this paper, we deal with the uncertainty of bidding for display advertising. Similar to the financial market trading, real-time bidding (RTB) based display advertising employs an auction mechanism to automate the impression level media buying; and running a campaign is no different than an investment of acquiring new customers in return for obtaining additional converted sales. Thus, how to optimally bid on an ad impression to drive the profit and return-on-investment becomes essential. However, the large randomness of the user behaviors and the cost uncertainty caused by the auction competition may result in a significant risk from the campaign performance estimation. In this paper, we explicitly model the uncertainty of user click-through rate estimation and auction competition to capture the risk. We borrow an idea from finance and derive the value at risk for each ad display opportunity. Our formulation results in two risk-aware bidding strategies that penalize risky ad impressions and focus more on the ones with higher expected return and lower risk. The empirical study on real-world data demonstrates the effectiveness of our proposed risk-aware bidding strategies: yielding profit gains of 15.4 in offline experiments and up to 17.5 in an online A B test on a commercial RTB platform over the widely applied bidding strategies. | Recently, ideas from risk management have been introduced to information retrieval, such as document ranking in web search @cite_11 @cite_9 and diversification in top-N recommendation @cite_40 @cite_21 , to improve model robustness or to account for users' psychological attitude toward uncertainty. In the area of recommender systems, bandit solutions @cite_25 have been proposed that model confidence intervals to balance exploration and exploitation in a risk-seeking fashion. | {
"cite_N": [
"@cite_9",
"@cite_21",
"@cite_40",
"@cite_25",
"@cite_11"
],
"mid": [
"2109154214",
"2052602449",
"2148933855",
"",
"1980730196"
],
"abstract": [
"Many techniques for improving search result quality have been proposed. Typically, these techniques increase average effectiveness by devising advanced ranking features and or by developing sophisticated learning to rank algorithms. However, while these approaches typically improve average performance of search results relative to simple baselines, they often ignore the important issue of robustness. That is, although achieving an average gain overall, the new models often hurt performance on many queries. This limits their application in real-world retrieval scenarios. Given that robustness is an important measure that can negatively impact user satisfaction, we present a unified framework for jointly optimizing effectiveness and robustness. We propose an objective that captures the tradeoff between these two competing measures and demonstrate how we can jointly optimize for these two measures in a principled learning framework. Experiments indicate that ranking models learned this way significantly decreased the worst ranking failures while maintaining strong average effectiveness on par with current state-of-the-art models.",
"With the rapid prevalence of smart mobile devices, the number of mobile Apps available has exploded over the past few years. To facilitate the choice of mobile Apps, existing mobile App recommender systems typically recommend popular mobile Apps to mobile users. However, mobile Apps are highly varied and often poorly understood, particularly for their activities and functions related to privacy and security. Therefore, more and more mobile users are reluctant to adopt mobile Apps due to the risk of privacy invasion and other security concerns. To fill this crucial void, in this paper, we propose to develop a mobile App recommender system with privacy and security awareness. The design goal is to equip the recommender system with the functionality which allows to automatically detect and evaluate the security risk of mobile Apps. Then, the recommender system can provide App recommendations by considering both the Apps' popularity and the users' security preferences. Specifically, a mobile App can lead to security risk because insecure data access permissions have been implemented in this App. Therefore, we first develop the techniques to automatically detect the potential security risk for each mobile App by exploiting the requested permissions. Then, we propose a flexible approach based on modern portfolio theory for recommending Apps by striking a balance between the Apps' popularity and the users' security concerns, and build an App hash tree to efficiently recommend Apps. Finally, we evaluate our approach with extensive experiments on a large-scale data set collected from Google Play. The experimental results clearly validate the effectiveness of our approach.",
"This paper studies result diversification in collaborative filtering. We argue that the diversification level in a recommendation list should be adapted to the target users' individual situations and needs. Different users may have different ranges of interests -- the preference of a highly focused user might include only few topics, whereas that of the user with broad interests may encompass a wide range of topics. Thus, the recommended items should be diversified according to the interest range of the target user. Such an adaptation is also required due to the fact that the uncertainty of the estimated user preference model may vary significantly between users. To reduce the risk of the recommendation, we should take the difference of the uncertainty into account as well. In this paper, we study the adaptive diversification problem theoretically. We start with commonly used latent factor models and reformulate them using the mean-variance analysis from the portfolio theory in text retrieval. The resulting Latent Factor Portfolio (LFP) model captures the user's interest range and the uncertainty of the user preference by employing the variance of the learned user latent factors. It is shown that the correlations between items (and thus the item diversity) can be obtained by using the correlations between latent factors (topical diversity), which in return significantly reduce the computation load. Our mathematical derivation also reveals that diversification is necessary, not only for risk-averse system behavior (non-adpative), but also for the target users' individual situations (adaptive), which are represented by the distribution and the variance of the latent user factors. Our experiments confirm the theoretical insights and show that LFP succeeds in improving latent factor models by adaptively introducing recommendation diversity to fit the individual user's needs.",
"",
"This paper studies document ranking under uncertainty. It is tackled in a general situation where the relevance predictions of individual documents have uncertainty, and are dependent between each other. Inspired by the Modern Portfolio Theory, an economic theory dealing with investment in financial markets, we argue that ranking under uncertainty is not just about picking individual relevant documents, but about choosing the right combination of relevant documents. This motivates us to quantify a ranked list of documents on the basis of its expected overall relevance (mean) and its variance; the latter serves as a measure of risk, which was rarely studied for document ranking in the past. Through the analysis of the mean and variance, we show that an optimal rank order is the one that balancing the overall relevance (mean) of the ranked list against its risk level (variance). Based on this principle, we then derive an efficient document ranking algorithm. It generalizes the well-known probability ranking principle (PRP) by considering both the uncertainty of relevance predictions and correlations between retrieved documents. Moreover, the benefit of diversification is mathematically quantified; we show that diversifying documents is an effective way to reduce the risk of document ranking. Experimental results in text retrieval confirm performance."
]
} |
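The bandit confidence-interval idea mentioned above can be sketched with the classic UCB1 index, which is one common instance of this family (the cited work may use a different bandit formulation). The arm counts and rewards below are synthetic.

```python
import math

def ucb_scores(counts, rewards, t):
    """UCB1 index per arm: empirical mean plus a confidence-width bonus.

    counts  : number of pulls of each arm so far
    rewards : cumulative reward of each arm so far
    t       : total number of pulls across all arms
    Arms pulled less often receive a wider confidence bonus, so an
    optimistic (risk-seeking) policy explores them first.
    """
    return [r / n + math.sqrt(2.0 * math.log(t) / n)
            for n, r in zip(counts, rewards)]

# Two arms with the same empirical mean (0.5): the less-pulled arm
# gets the larger bonus, hence is selected for exploration.
scores = ucb_scores(counts=[10, 2], rewards=[5.0, 1.0], t=12)
assert scores[1] > scores[0]
```

Replacing the `+` bonus with a `-` yields a risk-averse lower-confidence-bound score, which is the mirror image used when robustness rather than exploration is the goal.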
1701.02433 | 2578244715 | In this paper, we deal with the uncertainty of bidding for display advertising. Similar to the financial market trading, real-time bidding (RTB) based display advertising employs an auction mechanism to automate the impression level media buying; and running a campaign is no different than an investment of acquiring new customers in return for obtaining additional converted sales. Thus, how to optimally bid on an ad impression to drive the profit and return-on-investment becomes essential. However, the large randomness of the user behaviors and the cost uncertainty caused by the auction competition may result in a significant risk from the campaign performance estimation. In this paper, we explicitly model the uncertainty of user click-through rate estimation and auction competition to capture the risk. We borrow an idea from finance and derive the value at risk for each ad display opportunity. Our formulation results in two risk-aware bidding strategies that penalize risky ad impressions and focus more on the ones with higher expected return and lower risk. The empirical study on real-world data demonstrates the effectiveness of our proposed risk-aware bidding strategies: yielding profit gains of 15.4 in offline experiments and up to 17.5 in an online A B test on a commercial RTB platform over the widely applied bidding strategies. | Computational advertising is associated with a certain level of deficit risk, particularly for performance-driven campaigns as the goal is to acquire new users and gain more sales from them. The risk comes from the dynamics of the market and the user online behaviors @cite_38 . The authors in @cite_36 proposed to measure campaign-level risk and return in a special case of arbitrage between CPM and CPA. Compared to @cite_36 , our work focuses on single campaign optimization, and our risk is modeled from the uncertainty of user response and market competition at impression-level. 
Generally, our work borrows the concept of value at risk from finance to derive risk-aware bidding strategies that allocate budget sensibly between uncertain and confident impressions and achieve a campaign-level profit gain; this differs from finance, where risk control balances return and risk only at the item level (the impression level in RTB). | {
"cite_N": [
"@cite_36",
"@cite_38"
],
"mid": [
"1973081445",
"2095916875"
],
"abstract": [
"We study and formulate arbitrage in display advertising. Real-Time Bidding (RTB) mimics stock spot exchanges and utilises computers to algorithmically buy display ads per impression via a real-time auction. Despite the new automation, the ad markets are still informationally inefficient due to the heavily fragmented marketplaces. Two display impressions with similar or identical effectiveness (e.g., measured by conversion or click-through rates for a targeted audience) may sell for quite different prices at different market segments or pricing schemes. In this paper, we propose a novel data mining paradigm called Statistical Arbitrage Mining (SAM) focusing on mining and exploiting price discrepancies between two pricing schemes. In essence, our SAMer is a meta-bidder that hedges advertisers' risk between CPA (cost per action)-based campaigns and CPM (cost per mille impressions)-based ad inventories; it statistically assesses the potential profit and cost for an incoming CPM bid request against a portfolio of CPA campaigns based on the estimated conversion rate, bid landscape and other statistics learned from historical data. In SAM, (i) functional optimisation is utilised to seek for optimal bidding to maximise the expected arbitrage net profit, and (ii) a portfolio-based risk management solution is leveraged to reallocate bid volume and budget across the set of campaigns to make a risk and return trade-off. We propose to jointly optimise both components in an EM fashion with high efficiency to help the meta-bidder successfully catch the transient statistical arbitrage opportunities in RTB. Both the offline experiments on a real-world large-scale dataset and online A B tests on a commercial platform demonstrate the effectiveness of our proposed solution in exploiting arbitrage in various model settings and market environments.",
"Many online advertising slots are sold through bidding mechanisms by publishers and search engines. Highly affected by the dual force of supply and demand, the prices of advertising slots vary significantly over time. This then influences the businesses whose major revenues are driven by online advertising, particularly for publishers and search engines. To address the problem, we propose to sell the future advertising slots via option contracts (also called ad options). The ad option can give its buyer the right to buy the future advertising slots at a prefixed price. The pricing model of ad options is developed in order to reduce the volatility of the income of publishers or search engines. Our experimental results confirm the validity of ad options and the embedded risk management mechanisms."
]
} |
1701.02141 | 2572840108 | Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution. Off-the-shelf super-resolution algorithms are not ideal for light field data, as they do not consider its structure. On the other hand, the few super-resolution algorithms explicitly tailored for light field data exhibit significant limitations, such as the need to carry out a costly disparity estimation procedure with sub-pixel precision. We propose a new light field super-resolution algorithm meant to address these limitations. We use the complementary information in the different light field views to augment the spatial resolution of the whole light field at once. In particular, we show that coupling the multi-view approach with a graph-based regularizer, which enforces the light field geometric structure, permits to avoid the need of a precise and costly disparity estimation step. Extensive experiments show that the new algorithm compares favorably to the state-of-the-art methods for light field super-resolution, both in terms of visual quality and in terms of reconstruction error. | The super-resolution literature is quite vast, but it can be divided mainly into two areas: single-frame and multi-frame super-resolution methods. In single-frame super-resolution, only one image from a scene is provided, and its resolution has to be increased. This goal is typically achieved by learning a mapping from the low resolution data to the high resolution one, either on an external training set @cite_11 @cite_28 @cite_19 or on the image itself @cite_31 @cite_13 . 
Single-frame algorithms can be applied to each light field view separately in order to augment the resolution of the whole light field, but this approach would neither exploit the high correlation among the views, nor enforce the consistency among them. | {
"cite_N": [
"@cite_28",
"@cite_19",
"@cite_31",
"@cite_13",
"@cite_11"
],
"mid": [
"2067625321",
"54257720",
"2534320940",
"2093633095",
"2121058967"
],
"abstract": [
"The neighbor-embedding (NE) algorithm for single-image super-resolution (SR) reconstruction assumes that the feature spaces of low-resolution (LR) and high-resolution (HR) patches are locally isometric. However, this is not true for SR because of one-to-many mappings between LR and HR patches. To overcome or at least to reduce the problem for NE-based SR reconstruction, we apply a joint learning technique to train two projection matrices simultaneously and to map the original LR and HR feature spaces onto a unified feature subspace. Subsequently, the k-nearest neighbor selection of the input LR image patches is conducted in the unified feature subspace to estimate the reconstruction weights. To handle a large number of samples, joint learning locally exploits a coupled constraint by linking the LR-HR counterparts together with the K-nearest grouping patch pairs. In order to refine further the initial SR estimate, we impose a global reconstruction constraint on the SR outcome based on the maximum a posteriori framework. Preliminary experiments suggest that the proposed algorithm outperforms NE-related baselines.",
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"Methods for super-resolution can be broadly classified into two families of methods: (i) The classical multi-image super-resolution (combining images obtained at subpixel misalignments), and (ii) Example-Based super-resolution (learning correspondence between low and high resolution image patches from a database). In this paper we propose a unified framework for combining these two families of methods. We further show how this combined approach can be applied to obtain super resolution from as little as a single image (with no database or prior examples). Our approach is based on the observation that patches in a natural image tend to redundantly recur many times inside the image, both within the same scale, as well as across different scales. Recurrence of patches within the same image scale (at subpixel misalignments) gives rise to the classical super-resolution, whereas recurrence of patches across different scales of the same image gives rise to example-based super-resolution. Our approach attempts to recover at each pixel its best possible resolution increase based on its patch redundancy within and across scales.",
"This paper presents a novel example-based single-image superresolution procedure that upscales to high-resolution (HR) a given low-resolution (LR) input image without relying on an external dictionary of image examples. The dictionary instead is built from the LR input image itself, by generating a double pyramid of recursively scaled, and subsequently interpolated, images, from which self-examples are extracted. The upscaling procedure is multipass, i.e., the output image is constructed by means of gradual increases, and consists in learning special linear mapping functions on this double pyramid, as many as the number of patches in the current image to upscale. More precisely, for each LR patch, similar self-examples are found, and, because of them, a linear function is learned to directly map it into its HR version. Iterative back projection is also employed to ensure consistency at each pass of the procedure. Extensive experiments and comparisons with other state-of-the-art methods, based both on external and internal dictionaries, show that our algorithm can produce visually pleasant upscalings, with sharp edges and well reconstructed details. Moreover, when considering objective metrics, such as Peak signal-to-noise ratio and Structural similarity, our method turns out to give the best performance.",
"This paper presents a new approach to single-image superresolution, based upon sparse signal representation. Research on image statistics suggests that image patches can be well-represented as a sparse linear combination of elements from an appropriately chosen over-complete dictionary. Inspired by this observation, we seek a sparse representation for each patch of the low-resolution input, and then use the coefficients of this representation to generate the high-resolution output. Theoretical results from compressed sensing suggest that under mild conditions, the sparse representation can be correctly recovered from the downsampled signals. By jointly training two dictionaries for the low- and high-resolution image patches, we can enforce the similarity of sparse representations between the low-resolution and high-resolution image patch pair with respect to their own dictionaries. Therefore, the sparse representation of a low-resolution image patch can be applied with the high-resolution image patch dictionary to generate a high-resolution image patch. The learned dictionary pair is a more compact representation of the patch pairs, compared to previous approaches, which simply sample a large number of image patch pairs, reducing the computational cost substantially. The effectiveness of such a sparsity prior is demonstrated for both general image super-resolution (SR) and the special case of face hallucination. In both cases, our algorithm generates high-resolution images that are competitive or even superior in quality to images produced by other similar SR methods. In addition, the local sparse modeling of our approach is naturally robust to noise, and therefore the proposed algorithm can handle SR with noisy inputs in a more unified framework."
]
} |
1701.02141 | 2572840108 | Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution. Off-the-shelf super-resolution algorithms are not ideal for light field data, as they do not consider its structure. On the other hand, the few super-resolution algorithms explicitly tailored for light field data exhibit significant limitations, such as the need to carry out a costly disparity estimation procedure with sub-pixel precision. We propose a new light field super-resolution algorithm meant to address these limitations. We use the complementary information in the different light field views to augment the spatial resolution of the whole light field at once. In particular, we show that coupling the multi-view approach with a graph-based regularizer, which enforces the light field geometric structure, permits to avoid the need of a precise and costly disparity estimation step. Extensive experiments show that the new algorithm compares favorably to the state-of-the-art methods for light field super-resolution, both in terms of visual quality and in terms of reconstruction error. | In the multi-frame scenario, multiple images of the same scene are used to increase the resolution of a target image. To this purpose, all the available images are typically modeled as translated and rotated versions of the target one @cite_17 @cite_9 . The multi-frame super-resolution scenario resembles the light field one, but its global image warping model does not fit the light field structure. In particular, the different moving speeds of the objects in the scene across the light field views, which encode their different depths, cannot be captured by a global warping model. 
Multi-frame algorithms employing more complex warping models exist, for example in video super-resolution @cite_24 @cite_3 , yet these warping models do not exactly fit the geometry of light field data and their construction is computationally demanding. In particular, multi-frame video super-resolution involves two main steps, namely optical flow estimation, which finds correspondences between temporally successive frames, and then a super-resolution step built on the optical flow. | {
"cite_N": [
"@cite_24",
"@cite_9",
"@cite_3",
"@cite_17"
],
"mid": [
"1561920876",
"2165939075",
"2110003257",
"2087380704"
],
"abstract": [
"In this paper, we propose a variational framework for computing a superresolved image of a scene from an arbitrary input video. To this end, we employ a recently proposed quadratic relaxation scheme for high accuracy optic flow estimation. Subsequently we estimate a high resolution image using a variational approach that models the image formation process and imposes a total variation regularity of the estimated intensity map. Minimization of this variational approach by gradient descent gives rise to a deblurring process with a nonlinear diffusion. In contrast to many alternative approaches, the proposed algorithm does not make assumptions regarding the motion of objects. We demonstrate good experimental performance on a variety of real-world examples. In particular we show that the computed super resolution images are indeed sharper than the individual input images.",
"Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.",
"We propose a convex variational framework to compute high resolution images from a low resolution video. The image formation process is analyzed to provide to a well designed model for warping, blurring, downsampling and regularization. We provide a comprehensive investigation of the single model components. The super-resolution problem is modeled as a minimization problem in an unified convex framework, which is solved by a fast primal dual algorithm. A comprehensive evaluation on the influence of different kinds of noise is carried out. The proposed algorithm shows excellent recovery of information for various real and synthetic datasets.",
"Image resolution can be improved when the relative displacements in image sequences are known accurately, and some knowledge of the imaging process is available. The proposed approach is similar to back-projection used in tomography. Examples of improved image resolution are given for gray-level and color images, when the unknown image displacements are computed from the image sequence."
]
} |
1701.02141 | 2572840108 | Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution. Off-the-shelf super-resolution algorithms are not ideal for light field data, as they do not consider its structure. On the other hand, the few super-resolution algorithms explicitly tailored for light field data exhibit significant limitations, such as the need to carry out a costly disparity estimation procedure with sub-pixel precision. We propose a new light field super-resolution algorithm meant to address these limitations. We use the complementary information in the different light field views to augment the spatial resolution of the whole light field at once. In particular, we show that coupling the multi-view approach with a graph-based regularizer, which enforces the light field geometric structure, permits to avoid the need of a precise and costly disparity estimation step. Extensive experiments show that the new algorithm compares favorably to the state-of-the-art methods for light field super-resolution, both in terms of visual quality and in terms of reconstruction error. | In the light field representation, the views lie on a two-dimensional grid with adjacent views sharing a constant baseline under the assumption of both vertical and horizontal registration. As a consequence, not only the optical flow computation reduces to disparity estimation, but the disparity map at one view determines its warping to every other view in the light field, in the absence of occlusions. In @cite_1 Wanner and Goldluecke build over these observations to extract the disparity map at each view directly from the epipolar line slopes with the help of a structure tensor operator. 
Then, similarly to multi-frame super-resolution, they project all the views to the target one within a global optimization formulation endowed with a prior. Although the structure tensor operator makes it possible to carry out disparity estimation in the continuous domain, this task remains very challenging at low spatial resolution. As a result, disparity errors unfortunately translate into significant artifacts in the textured areas and along object edges. Finally, each view of the light field has to be processed separately to super-resolve the complete light field, which prevents fully exploiting the inter-view dependencies. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1537348865"
],
"abstract": [
"We present a variational framework to generate super-resolved novel views from 4D light field data sampled at low resolution, for example by a plenoptic camera. In contrast to previous work, we formulate the problem of view synthesis as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by scene geometry transformations. High-accuracy depth maps for the input views are locally estimated using epipolar plane image analysis, which yields floating point depth precision without the need for expensive matching cost minimization. The disparity maps are further improved by increasing angular resolution with synthesized intermediate views. Minimization of the super-resolution model energy is performed with state of the art convex optimization algorithms within seconds."
]
} |
1701.02141 | 2572840108 | Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution. Off-the-shelf super-resolution algorithms are not ideal for light field data, as they do not consider its structure. On the other hand, the few super-resolution algorithms explicitly tailored for light field data exhibit significant limitations, such as the need to carry out a costly disparity estimation procedure with sub-pixel precision. We propose a new light field super-resolution algorithm meant to address these limitations. We use the complementary information in the different light field views to augment the spatial resolution of the whole light field at once. In particular, we show that coupling the multi-view approach with a graph-based regularizer, which enforces the light field geometric structure, permits to avoid the need of a precise and costly disparity estimation step. Extensive experiments show that the new algorithm compares favorably to the state-of-the-art methods for light field super-resolution, both in terms of visual quality and in terms of reconstruction error. | In another work, Heber and Pock @cite_4 consider the matrix obtained by warping all the views to a reference one, and propose to model it as the sum of a low rank matrix and a noise one, where the later describes the noise and occlusions. This model, that resembles @cite_10 , is primarily meant for disparity estimation at the reference view. However, the authors show that a slight modification of the objective function can provide the corresponding high resolution view, in addition to the low resolution disparity map at the reference view. 
The algorithm could ideally be applied separately to each view in order to super-resolve the whole light field, but this may not be the ideal solution to the global problem, due to the high redundancy of estimating all the low resolution disparity maps independently. | {
"cite_N": [
"@cite_10",
"@cite_4"
],
"mid": [
"2145962650",
"201905283"
],
"abstract": [
"This article is about a curious phenomenon. Suppose we have a data matrix, which is the superposition of a low-rank component and a sparse component. Can we recover each component individually? We prove that under some suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; among all feasible decompositions, simply minimize a weighted combination of the nuclear norm and of the ℓ1 norm. This suggests the possibility of a principled approach to robust principal component analysis since our methodology and results assert that one can recover the principal components of a data matrix even though a positive fraction of its entries are arbitrarily corrupted. This extends to the situation where a fraction of the entries are missing as well. We discuss an algorithm for solving this optimization problem, and present applications in the area of video surveillance, where our methodology allows for the detection of objects in a cluttered background, and in the area of face recognition, where it offers a principled way of removing shadows and specularities in images of faces.",
"In this paper we propose a new type of matching term for multi-view stereo reconstruction. Our model is based on the assumption, that if one warps the images of the various views to a common warping center and considers each warped image as one row in a matrix, then this matrix will have low rank. This also implies, that we assume a certain amount of overlap between the views after the warping has been performed. Such an assumption is obviously met in the case of light field data, which motivated us to demonstrate the proposed model for this type of data. Our final model is a large scale convex optimization problem, where the low rank minimization is relaxed via the nuclear norm. We present qualitative and quantitative experiments, where the proposed model achieves excellent results."
]
} |
1701.02141 | 2572840108 | Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution. Off-the-shelf super-resolution algorithms are not ideal for light field data, as they do not consider its structure. On the other hand, the few super-resolution algorithms explicitly tailored for light field data exhibit significant limitations, such as the need to carry out a costly disparity estimation procedure with sub-pixel precision. We propose a new light field super-resolution algorithm meant to address these limitations. We use the complementary information in the different light field views to augment the spatial resolution of the whole light field at once. In particular, we show that coupling the multi-view approach with a graph-based regularizer, which enforces the light field geometric structure, permits to avoid the need of a precise and costly disparity estimation step. Extensive experiments show that the new algorithm compares favorably to the state-of-the-art methods for light field super-resolution, both in terms of visual quality and in terms of reconstruction error. | The light field super-resolution problem has been addressed within the framework of too. In particular, @cite_25 consider the cascade of two CNNs, the first meant to super-resolve the given light field views, and the second to synthesize new high resolution views based on the previously super-resolved ones. However, the first CNN (whose design is borrowed from @cite_19 ) is meant for single-frame super-resolution, therefore the views are super-resolved independently, without considering the light field structure. | {
"cite_N": [
"@cite_19",
"@cite_25"
],
"mid": [
"54257720",
"2588196171"
],
"abstract": [
"We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) [15] that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage.",
"Commercial light field cameras provide spatial and angular information, but their limited resolution becomes an important problem in practical use. In this letter, we present a novel method for light field image super-resolution (SR) to simultaneously up-sample both the spatial and angular resolutions of a light field image via a deep convolutional neural network. We first augment the spatial resolution of each subaperture image by a spatial SR network, then novel views between super-resolved subaperture images are generated by three different angular SR networks according to the novel view locations. We improve both the efficiency of training and the quality of angular SR results by using weight sharing. In addition, we provide a new light field image dataset for training and validating the network. We train our whole network end-to-end, and show state-of-the-art performances on quantitative and qualitative evaluations."
]
} |
1701.02141 | 2572840108 | Light field cameras capture the 3D information in a scene with a single exposure. This special feature makes light field cameras very appealing for a variety of applications: from post-capture refocus to depth estimation and image-based rendering. However, light field cameras suffer by design from strong limitations in their spatial resolution. Off-the-shelf super-resolution algorithms are not ideal for light field data, as they do not consider its structure. On the other hand, the few super-resolution algorithms explicitly tailored for light field data exhibit significant limitations, such as the need to carry out a costly disparity estimation procedure with sub-pixel precision. We propose a new light field super-resolution algorithm meant to address these limitations. We use the complementary information in the different light field views to augment the spatial resolution of the whole light field at once. In particular, we show that coupling the multi-view approach with a graph-based regularizer, which enforces the light field geometric structure, permits to avoid the need of a precise and costly disparity estimation step. Extensive experiments show that the new algorithm compares favorably to the state-of-the-art methods for light field super-resolution, both in terms of visual quality and in terms of reconstruction error. | Finally, we note that some authors, e.g., @cite_18 , consider the recovery of an all in focus image with full sensor resolution from the light field camera output. They refer to this task as light field super-resolution although it is different from the problem considered in this work. In this article, no light field applications is considered a priori: the light field views are all super-resolved, thus enabling any light field application to be performed later at a resolution higher than the original one. 
Unlike the other light field super-resolution algorithms, ours requires neither an explicit a priori disparity estimation step nor a learning procedure. Moreover, our algorithm reconstructs all the views jointly, provides homogeneous quality across the reconstructed views, and preserves the light field structure. | {
"cite_N": [
"@cite_18"
],
"mid": [
"2105198794"
],
"abstract": [
"Portable light field (LF) cameras have demonstrated capabilities beyond conventional cameras. In a single snapshot, they enable digital image refocusing and 3D reconstruction. We show that they obtain a larger depth of field but maintain the ability to reconstruct detail at high resolution. In fact, all depths are approximately focused, except for a thin slab where blur size is bounded, i.e., their depth of field is essentially inverted compared to regular cameras. Crucial to their success is the way they sample the LF, trading off spatial versus angular resolution, and how aliasing affects the LF. We show that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing. We address these challenges using an explicit image formation model, and incorporate Lambertian and texture preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework, eliminating aliasing by fusing multiview information. We demonstrate the method on synthetic and real images captured with our LF camera, and show that it can outperform other computational camera systems."
]
} |
1701.02120 | 2949930054 | Privacy issues of recommender systems have become a hot topic for the society as such systems are appearing in every corner of our life. In contrast to the fact that many secure multi-party computation protocols have been proposed to prevent information leakage in the process of recommendation computation, very little has been done to restrict the information leakage from the recommendation results. In this paper, we apply the differential privacy concept to neighborhood-based recommendation methods (NBMs) under a probabilistic framework. We first present a solution, by directly calibrating Laplace noise into the training process, to differential-privately find the maximum a posteriori parameters similarity. Then we connect differential privacy to NBMs by exploiting a recent observation that sampling from the scaled posterior distribution of a Bayesian model results in provably differentially private systems. Our experiments show that both solutions allow promising accuracy with a modest privacy budget, and the second solution yields better accuracy if the sampling asymptotically converges. We also compare our solutions to the recent differentially private matrix factorization (MF) recommender systems, and show that our solutions achieve better accuracy when the privacy budget is reasonably small. This is an interesting result because MF systems often offer better accuracy when differential privacy is not applied. | A number of works have demonstrated that an attacker can infer the user sensitive information, such as gender and politic view, from public recommendation results without using much background knowledge @cite_12 @cite_16 @cite_8 @cite_25 . | {
"cite_N": [
"@cite_16",
"@cite_25",
"@cite_12",
"@cite_8"
],
"mid": [
"2226173891",
"2159196732",
"",
"2135930857"
],
"abstract": [
"The popularity of online recommender systems has soared; they are deployed in numerous websites and gather tremendous amounts of user data that are necessary for recommendation purposes. This data, however, may pose a severe threat to user privacy, if accessed by untrusted parties or used inappropriately. Hence, it is of paramount importance for recommender system designers and service providers to find a sweet spot, which allows them to generate accurate recommendations and guarantee the privacy of their users. In this chapter we overview the state of the art in privacy enhanced recommendations. We analyze the risks to user privacy imposed by recommender systems, survey the existing solutions, and discuss the privacy implications for the users of recommenders. We conclude that a considerable effort is still required to develop practical recommendation solutions that provide adequate privacy guarantees, while at the same time facilitating the delivery of high-quality recommendations to their users.",
"User demographics, such as age, gender and ethnicity, are routinely used for targeting content and advertising products to users. Similarly, recommender systems utilize user demographics for personalizing recommendations and overcoming the cold-start problem. Often, privacy-concerned users do not provide these details in their online profiles. In this work, we show that a recommender system can infer the gender of a user with high accuracy, based solely on the ratings provided by users (without additional metadata), and a relatively small number of users who share their demographics. Focusing on gender, we design techniques for effectively adding ratings to a user's profile for obfuscating the user's gender, while having an insignificant effect on the recommendations provided to that user.",
"",
"We present a new class of statistical de- anonymization attacks against high-dimensional micro-data, such as individual preferences, recommendations, transaction records and so on. Our techniques are robust to perturbation in the data and tolerate some mistakes in the adversary's background knowledge. We apply our de-anonymization methodology to the Netflix Prize dataset, which contains anonymous movie ratings of 500,000 subscribers of Netflix, the world's largest online movie rental service. We demonstrate that an adversary who knows only a little bit about an individual subscriber can easily identify this subscriber's record in the dataset. Using the Internet Movie Database as the source of background knowledge, we successfully identified the Netflix records of known users, uncovering their apparent political preferences and other potentially sensitive information."
]
} |
1701.02160 | 2552217791 | This paper describes the work that has been done in the design and development of a wireless on-board diagnostic system (OBD II) fleet management system. The system aims to measure speed, distance, and fuel consumption of vehicles for tracking and analysis purposes. An OBD II reader is designed to measure speed and mass air flow, from which the distance and fuel consumption are also computed. This data is then transmitted via WiFi to a remote server. The system also implements global positioning system tracking to determine the location of the vehicle. A database management system is implemented at the remote server for the storage and management of transmitted data and a graphical user interface is developed for analysing the transmitted data. Various qualification tests are conducted to verify the functionality of the system. The results demonstrate that the system is capable of reading the various parameters, and can successfully process, transmit, and display the readings. | The integration of OBD II and wireless communication technologies was observed in @cite_8 , where an OBD II system that measured real time vehicle data was built. The system interfaced with a car's ECU through the OBD II connector. The data received from the ECU was then transmitted to a remote device via Bluetooth, WiFi, or WCDMA in hexadecimal format. The study mainly focused on integrating various wireless communication technologies to connect to various mobile devices. The monitored parameters included vehicle speed and engine revolution per minute (RPM). A flaw in this system is that the received data is not meaningful to a casual user as the hexadecimal data requires decoding. | {
"cite_N": [
"@cite_8"
],
"mid": [
"1965068863"
],
"abstract": [
"AbstractOrdinarily, a driver knows the current driving state of a vehicle through the On Board Diagnosis-II (OBD-II) data. Lately convenience devices related to real-time vehicle control and driving information data have been using OBD-II from the vehicle network.However, when these devices receive vehicle data rom the OBD-II network, each device receives its own information separately using its own vehicle network. If the driver changes the product and OBD-II connector to use OBD-II, the driver also changes the OBD-II connector. As a result, the driver spends useless money for the product and vehicle to use OBD-II.In this paper, we implemented integrating an OBD-II connector that uses Bluetooth, Wi-Fi, and WCDMA modules to overcome the above disadvantage."
]
} |
1701.02160 | 2552217791 | This paper describes the work that has been done in the design and development of a wireless on-board diagnostic system (OBD II) fleet management system. The system aims to measure speed, distance, and fuel consumption of vehicles for tracking and analysis purposes. An OBD II reader is designed to measure speed and mass air flow, from which the distance and fuel consumption are also computed. This data is then transmitted via WiFi to a remote server. The system also implements global positioning system tracking to determine the location of the vehicle. A database management system is implemented at the remote server for the storage and management of transmitted data and a graphical user interface is developed for analysing the transmitted data. Various qualification tests are conducted to verify the functionality of the system. The results demonstrate that the system is capable of reading the various parameters, and can successfully process, transmit, and display the readings. | A system for verification of engine information and diagnosis of engine malfunction using a Bluetooth OBD II scanner was developed in @cite_1 . An Android device was used to receive the measured or diagnostic data. The system mainly focused on defining a protocol that enabled transmitting and receiving of OBD II data from multiple sensors simultaneously. This study focused on real-time diagnosis of the engine condition, and data was only made available to the driver of the vehicle. | {
"cite_N": [
"@cite_1"
],
"mid": [
"1498765256"
],
"abstract": [
"This study implemented a mobile diagnosing system that provides user-centered interfaces for more precisely estimating and diagnosing engine conditions through communications with the self-developed ECU only for industrial CRDI engine use. For the implemented system, a new protocol was designed and applied based on OBD-II standard to receive engine data values of the developed ECU. The designed protocol consists of a message structure to request data transmission from a smartphone to ECU and a response message structure for ECU to send data to a smartphone. It transmits 31 pieces of engine condition information simultaneously and sends the trouble diagnostic code. Because the diagnostic system enables real-time communication through modules, the engine condition information can be checked at any time. Thus, because when troubles take place on the engine, users can check them right away, quick response and resolution are possible, and stable system management can be expected."
]
} |
1701.02160 | 2552217791 | This paper describes the work that has been done in the design and development of a wireless on-board diagnostic system (OBD II) fleet management system. The system aims to measure speed, distance, and fuel consumption of vehicles for tracking and analysis purposes. An OBD II reader is designed to measure speed and mass air flow, from which the distance and fuel consumption are also computed. This data is then transmitted via WiFi to a remote server. The system also implements global positioning system tracking to determine the location of the vehicle. A database management system is implemented at the remote server for the storage and management of transmitted data and a graphical user interface is developed for analysing the transmitted data. Various qualification tests are conducted to verify the functionality of the system. The results demonstrate that the system is capable of reading the various parameters, and can successfully process, transmit, and display the readings. | The study in @cite_7 used an OBD II reader for acquiring real-time vehicle parameters from the controller area network (CAN) bus of a hybrid electric vehicle. The OBD II reader used the ELM 327 IC to interpret the CAN protocol. The data was received wirelessly by an Android device over a Bluetooth network, and from the Android device, data was sent via GPRS to a remote server. | {
"cite_N": [
"@cite_7"
],
"mid": [
"2121291038"
],
"abstract": [
"With the rapid development of the smartphone market, future cars seem to have more connections with intelligent cell phone and Internet. Intelligent transportation system (ITS) and telematics system have become research focus in recent years. There is an increasing demand for remote monitoring and diagnostic system as the further research of hybrid electric vehicle (HEV) goes on. In this paper, a remote controller area network bus (CAN-Bus) data monitor and diagnostic system for HEV is presented using on board diagnostic version-II (OBD-II) and Android-based smartphone. It is low-cost, convenient, and extensible with smartphone used in the system to realize communication with ELM327 and remote monitoring center wirelessly. The prototype of client and server is developed in Java language, and it is proved by the test that the system works stably and the collected data have practical values."
]
} |
1701.02160 | 2552217791 | This paper describes the work that has been done in the design and development of a wireless on-board diagnostic system (OBD II) fleet management system. The system aims to measure speed, distance, and fuel consumption of vehicles for tracking and analysis purposes. An OBD II reader is designed to measure speed and mass air flow, from which the distance and fuel consumption are also computed. This data is then transmitted via WiFi to a remote server. The system also implements global positioning system tracking to determine the location of the vehicle. A database management system is implemented at the remote server for the storage and management of transmitted data and a graphical user interface is developed for analysing the transmitted data. Various qualification tests are conducted to verify the functionality of the system. The results demonstrate that the system is capable of reading the various parameters, and can successfully process, transmit, and display the readings. | The impact of driving behaviour on fuel consumption was monitored in @cite_16 by measuring various parameters such as mass air flow using a Bluetooth OBD II reader. An Android application was used to view the parameters measured for analysis. The measured data was then sent to a web-based remote server. This system exploits the advantage of vehicle on-board systems by using accessible parameters to perform fuel consumption calculations. | {
"cite_N": [
"@cite_16"
],
"mid": [
"1565069923"
],
"abstract": [
"Despite the recent technological improvements in vehicles and engines, and the introduction of better fuels, road transportation is still responsible for air pollution in urban areas due to the increasing number of circulating vehicles, and their relative travelled distances. We develop a methodology to calculate, in real-time, the consumption and environmental impact of spark ignition and diesel vehicles from a set of variables such as Engine Fuel Rate, Speed, Mass Air Flow, Absolute Load, and Manifold Absolute Pressure, all of them obtained from the vehicle's Electronic Control Unit (ECU). Our platform is able to assist drivers in correcting their bad driving habits, while offering helpful recommendations to improve fuel economy. In this paper we will demonstrate through data mining, to what extent does the driving style really affect (negatively or positively) the fuel consumption, as well as the increase or reduction of greenhouse gas emissions generated by vehicles."
]
} |
1701.02160 | 2552217791 | This paper describes the work that has been done in the design and development of a wireless on-board diagnostic system (OBD II) fleet management system. The system aims to measure speed, distance, and fuel consumption of vehicles for tracking and analysis purposes. An OBD II reader is designed to measure speed and mass air flow, from which the distance and fuel consumption are also computed. This data is then transmitted via WiFi to a remote server. The system also implements global positioning system tracking to determine the location of the vehicle. A database management system is implemented at the remote server for the storage and management of transmitted data and a graphical user interface is developed for analysing the transmitted data. Various qualification tests are conducted to verify the functionality of the system. The results demonstrate that the system is capable of reading the various parameters, and can successfully process, transmit, and display the readings. | The study in @cite_0 implemented an Android-based application that monitored the vehicle via an OBD II interface by measuring the air-bag trigger and G-force experienced by the passenger during a collision, to detect accidents. | {
"cite_N": [
"@cite_0"
],
"mid": [
"2074967085"
],
"abstract": [
"The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Vertical fragmentation splits a given relation @math into sub-relations that are of @math with respect to a subset of attributes. It consists in grouping together attributes that are frequently accessed by queries. Vertical fragments are thus built by projection. The original relation is reconstructed by simply joining the fragments. Relevant examples for techniques belonging to this class are the following. Navathe vertically partition a relation into fragments and propose two alternative fragmentation methods: @cite_9 and @cite_32 . The first method is based on three matrices (one capturing the , one capturing the and another one capturing the of queries) while the second one exploits an objective function. 
In @cite_9 , authors present techniques for applying vertical fragmentation in the following specialized application contexts: databases stored on homogeneous devices, databases stored in different memory levels, and distributed databases. | {
"cite_N": [
"@cite_9",
"@cite_32"
],
"mid": [
"2039795745",
"2114679600"
],
"abstract": [
"This paper addresses the vertical partitioning of a set of logical records or a relation into fragments. The rationale behind vertical partitioning is to produce fragments, groups of attribute columns, that “closely match” the requirements of transactions. Vertical partitioning is applied in three contexts: a database stored on devices of a single type, a database stored in different memory levels, and a distributed database. In a two-level memory hierarchy, most transactions should be processed using the fragments in primary memory. In distributed databases, fragment allocation should maximize the amount of local transaction processing. Fragments may be nonoverlapping or overlapping. A two-phase approach for the determination of fragments is proposed; in the first phase, the design is driven by empirical objective functions which do not require specific cost information. The second phase performs cost optimization by incorporating the knowledge of a specific application environment. The algorithms presented in this paper have been implemented, and examples of their actual use are shown.",
"Vertical partitioning is the process of subdividing the attributes of a relation or a record type, creating fragments. Previous approaches have used an iterative binary partitioning method which is based on clustering algorithms and mathematical cost functions. In this paper, however, we propose a new vertical partitioning algorithm using a graphical technique. This algorithm starts from the attribute affinity matrix by considering it as a complete graph. Then, forming a linearly connected spanning tree, it generates all meaningful fragments simultaneously by considering a cycle as a fragment. We show its computational superiority. It provides a cleaner alternative without arbitrary objective functions and provides an improvement over our previous work on vertical partitioning."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Horizontal fragmentation divides a given relation @math into sub-sets of tuples by exploiting query predicates. It reduces query processing costs by minimizing the number of irrelevant instances accessed. Horizontal fragments are thus built by selection. The original relation is reconstructed by fragment union. A variant, the so-called derived horizontal fragmentation @cite_18 , consists in partitioning a relation @math with respect to predicates defined on another relation, say @math . Major algorithms addressing horizontal fragmentation are those of @cite_36 and @cite_32 . | {
"cite_N": [
"@cite_36",
"@cite_18",
"@cite_32"
],
"mid": [
"1545406293",
"1565472858",
"2114679600"
],
"abstract": [
"",
"The problem of selecting an optimal fragmentation schema of a data warehouse is more challenging compared to that in relational and object databases. This challenge is due to the several choices of partitioning star or snowflake schemas. Data partitioning is beneficial if and only if the fact table is fragmented based on the partitioning schemas of dimension tables. This may increase the number of fragments of the fact tables dramatically and makes their maintenance very costly. Therefore, the right selection of fragmenting schemas is important for better performance of OLAP queries. In this paper, we present a genetic algorithm for schema partitioning selection problem. The proposed algorithm gives better solutions since the search space is constrained by the schema partitioning. We conduct several experimental studies using the APB-1 release II benchmark for validating the proposed algorithm.",
"Vertical partitioning is the process of subdividing the attributes of a relation or a record type, creating fragments. Previous approaches have used an iterative binary partitioning method which is based on clustering algorithms and mathematical cost functions. In this paper, however, we propose a new vertical partitioning algorithm using a graphical technique. This algorithm starts from the attribute affinity matrix by considering it as a complete graph. Then, forming a linearly connected spanning tree, it generates all meaningful fragments simultaneously by considering a cycle as a fragment. We show its computational superiority. It provides a cleaner alternative without arbitrary objective functions and provides an improvement over our previous work on vertical partitioning."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | In order to improve ad-hoc query evaluation performance, Datta @cite_56 propose exploiting a vertical fragmentation of facts to build the index , while Golfarelli @cite_33 propose applying the same fragmentation methodology on data warehouse views. Munneke @cite_61 instead propose an original fragmentation methodology targeted at multidimensional databases. In this case, fragmentation consists in deriving a global data cube from fragments containing a sub-set of data defined by meaningful slice and dice OLAP-like operations @cite_28 @cite_52 . In @cite_61 , authors also define an alternative fragmentation strategy, named , which removes one or several dimensions from the target data cube in order to produce fragments having fewer dimensions than the original data cube. | {
"cite_N": [
"@cite_61",
"@cite_33",
"@cite_28",
"@cite_52",
"@cite_56"
],
"mid": [
"185718653",
"1863762422",
"124832240",
"2103201239",
"1572056544"
],
"abstract": [
"",
"Within the framework of the data warehouse design methodology we are developing, in this paper we investigate the problem of vertical fragmentation of relational views aimed at minimizing the global query response time. Each view includes several measures which, within the workload, are seldom requested together; thus, the system performance may be increased by partitioning the views to be materialized into smaller tables. On the other hand, drill-across queries involve measures taken from two or more views; in this case the access costs may be decreased by unifying these views into larger tables. Within the data warehouse context, the presence of redundant views makes the fragmentation problem more complex than in traditional relational databases since it requires to decide on which views each query should be executed. After formalizing the fragmentation problem as a 0-1 integer linear programming problem, we define a cost function and propose a branch-and-bound algorithm to minimize it. Finally, we demonstrate the usefulness of our approach by presenting a sample set of experimental results.",
"Overview Recently, there has been a great deal of discussion in the trade press and elsewhere regarding the coexistence of so-called transaction databases with decision support systems. These discussions usually revolve around the argument that the physical design required for acceptable performance of each is incompatible and that therefore, data should be stored redundantly in multiple enterprise databases: one for transaction processing, and the other for decision support type activities. Also, these same arguments usually confuse physical schema with logical and conceptual schema.",
"Data analysis applications typically aggregate data across many dimensions looking for unusual patterns. The SQL aggregate functions and the GROUP BY operator produce zero-dimensional or one-dimensional answers. Applications need the N-dimensional generalization of these operators. The paper defines that operator, called the data cube or simply cube. The cube operator generalizes the histogram, cross-tabulation, roll-up, drill-down, and sub-total constructs found in most report writers. The cube treats each of the N aggregation attributes as a dimension of N-space. The aggregate of a particular set of attribute values is a point in this space. The set of points forms an N-dimensionaI cube. Super-aggregates are computed by aggregating the N-cube to lower dimensional spaces. Aggregation points are represented by an \"infinite value\": ALL, so the point (ALL,ALL,...,ALL, sum(*)) represents the global sum of all items. Each ALL value actually represents the set of values contributing to that aggregation.",
"Data warehousing and On-Line Analytical Processing (OLAP) are becoming critical components of decision support as advances in technology are improving the ability to manage and retrieve large volumes of data. Data warehousing refers to collection of decision support technologies aimed at enabling the knowledge worker (executive, manager, analyst) to make better and faster decisions\" [1]. OLAP refers to the technique of performing complex analysis over the information stored in a data warehouse. It is often used by management analysts and decision makers in a variety of functional areas such as sales and marketing planning. Typically, OLAP queries look for speci c trends and anomalies in the base information by aggregating, ranging, ltering and grouping data in many di erent ways [8]. E cient query processing is a critical requirement for OLAP because the underlying data warehouse is very large, queries are often quite complex, and decision support applications typically require in-"
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Bellatreche and Boukhalfa @cite_18 apply horizontal fragmentation to data warehouse star schemas. Their fragmentation strategy is based on a reference query-workload, and it exploits a genetic algorithm to select a suitable partitioning schema among all the possible ones. Overall, the proposed approach aims at selecting an that minimizes query cost. Wu and Buchmann @cite_2 recommend combining horizontal and vertical fragmentation for query optimization purposes. In @cite_2 , a fact table can be horizontally partitioned with respect to one or more dimensions of the data warehouse. Moreover, the fact table can also be vertically partitioned according to its dimensions, i.e. all the foreign keys to the dimensional tables are partitioned as separate tables. | {
"cite_N": [
"@cite_18",
"@cite_2"
],
"mid": [
"1565472858",
"1495399242"
],
"abstract": [
"The problem of selecting an optimal fragmentation schema of a data warehouse is more challenging compared to that in relational and object databases. This challenge is due to the several choices of partitioning star or snowflake schemas. Data partitioning is beneficial if and only if the fact table is fragmented based on the partitioning schemas of dimension tables. This may increase the number of fragments of the fact tables dramatically and makes their maintenance very costly. Therefore, the right selection of fragmenting schemas is important for better performance of OLAP queries. In this paper, we present a genetic algorithm for schema partitioning selection problem. The proposed algorithm gives better solutions since the search space is constrained by the schema partitioning. We conduct several experimental studies using the APB-1 release II benchmark for validating the proposed algorithm.",
"Data warehousing is a booming industry with many interesting research problems. The database research community has concentrated on only a few aspects. In this paper, We summarize the state of the art, suggest architectural extensions and identify research problems in the areas of warehouse modeling and design, data cleansing and loading, data refreshing and purging, metadata management, extensions to relational operators, alternative implementations of traditional relational operators, special index structures and query optimization with aggregates."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | In order to distribute a data warehouse, Noaman @cite_4 exploit a top-down strategy making use of horizontal fragmentation. In @cite_4 , authors propose an algorithm for deriving horizontal fragments from the fact table based on input queries defined on all the dimensional tables. Finally, Wehrle @cite_25 propose distributing and querying a data warehouse by meaningfully exploiting the capabilities offered by a . In @cite_25 , authors make use of derived horizontal fragmentation to split the target data warehouse and build the so-called , which is a set of data portions derived from the data warehouse and used for query optimization purposes, where each portion is computed as a fragment of the partition. | {
"cite_N": [
"@cite_4",
"@cite_25"
],
"mid": [
"2055393182",
"2142490918"
],
"abstract": [
"Data warehousing is one of the major research topics of appliedside database investigators. Most of the work to date has focused on building large centralized systems that are integrated repositories founded on pre-existing systems upon which all corporate-wide data are based. Unfortunately, this approach is very expensive and tends to ignore the advantages realized during the past decade in the area of distribution and support for data localization in a geographically dispersed corporate structure. This research investigates building distributed data warehouses with particular emphasis placed on distribution design for the data warehouse environment. The article provides an architectural model for a distributed data warehouse, the formal definition of the relational data model for data warehouse and a methodology for distributed data warehouse design along with a “horizontal” fragmentation algorithm for the fact relation.",
"Data warehouses store large volumes of data according to a multidimensional model with dimensions representing different axes of analysis. OLAP systems (online analytical processing) provide the ability to interactively explore the data warehouse. Rising volumes and complexity of data favor the use of more powerful distributed computing architectures. Computing grids in particular are built for decentralized management of heterogeneous distributed resources. Their lack of centralized control however conflicts with classic centralized data warehouse models. To take advantage of a computing grid infrastructure to operate a data warehouse, several problems need to be solved. First, the warehouse data must be uniquely identified and judiciously partitioned to allow efficient distribution, querying and exchange among the nodes of the grid. We propose a data model based on \"chunks\" as atomic entities of warehouse data that can be uniquely identified. We then build contiguous blocks of these chunks to obtain suitable fragments of the data warehouse. The fragments stored on each grid node must be indexed in a uniform way to effectively interact with existing grid services. Our indexing structure consists of a lattice structure mapping queries to warehouse fragments and a specialized spatial index structure formed by X-trees providing the information necessary for optimized query evaluation plans."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | In summary, the above-outlined proposals generally exploit derived horizontal fragmentation to reduce irrelevant data accesses and to efficiently process join operations across multiple relations @cite_18 @cite_4 @cite_25 . From the active literature @cite_51 , we also recognize that, in order to implement derived horizontal fragmentation of data warehouses, the outlined approaches predominantly make use of the following two main fragmentation methods: | {
"cite_N": [
"@cite_51",
"@cite_18",
"@cite_4",
"@cite_25"
],
"mid": [
"2155718912",
"1565472858",
"2055393182",
"2142490918"
],
"abstract": [
"",
"The problem of selecting an optimal fragmentation schema of a data warehouse is more challenging compared to that in relational and object databases. This challenge is due to the several choices of partitioning star or snowflake schemas. Data partitioning is beneficial if and only if the fact table is fragmented based on the partitioning schemas of dimension tables. This may increase the number of fragments of the fact tables dramatically and makes their maintenance very costly. Therefore, the right selection of fragmenting schemas is important for better performance of OLAP queries. In this paper, we present a genetic algorithm for schema partitioning selection problem. The proposed algorithm gives better solutions since the search space is constrained by the schema partitioning. We conduct several experimental studies using the APB-1 release II benchmark for validating the proposed algorithm.",
"Data warehousing is one of the major research topics of appliedside database investigators. Most of the work to date has focused on building large centralized systems that are integrated repositories founded on pre-existing systems upon which all corporate-wide data are based. Unfortunately, this approach is very expensive and tends to ignore the advantages realized during the past decade in the area of distribution and support for data localization in a geographically dispersed corporate structure. This research investigates building distributed data warehouses with particular emphasis placed on distribution design for the data warehouse environment. The article provides an architectural model for a distributed data warehouse, the formal definition of the relational data model for data warehouse and a methodology for distributed data warehouse design along with a “horizontal” fragmentation algorithm for the fact relation.",
"Data warehouses store large volumes of data according to a multidimensional model with dimensions representing different axes of analysis. OLAP systems (online analytical processing) provide the ability to interactively explore the data warehouse. Rising volumes and complexity of data favor the use of more powerful distributed computing architectures. Computing grids in particular are built for decentralized management of heterogeneous distributed resources. Their lack of centralized control however conflicts with classic centralized data warehouse models. To take advantage of a computing grid infrastructure to operate a data warehouse, several problems need to be solved. First, the warehouse data must be uniquely identified and judiciously partitioned to allow efficient distribution, querying and exchange among the nodes of the grid. We propose a data model based on \"chunks\" as atomic entities of warehouse data that can be uniquely identified. We then build contiguous blocks of these chunks to obtain suitable fragments of the data warehouse. The fragments stored on each grid node must be indexed in a uniform way to effectively interact with existing grid services. Our indexing structure consists of a lattice structure mapping queries to warehouse fragments and a specialized spatial index structure formed by X-trees providing the information necessary for optimized query evaluation plans."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | This method @cite_32 is an adaptation of the vertical fragmentation approach @cite_33 to the horizontal fragmentation one @cite_29 . It is based on the @cite_44 according to which affinity is defined in terms of query frequency. Specific predicate-usage and affinity matrices are exploited in order to cluster selection predicates. A cluster is here defined as a , and forms a fragment of a dimensional table itself. | {
"cite_N": [
"@cite_44",
"@cite_29",
"@cite_32",
"@cite_33"
],
"mid": [
"1986393412",
"74829802",
"2114679600",
"1863762422"
],
"abstract": [
"In this paper, two-phase horizontal partitioning of distributed databases is addressed. First, primary horizontal fragmentation is carried out on each relation based on the predicate affinity matrix and the bond energy algorithm. This is an application of a vertical partitioning algorithm to the horizontal fragmentation problem. Second, the derived horizontal fragmentation is further performed by considering information related to the global relational database schema and its transactions. A necessary and sufficient condition for the correctness of derived fragmentations is also proved.",
"",
"Vertical partitioning is the process of subdividing the attributes of a relation or a record type, creating fragments. Previous approaches have used an iterative binary partitioning method which is based on clustering algorithms and mathematical cost functions. In this paper, however, we propose a new vertical partitioning algorithm using a graphical technique. This algorithm starts from the attribute affinity matrix by considering it as a complete graph. Then, forming a linearly connected spanning tree, it generates all meaningful fragments simultaneously by considering a cycle as a fragment. We show its computational superiority. It provides a cleaner alternative without arbitrary objective functions and provides an improvement over our previous work on vertical partitioning.",
"Within the framework of the data warehouse design methodology we are developing, in this paper we investigate the problem of vertical fragmentation of relational views aimed at minimizing the global query response time. Each view includes several measures which, within the workload, are seldom requested together; thus, the system performance may be increased by partitioning the views to be materialized into smaller tables. On the other hand, drill-across queries involve measures taken from two or more views; in this case the access costs may be decreased by unifying these views into larger tables. Within the data warehouse context, the presence of redundant views makes the fragmentation problem more complex than in traditional relational databases since it requires to decide on which views each query should be executed. After formalizing the fragmentation problem as a 0-1 integer linear programming problem, we define a cost function and propose a branch-and-bound algorithm to minimize it. Finally, we demonstrate the usefulness of our approach by presenting a sample set of experimental results."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Recently, several fragmentation techniques for XML data have been proposed in literature. These techniques propose splitting an XML document into a new set of XML documents, with the main goal of either improving XML query performance @cite_6 @cite_62 @cite_46 , or distributing or exchanging XML data over a network @cite_1 @cite_59 . | {
"cite_N": [
"@cite_62",
"@cite_1",
"@cite_6",
"@cite_59",
"@cite_46"
],
"mid": [
"1580751869",
"2098423524",
"1571314868",
"2168412776",
"2129984824"
],
"abstract": [
"XML is increasingly used not only for data exchange but also to represent arbitrary data sources as virtual XML repositories. In many application scenarios, fragments of such repositories are distributed over the Web. However, design and query processing models for distributed XML data have not yet been studied in detail. The goal of this paper is to study the design and management of distributed XML repositories. Following the well-established concepts of vertical and horizontal data fragmentation schemes for relational databases, we introduce a flexible distribution design approach for XML repositories. We provide a comprehensive data allocation model with a particular focus on storage efficient index structures. These index structures encode global path information about XML fragment data at local sites and provide for an efficient, local evaluation of the most common types of global path and tree pattern queries. Finally, we describe the basic principles of a distributed query processing model based on the concept of index shipping.",
"We address the problem of querying XML data over a P2P network. In P2P networks, the allowed kinds of queries are usually exact-match queries over file names. We discuss the extensions needed to deal with XML data and XPath queries. A single peer can hold a whole document or a partial complete fragment of the latter. Each XML fragment document is identified by a distinct path expression, which is encoded in a distributed hash table. Our framework differs from content-based routing mechanisms, biased towards finding the most relevant peers holding the data. We perform fragments placement and enable fragments lookup by solely exploiting few path expressions stored on each peer. By taking advantage of quasi-zero replication of global catalogs, our system supports fast full and partial XPath querying. To this purpose, we have extended the Chord simulator and performed an experimental evaluation of our approach.",
"Fragmentation techniques for XML data are gaining momentum within both distributed and centralized XML query engines and pose novel and unrecognized challenges to the community. Albeit not novel, and clearly inspired by the classical divide et impera principle, fragmentation for XML trees has been proved successful in boosting the querying performance, and in cutting down the memory requirements. However, fragmentation considered so far has been driven by semantics, i.e. built around query predicates. In this paper, we propose a novel fragmentation technique that founds on structural constraints of XML documents (size, tree-width, and tree-depth) and on special-purpose structure histograms able to meaningfully summarize XML documents. This allows us to predict bounding intervals of structural properties of output (XML) fragments for efficient query processing of distributed XML data. An experimental evaluation of our study confirms the effectiveness of our fragmentation methodology on some representative XML data sets.",
"Data fragmentation offers various attractive alternatives to organizing and managing data, and presents interesting characteristics that may be exploited for efficient processing. XML, being inherently hierarchical and semi-structured, is an ideal candidate to reap the benefits offered by data fragmentation. However, fragmenting XML data and handling queries on fragmented XML are fraught with challenges: seamless XML fragmentation and processing models are required for deft handling of query execution on inter-connected and inter-related XML fragments, without the need of reconstructing the entire document in memory. Recent research has studied some of the challenges and has provided some insight on the data representation, and on the rather intuitive approaches for processing fragmented XML. In this paper, we provide a novel pipelined framework, called XFrag, for processing XQueries on XML fragments to achieve processing and memory efficiency. Moreover, we show that this model is suitable for low-bandwidth mobile environments by accounting for their intrinsic idiosyncrasies, without sacrificing accuracy and efficiency. We provide experimental results showing the memory savings achieved by our framework using the XMark benchmark.",
"Since the introduction of extensible Markup Language (XML), XML repositories have gained a foothold in many global (and government) organizations, where, e-commerce and e-business models have maturated in handling daily transactional data among heterogeneous information systems in multi-data formats. Due to this, the amount of data available for enterprise decision-making process is increasing exponentially and are being stored and or communicated in XML. This presents an interesting challenge to investigate models, frameworks and techniques for organizing and analyzing such voluminous, yet distributed XML documents for business intelligence in the form of XML warehouse repositories and XML marts. In this paper, we address such an issue, where we propose a view-driven approach for modeling and designing of a Global XML FACT (GxFACT) repository under the MDA initiatives. Here we propose the GxFACT using logically grouped, geographically dispersed, XML document warehouses and Document Marts in a global enterprise setting. To deal with organizations' evolving decision-making needs, we also provide three design strategies for building and managing of such GxFACT in the context of modeling of further hierarchical dimensions and or global document warehouses"
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | In order to fragment XML documents, Ma @cite_45 @cite_46 define a new fragmentation notion, called , which is inspired by the object-oriented databases context. This fragmentation technique splits elements of the input XML document, and assigns a reference to each so-obtained sub-element. References are then added to the Document Type Definition (DTD) defining the input XML document. This avoids the redundancy and inconsistency problems that could occur due to the fragmentation process. Bonifati @cite_6 @cite_42 propose a fragmentation strategy for XML documents that is driven by the so-called . These constraints refer to intrinsic properties of XML trees, such as the depth and the width of trees. In order to efficiently fragment the input XML document by means of structural constraints, the proposed strategy exploits heuristics and statistics simultaneously. | {
"cite_N": [
"@cite_46",
"@cite_45",
"@cite_42",
"@cite_6"
],
"mid": [
"2129984824",
"1853326381",
"1575536925",
"1571314868"
],
"abstract": [
"Since the introduction of extensible Markup Language (XML), XML repositories have gained a foothold in many global (and government) organizations, where, e-commerce and e-business models have maturated in handling daily transactional data among heterogeneous information systems in multi-data formats. Due to this, the amount of data available for enterprise decision-making process is increasing exponentially and are being stored and or communicated in XML. This presents an interesting challenge to investigate models, frameworks and techniques for organizing and analyzing such voluminous, yet distributed XML documents for business intelligence in the form of XML warehouse repositories and XML marts. In this paper, we address such an issue, where we propose a view-driven approach for modeling and designing of a Global XML FACT (GxFACT) repository under the MDA initiatives. Here we propose the GxFACT using logically grouped, geographically dispersed, XML document warehouses and Document Marts in a global enterprise setting. To deal with organizations' evolving decision-making needs, we also provide three design strategies for building and managing of such GxFACT in the context of modeling of further hierarchical dimensions and or global document warehouses",
"The world-wide web (WWW) is often considered to be the world's largest database and the eXtensible Markup Language (XML) is then considered to provide its datamodel. Adopting this view we have to deal with a distributed database. This raises the question, how to obtain a suitable distribution design for XML documents. In this paper horizontal and vertical fragmentation techniques are generalised from the relational datamodel to XML. Furthermore, splitting will be introduced as a third kind of fragmentation. Then it is shown how relational techniques for de ning reasonable fragments can be applied to the case of XML.",
"A shovel having a blade and an elongated handle is provided between the ends thereof with a laterally extending enlargement. A resilient pad and cover are placed over and supported by the enlargement and handle. The supported pad acts as a fulcrum when placed on the user's thigh just above the knee. The loaded blade may then be raised by lowering the free handle end with one hand and when the blade is in a sufficiently raised position, the handle may be grasped with the other hand near the blade for carrying or discharging the load from the blade.",
"Fragmentation techniques for XML data are gaining momentum within both distributed and centralized XML query engines and pose novel and unrecognized challenges to the community. Albeit not novel, and clearly inspired by the classical divide et impera principle, fragmentation for XML trees has been proved successful in boosting the querying performance, and in cutting down the memory requirements. However, fragmentation considered so far has been driven by semantics, i.e. built around query predicates. In this paper, we propose a novel fragmentation technique that founds on structural constraints of XML documents (size, tree-width, and tree-depth) and on special-purpose structure histograms able to meaningfully summarize XML documents. This allows us to predict bounding intervals of structural properties of output (XML) fragments for efficient query processing of distributed XML data. An experimental evaluation of our study confirms the effectiveness of our fragmentation methodology on some representative XML data sets."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Andrade @cite_53 propose applying fragmentation to a collection of XML documents. In @cite_53 , the authors adapt traditional fragmentation techniques to an XML document collection, and make use of the Tree Logical Class (TLC) algebra @cite_34 to this goal. The authors also experimentally evaluate these techniques and show that horizontal fragmentation provides the best performance. Gertz and Bremer @cite_62 introduce a distribution approach for XML repositories. They propose a fragmentation method and outline an allocation model for distributed XML fragments in a centralized architecture. In @cite_62 , the authors also define horizontal and vertical fragmentation for XML repositories. Here, fragments are defined on the basis of a , called , which is derived from @cite_54 . In more detail, fragments are obtained by applying an expression on a graph representing XML data, named ().
Moreover, the authors provide exclusion expressions that rigorously ensure fragment coherence and disjointness. | {
"cite_N": [
"@cite_54",
"@cite_53",
"@cite_34",
"@cite_62"
],
"mid": [
"109947125",
"1497989252",
"2146726084",
"1580751869"
],
"abstract": [
"",
"The data volume of XML repositories and the response time of query processing have become critical issues for many applications, especially for those in the Web. An interesting alternative to improve query processing performance consists in reducing the size of XML databases through fragmentation techniques. However, traditional fragmentation definitions do not directly apply to collections of XML documents. This work formalizes the fragmentation definition for collections of XML documents, and shows the performance of query processing over fragmented XML data. Our prototype, PartiX, exploits intra-query parallelism on top of XQuery-enabled sequential DBMS modules. We have analyzed several experimental settings, and our results showed a performance improvement of up to a 72 scale up factor against centralized databases.",
"XML is widely praised for its flexibility in allowing repeated and missing sub-elements. However, this flexibility makes it challenging to develop a bulk algebra, which typically manipulates sets of objects with identical structure. A set of XML elements, say of type book, may have members that vary greatly in structure, e.g. in the number of author sub-elements. This kind of heterogeneity may permeate the entire document in a recursive fashion: e.g., different authors of the same or different book may in turn greatly vary in structure. Even when the document conforms to a schema, the flexible nature of schemas for XML still allows such significant variations in structure among elements in a collection. Bulk processing of such heterogeneous sets is problematic.In this paper, we introduce the notion of logical classes (LC) of pattern tree nodes, and generalize the notion of pattern tree matching to handle node logical classes. This abstraction pays off significantly in allowing us to reason with an inherently heterogeneous collection of elements in a uniform, homogeneous way. Based on this, we define a Tree Logical Class (TLC) algebra that is capable of handling the heterogeneity arising in XML query processing, while avoiding redundant work. We present an algorithm to obtain a TLC algebra expression from an XQuery statement (for a large fragment of XQuery). We show how to implement the TLC algebra efficiently, introducing the nest-join as an important physical operator for XML query processing. We show that evaluation plans generated using the TLC algebra not only are simpler but also perform better than those generated by competing approaches. TLC is the algebra used in the T imber [8] system developed at the University of Michigan.",
"XML is increasingly used not only for data exchange but also to represent arbitrary data sources as virtual XML repositories. In many application scenarios, fragments of such repositories are distributed over the Web. However, design and query processing models for distributed XML data have not yet been studied in detail. The goal of this paper is to study the design and management of distributed XML repositories. Following the well-established concepts of vertical and horizontal data fragmentation schemes for relational databases, we introduce a flexible distribution design approach for XML repositories. We provide a comprehensive data allocation model with a particular focus on storage efficient index structures. These index structures encode global path information about XML fragment data at local sites and provide for an efficient, local evaluation of the most common types of global path and tree pattern queries. Finally, we describe the basic principles of a distributed query processing model based on the concept of index shipping."
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Bose and Fegaras @cite_59 argue for using XML fragments to efficiently support data exchange in P2P networks. In this proposal, XML fragments are interrelated, and each fragment is uniquely identified by an . The authors also propose a fragmentation schema, called , which allows us to define the structure of fragments across the network. In turn, the structure of fragments can be exploited for data exchange and query optimization purposes. Bonifati @cite_1 also define an XML fragmentation framework for P2P networks, called (XP2P). In this proposal, XML fragments are obtained and identified via a single root-to-node path expression, and managed on a specific peer. In addition, for data management efficiency purposes, in @cite_1 the authors associate two XPath-modeled path expressions with each fragment, namely and , respectively.
Given an XML fragment @math , the first XPath expression identifies the root of the fragment from which @math has been originated; the second XPath expression instead identifies the root of a @math 's child XML fragment. These path expressions ensure the easy identification of fragments and their networked relationships. | {
"cite_N": [
"@cite_1",
"@cite_59"
],
"mid": [
"2098423524",
"2168412776"
],
"abstract": [
"We address the problem of querying XML data over a P2P network. In P2P networks, the allowed kinds of queries are usually exact-match queries over file names. We discuss the extensions needed to deal with XML data and XPath queries. A single peer can hold a whole document or a partial complete fragment of the latter. Each XML fragment document is identified by a distinct path expression, which is encoded in a distributed hash table. Our framework differs from content-based routing mechanisms, biased towards finding the most relevant peers holding the data. We perform fragments placement and enable fragments lookup by solely exploiting few path expressions stored on each peer. By taking advantage of quasi-zero replication of global catalogs, our system supports fast full and partial XPath querying. To this purpose, we have extended the Chord simulator and performed an experimental evaluation of our approach.",
"Data fragmentation offers various attractive alternatives to organizing and managing data, and presents interesting characteristics that may be exploited for efficient processing. XML, being inherently hierarchical and semi-structured, is an ideal candidate to reap the benefits offered by data fragmentation. However, fragmenting XML data and handling queries on fragmented XML are fraught with challenges: seamless XML fragmentation and processing models are required for deft handling of query execution on inter-connected and inter-related XML fragments, without the need of reconstructing the entire document in memory. Recent research has studied some of the challenges and has provided some insight on the data representation, and on the rather intuitive approaches for processing fragmented XML. In this paper, we provide a novel pipelined framework, called XFrag, for processing XQueries on XML fragments to achieve processing and memory efficiency. Moreover, we show that this model is suitable for low-bandwidth mobile environments by accounting for their intrinsic idiosyncrasies, without sacrificing accuracy and efficiency. We provide experimental results showing the memory savings achieved by our framework using the XMark benchmark."
]
} |
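The XP2P scheme discussed in the row above places and looks up each fragment by its single root-to-node path expression, encoded in a distributed hash table. A minimal sketch of that placement/lookup idea follows; the peer names, the SHA-1 hash, and the modular ring mapping are illustrative assumptions, not the paper's actual Chord-based protocol:

```python
import hashlib

class PathDHT:
    """Toy DHT: maps a root-to-node path expression to the peer storing that fragment."""

    def __init__(self, peers):
        self.peers = sorted(peers)

    def peer_for(self, path_expr):
        # Hash the identifying path expression and map it onto the peer ring
        # (an illustrative scheme standing in for the real DHT lookup).
        h = int(hashlib.sha1(path_expr.encode()).hexdigest(), 16)
        return self.peers[h % len(self.peers)]

dht = PathDHT(["peer-A", "peer-B", "peer-C"])
# Every fragment is placed and later looked up by its path expression alone,
# with no global catalog of fragment locations.
placement = {p: dht.peer_for(p)
             for p in ["/catalog", "/catalog/book[1]", "/catalog/book[2]"]}
```

Because lookup depends only on the deterministic hash of the path expression, any peer can locate a fragment without consulting a global catalog.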
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | In summary, the above-outlined proposals adapt classical fragmentation methods, mainly investigated and developed in the context of relational data warehouses, in order to split a given XML database into a meaningful collection of XML fragments. An XML fragment is defined and identified by a path expression @cite_1 @cite_62 , or an XML algebra operator @cite_53 . Fragmentation is performed on a single XML document @cite_45 @cite_46 , or a homogeneous XML document collection @cite_53 . A secondary observation deriving from this analysis is that, to the best of our knowledge, XML data warehouse fragmentation has not yet been addressed in the literature. This further confirms the novelty of our research. | {
"cite_N": [
"@cite_62",
"@cite_53",
"@cite_1",
"@cite_45",
"@cite_46"
],
"mid": [
"1580751869",
"1497989252",
"2098423524",
"1853326381",
"2129984824"
],
"abstract": [
"XML is increasingly used not only for data exchange but also to represent arbitrary data sources as virtual XML repositories. In many application scenarios, fragments of such repositories are distributed over the Web. However, design and query processing models for distributed XML data have not yet been studied in detail. The goal of this paper is to study the design and management of distributed XML repositories. Following the well-established concepts of vertical and horizontal data fragmentation schemes for relational databases, we introduce a flexible distribution design approach for XML repositories. We provide a comprehensive data allocation model with a particular focus on storage efficient index structures. These index structures encode global path information about XML fragment data at local sites and provide for an efficient, local evaluation of the most common types of global path and tree pattern queries. Finally, we describe the basic principles of a distributed query processing model based on the concept of index shipping.",
"The data volume of XML repositories and the response time of query processing have become critical issues for many applications, especially for those in the Web. An interesting alternative to improve query processing performance consists in reducing the size of XML databases through fragmentation techniques. However, traditional fragmentation definitions do not directly apply to collections of XML documents. This work formalizes the fragmentation definition for collections of XML documents, and shows the performance of query processing over fragmented XML data. Our prototype, PartiX, exploits intra-query parallelism on top of XQuery-enabled sequential DBMS modules. We have analyzed several experimental settings, and our results showed a performance improvement of up to a 72 scale up factor against centralized databases.",
"We address the problem of querying XML data over a P2P network. In P2P networks, the allowed kinds of queries are usually exact-match queries over file names. We discuss the extensions needed to deal with XML data and XPath queries. A single peer can hold a whole document or a partial complete fragment of the latter. Each XML fragment document is identified by a distinct path expression, which is encoded in a distributed hash table. Our framework differs from content-based routing mechanisms, biased towards finding the most relevant peers holding the data. We perform fragments placement and enable fragments lookup by solely exploiting few path expressions stored on each peer. By taking advantage of quasi-zero replication of global catalogs, our system supports fast full and partial XPath querying. To this purpose, we have extended the Chord simulator and performed an experimental evaluation of our approach.",
"The world-wide web (WWW) is often considered to be the world's largest database and the eXtensible Markup Language (XML) is then considered to provide its datamodel. Adopting this view we have to deal with a distributed database. This raises the question, how to obtain a suitable distribution design for XML documents. In this paper horizontal and vertical fragmentation techniques are generalised from the relational datamodel to XML. Furthermore, splitting will be introduced as a third kind of fragmentation. Then it is shown how relational techniques for defining reasonable fragments can be applied to the case of XML.",
"Since the introduction of extensible Markup Language (XML), XML repositories have gained a foothold in many global (and government) organizations, where, e-commerce and e-business models have maturated in handling daily transactional data among heterogeneous information systems in multi-data formats. Due to this, the amount of data available for enterprise decision-making process is increasing exponentially and are being stored and or communicated in XML. This presents an interesting challenge to investigate models, frameworks and techniques for organizing and analyzing such voluminous, yet distributed XML documents for business intelligence in the form of XML warehouse repositories and XML marts. In this paper, we address such an issue, where we propose a view-driven approach for modeling and designing of a Global XML FACT (GxFACT) repository under the MDA initiatives. Here we propose the GxFACT using logically grouped, geographically dispersed, XML document warehouses and Document Marts in a global enterprise setting. To deal with organizations' evolving decision-making needs, we also provide three design strategies for building and managing of such GxFACT in the context of modeling of further hierarchical dimensions and or global document warehouses"
]
} |
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Although Data Mining has already proved to be extremely useful to select physical data structures that enhance performance, such as indexes or materialized views @cite_47 @cite_55 @cite_8 @cite_35 , few fragmentation approaches that exploit Data Mining exist in the literature. Therefore, it is reasonable to claim that this is a relatively novel area of research, and a promising direction for future efforts in data warehouse and database fragmentation techniques. | {
"cite_N": [
"@cite_55",
"@cite_47",
"@cite_35",
"@cite_8"
],
"mid": [
"1623444038",
"2122816893",
"2050272677",
"2017733008"
],
"abstract": [
"Materialized view selection is a non-trivial task. Hence, its complexity must be reduced. A judicious choice of views must be cost-driven and influenced by the workload experienced by the system. In this paper, we propose a framework for materialized view selection that exploits a data mining technique (clustering), in order to determine clusters of similar queries. We also propose a view merging algorithm that builds a set of candidate views, as well as a greedy process for selecting a set of views to materialize. This selection is based on cost models that evaluate the cost of accessing data using views and the cost of storing these views. To validate our strategy, we executed a workload of decision-support queries on a test data warehouse, with and without using our strategy. Our experimental results demonstrate its efficiency, even when storage space is limited.",
"Automatically selecting an appropriate set of materialized views and indexes for SQL databases is a non-trivial task. A judicious choice must be cost-driven and influenced by the workload experienced by the system. Although there has been work in materialized view selection in the context of multidimensional (OLAP) databases, no past work has looked at the problem of building an industry-strength tool for automated selection of materialized views and indexes for SQL workloads. In this paper, we present an end-to-end solution to the problem of selecting materialized views and indexes. We describe results of extensive experimental evaluation that demonstrate the effectiveness of our techniques. Our solution is implemented as part of a tuning wizard that ships with Microsoft SQL Server 2000.",
"Considering the wide deployment of databases and its size, particularly in data warehouses, it is important to automate the physical design so that the task of the database administrator (DBA) is minimized. An important part of physical database design is index selection. An auto-index selection tool capable of analyzing large amounts of data and suggesting a good set of indexes for a database is the goal of auto-administration. Clustering is a data mining technique with broad appeal and usefulness in exploratory data analysis. This idea provides a motivation to apply clustering techniques to obtain good indexes for a workload in the database. We describe a technique for auto-indexing using clustering. The experiments conducted show that the proposed technique performs better than Microsoft SQL server index selection tool (1ST) and can suggest indexes faster than Microsoft's IST.",
"Analytical queries defined on data warehouses are complex and use several join operations that are very costly, especially when run on very large data volumes. To improve response times, data warehouse administrators casually use indexing techniques. This task is nevertheless complex and fastidious. In this paper, we present an automatic, dynamic index selection method for data warehouses that is based on incremental frequent itemset mining from a given query workload. The main advantage of this approach is that it helps update the set of selected indexes when workload evolves instead of recreating it from scratch. Preliminary experimental results illustrate the efficiency of this approach, both in terms of performance enhancement and overhead."
]
} |
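The row above motivates Data-Mining-driven physical design, and the surveyed paper itself applies K-means clustering to fragmentation. As a rough, self-contained sketch of that general idea (the binary attribute-usage encoding and the toy workload below are invented for illustration, not taken from the paper), one can cluster queries by which attributes they touch and derive one fragment per cluster:

```python
def kmeans(vectors, k, iters=20):
    """Tiny k-means for small workloads; returns one cluster label per vector."""
    # Deterministic init: the first k distinct vectors become the initial centers.
    centers = []
    for v in vectors:
        if list(v) not in centers:
            centers.append(list(v))
        if len(centers) == k:
            break
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda c: sum((x - y) ** 2 for x, y in zip(v, centers[c])))
                  for v in vectors]
        # Update step: each center moves to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Invented workload: each query is encoded by which of 4 attributes it touches.
queries = [(1, 1, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (0, 1, 1, 1), (0, 0, 1, 1)]
labels = kmeans(queries, k=2)  # queries sharing a cluster map to the same fragment
```

With k fixed in advance, the number of resulting fragments is controlled directly, which is the property the surveyed paper argues classical fragmentation algorithms lack.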
1701.02190 | 2088553860 | XML data sources are gaining popularity in the context of Business Intelligence and On-Line Analytical Processing (OLAP) applications, due to the amenities of XML in representing and managing complex and heterogeneous data. However, XML-native database systems currently suffer from limited performance, both in terms of volumes of manageable data and query response time. Therefore, recent research efforts are focusing on horizontal fragmentation techniques, which are able to overcome the above limitations. However, classical fragmentation algorithms are not suitable to control the number of originated fragments, which instead plays a critical role in data warehouses. In this paper, we propose the use of the K-means clustering algorithm for effectively and efficiently supporting the fragmentation of very large XML data warehouses. We complement our analytical contribution with a comprehensive experimental assessment where we compare the efficiency of our proposal against existing fragmentation algorithms. | Gorla and Betty @cite_31 exploit association rules for the vertical fragmentation of relational databases. The authors consider that association rules provide a natural way to represent relationships between attributes as implied by database queries. Basically, their solution consists in adapting the well-known Apriori algorithm @cite_19 by selecting the non-overlapping item-sets having the highest support and by grouping their respective attributes into one partition. Then, the algorithm exploits a cost model to select an optimal fragmentation schema. Darabant and Campan @cite_22 propose using k-means clustering for efficiently supporting the horizontal fragmentation of object-oriented distributed databases. This research has inspired our work. 
In more detail, the method proposed in @cite_22 clusters object instances into fragments by taking into account all complex relationships between classes of data objects (aggregation, associations and links induced by complex methods). Finally, Fiolet and Toursel @cite_13 propose a parallel, progressive clustering algorithm to fragment a database and distribute it over a data grid. This approach is inspired by the sequential clustering algorithm @cite_21 that consists in clustering data by means of projection operations. | {
"cite_N": [
"@cite_22",
"@cite_21",
"@cite_19",
"@cite_31",
"@cite_13"
],
"mid": [
"2161048172",
"1977496278",
"1506285740",
"2020396904",
"2136603929"
],
"abstract": [
"Vertical and horizontal fragmentations are central issues in the design process of distributed object based systems. A good fragmentation scheme followed by an optimal allocation could greatly enhance performance in such systems, as data transfer between distributed sites is minimized. In this paper we present a horizontal fragmentation approach that uses the k-means AI clustering method for partitioning object instances into fragments. Our new method applies to existing databases, where statistics are already present. We model fragmentation input data in a vector space and give different object similarity measures together with their geometrical interpretations. We provide quality and performance evaluations using a partition evaluator function",
"Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensibility of the results, non-presumption of any canonical data distribution, and insensitivity to the order of input records. We present CLIQUE, a clustering algorithm that satisfies each of these requirements. CLIQUE identifies dense clusters in subspaces of maximum dimensionality. It generates cluster descriptions in the form of DNF expressions that are minimized for ease of comprehension. It produces identical results irrespective of the order in which input records are presented and does not presume any specific mathematical form for data distribution. Through experiments, we show that CLIQUE efficiently finds accurate cluster in large high dimensional datasets.",
"",
"A new approach to vertical fragmentation in relational databases is proposed using association rules, a data-mining technique. Vertical fragmentation can enhance the performance of database systems by reducing the number of disk accesses needed by transactions. By adapting Apriori algorithm, a design methodology for vertical partitioning is proposed. The heuristic methodology is tested using two real-life databases for various minimum support levels and minimum confidence levels. In the smaller database, the partitioning solution obtained matched the optimal solution using exhaustive enumeration. The application of our method on the larger database resulted in the partitioning solution that has an improvement of 41.05 over unpartitioned solution and took less than a second to produce the solution. We provide future research directions on extending the procedure to distributed and object-oriented database designs.",
"The increasing availability of clusters and grids of workstations provides cheap and powerful ressources for distributed datamining. To exploit these ressources we need new algorithms adapted to this kind of environment, in particular with respect to the way to fragment data and to use this fragmentation. An \"intelligent\" distribution of data is required and can be obtained from clustering. Most existing parallel methods of clustering are developped for supercomputers with shared memory and hence can not be used on a Grid. This paper presents a new clustering algorithm, called Progressive Clustering, which executes a clustering in an efficient and incremental distributed way. The data clusters resulting from this algorithm can subsequently be used in distributed data mining tasks."
]
} |
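The Apriori adaptation described in this row — select non-overlapping attribute itemsets with the highest support and group each itemset's attributes into one vertical partition — can be sketched as follows. This is a brute-force toy (support is counted over all attribute subsets, feasible only for small schemas) and omits the cost-model step of @cite_31:

```python
from itertools import combinations

def vertical_partitions(query_attr_sets, min_size=2):
    """Group attributes that co-occur in queries: greedily keep the
    non-overlapping attribute itemsets with the highest support."""
    attrs = sorted(set().union(*query_attr_sets))
    # Support of an itemset = number of queries whose attribute set contains it.
    candidates = []
    for r in range(len(attrs), min_size - 1, -1):
        for itemset in combinations(attrs, r):
            support = sum(set(itemset) <= q for q in query_attr_sets)
            if support:
                candidates.append((support, r, itemset))
    # Highest support first; break ties by preferring larger itemsets.
    candidates.sort(key=lambda t: (-t[0], -t[1]))
    used, partitions = set(), []
    for _support, _r, itemset in candidates:
        if used.isdisjoint(itemset):
            partitions.append(set(itemset))
            used |= set(itemset)
    # Leftover attributes each form their own fragment.
    partitions += [{a} for a in attrs if a not in used]
    return partitions

# Invented workload: each query is the set of attributes it accesses.
queries = [{"a", "b"}, {"a", "b"}, {"c", "d"}, {"a", "b", "c"}]
parts = vertical_partitions(queries)  # e.g. [{"a", "b"}, {"c", "d"}]
```

Attributes that are frequently queried together end up in the same partition, so most queries can be answered from a single fragment, reducing disk accesses per transaction as the cited work argues.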
1701.01810 | 2579205899 | The Urban Rail Transit (URT) has been one of the major trip modes in cities worldwide. As the passengers arrive at variable rates in different time slots, e.g., rush and non-rush hours, the departure frequency at a site directly relates to perceived service quality of passengers; the high departure frequency, however, incurs more operation cost to URT. Therefore, a tradeoff between the interest of railway operator and the service quality of passengers needs to be addressed. In this paper, we develop a train operation scheduling method using a Stackelberg game model. The railway operator is modeled as the game leader and the passengers as the game follower, and an optimal departure frequency can be determined to balance the tradeoff between passengers' service quality and operation cost. We present several numerical examples based on the operation data from the Nanjing transit subway in China. The results demonstrate that the proposed model can significantly improve the traffic efficiency. | In order to maintain a satisfactory quality of experience, optimization models have been widely adopted to determine the optimal URT train operation with different objectives, such as travel time @cite_9 , waiting time @cite_12 , operation cost [16-18] and robustness @cite_2 . | {
"cite_N": [
"@cite_9",
"@cite_12",
"@cite_2"
],
"mid": [
"2018577633",
"2144382499",
"2095383966"
],
"abstract": [
"This paper describes the development and use of a model designed to optimise train schedules on single line rail corridors. The model has been developed with two major applications in mind, namely: as a decision support tool for train dispatchers to schedule trains in real time in an optimal way; and as a planning tool to evaluate the impact of timetable changes, as well as railroad infrastructure changes. The mathematical programming model described here schedules trains over a single line track. The priority of each train in a conflict depends on an estimate of the remaining crossing and overtaking delay, as well as the current delay. This priority is used in a branch and bound procedure to allow and optimal solution to reasonable size train scheduling problems to be determined efficiently. The use of the model in an application to a \"real life\" problem is discussed. The impacts of changing demand by increasing the number of trains, and reducing the number of sidings for a 150 km section of single line track are discussed. It is concluded that the model is able to produce useful results in terms of optimal schedules in a reasonable time for the test applications shown here.",
"During rail operations, unforeseen events may cause timetable perturbations, which ask for the capability of traffic management systems to reschedule trains and to restore the timetable feasibility. Based on an accurate monitoring of train positions and speeds, potential conflicting routes can be predicted in advance and resolved in real time. The adjusted targets (location-time-speed) would be then communicated to the relevant trains by which drivers should be able to anticipate the changed traffic circumstances and adjust the train's speed accordingly. We adopt a detailed alternative graph model for the train dispatching problem. Conflicts between different trains are effectively detected and solved. Adopting the blocking time model, we ascertain whether a safe distance headway between trains is respected, and we also consider speed coordination issues among consecutive trains. An iterative rescheduling procedure provides an acceptable speed profile for each train over the intended time horizon. After a finite number of iterations, the final solution is a conflict-free schedule that respects the signaling and safety constraints. A computational study based on a hourly cyclical timetable of the Schiphol railway network has been carried out. Our automated dispatching system provides better solutions in terms of delay minimization when compared to dispatching rules that can be adopted by a human traffic controller",
"In this paper we survey the main studies dealing with the train timetabling problem in its nominal and robust versions. Roughly speaking, the nominal version of the problem amounts of determining “good” timetables for a set of trains (on a railway network or on a single one-way line), satisfying the so-called track capacity constraints, with the aim of optimizing an objective function that can have different meanings according to the requests of the railway company (e.g. one can be asked to schedule the trains according to the timetables preferred by the train operators or to maximize the passenger satisfaction). Two are the main variants of the nominal problem: one is to consider a cyclic (or periodic) schedule of the trains that is repeated every given time period (for example every hour), and the other one is to consider a more congested network where only a non-cyclic schedule can be performed. In the recent years, many works have been dedicated to the robust version of the problem. In this case, the aim is to determine robust timetables for the trains, i.e. to find a schedule that avoids, in case of disruptions in the railway network, delay propagation as much as possible. We present an overview of the main works on train timetabling, underlining the differences between models and methods that have been developed to tackle the nominal and the robust versions of the problem."
]
} |
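The tradeoff this row's abstract describes — operator cost rising with departure frequency while passenger waiting cost falls with it — can be illustrated with a toy leader cost function. The half-headway average-wait assumption and all numbers below are our own illustration, not the paper's Stackelberg formulation or its Nanjing data:

```python
def best_frequency(arrival_rate, wait_weight, cost_per_train, freqs):
    """Leader (operator) picks the departure frequency f (trains/hour)
    minimizing operation cost plus weighted passenger waiting cost.
    Followers (passengers) are assumed to wait half a headway, 1/(2f), on average."""
    def total_cost(f):
        operator = cost_per_train * f                       # grows with frequency
        waiting = wait_weight * arrival_rate / (2 * f)      # shrinks with frequency
        return operator + waiting
    return min(freqs, key=total_cost)

# Toy numbers: 1200 passengers/hour, waiting valued at 8 units/hour,
# 150 cost units per departure; frequencies searched over 1..30 trains/hour.
f_star = best_frequency(1200, 8, 150, freqs=range(1, 31))
```

With these toy numbers the minimum sits near the continuous square-root optimum sqrt(wait_weight * arrival_rate / (2 * cost_per_train)); raising the per-train cost pushes the chosen frequency down, mirroring the rush/non-rush tradeoff in the abstract.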
1701.01810 | 2579205899 | The Urban Rail Transit (URT) has been one of the major trip modes in cities worldwide. As the passengers arrive at variable rates in different time slots, e.g., rush and non-rush hours, the departure frequency at a site directly relates to perceived service quality of passengers; the high departure frequency, however, incurs more operation cost to URT. Therefore, a tradeoff between the interest of railway operator and the service quality of passengers needs to be addressed. In this paper, we develop a train operation scheduling method using a Stackelberg game model. The railway operator is modeled as the game leader and the passengers as the game follower, and an optimal departure frequency can be determined to balance the tradeoff between passengers' service quality and operation cost. We present several numerical examples based on the operation data from the Nanjing transit subway in China. The results demonstrate that the proposed model can significantly improve the traffic efficiency. | Amit @cite_7 apply optimization techniques to solve train timetable optimization problems. Ghoseiri @cite_3 present a multi-objective optimization model for the passenger train scheduling problem on a railroad network. To adjust the arrival and departure times of trains based on the dynamic behavior of demand, Canca @cite_10 develop a nonlinear integer programming model, which can be used to evaluate the train service quality. Considering user satisfaction parameters, average travel time and energy consumption, Sun @cite_4 present a multi-objective optimization model of the train routing problem. Wang @cite_8 @cite_5 propose a real-time train scheduling model with stop-skipping and solve the problem with the mixed integer nonlinear programming (MINLP) approach and the mixed integer linear programming (MILP) approach. 
Sun @cite_6 propose an optimization method of train scheduling for metro lines with a train dwell time model, and Lagrangian duality theory is adopted to solve this optimization problem with high dimensionality. | {
"cite_N": [
"@cite_4",
"@cite_7",
"@cite_8",
"@cite_3",
"@cite_6",
"@cite_5",
"@cite_10"
],
"mid": [
"2077022807",
"2090786169",
"2173875532",
"2137264502",
"2031714270",
"",
"2963495037"
],
"abstract": [
"A major problem in wireless sensor network localization is erroneous local geometric realizations in some parts of the network due to the sensitivity to certain distance measurement errors, which may in turn affect the reliability of the localization of the whole or a major portion of the sensor network. This phenomenon is well-described using the notion of \"flip ambiguity\" in rigid graph theory. In a recent study by the coauthors, an initial formal geometric analysis of flip ambiguity problems has been provided. The ultimate aim of that study was to quantify the likelihood of flip ambiguities in arbitrary sensor neighborhood geometries. In this paper we propose a more general robustness criterion to detect flip ambiguities in arbitrary sensor neighborhood geometries in planar sensor networks. This criterion enhances the recent study by the coauthors by removing the assumptions of accurately knowing some inter- sensor distances. The established robustness criterion is found to be useful in two aspects: (a) Analyzing the effects of flip ambiguity and (b) Enhancing the reliability of the location estimates of the prevailing localization algorithms by incorporating this robustness criterion to eliminate neighborhoods with flip ambiguity from being included in the localization process.",
"In subway systems, the energy put into accelerating trains can be reconverted into electric energy by using the motors as generators during the braking phase. In general, except for a small part that is used for onboard purposes, most of the recovery energy is transmitted backward along the conversion chain and fed back into the overhead contact line. To improve the utilization of recovery energy, this paper proposes a cooperative scheduling approach to optimize the timetable so that the recovery energy that is generated by the braking train can directly be used by the accelerating train. The recovery that is generated by the braking train is less than the required energy for the accelerating train; therefore, only the synchronization between successive trains is considered. First, we propose the cooperative scheduling rules and define the overlapping time between the accelerating and braking trains for a peak-hours scenario and an off-peak-hours scenario, respectively. Second, we formulate an integer programming model to maximize the overlapping time with the headway time and dwell time control. Furthermore, we design a genetic algorithm with binary encoding to solve the optimal timetable. Last, we present six numerical examples based on the operation data from the Beijing Yizhuang subway line in China. The results illustrate that the proposed model can significantly improve the overlapping time by 22.06 at peak hours and 15.19 at off-peak hours.",
"The real-time train scheduling problem for urban rail transit systems is considered with the aim of minimizing the total travel time of passengers and the energy consumption of the operation of trains. Based on the passenger demand in the urban rail transit system, the optimal departure times, running times, and dwell times are obtained by solving the scheduling problem. A new iterative convex programming (ICP) approach is proposed to solve the train scheduling problem. The performance of the ICP approach is compared with other alternative approaches, i.e., nonlinear programming approaches, a mixed-integer nonlinear programming (MINLP) approach, and a mixed-integer linear programming (MILP) approach. In addition, this paper formulates the real-time train scheduling problem with stop-skipping and shows how to solve it using an MINLP approach and an MILP approach. The ICP approach is shown, via a case study, to provide a better tradeoff between performance and computational complexity for the real-time train scheduling problem. Furthermore, for the train scheduling problem with stop-skipping, the MINLP approach turns out to have a good tradeoff between the control performance and the computational efficiency.",
"This paper develops a multi-objective optimization model for the passenger train-scheduling problem on a railroad network which includes single and multiple tracks, as well as multiple platforms with different train capacities. In this study, lowering the fuel consumption cost is the measure of satisfaction of the railway company and shortening the total passenger-time is being regarded as the passenger satisfaction criterion. The solution of the problem consists of two steps. First the Pareto frontier is determined using the [var epsilon]-constraint method, and second, based on the obtained Pareto frontier detailed multi-objective optimization is performed using the distance-based method with three types of distances. Numerical examples are given to illustrate the model and solution methodology.",
"This paper proposes an optimization method of train scheduling for metro lines with a train dwell time model according to passenger demand. An optimization problem of train scheduling is established with constraints of a headway equation, passenger equation, and train dwell time equation, where the train dwell time is modeled as a function of boarding and alighting passenger volumes. The aim of the optimization problem is to minimize the waiting time of passengers and train operation cost. Lagrangian duality theory is adopted to solve this optimization problem with high dimensionality. Finally, simulation results illustrate that this method is efficient to generate the train schedule, which meets the passengers' exchanging requirements between trains and platforms. The contribution of this paper is that a dwell time model is introduced in train schedule optimization, which provides the possibility of reducing the operation cost in the precondition that the exchanging time of passengers between platforms and trains is assured.",
"",
"SUMMARY Railway scheduling and timetabling are common stages in the classical hierarchical railway planning process and they perhaps represent the step with major influence on user's perception about quality of service. This aspect, in conjunction with their contribution to service profitability, makes them a widely studied topic in the literature, where, nowadays, many efforts are focused on improving the solving methods of the corresponding optimization problems. However, literature about models considering detailed descriptions of passenger demand is sparse. This paper tackles the problem of timetable determination by means of building and solving a nonlinear integer programming model that fits the arrival and departure train times to a dynamic behavior of demand. The optimization model results are then used for computing several measures to characterize the quality of the obtained timetables considering jointly both user and company points of view. Some aspects are discussed, including the influence of train capacity and the validity of Random Incidence Theorem. An application to the C5 line of Madrid rapid transit system is presented. Different measures are analyzed in order to improve the insight into the proposed model and analyze in advance the influence of different objectives on the resulting timetable. Copyright © 2014 John Wiley & Sons, Ltd."
]
} |