Abstract: Rewards typically express desirabilities or preferences over a set of alternatives. Here we propose that rewards can be defined for any probability distribution based on three desiderata, namely that rewards should be real-valued, additive and order-preserving, where the latter implies that more probable events should also be more desirable. Our main result states that rewards are then uniquely determined by the negative information content. To analyze stochastic processes, we define the utility of a realization as its reward rate. Under this interpretation, we show that the expected utility of a stochastic process is its negative entropy rate. Furthermore, we apply our results to analyze agent-environment interactions. We show that the expected utility that will actually be achieved by the agent is given by the negative cross-entropy from the input-output (I/O) distribution of the coupled interaction system and the agent's I/O distribution. Thus, our results allow for an information-theoretic interpretation of the notion of utility and the characterization of agent-environment interactions in terms of entropy dynamics.
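In symbols (our own notation, sketching the identities the abstract states rather than quoting the paper): writing $p(x)$ for the probability of an alternative $x$,

```latex
% Reward as the negative information content (up to the usual affine rescaling):
r(x) = -\bigl(-\log p(x)\bigr) = \log p(x)
% Utility of a realization = its reward rate; hence for a stationary process X_1, X_2, ...
% the expected utility is the negative entropy rate:
\mathbb{E}[U] = \lim_{T\to\infty} \frac{1}{T}\,\mathbb{E}\bigl[\log p(X_1,\dots,X_T)\bigr] = -\bar{H}(X)
```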
|
Title: Sparse Convolved Multiple Output Gaussian Processes
|
Abstract: Recently there has been an increasing interest in methods that deal with multiple outputs. This has been motivated partly by frameworks like multitask learning, multisensor networks or structured output data. From a Gaussian processes perspective, the problem reduces to specifying an appropriate covariance function that, whilst being positive semi-definite, captures the dependencies between all the data points and across all the outputs. One approach to account for non-trivial correlations between outputs employs convolution processes. Under a latent function interpretation of the convolution transform we establish dependencies between output variables. The main drawbacks of this approach are the associated computational and storage demands. In this paper we address these issues. We present different sparse approximations for dependent output Gaussian processes constructed through the convolution formalism. We exploit the conditional independencies present naturally in the model. This leads to a form of the covariance similar in spirit to the so-called PITC and FITC approximations for a single output. We show experimental results with synthetic and real data; in particular, we show results in pollution prediction, school exam score prediction and gene expression data.
|
Title: Standardization of the formal representation of lexical information for NLP
|
Abstract: A survey of dictionary models and formats is presented, along with an overview of corresponding recent standardisation activities.
|
Title: The ILIUM forward modelling algorithm for multivariate parameter estimation and its application to derive stellar parameters from Gaia spectrophotometry
|
Abstract: I introduce an algorithm for estimating parameters from multidimensional data based on forward modelling. In contrast to many machine learning approaches it avoids fitting an inverse model and the problems associated with this. The algorithm makes explicit use of the sensitivities of the data to the parameters, with the goal of better treating parameters which only have a weak impact on the data. The forward modelling approach provides uncertainty (full covariance) estimates in the predicted parameters as well as a goodness-of-fit for observations. I demonstrate the algorithm, ILIUM, with the estimation of stellar astrophysical parameters (APs) from simulations of the low resolution spectrophotometry to be obtained by Gaia. The AP accuracy is competitive with that obtained by a support vector machine. For example, for zero extinction stars covering a wide range of metallicity, surface gravity and temperature, ILIUM can estimate Teff to an accuracy of 0.3% at G=15 and to 4% for (lower signal-to-noise ratio) spectra at G=20. [Fe/H] and logg can be estimated to accuracies of 0.1-0.4dex for stars with G<=18.5. If extinction varies a priori over a wide range (Av=0-10mag), then Teff and Av can be estimated quite accurately (3-4% and 0.1-0.2mag respectively at G=15), but there is a strong and ubiquitous degeneracy in these parameters which limits our ability to estimate either accurately at faint magnitudes. Using the forward model we can map these degeneracies (in advance), and thus provide a complete probability distribution over solutions. (Abridged)
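The abstract does not spell out ILIUM's update rule; as a generic illustration of forward modelling with explicit sensitivities, here is a Gauss-Newton sketch that also returns a full covariance estimate and a goodness-of-fit, assuming Gaussian noise with known variance (all function names and the fixed iteration count are our assumptions, not the paper's):

```python
import numpy as np

def gauss_newton(forward, jacobian, data, theta0, sigma2, n_iter=20):
    """Fit parameters of a forward model to observed data.

    forward(theta)  -> model prediction of the data vector
    jacobian(theta) -> sensitivities d(data)/d(theta), shape (n_data, n_params)
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = data - forward(theta)          # residuals against the forward model
        J = jacobian(theta)                # sensitivities of the data to the parameters
        step, *_ = np.linalg.lstsq(J, r, rcond=None)   # Gauss-Newton step
        theta = theta + step
    J = jacobian(theta)
    cov = sigma2 * np.linalg.inv(J.T @ J)  # full covariance of the estimates
    gof = np.sum((data - forward(theta)) ** 2) / sigma2  # chi^2 goodness of fit
    return theta, cov, gof
```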
|
Title: Bayesian Inference from Composite Likelihoods, with an Application to Spatial Extremes
|
Abstract: Composite likelihoods are increasingly used in applications where the full likelihood is analytically unknown or computationally prohibitive. Although the maximum composite likelihood estimator has frequentist properties akin to those of the usual maximum likelihood estimator, Bayesian inference based on composite likelihoods has yet to be explored. In this paper we investigate the use of the Metropolis--Hastings algorithm to compute a pseudo-posterior distribution based on the composite likelihood. Two methodologies for adjusting the algorithm are presented and their performance on approximating the true posterior distribution is investigated using simulated data sets and real data on spatial extremes of rainfall.
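A minimal sketch of the basic scheme: a random-walk Metropolis-Hastings sampler targeting the pseudo-posterior defined by a composite log-likelihood under a flat prior. The adjustment methodologies the paper studies are not shown; `log_clik` would be, e.g., a sum of pairwise log-likelihoods, and all names and defaults here are illustrative:

```python
import numpy as np

def mh_composite(log_clik, theta0, n_steps=5000, step=0.1, rng=None):
    """Random-walk Metropolis-Hastings targeting a pseudo-posterior
    proportional to exp(log_clik(theta)) times a flat prior."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    cur = log_clik(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        new = log_clik(prop)
        if np.log(rng.random()) < new - cur:   # accept with prob min(1, ratio)
            theta, cur = prop, new
        samples.append(theta.copy())
    return np.asarray(samples)
```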
|
Title: Positive Definite Kernels in Machine Learning
|
Abstract: This survey is an introduction to positive definite kernels and the set of methods they have inspired in the machine learning literature, namely kernel methods. We first discuss some properties of positive definite kernels as well as reproducing kernel Hilbert spaces, the natural extension of the set of functions $\{k(x,\cdot),\, x\in\mathcal{X}\}$ associated with a kernel $k$ defined on a space $\mathcal{X}$. We discuss at length the construction of kernel functions that take advantage of well-known statistical models. We provide an overview of numerous data-analysis methods which take advantage of reproducing kernel Hilbert spaces and discuss the idea of combining several kernels to improve the performance on certain tasks. We also provide a short cookbook of different kernels which are particularly useful for certain data types such as images, graphs or speech segments.
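As a small illustration of the positive definiteness property at the heart of this survey, the following sketch (our own example, not from the text) builds a Gram matrix for the Gaussian RBF kernel and checks that its spectrum is nonnegative:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian RBF kernel, a classic positive definite kernel."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

X = np.random.default_rng(0).normal(size=(50, 3))
K = np.array([[rbf_kernel(a, b) for b in X] for a in X])   # Gram matrix

# Positive semi-definiteness: all eigenvalues nonnegative (up to round-off).
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)
```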
|
Title: Maximin affinity learning of image segmentation
|
Abstract: Images can be segmented by first using a classifier to predict an affinity graph that reflects the degree to which image pixels must be grouped together and then partitioning the graph to yield a segmentation. Machine learning has been applied to the affinity classifier to produce affinity graphs that are good in the sense of minimizing edge misclassification rates. However, this error measure is only indirectly related to the quality of segmentations produced by ultimately partitioning the affinity graph. We present the first machine learning algorithm for training a classifier to produce affinity graphs that are good in the sense of producing segmentations that directly minimize the Rand index, a well known segmentation performance measure. The Rand index measures segmentation performance by quantifying the classification of the connectivity of image pixel pairs after segmentation. By using the simple graph partitioning algorithm of finding the connected components of the thresholded affinity graph, we are able to train an affinity classifier to directly minimize the Rand index of segmentations resulting from the graph partitioning. Our learning algorithm corresponds to the learning of maximin affinities between image pixel pairs, which are predictive of the pixel-pair connectivity.
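The partitioning step the abstract relies on, thresholding the affinity graph and taking connected components, together with the Rand index it optimizes, can be sketched as follows (a simplified illustration; the function names and the O(n^2) pair loop are ours):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def segment(affinity, threshold):
    """Partition pixels into segments: keep edges whose predicted affinity
    exceeds the threshold, then take connected components."""
    adj = csr_matrix(affinity > threshold)
    _, labels = connected_components(adj, directed=False)
    return labels

def rand_index(labels_a, labels_b):
    """Fraction of pixel pairs on whose connectivity the two segmentations agree."""
    n = len(labels_a)
    agree = 0
    for i in range(n):
        for j in range(i + 1, n):
            agree += (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
    return agree / (n * (n - 1) / 2)
```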
|
Title: Covering rough sets based on neighborhoods: An approach without using neighborhoods
|
Abstract: Rough set theory, a mathematical tool to deal with inexact or uncertain knowledge in information systems, originally described the indiscernibility of elements by equivalence relations. Covering rough sets are a natural extension of classical rough sets, relaxing the partitions arising from equivalence relations to coverings. Recently, some topological concepts such as neighborhood have been applied to covering rough sets. In this paper, we further investigate covering rough sets based on neighborhoods by approximation operations. We show that the upper approximation based on neighborhoods can be defined equivalently without using neighborhoods. To analyze the coverings themselves, we introduce unary and composition operations on coverings. A notion of homomorphism is provided to relate two covering approximation spaces. We also examine the properties of approximations preserved by the operations and homomorphisms, respectively.
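One common neighborhood-based definition of the approximations can be sketched directly (the literature has several variants; this sketch assumes the neighborhood of a point is the intersection of all covering blocks containing it, which may differ from the paper's exact choice):

```python
def neighborhood(x, cover):
    """N(x): intersection of all covering blocks that contain x."""
    return set.intersection(*[K for K in cover if x in K])

def lower_upper(X, universe, cover):
    """Neighborhood-based lower and upper approximations of X."""
    lower = {x for x in universe if neighborhood(x, cover) <= X}
    upper = {x for x in universe if neighborhood(x, cover) & X}
    return lower, upper

U = {1, 2, 3, 4}
C = [{1, 2}, {2, 3}, {3, 4}]          # a covering of U
print(lower_upper({1, 2}, U, C))      # ({1, 2}, {1, 2}) for this toy covering
```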
|
Title: An axiomatic approach to the roughness measure of rough sets
|
Abstract: In Pawlak's rough set theory, a set is approximated by a pair of lower and upper approximations. To measure numerically the roughness of an approximation, Pawlak introduced a quantitative measure of roughness by using the ratio of the cardinalities of the lower and upper approximations. Although the roughness measure is effective, it has the drawback of not being strictly monotonic with respect to the standard ordering on partitions. Recently, some improvements have been made by taking into account the granularity of partitions. In this paper, we approach the roughness measure in an axiomatic way. After axiomatically defining roughness measure and partition measure, we provide a unified construction of roughness measure, called strong Pawlak roughness measure, and then explore the properties of this measure. We show that the improved roughness measures in the literature are special instances of our strong Pawlak roughness measure and introduce three more strong Pawlak roughness measures as well. The advantage of our axiomatic approach is that some properties of a roughness measure follow immediately as soon as the measure satisfies the relevant axiomatic definition.
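For reference, Pawlak's roughness measure described above can be written as (standard notation, assumed rather than quoted from the paper):

```latex
% Roughness of X under relation R, with lower/upper approximations \underline{R}X, \overline{R}X:
\rho_R(X) = 1 - \frac{|\underline{R}X|}{|\overline{R}X|}
```

so that $\rho_R(X)=0$ exactly when the two approximations coincide, i.e., when $X$ is definable.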
|
Title: Laser Actuated Presentation System
|
Abstract: We present a pattern-sensitive PowerPoint presentation scheme. The presentation is actuated by simple patterns drawn on the presentation screen with a laser pointer. A specific pattern corresponds to a particular command required to operate the presentation. The laser spot on the screen is captured by an RGB webcam fitted with a red filter, and its location is identified in the blue layer of each captured frame by estimating the mean position of the pixels whose intensity is above a given threshold value. The measured reliability, accuracy and latency of our system are 90%, 10 pixels (in the worst case) and 38 ms, respectively.
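The detection step described above is simple enough to sketch directly. A minimal version (the threshold value, OpenCV-style BGR channel layout and function name are our assumptions, not the paper's):

```python
import numpy as np

def laser_spot(frame_bgr, threshold=200):
    """Locate the laser spot as the mean position of bright pixels
    in the blue layer of a captured frame (BGR layout assumed)."""
    blue = frame_bgr[:, :, 0].astype(float)   # blue channel in BGR ordering
    ys, xs = np.nonzero(blue > threshold)     # pixels above the intensity threshold
    if len(xs) == 0:
        return None                           # no spot visible in this frame
    return xs.mean(), ys.mean()               # estimated spot centre
```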
|
Title: Penalized Likelihood Methods for Estimation of Sparse High Dimensional Directed Acyclic Graphs
|
Abstract: Directed acyclic graphs (DAGs) are commonly used to represent causal relationships among random variables in graphical models. Applications of these models arise in the study of physical as well as biological systems, where directed edges between nodes represent the influence of components of the system on each other. The general problem of estimating DAGs from observed data is computationally NP-hard; moreover, two directed graphs may be observationally equivalent. When the nodes exhibit a natural ordering, the problem of estimating directed graphs reduces to the problem of estimating the structure of the network. In this paper, we propose a penalized likelihood approach that directly estimates the adjacency matrix of DAGs. Both lasso and adaptive lasso penalties are considered, and an efficient algorithm is proposed for estimation of high dimensional DAGs. We study variable selection consistency of the two penalties when the number of variables grows to infinity with the sample size. We show that although the lasso can only consistently estimate the true network under stringent assumptions, the adaptive lasso achieves this task under mild regularity conditions. The performance of the proposed methods is compared to alternative methods in simulated, as well as real, data examples.
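Under a known natural ordering, the node-wise form of such an estimator is easy to sketch. The following simplified stand-in (not the paper's exact penalized-likelihood algorithm) runs a lasso of each node on its predecessors and reads edges off the nonzero coefficients:

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_dag(X, alpha=0.1):
    """Estimate the adjacency matrix of a DAG from data X (n samples x p nodes)
    whose columns are already in a natural (topological) ordering: regress each
    node on its predecessors with a lasso penalty."""
    n, p = X.shape
    A = np.zeros((p, p))
    for j in range(1, p):
        fit = Lasso(alpha=alpha).fit(X[:, :j], X[:, j])
        A[:j, j] = fit.coef_     # edge i -> j iff the coefficient is nonzero
    return A
```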
|
Title: An Iterative Algorithm for Fitting Nonconvex Penalized Generalized Linear Models with Grouped Predictors
|
Abstract: High-dimensional data pose challenges in statistical learning and modeling. Sometimes the predictors can be naturally grouped where pursuing the between-group sparsity is desired. Collinearity may occur in real-world high-dimensional applications where the popular $l_1$ technique suffers from both selection inconsistency and prediction inaccuracy. Moreover, the problems of interest often go beyond Gaussian models. To meet these challenges, nonconvex penalized generalized linear models with grouped predictors are investigated and a simple-to-implement algorithm is proposed for computation. A rigorous theoretical result guarantees its convergence and provides tight preliminary scaling. This framework allows for grouped predictors and nonconvex penalties, including the discrete $l_0$ and the `$l_0+l_2$' type penalties. Penalty design and parameter tuning for nonconvex penalties are examined. Applications of super-resolution spectrum estimation in signal processing and cancer classification with joint gene selection in bioinformatics show the performance improvement by nonconvex penalized estimation.
|
Title: Pigment Melanin: Pattern for Iris Recognition
|
Abstract: Iris recognition based on Visible Light (VL) imaging is a difficult problem because of the light reflected from the cornea. Nonetheless, pigment melanin provides a rich feature source in VL that is unavailable in Near-Infrared (NIR) imaging. This is due to the biological spectroscopy of eumelanin, a chemical not stimulated in NIR. In this case, a plausible way to observe such patterns is an adaptive procedure using a variational technique on the image histogram. To describe the patterns, a shape analysis method is used to derive a feature-code for each subject. An important question is to what extent the melanin patterns extracted from VL are independent of the iris texture in NIR. With this question in mind, the present investigation proposes fusion of features extracted from NIR and VL to boost recognition performance. We have collected our own database (UTIRIS) consisting of both NIR and VL images of 158 eyes of 79 individuals. This investigation demonstrates that the proposed algorithm is highly sensitive to the patterns of chromophores and improves the iris recognition rate.
|
Title: Sparse Empirical Bayes Analysis (SEBA)
|
Abstract: We consider the joint processing of $n$ independent sparse regression problems. Each is based on a sample $(y_{i1},x_{i1}),\ldots,(y_{im},x_{im})$ of $m$ i.i.d. observations from $y_{ij}=x_{ij}^{\top}\beta_i+\varepsilon_{ij}$, $y_{ij}\in\mathbb{R}$, $x_{ij}\in\mathbb{R}^p$, $i=1,\ldots,n$, with $\varepsilon_{ij}\sim N(0,\sigma^2)$, say. $p$ is large enough that the empirical risk minimizer is not consistent. We consider three possible extensions of the lasso estimator to deal with this problem, the lassoes, the group lasso and the RING lasso, each utilizing a different assumption about how these problems are related. For each estimator we give a Bayesian interpretation, and we present both persistency analysis and non-asymptotic error bounds based on restricted eigenvalue-type assumptions.
|
Title: A Decision-Optimization Approach to Quantum Mechanics and Game Theory
|
Abstract: The fundamental laws of the quantum world upset the logical foundations of classical physics; they are completely counter-intuitive, with many bizarre behaviors. However, this paper shows that they may make sense from the perspective of a general decision-optimization principle for cooperation. This principle also offers a generalization of Nash equilibrium, a key concept in game theory, for better payoffs and stability of game playing.
|
Title: Acquisition d'informations lexicales à partir de corpus (Acquisition of lexical information from corpora), by Cédric Messiant and Thierry Poibeau
|
Abstract: This paper is about automatic acquisition of lexical information from corpora, especially subcategorization acquisition.
|
Title: Vector Autoregressive Models With Measurement Errors for Testing Granger Causality
|
Abstract: This paper develops a method for estimating parameters of a vector autoregression (VAR) observed in white noise. The estimation method assumes the noise variance matrix is known and does not require any iterative process. This study provides consistent estimators and shows the asymptotic distribution of the parameters required for conducting tests of Granger causality. Methods in the existing statistical literature cannot be used for testing Granger causality, since under the null hypothesis the model becomes unidentifiable. Measurement error effects on parameter estimates were evaluated by using computational simulations. The results show that the proposed approach produces empirical false positive rates close to the adopted nominal level (even for small samples) and has a good performance around the null hypothesis. The applicability and usefulness of the proposed approach are illustrated using a functional magnetic resonance imaging dataset.
|
Title: Hierarchies in Dictionary Definition Space
|
Abstract: A dictionary defines words in terms of other words. Definitions can tell you the meanings of words you don't know, but only if you know the meanings of the defining words. How many words do you need to know (and which ones) in order to be able to learn all the rest from definitions? We reduced dictionaries to their "grounding kernels" (GKs), about 10% of the dictionary, from which all the other words could be defined. The GK words turned out to have psycholinguistic correlates: they are learned at an earlier age and are more concrete than the rest of the dictionary. But one can compress still more: the GK turns out to have internal structure, with a strongly connected "kernel core" (KC) and a surrounding layer, from which a hierarchy of definitional distances can be derived, all the way out to the periphery of the full dictionary. These definitional distances, too, are correlated with psycholinguistic variables (age of acquisition, concreteness, imageability, oral and written frequency) and hence perhaps with the "mental lexicon" in each of our heads.
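The abstract does not give the reduction procedure; one hedged sketch of how *a* grounding kernel (not necessarily the paper's) could be computed is a greedy shrink that drops a word whenever the remaining set still suffices to define the whole dictionary:

```python
def closure(known, defs):
    """All words learnable from `known`, by repeatedly learning any word
    whose entire definition is already understood."""
    learned = set(known)
    changed = True
    while changed:
        changed = False
        for w, d in defs.items():
            if w not in learned and d <= learned:
                learned.add(w)
                changed = True
    return learned

def grounding_kernel(defs):
    """Greedy shrink: defs maps each word to the set of words in its definition."""
    kernel = set(defs)
    for w in list(defs):
        if closure(kernel - {w}, defs) == set(defs):
            kernel.discard(w)       # the rest still defines everything
    return kernel
```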
|
Title: Learning in a Large Function Space: Privacy-Preserving Mechanisms for SVM Learning
|
Abstract: Several recent studies in privacy-preserving learning have considered the trade-off between utility or risk and the level of differential privacy guaranteed by mechanisms for statistical query processing. In this paper we study this trade-off in private Support Vector Machine (SVM) learning. We present two efficient mechanisms, one for the case of finite-dimensional feature mappings and one for potentially infinite-dimensional feature mappings with translation-invariant kernels. For the case of translation-invariant kernels, the proposed mechanism minimizes regularized empirical risk in a random Reproducing Kernel Hilbert Space whose kernel uniformly approximates the desired kernel with high probability. This technique, borrowed from large-scale learning, allows the mechanism to respond with a finite encoding of the classifier, even when the function class is of infinite VC dimension. Differential privacy is established using a proof technique from algorithmic stability. Utility--the mechanism's response function is pointwise epsilon-close to non-private SVM with probability 1-delta--is proven by appealing to the smoothness of regularized empirical risk minimization with respect to small perturbations to the feature mapping. We conclude with a lower bound on the optimal differential privacy of the SVM. This negative result states that for any delta, no mechanism can be simultaneously (epsilon,delta)-useful and beta-differentially private for small epsilon and small beta.
|
Title: Differentially Private Empirical Risk Minimization
|
Abstract: Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the $\epsilon$-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance.
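A sketch of the baseline output-perturbation idea for one concrete case, L2-regularized logistic regression. The calibration below (sensitivity 2/(n*lam), noise direction uniform, noise norm Gamma-distributed) is our recollection of the standard recipe and assumes unit-norm feature vectors; the paper's objective perturbation method is not shown:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def private_logreg(X, y, lam=0.1, eps=1.0, rng=None):
    """Output perturbation for L2-regularized logistic regression:
    train as usual, then add high-dimensional Laplace-style noise."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    # sklearn's C maps to the (1/n) sum loss + (lam/2)||w||^2 objective via C = 1/(lam*n).
    clf = LogisticRegression(C=1.0 / (lam * n)).fit(X, y)
    w = clf.coef_.ravel()
    direction = rng.standard_normal(d)
    direction /= np.linalg.norm(direction)           # uniform direction
    norm = rng.gamma(shape=d, scale=2.0 / (n * lam * eps))  # calibrated norm
    return w + norm * direction
```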
|
Title: Learning Mixtures of Gaussians using the k-means Algorithm
|
Abstract: One of the most popular algorithms for clustering in Euclidean space is the $k$-means algorithm; $k$-means is difficult to analyze mathematically, and few theoretical guarantees are known about it, particularly when the data is well-clustered. In this paper, we attempt to fill this gap in the literature by analyzing the behavior of $k$-means on well-clustered data. In particular, we study the case when each cluster is distributed as a different Gaussian -- or, in other words, when the input comes from a mixture of Gaussians. We analyze three aspects of the $k$-means algorithm under this assumption. First, we show that when the input comes from a mixture of two spherical Gaussians, a variant of the 2-means algorithm successfully isolates the subspace containing the means of the mixture components. Second, we show an exact expression for the convergence of our variant of the 2-means algorithm, when the input is a very large number of samples from a mixture of spherical Gaussians. Our analysis does not require any lower bound on the separation between the mixture components. Finally, we study the sample requirement of $k$-means; for a mixture of 2 spherical Gaussians, we show an upper bound on the number of samples required by a variant of 2-means to get close to the true solution. The sample requirement grows with increasing dimensionality of the data, and decreasing separation between the means of the Gaussians. To match our upper bound, we show an information-theoretic lower bound on any algorithm that learns mixtures of two spherical Gaussians; our lower bound indicates that in the case when the overlap between the probability masses of the two distributions is small, the sample requirement of $k$-means is near-optimal.
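For concreteness, the well-clustered setting analyzed here is easy to reproduce in a toy sketch (dimension, separation and iteration count are our own illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
# Mixture of two spherical Gaussians in R^10 with separated means +-mu.
mu = np.zeros(10); mu[0] = 3.0
X = np.vstack([rng.normal(+mu, 1.0, size=(500, 10)),
               rng.normal(-mu, 1.0, size=(500, 10))])

# Lloyd's 2-means iterations, seeded with one point from each half.
centers = X[[0, -1]].astype(float)
for _ in range(25):
    d = np.linalg.norm(X[:, None] - centers[None], axis=2)   # point-center distances
    assign = d.argmin(axis=1)
    centers = np.array([X[assign == k].mean(axis=0) for k in range(2)])

print(centers[:, 0])   # close to +-3: the direction spanned by the true means
```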
|
Title: Opportunistic Adaptation Knowledge Discovery
|
Abstract: Adaptation has long been considered the Achilles' heel of case-based reasoning, since it requires domain-specific knowledge that is difficult to acquire. In this paper, two strategies are combined in order to reduce the knowledge engineering cost induced by the adaptation knowledge (AK) acquisition task: AK is learned from the case base by means of knowledge discovery techniques, and the AK acquisition sessions are triggered opportunistically, i.e., at problem-solving time.
|
Title: Under-determined reverberant audio source separation using a full-rank spatial covariance model
|
Abstract: This article addresses the modeling of reverberant recording environments in the context of under-determined convolutive blind source separation. We model the contribution of each source to all mixture channels in the time-frequency domain as a zero-mean Gaussian random variable whose covariance encodes the spatial characteristics of the source. We then consider four specific covariance models, including a full-rank unconstrained model. We derive a family of iterative expectation-maximization (EM) algorithms to estimate the parameters of each model and propose suitable procedures to initialize the parameters and to align the order of the estimated sources across all frequency bins based on their estimated directions of arrival (DOA). Experimental results over reverberant synthetic mixtures and live recordings of speech data show the effectiveness of the proposed approach.
|
Title: A Multi-stage Probabilistic Algorithm for Dynamic Path-Planning
|
Abstract: Probabilistic sampling methods have become very popular to solve single-shot path planning problems. Rapidly-exploring Random Trees (RRTs) in particular have been shown to be efficient in solving high dimensional problems. Even though several RRT variants have been proposed for dynamic replanning, these methods only perform well in environments with infrequent changes. This paper addresses the dynamic path planning problem by combining simple techniques in a multi-stage probabilistic algorithm. This algorithm uses RRTs for initial planning and informed local search for navigation. We show that this combination of simple techniques provides better responses to highly dynamic environments than the RRT extensions.
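The RRT planner used for the initial stage can be sketched minimally (a unit-square configuration space with a user-supplied collision test; all parameters and names are illustrative, and the paper's multi-stage algorithm itself is not shown):

```python
import numpy as np

def rrt(start, goal, collides, n_iter=2000, step=0.05, goal_tol=0.05, rng=None):
    """Minimal RRT in the unit square: grow a tree by steering the nearest
    node toward uniformly sampled points, skipping colliding extensions."""
    rng = np.random.default_rng() if rng is None else rng
    goal = np.asarray(goal, float)
    nodes = [np.asarray(start, float)]
    parent = {0: None}
    for _ in range(n_iter):
        q = rng.random(2)                              # random sample
        i = min(range(len(nodes)), key=lambda k: np.linalg.norm(nodes[k] - q))
        d = q - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if np.linalg.norm(new - goal) < goal_tol:      # reached the goal region
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k]); k = parent[k]
            return path[::-1]                          # root-to-goal path
    return None
```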
|
Title: Mapping the spatiotemporal dynamics of calcium signaling in cellular neural networks using optical flow
|
Abstract: An optical flow gradient algorithm was applied to spontaneously forming networks of neurons and glia in culture imaged by fluorescence optical microscopy in order to map functional calcium signaling with single pixel resolution. Optical flow estimates the direction and speed of motion of objects in an image between subsequent frames in a recorded digital sequence of images (i.e. a movie). Computed vector field outputs by the algorithm were able to track the spatiotemporal dynamics of calcium signaling patterns. We begin by briefly reviewing the mathematics of the optical flow algorithm, and then describe how to solve for the displacement vectors and how to measure their reliability. We then compare computed flow vectors with manually estimated vectors for the progression of a calcium signal recorded from representative astrocyte cultures. Finally, we applied the algorithm to preparations of primary astrocytes and hippocampal neurons and to the rMC-1 Muller glial cell line in order to illustrate the capability of the algorithm for capturing different types of spatiotemporal calcium activity. We discuss the imaging requirements, parameter selection and threshold selection for reliable measurements, and offer perspectives on uses of the vector data.
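The paper reviews its own gradient-based algorithm; as a stand-in, OpenCV's dense Farneback optical flow produces the same kind of per-pixel displacement field (the parameter values below are common OpenCV defaults, not the paper's settings):

```python
import cv2
import numpy as np

def flow_fields(movie):
    """Dense optical flow between consecutive frames of a recorded sequence.

    movie: iterable of 8-bit grayscale frames (2-D numpy arrays).
    Yields per-pixel (dx, dy) vector fields plus their magnitudes.
    """
    frames = iter(movie)
    prev = next(frames)
    for cur in frames:
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        speed = np.linalg.norm(flow, axis=2)   # motion magnitude per pixel
        yield flow, speed
        prev = cur
```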
|
Title: Combining a Probabilistic Sampling Technique and Simple Heuristics to solve the Dynamic Path Planning Problem
|
Abstract: Probabilistic sampling methods have become very popular to solve single-shot path planning problems. Rapidly-exploring Random Trees (RRTs) in particular have been shown to be very efficient in solving high dimensional problems. Even though several RRT variants have been proposed to tackle the dynamic replanning problem, these methods only perform well in environments with infrequent changes. This paper addresses the dynamic path planning problem by combining simple techniques in a multi-stage probabilistic algorithm. This algorithm uses RRTs to find an initial solution, informed local search to fix unfeasible paths, and a simple greedy optimizer. The algorithm is capable of recognizing when the local search is stuck and of subsequently restarting the RRT. We show that this combination of simple techniques provides better responses to a highly dynamic environment than the dynamic RRT variants.
|
Title: Single-Agent On-line Path Planning in Continuous, Unpredictable and Highly Dynamic Environments
|
Abstract: This document is a thesis on the subject of single-agent on-line path planning in continuous, unpredictable and highly dynamic environments. The problem is finding and traversing a collision-free path for a holonomic robot, without kinodynamic restrictions, moving in an environment with several unpredictably moving obstacles or adversaries. The availability of perfect information about the environment at all times is assumed. Several static and dynamic variants of the Rapidly-exploring Random Trees (RRT) algorithm are explored, as well as an evolutionary algorithm for planning in dynamic environments called the Evolutionary Planner/Navigator. A combination of both kinds of algorithms is proposed to overcome the shortcomings of each, followed by a combination of an RRT variant for initial planning and informed local search for navigation, plus a simple greedy heuristic for optimization. We show that this combination of simple techniques provides better responses to highly dynamic environments than the RRT extensions.
|
Title: Hodge Theory on Metric Spaces
|
Abstract: Hodge theory is a beautiful synthesis of geometry, topology, and analysis, which has been developed in the setting of Riemannian manifolds. On the other hand, spaces of images, which are important in the mathematical foundations of vision and pattern recognition, do not fit this framework. This motivates us to develop a version of Hodge theory on metric spaces with a probability measure. We believe that this constitutes a step towards understanding the geometry of vision. The appendix by Anthony Baker provides a separable, compact metric space with infinite dimensional $\alpha$-scale homology.
|
Title: Isometric Multi-Manifolds Learning
|
Abstract: Isometric feature mapping (Isomap) is a promising manifold learning method. However, Isomap fails on data distributed in clusters within a single manifold or across multiple manifolds. Much work has been done on extending Isomap to multi-manifold learning. In this paper, we first propose a new multi-manifold learning algorithm (M-Isomap) based on a general procedure. The new algorithm preserves intra-manifold geodesics and multiple inter-manifold edges precisely. Compared with previous methods, this algorithm can isometrically learn data distributed on several manifolds. Secondly, the original multi-cluster manifold learning algorithm, called D-C Isomap, is revised so that it can learn multi-manifold data. Finally, the features and effectiveness of the proposed multi-manifold learning algorithms are demonstrated and compared through experiments.
|
Title: Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views
|
Abstract: Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal and profile view face images more reliable and robust. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent imperfect orthogonal conditions and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.
|
Title: Reversible Image Authentication with Tamper Localization Based on Integer Wavelet Transform
|
Abstract: In this paper, a new reversible image authentication technique with tamper localization based on watermarking in the integer wavelet transform is proposed. If the image authenticity is verified, the distortion due to embedding the watermark can be completely removed from the watermarked image. If the image has been tampered with, the tampered positions can be localized. Two layers of watermarking are used: the first layer, embedded in the spatial domain, verifies authenticity, while the second layer, embedded in the transform domain, provides reversibility. The technique uses selective LSB embedding and the histogram characteristics of the difference images of the wavelet coefficients, modifying pixel values slightly to embed the watermark. Experimental results demonstrate that the proposed scheme can detect any modification of the watermarked image.
|
Title: Behavior and performance of the deep belief networks on image classification
|
Abstract: We apply deep belief networks of restricted Boltzmann machines to bags of words of SIFT features obtained from the 13 Scenes, 15 Scenes and Caltech 256 databases and study their behavior and performance experimentally. We find that the final performance in the supervised phase is reached much faster if the system is pre-trained. Pre-training the system on a larger dataset while keeping the supervised dataset fixed improves performance (in the 13 Scenes case). After the unsupervised pre-training, neurons arise that form approximate explicit representations for several categories (meaning they are mostly active for that category). The last three facts suggest that unsupervised training really discovers structure in these data. Pre-training can be done on a completely different dataset (we use the Corel dataset) and we find that the supervised phase performs just as well (on the 15 Scenes dataset). This leads us to conjecture that one can pre-train the system once (e.g. in a factory) and subsequently apply it to many supervised problems, which then learn much faster. The best performance is obtained with a single-hidden-layer system, suggesting that the histogram of SIFT features doesn't have much high-level structure. The overall performance is almost equal to, but slightly worse than, that of the support vector machine and spatial pyramid matching.
|
Title: Training a Large Scale Classifier with the Quantum Adiabatic Algorithm
|
Abstract: In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly, we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceeds the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which not only minimizes the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a Quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.
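The kind of discrete objective handed to the (quantum) optimizer can be sketched as an exhaustive solve over binary weak-classifier weights (the normalization of the vote and the regularization weight are our own choices; real instances replace the 2^N loop with AQC or a heuristic solver):

```python
import numpy as np
from itertools import product

def qubo_boost(H, y, lam=1.0):
    """Select a binary weight vector w over weak classifiers by minimizing
    the regularized quadratic loss  ||H w / N - y||^2 + lam * ||w||_0.

    H: (n_samples, N) matrix of weak classifier outputs in {-1, +1}.
    y: (n_samples,) labels in {-1, +1}.
    """
    n, N = H.shape
    best, best_val = None, np.inf
    for w in product([0, 1], repeat=N):    # 2^N candidates: N must stay small
        w = np.array(w)
        val = np.sum((H @ w / N - y) ** 2) + lam * w.sum()
        if val < best_val:
            best, best_val = w, val
    return best
```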
|
Title: Lexical evolution rates by automated stability measure
|
Abstract: Phylogenetic trees can be reconstructed from the matrix which contains the distances between all pairs of languages in a family. Recently, we proposed a new method which uses normalized Levenshtein distances among words with the same meaning, averaged over all the items of a given list. The number of items in the input lists for language comparison has been debated since the beginning of glottochronology. The point is that words associated with some of the meanings undergo rapid lexical evolution. Therefore, a large vocabulary comparison is only apparently more accurate than a smaller one, since many of the words do not carry any useful information. In principle, one should find the optimal length of the input lists by studying the stability of the different items. In this paper we tackle the problem with an automated methodology based solely on our normalized Levenshtein distance. With this approach, the program of an automated reconstruction of language relationships is completed.
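The core distance is straightforward to sketch. A minimal version, normalizing by the longer word's length and averaging over a list of aligned meanings (we believe this matches the method's definition, but treat the normalization choice as an assumption):

```python
def levenshtein(a, b):
    """Classic edit distance by dynamic programming (one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def normalized_levenshtein(a, b):
    """Levenshtein distance normalized by the longer word's length."""
    return levenshtein(a, b) / max(len(a), len(b), 1)

def language_distance(list_a, list_b):
    """Average normalized distance over aligned word lists with the same meanings."""
    return sum(normalized_levenshtein(x, y)
               for x, y in zip(list_a, list_b)) / len(list_a)
```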
|