Title: A note on convergence of the equi-energy sampler |
Abstract: In a recent paper `The equi-energy sampler with applications in statistical inference and statistical mechanics' [Ann. Stat. 34 (2006) 1581--1619], Kou, Zhou & Wong have presented a new stochastic simulation method called the equi-energy (EE) sampler. This technique is designed to simulate from a probability measure $\pi$, perhaps only known up to a normalizing constant. The authors demonstrate that the sampler performs well in quite challenging problems, but their convergence results (Theorem 2) appear incomplete. This was pointed out, in the discussion of the paper, by Atchad\'e & Liu (2006), who proposed an alternative convergence proof. However, this alternative proof, whilst theoretically correct, does not correspond to the algorithm that is implemented. In this note we provide a new proof of convergence of the equi-energy sampler based on the Poisson equation and on the theory developed in Andrieu et al. (2007) for Markov chain Monte Carlo (MCMC). The objective of this note is to provide a proof of correctness of the EE sampler when there is only one feeding chain; the general case requires a much more technical approach than is suitable for a short note. In addition, we also seek to highlight the difficulties associated with the analysis of this type of algorithm and to present the main techniques that may be adopted to prove its convergence.
Title: Population-Based Reversible Jump Markov Chain Monte Carlo |
Abstract: In this paper we present an extension of population-based Markov chain Monte Carlo (MCMC) to the trans-dimensional case. One of the main challenges in MCMC-based inference is that of simulating from high- and trans-dimensional target measures. In such cases, MCMC methods may not adequately traverse the support of the target; the simulation results will be unreliable. We develop population methods to deal with such problems, and give a result proving the uniform ergodicity of these population algorithms, under mild assumptions. This result is used to demonstrate the superiority, in terms of convergence rate, of a population transition kernel over a reversible jump sampler for a Bayesian variable selection problem. We also give an example of a population algorithm for a Bayesian multivariate mixture model with an unknown number of components. This is applied to gene expression data of 1000 data points in six dimensions, and it is demonstrated that our algorithm outperforms some competing Markov chain samplers.
Title: A Tutorial on Spectral Clustering |
Abstract: In recent years, spectral clustering has become one of the most popular modern clustering algorithms. It is simple to implement, can be solved efficiently by standard linear algebra software, and very often outperforms traditional clustering algorithms such as the k-means algorithm. At first glance spectral clustering appears slightly mysterious, and it is not obvious why it works at all and what it really does. The goal of this tutorial is to give some intuition on those questions. We describe different graph Laplacians and their basic properties, present the most common spectral clustering algorithms, and derive those algorithms from scratch by several different approaches. Advantages and disadvantages of the different spectral clustering algorithms are discussed.
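As a concrete companion to the abstract above, here is a minimal sketch of unnormalized spectral clustering: build the graph Laplacian, embed the points with its bottom eigenvectors, and cluster the embedded rows with k-means. The farthest-first k-means initialization is an implementation choice for this sketch, not part of the tutorial.

```python
import numpy as np

def spectral_clustering(W, k, n_iter=50):
    """Unnormalized spectral clustering on a symmetric affinity matrix W."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                         # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
    U = vecs[:, :k]                            # embedding: first k eigenvectors
    centers = [U[0]]                           # farthest-first initialization
    for _ in range(1, k):
        dist = np.min([((U - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(U[np.argmax(dist)])
    centers = np.array(centers)
    for _ in range(n_iter):                    # Lloyd (k-means) iterations
        labels = np.argmin(((U[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = U[labels == j].mean(axis=0)
    return labels

# two disconnected 3-cliques: the two clusters should be separated exactly
W = np.zeros((6, 6))
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
np.fill_diagonal(W, 0.0)
labels = spectral_clustering(W, 2)
```

For a disconnected graph the Laplacian's null space is spanned by the component indicators, so the embedded rows collapse to one point per component and k-means separates them trivially.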
Title: A Geometric Approach to Confidence Sets for Ratios: Fieller's Theorem, Generalizations, and Bootstrap |
Abstract: We present a geometric method to determine confidence sets for the ratio E(Y)/E(X) of the means of random variables X and Y. This method reduces the problem of constructing confidence sets for the ratio of two random variables to the problem of constructing confidence sets for the means of one-dimensional random variables. It is valid in a large variety of circumstances. In the case of normally distributed random variables, the confidence sets so constructed coincide with the standard Fieller confidence sets. Generalizations of our construction lead to definitions of exact and conservative confidence sets for very general classes of distributions, provided the joint expectation of (X,Y) exists and the linear combinations of the form aX + bY are well-behaved. Finally, our geometric method allows us to derive a very simple bootstrap approach for constructing conservative confidence sets for ratios, which performs favorably in certain situations, in particular in the asymmetric heavy-tailed regime.
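For background only, a plain percentile bootstrap for a ratio of means might look like the sketch below. This is not the paper's geometric construction: Fieller-type sets can be unbounded or the whole line, which a simple percentile interval cannot represent.

```python
import numpy as np

def bootstrap_ratio_ci(x, y, alpha=0.05, n_boot=2000, seed=0):
    """Naive percentile-bootstrap interval for E(Y)/E(X), resampling pairs."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)            # resample (x_i, y_i) jointly
        ratios[b] = y[idx].mean() / x[idx].mean()
    lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# illustrative data: E(X) = 10, E(Y) = 5, so the true ratio is 0.5
rng = np.random.default_rng(1)
x = rng.normal(10.0, 1.0, 200)
y = rng.normal(5.0, 1.0, 200)
lo, hi = bootstrap_ratio_ci(x, y)
```

Because E(X) is far from zero here, the ratio estimate is well-behaved; the paper's approach is designed precisely for the harder cases where this naive interval breaks down.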
Title: Bayesian finite mixtures: a note on prior specification and posterior computation |
Abstract: A new method for the computation of the posterior distribution of the number k of components in a finite mixture is presented. Two aspects of prior specification are also studied: an argument is made for the use of a Poisson(1) distribution as the prior for k; and methods are given for the selection of hyperparameter values in the mixture of normals model, with natural conjugate priors on the component parameters.
Title: Singular Curves in the Joint Space and Cusp Points of 3-RPR parallel manipulators |
Abstract: This paper investigates the singular curves in the joint space of a family of planar parallel manipulators. It focuses on special points, referred to as cusp points, which may appear on these curves. Cusp points play an important role in the kinematic behavior of parallel manipulators since they make possible a nonsingular change of assembly mode. The purpose of this study is twofold. First, it exposes a method to compute joint space singular curves of 3-RPR planar parallel manipulators. Second, it presents an algorithm for detecting and computing all cusp points in the joint space of these same manipulators. |
Title: On the Distribution of Penalized Maximum Likelihood Estimators: The LASSO, SCAD, and Thresholding |
Abstract: We study the distributions of the LASSO, SCAD, and thresholding estimators, in finite samples and in the large-sample limit. The asymptotic distributions are derived both for the case where the estimators are tuned to perform consistent model selection and for the case where the estimators are tuned to perform conservative model selection. Our findings complement those of Knight and Fu (2000) and Fan and Li (2001). We show that the distributions are typically highly nonnormal regardless of how the estimator is tuned, and that this property persists in large samples. The uniform convergence rate of these estimators is also obtained, and is shown to be slower than $1/\sqrt{n}$ when the estimator is tuned to perform consistent model selection. An impossibility result regarding estimation of the estimators' distribution function is also provided.
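In the simplest orthonormal-design (single normal mean) setting, the LASSO reduces to soft thresholding and the hard-thresholding estimator keeps or kills each coordinate. The sketch below illustrates why the finite-sample distribution is highly nonnormal: the estimator has a point mass at zero. (The reduction to these closed forms is standard background, not the paper's general setting.)

```python
import numpy as np

def soft_threshold(y, lam):
    """LASSO estimate of a normal mean in the orthonormal-design case."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def hard_threshold(y, lam):
    """Hard-thresholding estimate: keep y where |y| > lam, zero otherwise."""
    return np.where(np.abs(y) > lam, y, 0.0)

# with true parameter theta = 0, a large fraction of estimates is exactly 0:
# the sampling distribution has an atom at the origin, hence is nonnormal
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, 10000)
frac_zero = float(np.mean(soft_threshold(y, 1.0) == 0.0))
```

Here `frac_zero` approximates P(|Z| <= 1) for a standard normal Z, roughly 0.68, so the atom at zero is far from negligible.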
Title: Discriminative Phoneme Sequences Extraction for Non-Native Speaker's Origin Classification |
Abstract: In this paper we present an automated method for the classification of the origin of non-native speakers. The origin of non-native speakers could be identified by a human listener based on the detection of typical pronunciations for each nationality. Thus we suppose the existence of several phoneme sequences that might allow the classification of the origin of non-native speakers. Our new method is based on the extraction of discriminative sequences of phonemes from a non-native English speech database. These sequences are used to construct a probabilistic classifier for the speakers' origin. The existence of discriminative phone sequences in non-native speech is a significant result of this work. The system that we have developed achieved a significant correct classification rate of 96.3% and a significant error reduction compared to some other tested techniques. |
Title: Nonparametric Regression, Confidence Regions and Regularization |
Abstract: In this paper we offer a unified approach to the problem of nonparametric regression on the unit interval. It is based on a universal, honest and non-asymptotic confidence region which is defined by a set of linear inequalities involving the values of the functions at the design points. Interest will typically centre on certain simplest functions in that region, where simplicity can be defined in terms of shape (number of local extremes, intervals of convexity/concavity) or smoothness (bounds on derivatives) or a combination of both. Once some form of regularization has been decided upon, the confidence region can be used to provide honest non-asymptotic confidence bounds which are less informative but conceptually much simpler.
Title: Performance Bounds for Lambda Policy Iteration and Application to the Game of Tetris |
Abstract: We consider the discrete-time infinite-horizon optimal control problem formalized by Markov Decision Processes. We revisit the work of Bertsekas and Ioffe, who introduced $\lambda$ Policy Iteration, a family of algorithms parameterized by $\lambda$ that generalizes the standard algorithms Value Iteration and Policy Iteration and has some deep connections with the Temporal Differences algorithm TD($\lambda$) described by Sutton and Barto. We deepen the original theory developed by the authors by providing convergence rate bounds which generalize standard bounds for Value Iteration described, for instance, by Puterman. The main contribution of this paper is then to develop the theory of this algorithm when it is used in an approximate form, and to show that this is sound. In doing so, we extend and unify the separate analyses developed by Munos for Approximate Value Iteration and Approximate Policy Iteration. Finally, we revisit the use of this algorithm in the training of a Tetris-playing controller, as originally done by Bertsekas and Ioffe. We provide an original performance bound that can be applied to such an undiscounted control problem. Our empirical results differ from those of Bertsekas and Ioffe (which were originally qualified as "paradoxical" and "intriguing"), and are much more in line with what one would expect from a learning experiment. We discuss possible reasons for this difference.
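$\lambda$ Policy Iteration interpolates between the two standard algorithms named in the abstract; as background, the Value Iteration endpoint can be sketched on a toy MDP as below (the two-state MDP is illustrative, not from the paper).

```python
import numpy as np

def value_iteration(P, R, gamma, n_iter=500):
    """Standard Value Iteration: V <- max_a [ R[a] + gamma * P[a] @ V ].
    P: (A, S, S) transition kernels, R: (A, S) expected rewards.
    The error after n iterations contracts at rate gamma**n."""
    V = np.zeros(P.shape[1])
    for _ in range(n_iter):
        V = np.max(R + gamma * (P @ V), axis=0)
    return V

# toy MDP: action 0 stays in place, action 1 switches state;
# staying in state 1 yields reward 1 per step, everything else 0
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 1.0],
              [0.0, 0.0]])
V = value_iteration(P, R, gamma=0.9)
```

The Bellman fixed point here is V*(1) = 1/(1-0.9) = 10 (stay forever) and V*(0) = 0.9 * 10 = 9 (switch once, then stay), which the iteration approaches geometrically.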
Title: Addendum to Research MMMCV; A Man/Microbio/Megabio/Computer Vision |
Abstract: In an October 2007 research proposal for the University of Sydney, Australia, the author suggested that the biovie-physical phenomenon `electrodynamic dependant biological vision' is governed by relativistic quantum laws and biovision. The phenomenon, on the basis of `biovielectroluminescence', satisfies man/microbio/megabio/computer vision (MMMCV) as a robust candidate for the physical and visual sciences. The general aim of this addendum is to present a refined text of Sections 1-3 of that proposal and to highlight the contents of its Appendix in the form of a `Mechanisms' section. We then briefly preview, for an article planned for December 2007, a theoretical II-time scenario, appending two more equations to Section 3, as a time model proposed for the phenomenon. The time model, at the core of the proposal, plays a significant role in emphasizing the principal points of Objectives no. 1-8 and Sub-hypothesis 3.1.2, mentioned in Article [arXiv:0710.0410]. It also expresses the time concept in terms of quantized energy f(|E|) of time |t| emitted so as to shorten the probability of particle loci, treating predictable patterns of a particle's not-yet-occurred motion as a simplistic solution to Heisenberg's uncertainty principle (HUP). We conclude that practical frames, via a time algorithm for this model, fixate such predictable patterns of motion of scenery bodies onto recordable observation points of an MMMCV system. It even suppresses/predicts superposition phenomena coming from a human subject and/or other bio-subjects in any decision-making event, e.g., brainwave quantum patterns based on vision. Maintaining the existential probability of Riemann surfaces of II-time scenarios in the context of biovielectroluminescence makes motion prediction a possibility.
Title: Combined Acoustic and Pronunciation Modelling for Non-Native Speech Recognition |
Abstract: In this paper, we present several adaptation methods for non-native speech recognition. We have tested pronunciation modelling, MLLR and MAP non-native pronunciation adaptation, and HMM model retraining on the HIWIRE foreign-accented English speech database. The ``phonetic confusion'' scheme we have developed consists of associating with each spoken phone several sequences of confused phones. In our experiments, we have used different combinations of acoustic models representing the canonical and the foreign pronunciations: spoken and native models, and models adapted to the non-native accent with MAP and MLLR. The joint use of pronunciation modelling and acoustic adaptation led to further improvements in recognition accuracy. The best combination of the above-mentioned techniques resulted in a relative word error reduction ranging from 46% to 71%.
Title: Infinite Viterbi alignments in the two state hidden Markov models |
Abstract: Since the early days of digital communication, Hidden Markov Models (HMMs) have been routinely used in speech recognition, natural language processing, image analysis, and bioinformatics. An HMM $(X_i,Y_i)_{i\ge 1}$ assumes the observations $X_1,X_2,\ldots$ to be conditionally independent given an "explanatory" Markov process $Y_1,Y_2,\ldots$, which itself is not observed; moreover, the conditional distribution of $X_i$ depends solely on $Y_i$. Central to the theory and applications of HMMs is the Viterbi algorithm, which finds a maximum a posteriori estimate $q_{1:n}=(q_1,q_2,\ldots,q_n)$ of $Y_{1:n}$ given the observed data $x_{1:n}$. Maximum a posteriori paths are also called Viterbi paths or alignments. Recently, attempts have been made to study the behavior of Viterbi alignments of HMMs with two hidden states when $n$ tends to infinity. It has indeed been shown that in some special cases a well-defined limiting Viterbi alignment exists. While innovative, these attempts have relied on rather strong assumptions. This work proves the existence of infinite Viterbi alignments for virtually any HMM with two hidden states.
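The Viterbi algorithm the abstract refers to is standard dynamic programming; a minimal log-domain sketch for a finite-alphabet HMM (the two-state example matches the paper's setting, the specific parameters are illustrative):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Maximum a posteriori state path for an HMM, in the log domain.
    pi: initial probabilities (K,), A: transition matrix (K, K),
    B: emission probabilities (K, M) over M symbols."""
    K = len(pi)
    n = len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])     # best log-prob ending in each state
    back = np.zeros((n, K), dtype=int)           # back-pointers
    for t in range(1, n):
        scores = logd[:, None] + np.log(A)       # scores[i, j]: reach j via i
        back[t] = np.argmax(scores, axis=0)
        logd = scores[back[t], np.arange(K)] + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]                # backtrack from the best end state
    for t in range(n - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# sticky two-state chain where state i mostly emits symbol i
pi = np.array([0.5, 0.5])
A = np.array([[0.9, 0.1], [0.1, 0.9]])
B = np.array([[0.9, 0.1], [0.1, 0.9]])
path = viterbi([0, 0, 0, 1, 1, 1], pi, A, B)
```

For this observation sequence the alignment switches state exactly once, since a single transition penalty is cheaper than repeated unlikely emissions.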
Title: Confidence Sets Based on Sparse Estimators Are Necessarily Large |
Abstract: Confidence sets based on sparse estimators are shown to be large compared to more standard confidence sets, demonstrating that sparsity of an estimator comes at a substantial price in terms of the quality of the estimator. The results are set in a general parametric or semiparametric framework. |
Title: Am\'elioration des Performances des Syst\`emes Automatiques de Reconnaissance de la Parole pour la Parole Non Native |
Abstract: In this article, we present an approach for non-native automatic speech recognition (ASR). We propose two methods to adapt existing ASR systems to non-native accents. The first method is based on the modification of acoustic models through the integration of acoustic models from the mother tongue. The phonemes of the target language are pronounced in a manner similar to the speakers' native language, so we propose to combine the models of confused phonemes in order for the ASR system to recognize both concurrent pronunciations. The second method is a refinement of pronunciation error detection through the introduction of graphemic constraints. Indeed, non-native speakers may rely on the written form of words when speaking, so pronunciation errors may depend on the characters composing the words. The average error rate reduction that we observed is 22.5% (relative) for the sentence error rate and 34.5% (relative) for the word error rate.
Title: Modeling homophily and stochastic equivalence in symmetric relational data |
Abstract: This article discusses a latent variable model for inference and prediction of symmetric relational data. The model, based on the idea of the eigenvalue decomposition, represents the relationship between two nodes as the weighted inner-product of node-specific vectors of latent characteristics. This ``eigenmodel'' generalizes other popular latent variable models, such as latent class and distance models: It is shown mathematically that any latent class or distance model has a representation as an eigenmodel, but not vice-versa. The practical implications of this are examined in the context of three real datasets, for which the eigenmodel has as good or better out-of-sample predictive performance than the other two models. |
Title: Analytical approach to bit-string models of language evolution |
Abstract: A formulation of bit-string models of language evolution, based on differential equations for the population speaking each language, is introduced and preliminarily studied. Connections with replicator dynamics and diffusion processes are pointed out. The stability of the dominance state, where most of the population speaks a single language, is analyzed within a mean-field-like approximation, while the homogeneous state, where the population is evenly distributed among languages, can be exactly studied. This analysis discloses the existence of a bistability region, where dominance coexists with homogeneity as possible asymptotic states. Numerical resolution of the differential system validates these findings. |
Title: Towards a Sound Theory of Adaptation for the Simple Genetic Algorithm |
Abstract: The pace of progress in the fields of Evolutionary Computation and Machine Learning is currently limited -- in the former field, by the improbability of making advantageous extensions to evolutionary algorithms when their capacity for adaptation is poorly understood, and in the latter by the difficulty of finding effective semi-principled reductions of hard real-world problems to relatively simple optimization problems. In this paper we explain why a theory which can accurately explain the simple genetic algorithm's remarkable capacity for adaptation has the potential to address both these limitations. We describe what we believe to be the impediments -- historic and analytic -- to the discovery of such a theory and highlight the negative role that the building block hypothesis (BBH) has played. We argue, based on experimental results, that a fundamental limitation which is widely believed to constrain the SGA's adaptive ability (and is strongly implied by the BBH) is in fact illusory. The SGA therefore turns out to be more powerful than it is currently thought to be. We give conditions under which it becomes feasible to numerically approximate and study the multivariate marginals of the search distribution of an infinite-population SGA over multiple generations even when its genomes are long, and explain why this analysis is relevant to the riddle of the SGA's remarkable adaptive abilities.
Title: Instantaneous and lagged measurements of linear and nonlinear dependence between groups of multivariate time series: frequency decomposition |
Abstract: Measures of linear dependence (coherence) and nonlinear dependence (phase synchronization) between any number of multivariate time series are defined. The measures are expressed as the sum of lagged dependence and instantaneous dependence. The measures are non-negative, and take the value zero only when there is independence of the pertinent type. These measures are defined in the frequency domain and are applicable to stationary and non-stationary time series. These new results extend and refine significantly those presented in a previous technical report (Pascual-Marqui 2007, arXiv:0706.1776 [stat.ME], http://arxiv.org/abs/0706.1776), and have been largely motivated by the seminal paper on linear feedback by Geweke (1982 JASA 77:304-313). One important field of application is neurophysiology, where the time series consist of electric neuronal activity at several brain locations. Coherence and phase synchronization are interpreted as "connectivity" between locations. However, any measure of dependence is highly contaminated with an instantaneous, non-physiological contribution due to volume conduction and low spatial resolution. The new techniques remove this confounding factor considerably. Moreover, the measures of dependence can be applied to any number of brain areas jointly, i.e. distributed cortical networks, whose activity can be estimated with eLORETA (Pascual-Marqui 2007, arXiv:0710.3341 [math-ph]). |
Title: Predicting relevant empty spots in social interaction |
Abstract: An empty spot is a hard-to-fill gap found in records of social interaction; it provides a clue to persons in the underlying social network who do not appear in those records. This contribution addresses the problem of predicting relevant empty spots in social interaction. Homogeneous and inhomogeneous networks are studied as models of the interaction. A heuristic predictor-function approach is presented as a new method for this problem. A simulation experiment is carried out over a homogeneous network: test data in the form of baskets are generated from the simulated communication, and the precision of predicting the empty spots is calculated to demonstrate the performance of the presented approach.
Title: Inference for stochastic volatility models using time change transformations |
Abstract: We address the problem of parameter estimation for diffusion driven stochastic volatility models through Markov chain Monte Carlo (MCMC). To avoid degeneracy issues we introduce an innovative reparametrisation defined through transformations that operate on the time scale of the diffusion. A novel MCMC scheme which overcomes the inherent difficulties of time change transformations is also presented. The algorithm is fast to implement and applies to models with stochastic volatility. The methodology is tested through simulation based experiments and illustrated on data consisting of US treasury bill rates. |
Title: Likelihood-based inference for correlated diffusions |
Abstract: We address the problem of likelihood-based inference for correlated diffusion processes using Markov chain Monte Carlo (MCMC) techniques. Such a task presents two interesting problems. First, the construction of the MCMC scheme should ensure that the correlation coefficients are updated subject to the positive definite constraints of the diffusion matrix. Second, a diffusion may only be observed at a finite set of points, and the marginal likelihood for the parameters based on these observations is generally not available. We overcome the first issue by using the Cholesky factorisation of the diffusion matrix. To deal with the likelihood unavailability, we generalise the data augmentation framework of Roberts and Stramer (2001 Biometrika 88(3):603-621) to d-dimensional correlated diffusions, including multivariate stochastic volatility models. Our methodology is illustrated through simulation-based experiments and with daily EUR/USD and GBP/USD rates together with their implied volatilities.
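The Cholesky device mentioned in the abstract can be illustrated with a common trick: parametrize a lower-triangular factor with unit-norm rows by unconstrained angles, so that every parameter value yields a valid (positive definite) correlation matrix. This is a generic sketch; the exact parametrisation used in the paper may differ.

```python
import numpy as np

def corr_from_chol_angles(angles):
    """Build a valid 3x3 correlation matrix R = L L^T from three free
    angles, via a lower-triangular L whose rows have unit norm."""
    t21, t31, t32 = angles
    L = np.array([
        [1.0, 0.0, 0.0],
        [np.cos(t21), np.sin(t21), 0.0],
        [np.cos(t31), np.sin(t31) * np.cos(t32), np.sin(t31) * np.sin(t32)],
    ])
    return L @ L.T

# any angle values give a symmetric matrix with unit diagonal;
# non-degenerate angles give a strictly positive definite one
R = corr_from_chol_angles([0.3, 1.0, 2.0])
```

An MCMC sampler can then propose moves on the unconstrained angles and never needs to reject for violating positive definiteness.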
Title: Enhancing Sparsity by Reweighted L1 Minimization |
Abstract: It is now well understood that (1) it is possible to reconstruct sparse signals exactly from what appear to be highly incomplete sets of linear measurements and (2) that this can be done by constrained L1 minimization. In this paper, we study a novel method for sparse signal recovery that in many situations outperforms L1 minimization in the sense that substantially fewer measurements are needed for exact recovery. The algorithm consists of solving a sequence of weighted L1-minimization problems where the weights used for the next iteration are computed from the value of the current solution. We present a series of experiments demonstrating the remarkable performance and broad applicability of this algorithm in the areas of sparse signal recovery, statistical estimation, error correction and image processing. Interestingly, superior gains are also achieved when our method is applied to recover signals with assumed near-sparsity in overcomplete representations--not by reweighting the L1 norm of the coefficient sequence as is common, but by reweighting the L1 norm of the transformed object. An immediate consequence is the possibility of highly efficient data acquisition protocols by improving on a technique known as compressed sensing. |
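The iteration described in the abstract above alternates weighted L1 minimization with a weight update computed from the current solution. The sketch below uses the weight rule w_i = 1/(|x_i| + eps) and solves each weighted L1 subproblem as a linear program via `scipy.optimize.linprog` with the split x = u - v, u, v >= 0; the LP formulation and the problem sizes are implementation choices for this sketch.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """min sum_i w_i |x_i|  s.t.  A x = b, as an LP in (u, v) with x = u - v."""
    m, n = A.shape
    c = np.concatenate([w, w])                  # objective: w.(u + v)
    A_eq = np.hstack([A, -A])                   # A u - A v = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    z = res.x
    return z[:n] - z[n:]

def reweighted_l1(A, b, n_iter=4, eps=1e-3):
    """Iteratively reweighted L1 minimization: small entries get large
    weights, pushing the next solution to be sparser."""
    w = np.ones(A.shape[1])
    for _ in range(n_iter):
        x = weighted_l1_min(A, b, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# a small noiseless sparse recovery instance (illustrative sizes)
rng = np.random.default_rng(0)
n, m, k = 20, 10, 3
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[:k] = [1.5, -2.0, 1.0]
b = A @ x_true
x1 = weighted_l1_min(A, b, np.ones(n))          # first pass = plain L1
x_hat = reweighted_l1(A, b)
```

The first iteration is ordinary L1 minimization; subsequent iterations reweight so that the penalty behaves more like a count of nonzeros, which is the mechanism behind the reduced measurement requirements reported in the paper.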
Title: Kinematic calibration of orthoglide-type mechanisms |
Abstract: The paper proposes a novel calibration approach for Orthoglide-type mechanisms based on observations of the manipulator leg parallelism during motions between prespecified test postures. It employs a low-cost measuring system composed of standard comparator indicators attached to universal magnetic stands. They are sequentially used for measuring the deviation of the relevant leg location while the manipulator moves the TCP along the Cartesian axes. Using the measured differences, the developed algorithm estimates the joint offsets, which are treated as the most essential parameters to be adjusted. The sensitivity of the measurement methods and the calibration accuracy are also studied. Experimental results are presented that demonstrate the validity of the proposed calibration technique.
Title: Building Rules on Top of Ontologies for the Semantic Web with Inductive Logic Programming |
Abstract: Building rules on top of ontologies is the ultimate goal of the logical layer of the Semantic Web. To this aim an ad hoc mark-up language for this layer is currently under discussion. It is intended to follow the tradition of hybrid knowledge representation and reasoning systems such as $\mathcal{AL}$-log, which integrates the description logic $\mathcal{ALC}$ and the function-free Horn clausal language Datalog. In this paper we consider the problem of automating the acquisition of these rules for the Semantic Web. We propose a general framework for rule induction that adopts the methodological apparatus of Inductive Logic Programming and relies on the expressive and deductive power of $\mathcal{AL}$-log. The framework is valid whatever the scope of induction (description vs. prediction) is. Yet, for illustrative purposes, we also discuss an instantiation of the framework which aims at description and turns out to be useful in Ontology Refinement. Keywords: Inductive Logic Programming, Hybrid Knowledge Representation and Reasoning Systems, Ontologies, Semantic Web. Note: To appear in Theory and Practice of Logic Programming (TPLP)
Title: Dated ancestral trees from binary trait data and its application to the diversification of languages |
Abstract: Binary trait data record the presence or absence of distinguishing traits in individuals. We treat the problem of estimating ancestral trees with time depth from binary trait data. Simple analysis of such data is problematic. Each homology class of traits has a unique birth event on the tree, and the birth event of a trait visible at the leaves is biased towards the leaves. We propose a model-based analysis of such data, and present an MCMC algorithm that can sample from the resulting posterior distribution. Our model is based on a birth-death process for the evolution of the elements of sets of traits. Our analysis correctly accounts for the removal of singleton traits, which are commonly discarded in real data sets. We illustrate Bayesian inference for two binary-trait data sets which arise in historical linguistics. The Bayesian approach allows for the incorporation of information from ancestral languages. The marginal prior distribution of the root time is uniform. We present a thorough analysis of the robustness of our results to model misspecification, through analysis of predictive distributions for external data and fitting of data simulated under alternative observation models. The reconstructed ages of tree nodes are relatively robust, whilst posterior probabilities for topology are not reliable.
Title: The Residual Information Criterion, Corrected |
Abstract: Shi and Tsai (JRSSB, 2002) proposed an interesting residual information criterion (RIC) for model selection in regression. Their RIC was motivated by the principle of minimizing the Kullback-Leibler discrepancy between the residual likelihoods of the true and candidate models. We show, however, that under this principle RIC would always choose the full (saturated) model. The residual likelihood, therefore, is not appropriate as a discrepancy measure for defining an information criterion. We explain why this is so and provide a corrected residual information criterion as a remedy.
Title: Bootstrap Confidence Regions for Optimal Operating Conditions in Response Surface Methodology |
Abstract: This article concerns the application of bootstrap methodology to construct a likelihood-based confidence region for operating conditions associated with the maximum of a response surface constrained to a specified region. Unlike classical methods based on the stationary point, proper interpretation of this confidence region does not depend on unknown model parameters. In addition, the methodology does not require the assumption of normally distributed errors. The approach is demonstrated for concave-down and saddle system cases in two dimensions. Simulation studies were performed to assess the coverage probability of these regions. |
Title: Empirical Evaluation of Four Tensor Decomposition Algorithms |
Abstract: Higher-order tensor decompositions are analogous to the familiar Singular Value Decomposition (SVD), but they transcend the limitations of matrices (second-order tensors). SVD is a powerful tool that has achieved impressive results in information retrieval, collaborative filtering, computational linguistics, computational vision, and other fields. However, SVD is limited to two-dimensional arrays of data (two modes), and many potential applications have three or more modes, which require higher-order tensor decompositions. This paper evaluates four algorithms for higher-order tensor decomposition: Higher-Order Singular Value Decomposition (HO-SVD), Higher-Order Orthogonal Iteration (HOOI), Slice Projection (SP), and Multislice Projection (MP). We measure the time (elapsed run time), space (RAM and disk space requirements), and fit (tensor reconstruction accuracy) of the four algorithms, under a variety of conditions. We find that standard implementations of HO-SVD and HOOI do not scale up to larger tensors, due to increasing RAM requirements. We recommend HOOI for tensors that are small enough for the available RAM and MP for larger tensors. |
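Of the four algorithms evaluated in the abstract above, HO-SVD is the simplest: compute an SVD of each mode-n unfolding, keep the leading singular vectors per mode, and project. A minimal numpy sketch (the rank-(2,2,2) test tensor is illustrative):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-n fibers as matrix columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply matrix M into the given mode of tensor T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated Higher-Order SVD: per-mode SVDs of the unfoldings,
    then project T onto the leading per-mode subspaces to get the core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for mode, U in enumerate(factors):
        core = mode_mult(core, U.T, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = mode_mult(T, U, mode)
    return T

# a tensor with exact multilinear rank (2, 2, 2) is recovered exactly
rng = np.random.default_rng(0)
G = rng.normal(size=(2, 2, 2))
Us = [np.linalg.qr(rng.normal(size=(d, 2)))[0] for d in (5, 6, 7)]
T = reconstruct(G, Us)
core, factors = hosvd(T, (2, 2, 2))
T_hat = reconstruct(core, factors)
```

The RAM blow-up the paper reports for HO-SVD comes from exactly this step: each unfolding is a dense matrix with as many columns as the product of the remaining mode sizes.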
Title: Computer Model of a "Sense of Humour". I. General Algorithm |
Abstract: A computer model of a "sense of humour" is proposed. The humorous effect is interpreted as a specific malfunction in the course of information processing due to the need for the rapid deletion of the false version transmitted into consciousness. The biological function of a sense of humour consists in speeding up the bringing of information into consciousness and in fuller use of the resources of the brain. |
Title: Computer Model of a "Sense of Humour". II. Realization in Neural Networks |
Abstract: The computer realization of a "sense of humour" requires the creation of an algorithm for solving the "linguistic problem", i.e. the problem of recognizing a continuous sequence of polysemantic images. Such algorithm may be realized in the Hopfield model of a neural network after its proper modification. |
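As background for the abstract above: the standard (unmodified) Hopfield model stores patterns via Hebbian weights and recalls them by iterating a sign rule to a fixed point. This sketch is the textbook network, not the paper's modification for polysemantic image sequences.

```python
import numpy as np

def hopfield_train(patterns):
    """Hebbian weight matrix for a Hopfield network storing +/-1 patterns."""
    P = np.asarray(patterns, dtype=float)       # shape (num_patterns, N)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)                    # no self-connections
    return W

def hopfield_recall(W, state, n_steps=20):
    """Synchronous sign updates until a fixed point (or step limit)."""
    s = np.asarray(state, dtype=float).copy()
    for _ in range(n_steps):
        new = np.where(W @ s >= 0, 1.0, -1.0)
        if np.array_equal(new, s):
            break
        s = new
    return s

# store one pattern and recall it from a one-bit corruption
p = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = hopfield_train([p])
noisy = p.copy()
noisy[0] = -noisy[0]
recalled = hopfield_recall(W, noisy)
```

Stored patterns act as attractors: a state within the basin of attraction converges back to the memorized pattern, which is the associative-recall property the paper builds on.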
Title: On the Information Rates of the Plenoptic Function |
Abstract: The plenoptic function (Adelson and Bergen, 1991) describes the visual information available to an observer at any point in space and time. Samples of the plenoptic function (POF) are seen in video and in general visual content, and represent large amounts of information. In this paper we propose a stochastic model to study the compression limits of the plenoptic function. In the proposed framework, we isolate the two fundamental sources of information in the POF: the one representing the camera motion and the other representing the information complexity of the "reality" being acquired and transmitted. The sources of information are combined, generating a stochastic process that we study in detail. We first propose a model for ensembles of realities that do not change over time. The proposed model is simple in that it enables us to derive precise coding bounds in the information-theoretic sense that are sharp in a number of cases of practical interest. For this simple case of static realities and camera motion, our results indicate that coding practice is in accordance with optimal coding from an information-theoretic standpoint. The model is further extended to account for visual realities that change over time. We derive bounds on the lossless and lossy information rates for this dynamic reality model, stating conditions under which the bounds are tight. Examples with synthetic sources suggest that in the presence of scene dynamics, simple hybrid coding using motion/displacement estimation with DPCM performs considerably suboptimally relative to the true rate-distortion bound.
Title: Can a Computer Laugh?
Abstract: A computer model of "a sense of humour" suggested previously [arXiv:0711.2058, arXiv:0711.2061], relating the humorous effect to a specific malfunction in information processing, is given in a somewhat different exposition. Psychological aspects of humour are elaborated more thoroughly. The mechanism of laughter is formulated at a more general level. A detailed discussion is presented of the higher levels of information processing, which are responsible for the perception of complex samples of humour. The development of a sense of humour in the process of evolution is discussed.
Title: Models for dependent extremes using stable mixtures |