Abstract: In this paper, we are interested in the classical problem of restoring data degraded by a convolution and the addition of white Gaussian noise. The originality of the proposed approach is two-fold. Firstly, we formulate the restoration problem as a nonlinear estimation problem, leading to the minimization of a criterion derived from Stein's unbiased quadratic risk estimate. Secondly, the deconvolution procedure is performed using arbitrary analysis and synthesis frames, which may or may not be overcomplete. New theoretical results concerning the calculation of the variance of Stein's risk estimate are also provided in this work. Simulations carried out on natural images show the good performance of our method compared with conventional wavelet-based restoration methods.
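As a point of reference for the criterion above: for pure denoising, $y = x + b$ with $b \sim \mathcal{N}(0, \sigma^2 I_N)$ and a weakly differentiable estimator $f$, the classical Stein unbiased risk estimate (SURE) reads as follows (this is the standard denoising formula, not the authors' exact deconvolution criterion):
$$\mathrm{SURE}(f) = \|f(y) - y\|^2 - N\sigma^2 + 2\sigma^2\,\mathrm{div}_y f(y), \qquad \mathbb{E}\left[\mathrm{SURE}(f)\right] = \mathbb{E}\,\|f(y) - x\|^2 .$$
Minimizing such an estimate over a family of estimators requires no access to the clean data $x$, which is what makes criteria of this type attractive for restoration.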
Title: Computational modelling of evolution: ecosystems and language
Abstract: Recently, computational modelling became a very important research tool that enables us to study problems that for decades evaded scientific analysis. Evolutionary systems are certainly examples of such problems: they are composed of many units that might reproduce, diffuse, mutate, die, or, in some cases, communicate. These processes might be of some adaptive value, they influence each other, and they occur on various time scales. That is why such systems are so difficult to study. In this paper we briefly review some computational approaches, as well as our contributions, to the evolution of ecosystems and language. We start from the Lotka-Volterra equations and the modelling of simple two-species prey-predator systems. Such systems are a canonical example for studying oscillatory behaviour in competitive populations. Then we describe various approaches to studying the long-term evolution of multi-species ecosystems. We emphasize the need to use models that take into account both ecological and evolutionary processes. Finally, we address the problem of the emergence and development of language. It is becoming more and more evident that any theory of language origin and development must be consistent with Darwinian principles of evolution. Consequently, a number of techniques developed for modelling the evolution of complex ecosystems are being applied to the problem of language. We briefly review some of these approaches.
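The Lotka-Volterra prey-predator equations referred to above take the classical form, with $x$ the prey density, $y$ the predator density, and $a, b, c, d$ positive rate constants:
$$\frac{dx}{dt} = a x - b x y, \qquad \frac{dy}{dt} = -c y + d x y .$$
Their closed orbits around the coexistence equilibrium $(c/d,\, a/b)$ are the source of the oscillatory behaviour mentioned in the abstract.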
Title: A non-negative expansion for small Jensen-Shannon Divergences
Abstract: In this report, we derive a non-negative series expansion for the Jensen-Shannon divergence (JSD) between two probability distributions. This series expansion is shown to be useful for numerical calculations of the JSD when the probability distributions are nearly equal, a regime in which small numerical errors would otherwise dominate the evaluation.
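The series expansion itself is not reproduced in the abstract; the sketch below (plain Python/NumPy, with an illustrative eps guard) only shows the standard direct formula and the numerical cancellation that motivates such an expansion.

import numpy as np

def jsd_direct(p, q, eps=1e-300):
    # Direct evaluation: JSD(P, Q) = H((P+Q)/2) - (H(P) + H(Q)) / 2.
    # For nearly equal P and Q this is a tiny difference of O(1)
    # entropies, so rounding error dominates (the result can even come
    # out negative) -- the regime the non-negative expansion targets.
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    h = lambda r: -np.sum(r * np.log(r + eps))  # eps guards log(0)
    return h(m) - 0.5 * (h(p) + h(q))

p = np.array([0.5, 0.5])
q = np.array([0.5 + 1e-9, 0.5 - 1e-9])
print(jsd_direct(p, q))  # true value is of order 1e-19, swamped by rounding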
Title: Choice of neighbor order in nearest-neighbor classification
Abstract: The $k$th-nearest neighbor rule is arguably the simplest and most intuitively appealing nonparametric classification procedure. However, application of this method is inhibited by lack of knowledge about its properties, in particular, about the manner in which it is influenced by the value of $k$; and by the absence of techniques for empirical choice of $k$. In the present paper we detail the way in which the value of $k$ determines the misclassification error. We consider two models, Poisson and Binomial, for the training samples. Under the first model, data are recorded in a Poisson stream and are "assigned" to one or other of the two populations in accordance with the prior probabilities. In particular, the total number of data in both training samples is a Poisson-distributed random variable. Under the Binomial model, however, the total number of data in the training samples is fixed, although again each data value is assigned in a random way. Although the values of risk and regret associated with the Poisson and Binomial models are different, they are asymptotically equivalent to first order, and also to the risks associated with kernel-based classifiers that are tailored to the case of two derivatives. These properties motivate new methods for choosing the value of $k$.
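The paper's asymptotic prescriptions for $k$ are not given in the abstract; a common empirical route they motivate is to choose $k$ by cross-validation, sketched below on synthetic data (scikit-learn assumed available; all dataset parameters are illustrative).

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Synthetic two-class problem standing in for the training samples.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Score each candidate k by 5-fold cross-validated accuracy; keep the best.
ks = list(range(1, 50, 2))
scores = [cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5).mean()
          for k in ks]
best_k = ks[int(np.argmax(scores))]
print(best_k, max(scores))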
Title: 3D Face Recognition with Sparse Spherical Representations
Abstract: This paper addresses the problem of 3D face recognition using simultaneous sparse approximations on the sphere. The 3D face point clouds are first aligned with a novel and fully automated registration process. They are then represented as signals on the 2D sphere in order to preserve depth and geometry information. Next, we implement a dimensionality reduction process with simultaneous sparse approximations and subspace projection. It permits each 3D face to be represented by only a few spherical functions that capture the salient facial characteristics and hence preserve the discriminant facial information. We eventually perform recognition by effective matching in the reduced space, where Linear Discriminant Analysis can further be applied for improved recognition performance. The 3D face recognition algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to outperform classical state-of-the-art solutions that work with depth images.
Title: Approximating the marginal likelihood using copula
Abstract: Model selection is an important activity in modern data analysis and the conventional Bayesian approach to this problem involves calculation of marginal likelihoods for different models, together with diagnostics which examine specific aspects of model fit. Calculating the marginal likelihood is a difficult computational problem. Our article proposes some extensions of the Laplace approximation for this task that are related to copula models and which are easy to apply. Variations which can be used both with and without simulation from the posterior distribution are considered, as well as use of the approximations with bridge sampling and in random effects models with a large number of latent variables. The use of a t-copula to obtain higher accuracy when multivariate dependence is not well captured by a Gaussian copula is also discussed.
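The starting point for the article's extensions is the standard Laplace approximation to the marginal likelihood: with $\hat\theta$ the posterior mode, $d$ the parameter dimension, and $H$ the negative Hessian of $\log\{p(y\mid\theta)\,p(\theta)\}$ at $\hat\theta$,
$$p(y) \approx p(y \mid \hat\theta)\, p(\hat\theta)\, (2\pi)^{d/2}\, |H|^{-1/2} .$$
The copula-based variants proposed in the article refine this Gaussian approximation and are not reproduced here.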
Title: A Novel Clustering Algorithm Based on a Modified Model of Random Walk
Abstract: We introduce a modified model of random walk, and then develop two novel clustering algorithms based on it. In the algorithms, each data point in a dataset is considered as a particle which can move at random in space according to the preset rules in the modified model. Further, each data point may also be viewed as a local control subsystem, in which the controller adjusts its transition probability vector in terms of the feedback from all data points, and its transition direction is then identified by an event-generating function. Finally, the positions of all data points are updated. As they move in space, data points gradually aggregate and separating gaps emerge among them automatically. As a consequence, data points that belong to the same class are located at the same position, whereas those that belong to different classes are away from one another. The experimental results demonstrate that data points in the test datasets are clustered reasonably and efficiently, and the comparison with other algorithms also provides an indication of the effectiveness of the proposed algorithms.
Title: A Theory of Truncated Inverse Sampling
Abstract: In this paper, we have established a new framework of truncated inverse sampling for estimating the mean values of non-negative random variables such as binomial, Poisson, hypergeometric, and bounded variables. We have derived explicit formulas and computational methods for designing sampling schemes to ensure prescribed levels of precision and confidence for point estimators. Moreover, we have developed interval estimation methods.
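For background, classical (untruncated) inverse binomial sampling observes Bernoulli trials until a preset number $r$ of successes occurs; Haldane's estimator $\hat p = (r-1)/(n-1)$, with $n$ the random number of trials, is then unbiased. The sketch below illustrates this baseline only; the paper's truncated schemes and stopping rules are not reproduced.

import random

def inverse_binomial_estimate(p_true, r, rng):
    # Draw Bernoulli(p_true) trials until r successes; n is the random
    # number of trials. Haldane's (r-1)/(n-1) is unbiased for p.
    successes, n = 0, 0
    while successes < r:
        n += 1
        if rng.random() < p_true:
            successes += 1
    return (r - 1) / (n - 1)

estimates = [inverse_binomial_estimate(0.2, 10, random.Random(i))
             for i in range(2000)]
print(sum(estimates) / len(estimates))  # close to 0.2 on average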
Title: A branch-and-bound feature selection algorithm for U-shaped cost functions
Abstract: This paper presents the formulation of a combinatorial optimization problem with the following characteristics: (i) the search space is the power set of a finite set, structured as a Boolean lattice; (ii) the cost function forms a U-shaped curve when applied to any lattice chain. This formulation applies to feature selection in the context of pattern recognition. The known approaches to this problem are branch-and-bound algorithms and heuristics, which explore the search space only partially. Branch-and-bound algorithms are equivalent to a full search, while heuristics are not. This paper presents a branch-and-bound algorithm that differs from the known ones by exploiting the lattice structure and the U-shaped chain curves of the search space. The main contribution of this paper is the architecture of this algorithm, which is based on the representation and exploration of the search space via new lattice properties proven here. Several experiments with well-known public data indicate the superiority of the proposed method over SFFS, a popular heuristic that gives good results in very short computational time. In all experiments, the proposed method obtained better or equal results in similar or even smaller computational time.
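To make the chain idea concrete, here is a minimal sketch, not the paper's algorithm, of a depth-first search that extends chains of the Boolean lattice and prunes a chain as soon as its cost increases, which is sound whenever the cost is U-shaped along every chain; the toy cost below depends only on subset size.

def chain_pruned_search(features, cost):
    # Depth-first search over the Boolean lattice that extends one chain
    # at a time and abandons it as soon as the cost increases. When the
    # cost is U-shaped along every chain, every chain leading to the
    # global minimum is non-increasing, so the optimum is never pruned.
    start = frozenset()
    best_cost, best_set = cost(start), start
    stack, seen = [(start, best_cost)], {start}
    while stack:
        subset, c = stack.pop()
        if c < best_cost:
            best_cost, best_set = c, subset
        for f in features - subset:
            child = subset | {f}
            if child in seen:
                continue
            cc = cost(child)
            if cc <= c:                    # still on the descending arm
                seen.add(child)
                stack.append((child, cc))
            # cc > c: prune -- on this chain the U-shape implies growth
    return best_set, best_cost

features = frozenset(range(8))
cost = lambda s: (len(s) - 3) ** 2          # toy size-based U-shaped cost
print(chain_pruned_search(features, cost))  # some subset of size 3, cost 0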
Title: Touchscreen Voting Machines Cause Long Lines and Disenfranchise Voters
Abstract: Computerized touchscreen "Direct Recording Electronic" (DRE) voting systems have been used by over one third of American voters in recent elections. In many places, insufficient numbers of DREs, in combination with lengthy ballots and high voter traffic, have caused long lines and disenfranchised voters who left without voting. We have applied computer queuing simulation to the voting process and conclude that far more DREs, at great expense, would be needed to keep waiting times low. Alternatively, paper-ballot optical scan systems can be easily and economically scaled to prevent long lines and meet unexpected contingencies.
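A minimal sketch of the kind of queueing simulation applied here: Poisson arrivals, one shared first-come-first-served line, and a fixed number of identical machines with exponential service times (all numerical values below are illustrative, not the paper's calibrated parameters).

import heapq, random

def mean_wait(n_voters, n_machines, arrivals_per_min, mean_service_min, rng):
    # Poisson arrivals, one shared FIFO line, identical exponential servers.
    free_at = [0.0] * n_machines      # heap of times at which machines free up
    t, waits = 0.0, []
    for _ in range(n_voters):
        t += rng.expovariate(arrivals_per_min)        # next voter arrives
        earliest = heapq.heappop(free_at)             # soonest-available machine
        start = max(t, earliest)                      # queue if all are busy
        waits.append(start - t)
        heapq.heappush(free_at, start + rng.expovariate(1.0 / mean_service_min))
    return sum(waits) / len(waits)

# 1.5 voters/min with a 5-min mean ballot on 8 machines: ~94% utilization.
print(mean_wait(1000, 8, 1.5, 5.0, random.Random(0)))  # mean wait in minutes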
Title: Temporal Difference Updating without a Learning Rate
Abstract: We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(lambda); however, it lacks the parameter alpha that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(lambda) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins' Q(lambda) and Sarsa(lambda) and find that it again offers superior performance without a learning rate parameter.
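The derived learning-rate equation is not stated in the abstract; the sketch below shows the standard tabular TD(lambda) baseline it modifies, including the free parameter alpha that the paper eliminates.

import numpy as np

def td_lambda_episode(transitions, V, alpha=0.1, gamma=0.95, lam=0.9):
    # Tabular TD(lambda) with accumulating eligibility traces.
    # `transitions` is a list of (state, reward, next_state) triples;
    # alpha is the free learning rate the paper's derivation replaces.
    e = np.zeros_like(V)
    for s, r, s_next in transitions:
        delta = r + gamma * V[s_next] - V[s]   # TD error
        e[s] += 1.0                            # accumulate trace for s
        V += alpha * delta * e                 # update all traced states
        e *= gamma * lam                       # decay every trace
    return V

V = np.zeros(5)
episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, 3), (3, 0.0, 4)]
print(td_lambda_episode(episode, V))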
Title: On the Possibility of Learning in Reactive Environments with Arbitrary Dependence
Abstract: We address the problem of reinforcement learning in which observations may exhibit an arbitrary form of stochastic dependence on past observations and actions, i.e. environments more general than (PO)MDPs. The task for an agent is to attain the best possible asymptotic reward when the true generating environment is unknown but belongs to a known countable family of environments. We find sufficient conditions on the class of environments under which there exists an agent that attains the best asymptotic reward for any environment in the class. We analyze how tight these conditions are and how they relate to different probabilistic assumptions known in reinforcement learning and related fields, such as Markov Decision Processes and mixing conditions.
Title: Gibbs posterior for variable selection in high-dimensional classification and data mining
Abstract: In the popular approach of "Bayesian variable selection" (BVS), one uses prior and posterior distributions to select a subset of candidate variables to enter the model. A completely new direction will be considered here to study BVS with a Gibbs posterior originating in statistical mechanics. The Gibbs posterior is constructed from a risk function of practical interest (such as the classification error) and aims at minimizing that risk function without modeling the data probabilistically. This can improve performance over the usual Bayesian approach, which depends on a probability model that may be misspecified. Conditions will be provided to achieve good risk performance, even in the presence of high dimensionality, when the number of candidate variables "$K$" can be much larger than the sample size "$n$." In addition, we develop a convenient Markov chain Monte Carlo algorithm to implement BVS with the Gibbs posterior.
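In its standard form, a Gibbs posterior for a parameter (here, a variable-selection indicator) $\beta$ replaces the likelihood with an exponentiated empirical risk:
$$\pi_n(\beta) \;\propto\; \exp\{-\psi\, n\, R_n(\beta)\}\; \pi_0(\beta),$$
where $\pi_0$ is the prior, $R_n(\beta)$ is the empirical risk (such as the classification error of the model using the variables selected by $\beta$), and $\psi > 0$ is an inverse-temperature tuning constant.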
Title: On the Conditional Independence Implication Problem: A Lattice-Theoretic Approach
Abstract: A lattice-theoretic framework is introduced that permits the study of the conditional independence (CI) implication problem relative to the class of discrete probability measures. Semi-lattices are associated with CI statements and a finite, sound and complete inference system relative to semi-lattice inclusions is presented. This system is shown to be (1) sound and complete for saturated CI statements, (2) complete for general CI statements, and (3) sound and complete for stable CI statements. These results yield a criterion that can be used to falsify instances of the implication problem and several heuristics are derived that approximate this "lattice-exclusion" criterion in polynomial time. Finally, we provide experimental results that relate our work to results obtained from other existing inference algorithms.
Title: Spectral Connectivity Analysis
Abstract: Spectral kernel methods are techniques for transforming data into a coordinate system that efficiently reveals the geometric structure - in particular, the "connectivity" - of the data. These methods depend on certain tuning parameters. We analyze the dependence of the method on these tuning parameters. We focus on one particular technique - diffusion maps - but our analysis can be used for other methods as well. We identify the population quantities implicitly being estimated, we explain how these methods relate to classical kernel smoothing and we define an appropriate risk function for analyzing the estimators. We also show that, in some cases, fast rates of convergence are possible even in high dimensions.
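A minimal sketch of the diffusion-map construction being analyzed (dense NumPy linear algebra, suitable only for small datasets; the bandwidth eps plays the role of the tuning parameter discussed above).

import numpy as np

def diffusion_map(X, eps, n_coords=2, t=1):
    # Gaussian kernel -> row-stochastic Markov matrix -> leading
    # nontrivial eigenvectors, scaled by eigenvalues^t. The bandwidth
    # eps is the tuning parameter whose effect the paper analyzes.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-d2 / eps)
    P = K / K.sum(axis=1, keepdims=True)                 # Markov matrix
    vals, vecs = np.linalg.eig(P)                        # real spectrum here
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector (eigenvalue 1)
    return vecs[:, 1:n_coords + 1] * vals[1:n_coords + 1] ** t

X = np.random.default_rng(0).normal(size=(100, 3))
print(diffusion_map(X, eps=1.0).shape)  # (100, 2)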
Title: A computational model of affects
Abstract: This article provides a simple logical structure, in which affective concepts (i.e. concepts related to emotions and feelings) can be defined. The set of affects defined is similar to the set of emotions covered in the OCC model (Ortony A., Collins A., and Clore G. L.: The Cognitive Structure of Emotions. Cambridge University Press, 1988), but the model presented in this article is fully computationally defined.
Title: Balancing Exploration and Exploitation by an Elitist Ant System with Exponential Pheromone Deposition Rule
Abstract: The paper presents an exponential pheromone deposition rule to modify the basic ant system algorithm, which employs a constant deposition rule. A stability analysis using differential equations is carried out to find the parameter values that make the ant system dynamics stable for both kinds of deposition rule. A roadmap of connected cities is chosen as the problem environment, where the shortest route between two given cities is to be discovered. Simulations performed with both forms of deposition using the Elitist Ant System model reveal that the exponential deposition approach outperforms the classical one by a wide margin. Exhaustive experiments are also carried out to find the optimum settings of the different controlling parameters for the exponential deposition approach and to establish an empirical relationship between the major controlling parameters of the algorithm and some features of the problem environment.
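The exact exponential rule is not given in the abstract; the sketch below contrasts the classical constant (tour-length-proportional) deposition of the Ant System with one plausible exponential form, which should be read as an illustrative assumption rather than the paper's formula.

import math

def update_pheromone(tau, tours, rho=0.5, Q=1.0, exponential=False, k=0.1):
    # Evaporation on every edge, then deposition by each ant's tour.
    # Classical Ant System deposits Q / L per edge of a tour of length L;
    # the 'exponential' branch is only an illustrative guess at a
    # length-dependent exponential rule (the paper's exact formula is
    # not stated in the abstract).
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    for tour, length in tours:
        deposit = Q * math.exp(-k * length) if exponential else Q / length
        for edge in tour:
            tau[edge] += deposit
    return tau

tau = {(0, 1): 1.0, (1, 2): 1.0, (2, 0): 1.0}
tours = [([(0, 1), (1, 2)], 7.0)]                # one ant, tour of length 7
print(update_pheromone(tau, tours, exponential=True))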
Title: A Novel Parser Design Algorithm Based on Artificial Ants
Abstract: This article presents a unique design for a parser using the Ant Colony Optimization algorithm. The paper implements the intuitive thought process of the human mind through the activities of artificial ants. The scheme presented here uses a bottom-up approach, and the parsing program can directly use ambiguous or redundant grammars. We allocate a node corresponding to each production rule present in the given grammar. Each node is connected to all other nodes (representing the other production rules), thereby establishing a completely connected graph susceptible to the movement of artificial ants. Each ant tries to modify the current sentential form by applying the production rule present in its node, and upgrades its position until the sentential form reduces to the start symbol S. Successful ants deposit pheromone on the links that they have traversed. Eventually, the optimum path is identified as the links carrying the maximum pheromone concentration. The design is simple, versatile, robust and effective, and obviates the calculation of the customary parser construction sets and precedence relation tables. Further advantages of our scheme lie in i) ascertaining whether a given string belongs to the language represented by the grammar, and ii) finding the shortest possible path from the given string to the start symbol S in case multiple routes exist.
Title: Extension of Max-Min Ant System with Exponential Pheromone Deposition Rule
Abstract: The paper presents an exponential pheromone deposition approach to improve the performance of the classical Ant System algorithm, which employs a uniform deposition rule. A simplified analysis using differential equations is carried out to study the stability of the basic ant system dynamics with both exponential and constant deposition rules. A roadmap of connected cities, where the shortest path between two specified cities is to be found, is taken as a platform to compare the Max-Min Ant System model (an improved and popular variant of the Ant System algorithm) with exponential and constant deposition rules. Extensive simulations are performed to find the best parameter settings for the non-uniform deposition approach, and experiments with these parameter settings revealed that the above approach outstripped the traditional one by a wide margin in terms of both solution quality and convergence time.
Title: Entropy, Perception, and Relativity
Abstract: In this paper, I expand Shannon's definition of entropy into a new form of entropy that allows the integration of information from different random events. Shannon's notion of entropy is a special case of my more general definition of entropy. I define probability using a so-called performance function, which is de facto an exponential distribution. Assuming that my general notion of entropy reflects the true uncertainty about a probabilistic event, I argue that our perceived uncertainty differs. I claim that our perception is the result of two opposing forces, similar to the two famous antagonists in Chinese philosophy: Yin and Yang. Based on this idea, I show that our perceived uncertainty matches the true uncertainty at points determined by the golden ratio. I demonstrate that the well-known sigmoid function, which we typically employ in artificial neural networks as a non-linear threshold function, describes the actual performance. Furthermore, I provide a motivation for the time dilation in Einstein's Special Relativity, essentially claiming that although time dilation conforms with our perception, it does not correspond to reality. At the end of the paper, I show how to apply this theoretical framework to practical applications. I present recognition rates for a pattern recognition problem, and also propose a network architecture that can take advantage of the general entropy to solve complex decision problems.
Title: Effect of Tuned Parameters on a LSA MCQ Answering Model
Abstract: This paper presents the current state of a work in progress whose objective is to better understand the effects of the factors that significantly influence the performance of Latent Semantic Analysis (LSA). A difficult task, which consists in answering (French) biology Multiple Choice Questions, is used to test the semantic properties of the truncated singular space and to study the relative influence of the main parameters. Dedicated software has been designed to fine-tune the LSA semantic space for the Multiple Choice Questions task. With optimal parameters, the performance of our simple model is, quite surprisingly, equal or superior to that of 7th- and 8th-grade students. This indicates that the semantic spaces were quite good despite their low dimensions and the small sizes of the training data sets. In addition, we present an original global entropy weighting of the terms of each question's answers, which was necessary to achieve the model's success.
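A minimal sketch of this kind of LSA pipeline on toy data: TF-IDF weighting stands in for the paper's entropy weighting, TruncatedSVD builds the truncated singular space, and each answer option is scored by cosine similarity to the question (scikit-learn assumed available; corpus, question and options are placeholders).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

corpus = ["the cell membrane regulates transport",
          "mitochondria produce energy for the cell",
          "dna carries genetic information"]           # toy training corpus

vec = TfidfVectorizer()                  # stand-in for the entropy weighting
X = vec.fit_transform(corpus)
svd = TruncatedSVD(n_components=2, random_state=0)     # truncated singular space
svd.fit(X)

question = "what produces energy in the cell"
options = ["mitochondria", "dna", "membrane"]
q = svd.transform(vec.transform([question]))
scores = [cosine_similarity(q, svd.transform(vec.transform([o])))[0, 0]
          for o in options]
print(options[scores.index(max(scores))])  # option closest in semantic space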
Title: Experimental Designs with Minimal Kullback-Leibler Information (Plans d'expériences d'information de Kullback-Leibler minimale)
Abstract: Experimental designs are tools which can dramatically reduce the number of simulations required by time-consuming computer codes. Because the true relation between the response and the inputs is unknown, designs should allow one to fit a variety of models and should provide information about all portions of the experimental region. One strategy for selecting the values of the inputs at which to observe the response is to spread these values evenly throughout the experimental region, as in "space-filling designs". In this article, we suggest a new method based on comparing the empirical distribution of the points in a design to the uniform distribution via the Kullback-Leibler information. The approach considered consists of estimating this divergence or, equivalently, the Shannon entropy. The entropy is estimated by a Monte Carlo method in which the density function is replaced by its kernel density estimator, or by using nearest neighbor distances.
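One classical nearest-neighbor entropy estimator of the kind mentioned above is the Kozachenko-Leonenko estimator (quoted here in a standard form, not necessarily the exact variant used in the article): for a sample $x_1,\dots,x_n$ in $\mathbb{R}^d$ with $\rho_i$ the distance from $x_i$ to its nearest neighbor,
$$\hat H = \frac{d}{n}\sum_{i=1}^{n}\log \rho_i + \log\frac{\pi^{d/2}}{\Gamma(d/2+1)} + \log(n-1) + \gamma,$$
where $\gamma \approx 0.5772$ is the Euler-Mascheroni constant.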
Title: A Bit of Information Theory, and the Data Augmentation Algorithm Converges
Abstract: The data augmentation (DA) algorithm is a simple and powerful tool in statistical computing. In this note basic information theory is used to prove a nontrivial convergence theorem for the DA algorithm.
Title: Edhibou: a Customizable Interface for Decision Support in a Semantic Portal
Abstract: The Semantic Web is becoming more and more a reality, as the required technologies have reached an appropriate level of maturity. However, at this stage, it is important to provide tools facilitating the use and deployment of these technologies by end-users. In this paper, we describe EdHibou, an automatically generated, ontology-based graphical user interface that integrates into a semantic portal. The particularity of EdHibou is that it makes use of OWL reasoning capabilities to provide intelligent features, such as decision support, on top of the underlying ontology. We present an application of EdHibou to medical decision support based on a formalization of clinical guidelines in OWL and show how it can be customized thanks to an ontology of graphical components.
Title: Cooperative interface of a swarm of UAVs
Abstract: After presenting the broad context of authority sharing, we outline how introducing more natural interaction in the design of the ground operator interface (GOI) of unmanned vehicle (UV) systems should help a single operator manage the complexity of his or her task. Introducing new modalities is one of the means of realizing our vision of a next-generation GOI. A more fundamental aspect resides in the interaction manager, which should help balance the workload of the operator between mission and interaction, notably by applying a multi-strategy approach to generation and interpretation. We intend to apply these principles in the context of the Smaart prototype, and in this perspective, we illustrate how to characterize the workload associated with a particular operational situation.
Title: Document stream clustering: experimenting an incremental algorithm and AR-based tools for highlighting dynamic trends
Abstract: We address here two major challenges presented by dynamic data mining: 1) the stability challenge: we have implemented a rigorous incremental density-based clustering algorithm, independent of any initial conditions and of the ordering of the data-vector stream; 2) the cognitive challenge: we have implemented a stringent selection process of association rules between clusters at time t-1 and time t for directly generating the main conclusions about the dynamics of a data stream. We illustrate these points with an application to a scientific information database of 2,600 documents covering two years.
Title: Embedding Non-Ground Logic Programs into Autoepistemic Logic for Knowledge Base Combination
Abstract: In the context of the Semantic Web, several approaches to the combination of ontologies, given in terms of theories of classical first-order logic, and rule bases have been proposed. They either cast rules into classical logic or limit the interaction between rules and ontologies. Autoepistemic logic (AEL) is an attractive formalism which makes it possible to overcome these limitations by serving as a uniform host language into which ontologies and nonmonotonic logic programs can be embedded. For the latter, so far only the propositional setting has been considered. In this paper, we present three embeddings of normal and three embeddings of disjunctive non-ground logic programs under the stable model semantics into first-order AEL. While the embeddings all correspond with respect to objective ground atoms, differences arise when considering non-atomic formulas and combinations with first-order theories. We compare the embeddings with respect to stable expansions and autoepistemic consequences, considering the embeddings by themselves, as well as combinations with classical theories. Our results reveal differences and correspondences among the embeddings and provide useful guidance in the choice of a particular embedding for knowledge combination.
Title: CoZo+ - A Content Zoning Engine for textual documents
Abstract: Content zoning can be understood as a segmentation of textual documents into zones. This is inspired by [6], which initially proposed an approach for the argumentative zoning of textual documents. With the prototypical CoZo+ engine, we focus on content zoning towards an automatic processing of textual streams, considering only the actors as the zones. We gain information that can be used to realize an automatic recognition of content for pre-defined actors. We understand CoZo+ as a necessary pre-step towards the automatic generation of summaries and towards making the intellectual ownership of documents detectable.
Title: Hierarchical structure and the prediction of missing links in networks
Abstract: Networks have in recent years emerged as an invaluable tool for describing and quantifying complex systems in many branches of science. Recent studies suggest that networks often exhibit hierarchical organization, where vertices divide into groups that further subdivide into groups of groups, and so forth over multiple scales. In many cases these groups are found to correspond to known functional units, such as ecological niches in food webs, modules in biochemical networks (protein interaction networks, metabolic networks, or genetic regulatory networks), or communities in social networks. Here we present a general technique for inferring hierarchical structure from network data and demonstrate that the existence of hierarchy can simultaneously explain and quantitatively reproduce many commonly observed topological properties of networks, such as right-skewed degree distributions, high clustering coefficients, and short path lengths. We further show that knowledge of hierarchical structure can be used to predict missing connections in partially known networks with high accuracy, and for more general network structures than competing techniques. Taken together, our results suggest that hierarchy is a central organizing principle of complex networks, capable of offering insight into many network phenomena.
Title: Local antithetic sampling with scrambled nets
Abstract: We consider the problem of computing an approximation to the integral $I=\int_{[0,1]^d} f(x)\,dx$. Monte Carlo (MC) sampling typically attains a root mean squared error (RMSE) of $O(n^{-1/2})$ from $n$ independent random function evaluations. By contrast, quasi-Monte Carlo (QMC) sampling using carefully equispaced evaluation points can attain the rate $O(n^{-1+\varepsilon})$ for any $\varepsilon>0$, and randomized QMC (RQMC) can attain the RMSE $O(n^{-3/2+\varepsilon})$, both under mild conditions on $f$. Classical variance reduction methods for MC can be adapted to QMC. Published results combining QMC with importance sampling and with control variates have found worthwhile improvements, but no change in the error rate. This paper extends the classical variance reduction method of antithetic sampling and combines it with RQMC. One such method is shown to bring a modest improvement in the RMSE rate, attaining $O(n^{-3/2-1/d+\varepsilon})$ for any $\varepsilon>0$, for smooth enough $f$.
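The scrambled-net construction is beyond a short sketch, but the classical antithetic idea the paper extends is simple: pair each point $x$ with its reflection $1-x$ (toy integrand and sample sizes below are illustrative).

import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 10_000
f = lambda x: np.exp(x.sum(axis=1))        # smooth test integrand on [0,1]^d
exact = (np.e - 1.0) ** d                  # closed form of the integral

# Plain Monte Carlo: RMSE O(n^{-1/2}).
mc = f(rng.random((n, d))).mean()

# Classical antithetic sampling: average f over each point and its
# reflection 1 - x, using n function evaluations in total.
x = rng.random((n // 2, d))
anti = (0.5 * (f(x) + f(1.0 - x))).mean()

print(abs(mc - exact), abs(anti - exact))  # antithetic is typically closer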
Title: UNL-French deconversion as transfer & generation from an interlingua with possible quality enhancement through offline human interaction
Abstract: We present the architecture of the UNL-French deconverter, which "generates" from the UNL interlingua by first "localizing" the UNL form for French, within UNL, and then applying slightly adapted but classical transfer and generation techniques, implemented in GETA's Ariane-G5 environment, supplemented by some UNL-specific tools. Online interaction can be used during deconversion to enhance output quality and is now used for development purposes. We show how interaction could be delayed and embedded in the postedition phase, which would then interact not directly with the output text, but indirectly with several components of the deconverter. Interacting online or offline can improve the quality not only of the utterance at hand, but also of the utterances processed later, as various preferences may be automatically changed to let the deconverter "learn".
Title: Dynamic Classification of a Document Stream: A Preliminary Static Evaluation of the GERMEN Algorithm (Classification dynamique d'un flux documentaire : une évaluation statique préalable de l'algorithme GERMEN)
Abstract: Data-stream clustering is an ever-expanding subdomain of knowledge extraction. Most past and present research effort aims at efficient scaling up for huge data repositories. Our approach focuses on qualitative improvement, mainly for "weak signal" detection and precise tracking of topical evolutions in the framework of information watch, though scalability is intrinsically guaranteed in a possibly distributed implementation. Our GERMEN algorithm exhaustively picks up the whole set of density peaks of the data at time t by identifying the local perturbations induced by the current document vector, such as changing cluster borders or new/vanishing clusters. Optimality follows from the uniqueness 1) of the density landscape for any value of our zoom parameter, and 2) of the cluster allocation operated by our border propagation rule. This results in rigorous independence from the presentation order of the data and from any initialization parameter. As a first step, we present here an assessment of a static view resulting from one year of the CNRS/INIST Pascal database in the field of geotechnics.
Title: Estimation of missing data by using the filtering process in a time series modeling
Abstract: This paper proposes a new method to estimate missing data by using the filtering process. We used datasets without missing data and with randomly missing data to evaluate the new method of estimation, using the Box-Jenkins modeling technique to predict monthly average rainfall for site 5504035 Lahar Ikan Mati at the Kepala Batas, P. Pinang station in Malaysia. The rainfall data were collected at the station from 1st January 1969 to 31st December 1997. The data used in the development of the model to predict rainfall were represented by an autoregressive integrated moving-average (ARIMA) model. The model for both datasets was ARIMA$(1,0,0)(0,1,1)_s$. The results were checked with the naive test, Theil's U statistic, which was found to equal $U=0.72086$ for the complete data and $U=0.726352$ for the data with missing values, which means both were good models.
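A minimal sketch of fitting the same specification with a Kalman-filter-based implementation, which handles missing observations natively (statsmodels assumed available; the series below is synthetic, not the Lahar Ikan Mati rainfall data).

import numpy as np
import statsmodels.api as sm

# Synthetic monthly series with gaps (a stand-in for the rainfall data);
# the Kalman filter behind SARIMAX treats NaN entries as missing.
rng = np.random.default_rng(0)
y = 50 + 10 * np.sin(2 * np.pi * np.arange(240) / 12) + rng.normal(0, 3, 240)
y[rng.choice(240, size=20, replace=False)] = np.nan

# Same specification as the paper's model: ARIMA(1,0,0)(0,1,1) with s = 12.
model = sm.tsa.statespace.SARIMAX(y, order=(1, 0, 0),
                                  seasonal_order=(0, 1, 1, 12))
res = model.fit(disp=False)

# In-sample predictions provide estimates at the missing months.
filled = res.predict(start=0, end=len(y) - 1)
print(filled[np.isnan(y)][:5])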
Title: Cognitive OFDM network sensing: a free probability approach
Abstract: In this paper, a practical power detection scheme for OFDM terminals, based on recent free probability tools, is proposed. The objective is for the receiving terminal to determine the transmission power and the number of the surrounding base stations in the network. However, the system dimensions of the network model turn energy detection into an under-determined problem. The focus of this paper is then twofold: (i) discuss the maximum amount of information that an OFDM terminal can gather from the surrounding base stations in the network, and (ii) propose a practical solution for blind cell detection using the free deconvolution tool. The efficiency of this solution is measured through simulations, which show better performance than classical power detection methods.