Title: Algorithms for Dynamic Spectrum Access with Learning for Cognitive Radio
Abstract: We study the problem of dynamic spectrum sensing and access in cognitive radio systems as a partially observed Markov decision process (POMDP). A group of cognitive users cooperatively tries to exploit vacancies in primary (licensed) channels whose occupancies follow a Markovian evolution. We first consider the scenario where the cognitive users have perfect knowledge of the distribution of the signals they receive from the primary users. For this problem, we obtain a greedy channel selection and access policy that maximizes the instantaneous reward, while satisfying a constraint on the probability of interfering with licensed transmissions. We also derive an analytical universal upper bound on the performance of the optimal policy. Through simulation, we show that our scheme achieves good performance relative to the upper bound and improved performance relative to an existing scheme. We then consider the more practical scenario where the exact distribution of the signal from the primary is unknown. We assume a parametric model for the distribution and develop an algorithm that can learn the true distribution, still guaranteeing the constraint on the interference probability. We show that this algorithm outperforms the naive design that assumes a worst case value for the parameter. We also provide a proof for the convergence of the learning algorithm.
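A minimal sketch of the kind of greedy, constraint-respecting channel selection this abstract describes; the belief vector, reward model, and threshold rule below are illustrative assumptions, not the paper's exact policy.

```python
# Hypothetical sketch: pick the channel with the highest expected reward
# among those whose posterior probability of primary occupancy stays
# below the interference bound eps. All quantities are illustrative.
import numpy as np

def greedy_select(belief, reward, eps):
    """belief[i]: P(channel i occupied | sensing history); reward[i]:
    reward earned if channel i is accessed and is actually free."""
    belief = np.asarray(belief, dtype=float)
    reward = np.asarray(reward, dtype=float)
    allowed = belief < eps                  # interference constraint
    if not allowed.any():
        return None                         # no safe channel: stay silent
    expected = (1.0 - belief) * reward      # reward only when channel free
    expected[~allowed] = -np.inf
    return int(np.argmax(expected))

print(greedy_select([0.05, 0.40, 0.02], [1.0, 2.0, 0.8], eps=0.1))  # -> 0
```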
Title: ABC likelihood-free methods for model choice in Gibbs random fields
Abstract: Gibbs random fields (GRF) are polymorphous statistical models that can be used to analyse different types of dependence, in particular for spatially correlated data. However, when those models are faced with the challenge of selecting a dependence structure from many, the use of standard model choice methods is hampered by the unavailability of the normalising constant in the Gibbs likelihood. In particular, from a Bayesian perspective, the computation of the posterior probabilities of the models under competition requires special likelihood-free simulation techniques like the Approximate Bayesian Computation (ABC) algorithm that is intensively used in population genetics. We show in this paper how to implement an ABC algorithm geared towards model choice in the general setting of Gibbs random fields, demonstrating in particular that there exists a sufficient statistic across models. The accuracy of the approximation to the posterior probabilities can be further improved by importance sampling on the distribution of the models. The practical aspects of the method are detailed through two applications, the test of an iid Bernoulli model versus a first-order Markov chain, and the choice of a folding structure for two proteins.
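A toy sketch of ABC model choice for the abstract's first application (iid Bernoulli versus a two-state first-order Markov chain); the priors, summary statistics, and tolerance are illustrative assumptions, not the paper's construction.

```python
# Estimate the posterior probability of each model by keeping simulated
# model indices whose summary statistics fall close to the observed ones.
import numpy as np

rng = np.random.default_rng(0)

def stats(x):
    # joint summary: number of ones, number of equal adjacent pairs
    return np.array([x.sum(), (x[1:] == x[:-1]).sum()], dtype=float)

def simulate(m, n):
    if m == 0:                              # iid Bernoulli(p), p ~ U(0,1)
        return (rng.random(n) < rng.uniform()).astype(int)
    s = rng.uniform()                       # Markov "stay" probability
    x = np.empty(n, dtype=int)
    x[0] = rng.integers(2)
    for t in range(1, n):
        x[t] = x[t - 1] if rng.random() < s else 1 - x[t - 1]
    return x

def abc_model_choice(x_obs, n_sims=20000, tol=5.0):
    s_obs = stats(x_obs)
    kept = [m for m in rng.integers(2, size=n_sims)
            if np.linalg.norm(stats(simulate(m, x_obs.size)) - s_obs) <= tol]
    return np.mean(kept) if kept else float("nan")  # estimate of P(m=1 | x)

x_obs = simulate(1, 200)
print(abc_model_choice(x_obs))
```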
Title: Visual Grouping by Neural Oscillators
Abstract: Distributed synchronization is known to occur at several scales in the brain, and has been suggested as playing a key functional role in perceptual grouping. State-of-the-art visual grouping algorithms, however, seem to give comparatively little attention to neural synchronization analogies. Based on the framework of concurrent synchronization of dynamic systems, simple networks of neural oscillators coupled with diffusive connections are proposed to solve visual grouping problems. Multi-layer algorithms and feedback mechanisms are also studied. The same algorithm is shown to achieve promising results on several classical visual grouping problems, including point clustering, contour integration and image segmentation.
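A minimal Kuramoto-style sketch of grouping by synchronization, assuming coupling weighted by point similarity; the network, constants, and phase readout are illustrative, not the paper's oscillator model.

```python
# Oscillators coupled through a similarity matrix synchronize within
# clusters; constants and the phase readout are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
d2 = ((pts[:, None] - pts[None]) ** 2).sum(-1)
K = np.exp(-d2 / 0.5)                         # similarity-based coupling

theta = rng.uniform(0, 2 * np.pi, len(pts))
for _ in range(300):                          # Euler steps on phase dynamics
    dtheta = (K * np.sin(theta[None] - theta[:, None])).sum(axis=1)
    theta = (theta + 0.02 * dtheta) % (2 * np.pi)

# In-phase oscillators form a group: compare wrapped phase differences.
sync = np.abs(np.angle(np.exp(1j * (theta - theta[0])))) < 0.2
print(sync[:20].sum(), sync[20:].sum())       # first cluster locks to point 0
```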
Title: On Probability Distributions for Trees: Representations, Inference and Learning
Abstract: We study probability distributions over free algebras of trees. Probability distributions can be seen as particular (formal power) tree series [Berstel et al 82, Esik et al 03], i.e. mappings from trees to a semiring K. A widely studied class of tree series is the class of rational (or recognizable) tree series, which can be defined either in an algebraic way or by means of multiplicity tree automata. We argue that the algebraic representation is very convenient for modeling probability distributions over a free algebra of trees. First, as in the string case, the algebraic representation allows one to design learning algorithms for the whole class of probability distributions defined by rational tree series. Note that learning algorithms for rational tree series correspond to learning algorithms for weighted tree automata where both the structure and the weights are learned. Second, the algebraic representation can easily be extended to deal with unranked trees (like XML trees, where a symbol may have an unbounded number of children). Both properties are particularly relevant for applications: nondeterministic automata are required for the inference problem to be relevant (recall that Hidden Markov Models are equivalent to nondeterministic string automata), and current applications in Web Information Extraction, Web Services, and document processing involve unranked trees.
Title: A note on state space representations of locally stationary wavelet time series
Abstract: In this note we show that the locally stationary wavelet process can be decomposed into a sum of signals, each of which follows a moving average process with time-varying parameters. We then show that such moving average processes are equivalent to state space models with stochastic design components. Using a simple simulation step, we propose a heuristic method of estimating the above state space models and then we apply the methodology to foreign exchange rates data.
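As a hedged reminder of the two generic forms involved (the paper's exact mapping is not reproduced here): a moving average process with time-varying coefficients, and a linear state space model whose design components may be stochastic.

```latex
% Generic forms only; the paper's exact construction may differ.
y_t = \sum_{j=0}^{q} \psi_{j,t}\, z_{t-j}, \quad z_t \sim \mathrm{WN}(0,\sigma^2)
\qquad\longleftrightarrow\qquad
\begin{aligned}
\alpha_{t+1} &= T_t\,\alpha_t + R_t\,\eta_t,\\
y_t &= Z_t\,\alpha_t + \varepsilon_t,
\end{aligned}
```

where "stochastic design" refers to randomness entering through system matrices such as $Z_t$.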
Title: Principle of detailed balance and convergence assessment of Markov Chain Monte Carlo methods and simulated annealing
Abstract: Markov Chain Monte Carlo (MCMC) methods are employed to sample from a given distribution of interest whenever either the distribution does not exist in closed form or, if it does, no efficient method to simulate an independent sample from it is available. Although a wealth of diagnostic tools for convergence assessment of MCMC methods has been proposed in the last two decades, the search for a dependable and easy-to-implement tool is ongoing. We present in this article a criterion based on the principle of detailed balance which provides a qualitative assessment of the convergence of a given chain. The criterion is based on the behaviour of a one-dimensional statistic, whose asymptotic distribution under the assumption of stationarity is derived; our results apply under weak conditions and have the advantage of being completely intuitive. We implement this criterion as a stopping rule for simulated annealing in the problem of finding maximum likelihood estimators for the parameters of a 20-component mixture model. We also apply it to the problem of sampling from a 10-dimensional funnel distribution via slice sampling and the Metropolis-Hastings algorithm. Furthermore, based on this convergence criterion, we define a measure of the efficiency of one algorithm relative to another.
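The principle the criterion builds on is detailed balance, which can be stated in one line: at stationarity, the probability flow from x to y equals the flow from y to x.

```latex
% Detailed balance for a Markov kernel K with stationary density \pi.
\pi(x)\,K(x,y) \;=\; \pi(y)\,K(y,x) \qquad \text{for all } x, y .
```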
Title: The NAO humanoid: a combination of performance and affordability
Abstract: This article presents the design of the autonomous humanoid robot called NAO that is built by the French company Aldebaran-Robotics. With a height of 0.57 m and a weight of about 4.5 kg, this innovative robot is lightweight and compact. It distinguishes itself from its existing Japanese, American, and other counterparts thanks to its pelvis kinematics design, its proprietary actuation system based on brush DC motors, and its electronic, computing, and distributed software architectures. This robot has been designed to be affordable without sacrificing quality and performance. It is an open and easy-to-handle platform where the user can change all the embedded system software or just add applications to make the robot adopt specific behaviours. The robot's head and forearms are modular and can be changed to promote further evolution. The comprehensive and functional design is one of the reasons NAO was selected to replace the AIBO quadrupeds in the 2008 RoboCup standard league.
Title: Exploiting Bird Locomotion Kinematics Data for Robotics Modeling
Abstract: We present here the results of an analysis carried out by biologists and roboticists with the aim of modeling bird locomotion kinematics for robotics purposes. The aim was to develop a bio-inspired kinematic model of the bird leg from biological data. We first acquired and processed kinematic data for sagittal and top views obtained by X-ray radiography of quails walking. Data processing involved filtering and specific data reconstruction in three dimensions, as two-dimensional views cannot be synchronized. We then designed a robotic model of a bird-like leg based on a kinematic analysis of the biological data. Angular velocity vectors were calculated to define the number of degrees of freedom (DOF) at each joint and the orientation of the rotation axes.
Title: Constructing a Knowledge Base for Gene Regulatory Dynamics by Formal Concept Analysis Methods
Abstract: Our aim is to build a set of rules such that reasoning over temporal dependencies within gene regulatory networks becomes possible. The underlying transitions may be obtained by discretizing observed time series, or they may be generated from existing knowledge, e.g. by Boolean networks or their nondeterministic generalization. We use the mathematical discipline of formal concept analysis (FCA), which has been applied successfully in domains such as knowledge representation, data mining and software engineering. Through the attribute exploration algorithm, an expert or a supporting computer program is enabled to decide on the validity of a minimal set of implications and thus to construct a sound and complete knowledge base, from which all valid implications relating to the selected properties of a set of genes are derivable. We present results of our method for the initiation of sporulation in Bacillus subtilis. However, the formal structures are exhibited in a fully general manner, so the approach may be adapted to signal transduction or metabolic networks, as well as to discrete temporal transitions in many biological and nonbiological areas.
Title: Universal Denoising of Discrete-time Continuous-Amplitude Signals
Abstract: We consider the problem of reconstructing a discrete-time signal (sequence) with continuous-valued components corrupted by a known memoryless channel. When performance is measured using a per-symbol loss function satisfying mild regularity conditions, we develop a sequence of denoisers that, although independent of the distribution of the underlying "clean" sequence, is universally optimal in the limit of large sequence length. This sequence of denoisers is universal in the sense of performing as well as any sliding window denoising scheme that may be optimized for the underlying clean signal. Our results are initially developed in a "semi-stochastic" setting, where the noiseless signal is an unknown individual sequence and the only source of randomness is the channel noise. It is subsequently shown that in the fully stochastic setting, where the noiseless sequence is a stationary stochastic process, our schemes universally attain optimum performance. The proposed schemes draw from nonparametric density estimation techniques and are practically implementable. We demonstrate the efficacy of the proposed schemes in denoising gray-scale images in the conventional additive white Gaussian noise setting, with additional promising results for less conventional noise distributions.
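A minimal sketch of the sliding-window idea: estimate each sample by a kernel-weighted average over samples whose surrounding noisy contexts look similar. The window size, bandwidth, and averaging rule are illustrative assumptions, not the paper's denoiser.

```python
# Context-based denoising: samples with similar neighborhoods get
# averaged together; all constants here are illustrative.
import numpy as np

def context_denoise(y, k=2, h=0.5):
    n = len(y)
    ctx = np.array([np.r_[y[i-k:i], y[i+1:i+k+1]] for i in range(k, n - k)])
    centers = y[k:n-k]
    out = y.astype(float).copy()
    for i in range(len(ctx)):
        w = np.exp(-((ctx - ctx[i]) ** 2).sum(1) / (2 * h * h))
        out[i + k] = (w * centers).sum() / w.sum()   # kernel-weighted mean
    return out

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 50)               # piecewise-constant signal
noisy = clean + rng.normal(0, 0.3, clean.size)
print(np.abs(context_denoise(noisy) - clean).mean())
```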
Title: Elastic-Net Regularization in Learning Theory
Abstract: Within the framework of statistical learning theory, we analyze in detail the so-called elastic-net regularization scheme proposed by Zou and Hastie for the selection of groups of correlated variables. To investigate the statistical properties of this scheme, and in particular its consistency properties, we set up a suitable mathematical framework. Our setting is random-design regression where we allow the response variable to be vector-valued, and we consider prediction functions which are linear combinations of elements (features) in an infinite-dimensional dictionary. Under the assumption that the regression function admits a sparse representation on the dictionary, we prove that there exists a particular "elastic-net representation" of the regression function such that, as the number of data points increases, the elastic-net estimator is consistent not only for prediction but also for variable/feature selection. Our results include finite-sample bounds and an adaptive scheme to select the regularization parameter. Moreover, using convex analysis tools, we derive an iterative thresholding algorithm for computing the elastic-net solution which is different from the optimization procedure originally proposed by Zou and Hastie.
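One standard form of the elastic-net problem and of an iterative soft-thresholding step for it; the paper's algorithm may differ in its details, so this is a generic sketch rather than the authors' scheme.

```latex
% Elastic-net objective and a proximal-gradient (iterative soft
% thresholding) update with step size \tau; generic form only.
\min_{\beta}\; \|\Phi\beta - y\|^2 + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2,
\qquad
\beta^{(k+1)} = S_{\tau\lambda_1}\!\Bigl(\beta^{(k)} - 2\tau\bigl(\Phi^{\top}(\Phi\beta^{(k)} - y) + \lambda_2 \beta^{(k)}\bigr)\Bigr),
```

where $S_\mu(u)_j = \operatorname{sign}(u_j)\max(|u_j| - \mu, 0)$ is the componentwise soft-thresholding operator.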
Title: Inference with Discriminative Posterior
Abstract: We study Bayesian discriminative inference given a model family $p(c, x, \theta)$ that is assumed to contain all our prior information but is still known to be incorrect. This falls in between "standard" Bayesian generative modeling and Bayesian regression, where the marginal $p(x, \theta)$ is known to be uninformative about $p(c|x, \theta)$. We give an axiomatic proof that the discriminative posterior is consistent for conditional inference; using the discriminative posterior is standard practice in classical Bayesian regression, but we show that it is theoretically justified for model families of joint densities as well. A practical benefit compared to Bayesian regression is that the standard methods of handling missing values in generative modeling can be extended to discriminative inference, which is useful when the amount of data is small. Compared to standard generative modeling, the discriminative posterior yields better conditional inference if the model family is incorrect. If the model family also contains the true model, the discriminative posterior gives the same result as standard Bayesian generative modeling. Practical computation is done with Markov chain Monte Carlo.
Title: Implementing general belief function framework with a practical codification for low complexity
Abstract: In this chapter, we propose a new practical codification of the elements of the Venn diagram in order to easily manipulate the focal elements. To reduce complexity, any constraints must be integrated into the codification from the outset. Hence, we only consider a reduced hyper power set $D_r^\Theta$ that can be $2^\Theta$ or $D^\Theta$. We describe all the steps of a general belief function framework. The decision step is studied in particular detail: when we can decide on intersections of the singletons of the discernment space, no existing decision function is easy to use. Hence, two approaches are proposed: an extension of a previous one, and an approach based on the specificity of the elements on which to decide. The principal goal of this chapter is to provide practical code for a general belief function framework to researchers and users who need belief function theory.
Title: TuLiPA: Towards a Multi-Formalism Parsing Environment for Grammar Engineering
Abstract: In this paper, we present an open-source parsing environment (Tuebingen Linguistic Parsing Architecture, TuLiPA) which uses Range Concatenation Grammar (RCG) as a pivot formalism, thus opening the way to the parsing of several mildly context-sensitive formalisms. This environment currently supports tree-based grammars (namely Tree-Adjoining Grammars (TAG) and Multi-Component Tree-Adjoining Grammars with Tree Tuples (TT-MCTAG)) and allows computation not only of syntactic structures, but also of the corresponding semantic representations. It is used for the development of a tree-based grammar for German.
Title: A new probabilistic transformation of belief mass assignment
Abstract: In this paper, we propose in Dezert-Smarandache Theory (DSmT) framework, a new probabilistic transformation, called DSmP, in order to build a subjective probability measure from any basic belief assignment defined on any model of the frame of discernment. Several examples are given to show how the DSmP transformation works and we compare it to main existing transformations proposed in the literature so far. We show the advantages of DSmP over classical transformations in term of Probabilistic Information Content (PIC). The direct extension of this transformation for dealing with qualitative belief assignments is also presented.
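For context, one classical probabilistic transformation that proposals such as DSmP are typically compared against is the pignistic transformation, stated here for a normalized mass assignment with $m(\emptyset) = 0$; this is not the DSmP formula itself.

```latex
% Pignistic probability of a singleton x in the frame \Theta:
% each focal element's mass is split equally among its elements.
\mathrm{BetP}(x) \;=\; \sum_{A \subseteq \Theta,\; x \in A} \frac{m(A)}{|A|} .
```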
Title: Data spectroscopy: Eigenspaces of convolution operators and clustering
Abstract: This paper focuses on obtaining clustering information about a distribution from its i.i.d. samples. We develop theoretical results to understand and use clustering information contained in the eigenvectors of data adjacency matrices based on a radial kernel function with a sufficiently fast tail decay. In particular, we provide population analyses to gain insights into which eigenvectors should be used and when the clustering information for the distribution can be recovered from the sample. We learn that a fixed number of top eigenvectors might at the same time contain redundant clustering information and miss relevant clustering information. We use this insight to design the data spectroscopic clustering (DaSpec) algorithm that utilizes properly selected eigenvectors to determine the number of clusters automatically and to group the data accordingly. Our findings extend the intuitions underlying existing spectral techniques such as spectral clustering and Kernel Principal Components Analysis, and provide new insight into their usability and modes of failure. Simulation studies and experiments on real-world data are conducted to show the potential of our algorithm. In particular, DaSpec is found to handle unbalanced groups and recover clusters of different shapes better than the competing methods.
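A minimal sketch in the spirit of eigenvector-based clustering with a radial kernel; the selection rule here (assign each point to the leading eigenvector with the largest-magnitude entry) is an illustrative simplification, not the DaSpec algorithm itself.

```python
# Cluster by inspecting eigenvectors of a Gaussian kernel matrix.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.4, (30, 2)), rng.normal(2, 0.4, (30, 2))])
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-d2 / (2 * 0.5 ** 2))             # radial (Gaussian) kernel

vals, vecs = np.linalg.eigh(K)               # eigenvalues in ascending order
top = vecs[:, -2:]                           # two leading eigenvectors
labels = np.abs(top).argmax(1)               # follow the dominant eigenvector
print(labels[:30], labels[30:])
```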
Title: A path following algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE)
Abstract: Given n observations of a p-dimensional random vector, the covariance matrix and its inverse (precision matrix) are needed in a wide range of applications. The sample covariance (e.g. its eigenstructure) can misbehave when p is comparable to the sample size n. Regularization is often used to mitigate the problem. In this paper, we propose an l1-norm penalized pseudo-likelihood estimate for the inverse covariance matrix. This estimate is sparse due to the l1-norm penalty, and we term the method SPLICE. Its regularization path can be computed via an algorithm based on the homotopy/LARS-Lasso algorithm. Simulation studies are carried out for various inverse covariance structures for p=15 and n=20 and 1000. We compare SPLICE with the l1-norm penalized likelihood estimate and an l1-norm penalized Cholesky decomposition based method. SPLICE gives the best overall performance in terms of three metrics on the precision matrix and the ROC curve for model selection. Moreover, our simulation results demonstrate that the SPLICE estimates are positive-definite for most of the regularization path even though this restriction is not enforced.
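A sketch in the spirit of l1-penalized pseudo-likelihood estimation of a precision matrix: regress each coordinate on the others with a lasso penalty and read off the sparsity pattern. This follows the neighborhood-regression idea, not SPLICE's exact symmetrized path algorithm.

```python
# Recover the nonzero pattern of a precision matrix via per-coordinate
# lasso regressions; lam and the AND symmetrization rule are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_precision_pattern(X, lam=0.1):
    n, p = X.shape
    B = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = Lasso(alpha=lam).fit(X[:, others], X[:, j])
        B[j, others] = fit.coef_
    return (B != 0) & (B != 0).T        # symmetrize by an AND rule

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 15))
print(sparse_precision_pattern(X).sum())
```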
Title: Formal semantics of language and the Richard-Berry paradox
Abstract: The classical logical antinomy known as the Richard-Berry paradox is combined with plausible assumptions about the size, i.e. the descriptional complexity, of Turing machines formalizing certain sentences, to show that the formalization of language leads to a contradiction.
Title: A Distributed Process Infrastructure for a Distributed Data Structure
Abstract: The Resource Description Framework (RDF) is continuing to grow outside the bounds of its initial function as a metadata framework and into the domain of general-purpose data modeling. This expansion has been facilitated by the continued increase in the capacity and speed of RDF database repositories known as triple-stores. High-end RDF triple-stores can hold and process on the order of 10 billion triples. In an effort to provide a seamless integration of the data contained in RDF repositories, the Linked Data community is providing specifications for linking RDF data sets into a universal distributed graph that can be traversed by both man and machine. While the seamless integration of RDF data sets is important, at the scale of the data sets that currently exist, and to which they will ultimately grow, the "download and index" philosophy of the World Wide Web will not map so easily onto the Semantic Web. This essay discusses the importance of adding a distributed RDF process infrastructure to the current distributed RDF data structure.
Title: Prediction of multivariate responses with a select number of principal components
Abstract: This paper proposes a new method and algorithm for predicting multivariate responses in a regression setting. Research into the classification of High Dimension Low Sample Size (HDLSS) data, in particular microarray data, has made considerable advances, but regression prediction for high-dimensional data with continuous responses has had less attention. Recently, Bair et al. (2006) proposed an efficient prediction method based on supervised principal component regression (PCR). Motivated by the fact that a larger number of principal components can result in better regression performance, this paper extends the method of Bair et al. in several ways: a comprehensive variable ranking is combined with a selection of the best number of components for PCR, and the new method further extends to regression with multivariate responses. The new method is particularly suited to HDLSS problems. Applications to simulated and real data demonstrate the performance of the new method. Comparisons with Bair et al. (2006) show that, for high-dimensional data in particular, the new ranking results in a smaller number of predictors and smaller errors.
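A minimal sketch of supervised principal component regression for a single response: rank predictors by univariate association, keep the top ones, and regress on their leading principal components. The ranking score, the fixed cut-offs, and the univariate response are illustrative assumptions; the paper's method selects components more carefully and handles multivariate responses.

```python
# Supervised PCR sketch for HDLSS data (p >> n); constants are illustrative.
import numpy as np

def supervised_pcr(X, y, n_keep=50, n_comp=3):
    score = np.abs(np.corrcoef(X.T, y)[-1, :-1])   # univariate association
    keep = np.argsort(score)[-n_keep:]             # top-ranked predictors
    Xk = X[:, keep] - X[:, keep].mean(0)
    U, s, Vt = np.linalg.svd(Xk, full_matrices=False)
    Z = Xk @ Vt[:n_comp].T                         # leading components
    coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(y)), Z], y, rcond=None)
    return keep, Vt[:n_comp], coef

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 500))                     # HDLSS: n=40, p=500
y = X[:, :5].sum(1) + rng.normal(0, 0.1, 40)
print(supervised_pcr(X, y)[2])
```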
Title: Estimating a difference between Kullback-Leibler risks by a normalized difference of AIC
Abstract: AIC is commonly used for model selection, but the precise value of AIC has no direct interpretation. We are interested in quantifying a difference of risks between two models. This may be useful both from an explanatory point of view and for prediction, where a simpler model may be preferred if it does nearly as well as a more complex model. The difference of risks can be interpreted by linking the risks with relative errors in the computation of probabilities and looking at the values obtained for simple models. A scale of values going from negligible to large is proposed. We propose a normalization of a difference of Akaike criteria for estimating the difference of expected Kullback-Leibler risks between maximum likelihood estimators of the distribution in two different models. The variability of this statistic can be estimated, and thus an interval can be constructed which contains the true difference of expected Kullback-Leibler risks with a pre-specified probability. A simulation study shows that the method works, and it is illustrated on two examples. The first is a study of the relationship between body-mass index and depression in elderly people. The second is the choice between models of HIV dynamics, where one model makes the distinction between activated CD4+ T lymphocytes and the other does not.
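One plausible normalization consistent with the abstract (an assumption for illustration, not the paper's exact statistic or its variance estimate): with $\mathrm{AIC}_k = -2\log L_k + 2p_k$ computed from $n$ observations, a rescaled difference estimates the difference of expected Kullback-Leibler risks between the two models.

```latex
% Illustrative normalization of an AIC difference on n observations.
\widehat{\Delta} \;=\; \frac{\mathrm{AIC}_1 - \mathrm{AIC}_2}{2n} .
```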
Title: Positive factor networks: A graphical framework for modeling non-negative sequential data
Abstract: We present a novel graphical framework for modeling non-negative sequential data with hierarchical structure. Our model corresponds to a network of coupled non-negative matrix factorization (NMF) modules, which we refer to as a positive factor network (PFN). The data model is linear, subject to non-negativity constraints, so that observation data consisting of an additive combination of individually representable observations is also representable by the network. This is a desirable property for modeling problems in computational auditory scene analysis, since distinct sound sources in the environment are often well-modeled as combining additively in the corresponding magnitude spectrogram. We propose inference and learning algorithms that leverage existing NMF algorithms and that are straightforward to implement. We present a target tracking example and provide results for synthetic observation data which serve to illustrate the interesting properties of PFNs and motivate their potential usefulness in applications such as music transcription, source separation, and speech recognition. We show how a target process characterized by a hierarchical state transition model can be represented as a PFN. Our results illustrate that a PFN which is defined in terms of a single target observation can then be used to effectively track the states of multiple simultaneous targets. Our results show that the quality of the inferred target states degrades gradually as the observation noise is increased. We also present results for an example in which meaningful hierarchical features are extracted from a spectrogram. Such a hierarchical representation could be useful for music transcription and source separation applications. We also propose a network for language modeling.
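The NMF building block that the PFN modules couple can be sketched with the standard Lee-Seung multiplicative updates for the Frobenius objective; dimensions, iteration count, and initialization are illustrative, and the coupling between modules is not shown.

```python
# Nonnegative matrix factorization V ~ W H via multiplicative updates.
import numpy as np

def nmf(V, r, iters=200, eps=1e-9):
    rng = np.random.default_rng(0)
    n, m = V.shape
    W, H = rng.random((n, r)), rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W nonnegative
    return W, H

V = np.abs(np.random.default_rng(1).normal(size=(20, 30)))
W, H = nmf(V, r=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative residual
```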
Title: QR-Adjustment for Clustering Tests Based on Nearest Neighbor Contingency Tables
Abstract: The spatial interaction between two or more classes of points may cause spatial clustering patterns such as segregation or association, which can be tested using a nearest neighbor contingency table (NNCT). A NNCT is constructed using the frequencies of class types of points in nearest neighbor (NN) pairs. For the NNCT-tests, the null pattern is either complete spatial randomness (CSR) of the points from two or more classes (called CSR independence) or random labeling (RL). The distributions of the NNCT-test statistics depend on the number of reflexive NNs (denoted by $R$) and the number of shared NNs (denoted by $Q$), both of which depend on the allocation of the points. Hence $Q$ and $R$ are fixed quantities under RL, but random variables under CSR independence. Using their observed values in NNCT analysis makes the distributions of the NNCT-test statistics conditional on $Q$ and $R$ under CSR independence. In this article, I use the empirically estimated expected values of $Q$ and $R$ under the CSR independence pattern to remove the conditioning of the NNCT-tests (such a correction is called the QR-adjustment, henceforth). I present a Monte Carlo simulation study to compare the conditional NNCT-tests and QR-adjusted tests under CSR independence and under segregation and association alternatives. I demonstrate that QR-adjustment does not significantly improve the empirical size estimates under CSR independence or the power estimates under segregation or association alternatives. For illustrative purposes, I apply the conditional and empirically corrected tests to two example data sets.
Title: On the Use of Nearest Neighbor Contingency Tables for Testing Spatial Segregation
Abstract: For two or more classes (or types) of points, nearest neighbor contingency tables (NNCTs) are constructed using nearest neighbor (NN) frequencies and are used in testing spatial segregation of the classes. Pielou's test of independence and Dixon's cell-specific, class-specific, and overall tests are the tests based on NNCTs (i.e., they are NNCT-tests). These tests are designed and intended for use under the null pattern of random labeling (RL) of completely mapped data. However, it has been shown that Pielou's test is not appropriate for testing segregation against the RL pattern while Dixon's tests are. In this article, we compare Pielou's and Dixon's NNCT-tests; introduce the one-sided versions of Pielou's test; and extend the use of NNCT-tests to testing complete spatial randomness (CSR) of points from two or more classes (which is called CSR independence, henceforth). We assess the finite sample performance of the tests by an extensive Monte Carlo simulation study and demonstrate that Dixon's tests are also appropriate for testing CSR independence, but that Pielou's test and the corresponding one-sided versions are liberal for testing CSR independence or RL. Furthermore, we show that Pielou's tests are only appropriate when the NNCT is based on a random sample of (base, NN) pairs. We also prove the consistency of the tests under their appropriate null hypotheses. Moreover, we investigate the edge (or boundary) effects on the NNCT-tests and compare the buffer zone and toroidal edge correction methods for these tests. We illustrate the tests on a real-life data set and an artificial data set.
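A sketch of the basic data structure both of these abstracts build on: the NNCT, whose entry (i, j) counts points of class i whose nearest neighbor belongs to class j. The toy data below are illustrative; the tests themselves are not implemented here.

```python
# Build a nearest neighbor contingency table with a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def nnct(points, labels):
    classes = np.unique(labels)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=2)     # k=2: first hit is the point itself
    nn_labels = labels[idx[:, 1]]
    T = np.zeros((classes.size, classes.size), dtype=int)
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            T[i, j] = ((labels == ci) & (nn_labels == cj)).sum()
    return T

rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(0.5, 1, (50, 2))])
lab = np.repeat([0, 1], 50)
print(nnct(pts, lab))
```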
Title: Design of an avoider robot to extinguish fire with the DT-BASIC mini system
Abstract: An avoider robot is a robot designed to avoid obstacles around it. In addition, this robot carries an extra application to extinguish fire. The robot is built with PING ultrasonic sensors mounted on its front, right, and left sides. The robot uses these sensors to find a clear path so that it can keep moving. Once it has found a clear path, the robot searches for fire in its surroundings and then extinguishes the fire with a fan. The robot is built around a BASIC Stamp 2 microcontroller, which is found in the DT-BASIC mini system module, and uses servo motors on its right and left sides for locomotion.
Title: On Introspection, Metacognitive Control and Augmented Data Mining Live Cycles
Abstract: We discuss metacognitive modelling as an enhancement to cognitive modelling and computing. Metacognitive control mechanisms should enable AI systems to self-reflect, reason about their actions, and to adapt to new situations. In this respect, we propose implementation details of a knowledge taxonomy and an augmented data mining life cycle which supports a live integration of obtained models.
Title: An Image-Based Sensor System for Autonomous Rendez-Vous with Uncooperative Satellites
Abstract: This paper describes the image processing algorithms developed by SENER, Ingenieria y Sistemas, to cope with the problem of image-based, autonomous rendez-vous (RV) with an orbiting satellite. The methods developed have a direct application in the OLEV (Orbital Life Extension Vehicle) mission. OLEV is a commercial mission under development by a consortium formed by Swedish Space Corporation, Kayser-Threde and SENER, aimed at extending the operational life of geostationary telecommunication satellites by supplying them with control, navigation and guidance services. OLEV is planned to use a set of cameras to determine the angular position of and distance to the client satellite during the complete rendez-vous and docking phases, thus enabling operation with satellites not equipped with any specific navigational aid to support the approach. The ability to operate with unequipped client satellites significantly expands the range of applicability of the system under development, compared to other competing video technologies already tested in previous space missions, such as those described below.
Title: AceWiki: A Natural and Expressive Semantic Wiki
Abstract: We present AceWiki, a prototype of a new kind of semantic wiki using the controlled natural language Attempto Controlled English (ACE) for representing its content. ACE is a subset of English with a restricted grammar and a formal semantics. The use of ACE has two important advantages over existing semantic wikis. First, we can improve the usability and achieve a shallow learning curve. Second, ACE is more expressive than the formal languages of existing semantic wikis. Our evaluation shows that people who are not familiar with the formal foundations of the Semantic Web are able to deal with AceWiki after a very short learning phase and without the help of an expert.
Title: AceWiki: Collaborative Ontology Management in Controlled Natural Language
Abstract: AceWiki is a prototype that shows how a semantic wiki using controlled natural language - Attempto Controlled English (ACE) in our case - can make ontology management easy for everybody. Sentences in ACE can automatically be translated into first-order logic, OWL, or SWRL. AceWiki integrates the OWL reasoner Pellet and ensures that the ontology is always consistent. Previous results have shown that people with no background in logic are able to add formal knowledge to AceWiki without being instructed or trained in advance.
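As a hypothetical illustration of the kind of ACE-to-first-order-logic mapping the abstract refers to (the sentence is not taken from AceWiki's documentation):

```latex
% ACE sentence: "Every man owns a car."  Its first-order reading:
\forall x \,\bigl( \mathit{man}(x) \rightarrow \exists y \,(\mathit{car}(y) \wedge \mathit{owns}(x, y)) \bigr)
```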
Title: Towards a unification theory for cognitive behaviors (Hacia una teoria de unificacion para los comportamientos cognitivos)
Abstract: Each cognitive science tries to understand a set of cognitive behaviors. The structuring of knowledge about this aspect of nature is far from what one would expect of a science. No universal standard that consistently describes the set of cognitive behaviors has yet been found, and there are many questions about cognitive behaviors for which only the opinions of members of the scientific community exist. This article makes three proposals. The first is to raise with the scientific community the need to unify the cognitive behaviors. The second is to call for the application of Newton's rules of reasoning about nature, from his book Philosophiae Naturalis Principia Mathematica, to cognitive behaviors. The third is to propose a scientific theory, currently under development, that follows the rules established by Newton for making sense of nature and that could be the theory that explains all cognitive behaviors.
Title: Covariance fields
Abstract: We introduce and study covariance fields of distributions on a Riemannian manifold. At each point on the manifold, covariance is defined to be a symmetric and positive definite (2,0)-tensor. Its product with the metric tensor specifies a linear operator on the respective tangent space. Collectively, these operators form a covariance operator field. We show that, in most circumstances, covariance fields are continuous. We also solve the inverse problem: recovering a distribution from a covariance field. Surprisingly, this is not possible on Euclidean spaces. On non-Euclidean manifolds, however, covariance fields are true distribution representations.
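A sketch of the construction in coordinates (the index conventions are an assumption for illustration): the covariance (2,0)-tensor $C^{ij}(p)$, contracted with the metric $g_{jk}$, yields a linear operator on the tangent space at $p$.

```latex
% Contracting the covariance tensor with the metric gives a (1,1)-tensor,
% i.e. a linear map on each tangent space.
(C\,g)^{i}{}_{k} \;=\; C^{ij}(p)\, g_{jk}(p), \qquad v^{k} \mapsto (C\,g)^{i}{}_{k}\, v^{k} .
```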
Title: An image processing analysis of skin textures
Abstract: Colour and coarseness of skin are visually different. When image processing is involved in skin analysis, it is important to evaluate such differences quantitatively using texture features. In this paper, we discuss texture analysis and measurements based on a statistical approach to pattern recognition. Grain size and anisotropy are evaluated with appropriate diagrams. The possibility of determining the presence of pattern defects is also discussed.
Title: On an Auxiliary Function for Log-Density Estimation
Abstract: In this note we provide explicit expressions and expansions for a special function which appears in nonparametric estimation of log-densities. This function returns the integral of a log-linear function on a simplex of arbitrary dimension. In particular, it is used in the R package "LogConcDEAD" by Cule et al. (2007).