Title: Stiffness Analysis of 3-d.o.f. Overconstrained Translational Parallel Manipulators
Abstract: The paper presents a new stiffness modelling method for overconstrained parallel manipulators, which is applied to 3-d.o.f. translational mechanisms. It is based on a multidimensional lumped-parameter model that replaces the link flexibility by localized 6-d.o.f. virtual springs. In contrast to other works, the method includes a FEA-based link stiffness evaluation and employs a new solution strategy of the kinetostatic equations, which allows computing the stiffness matrix for the overconstrained architectures and for the singular manipulator postures. The advantages of the developed technique are confirmed by application examples, which deal with comparative stiffness analysis of two translational parallel manipulators.
Title: Classification Constrained Dimensionality Reduction
Abstract: Dimensionality reduction is a topic of recent interest. In this paper, we present the classification constrained dimensionality reduction (CCDR) algorithm to account for label information. The algorithm can account for multiple classes as well as the semi-supervised setting. We present out-of-sample expressions for both labeled and unlabeled data. For unlabeled data, we introduce a method of embedding a new point as preprocessing to a classifier. For labeled data, we introduce a method that improves the embedding during the training phase using the out-of-sample extension. We investigate classification performance using the CCDR algorithm on hyper-spectral satellite imagery data. We demonstrate the performance gain for both local and global classifiers and demonstrate a 10% improvement of the $k$-nearest neighbors algorithm performance. We present a connection between intrinsic dimension estimation and the optimal embedding dimension obtained using the CCDR algorithm.
Title: Tellipsoid: Exploiting inter-gene correlation for improved detection of differential gene expression
Abstract: Motivation: Algorithms for differential analysis of microarray data are vital to modern biomedical research. Their accuracy strongly depends on effective treatment of inter-gene correlation. Correlation is ordinarily accounted for in terms of its effect on significance cut-offs. In this paper it is shown that correlation can, in fact, be exploited to share information across tests, which, in turn, can increase statistical power. Results: Vastly and demonstrably improved differential analysis approaches are the result of combining identifiability (the fact that in most microarray data sets, a large proportion of genes can be identified a priori as non-differential) with optimization criteria that incorporate correlation. As a special case, we develop a method which builds upon the widely used two-sample t-statistic based approach and uses the Mahalanobis distance as an optimality criterion. Results on the prostate cancer data of Singh et al. (2002) suggest that the proposed method outperforms all published approaches in terms of statistical power. Availability: The proposed algorithm is implemented in MATLAB and in R. The software, called Tellipsoid, and relevant data sets are available at http://www.egr.msu.edu/ desaikey
Title: Penalized model-based clustering with cluster-specific diagonal covariance matrices and grouped variables
Abstract: Clustering analysis is one of the most widely used statistical tools in many emerging areas such as microarray data analysis. For microarray and other high-dimensional data, the presence of many noise variables may mask underlying clustering structures. Hence removing noise variables via variable selection is necessary. For simultaneous variable selection and parameter estimation, existing penalized likelihood approaches in model-based clustering analysis all assume a common diagonal covariance matrix across clusters, which however may not hold in practice. To analyze high-dimensional data, particularly those with relatively low sample sizes, this article introduces a novel approach that shrinks the variances together with means, in a more general situation with cluster-specific (diagonal) covariance matrices. Furthermore, selection of grouped variables via inclusion or exclusion of a group of variables altogether is permitted by a specific form of penalty, which facilitates incorporating subject-matter knowledge, such as gene functions in clustering microarray samples for disease subtype discovery. For implementation, EM algorithms are derived for parameter estimation, in which the M-steps clearly demonstrate the effects of shrinkage and thresholding. Numerical examples, including an application to acute leukemia subtype discovery with microarray gene expression data, are provided to demonstrate the utility and advantage of the proposed method.
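The shrinkage-and-thresholding effect that the M-steps exhibit is, in generic penalized-likelihood settings, produced by a soft-thresholding operator. The sketch below is illustrative only and does not reproduce the paper's cluster-specific updates; the function name is mine:

```python
import numpy as np

def soft_threshold(z, lam):
    """Soft-thresholding operator: shrinks z toward 0 by lam and sets
    values with |z| <= lam exactly to 0, yielding variable selection."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
```

Applied elementwise to cluster means during the M-step, this is what lets a penalized EM both shrink estimates and zero out noise variables.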
Title: Design and Implementation of Aggregate Functions in the DLV System
Abstract: Disjunctive Logic Programming (DLP) is a very expressive formalism: it allows for expressing every property of finite structures that is decidable in the complexity class SigmaP2 (= NP^NP). Despite this high expressiveness, there are some simple properties, often arising in real-world applications, which cannot be encoded in a simple and natural manner. Especially properties that require the use of arithmetic operators (like sum, times, or count) on a set or multiset of elements, which satisfy some conditions, cannot be naturally expressed in classic DLP. To overcome this deficiency, we extend DLP by aggregate functions in a conservative way. In particular, we avoid the introduction of constructs with disputed semantics, by requiring aggregates to be stratified. We formally define the semantics of the extended language (called DLP^A), and illustrate how it can be profitably used for representing knowledge. Furthermore, we analyze the computational complexity of DLP^A, showing that the addition of aggregates does not bring a higher cost in that respect. Finally, we provide an implementation of DLP^A in DLV -- a state-of-the-art DLP system -- and report on experiments which confirm the usefulness of the proposed extension also for the efficiency of computation.
Title: Testing the number of parameters with multidimensional MLP
Abstract: This work concerns testing the number of parameters in a one-hidden-layer multilayer perceptron (MLP). For this purpose we assume that we have identifiable models, up to a finite group of transformations on the weights; this is, for example, the case when the number of hidden units is known. In this framework, we show that we get a simple asymptotic distribution if we use the logarithm of the determinant of the empirical error covariance matrix as the cost function.
Title: Efficient Estimation of Multidimensional Regression Model with Multilayer Perceptron
Abstract: This work concerns estimation of multidimensional nonlinear regression models using a multilayer perceptron (MLP). The main problem with such models is that we have to know the covariance matrix of the noise to get an optimal estimator. However, we show that, if we choose as the cost function the logarithm of the determinant of the empirical error covariance matrix, we get an asymptotically optimal estimator.
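For concreteness, the proposed cost function is simply the log-determinant of the empirical covariance of the multidimensional residuals. This is a minimal sketch (the function name and synthetic residuals are mine, not the paper's estimator):

```python
import numpy as np

def logdet_cost(errors):
    """Cost: log-determinant of the empirical error covariance matrix,
    where `errors` is an (n_samples, n_outputs) residual matrix."""
    n = errors.shape[0]
    cov = errors.T @ errors / n
    sign, logdet = np.linalg.slogdet(cov)  # numerically stable log(det)
    return logdet

# Residuals from two hypothetical fits: a better fit gives a smaller cost,
# without needing to know the noise covariance in advance.
rng = np.random.default_rng(0)
small = rng.normal(scale=0.1, size=(500, 3))
large = rng.normal(scale=1.0, size=(500, 3))
```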
Title: Estimation of linear autoregressive models with Markov-switching, the E.M. algorithm revisited
Abstract: This work concerns estimation of linear autoregressive models with Markov-switching using the expectation maximisation (E.M.) algorithm. Our method generalises the method introduced by Elliot for general hidden Markov models and avoids the use of backward recursion.
Title: Self Organizing Map algorithm and distortion measure
Abstract: We study the statistical meaning of the minimization of the distortion measure and the relation between the equilibrium points of the SOM algorithm and the minima of the distortion measure. If we assume that the observations and the map lie in a compact Euclidean space, we prove the strong consistency of the map which almost minimizes the empirical distortion. Moreover, after calculating the derivatives of the theoretical distortion measure, we show that the points minimizing this measure and the equilibria of the Kohonen map do not match in general. We illustrate, with a simple example, how this occurs.
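The empirical distortion of a Kohonen map weights the squared distance from each observation to every unit by a neighborhood kernel centered at the best-matching unit. The sketch below assumes a one-dimensional map topology and a Gaussian kernel, both of which are my choices for illustration:

```python
import numpy as np

def empirical_distortion(data, codebook, sigma=1.0):
    """Empirical SOM distortion: for each observation, the squared
    distances to all units, weighted by a Gaussian neighborhood kernel
    centered at the best-matching unit (BMU), averaged over the data."""
    pos = np.arange(len(codebook))  # unit positions on a 1-D map
    total = 0.0
    for x in data:
        dists = np.sum((codebook - x) ** 2, axis=1)
        bmu = np.argmin(dists)
        h = np.exp(-((pos - bmu) ** 2) / (2.0 * sigma ** 2))
        total += np.sum(h * dists)
    return total / len(data)
```

A map that almost minimizes this quantity is the object whose consistency the abstract discusses; its minima need not coincide with the algorithm's equilibria.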
Title: Characterization of the convergence of stationary Fokker-Planck learning
Abstract: The convergence properties of the stationary Fokker-Planck algorithm for the estimation of the asymptotic density of stochastic search processes are studied. Theoretical and empirical arguments for the characterization of convergence of the estimation in the case of separable and nonseparable nonlinear optimization problems are given. Some implications of the convergence of stationary Fokker-Planck learning for the inference of parameters in artificial neural network models are outlined.
Title: Adaptive Confidence Sets for the Optimal Approximating Model
Abstract: In the setting of high-dimensional linear models with Gaussian noise, we investigate the possibility of confidence statements connected to model selection. Although there exist numerous procedures for adaptive point estimation, the construction of adaptive confidence regions is severely limited (cf. Li, 1989). The present paper sheds new light on this gap. We develop exact and adaptive confidence sets for the best approximating model in terms of risk. One of our constructions is based on a multiscale procedure and a particular coupling argument. Utilizing exponential inequalities for noncentral chi-squared distributions, we show that the risk and quadratic loss of all models within our confidence region are uniformly bounded by the minimal risk times a factor close to one.
Title: Some Aspects of Testing Process for Transport Streams in Digital Video Broadcasting
Abstract: This paper presents some aspects related to the investigation of DVB (Digital Video Broadcasting). The basic aspects of DVB are presented, with an emphasis on the DVB-T version of the standard. The main purpose of this research is to analyze the way that the transmission of the transport streams is realized in the case of Terrestrial Digital Video Broadcasting (DVB-T). To accomplish this, first, the Digital Video Broadcasting standard is presented, and then the main aspects of DVB testing and analysis of the transport streams are investigated. The paper also presents the results obtained using two programs designed for DVB analysis: Mosalina and TSA.
Title: Implementing a Test Strategy for an Advanced Video Acquisition and Processing Architecture
Abstract: This paper presents some aspects related to the test process of an advanced video system used in remote IP surveillance. The system is based on a Pentium-compatible architecture using the industrial standard PC104+. First, the overall architecture of the system is presented, involving both hardware and software aspects. The acquisition board, which is developed in a special, nonstandard architecture, is also briefly presented. The main purpose of this research was to establish a coherent set of procedures in order to test all the aspects of the video acquisition board. To accomplish this, it was necessary to set up a procedure in two steps: a stand-alone video board test (functional test) and an in-system test procedure verifying compatibility with both operating systems: Linux and Windows. The paper also presents the results obtained using this procedure.
Title: Use of Rapid Probabilistic Argumentation for Ranking on Large Complex Networks
Abstract: We introduce a family of novel ranking algorithms called ERank which run in linear/near-linear time and build on explicitly modeling a network as uncertain evidence. The model uses Probabilistic Argumentation Systems (PAS), which are a combination of probability theory and propositional logic, and also a special case of the Dempster-Shafer Theory of Evidence. ERank rapidly generates approximate results for the NP-complete problem involved, enabling the use of the technique in large networks. We use a previously introduced PAS model for citation networks, generalizing it to all networks. We propose a statistical test, based on a clustering validity test, to be used for comparing the performances of different ranking algorithms. Our experimentation using this test on a real-world network shows ERank to have the best performance in comparison to well-known algorithms including PageRank, closeness, and betweenness.
Title: Evaluation and selection of models for out-of-sample prediction when the sample size is small relative to the complexity of the data-generating process
Abstract: In regression with random design, we study the problem of selecting a model that performs well for out-of-sample prediction. We do not assume that any of the candidate models under consideration are correct. Our analysis is based on explicit finite-sample results. Our main findings differ from those of other analyses that are based on traditional large-sample limit approximations because we consider a situation where the sample size is small relative to the complexity of the data-generating process, in the sense that the number of parameters in a `good' model is of the same order as sample size. Also, we allow for the case where the number of candidate models is (much) larger than sample size.
Title: A Universal In-Place Reconfiguration Algorithm for Sliding Cube-Shaped Robots in a Quadratic Number of Moves
Abstract: In the modular robot reconfiguration problem, we are given $n$ cube-shaped modules (or robots) as well as two configurations, i.e., placements of the $n$ modules so that their union is face-connected. The goal is to find a sequence of moves that reconfigures the modules from one configuration to the other using "sliding moves," in which a module slides over the face or edge of a neighboring module, maintaining connectivity of the configuration at all times. For many years it has been known that certain module configurations in this model require $\Omega(n^2)$ moves to reconfigure between them. In this paper, we introduce the first universal reconfiguration algorithm -- i.e., we show that any $n$-module configuration can reconfigure itself into any specified $n$-module configuration using just sliding moves. Our algorithm achieves reconfiguration in $O(n^2)$ moves, making it asymptotically tight. We also present a variation that reconfigures in place: it ensures that throughout the reconfiguration process, all modules, except for one, are contained in the union of the bounding boxes of the start and end configurations.
Title: A Truncation Approach for Fast Computation of Distribution Functions
Abstract: In this paper, we propose a general approach for improving the efficiency of computing distribution functions. The idea is to truncate the domain of summation or integration.
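As a minimal illustration of the idea (my example; the paper treats the approach in general), an upper-tail Poisson probability, defined by an infinite series, can be computed by truncating the summation once the terms fall below a tolerance:

```python
import math

def poisson_tail(k, lam, tol=1e-15):
    """P(X > k) for X ~ Poisson(lam), summing the upper-tail series
    and truncating once terms drop below tol (past the mode, terms
    are strictly decreasing, so truncation is safe)."""
    j = k + 1
    # first term e^{-lam} lam^j / j!, computed in log space for stability
    term = math.exp(-lam + j * math.log(lam) - math.lgamma(j + 1))
    total = 0.0
    while term > tol or j <= lam:   # force the sum past the mode
        total += term
        j += 1
        term *= lam / j             # recurrence: next Poisson pmf term
    return total
```

The same truncation pattern applies whenever a distribution function is a sum or integral over an effectively unbounded domain.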
Title: Estimating Traffic Parameters with Rigorous Error Control
Abstract: To perform a queuing analysis or design in a communications context, we need to estimate the values of the input parameters, specifically the mean arrival rate and mean service time. In this paper, we propose an approach for estimating the arrival rate of Poisson processes and the average service time of servers under the assumption that the service time is exponential. In particular, we derive the sample size (i.e., the number of i.i.d. observations) required to obtain an estimate satisfying a pre-specified relative accuracy with a given confidence level. A remarkable feature of this approach is that no a priori information about the parameter is needed. In contrast to conventional methods such as standard error estimation and confidence interval construction, which only provide post-experimental evaluations of the estimate, this approach allows experimenters to rigorously control the error of estimation.
Title: Wavelet and Curvelet Moments for Image Classification: Application to Aggregate Mixture Grading
Abstract: We show the potential for classifying images of mixtures of aggregate, themselves based on varying, albeit well-defined, sizes and shapes, in order to provide a far more effective approach than the classification of individual sizes and shapes. While a dominant (additive, stationary) Gaussian noise component in image data will ensure that wavelet coefficients are of Gaussian distribution, long-tailed distributions (symptomatic, for example, of extreme values) may well hold in practice for wavelet coefficients. Energy (2nd order moment) has often been used for image characterization in image content-based retrieval, and higher order moments may also be important, not least for capturing long-tailed distributional behavior. In this work, we assess 2nd, 3rd and 4th order moments of multiresolution transform coefficients -- wavelet and curvelet transform -- as features. As analysis methodology, taking account of image types, multiresolution transforms, and moments of coefficients in the scales or bands, we use correspondence analysis as well as k-nearest neighbors supervised classification.
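A toy version of the feature extraction, assuming a 1-D signal and a single Haar level (the paper uses full 2-D wavelet and curvelet transforms over multiple scales), computes the 2nd, 3rd and 4th order moments of the detail coefficients:

```python
import numpy as np

def haar_detail(x):
    """Detail coefficients of one level of the Haar wavelet transform."""
    x = np.asarray(x, float)
    x = x[: len(x) // 2 * 2]          # drop an odd trailing sample
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def moment_features(coeffs):
    """2nd, 3rd and 4th order moments of the coefficients, as features."""
    c = np.asarray(coeffs, float)
    return np.array([np.mean(c ** 2), np.mean(c ** 3), np.mean(c ** 4)])

features = moment_features(haar_detail([4.0, 2.0, 1.0, 1.0]))
```

The 3rd and 4th moments are what pick up the skew and heavy tails that the energy (2nd moment) alone misses.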
Title: Interval Estimation of Bounded Variable Means via Inverse Sampling
Abstract: In this paper, we develop interval estimation methods for means of bounded random variables based on a sequential procedure such that the sampling is continued until the sample sum is no less than a prescribed threshold.
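The stopping rule can be sketched as follows (the helper name and interface are mine, not from the paper): sampling continues until the running sum reaches the prescribed threshold, so the sample size is random.

```python
import random

def inverse_sample_mean(draw, threshold):
    """Draw observations until the running sum reaches `threshold`;
    return the sample mean and the (random) sample size."""
    total, n = 0.0, 0
    while total < threshold:
        total += draw()
        n += 1
    return total / n, n

# e.g. bounded variables in [0, 1]: at least `threshold` draws are needed
mean, n = inverse_sample_mean(lambda: random.uniform(0.0, 1.0), 10.0)
```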
Title: Processing Information in Quantum Decision Theory
Abstract: A survey is given summarizing the state of the art of describing information processing in Quantum Decision Theory, which has been recently advanced as a novel variant of decision making, based on the mathematical theory of separable Hilbert spaces. This mathematical structure captures the effect of superposition of composite prospects, including many incorporated intended actions. The theory characterizes entangled decision making, non-commutativity of subsequent decisions, and intention interference. The self-consistent procedure of decision making, in the frame of the quantum decision theory, takes into account both the available objective information as well as subjective contextual effects. This quantum approach avoids any paradox typical of classical decision theory. Conditional maximization of entropy, equivalent to the minimization of an information functional, makes it possible to connect the quantum and classical decision theories, showing that the latter is the limit of the former under vanishing interference terms.
Title: On variance stabilisation by double Rao-Blackwellisation
Abstract: Population Monte Carlo has been introduced as a sequential importance sampling technique to overcome a poor fit of the importance function. In this paper, we compare the performance of the original Population Monte Carlo algorithm with a modified version that eliminates the influence of the transition particle via a double Rao-Blackwellisation. This modification is shown to improve the exploration of the modes through a large simulation experiment on posterior distributions of mean mixtures of distributions.
Title: Knowledge Technologies
Abstract: Several technologies are emerging that provide new ways to capture, store, present and use knowledge. This book is the first to provide a comprehensive introduction to five of the most important of these technologies: Knowledge Engineering, Knowledge Based Engineering, Knowledge Webs, Ontologies and Semantic Webs. For each of these, answers are given to a number of key questions (What is it? How does it operate? How is a system developed? What can it be used for? What tools are available? What are the main issues?). The book is aimed at students, researchers and practitioners interested in Knowledge Management, Artificial Intelligence, Design Engineering and Web Technologies. During the 1990s, Nick worked at the University of Nottingham on the application of AI techniques to knowledge management and on various knowledge acquisition projects to develop expert systems for military applications. In 1999, he joined Epistemics where he worked on numerous knowledge projects and helped establish knowledge management programmes at large organisations in the engineering, technology and legal sectors. He is author of the book "Knowledge Acquisition in Practice", which describes a step-by-step procedure for acquiring and implementing expertise. He maintains strong links with leading research organisations working on knowledge technologies, such as knowledge-based engineering, ontologies and semantic technologies.
Title: Belief Propagation and Loop Series on Planar Graphs
Abstract: We discuss a generic model of Bayesian inference with binary variables defined on edges of a planar graph. The Loop Calculus approach of [1, 2] is used to evaluate the resulting series expansion for the partition function. We show that, for planar graphs, truncating the series at single-connected loops reduces, via a map reminiscent of the Fisher transformation [3], to evaluating the partition function of the dimer matching model on an auxiliary planar graph. Thus, the truncated series can be easily re-summed, using the Pfaffian formula of Kasteleyn [4]. This makes it possible to identify a large class of computationally tractable planar models reducible to a dimer model via the Belief Propagation (gauge) transformation. The Pfaffian representation can also be extended to the full Loop Series, in which case the expansion becomes a sum of Pfaffian contributions, each associated with dimer matchings on an extension to a subgraph of the original graph. Algorithmic consequences of the Pfaffian representation, as well as relations to quantum and non-planar models, are discussed.
Title: Brain architecture: A design for natural computation
Abstract: Fifty years ago, John von Neumann compared the architecture of the brain with that of the computers he invented, which is still in use today. In those days, the organisation of computers was based on concepts of brain organisation. Here, we give an update on current results on the global organisation of neural systems. For neural systems, we outline how the spatial and topological architecture of neuronal and cortical networks facilitates robustness against failures, fast processing, and balanced network activation. Finally, we discuss mechanisms of self-organization for such architectures. After all, the organization of the brain might again inspire computer architecture.
Title: Hubs in Languages: Scale Free Networks of Synonyms
Abstract: Natural languages are described in this paper in terms of networks of synonyms: a word is identified with a node, and synonyms are connected by undirected links. Our statistical analysis of the network of synonyms in the Polish language shows that it is scale-free, similar to what is known for English. The statistical properties of the networks are also similar. Thus, the statistical aspects of the networks are good candidates for culture-independent elements of human language. We hypothesize that optimization for robustness and efficiency is responsible for this universality. Despite the statistical similarity, there is no one-to-one mapping between the networks of these two languages. Although many hubs in Polish are translated into similarly highly connected hubs in English, there are also hubs specific to one of these languages only: a single word in one language is equivalent to many different and disconnected words in the other, in accordance with the Whorf hypothesis about language relativity. Identifying language-specific hubs is vitally important for automatic translation, and for understanding contextual, culturally related messages that are frequently missed or twisted in a naive, literal translation.
Title: Bayesian Estimation of Inequalities with Non-Rectangular Censored Survey Data
Abstract: Synthetic indices are used in economics to measure various aspects of monetary inequalities. These scalar indices take as input the distribution over a finite population, for example the population of a specific country. In this article we consider the case of the French 2004 Wealth survey. We have at hand a partial measurement of the distribution of interest, consisting of bracketed and sometimes missing data, over a subsample of the population of interest. We present in this article the statistical methodology used to obtain point and interval estimates taking into account the various uncertainties. The inequality indices being nonlinear in the input distribution, we rely on a simulation-based approach where the model for the wealth per household is multivariate. Using the survey data as well as matched auxiliary tax declaration data, we have at hand a quite intricate non-rectangular multidimensional censoring. For practical reasons we use a Bayesian approach. Inference using Monte Carlo approximations relies on a Markov chain Monte Carlo algorithm, namely the Gibbs sampler. The quantities of interest to the decision maker are taken to be the various inequality indices for the French population. Their distributions conditional on the data of the subsample are assumed to be normal, centered on the design-based estimates, with variance computed through linearization and taking into account the sample design and total nonresponse. Exogenous selection of the subsample, in particular the nonresponse mechanism, is assumed and we condition on the adequate covariates.
Title: Some properties of the Ukrainian writing system
Abstract: We investigate the grapheme-phoneme relation in Ukrainian and some properties of the Ukrainian version of the Cyrillic alphabet.
Title: Equilibrium (Zipf) and Dynamic (Grassberger-Procaccia) method based analyses of human texts. A comparison of natural (English) and artificial (Esperanto) languages
Abstract: Two English texts by Lewis Carroll, one (Alice in Wonderland) also translated into Esperanto, and the other (Through the Looking-Glass), are compared in order to observe whether natural and artificial languages significantly differ from each other. One-dimensional time-series-like signals are constructed using only word frequencies (FTS) or word lengths (LTS). The data is studied through (i) a Zipf method for sorting out correlations in the FTS and (ii) a Grassberger-Procaccia (GP) technique based method for finding correlations in the LTS. Features are compared: different power laws are observed with characteristic exponents for the ranking properties, and for the phase space attractor dimensionality. The Zipf exponent can take values much less than unity (ca. 0.50 or 0.30) depending on how a sentence is defined. This non-universality is conjectured to be a measure of the author's style. Moreover, the attractor dimension $r$ is a simple function of the so-called phase space dimension $n$, i.e., $r = n^\lambda$, with $\lambda = 0.79$. This exponent is also conjectured to be a measure of the author's creativity. However, even though there are quantitative differences between the original English text and its Esperanto translation, the qualitative differences are very minute, indicating in this case a translation that, along our lines of analysis, respects the content of the author's writing relatively well.
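A bare-bones version of the Zipf ranking step (my sketch; it ignores the sentence-definition subtleties that drive the exponent's non-universality) fits a power law to the rank-frequency data by least squares in log-log coordinates:

```python
import numpy as np
from collections import Counter

def zipf_exponent(words):
    """Least-squares slope of log(frequency) versus log(rank);
    a Zipf law f(r) ~ r^(-s) appears as a line with slope -s."""
    freqs = sorted(Counter(words).values(), reverse=True)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

s = zipf_exponent("the cat and the dog and the bird".split())
```

On real text the estimate depends on the tokenization, which is exactly the sensitivity the abstract attributes to how a sentence is defined.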
Title: The Generation of Textual Entailment with NLML in an Intelligent Dialogue system for Language Learning CSIEC
Abstract: This research report introduces the generation of textual entailment within the project CSIEC (Computer Simulation in Educational Communication), an interactive web-based human-computer dialogue system with natural language for English instruction. The generation of textual entailment (GTE) is critical to the further improvement of the CSIEC project. Up to now we have found little literature related to GTE. Simulating the process by which a human being learns English as a foreign language, we explore our naive approach to the GTE problem and its algorithm within the framework of CSIEC, i.e. rule annotation in NLML, pattern recognition (matching), and entailment transformation. The time and space complexity of our algorithm is tested with some entailment examples. Further work includes rule annotation based on the English textbooks and a GUI interface for normal users to edit the entailment rules.
Title: Automated Termination Proofs for Logic Programs by Term Rewriting
Abstract: There are two kinds of approaches for termination analysis of logic programs: "transformational" and "direct" ones. Direct approaches prove termination directly on the basis of the logic program. Transformational approaches transform a logic program into a term rewrite system (TRS) and then analyze termination of the resulting TRS instead. Thus, transformational approaches make all methods previously developed for TRSs available for logic programs as well. However, the applicability of most existing transformations is quite restricted, as they can only be used for certain subclasses of logic programs. (Most of them are restricted to well-moded programs.) In this paper we improve these transformations such that they become applicable for any definite logic program. To simulate the behavior of logic programs by TRSs, we slightly modify the notion of rewriting by permitting infinite terms. We show that our transformation results in TRSs which are indeed suitable for automated termination analysis. In contrast to most other methods for termination of logic programs, our technique is also sound for logic programming without occur check, which is typically used in practice. We implemented our approach in the termination prover AProVE and successfully evaluated it on a large collection of examples.
Title: Adaptive methods for sequential importance sampling with application to state space models
Abstract: In this paper we discuss new adaptive proposal strategies for sequential Monte Carlo algorithms--also known as particle filters--relying on criteria evaluating the quality of the proposed particles. The choice of the proposal distribution is a major concern and can dramatically influence the quality of the estimates. Thus, we show how the long-used coefficient of variation of the weights can be used for estimating the chi-square distance between the target and instrumental distributions of the auxiliary particle filter. As a by-product of this analysis we obtain an auxiliary adjustment multiplier weight type for which this chi-square distance is minimal. Moreover, we establish an empirical estimate of linear complexity of the Kullback-Leibler divergence between the involved distributions. Guided by these results, we discuss adaptive designing of the particle filter proposal distribution and illustrate the methods on a numerical example.
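For normalized importance weights, the coefficient-of-variation criterion mentioned above reduces to a simple function of the weights and also yields the familiar effective sample size. A minimal sketch (function names are mine):

```python
import numpy as np

def cv_squared(weights):
    """Squared coefficient of variation of the importance weights;
    for weights normalized to sum to 1 this equals n * sum(w**2) - 1.
    It is 0 for uniform weights and grows as weights degenerate."""
    w = np.asarray(weights, float)
    w = w / w.sum()
    return len(w) * np.sum(w ** 2) - 1.0

def effective_sample_size(weights):
    """ESS = n / (1 + CV^2): n for uniform weights, 1 when a single
    particle carries all the weight."""
    n = len(weights)
    return n / (1.0 + cv_squared(weights))
```

Monitoring `cv_squared` (equivalently the ESS) over iterations is the standard way this criterion guides adaptation of the proposal distribution.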
Title: Polynomial time algorithms for bi-criteria, multi-objective and ratio problems in clustering and imaging. Part I: Normalized cut and ratio regions
Abstract: Partitioning and grouping of similar objects plays a fundamental role in image segmentation and in clustering problems. In such problems a typical goal is to group together similar objects, or pixels in the case of image processing. At the same time another goal is to have each group distinctly dissimilar from the rest and possibly to have the group size fairly large. These goals are often combined as a ratio optimization problem. One example of such a problem is the normalized cut problem; another is the ratio regions problem. We devise here the first polynomial time algorithms solving these problems optimally. The algorithms are efficient and combinatorial. This contrasts with the heuristic approaches used in the image segmentation literature that formulate those problems as nonlinear optimization problems, which are then relaxed and solved with spectral techniques in real numbers. These approaches not only fail to deliver an optimal solution, but they are also computationally expensive. The algorithms presented here use as a subroutine a minimum $s,t$-cut procedure on a related graph which is of polynomial size. The output consists of the optimal solution to the respective ratio problem, as well as a sequence of nested solutions with respect to any relative weighting of the objectives of the numerator and denominator. An extension of the results here to bi-criteria and multi-criteria objective functions is presented in part II.