Title: Optimal-order bounds on the rate of convergence to normality in the multivariate delta method
Abstract: Uniform and nonuniform Berry--Esseen (BE) bounds of optimal orders on the closeness to normality for general abstract nonlinear statistics are given, which are then used to obtain optimal bounds on the rate of convergence in the delta method for vector statistics. Specific applications to Pearson's, non-central Student's and Hotelling's statistics, sphericity test statistics, a regularized canonical correlation, and maximum likelihood estimators (MLEs) are given; all these uniform and nonuniform BE bounds appear to be the first known results of these kinds, except for uniform BE bounds for MLEs. When applied to the well-studied case of the central Student statistic, our general results compare well with known ones in that case, obtained previously by specialized methods. The proofs use a Stein-type method developed by Chen and Shao, a Cramér-type tilt transform, exponential and Rosenthal-type inequalities for sums of random vectors established by Pinelis, Sakhanenko, and Utev, as well as a number of other, quite recent results motivated by this study. The method allows one to obtain bounds with explicit and rather moderate-size constants, at least as far as the uniform bounds are concerned. For instance, one has the uniform BE bound $3.61(\mathbb{E}Y_1^6+\mathbb{E}Z_1^6)\,(1+\sigma^{-3})/\sqrt n$ for the Pearson sample correlation coefficient based on independent identically distributed random pairs $(Y_1,Z_1),\dots,(Y_n,Z_n)$ with $\mathbb{E}Y_1=\mathbb{E}Z_1=\mathbb{E}Y_1Z_1=0$ and $\mathbb{E}Y_1^2=\mathbb{E}Z_1^2=1$, where $\sigma:=\mathbb{E}Y_1^2Z_1^2$.
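As an illustration, the stated uniform bound is a closed-form expression in the sixth moments of $(Y_1,Z_1)$, the quantity $\sigma$, and the sample size; a minimal sketch (the function name and the example moment values for independent standard normal pairs are our own, not from the paper) just evaluates it:

```python
import math

def pearson_be_bound(m6_y, m6_z, sigma, n):
    """Evaluate the uniform Berry-Esseen bound
    3.61*(E Y1^6 + E Z1^6)*(1 + sigma^-3)/sqrt(n)
    stated in the abstract for the Pearson sample correlation."""
    return 3.61 * (m6_y + m6_z) * (1.0 + sigma ** -3) / math.sqrt(n)

# Independent standard normal Y1, Z1 have E Y1^6 = 15 and sigma = E Y1^2 Z1^2 = 1.
print(pearson_be_bound(15.0, 15.0, 1.0, 10_000))  # ≈ 2.166
```

Note how slowly the bound shrinks: the $1/\sqrt n$ rate means a hundredfold increase in $n$ only divides the bound by ten.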
Title: Some Numerical Results on the Rank of Generic Three-Way Arrays over R
Abstract: The aim of this paper is the introduction of a new method for the numerical computation of the rank of a three-way array. We show that the rank of a three-way array over R is intimately related to the real solution set of a system of polynomial equations. Using this, we present some numerical results based on the computation of Gröbner bases. Key words: Tensors; three-way arrays; Candecomp/Parafac; Indscal; generic rank; typical rank; Veronese variety; Segre variety; Gröbner bases. AMS classification: Primary 15A69; Secondary 15A72, 15A18.
Title: Equations of States in Statistical Learning for a Nonparametrizable and Regular Case
Abstract: Many learning machines that have hierarchical structure or hidden variables are now being used in information science, artificial intelligence, and bioinformatics. However, several learning machines used in such fields are not regular but singular statistical models, hence their generalization performance remains unknown. To overcome this problem, in previous papers we proved new equations in statistical learning, by which we can estimate the Bayes generalization loss from the Bayes training loss and the functional variance, on the condition that the true distribution is a singularity contained in the learning machine. In this paper, we prove that the same equations hold even if the true distribution is not contained in the parametric model. We also prove that the proposed equations are, in the regular case, asymptotically equivalent to the Takeuchi information criterion. Therefore, the proposed equations are always applicable without any condition on the unknown true distribution.
Title: Solar radiation forecasting using ad-hoc time series preprocessing and neural networks
Abstract: In this paper, we present an application of neural networks in the renewable energy domain. We have developed a methodology for the daily prediction of global solar radiation on a horizontal surface. We use an ad-hoc time series preprocessing and a Multi-Layer Perceptron (MLP) to predict solar radiation at the daily horizon. First results are promising, with nRMSE < 21% and RMSE < 998 Wh/m². Our optimized MLP yields predictions similar to, or even better than, conventional methods such as ARIMA techniques, Bayesian inference, Markov chains and k-Nearest-Neighbors approximators. Moreover, we found that our data preprocessing approach can significantly reduce forecasting errors.
Title: Total Variation, Adaptive Total Variation and Nonconvex Smoothly Clipped Absolute Deviation Penalty for Denoising Blocky Images
Abstract: The total variation-based image denoising model has been generalized and extended in numerous ways, improving its performance in different contexts. We propose a new penalty function motivated by the recent progress in the statistical literature on high-dimensional variable selection. Using a particular instantiation of the majorization-minimization algorithm, the optimization problem can be efficiently solved and the computational procedure realized is similar to the spatially adaptive total variation model. Our two-pixel image model shows theoretically that the new penalty function solves the bias problem inherent in the total variation model. The superior performance of the new penalty is demonstrated through several experiments. Our investigation is limited to "blocky" images which have small total variation.
Title: An optimal linear separator for the Sonar Signals Classification task
Abstract: The problem of classifying sonar signals from rocks and mines, first studied by Gorman and Sejnowski, has become a benchmark against which many learning algorithms have been tested. We show that both the training set and the test set of this benchmark are linearly separable, although with different hyperplanes. Moreover, the complete set of learning and test patterns together is also linearly separable. We give the weights that separate these sets, which may be used to compare results found by other algorithms.
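For intuition, a separating hyperplane for a linearly separable set can be found with the classical perceptron rule; the sketch below runs it on a toy 2-D dataset of our own (not the actual 60-dimensional sonar data or the paper's weights):

```python
def perceptron(points, labels, epochs=100):
    """Classical perceptron: returns weights w and bias b such that
    sign(w . x + b) matches the labels, if the data are linearly separable."""
    dim = len(points[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]  # mistake-driven update
                b += y
                updated = True
        if not updated:  # a full pass with no mistakes: hyperplane found
            break
    return w, b

# Toy data, separable by the line x1 + x2 = 1.5.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 0.0)]
lbl = [-1, -1, -1, 1, 1]
w, b = perceptron(pts, lbl)
```

The same loop, given the benchmark's patterns, would terminate on each of the sets the abstract claims are separable.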
Title: On the modified Basis Pursuit reconstruction for Compressed Sensing with partially known support
Abstract: The goal of this short note is to present a refined analysis of the modified Basis Pursuit ($\ell_1$-minimization) approach to signal recovery in Compressed Sensing with partially known support, as introduced by Vaswani and Lu. The problem is to recover a signal $x \in \mathbb R^p$ from an observation vector $y=Ax$, where $A \in \mathbb R^{n\times p}$, in the highly underdetermined setting $n\ll p$. Based on an initial and possibly erroneous guess $T$ of the signal's support $\operatorname{supp}(x)$, the Modified Basis Pursuit method of Vaswani and Lu consists of minimizing the $\ell_1$ norm of the estimate over the indices in $T^c$ only. We prove exact recovery essentially under a Restricted Isometry Property assumption of order twice the cardinality of $T^c \cap \operatorname{supp}(x)$, i.e. the number of missed components.
Title: Optimal Byzantine Resilient Convergence in Asynchronous Robot Networks
Abstract: We propose the first deterministic algorithm that tolerates up to $f$ Byzantine faults in $(3f+1)$-sized networks and performs in the asynchronous CORDA model. Our solution matches the previously established lower bound for the semi-synchronous ATOM model on the number of tolerated Byzantine robots. Our algorithm works under bounded scheduling assumptions for oblivious robots moving in a uni-dimensional space.
Title: Quality assessment of the MPEG-4 scalable video CODEC
Abstract: In this paper, the performance of the emerging MPEG-4 SVC CODEC is evaluated. In the first part, a brief introduction to the subject of quality assessment and the development of the MPEG-4 SVC CODEC is given. After that, the test methodologies used are described in detail, followed by an explanation of the actual test scenarios. The main part of this work concentrates on the performance analysis of the MPEG-4 SVC CODEC, both objective and subjective.
Title: Encoding models for scholarly literature
Abstract: We examine the issue of digital formats for document encoding, archiving and publishing, through the specific example of "born-digital" scholarly journal articles. We will begin by looking at the traditional workflow of journal editing and publication, and how these practices have made the transition into the online domain. We will examine the range of different file formats in which electronic articles are currently stored and published. We will argue strongly that, despite the prevalence of binary and proprietary formats such as PDF and MS Word, XML is a far superior encoding choice for journal articles. Next, we look at the range of XML document structures (DTDs, Schemas) which are in common use for encoding journal articles, and consider some of their strengths and weaknesses. We will suggest that, despite the existence of specialized schemas intended specifically for journal articles (such as NLM), and more broadly-used publication-oriented schemas such as DocBook, there are strong arguments in favour of developing a subset or customization of the Text Encoding Initiative (TEI) schema for the purpose of journal-article encoding; TEI is already in use in a number of journal publication projects, and the scale and precision of the TEI tagset makes it particularly appropriate for encoding scholarly articles. We will outline the document structure of a TEI-encoded journal article, and look in detail at suggested markup patterns for specific features of journal articles.
Title: A corrected AIC for the selection of seemingly unrelated regressions models
Abstract: A bias correction to Akaike's information criterion (AIC) is derived for seemingly unrelated regressions models. The correction is of particular use when the sample size is not much larger than the number of fitted parameters. A small-sample simulation study indicates that the bias-corrected AIC (AICc) provides better model choices than other model selection criteria.
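The small-sample regime the abstract mentions can be made concrete with the classical univariate-regression form of the corrected criterion (Hurvich and Tsai's AICc; the SUR correction derived in the paper generalizes this idea, and the function below is our illustrative sketch, not the paper's formula): AICc = AIC + 2k(k+1)/(n-k-1).

```python
def aicc_univariate(log_lik, k, n):
    """Bias-corrected AIC in the classical univariate regression setting.
    k = number of fitted parameters, n = sample size."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + 2.0 * k * (k + 1) / (n - k - 1)

# With few observations per parameter the correction is substantial:
# for k = 5, n = 12 the extra penalty is 2*5*6/6 = 10, doubling AIC's 2k term.
print(aicc_univariate(0.0, 5, 12))  # 20.0
```

As n grows with k fixed, the correction term vanishes and AICc reduces to AIC, which is why the correction matters precisely when n is not much larger than k.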
Title: Size dependent word frequencies and translational invariance of books
Abstract: It is shown that a real novel shares many characteristic features with a null model in which the words are randomly distributed throughout the text. Such a common feature is a certain translational invariance of the text. Another is that the functional form of the word-frequency distribution of a novel depends on the length of the text in the same way as the null model. This means that an approximate power-law tail ascribed to the data will have an exponent which changes with the size of the text-section which is analyzed. A further consequence is that a novel cannot be described by text-evolution models like the Simon model. The size-transformation of a novel is found to be well described by a specific Random Book Transformation. This size transformation in addition enables a more precise determination of the functional form of the word-frequency distribution. The implications of the results are discussed.
Title: Using Genetic Algorithms for Texts Classification Problems
Abstract: The avalanche of information produced by mankind has led to the concept of automated knowledge extraction, known as Data Mining ([1]). This field encompasses a wide spectrum of problems, from fuzzy-set recognition to the construction of search engines. An important component of Data Mining is the processing of textual information. Such problems rest on the concepts of classification and clustering ([2]). Classification consists in assigning an element (a text) to one of several previously created classes. Clustering means splitting a set of elements (texts) into clusters, whose number is determined by the localization of the elements of the given set in the neighborhoods of certain natural cluster centers. Any implementation of classification should rest on given postulates, the basic one being a priori information about the initial set of texts and a measure of affinity between elements and classes.
Title: Fast Weak Learner Based on Genetic Algorithm
Abstract: An approach to accelerating the boosting of parametric weak classifiers is proposed. A weak classifier is called parametric if it has a fixed number of parameters and can therefore be represented as a point in a multidimensional space. A genetic algorithm is used instead of exhaustive search to learn the parameters of such a classifier. The proposed approach also handles cases in which an efficient algorithm exists for learning some of the classifier parameters. Experiments confirm that this approach can dramatically decrease classifier training time while keeping both training and test errors small.
Title: Mining Compressed Repetitive Gapped Sequential Patterns Efficiently
Abstract: Mining frequent sequential patterns from sequence databases has been a central research topic in data mining, and various efficient sequential pattern mining algorithms have been proposed and studied. Recently, in many problem domains (e.g., program execution traces), a novel line of sequential pattern mining research, called mining repetitive gapped sequential patterns, has attracted the attention of many researchers: considering not only the repetition of a sequential pattern across different sequences but also its repetition within a sequence is more meaningful than general sequential pattern mining, which only captures occurrences in different sequences. However, the number of repetitive gapped sequential patterns generated even by closed mining algorithms may be too large for users to understand, especially when the support threshold is low. In this paper, we propose and study the problem of compressing repetitive gapped sequential patterns. Inspired by the ideas of summarizing frequent itemsets (RPglobal), we develop an algorithm, CRGSgrow (Compressing Repetitive Gapped Sequential pattern grow), including an efficient pruning strategy, SyncScan, and an efficient representative-pattern checking scheme, δ-dominate sequential pattern checking. CRGSgrow is a two-step approach: in the first step, we obtain all closed repetitive sequential patterns as the candidate set of representative repetitive sequential patterns and, at the same time, obtain most of the representative repetitive sequential patterns; in the second step, we spend only a little time finding the remaining representative patterns in the candidate set. An empirical study with both real and synthetic data sets clearly shows that CRGSgrow has good performance.
Title: A Dynamic Programming Approach for Approximate Uniform Generation of Binary Matrices with Specified Margins
Abstract: Consider the collection of all binary matrices having a specific sequence of row and column sums and consider sampling binary matrices uniformly from this collection. Practical algorithms for exact uniform sampling are not known, but there are practical algorithms for approximate uniform sampling. Here it is shown how dynamic programming and recent asymptotic enumeration results can be used to simplify and improve a certain class of approximate uniform samplers. The dynamic programming perspective suggests interesting generalizations.
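To make the sampling problem concrete, here is a brute-force sketch of our own (exponential in the matrix size, hence usable only for tiny margins; the dynamic-programming samplers the abstract refers to are what make realistic sizes tractable): enumerate every binary matrix with the prescribed margins and draw one uniformly.

```python
import random
from itertools import combinations, product

def rows_with_sum(n_cols, s):
    """All 0/1 rows of length n_cols with exactly s ones."""
    for ones in combinations(range(n_cols), s):
        yield tuple(1 if j in ones else 0 for j in range(n_cols))

def matrices_with_margins(row_sums, col_sums):
    """Enumerate all binary matrices with the given row and column sums."""
    n_cols = len(col_sums)
    for rows in product(*(list(rows_with_sum(n_cols, s)) for s in row_sums)):
        if all(sum(r[j] for r in rows) == col_sums[j] for j in range(n_cols)):
            yield rows

# The 3x3 binary matrices with all margins equal to 1 are exactly
# the 6 permutation matrices.
collection = list(matrices_with_margins((1, 1, 1), (1, 1, 1)))
sample = random.choice(collection)  # an exact uniform draw over the collection
```

The collection's size grows far too fast for this enumeration to scale, which is exactly why approximate samplers guided by asymptotic enumeration formulas are of interest.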
Title: The CIFF Proof Procedure for Abductive Logic Programming with Constraints: Theory, Implementation and Experiments
Abstract: We present the CIFF proof procedure for abductive logic programming with constraints, and we prove its correctness. CIFF is an extension of the IFF proof procedure for abductive logic programming, relaxing the original restrictions over variable quantification (allowedness conditions) and incorporating a constraint solver to deal with numerical constraints as in constraint logic programming. Finally, we describe the CIFF system, comparing it with state of the art abductive systems and answer set solvers and showing how to use it to program some applications. (To appear in Theory and Practice of Logic Programming - TPLP).
Title: U-Quantile-Statistics
Abstract: In 1948, W. Hoeffding introduced a large class of unbiased estimators called U-statistics, defined as the average value of a real-valued m-variate function h calculated at all possible sets of m points from a random sample. In the present paper, we investigate the corresponding robust analogue which we call U-quantile-statistics. We are concerned with the asymptotic behavior of the sample p-quantile of such function h instead of its average. Alternatively, U-quantile-statistics can be viewed as quantile estimators for a certain class of dependent random variables. Examples are given by a slightly modified Hodges-Lehmann estimator of location and the median interpoint distance among random points in space.
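As a concrete instance of a U-quantile-statistic, the (unmodified) Hodges-Lehmann location estimator takes the sample median of the kernel h(x, y) = (x + y)/2 over all pairs of observations; a minimal sketch (using pairs i < j, without the slight modification the abstract alludes to):

```python
from itertools import combinations
from statistics import median

def hodges_lehmann(sample):
    """U-quantile-statistic with kernel h(x, y) = (x + y)/2 and p = 1/2:
    the median of all pairwise averages over pairs i < j."""
    return median((x + y) / 2 for x, y in combinations(sample, 2))

print(hodges_lehmann([1, 2, 3]))  # pairwise averages 1.5, 2.0, 2.5 -> 2.0
```

Replacing the kernel with the Euclidean distance between two sample points gives the median interpoint distance, the paper's other example.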
Title: Syntax is from Mars while Semantics from Venus! Insights from Spectral Analysis of Distributional Similarity Networks
Abstract: We study the global topology of the syntactic and semantic distributional similarity networks for English through the technique of spectral analysis. We observe that while the syntactic network has a hierarchical structure with strong communities and their mixtures, the semantic network has several tightly knit communities along with a large core without any such well-defined community structure.
Title: On Defining 'I' "I logy"
Abstract: Can we define 'I'? Throughout this article we give a negative answer to this question. More exactly, we show that there is no definition of 'I' in a certain sense. This negative answer, however, depends on our definition of definability, and here we try to consider a sufficiently general notion of definability. In the middle of the paper a paradox will arise which forces us to modify the way we use the concepts of property and definability.
Title: Knowledge Management in Economic Intelligence with Reasoning on Temporal Attributes
Abstract: People have to make important decisions within a time frame, so it is imperative to employ means or strategies that aid effective decision making. Consequently, Economic Intelligence (EI) has emerged as a field that supports strategic and timely decision making in an organization. In pursuing this goal, it is indispensable to provide for the conservation of the intellectual resources invested in the decision-making process. This intellectual resource is nothing other than the knowledge of the actors and of the various processes involved in reaching a decision. Knowledge has been recognized as a strategic economic resource for enhancing productivity and a key to innovation in any organization or community; its adequate management, with cognizance of its temporal properties, is therefore highly desirable. The temporal properties of knowledge refer to the date and time (the timestamp) at which the knowledge is created, as well as the duration of, or interval between, related pieces of knowledge. This paper focuses on the need for a user-centered knowledge management approach and on the exploitation of the associated temporal properties. Our perspective on knowledge is with respect to decision-problem projects in EI. Our hypothesis is that the ability to reason about temporal properties when exploiting knowledge in EI projects should foster timely decision making, through the generation of useful inferences from available and reusable knowledge for a new project.
Title: Toward a Category Theory Design of Ontological Knowledge Bases
Abstract: I discuss the (ontologies_and_ontological_knowledge_bases / formal_methods_and_theories) duality and its category-theory extensions as a step toward a solution to Knowledge-Based Systems Theory. In particular, I focus on the example of designing elements of ontologies and ontological knowledge bases for the following three electronic courses: Foundations of Research Activities, Virtual Modeling of Complex Systems, and Introduction to String Theory.
Title: The S-Estimator in Change-Point Random Model with Long Memory
Abstract: The paper considers two-phase random-design linear regression models. The errors and the regressors are stationary long-range dependent Gaussian processes. The regression parameters, the scale parameters, and the change-point are estimated using a method introduced by Rousseeuw and Yohai (1984), called the S-estimator, which has the property of being more robust than the classical estimators: outliers do not spoil the estimation results. Some asymptotic results, including the strong consistency and the convergence rate of the S-estimators, are proved.
Title: Feature Reinforcement Learning: Part I: Unstructured MDPs
Abstract: General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian. On the other hand, reinforcement learning is well-developed for small finite state Markov decision processes (MDPs). Up to now, extracting the right state representations out of bare observations, that is, reducing the general agent setup to the MDP framework, is an art that involves significant effort by designers. The primary goal of this work is to automate the reduction process and thereby significantly expand the scope of many existing reinforcement learning algorithms and the agents that employ them. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in Part II. The role of POMDPs is also considered there.
Title: Segmentation of Facial Expressions Using Semi-Definite Programming and Generalized Principal Component Analysis
Abstract: In this paper, we use semi-definite programming and generalized principal component analysis (GPCA) to distinguish between two or more different facial expressions. In the first step, semi-definite programming is used to reduce the dimension of the image data and "unfold" the manifold on which the data points (corresponding to facial expressions) reside. Next, GPCA is used to fit a series of subspaces to the data points and to associate each data point with a subspace. Data points that belong to the same subspace are claimed to belong to the same facial expression category. An example is provided.
Title: Large-Margin kNN Classification Using a Deep Encoder Network
Abstract: kNN is one of the most popular classification methods, but it often fails to work well with an inappropriate choice of distance metric or in the presence of numerous class-irrelevant features. Linear feature transformation methods have been widely applied to extract class-relevant information and improve kNN classification, but such methods are very limited in many applications. Kernels have been used to learn powerful non-linear feature transformations, but these methods fail to scale to large datasets. In this paper, we present a scalable non-linear feature mapping method, based on a deep neural network pretrained with restricted Boltzmann machines, for improving kNN classification in a large-margin framework, which we call DNet-kNN. DNet-kNN can be used both for classification and for supervised dimensionality reduction. Experimental results on two benchmark handwritten digit datasets show that DNet-kNN performs much better than large-margin kNN using a linear mapping and than kNN based on a deep autoencoder pretrained with restricted Boltzmann machines.
Title: Managing Requirement Volatility in an Ontology-Driven Clinical LIMS Using Category Theory. International Journal of Telemedicine and Applications
Abstract: Requirement volatility is an issue in software engineering in general, and in Web-based clinical applications in particular, which often originates from an incomplete knowledge of the domain of interest. With advances in the health science, many features and functionalities need to be added to, or removed from, existing software applications in the biomedical domain. At the same time, the increasing complexity of biomedical systems makes them more difficult to understand, and consequently it is more difficult to define their requirements, which contributes considerably to their volatility. In this paper, we present a novel agent-based approach for analyzing and managing volatile and dynamic requirements in an ontology-driven laboratory information management system (LIMS) designed for Web-based case reporting in medical mycology. The proposed framework is empowered with ontologies and formalized using category theory to provide a deep and common understanding of the functional and nonfunctional requirement hierarchies and their interrelations, and to trace the effects of a change on the conceptual framework.
Title: Towards Improving Validation, Verification, Crash Investigations, and Event Reconstruction of Flight-Critical Systems with Self-Forensics
Abstract: This paper introduces a novel concept of self-forensics to complement the standard autonomic self-CHOP properties of the self-managed systems, to be specified in the Forensic Lucid language. We argue that self-forensics, with the forensics taken out of the cybercrime domain, is applicable to "self-dissection" for the purpose of verification of autonomous software and hardware systems of flight-critical systems for automated incident and anomaly analysis and event reconstruction by the engineering teams in a variety of incident scenarios during design and testing as well as actual flight data.
Title: The VOISE Algorithm: a Versatile Tool for Automatic Segmentation of Astronomical Images
Abstract: The auroras on Jupiter and Saturn can be studied with a high sensitivity and resolution by the Hubble Space Telescope (HST) ultraviolet (UV) and far-ultraviolet (FUV) Space Telescope spectrograph (STIS) and Advanced Camera for Surveys (ACS) instruments. We present results of automatic detection and segmentation of Jupiter's auroral emissions as observed by HST ACS instrument with VOronoi Image SEgmentation (VOISE). VOISE is a dynamic algorithm for partitioning the underlying pixel grid of an image into regions according to a prescribed homogeneity criterion. The algorithm consists of an iterative procedure that dynamically constructs a tessellation of the image plane based on a Voronoi Diagram, until the intensity of the underlying image within each region is classified as homogeneous. The computed tessellations allow the extraction of quantitative information about the auroral features such as mean intensity, latitudinal and longitudinal extents and length scales. These outputs thus represent a more automated and objective method of characterising auroral emissions than manual inspection.
Title: On Maximum a Posteriori Estimation of Hidden Markov Processes
Abstract: We present a theoretical analysis of Maximum a Posteriori (MAP) sequence estimation for binary symmetric hidden Markov processes. We reduce the MAP estimation to the energy minimization of an appropriately defined Ising spin model, and focus on the performance of MAP as characterized by its accuracy and the number of solutions corresponding to a typical observed sequence. It is shown that for a finite range of sufficiently low noise levels, the solution is uniquely related to the observed sequence, while the accuracy degrades linearly with increasing the noise strength. For intermediate noise values, the accuracy is nearly noise-independent, but now there are exponentially many solutions to the estimation problem, which is reflected in non-zero ground-state entropy for the Ising model. Finally, for even larger noise intensities, the number of solutions reduces again, but the accuracy is poor. It is shown that these regimes are different thermodynamic phases of the Ising model that are related to each other via first-order phase transitions.
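For concreteness, MAP sequence estimation for a binary symmetric hidden Markov process is exactly what the Viterbi algorithm computes, and maximizing the log-posterior is minimizing the corresponding Ising-chain energy. A minimal sketch (the parametrization by `stay` and `correct` probabilities is our own illustrative choice):

```python
import math

def viterbi_binary(obs, stay=0.9, correct=0.9):
    """MAP state sequence for a binary symmetric HMM: the hidden chain keeps
    its state with probability `stay`, and each observation equals the true
    state with probability `correct`. Uniform initial distribution."""
    def emit(s, o):
        return math.log(correct if s == o else 1 - correct)
    def trans(s, t):
        return math.log(stay if s == t else 1 - stay)
    # score[s] = best log-probability of any state path ending in state s
    score = {s: math.log(0.5) + emit(s, obs[0]) for s in (0, 1)}
    back = []
    for o in obs[1:]:
        prev = {t: max((0, 1), key=lambda s: score[s] + trans(s, t))
                for t in (0, 1)}
        score = {t: score[prev[t]] + trans(prev[t], t) + emit(t, o)
                 for t in (0, 1)}
        back.append(prev)
    state = max((0, 1), key=lambda s: score[s])
    path = [state]
    for prev in reversed(back):
        state = prev[state]
        path.append(state)
    return path[::-1]

# In the low-noise regime the MAP path locks onto the observations,
# matching the unique-solution phase described in the abstract.
print(viterbi_binary([0, 0, 1, 1], stay=0.9, correct=0.99))  # [0, 0, 1, 1]
```

Raising the noise (lowering `correct`) while keeping `stay` high makes the coupling term dominate, and isolated flipped observations get smoothed away, the analogue of the ferromagnetic ordering in the Ising picture.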
Title: Matrix Completion from Noisy Entries
Abstract: Given a low-rank matrix M, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the 'Netflix problem') to structure-from-motion and positioning. We study a low-complexity algorithm introduced by Keshavan et al. (2009), based on a combination of spectral techniques and manifold optimization, which we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances.
Title: Regularization methods for learning incomplete matrices
Abstract: We use convex relaxation techniques to provide a sequence of solutions to the matrix completion problem. Using the nuclear norm as a regularizer, we provide simple and very efficient algorithms for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm iteratively replaces the missing elements with those obtained from a thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions.
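The iteration described above can be sketched in a few lines. The toy below is our own pure-Python version restricted to a rank-1 fit via power iteration (the actual algorithm computes a full soft-thresholded SVD and reuses each solution as a warm start for the next regularization value); all function names here are illustrative:

```python
def rank1_approx(mat, iters=200):
    """Leading singular triple (s, u, v) of a small matrix, by power iteration."""
    n, m = len(mat), len(mat[0])
    v = [1.0] * m
    for _ in range(iters):
        u = [sum(mat[i][j] * v[j] for j in range(m)) for i in range(n)]
        nu = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / nu for x in u]
        v = [sum(mat[i][j] * u[i] for i in range(n)) for j in range(m)]
        s = sum(x * x for x in v) ** 0.5 or 1.0  # singular value estimate
        v = [x / s for x in v]
    return s, u, v

def soft_impute_rank1(observed, shape, lam=0.0, iters=200):
    """Iteratively replace the missing entries with those from a
    (soft-thresholded) rank-1 approximation; `observed` maps (i, j) -> value."""
    n, m = shape
    z = [[0.0] * m for _ in range(n)]
    for _ in range(iters):
        # keep observed entries, fill the rest from the current estimate
        x = [[observed.get((i, j), z[i][j]) for j in range(m)] for i in range(n)]
        s, u, v = rank1_approx(x)
        s = max(s - lam, 0.0)  # soft-threshold the singular value
        z = [[s * u[i] * v[j] for j in range(m)] for i in range(n)]
    return z

# Rank-1 target [[1,1,1],[2,2,2],[3,3,3]] with entry (2, 2) unobserved.
obs = {(i, j): float(i + 1) for i in range(3) for j in range(3) if (i, j) != (2, 2)}
z = soft_impute_rank1(obs, (3, 3))
print(round(z[2][2], 2))  # the imputed entry approaches the true value 3.0
```

With `lam = 0` this reduces to plain iterative imputation; sweeping `lam` from large to small, reusing `z` between values, is what traces out the regularization path the abstract describes.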
Title: Chain graph models of multivariate regression type for categorical data
Abstract: We discuss a class of chain graph models for categorical variables defined by what we call a multivariate regression chain graph Markov property. First, the set of local independencies of these models is shown to be Markov equivalent to those of a chain graph model recently defined in the literature. Next we provide a parametrization based on a sequence of generalized linear models with a multivariate logistic link function that captures all independence constraints in any chain graph model of this kind.