Abstract: Art historians and archaeologists have long grappled with the regional classification of ancient Near Eastern ivory carvings. Based on the visual similarity of sculptures, researchers in these fields have proposed object assemblages linked to hypothesized regional production centers. Using quantitative rather than visual methods, we here approach this classification task by exploiting computational methods from machine learning currently used with success in a variety of statistical problems in science and engineering. We first construct a prediction function using 66 categorical features as inputs and regional style as output. The model assigns the regional style group (RSG) with 98 percent prediction accuracy. We then rank these features by their mutual information with RSG, quantifying single-feature predictive power. Using the highest-ranking features in combination with nomographic visualization, we have found previously unknown relationships that may aid in the regional classification of these ivories and their interpretation in art historical context.
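As a minimal sketch of the feature-ranking step, the snippet below computes mutual information between a categorical feature and a class label directly from empirical frequencies and ranks a handful of synthetic features; the arrays are hypothetical placeholders, not the ivory data set.

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (in nats) between two categorical arrays, from empirical frequencies."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            p_xy = np.mean((x == xv) & (y == yv))
            if p_xy == 0:
                continue
            mi += p_xy * np.log(p_xy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

# Hypothetical example: rank categorical features by MI with the class label.
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=200)                      # stand-in for RSG
features = {"f%d" % j: rng.integers(0, 4, size=200) for j in range(5)}
ranking = sorted(features, key=lambda f: mutual_information(features[f], labels), reverse=True)
print(ranking)
```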
Title: An Algebraic Approach for the MIMO Control of Small Scale Helicopter
Abstract: The control of a small-scale helicopter is a MIMO problem. To use the classical control approach to formally solve a MIMO problem, one needs a multidimensional root locus (RL) diagram to tune the control parameters. The dimension required of the RL diagram for MIMO design has forced the classical design procedure to be conducted as a cascaded multi-loop SISO system, starting from the innermost loop and working outward. To implement this control approach for a helicopter, pitch and roll attitude control systems are often subordinated to longitudinal and lateral velocity control systems, respectively, in a nested architecture. For this technique to work, the inner attitude control loop must have a higher bandwidth than the outer velocity control loop, which is not the case for a high-performance mini helicopter. To address these problems, an algebraic design approach is proposed in this work. The control designed using the s-CDM approach is demonstrated for hovering control of a small-scale helicopter simultaneously subjected to plant-parameter uncertainties and wind disturbances.
Title: A Fixed-Parameter Algorithm for Random Instances of Weighted d-CNF Satisfiability
Abstract: We study random instances of the weighted $d$-CNF satisfiability problem (WEIGHTED $d$-SAT), a generic W[1]-complete problem. A random instance of the problem consists of a fixed parameter $k$ and a random $d$-CNF formula $F^{p}_{k,d}$ generated as follows: for each subset of $d$ variables, with probability $p$, a clause over the $d$ variables is selected uniformly at random from among the $2^d - 1$ clauses that contain at least one negated literal. We show that random instances of WEIGHTED $d$-SAT can be solved in $O(k^2 n + n^{O(1)})$ time with high probability, indicating that typical instances of WEIGHTED $d$-SAT under this instance distribution are fixed-parameter tractable. The results also hold for random instances from the model $F^{p}_{k,d}(d')$, in which clauses containing fewer than $d'$ ($1 < d' < d$) negated literals are forbidden, and for random instances of the renormalized (miniaturized) version of WEIGHTED $d$-SAT in a certain range of the random model's parameter $p(n)$. This, together with our previous results on the threshold behavior and the resolution complexity of unsatisfiable instances of $F^{p}_{k,d}$, provides an almost complete characterization of the typical-case behavior of random instances of WEIGHTED $d$-SAT.
Title: Sparse Online Learning via Truncated Gradient
Abstract: We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: The degree of sparsity is continuous -- a parameter controls the rate of sparsification from no sparsification to total sparsification. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular $L_1$-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online learning guarantees. The approach works well empirically. We apply the approach to several datasets and find that for datasets with large numbers of features, substantial sparsity is discoverable.
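A minimal sketch of a truncated-gradient style update, assuming squared loss, plain stochastic gradient descent, and a soft truncation applied every K steps; the constants, loss, and schedule are illustrative choices rather than the paper's exact algorithm.

```python
import numpy as np

def truncate(w, alpha, theta):
    """Shrink coefficients with |w| <= theta toward zero by alpha; leave larger ones untouched."""
    w = w.copy()
    small = np.abs(w) <= theta
    w[small] = np.sign(w[small]) * np.maximum(np.abs(w[small]) - alpha, 0.0)
    return w

def truncated_gradient_sgd(X, y, eta=0.01, g=0.1, theta=1.0, K=10):
    """Online SGD on squared loss with periodic truncation (illustrative sketch)."""
    n, d = X.shape
    w = np.zeros(d)
    for t in range(n):
        grad = (X[t] @ w - y[t]) * X[t]      # gradient of 0.5*(prediction - y)^2
        w -= eta * grad
        if (t + 1) % K == 0:                 # truncate every K steps
            w = truncate(w, K * eta * g, theta)
    return w

# Hypothetical usage: only the first two features are informative.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=2000)
print(np.round(truncated_gradient_sgd(X, y), 2))  # truncation drives many irrelevant weights to zero
```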
Title: Improving Point and Interval Estimates of Monotone Functions by Rearrangement
Abstract: Suppose that a target function is monotonic, namely, weakly increasing, and an available original estimate of this target function is not weakly increasing. Rearrangements, univariate and multivariate, transform the original estimate to a monotonic estimate that always lies closer in common metrics to the target function. Furthermore, suppose an original simultaneous confidence interval, which covers the target function with probability at least $1-\alpha$, is defined by upper and lower end-point functions that are not weakly increasing. Then the rearranged confidence interval, defined by the rearranged upper and lower end-point functions, is shorter in length in common norms than the original interval and also covers the target function with probability at least $1-\alpha$. We demonstrate the utility of the improved point and interval estimates with an age-height growth chart example.
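In the univariate case the rearrangement amounts to sorting the fitted values on an equispaced grid; the sketch below uses a made-up monotone target and a noisy, non-monotone original estimate.

```python
import numpy as np

x = np.linspace(0, 1, 200)
target = np.sqrt(x)                          # monotone (weakly increasing) target function
original = target + 0.1 * np.sin(25 * x)     # non-monotone original estimate

rearranged = np.sort(original)               # univariate rearrangement: sort the fitted values

# The rearranged estimate is weakly increasing and never farther from the target in common metrics.
print(np.mean((original - target) ** 2), np.mean((rearranged - target) ** 2))
```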
Title: A new Hedging algorithm and its application to inferring latent random variables
Abstract: We present a new online learning algorithm for cumulative discounted gain. This learning algorithm does not use exponential weights on the experts. Instead, it uses a weighting scheme that depends on the regret of the master algorithm relative to the experts. In particular, experts whose discounted cumulative gain is smaller (worse) than that of the master algorithm receive zero weight. We also sketch how a regret-based algorithm can be used as an alternative to Bayesian averaging in the context of inferring latent random variables.
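The weighting scheme is described only qualitatively above; the sketch below encodes one plausible reading of it (weights proportional to the positive part of an expert's discounted cumulative gain minus the master's, so experts doing worse than the master receive zero weight). It is an illustration of that rule, not the paper's algorithm; the discount factor alpha is a hypothetical parameter.

```python
import numpy as np

def discounted_hedge(gains, alpha=0.99):
    """gains: (T, N) array of per-round expert gains. Returns the master's per-round gains.

    Illustrative scheme: each expert is weighted by the positive part of its discounted
    cumulative gain minus the master's; experts doing worse than the master get zero weight."""
    T, N = gains.shape
    expert_g = np.zeros(N)
    master_g = 0.0
    master_gains = []
    for t in range(T):
        adv = np.maximum(expert_g - master_g, 0.0)
        w = adv / adv.sum() if adv.sum() > 0 else np.ones(N) / N
        g_t = w @ gains[t]                           # master's gain this round
        master_gains.append(g_t)
        expert_g = alpha * expert_g + gains[t]       # discounted cumulative expert gains
        master_g = alpha * master_g + g_t            # discounted cumulative master gain
    return np.array(master_gains)

# Hypothetical usage with random per-round gains.
rng = np.random.default_rng(0)
print(discounted_hedge(rng.normal(size=(100, 5))).sum())
```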
Title: Frequentist and Bayesian measures of confidence via multiscale bootstrap for testing three regions
Abstract: A new computational method for frequentist $p$-values and Bayesian posterior probabilities based on the bootstrap probability is discussed for the multivariate normal model with unknown expectation parameter vector. The null hypothesis is represented as an arbitrary-shaped region. We introduce new parametric models for the scaling law of the bootstrap probability so that the multiscale bootstrap method, which was designed for one-sided tests, can also compute confidence measures for two-sided tests, extending applicability to a wider class of hypotheses. Parameter estimation is improved by the two-step multiscale bootstrap and also by including higher-order terms. Model selection is important not only as a motivating application of our method, but also as an essential ingredient in the method. A compromise between the frequentist and Bayesian approaches is attempted by showing that the Bayesian posterior probability with a noninformative prior can be interpreted as a frequentist $p$-value of a "zero-sided" test.
Title: Graph Kernels
Abstract: We present a unified framework to study graph kernels, special cases of which include the random walk graph kernel, marginalized graph kernel, and geometric kernel on graphs. Through extensions of linear algebra to Reproducing Kernel Hilbert Spaces (RKHS) and reduction to a Sylvester equation, we construct an algorithm that improves the time complexity of kernel computation from $O(n^6)$ to $O(n^3)$. When the graphs are sparse, conjugate gradient solvers or fixed-point iterations bring our algorithm into the sub-cubic domain. Experiments on graphs from bioinformatics and other application domains show that it is often more than a thousand times faster than previous approaches. We then explore connections between diffusion kernels, regularization on graphs, and graph kernels, and use these connections to propose new graph kernels. Finally, we show that rational kernels, when specialized to graphs, reduce to the random walk graph kernel.
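A minimal dense-matrix sketch of the geometric random walk kernel via the direct (Kronecker) product graph; it solves the linear system $(I - \lambda W_\times)x = p$ directly rather than through the Sylvester-equation or conjugate-gradient accelerations, so it shows the quantity being computed, not the fast algorithm. The adjacency matrices and $\lambda$ are made up.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.1):
    """Geometric random walk kernel between two graphs given their adjacency matrices."""
    W = np.kron(A1, A2)                       # adjacency matrix of the direct product graph
    n = W.shape[0]
    p = np.ones(n) / n                        # uniform starting distribution
    q = np.ones(n)                            # uniform stopping weights
    x = np.linalg.solve(np.eye(n) - lam * W, p)   # x = (I - lam*W)^{-1} p
    return q @ x

# Hypothetical tiny example: a triangle versus a 3-node path.
A_triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(random_walk_kernel(A_triangle, A_path))
```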
Title: About the creation of a parallel bilingual corpora of web-publications
Abstract: An algorithm for the creation of parallel text corpora is presented. The algorithm is based on the use of "key words" in text documents and on means for their automated translation. Key words were singled out using Russian and Ukrainian morphological dictionaries, as well as dictionaries of noun translations for the Russian and Ukrainian languages. In addition, empirical-statistical rules were used to calculate the weights of the terms in the documents. The algorithm under consideration was implemented as a software system integrated into the InfoStream content-monitoring system. As a result, a parallel bilingual corpus of web publications containing about 30 thousand documents was created.
Title: Unveiling the mystery of visual information processing in human brain
Abstract: It is generally accepted that human vision is an extremely powerful information processing system that facilitates our interaction with the surrounding world. However, despite extended and extensive research efforts, which encompass many exploration fields, the underlying fundamentals and operational principles of visual information processing in the human brain remain unknown. We are still unable to figure out where and how, along the path from the eyes to the cortex, the sensory input perceived by the retina is converted into a meaningful object representation that can be consciously manipulated by the brain. Studying the vast literature on the various aspects of brain information processing, I was surprised to learn that the respected scholarly discussion is totally indifferent to the basic keynote question: "What is information?" in general, or "What is visual information?" in particular. In the old days, it was assumed that any scientific research approach has first to define its basic departure points. Why this was overlooked in brain information processing research remains a conundrum. In this paper, I try to find a remedy for this bizarre situation. I propose an uncommon definition of "information", which can be derived from Kolmogorov's Complexity Theory and Chaitin's notion of Algorithmic Information. Embracing this new definition leads to an inevitable revision of traditional dogmas that shape the state of the art of brain information processing research. I hope this revision will better serve the challenging goal of modeling human visual information processing.
Title: Modeling belief systems with scale-free networks
Abstract: The evolution of belief systems has always been a focus of cognitive research. In this paper we delineate a new model describing belief systems as a network of statements considered true. In testing the model, a small number of parameters enabled us to reproduce a variety of well-known mechanisms ranging from opinion changes to the development of psychological problems. The self-organizing opinion structure showed a scale-free degree distribution. The novelty of our work lies in applying a convenient set of definitions allowing us to depict opinion network dynamics in a highly favorable way, which resulted in a scale-free belief network. As an additional benefit, we list several conjectural consequences in a number of areas related to thinking and reasoning.
Title: Music, Complexity, Information
Abstract: These are the preparatory notes for a Science & Music essay, "Playing by numbers", which appeared in Nature 453 (2008) 988-989.
Title: Belief decision support and reject for textured images characterization
Abstract: The classification of textured images assumes that the images are considered in terms of areas with the same texture. In an uncertain environment, it can be better to take an imprecise decision or to reject an area as belonging to a class absent from the learning set. Moreover, the areas that serve as classification units may contain more than one texture. These considerations lead us to develop a belief decision model that permits rejecting an area as unlearned and deciding on unions and intersections of learned classes. The proposed approach is justified by an application to seabed characterization from sonar images, which serves as an illustration.
Title: Case-deletion importance sampling estimators: Central limit theorems and related results
Abstract: Case-deleted analysis is a popular method for evaluating the influence of a subset of cases on inference. The use of Monte Carlo estimation strategies in complicated Bayesian settings leads naturally to the use of importance sampling techniques to assess the divergence between full-data and case-deleted posteriors and to provide estimates under the case-deleted posteriors. However, the dependability of the importance sampling estimators depends critically on the variability of the case-deleted weights. We provide theoretical results concerning the assessment of the dependability of case-deleted importance sampling estimators in several Bayesian models. In particular, these results allow us to establish whether or not the estimators satisfy a central limit theorem. Because the conditions we derive are of a simple analytical nature, the assessment of the dependability of the estimators can be verified routinely before estimation is performed. We illustrate the use of the results in several examples.
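For conditionally independent observations, the case-deleted posterior is proportional to the full-data posterior divided by the deleted case's likelihood, so importance weights of the form $1/f(y_i \mid \theta)$ can reweight full-data posterior draws. The sketch below illustrates this with a normal mean model under a flat prior, where the exact case-deleted answer is available for comparison; the data and model are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(0.0, 1.0, size=50)            # hypothetical data, known variance 1
n = len(y)

# Full-data posterior for the mean under a flat prior: N(ybar, 1/n).
theta = rng.normal(y.mean(), 1.0 / np.sqrt(n), size=10_000)   # posterior draws

i = 0                                        # case to delete
w = 1.0 / stats.norm.pdf(y[i], loc=theta, scale=1.0)          # case-deletion importance weights
w /= w.sum()                                 # self-normalize

case_deleted_mean = np.sum(w * theta)        # importance sampling estimate of E[theta | y_{-i}]
print(case_deleted_mean, np.delete(y, i).mean())              # compare with the exact answer
```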
Title: The Correspondence Analysis Platform for Uncovering Deep Structure in Data and Information
Abstract: We study two aspects of information semantics: (i) the collection of all relationships, (ii) tracking and spotting anomaly and change. The first is implemented by endowing all relevant information spaces with a Euclidean metric in a common projected space. The second is modelled by an induced ultrametric. A very general way to achieve a Euclidean embedding of different information spaces based on cross-tabulation counts (and from other input data formats) is provided by Correspondence Analysis. From there, the induced ultrametric that we are particularly interested in takes a sequential - e.g. temporal - ordering of the data into account. We employ such a perspective to look at narrative, "the flow of thought and the flow of language" (Chafe). In application to policy decision making, we show how we can focus analysis in a small number of dimensions.
Title: Bayesian Analysis of Marginal Log-Linear Graphical Models for Three Way Contingency Tables
Abstract: This paper deals with the Bayesian analysis of graphical models of marginal independence for three way contingency tables. We use a marginal log-linear parametrization, under which the model is defined through suitable zero-constraints on the interaction parameters calculated within marginal distributions. We undertake a comprehensive Bayesian analysis of these models, involving suitable choices of prior distributions, estimation, model determination, as well as the allied computational issues. The methodology is illustrated with reference to two real data sets.
Title: Catching Up Faster by Switching Sooner: A Prequential Solution to the AIC-BIC Dilemma
Abstract: Bayesian model averaging, model selection and its approximations such as BIC are generally statistically consistent, but sometimes achieve slower rates of convergence than other methods such as AIC and leave-one-out cross-validation. On the other hand, these other methods can be inconsistent. We identify the "catch-up phenomenon" as a novel explanation for the slow convergence of Bayesian methods. Based on this analysis we define the switch distribution, a modification of the Bayesian marginal distribution. We show that, under broad conditions, model selection and prediction based on the switch distribution are both consistent and achieve optimal convergence rates, thereby resolving the AIC-BIC dilemma. The method is practical; we give an efficient implementation. The switch distribution has a data compression interpretation, and can thus be viewed as a "prequential" or MDL method; yet it is different from the MDL methods that are usually considered in the literature. We compare the switch distribution to Bayes factor model selection and leave-one-out cross-validation.
Title: Principal components analysis for sparsely observed correlated functional data using a kernel smoothing approach
Abstract: In this paper, we consider the problem of estimating the covariance kernel and its eigenvalues and eigenfunctions from sparse, irregularly observed, noise-corrupted and (possibly) correlated functional data. We present a method based on pre-smoothing of individual sample curves through an appropriate kernel. We show that the naive empirical covariance of the pre-smoothed sample curves gives a highly biased estimator of the covariance kernel along its diagonal. We address this problem by estimating the diagonal and off-diagonal parts of the covariance kernel separately. We then present a practical and efficient method for choosing the bandwidth for the kernel by using an approximation to the leave-one-curve-out cross-validation score. We prove that under standard regularity conditions on the covariance kernel and assuming i.i.d. samples, the risk of our estimator, under $L^2$ loss, achieves the optimal nonparametric rate when the number of measurements per curve is bounded. We also show that even when the sample curves are correlated in such a way that the noiseless data has a separable covariance structure, the proposed method is still consistent, and we quantify the role of this correlation in the risk of the estimator.
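A minimal sketch of the pre-smoothing step: each sparsely observed curve is smoothed onto a common grid with a Nadaraya-Watson estimator before forming the (naive) empirical covariance. The Gaussian kernel, fixed bandwidth, and simulated curves are illustrative assumptions; the paper instead chooses the bandwidth by an approximate leave-one-curve-out criterion and corrects the diagonal bias.

```python
import numpy as np

def nw_smooth(t_obs, y_obs, grid, h):
    """Nadaraya-Watson smoother of one sparsely observed curve onto a common grid."""
    K = np.exp(-0.5 * ((grid[:, None] - t_obs[None, :]) / h) ** 2)  # Gaussian kernel weights
    return (K @ y_obs) / K.sum(axis=1)

rng = np.random.default_rng(2)
grid = np.linspace(0, 1, 50)
curves = []
for _ in range(100):                          # hypothetical sparse, noisy sample curves
    t = np.sort(rng.uniform(0, 1, size=6))    # few measurements per curve
    y = np.sin(2 * np.pi * t) + rng.normal(0, 0.2, size=t.shape)
    curves.append(nw_smooth(t, y, grid, h=0.15))

X = np.array(curves)
cov_hat = np.cov(X, rowvar=False)             # naive empirical covariance of the smoothed curves
print(cov_hat.shape)
```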
Title: Quantitative comparisons between finitary posterior distributions and Bayesian posterior distributions
Abstract: The main object of Bayesian statistical inference is the determination of posterior distributions. Sometimes these laws are given for quantities devoid of empirical value. This serious drawback vanishes when one confines oneself to considering a finite horizon framework. However, assuming infinite exchangeability gives rise to fairly tractable a posteriori quantities, which is very attractive in applications. Hence, with a view to a reconciliation between these two aspects of the Bayesian way of reasoning, in this paper we provide quantitative comparisons between posterior distributions of finitary parameters and posterior distributions of allied parameters appearing in usual statistical models.
Title: Semiparametric curve alignment and shift density estimation for biological data
Abstract: Assume that we observe a large number of curves, all of them with identical, although unknown, shape, but with a different random shift. The objective is to estimate the individual time shifts and their distribution. Such an objective appears in several biological applications like neuroscience or ECG signal processing, in which the estimation of the distribution of the elapsed time between repetitive pulses, with a possibly low signal-to-noise ratio and without knowledge of the pulse shape, is of interest. We suggest an M-estimator leading to a three-stage algorithm: we split our data set into blocks, on which the estimation of the shifts is done by minimizing a cost criterion based on a functional of the periodogram; the estimated shifts are then plugged into a standard density estimator. We show that under mild regularity assumptions the density estimate converges weakly to the true shift distribution. The theory is applied both to simulations and to alignment of real ECG signals. The estimator of the shift distribution performs well, even in the case of low signal-to-noise ratio, and is shown to outperform the standard methods for curve alignment.
Title: Algorithm Selection as a Bandit Problem with Unbounded Losses
Abstract: Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, we adopted an online approach, in which a performance model is iteratively updated and used to guide selection on a sequence of problem instances. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game, but this required the setting of an arbitrary bound on algorithm runtimes, thus invalidating the optimal regret of the solver. In this paper, we propose a simpler framework for representing algorithm selection as a bandit problem, with partial information, and an unknown bound on losses. We adapt an existing solver to this game, proving a bound on its expected regret, which holds also for the resulting algorithm selection technique. We present preliminary experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark.
Title: Scientific Paper Summarization Using Citation Summary Networks
Abstract: Quickly moving to a new area of research is painful for researchers due to the vast amount of scientific literature in each field of study. One possible way to overcome this problem is to summarize a scientific topic. In this paper, we propose a model of summarizing a single article, which can be further used to summarize an entire topic. Our model is based on analyzing others' viewpoint of the target article's contributions and the study of its citation summary network using a clustering approach.
Title: Large-Sample Confidence Intervals for the Treatment Difference in a Two-Period Crossover Trial, Utilizing Prior Information
Abstract: Consider a two-treatment, two-period crossover trial, with responses that are continuous random variables. We find a large-sample frequentist $1-\alpha$ confidence interval for the treatment difference that utilizes the uncertain prior information that there is no differential carryover effect.
Title: Extension of Inagaki General Weighted Operators and A New Fusion Rule Class of Proportional Redistribution of Intersection Masses
Abstract: In this paper we extend the Inagaki Weighted Operators fusion rule (WO) in information fusion by redistributing not only the conflicting mass but also the masses of non-empty intersections; we call the result Double Weighted Operators (DWO). We then propose a new fusion rule class, the Class of Proportional Redistribution of Intersection Masses (CPRIM), which generates many interesting particular fusion rules in information fusion. Both formulas are presented for any number of sources of information. An application and a comparison with other fusion rules are given in the last section.
Title: Multi-Instance Learning by Treating Instances As Non-I.I.D. Samples
Abstract: Multi-instance learning attempts to learn from a training set consisting of labeled bags, each containing many unlabeled instances. Previous studies typically treat the instances in the bags as independently and identically distributed. However, the instances in a bag are rarely independent, and therefore a better performance can be expected if the instances are treated in a non-i.i.d. way that exploits the relations among instances. In this paper, we propose a simple yet effective multi-instance learning method, which regards each bag as a graph and uses a specific kernel to distinguish the graphs by considering the features of the nodes as well as the features of the edges that convey relations among instances. The effectiveness of the proposed method is validated by experiments.
Title: Intrusion Detection Using Cost-Sensitive Classification
Abstract: Intrusion Detection is an invaluable part of computer network defense. An important consideration is the fact that raising false alarms carries a significantly lower cost than not detecting attacks. For this reason, we examine how cost-sensitive classification methods can be used in Intrusion Detection systems. The performance of the approach is evaluated under different experimental conditions, cost matrices and different classification models, in terms of expected cost, as well as detection and false alarm rates. We find that even under unfavourable conditions, cost-sensitive classification can improve performance, if only slightly.
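For context, a standard cost-sensitive decision rule of the kind such systems rely on: given class-probability estimates and a cost matrix in which missed attacks are more expensive than false alarms, predict the class with minimum expected cost. The cost values, probabilities and classifier are hypothetical placeholders.

```python
import numpy as np

# cost[i, j] = cost of predicting class j when the true class is i
# classes: 0 = normal traffic, 1 = attack (hypothetical values)
cost = np.array([[0.0, 1.0],    # false alarm is cheap
                 [5.0, 0.0]])   # a missed attack is five times as expensive

def min_expected_cost(prob):
    """prob: (n, 2) class-probability estimates from any classifier; returns predicted classes."""
    expected = prob @ cost                   # expected cost of each possible prediction
    return expected.argmin(axis=1)

prob = np.array([[0.9, 0.1], [0.7, 0.3], [0.2, 0.8]])
print(min_expected_cost(prob))               # flags the second case even though p(attack) < 0.5
```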
Title: The Five Points Pose Problem : A New and Accurate Solution Adapted to any Geometric Configuration
Abstract: The goal of this paper is to estimate directly the rotation and translation between two stereoscopic images with the help of five homologous points. The methodology presented does not mix the rotation and translation parameters, which is an important advantage over methods using the well-known essential matrix. This results in correct behavior and accuracy for situations otherwise known to be quite unfavorable, such as planar scenes or panoramic sets of images (with a null base length), while providing quite comparable results for more "standard" cases. The resolution of the algebraic polynomials resulting from the modeling of the coplanarity constraint is carried out with powerful algebraic solver tools (Groebner bases and the Rational Univariate Representation).
Title: Two Dimensional Density Estimation using Smooth Invertible Transformations
Abstract: We investigate the problem of estimating a smooth invertible transformation $f$ when observing independent samples $X_1, \ldots, X_n \sim P \circ f$, where $P$ is a known measure. We focus on the two-dimensional case where $P$ and $f$ are defined on $\mathbb{R}^2$. We present a flexible class of smooth invertible transformations in two dimensions with variational equations for optimizing over the classes, then study the problem of estimating the transformation $f$ by penalized maximum likelihood estimation. We apply our methodology to the case when $P \circ f$ has a density with respect to Lebesgue measure on $\mathbb{R}^2$ and demonstrate improvements over kernel density estimation on three examples.
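The likelihood being penalized comes from the change-of-variables formula: if $X \sim P \circ f$ and $P$ has density $p$, then the density of $X$ at $x$ is $p(f(x))\,|\det Jf(x)|$. The sketch below evaluates this for a hypothetical smooth invertible perturbation of the identity with a standard normal base measure; it is not the paper's transformation class.

```python
import numpy as np
from scipy import stats

def f(x):
    """Hypothetical smooth invertible map on R^2 (a small perturbation of the identity)."""
    u, v = x[..., 0], x[..., 1]
    return np.stack([u + 0.3 * np.sin(v), v + 0.3 * np.sin(u)], axis=-1)

def abs_det_jac(x):
    """|det Jf(x)| for the map above, computed analytically."""
    u, v = x[..., 0], x[..., 1]
    return np.abs(1.0 - 0.09 * np.cos(u) * np.cos(v))

def density(x):
    """Density of X ~ P o f when the known measure P is the standard normal on R^2."""
    base = stats.multivariate_normal(mean=np.zeros(2))
    return base.pdf(f(x)) * abs_det_jac(x)

x = np.array([[0.0, 0.0], [1.0, -1.0]])
print(density(x))
```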
Title: Hardware/Software Co-Design for Spike Based Recognition
Abstract: The practical applications based on recurrent spiking neurons are limited due to their non-trivial learning algorithms. The temporal nature of spiking neurons is more favorable for hardware implementation, where signals can be represented in binary form and communication can be done through the use of spikes. This work investigates the potential of recurrent spiking neuron implementations on reconfigurable platforms and their applicability in temporal-based applications. A theoretical framework of reservoir computing is investigated for hardware/software implementation. In this framework, only readout neurons are trained, which overcomes the burden of training at the network level. These recurrent neural networks are termed microcircuits and are viewed as basic computational units in cortical computation. This paper investigates the potential of recurrent neural reservoirs and presents a novel hardware/software strategy for their implementation on FPGAs. The design is implemented and its functionality is tested in the context of a speech recognition application.
Title: Polygon Exploration with Time-Discrete Vision
Abstract: With the advent of autonomous robots with two- and three-dimensional scanning capabilities, classical visibility-based exploration methods from computational geometry have gained in practical importance. However, real-life laser scanning of useful accuracy does not allow the robot to scan continuously while in motion; instead, it has to stop each time it surveys its environment. This requirement was studied by Fekete, Klein and Nuechter for the subproblem of looking around a corner, but until now it has not been considered in an online setting for whole polygonal regions. We give the first algorithmic results for this important problem, which combines stationary art-gallery-type aspects with watchman-type issues in an online scenario: We demonstrate that even for orthoconvex polygons, a competitive strategy can be achieved only for a limited aspect ratio A (the ratio of the maximum and minimum edge length of the polygon), i.e., for a given lower bound on the size of an edge; we give a matching upper bound by providing an O(log A)-competitive strategy for simple rectilinear polygons, under the assumption that each edge of the polygon has to be fully visible from some scan point.
Title: CPBPV: A Constraint-Programming Framework for Bounded Program Verification
Abstract: This paper studies how to verify the conformity of a program with its specification and proposes a novel constraint-programming framework for bounded program verification (CPBPV). The CPBPV framework uses constraint stores to represent the specification and the program and explores execution paths nondeterministically. The input program is partially correct if each constraint store so produced implies the post-condition. CPBPV does not explore spurious execution paths, as it incrementally prunes execution paths early by detecting that the constraint store is not consistent. CPBPV uses the rich language of constraint programming to express the constraint store. Finally, CPBPV is parametrized with a list of solvers which are tried in sequence, starting with the least expensive and least general. Experimental results often show orders-of-magnitude improvements over earlier approaches, with running times often independent of the variable domains. Moreover, CPBPV was able to detect subtle errors in some programs for which other frameworks based on model checking failed.
Title: Text Data Mining: Theory and Methods
Abstract: This paper provides the reader with a very brief introduction to some of the theory and methods of text data mining. The intent of this article is to introduce the reader to some of the current methodologies that are employed within this discipline while at the same time making the reader aware of some of the interesting challenges that remain to be solved within the area. Finally, the article serves as a very rudimentary tutorial on some of the techniques while also providing the reader with a list of references for additional study.
Title: A decomposition result for the Haar distribution on the orthogonal group
Abstract: Let H be a Haar distributed random matrix on the group of p x p real orthogonal matrices. Partition H into four blocks: (1) the (1,1) element, (2) the rest of the first row, (3) the rest of the first column, and (4) the remaining (p-1) x (p-1) matrix. The marginal distribution of (1) is well known. In this paper, we give the conditional distribution of (2) and (3) given (1), and the conditional distribution of (4) given (1), (2), (3). This conditional specification uniquely determines the Haar distribution. The two conditional distributions involve well-known probability distributions, namely the uniform distribution on the unit sphere in (p-1)-dimensional space and the Haar distribution on (p-2) x (p-2) orthogonal matrices. Our results show how to construct the Haar distribution on p x p orthogonal matrices from the Haar distribution on (p-2) x (p-2) orthogonal matrices coupled with the uniform distribution on the unit sphere in p-1 dimensions.
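Such block decompositions can be checked numerically against the standard way of sampling the Haar distribution, namely the QR decomposition of a Gaussian matrix with a sign-corrected diagonal; the sketch below is that generic sampler, not the paper's construction from smaller orthogonal matrices.

```python
import numpy as np

def haar_orthogonal(p, rng):
    """Sample a p x p matrix from the Haar distribution on the orthogonal group via QR."""
    Z = rng.normal(size=(p, p))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))            # column sign correction makes the law exactly Haar

rng = np.random.default_rng(3)
H = haar_orthogonal(5, rng)

# Blocks in the partition described above:
h11 = H[0, 0]            # (1) the (1,1) element
row = H[0, 1:]           # (2) the rest of the first row
col = H[1:, 0]           # (3) the rest of the first column
body = H[1:, 1:]         # (4) the remaining (p-1) x (p-1) matrix
```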
Title: On Endogenous Reconfiguration in Mobile Robotic Networks
Abstract: In this paper, our focus is on certain applications for mobile robotic networks, where reconfiguration is driven by factors intrinsic to the network rather than changes in the external environment. In particular, we study a version of the coverage problem useful for surveillance applications, where the objective is to position the robots in order to minimize the average distance from a random point in a given environment to the closest robot. This problem has been well studied for omni-directional robots, for which it is shown that the optimal configuration for the network is a centroidal Voronoi configuration and that the coverage cost belongs to $\Theta(m^{-1/2})$, where $m$ is the number of robots in the network. In this paper, we study this problem for more realistic models of robots, namely the double integrator (DI) model and the differential drive (DD) model. We observe that the introduction of these motion constraints in the algorithm design problem gives rise to an interesting behavior. For a network, the optimal algorithm for these models of robots mimics that for omni-directional robots. We propose novel algorithms whose performances are within a constant factor of the optimal asymptotically (i.e., as $m \to +\infty$). In particular, we prove that the coverage cost for the DI and DD models of robots is of order $m^{-1/3}$. Additionally, we show that, as the network grows, these novel algorithms outperform the conventional algorithm, hence necessitating a reconfiguration of the network in order to maintain the optimal quality of service.
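For the omni-directional baseline, a centroidal Voronoi configuration can be approximated by Lloyd's algorithm run on a uniform sample of the environment (essentially k-means); the sketch below assumes a unit-square environment and does not implement the DI/DD-constrained algorithms proposed in the paper.

```python
import numpy as np

def lloyd(points, m, iters=50, rng=None):
    """Approximate a centroidal Voronoi configuration of m robots over sampled points."""
    if rng is None:
        rng = np.random.default_rng(0)
    robots = points[rng.choice(len(points), m, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - robots[None, :, :], axis=2)
        nearest = d.argmin(axis=1)                 # Voronoi assignment of sample points to robots
        for j in range(m):
            cell = points[nearest == j]
            if len(cell):
                robots[j] = cell.mean(axis=0)      # move each robot to its cell's centroid
    d = np.linalg.norm(points[:, None, :] - robots[None, :, :], axis=2)
    cost = d.min(axis=1).mean()                    # average distance to the closest robot
    return robots, cost

rng = np.random.default_rng(4)
env = rng.uniform(0, 1, size=(5000, 2))            # uniform samples of a unit-square environment
robots, cost = lloyd(env, m=10, rng=rng)
print(cost)
```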