Title: Serializing the Parallelism in Parallel Communicating Pushdown Automata Systems
|
Abstract: We consider parallel communicating pushdown automata systems (PCPA) and define a property called known communication for them. We use this property to prove that the power of a variant of PCPA, called returning centralized parallel communicating pushdown automata (RCPCPA), is equivalent to that of multi-head pushdown automata. Building on this result, we present a new sub-class of returning parallel communicating pushdown automata systems (RPCPA), called simple-RPCPA, and show that it can be written as a finite intersection of multi-head pushdown automata systems.
|
Title: Computational methods for Bayesian model choice
|
Abstract: In this note, we briefly survey some recent approaches to the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
|
Title: On Classification from Outlier View
|
Abstract: Classification is the basis of cognition. Unlike other solutions, this study approaches it from the viewpoint of outliers. We present an expanding algorithm to detect outliers in univariate datasets, together with its underlying foundation. The expanding algorithm operates in a holistic way, making it a robust solution. Experiments on synthetic and real data demonstrate its power. Furthermore, an application to multi-class problems leads to the introduction of the oscillator algorithm. The corresponding results suggest the potential for wide use of the expanding algorithm.
|
Title: Hilbert space embeddings and metrics on probability measures
|
Abstract: A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseudometric on the space of probability measures can be defined as the distance between distribution embeddings: we denote this as $\gamma_k$, indexed by the kernel function $k$ that defines the inner product in the RKHS. We present three theoretical properties of $\gamma_k$. First, we consider the question of determining the conditions on the kernel $k$ for which $\gamma_k$ is a metric: such $k$ are denoted \emph{characteristic} kernels. Unlike pseudometrics, a metric is zero only when two distributions coincide, thus ensuring the RKHS embedding maps all distributions uniquely (i.e., the embedding is injective). While previously published conditions may apply only in restricted circumstances (e.g. on compact domains), and are difficult to check, our conditions are straightforward and intuitive: bounded continuous strictly positive definite kernels are characteristic. Alternatively, if a bounded continuous kernel is translation-invariant on $\mathbb{R}^d$, then it is characteristic if and only if the support of its Fourier transform is the entire $\mathbb{R}^d$. Second, we show that there exist distinct distributions that are arbitrarily close in $\gamma_k$. Third, to understand the nature of the topology induced by $\gamma_k$, we relate $\gamma_k$ to other popular metrics on probability measures, and present conditions on the kernel $k$ under which $\gamma_k$ metrizes the weak topology.
|
Title: Multiple pattern classification by sparse subspace decomposition
|
Abstract: A robust classification method is developed on the basis of sparse subspace decomposition. This method tries to decompose a mixture of subspaces of unlabeled data (queries) into as few class subspaces as possible. Each query is classified into the class whose subspace contributes most significantly to the decomposed subspace. Multiple queries from different classes can be simultaneously classified into their respective classes. A practical greedy algorithm for the sparse subspace decomposition is designed for the classification. The present method achieves a high recognition rate and robust performance by exploiting joint sparsity.
|
Title: Monotonicity properties of the asymptotic relative efficiency between common correlation statistics in the bivariate normal model
|
Abstract: Pearson's correlation coefficient is the most common correlation statistic, used mainly in parametric settings. The most common nonparametric correlation statistics are Spearman's rho and Kendall's tau. We show that for bivariate normal i.i.d. samples, the pairwise asymptotic relative efficiency between these three statistics depends monotonically on the population correlation coefficient. This monotonicity is a corollary of a stronger result. The proofs rely on the use of l'Hospital-type rules for monotonicity patterns.
|
Title: How the initialization affects the stability of the k-means algorithm
|
Abstract: We investigate the role of the initialization for the stability of the k-means clustering algorithm. As opposed to other papers, we consider the actual k-means algorithm and do not ignore its property of getting stuck in local optima. We are interested in the actual clustering, not only in the costs of the solution. We analyze when different initializations lead to the same local optimum, and when they lead to different local optima. This enables us to prove that it is reasonable to select the number of clusters based on stability scores.
|
Title: Convergence of Expected Utility for Universal AI
|
Abstract: We consider a sequence of repeated interactions between an agent and an environment. Uncertainty about the environment is captured by a probability distribution over a space of hypotheses, which includes all computable functions. Given a utility function, we can evaluate the expected utility of any computational policy for interaction with the environment. After making some plausible assumptions (and maybe one not-so-plausible assumption), we show that if the utility function is unbounded, then the expected utility of any policy is undefined.
|
Title: Online Learning for Matrix Factorization and Sparse Coding
|
Abstract: Sparse coding--that is, modelling data vectors as sparse linear combinations of basis elements--is widely used in machine learning, neuroscience, signal processing, and statistics. This paper focuses on the large-scale matrix factorization problem that consists of learning the basis set, adapting it to specific data. Variations of this problem include dictionary learning in signal processing, non-negative matrix factorization and sparse principal component analysis. In this paper, we propose to address these tasks with a new online optimization algorithm, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems. A proof of convergence is presented, along with experiments with natural images and genomic data demonstrating that it leads to state-of-the-art performance in terms of speed and optimization for both small and large datasets.
|
Title: Making statistical methods in management research more useful: some suggestions from a case study
|
Abstract: I present a critique of the methods used in a typical paper. This leads to three broad conclusions about the conventional use of statistical methods. First, results are often reported in an unnecessarily obscure manner. Second, the null hypothesis testing paradigm is deeply flawed: estimating the size of effects and citing confidence intervals or levels is usually better. Third, there are several issues, independent of the particular statistical concepts employed, which limit the value of any statistical approach: e.g. difficulties of generalizing to different contexts, and the weakness of some research in terms of the size of the effects found. The first two of these are easily remedied: I illustrate some of the possibilities by re-analyzing the data from the case study article. The third means that in some contexts a statistical approach may not be worthwhile. My case study is a management paper, but similar problems arise in other social sciences. Keywords: Confidence, Hypothesis testing, Null hypothesis significance tests, Philosophy of statistics, Statistical methods, User-friendliness.
|
Title: Knowledge Discovery of Hydrocyclone's Circuit Based on SONFIS and SORST
|
Abstract: This study describes the application of some approximate reasoning methods to the analysis of hydrocyclone performance. Combining a Self Organizing Map (SOM) with a Neuro-Fuzzy Inference System (NFIS), giving SONFIS, and with Rough Set Theory (RST), giving SORST, crisp and fuzzy granules are obtained. The balancing of crisp and non-crisp granules can be implemented in a close-open iteration. Using different criteria, and based on the granulation level, a balance point (interval) or a pseudo-balance point is estimated. The proposed methods are validated on the hydrocyclone data set.
|
Title: A Class of DSm Conditional Rules
|
Abstract: In this paper we introduce two new DSm fusion conditioning rules, illustrated with an example, and, as a generalization of them, a class of DSm fusion conditioning rules; we then extend these to a class of DSm conditioning rules.
|
Title: Empirical assessment of the impact of highway design exceptions on the frequency and severity of vehicle accidents
|
Abstract: Compliance with standardized highway design criteria is considered essential to ensure roadway safety. However, for a variety of reasons, situations arise where exceptions to standard design criteria are requested and accepted after review. This research explores the impact that design exceptions have on accident severity and accident frequency in Indiana. Data on accidents at roadway sites with and without design exceptions are used to estimate appropriate statistical models for the frequency and severity of accidents at these sites, using some of the most recent statistical advances with mixing distributions. The results of the modeling process show that the presence of approved design exceptions has not had a statistically significant effect on the average frequency or severity of accidents, suggesting that current procedures for granting design exceptions have been sufficiently rigorous to avoid adverse safety impacts.
|
Title: FPGA-based Controller for a Mobile Robot
|
Abstract: With applications in robotics and automation, it becomes increasingly necessary to develop systems using methodologies that facilitate future modifications, updates, and enhancements of the originally designed system. This project presents a conception of mobile robots using rapid prototyping, distributing the several control actions in growing levels of complexity, with a computing proposal oriented to embedded systems implementation. This kind of controller can be tested on different platforms representing the mobile robot, using reprogrammable logic components (FPGA). The mobile robot will detect obstacles and also be able to control its speed. The different modules (actuators, sensors, and wireless transmission) will all be interfaced through the FPGA controller. The goal is to construct a mechanically simple robot model that can measure the distance to an obstacle with the aid of a sensor and accordingly control the motor speed.
|
Title: Geometry of diagonal-effect models for contingency tables
|
Abstract: In this work we study several types of diagonal-effect models for two-way contingency tables in the framework of Algebraic Statistics. We use both toric models and mixture models to encode the different behavior of the diagonal cells. We compute the invariants of these models and we explore their geometrical structure.
|
Title: Regret Bounds for Opportunistic Channel Access
|
Abstract: We consider the task of opportunistic channel access in a primary system composed of independent Gilbert-Elliot channels, where the secondary (or opportunistic) user has no a priori information regarding the statistical characteristics of the system. It is shown that this problem may be cast into the framework of model-based learning in a specific class of Partially Observed Markov Decision Processes (POMDPs), for which we introduce an algorithm aimed at striking an optimal tradeoff between the exploration (or estimation) and exploitation requirements. We provide finite-horizon regret bounds for this algorithm, as well as a numerical evaluation of its performance in the single-channel model and in the case of stochastically identical channels.
|
Title: A Reflection on the Structure and Process of the Web of Data
|
Abstract: The Web community has introduced a set of standards and technologies for representing, querying, and manipulating a globally distributed data structure known as the Web of Data. The proponents of the Web of Data envision much of the world's data being interrelated and openly accessible to the general public. This vision is analogous in many ways to the familiar Web of Documents, but instead of making documents and media openly accessible, the focus is on making data openly accessible. The provision of data for public use has stimulated interest in a movement dubbed Open Data. Open Data is analogous in many ways to the Open Source movement. However, instead of focusing on software, Open Data is focused on the legal and licensing issues around publicly exposed data. Together, various technological and legal tools are laying the groundwork for the future of global-scale data management on the Web. As of today, in its early form, the Web of Data hosts a variety of data sets that include encyclopedic facts, drug and protein data, metadata on music, books and scholarly articles, social network representations, geospatial information, and many other types of information. The size and diversity of the Web of Data demonstrate the flexibility of the underlying standards and the overall feasibility of the project as a whole. The purpose of this article is to provide a review of the technological underpinnings of the Web of Data, as well as some of the hurdles that need to be overcome if the Web of Data is to emerge as the de facto medium for data representation, distribution, and ultimately, processing.
|
Title: Byzantine Convergence in Robots Networks: The Price of Asynchrony
|
Abstract: We study the convergence problem in fully asynchronous, uni-dimensional robot networks that are prone to Byzantine (i.e. malicious) failures. In these settings, oblivious anonymous robots with arbitrary initial positions are required to eventually converge to an a priori unknown position despite a subset of them exhibiting Byzantine behavior. Our contribution is twofold. We propose a deterministic algorithm that solves the problem in the most generic settings: fully asynchronous robots that operate in the non-atomic CORDA model. Our algorithm provides convergence in networks of size 5f+1, where f is the upper bound on the number of Byzantine robots. Additionally, we prove that 5f+1 is a lower bound whenever robot scheduling is fully asynchronous. This contrasts with previous results in partially synchronous robot networks, where 3f+1 robots are necessary and sufficient.
|
Title: Nonlinear Principal Components and Long-run Implications of Multivariate Diffusions
|
Abstract: We investigate a method for extracting nonlinear principal components (NPCs). These NPCs maximize variation subject to smoothness and orthogonality constraints; but we allow for a general class of constraints and multivariate probability densities, including densities without compact support and even densities with algebraic tails. We provide primitive sufficient conditions for the existence of these NPCs. By exploiting the theory of continuous-time, reversible Markov diffusion processes, we give a different interpretation of these NPCs and the smoothness constraints. When the diffusion matrix is used to enforce smoothness, the NPCs maximize long-run variation relative to the overall variation subject to orthogonality constraints. Moreover, the NPCs behave as scalar autoregressions with heteroskedastic innovations; this supports semiparametric identification and estimation of a multivariate reversible diffusion process and tests of the overidentifying restrictions implied by such a process from low frequency data. We also explore implications for stationary, possibly non-reversible diffusion processes. Finally, we suggest a sieve method to estimate the NPCs from discretely-sampled data.
|
Title: The Infinite Hierarchical Factor Regression Model
|
Abstract: We propose a nonparametric Bayesian factor regression model that accounts for uncertainty in the number of factors, and the relationship between factors. To accomplish this, we propose a sparse variant of the Indian Buffet Process and couple this with a hierarchical model over factors, based on Kingman's coalescent. We apply this model to two problems (factor analysis and factor regression) in gene-expression data analysis.
|
Title: Streamed Learning: One-Pass SVMs
|
Abstract: We present a streaming model for large-scale classification (in the context of $\ell_2$-SVM) by leveraging connections between learning and computational geometry. The streaming model imposes the constraint that only a single pass over the data is allowed. The $\ell_2$-SVM is known to have an equivalent formulation in terms of the minimum enclosing ball (MEB) problem, and an efficient algorithm based on this idea exists (Core Vector Machine, CVM). CVM learns a $(1+\varepsilon)$-approximate MEB for a set of points and yields an approximate solution to the corresponding SVM instance. However, CVM works in batch mode, requiring multiple passes over the data. This paper presents a single-pass SVM which is based on the minimum enclosing ball of streaming data. We show that the MEB updates for the streaming case can be easily adapted to learn the SVM weight vector in a way similar to using online stochastic gradient updates. Our algorithm performs polylogarithmic computation at each example, and requires very small and constant storage. Experimental results show that, even in such restrictive settings, we can learn efficiently in just one pass and get accuracies comparable to other state-of-the-art SVM solvers (batch and online). We also give an analysis of the algorithm, and discuss some open issues and possible extensions.
|
Title: Functional Partial Linear Model
|
Abstract: When predicting scalar responses in the situation where the explanatory variables are functions, it is sometimes the case that some functional variables are related to responses linearly while other variables have more complicated relationships with the responses. In this paper, we propose a new semi-parametric model to take advantage of both parametric and nonparametric functional modeling. Asymptotic properties of the proposed estimators are established and finite sample behavior is investigated through a small simulation experiment.
|
Title: Online Learning of Assignments that Maximize Submodular Functions
|
Abstract: Which ads should we display in sponsored search in order to maximize our revenue? How should we dynamically rank information sources to maximize value of information? These applications exhibit strong diminishing returns: Selection of redundant ads and information sources decreases their marginal utility. We show that these and other problems can be formalized as repeatedly selecting an assignment of items to positions to maximize a sequence of monotone submodular functions that arrive one by one. We present an efficient algorithm for this general problem and analyze it in the no-regret model. Our algorithm possesses strong theoretical guarantees, such as a performance ratio that converges to the optimal constant of 1-1/e. We empirically evaluate our algorithm on two real-world online optimization problems on the web: ad allocation with submodular utilities, and dynamically ranking blogs to detect information cascades.
|
Title: Clustering for Improved Learning in Maze Traversal Problem
|
Abstract: The maze traversal problem (finding the shortest distance to the goal from any position in a maze) has been an interesting challenge in computational intelligence. Recent work has shown that the cellular simultaneous recurrent neural network (CSRN) can solve this problem for simple mazes. This thesis focuses on exploiting relevant information about the maze to improve learning and decrease the training time for the CSRN to solve mazes. Appropriate variables are identified to create useful clusters using relevant information. The CSRN was next modified to allow for an additional external input. With this additional input, several methods were tested and results show that clustering the mazes improves the overall learning of the traversal problem for the CSRN.
|
Title: An Application of Bayesian classification to Interval Encoded Temporal mining with prioritized items
|
Abstract: In real life, media information has time attributes, either implicit or explicit, known as temporal data. This paper investigates the usefulness of applying Bayesian classification to an interval-encoded temporal database with prioritized items. The proposed method performs temporal mining by encoding the database with weighted items, which prioritizes the items according to their importance from the user's perspective. Naive Bayesian classification helps make the resulting temporal rules more effective. The proposed priority-based temporal mining (PBTM) method, combined with classification, aids in solving problems in a well-informed and systematic manner. The experimental results, obtained from the complaints database of a telecommunications system, show the feasibility of this method of classification-based temporal mining.
|
Title: Side-channel attack on labeling CAPTCHAs
|
Abstract: We propose a new scheme of attack on Microsoft's ASIRRA CAPTCHA, which represents a significant shortcut to the intended attacking path, as it is not based on any advance in the state of the art in the field of image recognition. After studying the ASIRRA Public Corpus, we conclude that the security margin as stated by its authors seems to be quite optimistic. Then, we analyze which of the studied parameters of the image files seems to disclose the most valuable information for helping in correct classification, arriving at a surprising discovery. This represents a completely new approach to breaking CAPTCHAs that can be applied to many of the currently proposed image-labeling algorithms, and to prove this point we show how to use the very same approach against the HumanAuth CAPTCHA. Lastly, we investigate some measures that could be used to secure the ASIRRA and HumanAuth schemes, but conclude no easy solutions are at hand.
|
Title: Discrete Temporal Models of Social Networks
|
Abstract: We propose a family of statistical models for social network evolution over time, which represents an extension of Exponential Random Graph Models (ERGMs). Many of the methods for ERGMs are readily adapted for these models, including maximum likelihood estimation algorithms. We discuss models of this type and their properties, and give examples, as well as a demonstration of their use for hypothesis testing and classification. We believe our temporal ERG models represent a useful new framework for modeling time-evolving social networks, and rewiring networks from other domains such as gene regulation circuitry, and communication networks.
|
Title: Segmentation for radar images based on active contour
|
Abstract: We examine various geometric active contour methods for radar image segmentation. Owing to the special properties of radar images, we propose a new model based on a modified Chan-Vese functional. Our method is efficient in separating non-meteorological noise from meteorological images.
|
Title: Study of the Nonequilibrium Critical Quenching and Annealing Dynamics for the Long-Range Ising Model
|
Abstract: Extensive Monte Carlo simulations are employed in order to study the dynamic critical behavior of the one-dimensional Ising magnet, with algebraically decaying long-range interactions of the form $r^{-(d+\sigma)}$, with $\sigma=0.75$. The critical temperature, as well as the critical exponents, is evaluated from the power-law behavior of suitable physical observables when the system is quenched from uncorrelated states, corresponding to infinite temperature, to the critical point. These results are compared with those obtained from the dynamic evolution of the system when it is suddenly annealed at the critical point from the ordered state. Also, the critical temperature in the infinite interaction limit is obtained by means of a finite-range scaling analysis of data measured with different cutoffs of the interaction range. All the estimated static critical exponents ($\gamma/\nu$, $\beta/\nu$, and $1/\nu$) are in good agreement with Renormalization Group (RG) predictions and previously reported numerical data obtained under equilibrium conditions. It is found that the dynamic exponent $z$ is different for quenching and annealing experiments, most likely due to the influence of the Kosterlitz-Thouless transition occurring at the closely related algebraic decay of the interactions with $\sigma=1$. However, for annealing experiments the measured exponent $z$ is close to the RG predictions. On the other hand, the relevant exponents of the dynamic behavior ($z$ and $\theta$) are slightly different from the RG predictions, most likely because they may depend on the specific dynamics used (Metropolis in the present paper).
|
Title: Approximating the Permanent with Belief Propagation
|
Abstract: This work describes a method of approximating matrix permanents efficiently using belief propagation. We formulate a probability distribution whose partition function is exactly the permanent, then use the Bethe free energy to approximate this partition function. After deriving some speedups to standard belief propagation, the resulting algorithm requires $O(n^2)$ time per iteration. Finally, we demonstrate the advantages of using this approximation.
|
Title: A dyadic solution of relative pose problems
|
Abstract: A hierarchical interval subdivision is shown to lead to a $p$-adic encoding of image data. In the case of the relative pose problem in computer vision and photogrammetry, this allows one to derive equations having 2-adic numbers as coefficients, and to use Hensel's lifting method for their solution. This method is applied to the linear and non-linear equations coming from eight-, seven-, or five-point correspondences. An inherent property of the method is its robustness.
|
Title: Simultaneous confidence bands for nonparametric regression with functional data
|
Abstract: We consider nonparametric regression in the context of functional data, that is, when a random sample of functions is observed on a fine grid. We obtain a functional asymptotic normality result that allows us to build simultaneous confidence bands (SCB) for various estimation and inference tasks. Two applications are proposed: an SCB procedure for the regression function, and a goodness-of-fit test for curvilinear regression models. The first has improved accuracy over the other available methods, while the second can detect local departures from a parametric shape, as opposed to the usual goodness-of-fit tests, which only track global departures. A numerical study of the SCB procedures and an illustration with a speech data set are provided.
|
Title: View-based Propagator Derivation
|
Abstract: When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear constraints both with unit and non-unit coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance than resorting to constraint decomposition. This paper shows how to use views to derive perfect propagator variants. A model for views and derived propagators is introduced. Derived propagators are proved to be indeed perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and type conversion are developed. The paper introduces an implementation architecture for views that is independent of the underlying constraint programming system. A detailed evaluation of views implemented in Gecode shows that derived propagators are efficient and that views often incur no overhead. Without views, Gecode would either require 180 000 rather than 40 000 lines of propagator code, or would lack many efficient propagator variants. Compared to 8 000 lines of code for views, the reduction in code for propagators yields a 1750% return on investment.
|
Title: Statistical inference for stochastic epidemic models with three levels of mixing
|