Columns: id (string, 9-16 chars), title (string, 4-278 chars), categories (string, 5-104 chars), abstract (string, 6-4.09k chars)
math/0509325
On $Z_{2^k}$-Dual Binary Codes
math.CO cs.IT math.IT
A new generalization of the Gray map is introduced. The new generalization $\Phi: Z_{2^k}^n \to Z_{2}^{2^{k-1}n}$ is connected with the known generalized Gray map $\phi$ in the following way: if we take two dual linear $Z_{2^k}$-codes and construct binary codes from them using the generalizations $\phi$ and $\Phi$ of the Gray map, then the weight enumerators of the binary codes obtained will satisfy the MacWilliams identity. The classes of $Z_{2^k}$-linear Hadamard codes and co-$Z_{2^k}$-linear extended 1-perfect codes are described, where co-$Z_{2^k}$-linearity means that the code can be obtained from a linear $Z_{2^k}$-code with the help of the new generalized Gray map. Keywords: Gray map, Hadamard codes, MacWilliams identity, perfect codes, $Z_{2^k}$-linearity
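The classical $k=2$ case of the maps discussed above is easy to sketch. Below is a minimal illustration (not the paper's new map $\Phi$): the standard Gray map on $Z_4$ together with the Lee weight, showing the isometry property that underlies MacWilliams-type results. The example word is our own.

```python
# Classical Gray map on Z_4: 0 -> 00, 1 -> 01, 2 -> 11, 3 -> 10.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def phi(word):
    """Map a Z_4 word to a binary word of twice the length."""
    return tuple(b for s in word for b in GRAY[s])

def lee_weight(word):
    """Lee weight on Z_4: w(0)=0, w(1)=w(3)=1, w(2)=2."""
    return sum(min(s, 4 - s) for s in word)

word = (0, 1, 2, 3)
image = phi(word)
# The Gray map is an isometry: Lee weight equals Hamming weight of the image.
assert lee_weight(word) == sum(image)
```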
math/0509358
On decomposability of 4-ary distance 2 MDS codes, double-codes, and n-quasigroups of order 4
math.CO cs.IT math.IT
A subset $S$ of $\{0,1,...,2t-1\}^n$ is called a $t$-fold MDS code if every line in each of $n$ base directions contains exactly $t$ elements of $S$. The adjacency graph of a $t$-fold MDS code is not connected if and only if the characteristic function of the code is the repetition-free sum of the characteristic functions of $t$-fold MDS codes of smaller lengths. In the case $t=2$, the theory has the following application. The union of two disjoint $(n,4^{n-1},2)$ MDS codes in $\{0,1,2,3\}^n$ is a double-MDS-code. If the adjacency graph of the double-MDS-code is not connected, then the double-code can be decomposed into double-MDS-codes of smaller lengths. If the graph has more than two connected components, then the MDS codes are also decomposable. The result has an interpretation as a test for reducibility of $n$-quasigroups of order 4. Keywords: MDS codes, n-quasigroups, decomposability, reducibility, frequency hypercubes, latin hypercubes
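The defining property of a $t$-fold MDS code (every line in each base direction meets the set in exactly $t$ points) is directly checkable by brute force for small parameters. A minimal sketch, with example codes of our own choosing:

```python
from itertools import product

def is_t_fold_mds(S, t, n):
    """Check that every line in each of the n base directions contains
    exactly t points of S, over the alphabet {0, ..., 2t-1}."""
    q = 2 * t
    S = set(S)
    for i in range(n):
        for rest in product(range(q), repeat=n - 1):
            line = [rest[:i] + (v,) + rest[i:] for v in range(q)]
            if sum(p in S for p in line) != t:
                return False
    return True

# A 1-fold MDS code of length 2 over {0,1}: the support of a permutation.
assert is_t_fold_mds({(0, 1), (1, 0)}, t=1, n=2)
# For t=2 over {0,1,2,3}, the "coordinate sum is even" set is 2-fold MDS:
S = {p for p in product(range(4), repeat=2) if sum(p) % 2 == 0}
assert is_t_fold_mds(S, t=2, n=2)
```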
math/0509575
Evolutionary Trees and the Ising Model on the Bethe Lattice: a Proof of Steel's Conjecture
math.PR cs.CE cs.DS math.CA math.CO math.ST q-bio.PE stat.TH
A major task of evolutionary biology is the reconstruction of phylogenetic trees from molecular data. The evolutionary model is given by a Markov chain on a tree. Given samples from the leaves of the Markov chain, the goal is to reconstruct the leaf-labelled tree. It is well known that in order to reconstruct a tree on $n$ leaves, sample sequences of length $\Omega(\log n)$ are needed. It was conjectured by M. Steel that for the CFN/Ising evolutionary model, if the mutation probability on all edges of the tree is less than $p^{\ast} = (\sqrt{2}-1)/2^{3/2}$, then the tree can be recovered from sequences of length $O(\log n)$. The value $p^{\ast}$ is given by the transition point for the extremality of the free Gibbs measure for the Ising model on the binary tree. Steel's conjecture was proven by the second author in the special case where the tree is "balanced." The second author also proved that if all edges have mutation probability larger than $p^{\ast}$ then the length needed is $n^{\Omega(1)}$. Here we show that Steel's conjecture holds true for general trees by giving a reconstruction algorithm that recovers the tree from $O(\log n)$-length sequences when the mutation probabilities are discretized and less than $p^\ast$. Our proof and results demonstrate that extremality of the free Gibbs measure on the infinite binary tree, which has been studied before in probability, statistical physics and computer science, determines how distinguishable are Gibbs measures on finite binary trees.
math/0509620
On diameter perfect constant-weight ternary codes
math.CO cs.IT math.IT
From cosets of binary Hamming codes we construct diameter perfect constant-weight ternary codes with weight $n-1$ (where $n$ is the code length) and distances 3 and 5. The class of distance 5 codes has parameters unknown before. Keywords: constant-weight codes, ternary codes, perfect codes, diameter perfect codes, perfect matchings, Preparata codes
math/0510276
An algorithmic and a geometric characterization of Coarsening At Random
math.ST cs.AI stat.ME stat.TH
We show that the class of conditional distributions satisfying the Coarsening at Random (CAR) property for discrete data has a simple and robust algorithmic description based on randomized uniform multicovers: combinatorial objects generalizing the notion of partition of a set. However, the complexity of a given CAR mechanism can be large: the maximal "height" of the needed multicovers can be exponential in the number of points in the sample space. The results stem from a geometric interpretation of the set of CAR distributions as a convex polytope and a characterization of its extreme points. The hierarchy of CAR models defined in this way could be useful in parsimonious statistical modelling of CAR mechanisms, though the results also raise doubts in applied work as to the meaningfulness of the CAR assumption in its full generality.
math/0510521
On surrogate loss functions and $f$-divergences
math.ST cs.IT math.IT stat.TH
The goal of binary classification is to estimate a discriminant function $\gamma$ from observations of covariate vectors and corresponding binary labels. We consider an elaboration of this problem in which the covariates are not available directly but are transformed by a dimensionality-reducing quantizer $Q$. We present conditions on loss functions such that empirical risk minimization yields Bayes consistency when both the discriminant function and the quantizer are estimated. These conditions are stated in terms of a general correspondence between loss functions and a class of functionals known as Ali-Silvey or $f$-divergence functionals. Whereas this correspondence was established by Blackwell [Proc. 2nd Berkeley Symp. Probab. Statist. 1 (1951) 93--102. Univ. California Press, Berkeley] for the 0--1 loss, we extend the correspondence to the broader class of surrogate loss functions that play a key role in the general theory of Bayes consistency for binary classification. Our result makes it possible to pick out the (strict) subset of surrogate loss functions that yield Bayes consistency for joint estimation of the discriminant function and the quantizer.
math/0512263
Metric and probabilistic information associated with Fredholm integral equations of the first kind
math.CA cs.IT math.IT
The problem of evaluating the information associated with Fredholm integral equations of the first kind, when the integral operator is self-adjoint and compact, is considered here. The data function is assumed to be slightly perturbed by additive noise, so that it still belongs to the range of the operator. First we derive upper and lower bounds for the $\epsilon$-capacity (and hence for the metric information), with explicit computations in some specific cases; then the problem is reformulated from a probabilistic viewpoint, using probabilistic information theory. The results obtained by these two approaches are then compared.
math/0602070
The Matrix-Forest Theorem and Measuring Relations in Small Social Groups
math.CO cs.IR math.AG
We propose a family of graph structural indices related to the Matrix-forest theorem. The properties of the basic index that expresses the mutual connectivity of two vertices are studied in detail. The derivative indices that measure "dissociation," "solitariness," and "provinciality" of vertices are also considered. A nonstandard metric on the set of vertices is introduced, which is determined by their connectivity. The application of these indices in sociometry is discussed.
math/0602171
Preference fusion when the number of alternatives exceeds two: indirect scoring procedures
math.OC cs.MA math.CO
We consider the problem of aggregation of incomplete preferences represented by arbitrary binary relations or incomplete paired comparison matrices. For a number of indirect scoring procedures we examine whether or not they satisfy the axiom of self-consistent monotonicity. The class of {\em win-loss combining scoring procedures} is introduced, which contains a majority of known scoring procedures. Two main results are established. According to the first one, every win-loss combining scoring procedure breaks self-consistent monotonicity. The second result provides a sufficient condition for satisfying self-consistent monotonicity.
math/0602505
MDL Convergence Speed for Bernoulli Sequences
math.ST cs.IT cs.LG math.IT math.PR stat.TH
The Minimum Description Length principle for online sequence estimation/prediction in a proper learning setup is studied. If the underlying model class is discrete, then the total expected square loss is a particularly interesting performance measure: (a) this quantity is finitely bounded, implying convergence with probability one, and (b) it additionally specifies the convergence speed. For MDL, in general one can only have loss bounds which are finite but exponentially larger than those for Bayes mixtures. We show that this is even the case if the model class contains only Bernoulli distributions. We derive a new upper bound on the prediction error for countable Bernoulli classes. This implies a small bound (comparable to the one for Bayes mixtures) for certain important model classes. We discuss the application to Machine Learning tasks such as classification and hypothesis testing, and generalization to countable classes of i.i.d. models.
math/0602522
Characterizations of scoring methods for preference aggregation
math.OC cs.MA math.FA
The paper surveys more than forty characterizations of scoring methods for preference aggregation and contains one new result. A general scoring operator is {\it self-consistent} if alternative $i$ is assigned a greater score than $j$ whenever $i$ gets no worse (better) results of comparisons and its `opponents' are assigned respectively greater (no smaller) scores than those of $j$. We prove that self-consistency is satisfied if and only if the application of a scoring operator reduces to the solution of a homogeneous system of algebraic equations with a monotone function on the left-hand side.
math/0602552
From Incomplete Preferences to Ranking via Optimization
math.OC cs.MA math.CO
We consider methods for aggregating preferences that are based on the resolution of discrete optimization problems. The preferences are represented by arbitrary binary relations (possibly weighted) or incomplete paired comparison matrices. This incomplete case has so far remained practically unexplored. We examine the properties of several known methods and propose one new method. In particular, we test whether these methods obey a new axiom referred to as {\it Self-Consistent Monotonicity}. Some results are established that characterize solutions of the related optimization problems.
math/0603155
Towards model-free multivariable control
math.OC cs.CE cs.RO physics.class-ph
A control strategy without any precise mathematical model is derived for linear or nonlinear systems which are assumed to be finite-dimensional. Two convincing numerical simulations are provided.
math/0604233
Generalization error bounds in semi-supervised classification under the cluster assumption
math.ST cs.LG stat.TH
We consider semi-supervised classification when part of the available data is unlabeled. These unlabeled data can be useful for the classification problem when we make an assumption relating the behavior of the regression function to that of the marginal distribution. Seeger (2000) proposed the well-known "cluster assumption" as a reasonable one. We propose a mathematical formulation of this assumption and a method based on density level sets estimation that takes advantage of it to achieve fast rates of convergence both in the number of unlabeled examples and the number of labeled examples.
math/0605498
Cross-Entropic Learning of a Machine for the Decision in a Partially Observable Universe
math.OC cs.AI cs.LG cs.NE cs.RO math.ST stat.TH
Revision of the paper previously entitled "Learning a Machine for the Decision in a Partially Observable Markov Universe". In this paper, we are interested in optimal decisions in a partially observable universe. Our approach is to directly approximate an optimal strategic tree depending on the observation. This approximation is made by means of a parameterized probabilistic law. A particular family of hidden Markov models, with input \emph{and} output, is considered as a model of policy. A method for optimizing the parameters of these HMMs is proposed and applied. This optimization is based on the cross-entropic principle for rare-event simulation developed by Rubinstein.
math/0605740
Sharp thresholds for high-dimensional and noisy recovery of sparsity
math.ST cs.IT math.IT stat.TH
The problem of consistently estimating the sparsity pattern of a vector $\beta^* \in \mathbb{R}^m$ based on observations contaminated by noise arises in various contexts, including subset selection in regression, structure estimation in graphical models, sparse approximation, and signal denoising. We analyze the behavior of $\ell_1$-constrained quadratic programming (QP), also referred to as the Lasso, for recovering the sparsity pattern. Our main result establishes a sharp relation between the problem dimension $m$, the number $k$ of non-zero elements in $\beta^*$, and the number of observations $n$ required for reliable recovery. For a broad class of Gaussian ensembles satisfying mutual incoherence conditions, we establish the existence of, and compute explicit values for, thresholds $\theta_\ell$ and $\theta_u$ with the following properties: for any $\epsilon > 0$, if $n > 2(\theta_u + \epsilon)\log(m - k) + k + 1$, then the Lasso succeeds in recovering the sparsity pattern with probability converging to one for large problems, whereas for $n < 2(\theta_\ell - \epsilon)\log(m - k) + k + 1$ the probability of successful recovery converges to zero. For the special case of the uniform Gaussian ensemble, we show that $\theta_\ell = \theta_u = 1$, so that the threshold is sharp and exactly determined.
math/0606315
Bayesian Regression of Piecewise Constant Functions
math.ST cs.LG math.PR stat.TH
We derive an exact and efficient Bayesian regression algorithm for piecewise constant functions of unknown segment number, boundary location, and levels. It works for any noise and segment level prior, e.g. Cauchy which can handle outliers. We derive simple but good estimates for the in-segment variance. We also propose a Bayesian regression curve as a better way of smoothing data without blurring boundaries. The Bayesian approach also allows straightforward determination of the evidence, break probabilities and error estimates, useful for model selection and significance and robustness studies. We discuss the performance on synthetic and real-world examples. Many possible extensions will be discussed.
math/0606643
Entropy And Vision
math.PR cs.CV cs.DB cs.DM cs.LG math.CO
In vector quantization, choosing the number of vectors used to construct the codebook is an open problem: there is always a compromise between the number of vectors and the quantity of information lost during compression. In this text we present a minimum-entropy principle that resolves this compromise and offers an entropy point of view on signal compression in general. We also present a new adaptive object-quantization technique that is the same for compression and for perception.
math/0606734
Codes in spherical caps
math.MG cs.IT math.IT
We consider bounds on codes in spherical caps and related problems in geometry and coding theory. An extension of the Delsarte method is presented that relates upper bounds on the size of spherical codes to upper bounds on codes in caps. Several new upper bounds on codes in caps are derived. Applications of these bounds to estimates of the kissing numbers and one-sided kissing numbers are considered. It is proved that the maximum size of codes in spherical caps for large dimensions is determined by the maximum size of spherical codes, so these problems are asymptotically equivalent.
math/0607243
An active curve approach for tomographic reconstruction of binary radially symmetric objects
math.OC cs.CV
This paper deals with a method of tomographic reconstruction of radially symmetric objects from a single radiograph, in order to study the behavior of shocked material. The usual tomographic reconstruction algorithms, such as generalized inverse or filtered back-projection, cannot be applied here because the data are very noisy and the inverse problem associated with single-view tomographic reconstruction is highly unstable. In order to improve the reconstruction, we propose to add some a priori assumptions on the sought object. One of these assumptions is that the object is binary, so that it may be described by the curves that separate the two materials. We present a model set in BV space that leads, via a level-set strategy, to a nonlocal Hamilton-Jacobi equation. Numerical experiments are performed (using level-set methods) on synthetic objects.
math/0607507
In-Degree and PageRank of Web pages: Why do they follow similar power laws?
math.PR cs.IR
PageRank is a popularity measure designed by Google to rank Web pages. Experiments confirm that PageRank obeys a `power law' with the same exponent as the in-degree. This paper presents a novel mathematical model that explains this phenomenon. The relation between PageRank and in-degree is modelled through a stochastic equation, which is inspired by the original definition of PageRank, and is analogous to the well-known distributional identity for the busy period in the M/G/1 queue. Further, we employ the theory of regular variation and Tauberian theorems to prove analytically that the tail behaviors of PageRank and in-degree differ only by a multiplicative factor, for which we derive a closed-form expression. Our analytical results are in good agreement with experimental data.
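The original definition of PageRank referred to above can be sketched as a damped power iteration. The graph below is a toy example of our own (a star: three pages linking to one hub), illustrating the qualitative point that a page with high in-degree also gets high PageRank.

```python
import numpy as np

def pagerank(adj, c=0.85, iters=100):
    """Power iteration for PageRank with damping factor c.
    adj[i][j] = 1 if there is an edge i -> j; dangling nodes jump uniformly."""
    n = len(adj)
    A = np.asarray(adj, dtype=float)
    out = A.sum(axis=1)
    # Row-stochastic transition matrix; dangling rows become uniform.
    P = np.where(out[:, None] > 0, A / np.maximum(out, 1)[:, None], 1.0 / n)
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - c) / n + c * (P.T @ r)
    return r

# Star graph: nodes 1..3 all link to node 0.
adj = [[0, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
r = pagerank(adj)
# The hub has both the highest in-degree and the highest PageRank.
assert r.argmax() == 0
```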
math/0607648
Singular Values and Eigenvalues of Tensors: A Variational Approach
math.SP cs.IR cs.NA math.NA math.OC
We propose a theory of eigenvalues, eigenvectors, singular values, and singular vectors for tensors based on a constrained variational approach much like the Rayleigh quotient for symmetric matrix eigenvalues. These notions are particularly useful in generalizing certain areas where the spectral theory of matrices has traditionally played an important role. For illustration, we will discuss a multilinear generalization of the Perron-Frobenius theorem.
math/0608522
Graph Laplacians and their convergence on random neighborhood graphs
math.ST cs.LG stat.TH
Given a sample from a probability measure with support on a submanifold in Euclidean space, one can construct a neighborhood graph which can be seen as an approximation of the submanifold. The graph Laplacian of such a graph is used in several machine learning methods like semi-supervised learning, dimensionality reduction and clustering. In this paper we determine the pointwise limit of three different graph Laplacians used in the literature as the sample size increases and the neighborhood size approaches zero. We show that for a uniform measure on the submanifold all graph Laplacians have the same limit up to constants. However, in the case of a non-uniform measure on the submanifold, only the so-called random walk graph Laplacian converges to the weighted Laplace-Beltrami operator.
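The three graph Laplacians compared in this abstract are standard constructions from a symmetric weight matrix $W$. A minimal sketch (the tiny example matrix is our own; we assume $W$ symmetric, nonnegative, with no isolated vertices):

```python
import numpy as np

def laplacians(W):
    """Return the unnormalized, random walk, and normalized symmetric
    graph Laplacians built from a symmetric weight matrix W."""
    d = W.sum(axis=1)
    L_un = np.diag(d) - W                          # unnormalized: D - W
    L_rw = np.eye(len(d)) - np.diag(1.0 / d) @ W   # random walk: I - D^{-1} W
    D_half = np.diag(d ** -0.5)
    L_sym = np.eye(len(d)) - D_half @ W @ D_half   # symmetric: I - D^{-1/2} W D^{-1/2}
    return L_un, L_rw, L_sym

W = np.array([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # path graph weights
L_un, L_rw, L_sym = laplacians(W)
# Constant functions are in the kernel of L_un and L_rw; sqrt(d) for L_sym.
ones = np.ones(3)
assert np.allclose(L_un @ ones, 0) and np.allclose(L_rw @ ones, 0)
assert np.allclose(L_sym @ np.sqrt(W.sum(axis=1)), 0)
```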
math/0608556
On optimal quantization rules for some problems in sequential decentralized detection
math.ST cs.IT math.IT stat.TH
We consider the design of systems for sequential decentralized detection, a problem that entails several interdependent choices: the choice of a stopping rule (specifying the sample size), a global decision function (a choice between two competing hypotheses), and a set of quantization rules (the local decisions on the basis of which the global decision is made). This paper addresses an open problem of whether in the Bayesian formulation of sequential decentralized detection, optimal local decision functions can be found within the class of stationary rules. We develop an asymptotic approximation to the optimal cost of stationary quantization rules and exploit this approximation to show that stationary quantizers are not optimal in a broad class of settings. We also consider the class of blockwise stationary quantizers, and show that asymptotically optimal quantizers are likelihood-based threshold rules.
math/0608571
Intensional Models for the Theory of Types
math.LO cs.AI
In this paper we define intensional models for the classical theory of types, thus arriving at an intensional type logic ITL. Intensional models generalize Henkin's general models and have a natural definition. As a class they do not validate the axiom of Extensionality. We give a cut-free sequent calculus for type theory and show completeness of this calculus with respect to the class of intensional models via a model existence theorem. After this we turn our attention to applications. Firstly, it is argued that, since ITL is truly intensional, it can be used to model ascriptions of propositional attitude without predicting logical omniscience. In order to illustrate this a small fragment of English is defined and provided with an ITL semantics. Secondly, it is shown that ITL models contain certain objects that can be identified with possible worlds. Essential elements of modal logic become available within classical type theory once the axiom of Extensionality is given up.
math/0608713
Occam's hammer: a link between randomized learning and multiple testing FDR control
math.ST cs.LG stat.TH
We establish a generic theoretical tool to construct probabilistic bounds for algorithms where the output is a subset of objects from an initial pool of candidates (or more generally, a probability distribution on said pool). This general device, dubbed "Occam's hammer'', acts as a meta layer when a probabilistic bound is already known on the objects of the pool taken individually, and aims at controlling the proportion of objects in the output set that do not satisfy their individual bound. In this regard, it can be seen as a non-trivial generalization of the "union bound with a prior'' ("Occam's razor''), a familiar tool in learning theory. We give applications of this principle to randomized classifiers (providing an interesting alternative approach to PAC-Bayes bounds) and multiple testing (where it makes it possible to recover exactly, and to extend, the so-called Benjamini-Yekutieli testing procedure).
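The Benjamini-Yekutieli testing procedure mentioned in the abstract is a standard step-up rule and is short to implement. A minimal sketch (the p-values are our own example data):

```python
def benjamini_yekutieli(pvals, q=0.05):
    """BY step-up procedure: reject the k smallest p-values, where k is the
    largest rank with p_(k) <= k * q / (m * c_m) and c_m = sum_{i=1}^m 1/i."""
    m = len(pvals)
    c_m = sum(1.0 / i for i in range(1, m + 1))
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / (m * c_m):
            k = rank
    return sorted(order[:k])  # indices of rejected hypotheses

pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.60]
rejected = benjamini_yekutieli(pvals, q=0.05)
```

The correction factor $c_m$ is what makes BY valid under arbitrary dependence, at the price of being more conservative than Benjamini-Hochberg.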
math/0609461
Cross-Entropy method: convergence issues for extended implementation
math.OC cs.LG cs.NE math.ST stat.TH
The cross-entropy (CE) method developed by R. Rubinstein is an elegant practical principle for simulating rare events. The method approximates the probability of a rare event by means of a family of probabilistic models, and has been extended to optimization by treating an optimal event as a rare event. CE works rather well for deterministic function optimization. It appears, however, that two conditions are needed for good convergence: the family of models must be flexible enough to discriminate the optimal events, and, indirectly, the function to be optimized should be deterministic. The purpose of this paper is to consider the case of a partially discriminating model family and of stochastic functions. It is shown on simple examples that CE can fail when these hypotheses are relaxed. Alternative improvements of the CE method are investigated and compared on random examples in order to handle this issue.
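The basic CE optimization loop described above is short: sample from a parametric family, keep the elite samples, and refit the parameters to them. A minimal sketch with a one-dimensional Gaussian family on a deterministic objective (the objective and all parameter values are our own illustration, the favorable case in which CE is known to work well):

```python
import numpy as np

rng = np.random.default_rng(0)

def ce_maximize(f, mu=0.0, sigma=5.0, n=100, elite=10, iters=50):
    """CE loop: sample n points, keep the elite best, refit the Gaussian."""
    for _ in range(iters):
        xs = rng.normal(mu, sigma, n)
        best = xs[np.argsort(f(xs))[-elite:]]
        mu, sigma = best.mean(), best.std() + 1e-12
    return mu

# Smooth deterministic objective with a unique maximum at x = 3.
f = lambda x: -(x - 3.0) ** 2
assert abs(ce_maximize(f) - 3.0) < 0.2
```

With a stochastic objective the elite selection becomes noisy, which is exactly the failure mode the paper studies.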
math/0609562
On quadratic residue codes and hyperelliptic curves
math.CO cs.IT math.AG math.IT math.NT
A long-standing problem has been to develop "good" binary linear codes for error correction. This paper investigates in some detail an attack on this problem using a connection between quadratic residue codes and hyperelliptic curves. One question which coding theory is used to attack is: does there exist a $c < 2$ such that, for all sufficiently large $p$ and all subsets $S$ of $GF(p)$, we have $|X_S(GF(p))| < cp$?
math/0610184
Adaptive Poisson disorder problem
math.PR cs.IT math.IT math.ST stat.TH
We study the quickest detection problem of a sudden change in the arrival rate of a Poisson process from a known value to an unknown and unobservable value at an unknown and unobservable disorder time. Our objective is to design an alarm time which is adapted to the history of the arrival process and detects the disorder time as soon as possible. In previous solvable versions of the Poisson disorder problem, the arrival rate after the disorder has been assumed a known constant. In reality, however, we may at most have some prior information about the likely values of the new arrival rate before the disorder actually happens, and insufficient estimates of the new rate after the disorder happens. Consequently, we assume in this paper that the new arrival rate after the disorder is a random variable. The detection problem is shown to admit a finite-dimensional Markovian sufficient statistic, if the new rate has a discrete distribution with finitely many atoms. Furthermore, the detection problem is cast as a discounted optimal stopping problem with running cost for a finite-dimensional piecewise-deterministic Markov process. This optimal stopping problem is studied in detail in the special case where the new arrival rate has Bernoulli distribution. This is a nontrivial optimal stopping problem for a two-dimensional piecewise-deterministic Markov process driven by the same point process. Using a suitable single-jump operator, we solve it fully, describe the analytic properties of the value function and the stopping region, and present methods for their numerical calculation. We provide a concrete example where the value function does not satisfy the smooth-fit principle on a proper subset of the connected, continuously differentiable optimal stopping boundary, whereas it does on the complement of this set.
math/0611422
Self-organizing maps for exploratory data analysis and visualization
math.ST cs.NE nlin.AO stat.TH
This paper shows how to use the Kohonen algorithm to represent multidimensional data by exploiting its self-organizing property. Such maps can be obtained for quantitative variables as well as for qualitative ones, or for a mixture of both. The contents of the paper come from various works by SAMOS-MATISSE members, in particular by E. de Bodt, B. Girard, P. Letr\'{e}my, S. Ibbou, P. Rousset. Most of the examples have been studied with the computation routines written by Patrick Letr\'{e}my in the IML-SAS language, which are available on the Web page http://samos.univ-paris1.fr.
math/0611433
Working times in atypical forms of employment: the special case of part-time work
math.ST cs.NE stat.TH
In the present article, we attempt to devise a typology of forms of part-time employment by applying a widely used neural method called Kohonen maps. Starting from data described by categorical variables, we show how it is possible to represent the observations and the modalities of the variables that define them simultaneously on a single map. This allows us to identify, and to try to describe, the main categories of part-time employment.
math/0611937
Remarks on Inheritance Systems
math.LO cs.AI
We attempt a conceptual analysis of inheritance diagrams, first in abstract terms, and then compare it to the "normality" and the "small/big sets" of preferential and related reasoning. The main ideas concern nodes as truth values and information sources, truth comparison by paths, accessibility or relevance of information by paths, relative normality, and prototypical reasoning.
math/0612046
On the Submodularity of Influence in Social Networks
math.PR cs.GT cs.SI
We prove and extend a conjecture of Kempe, Kleinberg, and Tardos (KKT) on the spread of influence in social networks. A social network can be represented by a directed graph where the nodes are individuals and the edges indicate a form of social relationship. A simple way to model the diffusion of ideas, innovative behavior, or ``word-of-mouth'' effects on such a graph is to consider an increasing process of ``infected'' (or active) nodes: each node becomes infected once an activation function of the set of its infected neighbors crosses a certain threshold value. Such a model was introduced by KKT in \cite{KeKlTa:03,KeKlTa:05} where the authors also impose several natural assumptions: the threshold values are (uniformly) random; and the activation functions are monotone and submodular. For an initial set of active nodes $S$, let $\sigma(S)$ denote the expected number of active nodes at termination. Here we prove a conjecture of KKT: we show that the function $\sigma(S)$ is submodular under the assumptions above. We prove the same result for the expected value of any monotone, submodular function of the set of active nodes at termination.
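Submodularity, the property proved for $\sigma(S)$ above, states that marginal gains diminish: $f(S \cup \{x\}) - f(S) \ge f(T \cup \{x\}) - f(T)$ whenever $S \subseteq T$ and $x \notin T$. As an illustration only (not the KKT influence process itself), the sketch below brute-force checks this inequality for a simple one-step coverage function on a hypothetical toy graph:

```python
from itertools import combinations

# Hypothetical toy graph: node -> set of nodes it activates (including itself).
NEIGHBORS = {1: {1, 2, 3}, 2: {2, 4}, 3: {3, 4, 5}, 4: {4}}

def coverage(S):
    """Number of nodes reachable in one step from the seed set S."""
    return len(set().union(*(NEIGHBORS[v] for v in S)) if S else set())

def is_submodular(f, ground):
    """Check f(S+x) - f(S) >= f(T+x) - f(T) for all S subset of T, x not in T."""
    sets = [frozenset(c) for r in range(len(ground) + 1)
            for c in combinations(ground, r)]
    for S in sets:
        for T in sets:
            if not S <= T:
                continue
            for x in ground - T:
                if f(S | {x}) - f(S) < f(T | {x}) - f(T):
                    return False
    return True

assert is_submodular(coverage, frozenset(NEIGHBORS))
```

Submodularity is what makes greedy seed selection come with a $(1 - 1/e)$ approximation guarantee in the KKT framework.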
math/0612682
Convergence Speed in Distributed Consensus and Control
math.OC cs.SY
We study the convergence speed of distributed iterative algorithms for the consensus and averaging problems, with emphasis on the latter. We first consider the case of a fixed communication topology. We show that a simple adaptation of a consensus algorithm leads to an averaging algorithm. We prove lower bounds on the worst-case convergence time for various classes of linear, time-invariant, distributed consensus methods, and provide an algorithm that essentially matches those lower bounds. We then consider the case of a time-varying topology, and provide a polynomial-time averaging algorithm.
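The averaging iteration studied in such papers is simply $x \leftarrow Wx$ with a doubly stochastic weight matrix $W$, which preserves the average at every step and drives all entries to the mean. A minimal sketch on a hypothetical 3-node topology of our own:

```python
import numpy as np

# Doubly stochastic weights on a 3-node path graph (rows and columns sum to 1).
W = np.array([[0.5,  0.5,  0.0],
              [0.5,  0.25, 0.25],
              [0.0,  0.25, 0.75]])

x = np.array([3.0, 0.0, 6.0])
target = x.mean()
for _ in range(200):
    x = W @ x          # each node averages with its neighbors
# Double stochasticity preserves the average, so consensus is on the mean.
assert np.allclose(x, target)
```

The paper's subject, convergence *speed*, is governed by the second-largest eigenvalue modulus of $W$.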
math/0701131
Compressed Sensing and Redundant Dictionaries
math.PR cs.IT math.IT
This article extends the concept of compressed sensing to signals that are not sparse in an orthonormal basis but rather in a redundant dictionary. It is shown that a matrix which is a composition of a random matrix of certain type and a deterministic dictionary has small restricted isometry constants. Thus, signals that are sparse with respect to the dictionary can be recovered via Basis Pursuit from a small number of random measurements. Further, thresholding is investigated as a recovery algorithm for compressed sensing, and conditions are provided that guarantee reconstruction with high probability. The different schemes are compared by numerical experiments.
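The restricted isometry property at the heart of this line of work says that a random measurement matrix nearly preserves the norm of all sparse vectors. A quick empirical illustration (our own toy dimensions, Gaussian matrix, identity dictionary), checking that $\|Ax\|^2 / \|x\|^2$ concentrates near 1 on random sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, k = 100, 400, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix

ratios = []
for _ in range(100):
    x = np.zeros(n)
    idx = rng.choice(n, size=k, replace=False)  # random k-sparse support
    x[idx] = rng.standard_normal(k)
    ratios.append(np.linalg.norm(A @ x) ** 2 / np.linalg.norm(x) ** 2)
ratios = np.array(ratios)

# ||Ax||^2 / ||x||^2 stays in a narrow band around 1 on sparse vectors.
assert 0.4 < ratios.min() and ratios.max() < 1.6
```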
math/0701142
On the use of self-organizing maps to accelerate vector quantization
math.ST cs.NE stat.TH
Self-organizing maps (SOM) are widely used for their topology preservation property: neighboring input vectors are quantized (or classified) either at the same location or at neighboring ones on a predefined grid. SOM are also widely used for their more classical vector quantization property. We show in this paper that using SOM instead of the more classical Simple Competitive Learning (SCL) algorithm drastically increases the speed of convergence of the vector quantization process. This fact is demonstrated through extensive simulations on artificial and real examples, with specific SOM (fixed and decreasing neighborhoods) and SCL algorithms.
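The SOM-with-decreasing-neighborhood scheme mentioned above can be sketched in one dimension: the winning unit *and* its grid neighbors move toward each input, and once the radius shrinks to zero the update is exactly SCL. All parameter values below are our own toy choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def train_som(data, n_units=10, epochs=20, lr=0.3):
    """1-D Kohonen map on scalar data; the neighborhood radius decreases
    from 1 to 0, so the late phase is plain competitive learning (SCL)."""
    codebook = rng.uniform(0.0, 1.0, n_units)
    for epoch in range(epochs):
        radius = 1 if epoch < epochs // 2 else 0
        for x in rng.permutation(data):
            w = int(np.argmin(np.abs(codebook - x)))      # winning unit
            lo, hi = max(0, w - radius), min(n_units - 1, w + radius)
            codebook[lo:hi + 1] += lr * (x - codebook[lo:hi + 1])
        lr *= 0.9
    return np.sort(codebook)

data = rng.uniform(0.0, 1.0, 500)
codebook = train_som(data)
# The ordered codebook spreads over the support of the uniform data.
assert codebook[0] < 0.3 and codebook[-1] > 0.7
```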
math/0701144
Statistical tools to assess the reliability of self-organizing maps
math.ST cs.NE stat.TH
Results of neural network learning are always subject to some variability, due to the sensitivity to initial conditions, to convergence to local minima, and, sometimes more dramatically, to sampling variability. This paper presents a set of tools designed to assess the reliability of the results of Self-Organizing Maps (SOM), i.e. to test on a statistical basis the confidence we can have in the result of a specific SOM. The tools concern the quantization error in a SOM and the neighborhood relations (both at the level of a specific pair of observations and globally on the map). As a by-product, these measures also make it possible to assess the adequacy of the number of units chosen in a map. The tools may also be used to measure objectively how much less sensitive SOM are to non-linear optimization problems (local minima, convergence, etc.) than other neural network models.
math/0701145
Bootstrap for neural model selection
math.ST cs.NE stat.TH
Bootstrap techniques (also called resampling computation techniques) have introduced new advances in modeling and model evaluation. Using resampling methods to construct a series of new samples based on the original data set allows one to estimate the stability of the parameters. Properties such as convergence and asymptotic normality can be checked for any particular observed data set. In most cases, the statistics computed on the generated data sets give a good idea of the confidence regions of the estimates. In this paper, we discuss the contribution of such methods to model selection in the case of feedforward neural networks. The method is described and compared with the leave-one-out resampling method. The effectiveness of the bootstrap method, versus the leave-one-out method, is checked through a number of examples.
math/0701152
Missing values: processing with the Kohonen algorithm
math.ST cs.NE stat.TH
The processing of data which contain missing values is a complicated and always awkward problem when the data come from real-world contexts. In applications, we are very often faced with observations for which not all the values are available, and this can occur for many reasons: typing errors, fields left unanswered in surveys, etc. Most statistical software (such as SAS, for example) simply suppresses incomplete observations. This has no practical consequence when the data are very numerous. But if the number of remaining data is too small, it can remove all significance from the results. To avoid suppressing data in that way, it is possible to replace a missing value with the mean value of the corresponding variable, but this approximation can be very bad when the variable has a large variance. It is therefore worth noting that the Kohonen algorithm (as well as the Forgy algorithm) deals perfectly well with data with missing values, without having to estimate them beforehand. We are particularly interested in the Kohonen algorithm for its visualization properties.
math/0701261
Tracking Stopping Times Through Noisy Observations
math.ST cs.IT math.IT stat.TH
A novel quickest detection setting is proposed which is a generalization of the well-known Bayesian change-point detection model. Suppose \{(X_i,Y_i)\}_{i\geq 1} is a sequence of pairs of random variables, and that S is a stopping time with respect to \{X_i\}_{i\geq 1}. The problem is to find a stopping time T with respect to \{Y_i\}_{i\geq 1} that optimally tracks S, in the sense that T minimizes the expected reaction delay E(T-S)^+, while keeping the false-alarm probability P(T<S) below a given threshold \alpha \in [0,1]. This problem formulation applies in several areas, such as in communication, detection, forecasting, and quality control. Our results relate to the situation where the X_i's and Y_i's take values in finite alphabets and where S is bounded by some positive integer \kappa. By using elementary methods based on the analysis of the tree structure of stopping times, we exhibit an algorithm that computes the optimal average reaction delays for all \alpha \in [0,1], and constructs the associated optimal stopping times T. Under certain conditions on \{(X_i,Y_i)\}_{i\geq 1} and S, the algorithm running time is polynomial in \kappa.
math/0701419
Strategies for prediction under imperfect monitoring
math.ST cs.LG stat.TH
We propose simple randomized strategies for sequential prediction under imperfect monitoring, that is, when the forecaster does not have access to the past outcomes but rather to a feedback signal. The proposed strategies are consistent in the sense that they achieve, asymptotically, the best possible average reward. It was Rustichini (1999) who first proved the existence of such consistent predictors. The forecasters presented here offer the first constructive proof of consistency. Moreover, the proposed algorithms are computationally efficient. We also establish upper bounds for the rates of convergence. In the case of deterministic feedback, these rates are optimal up to logarithmic terms.
math/0701791
Linear versus Non-linear Acquisition of Step-Functions
math.CA cs.CV
We address in this paper the following two closely related problems: 1. How to represent functions with singularities (up to a prescribed accuracy) in a compact way? 2. How to reconstruct such functions from a small number of measurements? The stress is on a comparison of linear and non-linear approaches. As a model case we use piecewise-constant functions on [0,1], in particular, the Heaviside jump function. Considered as a curve in the Hilbert space, it is completely characterized by the fact that any two of its disjoint chords are orthogonal. We reinterpret this fact in the context of step-functions in one or two variables. Next we study the limitations on representability and reconstruction of piecewise-constant functions by linear and semi-linear methods. Our main tools in this problem are Kolmogorov's n-width and entropy, as well as Temlyakov's (N,m)-width. On the positive side, we show that a very accurate non-linear reconstruction is possible. It goes through the solution of certain specific non-linear systems of algebraic equations. We discuss the form of these systems and methods for their solution, stressing their relation to Moment Theory and Complex Analysis. Finally, we informally discuss two problems in Computer Imaging which are parallel to problems 1 and 2 above: compression of still images and video-sequences on one side, and image reconstruction from indirect measurement (for example, in Computer Tomography), on the other.
math/0702116
A Direct Matrix Method for Computing Analytical Jacobians of Discretized Nonlinear Integro-differential Equations
math.NA cs.CE
In this pedagogical article, we present a simple direct matrix method for analytically computing the Jacobian of nonlinear algebraic equations that arise from the discretization of nonlinear integro-differential equations. The method is based on a formulation of the discretized equations in vector form using only matrix-vector products and component-wise operations. By applying simple matrix-based differentiation rules, the matrix form of the analytical Jacobian can be calculated with little more difficulty than that required when computing derivatives in single-variable calculus. After describing the direct matrix method, we present numerical experiments demonstrating the computational performance of the method, discuss its connection to the Newton-Kantorovich method, and apply it to illustrative 1D and 2D example problems. MATLAB code is provided to demonstrate the low code complexity required by the method.
math/0702301
Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting
math.ST cs.IT math.IT stat.TH
The problem of recovering the sparsity pattern of a fixed but unknown vector $\beta^* \in \mathbb{R}^p$ based on a set of $n$ noisy observations arises in a variety of settings, including subset selection in regression, graphical model selection, signal denoising, compressive sensing, and constructive approximation. Of interest are conditions on the model dimension $p$, the sparsity index $s$ (number of non-zero entries in $\beta^*$), and the number of observations $n$ that are necessary and/or sufficient to ensure asymptotically perfect recovery of the sparsity pattern. This paper focuses on the information-theoretic limits of sparsity recovery: in particular, for a noisy linear observation model based on measurement vectors drawn from the standard Gaussian ensemble, we derive both a set of sufficient conditions for asymptotically perfect recovery using the optimal decoder, as well as a set of necessary conditions that any decoder, regardless of its computational complexity, must satisfy for perfect recovery. This analysis of optimal decoding limits complements our previous work (ARXIV: math.ST/0605740) on sharp thresholds for sparsity recovery using the Lasso ($\ell_1$-constrained quadratic programming) with Gaussian measurement ensembles.
math/0702804
The Loss Rank Principle for Model Selection
math.ST cs.LG stat.ME stat.ML stat.TH
We introduce a new principle for model selection in regression and classification. Many regression models are controlled by some smoothness or flexibility or complexity parameter c, e.g. the number of neighbors to be averaged over in k nearest neighbor (kNN) regression or the polynomial degree in regression with polynomials. Let f_D^c be the (best) regressor of complexity c on data D. A more flexible regressor can fit more data D' well than a more rigid one. If something (here small loss) is easy to achieve it's typically worth less. We define the loss rank of f_D^c as the number of other (fictitious) data D' that are fitted better by f_D'^c than D is fitted by f_D^c. We suggest selecting the model complexity c that has minimal loss rank (LoRP). Unlike most penalized maximum likelihood variants (AIC,BIC,MDL), LoRP only depends on the regression function and loss function. It works without a stochastic noise model, and is directly applicable to any non-parametric regressor, like kNN. In this paper we formalize, discuss, and motivate LoRP, study it for specific regression problems, in particular linear ones, and compare it to other model selection schemes.
math/0702866
Consumer Profile Identification and Allocation
math.ST cs.NE stat.TH
We propose an easy-to-use methodology for allocating new individuals to one of the groups previously built from a complete learning data base. The learning data base contains continuous and categorical variables for each individual. The groups (clusters) are built using only the continuous variables and described with the help of the categorical ones. For the new individuals, only the categorical variables are available, so it is necessary to define a model which computes the probabilities of belonging to each of the clusters using only the categorical variables. This model provides a decision rule for assigning the new individuals and gives decision-makers an efficient tool. The tool is shown to be very efficient, for example, for allocating customers to consumer clusters for marketing purposes.
math/0703019
Reading policies for joins: An asymptotic analysis
math.PR cs.DB
Suppose that $m_n$ observations are made from the distribution $\mathbf {R}$ and $n-m_n$ from the distribution $\mathbf {S}$. Associate with each pair, $x$ from $\mathbf {R}$ and $y$ from $\mathbf {S}$, a nonnegative score $\phi(x,y)$. An optimal reading policy is one that yields a sequence $m_n$ that maximizes $\mathbb{E}(M(n))$, the expected sum of the $(n-m_n)m_n$ observed scores, uniformly in $n$. The alternating policy, which switches between the two sources, is the optimal nonadaptive policy. In contrast, the greedy policy, which chooses its source to maximize the expected gain on the next step, is shown to be the optimal policy. Asymptotics are provided for the case where the $\mathbf {R}$ and $\mathbf {S}$ distributions are discrete and $\phi(x,y)=1$ or $0$ according as $x=y$ or not (i.e., the observations match). Specifically, an invariance result is proved which guarantees that for a wide class of policies, including the alternating and the greedy, the variable M(n) obeys the same CLT and LIL. A more delicate analysis of the sequence $\mathbb{E}(M(n))$ and the sample paths of M(n), for both alternating and greedy, reveals the slender sense in which the latter policy is asymptotically superior to the former, as well as a sense of equivalence of the two and robustness of the former.
math/9310227
A linear construction for certain Kerdock and Preparata codes
math.CO cs.IT math.IT
The Nordstrom-Robinson, Kerdock, and (slightly modified) Preparata codes are shown to be linear over $\mathbb{Z}_4$, the integers $\bmod~4$. The Kerdock and Preparata codes are duals over $\mathbb{Z}_4$, and the Nordstrom-Robinson code is self-dual. All these codes are just extended cyclic codes over $\mathbb{Z}_4$. This provides a simple definition for these codes and explains why their Hamming weight distributions are dual to each other. First- and second-order Reed-Muller codes are also linear codes over $\mathbb{Z}_4$, but Hamming codes in general are not, nor is the Golay code.
math/9508219
New results on binary linear codes
math.CO cs.IT math.IT
This research announcement describes in very rough terms methods and a computer language under development, which can be used to prove the nonexistence of binary linear codes. Over a hundred new results have been obtained by the author. For example, there is no [29,11,10] code. The proof of this is roughly outlined.
math/9801152
On the classifiability of cellular automata
math.LO cs.NE
Based on computer simulations, Wolfram presented in several papers conjectured classifications of cellular automata into 4 types. He distinguishes the 4 classes of cellular automata by the evolution of the pattern generated by applying a cellular automaton to a finite input. Wolfram's qualitative classification is based on the examination of a large number of simulations. In addition to this classification based on the rate of growth, he conjectured a similar classification according to the eventual pattern. We consider here one formalization of his rate-of-growth suggestion. After completing our major results (based only on Wolfram's work), we investigated other contributions to the area, and we report the relation of some of them to our discoveries.
math/9905046
Duality between Multidimensional Convolutional Codes and Systems
math.OC cs.IT math.AC math.AG math.IT
Multidimensional convolutional codes generalize (one dimensional) convolutional codes and they correspond under a natural duality to multidimensional systems widely studied in the systems literature.
math/9905165
Perception games, the image understanding and interpretational geometry
math.HO cs.CV
An interactive game-theoretical approach to the description of perception processes is proposed. The subject is treated formally in terms of a new class of verbalizable interactive games, called perception games. An application of the previously elaborated formalism of dialogues and verbalizable interactive games to visual perception makes it possible to combine linguistic (such as formal grammars), psycholinguistic, and (interactive) game-theoretical methods for the analysis of image understanding by a human, which may also be useful for the elaboration of computer vision systems. Along the way, the interactive game-theoretical aspects of interpretational geometries are clarified.
math/9909163
Coding Theory and Uniform Distributions
math.CO cs.IT math.IT math.NT
In the present paper we introduce and study finite point subsets of a special kind, called optimum distributions, in the n-dimensional unit cube. Such distributions are closely related to the known (delta,s,n)-nets of low discrepancy. It turns out that optimum distributions have a rich combinatorial structure. Namely, we show that optimum distributions can be characterized completely as maximum distance separable codes with respect to a non-Hamming metric. Weight spectra of such codes can be evaluated precisely. We also consider linear codes and distributions and study their general properties, including duality with respect to a suitable inner product. The corresponding generalized MacWilliams identities for weight enumerators are briefly discussed. Broad classes of linear maximum distance separable codes and linear optimum distributions are explicitly constructed in the paper by Hermite interpolation over finite fields.
math/9910062
Efficient sphere-covering and converse measure concentration via generalized coding theorems
math.PR cs.IT math.FA math.IT
Suppose A is a finite set equipped with a probability measure P and let M be a ``mass'' function on A. We give a probabilistic characterization of the most efficient way in which A^n can be almost-covered using spheres of a fixed radius. An almost-covering is a subset C_n of A^n, such that the union of the spheres centered at the points of C_n has probability close to one with respect to the product measure P^n. An efficient covering is one with small mass M^n(C_n); n is typically large. With different choices for M and the geometry on A our results give various corollaries as special cases, including Shannon's data compression theorem, a version of Stein's lemma (in hypothesis testing), and a new converse to some measure concentration inequalities on discrete spaces. Under mild conditions, we generalize our results to abstract spaces and non-product measures.
math/9910149
Rational points, genus and asymptotic behaviour in reduced algebraic curves over finite fields
math.AG cs.IT math.IT math.NT
The number A(q) describes the asymptotic behaviour of the ratio of the number of rational points to the genus of non-singular absolutely irreducible curves over a finite field Fq. Research on bounds for A(q) is closely connected with the so-called asymptotic main problem in Coding Theory. In this paper, we study some generalizations of this number for non-irreducible curves, their connection with A(q), and their application in Coding Theory.
math/9910151
Decoding Algebraic Geometry codes by a key equation
math.AG cs.IT math.IT math.NA
A new effective decoding algorithm is presented for arbitrary algebraic-geometric codes, based on solving a generalized key equation with the majority coset scheme of Duursma. It improves on Ehrhard's algorithm, since the method corrects up to half the Goppa distance with complexity of order O(n^{2.81}), and with no further assumption on the degree of the divisor G.
math/9910155
Computing Weierstrass semigroups and the Feng-Rao distance from singular plane models
math.AG cs.IT math.IT math.NA
We present an algorithm to compute the Weierstrass semigroup at a point P together with functions for each value in the semigroup, provided P is the only branch at infinity of a singular plane model for the curve. As a byproduct, the method also provides us with a basis for the spaces L(mP) and the computation of the Feng-Rao distance for the corresponding array of geometric Goppa codes. A general computation of the Feng-Rao distance is also obtained. Everything can be applied to the decoding problem by using the majority scheme of Feng and Rao.
math/9910175
Polynomial method in coding and information theory
math.CO cs.IT math.IT
Polynomial, or Delsarte's, method in coding theory accounts for a variety of structural results on, and bounds on the size of, extremal configurations (codes and designs) in various metric spaces. In recent works of the authors the applicability of the method was extended to cover a wider range of problems in coding and information theory. In this paper we present a general framework for the method which includes previous results as particular cases. We explain how this generalization leads to new asymptotic bounds on the performance of codes in binary-input memoryless channels and the Gaussian channel, which improve the results of Shannon et al. of 1959-67, and to a number of other results in combinatorial coding theory.
math/9911025
On the parameters of Algebraic Geometry codes related to Arf semigroups
math.NT cs.IT math.AG math.IT
In this paper we compute the order (or Feng-Rao) bound on the minimum distance of one-point algebraic geometry codes when the Weierstrass semigroup at the point Q is an Arf semigroup. The results developed for that purpose also provide the dimension of the improved geometric Goppa codes related to those semigroups.
nlin/0001057
Numerical Replication of Computer Simulations: Some Pitfalls and How To Avoid Them
nlin.AO cs.NE
A computer simulation, such as a genetic algorithm, that uses IEEE standard floating-point arithmetic may not produce exactly the same results in two different runs, even if it is rerun on the same computer with the same input and random number seeds. Researchers should not simply assume that the results from one run replicate those from another but should verify this by actually comparing the data. However, researchers who are aware of this pitfall can reliably replicate simulations, in practice. This paper discusses the problem and suggests solutions.
nlin/0002040
The dynamics of iterated transportation simulations
nlin.AO cs.CE
Iterating between a router and a traffic micro-simulation is an increasingly accepted method for doing traffic assignment. This paper, after pointing out that the analytical theory of simulation-based assignment to date is insufficient for some practical cases, presents results of simulation studies from a real-world study. Specifically, we look into the issues of uniqueness, variability, robustness, and validation. Regarding uniqueness, despite some cautionary notes from a theoretical point of view, we find no indication of ``meta-stable'' states for the iterations. Variability, however, is considerable. By variability we mean the variation of the simulation of a given plan set when just the random seed is changed. We then show results from three different micro-simulations under the same iteration scenario in order to test the robustness of the results under different implementations. We find the results encouraging, also when comparing to reality and to a traditional assignment result. Keywords: dynamic traffic assignment (DTA); traffic micro-simulation; TRANSIMS; large-scale simulations; urban planning
nlin/0006025
Information Bottlenecks, Causal States, and Statistical Relevance Bases: How to Represent Relevant Information in Memoryless Transduction
nlin.AO cond-mat.dis-nn cs.LG physics.data-an
Discovering relevant, but possibly hidden, variables is a key step in constructing useful and predictive theories about the natural world. This brief note explains the connections between three approaches to this problem: the recently introduced information-bottleneck method, the computational mechanics approach to inferring optimal models, and Salmon's statistical relevance basis.
nlin/0109019
Olfactory search at high Reynolds number
nlin.CD cs.RO nlin.AO physics.bio-ph
Locating the source of odor in a turbulent environment - a common behavior for living organisms - is non-trivial because of the random nature of mixing. Here we analyze the statistical physics aspects of the problem and propose an efficient strategy for olfactory search which can work in turbulent plumes. The algorithm combines maximum likelihood inference of the source position with an active search. Our approach provides the theoretical basis for the design of olfactory robots and the quantitative tools for the analysis of the observed olfactory search behavior of living creatures (e.g. odor-modulated optomotor anemotaxis of moths).
nlin/0202038
On model selection and the disability of neural networks to decompose tasks
nlin.AO cond-mat.dis-nn cs.NE
A neural network with fixed topology can be regarded as a parametrization of functions, which decides on the correlations between functional variations when parameters are adapted. We propose an analysis, based on a differential-geometry point of view, that allows one to calculate these correlations. In practice, this describes how one response is unlearned while another is trained. Concerning conventional feed-forward neural networks, we find that they generically introduce strong correlations, are predisposed to forgetting, and are inappropriate for task decomposition. Perspectives for solving these problems are discussed.
nlin/0202039
A neural model for multi-expert architectures
nlin.AO cond-mat.dis-nn cs.NE
We present a generalization of conventional artificial neural networks that allows for a functional equivalence to multi-expert systems. The new model provides an architectural freedom going beyond existing multi-expert models and an integrative formalism to compare and combine various techniques of learning. (We consider gradient, EM, reinforcement, and unsupervised learning.) Its uniform representation aims at a simple genetic encoding and evolutionary structure optimization of multi-expert systems. This paper contains a detailed description of the model and learning rules, empirically validates its functionality, and discusses future perspectives.
nlin/0204038
Neutrality: A Necessity for Self-Adaptation
nlin.AO cs.NE q-bio
Self-adaptation is used in all main paradigms of evolutionary computation to increase efficiency. We claim that the basis of self-adaptation is the use of neutrality. In the absence of external control neutrality allows a variation of the search distribution without the risk of fitness loss.
nlin/0210041
Simple Model for the Dynamics of Correlations in the Evolution of Economic Entities Under Varying Economic Conditions
nlin.AO cond-mat.stat-mech cs.CE physics.soc-ph
From some observations on economic behaviors, in particular changing economic conditions with time and space, we develop a very simple model for the evolution of economic entities within a geographical type of framework. We raise a few questions and attempt to investigate whether some of them can be tackled by our model. Several cases of interest are reported. It is found that the model even in its simple forms can lead to a large variety of situations, including: delocalization and cycles, but also pre-chaotic behavior.
nlin/0211010
Evolution and anti-evolution in a minimal stock market model
nlin.AO cond-mat.stat-mech cs.MA q-fin.TR
We present a novel microscopic stock market model consisting of a large number of random agents modeling traders in a market. Each agent is characterized by a set of parameters that serve to make iterated predictions of two successive returns. The future price is determined according to the supply and demand of all agents. The system evolves by redistributing the capital among the agents in each trading cycle. Without noise the dynamics of this system is nearly regular and thereby fails to reproduce the stochastic return fluctuations observed in real markets. However, when a small amount of noise is introduced in each cycle, we find the typical features of real financial time series, like fat tails of the return distribution and large temporal correlations in the volatility without significant correlations in the price returns. Introducing the noise by an evolutionary process leads to different scalings of the return distributions that depend on the definition of fitness. Because our realistic model has only very few parameters, and the results appear to be robust with respect to the noise level and the number of agents, we expect that our framework may serve as a new paradigm for modeling self-generated return fluctuations in markets.
nlin/0211013
A Spin Glass Model of Human Logic Systems
nlin.AO cond-mat.dis-nn cs.MA
In this paper, we discuss different models for human logic systems and describe a game with nature. Gödel's incompleteness theorem is taken into account to construct a model of logical networks based on axioms obtained by symmetry breaking. These classical logic networks are then coupled using rules that depend on whether two networks contain axioms or anti-axioms. The social lattice of axiom-based logic networks is then placed with the environment network in a game including entropy as a cost factor. The classical logical networks are then replaced with ``preference axioms'' playing the role of fuzzy logic.
nlin/0211024
Exploring the cooperative regimes in a model of agents without memory or "tags": indirect reciprocity vs. selfish incentives
nlin.AO cond-mat cs.CE hep-lat nlin.CG physics.soc-ph
The self-organization in cooperative regimes in a simple mean-field version of a model based on "selfish" agents which play the Prisoner's Dilemma (PD) game is studied. The agents have no memory and use strategies not based on direct reciprocity nor 'tags'. Two variables are assigned to each agent $i$ at time $t$, measuring its capital $C(i;t)$ and its probability of cooperation $p(i;t)$. At each time step $t$ a pair of agents interact by playing the PD game. These 2 agents update their probability of cooperation $p(i)$ as follows: they compare the profits they made in this interaction $\delta C(i;t)$ with an estimator $\epsilon(i;t)$ and, if $\delta C(i;t) \ge \epsilon(i;t)$, agent $i$ increases its $p(i;t)$, while if $\delta C(i;t) < \epsilon(i;t)$ the agent decreases $p(i;t)$. The $4!=24$ different cases produced by permuting the four Prisoner's Dilemma canonical payoffs 3, 0, 1, and 5 - corresponding, respectively, to $R$ (reward), $S$ (sucker's payoff), $T$ (temptation to defect) and $P$ (punishment) - are analyzed. It turns out that for all these 24 possibilities, after a transient, the system self-organizes into a stationary state with average equilibrium probability of cooperation $\bar{p}_\infty$ = constant $> 0$. Depending on the payoff matrix, there are different equilibrium states characterized by their average probability of cooperation and average equilibrium per-capita income ($\bar{p}_\infty,\bar{\delta C}_\infty$).
nlin/0212030
The structure of evolutionary exploration: On crossover, building blocks and Estimation-Of-Distribution Algorithms
nlin.AO cs.NE q-bio
The notion of building blocks can be related to the structure of the offspring probability distribution: loci whose variability is strongly correlated constitute a building block. We call this correlated exploration. With this background we analyze the structure of the offspring probability distribution, or exploration distribution, for a GA with mutation only, a crossover GA, and an Estimation-Of-Distribution Algorithm (EDA). The results allow a precise characterization of the structure of the crossover exploration distribution. Essentially, the crossover operator destroys mutual information between loci by transforming it into entropy; it does the inverse of correlated exploration. In contrast, the objective of EDAs is to model the mutual information between loci in the fitness distribution, and thereby they induce correlated exploration.
nlin/0304006
Determining possible avenues of approach using ANTS
nlin.AO cs.AI
Threat assessment is an important part of level 3 data fusion. Here we study a subproblem of this, worst-case risk assessment. Inspired by agent-based models used for the simulation of trail formation in urban planning, we use ant colony optimization (ANTS) to determine possible avenues of approach for the enemy, given a situation picture. One way of determining such avenues would be to calculate the ``potential field'' caused by placing sources at possible goals for the enemy. This requires postulating a functional form for the potential, and also takes a long time. Here we instead seek a method for quickly obtaining an effective potential. ANTS, which has previously been used to obtain approximate solutions to various optimization problems, is well suited for this. The output of our method describes possible avenues of approach for the enemy, i.e., areas where we should be prepared for attack. (The algorithm can also be run ``reversed'' to instead get areas of opportunity for our forces to exploit.) Using real geographical data, we found that our method gives a fast and reliable way of determining such avenues. Our method can be used in a computer-based command and control system to replace the first step of human intelligence analysis.
nlin/0306055
A Model for Prejudiced Learning in Noisy Environments
nlin.AO cs.LG
Based on the heuristics that maintaining presumptions can be beneficial in uncertain environments, we propose a set of basic axioms for learning systems to incorporate the concept of prejudice. The simplest, memoryless model of a deterministic learning rule obeying the axioms is constructed, and shown to be equivalent to the logistic map. The system's performance is analysed in an environment in which it is subject to external randomness, weighing learning defectiveness against stability gained. The corresponding random dynamical system with inhomogeneous, additive noise is studied, and shown to exhibit the phenomena of noise induced stability and stochastic bifurcations. The overall results allow for the interpretation that prejudice in uncertain environments entails a considerable portion of stubbornness as a secondary phenomenon.
nlin/0309039
Self-organizing Traffic Control: First Results
nlin.AO cs.MA
We developed a virtual laboratory for traffic control in which agents use different strategies to self-organize on the road. We present our first results, comparing the performance and behaviour promoted by environmental constraints and five different simple strategies: three inspired by flocking behaviour, one selfish, and one inspired by the minority game. Experiments comparing the strategies are presented. Different issues are discussed, such as the important role of environmental constraints and the emergence of traffic lanes.
nlin/0312056
Shannon information, LMC complexity and Renyi entropies: a straightforward approach
nlin.AO cond-mat.stat-mech cs.IT math.IT nlin.CD physics.comp-ph q-bio.QM
The LMC complexity, an indicator of complexity based on a probabilistic description, is revisited. A straightforward approach allows us to establish the time evolution of this indicator in a near-equilibrium situation and gives us new insight into interpreting the LMC complexity for a general non-equilibrium system. Its relationship with the Renyi entropies is also explained. One of the advantages of this indicator is that its calculation does not require a considerable computational effort in many cases of physical and biological interest.
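In its standard form, the LMC indicator is the product of the Shannon entropy H and a disequilibrium factor D measuring the distance from equiprobability, so it vanishes for both perfectly ordered and perfectly random distributions. A minimal sketch using those standard definitions (normalization conventions vary across the literature, and this makes no attempt to match the paper's):

```python
import math

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i log p_i (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def disequilibrium(p):
    """Distance from equiprobability, D = sum (p_i - 1/N)^2."""
    n = len(p)
    return sum((pi - 1.0 / n) ** 2 for pi in p)

def lmc_complexity(p):
    """LMC complexity C = H * D: zero for a delta distribution
    (H = 0) and zero for the uniform distribution (D = 0)."""
    return shannon_entropy(p) * disequilibrium(p)

def renyi_entropy(p, q):
    """Renyi entropy of order q (q != 1); tends to Shannon as q -> 1."""
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1.0 - q)
```

The vanishing of C at both extremes is what makes it a candidate measure of "complexity" sitting between order and randomness.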
nlin/0402046
Spontaneous Emergence of Complex Optimal Networks through Evolutionary Adaptation
nlin.AO cond-mat.stat-mech cs.MA nlin.CG q-bio.QM
An important feature of many complex systems, both natural and artificial, is that their interaction networks exhibit interesting structural and organizational properties. Here we present a theory of self-organization by evolutionary adaptation in which we show how the structure and organization of a network is related to the survival, or in general the performance, objectives of the system. We propose that a complex system optimizes its network structure in order to maximize its overall survival fitness which is composed of short-term and long-term survival components. These in turn depend on three critical measures of the network, namely, efficiency, robustness and cost, and the environmental selection pressure. Using a graph theoretical case study, we show that when efficiency is paramount the "Star" topology emerges and when robustness is important the "Circle" topology is found. When efficiency and robustness requirements are both important to varying degrees, other classes of networks such as the "Hub" emerge. Our assumptions and results are consistent with observations across a wide variety of applications.
nlin/0404004
Protocol Requirements for Self-organizing Artifacts: Towards an Ambient Intelligence
nlin.AO cs.AI
We discuss which properties common-use artifacts should have to collaborate without human intervention. We conceive how devices, such as mobile phones, PDAs, and home appliances, could be seamlessly integrated to provide an "ambient intelligence" that responds to the user's desires without requiring explicit programming or commands. While the hardware and software technology to build such systems already exists, as yet there is no standard protocol that can learn new meanings. We propose the first steps in the development of such a protocol, which would need to be adaptive, extensible, and open to the community, while promoting self-organization. We argue that devices, interacting through "game-like" moves, can learn to agree about how to communicate, with whom to cooperate, and how to delegate and coordinate specialized tasks. Thus, they may evolve a distributed cognition or collective intelligence capable of tackling complex tasks.
nlin/0404032
Metrics for more than two points at once
nlin.AO cond-mat.other cs.LG math.GM
The conventional definition of a topological metric over a space specifies properties that must be obeyed by any measure of "how separated" two points in that space are. Here it is shown how to extend that definition, and in particular the triangle inequality, to concern arbitrary numbers of points. Such a measure of how separated the points within a collection are can be bootstrapped, to measure "how separated" from each other are two (or more) collections. The measure presented here also allows fractional membership of an element in a collection. This means it directly concerns measures of ``how spread out'' a probability distribution over a space is. When such a measure is bootstrapped to compare two collections, it allows us to measure how separated two probability distributions are, or more generally, how separated a distribution of distributions is.
nlin/0407032
Application of Artificial Neural Network in Jitter Analysis of Dispersion-Managed Communication System
nlin.PS cs.AI cs.NA
An Artificial Neural Network (ANN) is used as a numerical method for solving the modified Nonlinear Schroedinger (NLS) equation with a Dispersion-Managed System (DMS) for jitter analysis. We take the optical axis z and the time t as input, and obtain relevant quantities such as the change of position and the center frequency of the pulse, and further the mean square time of the incoming pulse, which are needed for jitter analysis. We show that the ANN yields numerical solutions that are adaptive with respect to the numerical errors, and we verify previous results obtained using a conventional numerical method. Our result indicates that DMS can minimize the timing jitter induced by amplifiers.
nlin/0408007
Entropy Maximization as a Holistic Design Principle for Complex Optimal Networks and the Emergence of Power Laws
nlin.AO cond-mat.stat-mech cs.IT math.IT q-bio.QM
We present a general holistic theory for the organization of complex networks, both human-engineered and naturally-evolved. Introducing concepts of value of interactions and satisfaction as generic network performance measures, we show that the underlying organizing principle is to meet an overall performance target for wide-ranging operating or environmental conditions. This design or survival requirement of reliable performance under uncertainty leads, via the maximum entropy principle, to the emergence of a power law vertex degree distribution. The theory also predicts exponential or Poisson degree distributions depending on network redundancy, thus explaining all three regimes as different manifestations of a common underlying phenomenon within a unified theoretical framework.
nlin/0408039
Stability and Diversity in Collective Adaptation
nlin.AO cs.LG math.DS nlin.CD stat.ML
We derive a class of macroscopic differential equations that describe collective adaptation, starting from a discrete-time stochastic microscopic model. The behavior of each agent is a dynamic balance between adaptation that locally achieves the best action and memory loss that leads to randomized behavior. We show that, although individual agents interact with their environment and other agents in a purely self-interested way, macroscopic behavior can be interpreted as game dynamics. Application to several familiar, explicit game interactions shows that the adaptation dynamics exhibits a diversity of collective behaviors. The simplicity of the assumptions underlying the macroscopic equations suggests that these behaviors should be expected broadly in collective adaptation. We also analyze the adaptation dynamics from an information-theoretic viewpoint and discuss self-organization induced by information flux between agents, giving a novel view of collective adaptation.
nlin/0408040
Notes on information geometry and evolutionary processes
nlin.AO cs.NE
In order to analyze and extract different structural properties of distributions, one can introduce different coordinate systems over the manifold of distributions. In Evolutionary Computation, the Walsh bases and the Building Block Bases are often used to describe populations, which simplifies the analysis of evolutionary operators acting on populations. Quite independently of these approaches, information geometry has been developed as a geometric way to analyze different-order dependencies between random variables (e.g., neural activations or genes). In these notes I briefly review the essentials of various coordinate bases and of information geometry. The goal is to give an overview and make the approaches comparable. Besides introducing meaningful coordinate bases, information geometry also offers an explicit way to distinguish different-order interactions and a geometric view of the manifold, and thereby also of operators that act on the manifold. For instance, uniform crossover can be interpreted as an orthogonal projection of a population along an m-geodesic, monotonically reducing the theta-coordinates that describe interactions between genes.
nlin/0409013
Epistemic communities: description and hierarchic categorization
nlin.AO cs.IR
Social scientists have shown an increasing interest in understanding the structure of knowledge communities, and particularly the organization of "epistemic communities", that is, groups of agents sharing common knowledge concerns. However, most existing approaches are based on either social relationships or semantic similarity alone, and there has been hardly any attempt to link the social and semantic aspects. In this paper, we introduce a formal framework addressing this issue and propose a method based on Galois lattices (or concept lattices) for categorizing epistemic communities in an automated and hierarchically structured fashion. Arguing that our process allows us to rebuild a whole community structure and taxonomy, notably fields and subfields gathering a certain proportion of agents, we eventually apply it to empirical data to exhibit these structural properties, and successfully compare our results with categories spontaneously given by domain experts.
nlin/0411063
Detecting synchronization in spatially extended discrete systems by complexity measurements
nlin.CG cond-mat.dis-nn cs.MA math.DS nlin.PS q-bio.QM
The synchronization of two stochastically coupled one-dimensional cellular automata (CA) is analyzed. It is shown that the transition to synchronization is characterized by a dramatic increase of the statistical complexity of the patterns generated by the difference automaton. This singular behavior is verified to be present in several CA rules displaying complex behavior.
nlin/0411066
Self-Organizing Traffic Lights
nlin.AO cond-mat.stat-mech cs.AI cs.MA
Steering traffic in cities is a very complex task, since improving efficiency involves the coordination of many actors. Traditional approaches attempt to optimize traffic lights for a particular density and configuration of traffic. The disadvantage is that traffic densities and configurations change constantly. Traffic seems to be an adaptation problem rather than an optimization problem. We propose a simple and feasible alternative, in which traffic lights self-organize to improve traffic flow. We use a multi-agent simulation to study three self-organizing methods, which are able to outperform traditional rigid and adaptive methods. Using simple rules and no direct communication, traffic lights are able to self-organize and adapt to changing traffic conditions, reducing waiting times and the number of stopped cars, and increasing average speeds.
nlin/0505043
A network analysis of committees in the United States House of Representatives
nlin.AO cs.MA math.ST physics.data-an physics.soc-ph stat.TH
Network theory provides a powerful tool for the representation and analysis of complex systems of interacting agents. Here we investigate the United States House of Representatives network of committees and subcommittees, with committees connected according to ``interlocks'' or common membership. Analysis of this network reveals clearly the strong links between different committees, as well as the intrinsic hierarchical structure within the House as a whole. We show that network theory, combined with the analysis of roll call votes using singular value decomposition, successfully uncovers political and organizational correlations between committees in the House without the need to incorporate other political information.
nlin/0506061
Transmitting a signal by amplitude modulation in a chaotic network
nlin.CD cond-mat.stat-mech cs.NE
We discuss the ability of a network with nonlinear relays and chaotic dynamics to transmit signals, on the basis of a linear response theory developed by Ruelle \cite{Ruelle} for dissipative systems. We show in particular how the dynamics interfere with the graph topology to produce an effective transmission network, whose topology depends on the signal, and cannot be directly read off the ``wired'' network. This leads one to reconsider notions such as ``hubs''. Then, we show examples where, with a suitable choice of the carrier frequency (resonance), one can transmit a signal from one node to another by amplitude modulation, \textit{in spite of chaos}. Also, we give an example where a signal, transmitted to any node via different paths, can only be recovered by a couple of \textit{specific} nodes. This opens the possibility for encoding data such that the recovery of the signal requires knowledge of the carrier frequency \textit{and} can be performed only at some specific node.
nlin/0508006
Metamimetic Games : Modeling Metadynamics in Social Cognition
nlin.AO cs.MA nlin.CG
Imitation is fundamental to understanding social system dynamics. But the diversity of imitation rules employed by modelers shows that the modeling of mimetic processes cannot avoid the traditional problem of endogenizing all the choices, including that of the mimetic rules themselves. Starting from the remark that human reflexive capacities are the ground for a new class of mimetic rules, I propose a formal framework, metamimetic games, that enables us to endogenize the distribution of imitation rules while remaining human-specific. The corresponding concepts of equilibrium - the counterfactually stable state - and attractor are introduced. Finally, I give an interpretation of social differentiation in terms of cultural co-evolution among a set of possible motivations, which departs from the traditional view of optimization indexed to criteria that exist prior to the activity of agents.
nlin/0509007
Lattices for Dynamic, Hierarchic & Overlapping Categorization: the Case of Epistemic Communities
nlin.AO cs.AI cs.DL cs.IR
We present a method for hierarchic categorization and taxonomy evolution description. We focus on the structure of epistemic communities (ECs), or groups of agents sharing common knowledge concerns. Introducing a formal framework based on Galois lattices, we categorize ECs in an automated and hierarchically structured way and propose criteria for selecting the most relevant epistemic communities - for instance, ECs gathering a certain proportion of agents and thus prototypical of major fields. This process produces a manageable, insightful taxonomy of the community. Then, the longitudinal study of these static pictures makes possible an historical description. In particular, we capture stylized facts such as field progress, decline, specialization, interaction (merging or splitting), and paradigm emergence. The detection of such patterns in social networks could fruitfully be applied to other contexts.
nlin/0511015
Combinatorial Approach to Object Analysis
nlin.AO cs.LG
We present a perceptual mathematical model for image and signal analysis. A resemblance measure is defined and submitted to an innovative combinatorial optimization algorithm. Numerical simulations are also presented.
nlin/0512048
Modeling Endogenous Social Networks: the Example of Emergence and Stability of Cooperation without Refusal
nlin.AO cond-mat.other cs.GT cs.MA cs.OH q-bio.OT q-bio.PE
Aggregated phenomena in social sciences and economics are highly dependent on the way individuals interact. To help understand the interplay between socio-economic activities and the underlying social networks, this paper studies a sequential prisoner's dilemma with binary choice. It offers analytical and computational insight into the role of endogenous networks in the emergence and sustainability of cooperation, and exhibits an alternative to the choice-and-refusal mechanism often proposed to explain cooperation. The study focuses on heterogeneous equilibria and the emergence of cooperation from an all-defector state, the two stylized facts that this model successfully reconstructs.
nlin/0605029
Three Logistic Models for the Ecological and Economic Interactions: Symbiosis, Predator-Prey and Competition
nlin.AO cs.MA math.DS
If one isolated species (corporation) is supposed to evolve following the logistic map, then we are tempted to think that the dynamics of two species (corporations) can be expressed by a coupled system of two discrete logistic equations. As three basic relationships between two species are present in Nature, namely symbiosis, predator-prey and competition, three different models are obtained. Each model is a cubic two-dimensional discrete logistic-type equation with its own dynamical properties: stationary regime, periodicity, quasi-periodicity and chaos. We also propose that these models could be useful for thinking about the different interactions in the economic world, for instance the competition and the collaboration between corporations. Furthermore, these models could be considered as basic ingredients for constructing more complex interactions in ecological and economic networks.
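The three interaction types can be sketched as two logistic maps whose growth rates are modulated by the partner species. The modulation factor (1 + s*z) below is an illustrative choice that yields cubic two-dimensional logistic-type equations of the kind the abstract describes; the paper's exact coupling functions are not reproduced here:

```python
def step(x, y, lam, sx, sy):
    """One iteration of two coupled logistic-type maps:
        x' = lam * (1 + sx*y) * x * (1 - x)
        y' = lam * (1 + sy*x) * y * (1 - y)
    sx = sy = +1 : symbiosis (each species boosts the other)
    sx = sy = -1 : competition (each species suppresses the other)
    sx = +1, sy = -1 : predator-prey (asymmetric effect)
    lam must be kept small enough that states stay in [0, 1]
    (lam <= 2 suffices for this modulation).
    """
    return (lam * (1.0 + sx * y) * x * (1.0 - x),
            lam * (1.0 + sy * x) * y * (1.0 - y))
```

Iterating `step` and varying `lam` over its admissible range is the standard way to explore the stationary, periodic, quasi-periodic and chaotic regimes mentioned in the abstract.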
nlin/0609033
Fame Emerges as a Result of Small Memory
nlin.AO cs.CY cs.MA physics.soc-ph
A dynamic memory model is proposed in which an agent ``learns'' a new agent by means of recommendation. Agents can also ``remember'' and ``forget''. The memory size is decreased while the population size is kept constant. ``Fame'' emerges as a few agents become very well known at the expense of the majority being completely forgotten. The minimum and the maximum of fame change linearly with the relative memory size. The network properties of the who-knows-who graph, which represents the state of the system, are investigated.
nlin/0609038
From Neuron to Neural Networks dynamics
nlin.AO cond-mat.dis-nn cs.NE
This paper presents an overview of some techniques and concepts from dynamical systems theory used for the analysis of dynamical neural network models. In a first section, we describe the dynamics of the neuron, starting from the Hodgkin-Huxley description, which is somehow the canonical description of the ``biological neuron''. We discuss some models reducing the Hodgkin-Huxley model to a two-dimensional dynamical system while keeping one of the main features of the neuron: its excitability. We then present examples of phase diagrams and bifurcation analysis for the Hodgkin-Huxley equations. Finally, we end this section with a dynamical-systems analysis of nervous-flux propagation along the axon. In a second section, we consider neuron couplings, with a brief description of synapses, synaptic plasticity and learning. We also briefly discuss the delicate issue of causal action from one neuron to another when complex feedback effects and nonlinear dynamics are involved. The third section presents the limit of weak coupling and the use of normal-form techniques to handle this situation. We then consider several examples of recurrent models with different types of synaptic interactions (symmetric, cooperative, random), introducing various techniques from statistical physics and dynamical systems theory. A last section is devoted to a detailed example of a recurrent model, in which we go deep into the analysis of the dynamics and discuss the effect of learning on the neuron dynamics. We also present recent methods for analyzing the nonlinear effects of the neural dynamics on signal propagation and causal action. An appendix presenting the main notions of dynamical systems theory useful for the comprehension of the chapter has been added for the convenience of the reader.
nlin/0610040
Self-organizing traffic lights: A realistic simulation
nlin.AO cond-mat.stat-mech cs.AI physics.comp-ph physics.soc-ph
We have previously shown in an abstract simulation (Gershenson, 2005) that self-organizing traffic lights can greatly improve traffic flow at any density. In this paper, we extend these results to a realistic setting, implementing self-organizing traffic lights in an advanced traffic simulator using real data from a Brussels avenue. On average, for different traffic densities, travel waiting times are reduced by 50% compared to the current green wave method.
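The self-organizing rule behind such methods can be sketched as a demand counter: each light accumulates the cars waiting at its red direction and switches once the counter passes a threshold, subject to a minimum green time. This is a sketch in the spirit of the self-organizing methods of Gershenson (2005), not their exact specification; the threshold, minimum green time and two-direction layout are illustrative assumptions:

```python
class SelfOrgLight:
    """Demand-driven traffic light: switches when accumulated
    waiting-car demand on the red direction passes a threshold,
    but never before a minimum green time has elapsed."""

    def __init__(self, threshold=10, min_green=5):
        self.threshold = threshold
        self.min_green = min_green
        self.counter = 0          # accumulated demand on the red direction
        self.time_in_phase = 0
        self.green_ns = True      # True: north-south green, False: east-west

    def tick(self, cars_waiting_on_red):
        """Advance one time step; return True if the light switched."""
        self.time_in_phase += 1
        self.counter += cars_waiting_on_red
        if self.counter >= self.threshold and self.time_in_phase >= self.min_green:
            self.green_ns = not self.green_ns
            self.counter = 0
            self.time_in_phase = 0
            return True
        return False
```

Because each light responds only to local demand, platoons of cars effectively coordinate neighbouring lights without any direct communication between them.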
nlin/0611044
Why the Maxwellian Distribution is the Attractive Fixed Point of the Boltzmann Equation
nlin.CD cond-mat.stat-mech cs.MA math.ST physics.class-ph stat.TH
The origin of the Boltzmann factor is revisited. An alternative derivation from the microcanonical picture is given. The Maxwellian distribution in a one-dimensional ideal gas is obtained by following this insight. Other possible applications, such as obtaining the wealth distribution in human society, are suggested in the remarks.
nlin/0611054
A Model of a Trust-based Recommendation System on a Social Network
nlin.AO cs.IR physics.soc-ph
In this paper, we present a model of a trust-based recommendation system on a social network. The idea of the model is that agents use their social network to reach information and their trust relationships to filter it. We investigate how the dynamics of trust among agents affect the performance of the system by comparing it to a frequency-based recommendation system. Furthermore, we identify network density, preference heterogeneity among agents, and knowledge sparseness as crucial factors for the performance of the system. The system self-organises into a state with near-optimal performance; the performance at the global level is an emergent property of the system, achieved without explicit coordination from the local interactions of agents.
nlin/0702001
Bistability: a common feature in some "aggregates" of logistic maps
nlin.AO cs.NE
As argued by Anderson [Science 177, 393 (1972)], the "reductionist" hypothesis does not by any means imply a "constructionist" one. Hence, in general, the behavior of large and complex aggregates of elementary components cannot be understood or extrapolated from the properties of a few components. Following this insight, we have simulated different "aggregates" of logistic maps according to a particular coupling scheme. All these aggregates show a similar pattern of dynamical properties, namely a bistable behavior, that is also found in a network of many units of the same type, independently of the number of components and of the interconnection topology. A qualitative relationship with brain-like systems is suggested.
nlin/0703036
Statistical User Model for the Internet Access
nlin.AO cond-mat.stat-mech cs.MA cs.NI
A new statistically based modeling approach to characterize a user's behavior on an Internet access link is presented. The real patterns of Internet traffic in a heterogeneous campus network are studied. We find three clearly different patterns of individual user behavior, study their common features, and group users who behave alike into three clusters. This allows us to build a probabilistic mixture model that can explain the expected global behavior of the three different types of users. We discuss the implications of this emergent phenomenology for the field of multi-agent complex systems.
nlin/0703050
Competition of Self-Organized Rotating Spiral Autowaves in a Nonequilibrium Dissipative System of Three-Level Phaser
nlin.CG cs.NE nlin.AO
We present results of cellular-automata-based investigations of rotating spiral autowaves in a nonequilibrium excitable medium which models a three-level paramagnetic microwave phonon laser (phaser). The computational model is described in arXiv:cond-mat/0410460v2 and arXiv:cond-mat/0602345v1. We have observed several new scenarios of self-organization, competition and dynamical stabilization of rotating spiral autowaves under conditions of cross-relaxation between three-level active centers. In particular, phenomena of inversion of topological charge, as well as processes of regeneration and replication of rotating spiral autowaves in various excitable media, were revealed and visualized for mesoscopic-scale areas of phaser-type active systems, which model real phaser devices.