Abstract: The objective of this paper is to develop statistical methodology for planning and evaluating three-armed non-inferiority trials for general retention of effect hypotheses, where the endpoint of interest may follow any (regular) parametric distribution family. This generalizes and unifies specific results for binary, normally, and exponentially distributed endpoints. We propose a Wald-type test procedure for the retention of effect hypothesis (RET), which assures that the test treatment maintains at least a proportion $\Delta$ of the reference treatment effect compared to placebo. Here, we distinguish the cases where the variance of the test statistic is estimated unrestrictedly and restrictedly under the null hypothesis, to improve the accuracy of the nominal level. We present a generally valid sample size allocation rule to achieve optimal power, and sample size formulas which significantly improve existing ones. Moreover, we propose a generally applicable rule of thumb for sample allocation and give conditions under which this rule is theoretically justified. The presented methodologies are discussed in detail for binary and for Poisson distributed endpoints by means of two clinical trials in the treatment of depression and of epilepsy, respectively. $R$-software for implementation of the proposed tests and for sample size planning accompanies this paper.
Title: Test Martingales, Bayes Factors and $p$-Values
Abstract: A nonnegative martingale with initial value equal to one measures evidence against a probabilistic hypothesis. The inverse of its value at some stopping time can be interpreted as a Bayes factor. If we exaggerate the evidence by considering the largest value attained so far by such a martingale, the exaggeration will be limited, and there are systematic ways to eliminate it. The inverse of the exaggerated value at some stopping time can be interpreted as a $p$-value. We give a simple characterization of all increasing functions that eliminate the exaggeration.
Title: Inferring Multiple Graphical Structures
Abstract: Gaussian Graphical Models provide a convenient framework for representing dependencies between variables. Recently, this tool has received considerable interest for the discovery of biological networks. The literature focuses on the case where a single network is inferred from a set of measurements, but, as wetlab data is typically scarce, several assays, where the experimental conditions affect interactions, are usually merged to infer a single network. In this paper, we propose two approaches for estimating multiple related graphs, by encoding the closeness assumption as an empirical prior or as group penalties. We provide quantitative results demonstrating the benefits of the proposed approaches. The methods presented in this paper are implemented in the R package 'simone' (version 1.0-0 and later).
Title: Learning to Predict Combinatorial Structures
Abstract: The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.
Title: Consensus Dynamics in a non-deterministic Naming Game with Shared Memory
Abstract: In the naming game, individuals or agents exchange pairwise local information in order to communicate about objects in their common environment. The goal of the game is to reach a consensus about naming these objects. The naming game was originally used to investigate language formation and self-organizing vocabularies; we extend the classical naming game with a globally shared memory accessible by all agents. This shared memory can be interpreted as an external source of knowledge like a book or an Internet site. The extended naming game models an environment similar to one that can be found in the context of social bookmarking and collaborative tagging sites, where users tag sites using appropriate labels, but it also mimics an important aspect in the field of human-based image labeling. Although the extended naming game is non-deterministic in its word selection, we show that consensus towards a common vocabulary is reached. More importantly, we show the qualitative and quantitative influence of the external source of information, i.e. the shared memory, on the consensus dynamics between the agents.
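The baseline dynamics that this abstract extends can be made concrete. The sketch below simulates the classical minimal naming game (without the paper's shared-memory extension, which is not specified here): a speaker utters a word from its inventory, inventing one if needed; on success both parties collapse their inventories to that word, on failure the hearer records it. All names and the convergence check are illustrative assumptions, not the paper's implementation.

```python
import random

def minimal_naming_game(n_agents, seed=0, max_rounds=100000):
    """Classical minimal naming game (baseline, no shared memory).
    Returns the number of pairwise interactions until global consensus."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(1, max_rounds + 1):
        s, h = rng.sample(range(n_agents), 2)  # speaker, hearer
        if not inventories[s]:
            inventories[s].add(next_word)      # speaker invents a new word
            next_word += 1
        word = rng.choice(sorted(inventories[s]))
        if word in inventories[h]:
            inventories[s] = {word}            # success: both collapse to it
            inventories[h] = {word}
        else:
            inventories[h].add(word)           # failure: hearer learns it
        if all(inv == inventories[0] and len(inv) == 1 for inv in inventories):
            return step                        # everyone agrees on one word
    return max_rounds

steps = minimal_naming_game(5)
```

A shared memory in the paper's sense could then be modeled as an extra read-only inventory consulted during word selection.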
Title: Lambert W random variables - a new family of generalized skewed distributions with applications to risk estimation
Abstract: Originating from system theory and an input/output point of view, I introduce a new class of generalized distributions. A parametric nonlinear transformation converts a random variable $X$ into a so-called Lambert $W$ random variable $Y$, which allows a very flexible approach to model skewed data. Its shape depends on the shape of $X$ and a skewness parameter $\gamma$. In particular, for symmetric $X$ and nonzero $\gamma$ the output $Y$ is skewed. Its distribution and density function are particular variants of their input counterparts. Maximum likelihood and method of moments estimators are presented, and simulations show that in the symmetric case additional estimation of $\gamma$ does not affect the quality of other parameter estimates. Applications in finance and biomedicine show the relevance of this class of distributions, which is particularly useful for slightly skewed data. A practical by-product of the Lambert $W$ framework: data can be "unskewed." The $R$ package LambertW developed by the author is publicly available on CRAN (http://cran.r-project.org/web/packages/LambertW).
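A minimal sketch of the skewing transformation, assuming it takes the location-scale form $Y = U e^{\gamma U} \sigma + \mu$ with $U = (X-\mu)/\sigma$ (the function name and defaults are illustrative, not the LambertW package's API):

```python
import math

def lambert_w_skew(x, gamma, mu=0.0, sigma=1.0):
    """Skew an input value: normalize, apply u -> u * exp(gamma * u),
    then restore location and scale. gamma = 0 leaves x unchanged."""
    u = (x - mu) / sigma
    return u * math.exp(gamma * u) * sigma + mu

# A symmetric sample becomes right-skewed for gamma > 0, since positive
# values are stretched more than negative values are compressed.
symmetric = [-2.0, -1.0, 0.0, 1.0, 2.0]
skewed = [lambert_w_skew(v, gamma=0.1) for v in symmetric]
```

"Unskewing" data then amounts to inverting $u \mapsto u e^{\gamma u}$, which is where the Lambert $W$ function enters.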
Title: Fast Alternating Linearization Methods for Minimizing the Sum of Two Convex Functions
Abstract: We present in this paper first-order alternating linearization algorithms based on an alternating direction augmented Lagrangian approach for minimizing the sum of two convex functions. Our basic methods require at most $O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our accelerated (i.e., fast) versions of them require at most $O(1/\sqrt{\epsilon})$ iterations, with little change in the computational effort required at each iteration. For both types of methods, we present one algorithm that requires both functions to be smooth with Lipschitz continuous gradients and one algorithm that needs only one of the functions to be so. Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones proposed by Goldfarb and Ma in [21] where the algorithms are Jacobi type methods. Numerical results are reported to support our theoretical conclusions and demonstrate the practical potential of our algorithms.
Title: A Necessary and Sufficient Condition for Graph Matching Being Equivalent to the Maximum Weight Clique Problem
Abstract: This paper formulates a necessary and sufficient condition for a generic graph matching problem to be equivalent to the maximum vertex and edge weight clique problem in a derived association graph. The consequences of this result are threefold: first, the condition is general enough to cover a broad range of practical graph matching problems; second, a proof to establish equivalence between graph matching and clique search reduces to showing that a given graph matching problem satisfies the proposed condition; and third, the result sets the scene for generic continuous solutions for a broad range of graph matching problems. To illustrate the mathematical framework, we apply it to a number of graph matching problems, including the problem of determining the graph edit distance.
Title: Elkan's k-Means for Graphs
Abstract: This paper extends k-means algorithms from the Euclidean domain to the domain of graphs. To recompute the centroids, we apply subgradient methods for solving the optimization-based formulation of the sample mean of graphs. To accelerate the k-means algorithm for graphs without trading computational time against solution quality, we avoid unnecessary graph distance calculations by exploiting the triangle inequality of the underlying distance metric, following Elkan's k-means algorithm. In experiments we show that the accelerated k-means algorithm is faster than the standard k-means algorithm for graphs, provided there is a cluster structure in the data.
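The core pruning idea referenced above can be sketched generically: by the triangle inequality, if $d(c, c') \ge 2\,d(x, c)$ for the current best centroid $c$, then $d(x, c') \ge d(x, c)$ and the distance to $c'$ never needs to be computed. A minimal assignment step under that bound (the function and its signature are illustrative, not the paper's code; the same logic applies whether `dist` is a graph metric or Euclidean):

```python
def assign_with_pruning(points, centroids, dist):
    """Assign each point to its nearest centroid, skipping distance
    evaluations ruled out by Elkan's triangle-inequality bound:
    if d(c, c') >= 2 * d(x, c), then d(x, c') >= d(x, c)."""
    # Pairwise centroid distances are computed once per assignment pass.
    cc = [[dist(c, c2) for c2 in centroids] for c in centroids]
    assignment = []
    for x in points:
        best, best_d = 0, dist(x, centroids[0])
        for j in range(1, len(centroids)):
            if cc[best][j] >= 2 * best_d:
                continue  # pruned: centroid j cannot be closer
            d = dist(x, centroids[j])
            if d < best_d:
                best, best_d = j, d
        assignment.append(best)
    return assignment

# Toy 1D usage with the absolute-difference metric.
labels = assign_with_pruning([0.0, 10.0, 4.0], [0.0, 9.0],
                             lambda a, b: abs(a - b))
```

The savings matter most when `dist` is expensive, as with graph edit distances, since each pruned comparison avoids one full metric evaluation.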
Title: The use of ideas of Information Theory for studying "language" and intelligence in ants
Abstract: In this review we integrate results of a long-term experimental study on ant "language" and intelligence which were fully based on fundamental ideas of Information Theory, such as the Shannon entropy, the Kolmogorov complexity, and Shannon's equation connecting the length of a message ($l$) with its frequency ($p$), i.e. $l = -\log p$, for rational communication systems. This approach, new for studying biological communication systems, enabled us to obtain the following important results on ants' communication and intelligence: i) to reveal "distant homing" in ants, that is, their ability to transfer information about remote events; ii) to estimate the rate of information transmission; iii) to reveal that ants are able to grasp regularities and to use them for "compression" of information; iv) to reveal that ants are able to transfer to each other information about the number of objects; v) to discover that ants can add and subtract small numbers. The obtained results show that Information Theory is not only a wonderful mathematical theory, but that many of its results may be considered laws of Nature.
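Shannon's equation cited above assigns each message an optimal length $l = -\log p$: frequent messages get short codes, rare ones long codes. A small numeric sketch (base-2 logarithms, so lengths are in bits; the message names are invented for illustration):

```python
import math

def optimal_code_lengths(freqs):
    """Optimal code length l = -log2(p) for each message, where p is the
    message's relative frequency among the observed counts."""
    total = sum(freqs.values())
    return {msg: -math.log2(count / total) for msg, count in freqs.items()}

# A rare message ("danger", seen once in 17) warrants a much longer
# code than the two common ones.
lengths = optimal_code_lengths({"food left": 8, "food right": 8, "danger": 1})
```

This is the yardstick behind result iii): if ants "compress" regular routes, the observed transmission time should track these ideal lengths rather than raw message size.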
Title: Likelihood-free Bayesian inference for alpha-stable models
Abstract: $\alpha$-stable distributions are utilised as models for heavy-tailed noise in many areas of statistics, finance and signal processing engineering. However, in general, neither univariate nor multivariate $\alpha$-stable models admit closed form densities which can be evaluated pointwise. This complicates the inferential procedure. As a result, $\alpha$-stable models are practically limited to the univariate setting under the Bayesian paradigm, and to bivariate models under the classical framework. In this article we develop a novel Bayesian approach to modelling univariate and multivariate $\alpha$-stable distributions based on recent advances in "likelihood-free" inference. We present an evaluation of the performance of this procedure in 1, 2 and 3 dimensions, and provide an analysis of real daily currency exchange rate data. The proposed approach provides a feasible inferential methodology at a moderate computational cost.
Title: Similarity in intension vs. in extension: at the crossroads of computer science and theater
Abstract: Traditional staging is based on a formal approach to similarity, relying on dramaturgical ontologies and instantiation variations. Inspired by interactive data mining, which suggests different approaches, we give an overview of computer science and theater research using computers as partners of the actor, to escape the a priori specification of roles.
Title: On Finding Predictors for Arbitrary Families of Processes
Abstract: The problem is sequence prediction in the following setting. A sequence $x_1,...,x_n,...$ of discrete-valued observations is generated according to some unknown probabilistic law (measure) $\mu$. After observing each outcome, it is required to give the conditional probabilities of the next observation. The measure $\mu$ belongs to an arbitrary but known class $C$ of stochastic process measures. We are interested in predictors $\rho$ whose conditional probabilities converge (in some sense) to the "true" $\mu$-conditional probabilities if any $\mu\in C$ is chosen to generate the sequence. The contribution of this work is in characterizing the families $C$ for which such predictors exist, and in providing a specific and simple form in which to look for a solution. We show that if any predictor works, then there exists a Bayesian predictor, whose prior is discrete, and which works too. We also find several sufficient and necessary conditions for the existence of a predictor, in terms of topological characterizations of the family $C$, as well as in terms of local behaviour of the measures in $C$, which in some cases lead to procedures for constructing such predictors. It should be emphasized that the framework is completely general: the stochastic processes considered are not required to be i.i.d., stationary, or to belong to any parametric or countable family.
Title: An Invariance Principle for Polytopes
Abstract: Let $X$ be randomly chosen from $\{-1,1\}^n$, and let $Y$ be randomly chosen from the standard spherical Gaussian on $\mathbb{R}^n$. For any (possibly unbounded) polytope $P$ formed by the intersection of $k$ halfspaces, we prove that $|\Pr[X \in P] - \Pr[Y \in P]| \leq \log^{8/5} k \cdot \Delta$, where $\Delta$ is a parameter that is small for polytopes formed by the intersection of "regular" halfspaces (i.e., halfspaces with low influence). The novelty of our invariance principle is the polylogarithmic dependence on $k$. Previously, only bounds that were at least linear in $k$ were known. We give two important applications of our main result: (1) A polylogarithmic in $k$ bound on the Boolean noise sensitivity of intersections of $k$ "regular" halfspaces (previous work gave bounds linear in $k$). (2) A pseudorandom generator (PRG) with seed length $O(\log n \cdot \mathrm{poly}(\log k, 1/\delta))$ that $\delta$-fools all polytopes with $k$ faces with respect to the Gaussian distribution. We also obtain PRGs with similar parameters that fool polytopes formed by intersections of regular halfspaces over the hypercube. Using our PRG constructions, we obtain the first deterministic quasi-polynomial time algorithms for approximately counting the number of solutions to a broad class of integer programs, including dense covering problems and contingency tables.
Title: Nonparametric Bayesian Density Modeling with Gaussian Processes
Abstract: We present the Gaussian process density sampler (GPDS), an exchangeable generative model for use in nonparametric Bayesian density estimation. Samples drawn from the GPDS are consistent with exact, independent samples from a distribution defined by a density that is a transformation of a function drawn from a Gaussian process prior. Our formulation allows us to infer an unknown density from data using Markov chain Monte Carlo, which gives samples from the posterior distribution over density functions and from the predictive distribution on data space. We describe two such MCMC methods. Both methods also allow inference of the hyperparameters of the Gaussian process.
Title: Genus Computing for 3D digital objects: algorithm and implementation
Abstract: This paper deals with computing topological invariants such as connected components, boundary surface genus, and homology groups. For each input data set, we have designed or implemented algorithms to calculate connected components, boundary surfaces and their genus, and homology groups. Because genus calculation dominates the entire task for 3D objects in 3D space, in this paper we mainly discuss the calculation of the genus. The new algorithms designed in this paper perform: (1) pathological case detection and deletion, (2) raster space to point space (dual space) transformation, (3) the linear time algorithm for boundary point classification, and (4) genus calculation.
Title: Inference for Extremal Conditional Quantile Models, with an Application to Market and Birthweight Risks
Abstract: Quantile regression is an increasingly important empirical tool in economics and other sciences for analyzing the impact of a set of regressors on the conditional distribution of an outcome. Extremal quantile regression, or quantile regression applied to the tails, is of interest in many economic and financial applications, such as conditional value-at-risk, production efficiency, and adjustment bands in (S,s) models. In this paper we provide feasible inference tools for extremal conditional quantile models that rely upon extreme value approximations to the distribution of self-normalized quantile regression statistics. The methods are simple to implement and can be of independent interest even in the non-regression case. We illustrate the results with two empirical examples analyzing extreme fluctuations of a stock return and extremely low percentiles of live infants' birthweights in the range between 250 and 1500 grams.
Title: Complexity of stochastic branch and bound methods for belief tree search in Bayesian reinforcement learning
Abstract: There has been a lot of recent work on Bayesian methods for reinforcement learning exhibiting near-optimal online performance. The main obstacle facing such methods is that in most problems of interest, the optimal solution involves planning in an infinitely large tree. However, it is possible to obtain stochastic lower and upper bounds on the value of each tree node. This enables us to use stochastic branch and bound algorithms to search the tree efficiently. This paper proposes two such algorithms and examines their complexity in this setting.
Title: A Rational Decision Maker with Ordinal Utility under Uncertainty: Optimism and Pessimism
Abstract: In game theory and artificial intelligence, decision making models often involve maximizing expected utility, which does not respect ordinal invariance. In this paper, the author discusses the possibility of preserving ordinal invariance and still making a rational decision under uncertainty.
Title: Ranking relations using analogies in biological and information networks
Abstract: Analogical reasoning depends fundamentally on the ability to learn and generalize about relations between objects. We develop an approach to relational learning which, given a set of pairs of objects $\mathbf{S} = \{A^{(1)}{:}B^{(1)}, A^{(2)}{:}B^{(2)}, \ldots, A^{(N)}{:}B^{(N)}\}$, measures how well other pairs $A{:}B$ fit in with the set $\mathbf{S}$. Our work addresses the following question: is the relation between objects $A$ and $B$ analogous to those relations found in $\mathbf{S}$? Such questions are particularly relevant in information retrieval, where an investigator might want to search for analogous pairs of objects that match the query set of interest. There are many ways in which objects can be related, making the task of measuring analogies very challenging. Our approach combines a similarity measure on function spaces with Bayesian analysis to produce a ranking. It requires data containing features of the objects of interest and a link matrix specifying which relationships exist; no further attributes of such relationships are necessary. We illustrate the potential of our method on text analysis and information networks. An application on discovering functional interactions between pairs of proteins is discussed in detail, where we show that our approach can work in practice even if a small set of protein pairs is provided.
Title: Penalized Composite Quasi-Likelihood for Ultrahigh-Dimensional Variable Selection
Abstract: In high-dimensional model selection problems, penalized least-squares approaches have been extensively used. This paper addresses the question of both robustness and efficiency of penalized model selection methods, and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted $L_1$-penalty. It is completely data-adaptive and does not require prior knowledge of the error distribution. The weighted $L_1$-penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias caused by the $L_1$-penalty. In the setting with dimensionality much larger than the sample size, we establish a strong oracle property of the proposed method, which possesses both model selection consistency and estimation efficiency for the true non-zero coefficients. As specific examples, we introduce a robust composite $L_1$-$L_2$ method and an optimal composite quantile method, and evaluate their performance in both simulated and real data examples.
Title: Believe It or Not: Adding Belief Annotations to Databases
Abstract: We propose a database model that allows users to annotate data with belief statements. Our motivation comes from scientific database applications where a community of users is working together to assemble, revise, and curate a shared data repository. As the community accumulates knowledge and the database content evolves over time, it may contain conflicting information and members can disagree on the information it should store. For example, Alice may believe that a tuple should be in the database, whereas Bob disagrees. He may also insert the reason why he thinks Alice believes the tuple should be in the database, and explain what he thinks the correct tuple should be instead. We propose a formal model for Belief Databases that interprets users' annotations as belief statements. These annotations can refer both to the base data and to other annotations. We give a formal semantics based on a fragment of multi-agent epistemic logic and define a query language over belief databases. We then prove a key technical result, stating that every belief database can be encoded as a canonical Kripke structure. We use this structure to describe a relational representation of belief databases, and give an algorithm for translating queries over the belief database into standard relational queries. Finally, we report early experimental results with our prototype implementation on synthetic data.
Title: Selection models under generalized symmetry settings
Abstract: An active stream of literature has followed up the idea of skew-elliptical densities initiated by Azzalini and Capitanio (1999). Their original formulation was based on a general lemma which is however of broader applicability than usually perceived. This note examines new directions of its use, and illustrates them with the construction of some probability distributions falling outside the family of the so-called skew-symmetric densities.
Title: Why so? or Why no? Functional Causality for Explaining Query Answers
Abstract: In this paper, we propose causality as a unified framework to explain query answers and non-answers, thus generalizing and extending several previously proposed approaches of provenance and missing query result explanations. We develop our framework starting from the well-studied definition of actual causes by Halpern and Pearl. After identifying some undesirable characteristics of the original definition, we propose functional causes as a refined definition of causality with several desirable properties. These properties allow us to apply our notion of causality in a database context and apply it uniformly to define the causes of query results and their individual contributions in several ways: (i) we can model both provenance as well as non-answers, (ii) we can define explanations as either data in the input relations or relational operations in a query plan, and (iii) we can give graded degrees of responsibility to individual causes, thus allowing us to rank causes. In particular, our approach allows us to explain contributions to relational aggregate functions and to rank causes according to their respective responsibilities. We give complexity results and describe polynomial algorithms for evaluating causality in tractable cases. Throughout the paper, we illustrate the applicability of our framework with several examples. Overall, we develop in this paper the theoretical foundations of causality theory in a database context.
Title: A survey of statistical network models
Abstract: Networks are ubiquitous in science and have become a focal point for discussion in everyday life. Formal statistical models for the analysis of network data have emerged as a major topic of interest in diverse areas of study, and most of these involve a form of graphical representation. Probability models on graphs date back to 1959. Along with empirical studies in social psychology and sociology from the 1960s, these early works generated an active network community and a substantial literature in the 1970s. This effort moved into the statistical literature in the late 1970s and 1980s, and the past decade has seen a burgeoning network literature in statistical physics and computer science. The growth of the World Wide Web and the emergence of online networking communities such as Facebook, MySpace, and LinkedIn, and a host of more specialized professional network communities has intensified interest in the study of networks and network data. Our goal in this review is to provide the reader with an entry point to this burgeoning literature. We begin with an overview of the historical development of statistical network modeling and then we introduce a number of examples that have been studied in the network literature. Our subsequent discussion focuses on a number of prominent static and dynamic network models and their interconnections. We emphasize formal model descriptions, and pay special attention to the interpretation of parameters and their estimation. We end with a description of some open problems and challenges for machine learning and statistics.
Title: Computing Optimal Designs of multiresponse Experiments reduces to Second-Order Cone Programming
Abstract: Elfving's Theorem is a major result in the theory of optimal experimental design, which gives a geometrical characterization of $c$-optimality. In this paper, we extend this theorem to the case of multiresponse experiments, and we show that when the number of experiments is finite, $c$-, $A$-, $T$-, and $D$-optimal designs of multiresponse experiments can be computed by Second-Order Cone Programming (SOCP). Moreover, our SOCP approach can deal with design problems in which the variable is subject to several linear constraints. We give two proofs of this generalization of Elfving's theorem. One is based on Lagrangian dualization techniques and relies on the fact that the semidefinite programming (SDP) formulation of the multiresponse $c$-optimal design always has a solution which is a matrix of rank $1$; therefore, the apparent complexity of this problem vanishes. We also investigate a generalization of $c$-optimality, for which an Elfving-type theorem was established by Dette (1993). We show with the same Lagrangian approach that these model robust designs can be computed efficiently by minimizing a geometric mean under some norm constraints. Moreover, we show that the optimality conditions of this geometric programming problem yield an extension of Dette's theorem to the case of multiresponse experiments. When the number of unknown parameters is small, or when the number of linear functions of the parameters to be estimated is small, we show by numerical examples that our approach can be between 10 and 1000 times faster than the classic, state-of-the-art algorithms.
Title: On multivariate quantiles under partial orders
Abstract: This paper focuses on generalizing quantiles from the ordering point of view. We propose the concept of partial quantiles, which are based on a given partial order. We establish that partial quantiles are equivariant under order-preserving transformations of the data, robust to outliers, characterize the probability distribution if the partial order is sufficiently rich, generalize the concept of efficient frontier, and can measure dispersion from the partial order perspective. We also study several statistical aspects of partial quantiles. We provide estimators, associated rates of convergence, and asymptotic distributions that hold uniformly over a continuum of quantile indices. Furthermore, we provide procedures that can restore monotonicity properties that might have been disturbed by estimation error, establish computational complexity bounds, and point out a concentration of measure phenomenon (the latter under independence and the componentwise natural order). Finally, we illustrate the concepts by discussing several theoretical examples and simulations. Empirical applications to compare intake nutrients within diets, to evaluate the performance of investment funds, and to study the impact of policies on tobacco awareness are also presented to illustrate the concepts and their use.
Title: Writer Identification Using Inexpensive Signal Processing Techniques
Abstract: We propose the use of novel and classical audio and text signal-processing techniques, among others, for "inexpensive" fast writer identification of scanned hand-written documents, performed "visually". "Inexpensive" refers to the efficiency of the identification process in terms of CPU cycles while preserving decent accuracy for preliminary identification. This is a comparative study of multiple algorithm combinations in a pattern recognition pipeline implemented in Java around the open-source Modular Audio Recognition Framework (MARF), which can do far more than audio processing. We present our preliminary experimental findings in such an identification task. We simulate "visual" identification by "looking" at the hand-written document as a whole rather than trying to extract fine-grained features from it prior to classification.
Title: MedLDA: A General Framework of Maximum Margin Supervised Topic Models
Abstract: Supervised topic models utilize documents' side information for discovering predictive low-dimensional representations of documents. Existing models apply likelihood-based estimation. In this paper, we present a general framework of max-margin supervised topic models for both continuous and categorical response variables. Our approach, the maximum entropy discrimination latent Dirichlet allocation (MedLDA), utilizes the max-margin principle to train supervised topic models and estimate predictive topic representations that are arguably more suitable for prediction tasks. The general principle of MedLDA can be applied to perform joint max-margin learning and maximum likelihood estimation for arbitrary topic models, directed or undirected, and supervised or unsupervised, when the supervised side information is available. We develop efficient variational methods for posterior inference and parameter estimation, and demonstrate qualitatively and quantitatively the advantages of MedLDA over likelihood-based topic models on movie review and 20 Newsgroups data sets.
Title: A general approach to belief change in answer set programming
Abstract: We address the problem of belief change in (nonmonotonic) logic programming under answer set semantics. Unlike previous approaches to belief change in logic programming, our formal techniques are analogous to those of distance-based belief revision in propositional logic. In developing our results, we build upon the model theory of logic programs furnished by SE models. Since SE models provide a formal, monotonic characterisation of logic programs, we can adapt techniques from the area of belief revision to belief change in logic programs. We introduce methods for revising and merging logic programs, respectively. For the former, we study both subset-based revision as well as cardinality-based revision, and we show that they satisfy the majority of the AGM postulates for revision. For merging, we consider operators following arbitration merging and IC merging, respectively. We also present encodings for computing the revision as well as the merging of logic programs within the same logic programming framework, giving rise to a direct implementation of our approach in terms of off-the-shelf answer set solvers. These encodings reflect in turn the fact that our change operators do not increase the complexity of the base formalism.
Title: Oriented Straight Line Segment Algebra: Qualitative Spatial Reasoning about Oriented Objects
Abstract: Nearly 15 years ago, a set of qualitative spatial relations between oriented straight line segments (dipoles) was suggested by Schlieder. This work received substantial interest amongst the qualitative spatial reasoning community. However, it turned out to be difficult to establish a sound constraint calculus based on these relations. In this paper, we present the results of a new investigation into dipole constraint calculi which uses algebraic methods to derive sound results on the composition of relations and other properties of dipole calculi. Our results are based on a condensed semantics of the dipole relations. In contrast to the points that are normally used, dipoles are extended and have an intrinsic direction. Both features are important properties of natural objects. This allows for a straightforward representation of prototypical reasoning tasks for spatial agents. As an example, we show how to generate survey knowledge from local observations in a street network. The example illustrates the fast constraint-based reasoning capabilities of the dipole calculus. We integrate our results into two reasoning tools which are publicly available.
Title: The Computational Structure of Spike Trains
Abstract: Neurons perform computations, and convey the results of those computations through the statistical structure of their output spike trains. Here we present a practical method, grounded in the information-theoretic analysis of prediction, for inferring a minimal representation of that structure and for characterizing its complexity. Starting from spike trains, our approach finds their causal state models (CSMs), the minimal hidden Markov models or stochastic automata capable of generating statistically identical time series. We then use these CSMs to objectively quantify both the generalizable structure and the idiosyncratic randomness of the spike train. Specifically, we show that the expected algorithmic information content (the information needed to describe the spike train exactly) can be split into three parts describing (1) the time-invariant structure (complexity) of the minimal spike-generating process, which describes the spike train statistically; (2) the randomness (internal entropy rate) of the minimal spike-generating process; and (3) a residual pure noise term not described by the minimal spike-generating process. We use CSMs to approximate each of these quantities. The CSMs are inferred nonparametrically from the data, making only mild regularity assumptions, via the causal state splitting reconstruction algorithm. The methods presented here complement more traditional spike train analyses by describing not only spiking probability and spike train entropy, but also the complexity of a spike train's structure. We demonstrate our approach using both simulated spike trains and experimental data recorded in rat barrel cortex during vibrissa stimulation.
Title: Cryptographic Implications for Artificially Mediated Games
Abstract: There is currently an intersection in the research of game theory and cryptography. Generally speaking, there are two aspects to this partnership. The first is the application of game theory to cryptography. The purpose of this paper, however, is to focus on the second aspect, the converse of the first: the application of cryptography to game theory. Chiefly, there exists a branch of non-cooperative games whose solution is a correlated equilibrium. These equilibria tend to be superior to the conventional Nash equilibria. The primary condition for a correlated equilibrium is the presence of a mediator within the game, simply a neutral and mutually trusted entity. It is the role of the mediator to make recommendations in terms of strategy profiles to all players, who then act (supposedly) on this advice. Each party privately provides the mediator with the necessary information, and the mediator responds privately with their optimized strategy set. However, there are many situations in which no mediator could exist, and games modeling such cases cannot use these entities as tools for analysis. Yet, if these equilibria are in the best interest of players, it would be rational to construct a machine, or protocol, to compute them. Of course, this machine would need to satisfy some standard for secure transmission between a player and itself: no third party should be able to detect either the input or the recommended strategy profile. Here lies the synthesis of cryptography into game theory: analyzing the ability of the players to construct a protocol that can successfully take the place of a mediator.
Title: On a Model for Integrated Information
Abstract: In this paper we give a thorough presentation of a model proposed by Tononi et al. for modeling integrated information, i.e. how much information is generated in a system transitioning from one state to the next by the causal interaction of its parts, beyond the information given by the sum of its parts. We also provide a more general formulation of such a model, independent of the time chosen for the analysis and of the uniformity of the probability distribution at the initial time instant. Finally, we prove that integrated information is zero for disconnected systems.