id | title | categories | abstract
|---|---|---|---|
physics/0001048
|
High-resolution path-integral development of financial options
|
physics.comp-ph cs.CE physics.data-an q-fin.PR
|
The Black-Scholes theory of option pricing has been considered for many years
as an important but very approximate zeroth-order description of actual market
behavior. We generalize the functional form of the diffusion of these systems
and also consider multi-factor models including stochastic volatility. Daily
Eurodollar futures prices and implied volatilities are fit to determine
exponents of functional behavior of diffusions using methods of global
optimization, Adaptive Simulated Annealing (ASA), to generate tight fits across
moving time windows of Eurodollar contracts. These short-time fitted
distributions are then developed into long-time distributions using a robust
non-Monte Carlo path-integral algorithm, PATHINT, to generate prices and
derivatives commonly used by option traders.
|
physics/0002054
|
Evolution of differentiated expression patterns in digital organisms
|
physics.bio-ph cs.NE q-bio.PE
|
We investigate the evolutionary processes behind the development and
optimization of multiple threads of execution in digital organisms using the
avida platform, a software package that implements Darwinian evolution on
populations of self-replicating computer programs. The system is seeded with a
linearly executed ancestor capable only of reproducing its own genome, whereas
its underlying language has the capacity for multiple threads of execution
(i.e., simultaneous expression of sections of the genome). We witness the
evolution to multi-threaded organisms and track the development of distinct
expression patterns. Additionally, we examine both the evolvability of
multi-threaded organisms and the level of thread differentiation as a function
of environmental complexity, and find that differentiation is more pronounced
in complex environments.
|
physics/0004057
|
The information bottleneck method
|
physics.data-an cond-mat.dis-nn cs.LG nlin.AO
|
We define the relevant information in a signal $x\in X$ as being the
information that this signal provides about another signal $y\in Y$. Examples
include the information that face images provide about the names of the people
portrayed, or the information that speech sounds provide about the words
spoken. Understanding the signal $x$ requires more than just predicting $y$; it
also requires specifying which features of $X$ play a role in the prediction.
We formalize this problem as that of finding a short code for $X$ that
preserves the maximum information about $Y$. That is, we squeeze the
information that $X$ provides about $Y$ through a `bottleneck' formed by a
limited set of codewords $\tilde{X}$. This constrained optimization problem
can be seen as a generalization of rate distortion theory in which the
distortion measure $d(x,\tilde{x})$ emerges from the joint statistics of $X$
and $Y$. This approach yields an exact set of self-consistent equations for
the coding rules $X \to \tilde{X}$ and $\tilde{X} \to Y$. Solutions to these
equations can be found by a
convergent re-estimation method that generalizes the Blahut-Arimoto algorithm.
Our variational principle provides a surprisingly rich framework for discussing
a variety of problems in signal processing and learning, as will be described
in detail elsewhere.
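The self-consistent equations can be iterated directly on a discrete joint distribution. The following is a minimal sketch (our illustration, not the authors' code) of the Blahut-Arimoto-style re-estimation on a toy joint p(x,y); the inverse temperature beta, the cluster count, and the symmetry-breaking initialization are arbitrary choices for the example:

```python
import numpy as np

def information_bottleneck(p_xy, q_init, beta=5.0, iters=200):
    """Self-consistent iteration for the IB equations (Blahut-Arimoto style)."""
    p_x = p_xy.sum(axis=1)                       # marginal p(x)
    p_y_given_x = p_xy / p_x[:, None]            # conditional p(y|x)
    q = q_init.copy()                            # soft assignments p(t|x)
    for _ in range(iters):
        p_t = q.T @ p_x                          # p(t) = sum_x p(x) p(t|x)
        p_y_given_t = (q * p_x[:, None]).T @ p_y_given_x / p_t[:, None]
        # KL divergence D[p(y|x) || p(y|t)] for every (x, t) pair
        log_ratio = (np.log(p_y_given_x[:, None, :] + 1e-12)
                     - np.log(p_y_given_t[None, :, :] + 1e-12))
        dkl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)
        q = p_t[None, :] * np.exp(-beta * dkl)   # p(t|x) proportional to p(t) e^{-beta KL}
        q /= q.sum(axis=1, keepdims=True)
    return q

# Toy joint: x in {0,1} mostly predicts y=0, x in {2,3} mostly predicts y=1.
p_xy = np.array([[0.20, 0.05], [0.20, 0.05], [0.05, 0.20], [0.05, 0.20]])
q0 = np.array([[0.6, 0.4], [0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])  # tiny symmetry break
q = information_bottleneck(p_xy, q0)
```

At large beta the assignments harden, and the two groups of $x$ values land in different codewords.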
|
physics/0005062
|
Applying MDL to Learning Best Model Granularity
|
physics.data-an cs.AI cs.CV
|
The Minimum Description Length (MDL) principle is solidly based on a provably
ideal method of inference using Kolmogorov complexity. We test how the theory
behaves in practice on a general problem in model selection: that of learning
the best model granularity. The performance of a model depends critically on
the granularity, for example the choice of precision of the parameters. Too
high a precision generally means modeling accidental noise, while too low a
precision may conflate models that should be distinguished. This
precision is often determined ad hoc. In MDL the best model is the one that
most compresses a two-part code of the data set: this embodies ``Occam's
Razor.'' In two quite different experimental settings the theoretical value
determined using MDL coincides with the best value found experimentally. In the
first experiment the task is to recognize isolated handwritten characters in
one subject's handwriting, irrespective of size and orientation. Based on a new
modification of elastic matching, using multiple prototypes per character, the
optimal prediction rate is predicted for the learned parameter (length of
sampling interval) considered most likely by MDL, which is shown to coincide
with the best value found experimentally. In the second experiment the task is
to model a robot arm with two degrees of freedom using a three layer
feed-forward neural network where we need to determine the number of nodes in
the hidden layer giving best modeling performance. The optimal model (the one
that extrapolates best on unseen examples) is predicted for the number of nodes
in the hidden layer considered most likely by MDL, which again is found to
coincide with the best value found experimentally.
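As an illustration of the two-part-code idea (not the paper's elastic-matching or neural-network experiments), one can select the granularity of a histogram model by minimizing model bits plus data bits. The encoding below, with its fixed per-point precision, is one simple choice of code; all parameter values are ours:

```python
import math, random

def two_part_code_length(data, n_bins, precision_bits=10):
    """Two-part MDL cost of a histogram over [0, 1]: bits to state the model
    (one count per bin) plus bits to code each point to fixed precision."""
    n = len(data)
    width = 1.0 / n_bins
    counts = [0] * n_bins
    for x in data:
        counts[min(int(x / width), n_bins - 1)] += 1
    model_bits = n_bins * math.log2(n + 1)       # each count needs ~log2(n+1) bits
    data_bits = sum(-c * math.log2(c / n) for c in counts if c)
    data_bits += n * (precision_bits + math.log2(width))
    return model_bits + data_bits

random.seed(0)
sample = [random.betavariate(2, 5) for _ in range(2000)]
best = min(range(1, 64), key=lambda b: two_part_code_length(sample, b))
```

For peaked data the minimum lies at an intermediate bin count: too few bins waste data bits, too many waste model bits.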
|
physics/0007070
|
Predictability, complexity and learning
|
physics.data-an cond-mat.dis-nn cond-mat.other cs.LG nlin.AO q-bio.OT
|
We define {\em predictive information} $I_{\rm pred} (T)$ as the mutual
information between the past and the future of a time series. Three
qualitatively different behaviors are found in the limit of large observation
times $T$: $I_{\rm pred} (T)$ can remain finite, grow logarithmically, or grow
as a fractional power law. If the time series allows us to learn a model with a
finite number of parameters, then $I_{\rm pred} (T)$ grows logarithmically with
a coefficient that counts the dimensionality of the model space. In contrast,
power--law growth is associated, for example, with the learning of infinite
parameter (or nonparametric) models such as continuous functions with
smoothness constraints. There are connections between the predictive
information and measures of complexity that have been defined both in learning
theory and in the analysis of physical systems through statistical mechanics
and dynamical systems theory. Further, in the same way that entropy provides
the unique measure of available information consistent with some simple and
plausible conditions, we argue that the divergent part of $I_{\rm pred} (T)$
provides the unique measure for the complexity of dynamics underlying a time
series. Finally, we discuss how these ideas may be useful in different problems
in physics, statistics, and biology.
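For an order-1 Markov chain the predictive information saturates at a finite value, and a plug-in estimate recovers it directly. The sketch below (our illustration, not the paper's) estimates I(past; future) from adjacent symbols of a sticky binary chain; for flip probability 0.1 the limiting value is 1 - H_2(0.1), about 0.53 bits:

```python
import math, random
from collections import Counter

def mutual_information(pairs):
    """Plug-in estimate of I(A;B) in bits from a list of (a, b) samples."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2(c * n / (p_a[a] * p_b[b]))
               for (a, b), c in p_ab.items())

random.seed(1)
flip_prob = 0.1                      # chain stays in its state w.p. 0.9
x, chain = 0, []
for _ in range(200_000):
    chain.append(x)
    if random.random() < flip_prob:
        x = 1 - x

# For an order-1 chain, I(past; future) reduces to I(x_t; x_{t+1}).
i_pred = mutual_information(list(zip(chain[:-1], chain[1:])))
```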
|
physics/0007075
|
Optimization of Trading Physics Models of Markets
|
physics.comp-ph cond-mat.stat-mech cs.CE physics.data-an q-fin.ST
|
We describe an end-to-end real-time S&P futures trading system. Inner-shell
stochastic nonlinear dynamic models are developed, and Canonical Momenta
Indicators (CMI) are derived from a fitted Lagrangian used by outer-shell
trading models dependent on these indicators. Recursive and adaptive
optimization using Adaptive Simulated Annealing (ASA) is used for fitting
parameters shared across these shells of dynamic and trading models.
|
physics/0009032
|
Information theory and learning: a physical approach
|
physics.data-an cond-mat.dis-nn cs.LG nlin.AO
|
We try to establish a unified information theoretic approach to learning and
to explore some of its applications. First, we define {\em predictive
information} as the mutual information between the past and the future of a
time series, discuss its behavior as a function of the length of the series,
and explain how other quantities of interest studied previously in learning
theory - as well as in dynamical systems and statistical mechanics - emerge
from this universally definable concept. We then prove that predictive
information provides the {\em unique measure for the complexity} of dynamics
underlying the time series and show that there are classes of models
characterized by {\em power-law growth of the predictive information} that are
qualitatively more complex than any of the systems that have been investigated
before. Further, we investigate numerically the learning of a nonparametric
probability density, which is an example of a problem with power-law
complexity, and show that the proper Bayesian formulation of this problem
provides for the `Occam' factors that punish overly complex models and thus
allow one {\em to learn not only a solution within a specific model class, but
also the class itself} using the data only and with very few a priori
assumptions. We study a possible {\em information theoretic method} that
regularizes the learning of an undersampled discrete variable, and show that
learning in such a setup goes through stages of very different complexities.
Finally, we discuss how all of these ideas may be useful in various problems in
physics, statistics, and, most importantly, biology.
|
physics/0101021
|
Adaptive evolution on neutral networks
|
physics.bio-ph cond-mat.stat-mech cs.NE nlin.AO q-bio.PE
|
We study the evolution of large but finite asexual populations evolving in
fitness landscapes in which all mutations are either neutral or strongly
deleterious. We demonstrate that despite the absence of higher fitness
genotypes, adaptation takes place as regions with more advantageous
distributions of neutral genotypes are discovered. Since these discoveries are
typically rare events, the population dynamics can be subdivided into separate
epochs, with rapid transitions between them. Within one epoch, the average
fitness in the population is approximately constant. The transitions between
epochs, however, are generally accompanied by a significant increase in the
average fitness. We verify our theoretical considerations with two analytically
tractable bitstring models.
|
physics/0102009
|
Self-adaptive exploration in evolutionary search
|
physics.bio-ph cs.NE nlin.AO q-bio
|
We address a primary question of computational as well as biological research
on evolution: How can an exploration strategy adapt in such a way as to exploit
the information gained about the problem at hand? We first introduce an
integrated formalism of evolutionary search which provides a unified view on
different specific approaches. On this basis we discuss the implications of
indirect modeling (via a ``genotype-phenotype mapping'') on the exploration
strategy. Notions such as modularity, pleiotropy and functional phenotypic
complexes are discussed as implications. Then, rigorously reflecting the notion
of self-adaptability, we introduce a new definition that captures
self-adaptability of exploration: different genotypes that map to the same
phenotype may represent (also topologically) different exploration strategies;
self-adaptability requires a variation of exploration strategies along such a
``neutral space''. By this definition, the concept of neutrality becomes a
central concern of this paper. Finally, we present examples of these concepts:
For a specific grammar-type encoding, we observe a large variability of
exploration strategies for a fixed phenotype, and a self-adaptive drift towards
short representations with highly structured exploration strategy that matches
the ``problem's structure''.
|
physics/0209085
|
The calculation of a normal force between multiparticle contacts using
fractional operators
|
physics.comp-ph cs.CE cs.NA math.NA physics.class-ph physics.geo-ph
|
This paper deals with the complex problem of how to simulate multiparticle
contacts. The collision process is responsible for the transfer and dissipation
of energy in granular media. A novel model of the interaction force between
particles has been proposed and tested. Such a model allows us to simulate
multiparticle collisions and granular cohesion dynamics.
|
physics/0307117
|
Symbolic stochastic dynamical systems viewed as binary N-step Markov
chains
|
physics.data-an cond-mat.stat-mech cs.CL math-ph math.MP nlin.AO physics.class-ph
|
A theory of systems with long-range correlations based on the consideration
of binary N-step Markov chains is developed. In the model, the conditional
probability that the i-th symbol in the chain equals zero (or unity) is a
linear function of the number of unities among the preceding N symbols. The
correlation and distribution functions as well as the variance of the number of
symbols in the words of arbitrary length L are obtained analytically and
numerically. A self-similarity of the studied stochastic process is revealed
and the similarity group transformation of the chain parameters is presented.
The diffusion Fokker-Planck equation governing the distribution function of the
L-words is explored. If the persistent correlations are not extremely strong,
the distribution function is shown to be Gaussian, with the variance depending
nonlinearly on L. The applicability of the developed theory to the
coarse-grained written and DNA texts is discussed.
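The chain itself is easy to simulate. Below is one simple parameterization of the linear dependence (a persistence strength mu of our choosing; the paper's exact form may differ), together with a check that persistent correlations inflate the variance of the number of unities in L-words above the uncorrelated value L/4:

```python
import random

def generate_chain(length, n_memory, mu, seed=2):
    """Binary N-step Markov chain: P(next symbol = 1) is linear in the
    number of unities k among the preceding n_memory symbols."""
    rng = random.Random(seed)
    chain = [rng.randint(0, 1) for _ in range(n_memory)]
    for _ in range(length):
        k = sum(chain[-n_memory:])
        p_one = 0.5 + mu * (2.0 * k / n_memory - 1.0)   # persistent for mu > 0
        chain.append(1 if rng.random() < p_one else 0)
    return chain[n_memory:]

def word_variance(chain, word_len):
    """Variance of the number of unities in non-overlapping words."""
    sums = [sum(chain[i:i + word_len])
            for i in range(0, len(chain) - word_len + 1, word_len)]
    m = sum(sums) / len(sums)
    return sum((s - m) ** 2 for s in sums) / len(sums)

chain = generate_chain(200_000, n_memory=10, mu=0.3)
v_chain = word_variance(chain, 50)   # an uncorrelated chain would give L/4 = 12.5
```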
|
physics/0308041
|
Ensembles of Protein Molecules as Statistical Analog Computers
|
physics.bio-ph cs.AI cs.NE physics.comp-ph physics.data-an q-bio.NC
|
A class of analog computers built from large numbers of microscopic
probabilistic machines is discussed. It is postulated that such computers are
implemented in biological systems as ensembles of protein molecules. The
formalism is based on an abstract computational model referred to as Protein
Molecule Machine (PMM). A PMM is a continuous-time first-order Markov system
with real input and output vectors, a finite set of discrete states, and the
input-dependent conditional probability densities of state transitions. The
output of a PMM is a function of its input and state. The components of the
input vector, called generalized potentials, can be interpreted as membrane
potentials and neurotransmitter concentrations. The components of the output
vector, called generalized currents, can represent ion currents and the flows
of second messengers. An Ensemble of PMMs (EPMM) is a set of independent
identical PMMs with the same input vector, and the output vector equal to the
sum of output vectors of individual PMMs. The paper suggests that biological
neurons have much more sophisticated computational resources than the presently
popular models of artificial neurons.
|
physics/0405044
|
Least Dependent Component Analysis Based on Mutual Information
|
physics.comp-ph cs.IT math.IT physics.data-an q-bio.QM
|
We propose to use precise estimators of mutual information (MI) to find least
dependent components in a linearly mixed signal. On the one hand this seems to
lead to better blind source separation than with any other presently available
algorithm. On the other hand it has the advantage, compared to other
implementations of `independent' component analysis (ICA) some of which are
based on crude approximations for MI, that the numerical values of the MI can
be used for:
(i) estimating residual dependencies between the output components;
(ii) estimating the reliability of the output, by comparing the pairwise MIs
with those of re-mixed components;
(iii) clustering the output according to the residual interdependencies.
For the MI estimator we use a recently proposed k-nearest neighbor based
algorithm. For time sequences we combine this with delay embedding, in order to
take into account non-trivial time correlations. After several tests with
artificial data, we apply the resulting MILCA (Mutual Information based Least
dependent Component Analysis) algorithm to a real-world dataset, the ECG of a
pregnant woman.
The software implementation of the MILCA algorithm is freely available at
http://www.fz-juelich.de/nic/cs/software
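The k-nearest-neighbor MI estimator referred to here is the Kraskov-Stögbauer-Grassberger estimator; a compact brute-force version of its first algorithm is sketched below (an illustration, not the MILCA code — the digamma function is evaluated only at integer arguments, via harmonic numbers):

```python
import math, random

def digamma(n):
    """Digamma at positive integers: psi(n) = -gamma + H_{n-1}."""
    return -0.5772156649015329 + sum(1.0 / j for j in range(1, n))

def ksg_mutual_information(xs, ys, k=3):
    """Kraskov-Stogbauer-Grassberger MI estimate (algorithm 1), in nats,
    using O(n^2) neighbor searches with the max-norm."""
    n = len(xs)
    mi = digamma(k) + digamma(n)
    for i in range(n):
        dists = sorted(max(abs(xs[i] - xs[j]), abs(ys[i] - ys[j]))
                       for j in range(n) if j != i)
        eps = dists[k - 1]                 # distance to k-th nearest neighbor
        nx = sum(1 for j in range(n) if j != i and abs(xs[i] - xs[j]) < eps)
        ny = sum(1 for j in range(n) if j != i and abs(ys[i] - ys[j]) < eps)
        mi -= (digamma(nx + 1) + digamma(ny + 1)) / n
    return mi

random.seed(3)
x = [random.gauss(0, 1) for _ in range(400)]
y_indep = [random.gauss(0, 1) for _ in range(400)]
y_dep = [xi + 0.3 * random.gauss(0, 1) for xi in x]

mi_indep = ksg_mutual_information(x, y_indep)   # true value: 0
mi_dep = ksg_mutual_information(x, y_dep)       # true value: ~1.25 nats
```

The numerical MI values can then be compared across component pairs, as in points (i)-(iii) above.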
|
physics/0406023
|
Maximum Entropy Multivariate Density Estimation: An exact
goodness-of-fit approach
|
physics.data-an cs.IT math.IT math.ST stat.TH
|
We consider the problem of estimating the population probability distribution
given a finite set of multivariate samples, using the maximum entropy approach.
In strict keeping with Jaynes' original definition, our precise formulation of
the problem considers contributions only from the smoothness of the estimated
distribution (as measured by its entropy) and the loss functional associated
with its goodness-of-fit to the sample data, and in particular does not make
use of any additional constraints that cannot be justified from the sample data
alone. By mapping the general multivariate problem to a tractable univariate
one, we are able to write down exact expressions for the goodness-of-fit of an
arbitrary multivariate distribution to any given set of samples using both the
traditional likelihood-based approach and a rigorous information-theoretic
approach, thus solving a long-standing problem. As a corollary we also give an
exact solution to the `forward problem' of determining the expected
distributions of samples taken from a population with known probability
distribution.
|
physics/0412029
|
Spectral Mixture Decomposition by Least Dependent Component Analysis
|
physics.data-an cs.IT math.IT physics.chem-ph
|
A recently proposed mutual information based algorithm for decomposing data
into least dependent components (MILCA) is applied to spectral analysis, namely
to blind recovery of concentrations and pure spectra from their linear
mixtures. The algorithm is based on precise estimates of mutual information
between measured spectra, which allow one to assess and make use of actual
statistical dependencies between them. We show that linear filtering performed
by taking second derivatives effectively reduces the dependencies caused by
overlapping spectral bands and, thereby, assists resolving pure spectra. In
combination with second derivative preprocessing and alternating least squares
postprocessing, MILCA shows decomposition performance comparable with or
superior to specialized chemometrics algorithms. The results are illustrated on
a number of simulated and experimental (infrared and Raman) mixture problems,
including spectroscopy of complex biological materials.
MILCA is available online at http://www.fz-juelich.de/nic/cs/software
|
physics/0504185
|
Frequency of occurrence of numbers in the World Wide Web
|
physics.soc-ph cond-mat.stat-mech cs.DB math.ST stat.TH
|
The distribution of numbers in human documents is determined by a variety of
diverse natural and human factors, whose relative significance can be evaluated
by studying the numbers' frequency of occurrence. Although it has been studied
since the 1880s, this subject remains poorly understood. Here, we obtain the
detailed statistics of numbers in the World Wide Web, finding that their
distribution is heavy-tailed, splitting into a set of power laws. In
particular, we find that the frequency of numbers associated with
western calendar years shows an uneven behavior: 2004 represents a `singular
critical' point, appearing with a strikingly high frequency; as we move away
from it, the decreasing frequency allows us to compare the amounts of existing
information on the past and on the future. Moreover, while powers of ten occur
extremely often, allowing us to obtain statistics up to the huge 10^127,
`non-round' numbers occur in a much more limited range, the variations of their
frequencies being dramatically different from standard statistical
fluctuations. These findings provide a view of the array of numbers used by
humans as a highly non-equilibrium and inhomogeneous system, and shed a new
light on an issue that, once fully investigated, could lead to a better
understanding of many sociological and psychological phenomena.
|
physics/0509039
|
The Dynamics of Viral Marketing
|
physics.soc-ph cond-mat.stat-mech cs.DB cs.DS
|
We present an analysis of a person-to-person recommendation network,
consisting of 4 million people who made 16 million recommendations on half a
million products. We observe the propagation of recommendations and the cascade
sizes, which we explain by a simple stochastic model. We analyze how user
behavior varies within user communities defined by a recommendation network.
Product purchases follow a 'long tail' where a significant share of purchases
belongs to rarely sold items. We establish how the recommendation network grows
over time and how effective it is from the viewpoint of the sender and receiver
of the recommendations. While on average recommendations are not very effective
at inducing purchases and do not spread very far, we present a model that
successfully identifies communities, product and pricing categories for which
viral marketing seems to be very effective.
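A simple stochastic model of recommendation cascades can be sketched as a subcritical branching process (our illustrative stand-in, not necessarily the paper's exact model): each buyer's recommendations convert independently with small probability, so most cascades stay small while a few grow large. For a mean offspring number m < 1 the expected total cascade size is 1/(1-m):

```python
import random

def cascade_size(branching_mean, rng, max_size=10_000):
    """Total size of a recommendation cascade in a branching-process model:
    each buyer sends 10 recommendations, each converting independently."""
    p_convert = branching_mean / 10      # mean offspring = branching_mean
    size, frontier = 1, 1
    while frontier and size < max_size:
        frontier = sum(1 for _ in range(frontier * 10)
                       if rng.random() < p_convert)
        size += frontier
    return size

rng = random.Random(7)
sizes = [cascade_size(0.8, rng) for _ in range(5000)]
mean_size = sum(sizes) / len(sizes)      # theory: 1 / (1 - 0.8) = 5
```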
|
physics/0509075
|
Sharp transition towards shared vocabularies in multi-agent systems
|
physics.soc-ph cond-mat.stat-mech cs.GT cs.MA
|
What processes can explain how very large populations are able to converge on
the use of a particular word or grammatical construction without global
coordination? Answering this question helps to understand why new language
constructs usually propagate along an S-shaped curve with a rather sudden
transition towards global agreement. It also helps to analyze and design new
technologies that support or orchestrate self-organizing communication systems,
such as recent social tagging systems for the web. The article introduces and
studies a microscopic model of communicating autonomous agents performing
language games without any central control. We show that the system undergoes a
disorder/order transition, going through a sharp symmetry-breaking process to
reach a shared set of conventions. Before the transition, the system builds up
non-trivial scale-invariant correlations, for instance in the distribution of
competing synonyms, which display a Zipf-like law. These correlations make the
system ready for the transition towards shared conventions, which, observed on
the time-scale of collective behaviors, becomes sharper and sharper with system
size. This surprising result not only explains why human language can scale up
to very large populations but also suggests ways to optimize artificial
semiotic dynamics.
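A minimal version of such a language game (the Naming Game in its standard minimal strategy, sketched here for illustration with parameter values of our choosing) already converges to a single shared word without any central control:

```python
import random

def naming_game(n_agents, n_steps, seed=4):
    """Minimal Naming Game: random pairwise games, no central control."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for _ in range(n_steps):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:          # invent a brand-new word
            inventories[speaker].add(next_word)
            next_word += 1
        word = rng.choice(sorted(inventories[speaker]))
        if word in inventories[hearer]:       # success: both collapse to it
            inventories[speaker] = {word}
            inventories[hearer] = {word}
        else:                                 # failure: hearer learns the word
            inventories[hearer].add(word)
    return inventories

inventories = naming_game(n_agents=50, n_steps=40_000)
converged = all(inv == inventories[0] for inv in inventories)
```

The shared-word state is absorbing, so for a run much longer than the convergence time the whole population ends up with one word each.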
|
physics/0510117
|
Modeling bursts and heavy tails in human dynamics
|
physics.soc-ph cs.MA
|
Current models of human dynamics, used from risk assessment to
communications, assume that human actions are randomly distributed in time and
thus well approximated by Poisson processes. We provide direct evidence that
for five human activity patterns the timing of individual human actions follows
non-Poisson statistics, characterized by bursts of rapidly occurring events
separated by long periods of inactivity. We show that the bursty nature of
human behavior is a consequence of a decision based queuing process: when
individuals execute tasks based on some perceived priority, the timing of the
tasks will be heavy-tailed: most tasks are rapidly executed, while a few
experience very long waiting times. We discuss two queueing models that
capture human activity. The first model assumes that there are no limitations
on the number of tasks an individual can handle at any time, predicting that
the waiting times of individual tasks follow a heavy-tailed distribution with
exponent alpha=3/2. The second model imposes limitations on the queue length,
resulting in alpha=1. We provide empirical evidence supporting the relevance of
these two models to human activity patterns. Finally, we discuss possible
extensions of the proposed queueing models and outline some future challenges
exploring the statistical mechanisms of human dynamics.
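The second (fixed queue length) model is straightforward to simulate. The sketch below uses a length-2 queue and executes the highest-priority task with probability close to 1 (the parameter values are ours, for illustration): most waiting times are a single step, while a few tasks wait orders of magnitude longer:

```python
import random

def simulate_queue(n_steps, queue_len=2, p_highest=0.999, seed=5):
    """Fixed-length priority list: with probability p_highest execute the
    highest-priority task, otherwise a random one; executed tasks are
    replaced by fresh tasks with uniform random priorities."""
    rng = random.Random(seed)
    tasks = [(rng.random(), 0) for _ in range(queue_len)]   # (priority, birth)
    waits = []
    for step in range(1, n_steps + 1):
        if rng.random() < p_highest:
            i = max(range(queue_len), key=lambda j: tasks[j][0])
        else:
            i = rng.randrange(queue_len)
        waits.append(step - tasks[i][1])    # waiting time of the executed task
        tasks[i] = (rng.random(), step)
    return waits

waits = simulate_queue(200_000)
frac_immediate = sum(1 for w in waits if w == 1) / len(waits)
longest = max(waits)
```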
|
physics/0511201
|
Strategies for fast convergence in semiotic dynamics
|
physics.soc-ph cond-mat.stat-mech cs.GT cs.MA
|
Semiotic dynamics is a novel field that studies how semiotic conventions
spread and stabilize in a population of agents. This is a central issue both
for theoretical and technological reasons, since large systems made up of
communicating agents, such as web communities or teams of artificial embodied
agents, are becoming widespread. In this paper we discuss a recently introduced simple
multi-agent model which is able to account for the emergence of a shared
vocabulary in a population of agents. In particular we introduce a new
deterministic agents' playing strategy that strongly improves the performance
of the game in terms of faster convergence and reduced cognitive effort for the
agents.
|
physics/0512045
|
Topology Induced Coarsening in Language Games
|
physics.soc-ph cond-mat.stat-mech cs.GT cs.MA
|
We investigate how very large populations are able to reach a global
consensus, out of local "microscopic" interaction rules, in the framework of a
recently introduced class of models of semiotic dynamics, the so-called Naming
Game. We compare in particular the convergence mechanism for interacting agents
embedded in a low-dimensional lattice with respect to the mean-field case. We
highlight that in low dimensions consensus is reached through a coarsening
process which requires less cognitive effort from the agents than in the
mean-field case, but takes longer to complete. In 1-d the dynamics of the
boundaries is mapped onto a truncated Markov process from which we
analytically compute the diffusion coefficient. More generally we show that the convergence
process requires a memory per agent scaling as N and lasts a time N^{1+2/d} in
dimension d<5 (d=4 being the upper critical dimension), while in mean-field
both memory and time scale as N^{3/2}, for a population of N agents. We present
analytical and numerical evidence supporting this picture.
|
physics/0601118
|
Learning about knowledge: A complex network approach
|
physics.soc-ph cond-mat.dis-nn cs.NE physics.comp-ph
|
This article describes an approach to modeling knowledge acquisition in terms
of walks along complex networks. Each subset of knowledge is represented as a
node, and relations between such knowledge are expressed as edges. Two types of
edges are considered, corresponding to free and conditional transitions. The
latter case implies that a node can only be reached after previously visiting
a set of nodes (the required conditions). The process of knowledge acquisition
can then be simulated by considering the number of nodes visited as a single
agent moves along the network, starting from its lowest layer. It is shown that
hierarchical networks, i.e. networks composed of successive interconnected
layers, arise naturally as a consequence of compositions of the prerequisite
relationships between the nodes. In order to avoid deadlocks, i.e. unreachable
nodes, the subnetwork in each layer is assumed to be a connected component.
Several configurations of such hierarchical knowledge networks are simulated
and the performance of the moving agent quantified in terms of the percentage
of visited nodes after each movement. The Barab\'asi-Albert and random models
are considered for the layer and interconnecting subnetworks. Although all
subnetworks in each realization have the same number of nodes, several
interconnectivities, defined by the average node degree of the interconnection
networks, have been considered. Two visiting strategies are investigated:
random choice among the existing edges and preferential choice to so far
untracked edges. A series of interesting results is obtained, including the
identification of a series of plateaux of knowledge stagnation in the case of
the preferential-movement strategy in the presence of conditional edges.
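The walk with conditional transitions can be sketched on a toy two-layer network (our own small example, using only the random-choice visiting strategy):

```python
import random

def knowledge_walk(edges, prereqs, n_moves, seed=6):
    """Single agent walking a knowledge network: a conditional node can only
    be entered once all of its prerequisite nodes have been visited."""
    rng = random.Random(seed)
    current, visited = 0, {0}            # start at the lowest layer
    history = [len(visited)]             # knowledge acquired after each move
    for _ in range(n_moves):
        options = [v for v in edges[current]
                   if prereqs.get(v, set()) <= visited]
        if options:
            current = rng.choice(options)
            visited.add(current)
        history.append(len(visited))
    return history

# Toy two-layer hierarchy: nodes 0-3 are freely linked; 4 and 5 are
# conditional on having visited the first layer.
edges = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3, 5], 5: [4]}
prereqs = {4: {1, 2, 3}, 5: {4}}
history = knowledge_walk(edges, prereqs, n_moves=2000)
```

Plotting `history` against the move number shows the plateau structure: progress stalls until the prerequisites of a conditional node happen to be completed.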
|
physics/0601161
|
Monte Carlo Algorithm for Least Dependent Non-Negative Mixture
Decomposition
|
physics.chem-ph cond-mat.stat-mech cs.IT math.IT math.PR math.ST physics.comp-ph physics.data-an stat.TH
|
We propose a simulated annealing algorithm (called SNICA for "stochastic
non-negative independent component analysis") for blind decomposition of linear
mixtures of non-negative sources with non-negative coefficients. The de-mixing
is based on a Metropolis type Monte Carlo search for least dependent
components, with the mutual information between recovered components as a cost
function and their non-negativity as a hard constraint. Elementary moves are
shears in two-dimensional subspaces and rotations in three-dimensional
subspaces. The algorithm is geared toward decomposing signals whose probability
densities peak at zero, as is typical in analytical spectroscopy and
multivariate curve resolution. The decomposition performance on large samples
of synthetic mixtures and experimental data is much better than that of
traditional blind source separation methods based on principal component
analysis (MILCA, FastICA, RADICAL) and chemometrics techniques (SIMPLISMA, ALS,
BTEM).
The source codes of SNICA, MILCA and the MI estimator are freely available
online at http://www.fz-juelich.de/nic/cs/software
|
physics/0602033
|
Community Structure in the United States House of Representatives
|
physics.soc-ph cond-mat.stat-mech cs.MA nlin.AO physics.data-an
|
We investigate the networks of committee and subcommittee assignments in the
United States House of Representatives from the 101st--108th Congresses, with
the committees connected by ``interlocks'' or common membership. We examine the
community structure in these networks using several methods, revealing strong
links between certain committees as well as an intrinsic hierarchical structure
in the House as a whole. We identify structural changes, including additional
hierarchical levels and higher modularity, resulting from the 1994 election, in
which the Republican party earned majority status in the House for the first
time in more than forty years. We also combine our network approach with
analysis of roll call votes using singular value decomposition to uncover
correlations between the political and organizational structure of House
committees.
|
physics/0603002
|
Functional dissipation microarrays for classification
|
physics.data-an cs.CV
|
In this article, we describe a new method of extracting information from
signals, called functional dissipation, that proves to be very effective for
enhancing classification of high resolution, texture-rich data. Our algorithm
bypasses to some extent the need to have very specialized feature extraction
techniques, and can potentially be used as an intermediate, feature enhancement
step in any classification scheme.
Functional dissipation is based on signal transforms, but uses the transforms
recursively to uncover new features. We generate a variety of masking functions
and `extract' features with several generalized matching pursuit iterations. In
each iteration, the recursive process modifies several coefficients of the
transformed signal with the largest absolute values according to the specific
masking function; in this way the greedy pursuit is turned into a slow,
controlled, dissipation of the structure of the signal that, for some masking
functions, enhances separation among classes.
Our case study in this paper is the classification of crystallization
patterns of amino acids solutions affected by the addition of small quantities
of proteins.
|
physics/0606053
|
Optimal estimation for Large-Eddy Simulation of turbulence and
application to the analysis of subgrid models
|
physics.class-ph cs.NE
|
The tools of optimal estimation are applied to the study of subgrid models
for Large-Eddy Simulation of turbulence. The concept of optimal estimator is
introduced and its properties are analyzed in the context of applications to a
priori tests of subgrid models. Attention is focused on the Cook and Riley
model in the case of a scalar field in isotropic turbulence. Using DNS data,
the relevance of the beta assumption is estimated by computing (i) generalized
optimal estimators and (ii) the error brought by this assumption alone. Optimal
estimators are computed for the subgrid variance using various sets of
variables and various techniques (histograms and neural networks). It is shown
that optimal estimators allow a thorough exploration of models. Neural networks
are shown to be relevant and very efficient in this framework, and further
usages are suggested.
|
physics/0607116
|
Use of sensory substitution via tongue electro-stimulation for the
prevention of pressure ulcers in paraplegics. A preliminary study
|
physics.med-ph cs.RO q-bio.NC
|
Pressure ulcers are recognized as a major health issue in individuals with
spinal cord injuries and new approaches to prevent this pathology are
necessary. An innovative health strategy is being developed through the use of
computer and sensory substitution via the tongue in order to compensate for the
sensory loss in the buttock area for individuals with paraplegia. This sensory
compensation will enable individuals with spinal cord injuries to be aware of a
localized excess of pressure at the skin/seat interface and, consequently, will
enable them to prevent the formation of pressure ulcers by relieving the
cutaneous area of suffering. This work reports an initial evaluation of this
approach and the feasibility of creating an adapted behavior, with a change in
pressure as a response to electro-stimulated information on the tongue.
The first results, obtained during a clinical study of 10 healthy seated
subjects, are encouraging, with a 92% success rate in 100 tests. These
results, which have to be completed and validated in the paraplegic population,
may lead to a new approach to education in health to prevent the formation of
pressure ulcers within this population. Keywords: Spinal Cord Injuries,
Pressure Ulcer, Sensory Substitution, Health Education, Biomedical Informatics.
|
physics/0608166
|
Information filtering via Iterative Refinement
|
physics.data-an cs.IR physics.soc-ph
|
With the explosive growth of accessible information, especially on the
Internet, evaluation-based filtering has become a crucial task. Various systems
have been devised aiming to sort through large volumes of information and
select what is likely to be more relevant. In this letter we analyse a new
ranking method, where the reputation of information providers is determined
self-consistently.
|
physics/0608185
|
Updating Probabilities
|
physics.data-an cond-mat.stat-mech cs.IT math.IT
|
We show that Skilling's method of induction leads to a unique general theory
of inductive inference, the method of Maximum relative Entropy (ME). The main
tool for updating probabilities is the logarithmic relative entropy; other
entropies such as those of Renyi or Tsallis are ruled out. We also show that
Bayes updating is a special case of ME updating and thus that the two are
completely compatible.
|
physics/0608293
|
Automatic Trading Agent. RMT based Portfolio Theory and Portfolio
Selection
|
physics.soc-ph cs.CE q-fin.PM stat.AP
|
Portfolio theory is a very powerful tool in modern investment theory. It
is helpful in estimating the risk of an investor's portfolio, which arises from
our lack of information, uncertainty and incomplete knowledge of reality, which
forbid a perfect prediction of future price changes. Despite its many
advantages, this tool is not well known and not widely used among investors on
the Warsaw Stock Exchange. The main reason for avoiding this method is its high
level of complexity and the immense calculations it requires. The aim of this
paper is to introduce an automatic decision-making system, which allows a
single investor to use such complex methods of Modern Portfolio Theory (MPT).
The key tool in MPT is the analysis of an empirical covariance matrix. This
matrix, obtained from historical data, is biased by such a high amount of
statistical uncertainty that it can be seen as random. By bringing into
practice the ideas of Random Matrix Theory (RMT), the noise is removed or
significantly reduced, so that future risk and return are better estimated and
controlled. These concepts are applied to the Warsaw Stock Exchange Simulator
http://gra.onet.pl. The result of the simulation is an 18% gain, compared to
the corresponding 10% loss of the Warsaw Stock Exchange main index WIG.
|
physics/0609097
|
F.A.S.T. - Floor field- and Agent-based Simulation Tool
|
physics.comp-ph cs.MA physics.soc-ph
|
In this paper a model of pedestrian motion is presented. As an application,
its parameters are fitted to one run of a primary school evacuation exercise.
Simulations with these parameters are compared to further runs during the same
exercise.
|
physics/0610051
|
Structural Inference of Hierarchies in Networks
|
physics.soc-ph cs.LG physics.data-an
|
One property of networks that has received comparatively little attention is
hierarchy, i.e., the property of having vertices that cluster together in
groups, which then join to form groups of groups, and so forth, up through all
levels of organization in the network. Here, we give a precise definition of
hierarchical structure, give a generic model for generating arbitrary
hierarchical structure in a random graph, and describe a statistically
principled way to learn the set of hierarchical features that most plausibly
explain a particular real-world network. By applying this approach to two
example networks, we demonstrate its advantages for the interpretation of
network data, the annotation of graphs with edge, vertex and community
properties, and the generation of generic null models for further hypothesis
testing.
|
physics/0701081
|
Spatio-Temporal Electromagnetic Field Shapes and their Logical
Processing
|
physics.comp-ph cs.CV physics.gen-ph
|
This paper concerns spatio-temporal signals with topologically modulated
electromagnetic fields. The carrier of the digital information is the
topological scheme composed of the separatrix manifolds and equilibrium
positions of the field. The signals, and the hardware developed for their
processing in the space-time domain, are considered.
|
physics/0703126
|
The Laplace-Jaynes approach to induction
|
physics.data-an cs.AI quant-ph
|
An approach to induction is presented, based on the idea of analysing the
context of a given problem into `circumstances'. This approach, fully Bayesian
in form and meaning, provides a complement or in some cases an alternative to
that based on de Finetti's representation theorem and on the notion of infinite
exchangeability. In particular, it gives an alternative interpretation of those
formulae that apparently involve `unknown probabilities' or `propensities'.
Various advantages and applications of the presented approach are discussed,
especially in comparison to that based on exchangeability. Generalisations are
also discussed.
|
physics/0703164
|
Cultural route to the emergence of linguistic categories
|
physics.soc-ph cond-mat.dis-nn cs.MA
|
Categories provide a coarse grained description of the world. A fundamental
question is whether categories simply mirror an underlying structure of nature,
or instead come from the complex interactions of human beings among themselves
and with the environment. Here we address this question by modelling a
population of individuals who co-evolve their own system of symbols and
meanings by playing elementary language games. The central result is the
emergence of a hierarchical category structure made of two distinct levels: a
basic layer, responsible for fine discrimination of the environment, and a
shared linguistic layer that groups together perceptions to guarantee
communicative success. Remarkably, the number of linguistic categories turns
out to be finite and small, as observed in natural languages.
|
physics/9911006
|
Genetic Algorithms in Time-Dependent Environments
|
physics.bio-ph adap-org cs.NE nlin.AO q-bio
|
The influence of time-dependent fitnesses on the infinite-population dynamics
of simple genetic algorithms (without crossover) is analyzed. Based on general
arguments, a schematic phase diagram is constructed that allows one to
characterize the asymptotic states as a function of the mutation rate and the
time scale of changes. Furthermore, the notion of regular changes is
introduced, for which the population can be shown to converge towards a
generalized quasispecies. Based on this, error thresholds and an optimal
mutation rate are approximately calculated for a generational genetic algorithm
with a moving needle-in-the-haystack landscape. The resulting phase diagram is
fully consistent with our general considerations.
|
q-bio/0310011
|
Complex Independent Component Analysis of Frequency-Domain
Electroencephalographic Data
|
q-bio.QM cs.CE physics.data-an q-bio.NC
|
Independent component analysis (ICA) has proven useful for modeling brain and
electroencephalographic (EEG) data. Here, we present a new, generalized method
to better capture the dynamics of brain signals than previous ICA algorithms.
We regard EEG sources as eliciting spatio-temporal activity patterns,
corresponding to, e.g., trajectories of activation propagating across cortex.
This leads to a model of convolutive signal superposition, in contrast with the
commonly used instantaneous mixing model. In the frequency-domain, convolutive
mixing is equivalent to multiplicative mixing of complex signal sources within
distinct spectral bands. We decompose the recorded spectral-domain signals into
independent components by a complex infomax ICA algorithm. First results from a
visual attention EEG experiment exhibit (1) sources of spatio-temporal dynamics
in the data, (2) links to subject behavior, (3) sources with a limited spectral
extent, and (4) a higher degree of independence compared to sources derived by
standard ICA.
|
q-bio/0310025
|
Pattern Excitation-Based Processing: The Music of The Brain
|
q-bio.NC cs.NE physics.bio-ph
|
An approach to information processing based on the excitation of patterns of
activity by non-linear active resonators in response to their input patterns is
proposed. Arguments are presented to show that any computation performed by a
conventional Turing machine-based computer, called T-machine in this paper,
could also be performed by the pattern excitation-based machine, which will be
called P-machine. A realization of this processing scheme by neural networks is
discussed. In this realization, the role of the resonators is played by neural
pattern excitation networks, which are the neural circuits capable of exciting
different spatio-temporal patterns of activity in response to different inputs.
Learning in the neural pattern excitation networks is also considered. It is
shown that there is a duality between pattern excitation and pattern
recognition neural networks, which makes it possible to create new pattern
excitation modes corresponding to recognizable input patterns, based on Hebbian
learning
rules. Hierarchically organized, such networks can produce complex behavior.
Animal behavior, human language and thought are treated as examples produced by
such networks.
|
q-bio/0401033
|
Parametric Inference for Biological Sequence Analysis
|
q-bio.GN cs.LG math.ST stat.TH
|
One of the major successes in computational biology has been the unification,
using the graphical model formalism, of a multitude of algorithms for
annotating and comparing biological sequences. Graphical models that have been
applied towards these problems include hidden Markov models for annotation,
tree models for phylogenetics, and pair hidden Markov models for alignment. A
single algorithm, the sum-product algorithm, solves many of the inference
problems associated with different statistical models. This paper introduces
the \emph{polytope propagation algorithm} for computing the Newton polytope of
an observation from a graphical model. This algorithm is a geometric version of
the sum-product algorithm and is used to analyze the parametric behavior of
maximum a posteriori inference calculations for graphical models.
|
q-bio/0402029
|
Fluctuation-dissipation theorem and models of learning
|
q-bio.NC cs.LG nlin.AO physics.data-an
|
Advances in statistical learning theory have resulted in a multitude of
different designs of learning machines. But which ones are implemented by
brains and other biological information processors? We analyze how various
abstract Bayesian learners perform on different data and argue that it is
difficult to determine which learning-theoretic computation is performed by a
particular organism using just its performance in learning a stationary target
(learning curve). Based on the fluctuation-dissipation relation in statistical
physics, we then discuss a different experimental setup that might be able to
solve the problem.
|
q-bio/0403011
|
Memorization in a neural network with adjustable transfer function and
conditional gating
|
q-bio.NC cs.NE
|
The main problem in replacing LTP as a memory mechanism has been finding
other highly abstract, easily understandable principles for induced plasticity.
In this paper we attempt to lay out such a basic mechanism, namely intrinsic
plasticity. Important empirical observations with theoretical significance are
time-layering of neural plasticity mediated by additional constraints to enter
into later stages, various manifestations of intrinsic neural properties, and
conditional gating of synaptic connections. An important consequence of the
proposed mechanism is that it can explain the usually latent nature of
memories.
|
q-bio/0403022
|
Intelligent encoding and economical communication in the visual stream
|
q-bio.NC cs.AI cs.CC nlin.AO
|
The theory of computational complexity is used to underpin a recent model of
neocortical sensory processing. We argue that encoding into reconstruction
networks is appealing for communicating agents using Hebbian learning and
working on hard combinatorial problems, which are easy to verify. A
computational definition of the concept of intelligence is provided.
Simulations illustrate
the idea.
|
q-bio/0403036
|
The Triplet Genetic Code had a Doublet Predecessor
|
q-bio.GN cs.CE q-bio.BM quant-ph
|
Information theoretic analysis of genetic languages indicates that the
naturally occurring 20 amino acids and the triplet genetic code arose by
duplication of 10 amino acids of class-II and a doublet genetic code having
codons NNY and anticodons $\overleftarrow{\rm GNN}$. Evidence for this scenario
is presented based on the properties of aminoacyl-tRNA synthetases, amino acids
and nucleotide bases.
|
q-bio/0406015
|
Information theory, multivariate dependence, and genetic network
inference
|
q-bio.QM cs.IT math.IT math.ST physics.data-an q-bio.GN stat.TH
|
We define the concept of dependence among multiple variables using maximum
entropy techniques and introduce a graphical notation to denote the
dependencies. Direct inference of information theoretic quantities from data
uncovers dependencies even in undersampled regimes when the joint probability
distribution cannot be reliably estimated. The method is tested on synthetic
data. We anticipate it to be useful for inference of genetic circuits and other
biological signaling networks.
|
q-bio/0411030
|
Statistical Mechanics Characterization of Neuronal Mosaics
|
q-bio.NC cond-mat.dis-nn cs.CV physics.bio-ph q-bio.QM
|
The spatial distribution of neuronal cells is an important requirement for
achieving proper neuronal function in several parts of the nervous system of
most animals. For instance, specific distribution of photoreceptors and related
neuronal cells, particularly the ganglion cells, in the mammalian retina is required
in order to properly sample the projected scene. This work presents how two
concepts from the areas of statistical mechanics and complex systems, namely
the \emph{lacunarity} and the \emph{multiscale entropy} (i.e. the entropy
calculated over progressively diffused representations of the cell mosaic),
have allowed effective characterization of the spatial distribution of retinal
cells.
|
q-bio/0501021
|
Spike timing precision and neural error correction: local behavior
|
q-bio.NC cs.NE math.DS
|
The effects of spike timing precision and dynamical behavior on error
correction in spiking neurons were investigated. Stationary discharges -- phase
locked, quasiperiodic, or chaotic -- were induced in a simulated neuron by
presenting pacemaker presynaptic spike trains across a model of a prototypical
inhibitory synapse. Reduced timing precision was modeled by jittering
presynaptic spike times. Aftereffects of errors -- in this communication,
missed presynaptic spikes -- were determined by comparing postsynaptic spike
times between simulations identical except for the presence or absence of
errors. Results show that the effects of an error vary greatly depending on the
ongoing dynamical behavior. In the case of phase lockings, a high degree of
presynaptic spike timing precision can provide significantly faster error
recovery. For non-locked behaviors, isolated missed spikes can have little or
no discernible aftereffects (or even serve to paradoxically reduce uncertainty
in postsynaptic spike timing), regardless of presynaptic imprecision. This
suggests two possible categories of error correction: high-precision locking
with rapid recovery and low-precision non-locked with error immunity.
|
q-bio/0502023
|
Learning intrinsic excitability in medium spiny neurons
|
q-bio.NC cs.NE
|
We present an unsupervised, local activation-dependent learning rule for
intrinsic plasticity (IP) which affects the composition of ion channel
conductances for single neurons in a use-dependent way. We use a
single-compartment conductance-based model for medium spiny striatal neurons in
order to show the effects of parametrization of individual ion channels on the
neuronal activation function. We show that parameter changes within the
physiological ranges are sufficient to create an ensemble of neurons with
significantly different activation functions. We emphasize that the effects of
intrinsic neuronal variability on spiking behavior require a distributed mode
of synaptic input and can be eliminated by strongly correlated input. We show
how variability and adaptivity in ion channel conductances can be utilized to
store patterns without an additional contribution by synaptic plasticity (SP).
The adaptation of the spike response may result in either "positive" or
"negative" pattern learning. However, read-out of stored information depends on
a distributed pattern of synaptic activity to let intrinsic variability
determine spike response. We briefly discuss the implications of this
conditional memory on learning and addiction.
|
q-bio/0505021
|
Characterizing Self-Developing Biological Neural Networks: A First Step
Towards their Application To Computing Systems
|
q-bio.NC cs.AR cs.NE nlin.AO
|
Carbon nanotubes are often seen as the only alternative technology to silicon
transistors. While they are the most likely short-term one, other longer-term
alternatives should be studied as well. While contemplating biological neurons
as an alternative component may seem preposterous at first sight, significant
recent progress in CMOS-neuron interface suggests this direction may not be
unrealistic; moreover, biological neurons are known to self-assemble into very
large networks capable of complex information processing tasks, something that
has yet to be achieved with other emerging technologies. The first step to
designing computing systems on top of biological neurons is to build an
abstract model of self-assembled biological neural networks, much like computer
architects manipulate abstract models of transistors and circuits. In this
article, we propose a first model of the structure of biological neural
networks. We provide empirical evidence that this model matches the biological
neural networks found in living organisms, and exhibits the small-world graph
structure properties commonly found in many large and self-organized systems,
including biological neural networks. More importantly, we extract the simple
local rules and characteristics governing the growth of such networks, enabling
the development of potentially large but realistic biological neural networks,
as would be needed for complex information processing/computing tasks. Based on
this model, future work will be targeted to understanding the evolution and
learning properties of such networks, and how they can be used to build
computing systems.
|
q-bio/0505050
|
HLA and HIV Infection Progression: Application of the Minimum
Description Length Principle to Statistical Genetics
|
q-bio.QM cs.IT math.IT
|
The minimum description length (MDL) principle states that the best model to
account for some data minimizes the sum of the lengths, in bits, of the
descriptions of the model and the residual error. The description length is
thus a criterion for model selection. Description-length analysis of HLA
alleles from the Chicago MACS cohort enables classification of alleles
associated with plasma HIV RNA, an indicator of infection progression.
Progression variation is most strongly associated with HLA-B. Individuals
without B58s supertype alleles average viral RNA levels 3.6-fold greater than
individuals with them.
|
q-bio/0507037
|
Neuromodulation Influences Synchronization and Intrinsic Read-out
|
q-bio.NC cs.NE nlin.AO
|
Background: The roles of neuromodulation in a neural network, such as in a
cortical microcolumn, are still incompletely understood. Neuromodulation
influences neural processing by presynaptic and postsynaptic regulation of
synaptic efficacy. Neuromodulation also affects ion channels and intrinsic
excitability. Methods: Synaptic efficacy modulation is an effective way to
rapidly alter network density and topology. We alter network topology and
density to measure the effect on spike synchronization. We also operate with
differently parameterized neuron models which alter the neurons' intrinsic
excitability, i.e., their activation function. Results: We find that (a) fast
synaptic efficacy modulation influences the amount of correlated spiking in a
network. Also, (b) synchronization in a network influences the read-out of
intrinsic properties. Highly synchronous input drives neurons, such that
differences in intrinsic properties disappear, while asynchronous input lets
intrinsic properties determine output behavior. Thus, altering network topology
can alter the balance between intrinsically vs. synaptically driven network
activity. Conclusion: We conclude that neuromodulation may allow a network to
shift between a more synchronized transmission mode and a more asynchronous
intrinsic read-out mode. This has significant implications for our
understanding of the flexibility of cortical computations.
|
q-bio/0510007
|
The fitness value of information
|
q-bio.PE cs.IT math.IT q-bio.NC
|
Biologists measure information in different ways. Neurobiologists and
researchers in bioinformatics often measure information using
information-theoretic measures such as Shannon's entropy or mutual information.
Behavioral biologists and evolutionary ecologists more commonly use
decision-theoretic measures, such as the value of information, which assess the
worth of information to a decision maker. Here we show that these two kinds of
measures are intimately related in the context of biological evolution. We
present a simple model of evolution in an uncertain environment, and calculate
the increase in Darwinian fitness that is made possible by information about
the environmental state. This fitness increase -- the fitness value of
information -- is a composite of both Shannon's mutual information and the
decision-theoretic value of information. Furthermore, we show that in certain
cases the fitness value of responding to a cue is exactly equal to the mutual
information between the cue and the environment. In general the Shannon entropy
of the environment, which seemingly fails to take anything about organismal
fitness into account, nonetheless imposes an upper bound on the fitness value
of information.
|
q-bio/0511045
|
The use of the GARP genetic algorithm and internet grid computing in the
Lifemapper world atlas of species biodiversity
|
q-bio.QM cs.DC cs.NE q-bio.OT
|
Lifemapper (http://www.lifemapper.org) is a predictive electronic atlas of
the Earth's biological diversity. Using a screensaver version of the GARP
genetic algorithm for modeling species distributions, Lifemapper harnesses vast
computing resources through volunteers' PCs, similar to SETI@home, to develop
models of the distribution of the world's fauna and flora. The Lifemapper
project's primary goal is to provide an up-to-date and comprehensive database
of species maps and prediction models (i.e. a fauna and flora of the world)
using available data on species' locations.
specimen data from distributed museum collections and an archive of geospatial
environmental correlates. A central server maintains a dynamic archive of
species maps and models for research, outreach to the general community, and
feedback to museum data providers. This paper is a case study in the role, use
and justification of a genetic algorithm in development of large-scale
environmental informatics infrastructure.
|
q-bio/0511046
|
Improving ecological niche models by data mining large environmental
datasets for surrogate models
|
q-bio.QM cs.AI
|
WhyWhere is a new ecological niche modeling (ENM) algorithm for mapping and
explaining the distribution of species. The algorithm uses image processing
methods to efficiently sift through large amounts of data to find the few
variables that best predict species occurrence. The purpose of this paper is to
describe and justify the main parameterizations and to show preliminary success
at rapidly providing accurate, scalable, and simple ENMs. Preliminary results
for 6 species of plants and animals in different regions indicate a significant
(p<0.01) 14% increase in accuracy over the GARP algorithm using models with
few, typically two, variables. The increase is attributed to access to
additional data, particularly monthly vs. annual climate averages. WhyWhere is
also 6 times faster than GARP on large data sets. A data mining based approach
with transparent access to remote data archives is a new paradigm for ENM,
particularly suited to finding correlates in large databases of fine resolution
surfaces. Software for WhyWhere is freely available, both as a service and in a
desktop downloadable form from the web site http://biodi.sdsc.edu/ww_home.html.
|
q-bio/0603007
|
Compression ratios based on the Universal Similarity Metric still yield
protein distances far from CATH distances
|
q-bio.QM cs.CE physics.data-an q-bio.OT
|
Kolmogorov complexity has inspired several alignment-free distance measures,
based on the comparison of lengths of compressions, which have been applied
successfully in many areas. One of these measures, the so-called Universal
Similarity Metric (USM), has been used by Krasnogor and Pelta to compare simple
protein contact maps, showing that it yielded good clustering on four small
datasets. We report an extensive test of this metric using a much larger and
representative protein dataset: the domain dataset used by Sierk and Pearson to
evaluate seven protein structure comparison methods and two protein sequence
comparison methods. One result is that the Krasnogor-Pelta method has less
domain discriminant power than any of the methods considered by Sierk and
Pearson when using these simple contact maps. In another test, we found that
the USM-based distance has low agreement with the CATH tree structure for the
same benchmark of Sierk and Pearson. In any case, its agreement is lower than
that of a standard sequence alignment method, SSEARCH. Finally, we manually
found many small subsets of the database that are better clustered using
SSEARCH than USM, confirming that Krasnogor and Pelta's conclusions were based
on datasets that were too small.
|
q-bio/0604024
|
The transposition distance for phylogenetic trees
|
q-bio.PE cs.CE math.GR q-bio.OT
|
The search for similarity and dissimilarity measures on phylogenetic trees
has been motivated by the computation of consensus trees, the search by
similarity in phylogenetic databases, and the assessment of clustering results
in bioinformatics. The transposition distance for fully resolved phylogenetic
trees is a recent addition to the extensive collection of available metrics for
comparing phylogenetic trees. In this paper, we generalize the transposition
distance from fully resolved to arbitrary phylogenetic trees, through a
construction that involves an embedding of the set of phylogenetic trees with a
fixed number of labeled leaves into a symmetric group and a generalization of
Reidys-Stadler's involution metric for RNA contact structures. We also present
simple linear-time algorithms for computing it.
|
q-bio/0605020
|
Laws in Darwinian Evolutionary Theory
|
q-bio.PE cond-mat.stat-mech cs.NE math.OC nlin.AO physics.bio-ph q-bio.QM
|
In the present article, recent work on formulating laws in Darwinian
evolutionary dynamics is discussed. Although there is a strong consensus that
general laws in biology may exist, opinions opposing this suggestion are
abundant. Based on recent progress in both mathematics and biology, another
attempt to address this issue is made in the present article. Specifically,
three laws which form a mathematical framework for the evolutionary dynamics in
biology are postulated. The second law is the most quantitative and is
explicitly expressed in the unique form of a stochastic differential equation.
Salient
features of Darwinian evolutionary dynamics are captured by this law: the
probabilistic nature of evolution, ascendancy, and the adaptive landscape. Four
dynamical elements are introduced in this formulation: the ascendant matrix,
the transverse matrix, the Wright evolutionary potential, and the stochastic
drive. The first law may be regarded as a special case of the second law. It
gives the reference point to discuss the evolutionary dynamics. The third law
describes the relationship between the focused level of description and its
lower and higher ones, and defines the dichotomy of deterministic and
stochastic drives. It is an acknowledgement of the hierarchical structure in
biology. A new interpretation of Fisher's fundamental theorem of natural
selection is provided in terms of the F-Theorem. The proposed laws are based on
continuous representation in both time and population. Their generic nature is
demonstrated through their equivalence to classical formulations. The present
three laws appear to provide a coherent framework for the further development
of the subject.
|
q-bio/0607018
|
A p-Adic Model of DNA Sequence and Genetic Code
|
q-bio.GN cs.IT math-ph math.IT math.MP physics.bio-ph
|
Using basic properties of p-adic numbers, we consider a simple new approach
to describing the main aspects of DNA sequences and the genetic code. A central
role in our investigation is played by an ultrametric p-adic information space
whose basic elements are nucleotides, codons and genes. We show that a 5-adic
model is appropriate for DNA sequences. This 5-adic model, combined with the
2-adic distance, is also suitable for the genetic code and for more advanced
employment in genomics. We find that genetic code degeneracy is related to the
p-adic distance between codons.
|
q-bio/0610040
|
Metric learning pairwise kernel for graph inference
|
q-bio.QM cs.LG
|
Much recent work in bioinformatics has focused on the inference of various
types of biological networks, representing gene regulation, metabolic
processes, protein-protein interactions, etc. A common setting involves
inferring network edges in a supervised fashion from a set of high-confidence
edges, possibly characterized by multiple, heterogeneous data sets (protein
sequence, gene expression, etc.). Here, we distinguish between two modes of
inference in this setting: direct inference based upon similarities between
nodes joined by an edge, and indirect inference based upon similarities between
one pair of nodes and another pair of nodes. We propose a supervised approach
for the direct case by translating it into a distance metric learning problem.
A relaxation of the resulting convex optimization problem leads to the support
vector machine (SVM) algorithm with a particular kernel for pairs, which we
call the metric learning pairwise kernel (MLPK). We demonstrate, using several
real biological networks, that this direct approach often improves upon the
state-of-the-art SVM for indirect inference with the tensor product pairwise
kernel.
|
q-bio/0612013
|
Clustering fetal heart rate tracings by compression
|
q-bio.TO cs.CV cs.IR q-bio.QM
|
Fetal heart rate (FHR) monitoring, before and during labor, is a very
important medical practice in the detection of fetuses in danger. We clustered
FHR tracings by compression in order to identify abnormal ones. We use a
recently introduced approach based on algorithmic information theory, a
theoretical, rigorous and well-studied notion of information content in
individual objects. The new method can mine patterns in completely different
areas; there are no domain-specific parameters to set, and it does not require
specific background knowledge. At the highest level the FHR tracings were
clustered according to an unanticipated feature, namely the technology used in
signal acquisition. At the lower levels all tracings with abnormal or
suspicious patterns were clustered together, independent of the technology
used. Moreover, FHR tracings with future poor neonatal outcomes were included
in the cluster with other suspicious patterns.
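The clustering-by-compression approach rests on approximating an (uncomputable) information distance with a real compressor. A minimal sketch of the Normalized Compression Distance (NCD) using zlib as a stand-in compressor (the study's actual compressor and FHR data are not reproduced; the byte strings below are hypothetical):

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size in bytes -- a practical stand-in for the
    algorithmic (Kolmogorov) information content used in the theory."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: near 0 for similar sequences,
    near 1 (sometimes slightly above) for unrelated ones."""
    cx, cy = csize(x), csize(y)
    return (csize(x + y) - min(cx, cy)) / max(cx, cy)

# Toy stand-ins for digitized heart-rate tracings (hypothetical data).
regular = b"120 121 122 121 " * 50
irregular = bytes(range(256)) * 4
```

Structurally similar sequences compress well jointly, giving a small NCD; hierarchical clustering on the pairwise NCD matrix then groups tracings without any domain-specific parameters, which is the property the abstract emphasizes.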
|
q-bio/0701009
|
Attribute Exploration of Discrete Temporal Transitions
|
q-bio.QM cs.AI q-bio.MN
|
Discrete temporal transitions occur in a variety of domains, but this work is
mainly motivated by applications in molecular biology: explaining and analyzing
observed transcriptome and proteome time series by literature and database
knowledge. The starting point of a formal concept analysis model is presented.
The objects of a formal context are states of the entities of interest, and the
attributes are the variable properties defining the current state (e.g.
observed presence or absence of proteins). Temporal transitions assign a
relation to the objects, defined by deterministic or non-deterministic
transition rules between sets of pre- and postconditions. This relation can be
generalized to its transitive closure, i.e. states are related if one results
from the other by a transition sequence of arbitrary length. The focus of the
work is the adaptation of the attribute exploration algorithm to such a
relational context, so that questions concerning temporal dependencies can be
asked during the exploration process and be answered from the computed stem
base. Results are given for the abstract example of a game and a small gene
regulatory network relevant to a biomedical question.
|
q-bio/0703044
|
On the existence of potential landscape in the evolution of complex
systems
|
q-bio.QM cond-mat.stat-mech cs.IT math.DS math.IT nlin.AO q-bio.MN
|
A recently developed treatment of stochastic processes leads to the
construction of a potential landscape for the dynamical evolution of complex
systems. Since the existence of a potential function in generic settings has
been frequently questioned in the literature, here we study several related
theoretical issues that lie at the core of the construction. We show that the
novel treatment, via a transformation, is closely related to the symplectic
structure that is central in many branches of theoretical physics. Using this
insight, we demonstrate an invariant under the transformation. We further
explicitly demonstrate, in the one-dimensional case, the contradistinction
between the new treatment and those of Ito and Stratonovich, as well as others.
Our results strongly suggest that this method from statistical physics can be
useful in studying stochastic, complex systems in general.
|
quant-ph/0011122
|
Algorithmic Theories of Everything
|
quant-ph cs.AI cs.CC cs.LG hep-th math-ph math.MP physics.comp-ph
|
The probability distribution P from which the history of our universe is
sampled represents a theory of everything or TOE. We assume P is formally
describable. Since most (uncountably many) distributions are not, this imposes
a strong inductive bias. We show that P(x) is small for any universe x lacking
a short description, and study the spectrum of TOEs spanned by two Ps, one
reflecting the most compact constructive descriptions, the other the fastest
way of computing everything. The former derives from generalizations of
traditional computability, Solomonoff's algorithmic probability, Kolmogorov
complexity, and objects more random than Chaitin's Omega, the latter from
Levin's universal search and a natural resource-oriented postulate: the
cumulative prior probability of all x incomputable within time t by this
optimal algorithm should be 1/t. Between both Ps we find a universal
cumulatively enumerable measure that dominates traditional enumerable measures;
any such CEM must assign low probability to any universe lacking a short
enumerating program. We derive P-specific consequences for evolving observers,
inductive reasoning, quantum physics, philosophy, and the expected duration of
our universe.
|
quant-ph/0012111
|
Quantum error-correcting codes associated with graphs
|
quant-ph cs.IT math-ph math.IT math.MP
|
We present a construction scheme for quantum error correcting codes. The
basic ingredients are a graph and a finite abelian group, from which the code
can explicitly be obtained. We prove necessary and sufficient conditions for
the graph such that the resulting code corrects a certain number of errors.
This allows a simple verification of the 1-error correcting property of
fivefold codes in any dimension. As new examples we construct a large class of
codes saturating the singleton bound, as well as a tenfold code detecting 3
errors.
|
quant-ph/0102108
|
Quantum Kolmogorov Complexity Based on Classical Descriptions
|
quant-ph cs.CC cs.IT math.IT math.LO
|
We develop a theory of the algorithmic information in bits contained in an
individual pure quantum state. This extends classical Kolmogorov complexity to
the quantum domain retaining classical descriptions. Quantum Kolmogorov
complexity coincides with the classical Kolmogorov complexity on the classical
domain. Quantum Kolmogorov complexity is upper bounded and can be effectively
approximated from above under certain conditions. With high probability a
quantum object is incompressible. Upper- and lower bounds of the quantum
complexity of multiple copies of individual pure quantum states are derived and
may shed some light on the no-cloning properties of quantum states. In the
quantum situation complexity is not sub-additive. We discuss some relations
with ``no-cloning'' and ``approximate cloning'' properties.
|
quant-ph/0107129
|
Algebraic geometric construction of a quantum stabilizer code
|
quant-ph cs.IT math.AG math.IT math.SG
|
The stabilizer code is the most general algebraic construction of quantum
error-correcting codes proposed so far. A stabilizer code can be constructed
from a self-orthogonal subspace of a symplectic space over a finite field. We
propose a construction method of such a self-orthogonal space using an
algebraic curve. By using the proposed method we construct an asymptotically
good sequence of binary stabilizer codes. As a byproduct we improve the
Ashikhmin-Litsyn-Tsfasman bound of quantum codes. The main results in this
paper can be understood without knowledge of quantum mechanics.
|
quant-ph/0108073
|
Quantum Information in Space and Time
|
quant-ph cond-mat.mes-hall cs.IT gr-qc hep-ph hep-th math-ph math.IT math.MP math.PR
|
Many important results in modern quantum information theory have been
obtained for an idealized situation when the spacetime dependence of quantum
phenomena is neglected. However the transmission and processing of (quantum)
information is a physical process in spacetime. Therefore such basic notions in
quantum information theory as the notions of composite systems, entangled
states and the channel should be formulated in space and time. We emphasize the
importance of the investigation of quantum information in space and time.
Entangled states in space and time are considered. A modification of Bell's
equation which includes the spacetime variables is suggested. A general
relation between quantum theory and theory of classical stochastic processes is
proposed. It expresses the condition of local realism in the form of a {\it
noncommutative spectral theorem}. Applications of this relation to the security
of quantum key distribution in quantum cryptography are considered.
|
quant-ph/0108133
|
On Classical and Quantum Cryptography
|
quant-ph cond-mat.mes-hall cs.IT hep-th math-ph math.IT math.MP
|
Lectures on classical and quantum cryptography. Contents: Private key
cryptosystems. Elements of number theory. Public key cryptography and RSA
cryptosystem. Shannon's entropy and mutual information. Entropic uncertainty
relations. The no-cloning theorem. The BB84 quantum cryptographic protocol.
Security proofs. Bell's theorem. The EPRBE quantum cryptographic protocol.
|
quant-ph/0110103
|
Quantum entanglement and geometry of determinantal varieties
|
quant-ph cs.IT math.AG math.IT
|
Quantum entanglement was first recognized as a feature of quantum mechanics
in the famous paper of Einstein, Podolsky and Rosen [18]. Recently it has been
realized that quantum entanglement is a key ingredient in quantum computation,
quantum communication and quantum cryptography ([16],[17],[6]). In this paper,
we introduce algebraic sets, which are determinantal varieties in the complex
projective spaces or the products of complex projective spaces, for the mixed
states in bipartite or multipartite quantum systems as their invariants under
local unitary transformations. These invariants arise naturally from the
physical consideration of measuring mixed states by separable pure states. In
this way, the algebraic geometry and complex differential geometry of these
algebraic sets turn out to be powerful tools for the understanding of quantum
entanglement. Our construction has applications to the following important
topics in quantum information theory: 1) separability criterion: it is proved
that the algebraic sets have to be sums of linear subspaces if the mixed
states are separable; 2) lower bounds on Schmidt numbers: that is, generic
low-rank bipartite mixed states are entangled in many degrees of freedom; 3)
simulation of Hamiltonians: it is proved that the simulation of semi-positive
Hamiltonians of the same rank implies projective isomorphisms of the
corresponding algebraic sets; 4) construction of bound entanglement: examples
of entangled mixed states which are invariant under partial transposition
(thus PPT bound entanglement) are constructed systematically from our new
separability criterion. On the other hand, many examples of entangled mixed
states with rich algebraic-geometric structure in their associated
determinantal varieties are constructed and studied from this point of view.
|
quant-ph/0202015
|
Semiclassical Neural Network
|
quant-ph cond-mat.dis-nn cs.AI q-bio
|
We have constructed a simple semiclassical model of a neural network in which
neurons have quantum links with one another in a chosen way and affect one
another in a fashion analogous to action potentials. We have examined the role
of stochasticity introduced by the quantum potential and compared the system
with the classical integrate-and-fire model of Hopfield. Average periodicity
and short-term retentivity of input memory are noted.
|
quant-ph/0202016
|
Neural Networks with c-NOT Gated Nodes
|
quant-ph cond-mat.dis-nn cs.AI q-bio
|
We try to design a quantum neural network with qubits instead of classical
neurons with deterministic states, and also with quantum operators replacing
the classical action potentials. With our choice of gates interconnecting the
neural lattice, it appears that the state of the system behaves in ways
reflecting both the strengths of coupling between neurons and the initial
conditions. We find that, depending on whether there is a threshold for
emission from the excited to the ground state, the system shows either
aperiodic oscillations or coherent ones with a periodicity depending on the
strength of coupling.
|
quant-ph/0203010
|
Entangled Quantum Networks
|
quant-ph cond-mat.dis-nn cs.AI
|
We present some results from simulation of a network of nodes connected by
c-NOT gates with nearest neighbors. Though initially we begin with pure states
of varying boundary conditions, the updating with time quickly involves a
complicated entanglement involving all or most nodes. As a normal c-NOT gate,
though unitary for a single pair of nodes, seems not to remain so when used
naively in a network, we use a manifestly unitary form of the transition
matrix with c?-NOT gates, which invert the phase as well as flip the qubit.
This leads to complete entanglement of the net, but with variable coefficients
for the different components of the superposition. It is interesting to note
that by a simple logical back projection the original input state can be
recovered in most cases. We also prove that it is not possible for a sequence
of unitary operators working on a net to make it move from an aperiodic regime
to a periodic one, unlike some classical cases where phase-locking happens in
course of evolution. However, we show that it is possible to introduce by hand
periodic orbits to sets of initial states, which may be useful in forming
dynamic pattern recognition systems.
|
quant-ph/0203105
|
The capacity of hybrid quantum memory
|
quant-ph cs.IT math-ph math.IT math.MP math.OA
|
The general stable quantum memory unit is a hybrid consisting of a classical
digit with a quantum digit (qudit) assigned to each classical state. The shape
of the memory is the vector of sizes of these qudits, which may differ. We
determine when N copies of a quantum memory A embed in N(1+o(1)) copies of
another quantum memory B. This relationship captures the notion that B is at
least as useful as A for all purposes in the bulk limit. We show that the
embeddings exist if and only if for all p >= 1, the p-norm of the shape of A
does not exceed the p-norm of the shape of B. The log of the p-norm of the
shape of A can be interpreted as the maximum of S(\rho) + H(\rho)/p (quantum
entropy plus discounted classical entropy) taken over all mixed states \rho on
A. We also establish a noiseless coding theorem that justifies these entropies.
The noiseless coding theorem and the bulk embedding theorem together say that
either A blindly bulk-encodes into B with perfect fidelity, or A admits a state
that does not visibly bulk-encode into B with high fidelity.
In conclusion, the utility of a hybrid quantum memory is determined by its
simultaneous capacity for classical and quantum entropy, which is not a finite
list of numbers, but rather a convex region in the classical-quantum entropy
plane.
|
quant-ph/0205161
|
Contextualizing Concepts using a Mathematical Generalization of the
Quantum Formalism
|
quant-ph cs.AI q-bio.NC
|
We outline the rationale and preliminary results of using the State Context
Property (SCOP) formalism, originally developed as a generalization of quantum
mechanics, to describe the contextual manner in which concepts are evoked,
used, and combined to generate meaning. The quantum formalism was developed to
cope with problems arising in the description of (1) the measurement process,
and (2) the generation of new states with new properties when particles become
entangled. Similar problems arising with concepts motivated the formal
treatment introduced here. Concepts are viewed not as fixed representations,
but entities existing in states of potentiality that require interaction with a
context--a stimulus or another concept--to 'collapse' to an instantiated form
(e.g. exemplar, prototype, or other possibly imaginary instance). The stimulus
situation plays the role of the measurement in physics, acting as context that
induces a change of the cognitive state from superposition state to collapsed
state. The collapsed state is more likely to consist of a conjunction of
concepts for associative than for analytic thought, because more stimulus or
concept properties take part in the collapse. We provide two contextual
measures of conceptual distance--one using collapse probabilities and the
other weighted properties--and show how they can be applied to conjunctions
using the pet fish problem.
|
quant-ph/0207069
|
Data compression limit for an information source of interacting qubits
|
quant-ph cs.IT math-ph math.IT math.MP
|
  A system of interacting qubits can be viewed as a non-i.i.d. quantum
information source. A possible model of such a source is provided by a quantum
spin system, in which spin-1/2 particles located at sites of a lattice interact
with each other. We establish the limit for the compression of information from
such a source and show that asymptotically it is given by the von Neumann
entropy rate. Our result can be viewed as a quantum analog of Shannon's
noiseless coding theorem for a class of non-i.i.d. quantum information
sources.
|
quant-ph/0210176
|
Quantum Pattern Recognition
|
quant-ph cond-mat.dis-nn cs.IR nlin.AO q-bio.NC
|
I review and expand the model of quantum associative memory that I have
recently proposed. In this model binary patterns of n bits are stored in the
quantum superposition of the appropriate subset of the computational basis of n
qubits. Information can be retrieved by performing an input-dependent rotation
of the memory quantum state within this subset and measuring the resulting
state. The amplitudes of this rotated memory state are peaked on those stored
patterns which are closest in Hamming distance to the input, resulting in a
high probability of measuring a memory pattern very similar to it. The accuracy
of pattern recall can be tuned by adjusting a parameter playing the role of an
effective temperature. This model solves the well-known capacity shortage
problem of classical associative memories, providing an exponential improvement
in capacity. The price to pay is the probabilistic nature of information
retrieval, a feature that, however, this model shares with our own brain.
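A purely classical toy version of the recall rule described above (an illustration, not the quantum algorithm: a Boltzmann-like exp(-d/t) weighting is assumed here as a stand-in for the rotated-state amplitudes):

```python
import math

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

def recall_distribution(memories, query, t=1.0):
    """Probability of recalling each stored pattern, peaked on the patterns
    closest in Hamming distance to the query; t plays the role of the
    effective temperature (assumed exp(-d/t) weighting, a classical
    stand-in for the quantum model's measurement probabilities)."""
    weights = [math.exp(-hamming(m, query) / t) for m in memories]
    total = sum(weights)
    return [w / total for w in weights]
```

Lowering t sharpens the distribution onto the nearest stored pattern, at the price of retrieval remaining probabilistic, mirroring the tunable recall accuracy described in the abstract.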
|
quant-ph/0301075
|
Selective pressures on genomes in molecular evolution
|
quant-ph cs.NE nlin.AO physics.bio-ph q-bio.PE
|
We describe the evolution of macromolecules as an information transmission
process and apply tools from Shannon information theory to it. This allows us
to isolate three independent, competing selective pressures that we term
compression, transmission, and neutrality selection. The first two affect
genome length: the pressure to conserve resources by compressing the code, and
the pressure to acquire additional information that improves the channel,
increasing the rate of information transmission into each offspring. Noisy
transmission channels (replication with mutations) give rise to a third
pressure that acts on the actual encoding of information; it maximizes the
fraction of mutations that are neutral with respect to the phenotype. This
neutrality selection has important implications for the evolution of
evolvability. We demonstrate each selective pressure in experiments with
digital organisms.
|
quant-ph/0307170
|
Quantum Stein's lemma revisited, inequalities for quantum entropies, and
a concavity theorem of Lieb
|
quant-ph cs.IT math-ph math.IT math.MP
|
We derive the monotonicity of the quantum relative entropy by an elementary
operational argument based on Stein's lemma in quantum hypothesis testing. For
the latter we present an elementary and short proof that requires the law of
large numbers only. Joint convexity of the quantum relative entropy is proven
too, resulting in a self-contained elementary version of Tropp's approach to
Lieb's concavity theorem, according to which the map tr(exp(h+log a)) is
concave in a on positive operators for self-adjoint h.
|
quant-ph/0308158
|
New Approaches to Quantum Computer Simulation in a Classical Supercomputer
|
quant-ph cs.CE
|
Classical simulation is important because it sets a benchmark for quantum
computer performance, and it is currently the only way to exercise larger
numbers of qubits. To achieve larger simulations, sparse-matrix processing is
emphasized below, trading memory for processing. It performed well on NCSA
supercomputers, giving a state vector in convenient continuous portions ready
for post-processing.
|
quant-ph/0309022
|
Quantum Aspects of Semantic Analysis and Symbolic Artificial
Intelligence
|
quant-ph cs.CL
|
Modern approaches to semantic analysis, if reformulated as Hilbert-space
problems, reveal formal structures known from quantum mechanics. A similar
situation is found in distributed representations of cognitive structures
developed for the purposes of neural networks. We take a closer look at the
similarities and differences between the above two fields and quantum
information theory.
|
quant-ph/0310075
|
Symmetric Informationally Complete Quantum Measurements
|
quant-ph cs.IT math.FA math.IT
|
We consider the existence in arbitrary finite dimensions d of a POVM
comprised of d^2 rank-one operators all of whose operator inner products are
equal. Such a set is called a ``symmetric, informationally complete'' POVM
(SIC-POVM) and is equivalent to a set of d^2 equiangular lines in C^d.
SIC-POVMs are relevant for quantum state tomography, quantum cryptography, and
foundational issues in quantum mechanics. We construct SIC-POVMs in dimensions
two, three, and four. We further conjecture that a particular kind of
group-covariant SIC-POVM exists in arbitrary dimensions, providing numerical
results up to dimension 45 to bolster this claim.
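The defining equal-overlap condition can be checked directly in the smallest case d = 2, where four "tetrahedron" qubit states form a SIC-POVM with pairwise |<psi_i|psi_j>|^2 = 1/(d+1) = 1/3 (a sketch; the particular state parameterization below is one standard choice, not taken from the paper):

```python
import cmath
import itertools
import math

# Four qubit states whose Bloch vectors form a regular tetrahedron (d = 2).
w = cmath.exp(2j * math.pi / 3)
states = [(1, 0)] + [
    (math.sqrt(1 / 3), math.sqrt(2 / 3) * w ** k) for k in range(3)
]

def overlap2(u, v):
    """|<u|v>|^2 for two-component state vectors."""
    ip = u[0].conjugate() * v[0] + u[1].conjugate() * v[1]
    return abs(ip) ** 2

# Every distinct pair should have squared overlap 1/(d+1) = 1/3; the
# rank-one POVM elements are then (1/d)|psi><psi|.
pairs = [overlap2(u, v) for u, v in itertools.combinations(states, 2)]
```

All six pairwise squared overlaps come out equal to 1/3, which is exactly the equiangular-lines condition the abstract refers to.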
|
quant-ph/0411140
|
Improved Bounds on Quantum Learning Algorithms
|
quant-ph cs.LG
|
In this article we give several new results on the complexity of algorithms
that learn Boolean functions from quantum queries and quantum examples.
Hunziker et al. conjectured that for any class C of Boolean functions, the
number of quantum black-box queries which are required to exactly identify an
unknown function from C is $O(\frac{\log |C|}{\sqrt{{\hat{\gamma}}^{C}}})$,
where $\hat{\gamma}^{C}$ is a combinatorial parameter of the class C. We
essentially resolve this conjecture in the affirmative by giving a quantum
algorithm that, for any class C, identifies any unknown function from C using
$O(\frac{\log |C| \log \log |C|}{\sqrt{{\hat{\gamma}}^{C}}})$ quantum black-box
queries.
We consider a range of natural problems intermediate between the exact
learning problem (in which the learner must obtain all bits of information
about the black-box function) and the usual problem of computing a predicate
(in which the learner must obtain only one bit of information about the
black-box function). We give positive and negative results on when the quantum
and classical query complexities of these intermediate problems are
polynomially related to each other.
Finally, we improve the known lower bounds on the number of quantum examples
(as opposed to quantum black-box queries) required for $(\epsilon,\delta)$-PAC
learning any concept class of Vapnik-Chervonenkis dimension d over the domain
$\{0,1\}^n$ from $\Omega(\frac{d}{n})$ to $\Omega(\frac{1}{\epsilon}\log
\frac{1}{\delta}+d+\frac{\sqrt{d}}{\epsilon})$. This new lower bound comes
closer to matching known upper bounds for classical PAC learning.
|
quant-ph/0501099
|
Simple Rate-1/3 Convolutional and Tail-Biting Quantum Error-Correcting
Codes
|
quant-ph cs.IT math.IT
|
Simple rate-1/3 single-error-correcting unrestricted and CSS-type quantum
convolutional codes are constructed from classical self-orthogonal
$\F_4$-linear and $\F_2$-linear convolutional codes, respectively. These
quantum convolutional codes have higher rate than comparable quantum block
codes or previous quantum convolutional codes, and are simple to decode. A
block single-error-correcting [9, 3, 3] tail-biting code is derived from the
unrestricted convolutional code, and similarly a [15, 5, 3] CSS-type block code
from the CSS-type convolutional code.
|
quant-ph/0501126
|
Primitive Quantum BCH Codes over Finite Fields
|
quant-ph cs.IT math.IT
|
An attractive feature of BCH codes is that one can infer valuable information
from their design parameters (length, size of the finite field, and designed
distance), such as bounds on the minimum distance and dimension of the code. In
this paper, it is shown that one can also deduce from the design parameters
whether or not a primitive, narrow-sense BCH code contains its Euclidean or
Hermitian dual code. This information is invaluable in the construction of
quantum BCH codes. A new proof is provided for the dimension of BCH codes with
small designed distance, and simple bounds on the minimum distance of such
codes and their duals are derived as a consequence. These results allow us to
derive the parameters of two families of primitive quantum BCH codes as a
function of their design parameters.
|
quant-ph/0501152
|
A generalized skew information and uncertainty relation
|
quant-ph cs.IT math.IT
|
  A generalized skew information is defined and a generalized uncertainty
relation is established with the help of a trace inequality which was recently
proven by J. I. Fujii. In addition, we prove the trace inequality conjectured
by S. Luo and Z. Zhang. Finally, we point out that Theorem 1 in {\it S. Luo
and Q. Zhang, IEEE Trans. IT, Vol. 50, pp. 1778-1782 (2004)} is incorrect in
general, by giving a simple counter-example.
|
quant-ph/0503236
|
On Self-Dual Quantum Codes, Graphs, and Boolean Functions
|
quant-ph cs.IT math.IT
|
A short introduction to quantum error correction is given, and it is shown
that zero-dimensional quantum codes can be represented as self-dual additive
codes over GF(4) and also as graphs. We show that graphs representing several
such codes with high minimum distance can be described as nested regular graphs
having minimum regular vertex degree and containing long cycles. Two graphs
correspond to equivalent quantum codes if they are related by a sequence of
local complementations. We use this operation to generate orbits of graphs, and
thus classify all inequivalent self-dual additive codes over GF(4) of length up
to 12, where previously only all codes of length up to 9 were known. We show
that these codes can be interpreted as quadratic Boolean functions, and we
define non-quadratic quantum codes, corresponding to Boolean functions of
higher degree. We look at various cryptographic properties of Boolean
functions, in particular the propagation criteria. The new aperiodic
propagation criterion (APC) and the APC distance are then defined. We show that
the distance of a zero-dimensional quantum code is equal to the APC distance of
the corresponding Boolean function. Orbits of Boolean functions with respect to
the {I,H,N}^n transform set are generated. We also study the peak-to-average
power ratio with respect to the {I,H,N}^n transform set (PAR_IHN), and prove
that PAR_IHN of a quadratic Boolean function is related to the size of the
maximum independent set over the corresponding orbit of graphs. A construction
technique for non-quadratic Boolean functions with low PAR_IHN is proposed. It
is finally shown that both PAR_IHN and APC distance can be interpreted as
partial entanglement measures.
|
quant-ph/0506080
|
Entropy and Quantum Kolmogorov Complexity: A Quantum Brudno's Theorem
|
quant-ph cs.IT math-ph math.DS math.IT math.MP
|
In classical information theory, entropy rate and Kolmogorov complexity per
symbol are related by a theorem of Brudno. In this paper, we prove a quantum
version of this theorem, connecting the von Neumann entropy rate and two
notions of quantum Kolmogorov complexity, both based on the shortest qubit
descriptions of qubit strings that, run by a universal quantum Turing machine,
reproduce them as outputs.
|
quant-ph/0507231
|
Algebras of Measurements: the logical structure of Quantum Mechanics
|
quant-ph cs.AI
|
In Quantum Physics, a measurement is represented by a projection on some
closed subspace of a Hilbert space. We study algebras of operators that
abstract from the algebra of projections on closed subspaces of a Hilbert
space. The properties of such operators are justified on epistemological
grounds. Commutation of measurements is a central topic of interest. Classical
logical systems may be viewed as measurement algebras in which all measurements
commute. Keywords: Quantum measurements, Measurement algebras, Quantum Logic.
PACS: 02.10.-v.
|
quant-ph/0508070
|
Nonbinary stabilizer codes over finite fields
|
quant-ph cs.IT math.IT
|
One formidable difficulty in quantum communication and computation is to
protect information-carrying quantum states against undesired interactions with
the environment. In past years, many good quantum error-correcting codes have
been derived as binary stabilizer codes. Fault-tolerant quantum computation
prompted the study of nonbinary quantum codes, but the theory of such codes is
not as advanced as that of binary quantum codes. This paper describes the basic
theory of stabilizer codes over finite fields. The relation between stabilizer
codes and general quantum codes is clarified by introducing a Galois theory for
these objects. A characterization of nonbinary stabilizer codes over GF(q) in
terms of classical codes over GF(q^2) is provided that generalizes the
well-known notion of additive codes over GF(4) of the binary case. This paper
derives lower and upper bounds on the minimum distance of stabilizer codes,
gives several code constructions, and derives numerous families of stabilizer
codes, including quantum Hamming codes, quadratic residue codes, quantum Melas
codes, quantum BCH codes, and quantum character codes. The puncturing theory by
Rains is generalized to additive codes that are not necessarily pure. Bounds on
the maximal length of maximum distance separable stabilizer codes are given. A
discussion of open problems concludes this paper.
|
quant-ph/0511016
|
Convolutional and tail-biting quantum error-correcting codes
|
quant-ph cs.IT math.IT
|
Rate-(n-2)/n unrestricted and CSS-type quantum convolutional codes with up to
4096 states and minimum distances up to 10 are constructed as stabilizer codes
from classical self-orthogonal rate-1/n F_4-linear and binary linear
convolutional codes, respectively. These codes generally have higher rate and
less decoding complexity than comparable quantum block codes or previous
quantum convolutional codes. Rate-(n-2)/n block stabilizer codes with the same
rate and error-correction capability and essentially the same decoding
algorithms are derived from these convolutional codes via tail-biting.
|
quant-ph/0511172
|
On Classical Teleportation and Classical Nonlocality
|
quant-ph cs.IT math.IT
|
An interesting protocol for classical teleportation of an unknown classical
state was recently suggested by Cohen, and by Gour and Meyer. In that protocol,
Bob can sample from a probability distribution P that is given to Alice, even
if Alice has absolutely no knowledge about P. Pursuing a similar line of
thought, we suggest here a limited form of nonlocality - "classical
nonlocality". Our nonlocality is the (somewhat limited) classical analogue of
the Hughston-Jozsa-Wootters (HJW) quantum nonlocality. The HJW nonlocality
tells us how, for a given density matrix rho, Alice can generate any
rho-ensemble on the North Star. This is done using surprisingly few resources -
one shared entangled state (prepared in advance), one generalized quantum
measurement, and no communication. Similarly, our classical nonlocality
shows how, for a given probability distribution P, Alice can generate any
P-ensemble on the North Star, using only one correlated state (prepared in
advance), one (generalized) classical measurement, and no communication.
It is important to clarify that while the classical teleportation and classical
nonlocality protocols are probably of little significance from a classical
information processing point of view, they contribute significantly to our
understanding of what exactly is quantum in their well-established and
celebrated quantum analogues.
|
quant-ph/0511175
|
A Proof of the Security of Quantum Key Distribution
|
quant-ph cs.CR cs.IT math.IT
|
We prove the security of theoretical quantum key distribution against the
most general attacks which can be performed on the channel, by an eavesdropper
who has unlimited computation abilities, and the full power allowed by the
rules of classical and quantum physics. A key created that way can then be used
to transmit secure messages such that their security is also unaffected in the
future.
|
quant-ph/0601115
|
Phase-Remapping Attack in Practical Quantum Key Distribution Systems
|
quant-ph cs.IT math.IT
|
Quantum key distribution (QKD) can be used to generate secret keys between
two distant parties. Even though QKD has been proven unconditionally secure
against eavesdroppers with unlimited computation power, practical
implementations of QKD may contain loopholes that may lead to the generated
secret keys being compromised. In this paper, we propose a phase-remapping
attack targeting two practical bidirectional QKD systems (the "plug & play"
system and the Sagnac system). We show that if the users of the systems are
unaware of our attack, the final key shared between them can be compromised in
some situations. Specifically, we show that, in the case of the
Bennett-Brassard 1984 (BB84) protocol with ideal single-photon sources, when
the quantum bit error rate (QBER) is between 14.6% and 20%, our attack renders
the final key insecure, whereas the same range of QBER values has been proved
secure if the two users are unaware of our attack; we also demonstrate three
situations with realistic devices where positive key rates are obtained without
the consideration of Trojan horse attacks but in fact no key can be distilled.
We remark that our attack is feasible with current technology alone. It is
therefore important to guard against this attack in practical systems. In
finding our attack, we minimize the QBER over individual
measurements described by a general POVM, which has some similarity with the
standard quantum state discrimination problem.
|
quant-ph/0602129
|
Non-catastrophic Encoders and Encoder Inverses for Quantum Convolutional
Codes
|
quant-ph cs.IT math.IT
|
We present an algorithm to construct quantum circuits for encoding and
inverse encoding of quantum convolutional codes. We show that any quantum
convolutional code contains a subcode of finite index which has a
non-catastrophic encoding circuit. Our work generalizes the conditions for
non-catastrophic encoders derived in a paper by Ollivier and Tillich
(quant-ph/0401134) which are applicable only for a restricted class of quantum
convolutional codes. We also show that the encoders and their inverses
constructed by our method can naturally be applied online, i.e., qubits can be
sent and received with constant delay.
|
quant-ph/0603031
|
Channel capacities of classical and quantum list decoding
|
quant-ph cs.IT math.IT
|
We focus on classical and quantum list decoding. Nishimura obtained the
capacity of list decoding in the case where the list size does not grow
exponentially; the capacity in the exponential-list case remained open even
classically, although Nishimura had established the converse part. We derive
the channel capacities in the classical and quantum cases for an exponentially
growing list size. The converse part in the quantum case is obtained by
modifying Nagaoka's simple proof of the strong converse theorem for channel
capacity. The direct part follows from a quite simple argument.
|
quant-ph/0603098
|
Quantum broadcast channels
|
quant-ph cs.IT math.IT
|
We consider quantum channels with one sender and two receivers, used in
several different ways for the simultaneous transmission of independent
messages. We begin by extending the technique of superposition coding to
quantum channels with a classical input to give a general achievable region. We
also give outer bounds to the capacity regions for various special cases from
the classical literature and prove that superposition coding is optimal for a
class of channels. We then consider extensions of superposition coding for
channels with a quantum input, where some of the messages transmitted are
quantum instead of classical, in the sense that the parties establish bipartite
or tripartite GHZ entanglement. We conclude by using state merging to give
achievable rates for establishing bipartite entanglement between different
pairs of parties with the assistance of free classical communication.
|
quant-ph/0603135
|
Interaction in Quantum Communication
|
quant-ph cs.CC cs.IT math.IT
|
In some scenarios there are ways of conveying information with many fewer,
even exponentially fewer, qubits than is possible classically. Moreover, some
of these methods have a very simple structure: they involve only a few message
exchanges between the communicating parties. It is therefore natural to ask
whether every classical protocol may be transformed to a ``simpler'' quantum
protocol--one that has similar efficiency, but uses fewer message exchanges.
We show that for any constant k, there is a problem such that its k+1 message
classical communication complexity is exponentially smaller than its k message
quantum communication complexity. This, in particular, proves a round hierarchy
theorem for quantum communication complexity, and implies, via a simple
reduction, an Omega(N^{1/k}) lower bound for k message quantum protocols for
Set Disjointness for constant k.
En route, we prove information-theoretic lemmas and define a related measure
of correlation, the informational distance, that we believe may be of
significance in other contexts as well.
|
quant-ph/0604013
|
Beyond i.i.d. in Quantum Information Theory
|
quant-ph cs.IT math.IT
|
The information spectrum approach gives general formulae for optimal rates of
codes in many areas of information theory. In this paper the quantum spectral
divergence rates are defined and properties of the rates are derived. The
entropic rates, conditional entropic rates, and spectral mutual information
rates are then defined in terms of the spectral divergence rates. Properties
including subadditivity, chain rules, Araki-Lieb inequalities, and monotonicity
are then explored.
|
quant-ph/0604161
|
Clifford Code Constructions of Operator Quantum Error Correcting Codes
|
quant-ph cs.IT math.IT
|
Recently, operator quantum error-correcting codes have been proposed to unify
and generalize decoherence free subspaces, noiseless subsystems, and quantum
error-correcting codes. This note introduces a natural construction of such
codes in terms of Clifford codes, an elegant generalization of stabilizer codes
due to Knill. Character-theoretic methods are used to derive a simple method to
construct operator quantum error-correcting codes from any classical additive
code over a finite field.
|
quant-ph/0605030
|
Strongly Universal Quantum Turing Machines and Invariance of Kolmogorov
Complexity
|
quant-ph cs.IT math-ph math.IT math.MP
|
We show that there exists a universal quantum Turing machine (UQTM) that can
simulate every other QTM until the other QTM has halted and then halt itself
with probability one. This extends work by Bernstein and Vazirani who have
shown that there is a UQTM that can simulate every other QTM for an arbitrary,
but preassigned number of time steps. As a corollary to this result, we give a
rigorous proof that quantum Kolmogorov complexity as defined by Berthiaume et
al. is invariant, i.e., it depends on the choice of the UQTM only up to an
additive constant. Our proof is based on a new mathematical framework for QTMs,
including a thorough analysis of their halting behaviour. We introduce the
notion of mutually orthogonal halting spaces and show that the information
encoded in an input qubit string can always be effectively decomposed into a
classical and a quantum part.
|
quant-ph/0605041
|
Invertible Quantum Operations and Perfect Encryption of Quantum States
|
quant-ph cs.CR cs.IT math.IT
|
In this note, we characterize the form of an invertible quantum operation,
i.e., a completely positive trace preserving linear transformation (a CPTP map)
whose inverse is also a CPTP map. The precise form of such maps becomes
important in contexts such as self-testing and encryption. We show that these
maps correspond to applying a unitary transformation to the state along with an
ancilla initialized to a fixed state, which may be mixed.
The characterization of invertible quantum operations implies that one-way
schemes for encrypting quantum states using a classical key may be slightly
more general than the ``private quantum channels'' studied by Ambainis, Mosca,
Tapp and de Wolf (FOCS 2000). Nonetheless, we show that their results, most
notably a lower bound of 2n bits of key to encrypt n quantum bits, extend in a
straightforward manner to the general case.
|