| id (string, 9–16 chars) | title (string, 4–278 chars) | categories (string, 5–104 chars) | abstract (string, 6–4.09k chars) |
|---|---|---|---|
0712.0873
|
The price of ignorance: The impact of side-information on delay for
lossless source-coding
|
cs.IT math.IT
|
Inspired by the context of compressing encrypted sources, this paper
considers the general tradeoff between rate, end-to-end delay, and probability
of error for lossless source coding with side-information. The notion of
end-to-end delay is made precise by considering a sequential setting in which
source symbols are revealed in real time and need to be reconstructed at the
decoder within a certain fixed latency requirement. Upper bounds are derived on
the reliability functions with delay when side-information is known only to the
decoder as well as when it is also known at the encoder.
When the encoder is not ignorant of the side-information (including the
trivial case when there is no side-information), it is possible to have
substantially better tradeoffs between delay and probability of error at all
rates. This shows that there is a fundamental price of ignorance in terms of
end-to-end delay when the encoder is not aware of the side information. This
effect is not visible if only fixed-block-length codes are considered. In this
way, side-information in source-coding plays a role analogous to that of
feedback in channel coding.
While the theorems in this paper are asymptotic in terms of long delays and
low probabilities of error, an example is used to show that the qualitative
effects described here are significant even at short and moderate delays.
|
0712.0932
|
Dimensionality Reduction and Reconstruction using Mirroring Neural
Networks and Object Recognition based on Reduced Dimension Characteristic
Vector
|
cs.CV cs.AI cs.NE
|
In this paper, we present a Mirroring Neural Network architecture to perform
non-linear dimensionality reduction and Object Recognition using a reduced
low-dimensional characteristic vector. In addition to dimensionality reduction,
the network also reconstructs (mirrors) the original high-dimensional input
vector from the reduced low-dimensional data. The Mirroring Neural Network
architecture has more processing elements (adalines) in the outer layers and
the fewest in the central layer, forming a
converging-diverging shape in its configuration. Since this network is able to
reconstruct the original image from the output of the innermost layer (which
contains all the information about the input pattern), these outputs can be
used as an object signature to classify patterns. The network is trained to
minimize the discrepancy between the actual output and the input by
backpropagating the mean squared error from the output layer to the input layer.
After successfully training the network, it can reduce the dimension of input
vectors and mirror the patterns fed to it. The Mirroring Neural Network
architecture gave very good results on various test patterns.
|
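In modern terms, the converging-diverging Mirroring Neural Network is an autoencoder trained to reproduce its input, with the central layer's activations serving as the object signature. A minimal numpy sketch of that idea (the layer sizes, learning rate, and synthetic low-rank data are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative sizes: 16-dim patterns mirrored through a 4-unit central layer.
n_in, n_mid = 16, 4
W1 = rng.normal(0.0, 0.1, (n_in, n_mid)); b1 = np.zeros(n_mid)  # converging half
W2 = rng.normal(0.0, 0.1, (n_mid, n_in)); b2 = np.zeros(n_in)   # diverging half

# Synthetic patterns lying on a low-dimensional subspace, scaled into (0, 1).
Z = rng.random((100, 3))
X = Z @ rng.random((3, n_in))
X /= X.max()

def forward(X):
    H = sigmoid(X @ W1 + b1)         # reduced low-dimensional code
    return H, sigmoid(H @ W2 + b2)   # mirrored reconstruction

H, Y = forward(X)
initial_mse = np.mean((Y - X) ** 2)

lr = 0.5
for _ in range(2000):                # backpropagate the mean squared error
    H, Y = forward(X)
    dY = (Y - X) * Y * (1 - Y)
    dH = (dY @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

H, Y = forward(X)
final_mse = np.mean((Y - X) ** 2)    # the 4-dim code H doubles as the signature
```

After training, `H` is the reduced characteristic vector that downstream classifiers can consume.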
0712.0938
|
Automatic Pattern Classification by Unsupervised Learning Using
Dimensionality Reduction of Data with Mirroring Neural Networks
|
cs.LG cs.AI cs.NE
|
This paper proposes an unsupervised learning technique using a Multi-layer
Mirroring Neural Network and Forgy's clustering algorithm. A Multi-layer
Mirroring Neural Network is a neural network that can be trained with
generalized data inputs (different categories of image patterns) to perform
non-linear dimensionality reduction and the resultant low-dimensional code is
used for unsupervised pattern classification using Forgy's algorithm. By
adapting the non-linear activation function (modified sigmoidal function) and
initializing the weights and bias terms to small random values, mirroring of
the input pattern is initiated. In training, the weights and bias terms are
changed in such a way that the input presented is reproduced at the output by
back propagating the error. The mirroring neural network is capable of reducing
the input vector to a great degree (approximately 1/30th the original size) and
also able to reconstruct the input pattern at the output layer from these
reduced code units. The feature set (the output of the central hidden layer)
extracted from this network is fed to Forgy's algorithm, which classifies input data
patterns into distinguishable classes. In the implementation of Forgy's
algorithm, initial seed points are selected in such a way that they are distant
enough to be perfectly grouped into different categories. Thus a new method of
unsupervised learning is formulated and demonstrated in this paper. This method
gave impressive results when applied to classification of different image
patterns.
|
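Forgy's algorithm is essentially k-means with actual data points as the initial centroids; the "distant seeds" selection described above can be sketched with a greedy farthest-point rule (a guess at the paper's criterion, not its exact procedure):

```python
import numpy as np

def forgy_cluster(codes, k, iters=50, seed=0):
    """Cluster low-dimensional feature codes, Forgy-style: seed centroids with
    data points chosen to be far apart, then alternate nearest-centroid
    assignment and centroid recomputation."""
    rng = np.random.default_rng(seed)
    seeds = [codes[rng.integers(len(codes))]]
    for _ in range(k - 1):  # greedy farthest-point seeding
        d = np.min([((codes - s) ** 2).sum(axis=1) for s in seeds], axis=0)
        seeds.append(codes[np.argmax(d)])
    centroids = np.array(seeds, dtype=float)
    for _ in range(iters):
        dists = ((codes[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = codes[labels == j].mean(axis=0)
    return labels, centroids
```

On two well-separated clouds of codes, the farthest-point seeding places one seed in each cloud, so the clusters are recovered immediately.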
0712.0948
|
A Common View on Strong, Uniform, and Other Notions of Equivalence in
Answer-Set Programming
|
cs.AI cs.LO
|
Logic programming under the answer-set semantics nowadays deals with numerous
different notions of program equivalence. This is due to the fact that
equivalence for substitution (known as strong equivalence) and ordinary
equivalence are different concepts. The former holds, given programs P and Q,
iff P can be faithfully replaced by Q within any context R, while the latter
holds iff P and Q provide the same output, that is, they have the same answer
sets. Notions in between strong and ordinary equivalence have been introduced
as theoretical tools to compare incomplete programs and are defined by either
restricting the syntactic structure of the considered context programs R or by
bounding the set A of atoms allowed to occur in R (relativized equivalence). For
the latter approach, different A yield properly different equivalence notions,
in general. For the former approach, however, it turned out that any
``reasonable'' syntactic restriction to R coincides with either ordinary,
strong, or uniform equivalence. In this paper, we propose a parameterization
for equivalence notions which takes care of both such kinds of restrictions
simultaneously by bounding, on the one hand, the atoms which are allowed to
occur in the rule heads of the context and, on the other hand, the atoms which
are allowed to occur in the rule bodies of the context. We introduce a general
semantical characterization which includes known ones, such as SE-models (for
strong equivalence) or UE-models (for uniform equivalence), as special cases.
Moreover, we provide complexity bounds for the problem in question and sketch a
possible implementation method.
To appear in Theory and Practice of Logic Programming (TPLP).
|
0712.0975
|
Random quantum codes from Gaussian ensembles and an uncertainty relation
|
quant-ph cs.IT math.IT
|
Using random Gaussian vectors and an information-uncertainty relation, we
give a proof that the coherent information is an achievable rate for
entanglement transmission through a noisy quantum channel. The codes are random
subspaces selected according to the Haar measure, but distorted as a function
of the sender's input density operator. Using large deviations techniques, we
show that classical data transmitted in either of two Fourier-conjugate bases
for the coding subspace can be decoded with low probability of error. A
recently discovered information-uncertainty relation then implies that the
quantum mutual information for entanglement encoded into the subspace and
transmitted through the channel will be high. The monogamy of quantum
correlations finally implies that the environment of the channel cannot be
significantly coupled to the entanglement, which ensures the existence of a
decoding by the receiver.
|
0712.1097
|
On Using Unsatisfiability for Solving Maximum Satisfiability
|
cs.AI cs.DS
|
Maximum Satisfiability (MaxSAT) is a well-known optimization problem, with
several practical applications. The most widely known MaxSAT algorithms are
ineffective at solving hard problem instances from practical application
domains. Recent work proposed using efficient Boolean Satisfiability (SAT)
solvers for solving the MaxSAT problem, based on identifying and eliminating
unsatisfiable subformulas. However, these algorithms do not scale in practice.
This paper analyzes existing MaxSAT algorithms based on unsatisfiable
subformula identification. Moreover, the paper proposes a number of key
optimizations to these MaxSAT algorithms and a new alternative algorithm. The
proposed optimizations and the new algorithm provide significant performance
improvements on MaxSAT instances from practical applications. Moreover, the
efficiency of the new generation of unsatisfiability-based MaxSAT solvers
becomes effectively indexed to the ability of modern SAT solvers to prove
unsatisfiability and to identify unsatisfiable subformulas.
|
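The core idea, find an unsatisfiable subformula and relax it, can be illustrated with a brute-force sketch. Real solvers replace the exhaustive SAT check with a CDCL engine, and the deletion-based core extraction plus one-clause-per-core relaxation below is a simplification of algorithms in this family (e.g. Fu-Malik), not the paper's exact method:

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Exhaustive SAT check; clauses hold nonzero DIMACS-style literals."""
    for assign in product([False, True], repeat=n_vars):
        if all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in clauses):
            return True
    return False

def unsat_core(clauses, n_vars):
    """Deletion-based minimal unsatisfiable subformula."""
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial, n_vars):
            core = trial        # clause i was not needed for unsatisfiability
        else:
            i += 1
    return core

def maxsat_relaxations(clauses, n_vars):
    """Relax (here: drop) one clause per unsat core until satisfiable.
    The count upper-bounds the minimum number of falsified clauses."""
    work, relaxed = list(clauses), 0
    while not satisfiable(work, n_vars):
        core = unsat_core(work, n_vars)
        work.remove(core[0])
        relaxed += 1
    return relaxed
```

For example, `maxsat_relaxations([[1], [-1]], 1)` returns 1: any assignment must falsify one of the two unit clauses.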
0712.1169
|
Opportunistic Relaying in Wireless Networks
|
cs.IT math.IT
|
Relay networks having $n$ source-to-destination pairs and $m$ half-duplex
relays, all operating in the same frequency band in the presence of block
fading, are analyzed. This setup has attracted significant attention and
several relaying protocols have been reported in the literature. However, most
of the proposed solutions require either centrally coordinated scheduling or
detailed channel state information (CSI) at the transmitter side. Here, an
opportunistic relaying scheme is proposed, which alleviates these limitations.
The scheme entails a two-hop communication protocol, in which sources
communicate with destinations only through half-duplex relays. The key idea is
to schedule at each hop only a subset of nodes that can benefit from
\emph{multiuser diversity}. To select the source and destination nodes for each
hop, it requires only CSI at receivers (relays for the first hop, and
destination nodes for the second hop) and an integer-value CSI feedback to the
transmitters. For the case when $n$ is large and $m$ is fixed, it is shown that
the proposed scheme achieves a system throughput of $m/2$ bits/s/Hz. In
contrast, the information-theoretic upper bound of $(m/2)\log \log n$ bits/s/Hz
is achievable only with more demanding CSI assumptions and cooperation between
the relays. Furthermore, it is shown that, under the condition that the product
of block duration and system bandwidth scales faster than $\log n$, the
achievable throughput of the proposed scheme scales as $\Theta ({\log n})$.
Notably, this is proven to be the optimal throughput scaling even if
centralized scheduling is allowed, thus proving the optimality of the proposed
scheme in the scaling law sense.
|
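The $\log\log n$ factor in the upper bound above comes from multiuser diversity: with $n$ independently faded users, the best SNR grows like $\log n$, so the best user's rate grows like $\log\log n$. A quick numerical check (unit-mean exponential SNRs, i.e. Rayleigh fading, are an assumption made for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
mean_best_snr = {}
for n in (10, 100, 1000, 10000):
    # Best of n unit-mean exponential SNRs, averaged over 500 realisations;
    # E[max of n unit exponentials] is the harmonic number H_n ~ ln(n) + 0.5772.
    mean_best_snr[n] = rng.exponential(1.0, size=(500, n)).max(axis=1).mean()

# The rate of the best user grows roughly like log2(log n): multiuser diversity.
best_rate = {n: float(np.log2(1.0 + s)) for n, s in mean_best_snr.items()}
```

Multiplying the best-user rate by the number of simultaneously scheduled hops gives the flavour of the $(m/2)\log\log n$ scaling quoted above.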
0712.1182
|
Cumulative and Averaging Fission of Beliefs
|
cs.AI cs.LO
|
Belief fusion is the principle of combining separate beliefs or bodies of
evidence originating from different sources. Depending on the situation to be
modelled, different belief fusion methods can be applied. Cumulative and
averaging belief fusion is defined for fusing opinions in subjective logic, and
for fusing belief functions in general. The principle of fission is the
opposite of fusion, namely to eliminate the contribution of a specific belief
from an already fused belief, with the purpose of deriving the remaining
belief. This paper describes fission of cumulative belief as well as fission of
averaging belief in subjective logic. These operators can for example be
applied to belief revision in Bayesian belief networks, where the belief
contribution of a given evidence source can be determined as a function of a
given fused belief and its other contributing beliefs.
|
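For binomial opinions (belief b, disbelief d, uncertainty u, with b + d + u = 1), cumulative fusion has a standard closed form, and the fission described here can be obtained by inverting it. A sketch (the inversion algebra is ours, base rates are omitted for brevity, and this is not claimed to match the paper's operator definitions exactly):

```python
def cumulative_fuse(o1, o2):
    """Cumulative fusion of two binomial opinions (b, d, u)."""
    b1, d1, u1 = o1
    b2, d2, u2 = o2
    k = u1 + u2 - u1 * u2             # normaliser; assumes u1 + u2 > u1 * u2
    return ((b1 * u2 + b2 * u1) / k,
            (d1 * u2 + d2 * u1) / k,
            u1 * u2 / k)

def cumulative_fission(fused, o2):
    """Eliminate the contribution o2 from an already fused opinion,
    i.e. recover o1 such that cumulative_fuse(o1, o2) == fused."""
    b, d, u = fused
    b2, d2, u2 = o2
    u1 = u * u2 / (u2 + u * u2 - u)   # invert u = u1*u2 / (u1 + u2 - u1*u2)
    k = u1 * u2 / u                   # recover the original normaliser
    return ((b * k - b2 * u1) / u2,
            (d * k - d2 * u1) / u2,
            u1)
```

Fissioning a fused opinion by one of its components round-trips back to the other component, which is exactly the "remaining belief" the paper is after.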
0712.1310
|
About Algorithm for Transformation of Logic Functions (ATLF)
|
cs.LO cs.AI
|
In this article the algorithm for transformation of logic functions which are
given by truth tables is considered. The suggested algorithm allows the
transformation of many-valued logic functions with the required number of
variables and can in this sense be regarded as universal.
|
0712.1339
|
Joint Receiver and Transmitter Optimization for Energy-Efficient CDMA
Communications
|
cs.IT cs.GT math.IT
|
This paper focuses on the cross-layer issue of joint multiuser detection and
resource allocation for energy efficiency in wireless CDMA networks. In
particular, assuming that a linear multiuser detector is adopted in the uplink
receiver, the case considered is that in which each terminal is allowed to vary
its transmit power, spreading code, and uplink receiver in order to maximize
its own utility, which is defined as the ratio of data throughput to transmit
power. Resorting to a game-theoretic formulation, a non-cooperative game for
utility maximization is formulated, and it is proved that a unique Nash
equilibrium exists, which, under certain conditions, is also Pareto-optimal.
Theoretical results concerning the relationship between the problems of SINR
maximization and MSE minimization are given, and, resorting to the tools of
large system analysis, a new distributed power control algorithm is
implemented, based on very little prior information about the user of interest.
The utility profile achieved by the active users in a large CDMA system is also
computed, and, moreover, the centralized socially optimum solution is analyzed.
Considerations on the extension of the proposed framework to a multi-cell
scenario are also briefly detailed. Simulation results confirm that the
proposed non-cooperative game largely outperforms competing alternatives, and
that it exhibits quite a small performance loss with respect to the socially
optimum solution, and only in the case in which the number of users exceeds the
processing gain. Finally, results also show an excellent agreement between the
theoretical closed-form formulas based on large system analysis and the outcome
of numerical experiments.
|
0712.1345
|
Sequential operators in computability logic
|
cs.LO cs.AI math.LO
|
Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a
semantical platform and research program for redeveloping logic as a formal
theory of computability, as opposed to the formal theory of truth which it has
more traditionally been. Formulas in CL stand for (interactive) computational
problems, understood as games between a machine and its environment; logical
operators represent operations on such entities; and "truth" is understood as
existence of an effective solution, i.e., of an algorithmic winning strategy.
The formalism of CL is open-ended, and may undergo series of extensions as
the study of the subject advances. The main groups of operators on which CL has
been focused so far are the parallel, choice, branching, and blind operators.
The present paper introduces a new important group of operators, called
sequential. The latter come in the form of sequential conjunction and
disjunction, sequential quantifiers, and sequential recurrences. As the name
may suggest, the algorithmic intuitions associated with this group are those of
sequential computations, as opposed to the intuitions of parallel computations
associated with the parallel group of operations: playing a sequential
combination of games means playing its components in a sequential fashion, one
after another.
The main technical result of the present paper is a sound and complete
axiomatization of the propositional fragment of computability logic whose
vocabulary, together with negation, includes all three -- parallel, choice and
sequential -- sorts of conjunction and disjunction. An extension of this result
to the first-order level is also outlined.
|
0712.1365
|
Population stratification using a statistical model on hypergraphs
|
q-bio.PE cs.AI physics.data-an
|
Population stratification is a problem encountered in several areas of
biology and public health. We tackle this problem by mapping a population and
its elements' attributes into a hypergraph, a natural extension of the concept
of a graph or network that encodes associations among any number of elements.
On this hypergraph, we construct a statistical model reflecting our intuition
about how the elements' attributes can emerge from a postulated population
structure. Finally, we introduce the concept of stratification
representativeness as a means to identify the simplest stratification that
already contains most of the information about the population structure. We
demonstrate the power of this framework by stratifying an animal and a human
population based on phenotypic and genotypic properties, respectively.
|
0712.1402
|
Reconstruction of Markov Random Fields from Samples: Some Easy
Observations and Algorithms
|
cs.CC cs.LG
|
Markov random fields are used to model high dimensional distributions in a
number of applied areas. Much recent interest has been devoted to the
reconstruction of the dependency structure from independent samples from the
Markov random fields. We analyze a simple algorithm for reconstructing the
underlying graph defining a Markov random field on $n$ nodes and maximum degree
$d$ given observations. We show that under mild non-degeneracy conditions it
reconstructs the generating graph with high probability using $\Theta(d
\epsilon^{-2}\delta^{-4} \log n)$ samples where $\epsilon,\delta$ depend on the
local interactions. For most local interactions, $\epsilon,\delta$ are of order
$\exp(-O(d))$.
Our results are optimal as a function of $n$ up to a multiplicative constant
depending on $d$ and the strength of the local interactions. Our results seem
to be the first results for general models that guarantee that {\em the}
generating model is reconstructed. Furthermore, we provide an explicit $O(n^{d+2}
\epsilon^{-2}\delta^{-4} \log n)$ running-time bound. In cases where the
measure on the graph has correlation decay, the running time is $O(n^2 \log n)$
for all fixed $d$. We also discuss the effect of observing noisy samples and
show that as long as the noise level is low, our algorithm is effective. On the
other hand, we construct an example where large noise implies
non-identifiability even for generic noise and interactions. Finally, we
briefly show that in some simple cases, models with hidden nodes can also be
recovered.
|
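For intuition only: on a chain-structured model with correlation decay, even naive correlation thresholding recovers the generating graph from samples. This is a much weaker heuristic than the neighborhood-search algorithm analyzed in the paper, and it fails on graphs with cycles; the chain, flip probability, and threshold below are illustrative assumptions:

```python
import numpy as np

def sample_chain(n_nodes, n_samples, flip=0.2, seed=0):
    """Samples from a +/-1 Markov chain: each spin copies its left
    neighbour and flips with probability `flip`."""
    rng = np.random.default_rng(seed)
    X = np.empty((n_samples, n_nodes), dtype=int)
    X[:, 0] = rng.choice([-1, 1], size=n_samples)
    for i in range(1, n_nodes):
        flips = rng.random(n_samples) < flip
        X[:, i] = np.where(flips, -X[:, i - 1], X[:, i - 1])
    return X

def reconstruct_edges(X, threshold=0.45):
    """Declare an edge wherever the empirical correlation is large."""
    C = np.corrcoef(X.T)
    n = C.shape[0]
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(C[i, j]) > threshold}
```

With `flip=0.2`, adjacent spins have correlation about 0.6 and two-hop pairs about 0.36, so a threshold between the two recovers exactly the chain edges once enough samples are drawn.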
0712.1442
|
On types of growth for graph-different permutations
|
math.CO cs.IT math.IT
|
We consider an infinite graph G whose vertex set is the set of natural
numbers and adjacency depends solely on the difference between vertices. We
study the largest cardinality of a set of permutations of [n] any pair of which
differ somewhere in a pair of adjacent vertices of G and determine it
completely in an interesting special case. We give estimates for other cases
and compare the results in case of complementary graphs. We also explore the
close relationship between our problem and the concept of Shannon capacity
"within a given type".
|
0712.1529
|
Ontology and Formal Semantics - Integration Overdue
|
cs.AI cs.CL
|
In this note we suggest that difficulties encountered in natural language
semantics are, for the most part, due to the use of mere symbol manipulation
systems that are devoid of any content. In such systems there is hardly any
link with our common-sense view of the world, and it is quite difficult to
envision how one can formally account for the considerable amount of content
that is often implicit, but almost never explicitly stated in our everyday
discourse. The solution, in our opinion, is a compositional semantics grounded
in an ontology that reflects our commonsense view of the world and the way we
talk about it in ordinary language. In the compositional logic we envision,
there are ontological (or first-intension) concepts and logical (or
second-intension) concepts, where the ontological concepts include not only
Davidsonian events but other abstract objects as well (e.g., states,
processes, properties, activities, attributes, etc.). It will be demonstrated
here that in such a framework, a number of challenges in the semantics of
natural language (e.g., metonymy, intensionality, metaphor, etc.) can be
properly and uniformly addressed.
|
0712.1609
|
Distributed Consensus Algorithms in Sensor Networks: Quantized Data and
Random Link Failures
|
cs.MA cs.IT math.IT
|
The paper studies the problem of distributed average consensus in sensor
networks with quantized data and random link failures. To achieve consensus,
dither (small noise) is added to the sensor states before quantization. When
the quantizer range is unbounded (countable number of quantizer levels),
stochastic approximation shows that consensus is asymptotically achieved with
probability one and in mean square to a finite random variable. We show that
the mean-squared error (m.s.e.) can be made arbitrarily small by tuning the link
weight sequence, at the cost of a slower convergence rate of the algorithm. To study
dithered consensus with random links when the range of the quantizer is
bounded, we establish uniform boundedness of the sample paths of the unbounded
quantizer. This requires characterization of the statistical properties of the
supremum taken over the sample paths of the state of the quantizer. This is
accomplished by splitting the state vector of the quantizer in two components:
one along the consensus subspace and the other along the subspace orthogonal to
the consensus subspace. The proofs use maximal inequalities for submartingale
and supermartingale sequences. From these, we derive probability bounds on the
excursions of the two subsequences, from which probability bounds on the
excursions of the quantizer state vector follow. The paper shows how to use
these probability bounds to design the quantizer parameters and to explore
tradeoffs among the number of quantizer levels, the size of the quantization
steps, the desired probability of saturation, and the desired level of accuracy
$\epsilon$ away from consensus. Finally, the paper illustrates the quantizer
design with a numerical study.
|
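The basic mechanism, exchanging dithered quantized states through a Laplacian update, can be sketched as follows. The fixed step size and quantizer step are illustrative; the paper uses a tuned decreasing weight sequence and analyzes random link failures and quantizer saturation, which this toy omits:

```python
import numpy as np

def dithered_consensus(x0, adj, steps=1000, alpha=0.1, delta=0.01, seed=0):
    """Average consensus from quantized transmissions: each node sends
    Q(x_i + dither) with uniform dither in [-delta/2, delta/2), which makes
    the quantization error zero-mean; the Laplacian update preserves the
    network average exactly because the all-ones vector annihilates L."""
    rng = np.random.default_rng(seed)
    L = np.diag(adj.sum(axis=1)) - adj               # graph Laplacian
    x = x0.astype(float).copy()
    for _ in range(steps):
        dither = rng.uniform(-delta / 2, delta / 2, size=len(x))
        q = np.round((x + dither) / delta) * delta   # quantized dithered states
        x = x - alpha * (L @ q)
    return x
```

On a small complete graph the states contract to a ball around the initial average whose radius is governed by the quantization step, mirroring the accuracy/step-size tradeoff discussed above.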
0712.1659
|
Non-linear and Linear Broadcasting with QoS Requirements: Tractable
Approaches for Bounded Channel Uncertainties
|
cs.IT math.IT
|
We consider the downlink of a cellular system in which the base station
employs multiple transmit antennas, each receiver has a single antenna, and the
users specify certain Quality of Service (QoS) requirements. We study the design of robust
broadcasting schemes that minimize the transmission power necessary to
guarantee that the QoS requirements are satisfied for all channels within
bounded uncertainty regions around the transmitter's estimate of each user's
channel. Each user's QoS requirement is formulated as a constraint on the mean
square error (MSE) in its received signal, and we show that these MSE
constraints imply constraints on the received SINR. Using the MSE constraints,
we present a unified design approach for robust linear and non-linear
transceivers with QoS requirements. The proposed designs overcome the
limitations of existing approaches that provide conservative designs or are
only applicable to the case of linear precoding. Furthermore, we provide
computationally-efficient design formulations for a rather general model of
channel uncertainty that subsumes many natural choices for the uncertainty
region. We also consider the problem of the robust counterpart to precoding
schemes that maximize the fidelity of the weakest user's signal subject to a
power constraint. For this problem, we provide quasi-convex formulations, for
both linear and non-linear transceivers, that can be efficiently solved using a
one-dimensional bisection search. Our numerical results demonstrate that in the
presence of CSI uncertainty, the proposed designs provide guarantees for a
larger range of QoS requirements than the existing approaches, and require less
transmission power in providing these guarantees.
|
0712.1775
|
On Computation of Error Locations and Values in Hermitian Codes
|
cs.IT math.IT
|
We obtain a technique to reduce the computational complexity associated with
decoding of Hermitian codes. In particular, we propose a method to compute the
error locations and values using a univariate error locator and a univariate
error evaluator polynomial. To achieve this, we introduce the
notion of Semi-Erasure Decoding of Hermitian codes and prove that decoding of
Hermitian codes can always be performed using semi-erasure decoding. The
central results are:
* Searching for error locations requires evaluating a univariate error
locator polynomial over $q^2$ points, as in the Chien search for Reed-Solomon codes.
* Forney's formula for error value computation in Reed-Solomon codes can
directly be applied to compute the error values in Hermitian codes.
The approach develops from the idea that transmitting a modified form of the
information may be more efficient than transmitting the information itself.
|
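The first bullet reduces error location to sweeping a univariate polynomial over the field, exactly what a Chien search does for Reed-Solomon codes. A sketch of that sweep over a prime field (Hermitian codes live over GF(q^2); a prime field is used here only to keep the arithmetic simple):

```python
def chien_like_search(coeffs, p):
    """Find the roots of a univariate polynomial over the prime field GF(p)
    by evaluating it at every field element, Chien-search style.
    coeffs[i] is the coefficient of x**i, reduced mod p."""
    roots = []
    for x in range(p):
        acc, x_pow = 0, 1
        for c in coeffs:          # Horner-free accumulation: sum c_i * x**i
            acc = (acc + c * x_pow) % p
            x_pow = (x_pow * x) % p
        if acc == 0:
            roots.append(x)
    return roots
```

For instance, over GF(7) the polynomial (x - 2)(x - 5) = x^2 + 3 has exactly the roots 2 and 5.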
0712.1863
|
Constructing Bio-molecular Databases on a DNA-based Computer
|
cs.NE cs.DB q-bio.OT
|
Codd [Codd 1970] wrote the first paper in which the model of a relational
database was proposed. Adleman [Adleman 1994] wrote the first paper in which
DNA strands in a test tube were used to solve an instance of the Hamiltonian
path problem. [Adleman 1994] indicates that storing information in molecules of
DNA allows an information density of approximately 1 bit per cubic nanometer,
a dramatic improvement over existing storage media such as video tape, which
stores information at a density of approximately 1 bit per $10^{12}$ cubic
nanometers. This paper demonstrates that
biological operations can be applied to construct bio-molecular databases where
data records in relational tables are encoded as DNA strands. In order to
achieve the goal, DNA algorithms are proposed to perform eight operations of
relational algebra (calculus) on bio-molecular relational databases, which
include Cartesian product, union, set difference, selection, projection,
intersection, join and division. Furthermore, this work presents clear evidence
of the ability of molecular computing to perform data retrieval operations on
bio-molecular relational databases.
|
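Independently of the DNA encoding, the eight relational-algebra operations the paper implements are easy to state over ordinary sets of tuples. A plain-Python reference (positional column indices instead of named attributes are a simplification):

```python
# Relations as sets of tuples; columns addressed by position.
def union(r, s):        return r | s
def difference(r, s):   return r - s
def intersection(r, s): return r & s
def product(r, s):      return {a + b for a in r for b in s}   # Cartesian product
def select(r, pred):    return {t for t in r if pred(t)}
def project(r, idx):    return {tuple(t[i] for i in idx) for t in r}
def join(r, s, i, j):   return {a + b for a in r for b in s if a[i] == b[j]}

def division(r, s, keep, over):
    """r / s: the 'keep' sub-tuples of r that appear with every tuple of s."""
    cand = {tuple(t[i] for i in keep) for t in r}
    pairs = {(tuple(t[i] for i in keep), tuple(t[i] for i in over)) for t in r}
    return {c for c in cand if all((c, x) in pairs for x in s)}
```

For example, dividing {(a,1), (a,2), (b,1)} by {(1,), (2,)} keeps only (a,), since only "a" is paired with every divisor tuple.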
0712.1875
|
Critique du rapport signal \`a bruit en th\'eorie de l'information -- A
critical appraisal of the signal to noise ratio in information theory
|
cs.IT math.IT math.LO math.PR math.RA quant-ph
|
The signal to noise ratio, which plays such an important role in information
theory, is shown to become pointless in digital communications where (1) symbols
modulate carriers that are solutions of linear differential equations with
polynomial coefficients, and (2) demodulation is achieved thanks to new
algebraic estimation techniques. Operational calculus, differential algebra and
nonstandard analysis are the main mathematical tools.
|
0712.1878
|
Hierarchy construction schemes within the Scale set framework
|
cs.CV
|
Segmentation algorithms based on an energy minimisation framework often
depend on a scale parameter which balances a fit to data and a regularising
term. Irregular pyramids are defined as a stack of graphs successively reduced.
Within this framework, the scale is often defined implicitly as the height in
the pyramid. However, each level of an irregular pyramid cannot usually be
readily associated with the global optimum of an energy or a global criterion on
the base level graph. This last drawback is addressed by the scale set
framework designed by Guigues. The methods designed by this author allow one to
build a hierarchy and to design cuts within this hierarchy which globally
minimise an energy. This paper studies the influence of the construction scheme
of the initial hierarchy on the resulting optimal cuts. We propose one
sequential and one parallel method with two variations within both. Our
sequential methods provide partitions near the global optima while parallel
methods require less execution time than the sequential method of Guigues even
on sequential machines.
|
0712.1987
|
A New Outer Bound and the Noisy-Interference Sum-Rate Capacity for
Gaussian Interference Channels
|
cs.IT math.IT
|
A new outer bound on the capacity region of Gaussian interference channels is
developed. The bound combines and improves existing genie-aided methods and is
shown to give the sum-rate capacity for noisy interference as defined in this
paper. Specifically, it is shown that if the channel coefficients and power
constraints satisfy a simple condition then single-user detection at each
receiver is sum-rate optimal, i.e., treating the interference as noise incurs
no loss in performance. This is the first concrete (finite signal-to-noise
ratio) capacity result for the Gaussian interference channel with weak to
moderate interference. Furthermore, for certain mixed (weak and strong)
interference scenarios, the new outer bounds give a corner point of the
capacity region.
|
0712.1996
|
A case study of the difficulty of quantifier elimination in constraint
databases: the alibi query in moving object databases
|
cs.LO cs.CC cs.DB
|
In the constraint database model, spatial and spatio-temporal data are stored
by boolean combinations of polynomial equalities and inequalities over the real
numbers. The relational calculus augmented with polynomial constraints is the
standard first-order query language for constraint databases. Although the
expressive power of this query language has been studied extensively, the
difficulty of the efficient evaluation of queries, usually involving some form
of quantifier elimination, has received considerably less attention. The
inefficiency of existing quantifier-elimination software and the intrinsic
difficulty of quantifier elimination have proven to be a bottleneck for
real-world implementations of constraint database systems. In this paper, we
focus on a particular query, called the \emph{alibi query}, that asks whether
two moving objects whose positions are known at certain moments in time, could
have possibly met, given certain speed constraints. This query can be seen as a
constraint database query and its evaluation relies on the elimination of a
block of three existential quantifiers. Implementations of general-purpose
elimination algorithms are, for practical purposes, too slow at answering the
alibi query in the specific case, and fail completely in the parametric case.
The main contribution of this paper is an analytical solution to the parametric
alibi query, which can be used to answer this query in the specific case in
constant time. We also give an analytic solution to the alibi query at a fixed
moment in time. The solutions we propose are based on geometric argumentation,
and they illustrate the fact that some practical problems require creative
solutions where, at least in theory, existing systems could provide a solution.
|
0712.2063
|
An axiomatic approach to intrinsic dimension of a dataset
|
cs.IR
|
We perform a deeper analysis of an axiomatic approach to the concept of
intrinsic dimension of a dataset proposed by us in the IJCNN'07 paper
(arXiv:cs/0703125). The main features of our approach are that a high intrinsic
dimension of a dataset reflects the presence of the curse of dimensionality (in
a certain mathematically precise sense), and that dimension of a discrete
i.i.d. sample of a low-dimensional manifold is, with high probability, close to
that of the manifold. At the same time, the intrinsic dimension of a sample is
easily corrupted by moderate high-dimensional noise (of the same amplitude as
the size of the manifold) and suffers from prohibitively high computational
complexity (computing it is an $NP$-complete problem). We outline a possible
way to overcome these difficulties.
|
0712.2100
|
Medical image computing and computer-aided medical interventions applied
to soft tissues. Work in progress in urology
|
cs.OH cs.RO
|
Until recently, Computer-Aided Medical Interventions (CAMI) and Medical
Robotics have focused on rigid and non deformable anatomical structures.
Nowadays, special attention is paid to soft tissues, raising complex issues due
to their mobility and deformation. Minimally invasive digestive surgery was
probably one of the first fields where soft tissues were handled, through the
development of simulators, the tracking of anatomical structures, and dedicated
assistance robots. However, other clinical domains, for instance urology, are also affected.
Indeed, laparoscopic surgery, new tumour destruction techniques (e.g. HIFU,
radiofrequency, or cryoablation), increasingly early detection of cancer, and
use of interventional and diagnostic imaging modalities, recently opened new
challenges to the urologist and to scientists involved in CAMI. Over the last
five years, this has resulted in a very significant increase in the research
and development of computer-aided urology systems. In this paper, we describe
the main problems related to computer-aided diagnosis and therapy of soft
tissues and give a survey of the different types of assistance offered to the
urologist: robotization, image fusion, surgical navigation. Both research
projects and operational industrial systems are discussed.
|
0712.2141
|
Numerical Sensitivity and Efficiency in the Treatment of Epistemic and
Aleatory Uncertainty
|
cs.AI math.PR
|
The treatment of both aleatory and epistemic uncertainty by recent methods
often requires a high computational effort. In this abstract, we propose a
numerical sampling method that lightens the computational burden of treating
the information by means of so-called fuzzy random variables.
|
0712.2182
|
Optimal codes for correcting a single (wrap-around) burst of errors
|
cs.IT math.IT
|
In 2007, Martinian and Trott presented codes for correcting a burst of
erasures with a minimum decoding delay. Their construction employs [n,k] codes
that can correct any burst of erasures (including wrap-around bursts) of length
n-k. They raised the question of whether such [n,k] codes exist for all
integers k and n with 1 <= k <= n and over all fields (in particular, over the
binary field). In this
note, we answer this question affirmatively by giving two recursive
constructions and a direct one.
|
0712.2223
|
Entanglement-Assisted Quantum Convolutional Coding
|
quant-ph cs.IT math.IT
|
We show how to protect a stream of quantum information from decoherence
induced by a noisy quantum communication channel. We exploit preshared
entanglement and a convolutional coding structure to develop a theory of
entanglement-assisted quantum convolutional coding. Our construction produces a
Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code
from two arbitrary classical binary convolutional codes. The rate and
error-correcting properties of the classical convolutional codes directly
determine the corresponding properties of the resulting entanglement-assisted
quantum convolutional code. We explain how to encode our CSS
entanglement-assisted quantum convolutional codes starting from a stream of
information qubits, ancilla qubits, and shared entangled bits.
|
0712.2245
|
Exact and Approximate Expressions for the Probability of Undetected
Error of Varshamov-Tenengol'ts Codes
|
cs.IT math.IT
|
Computation of the undetected error probability for error correcting codes
over the Z-channel is an important issue, explored only in part in previous
literature. In this paper we consider the case of Varshamov-Tenengol'ts codes,
by presenting some analytical, numerical, and heuristic methods for unveiling
this additional feature. Possible comparisons with Hamming codes are also shown
and discussed.
|
0712.2255
|
Human-Machine Symbiosis, 50 Years On
|
cs.DC cs.CE cs.HC
|
Licklider advocated in 1960 the construction of computers capable of working
symbiotically with humans to address problems not easily addressed by humans
working alone. Since that time, many of the advances that he envisioned have
been achieved, yet the time spent by human problem solvers in mundane
activities remains large. I propose here four areas in which improved tools can
further advance the goal of enhancing human intellect: services, provenance,
knowledge communities, and automation of problem-solving protocols.
|
0712.2262
|
The Earth System Grid: Supporting the Next Generation of Climate
Modeling Research
|
cs.CE cs.DC cs.NI
|
Understanding the earth's climate system and how it might be changing is a
preeminent scientific challenge. Global climate models are used to simulate
past, present, and future climates, and experiments are executed continuously
on an array of distributed supercomputers. The resulting data archive, spread
over several sites, currently contains upwards of 100 TB of simulation data and
is growing rapidly. Looking toward mid-decade and beyond, we must anticipate
and prepare for distributed climate research data holdings of many petabytes.
The Earth System Grid (ESG) is a collaborative interdisciplinary project aimed
at addressing the challenge of enabling management, discovery, access, and
analysis of these critically important datasets in a distributed and
heterogeneous computational environment. The problem is fundamentally a Grid
problem. Building upon the Globus toolkit and a variety of other technologies,
ESG is developing an environment that addresses authentication, authorization
for data access, large-scale data transport and management, services and
abstractions for high-performance remote data access, mechanisms for scalable
data replication, cataloging with rich semantic and syntactic information, data
discovery, distributed monitoring, and Web-based portals for using the system.
|
0712.2371
|
Maximum-rate, Minimum-Decoding-Complexity STBCs from Clifford Algebras
|
cs.IT math.IT
|
It is well known that Space-Time Block Codes (STBCs) from orthogonal designs
(ODs) are single-symbol decodable/symbol-by-symbol decodable (SSD) and are
obtainable from unitary matrix representations of Clifford algebras. However,
SSD codes are also obtainable from non-orthogonal designs. Recently,
two such classes of SSD codes have been studied: (i) Coordinate Interleaved
Orthogonal Designs (CIODs) and (ii) Minimum-Decoding-Complexity (MDC) STBCs
from Quasi-ODs (QODs). Codes from ODs, CIODs and MDC-QODs are mutually
non-intersecting classes of codes. The class of CIODs has {\it non-unitary
weight matrices} when written as a Linear Dispersion Code (LDC), as proposed by
Hassibi and Hochwald, whereas several known SSD codes, including CODs, have
{\it unitary weight matrices}. In this paper, we obtain SSD codes with unitary
weight matrices (that are not CODs) called Clifford Unitary Weight SSDs
(CUW-SSDs) from matrix representations of Clifford algebras. A main result of
this paper is the derivation of an achievable upper bound of
$\frac{a}{2^{a-1}}$ on the rate of any unitary weight SSD code for $2^a$
antennas, which is larger than the rate $\frac{a+1}{2^a}$ of the CODs. It is shown that
several known classes of SSD codes are CUW-SSD codes and CUW-SSD codes meet
this upper bound. Also, for the codes of this paper conditions on the signal
sets which ensure full-diversity and expressions for the coding gain are
presented. A large class of SSD codes with non-unitary weight matrices are
obtained which include CIODs as a proper subclass.
|
0712.2384
|
Multi-group ML Decodable Collocated and Distributed Space Time Block
Codes
|
cs.IT cs.DM math.IT math.RA
|
In this paper, collocated and distributed space-time block codes (DSTBCs)
which admit multi-group maximum likelihood (ML) decoding are studied. First the
collocated case is considered and the problem of constructing space-time block
codes (STBCs) which optimally tradeoff rate and ML decoding complexity is
posed. Recently, sufficient conditions for multi-group ML decodability have
been provided in the literature and codes meeting these sufficient conditions
were called Clifford Unitary Weight (CUW) STBCs. An algebraic framework based
on extended Clifford algebras is proposed to study CUW STBCs and using this
framework, the optimal tradeoff between rate and ML decoding complexity of CUW
STBCs is obtained for a few specific cases. Code constructions meeting this
tradeoff optimally are also provided. The paper then focuses on multi-group ML
decodable DSTBCs for application in synchronous wireless relay networks and
three constructions of four-group ML decodable DSTBCs are provided. Finally,
the OFDM-based Alamouti space-time coded scheme proposed by Li and Xia for a
two-relay asynchronous relay network is extended to a more general transmission
scheme that can achieve full asynchronous cooperative diversity for an
arbitrary number of relays. It is then shown how differential encoding at the source can
be combined with the proposed transmission scheme to arrive at a new
transmission scheme that can achieve full cooperative diversity in asynchronous
wireless relay networks with no channel information and also no timing error
knowledge at the destination node. Four-group decodable DSTBCs applicable in
the proposed OFDM based transmission scheme are also given.
|
0712.2389
|
Decomposition During Search for Propagation-Based Constraint Solvers
|
cs.AI
|
We describe decomposition during search (DDS), an integration of And/Or tree
search into propagation-based constraint solvers. The presented search
algorithm dynamically decomposes sub-problems of a constraint satisfaction
problem into independent partial problems, avoiding redundant work.
The paper discusses how DDS interacts with key features that make
propagation-based solvers successful: constraint propagation, especially for
global constraints, and dynamic search heuristics.
We have implemented DDS for the Gecode constraint programming library. Two
applications, solution counting in graph coloring and protein structure
prediction, exemplify the benefits of DDS in practice.
|
0712.2430
|
Limits to consistent on-line forecasting for ergodic time series
|
math.PR cs.IT math.IT
|
This study concerns problems of time-series forecasting under the weakest of
assumptions. Related results are surveyed and are points of departure for the
developments here, some of which are new and others are new derivations of
previous findings. The contributions in this study are all negative, showing
that various plausible prediction problems are unsolvable, or in other cases,
are not solvable by predictors which are known to be consistent when mixing
conditions hold.
|
0712.2467
|
Rethinking Information Theory for Mobile Ad Hoc Networks
|
cs.IT math.IT
|
The subject of this paper is the long-standing open problem of developing a
general capacity theory for wireless networks, particularly a theory capable of
describing the fundamental performance limits of mobile ad hoc networks
(MANETs). A MANET is a peer-to-peer network with no pre-existing
infrastructure. MANETs are the most general wireless networks, with single-hop,
relay, interference, mesh, and star networks comprising special cases. The lack
of a MANET capacity theory has stunted the development and commercialization of
many types of wireless networks, including emergency, military, sensor, and
community mesh networks. Information theory, which has been vital for links and
centralized networks, has not been successfully applied to decentralized
wireless networks. Even if this was accomplished, for such a theory to truly
characterize the limits of deployed MANETs it must overcome three key
roadblocks. First, most current capacity results rely on the allowance of
unbounded delay and reliability. Second, spatial and timescale decompositions
have not yet been developed for optimally modeling the spatial and temporal
dynamics of wireless networks. Third, a useful network capacity theory must
integrate rather than ignore the important role of overhead messaging and
feedback. This paper describes some of the shifts in thinking that may be
needed to overcome these roadblocks and develop a more general theory that we
refer to as non-equilibrium information theory.
|
0712.2469
|
Directed Percolation in Wireless Networks with Interference and Noise
|
cs.IT cs.NI math.IT math.PR
|
Previous studies of connectivity in wireless networks have focused on
undirected geometric graphs. More sophisticated models, such as the
Signal-to-Interference-and-Noise-Ratio (SINR) model, however, usually lead to
directed graphs. In this paper, we study percolation processes in wireless
networks modelled by directed SINR graphs. We first investigate
interference-free networks, where we define four types of phase transitions and
show that they take place at the same time. By coupling the directed SINR graph
with two other undirected SINR graphs, we further obtain analytical upper and
lower bounds on the critical density. Then, we show that with interference,
percolation in directed SINR graphs depends not only on the density but also on
the inverse system processing gain. We also provide bounds on the critical
value of the inverse system processing gain.
|
0712.2497
|
A New Theoretic Foundation for Cross-Layer Optimization
|
cs.NI cs.LG
|
Cross-layer optimization solutions have been proposed in recent years to
improve the performance of network users operating in a time-varying,
error-prone wireless environment. However, these solutions often rely on ad-hoc
optimization approaches, which ignore the different environmental dynamics
experienced at various layers by a user and violate the layered network
architecture of the protocol stack by requiring layers to provide access to
their internal protocol parameters to other layers. This paper presents a new
theoretic foundation for cross-layer optimization, which allows each layer to
make autonomous decisions individually, while maximizing the utility of the
wireless user by optimally determining what information needs to be exchanged
among layers. Hence, this cross-layer framework does not change the current
layered architecture. Specifically, because the wireless user interacts with
the environment at various layers of the protocol stack, the cross-layer
optimization problem is formulated as a layered Markov decision process (MDP)
in which each layer adapts its own protocol parameters and exchanges
information (messages) with other layers in order to cooperatively maximize the
performance of the wireless user. The message exchange mechanism for
determining the optimal cross-layer transmission strategies has been designed
for both off-line optimization and on-line dynamic adaptation. We also show
that many existing cross-layer optimization algorithms can be formulated as
simplified, sub-optimal, versions of our layered MDP framework.
|
0712.2552
|
The PBD-Closure of Constant-Composition Codes
|
cs.IT cs.DM math.CO math.IT
|
We show an interesting PBD-closure result for the set of lengths of
constant-composition codes whose distance and size meet certain conditions. A
consequence of this PBD-closure result is that the size of optimal
constant-composition codes can be determined for infinite families of parameter
sets from just a single example of an optimal code. As an application, the size
of several infinite families of optimal constant-composition codes are derived.
In particular, the problem of determining the size of optimal
constant-composition codes having distance four and weight three is solved for
all lengths sufficiently large. This problem was previously unresolved for odd
lengths, except for lengths seven and eleven.
|
0712.2553
|
Constructions for Difference Triangle Sets
|
cs.IT cs.DM math.CO math.IT
|
Difference triangle sets are useful in many practical problems of information
transmission. This correspondence studies combinatorial and computational
constructions for difference triangle sets having small scopes. Our algorithms
have been used to produce difference triangle sets whose scopes are the best
currently known.
|
0712.2579
|
On the Information of the Second Moments Between Random Variables Using
Mutually Unbiased Bases
|
cs.IT math.IT
|
The notion of mutually unbiased bases (MUB) was first introduced by Ivanovic
to reconstruct density matrices \cite{Ivanovic}. How to use MUB to analyze,
process, and utilize the information in the second moments between random
variables is studied in this paper. In the first part, the mathematical
foundation will be built. It will be shown that the spectra of MUB carry
complete information about the correlation matrices of finite discrete
signals, and that they have nice properties. Roughly speaking, it will be
shown that each spectrum from MUB plays an equal role for finite discrete
signals, and that the effect between any two spectra can be treated as a
global constant shift. These properties will be used to find some important
and natural characterizations of random vectors and of random discrete
operators/filters. For technical reasons, it will also be shown that any MUB
spectrum can be computed as fast as the Fourier spectrum when the length of
the signal is a prime number.
In the second part, some applications will be presented. First of all, a
protocol for increasing the number of users in a basic digital communication
model will be studied, which brings some deep insights into how to encode
information into the second moments between random variables. Secondly, the
application to signal analysis will be studied. It is suggested that complete
MUB spectral analysis works well in any case, and that one can simply choose
the spectra of interest for the analysis. For instance, single Fourier
spectral analysis can also be applied in the nonstationary case. Finally, the
application of MUB to dimensionality reduction will be considered, for the
case when prior knowledge of the data is not reliable.
|
0712.2587
|
Maximum-Likelihood Priority-First Search Decodable Codes for Combined
Channel Estimation and Error Protection
|
cs.IT math.IT
|
Codes that combine channel estimation and error protection have received
general attention recently, and are considered a promising methodology for
compensating the multi-path fading effect. It has been shown by simulations
that such a code design can considerably improve the system performance over
the conventional design, with separate channel estimation and error protection
modules, at the same code rate. Nevertheless, the major obstacle preventing
the practical use of such codes is that the existing codes are mostly found by
computer search, and hence exhibit no good structure for efficient decoding.
As a consequence, time-consuming exhaustive search becomes the only decoding
choice, and the decoding complexity increases dramatically with the codeword
length. In this paper, by optimizing the signal-to-noise ratio, we find a
systematic construction of codes for combined channel estimation and error
protection, and confirm by simulations that its performance is equivalent to
that of the computer-searched codes. Moreover, the structured codes that we
construct by rules can now be maximum-likelihood decoded in terms of a newly
derived recursive metric for use with the priority-first search decoding
algorithm. Thus, the decoding complexity is significantly reduced compared
with that of the exhaustive decoder. An extended code design for fast-fading
channels is also presented. Simulations conclude that our constructed
extension code is robust in performance even if the coherence period is
shorter than the codeword length.
|
0712.2592
|
Strongly consistent nonparametric forecasting and regression for
stationary ergodic sequences
|
math.PR cs.IT math.IT
|
Let $\{(X_i,Y_i)\}$ be a stationary ergodic time series with $(X,Y)$ values
in the product space $\R^d \bigotimes \R$. This study offers what is believed
to be the first strongly consistent (with respect to pointwise, least-squares,
and uniform distance) algorithm for inferring $m(x)=E[Y_0|X_0=x]$ under the
presumption that $m(x)$ is uniformly Lipschitz continuous. Auto-regression, or
forecasting, is an important special case, and as such our work extends the
literature of nonparametric, nonlinear forecasting by circumventing customary
mixing assumptions. The work is motivated by a time series model in stochastic
finance and by perspectives of its contribution to the issues of universal time
series estimation.
|
0712.2619
|
A New Lower Bound for A(17,6,6)
|
cs.IT cs.DM math.CO math.IT
|
We construct a record-breaking binary code of length 17, minimal distance 6,
constant weight 6, and containing 113 codewords.
|
0712.2630
|
Evolving XSLT stylesheets
|
cs.NE cs.PL
|
This paper introduces a procedure based on genetic programming to evolve XSLT
programs (usually called stylesheets or logicsheets). XSLT is a general
purpose, document-oriented functional language, generally used to transform XML
documents (or, in general, solve any problem that can be coded as an XML
document). The proposed solution uses a tree representation for the
stylesheets, as well as several specific operators, in order to obtain, in the
studied cases and in a reasonable time, an XSLT stylesheet that performs the
transformation. Several types of representation have been compared, resulting
in different performance and degrees of success.
|
0712.2640
|
Optimal Memoryless Encoding for Low Power Off-Chip Data Buses
|
cs.AR cs.DM cs.IT math.IT
|
Off-chip buses account for a significant portion of the total system power
consumed in embedded systems. Bus encoding schemes have been proposed to
minimize power dissipation, but none has been demonstrated to be optimal with
respect to any measure. In this paper, we give the first provably optimal and
explicit (polynomial-time constructible) families of memoryless codes for
minimizing bit transitions in off-chip buses. Our results imply that having
access to a clock does not make a memoryless encoding scheme that minimizes bit
transitions more powerful.
|
0712.2643
|
Changing Levels of Description in a Fluid Flow Simulation
|
physics.flu-dyn cs.CE
|
We describe here our perception of complex systems, and how we feel that the
different layers of description are an important part of a correct complex
system simulation. We describe a rough categorization of models into
rule-based and law-based, and how these categories handle the levels of
description, or scales. We then describe our fluid flow simulation, which
combines different levels of granularity in a mixed approach drawing on both
categories. This simulation is built keeping in mind a later use inside a more
general aquatic ecosystem simulation.
|
0712.2684
|
An Economic Model of Coupled Exponential Maps
|
q-fin.GN cs.MA nlin.AO physics.soc-ph
|
In this work, an ensemble of economic interacting agents is considered. The
agents are arranged in a linear array where only local couplings are allowed.
The deterministic dynamics of each agent is given by a map. This map is
expressed by two factors. The first one is a linear term that models the
expansion of the agent's economy and that is controlled by the {\it growth
capacity parameter}. The second one is an inhibition exponential term that is
regulated by the {\it local environmental pressure}. Depending on the parameter
setting, the system can display Pareto or Boltzmann-Gibbs behavior in the
asymptotic dynamical regime. The regions of parameter space where the system
exhibits one of these two statistical behaviors are delimited. Other properties
of the system, such as the mean wealth, the standard deviation and the Gini
coefficient, are also calculated.
|
0712.2773
|
Middleware-based Database Replication: The Gaps between Theory and
Practice
|
cs.DB cs.DC cs.PF
|
The need for high availability and performance in data management systems has
been fueling a long running interest in database replication from both academia
and industry. However, academic groups often attack replication problems in
isolation, overlooking the need for completeness in their solutions, while
commercial teams take a holistic approach that often misses opportunities for
fundamental innovation. This has created over time a gap between academic
research and industrial practice.
This paper aims to characterize the gap along three axes: performance,
availability, and administration. We build on our own experience developing and
deploying replication systems in commercial and academic settings, as well as
on a large body of prior related work. We sift through representative examples
from the last decade of open-source, academic, and commercial database
replication systems and combine this material with case studies from real
systems deployed at Fortune 500 customers. We propose two agendas, one for
academic research and one for industrial R&D, which we believe can bridge the
gap within 5-10 years. This way, we hope to both motivate and help researchers
in making the theory and practice of middleware-based database replication more
relevant to each other.
|
0712.2789
|
Trading in Risk Dimensions (TRD)
|
cs.CE cs.NA
|
Previous work, mostly published, developed two-shell recursive trading
systems. An inner-shell of Canonical Momenta Indicators (CMI) is adaptively fit
to incoming market data. A parameterized trading-rule outer-shell uses the
global optimization code Adaptive Simulated Annealing (ASA) to fit the trading
system to historical data. A simple fitting algorithm, usually not requiring
ASA, is used for the inner-shell fit. An additional risk-management
middle-shell has been added to create a three-shell recursive
optimization/sampling/fitting algorithm. Portfolio-level distributions of
copula-transformed multivariate distributions (with constituent markets
possessing different marginal distributions in returns space) are generated by
Monte Carlo samplings. ASA is used to importance-sample weightings of these
markets.
The core code, Trading in Risk Dimensions (TRD), processes Training and
Testing trading systems on historical data, and consistently interacts with
RealTime trading platforms at minute resolutions, but this scale can be
modified. This approach transforms constituent probability distributions into a
common space where it makes sense to develop correlations to further develop
probability distributions and risk/uncertainty analyses of the full portfolio.
ASA is used for importance-sampling these distributions and for optimizing
system parameters.
|
0712.2857
|
Single-Exclusion Number and the Stopping Redundancy of MDS Codes
|
cs.IT cs.DM math.CO math.IT
|
For a linear block code C, its stopping redundancy is defined as the smallest
number of check nodes in a Tanner graph for C, such that there exist no
stopping sets of size smaller than the minimum distance of C. Schwartz and
Vardy conjectured that the stopping redundancy of an MDS code should only
depend on its length and minimum distance.
We define the (n,t)-single-exclusion number, S(n,t), as the smallest number of
t-subsets of an n-set, such that for each i-subset of the n-set, i=1,...,t+1,
there exists a t-subset that contains all but one element of the i-subset. New
upper bounds on the single-exclusion number are obtained via probabilistic
methods, recurrent inequalities, as well as explicit constructions. The new
bounds are used to better understand the stopping redundancy of MDS codes. In
particular, it is shown that for [n,k=n-d+1,d] MDS codes, as n goes to
infinity, the stopping redundancy is asymptotic to S(n,d-2), if d=o(\sqrt{n}),
or if k=o(\sqrt{n}) and k goes to infinity, thus giving partial confirmation of
the Schwartz-Vardy conjecture in the asymptotic sense.
|
0712.2869
|
Density estimation in linear time
|
cs.LG
|
We consider the problem of choosing a density estimate from a set of
distributions F, minimizing the L1-distance to an unknown distribution
(Devroye, Lugosi 2001). Devroye and Lugosi analyze two algorithms for the
problem: Scheffe tournament winner and minimum distance estimate. The Scheffe
tournament estimate requires fewer computations than the minimum distance
estimate, but has strictly weaker guarantees than the latter.
We focus on the computational aspect of density estimation. We present two
algorithms, both with the same guarantee as the minimum distance estimate. The
first one, a modification of the minimum distance estimate, uses the same
number (quadratic in |F|) of computations as the Scheffe tournament. The second
one, called ``efficient minimum loss-weight estimate,'' uses only a linear
number of computations, assuming that F is preprocessed.
We also give examples showing that the guarantees of the algorithms cannot be
improved and explore randomized algorithms for density estimation.
|
0712.2870
|
The source coding game with a cheating switcher
|
cs.IT cs.CV math.IT
|
Motivated by the lossy compression of an active-vision video stream, we
consider the problem of finding the rate-distortion function of an arbitrarily
varying source (AVS) composed of a finite number of subsources with known
distributions. Berger's paper `The Source Coding Game', \emph{IEEE Trans.
Inform. Theory}, 1971, solves this problem under the condition that the
adversary is allowed only strictly causal access to the subsource realizations.
We consider the case when the adversary has access to the subsource
realizations non-causally. Using the type-covering lemma, this new
rate-distortion function is determined to be the maximum of the IID
rate-distortion function over a set of source distributions attainable by the
adversary. We then extend the results to allow for partial or noisy
observations of subsource realizations. We further explore the model by
attempting to find the rate-distortion function when the adversary is actually
helpful.
Finally, a bound is developed on the uniform continuity of the IID
rate-distortion function for finite-alphabet sources. The bound is used to give
a sufficient number of distributions that need to be sampled to compute the
rate-distortion function of an AVS to within a certain accuracy. The bound is
also used to give a rate of convergence for the estimate of the rate-distortion
function for an unknown IID finite-alphabet source.
|
0712.2872
|
Low SNR Capacity of Noncoherent Fading Channels
|
cs.IT math.IT
|
Discrete-time Rayleigh fading single-input single-output (SISO) and
multiple-input multiple-output (MIMO) channels are considered, with no channel
state information at the transmitter or the receiver. The fading is assumed to
be stationary and correlated in time, but independent from antenna to antenna.
Peak-power and average-power constraints are imposed on the transmit antennas.
For MIMO channels, these constraints are either imposed on the sum over
antennas, or on each individual antenna. For SISO channels and MIMO channels
with sum power constraints, the asymptotic capacity as the peak signal-to-noise
ratio tends to zero is identified; for MIMO channels with individual power
constraints, this asymptotic capacity is obtained for a class of channels
called transmit separable channels. The results for MIMO channels with
individual power constraints are carried over to SISO channels with delay
spread (i.e. frequency selective fading).
|
0712.2923
|
A Class of LULU Operators on Multi-Dimensional Arrays
|
cs.CV
|
The LULU operators for sequences are extended to multi-dimensional arrays via
the morphological concept of connection in a way which preserves their
essential properties, e.g. they are separators and form a four-element fully
ordered semi-group. The power of the operators is demonstrated by deriving a
total variation preserving discrete pulse decomposition of images.
|
0712.2959
|
Joint Source-Channel Coding Revisited: Information-Spectrum Approach
|
cs.IT math.IT
|
Given a general source with countably infinite source alphabet and a general
channel with arbitrary abstract channel input/channel output alphabets, we
study the joint source-channel coding problem from the information-spectrum
point of view. First, we generalize Feinstein's lemma (direct part) and
Verdu-Han's lemma (converse part) so as to be applicable to the general joint
source-channel coding problem. Based on these lemmas, we establish a sufficient
condition as well as a necessary condition for the source to be reliably
transmissible over the channel with asymptotically vanishing probability of
error. It is shown that our sufficient condition is equivalent to the
sufficient condition derived by Vembu, Verdu and Steinberg, whereas our
necessary condition is shown to be stronger than or equivalent to the necessary
condition derived by them. It turns out, as a direct consequence, that
separation principle in a relevantly generalized sense holds for a wide class
of sources and channels, as was shown in a quite different manner by Vembu,
Verdu and Steinberg. It should also be remarked that a nice duality is found
between our necessary and sufficient conditions, whereas we cannot fully enjoy
such a duality between the necessary condition and the sufficient condition by
Vembu, Verdu and Steinberg. In addition, we demonstrate a sufficient condition
as well as a necessary condition for the epsilon-transmissibility. Finally, the
separation theorem of the traditional standard form is shown to hold for the
class of sources and channels that satisfy the semi-strong converse property.
|
0712.3147
|
Common knowledge logic in a higher order proof assistant?
|
cs.AI cs.LO
|
This paper presents experiments on common knowledge logic, conducted with the
help of the proof assistant Coq. The main feature of common knowledge logic is
the eponymous modality that says that a group of agents shares a knowledge
about a certain proposition in an inductive way. This modality is specified by
using a fixpoint approach. Furthermore, from these experiments, we discuss and
compare the structure of theorems that can be proved in specific theories that
use common knowledge logic. These structures manifest the interplay between
the theory (as implemented in the proof assistant Coq) and the metatheory.
|
0712.3277
|
On the Capacity and Energy Efficiency of Training-Based Transmissions
over Fading Channels
|
cs.IT math.IT
|
In this paper, the capacity and energy efficiency of training-based
communication schemes employed for transmission over a-priori unknown Rayleigh
block fading channels are studied. In these schemes, periodically transmitted
training symbols are used at the receiver to obtain the minimum
mean-square-error (MMSE) estimate of the channel fading coefficients.
Initially, the case in which the product of the estimate error and transmitted
signal is assumed to be Gaussian noise is considered. In this case, it is shown
that bit energy requirements grow without bound as the signal-to-noise ratio
(SNR) goes to zero, and the minimum bit energy is achieved at a nonzero SNR
value below which one should not operate. The effect of the block length on
both the minimum bit energy and the SNR value at which the minimum is achieved
is investigated. Flash training and transmission schemes are analyzed and shown
to improve the energy efficiency in the low-SNR regime.
In the second part of the paper, the capacity and energy efficiency of
training-based schemes are investigated when the channel input is subject to
peak power constraints. The capacity-achieving input structure is characterized
and the magnitude distribution of the optimal input is shown to be discrete
with a finite number of mass points. The capacity, bit energy requirements, and
optimal resource allocation strategies are obtained through numerical analysis.
The bit energy is again shown to grow without bound as SNR decreases to zero
due to the presence of peakedness constraints. The improvements in energy
efficiency when on-off keying with fixed peak power and vanishing duty cycle is
employed are studied. Comparisons of the performances of training-based and
noncoherent transmission schemes are provided.
|
0712.3286
|
Error Rate Analysis for Peaky Signaling over Fading Channels
|
cs.IT math.IT
|
In this paper, the performance of signaling strategies with high
peak-to-average power ratio is analyzed over both coherent and noncoherent
fading channels. Two modulation schemes, namely on-off phase-shift keying
(OOPSK) and on-off frequency-shift keying (OOFSK), are considered. Initially,
uncoded systems are analyzed. For OOPSK and OOFSK, the optimal detector
structures are identified and analytical expressions for the error
probabilities are obtained for arbitrary constellation sizes. Numerical
techniques are employed to compute the error probabilities. It is concluded
that increasing the peakedness of the signals results in reduced error rates
for a given power level and hence equivalently improves the energy efficiency
for fixed error probabilities. The coded performance is also studied by
analyzing the random coding error exponents achieved by OOPSK and OOFSK
signaling.
|
0712.3298
|
CLAIRLIB Documentation v1.03
|
cs.IR cs.CL
|
The Clair library is intended to simplify a number of generic tasks in
Natural Language Processing (NLP), Information Retrieval (IR), and Network
Analysis. Its architecture also allows for external software to be plugged in
with very little effort. Functionality native to Clairlib includes
Tokenization, Summarization, LexRank, Biased LexRank, Document Clustering,
Document Indexing, PageRank, Biased PageRank, Web Graph Analysis, Network
Generation, Power Law Distribution Analysis, Network Analysis (clustering
coefficient, degree distribution plotting, average shortest path, diameter,
triangles, shortest path matrices, connected components), Cosine Similarity,
Random Walks on Graphs, Statistics (distributions, tests), Tf, Idf, Community
Finding.
|
0712.3299
|
Computer- and robot-assisted urological surgery
|
cs.OH cs.RO
|
The author reviews the computer and robotic tools available to urologists to
help in diagnosis and technical procedures. The first part concerns the
contribution of robotics and presents several systems at various stages of
development (laboratory prototypes, systems under validation or marketed
systems). The second part describes image fusion tools and navigation systems
currently under development or evaluation. Several studies on computerized
simulation of urological procedures are also presented.
|
0712.3327
|
The capacity of a class of 3-receiver broadcast channels with degraded
message sets
|
cs.IT math.IT
|
Korner and Marton established the capacity region for the 2-receiver
broadcast channel with degraded message sets. Recent results and conjectures
suggest that a straightforward extension of the Korner-Marton region to more
than 2 receivers is optimal. This paper shows that this is not the case. We
establish the capacity region for a class of 3-receiver broadcast channels with
2 degraded message sets and show that it can be strictly larger than the
straightforward extension of the Korner-Marton region. The key new idea is
indirect decoding, whereby a receiver who cannot directly decode a cloud
center, finds it indirectly by decoding satellite codewords. This idea is then
used to establish new inner and outer bounds on the capacity region of the
general 3-receiver broadcast channel with 2 and 3 degraded message sets. We
show that these bounds are tight for some nontrivial cases. The results suggest
that the capacity of the 3-receiver broadcast channel with degraded message
sets is at least as hard to find as the capacity of the general 2-receiver
broadcast channel with common and private messages.
|
0712.3329
|
Universal Intelligence: A Definition of Machine Intelligence
|
cs.AI
|
A fundamental problem in artificial intelligence is that nobody really knows
what intelligence is. The problem is especially acute when we need to consider
artificial systems which are significantly different from humans. In this paper
we approach this problem in the following way: We take a number of well known
informal definitions of human intelligence that have been given by experts, and
extract their essential features. These are then mathematically formalised to
produce a general measure of intelligence for arbitrary machines. We believe
that this equation formally captures the concept of machine intelligence in the
broadest reasonable sense. We then show how this formal definition is related
to the theory of universal optimal learning agents. Finally, we survey the many
other tests and definitions of intelligence that have been proposed for
machines.
|
0712.3402
|
Graph kernels between point clouds
|
cs.LG
|
Point clouds are sets of points in two or three dimensions. Most kernel
methods for learning on sets of points have not yet dealt with the specific
geometrical invariances and practical constraints associated with point clouds
in computer vision and graphics. In this paper, we present extensions of graph
kernels for point clouds, which allow the use of kernel methods for such objects
as shapes, line drawings, or any three-dimensional point clouds. In order to
design rich and numerically efficient kernels with as few free parameters as
possible, we use kernels between covariance matrices and their factorizations
on graphical models. We derive polynomial time dynamic programming recursions
and present applications to recognition of handwritten digits and Chinese
characters from few training examples.
|
0712.3423
|
Tuplix Calculus
|
cs.LO cs.CE
|
We introduce a calculus for tuplices, which are expressions that generalize
matrices and vectors. Tuplices have an underlying data type for quantities that
are taken from a zero-totalized field. We start with the core tuplix calculus
CTC for entries and tests, which are combined using conjunctive composition. We
define a standard model and prove that CTC is relatively complete with respect
to it. The core calculus is extended with operators for choice, information
hiding, scalar multiplication, clearing and encapsulation. We provide two
examples of applications: one on incremental financial budgeting, and one on
modular financial budget design.
|
0712.3501
|
The Impact of Hard-Decision Detection on the Energy Efficiency of Phase
and Frequency Modulation
|
cs.IT math.IT
|
The central design challenge in next generation wireless systems is to have
these systems operate at high bandwidths and provide high data rates while
being cognizant of the energy consumption levels especially in mobile
applications. Since communicating at very high data rates prohibits obtaining
high bit resolutions from the analog-to-digital (A/D) converters, analysis of
the energy efficiency under the assumption of hard-decision detection is called
for to accurately predict the performance levels. In this paper, transmission
over the additive white Gaussian noise (AWGN) channel, and coherent and
noncoherent fading channels is considered, and the impact of hard-decision
detection on the energy efficiency of phase and frequency modulations is
investigated. Energy efficiency is analyzed by studying the capacity of these
modulation schemes and the energy required to send one bit of information
reliably in the low signal-to-noise ratio (SNR) regime. The capacity of
hard-decision-detected phase and frequency modulations is characterized at low
SNR levels through closed-form expressions for the first and second derivatives
of the capacity at zero SNR. Subsequently, bit energy requirements in the
low-SNR regime are identified. The increases in the bit energy incurred by
hard-decision detection and channel fading are quantified. Moreover, practical
design guidelines for the selection of the constellation size are drawn from
the analysis of the spectral efficiency--bit energy tradeoff.
|
0712.3576
|
Protocols For Half-Duplex Multiple Relay Networks
|
cs.IT math.IT
|
In this paper we present several strategies for multiple relay networks which
are constrained by a half-duplex operation, i. e., each node either transmits
or receives on a particular resource. Using the discrete memoryless multiple
relay channel we present achievable rates for a multilevel partial
decode-and-forward approach which generalizes previous results presented by
Kramer and Khojastepour et al. Furthermore, we derive a compress-and-forward
approach using a regular encoding scheme which simplifies the encoding and
decoding scheme and improves the achievable rates in general. Finally, we give
achievable rates for a mixed strategy used in a four-terminal network with
alternately transmitting relay nodes.
|
0712.3587
|
Pattern Recognition System Design with Linear Encoding for Discrete
Patterns
|
cs.IT cs.CV math.IT
|
In this paper, designs and analyses of compressive recognition systems are
discussed, and also a method of establishing a dual connection between designs
of good communication codes and designs of recognition systems is presented.
Pattern recognition systems based on compressed patterns and compressed sensor
measurements can be designed using low-density matrices. We examine truncation
encoding, where a subset of the patterns and measurements is stored perfectly
while the rest is discarded. We also examine the use of LDPC parity check
matrices for compressing measurements and patterns. We show how more general
ensembles of good linear codes can be used as the basis for pattern recognition
system design, yielding system design strategies for more general noise models.
|
0712.3617
|
A Unified Framework for Pricing Credit and Equity Derivatives
|
cs.CE
|
We propose a model which can be jointly calibrated to the corporate bond term
structure and equity option volatility surface of the same company. Our purpose
is to obtain explicit bond and equity option pricing formulas that can be
calibrated to find a risk neutral model that matches a set of observed market
prices. This risk neutral model can then be used to price more exotic, illiquid
or over-the-counter derivatives. We observe that the model implied credit
default swap (CDS) spread matches the market CDS spread and that our model
produces a very desirable CDS spread term structure. This observation is worth
noting since, without calibrating any parameter to the CDS spread data, it is
matched by the CDS spread that our model generates using the available
information from the equity options and corporate bond markets. We also observe
that our model matches the equity option implied volatility surface well since
we properly account for the default risk premium in the implied volatility
surface. We demonstrate the importance of accounting for the default risk and
stochastic interest rate in equity option pricing by comparing our results to
Fouque, Papanicolaou, Sircar and Solna (2003), which only accounts for
stochastic volatility.
|
0712.3654
|
Improving the Performance of PieceWise Linear Separation Incremental
Algorithms for Practical Hardware Implementations
|
cs.NE cs.AI cs.LG
|
In this paper we shall review the common problems associated with Piecewise
Linear Separation incremental algorithms. This kind of neural model yields poor
performance when dealing with some classification problems, due to the
evolving schemes used to construct the resulting networks. So as to avoid this
undesirable behavior we shall propose a modification criterion. It is based
upon the definition of a function which will provide information about the
quality of the network growth process during the learning phase. This function
is evaluated periodically as the network structure evolves and, as we shall show
through exhaustive benchmarks, permits a considerable improvement in the
performance (measured in terms of network complexity and generalization
capabilities) offered by the networks generated by these incremental models.
|
0712.3705
|
Framework and Resources for Natural Language Parser Evaluation
|
cs.CL
|
Because of the wide variety of contemporary practices used in the automatic
syntactic parsing of natural languages, it has become necessary to analyze and
evaluate the strengths and weaknesses of different approaches. This research is
all the more necessary because there are currently no genre- and
domain-independent parsers that are able to analyze unrestricted text with 100%
preciseness (I use this term to refer to the correctness of analyses assigned
by a parser). All these factors create a need for methods and resources that
can be used to evaluate and compare parsing systems. This research describes:
(1) A theoretical analysis of current achievements in parsing and parser
evaluation. (2) A framework (called FEPa) that can be used to carry out
practical parser evaluations and comparisons. (3) A set of new evaluation
resources: FiEval is a Finnish treebank under construction, and MGTS and RobSet
are parser evaluation resources in English. (4) The results of experiments in
which the developed evaluation framework and the two resources for English were
used for evaluating a set of selected parsers.
|
0712.3807
|
Improved Collaborative Filtering Algorithm via Information
Transformation
|
cs.LG cs.CY
|
In this paper, we propose a spreading activation approach for collaborative
filtering (SA-CF). By using the opinion spreading process, the similarity
between any pair of users can be obtained. The algorithm has remarkably higher accuracy
than the standard collaborative filtering (CF) using Pearson correlation.
Furthermore, we introduce a free parameter $\beta$ to regulate the
contributions of objects to user-user correlations. The numerical results
indicate that decreasing the influence of popular objects can further improve
the algorithmic accuracy and personality. We argue that a better algorithm
should simultaneously require less computation and generate higher accuracy.
Accordingly, we further propose an algorithm involving only the top-$N$ similar
neighbors for each target user, which has both less computational complexity
and higher algorithmic accuracy.
|
0712.3823
|
Multidimensional reconciliation for continuous-variable quantum key
distribution
|
quant-ph cs.IT math.IT
|
We propose a method for extracting an errorless secret key in a
continuous-variable quantum key distribution protocol, which is based on
Gaussian modulation of coherent states and homodyne detection. The crucial
feature is an eight-dimensional reconciliation method, based on the algebraic
properties of octonions. Since the protocol does not use any postselection, it
can be proven secure against arbitrary collective attacks, by using
well-established theorems on the optimality of Gaussian attacks. By using this
new coding scheme with an appropriate signal-to-noise ratio, the distance for
secure continuous-variable quantum key distribution can be significantly
extended.
|
0712.3825
|
Tests of Machine Intelligence
|
cs.AI
|
Although the definition and measurement of intelligence is clearly of
fundamental importance to the field of artificial intelligence, no general
survey of definitions and tests of machine intelligence exists. Indeed, few
researchers are even aware of alternatives to the Turing test and its many
derivatives. In this paper we fill this gap by providing a short survey of the
many tests of machine intelligence that have been proposed.
|
0712.3896
|
Tighter and Stable Bounds for Marcum Q-Function
|
cs.IT math.IT
|
This paper proposes new bounds for the Marcum Q-function, which prove extremely
tight and outperform all the bounds previously proposed in the literature. What
is more, the proposed bounds are good and stable both for large values and
small values of the parameters of the Marcum Q-function, where the previously
introduced bounds are loose or even useless under some conditions. The new
bounds are derived by refined approximations for the 0th order modified Bessel
function in the integration region of the Marcum Q-function. They should be
useful since they are always tight whether the parameters are large or small.
|
0712.3925
|
QIS-XML: A metadata specification for Quantum Information Science
|
cs.SE cs.DB quant-ph
|
While Quantum Information Science (QIS) is still in its infancy, the ability
for quantum-based hardware or computers to communicate and integrate with their
classical counterparts will be a major requirement for their success. Little
attention, however, has been paid to this aspect of QIS. To manage and
exchange information between systems, today's classic Information Technology
(IT) commonly uses the eXtensible Markup Language (XML) and its related tools.
XML is composed of numerous specifications related to various fields of
expertise. No such global specification however has been defined for quantum
computers. QIS-XML is a proposed XML metadata specification for the description
of fundamental components of QIS (gates & circuits) and a platform for the
development of a hardware independent low level pseudo-code for quantum
algorithms. This paper lays out the general characteristics of the QIS-XML
specification and outlines practical applications through prototype use cases.
|
0712.3973
|
GUIDE: Unifying Evolutionary Engines through a Graphical User Interface
|
cs.NE
|
Many kinds of Evolutionary Algorithms (EAs) have been described in the
literature over the last 30 years. However, though most of them share a common
structure, no existing software package allows the user to actually shift from
one model to another by simply changing a few parameters, e.g. in a single
window of a Graphical User Interface. This paper presents GUIDE, a Graphical
User Interface for DREAM Experiments that, among other user-friendly features,
unifies all kinds of EAs into a single panel, as far as evolution parameters
are concerned. Such a window can be used either to ask for one of the well
known ready-to-use algorithms, or to very easily explore new combinations that
have not yet been studied. Another advantage of grouping all necessary elements
to describe virtually all kinds of EAs is that it creates a fantastic pedagogic
tool to teach EAs to students and newcomers to the field.
|
0712.4011
|
Asymptotic Mutual Information Statistics of Separately-Correlated Rician
Fading MIMO Channels
|
cs.IT math.IT
|
Precise characterization of the mutual information of MIMO systems is
required to assess the throughput of wireless communication channels in the
presence of Rician fading and spatial correlation. Here, we present an
asymptotic approach that allows us to approximate the distribution of the mutual
information as a Gaussian distribution in order to provide both the average
achievable rate and the outage probability. More precisely, the mean and
variance of the mutual information of the separately-correlated Rician fading
MIMO channel are derived when the number of transmit and receive antennas grows
asymptotically large and their ratio approaches a finite constant. The
derivation is based on the replica method, an asymptotic technique widely used
in theoretical physics and, more recently, in the performance analysis of
communication (CDMA and MIMO) systems. The replica method allows one to analyze
very difficult system cases in a comparatively simple way, though some authors
have pointed out that its assumptions are not always rigorous. Being aware of this,
we underline the key assumptions made in this setting, quite similar to the
assumptions made in the technical literature using the replica method in their
asymptotic analyses. As far as the convergence of the mutual information to the
Gaussian distribution is concerned, it is shown that it holds under some
mild technical conditions, which are tantamount to assuming that the spatial
correlation structure has no asymptotically dominant eigenmodes. The accuracy
of the asymptotic approach is assessed by providing a sizeable number of
numerical results. It is shown that the approximation is very accurate in a
wide variety of system settings even when the number of transmit and receive
antennas is as small as a few units.
|
0712.4015
|
A Fast Hierarchical Multilevel Image Segmentation Method using Unbiased
Estimators
|
cs.CV
|
This paper proposes a novel method for segmentation of images by hierarchical
multilevel thresholding. The method is global, agglomerative in nature and
disregards pixel locations. It involves the optimization of the ratio of the
unbiased estimators of within class to between class variances. We obtain a
recursive relation at each step for the variances which expedites the process.
The efficacy of the method is shown in a comparison with some well-known
methods.
|
0712.4059
|
On Distributed Computation in Noisy Random Planar Networks
|
cs.IT math.IT
|
We consider distributed computation of functions of distributed data in
random planar networks with noisy wireless links. We present a new algorithm
for computation of the maximum value which is order optimal in the number of
transmissions and computation time. We also adapt the histogram computation
algorithm of Ying et al to make the histogram computation time optimal.
|
0712.4075
|
Polytope Representations for Linear-Programming Decoding of Non-Binary
Linear Codes
|
cs.IT math.IT
|
In previous work, we demonstrated how decoding of a non-binary linear code
could be formulated as a linear-programming problem. In this paper, we study
different polytopes for use with linear-programming decoding, and show that for
many classes of codes these polytopes yield a complexity advantage for
decoding. These representations lead to polynomial-time decoders for a wide
variety of classical non-binary linear codes.
|
0712.4096
|
Error-Correction of Multidimensional Bursts
|
cs.IT math.IT
|
In this paper we present several constructions to generate codes for
correcting a multidimensional cluster-error. The goal is to correct a
cluster-error whose shape can be a box-error, a Lee sphere error, or an error
with an arbitrary shape. Our codes have very low redundancy, close to optimal,
and large range of parameters of arrays and clusters. Our main results are
summarized as follows: 1) A construction of two-dimensional codes capable of
correcting a rectangular-error with considerably more flexible parameters than
previously known constructions. Another advantage of this construction is that
it is easily generalized for D dimensions. 2) A novel method based on D
colorings of the D-dimensional space for constructing D-dimensional codes
correcting D-dimensional cluster-error of various shapes. This method is
applied efficiently to correct a D-dimensional cluster error of parameters not
covered efficiently by previous constructions. 3) A transformation of the
D-dimensional space into another D-dimensional space such that a D-dimensional
Lee sphere is transformed into a shape located in a D-dimensional box of a
relatively small size. We use the previous constructions to correct a
D-dimensional error whose shape is a D-dimensional Lee sphere. 4) Applying the
coloring method to correct more efficiently a two-dimensional error whose shape
is a Lee sphere. The D-dimensional case is also discussed. 5) A construction of
one-dimensional codes capable of correcting a burst-error of length b in which the
number of erroneous positions is relatively small compared to b. This
construction is generalized for D-dimensional codes. 6) Applying the
constructions correcting a Lee sphere error and a cluster-error with a small
number of erroneous positions, to correct an arbitrary cluster-error.
|
0712.4099
|
Digital Ecosystems: Optimisation by a Distributed Intelligence
|
cs.NE
|
Can intelligence optimise Digital Ecosystems? How could a distributed
intelligence interact with the ecosystem dynamics? Can the software components
that are part of genetic selection be intelligent in themselves, as in an
adaptive technology? We consider the effect of a distributed intelligence
mechanism on the evolutionary and ecological dynamics of our Digital Ecosystem,
which is the digital counterpart of a biological ecosystem for evolving
software services in a distributed network. We investigate Neural Networks and
Support Vector Machine for the learning based pattern recognition functionality
of our distributed intelligence. Simulation results imply that the Digital
Ecosystem performs better with the application of a distributed intelligence,
marginally more effectively when powered by Support Vector Machine than Neural
Networks, and suggest that it can contribute to optimising the operation of our
Digital Ecosystem.
|
0712.4101
|
Digital Ecosystems: Stability of Evolving Agent Populations
|
cs.NE
|
Stability is perhaps one of the most desirable features of any engineered
system, given the importance of being able to predict its response to various
environmental conditions prior to actual deployment. Engineered systems are
becoming ever more complex, approaching the same levels of biological
ecosystems, and so their stability becomes ever more important, but taking on
more and more differential dynamics can make stability an ever more elusive
property. The Chli-DeWilde definition of stability views a Multi-Agent System
as a discrete time Markov chain with potentially unknown transition
probabilities. A Multi-Agent System is considered stable when its state, a
stochastic process, has converged to an equilibrium distribution, because the
stability of a system can be understood intuitively as exhibiting bounded
behaviour. We investigate an extension to include Multi-Agent Systems
with evolutionary dynamics, focusing on the evolving agent populations of our
Digital Ecosystem. We then built upon this to construct an entropy-based
definition for the degree of instability (entropy of the limit probabilities),
which was later used to perform a stability analysis. The Digital Ecosystem is
considered to investigate the stability of an evolving agent population through
simulations, for which the results were consistent with the original
Chli-DeWilde definition.
|
0712.4102
|
Digital Ecosystems: Evolving Service-Oriented Architectures
|
cs.NE
|
We view Digital Ecosystems to be the digital counterparts of biological
ecosystems, exploiting the self-organising properties of biological ecosystems,
which are considered to be robust, self-organising and scalable architectures
that can automatically solve complex, dynamic problems. Digital Ecosystems are
a novel optimisation technique where the optimisation works at two levels: a
first optimisation, migration of agents (representing services) which are
distributed in a decentralised peer-to-peer network, operating continuously in
time; this process feeds a second optimisation based on evolutionary computing
that operates locally on single peers and is aimed at finding solutions to
satisfy locally relevant constraints. We created an Ecosystem-Oriented
Architecture of Digital Ecosystems by extending Service-Oriented Architectures
with distributed evolutionary computing, allowing services to recombine and
evolve over time, constantly seeking to improve their effectiveness for the
user base. Individuals within our Digital Ecosystem will be applications
(groups of services), created in response to user requests by using
evolutionary optimisation to aggregate the services. These individuals will
migrate through the Digital Ecosystem and adapt to find niches where they are
useful in fulfilling other user requests for applications. Simulation results
imply that the Digital Ecosystem performs better at large scales than a
comparable Service-Oriented Architecture, suggesting that incorporating ideas
from theoretical ecology can contribute to useful self-organising properties in
digital ecosystems.
|
0712.4103
|
On the Monotonicity of the Generalized Marcum and Nuttall Q-Functions
|
cs.IT math.IT
|
Monotonicity criteria are established for the generalized Marcum Q-function,
$\emph{Q}_{M}$, the standard Nuttall Q-function, $\emph{Q}_{M,N}$, and the
normalized Nuttall Q-function, $\mathcal{Q}_{M,N}$, with respect to their real
order indices M,N. Besides, closed-form expressions are derived for the
computation of the standard and normalized Nuttall Q-functions for the case
when M,N are odd multiples of 0.5 and $M\geq N$. By exploiting these results,
novel upper and lower bounds for $\emph{Q}_{M,N}$ and $\mathcal{Q}_{M,N}$ are
proposed. Furthermore, specific tight upper and lower bounds for
$\emph{Q}_{M}$, previously reported in the literature, are extended for real
values of M. The offered theoretical results can be efficiently applied in the
study of digital communications over fading channels, in the
information-theoretic analysis of multiple-input multiple-output systems and in
the description of stochastic processes in probability theory, among others.
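The monotonicity of $\emph{Q}_{M}$ in its order M can be checked numerically through the standard identity relating the Marcum Q-function to the survival function of a noncentral chi-square distribution; a minimal SciPy sketch (parameter values chosen arbitrarily for illustration, not taken from the paper):

```python
from scipy.stats import ncx2

def marcum_q(M, a, b):
    # Standard identity: Q_M(a, b) = P(X > b^2), where X follows a
    # noncentral chi-square law with 2M degrees of freedom and
    # noncentrality parameter a^2 (non-integer M is allowed).
    return ncx2.sf(b**2, df=2 * M, nc=a**2)

a, b = 1.0, 2.0
orders = [0.5, 1.0, 1.5, 2.0, 3.0]
vals = [marcum_q(M, a, b) for M in orders]
# Q_M(a, b) should be strictly increasing in the order M
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))
```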
|
0712.4115
|
A Class of Quantum LDPC Codes Constructed From Finite Geometries
|
quant-ph cs.IT math.IT
|
Low-density parity check (LDPC) codes are a significant class of classical
codes with many applications. Several good LDPC codes have been constructed
using random, algebraic, and finite-geometry approaches, whose Tanner graphs
contain cycles of length at least six. However, it is impossible
to design a self-orthogonal parity check matrix of an LDPC code without
introducing cycles of length four.
In this paper, a new class of quantum LDPC codes based on lines and points of
finite geometries is constructed. The parity check matrices of these codes are
adapted to be self-orthogonal while containing only one cycle of length four.
Also given are the column and row weights, together with bounds on the minimum
distance of these codes. As a consequence, the encoding and decoding algorithms of
these codes as well as their performance over various quantum depolarizing
channels will be investigated.
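The self-orthogonality condition in question can be stated concretely: a binary parity check matrix H is self-orthogonal when H H^T vanishes over GF(2). A small sketch with a hypothetical matrix (not one of the finite-geometry matrices constructed in the paper):

```python
import numpy as np

def is_self_orthogonal(H):
    # H is self-orthogonal over GF(2) iff H @ H.T == 0 mod 2, i.e. every
    # row has even weight and every pair of rows overlaps in an even
    # number of positions.
    return not np.any((H @ H.T) % 2)

# hypothetical small example, not a finite-geometry matrix from the paper
H = np.array([[1, 1, 1, 1, 0, 0],
              [0, 0, 1, 1, 1, 1],
              [1, 1, 0, 0, 1, 1]])
assert is_self_orthogonal(H)
# two rows overlapping in a single position fail the condition
assert not is_self_orthogonal(np.array([[1, 1, 0], [0, 1, 1]]))
```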
|
0712.4126
|
TRUST-TECH based Methods for Optimization and Learning
|
cs.AI cs.CE cs.MS cs.NA cs.NE
|
Many problems that arise in machine learning domain deal with nonlinearity
and quite often demand users to obtain global optimal solutions rather than
local optimal ones. Optimization problems are inherent in machine learning
algorithms and hence many methods in machine learning were inherited from the
optimization literature. In what is popularly known as the initialization
problem, the quality of the obtained solution depends significantly on the
given initialization values. The recently developed TRUST-TECH (TRansformation Under
STability-reTaining Equilibria CHaracterization) methodology systematically
explores the subspace of the parameters to obtain a complete set of local
optimal solutions. In this thesis work, we propose TRUST-TECH based methods for
solving several optimization and machine learning problems. Two stages, namely
the local stage and the neighborhood-search stage, are repeated alternately
in the solution space to achieve improvements in the quality of the solutions.
Our methods were tested on both synthetic and real datasets and the advantages
of using this novel framework are clearly manifested. This framework not only
reduces the sensitivity to initialization, but also allows the flexibility for
the practitioners to use various global and local methods that work well for a
particular problem of interest. Other hierarchical stochastic algorithms like
evolutionary algorithms and smoothing algorithms are also studied and
frameworks for combining these methods with TRUST-TECH have been proposed and
evaluated on several test systems.
|
0712.4135
|
On the Throughput of Secure Hybrid-ARQ Protocols for Gaussian
Block-Fading Channels
|
cs.IT math.IT
|
The focus of this paper is an information-theoretic study of retransmission
protocols for reliable packet communication under a secrecy constraint. The
hybrid automatic retransmission request (HARQ) protocol is revisited for a
block-fading wire-tap channel, in which two legitimate users communicate over a
block-fading channel in the presence of a passive eavesdropper who intercepts
the transmissions through an independent block-fading channel. In this model,
the transmitter obtains a 1-bit ACK/NACK feedback from the legitimate receiver
via an error-free public channel. Both reliability and confidentiality of
secure HARQ protocols are studied by the joint consideration of channel coding,
secrecy coding, and retransmission protocols. In particular, the error and
secrecy performance of the repetition time diversity (RTD) and incremental
redundancy (INR) protocols is investigated based on good Wyner code sequences,
which ensure that the confidential message is decoded successfully by the
legitimate receiver while the eavesdropper is kept in total ignorance of it for
a given set of channel realizations. This paper first illustrates that there
exists a good rate-compatible Wyner code family which ensures a secure INR
protocol. Next, two types of outage probability, the connection outage and
secrecy outage probabilities, are defined in order to characterize the tradeoff
between the reliability of the legitimate communication link and the
confidentiality with respect to the eavesdropper's link. For a given
connection/secrecy outage probability pair, an achievable throughput of secure
HARQ protocols is derived for block-fading channels. Finally, both asymptotic
analysis and numerical computations demonstrate the benefits of HARQ protocols
to throughput and secrecy.
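The two outage notions can be illustrated with a small Monte Carlo sketch for a single Rayleigh block-fading realization, without HARQ combining (the SNRs and rates below are arbitrary illustrative values, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
snr_main, snr_eve = 10.0, 3.0    # average link SNRs (linear); assumed values
R, Rs = 2.0, 1.0                 # transmission rate and secrecy rate (bits/use)

# Rayleigh fading: channel power gains are unit-mean exponential
h_main = rng.exponential(1.0, n)
h_eve = rng.exponential(1.0, n)

# connection outage: the legitimate link cannot support the rate R
conn_outage = np.mean(np.log2(1 + snr_main * h_main) < R)
# secrecy outage: the eavesdropper's channel exceeds the rate gap R - Rs
sec_outage = np.mean(np.log2(1 + snr_eve * h_eve) > R - Rs)
```

Both estimates can be checked against the closed-form exponential-tail expressions for Rayleigh fading.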
|
0712.4153
|
Biology of Applied Digital Ecosystems
|
cs.NE cs.MA
|
A primary motivation for our research in Digital Ecosystems is the desire to
exploit the self-organising properties of biological ecosystems. Ecosystems are
thought to be robust, scalable architectures that can automatically solve
complex, dynamic problems. However, the biological processes that contribute to
these properties have not been made explicit in Digital Ecosystems research.
Here, we discuss how biological properties contribute to the self-organising
features of biological ecosystems, including population dynamics, evolution, a
complex dynamic environment, and spatial distributions for generating local
interactions. The potential for exploiting these properties in artificial
systems is then considered. We suggest that several key features of biological
ecosystems have not been fully explored in existing digital ecosystems, and
discuss how mimicking these features may assist in developing robust, scalable
self-organising architectures. An example architecture, the Digital Ecosystem,
is considered in detail. The Digital Ecosystem is then measured experimentally
through simulations, with measures originating from theoretical ecology, to
confirm its likeness to a biological ecosystem; these include the
responsiveness to requests for applications from the user base, as a measure of
'ecological succession' (development).
|
0712.4159
|
Creating a Digital Ecosystem: Service-Oriented Architectures with
Distributed Evolutionary Computing
|
cs.NE
|
We start with a discussion of the relevant literature, including Nature
Inspired Computing as a framework in which to understand this work, and the
process of biomimicry to be used in mimicking the necessary biological
processes to create Digital Ecosystems. We then consider the relevant
theoretical ecology in creating the digital counterpart of a biological
ecosystem, including the topological structure of ecosystems, and evolutionary
processes within distributed environments. This leads to a discussion of the
relevant fields from computer science for the creation of Digital Ecosystems,
including evolutionary computing, Multi-Agent Systems, and Service-Oriented
Architectures. We then define Ecosystem-Oriented Architectures for the creation
of Digital Ecosystems, imbued with the properties of self-organisation and
scalability from biological ecosystems, including a novel form of distributed
evolutionary computing.
|
0712.4183
|
Probabilistic Visual Secret Sharing Schemes for Gray-scale images and
Color images
|
cs.CR cs.CV
|
Visual secret sharing (VSS) is an encryption technique that utilizes the human
visual system to recover the secret image, requiring no complex calculation.
Pixel expansion has been a major issue of VSS schemes. A
number of probabilistic VSS schemes with minimum pixel expansion have been
proposed for binary secret images. This paper presents a general probabilistic
(k, n)-VSS scheme for gray-scale images and another scheme for color images.
With our schemes, the pixel expansion can be set to a user-defined value. When
this value is 1, there is no pixel expansion at all. The quality of
reconstructed secret images, measured by Average Relative Difference, is
equivalent to the Relative Difference of existing deterministic schemes.
Previous probabilistic VSS schemes for black-and-white images, with respect to
pixel expansion, can be viewed as special cases of the schemes proposed here.
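For intuition, the classic (2, 2) probabilistic scheme with no pixel expansion for binary images can be sketched as follows (this is the textbook black-and-white construction, not the generalized gray-scale or color schemes of the paper):

```python
import numpy as np

def pvss_22(secret, rng):
    # (2, 2) probabilistic VSS without pixel expansion for a binary image
    # (0 = white, 1 = black); shares are stacked with a pixel-wise OR.
    s1 = rng.integers(0, 2, size=secret.shape)
    # white pixel: both shares identical; black pixel: complementary bits
    s2 = np.where(secret == 1, 1 - s1, s1)
    return s1, s2

rng = np.random.default_rng(1)
secret = rng.integers(0, 2, size=(64, 64))
s1, s2 = pvss_22(secret, rng)
stacked = s1 | s2
# black secret pixels are always black after stacking; white secret pixels
# come out black only about half the time, which provides the contrast
assert np.all(stacked[secret == 1] == 1)
```

Each share in isolation is a uniformly random bitmap, so a single share reveals nothing about the secret.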
|
0712.4209
|
The Generalized Random Energy Model and its Application to the
Statistical Physics of Ensembles of Hierarchical Codes
|
cs.IT math.IT
|
In an earlier work, the statistical physics associated with
finite--temperature decoding of code ensembles, along with the relation to
their random coding error exponents, were explored in a framework that is
analogous to Derrida's random energy model (REM) of spin glasses, according to
which the energy levels of the various spin configurations are independent
random variables. The generalized REM (GREM) extends the REM in that it
introduces correlations between energy levels in a hierarchical structure. In
this paper, we explore some analogies between the behavior of the GREM and that
of code ensembles which have parallel hierarchical structures. In particular,
just as the GREM may exhibit different types of phase transition effects
depending on the parameters of the model, the above-mentioned hierarchical code
ensembles behave substantially differently
in the various domains of the design parameters of these codes. We make an
attempt to explore the insights that can be imported from the statistical
mechanics of the GREM and be harnessed to serve for code design considerations
and guidelines.
|
0712.4273
|
Online EM Algorithm for Latent Data Models
|
stat.CO cs.LG
|
In this contribution, we propose a generic online (also sometimes called
adaptive or recursive) version of the Expectation-Maximisation (EM) algorithm
applicable to latent variable models of independent observations. Compared to
the algorithm of Titterington (1984), this approach is more directly connected
to the usual EM algorithm and does not rely on integration with respect to the
complete data distribution. The resulting algorithm is usually simpler and is
shown to achieve convergence to the stationary points of the Kullback-Leibler
divergence between the marginal distribution of the observation and the model
distribution at the optimal rate, i.e., that of the maximum likelihood
estimator. In addition, the proposed approach is also suitable for conditional
(or regression) models, as illustrated in the case of the mixture of linear
regressions model.
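A minimal sketch of such an online EM recursion, for a two-component Gaussian mixture with known unit variances (simulated data; the step-size exponent is chosen arbitrarily within the usual (1/2, 1] range, and a short burn-in delays the first M-step):

```python
import numpy as np

rng = np.random.default_rng(3)

# simulated stream from a two-component Gaussian mixture, unit variances
true_means = np.array([-2.0, 2.0])
z = rng.integers(0, 2, size=50_000)
y = rng.normal(true_means[z], 1.0)

mu = np.array([-1.0, 1.0])          # initial component means
w = np.array([0.5, 0.5])            # initial mixing weights
S0, S1 = w.copy(), w * mu           # running expected sufficient statistics

for t, yt in enumerate(y, start=1):
    gamma = t ** -0.6               # step size with exponent in (1/2, 1]
    # E-step: responsibilities under the current parameters
    logp = np.log(w) - 0.5 * (yt - mu) ** 2
    r = np.exp(logp - logp.max())
    r /= r.sum()
    # stochastic-approximation update of the sufficient statistics
    S0 = S0 + gamma * (r - S0)
    S1 = S1 + gamma * (r * yt - S1)
    if t > 20:                      # burn-in before parameter updates
        # M-step: parameters are a function of the statistics only
        w, mu = S0 / S0.sum(), S1 / S0
```

Unlike batch EM, each observation is touched once; the statistics, not the data, carry the memory.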
|
0712.4318
|
Convergence of Expected Utilities with Algorithmic Probability
Distributions
|
cs.AI
|
We consider an agent interacting with an unknown environment. The environment
is a function which maps natural numbers to natural numbers; the agent's set of
hypotheses about the environment contains all such functions which are
computable and compatible with a finite set of known input-output pairs, and
the agent assigns a positive probability to each such hypothesis. We do not
require that this probability distribution be computable, but it must be
bounded below by a positive computable function. The agent has a utility
function on outputs from the environment. We show that if this utility function
is bounded below in absolute value by an unbounded computable function, then
the expected utility of any input is undefined. This implies that a computable
utility function will have convergent expected utilities iff that function is
bounded.
|
0712.4321
|
Subsystem Code Constructions
|
quant-ph cs.IT math.IT
|
Subsystem codes are the most versatile class of quantum error-correcting
codes known to date that combine the best features of all known passive and
active error-control schemes. A subsystem code is a subspace of the quantum
state space that is decomposed into a tensor product of two vector spaces: the
subsystem and the co-subsystem. A generic method to derive subsystem codes from
existing subsystem codes is given that allows one to trade the dimensions of
subsystem and co-subsystem while maintaining or improving the minimum distance.
As a consequence, it is shown that all pure MDS subsystem codes are derived
from MDS stabilizer codes. The existence of numerous families of MDS subsystem
codes is established. Propagation rules are derived that allow one to obtain
longer and shorter subsystem codes from given subsystem codes. Furthermore,
propagation rules are derived that allow one to construct a new subsystem code
by combining two given subsystem codes.
|
0712.4402
|
Judgment
|
math.PR cs.AI math.LO
|
The concept of a judgment as a logical action which introduces new
information into a deductive system is examined. This leads to a way of
mathematically representing implication which is distinct from the familiar
material implication, according to which "If A then B" is considered to be
equivalent to "B or not-A". This leads, in turn, to a resolution of the paradox
of the raven.
|
0801.0061
|
Security for Wiretap Networks via Rank-Metric Codes
|
cs.IT cs.CR math.IT
|
The problem of securing a network coding communication system against a
wiretapper adversary is considered. The network implements linear network
coding to deliver $n$ packets from source to each receiver, and the wiretapper
can eavesdrop on $\mu$ arbitrarily chosen links. A coding scheme is proposed
that can achieve the maximum possible rate of $k=n-\mu$ packets that are
information-theoretically secure from the adversary. A distinctive feature of
our scheme is that it is universal: it can be applied on top of any
communication network without requiring knowledge of, or any modifications to,
the underlying network code. In fact, even a randomized network code can be
used. Our approach is based on Rouayheb-Soljanin's formulation of a wiretap
network as a generalization of the Ozarow-Wyner wiretap channel of type II.
Essentially, the linear MDS code in Ozarow-Wyner's coset coding scheme is
replaced by a maximum-rank-distance code over an extension of the field in
which linear network coding operations are performed.
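The underlying coset-coding idea can be illustrated at toy scale over GF(2) with n = 2 packets and mu = 1 observed link (this is the Ozarow-Wyner mechanism in miniature, not the rank-metric construction of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def coset_encode(m, rng):
    # The secret bit m picks a coset of the repetition code {00, 11};
    # a fresh random bit r picks the transmitted codeword inside it.
    r = int(rng.integers(0, 2))
    return (r, r ^ m)

def coset_decode(packets):
    return packets[0] ^ packets[1]

# the receiver, seeing both packets, always recovers the message
for m in (0, 1):
    assert coset_decode(coset_encode(m, rng)) == m

# a wiretapper observing any single packet sees a uniform random bit,
# whatever the message: no information leaks for mu = 1
first = {m: np.mean([coset_encode(m, rng)[0] for _ in range(20_000)])
         for m in (0, 1)}
```

Here k = n - mu = 1 secure packet is delivered, matching the rate the abstract identifies as maximal.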
|
0801.0102
|
Reserved-Length Prefix Coding
|
cs.IT cs.DS math.IT
|
Huffman coding finds an optimal prefix code for a given probability mass
function. Consider situations in which one wishes to find an optimal code with
the restriction that all codewords have lengths that lie in a user-specified
set of lengths (or, equivalently, no codewords have lengths that lie in a
complementary set). This paper introduces a polynomial-time dynamic programming
algorithm that finds optimal codes for this reserved-length prefix coding
problem. This has applications to fast encoding and decoding of lossless codes.
In addition, one modification of the approach solves any quasiarithmetic prefix
coding problem, while another finds optimal codes restricted to the set of
codes with g codeword lengths for user-specified g (e.g., g=2).
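The optimization problem itself is easy to state: choose codeword lengths from the allowed set, minimizing expected length subject to the Kraft inequality. A brute-force sketch for tiny alphabets (illustration only; the paper's contribution is a polynomial-time dynamic program, not reproduced here):

```python
from itertools import product

def reserved_length_code(probs, allowed):
    # Brute-force search over length assignments drawn from `allowed`,
    # minimizing expected length subject to the Kraft inequality
    # (which guarantees a prefix code with those lengths exists).
    best, best_cost = None, float("inf")
    for lengths in product(sorted(allowed), repeat=len(probs)):
        if sum(2.0 ** -l for l in lengths) <= 1.0 + 1e-12:
            cost = sum(p * l for p, l in zip(probs, lengths))
            if cost < best_cost:
                best, best_cost = lengths, cost
    return best, best_cost

probs = [0.5, 0.25, 0.125, 0.125]
lengths, cost = reserved_length_code(probs, allowed={2, 3})
# with lengths restricted to {2, 3}, the optimum assigns length 2 to every
# symbol (expected length 2.0), whereas unrestricted Huffman coding
# would achieve 1.75 with lengths (1, 2, 3, 3)
assert lengths == (2, 2, 2, 2)
```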
|
0801.0131
|
Two-Level Concept-Oriented Data Model
|
cs.DB
|
In this paper we describe a new approach to data modelling called the
concept-oriented model (CoM). This model is based on the formalism of nested
ordered sets, which uses the inclusion relation to produce a hierarchical
structure of sets and the ordering relation to produce a multi-dimensional
structure among their elements. A nested ordered set is defined as an ordered
set in which each element can itself be an ordered set. The ordering relation
in CoM is used to define data semantics and operations with data such as
projection and de-projection. This data model can be applied to very different
problems, and the paper describes some of its uses, such as grouping with
aggregation and multi-dimensional analysis.
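As a loose illustration of projection and de-projection along an ordering relation (a toy dict-based reading with hypothetical element names, not the paper's formalism):

```python
# each element points to its super-elements along the ordering relation
parents = {"order1": ["cust_A"], "order2": ["cust_A"], "order3": ["cust_B"]}

def project(elems):
    # move up the ordering relation to the related super-elements
    return sorted({p for e in elems for p in parents.get(e, [])})

def deproject(elems):
    # move down: all elements whose super-elements intersect `elems`
    return sorted({c for c, ps in parents.items()
                   if any(p in elems for p in ps)})

assert project(["order1", "order3"]) == ["cust_A", "cust_B"]
assert deproject(["cust_A"]) == ["order1", "order2"]
```

De-projection followed by aggregation corresponds to the grouping-with-aggregation use mentioned in the abstract.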
|