| id | title | categories | abstract |
|---|---|---|---|
cs/0506089
|
Field geology with a wearable computer: 1st results of the Cyborg
Astrobiologist System
|
cs.CV astro-ph cs.AI cs.CE cs.HC cs.RO
|
We present results from the first geological field tests of the `Cyborg
Astrobiologist', which is a wearable computer and video camcorder system that
we are using to test and train a computer-vision system towards having some of
the autonomous decision-making capabilities of a field-geologist. The Cyborg
Astrobiologist platform has thus far been used for testing and development of
these algorithms and systems: robotic acquisition of quasi-mosaics of images,
real-time image segmentation, and real-time determination of interesting points
in the image mosaics. This work is more of a test of the whole system, rather
than of any one part of the system. However, beyond the concept of the system
itself, the uncommon map (despite its simplicity) is the main innovative part
of the system. The uncommon map helps to determine interest-points in a
context-free manner. Overall, the hardware and software systems function
reliably, and the computer-vision algorithms are adequate for the first field
tests. In addition to the proof-of-concept aspect of these field tests, the
main result of these field tests is the enumeration of those issues that we can
improve in the future, including: dealing with structural shadow and
microtexture, and also, controlling the camera's zoom lens in an intelligent
manner. Nonetheless, despite these and other technical inadequacies, this
Cyborg Astrobiologist system, consisting of a camera-equipped wearable-computer
and its computer-vision algorithms, has demonstrated its ability to find
genuinely interesting points in the geological scenery in real time, and then
to gather more information about these interest points in an automated manner.
We use these capabilities for autonomous guidance towards geological
points-of-interest.
|
cs/0506091
|
A New Construction for LDPC Codes using Permutation Polynomials over
Integer Rings
|
cs.IT math.IT
|
A new construction is proposed for low density parity check (LDPC) codes
using quadratic permutation polynomials over finite integer rings. The
associated graphs for the new codes have both algebraic and pseudo-random
nature, and the new codes are quasi-cyclic. Graph isomorphisms and
automorphisms are identified and used in an efficient search for good codes.
Graphs with girth as large as 12 were found. Upper bounds on the minimum
Hamming distance are found both analytically and algorithmically. The bounds
indicate that the minimum distance grows with block length. Near-codewords are
one of the causes for error floors in LDPC codes; the new construction provides
a good framework for studying near-codewords in LDPC codes. Nine example codes
are given, and computer simulation results show the excellent error performance
of these codes. Finally, connections are made between this new LDPC
construction and turbo codes using interleavers generated by quadratic
permutation polynomials.
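The central object of the construction, a quadratic permutation polynomial over an integer ring, is easy to illustrate. A minimal Python sketch (the coefficients below are illustrative choices, not parameters from this paper) evaluates pi(x) = (f1*x + f2*x^2) mod N and checks that it is a bijection on Z_N:

```python
def qpp_interleave(n, f1, f2):
    """Evaluate the quadratic permutation polynomial
    pi(x) = (f1*x + f2*x^2) mod n at every point of Z_n."""
    return [(f1 * x + f2 * x * x) % n for x in range(n)]

# Illustrative coefficients: for suitable (f1, f2) the map permutes Z_n.
pi = qpp_interleave(40, 3, 10)
assert sorted(pi) == list(range(40))  # pi is a permutation of Z_40
```

Not every coefficient pair yields a permutation; the algebraic conditions on (f1, f2) depend on the factorization of N.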
|
cs/0506092
|
Emergent Statistical Wealth Distributions in Simple Monetary Exchange
Models: A Critical Review
|
cs.MA
|
This paper reviews recent attempts at modelling inequality of wealth as an
emergent phenomenon of interacting-agent processes. We point out that recent
models of wealth condensation which draw their inspiration from molecular
dynamics have, in fact, reinvented a process introduced quite some time ago by
Angle (1986) in the sociological literature. We emphasize some problematic
aspects of simple wealth exchange models and contrast them with a monetary
model based on economic principles of market mediated exchange. The paper also
reports new results on the influence of market power on the wealth distribution
in statistical equilibrium. As it turns out, inequality increases but market
power alone is not sufficient for changing the exponential tails of simple
exchange models into Pareto tails.
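The flavor of the simple exchange processes under review can be conveyed by a toy simulation. The sketch below is a generic conservative kinetic-exchange process (not Angle's exact model nor the paper's monetary model): two random agents pool their wealth and re-split it at a uniform random point, so total wealth is conserved while inequality emerges in the stationary state:

```python
import random

def exchange_simulation(n_agents=200, steps=100_000, seed=1):
    """Conservative pairwise exchange: the pooled wealth of two random
    agents is re-split at a uniform random point at each encounter."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        pool = wealth[i] + wealth[j]
        split = rng.random()
        wealth[i], wealth[j] = split * pool, (1 - split) * pool
    return wealth

w = exchange_simulation()
assert abs(sum(w) - 200.0) < 1e-6  # total wealth is conserved
```

In the stationary state the wealth histogram of such a process is approximately exponential (Boltzmann-Gibbs); producing Pareto tails requires additional ingredients, which is precisely the point made in the review.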
|
cs/0506093
|
On Maximum Contention-Free Interleavers and Permutation Polynomials over
Integer Rings
|
cs.IT math.IT
|
An interleaver is a critical component for the channel coding performance of
turbo codes. Algebraic constructions are of particular interest because they
admit analytical designs and simple, practical hardware implementation.
Contention-free interleavers have been recently shown to be suitable for
parallel decoding of turbo codes. In this correspondence, it is shown that
permutation polynomials generate maximum contention-free interleavers, i.e.,
every factor of the interleaver length becomes a possible degree of parallel
processing of the decoder. Further, it is shown by computer simulations that
turbo codes using these interleavers perform very well for the 3rd Generation
Partnership Project (3GPP) standard.
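The maximum contention-free property can be checked numerically: for every window length W dividing the interleaver length N, the M = N/W parallel processors must address M distinct memory banks at each step. A sketch with illustrative QPP coefficients (not taken from this correspondence):

```python
def qpp(n, f1, f2):
    """Interleaver pi(x) = (f1*x + f2*x^2) mod n."""
    return [(f1 * x + f2 * x * x) % n for x in range(n)]

def is_contention_free(pi, w):
    """At step j, processor t reads pi(j + t*w); the memory-bank indices
    floor(pi(.)/w) must be pairwise distinct for parallel access."""
    n = len(pi)
    m = n // w  # degree of parallel processing
    for j in range(w):
        banks = {pi[j + t * w] // w for t in range(m)}
        if len(banks) != m:
            return False
    return True

pi = qpp(40, 3, 10)
divisors = [w for w in range(1, 41) if 40 % w == 0]
assert all(is_contention_free(pi, w) for w in divisors)  # maximum contention-free
```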
|
cs/0506094
|
Universal Codes as a Basis for Nonparametric Testing of Serial
Independence for Time Series
|
cs.IT math.IT
|
We consider a stationary and ergodic source $p$ generating symbols $x_1 ...
x_t$ from some finite set $A$, and a null hypothesis $H_0$ that $p$ is a
Markov source with memory (or connectivity) not larger than $m$ ($m >= 0$). The
alternative hypothesis $H_1$ is that the sequence is generated by a stationary
and ergodic source which differs from the source under $H_0$. In particular,
if $m = 0$, the null hypothesis $H_0$ is that the sequence is generated by a
Bernoulli source (or the hypothesis that $x_1 ... x_t$ are independent). Some
new tests based on universal codes and universal predictors are suggested.
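The idea behind such tests can be sketched concretely: compare the per-symbol code length achieved by a universal compressor with the empirical iid entropy; a universal code beating the iid bound is evidence against $H_0$. The snippet below uses zlib purely as a stand-in for a universal code, and its threshold-free statistic is illustrative, not the paper's calibrated test:

```python
import math
import random
import zlib
from collections import Counter

def independence_statistic(seq):
    """Per-symbol empirical iid entropy minus per-symbol universal code
    length (zlib as a stand-in for a universal code); a large positive
    value is evidence against the independence hypothesis H_0."""
    data = bytes(seq)
    n = len(data)
    counts = Counter(data)
    h0 = -sum(c / n * math.log2(c / n) for c in counts.values())
    universal = 8 * len(zlib.compress(data, 9)) / n
    return h0 - universal

dependent = ([0] * 32 + [1] * 32) * 64        # strongly serially dependent
rng = random.Random(42)
iid = [rng.randrange(2) for _ in range(4096)]  # (pseudo-)independent bits
assert independence_statistic(dependent) > independence_statistic(iid)
```

A calibrated test would compare the two code lengths against a significance threshold derived from the code's redundancy, as the paper does.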
|
cs/0506095
|
Deriving a Stationary Dynamic Bayesian Network from a Logic Program with
Recursive Loops
|
cs.AI cs.LG cs.LO
|
Recursive loops in a logic program present a challenging problem to the
probabilistic logic programming (PLP) framework. On the one hand, they loop
forever so that the PLP backward-chaining
inferences would never stop. On the other hand, they generate cyclic
influences, which are disallowed in Bayesian networks. Therefore, in existing
PLP approaches logic programs with recursive loops are considered to be
problematic and thus are excluded. In this paper, we propose an approach that
makes use of recursive loops to build a stationary dynamic Bayesian network.
Our work stems from an observation that recursive loops in a logic program
imply a time sequence and thus can be used to model a stationary dynamic
Bayesian network without using explicit time parameters. We introduce a
Bayesian knowledge base with logic clauses of the form $A \leftarrow
A_1,...,A_l, true, Context, Types$, which naturally represents the knowledge
that the $A_i$s have direct influences on $A$ in the context $Context$ under
the type constraints $Types$. We then use the well-founded model of a logic
program to define the direct influence relation and apply SLG-resolution to
compute the space of random variables together with their parental connections.
We introduce a novel notion of influence clauses, based on which a declarative
semantics for a Bayesian knowledge base is established and algorithms for
building a two-slice dynamic Bayesian network from a logic program are
developed.
|
cs/0506101
|
Efficient Multiclass Implementations of L1-Regularized Maximum Entropy
|
cs.LG cs.CL
|
This paper discusses the application of L1-regularized maximum entropy
modeling or SL1-Max [9] to multiclass categorization problems. A new
modification to the SL1-Max fast sequential learning algorithm is proposed to
handle conditional distributions. Furthermore, unlike most previous studies,
the present research goes beyond a single type of conditional distribution. It
describes and compares a variety of modeling assumptions about the class
distribution (independent or exclusive) and various types of joint or
conditional distributions. This results in a new methodology for combining
binary regularized classifiers to achieve multiclass categorization. In this
context, Maximum Entropy can be considered a generic and efficient regularized
classification tool that matches or outperforms the state of the art
represented by AdaBoost and SVMs.
|
cs/0506102
|
On $m$-dimensional toric codes
|
cs.IT math.AC math.AG math.IT
|
Toric codes are a class of $m$-dimensional cyclic codes introduced recently
by J. Hansen. They may be defined as evaluation codes obtained from monomials
corresponding to integer lattice points in an integral convex polytope $P
\subseteq \mathbb{R}^m$. As such, they are in a sense a natural extension of
Reed-Solomon codes. Several authors have used intersection theory on toric
surfaces to derive bounds on the minimum distance of some toric codes with $m =
2$. In this paper, we will provide a more elementary approach that applies
equally well to many toric codes for all $m \ge 2$. Our methods are based on a
sort of multivariate generalization of Vandermonde determinants that has also
been used in the study of multivariate polynomial interpolation. We use these
Vandermonde determinants to determine the minimum distance of toric codes from
rectangular polytopes and simplices. We also prove a general result showing
that if there is a unimodular integer affine transformation taking one polytope
$P_1$ to a second polytope $P_2$, then the corresponding toric codes are
monomially equivalent (hence have the same parameters). We use this to begin a
classification of two-dimensional toric codes with small dimension.
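For small parameters these codes can be computed directly. The sketch below (restricted to prime $q$ for simplicity) builds the generator matrix of the toric code from the unit square $P = [0,1] \times [0,1]$ over $\mathbb{F}_5$ by evaluating the monomials $1, x, y, xy$ on $(\mathbb{F}_5^*)^2$, and brute-forces the minimum distance, which comes out to 9 for this rectangle:

```python
from itertools import product

def toric_code(q, exponents):
    """Generator matrix: evaluate each monomial x^e (e an exponent vector,
    i.e. a lattice point of the polytope) at all points of (F_q^*)^m,
    with q prime."""
    m = len(exponents[0])
    points = list(product(range(1, q), repeat=m))
    def ev(e, p):
        val = 1
        for ei, pi in zip(e, p):
            val = val * pow(pi, ei, q) % q
        return val
    return [[ev(e, p) for p in points] for e in exponents]

def min_distance(gen, q):
    """Brute-force minimum Hamming weight over all nonzero codewords."""
    k, n = len(gen), len(gen[0])
    best = n
    for coeffs in product(range(q), repeat=k):
        if any(coeffs):
            wt = sum(1 for j in range(n)
                     if sum(c * gen[i][j] for i, c in enumerate(coeffs)) % q)
            best = min(best, wt)
    return best

# Unit square [0,1] x [0,1] over F_5: a [16, 4] code with d = 9.
gen = toric_code(5, [(0, 0), (1, 0), (0, 1), (1, 1)])
assert len(gen[0]) == 16 and min_distance(gen, 5) == 9
```

The one-dimensional case $P = [0, k-1]$ recovers a Reed-Solomon code of length $q - 1$, consistent with the "natural extension" remark above.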
|
cs/0507001
|
Asymptotically Optimal Tree-based Group Key Management Schemes
|
cs.IT cs.CR math.IT
|
In key management schemes that realize secure multicast communications
encrypted by group keys on a public network, tree structures are often used to
update the group keys efficiently. Selcuk and Sidhu have proposed an efficient
scheme which dynamically updates the tree structures based on the withdrawal
probabilities of members. In this paper, it is shown that the Selcuk-Sidhu
scheme is asymptotically optimal for the cost of withdrawal. Furthermore, a new key
management scheme, which takes account of key update costs of joining in
addition to withdrawal, is proposed. It is proved that the proposed scheme is
also asymptotically optimal, and it is shown by simulation that it can attain
good performance for nonasymptotic cases.
|
cs/0507002
|
The Three Node Wireless Network: Achievable Rates and Cooperation
Strategies
|
cs.IT math.IT
|
We consider a wireless network composed of three nodes and limited by the
half-duplex and total power constraints. This formulation encompasses many of
the special cases studied in the literature and allows for capturing the common
features shared by them. Here, we focus on three special cases, namely 1) Relay
Channel, 2) Multicast Channel, and 3) Conference Channel. These special cases
are judiciously chosen to reflect varying degrees of complexity while
highlighting the common ground shared by the different variants of the three
node wireless network. For the relay channel, we propose a new cooperation
scheme that exploits the wireless feedback gain. This scheme combines the
benefits of decode-and-forward and compress-and-forward strategies and avoids
the idealistic feedback assumption adopted in earlier works. Our analysis of
the achievable rate of this scheme reveals the diminishing feedback gain at
both the low and high signal-to-noise ratio regimes. Inspired by the proposed
feedback strategy, we identify a greedy cooperation framework applicable to
both the multicast and conference channels. Our performance analysis reveals
several nice properties of the proposed greedy approach and the central role of
cooperative source-channel coding in exploiting the receiver side information
in the wireless network setting. Our proofs for the cooperative multicast with
side-information rely on novel nested and independent binning encoders along
with a list decoder.
|
cs/0507004
|
An End-to-End Probabilistic Network Calculus with Moment Generating
Functions
|
cs.IT cs.PF math.IT
|
Network calculus is a min-plus system theory for performance evaluation of
queuing networks. Its elegance stems from intuitive convolution formulas for
concatenation of deterministic servers. Recent research dispenses with the
worst-case assumptions of network calculus to develop a probabilistic
equivalent that benefits from statistical multiplexing. Significant
achievements have been made, owing for example to the theory of effective
bandwidths; however, the outstanding scalability established by concatenation
of deterministic servers has not been shown.
This paper establishes a concise, probabilistic network calculus with moment
generating functions. The presented work features closed-form, end-to-end,
probabilistic performance bounds that achieve the objective of scaling linearly
in the number of servers in series. The consistent application of moment
generating functions put forth in this paper utilizes independence beyond the
scope of current statistical multiplexing of flows. A relevant additional gain
is demonstrated for tandem servers with independent cross-traffic.
|
cs/0507005
|
A Genetic Algorithm Based Finger Selection Scheme for UWB MMSE Rake
Receivers
|
cs.IT math.IT
|
Due to a large number of multipath components in a typical ultra wideband
(UWB) system, selective Rake (SRake) receivers, which combine energy from a
subset of multipath components, are commonly employed. In order to optimize
system performance, an optimal selection of multipath components to be employed
at fingers of an SRake receiver needs to be considered. In this paper, this
finger selection problem is investigated for a minimum mean square error (MMSE)
UWB SRake receiver. Since finding the optimal selection is NP-hard, a genetic algorithm
(GA) based iterative scheme is proposed, which can achieve near-optimal
performance after a reasonable number of iterations. Simulation results are
presented to compare the performance of the proposed finger selection algorithm
with those of the conventional and optimal schemes.
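The flavor of a GA-based finger search can be conveyed by a toy sketch. Everything below is illustrative: the fitness function is plain captured energy rather than the paper's MMSE output criterion, and the operators are generic subset crossover and mutation:

```python
import random

def ga_finger_selection(gains, k, pop_size=20, generations=40, seed=0):
    """Toy GA: pick k of len(gains) multipath fingers. Fitness here is
    captured energy, a stand-in for the paper's MMSE objective."""
    rng = random.Random(seed)
    n = len(gains)

    def fitness(sel):
        return sum(gains[i] ** 2 for i in sel)

    population = [tuple(sorted(rng.sample(range(n), k)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]       # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            child = rng.sample(list(set(a) | set(b)), k)  # subset crossover
            if rng.random() < 0.3:                        # mutation
                slot = rng.randrange(k)
                child[slot] = rng.choice(
                    [i for i in range(n) if i not in child])
            children.append(tuple(sorted(child)))
        population = survivors + children
    return max(population, key=fitness)

gains = [0.1, 0.9, 0.2, 0.8, 0.05, 0.7, 0.3, 0.6]
best = ga_finger_selection(gains, 3)
assert len(set(best)) == 3 and all(0 <= i < len(gains) for i in best)
```

Elitism guarantees the best selection found never degrades across generations, which is what lets a modest number of iterations approach the optimum.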
|
cs/0507006
|
A Two-Step Time of Arrival Estimation Algorithm for Impulse Radio Ultra
Wideband Systems
|
cs.IT math.IT
|
High time resolution of ultra wideband (UWB) signals facilitates very precise
positioning capabilities based on time-of-arrival (TOA) measurements. Although
the theoretical lower bound for TOA estimation can be achieved by the maximum
likelihood principle, it is impractical due to the need for extremely high-rate
sampling and the presence of a large number of multipath components. On the other
hand, the conventional correlation-based algorithm, which serially searches
possible signal delays, takes a very long time to estimate the TOA of a
received UWB signal. Moreover, the first signal path does not always have the
strongest correlation output. Therefore, first path detection algorithms need
to be considered. In this paper, a data-aided two-step TOA estimation algorithm
is proposed. In order to speed up the estimation process, the first step
estimates the rough TOA of the received signal based on received signal energy.
Then, in the second step, the arrival time of the first signal path is
estimated by considering a hypothesis testing approach. The proposed scheme
uses low-rate correlation outputs, and is able to perform accurate TOA
estimation in reasonable time intervals. Simulation results are presented
to analyze the performance of the estimator.
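A heavily simplified sketch of the two-step idea (block-energy detection for the coarse estimate, then a per-sample threshold standing in for the paper's hypothesis-testing step; all thresholds and signal values are illustrative):

```python
def two_step_toa(signal, block_len, fine_threshold):
    """Step 1: coarse TOA = first block whose energy exceeds twice the
    mean block energy. Step 2: refine to the first sample in that block
    whose squared amplitude exceeds a threshold (a stand-in for the
    paper's hypothesis test). Assumes a signal is actually present."""
    energies = [sum(s * s for s in signal[i:i + block_len])
                for i in range(0, len(signal), block_len)]
    mean_energy = sum(energies) / len(energies)
    coarse_block = next(b for b, e in enumerate(energies)
                        if e > 2 * mean_energy)
    start = coarse_block * block_len
    return next(i for i in range(start, start + block_len)
                if signal[i] * signal[i] > fine_threshold)

# Noise-free toy trace: the first path arrives at sample 37.
trace = [0.0] * 37 + [0.9, 1.0, 0.5] + [0.1] * 24
assert two_step_toa(trace, 8, 0.25) == 37
```

The coarse step only inspects one energy value per block, which is what allows low-rate processing before the fine search.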
|
cs/0507010
|
A Study for the Feature Core of Dynamic Reduct
|
cs.AI
|
For the reduct problem of decision systems, this paper proposes the notion of
a dynamic core based on the dynamic reduct model. It gives several formal
definitions of the dynamic core and discusses its properties. All of these
show that the dynamic core possesses the essential characteristics of the
feature core.
|
cs/0507011
|
A Utility-Based Approach to Power Control and Receiver Design in
Wireless Data Networks
|
cs.IT math.IT
|
In this work, the cross-layer design problem of joint multiuser detection and
power control is studied using a game-theoretic approach. The uplink of a
direct-sequence code division multiple access (DS-CDMA) data network is
considered and a non-cooperative game is proposed in which users in the network
are allowed to choose their uplink receivers as well as their transmit powers
to maximize their own utilities. The utility function measures the number of
reliable bits transmitted by the user per joule of energy consumed. Focusing on
linear receivers, the Nash equilibrium for the proposed game is derived. It is
shown that the equilibrium is one where the powers are SIR-balanced with the
minimum mean square error (MMSE) detector as the receiver. In addition, this
framework is used to study power control games for the matched filter, the
decorrelator, and the MMSE detector; and the receivers' performance is compared
in terms of the utilities achieved at equilibrium (in bits/Joule). The optimal
cooperative solution is also discussed and compared with the non-cooperative
approach. Extensions of the results to the case of multiple receive antennas
are also presented. In addition, an admission control scheme based on
maximizing the total utility in the network is proposed.
|
cs/0507015
|
Duality between Packings and Coverings of the Hamming Space
|
cs.IT cs.DM math.IT
|
We investigate the packing and covering densities of linear and nonlinear
binary codes, and establish a number of duality relationships between the
packing and covering problems. Specifically, we prove that if almost all codes
(in the class of linear or nonlinear codes) are good packings, then only a
vanishing fraction of codes are good coverings, and vice versa: if almost all
codes are good coverings, then at most a vanishing fraction of codes are good
packings. We also show that any specific maximal binary code is either a good
packing or a good covering, in a certain well-defined sense.
|
cs/0507018
|
Optimal and Suboptimal Detection of Gaussian Signals in Noise:
Asymptotic Relative Efficiency
|
cs.IT math.IT
|
The performance of Bayesian detection of Gaussian signals using noisy
observations is investigated via the error exponent for the average error
probability. Under unknown signal correlation structure or limited processing
capability it is reasonable to use the simple quadratic detector that is
optimal in the case of an independent and identically distributed (i.i.d.)
signal. Using the large deviations principle, the performance of this detector
(which is suboptimal for non-i.i.d. signals) is compared with that of the
optimal detector for correlated signals via the asymptotic relative efficiency
(ARE), defined as the ratio between the sample sizes of the two detectors
required for the same performance in the large-sample-size regime. The effects
of the signal-to-noise ratio (SNR) on the ARE are investigated. It is shown
that the asymptotic efficiency of the simple
quadratic detector relative to the optimal detector converges to one as the SNR
increases without bound for any bounded spectrum, and that the simple quadratic
detector performs as well as the optimal detector for a wide range of the
correlation values at high SNR.
|
cs/0507022
|
On Hilberg's Law and Its Links with Guiraud's Law
|
cs.CL cs.IT math.IT
|
Hilberg (1990) supposed that finite-order excess entropy of a random human
text is proportional to the square root of the text length. Assuming that
Hilberg's hypothesis is true, we derive Guiraud's law, which states that the
number of word types in a text is greater than proportional to the square root
of the text length. Our derivation is based on some mathematical conjecture in
coding theory and on several experiments suggesting that words can be defined
approximately as the nonterminals of the shortest context-free grammar for the
text. Such an operational definition of words can be applied even to texts
deprived of spaces, which do not allow for Mandelbrot's ``intermittent
silence'' explanation of Zipf's and Guiraud's laws. In contrast to
Mandelbrot's, our model assumes some probabilistic long-memory effects in human
narration and might be capable of explaining Menzerath's law.
|
cs/0507023
|
Two-dimensional cellular automata and the analysis of correlated time
series
|
cs.AI
|
Correlated time series are time series that, by virtue of the underlying
process to which they refer, are expected to influence each other strongly. We
introduce a novel approach to handle such time series, one that models their
interaction as a two-dimensional cellular automaton and therefore allows them
to be treated as a single entity. We apply our approach to the problems of
filling gaps and predicting values in rainfall time series. Computational
results show that the new approach compares favorably to Kalman smoothing and
filtering.
|
cs/0507024
|
Experiments in Clustering Homogeneous XML Documents to Validate an
Existing Typology
|
cs.IR
|
This paper presents some experiments in clustering homogeneous XML documents
to validate an existing classification or, more generally, an organisational
structure. Our approach integrates techniques for extracting knowledge from
documents with unsupervised classification (clustering) of documents. We focus
on the feature selection used for representing documents and its impact on the
emerging classification. We mix the selection of structured features with fine
textual selection based on syntactic characteristics. We illustrate and evaluate
this approach with a collection of Inria activity reports for the year 2003.
The objective is to cluster projects into larger groups (Themes), based on the
keywords or different chapters of these activity reports. We then compare the
results of clustering using different feature selections, with the official
theme structure used by Inria.
|
cs/0507025
|
Comparison of Resampling Schemes for Particle Filtering
|
cs.CE
|
This contribution is devoted to the comparison of various resampling
approaches that have been proposed in the literature on particle filtering. It
is first shown using simple arguments that the so-called residual and
stratified methods do yield an improvement over the basic multinomial
resampling approach. A simple counter-example showing that this property does
not hold true for systematic resampling is given. Finally, some results on the
large-sample behavior of the simple bootstrap filter algorithm are given. In
particular, a central limit theorem is established for the case where
resampling is performed using the residual approach.
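The three classical schemes compared here differ only in how they draw positions in [0, 1) before inverting the weight CDF. A compact sketch:

```python
import bisect
import random

def _ancestors(weights, positions):
    """Invert the normalized weight CDF at each position in [0, 1)."""
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w
        cdf.append(acc / total)
    return [bisect.bisect_right(cdf, u) for u in positions]

def multinomial_resample(weights, rng):
    n = len(weights)
    return _ancestors(weights, [rng.random() for _ in range(n)])

def stratified_resample(weights, rng):
    n = len(weights)
    return _ancestors(weights, [(k + rng.random()) / n for k in range(n)])

def systematic_resample(weights, rng):
    n, u = len(weights), rng.random()  # one shared random offset
    return _ancestors(weights, [(k + u) / n for k in range(n)])

rng = random.Random(0)
w = [0.1, 0.2, 0.3, 0.4]
for scheme in (multinomial_resample, stratified_resample, systematic_resample):
    idx = scheme(w, rng)
    assert len(idx) == 4 and all(0 <= i < 4 for i in idx)
# With equal weights, systematic resampling keeps every particle exactly once.
assert systematic_resample([1, 1, 1, 1], random.Random(1)) == [0, 1, 2, 3]
```

Stratified resampling draws one independent uniform per stratum, while systematic resampling reuses a single offset for all strata; the paper's counter-example shows that this extra dependence is what breaks the variance-improvement guarantee for systematic resampling.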
|
cs/0507026
|
Hard Problems of Algebraic Geometry Codes
|
cs.IT math.IT
|
The minimum distance is one of the most important combinatorial
characterizations of a code. The maximum likelihood decoding problem is one of
the most important algorithmic problems of a code. While these problems are
known to be hard for general linear codes, the techniques used to prove their
hardness often rely on the construction of artificial codes. In general, much
less is known about the hardness of the specific classes of natural linear
codes. In this paper, we show that both problems are
NP-hard for algebraic geometry codes. We achieve this by reducing a
well-known NP-complete problem to these problems using a randomized algorithm.
The family of codes in the reductions is based on elliptic curves. They have
positive rates, but the alphabet sizes are exponential in the block lengths.
|
cs/0507027
|
Anyone but Him: The Complexity of Precluding an Alternative
|
cs.GT cs.CC cs.MA
|
Preference aggregation in a multiagent setting is a central issue in both
human and computer contexts. In this paper, we study in terms of complexity the
vulnerability of preference aggregation to destructive control. That is, we
study the ability of an election's chair to, through such mechanisms as
voter/candidate addition/suppression/partition, ensure that a particular
candidate (equivalently, alternative) does not win. And we study the extent to
which election systems can make it impossible, or computationally costly
(NP-complete), for the chair to execute such control. Among the systems we
study--plurality, Condorcet, and approval voting--we find cases where systems
immune or computationally resistant to a chair choosing the winner nonetheless
are vulnerable to the chair blocking a victory. Beyond that, we see that among
our studied systems no one system offers the best protection against
destructive control. Rather, the choice of a preference aggregation system will
depend closely on which types of control one wishes to be protected against. We
also find concrete cases where the complexity of or susceptibility to control
varies dramatically based on the choice among natural tie-handling rules.
|
cs/0507029
|
ATNoSFERES revisited
|
cs.AI
|
ATNoSFERES is a Pittsburgh style Learning Classifier System (LCS) in which
the rules are represented as edges of an Augmented Transition Network.
Genotypes are strings of tokens of a stack-based language, whose execution
builds the labeled graph. The original ATNoSFERES, using a bitstring to
represent the language tokens, has been favorably compared in previous work to
several Michigan-style LCS architectures in the context of non-Markov
problems. Several modifications of ATNoSFERES are proposed here: the most
important one conceptually being a representational change: each token is now
represented by an integer, hence the genotype is a string of integers; several
other modifications of the underlying grammar language are also proposed. The
resulting ATNoSFERES-II is validated on several standard animat non-Markov
problems, on which it outperforms all previously published results in the LCS
literature. The reasons for these improvements are carefully analyzed, and some
assumptions are proposed on the underlying mechanisms in order to explain these
good results.
|
cs/0507031
|
The error-floor of LDPC codes in the Laplacian channel
|
cs.IT cond-mat.dis-nn math.IT
|
We analyze the performance of Low-Density-Parity-Check codes in the
error-floor domain where the Signal-to-Noise-Ratio, s, is large, s >> 1. We
describe how the instanton method of theoretical physics, recently adapted to
coding theory, solves the problem of characterizing the error-floor domain in
the Laplacian channel. An example of the (155,64,20) LDPC code with four
iterations (each iteration consisting of two semi-steps: from bits-to-checks
and from checks-to-bits) of the min-sum decoding is discussed. A generalized
computational tree analysis is devised to explain the rational structure of the
leading instantons. The asymptotic for the symbol Bit-Error-Rate in the
error-floor domain consists of individual instanton contributions, each
estimated as ~ \exp(-l_{inst;L} s), where the effective distances, l_{inst;L},
of the leading instantons are 7.6, 8.0 and 8.0, respectively. (The Hamming
distance of the code is 20.) The analysis shows that the instantons are
distinctly different from the ones found for the same coding/decoding scheme
performing over the Gaussian channel. We validate instanton results against
direct simulations and offer an explanation for the remarkable performance of the
instanton approximation not only in the extremal, s -> \infty, limit but also
at the moderate s values of practical interest.
|
cs/0507032
|
Introduction to Quantum Message Space
|
cs.IT math.IT math.OA quant-ph
|
This paper develops the quantum analog of the message ensemble of classical
information theory as developed by Shannon and Khinchin. The principal
mathematical tool is harmonic analysis on the free group with two generators.
|
cs/0507033
|
Multiresolution Kernels
|
cs.LG
|
In this work we present a new methodology for designing kernels on data that
is built from smaller components, such as text, images or sequences. This
methodology is a template procedure which can be applied on most kernels on
measures and takes advantage of a more detailed "bag of components"
representation of the objects. To obtain such a detailed description, we
consider possible decompositions of the original bag into a collection of
nested bags, following a prior knowledge on the objects' structure. We then
consider these smaller bags to compare two objects both in a detailed
perspective, stressing local matches between the smaller bags, and in a global
or coarse perspective, by considering the entire bag. This multiresolution
approach is likely to be best suited for tasks where the coarse approach is not
precise enough, and where a more subtle mixture of both local and global
similarities is necessary to compare objects. The approach presented here would
not be computationally tractable without a factorization trick that we
introduce before presenting promising results on an image retrieval task.
|
cs/0507035
|
Enhancing Global SLS-Resolution with Loop Cutting and Tabling Mechanisms
|
cs.LO cs.AI
|
Global SLS-resolution is a well-known procedural semantics for top-down
computation of queries under the well-founded model. It inherits from
SLDNF-resolution the {\em linearity} property of derivations, which makes it
easy and efficient to implement using a simple stack-based memory structure.
However, like SLDNF-resolution it suffers from the problem of infinite loops
and redundant computations. To resolve this problem, in this paper we develop a
new procedural semantics, called {\em SLTNF-resolution}, by enhancing Global
SLS-resolution with loop cutting and tabling mechanisms. SLTNF-resolution is
sound and complete w.r.t. the well-founded semantics for logic programs with
the bounded-term-size property, and is superior to existing linear tabling
procedural semantics such as SLT-resolution.
|
cs/0507039
|
Distributed Regression in Sensor Networks: Training Distributively with
Alternating Projections
|
cs.LG cs.AI cs.CV cs.DC cs.IT math.IT
|
Wireless sensor networks (WSNs) have attracted considerable attention in
recent years and motivate a host of new challenges for distributed signal
processing. The problem of distributed or decentralized estimation has often
been considered in the context of parametric models. However, the success of
parametric methods is limited by the appropriateness of the strong statistical
assumptions made by the models. In this paper, a more flexible nonparametric
model for distributed regression is considered that is applicable in a variety
of WSN applications including field estimation. Here, starting with the
standard regularized kernel least-squares estimator, a message-passing
algorithm for distributed estimation in WSNs is derived. The algorithm can be
viewed as an instantiation of the successive orthogonal projection (SOP)
algorithm. Various practical aspects of the algorithm are discussed and several
numerical simulations validate the potential of the approach.
|
cs/0507040
|
Pattern Recognition for Conditionally Independent Data
|
cs.LG cs.AI cs.CV
|
In this work we consider the task of relaxing the i.i.d assumption in pattern
recognition (or classification), aiming to make existing learning algorithms
applicable to a wider range of tasks. Pattern recognition is guessing a
discrete label of some object based on a set of given examples (pairs of
objects and labels). We consider the case of deterministically defined labels.
Traditionally, this task is studied under the assumption that examples are
independent and identically distributed. However, it turns out that many
results of pattern recognition theory carry over to a weaker assumption:
namely, that the objects are conditionally independent and identically
distributed, while the only assumption on the distribution of labels is that
the rate of occurrence of each label is above some positive threshold.
We find a broad class of learning algorithms for which estimations of the
probability of a classification error achieved under the classical i.i.d.
assumption can be generalised to similar estimates for the case of
conditionally i.i.d. examples.
|
cs/0507041
|
Monotone Conditional Complexity Bounds on Future Prediction Errors
|
cs.LG cs.AI cs.IT math.IT
|
We bound the future loss when predicting any (computably) stochastic sequence
online. Solomonoff finitely bounded the total deviation of his universal
predictor M from the true distribution m by the algorithmic complexity of m.
Here we assume we are at a time t>1 and already observed x=x_1...x_t. We bound
the future prediction performance on x_{t+1}x_{t+2}... by a new variant of
algorithmic complexity of m given x, plus the complexity of the randomness
deficiency of x. The new complexity is monotone in its condition in the sense
that this complexity can only decrease if the condition is prolonged. We also
briefly discuss potential generalizations to Bayesian model classes and to
classification problems.
|
cs/0507042
|
The MammoGrid Virtual Organisation - Federating Distributed Mammograms
|
cs.DC cs.DB
|
The MammoGrid project aims to deliver a prototype which enables the effective
collaboration between radiologists using grid, service-orientation and database
solutions. The grid technologies and service-based database management solution
provide the platform for integrating diverse and distributed resources,
creating what is called a virtual organisation. The MammoGrid Virtual
Organisation facilitates the sharing and coordinated access to mammography
data, medical imaging software and computing resources of participating
hospitals. Hospitals manage their local database of mammograms, but in
addition, radiologists who are part of this organisation can share mammograms,
reports, results and image analysis software. The MammoGrid Virtual
Organisation is a federation of autonomous multi-centre sites which transcends
national boundaries. This paper outlines the service-based approach in the
creation and management of the federated distributed mammography database and
discusses the role of virtual organisations in distributed image analysis.
|
cs/0507044
|
Defensive Universal Learning with Experts
|
cs.LG
|
This paper shows how universal learning can be achieved with expert advice.
To this aim, we specify an experts algorithm with the following
characteristics: (a) it uses only feedback from the actions actually chosen
(bandit setup), (b) it can be applied with countably infinite expert classes,
and (c) it copes with losses that may grow in time appropriately slowly. We
prove loss bounds against an adaptive adversary. From this, we obtain a master
algorithm for "reactive" experts problems, which means that the master's
actions may influence the behavior of the adversary. Our algorithm can
significantly outperform standard experts algorithms on such problems. Finally,
we combine it with a universal expert class. The resulting universal learner
performs -- in a certain sense -- almost as well as any computable strategy,
for any online decision problem. We also specify the (worst-case) convergence
speed, which is very slow.
|
cs/0507045
|
In the beginning was game semantics
|
cs.LO cs.AI math.LO
|
This article presents an overview of computability logic -- the
game-semantically constructed logic of interactive computational tasks and
resources. There is only one non-overview, technical section in it, devoted to
a proof of the soundness of affine logic with respect to the semantics of
computability logic. A comprehensive online source on the subject can be found
at http://www.cis.upenn.edu/~giorgi/cl.html
|
cs/0507048
|
Redundancy in Logic III: Non-Monotonic Reasoning
|
cs.LO cs.AI cs.CC
|
Results about the redundancy of circumscriptive and default theories are
presented. In particular, the complexity of checking whether a given theory
is redundant is established.
|
cs/0507053
|
Nonrepetitive Paths and Cycles in Graphs with Application to Sudoku
|
cs.DS cs.AI
|
We provide a simple linear time transformation from a directed or undirected
graph with labeled edges to an unlabeled digraph, such that paths in the input
graph in which no two consecutive edges have the same label correspond to paths
in the transformed graph and vice versa. Using this transformation, we provide
efficient algorithms for finding paths and cycles with no two consecutive equal
labels. We also consider related problems where the paths and cycles are
required to be simple; we find efficient algorithms for the undirected case of
these problems but show the directed case to be NP-complete. We apply our path
and cycle finding algorithms in a program for generating and solving Sudoku
puzzles, and show experimentally that they lead to effective puzzle-solving
rules that may also be of interest to human Sudoku puzzle solvers.
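The correspondence between labeled paths and the transformed graph can be
illustrated with a deliberately naive version of the idea: treat each edge as
a node and allow a step between two edges only when their labels differ. This
sketch is quadratic in the worst case, whereas the paper's gadget construction
is linear time:

```python
from collections import deque

def nonrepetitive_path(edges, source, target):
    """Find a walk from source to target in which no two consecutive edges
    share a label.  Naive line-graph-style transformation: BFS over
    edge-nodes, stepping from edge i to edge j when head(i) = tail(j) and
    the labels differ.  edges: list of (u, v, label) for a directed graph."""
    out = {}
    for i, (u, v, lab) in enumerate(edges):
        out.setdefault(u, []).append(i)
    parent = {}
    q = deque()
    for i, (u, v, lab) in enumerate(edges):
        if u == source:
            parent[i] = None
            q.append(i)
    while q:
        i = q.popleft()
        u, v, lab = edges[i]
        if v == target:
            path = []
            while i is not None:
                path.append(edges[i])
                i = parent[i]
            return path[::-1]
        for j in out.get(v, []):
            if j not in parent and edges[j][2] != lab:
                parent[j] = i
                q.append(j)
    return None
```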
|
cs/0507055
|
ReacProc: A Tool to Process Reactions Describing Particle Interactions
|
cs.CE
|
ReacProc is a program written in the C/C++ programming language which can be
used (1) to check reactions describing particle interactions against
conservation laws and (2) to reduce an input reaction to a canonical form. A
table of particle properties is included in the ReacProc package.
|
cs/0507056
|
Explorations in engagement for humans and robots
|
cs.AI cs.CL cs.RO
|
This paper explores the concept of engagement, the process by which
individuals in an interaction start, maintain and end their perceived
connection to one another. The paper reports on one aspect of engagement among
human interactors--the effect of tracking faces during an interaction. It also
describes the architecture of a robot that can participate in conversational,
collaborative interactions with engagement gestures. Finally, the paper reports
on findings of experiments with human participants who interacted with a robot
when it either performed or did not perform engagement gestures. Results of the
human-robot studies indicate that people become engaged with robots: they
direct their attention to the robot more often in interactions where engagement
gestures are present, and they find interactions more appropriate when
engagement gestures are present than when they are not.
|
cs/0507058
|
Paving the Way for Image Understanding: A New Kind of Image
Decomposition is Desired
|
cs.CV
|
In this paper we present an unconventional image segmentation approach which
is devised to meet the requirements of image understanding and pattern
recognition tasks. Generally image understanding assumes interplay of two
sub-processes: image information content discovery and image information
content interpretation. Despite its widespread use, the notion of "image
information content" is still ill defined, intuitive, and ambiguous. Most
often, it is used in Shannon's sense, meaning information content assessed
as an average over the whole signal ensemble. Humans, however, rarely
resort to such estimates. They are very effective in decomposing images into
their meaningful constituents and focusing attention on the perceptually
relevant image parts. We posit that, following the latest findings in human
attention vision studies and the concepts of Kolmogorov's complexity theory,
an unorthodox segmentation approach can be proposed that provides effective
image decomposition into information-preserving image fragments well suited
for subsequent image interpretation. We provide some illustrative examples,
demonstrating the effectiveness of this approach.
|
cs/0507059
|
Data complexity of answering conjunctive queries over SHIQ knowledge
bases
|
cs.LO cs.AI cs.CC
|
An algorithm for answering conjunctive queries over SHIQ knowledge bases that
is coNP in data complexity is given. The algorithm is based on the tableau
algorithm for reasoning with individuals in SHIQ. The blocking conditions of
the tableau are weakened in such a way that the set of models the modified
algorithm yields suffices to check query entailment. The modified blocking
conditions are based on the ones proposed by Levy and Rousset for reasoning
with Horn Rules in the description logic ALCNR.
|
cs/0507060
|
The Entropy of a Binary Hidden Markov Process
|
cs.IT cond-mat.stat-mech math.IT math.ST stat.TH
|
The entropy of a binary symmetric Hidden Markov Process is calculated as an
expansion in the noise parameter epsilon. We map the problem onto a
one-dimensional Ising model in a large field of random signs and calculate the
expansion coefficients up to second order in epsilon. Using a conjecture we
extend the calculation to 11th order and discuss the convergence of the
resulting series.
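As a numerical companion to the expansion, the entropy rate of such a binary
symmetric HMM can be estimated by Monte Carlo with the standard forward
recursion. This is only an illustrative estimator, not the paper's analytical
method; the parameter names are ours:

```python
import numpy as np

def hmm_entropy_rate(p, eps, n=50000, seed=0):
    """Monte Carlo estimate of the entropy rate (bits per symbol) of a
    binary symmetric HMM: the hidden Markov chain flips with probability
    p, and each observation is the hidden bit flipped with probability
    eps.  Accumulates -log2 P(x_1..x_n) / n via the forward recursion."""
    rng = np.random.default_rng(seed)
    T = np.array([[1 - p, p], [p, 1 - p]])   # hidden-state transitions
    belief = np.array([0.5, 0.5])            # stationary prior over states
    s, logp = 0, 0.0
    for _ in range(n):
        if rng.random() < p:
            s = 1 - s                              # hidden chain step
        x = s if rng.random() >= eps else 1 - s    # noisy observation
        pred = belief @ T                          # predict next hidden state
        like = np.array([eps, 1 - eps]) if x == 1 else np.array([1 - eps, eps])
        joint = pred * like
        px = joint.sum()                           # P(x_t | x_1..x_{t-1})
        logp += np.log2(px)
        belief = joint / px                        # Bayes update
    return -logp / n
```

Sanity checks: for p = 0.5 the output is i.i.d. uniform (entropy rate exactly
1 bit), and for eps = 0 the estimate approaches the Markov chain entropy h(p).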
|
cs/0507062
|
FPL Analysis for Adaptive Bandits
|
cs.LG
|
A main problem of "Follow the Perturbed Leader" strategies for online
decision problems is that regret bounds are typically proven against oblivious
adversary. In partial observation cases, it was not clear how to obtain
performance guarantees against adaptive adversary, without worsening the
bounds. We propose a conceptually simple argument to resolve this problem.
Using this, a regret bound of O(t^(2/3)) for FPL in the adversarial multi-armed
bandit problem is shown. This bound holds for the common FPL variant using only
the observations from designated exploration rounds. Using all observations
allows for the stronger bound of O(t^(1/2)), matching the best bound known so
far (and essentially the known lower bound) for adversarial bandits.
Surprisingly, this variant does not even need explicit exploration, it is
self-stabilizing. However the sampling probabilities have to be either
externally provided or approximated to sufficient accuracy, using O(t^2 log t)
samples in each step.
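A minimal sketch of the exploration-round FPL variant described above. The
particular exploration rate and perturbation scale below are illustrative
assumptions, not the paper's tuned constants:

```python
import random

def fpl_bandit(loss_fn, K, T, seed=0):
    """Follow the Perturbed Leader for the K-armed bandit with designated
    exploration rounds (a sketch of the O(t^(2/3)) variant; the paper also
    analyses the stronger variant that uses all observations).
    loss_fn(t, arm) -> loss in [0, 1]."""
    rng = random.Random(seed)
    est = [0.0] * K                        # importance-weighted loss estimates
    total = 0.0
    for t in range(1, T + 1):
        gamma = min(1.0, K * t ** (-1 / 3))   # exploration probability
        eta = t ** (-2 / 3)                    # learning rate (noise ~ 1/eta)
        if rng.random() < gamma:
            arm = rng.randrange(K)             # exploration round
            loss = loss_fn(t, arm)
            est[arm] += loss * K / gamma       # unbiased estimate update
        else:
            # perturbed leader: minimize estimate minus exponential noise
            arm = min(range(K),
                      key=lambda i: est[i] - rng.expovariate(1.0) / eta)
            loss = loss_fn(t, arm)
        total += loss
    return total, est
```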
|
cs/0507065
|
A Fast Greedy Algorithm for Outlier Mining
|
cs.DB cs.AI
|
The task of outlier detection is to find small groups of data objects that
are exceptional when compared with the rest of the data. In [38], the
problem of outlier detection in categorical data is defined as an optimization
problem and a local-search heuristic based algorithm (LSA) is presented.
However, as is the case with most iterative algorithms, the LSA algorithm
is still very time-consuming on very large datasets. In this paper, we present
a very fast greedy algorithm for mining outliers under the same optimization
model. Experimental results on real datasets and large synthetic datasets show
that: (1) our algorithm has performance comparable to state-of-the-art
outlier detection algorithms at identifying true outliers, and (2) our
algorithm can be an order of magnitude faster than the LSA algorithm.
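The entropy-based optimization model behind this line of work can be
illustrated with a deliberately naive greedy sketch: repeatedly remove the
record whose removal most decreases the entropy of the remaining data. This
brute-force version recomputes entropy from scratch (the actual algorithms are
engineered to be far faster):

```python
from collections import Counter
from math import log2

def dataset_entropy(records):
    """Sum of per-attribute empirical entropies of a categorical dataset."""
    n = len(records)
    H = 0.0
    for j in range(len(records[0])):
        for c in Counter(r[j] for r in records).values():
            H -= (c / n) * log2(c / n)
    return H

def greedy_outliers(records, k):
    """Greedily remove the k records whose removal most decreases the
    entropy of the remaining data (a sketch of the entropy-based
    optimization model, not the paper's optimized algorithm)."""
    records = list(records)
    outliers = []
    for _ in range(k):
        best = min(range(len(records)),
                   key=lambda i: dataset_entropy(records[:i] + records[i + 1:]))
        outliers.append(records.pop(best))
    return outliers
```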
|
cs/0507067
|
Conjunctive Query Containment and Answering under Description Logics
Constraints
|
cs.DB cs.AI
|
Query containment and query answering are two important computational tasks
in databases. While query answering amounts to compute the result of a query
over a database, query containment is the problem of checking whether for every
database, the result of one query is a subset of the result of another query.
In this paper, we deal with unions of conjunctive queries, and we address
query containment and query answering under Description Logic constraints.
Each such constraint is essentially an inclusion dependency between concepts
and relations; the expressive power of these constraints is due to the
possibility of using complex expressions, e.g., intersection and difference
of relations, special forms of quantification, and regular expressions over
binary relations, in the specification of the dependencies. These types of
constraints capture a great
variety of data models, including the relational, the entity-relationship, and
the object-oriented model, all extended with various forms of constraints, and
also the basic features of the ontology languages used in the context of the
Semantic Web.
We present the following results on both query containment and query
answering. We provide a method for query containment under Description Logic
constraints, thus showing that the problem is decidable, and analyze its
computational complexity. We prove that query containment is undecidable in the
case where we allow inequalities in the right-hand side query, even for very
simple constraints and queries. We show that query answering under Description
Logic constraints can be reduced to query containment, and illustrate how such
a reduction provides upper bound results with respect to both combined and data
complexity.
|
cs/0507068
|
On parity check collections for iterative erasure decoding that correct
all correctable erasure patterns of a given size
|
cs.IT cs.DM math.IT
|
Recently there has been interest in the construction of small parity check
sets for iterative decoding of the Hamming code with the property that each
uncorrectable (or stopping) set of size three is the support of a codeword and
hence uncorrectable anyway. Here we reformulate and generalise the problem, and
improve on this construction. First we show that a parity check collection that
corrects all correctable erasure patterns of size m for the r-th order Hamming
code (i.e., the Hamming code with codimension r) provides for all codes of
codimension r a corresponding ``generic'' parity check collection with this
property. This leads naturally to a necessary and sufficient condition on such
generic parity check collections. We use this condition to construct a generic
parity check collection for codes of codimension r correcting all correctable
erasure patterns of size at most m, for all r and m <= r, thus generalising the
known construction for m=3. Then we discuss optimality of our construction and
show that it can be improved for m>=3 and r large enough. Finally we discuss
some directions for further research.
|
cs/0507069
|
Users and Assessors in the Context of INEX: Are Relevance Dimensions
Relevant?
|
cs.IR
|
The main aspects of XML retrieval are identified by analysing and comparing
the following two behaviours: the behaviour of the assessor when judging the
relevance of returned document components; and the behaviour of users when
interacting with components of XML documents. We argue that the two INEX
relevance dimensions, Exhaustivity and Specificity, are not orthogonal
dimensions; indeed, an empirical analysis of each dimension reveals that the
grades of the two dimensions are correlated to each other. By analysing the
level of agreement between the assessor and the users, we aim at identifying
the best units of retrieval. The results of our analysis show that the highest
level of agreement is on highly relevant and on non-relevant document
components, suggesting that only the end points of the INEX 10-point relevance
scale are perceived in the same way by both the assessor and the users. We
propose a new definition of relevance for XML retrieval and argue that its
corresponding relevance scale would be a better choice for INEX.
|
cs/0507070
|
Hybrid XML Retrieval: Combining Information Retrieval and a Native XML
Database
|
cs.IR
|
This paper investigates the impact of three approaches to XML retrieval:
using Zettair, a full-text information retrieval system; using eXist, a native
XML database; and using a hybrid system that takes full article answers from
Zettair and uses eXist to extract elements from those articles. For the
content-only topics, we undertake a preliminary analysis of the INEX 2003
relevance assessments in order to identify the types of highly relevant
document components. Further analysis identifies two complementary sub-cases of
relevance assessments ("General" and "Specific") and two categories of topics
("Broad" and "Narrow"). We develop a novel retrieval module that for a
content-only topic utilises the information from the resulting answer list of a
native XML database and dynamically determines the preferable units of
retrieval, which we call "Coherent Retrieval Elements". The results of our
experiments show that -- when each of the three systems is evaluated against
different retrieval scenarios (such as different cases of relevance
assessments, different topic categories and different choices of evaluation
metrics) -- the XML retrieval systems exhibit varying behaviour and the best
performance can be reached for different values of the retrieval parameters. In
the case of INEX 2003 relevance assessments for the content-only topics, our
newly developed hybrid XML retrieval system is substantially more effective
than either Zettair or eXist, and yields robust and very effective XML
retrieval.
|
cs/0508001
|
Dimensions of Copeland-Erdos Sequences
|
cs.CC cs.IT math.IT
|
The base-$k$ {\em Copeland-Erd\"os sequence} given by an infinite set $A$ of
positive integers is the infinite sequence $\CE_k(A)$ formed by concatenating
the base-$k$ representations of the elements of $A$ in numerical order. This
paper concerns the following four quantities.
The {\em finite-state dimension} $\dimfs (\CE_k(A))$, a finite-state version
of classical Hausdorff dimension introduced in 2001.
The {\em finite-state strong dimension} $\Dimfs(\CE_k(A))$, a finite-state
version of classical packing dimension introduced in 2004. This is a dual of
$\dimfs(\CE_k(A))$ satisfying $\Dimfs(\CE_k(A))$ $\geq \dimfs(\CE_k(A))$.
The {\em zeta-dimension} $\Dimzeta(A)$, a kind of discrete fractal dimension
discovered many times over the past few decades.
The {\em lower zeta-dimension} $\dimzeta(A)$, a dual of $\Dimzeta(A)$
satisfying $\dimzeta(A)\leq \Dimzeta(A)$.
We prove the following.
$\dimfs(\CE_k(A))\geq \dimzeta(A)$. This extends the 1946 proof by Copeland
and Erd\"os that the sequence $\CE_k(\mathrm{PRIMES})$ is Borel normal.
$\Dimfs(\CE_k(A))\geq \Dimzeta(A)$.
These bounds are tight in the strong sense that these four quantities can
have (simultaneously) any four values in $[0,1]$ satisfying the four
above-mentioned inequalities.
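The sequence construction itself is elementary; a short sketch for generating
a prefix of CE_k(A):

```python
def copeland_erdos(A, k, length):
    """Prefix of the base-k Copeland-Erdos sequence CE_k(A): concatenate
    the base-k digit representations of the elements of A in increasing
    order, returned as a list of digits."""
    def to_base(n, k):
        digits = []
        while n:
            digits.append(n % k)
            n //= k
        return digits[::-1] or [0]
    seq = []
    for n in sorted(A):
        seq.extend(to_base(n, k))
        if len(seq) >= length:
            break
    return seq[:length]
```

For A = PRIMES and k = 10 this is the classical Champernowne-style sequence
0.23571113... shown Borel normal by Copeland and Erdos.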
|
cs/0508007
|
Regularity of Position Sequences
|
cs.CV cs.AI cs.LG q-bio.NC
|
A person is given a numbered sequence of positions on a sheet of paper. The
person is asked, "Which will be the next (or the next after that) position?"
Everyone has an opinion as to how he or she would proceed. There are regular
sequences for which there is general agreement on how to continue. However,
there are less regular sequences for which this assessment is less certain.
There are sequences for which every continuation is perceived to be arbitrary.
I would like to present a mathematical model that reflects these opinions and
perceptions with the aid of a valuation function. It is necessary to apply a
rich set of invariant features of position sequences to ensure the quality of
this model. All other properties of the model are arbitrary.
|
cs/0508008
|
The accurate optimal-success/error-rate calculations applied to the
realizations of the reliable and short-period integer ambiguity resolution in
carrier-phase GPS/GNSS positioning
|
cs.IT math.IT
|
The maximum-marginal-a-posteriori success rate of statistical decision under
multivariate Gaussian error distribution on an integer lattice is almost
rigorously calculated by using union-bound approximation and Monte Carlo
integration. These calculations are applied to reveal the various possible
realizations of reliable and short-period integer ambiguity resolution in
precise carrier-phase relative positioning by GPS/GNSS. The
theoretical foundation and efficient methodology are systematically developed,
and two types of enhancement of the union-bound approximation are proposed
and examined.
The results revealed include an extremely high reliability under the
condition of accurate carrier-phase measurements and a large number of visible
satellites, its heavy degradation caused by the slight amount of differentiated
ionospheric delays due to the nonvanishing baseline length between rover and
reference receivers, and the advantages of the use of the multiple carrier
frequencies. The succeeding initialization of the integer ambiguities is shown
to overcome the disadvantageous condition of the nonvanishing baseline length
effectively due to the reasonably assumed temporal and spatial constancy of
differentiated ionospheric delays.
|
cs/0508012
|
n-Channel Asymmetric Multiple-Description Lattice Vector Quantization
|
cs.IT math.IT
|
We present analytical expressions for optimal entropy-constrained
multiple-description lattice vector quantizers which, under high-resolution
assumptions, minimize the expected distortion for given packet-loss
probabilities. We consider the asymmetric case where packet-loss probabilities
and side entropies are allowed to be unequal and find optimal quantizers for
any number of descriptions in any dimension. We show that the normalized second
moments of the side-quantizers are given by that of an $L$-dimensional sphere
independent of the choice of lattices. Furthermore, we show that the optimal
bit-distribution among the descriptions is not unique. In fact, within certain
limits, bits can be arbitrarily distributed.
|
cs/0508013
|
Relations between the Local Weight Distributions of a Linear Block Code,
Its Extended Code, and Its Even Weight Subcode
|
cs.IT math.IT
|
Relations between the local weight distributions of a binary linear code, its
extended code, and its even weight subcode are presented. In particular, for a
code whose extended code is transitive invariant and contains only codewords
of weight a multiple of four, the local weight distribution can be
obtained from that of the extended code. Using the relations, the local weight
distributions of the $(127,k)$ primitive BCH codes for $k\leq50$, the
$(127,64)$ punctured third-order Reed-Muller, and their even weight subcodes
are obtained from the local weight distribution of the $(128,k)$ extended
primitive BCH codes for $k\leq50$ and the $(128,64)$ third-order Reed-Muller
code. We also show an approach to improving a previously proposed algorithm
for computing the local weight distribution.
|
cs/0508014
|
The Benefit of Thresholding in LP Decoding of LDPC Codes
|
cs.IT math.IT
|
Consider data transmission over a binary-input additive white Gaussian noise
channel using a binary low-density parity-check code. We ask the following
question: Given a decoder that takes log-likelihood ratios as input, does it
help to modify the log-likelihood ratios before decoding? If we use an optimal
decoder then it is clear that modifying the log-likelihoods cannot possibly
help the decoder's performance, and so the answer is "no." However, for a
suboptimal decoder like the linear programming decoder, the answer might be
"yes": In this paper we prove that for certain interesting classes of
low-density parity-check codes and large enough SNRs, it is advantageous to
truncate the log-likelihood ratios before passing them to the linear
programming decoder.
|
cs/0508015
|
Chosen-ciphertext attack on noncommutative Polly Cracker
|
cs.IT cs.CR math.IT
|
We propose a chosen-ciphertext attack on a recently presented noncommutative
variant of the well-known Polly Cracker cryptosystem. We show that if one
chooses parameters for this noncommutative Polly Cracker as initially proposed,
then the system is insecure.
|
cs/0508017
|
Enhancing Content-And-Structure Information Retrieval using a Native XML
Database
|
cs.IR
|
Three approaches to content-and-structure XML retrieval are analysed in this
paper: first by using Zettair, a full-text information retrieval system; second
by using eXist, a native XML database, and third by using a hybrid XML
retrieval system that uses eXist to produce the final answers from likely
relevant articles retrieved by Zettair. INEX 2003 content-and-structure topics
can be classified in two categories: the first retrieving full articles as
final answers, and the second retrieving more specific elements within articles
as final answers. We show that for both topic categories our initial hybrid
system improves the retrieval effectiveness of a native XML database. For
ranking the final answer elements, we propose and evaluate a novel retrieval
model that utilises the structural relationships between the answer elements of
a native XML database and retrieves Coherent Retrieval Elements. The final
results of our experiments show that when the XML retrieval task focusses on
highly relevant elements our hybrid XML retrieval system with the Coherent
Retrieval Elements module is 1.8 times more effective than Zettair and 3 times
more effective than eXist, and yields effective content-and-structure XML
retrieval.
|
cs/0508018
|
Spectral Factorization, Whitening- and Estimation Filter -- Stability,
Smoothness Properties and FIR Approximation Behavior
|
cs.IT math.IT
|
A Wiener filter can be interpreted as a cascade of a whitening filter and an
estimation filter. This paper gives a detailed investigation of the properties
of these two filters. Then the practical consequences for the overall Wiener
filter are ascertained. It is shown that if the given spectral densities are
smooth (Hoelder continuous) functions, the resulting Wiener filter will always
be stable and can be approximated arbitrarily well by a finite impulse response
(FIR) filter. Moreover, the smoothness of the spectral densities characterizes
how fast the FIR filter approximates the desired filter characteristic. If on
the other hand the spectral densities are continuous but not smooth enough, the
resulting Wiener filter may not be stable.
|
cs/0508019
|
On the Minimal Pseudo-Codewords of Codes from Finite Geometries
|
cs.IT cs.DM math.IT
|
In order to understand the performance of a code under maximum-likelihood
(ML) decoding, it is crucial to know the minimal codewords. In the context of
linear programming (LP) decoding, it turns out to be necessary to know the
minimal pseudo-codewords. This paper studies the minimal codewords and minimal
pseudo-codewords of some families of codes derived from projective and
Euclidean planes. Although our numerical results are only for codes of very
modest length, they suggest that these code families exhibit an interesting
property. Namely, all minimal pseudo-codewords that are not multiples of a
minimal codeword have an AWGNC pseudo-weight that is strictly larger than the
minimum Hamming weight of the code. This observation has positive consequences
not only for LP decoding but also for iterative decoding.
|
cs/0508020
|
Capacity Gain from Transmitter and Receiver Cooperation
|
cs.IT math.IT
|
Capacity gain from transmitter and receiver cooperation are compared in a
relay network where the cooperating nodes are close together. When all nodes
have equal average transmit power along with full channel state information
(CSI), it is proved that transmitter cooperation outperforms receiver
cooperation, whereas the opposite is true when power is optimally allocated
among the nodes but only receiver phase CSI is available. In addition, when the
nodes have equal average power with receiver phase CSI only, cooperation is
shown to offer no capacity improvement over a non-cooperative scheme with the
same average network power. When the system is under optimal power allocation
with full CSI, the decode-and-forward transmitter cooperation rate is close to
its cut-set capacity upper bound, and outperforms compress-and-forward receiver
cooperation. Moreover, it is shown that full CSI is essential in transmitter
cooperation, while optimal power allocation is essential in receiver
cooperation.
|
cs/0508022
|
Matrix Construction Using Cyclic Shifts of a Column
|
cs.DM cs.CR cs.IT math.IT
|
This paper describes the synthesis of matrices with good correlation, from
cyclic shifts of pseudonoise columns. Optimum matrices result whenever the
shift sequence satisfies the constant difference property. Known shift
sequences with the constant (or almost constant) difference property are:
Quadratic (Polynomial) and Reciprocal Shift modulo prime, Exponential Shift,
Legendre Shift, Zech Logarithm Shift, and the shift sequences of some m-arrays.
We use these shift sequences to produce arrays for watermarking of digital
images. Matrices can also be unfolded into long sequences by diagonal unfolding
(with no deterioration in correlation) or row-by-row unfolding, with some
degradation in correlation.
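A minimal sketch of the construction for one concrete case: a Legendre
pseudonoise column shifted according to the quadratic shift sequence
s(i) = i^2 mod p. The choice of column and shift sequence here is just one of
the several families listed above:

```python
def legendre_column(p):
    """+/-1 Legendre (quadratic residue) column of odd prime length p;
    the zero position is arbitrarily set to -1."""
    residues = {(x * x) % p for x in range(1, p)}
    return [1 if i in residues else -1 for i in range(p)]

def quadratic_shift_matrix(p):
    """p x p matrix whose i-th column is the base column cyclically
    shifted by the quadratic shift sequence s(i) = i^2 mod p."""
    col = legendre_column(p)
    return [[col[(r - i * i) % p] for i in range(p)] for r in range(p)]
```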
|
cs/0508023
|
Software Libraries and Their Reuse: Entropy, Kolmogorov Complexity, and
Zipf's Law
|
cs.SE cs.IT cs.PL math.IT
|
We analyze software reuse from the perspective of information theory and
Kolmogorov complexity, assessing our ability to ``compress'' programs by
expressing them in terms of software components reused from libraries. A common
theme in the software reuse literature is that if we can only get the right
environment in place-- the right tools, the right generalizations, economic
incentives, a ``culture of reuse'' -- then reuse of software will soar, with
consequent improvements in productivity and software quality. The analysis
developed in this paper paints a different picture: the extent to which
software reuse can occur is an intrinsic property of a problem domain, and
better tools and culture can have only marginal impact on reuse rates if the
domain is inherently resistant to reuse. We define an entropy parameter $H \in
[0,1]$ of problem domains that measures program diversity, and deduce from this
upper bounds on code reuse and the scale of components with which we may work.
For ``low entropy'' domains with $H$ near 0, programs are highly similar to one
another and the domain is amenable to the Component-Based Software Engineering
(CBSE) dream of programming by composing large-scale components. For problem
domains with $H$ near 1, programs require substantial quantities of new code,
with only a modest proportion of an application comprised of reused,
small-scale components. Preliminary empirical results from Unix platforms
support some of the predictions of our model.
|
cs/0508024
|
New Codes for OFDM with Low PMEPR
|
cs.IT math.IT
|
In this paper new codes for orthogonal frequency-division multiplexing (OFDM)
with tightly controlled peak-to-mean envelope power ratio (PMEPR) are proposed.
We identify a new family of sequences occurring in complementary sets and show
that such sequences form subsets of a new generalization of the Reed--Muller
codes. Contrary to previous constructions, we present a compact description of
such codes, which makes them suitable even for larger block lengths. We also
show that some previous constructions just occur as special cases in our
construction.
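The PMEPR control at the heart of such constructions rests on complementary
sequences. A small sketch checking the Golay pair property and the resulting
PMEPR bound of 2; the length-4 pair used in the test is a standard textbook
example, not one of the paper's new codes:

```python
import numpy as np

def aperiodic_acf(a, tau):
    """Aperiodic autocorrelation of sequence a at shift tau."""
    return sum(a[i] * a[i + tau] for i in range(len(a) - tau))

def is_complementary_pair(a, b):
    """Golay complementary pair: the aperiodic autocorrelations of a and b
    sum to zero at every nonzero shift."""
    return all(aperiodic_acf(a, t) + aperiodic_acf(b, t) == 0
               for t in range(1, len(a)))

def pmepr(a, oversample=16):
    """Peak-to-mean envelope power ratio of the OFDM signal with
    subcarrier coefficients a, evaluated on an oversampled time grid."""
    n = len(a)
    t = np.arange(n * oversample) / (n * oversample)
    s = sum(c * np.exp(2j * np.pi * k * t) for k, c in enumerate(a))
    return float(np.max(np.abs(s) ** 2) / n)
```

Since |S_a(t)|^2 + |S_b(t)|^2 = 2n for a Golay pair, each member has PMEPR at
most 2, whereas an arbitrary +/-1 sequence can reach PMEPR = n.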
|
cs/0508025
|
Signature coding for OR channel with asynchronous access
|
cs.IT math.IT
|
Signature coding for the multiple-access OR channel is considered. We prove
that in the block-asynchronous case the upper bound on the minimum code length
is asymptotically the same as in the case of synchronous access.
|
cs/0508026
|
Simple Maximum-Likelihood Decoding of Generalized First-order
Reed-Muller Codes
|
cs.IT math.IT
|
An efficient decoder for the generalized first-order Reed-Muller code
RM_q(1,m) is essential for the decoding of various block-coding schemes for
orthogonal frequency-division multiplexing with reduced peak-to-mean power
ratio. We present an efficient and simple maximum-likelihood decoding algorithm
for RM_q(1,m). It is shown that this algorithm has lower complexity than other
previously known maximum-likelihood decoders for RM_q(1,m).
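For the classical binary case q = 2, maximum-likelihood decoding of RM(1, m)
reduces to a fast Walsh-Hadamard transform: the largest-magnitude transform
coefficient identifies the closest affine function. The sketch below
illustrates that baseline only; the paper's algorithm handles general q:

```python
import numpy as np

def fwht(f):
    """Fast Walsh-Hadamard transform (length must be a power of 2)."""
    f = np.array(f, dtype=float)
    h = 1
    while h < len(f):
        for i in range(0, len(f), 2 * h):
            a = f[i:i + h].copy()
            b = f[i + h:i + 2 * h].copy()
            f[i:i + h] = a + b
            f[i + h:i + 2 * h] = a - b
        h *= 2
    return f

def rm1_encode(u, m):
    """Binary RM(1, m) encoding of message u = (u0, u1, ..., um):
    codeword bit at position x is u0 + sum_i u_i x_i (mod 2)."""
    n = 1 << m
    x = (np.arange(n)[:, None] >> np.arange(m)) & 1
    return (u[0] + x @ np.array(u[1:])) % 2

def rm1_ml_decode(r, m):
    """ML decoding of binary RM(1, m) via the Walsh-Hadamard transform."""
    F = fwht(1.0 - 2.0 * np.asarray(r, dtype=float))   # 0/1 -> +1/-1
    j = int(np.argmax(np.abs(F)))
    return [0 if F[j] > 0 else 1] + [(j >> i) & 1 for i in range(m)]
```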
|
cs/0508027
|
Expectation maximization as message passing
|
cs.IT cs.LG math.IT
|
Based on prior work by Eckford, it is shown how expectation maximization (EM)
may be viewed, and used, as a message passing algorithm in factor graphs.
|
cs/0508028
|
Truth-telling Reservations
|
cs.GT cond-mat.stat-mech cs.MA
|
We present a mechanism for reservations of bursty resources that is both
truthful and robust. It consists of option contracts whose pricing structure
induces users to reveal the true likelihoods that they will purchase a given
resource. Users are also allowed to adjust their options as their likelihood
changes. This scheme helps users save costs and providers plan ahead so
as to reduce the risk of under-utilization and overbooking. The mechanism
extracts revenue similar to that of a monopoly provider practicing temporal
pricing discrimination with a user population whose preference distribution is
known in advance.
|
cs/0508029
|
Selfish vs. Unselfish Optimization of Network Creation
|
cs.NI cs.AR cs.MA
|
We investigate several variants of a network creation model: a group of
agents builds up a network between them while trying to keep the costs of this
network small. The cost function consists of two addends, namely (i) a constant
amount for each edge an agent buys and (ii) the minimum number of hops it takes
sending messages to other agents. Despite the simplicity of this model, various
complex network structures emerge depending on the weight between the two
addends of the cost function and on the selfish or unselfish behaviour of the
agents.
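The two-addend cost function can be made concrete. The sketch below scores a whole network as a constant alpha per edge plus the sum over ordered agent pairs of the hop distance; this is an illustrative aggregate version (the model charges costs to individual agents), with names and example graphs invented here.

```python
def network_cost(n, edges, alpha):
    """Aggregate cost of a network of n agents: (i) alpha per edge bought,
    plus (ii) the sum over ordered agent pairs of the hop distance."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = alpha * len(edges)
    for s in range(n):
        # Breadth-first search from s gives hop counts to every other agent.
        dist = {s: 0}
        frontier = [s]
        while frontier:
            nxt = []
            for u in frontier:
                for w in adj[u]:
                    if w not in dist:
                        dist[w] = dist[u] + 1
                        nxt.append(w)
            frontier = nxt
        if len(dist) < n:
            return float("inf")  # disconnected: infinite communication cost
        total += sum(dist.values())
    return total

star = [(0, 1), (0, 2), (0, 3)]
clique = [(i, j) for i in range(4) for j in range(i + 1, 4)]
```

For alpha = 1 the 4-node clique (cost 18) beats the star (cost 21); for alpha = 5 the ranking flips, which is the kind of regime change between edge weight and hop count that drives the emerging structures.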
|
cs/0508030
|
Terminated LDPC Convolutional Codes with Thresholds Close to Capacity
|
cs.IT math.IT
|
An ensemble of LDPC convolutional codes with parity-check matrices composed
of permutation matrices is considered. The convergence of the iterative belief
propagation based decoder for terminated convolutional codes in the ensemble is
analyzed for binary-input output-symmetric memoryless channels using density
evolution techniques. We observe that the structured irregularity in the Tanner
graph of the codes leads to significantly better thresholds when compared to
corresponding LDPC block codes.
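On the simpler binary erasure channel, the density-evolution threshold of a regular LDPC ensemble reduces to a one-dimensional fixed-point recursion, which the sketch below locates by bisection. This is a standard textbook computation, not this paper's analysis of terminated convolutional ensembles; the iteration limits and tolerances are pragmatic choices.

```python
def bec_threshold(dv, dc, tol=1e-4):
    """Density-evolution erasure threshold of a regular (dv, dc) LDPC
    ensemble on the binary erasure channel, found by bisection."""
    def converges(eps):
        # x is the erasure probability of a variable-to-check message.
        x = eps
        for _ in range(50000):
            x_new = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
            if x_new < 1e-9:
                return True           # density evolution drove erasures to 0
            if x - x_new < 1e-12:
                return False          # stalled at a nonzero fixed point
            x = x_new
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if converges(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For the (3,6) ensemble this returns roughly 0.4294, against a Shannon limit of 0.5 for rate 1/2; the gap to capacity is exactly what the structured convolutional ensembles above are shown to shrink.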
|
cs/0508031
|
Capacity Theorems for Quantum Multiple Access Channels
|
cs.IT math.IT quant-ph
|
We consider quantum channels with two senders and one receiver. For an
arbitrary such channel, we give multi-letter characterizations of two different
two-dimensional capacity regions. The first region characterizes the rates at
which it is possible for one sender to send classical information while the
other sends quantum information. The second region gives the rates at which
each sender can send quantum information. We give an example of a channel for
which each region has a single-letter description, concluding with a
characterization of the rates at which each user can simultaneously send
classical and quantum information.
|
cs/0508032
|
Polymorphic Self-* Agents for Stigmergic Fault Mitigation in Large-Scale
Real-Time Embedded Systems
|
cs.AI cs.MA
|
Organization and coordination of agents within large-scale, complex,
distributed environments is one of the primary challenges in the field of
multi-agent systems. A lot of interest has surfaced recently around self-*
(self-organizing, self-managing, self-optimizing, self-protecting) agents. This
paper presents polymorphic self-* agents that evolve a core set of roles and
behavior based on environmental cues. The agents adapt these roles based on the
changing demands of the environment, and are directly implementable in computer
systems applications. The design combines strategies from game theory,
stigmergy, and other biologically inspired models to address fault mitigation
in large-scale, real-time, distributed systems. The agents are embedded within
the individual digital signal processors of BTeV, a High Energy Physics
experiment consisting of 2500 such processors. Results obtained using a SWARM
simulation of the BTeV environment demonstrate the polymorphic character of the
agents, and show how this design exceeds performance and reliability metrics
obtained from comparable centralized, and even traditional decentralized
approaches.
|
cs/0508034
|
Channel combining and splitting for cutoff rate improvement
|
cs.IT math.IT
|
The cutoff rate $R_0(W)$ of a discrete memoryless channel (DMC) $W$ is often
used as a figure of merit, alongside the channel capacity $C(W)$. Given a
channel $W$ consisting of two possibly correlated subchannels $W_1$, $W_2$, the
capacity function always satisfies $C(W_1)+C(W_2) \le C(W)$, while there are
examples for which $R_0(W_1)+R_0(W_2) > R_0(W)$. The fact that cutoff rate can
be ``created'' by channel splitting was noticed by Massey in his study of an
optical modulation system modeled as an $M$-ary erasure channel. This paper
demonstrates that similar gains in cutoff rate can be achieved for general
DMC's by methods of channel combining and splitting. The relation of the
proposed method to Pinsker's early work on cutoff rate improvement and to
Imai-Hirakawa multi-level coding is also discussed.
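For a fixed input distribution $Q$, the cutoff rate of a DMC has the closed form $R_0 = -\log_2 \sum_y (\sum_x Q(x)\sqrt{W(y|x)})^2$. The sketch below evaluates it for a binary symmetric channel; this is an illustrative computation of the figure of merit itself, not the paper's combining/splitting construction.

```python
import math

def cutoff_rate(W, Q):
    """R_0 for input distribution Q over a DMC with transition
    probabilities W[x][y] = W(y|x), in bits per channel use."""
    total = 0.0
    for y in range(len(W[0])):
        s = sum(Q[x] * math.sqrt(W[x][y]) for x in range(len(Q)))
        total += s * s
    return -math.log2(total)

p = 0.1
bsc = [[1 - p, p], [p, 1 - p]]
r0 = cutoff_rate(bsc, [0.5, 0.5])  # uniform input is optimal for the BSC
```

For the BSC this agrees with the well-known closed form $R_0 = 1 - \log_2(1 + 2\sqrt{p(1-p)})$.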
|
cs/0508035
|
Codes for error detection, good or not good
|
cs.IT math.IT
|
Linear codes for error detection on a q-ary symmetric channel are studied. It
is shown that for given dimension k and minimum distance d, there exists a
value \mu(d,k) such that if C is a code of length n >= \mu(d,k), then neither C
nor its dual is good for error detection. For d >> k or k >> d, good
approximations for \mu(d,k) are given. A generalization to non-linear codes is
also given.
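On the binary symmetric channel the relevant quantity is read off the weight distribution A_i: the undetected-error probability is P_ue(p) = sum_i A_i p^i (1-p)^(n-i), and the usual benchmark for ``good'' codes is P_ue(1/2) = (2^k - 1)/2^n. A brute-force sketch for tiny codes (illustrative only; the example generator matrices are not from the paper):

```python
from itertools import product

def weight_distribution(gen):
    """A_i for the binary linear code spanned by the rows of gen
    (rows given as bit tuples); brute force over all 2^k messages."""
    n = len(gen[0])
    A = [0] * (n + 1)
    for msg in product((0, 1), repeat=len(gen)):
        cw = [0] * n
        for bit, row in zip(msg, gen):
            if bit:
                cw = [c ^ r for c, r in zip(cw, row)]
        A[sum(cw)] += 1
    return A

def p_undetected(A, p):
    """P_ue(p) on a BSC: an error pattern goes undetected exactly when
    it equals a nonzero codeword."""
    n = len(A) - 1
    return sum(A[i] * p ** i * (1 - p) ** (n - i) for i in range(1, n + 1))

A_rep = weight_distribution([(1, 1, 1)])             # [3,1] repetition code
A_par = weight_distribution([(1, 0, 1), (0, 1, 1)])  # [3,2] parity-check code
```

Both toy codes are good in the above sense: P_ue(p) stays below P_ue(1/2) for all p <= 1/2.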
|
cs/0508036
|
Exp\'{e}riences de classification d'une collection de documents XML de
structure homog\`{e}ne
|
cs.IR
|
This paper presents some experiments in clustering homogeneous XML documents
to validate an existing classification or, more generally, an organisational
structure. Our approach integrates techniques for extracting knowledge from
documents with unsupervised classification (clustering) of documents. We focus
on the feature selection used for representing documents and its impact on the
emerging classification. We mix the selection of structured features with fine
textual selection based on syntactic characteristics. We illustrate and evaluate
this approach with a collection of Inria activity reports for the year 2003.
The objective is to cluster projects into larger groups (Themes), based on the
keywords or different chapters of these activity reports. We then compare the
results of clustering using different feature selections, with the official
theme structure used by Inria.
|
cs/0508039
|
Tight Bounds on the Redundancy of Huffman Codes
|
cs.IT math.IT
|
In this paper we study the redundancy of Huffman codes. In particular, we
consider sources for which the probability of one of the source symbols is
known. We prove a conjecture of Ye and Yeung regarding the upper bound on the
redundancy of such Huffman codes, which yields a tight upper bound. We also
derive a tight lower bound for the redundancy under the same assumption.
We further apply the method introduced in this paper to other related
problems. It is shown that several other previously known bounds with different
constraints follow immediately from our results.
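Redundancy here is the gap between a Huffman code's average length and the source entropy. A small sketch of the quantity being bounded (illustrative only, not the paper's bounding technique):

```python
import heapq
import math
from itertools import count

def huffman_lengths(probs):
    """Codeword lengths of a binary Huffman code for the given pmf."""
    tick = count()  # tie-breaker so the heap never compares lists
    heap = [(p, next(tick), [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for i in s1 + s2:  # every symbol under the merge gains one bit
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, next(tick), s1 + s2))
    return lengths

def redundancy(probs):
    """Average Huffman codeword length minus the source entropy, in bits."""
    lengths = huffman_lengths(probs)
    avg = sum(p * l for p, l in zip(probs, lengths))
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return avg - entropy
```

Dyadic sources have zero redundancy; in general the redundancy lies in [0, 1), and knowing one symbol probability, as assumed above, narrows this interval further.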
|
cs/0508040
|
Bounds on the Capacity of the Blockwise Noncoherent APSK-AWGN Channels
|
cs.IT math.IT
|
Capacity of M-ary Amplitude and Phase-Shift Keying (M-APSK) over an Additive
White Gaussian Noise (AWGN) channel that also introduces an unknown carrier
phase rotation is considered. The phase remains constant over a block of L
symbols and is independent from block to block. Aiming to design codes with
equally probable symbols, uniformly distributed channel inputs are assumed.
Based on results of Peleg and Shamai for M-ary Phase-Shift Keying (M-PSK)
modulation, easily computable upper and lower bounds on the effective M-APSK
capacity are derived. For moderate M and L and a broad range of Signal-to-Noise
Ratios (SNRs), the bounds come close together. As in the case of M-PSK
modulation, for large L the coherent capacity is approached.
|
cs/0508043
|
Sequential Predictions based on Algorithmic Complexity
|
cs.IT cs.LG math.IT
|
This paper studies sequence prediction based on the monotone Kolmogorov
complexity Km=-log m, i.e. based on universal deterministic/one-part MDL. m is
extremely close to Solomonoff's universal prior M, the latter being an
excellent predictor in deterministic as well as probabilistic environments,
where performance is measured in terms of convergence of posteriors or losses.
Despite this closeness to M, it is difficult to assess the prediction quality
of m, since little is known about the closeness of their posteriors, which are
the important quantities for prediction. We show that for deterministic
computable environments, the "posterior" and losses of m converge, but rapid
convergence could only be shown on-sequence; the off-sequence convergence can
be slow. In probabilistic environments, neither the posterior nor the losses
converge, in general.
|
cs/0508046
|
Relaxation Bounds on the Minimum Pseudo-Weight of Linear Block Codes
|
cs.IT math.IT
|
Just as the Hamming weight spectrum of a linear block code sheds light on the
performance of a maximum likelihood decoder, the pseudo-weight spectrum
provides insight into the performance of a linear programming decoder. Using
properties of polyhedral cones, we find the pseudo-weight spectrum of some
short codes. We also present two general lower bounds on the minimum
pseudo-weight. The first bound is based on the column weight of the
parity-check matrix. The second bound is computed by solving an optimization
problem. In some cases, this bound is more tractable to compute than previously
known bounds and thus can be applied to longer codes.
|
cs/0508047
|
Further Results on Coding for Reliable Communication over Packet
Networks
|
cs.IT cs.NI math.IT
|
In "On Coding for Reliable Communication over Packet Networks" (Lun, Medard,
and Effros, Proc. 42nd Annu. Allerton Conf. Communication, Control, and
Computing, 2004), a capacity-achieving coding scheme for unicast or multicast
over lossy wireline or wireless packet networks is presented. We extend that
paper's results in two ways: First, we extend the network model to allow
packets received on a link to arrive according to any process with an average
rate, as opposed to the assumption of Poisson traffic with i.i.d. losses that
was previously made. Second, in the case of Poisson traffic with i.i.d. losses,
we derive error exponents that quantify the rate at which the probability of
error decays with coding delay.
|
cs/0508049
|
Characterizations of Pseudo-Codewords of LDPC Codes
|
cs.IT cs.DM math.IT
|
An important property of high-performance, low complexity codes is the
existence of highly efficient algorithms for their decoding. Many of the most
efficient, recent graph-based algorithms, e.g. message passing algorithms and
decoding based on linear programming, crucially depend on the efficient
representation of a code in a graphical model. In order to understand the
performance of these algorithms, we argue for the characterization of codes in
terms of a so called fundamental cone in Euclidean space which is a function of
a given parity check matrix of a code, rather than of the code itself. We give
a number of properties of this fundamental cone derived from its connection to
unramified covers of the graphical models on which the decoding algorithms
operate. For the class of cycle codes, these developments naturally lead to a
characterization of the fundamental polytope as the Newton polytope of the
Hashimoto edge zeta function of the underlying graph.
|
cs/0508050
|
Duality between channel capacity and rate distortion with two-sided
state information
|
cs.IT math.IT
|
We show that the duality between channel capacity and data compression is
retained when state information is available to the sender, to the receiver, to
both, or to neither. We present a unified theory for eight special cases of
channel capacity and rate distortion with state information, which also extends
existing results to arbitrary pairs of independent and identically distributed
(i.i.d.) correlated state information available at the sender and at the
receiver, respectively. In particular, the resulting general formula for
channel capacity assumes the same form as the generalized Wyner-Ziv
rate-distortion function.
|
cs/0508051
|
Trellis-Based Equalization for Sparse ISI Channels Revisited
|
cs.IT math.IT
|
Sparse intersymbol-interference (ISI) channels are encountered in a variety
of high-data-rate communication systems. Such channels have a large channel
memory length, but only a small number of significant channel coefficients. In
this paper, trellis-based equalization of sparse ISI channels is revisited. Due
to the large channel memory length, the complexity of maximum-likelihood
detection, e.g., by means of the Viterbi algorithm (VA), is normally
prohibitive. In the first part of the paper, a unified framework based on
factor graphs is presented for complexity reduction without loss of optimality.
In this new context, two known reduced-complexity algorithms for sparse ISI
channels are recapitulated: The multi-trellis VA (M-VA) and the
parallel-trellis VA (P-VA). It is shown that the M-VA, contrary to earlier
claims, does not lead to a reduced computational complexity. The P-VA, on the other hand,
leads to a significant complexity reduction, but can only be applied for a
certain class of sparse channels. In the second part of the paper, a unified
approach is investigated to tackle general sparse channels: It is shown that
the use of a linear filter at the receiver renders the application of standard
reduced-state trellis-based equalizer algorithms feasible, without significant
loss of optimality. Numerical results verify the efficiency of the proposed
receiver structure.
|
cs/0508053
|
Measuring Semantic Similarity by Latent Relational Analysis
|
cs.LG cs.CL cs.IR
|
This paper introduces Latent Relational Analysis (LRA), a method for
measuring semantic similarity. LRA measures similarity in the semantic
relations between two pairs of words. When two pairs have a high degree of
relational similarity, they are analogous. For example, the pair cat:meow is
analogous to the pair dog:bark. There is evidence from cognitive science that
relational similarity is fundamental to many cognitive and linguistic tasks
(e.g., analogical reasoning). In the Vector Space Model (VSM) approach to
measuring relational similarity, the similarity between two pairs is calculated
by the cosine of the angle between the vectors that represent the two pairs.
The elements in the vectors are based on the frequencies of manually
constructed patterns in a large corpus. LRA extends the VSM approach in three
ways: (1) patterns are derived automatically from the corpus, (2) Singular
Value Decomposition is used to smooth the frequency data, and (3) synonyms are
used to reformulate word pairs. This paper describes the LRA algorithm and
experimentally compares LRA to VSM on two tasks, answering college-level
multiple-choice word analogy questions and classifying semantic relations in
noun-modifier expressions. LRA achieves state-of-the-art results, reaching
human-level performance on the analogy questions and significantly exceeding
VSM performance on both tasks.
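The VSM step that LRA builds on is just a cosine between pattern-frequency vectors for the two word pairs. A toy sketch; the pattern columns and counts below are invented for illustration:

```python
import math

def cosine(u, v):
    """Relational similarity of two word pairs represented as vectors of
    pattern frequencies (counts of joining patterns in a corpus)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented corpus counts over four hypothetical joining patterns.
cat_meow = [12, 3, 0, 5]
dog_bark = [10, 4, 1, 6]
dog_tail = [0, 1, 9, 0]
```

Here cat:meow comes out far closer to dog:bark than to dog:tail, matching the analogy in the abstract; LRA then replaces the hand-built patterns with automatically derived ones and smooths the counts with SVD.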
|
cs/0508054
|
Sensing Capacity for Markov Random Fields
|
cs.IT math.IT
|
This paper computes the sensing capacity of a sensor network, with sensors of
limited range, sensing a two-dimensional Markov random field, by modeling the
sensing operation as an encoder. Sensor observations are dependent across
sensors, and the sensor network output across different states of the
environment is neither identically nor independently distributed. Using a
random coding argument, based on the theory of types, we prove a lower bound on
the sensing capacity of the network, which characterizes the ability of the
sensor network to distinguish among environments with Markov structure, to
within a desired accuracy.
|
cs/0508055
|
DNA Codes that Avoid Secondary Structures
|
cs.DM cs.IT math.IT
|
In this paper, we consider the problem of designing DNA sequences (codewords)
for DNA storage systems and DNA computing that are unlikely to fold back onto
themselves to form undesirable secondary structures. The paper addresses both
the issue of enumerating the sequences with such properties and the problem of
practical code construction.
|
cs/0508056
|
Very Simple Chaitin Machines for Concrete AIT
|
cs.IT math.IT
|
In 1975, Chaitin introduced his celebrated Omega number, the halting
probability of a universal Chaitin machine, a universal Turing machine with a
prefix-free domain. The Omega number's bits are {\em algorithmically
random}--there is no reason the bits should be the way they are, if we define
``reason'' to be a computable explanation smaller than the data itself. Since
that time, only {\em two} explicit universal Chaitin machines have been
proposed, both by Chaitin himself.
Concrete algorithmic information theory involves the study of particular
universal Turing machines, about which one can state theorems with specific
numerical bounds, rather than include terms like O(1). We present several new
tiny Chaitin machines (those with a prefix-free domain) suitable for the study
of concrete algorithmic information theory. One of the machines, which we call
Keraia, is a binary encoding of lambda calculus based on a curried lambda
operator. Source code is included in the appendices.
We also give an algorithm for restricting the domain of blank-endmarker
machines to a prefix-free domain over an alphabet that does not include the
endmarker; this allows one to take many universal Turing machines and construct
universal Chaitin machines from them.
|
cs/0508057
|
On the Performance of Turbo Codes in Quasi-Static Fading Channels
|
cs.IT math.IT
|
In this paper, we investigate in detail the performance of turbo codes in
quasi-static fading channels both with and without antenna diversity. First, we
develop a simple and accurate analytic technique to evaluate the performance of
turbo codes in quasi-static fading channels. The proposed analytic technique
relates the frame error rate of a turbo code to the iterative decoder
convergence threshold, rather than to the turbo code distance spectrum.
Subsequently, we compare the performance of various turbo codes in quasi-static
fading channels. We show that, in contrast to the situation in the AWGN
channel, turbo codes with different interleaver sizes or turbo codes based on
RSC codes with different constraint lengths and generator polynomials exhibit
identical performance. Moreover, we also compare the performance of turbo codes
and convolutional codes in quasi-static fading channels under the condition of
identical decoding complexity. In particular, we show that turbo codes do not
outperform convolutional codes in quasi-static fading channels with no antenna
diversity; and that turbo codes only outperform convolutional codes in
quasi-static fading channels with antenna diversity.
|
cs/0508058
|
Entropy coding with Variable Length Re-writing Systems
|
cs.IT math.IT
|
This paper describes a new set of block source codes well suited for data
compression. These codes are defined by sets of production rules of the form
a.l->b, where a in A represents a value from the source alphabet A and l, b are
short sequences of bits. These codes naturally encompass other Variable
Length Codes (VLCs) such as Huffman codes. It is shown that these codes may
have a similar or even a shorter mean description length than Huffman codes for
the same encoding and decoding complexity. A first code design method that
preserves the lexicographic order in the bit domain is described. The
corresponding codes have the same mean description length (mdl) as Huffman
codes from which they are constructed. Therefore, from a compression point of
view, they outperform the Hu-Tucker codes designed to offer the lexicographic
property in the bit domain. A second construction method yields codes whose
marginal bit probability converges to 0.5 as the sequence length increases,
even if the probability
distribution function is not known by the encoder.
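One way to operationalize rules a.l->b, offered purely as an assumed reading for illustration (not the paper's exact automaton): keep the bits emitted so far, and to encode symbol a, find a rule whose l matches the current output suffix, then rewrite that suffix into b. With every l empty this reduces exactly to ordinary VLC/Huffman concatenation.

```python
def rw_encode(symbols, rules):
    """Encode with production rules (a, l) -> b: to emit symbol a when the
    output so far ends in l, rewrite that suffix l into b.  This reading of
    the rules is an illustrative assumption."""
    max_l = max(len(l) for (_, l) in rules)
    out = ""
    for a in symbols:
        for n in range(min(len(out), max_l), -1, -1):  # longest match first
            l = out[len(out) - n:]
            if (a, l) in rules:
                out = out[: len(out) - n] + rules[(a, l)]
                break
        else:
            raise KeyError(f"no rule applies to symbol {a!r}")
    return out

# Huffman codes are the special case where every l is empty.
huffman_rules = {("a", ""): "0", ("b", ""): "10", ("c", ""): "11"}
```

Adding a rule with a nonempty l, such as ("b", "0") -> "11", lets the encoder rewrite previously emitted bits and can shorten the output, which is the extra freedom these systems have over plain VLCs.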
|
cs/0508060
|
Algorithms for Discrete Denoising Under Channel Uncertainty
|
cs.IT math.IT
|
The goal of a denoising algorithm is to reconstruct a signal from its
noise-corrupted observations. Perfect reconstruction is seldom possible and
performance is measured under a given fidelity criterion. In a recent work, the
authors addressed the problem of denoising unknown discrete signals corrupted
by a discrete memoryless channel when the channel, rather than being completely
known, is only known to lie in some uncertainty set of possible channels. A
sequence of denoisers was derived for this case and shown to be asymptotically
optimal with respect to a worst-case criterion argued most relevant to this
setting. In the present work we address the implementation and complexity of
this denoiser for channels parametrized by a scalar, establishing its
practicality. We show that for symmetric channels, the problem can be mapped
into a convex optimization problem, which can be solved efficiently. We also
present empirical results suggesting the potential of these schemes to do well
in practice. A key component of our schemes is an estimator of the subset of
channels in the uncertainty set that are feasible in the sense of being able to
give rise to the noise-corrupted signal statistics for some channel input
distribution. We establish the efficiency of this estimator, both
algorithmically and experimentally. We also present a modification of the
recently developed discrete universal denoiser (DUDE) that assumes a channel
based on the said estimator, and show that, in practice, the resulting scheme
performs well. For concreteness, we focus on the binary alphabet case and
binary symmetric channels, but also discuss the extensions of the algorithms to
general finite alphabets and to general channels parameterized by a scalar.
|
cs/0508062
|
Decoding of Expander Codes at Rates Close to Capacity
|
cs.IT math.IT
|
The decoding error probability of codes is studied as a function of their
block length. It is shown that the existence of codes with a polynomially small
decoding error probability implies the existence of codes with an exponentially
small decoding error probability. Specifically, it is assumed that there exists
a family of codes of length N and rate R=(1-\epsilon)C (C is the capacity of a
binary symmetric channel), whose decoding error probability decreases
polynomially in 1/N. It is shown that if the decoding error probability
decreases sufficiently fast,
but still only polynomially fast in 1/N, then there exists another such family
of codes whose decoding error probability decreases exponentially fast in N.
Moreover, if the decoding time complexity of the assumed family of codes is
polynomial in N and 1/\epsilon, then the decoding time complexity of the
presented family is linear in N and polynomial in 1/\epsilon. These codes are
compared to the recently presented codes of Barg and Zemor, ``Error Exponents
of Expander Codes,'' IEEE Trans. Inform. Theory, 2002, and ``Concatenated
Codes: Serial and Parallel,'' IEEE Trans. Inform. Theory, 2005. It is shown
that the latter families cannot be tuned to have exponentially decaying (in N)
error probability, and at the same time to have decoding time complexity linear
in N and polynomial in 1/\epsilon.
|
cs/0508064
|
Layered Orthogonal Lattice Detector for Two Transmit Antenna
Communications
|
cs.IT math.IT
|
A novel detector for multiple-input multiple-output (MIMO) communications is
presented. The algorithm belongs to the class of the lattice detectors, i.e. it
finds a reduced complexity solution to the problem of finding the closest
vector to the received observations. The algorithm achieves optimal
maximum-likelihood (ML) performance in case of two transmit antennas, at the
same time keeping a complexity much lower than the exhaustive search-based ML
detection technique. Also, unlike the state-of-the-art lattice detector
(namely, the sphere decoder), the proposed algorithm is suitable for a highly
parallel hardware architecture and for reliable soft-output bit information
generation, thus making it a promising option for real-time high-data-rate
transmission.
|
cs/0508066
|
Can Small Museums Develop Compelling, Educational and Accessible Web
Resources? The Case of Accademia Carrara
|
cs.MM cs.CY cs.DL cs.IR
|
Due to the lack of budget, competence, personnel and time, small museums are
often unable to develop compelling, educational and accessible web resources
for their permanent collections or temporary exhibitions. In an attempt to
prove that investing in these types of resources can be very fruitful even for
small institutions, we will illustrate the case of Accademia Carrara, a museum
in Bergamo, northern Italy, which, for a current temporary exhibition on
Cezanne and Renoir's masterpieces from the Paul Guillaume collection, developed
a series of multimedia applications, including an accessible website, rich in
content and educational material [www.cezannerenoir.it].
|
cs/0508068
|
Lossy source encoding via message-passing and decimation over
generalized codewords of LDGM codes
|
cs.IT cs.AI math.IT
|
We describe message-passing and decimation approaches for lossy source coding
using low-density generator matrix (LDGM) codes. In particular, this paper
addresses the problem of encoding a Bernoulli(0.5) source: for randomly
generated LDGM codes with suitably irregular degree distributions, our methods
yield performance very close to the rate distortion limit over a range of
rates. Our approach is inspired by the survey propagation (SP) algorithm,
originally developed by Mezard et al. for solving random satisfiability
problems. Previous work by Maneva et al. shows how SP can be understood as
belief propagation (BP) for an alternative representation of satisfiability
problems. In analogy to this connection, our approach is to define a family of
Markov random fields over generalized codewords, from which local
message-passing rules can be derived in the standard way. The overall source
encoding method is based on message-passing, setting a subset of bits to their
preferred values (decimation), and reducing the code.
|
cs/0508070
|
MAP estimation via agreement on (hyper)trees: Message-passing and linear
programming
|
cs.IT cs.AI math.IT
|
We develop and analyze methods for computing provably optimal {\em maximum a
posteriori} (MAP) configurations for a subclass of Markov random fields defined
on graphs with cycles. By decomposing the original distribution into a convex
combination of tree-structured distributions, we obtain an upper bound on the
optimal value of the original problem (i.e., the log probability of the MAP
assignment) in terms of the combined optimal values of the tree problems. We
prove that this upper bound is tight if and only if all the tree distributions
share an optimal configuration in common. An important implication is that any
such shared configuration must also be a MAP configuration for the original
distribution. Next we develop two approaches to attempting to obtain tight
upper bounds: (a) a {\em tree-relaxed linear program} (LP), which is derived
from the Lagrangian dual of the upper bounds; and (b) a {\em tree-reweighted
max-product message-passing algorithm} that is related to but distinct from the
max-product algorithm. In this way, we establish a connection between a certain
LP relaxation of the mode-finding problem, and a reweighted form of the
max-product (min-sum) message-passing algorithm.
|
cs/0508072
|
On Achievable Rates and Complexity of LDPC Codes for Parallel Channels
with Application to Puncturing
|
cs.IT math.IT
|
This paper considers the achievable rates and decoding complexity of
low-density parity-check (LDPC) codes over statistically independent parallel
channels. The paper starts with the derivation of bounds on the conditional
entropy of the transmitted codeword given the received sequence at the output
of the parallel channels; the component channels are considered to be
memoryless, binary-input, and output-symmetric (MBIOS). These results serve for
the derivation of an upper bound on the achievable rates of ensembles of LDPC
codes under optimal maximum-likelihood (ML) decoding when their transmission
takes place over parallel MBIOS channels. The paper relies on the latter bound
for obtaining upper bounds on the achievable rates of ensembles of randomly and
intentionally punctured LDPC codes over MBIOS channels. The paper also provides
a lower bound on the decoding complexity (per iteration) of ensembles of LDPC
codes under message-passing iterative decoding over parallel MBIOS channels;
the bound is given in terms of the gap between the rate of these codes for
which reliable communication is achievable and the channel capacity. The paper
presents a diagram which shows interconnections between the theorems introduced
in this paper and some other previously reported results. The setting which
serves for the derivation of the bounds on the achievable rates and decoding
complexity is general, and the bounds can be applied to other scenarios which
can be treated as different forms of communication over parallel channels.
|
cs/0508073
|
Universal Learning of Repeated Matrix Games
|
cs.LG cs.AI
|
We study and compare the learning dynamics of two universal learning
algorithms, one based on Bayesian learning and the other on prediction with
expert advice. Both approaches have strong asymptotic performance guarantees.
When confronted with the task of finding good long-term strategies in repeated
2x2 matrix games, they behave quite differently.
|
cs/0508074
|
Throughput and Delay in Random Wireless Networks with Restricted
Mobility
|
cs.IT cs.NI math.IT
|
Grossglauser and Tse (2001) introduced a mobile random network model where
each node moves independently on a unit disk according to a stationary uniform
distribution and showed that a throughput of $\Theta(1)$ is achievable. El
Gamal, Mammen, Prabhakar and Shah (2004) showed that the delay associated with
this throughput scales as $\Theta(n\log n)$, when each node moves according to
an independent random walk. In a later work, Diggavi, Grossglauser and Tse
(2002) considered a random network on a sphere with a restricted mobility
model, where each node moves along a randomly chosen great circle on the unit
sphere. They showed that even with this one-dimensional restriction on
mobility, constant throughput scaling is achievable. Thus, this particular
mobility restriction does not affect the throughput scaling. This raises the
question whether this mobility restriction affects the delay scaling.
This paper studies the delay scaling at $\Theta(1)$ throughput for a random
network with restricted mobility. First, a variant of the scheme presented by
Diggavi, Grossglauser and Tse (2002) is presented and it is shown to achieve
$\Theta(1)$ throughput using different (and perhaps simpler) techniques. The
exact order of delay scaling for this scheme is determined, somewhat
surprisingly, to be $\Theta(n\log n)$, which is the same as that without the
mobility restriction. Thus, this particular mobility restriction \emph{does
not} affect either the maximal throughput scaling or the corresponding delay
scaling of the network. This happens because under this 1-D restriction, each
node is in the proximity of every other node in essentially the same manner as
without this restriction.
|
cs/0508075
|
Complexity of Networks
|
cs.IT math.IT
|
Network or graph structures are ubiquitous in the study of complex systems.
Often, we are interested in complexity trends of these systems as they evolve
under some dynamic. An example might be looking at the complexity of a food web
as species enter an ecosystem via migration or speciation, and leave via
extinction.
In this paper, a complexity measure of networks is proposed based on the {\em
complexity is information content} paradigm. To apply this paradigm to any
object, one must fix two things: a representation language, in which strings of
symbols from some alphabet describe, or stand for, the objects being considered;
and a means of determining when two such descriptions refer to the same object.
With these two things set, the information content of an object can be computed
in principle from the number of equivalent descriptions describing a particular
object.
I propose a simple representation language for undirected graphs that can be
encoded as a bitstring, taking equivalence to be topological equivalence. I also
present an algorithm for computing the complexity of an arbitrary undirected
network.
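The counting underlying this paradigm can be illustrated with a brute-force sketch (exponential in the number of nodes, and purely illustrative: the function names and the choice of the upper-triangle adjacency bitstring as the description language are mine, not necessarily the paper's):

```python
from itertools import permutations
from math import log2

def adjacency_bits(adj, perm):
    """Upper-triangle bitstring of the graph after relabeling nodes by perm."""
    n = len(perm)
    return tuple(adj[perm[i]][perm[j]] for i in range(n) for j in range(i + 1, n))

def network_complexity(adj):
    """Complexity = description length minus log2(# equivalent descriptions).

    A graph on n nodes is described by its n(n-1)/2 upper-triangle bits;
    two descriptions are equivalent if some node relabeling maps one onto
    the other, so the number of equivalent bitstrings is n!/|Aut(G)|.
    """
    n = len(adj)
    length = n * (n - 1) // 2                       # bits in one description
    equivalents = {adjacency_bits(adj, p) for p in permutations(range(n))}
    return length - log2(len(equivalents))
```

Highly symmetric graphs have many equivalent descriptions and hence lower complexity: the triangle K3 has a single equivalence class (complexity 3 bits), while the 3-node path has three distinct relabelings (complexity 3 - log2(3) bits).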
|
cs/0508076
|
Myopic Coding in Multiple Relay Channels
|
cs.IT math.IT
|
In this paper, we investigate achievable rates for data transmission from
sources to sinks through multiple relay networks. We consider myopic coding, a
constrained communication strategy in which each node has only a local view of
the network, meaning that nodes can only transmit to and decode from
neighboring nodes. We compare this with omniscient coding, in which every node
has a global view of the network and all nodes can cooperate. Using Gaussian
channels as examples, we find that when the nodes transmit at low power, the
rates achievable with two-hop myopic coding are as large as those under
omniscient coding in a five-node multiple relay channel, and close to those under
omniscient coding in a six-node multiple relay channel. These results suggest
that we may do local coding and cooperation without compromising much on the
transmission rate. Practically, myopic coding schemes are more robust to
topology changes because encoding and decoding at a node are not affected when
there are changes at remote nodes. Furthermore, myopic coding mitigates the
high computational complexity and large buffer/memory requirements of
omniscient coding.
|
cs/0508077
|
Families of unitary matrices achieving full diversity
|
cs.IT math.IT
|
This paper presents an algebraic construction of families of unitary matrices
that achieve full diversity. They are obtained as subsets of cyclic division
algebras.
|
cs/0508083
|
A General Framework for Codes Involving Redundancy Minimization
|
cs.IT cs.DS math.IT
|
A framework with two scalar parameters is introduced for various problems of
finding a prefix code minimizing a coding penalty function. The framework
encompasses problems previously proposed by Huffman, Campbell, Nath, and Drmota
and Szpankowski, shedding light on the relationships among these problems. In
particular, Nath's range of problems can be seen as bridging the minimum
average redundancy problem of Huffman with the minimum maximum pointwise
redundancy problem of Drmota and Szpankowski. Using this framework, two
linear-time Huffman-like algorithms are devised for the minimum maximum
pointwise redundancy problem, the only one in the framework not previously
solved with a Huffman-like algorithm. Both algorithms provide solutions common
to this problem and a subrange of Nath's problems, the second algorithm being
distinguished by its ability to find the minimum variance solution among all
solutions common to the minimum maximum pointwise redundancy and Nath problems.
Simple redundancy bounds are also presented.
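The classic endpoint of this framework, Huffman's minimum average redundancy problem, can be sketched with the standard heap-based algorithm (this is textbook Huffman coding, not the paper's new minimax-redundancy algorithms; the helper names are illustrative):

```python
import heapq

def huffman_lengths(probs):
    """Codeword lengths of an optimal binary prefix code (Huffman's algorithm),
    i.e. the minimum-average-redundancy corner of the two-parameter framework.
    """
    if len(probs) == 1:
        return [1]
    # Heap items: (subtree weight, tiebreak counter, leaf indices in subtree).
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    tiebreak = len(probs)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)
        w2, _, b = heapq.heappop(heap)
        for leaf in a + b:          # every leaf in the merged subtree
            lengths[leaf] += 1      # moves one level deeper in the code tree
        heapq.heappush(heap, (w1 + w2, tiebreak, a + b))
        tiebreak += 1
    return lengths
```

For probabilities (0.4, 0.3, 0.2, 0.1) this yields lengths (1, 2, 3, 3), which satisfy the Kraft inequality with equality.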
|
cs/0508084
|
Source Coding for Quasiarithmetic Penalties
|
cs.IT cs.DS math.IT
|
Huffman coding finds a prefix code that minimizes mean codeword length for a
given probability distribution over a finite number of items. Campbell
generalized the Huffman problem to a family of problems in which the goal is to
minimize not mean codeword length but rather a generalized mean known as a
quasiarithmetic or quasilinear mean. Such generalized means have a number of
diverse applications, including applications in queueing. Several
quasiarithmetic-mean problems have novel simple redundancy bounds in terms of a
generalized entropy. A related property involves the existence of optimal
codes: For ``well-behaved'' cost functions, optimal codes always exist for
(possibly infinite-alphabet) sources having finite generalized entropy. Solving
finite instances of such problems is done by generalizing an algorithm for
finding length-limited binary codes to a new algorithm for finding optimal
binary codes for any quasiarithmetic mean with a convex cost function. This
algorithm can be performed using quadratic time and linear space, and can be
extended to other penalty functions, some of which are solvable with similar
space and time complexity, and others of which are solvable with slightly
greater complexity. This reduces the computational complexity of a problem
involving minimum delay in a queue, allows combinations of previously
considered problems to be optimized, and greatly expands the space of problems
solvable in quadratic time and linear space. The algorithm can be extended for
purposes such as breaking ties among possibly different optimal codes, as with
bottom-merge Huffman coding.
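Campbell's generalized mean can be sketched directly from its definition, $f^{-1}\!\left(\sum_i p_i f(l_i)\right)$ for an increasing cost function $f$ applied to the codeword lengths. The exponential cost below is one standard choice from this family; the function and variable names are illustrative, not from the paper:

```python
from math import log

def quasiarithmetic_mean(probs, lengths, f, f_inv):
    """Campbell's quasiarithmetic mean codeword length: f_inv(sum p_i f(l_i))."""
    return f_inv(sum(p * f(l) for p, l in zip(probs, lengths)))

# Exponential cost with parameter t > 0: (1/t) log2 sum p_i 2^(t l_i).
# As t -> 0 this recovers the ordinary mean codeword length.
t = 1.0
probs = [0.5, 0.25, 0.25]
lengths = [1, 2, 2]
m = quasiarithmetic_mean(probs, lengths,
                         f=lambda l: 2 ** (t * l),
                         f_inv=lambda y: log(y, 2) / t)
```

Here the exponential mean penalizes long codewords more heavily than the ordinary mean does (m = log2(3) ≈ 1.585 versus an ordinary mean of 1.5), which is the property exploited in the queueing application.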
|