| id | title | categories | abstract |
|---|---|---|---|
cs/0601060
|
Robot Swarms in an Uncertain World: Controllable Adaptability
|
cs.RO
|
There is a belief that complexity and chaos are essential for adaptability.
But life copes with complexity at every moment, without the chaos that
engineers so fear, by invoking goal-directed behaviour. Goals can be
programmed; that is why living organisms give us hope of achieving
adaptability in robots. In this paper a method is described for modelling
goal-directed, or programmed, behaviour interacting with an uncertain
environment. We suggest reducing the structural components (goals, intentions)
and the stochastic components (the probability of realising the goal) of
individual behaviour to random variables with nominal values, so that a
probabilistic approach can be applied. This allows us to use a Normalized
Entropy Index to detect the system state by estimating the contribution of
each agent to the group behaviour. The number of possible group states is 27,
and we argue that adaptation has a limited number of possible paths between
these 27 states. Paths and states can be programmed so that, after adjustment
to any particular task and conditions, adaptability never involves chaos. We
suggest applying the model to the operation of robots or other devices in
remote and/or dangerous places.
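The abstract does not spell out the Normalized Entropy Index; a minimal sketch of one plausible formulation, the Shannon entropy of the agents' contribution shares normalized to [0, 1], might look like this (the function name and the exact normalization are assumptions, not the paper's definition):

```python
import math

def normalized_entropy(contributions):
    """Shannon entropy of the agents' contribution shares, divided by
    the maximum entropy log(n) so the index lies in [0, 1]:
    1 = perfectly even contributions, 0 = a single agent dominates."""
    total = sum(contributions)
    shares = [c / total for c in contributions if c > 0]
    h = -sum(p * math.log(p) for p in shares)
    n = len(contributions)
    return h / math.log(n) if n > 1 else 0.0

print(normalized_entropy([1, 1, 1, 1]))   # even group -> 1.0
print(normalized_entropy([10, 1, 1, 1]))  # skewed group -> well below 1
```

An index computed per time window would then let a supervisor map the group onto one of the discrete system states discussed above.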
|
cs/0601061
|
Modular Adaptive System Based on a Multi-Stage Neural Structure for
Recognition of 2D Objects of Discontinuous Production
|
cs.RO
|
We present a new system for invariant recognition of 2D objects with
overlapping classes that cannot be recognized effectively with traditional
methods. A translation-, scale- and partially rotation-invariant contour
description of each object is transformed into a DCT spectrum space. The
obtained frequency spectra are decomposed into frequency bands in order to
feed different BPG neural nets (NNs). The NNs are structured in three stages:
filtering and full rotation invariance; partial recognition; and general
classification. The designed multi-stage BPG neural structure shows very good
accuracy and flexibility when tested on 2D objects used in discontinuous
production. Its speed, and the opportunity for easy restructuring and
reprogramming, make the system suitable for application in a variety of
real-time systems.
|
cs/0601062
|
Study of Self-Organization Model of Multiple Mobile Robot
|
cs.RO
|
A good organization model for multiple mobile robots should improve the
efficiency of the system, reduce the complexity of robot interactions, and
lower the computational burden. From the sociological perspectives of
topology, structure and organization, this paper studies how a
multiple-mobile-robot organization forms and operates in a dynamic,
complicated and unknown environment. It presents and describes in detail a
Hierarchical-Web Recursive Organization Model (HWROM) and its forming
algorithm. The model defines the robot society leader, robotic team leaders
and individual robots within the same unified framework, and describes the
organization with a recursive structure. It uses a task-oriented, top-down
method to dynamically build and maintain structures and organizations, and
market-based techniques to assign tasks, form teams and allocate resources in
a dynamic environment. The model exhibits several desirable characteristics:
self-organization, dynamism, conciseness, commonness and robustness.
|
cs/0601063
|
Optimal Point-to-Point Trajectory Tracking of Redundant Manipulators
using Generalized Pattern Search
|
cs.RO
|
Optimal point-to-point trajectory planning for a planar redundant manipulator
is considered in this study. The main objective is to minimize the sum of the
position errors of the end-effector at each intermediate point along the
trajectory, so that the end-effector tracks the prescribed trajectory
accurately. An algorithm combining a Genetic Algorithm with Pattern Search,
termed Generalized Pattern Search (GPS), is introduced to design the optimal
trajectory. To verify the proposed algorithm, simulations for a 3-DOF planar
manipulator with different end-effector trajectories have been carried out. A
comparison between the Genetic Algorithm and the Generalized Pattern Search
shows that the GPS gives excellent tracking performance.
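The pattern-search component of such a scheme can be sketched as follows. Everything here is an illustrative assumption, not the authors' GA + Pattern Search hybrid: a plain coordinate pattern search, applied to a toy two-link (rather than 3-DOF) arm whose objective is the squared distance of the end-effector from a single target point:

```python
import math

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Minimal coordinate pattern search: poll +/- step along each axis,
    move to any improving point; if no poll improves, halve the step."""
    x = list(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x[:]
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= 0.5
            if step < tol:
                break
    return x, fx

def tracking_error(q, target=(1.2, 0.8), l1=1.0, l2=1.0):
    """Hypothetical stand-in for the tracking objective: squared distance
    of a 2-link arm's end-effector from a target point."""
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return (x - target[0]) ** 2 + (y - target[1]) ** 2

q, err = pattern_search(tracking_error, [0.0, 0.0])
print(err)  # small residual error
```

In the paper's setting the objective would instead sum such errors over all intermediate points of the prescribed trajectory.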
|
cs/0601064
|
Robotics Vision-based Heuristic Reasoning for Underwater Target Tracking
and Navigation
|
cs.RO
|
This paper presents a robotics vision-based heuristic reasoning system for
underwater target tracking and navigation. This system is introduced to improve
the level of automation of underwater Remote Operated Vehicles (ROVs)
operations. A prototype which combines computer vision with an underwater
robotics system is successfully designed and developed to perform target
tracking and intelligent navigation. ...
|
cs/0601065
|
New Intelligent Transmission Concept for Hybrid Mobile Robot Speed
Control
|
cs.RO
|
This paper presents a new concept for mobile robot speed control using a
two-degree-of-freedom gear transmission. The developed intelligent speed
controller utilizes a gearbox comprising an epicyclic gear train with two
inputs, one coupled to the engine shaft and the other to the shaft of a
variable-speed DC motor. The net output speed is a combination of the two
input speeds and is governed by the transmission ratio of the planetary gear
train. This new approach eliminates the torque converter, otherwise an
indispensable part of all available automatic transmissions, thereby avoiding
the power lost in the fluid coupling. By gradually varying the speed of the DC
motor, a stepless transmission is achieved. The controller also implements
pulling away and reversing the vehicle by intelligently mixing the DC motor
and engine speeds, which eliminates the traditional braking system from the
vehicle design. The use of two power sources, an IC engine and a
battery-driven DC motor, follows the modern idea of hybrid vehicles. The new
mobile robot speed controller can drive the vehicle even in the extreme case
of IC engine failure, for example due to gas depletion.
|
cs/0601066
|
On the Existence of Universally Decodable Matrices
|
cs.IT cs.DM math.IT
|
Universally decodable matrices (UDMs) can be used for coding purposes when
transmitting over slow fading channels. These matrices are parameterized by
positive integers $L$ and $N$ and a prime power $q$. The main result of this
paper is that the simple condition $L \leq q+1$ is both necessary and
sufficient for $(L,N,q)$-UDMs to exist. The existence proof is constructive and
yields a coding scheme that is equivalent to a class of codes that was proposed
by Rosenbloom and Tsfasman. Our work resolves an open problem posed recently in
the literature.
|
cs/0601067
|
Design of Rate-Compatible Serially Concatenated Convolutional Codes
|
cs.IT math.IT
|
Recently a powerful class of rate-compatible serially concatenated
convolutional codes (SCCCs) has been proposed, based on minimizing analytical
upper bounds on the error probability in the error-floor region. Here this
class of codes is investigated further by combining analytical upper bounds
with extrinsic information transfer (EXIT) chart analysis. Following this
approach, we construct a family of rate-compatible SCCCs with good performance
in both the error-floor and waterfall regions over a broad range of code
rates.
|
cs/0601070
|
Instanton analysis of Low-Density-Parity-Check codes in the error-floor
regime
|
cs.IT cond-mat.dis-nn math.IT
|
In this paper we develop the instanton method introduced in [1], [2], [3] to
quantitatively analyze the performance of Low-Density Parity-Check (LDPC)
codes decoded iteratively in the so-called error-floor regime. We discuss
statistical properties of the numerical instanton-amoeba scheme, focusing on a
detailed analysis and comparison of two regular LDPC codes: Tanner's (155, 64,
20) and Margulis' (672, 336, 16) codes. In the regime of moderate
signal-to-noise ratios we critically compare the results of the
instanton-amoeba evaluations against standard Monte-Carlo calculations of the
frame error rate.
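The Monte-Carlo baseline referred to above can be illustrated on a toy code. This sketch estimates the frame error rate of a length-3 repetition code over a binary symmetric channel, a deliberately simplified stand-in for the paper's LDPC-over-AWGN setting; the function name and parameters are assumptions:

```python
import random

def monte_carlo_fer(p, trials=100_000, seed=1):
    """Monte-Carlo frame-error-rate estimate for a length-3 repetition
    code over a BSC with crossover probability p: a frame is lost when
    majority decoding fails, i.e. when 2 or 3 of the bits are flipped."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:
            errors += 1
    return errors / trials

# Analytic value for comparison: 3 p^2 (1-p) + p^3 = 0.028 at p = 0.1.
print(monte_carlo_fer(0.1))
```

The paper's point is precisely that at low error rates such brute-force sampling becomes prohibitively slow, which is what the instanton-amoeba scheme addresses.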
|
cs/0601072
|
Fast Frequent Querying with Lazy Control Flow Compilation
|
cs.PL cs.AI cs.SE
|
Control flow compilation is a hybrid between classical WAM compilation and
meta-call, limited to the compilation of non-recursive clause bodies. This
approach is used successfully for the execution of dynamically generated
queries in an inductive logic programming (ILP) setting. Control flow
compilation reduces compilation times up to an order of magnitude, without
slowing down execution. A lazy variant of control flow compilation is also
presented. By compiling code by need, it removes the overhead of compiling
unreached code (a frequent phenomenon in practical ILP settings), and thus
reduces the size of the compiled code. Both dynamic compilation approaches have
been implemented and were combined with query packs, an efficient ILP execution
mechanism. It turns out that locality of data and code is important for
performance. The experiments reported in the paper show that lazy control flow
compilation is superior in both artificial and real life settings.
|
cs/0601073
|
A Theory of Routing for Large-Scale Wireless Ad-Hoc Networks
|
cs.IT cs.NI math.IT
|
In this work we develop a new theory to analyse the process of routing in
large-scale wireless ad-hoc networks. We use a path-integral formulation to
examine the properties of the paths generated by different routing strategies
in these kinds of networks. Using this theoretical framework, we calculate the
statistical distribution of the distances between any source and any
destination in the network, and hence deduce a length parameter that is unique
to each routing strategy. This parameter, defined as the {\it effective
radius}, effectively encodes the routing information required by a node.
Analysing this statistical distribution for different routing strategies, we
obtain a threefold result for practical large-scale wireless ad-hoc networks:
1) we obtain the distribution of the lengths of all the paths in a network for
any given routing strategy; 2) we are able to identify "good" routing
strategies from the evolution of their effective radius as the number of
nodes, $N$, increases to infinity; 3) for any routing strategy with finite
effective radius, we demonstrate that, in a large-scale network, it is
equivalent to a random routing strategy and that its transport capacity scales
as $\Theta(\sqrt{N})$ bit-meters per second, thus recovering the scaling law
that Gupta and Kumar (2000) obtained as the limit for single-route large-scale
wireless networks.
|
cs/0601074
|
Joint universal lossy coding and identification of i.i.d. vector sources
|
cs.IT cs.LG math.IT
|
The problem of joint universal source coding and modeling, addressed by
Rissanen in the context of lossless codes, is generalized to fixed-rate lossy
coding of continuous-alphabet memoryless sources. We show that, for bounded
distortion measures, any compactly parametrized family of i.i.d. real vector
sources with absolutely continuous marginals (satisfying appropriate smoothness
and Vapnik--Chervonenkis learnability conditions) admits a joint scheme for
universal lossy block coding and parameter estimation, and we give nonasymptotic
estimates of convergence rates for distortion redundancies and variational
distances between the active source and the estimated source. We also present
explicit examples of parametric sources admitting such joint universal
compression and modeling schemes.
|
cs/0601075
|
On Universally Decodable Matrices for Space-Time Coding
|
cs.IT cs.DM math.IT
|
The notion of universally decodable matrices (UDMs) was recently introduced
by Tavildar and Viswanath while studying slow fading channels. It turns out
that the problem of constructing UDMs is tightly connected to the problem of
constructing maximum distance separable (MDS) codes. In this paper, we first
study the properties of UDMs in general and then we discuss an explicit
construction of a class of UDMs, a construction which can be seen as an
extension of Reed-Solomon codes. In fact, we show that this extension is, in a
sense to be made more precise later on, unique. Moreover, the structure of this
class of UDMs allows us to answer some open conjectures by Tavildar, Viswanath,
and Doshi in the positive, and it also allows us to formulate an efficient
decoding algorithm for this class of UDMs. It turns out that our construction
yields a coding scheme that is essentially equivalent to a class of codes that
was proposed by Rosenbloom and Tsfasman. Moreover, we point out connections to
so-called repeated-root cyclic codes.
|
cs/0601077
|
IDBE - An Intelligent Dictionary Based Encoding Algorithm for Text Data
Compression for High Speed Data Transmission Over Internet
|
cs.IT math.IT
|
Compression algorithms reduce the redundancy in data representation to
decrease the storage required for that data. Data compression offers an
attractive approach to reducing communication costs by using available
bandwidth effectively. Over the last decade there has been an unprecedented
explosion in the amount of digital data transmitted via the Internet,
representing text, images, video, sound, computer programs, etc. With this
trend expected to continue, it makes sense to pursue research on developing
algorithms that can most effectively use available network bandwidth by
maximally compressing data. This research paper is focused on addressing this
problem of lossless compression of text files. Lossless compression researchers
have developed highly sophisticated approaches, such as Huffman encoding,
arithmetic encoding, the Lempel-Ziv family, Dynamic Markov Compression (DMC),
Prediction by Partial Matching (PPM), and Burrows-Wheeler Transform (BWT) based
algorithms. However, none of these methods has been able to reach the
theoretical best-case compression ratio consistently, which suggests that
better algorithms may be possible. One approach for trying to attain better
compression ratios is to develop new compression algorithms. An alternative
approach, however, is to develop intelligent, reversible transformations that
can be applied to a source text that improve an existing, or backend,
algorithm's ability to compress. The latter strategy has been explored here.
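The transform-then-compress strategy can be sketched with a toy reversible word-dictionary substitution in front of a standard backend compressor (zlib here). The dictionary, the escape bytes, and the assumption that the source text contains no code bytes are all illustrative simplifications, not the IDBE algorithm itself:

```python
import zlib

# Toy dictionary: frequent words are replaced by 1-byte codes before the
# backend compressor runs, and restored afterwards. Assumes the source
# text never contains the code bytes themselves.
DICT = {"the": "\x01", "and": "\x02", "compression": "\x03"}
INV = {v: k for k, v in DICT.items()}

def transform(text):
    for word, code in DICT.items():
        text = text.replace(word, code)
    return text

def untransform(text):
    for code, word in INV.items():
        text = text.replace(code, word)
    return text

original = "the cost of compression and the gain of compression"
packed = zlib.compress(transform(original).encode())
restored = untransform(zlib.decompress(packed).decode())
assert restored == original  # the pipeline is lossless
print(len(packed), "bytes after transform + zlib")
```

A real intelligent dictionary encoder would build the dictionary from corpus statistics and use a proper escaping scheme so that arbitrary input remains reversible.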
|
cs/0601080
|
On Measure Theoretic definitions of Generalized Information Measures and
Maximum Entropy Prescriptions
|
cs.IT math.IT
|
Though Shannon entropy of a probability measure $P$, defined as $- \int_{X}
\frac{\ud P}{\ud \mu} \ln \frac{\ud P}{\ud\mu} \ud \mu$ on a measure space $(X,
\mathfrak{M},\mu)$, does not qualify itself as an information measure (it is
not a natural extension of the discrete case), maximum entropy (ME)
prescriptions in the measure-theoretic case are consistent with that of
discrete case. In this paper, we study the measure-theoretic definitions of
generalized information measures and discuss the ME prescriptions. We present
two results in this regard: (i) we prove that, as in the case of classical
relative-entropy, the measure-theoretic definitions of generalized
relative-entropies, R\'{e}nyi and Tsallis, are natural extensions of their
respective discrete cases, (ii) we show that, ME prescriptions of
measure-theoretic Tsallis entropy are consistent with the discrete case.
|
cs/0601081
|
An O(1) Solution to the Prefix Sum Problem on a Specialized Memory
Architecture
|
cs.DS cs.CC cs.IR
|
In this paper we study the Prefix Sum problem introduced by Fredman.
We show that it is possible to perform both update and retrieval in O(1) time
simultaneously under a memory model in which individual bits may be shared by
several words.
We also show that two variants (generalizations) of the problem can be solved
optimally in $\Theta(\lg N)$ time under the comparison based model of
computation.
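For contrast with the O(1) scheme above, which requires the specialized bit-sharing memory model, the classic Θ(lg N) solution to Fredman's problem under a standard memory model is a binary indexed (Fenwick) tree; a minimal sketch:

```python
class FenwickTree:
    """Classic Theta(lg N) prefix-sum structure: update(i, delta) adds
    delta to element i; retrieve(i) returns the sum of elements 0..i.
    (The O(1) scheme in the paper is not reproducible without its
    bit-sharing memory model and is not shown here.)"""
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def update(self, i, delta):
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & (-i)      # jump to the next node covering index i

    def retrieve(self, i):
        i += 1
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & (-i)      # strip the lowest set bit
        return s

ft = FenwickTree(8)
for idx, v in enumerate([3, 1, 4, 1, 5, 9, 2, 6]):
    ft.update(idx, v)
print(ft.retrieve(3))  # 3 + 1 + 4 + 1 = 9
```

Both operations touch O(lg N) words, matching the lower bound for the comparison-based variants mentioned above.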
|
cs/0601083
|
Multilevel Coding for Channels with Non-uniform Inputs and Rateless
Transmission over the BSC
|
cs.IT math.IT
|
We consider coding schemes for channels with non-uniform inputs (NUI), where
standard linear block codes cannot be applied directly. We show that
multilevel coding (MLC) with a set of linear codes and a deterministic mapper
can achieve the information rate of the channel with NUI. The mapper, however,
does not have to be one-to-one. As an application of the proposed MLC scheme,
we present a rateless transmission scheme over the binary symmetric channel
(BSC).
|
cs/0601087
|
Processing of Test Matrices with Guessing Correction
|
cs.LG
|
It is suggested to fill the test matrix with 1s for correct responses, 0s for
response refusals, and negative corrective elements for incorrect responses.
In the classical test theory approach, the test scores of examinees and items
are then calculated traditionally, as sums of the matrix elements organized in
rows and columns, and correlation coefficients are estimated using correction
coefficients. In the item response theory approach, examinee and item logits
are estimated by the maximum likelihood method using the probabilities of all
matrix elements.
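The scoring scheme can be illustrated with the classical correction-for-guessing element, -1/(k-1) for a k-option item; the option count and the exact corrective value are assumptions here, as the paper may use other corrective elements:

```python
# The matrix holds 1 for a correct response, 0 for a refusal, and the
# corrective element -1/(k-1) for an incorrect response on a k-option
# item, so an examinee's score is simply a row sum.
K = 4                          # options per item (assumed)
CORRECTIVE = -1.0 / (K - 1)

def encode(response):
    """'c' = correct, 'i' = incorrect, 'r' = refusal."""
    return {"c": 1.0, "i": CORRECTIVE, "r": 0.0}[response]

responses = ["c", "c", "i", "r", "c", "i"]   # one examinee's row
row = [encode(r) for r in responses]
score = sum(row)
print(score)  # 3 correct minus 2 * 1/3 penalty = 2.333...
```

With this encoding a pure guesser's expected score on k-option items is zero, which is the motivation for the corrective elements.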
|
cs/0601088
|
An Algorithm for Constructing All Families of Codes of Arbitrary
Requirement in an OCDMA System
|
cs.IT math.IT
|
A novel code construction algorithm is presented to find all the possible
code families for code reconfiguration in an OCDMA system. The algorithm is
developed through searching all the complete subgraphs of a constructed graph.
The proposed algorithm is flexible and practical for constructing optical
orthogonal codes (OOCs) of arbitrary requirement. Simulation results show that
one should choose an appropriate code length in order to obtain a sufficient
number of code families for code reconfiguration at reasonable cost.
|
cs/0601089
|
Distributed Kernel Regression: An Algorithm for Training Collaboratively
|
cs.LG cs.AI cs.DC cs.IT math.IT
|
This paper addresses the problem of distributed learning under communication
constraints, motivated by distributed signal processing in wireless sensor
networks and data mining with distributed databases. After formalizing a
general model for distributed learning, an algorithm for collaboratively
training regularized kernel least-squares regression estimators is derived.
Noting that the algorithm can be viewed as an application of successive
orthogonal projection algorithms, its convergence properties are investigated
and the statistical behavior of the estimator is discussed in a simplified
theoretical setting.
|
cs/0601090
|
Improved Nearly-MDS Expander Codes
|
cs.IT math.IT
|
A construction of expander codes is presented with the following three
properties:
(i) the codes lie close to the Singleton bound, (ii) they can be encoded in
time complexity that is linear in their code length, and (iii) they have a
linear-time bounded-distance decoder.
By using a version of the decoder that corrects also erasures, the codes can
replace MDS outer codes in concatenated constructions, thus resulting in
linear-time encodable and decodable codes that approach the Zyablov bound or
the capacity of memoryless channels. The presented construction improves on an
earlier result by Guruswami and Indyk in that any rate and relative minimum
distance that lies below the Singleton bound is attainable for a significantly
smaller alphabet size.
|
cs/0601091
|
Communication Over MIMO Broadcast Channels Using Lattice-Basis Reduction
|
cs.IT math.IT
|
A simple scheme for communication over MIMO broadcast channels is introduced
which adopts the lattice reduction technique to improve the naive channel
inversion method. Lattice basis reduction helps us to reduce the average
transmitted energy by modifying the region which includes the constellation
points. Simulation results show that the proposed scheme performs well, and as
compared to the more complex methods (such as the perturbation method) has a
negligible loss. Moreover, the proposed method is extended to the case of
different rates for different users. The asymptotic behavior of the symbol
error rate of the proposed method and the perturbation technique, and also the
outage probability for the case of fixed-rate users is analyzed. It is shown
that the proposed method, based on LLL lattice reduction, achieves the optimum
asymptotic slope of symbol-error-rate (called the precoding diversity). Also,
the outage probability for the case of fixed sum-rate is analyzed.
|
cs/0601092
|
LLL Reduction Achieves the Receive Diversity in MIMO Decoding
|
cs.IT math.IT
|
Diversity order is an important measure for the performance of communication
systems over MIMO fading channels. In this paper, we prove that in MIMO
multiple access systems (or MIMO point-to-point systems with V-BLAST
transmission), lattice-reduction-aided decoding achieves the maximum receive
diversity (which is equal to the number of receive antennas). Also, we prove
that the naive lattice decoding (which discards the out-of-region decoded
points) achieves the maximum diversity.
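In two dimensions, LLL reduction coincides with Lagrange's (Gauss's) algorithm, which makes the idea behind lattice-reduction-aided decoding easy to see in a minimal sketch; the example basis is illustrative:

```python
def lagrange_reduce(b1, b2):
    """Lagrange (Gauss) reduction: the 2-D special case of LLL.
    Returns a basis of the same lattice whose vectors are as short
    as possible, hence nearly orthogonal."""
    def norm2(v):
        return v[0] * v[0] + v[1] * v[1]

    if norm2(b1) > norm2(b2):
        b1, b2 = b2, b1
    while True:
        # Subtract from b2 the nearest integer multiple of b1.
        mu = round((b1[0] * b2[0] + b1[1] * b2[1]) / norm2(b1))
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])
        if norm2(b2) >= norm2(b1):
            return b1, b2
        b1, b2 = b2, b1

# A badly skewed basis of Z^2; reduction recovers orthogonal vectors,
# which is what makes naive decoding on the reduced basis robust.
r1, r2 = lagrange_reduce((1, 0), (105, 1))
print(r1, r2)  # (1, 0) (0, 1)
```

Decoding in the reduced basis and mapping back to the original one is the essence of lattice-reduction-aided decoding; the diversity claims above concern the full n-dimensional LLL case.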
|
cs/0601093
|
Stability of Scheduled Multi-access Communication over Quasi-static Flat
Fading Channels with Random Coding and Joint Maximum Likelihood Decoding
|
cs.IT math.IT
|
We consider the stability of scheduled multiaccess message communication with
random coding and joint maximum-likelihood decoding of messages. The framework
we consider here models both the random message arrivals and the subsequent
reliable communication by suitably combining techniques from queueing theory
and information theory. The number of messages that may be scheduled for
simultaneous transmission is limited to a given maximum value, and the
channels from the transmitters to the receiver are quasi-static, flat, and
have independent fades. Requests for message transmissions are assumed to
arrive according to an i.i.d. arrival process. Then, (i) we derive an outer
bound to the region of message arrival rate vectors achievable by the class of
stationary scheduling policies, (ii) we show, for any message arrival rate
vector that satisfies the outer bound, that there exists a stationary
state-independent policy that results in a stable system for the
corresponding message arrival process, and (iii) in the limit of large message
lengths, we show that the stability region of message nat arrival rate vectors
has an information-theoretic capacity region interpretation.
|
cs/0601094
|
Stability of Scheduled Message Communication over Degraded Broadcast
Channels
|
cs.IT math.IT
|
We consider scheduled message communication over a discrete memoryless
degraded broadcast channel. The framework we consider here models both the
random message arrivals and the subsequent reliable communication by suitably
combining techniques from queueing theory and information theory. The channel
from the transmitter to each of the receivers is quasi-static, flat, and with
independent fades across the receivers. Requests for message transmissions are
assumed to arrive according to an i.i.d. arrival process. Then, (i) we derive
an outer bound to the region of message arrival vectors achievable by the class
of stationary scheduling policies, (ii) we show, for any message arrival
vector that satisfies the outer bound, that there exists a stationary
``state-independent'' policy that results in a stable system for the
corresponding message arrival process, and (iii) under two asymptotic regimes,
we show that the stability region of nat arrival rate vectors has an
information-theoretic capacity region interpretation.
|
cs/0601095
|
On the Weight Enumerator and the Maximum Likelihood Performance of
Linear Product Codes
|
cs.IT math.IT
|
Product codes are widely used in data-storage, optical and wireless
applications. Their analytical performance evaluation usually relies on the
truncated union bound, which provides a low error rate approximation based on
the minimum distance term only. In fact, the complete weight enumerator of most
product codes remains unknown. In this paper, concatenated representations are
introduced and applied to compute the complete average enumerators of arbitrary
product codes over a field $F_q$. The split weight enumerators of some
important constituent codes (Hamming, Reed-Solomon) are studied and used in
the analysis. The average binary weight enumerators of Reed-Solomon product
codes are also
derived. Numerical results showing the enumerator behavior are presented. By
using the complete enumerators, Poltyrev bounds on the maximum likelihood
performance, holding at both high and low error rates, are finally shown and
compared against truncated union bounds and simulation results.
|
cs/0601098
|
Energy Efficiency and Delay Quality-of-Service in Wireless Networks
|
cs.IT math.IT
|
The energy-delay tradeoffs in wireless networks are studied using a
game-theoretic framework. A multi-class multiple-access network is considered
in which users choose their transmit powers, and possibly transmission rates,
in a distributed manner to maximize their own utilities while satisfying their
delay quality-of-service (QoS) requirements. The utility function considered
here measures the number of reliable bits transmitted per Joule of energy
consumed and is particularly useful for energy-constrained networks. The Nash
equilibrium solution for the proposed non-cooperative game is presented and
closed-form expressions for the users' utilities at equilibrium are obtained.
Based on this, the losses in energy efficiency and network capacity due to the
presence of delay-sensitive users are quantified. The analysis is extended to
the scenario where the QoS requirements include both the average source rate
and a bound on the average total delay (including queuing delay). It is shown
that the incoming traffic rate and the delay constraint of a user translate
into a "size" for the user, which is an indication of the amount of resources
consumed by the user. Using this framework, the tradeoffs among throughput,
delay, network capacity and energy efficiency are also quantified.
|
cs/0601099
|
Adaptive Linear Programming Decoding
|
cs.IT math.IT
|
Detectability of failures of linear programming (LP) decoding and its
potential for improvement by adding new constraints motivate the use of an
adaptive approach in selecting the constraints for the LP problem. In this
paper, we make a first step in studying this method, and show that it can
significantly reduce the complexity of the problem, which was originally
exponential in the maximum check-node degree. We further show that adaptively
adding new constraints, e.g. by combining parity checks, can provide large
gains in performance.
|
cs/0601102
|
Geometric symmetry in the quadratic Fisher discriminant operating on
image pixels
|
cs.IT cs.CV math.IT
|
This article examines the design of Quadratic Fisher Discriminants (QFDs)
that operate directly on image pixels, when image ensembles are taken to
comprise all rotated and reflected versions of distinct sample images. A
procedure based on group theory is devised to identify and discard QFD
coefficients made redundant by symmetry, for arbitrary sampling lattices. This
procedure introduces the concept of a degeneracy matrix. Tensor representations
are established for the square lattice point group (8-fold symmetry) and
hexagonal lattice point group (12-fold symmetry). The analysis is largely
applicable to the symmetrisation of any quadratic filter, and generalises to
higher order polynomial (Volterra) filters. Experiments on square lattice
sampled synthetic aperture radar (SAR) imagery verify that symmetrisation of
QFDs can improve their generalisation and discrimination ability.
|
cs/0601103
|
Google Web APIs - an Instrument for Webometric Analyses?
|
cs.IR
|
This paper introduces Google Web APIs (Google APIs) as an instrument and
playground for webometric studies. Several examples of Google APIs
implementations are given. Our examples show that this Google Web Service can
be used successfully for informetric Internet-based studies, albeit with some
restrictions.
|
cs/0601105
|
The Perceptron Algorithm: Image and Signal Decomposition, Compression,
and Analysis by Iterative Gaussian Blurring
|
cs.CV
|
A novel algorithm for tunable compression, to within the precision of
reproduction targets or storage, is proposed. The new algorithm, termed the
`Perceptron Algorithm', utilises simple existing concepts in a novel way; it
has multiple immediate commercial applications and opens up a multitude of
fronts in computational science and technology. The aims of this paper are to
present the concepts underlying the algorithm, observations from its
application to some example cases, and a multitude of potential application
areas, such as: image compression by orders of magnitude, signal compression
(including sound), multilayered detailed image analysis, pattern recognition
and matching and rapid database searching (e.g. face recognition), motion
analysis, and biomedical applications, e.g. MRI and CAT scan image analysis
and compression. Hints are also given on how these ideas may relate to the way
biological memory works, leading to new points of view in neural computation.
Commercial applications of immediate interest include compression of images at
the source (e.g. photographic equipment, scanners, satellite imaging systems),
DVD film compression, pay-per-view download acceleration, and many others
identified in the conclusion and future-work section of the paper.
|
cs/0601106
|
The `Face on Mars': a photographic approach for the search of signs of
past civilizations from a macroscopic point of view, factoring long-term
erosion in image reconstruction
|
cs.CV
|
This short article presents an alternative view of high resolution imaging
from various sources with the aim of the discovery of potential sites of
archaeological importance, or sites that exhibit `anomalies' such that they may
merit closer inspection and analysis. It is conjectured, and to a certain
extent demonstrated here, that it is possible for advanced civilizations to
factor in erosion by natural processes into a large scale design so that main
features be preserved even with the passage of millions of years. Alternatively
viewed, even without such intent embedded in a design left for posterity, it is
possible that a gigantic construction may naturally decay in such a way that
even cataclysmic (massive) events may leave sufficient information intact with
the passage of time, provided one changes the point of view from high
resolution images to enhanced blurred renderings of the sites in question.
|
cs/0601107
|
Structure of Optimal Input Covariance Matrices for MIMO Systems with
Covariance Feedback under General Correlated Fading
|
cs.IT math.IT
|
We describe the structure of optimal input covariance matrices for a
single-user multiple-input/multiple-output (MIMO) communication system with
covariance feedback and for general correlated fading. Our approach is based
on the novel
concept of right commutant and recovers previously derived results for the
Kronecker product models. Conditions are derived which allow a significant
simplification of the optimization problem.
|
cs/0601108
|
Fast Lexically Constrained Viterbi Algorithm (FLCVA): Simultaneous
Optimization of Speed and Memory
|
cs.CV cs.AI cs.DS
|
Lexical constraints on the input of speech and on-line handwriting systems
improve the performance of such systems. A significant gain in speed can be
achieved by integrating in a digraph structure the different Hidden Markov
Models (HMM) corresponding to the words of the relevant lexicon. This
integration avoids redundant computations by sharing intermediate results
between HMM's corresponding to different words of the lexicon. In this paper,
we introduce a token passing method to perform simultaneously the computation
of the a posteriori probabilities of all the words of the lexicon. The coding
scheme that we introduce for the tokens is optimal in the information theory
sense. The tokens use the minimum possible number of bits. Overall, we optimize
simultaneously the execution speed and the memory requirement of the
recognition systems.
|
cs/0601109
|
Certainty Closure: Reliable Constraint Reasoning with Incomplete or
Erroneous Data
|
cs.AI
|
Constraint Programming (CP) has proved an effective paradigm to model and
solve difficult combinatorial satisfaction and optimisation problems from
disparate domains. Many such problems arising from the commercial world are
permeated by data uncertainty. Existing CP approaches that accommodate
uncertainty are less suited to uncertainty arising due to incomplete and
erroneous data, because they do not build reliable models and solutions
guaranteed to address the user's genuine problem as she perceives it. Other
fields such as reliable computation offer combinations of models and associated
methods to handle these types of uncertain data, but lack an expressive
framework characterising the resolution methodology independently of the model.
We present a unifying framework that extends the CP formalism in both model
and solutions, to tackle ill-defined combinatorial problems with incomplete or
erroneous data. The certainty closure framework brings together modelling and
solving methodologies from different fields into the CP paradigm to provide
reliable and efficient approaches for uncertain constraint problems. We
demonstrate the applicability of the framework on a case study in network
diagnosis. We define resolution forms that give generic templates, and their
associated operational semantics, to derive practical solution methods for
reliable solutions.
|
cs/0601110
|
Mutual Information Games in Multi-user Channels with Correlated Jamming
|
cs.IT math.IT
|
We investigate the behavior of two users and one jammer in an AWGN channel
with and without fading when they participate in a non-cooperative zero-sum
game, with the channel's input/output mutual information as the objective
function. We assume that the jammer can eavesdrop on the channel and can use the
information obtained to perform correlated jamming. Under various assumptions
on the channel characteristics, and the extent of information available at the
users and the jammer, we show the existence, or otherwise the non-existence, of
a simultaneously optimal set of strategies for the users and the jammer. In all
the cases where the channel is non-fading, we show that the game has a
solution, and the optimal strategies are Gaussian signalling for the users and
linear jamming for the jammer. In fading channels, we envision each player's
strategy as a power allocation function over the channel states, together with
the signalling strategies at each channel state. We reduce the game solution to
a set of power allocation functions for the players and show that when the
jammer is uncorrelated, the game has a solution, but when the jammer is
correlated, a set of simultaneously optimal power allocation functions for the
users and the jammer does not always exist. In this case, we characterize the
max-min user power allocation strategies and the corresponding jammer power
allocation strategy.
|
cs/0601113
|
An Efficient Pseudo-Codeword Search Algorithm for Linear Programming
Decoding of LDPC Codes
|
cs.IT cond-mat.dis-nn math.IT
|
In Linear Programming (LP) decoding of a Low-Density-Parity-Check (LDPC) code
one minimizes a linear functional, with coefficients related to log-likelihood
ratios, over a relaxation of the polytope spanned by the codewords
\cite{03FWK}. In order to quantify LP decoding, and thus to describe
performance of the error-correction scheme at moderate and large
Signal-to-Noise-Ratios (SNR), it is important to study the relaxed polytope to
better understand its vertices, the so-called pseudo-codewords, especially those
which are neighbors of the zero codeword. In this manuscript we propose a
technique to heuristically create a list of these neighbors and their
distances. Our pseudo-codeword-search algorithm starts by randomly choosing the
initial configuration of the noise. The configuration is modified through a
discrete number of steps. Each step consists of two sub-steps. Firstly, one
applies an LP decoder to the noise-configuration deriving a pseudo-codeword.
Secondly, one finds a noise configuration equidistant from the pseudo-codeword
and the zero codeword. The resulting noise configuration is used as an
entry for the next step. The iterations converge rapidly to a pseudo-codeword
neighboring the zero codeword. Repeated many times, this procedure is
characterized by the distribution function (frequency spectrum) of the
pseudo-codeword effective distance. The effective distance of the coding scheme
is approximated by the shortest distance pseudo-codeword in the spectrum. The
efficiency of the procedure is demonstrated on examples of the Tanner
$[155,64,20]$ code and Margulis $p=7$ and $p=11$ codes (672 and 2640 bits long
respectively) operating over an Additive-White-Gaussian-Noise (AWGN) channel.
|
cs/0601114
|
Efficient Query Answering over Conceptual Schemas of Relational
Databases : Technical Report
|
cs.DB cs.LO
|
We develop a query answering system, where at the core of the work there is
an idea of query answering by rewriting. For this purpose we extend the DL
DL-Lite with the ability to support n-ary relations, obtaining the DL DLR-Lite,
which is still polynomial in the size of the data. We devise a flexible way of
mapping the conceptual level to the relational level, which provides users with
an SQL-like query language over the conceptual schema. The rewriting technique
adds value to conventional query answering techniques, allowing users to
formulate simpler queries and to infer additional information that was not
stated explicitly in the user query. The formalization of the conceptual schema
and the developed reasoning technique allow checking for consistency between
the database and the conceptual schema, thus improving the trustworthiness of
the information system.
|
cs/0601115
|
Decision Making with Side Information and Unbounded Loss Functions
|
cs.LG cs.IT math.IT
|
We consider the problem of decision-making with side information and
unbounded loss functions. Inspired by the probably approximately correct (PAC)
learning model, we use a slightly different model that incorporates the notion of side
information in a more generic form to make it applicable to a broader class of
applications including parameter estimation and system identification. We
address sufficient conditions for consistent decision-making with exponential
convergence behavior. In this regard, besides a certain condition on the growth
function of the class of loss functions, it suffices that the class of loss
functions be dominated by a measurable function whose exponential Orlicz
expectation is uniformly bounded over the probabilistic model. The decay
exponent, decay constant, and sample complexity are discussed. Example
applications to the method of moments, maximum likelihood estimation, and
system identification are illustrated as well.
|
cs/0601120
|
On The Minimum Mean-Square Estimation Error of the Normalized Sum of
Independent Narrowband Waves in the Gaussian Channel
|
cs.IT math.IT
|
The minimum mean-square error of estimating a signal observed from the output
of an additive white Gaussian noise (WGN) channel is analyzed. It is assumed
that the channel input signal is composed of a (normalized) sum of N
narrowband, mutually independent waves. It is shown that as N goes to infinity,
for any fixed signal-energy-to-noise-energy ratio (no matter how large), both
the causal minimum mean-square error (CMMSE) and the non-causal minimum
mean-square error (MMSE) converge to the signal energy at a rate proportional
to 1/N.
|
cs/0601121
|
A Multi-Relational Network to Support the Scholarly Communication
Process
|
cs.DL cs.AI cs.IR
|
The general purpose of the scholarly communication process is to support the
creation and dissemination of ideas within the scientific community. At a finer
granularity, there exist multiple stages which, when confronted by a member of
the community, have different requirements and therefore different solutions.
In order to take a researcher's idea from an initial inspiration to a community
resource, the scholarly communication infrastructure may be required to 1)
provide a scientist with initial seed ideas; 2) form a team of well-suited
collaborators; 3) locate the most appropriate venue to publish the formalized
idea; 4) determine the most appropriate peers to review the manuscript; and 5)
disseminate the end product to the most interested members of the community.
Through the various delineations of this process, the requirements of each
stage are tied solely to the multi-functional resources of the community: its
researchers, its journals, and its manuscripts. It is within the collection of
these resources and their inherent relationships that the solutions to
scholarly communication are to be found. This paper describes an associative
network composed of multiple scholarly artifacts that can be used as a medium
for supporting the scholarly communication process.
|
cs/0601123
|
Low density codes achieve the rate-distortion bound
|
cs.IT math.IT
|
We propose a new construction for low-density source codes with multiple
parameters that can be tuned to optimize the performance of the code. In
addition, we introduce a set of analysis techniques for deriving upper bounds
for the expected distortion of our construction, as well as more general
low-density constructions. We show that (with an optimal encoding algorithm)
our codes achieve the rate-distortion bound for a binary symmetric source and
Hamming distortion. Our methods also provide rigorous upper bounds on the
minimum distortion achievable by previously proposed low-density constructions.
|
cs/0601124
|
Power Control for User Cooperation
|
cs.IT math.IT
|
For a fading Gaussian multiple access channel with user cooperation, we
obtain the optimal power allocation policies that maximize the rates achievable
by block Markov superposition coding. The optimal policies result in a coding
scheme that is simpler than the one for a general multiple access channel with
generalized feedback. This simpler coding scheme also leads to the possibility
of formulating an otherwise non-concave optimization problem as a concave one.
Using the channel state information at the transmitters to adapt the powers, we
demonstrate significant gains over the achievable rates for existing
cooperative systems.
|
cs/0601126
|
Approximate Linear Time ML Decoding on Tail-Biting Trellises in Two
Rounds
|
cs.IT math.IT
|
A linear-time approximate maximum likelihood decoding algorithm on
tail-biting trellises is presented that requires exactly two rounds on the
trellis. It is an adaptation of an algorithm proposed earlier, with the
advantage that it reduces the time complexity from O(m log m) to O(m), where m
is the number of nodes in the tail-biting trellis. A necessary condition for
the output of the algorithm to differ from the output of the ideal ML decoder
is derived, and simulation results on an AWGN channel using tail-biting
trellises for two rate-1/2 convolutional codes with memory 4 and 6,
respectively, are reported.
|
cs/0601129
|
Instantaneously Trained Neural Networks
|
cs.NE cs.AI
|
This paper presents a review of instantaneously trained neural networks
(ITNNs). These networks trade learning time for size and, in the basic model, a
new hidden node is created for each training sample. Various versions of the
corner-classification family of ITNNs, which have found applications in
artificial intelligence (AI), are described. Implementation issues are also
considered.
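The corner-classification idea lends itself to a small sketch. The class below is an illustrative assumption (one plausible CC-style construction, not necessarily the exact CC4 formulation described in the review): each binary training sample becomes one hidden node that fires exactly when an input lies within Hamming distance `radius` of that sample, so training is a single, instantaneous pass over the data.

```python
class CornerClassifier:
    """Sketch of an instantaneously trained corner-classification network."""

    def __init__(self, radius=0):
        self.radius = radius   # generalization radius r, in Hamming distance
        self.nodes = []        # one (weights, bias, label) hidden node per sample

    def train(self, samples, labels):
        # instantaneous training: each sample simply becomes a hidden node
        for x, y in zip(samples, labels):
            w = [1 if xi == 1 else -1 for xi in x]  # +1 for 1-bits, -1 for 0-bits
            bias = self.radius - sum(x) + 1         # node fires iff distance <= r
            self.nodes.append((w, bias, y))

    def predict(self, x):
        votes = {}
        for w, bias, y in self.nodes:
            # activation is positive iff Hamming distance to the stored sample <= r
            if sum(wi * xi for wi, xi in zip(w, x)) + bias > 0:
                votes[y] = votes.get(y, 0) + 1
        return max(votes, key=votes.get) if votes else None

clf = CornerClassifier(radius=1)
clf.train([[1, 0, 1, 0], [0, 1, 0, 1]], ["A", "B"])
print(clf.predict([1, 0, 1, 1]))  # within distance 1 of the first sample: "A"
```

Note how the network's size grows with the training set, one hidden node per sample, which is exactly the time-for-size trade described above.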
|
cs/0601130
|
From Dumb Wireless Sensors to Smart Networks using Network Coding
|
cs.IT cs.NI math.IT
|
The vision of wireless sensor networks is one of a smart collection of tiny,
dumb devices. These motes may be individually cheap, unintelligent, imprecise,
and unreliable. Yet they are able to derive strength from numbers, rendering
the whole to be strong, reliable, and robust. Our approach is to adopt a
distributed and randomized mindset and rely on in-network processing and
network coding. Our general abstraction is that nodes should act only locally
and independently, and the desired global behavior should arise as a collective
property of the network. We summarize our work and present how these ideas can
be applied for communication and storage in sensor networks.
|
cs/0601131
|
Scalable Algorithms for Aggregating Disparate Forecasts of Probability
|
cs.AI cs.DC cs.IT math.IT
|
In this paper, computational aspects of the panel aggregation problem are
addressed. Motivated primarily by applications of risk assessment, an algorithm
is developed for aggregating large corpora of internally incoherent probability
assessments. The algorithm is characterized by a provable performance
guarantee, and is demonstrated to be orders of magnitude faster than existing
tools when tested on several real-world data-sets. In addition, unexpected
connections between research in risk assessment and wireless sensor networks
are exposed, as several key ideas are illustrated to be useful in both fields.
|
cs/0601132
|
A Study on the Global Convergence Time Complexity of Estimation of
Distribution Algorithms
|
cs.AI cs.NE
|
Estimation of Distribution Algorithms (EDAs) are a new class of
population-based search methods in which a probabilistic model of individuals
is estimated from the high-quality individuals and used to generate new
individuals. In this paper we compute 1) some upper bounds on the number of
iterations required for the global convergence of an EDA, and 2) the exact
number of iterations needed for an EDA to converge to the global optimum.
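As a concrete toy instance of the estimate-then-sample loop (the OneMax problem, population sizes, and probability clamping below are illustrative assumptions, not taken from the paper), a univariate EDA estimates a per-bit Bernoulli model from the best individuals and samples the next population from it:

```python
import random

def umda_onemax(n=20, pop_size=50, select=25, iters=60, seed=0):
    """Minimal univariate EDA (UMDA-style) on OneMax: estimate a per-bit
    Bernoulli model from the best individuals, then sample from it."""
    rng = random.Random(seed)
    p = [0.5] * n                                   # the probabilistic model
    for _ in range(iters):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)             # fitness = number of ones
        elite = pop[:select]
        # re-estimate the model from the high-quality individuals (clamped)
        p = [max(0.02, min(0.98, sum(x[i] for x in elite) / select))
             for i in range(n)]
        if sum(pop[0]) == n:                        # global optimum reached
            return pop[0]
    return pop[0]

best = umda_onemax()
print(sum(best))  # fitness of the best individual found
```

The number of iterations this loop takes before the all-ones optimum first appears is precisely the quantity the convergence-time bounds concern.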
|
cs/0602004
|
Conjunctive Queries over Trees
|
cs.DB cs.AI cs.CC cs.LO
|
We study the complexity and expressive power of conjunctive queries over
unranked labeled trees represented using a variety of structure relations such
as ``child'', ``descendant'', and ``following'' as well as unary relations for
node labels. We establish a framework for characterizing structures
representing trees for which conjunctive queries can be evaluated efficiently.
Then we completely chart the tractability frontier of the problem and establish
a dichotomy theorem for our axis relations, i.e., we find all subset-maximal
sets of axes for which query evaluation is in polynomial time and show that for
all other cases, query evaluation is NP-complete. All polynomial-time results
are obtained immediately using the proof techniques from our framework.
Finally, we study the expressiveness of conjunctive queries over trees and show
that for each conjunctive query, there is an equivalent acyclic positive query
(i.e., a set of acyclic conjunctive queries), but that in general this query is
not of polynomial size.
|
cs/0602006
|
A Visual Query Language for Complex-Value Databases
|
cs.DB cs.HC
|
In this paper, a visual language, VCP, for queries on complex-value databases
is proposed. The main strength of the new language is that it is purely visual:
(i) it has no notion of variables, quantification, partiality, joins, pattern
matching, regular expressions, recursion, or any other construct proper to
logical, functional, or other database query languages, and (ii) it has a very
natural, strong, and intuitive design metaphor. The main operation is that of
copying and pasting in a schema tree.
We show that despite its simplicity, VCP precisely captures complex-value
algebra without powerset, or equivalently, monad algebra with union and
difference. Thus, its expressive power is precisely that of the language that
is usually considered to play the role of relational algebra for complex-value
databases.
|
cs/0602007
|
Fuzzy Extractors: How to Generate Strong Keys from Biometrics and Other
Noisy Data
|
cs.CR cs.IT math.IT
|
We provide formal definitions and efficient secure techniques for
- turning noisy information into keys usable for any cryptographic
application, and, in particular,
- reliably and securely authenticating biometric data.
Our techniques apply not just to biometric information, but to any keying
material that, unlike traditional cryptographic keys, is (1) not reproducible
precisely and (2) not distributed uniformly. We propose two primitives: a
"fuzzy extractor" reliably extracts nearly uniform randomness R from its input;
the extraction is error-tolerant in the sense that R will be the same even if
the input changes, as long as it remains reasonably close to the original.
Thus, R can be used as a key in a cryptographic application. A "secure sketch"
produces public information about its input w that does not reveal w, and yet
allows exact recovery of w given another value that is close to w. Thus, it can
be used to reliably reproduce error-prone biometric inputs without incurring
the security risk inherent in storing them.
We define the primitives to be both formally secure and versatile,
generalizing much prior work. In addition, we provide nearly optimal
constructions of both primitives for various measures of ``closeness'' of input
data, such as Hamming distance, edit distance, and set difference.
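The secure-sketch primitive for Hamming distance can be illustrated with the code-offset construction. The 3-bit repetition code below is a toy stand-in for a real error-correcting code, and a full fuzzy extractor would additionally apply a randomness extractor to the recovered input; the function names are invented for this sketch.

```python
import secrets

BLOCK = 3  # a 3-bit repetition code corrects 1 bit error per block

def _decode_block(bits):
    # majority vote: nearest codeword of the repetition code
    return [round(sum(bits) / BLOCK)] * BLOCK

def sketch(w):
    """Code-offset secure sketch: publish s = w XOR c for a random codeword c."""
    c = []
    for _ in range(0, len(w), BLOCK):
        c += [secrets.randbelow(2)] * BLOCK     # random repetition codeword
    return [wi ^ ci for wi, ci in zip(w, c)]

def recover(w_noisy, s):
    """Recover the original w from a close reading w' and the public sketch s."""
    shifted = [wi ^ si for wi, si in zip(w_noisy, s)]  # equals c XOR errors
    c = []
    for i in range(0, len(shifted), BLOCK):
        c += _decode_block(shifted[i:i + BLOCK])       # correct <=1 error/block
    return [ci ^ si for ci, si in zip(c, s)]

w = [1, 0, 1, 1, 1, 0]             # enrolled biometric reading (toy)
s = sketch(w)
w_noisy = w[:]
w_noisy[4] ^= 1                    # one bit flips at authentication time
print(recover(w_noisy, s) == w)    # True: w is reproduced exactly
```

The public value s reveals nothing about w beyond the code's redundancy, yet any later reading within the code's error-correction radius reproduces w exactly.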
|
cs/0602011
|
The intuitionistic fragment of computability logic at the propositional
level
|
cs.LO cs.AI math.LO
|
This paper presents a soundness and completeness proof for propositional
intuitionistic calculus with respect to the semantics of computability logic.
The latter interprets formulas as interactive computational problems,
formalized as games between a machine and its environment. Intuitionistic
implication is understood as algorithmic reduction in the weakest possible --
and hence most natural -- sense, disjunction and conjunction as
deterministic-choice combinations of problems (disjunction = machine's choice,
conjunction = environment's choice), and "absurd" as a computational problem of
universal strength. See http://www.cis.upenn.edu/~giorgi/cl.html for a
comprehensive online source on computability logic.
|
cs/0602014
|
Game theoretic aspects of distributed spectral coordination with
application to DSL networks
|
cs.IT math.IT
|
In this paper we use game theoretic techniques to study the value of
cooperation in distributed spectrum management problems. We show that the
celebrated iterative water-filling algorithm is subject to the prisoner's
dilemma and therefore can lead to severe degradation of the achievable rate
region in an interference channel environment. We also provide a thorough
analysis of a simple two-band near-far situation where we are able to provide
closed-form tight bounds on the rate region of both fixed-margin iterative
water filling (FM-IWF) and dynamic frequency division multiplexing (DFDM)
methods. This is the only case where such analytic expressions are known and
all previous studies included only simulated results of the rate region. We
then propose an alternative algorithm that alleviates some of the drawbacks of
the IWF algorithm in near-far scenarios relevant to DSL access networks. We
also provide experimental analysis based on measured DSL channels of both
algorithms as well as the centralized optimum spectrum management.
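A minimal sketch of the iterative water-filling dynamic for two users on two shared bands (the channel gains, budgets, and round count are invented for illustration): each user repeatedly water-fills its own power budget while treating the other user's current signal as extra noise.

```python
def waterfill(noise, budget):
    """Classic water-filling: allocate a power budget across channels
    with the given effective noise levels."""
    n = sorted(range(len(noise)), key=lambda i: noise[i])
    power = [0.0] * len(noise)
    for k in range(len(noise), 0, -1):
        # water level if only the k least-noisy channels are used
        level = (budget + sum(noise[i] for i in n[:k])) / k
        if level >= noise[n[k - 1]]:
            for i in n[:k]:
                power[i] = level - noise[i]
            return power
    return power

def iterative_waterfill(noise, gains, budgets, rounds=50):
    """Two users take turns water-filling, each treating the other's
    current transmission as additional noise."""
    p = [[b / len(noise[0])] * len(noise[0]) for b in budgets]  # start uniform
    for _ in range(rounds):
        for u in (0, 1):
            v = 1 - u
            eff = [noise[u][i] + gains[u][i] * p[v][i]
                   for i in range(len(noise[u]))]
            p[u] = waterfill(eff, budgets[u])
    return p

# two bands; user 0 sees little interference on band 0, user 1 on band 1
p = iterative_waterfill(noise=[[0.1, 0.1], [0.1, 0.1]],
                        gains=[[0.05, 0.9], [0.9, 0.05]],
                        budgets=[1.0, 1.0])
print([[round(x, 2) for x in u] for u in p])  # [[0.9, 0.1], [0.1, 0.9]]
```

In this near-far-style example each user ends up concentrating its power on the band where it sees little interference, a frequency-division-like outcome of the kind the near-far analysis above is concerned with.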
|
cs/0602015
|
On the Asymptotic Performance of Multiple Antenna Channels with Fast
Channel Feedback
|
cs.IT math.IT
|
In this paper, we analyze the asymptotic performance of multiple antenna
channels where the transmitter has either perfect or finite bit channel state
information. Using the diversity-multiplexing tradeoff to characterize the
system performance, we demonstrate that channel feedback can fundamentally
change the system behavior. Even one bit of information can increase the
diversity order of the system compared to the system with no transmitter
information. In addition, as the amount of channel information at the
transmitter increases, the diversity order for each multiplexing gain increases
and goes to infinity for perfect transmitter information. The major reason for
diversity order gain is a "location-dependent" temporal power control, which
adapts the power control strategy based on the average channel conditions.
|
cs/0602018
|
Improving the CSIEC Project and Adapting It to the English Teaching and
Learning in China
|
cs.CY cs.AI cs.CL cs.HC cs.MA
|
In this paper, after a short review of the CSIEC project initiated by us in
2003, we present the continuing development and improvement of the CSIEC
project in detail, including the design of five new Microsoft Agent characters
representing different virtual chatting partners and the limitation of the
simulated dialogs to specific practical scenarios like graduate job application
interviews. We then briefly analyze the actual conditions and features of its
application field: web-based English education in China. Finally, we introduce
our efforts to adapt this system to the requirements of English teaching and
learning in China and point out the work to be done next.
|
cs/0602020
|
Inter-Block Permuted Turbo Codes
|
cs.IT math.IT
|
The structure and size of the interleaver used in a turbo code critically
affect the distance spectrum and the covariance property of a component
decoder's information input and soft output. This paper introduces a new class
of interleavers, the inter-block permutation (IBP) interleavers, which can be
built on any existing "good" block-wise interleaver by simply adding an IBP
stage. The IBP interleavers reduce the above-mentioned correlation and increase
the effective interleaving size. The increased effective interleaving size
improves the distance spectrum while the reduced covariance enhances the
iterative decoder's performance. Moreover, the structure of the
IBP(-interleaved) turbo codes (IBPTC) is naturally suited to high-rate
applications that necessitate parallel decoding.
We present some useful bounds and constraints associated with the IBPTC that
can be used as design guidelines. The corresponding codeword weight upper
bounds for weight-2 and weight-4 input sequences are derived. Based on some of
the design guidelines, we propose a simple IBP algorithm and show that the
associated IBPTC yields 0.3 to 1.2 dB performance gain, or equivalently, an
IBPTC renders the same performance with a much reduced interleaving delay. The
EXIT and covariance behaviors provide another numerical proof of the
superiority of the proposed IBPTC.
|
cs/0602021
|
Using Domain Knowledge in Evolutionary System Identification
|
cs.AI math.AP
|
Two examples of Evolutionary System Identification (ESI) are presented to
highlight the importance of incorporating Domain Knowledge: the discovery of an
analytical indentation law in Structural Mechanics using constrained Genetic
Programming, and the identification of the distribution of underground
velocities in Seismic Prospection. Critical issues for successful ESI are
discussed in the light of these results.
|
cs/0602022
|
Avoiding the Bloat with Stochastic Grammar-based Genetic Programming
|
cs.AI
|
The application of Genetic Programming to the discovery of empirical laws is
often impaired by the huge size of the search space, and consequently by the
computer resources needed. In many cases, the extreme demand for memory and CPU
is due to the massive growth of non-coding segments, the introns. The paper
presents a new program evolution framework which combines distribution-based
evolution in the PBIL spirit with grammar-based genetic programming; the
information is stored as a probability distribution on the grammar rules,
rather than in a population. Experiments on a real-world-like problem show that
this approach gives a practical solution to the problem of intron growth.
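A minimal sketch of this distribution-based scheme, assuming a toy arithmetic grammar and a PBIL-style update rule (both invented here for illustration; the paper's grammar and update differ): candidate programs are sampled from rule probabilities, and the distribution is shifted toward the rules used by the best sample, so no population of programs is ever stored.

```python
import random

RULES = ["x", "1", "(E+E)", "(E*E)"]  # productions of the nonterminal E

def sample(p, rng, depth=0, max_depth=4):
    """Sample a fully expanded expression; record which rules were used."""
    if depth >= max_depth:
        rule = rng.choices(RULES[:2], weights=p[:2])[0]  # terminals only
    else:
        rule = rng.choices(RULES, weights=p)[0]
    expr, used = rule, [RULES.index(rule)]
    while "E" in expr:
        sub, sub_used = sample(p, rng, depth + 1, max_depth)
        expr = expr.replace("E", sub, 1)
        used += sub_used
    return expr, used

def pbil_gp(target, xs, pop=60, gens=40, lr=0.2, seed=3):
    """Distribution-based GP: the model is a distribution over grammar rules."""
    rng = random.Random(seed)
    p = [0.25] * len(RULES)
    best = (float("inf"), None)
    for _ in range(gens):
        scored = []
        for _ in range(pop):
            expr, used = sample(p, rng)
            err = sum((eval(expr, {"x": x}) - target(x)) ** 2 for x in xs)
            scored.append((err, expr, used))
        err, expr, used = min(scored, key=lambda t: t[0])
        if err < best[0]:
            best = (err, expr)
        # PBIL update: shift rule probabilities toward the best sample's rules
        freq = [used.count(i) / len(used) for i in range(len(RULES))]
        p = [(1 - lr) * pi + lr * fi for pi, fi in zip(p, freq)]
    return best

err, expr = pbil_gp(lambda x: x * x + x, xs=[0, 1, 2, 3])
print(err, expr)
```

Because only the rule distribution evolves, non-coding segments cannot accumulate in individuals the way introns do in a conventional GP population.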
|
cs/0602023
|
Information theory and Thermodynamics
|
cs.IT math.IT
|
A communication theory for a transmitter broadcasting to many receivers is
presented. In this case energetic considerations cannot be neglected as in
Shannon theory. It is shown that, when energy is assigned to the information
bit, information theory complies with classical thermodynamics and is part of
it. To provide a thermodynamic theory of communication it is necessary to
define equilibrium for informatic systems that are not in thermal equilibrium
and to calculate temperature, heat, and entropy in accordance with the Clausius
inequality. It is shown that for a binary file the temperature is proportional
to the bit energy and that information is thermodynamic entropy. Equilibrium
exists in random files that cannot be compressed. Thermodynamic bounds on the
computing power of a physical device, and the maximum information that an
antenna can broadcast are calculated.
|
cs/0602027
|
Explaining Constraint Programming
|
cs.PL cs.AI
|
We discuss here constraint programming (CP) by using a proof-theoretic
perspective. To this end we identify three levels of abstraction. Each level
sheds light on the essence of CP.
In particular, the highest level allows us to bring CP closer to the
computation as deduction paradigm. At the middle level we can explain various
constraint propagation algorithms. Finally, at the lowest level we can address
the issue of automatic generation and optimization of the constraint
propagation algorithms.
|
cs/0602028
|
Analysis of Belief Propagation for Non-Linear Problems: The Example of
CDMA (or: How to Prove Tanaka's Formula)
|
cs.IT math.IT
|
We consider the CDMA (code-division multiple-access) multi-user detection
problem for binary signals and additive white Gaussian noise. We propose a
spreading sequences scheme based on random sparse signatures, and a detection
algorithm based on belief propagation (BP) with linear time complexity. In the
new scheme, each user conveys its power onto a finite number of chips l, in the
large-system limit.
We analyze the performance of BP detection and prove that it coincides with
that of optimal (symbol-MAP) detection in the l->\infty limit. In the same
limit, we prove that the information capacity of the system converges to
Tanaka's formula for random `dense' signatures, thus providing the first
rigorous justification of this formula. Apart from being computationally
convenient, the new scheme allows for optimization in close analogy with
irregular low density parity check code ensembles.
|
cs/0602030
|
Single-Symbol Maximum Likelihood Decodable Linear STBCs
|
cs.IT math.IT
|
Space-Time block codes (STBC) from Orthogonal Designs (OD) and Co-ordinate
Interleaved Orthogonal Designs (CIOD) have been attracting wider attention due
to their amenability to fast (single-symbol) ML decoding and their full rate
with full rank over quasi-static fading channels. However, these codes are
instances of single-symbol decodable codes, and it is natural to ask whether
there exist codes other than STBCs from ODs and CIODs that allow single-symbol
decoding.
In this paper, the above question is answered in the affirmative by
characterizing all linear STBCs that allow single-symbol ML decoding (not
necessarily with full diversity) over quasi-static fading channels, calling
them single-symbol decodable designs (SDD). The class SDD includes ODs and CIODs as
proper subclasses. Further, among the SDD, a class of those that offer
full-diversity, called Full-rank SDD (FSDD) are characterized and classified.
|
cs/0602031
|
Classifying Signals with Local Classifiers
|
cs.AI
|
This paper deals with the problem of classifying signals. A new method for
building so-called local classifiers and local features is presented. The
method is a combination of the lifting scheme and support vector machines.
Its main aim is to produce effective and yet comprehensible classifiers that
would help in understanding processes hidden behind classified signals. To
illustrate the method we present the results obtained on an artificial and a
real dataset.
|
cs/0602032
|
Finite-State Dimension and Real Arithmetic
|
cs.CC cs.IT math.IT
|
We use entropy rates and Schur concavity to prove that, for every integer k
>= 2, every nonzero rational number q, and every real number alpha, the base-k
expansions of alpha, q+alpha, and q*alpha all have the same finite-state
dimension and the same finite-state strong dimension. This extends, and gives a
new proof of, Wall's 1949 theorem stating that the sum or product of a nonzero
rational number and a Borel normal number is always Borel normal.
|
cs/0602035
|
n-Channel Entropy-Constrained Multiple-Description Lattice Vector
Quantization
|
cs.IT math.IT
|
In this paper we derive analytical expressions for the central and side
quantizers which, under high-resolution assumptions, minimize the expected
distortion of a symmetric multiple-description lattice vector quantization
(MD-LVQ) system subject to entropy constraints on the side descriptions for
given packet-loss probabilities.
We consider a special case of the general n-channel symmetric
multiple-description problem where only a single parameter controls the
redundancy tradeoffs between the central and the side distortions. Previous
work on two-channel MD-LVQ showed that the distortions of the side quantizers
can be expressed through the normalized second moment of a sphere. We show here
that this is also the case for three-channel MD-LVQ. Furthermore, we conjecture
that this is true for the general n-channel MD-LVQ.
For given source, target rate and packet-loss probabilities we find the
optimal number of descriptions and construct the MD-LVQ system that minimizes
the expected distortion. We verify theoretical expressions by numerical
simulations and show in a practical setup that significant performance
improvements can be achieved over state-of-the-art two-channel MD-LVQ by using
three-channel MD-LVQ.
|
cs/0602036
|
R\'{e}seaux d'Automates de Caianiello Revisit\'{e}
|
cs.NE
|
We exhibit a family of McCulloch-Pitts neural networks of size $2nk+2$
which can be simulated by a Caianiello neural network of size $2n+2$ and
memory length $k$. This simulation allows us to recover one of the results of
the following article: [Cycles exponentiels des r\'{e}seaux de Caianiello et
compteurs en arithm\'{e}tique redondante, Technique et Science Informatiques
Vol. 19, pages 985-1008] on the existence of Caianiello neural networks of
size $2n+2$ and memory length $k$ which describe a cycle of length $k \times
2^{nk}$.
|
cs/0602038
|
Minimum Cost Homomorphisms to Proper Interval Graphs and Bigraphs
|
cs.DM cs.AI
|
For graphs $G$ and $H$, a mapping $f: V(G)\to V(H)$ is a homomorphism of $G$
to $H$ if $uv\in E(G)$ implies $f(u)f(v)\in E(H).$ If, moreover, each vertex $u
\in V(G)$ is associated with costs $c_i(u), i \in V(H)$, then the cost of the
homomorphism $f$ is $\sum_{u\in V(G)}c_{f(u)}(u)$. For each fixed graph $H$, we
have the {\em minimum cost homomorphism problem}, written as MinHOM($H)$. The
problem is to decide, for an input graph $G$ with costs $c_i(u),$ $u \in V(G),
i\in V(H)$, whether there exists a homomorphism of $G$ to $H$ and, if one
exists, to find one of minimum cost. Minimum cost homomorphism problems
encompass (or are related to) many well studied optimization problems. We
describe a dichotomy of the minimum cost homomorphism problems for graphs $H$,
with loops allowed. When each connected component of $H$ is either a reflexive
proper interval graph or an irreflexive proper interval bigraph, the problem
MinHOM($H)$ is polynomial time solvable. In all other cases the problem
MinHOM($H)$ is NP-hard. This solves an open problem from an earlier paper.
Along the way, we prove a new characterization of the class of proper interval
bigraphs.
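To make the problem statement concrete, here is a brute-force MinHOM solver for tiny graphs. It is exponential in $|V(G)|$ and purely illustrative; the point of the dichotomy above is precisely that proper interval structure admits a polynomial-time algorithm instead.

```python
from itertools import product

def min_cost_hom(G_edges, H_edges, costs):
    """Brute-force MinHOM(H): try every mapping f: V(G) -> V(H), keep the
    cheapest one under which every edge of G maps to an edge of H."""
    VG = sorted({v for e in G_edges for v in e})
    VH = sorted({v for e in H_edges for v in e})
    H = {frozenset(e) for e in H_edges}   # a loop (u, u) becomes {u}
    best = None
    for f in product(VH, repeat=len(VG)):
        m = dict(zip(VG, f))
        if all(frozenset((m[u], m[v])) in H for u, v in G_edges):
            cost = sum(costs[v][m[v]] for v in VG)
            if best is None or cost < best[0]:
                best = (cost, m)
    return best

# G is the path a-b-c, H is a single edge {0, 1} (so homs are 2-colourings)
cost, f = min_cost_hom(
    [("a", "b"), ("b", "c")], [(0, 1)],
    {"a": {0: 1, 1: 5}, "b": {0: 2, 1: 1}, "c": {0: 1, 1: 5}})
print(cost, f)  # 3 {'a': 0, 'b': 1, 'c': 0}
```

Of the two homomorphisms of the path to an edge, the solver returns the one with total vertex cost 3 rather than 12.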
|
cs/0602039
|
Path Summaries and Path Partitioning in Modern XML Databases
|
cs.DB
|
We study the applicability of XML path summaries in the context of
current-day XML databases. We find that summaries provide an excellent basis
for optimizing data access methods, which furthermore mixes very well with
path-partitioned stores. We provide practical algorithms for building and
exploiting summaries, and demonstrate their benefits through extensive experiments.
|
cs/0602044
|
Multilevel Thresholding for Image Segmentation through a Fast
Statistical Recursive Algorithm
|
cs.CV
|
A novel algorithm is proposed for segmenting an image into multiple levels
using its mean and variance. Starting from the extreme pixel values at both
ends of the histogram plot, the algorithm is applied recursively on sub-ranges
computed from the previous step, so as to find a threshold level and a new
sub-range for the next step, until no significant improvement in image quality
can be achieved. The method makes use of the fact that a number of
distributions tend towards the Dirac delta function, peaking at the mean, in the
limiting condition of vanishing variance. The procedure naturally provides for
variable size segmentation with bigger blocks near the extreme pixel values and
finer divisions around the mean or other chosen value for better visualization.
Experiments on a variety of images show that the new algorithm effectively
segments the image in very little computational time.
|
cs/0602045
|
Emergence Explained
|
cs.MA cs.DC cs.GL
|
Emergence (macro-level effects from micro-level causes) is at the heart of
the conflict between reductionism and functionalism. How can there be
autonomous higher level laws of nature (the functionalist claim) if everything
can be reduced to the fundamental forces of physics (the reductionist
position)? We cut through this debate by applying a computer science lens to
the way we view nature. We conclude (a) that what functionalism calls the
special sciences (sciences other than physics) do indeed study autonomous laws
and furthermore that those laws pertain to real higher level entities but (b)
that interactions among such higher-level entities are epiphenomenal in that
they can always be reduced to primitive physical forces. In other words,
epiphenomena, which we will identify with emergent phenomena, do real
higher-level work. The proposed perspective provides a framework for
understanding many thorny issues including the nature of entities, stigmergy,
the evolution of complexity, phase transitions, supervenience, and downward
entailment. We also discuss some practical considerations pertaining to systems
of systems and the limitations of modeling.
|
cs/0602046
|
Analysis of LDGM and compound codes for lossy compression and binning
|
cs.IT math.IT
|
Recent work has suggested that low-density generator matrix (LDGM) codes are
likely to be effective for lossy source coding problems. We derive rigorous
upper bounds on the effective rate-distortion function of LDGM codes for the
binary symmetric source, showing that they quickly approach the rate-distortion
function as the degree increases. We also compare and contrast the standard
LDGM construction with a compound LDPC/LDGM construction introduced in our
previous work, which provably saturates the rate-distortion bound with finite
degrees. Moreover, this compound construction can be used to generate nested
codes that are simultaneously good as source and channel codes, and are hence
well-suited to source/channel coding with side information. The sparse and
high-girth graphical structure of our constructions renders them well-suited to
message-passing encoding.
|
cs/0602048
|
On the Optimality of the ARQ-DDF Protocol
|
cs.IT math.IT
|
The performance of the automatic repeat request-dynamic decode and forward
(ARQ-DDF) cooperation protocol is analyzed in two distinct scenarios. The first
scenario is the multiple access relay (MAR) channel where a single relay is
dedicated to simultaneously help several multiple access users. For this setup,
it is shown that the ARQ-DDF protocol achieves the optimal diversity
multiplexing tradeoff (DMT) of the channel. The second scenario is the
cooperative vector multiple access (CVMA) channel where the users cooperate in
delivering their messages to a destination equipped with multiple receiving
antennas. For this setup, we develop a new variant of the ARQ-DDF protocol
where the users are purposefully instructed not to cooperate in the first round
of transmission. Lower and upper bounds on the achievable DMT are then derived.
These bounds are shown to converge to the optimal tradeoff as the number of
transmission rounds increases.
|
cs/0602049
|
Cooperative Lattice Coding and Decoding
|
cs.IT math.IT
|
A novel lattice coding framework is proposed for outage-limited cooperative
channels. This framework provides practical implementations for the optimal
cooperation protocols proposed by Azarian et al. In particular, for the relay
channel we implement a variant of the dynamic decode and forward protocol,
which uses orthogonal constellations to reduce the channel seen by the
destination to a single-input single-output time-selective one, while
inheriting the same diversity-multiplexing tradeoff. This simplification allows
for building the receiver using traditional belief propagation or tree search
architectures. Our framework also generalizes the coding scheme of Yang and
Belfiore in the context of amplify and forward cooperation. For the cooperative
multiple access channel, a tree coding approach, matched to the optimal linear
cooperation protocol of Azarian et al., is developed. For this scenario, the
MMSE-DFE Fano decoder is shown to enjoy an excellent tradeoff between
performance and complexity. Finally, the utility of the proposed schemes is
established via a comprehensive simulation study.
|
cs/0602050
|
Outage Capacity of the Fading Relay Channel in the Low SNR Regime
|
cs.IT math.IT
|
In slow fading scenarios, cooperation between nodes can increase the amount
of diversity for communication. We study the performance limit in such
scenarios by analyzing the outage capacity of slow fading relay channels. Our
focus is on the low SNR and low outage probability regime, where the adverse
impact of fading is greatest but so are the potential gains from cooperation.
We show that while the standard Amplify-Forward protocol performs very poorly
in this regime, a modified version we call the Bursty Amplify-Forward
protocol is optimal and achieves the outage capacity of the network. Moreover,
this performance can be achieved without a priori channel knowledge at the
receivers. In contrast, the Decode-Forward protocol is strictly sub-optimal in
this regime. Our results directly yield the outage capacity per unit energy of
fading relay channels.
|
cs/0602051
|
On the utility of the multimodal problem generator for assessing the
performance of Evolutionary Algorithms
|
cs.NE
|
This paper looks in detail at how an evolutionary algorithm attempts to solve
instances from the multimodal problem generator. The paper shows that in order
to consistently reach the global optimum, an evolutionary algorithm requires a
population size that should grow at least linearly with the number of peaks. A
close relationship is also shown between the supply and decision-making issues
that have been identified previously in the context of population sizing
models for additively decomposable problems.
The most important result of the paper, however, is that solving an instance
of the multimodal problem generator is like finding a peak in a haystack, and
it is argued that evolutionary algorithms are not the best algorithms for such
a task. Finally, and as opposed to what several researchers have been doing, it
is our strong belief that the multimodal problem generator is not adequate for
assessing the performance of evolutionary algorithms.
|
cs/0602052
|
The OverRelational Manifesto
|
cs.DB cs.DS
|
The OverRelational Manifesto (below ORM) proposes a possible approach to
creation of data storage systems of the next generation. ORM starts from the
requirement that information in a relational database is represented by a set
of relation values. Accordingly, it is assumed that the information about any
entity of an enterprise must also be represented as a set of relation values
(the ORM main requirement). A system of types is introduced, which allows one
to fulfill the main requirement. The data are represented in the form of
complex objects, and the state of any object is described as a set of relation
values. We emphasize that the types describing the objects are encapsulated,
inherited, and polymorphic. Then, it is shown that the data represented as a
set of such objects may also be represented as a set of relational values
defined on the set of scalar domains (dual data representation). In the general
case, any class is associated with a set of relation variables (R-variables)
each one containing some data about all objects of this class existing in the
system. One of the key points is the fact that the usage of complex (from the
user's viewpoint) refined names of R-variables and their attributes makes it
possible to preserve the semantics of complex data structures represented in
the form of a set of relation values. The most important part of the data
storage system created on the approach proposed is an object-oriented
translator operating over a relational DBMS. The expressiveness of such a
system is comparable with that of OO programming languages.
|
cs/0602053
|
How to Beat the Adaptive Multi-Armed Bandit
|
cs.DS cs.LG
|
The multi-armed bandit is a concise model for the problem of iterated
decision-making under uncertainty. In each round, a gambler must pull one of
$K$ arms of a slot machine, without any foreknowledge of their payouts, except
that they are uniformly bounded. A standard objective is to minimize the
gambler's regret, defined as the gambler's total payout minus the largest
payout which would have been achieved by any fixed arm, in hindsight. Note that
the gambler is only told the payout for the arm actually chosen, not for the
unchosen arms.
Almost all previous work on this problem assumed the payouts to be
non-adaptive, in the sense that the distribution of the payout of arm $j$ in
round $i$ is completely independent of the choices made by the gambler on
rounds $1, \dots, i-1$. In the more general model of adaptive payouts, the
payouts in round $i$ may depend arbitrarily on the history of past choices made
by the algorithm.
We present a new algorithm for this problem, and prove nearly optimal
guarantees for the regret against both non-adaptive and adaptive adversaries.
After $T$ rounds, our algorithm has regret $O(\sqrt{T})$ with high probability
(the tail probability decays exponentially). This dependence on $T$ is best
possible, and matches that of the full-information version of the problem, in
which the gambler is told the payouts for all $K$ arms after each round.
Previously, even for non-adaptive payouts, the best high-probability bounds
known were $O(T^{2/3})$, due to Auer, Cesa-Bianchi, Freund and Schapire. The
expected regret of their algorithm is $O(T^{1/2})$ for non-adaptive payouts, but
as we show, $\Omega(T^{2/3})$ for adaptive payouts.
|
cs/0602054
|
Explicit Space-Time Codes Achieving The Diversity-Multiplexing Gain
Tradeoff
|
cs.IT math.IT
|
A recent result of Zheng and Tse states that over a quasi-static channel,
there exists a fundamental tradeoff, referred to as the diversity-multiplexing
gain (D-MG) tradeoff, between the spatial multiplexing gain and the diversity
gain that can be simultaneously achieved by a space-time (ST) block code. This
tradeoff is precisely known in the case of i.i.d. Rayleigh-fading, for T>=
n_t+n_r-1, where T is the number of time slots over which coding takes place and
n_t,n_r are the number of transmit and receive antennas respectively. For T <
n_t+n_r-1, only upper and lower bounds on the D-MG tradeoff are available.
In this paper, we present a complete solution to the problem of explicitly
constructing D-MG optimal ST codes, i.e., codes that achieve the D-MG tradeoff
for any number of receive antennas. We do this by showing that for the square
minimum-delay case when T=n_t=n, cyclic-division-algebra (CDA) based ST codes
having the non-vanishing determinant property are D-MG optimal. While
constructions of such codes were previously known for restricted values of n,
we provide here a construction for such codes that is valid for all n.
For the rectangular, T > n_t case, we present two general techniques for
building D-MG-optimal rectangular ST codes from their square counterparts. A
byproduct of our results establishes that the D-MG tradeoff for all T>= n_t is
the same as that previously known to hold for T >= n_t + n_r -1.
|
cs/0602055
|
Revisiting Evolutionary Algorithms with On-the-Fly Population Size
Adjustment
|
cs.NE
|
In an evolutionary algorithm, the population has a very important role as its
size has direct implications regarding solution quality, speed, and
reliability. Theoretical studies have been done in the past to investigate the
role of population sizing in evolutionary algorithms. In addition to those
studies, several self-adjusting population sizing mechanisms have been proposed
in the literature. This paper revisits the latter topic and pays special
attention to the genetic algorithm with adaptive population size (APGA), for
which several researchers have claimed to be very effective at autonomously
(re)sizing the population.
As opposed to those previous claims, this paper suggests a complete opposite
view. Specifically, it shows that APGA is not capable of adapting the
population size at all. This claim is supported on theoretical grounds and
confirmed by computer simulations.
|
cs/0602056
|
Building Scenarios for Environmental Management and Planning: An
IT-Based Approach
|
cs.MA
|
Oftentimes, the need to build multidiscipline knowledge bases, oriented to
policy scenarios, entails the involvement of stakeholders in manifold domains,
with a juxtaposition of different languages whose semantics can hardly allow
inter-domain transfers. A useful support for planning is the building up of
durable IT-based interactive platforms, where it is possible to modify initial
positions toward a semantic convergence. The present paper shows an area-based
application of these tools, for the integrated distance-management of different
forms of knowledge expressed by selected stakeholders about environmental
planning issues, in order to build alternative development scenarios.
Keywords: Environmental planning, Scenario building, Multi-source knowledge,
IT-based
|
cs/0602058
|
Incremental Redundancy Cooperative Coding for Wireless Networks:
Cooperative Diversity, Coding, and Transmission Energy Gain
|
cs.IT math.IT
|
We study an incremental redundancy (IR) cooperative coding scheme for
wireless networks. To exploit the spatial diversity benefit we propose a
cluster-based collaborating strategy for a quasi-static Rayleigh fading channel
model and based on a network geometric distance profile. Our scheme enhances
the network performance by embedding an IR cooperative coding scheme into an
existing noncooperative route. More precisely, for each hop, we form a
collaborating cluster of M-1 nodes between the (hop) sender and the (hop)
destination. The transmitted message is encoded using a mother code and
partitioned into M blocks corresponding to each of the M slots. In the first
slot, the (hop) sender broadcasts its information by transmitting the first
block, and its helpers attempt to relay this message. In the remaining slots,
each of the left-over M-1 blocks is sent either through a helper that has
successfully decoded the message or directly by the (hop) sender, where a
dynamic schedule is based on the ACK-based feedback from the cluster. By
employing powerful good codes (e.g., turbo codes, LDPC codes, and raptor codes)
whose performance is characterized by a threshold behavior, our approach
improves the reliability of a multi-hop routing through not only cooperation
diversity benefit but also a coding advantage. The study of the diversity and
the coding gain of the proposed scheme is based on a new simple threshold bound
on the frame-error rate (FER) of maximum likelihood decoding. An average FER
upper bound and its asymptotic (in large SNR) version are derived as a function
of the average fading channel SNRs and the code threshold.
|
cs/0602060
|
eJournal interface can influence usage statistics: implications for
libraries, publishers, and Project COUNTER
|
cs.IR cs.DL
|
The design of a publisher's electronic interface can have a measurable effect
on electronic journal usage statistics. A study of journal usage from six
COUNTER-compliant publishers at thirty-two research institutions in the United
States, the United Kingdom and Sweden indicates that the ratio of PDF to HTML
views is not consistent across publisher interfaces, even after controlling for
differences in publisher content. The number of fulltext downloads may be
artificially inflated when publishers require users to view HTML versions
before accessing PDF versions or when linking mechanisms, such as CrossRef,
direct users to the full text, rather than the abstract, of each article. These
results suggest that usage reports from COUNTER-compliant publishers are not
directly comparable in their current form. One solution may be to modify
publisher numbers with adjustment factors deemed to be representative of the
benefit or disadvantage due to each publisher's interface. Standardization of some interface
and linking protocols may obviate these differences and allow for more accurate
cross-publisher comparisons.
|
cs/0602062
|
Learning rational stochastic languages
|
cs.LG
|
Given a finite set of words w1,...,wn independently drawn according to a
fixed unknown distribution law P called a stochastic language, a usual goal in
Grammatical Inference is to infer an estimate of P in some class of
probabilistic models, such as Probabilistic Automata (PA). Here, we study the
class of rational stochastic languages, which consists of stochastic languages
that can be generated by Multiplicity Automata (MA) and which strictly includes
the class of stochastic languages generated by PA. Rational stochastic
languages have a minimal normal representation, which may be very concise, and
whose parameters can be efficiently estimated from stochastic samples. We
design an efficient inference algorithm DEES which aims at building a minimal
normal representation of the target. Despite the fact that no recursively
enumerable class of MA computes exactly the set of rational stochastic
languages over Q, we show that DEES strongly identifies this set in the limit.
We study the intermediary MA output by DEES and show that they compute rational
series which converge absolutely to one and which can be used to provide
stochastic languages which closely estimate the target.
|
cs/0602065
|
Similarity of Objects and the Meaning of Words
|
cs.CV cs.IR
|
We survey the emerging area of compression-based, parameter-free, similarity
distance measures useful in data-mining, pattern recognition, learning and
automatic semantics extraction. Given a family of distances on a set of
objects, a distance is universal up to a certain precision for that family if
it minorizes every distance in the family between every two objects in the set,
up to the stated precision (we do not require the universal distance to be an
element of the family). We consider similarity distances for two types of
objects: literal objects that as such contain all of their meaning, like
genomes or books, and names for objects. The latter may have literal
embodiments like the first type, but may also be abstract like ``red'' or
``christianity.'' For the first type we consider a family of computable
distance measures corresponding to parameters expressing similarity according
to particular features between pairs of literal objects. For the second type we
consider similarity distances generated by web users corresponding to
particular semantic relations between the (names for) the designated objects.
For both families we give universal similarity distance measures, incorporating
all particular distance measures in the family. In the first case the universal
distance is based on compression and in the second case it is based on Google
page counts related to search terms. In both cases experiments on a massive
scale give evidence of the viability of the approaches.
|
cs/0602067
|
Renyi to Renyi -- Source Coding under Siege
|
cs.IT cs.DS math.IT
|
A novel lossless source coding paradigm applies to problems of unreliable
lossless channels with low bit rates, in which a vital message needs to be
transmitted prior to termination of communications. This paradigm can be
applied to Alfred Renyi's secondhand account of an ancient siege in which a spy
was sent to scout the enemy but was captured. After escaping, the spy returned
to his base in no condition to speak and unable to write. His commander asked
him questions that he could answer by nodding or shaking his head, and the
fortress was defended with this information. Renyi told this story with
reference to prefix coding, but maximizing probability of survival in the siege
scenario is distinct from yet related to the traditional source coding
objective of minimizing expected codeword length. Rather than finding a code
minimizing expected codeword length $\sum_{i=1}^n p(i) l(i)$, the siege problem
involves maximizing $\sum_{i=1}^n p(i) \theta^{l(i)}$ for a known $\theta \in
(0,1)$. When there are no restrictions on codewords, this problem can be solved
using a known generalization of Huffman coding. The optimal solution has coding
bounds which are functions of Renyi entropy; in addition to known bounds, new
bounds are derived here. The alphabetically constrained version of this problem
has applications in search trees and diagnostic testing. A novel dynamic
programming algorithm -- based upon the oldest known algorithm for the
traditional alphabetic problem -- optimizes this problem in $O(n^3)$ time and
$O(n^2)$ space, whereas two novel approximation algorithms can find a
suboptimal solution faster: one in linear time, the other in $O(n \log n)$.
Coding bounds for the alphabetic version of this problem are also presented.
|
cs/0602071
|
Geographic Gossip: Efficient Aggregation for Sensor Networks
|
cs.IT math.IT
|
Gossip algorithms for aggregation have recently received significant
attention for sensor network applications because of their simplicity and
robustness in noisy and uncertain environments. However, gossip algorithms can
waste significant energy by essentially passing around redundant information
multiple times. For realistic sensor network model topologies like grids and
random geometric graphs, the inefficiency of gossip schemes is caused by slow
mixing times of random walks on those graphs. We propose and analyze an
alternative gossiping scheme that exploits geographic information. By utilizing
a simple resampling method, we can demonstrate substantial gains over
previously proposed gossip protocols. In particular, for random geometric
graphs, our algorithm computes the true average to accuracy $1/n^a$ using
$O(n^{1.5}\sqrt{\log n})$ radio transmissions, which reduces the energy
consumption by a $\sqrt{\frac{n}{\log n}}$ factor over standard gossip
algorithms.
|
cs/0602072
|
Turbo Decoding on the Binary Erasure Channel: Finite-Length Analysis and
Turbo Stopping Sets
|
cs.IT math.IT
|
This paper is devoted to the finite-length analysis of turbo decoding over
the binary erasure channel (BEC). The performance of iterative
belief-propagation (BP) decoding of low-density parity-check (LDPC) codes over
the BEC can be characterized in terms of stopping sets. We describe turbo
decoding on the BEC which is simpler than turbo decoding on other channels. We
then adapt the concept of stopping sets to turbo decoding and state an exact
condition for decoding failure: if turbo decoding is applied until the
transmitted codeword has been recovered or the decoder fails to progress
further, then the set of erased positions that remains when the decoder stops
is equal to the unique maximum-size turbo stopping set that is also a subset of the set of
erased positions. Furthermore, we present some improvements of the basic turbo
decoding algorithm on the BEC. The proposed improved turbo decoding algorithm
has substantially better error performance as illustrated by the given
simulation results. Finally, we give an expression for the turbo stopping set
size enumerating function under the uniform interleaver assumption, and an
efficient enumeration algorithm of small-size turbo stopping sets for a
particular interleaver. The solution is based on the algorithm proposed by
Garello et al. in 2001 to compute an exhaustive list of all low-weight
codewords in a turbo code.
|
cs/0602074
|
The entropy rate of the binary symmetric channel in the rare transitions
regime
|
cs.IT math.IT
|
This note has been withdrawn by the author, as a more complete result was
recently proved by A. Quas and Y. Peres.
|
cs/0602076
|
Exploring term-document matrices from matrix models in text mining
|
cs.IR cs.DB cs.DL
|
We explore a matrix-space model, that is a natural extension to the vector
space model for Information Retrieval. Each document can be represented by a
matrix that is based on document extracts (e.g. sentences, paragraphs,
sections). We focus on the performance of this model for the specific case in
which documents are originally represented as term-by-sentence matrices. We use
the singular value decomposition to approximate the term-by-sentence matrices
and assemble these results to form the pseudo-``term-document'' matrix that
forms the basis of a text mining method alternative to traditional VSM and LSI.
We investigate the singular values of this matrix and provide experimental
evidence suggesting that the method can be particularly effective in terms of
accuracy for text collections with multi-topic documents, such as web pages
with news.
|
cs/0602079
|
SISO APP Searches in Lattices with Tanner Graphs
|
cs.IT cs.DS math.IT
|
An efficient, low-complexity, soft-output detector for general lattices is
presented, based on their Tanner graph (TG) representations. Closest-point
searches in lattices can be performed as non-binary belief propagation on
associated TGs; soft-information output is naturally generated in the process;
the algorithm requires no backtrack (cf. classic sphere decoding), and extracts
extrinsic information. A lattice's coding gain enables equivalence relations
between lattice points, which can be thereby partitioned in cosets. Total and
extrinsic a posteriori probabilities at the detector's output further enable
the use of soft detection information in iterative schemes. The algorithm is
illustrated via two scenarios that transmit a 32-point, uncoded
super-orthogonal (SO) constellation for multiple-input multiple-output (MIMO)
channels, carved from an 8-dimensional non-orthogonal lattice (a direct sum of
two 4-dimensional checkerboard lattices): it achieves maximum likelihood
performance in quasistatic fading; and, performs close to interference-free
transmission, and identically to list sphere decoding, in independent fading
with coordinate interleaving and iterative equalization and detection. The
latter scenario outperforms the former despite the absence of forward error correction
coding---because the inherent lattice coding gain allows for the refining of
extrinsic information. The lattice constellation is the same as the one
employed in the SO space-time trellis codes first introduced for 2-by-2 MIMO by
Ionescu et al., then independently by Jafarkhani and Seshadri. Complexity is
log-linear in lattice dimensionality, vs. cubic in sphere decoders.
|
cs/0602081
|
Low-Density Parity-Check Code with Fast Decoding Speed
|
cs.IT math.IT
|
Low-Density Parity-Check (LDPC) codes received much attention recently due to
their capacity-approaching performance. The iterative message-passing algorithm
is a widely adopted decoding algorithm for LDPC codes \cite{Kschischang01}. An
important design issue for LDPC codes is designing codes with fast decoding
speed while maintaining capacity-approaching performance. In other words, it
is desirable that the code can be successfully decoded in a small number of
decoding iterations while, at the same time, achieving a significant portion of
the channel capacity. Despite its importance, this design issue has received little
attention so far. In this paper, we address this design issue for the case of
binary erasure channel.
We prove that density-efficient capacity-approaching LDPC codes satisfy a
so-called "flatness condition". We show an asymptotic approximation to the number
of decoding iterations. Based on these facts, we propose an approximated
optimization approach to finding the codes with good decoding speed. We further
show that the optimal codes in the sense of decoding speed are
"right-concentrated". That is, the degrees of check nodes concentrate around
the average right degree.
|
cs/0602083
|
A third level trigger programmable on FPGA for the gamma/hadron
separation in a Cherenkov telescope using pseudo-Zernike moments and the SVM
classifier
|
cs.CV cs.AI
|
We studied the application of the Pseudo-Zernike features as image parameters
(instead of the Hillas parameters) for the discrimination between the images
produced by atmospheric electromagnetic showers caused by gamma-rays and the
ones produced by atmospheric electromagnetic showers caused by hadrons in the
MAGIC Experiment. We used a Support Vector Machine as classification algorithm
with the computed Pseudo-Zernike features as classification parameters. We
implemented on a FPGA board a kernel function of the SVM and the Pseudo-Zernike
features to build a third level trigger for the gamma-hadron separation task of
the MAGIC Experiment.
|
cs/0602084
|
Universal Codes as a Basis for Time Series Testing
|
cs.IT math.IT
|
We suggest a new approach to hypothesis testing for ergodic and stationary
processes. In contrast to standard methods, the suggested approach gives a
possibility to make tests, based on any lossless data compression method even
if the distribution law of the codeword lengths is not known. We apply this
approach to the following four problems: goodness-of-fit testing (or identity
testing), testing for independence, testing of serial independence and
homogeneity testing and suggest nonparametric statistical tests for these
problems. It is important to note that so-called archivers used in practice can
be used for the suggested testing.
|
cs/0602085
|
Twenty (or so) Questions: $D$-ary Length-Bounded Prefix Coding
|
cs.IT cs.DS math.IT
|
Efficient optimal prefix coding has long been accomplished via the Huffman
algorithm. However, there is still room for improvement and exploration
regarding variants of the Huffman problem. Length-limited Huffman coding,
useful for many practical applications, is one such variant, for which codes
are restricted to the set of codes in which none of the $n$ codewords is longer
than a given length, $l_{\max}$. Binary length-limited coding can be done in
$O(n l_{\max})$ time and $O(n)$ space via the widely used Package-Merge algorithm
and with even smaller asymptotic complexity using a lesser-known algorithm. In
this paper these algorithms are generalized without increasing complexity in
order to introduce a minimum codeword length constraint $l_{\min}$, to allow
for objective functions other than the minimization of expected codeword
length, and to be applicable to both binary and nonbinary codes; nonbinary
codes were previously addressed using a slower dynamic programming approach.
These extensions have various applications -- including fast decompression and
a modified version of the game ``Twenty Questions'' -- and can be used to solve
the problem of finding an optimal code with limited fringe, that is, finding
the best code among codes with a maximum difference between the longest and
shortest codewords. The previously proposed method for solving this problem ran
in nonpolynomial time, whereas solving it with the novel linear-space algorithm
requires only $O(n (l_{\max}- l_{\min})^2)$ time, or even less if $l_{\max}-
l_{\min}$ is not $O(\log n)$.
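For concreteness, here is a sketch of the classic binary Package-Merge algorithm (the Larmore-Hirschberg coin-collector method) with the maximum-length constraint only; the paper's generalization to a minimum length $l_{\min}$, other objective functions, and nonbinary alphabets is not reproduced here:

```python
def package_merge(weights, l_max):
    """Binary length-limited prefix coding via Package-Merge.

    Returns codeword lengths l_i minimizing sum(w_i * l_i) subject to
    l_i <= l_max and the Kraft equality sum(2^-l_i) = 1.
    """
    n = len(weights)
    assert 2 ** l_max >= n, "l_max too small for n codewords"
    prev = []
    for _ in range(l_max):
        # package adjacent pairs from the previous (deeper) level ...
        packages = [(prev[i][0] + prev[i + 1][0], prev[i][1] + prev[i + 1][1])
                    for i in range(0, len(prev) - 1, 2)]
        # ... and merge them with a fresh copy of the leaves
        fresh = [(w, [i]) for i, w in enumerate(weights)]
        prev = sorted(packages + fresh, key=lambda item: item[0])
    lengths = [0] * n
    # the 2n-2 cheapest items at the top level solve the coin-collector
    # problem; each occurrence of leaf i deepens its codeword by one
    for _, ids in prev[:2 * n - 2]:
        for i in ids:
            lengths[i] += 1
    return lengths

# e.g. weights [1, 1, 2, 4]: unconstrained Huffman lengths are [3, 3, 2, 1];
# with l_max = 2 the optimum flattens to [2, 2, 2, 2]
```

This naive version stores explicit leaf lists and so uses more than linear space; the $O(n)$-space variant cited in the abstract recomputes packages recursively instead.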
|
cs/0602086
|
On the Block Error Probability of LP Decoding of LDPC Codes
|
cs.IT math.IT
|
In his thesis, Wiberg showed the existence of thresholds for families of
regular low-density parity-check codes under min-sum algorithm decoding. He
also derived analytic bounds on these thresholds. In this paper, we formulate
similar results for linear programming decoding of regular low-density
parity-check codes.
|
cs/0602087
|
Bounds on the Threshold of Linear Programming Decoding
|
cs.IT math.IT
|
Whereas many results are known about thresholds for ensembles of low-density
parity-check codes under message-passing iterative decoding, this is not the
case for linear programming decoding. Towards closing this knowledge gap, this
paper presents some bounds on the thresholds of low-density parity-check code
ensembles under linear programming decoding.
|
cs/0602088
|
Towards Low-Complexity Linear-Programming Decoding
|
cs.IT math.IT
|
We consider linear-programming (LP) decoding of low-density parity-check
(LDPC) codes. While it is clear that one can use any general-purpose LP solver
to solve the LP that appears in the decoding problem, we argue in this paper
that the LP at hand is equipped with a lot of structure that one should take
advantage of. Towards this goal, we study the dual LP and show how
coordinate-ascent methods lead to very simple update rules that are tightly
connected to the min-sum algorithm. Moreover, replacing the minima in the
formula of the dual LP with soft-minima, one obtains update rules that are
tightly connected to the sum-product algorithm. This shows that LP solvers with
complexity similar to the min-sum algorithm and the sum-product algorithm are
feasible. Finally, we also discuss some sub-gradient-based methods.
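The min/soft-min connection can be illustrated in isolation (a sketch of the general principle, not the paper's actual update rules): in the log domain, sum-product-style combination is a soft-minimum, and the hard minimum of min-sum is recovered as the temperature goes to zero.

```python
import math

def softmin(values, temperature):
    """Smooth surrogate for min(values): -T * log(sum(exp(-v / T))).

    At temperature T = 1 this is the log-domain combination rule behind
    sum-product-style messages; as T -> 0 it converges to the hard
    minimum used by min-sum. The minimum is subtracted first for
    numerical stability.
    """
    m = min(values)
    return m - temperature * math.log(
        sum(math.exp(-(v - m) / temperature) for v in values))

costs = [1.0, 2.5, 4.0]
# softmin lower-bounds min and tightens as T shrinks:
# T = 1.0  -> about 0.76
# T = 0.01 -> about 1.00 (the hard minimum)
```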
|
cs/0602089
|
Pseudo-Codeword Analysis of Tanner Graphs from Projective and Euclidean
Planes
|
cs.IT cs.DM math.IT
|
In order to understand the performance of a code under maximum-likelihood
(ML) decoding, one studies the codewords, in particular the minimal codewords,
and their Hamming weights. In the context of linear programming (LP) decoding,
one's attention needs to be shifted to the pseudo-codewords, in particular to
the minimal pseudo-codewords, and their pseudo-weights. In this paper we
investigate some families of codes that have good properties under LP decoding,
namely certain families of low-density parity-check (LDPC) codes that are
derived from projective and Euclidean planes: we study the structure of their
minimal pseudo-codewords and give lower bounds on their pseudo-weight.
|
cs/0602091
|
Feedback Capacity of Stationary Gaussian Channels
|
cs.IT math.IT
|
The feedback capacity of additive stationary Gaussian noise channels is
characterized as the solution to a variational problem. Toward this end, it is
proved that the optimal feedback coding scheme is stationary. When specialized
to the first-order autoregressive moving average noise spectrum, this
variational characterization yields a closed-form expression for the feedback
capacity. In particular, this result shows that the celebrated
Schalkwijk-Kailath coding scheme achieves the feedback capacity for the
first-order autoregressive moving average Gaussian channel, positively
answering a long-standing open problem studied by Butman, Schalkwijk-Tiernan,
Wolfowitz, Ozarow, Ordentlich, Yang-Kavcic-Tatikonda, and others. More
generally, it is shown that a k-dimensional generalization of the
Schalkwijk-Kailath coding scheme achieves the feedback capacity for any
autoregressive moving average noise spectrum of order k. Simply put, the
optimal transmitter iteratively refines the receiver's knowledge of the
intended message.
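The "iterative refinement" idea can be sketched for the simplest case of the Schalkwijk-Kailath scheme; this toy simulation assumes white unit-variance Gaussian noise and noiseless feedback, not the ARMA(k) generalization developed in the paper.

```python
import math
import random

def sk_refine(theta, snr, rounds, rng):
    """Schalkwijk-Kailath-style iterative refinement over an AWGN channel
    with noiseless feedback (white-noise toy case).

    Each round the transmitter learns the receiver's current estimate via
    feedback and sends the scaled estimation error at nominal power `snr`;
    the MMSE update shrinks the error variance by a factor (1 + snr) per
    round, so the receiver pins down the message point `theta` rapidly.
    """
    est, var = 0.0, 1.0
    for _ in range(rounds):
        x = math.sqrt(snr / var) * (theta - est)       # nominal power snr
        y = x + rng.gauss(0.0, 1.0)                    # unit-variance noise
        est += math.sqrt(snr * var) / (snr + 1.0) * y  # MMSE correction
        var /= 1.0 + snr
    return est, var

rng = random.Random(0)
est, var = sk_refine(0.3, snr=3.0, rounds=20, rng=rng)
# var = (1 + snr)^-20 ~ 9e-13, so est agrees with theta = 0.3 to high accuracy
```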
|
cs/0602092
|
Inconsistent parameter estimation in Markov random fields: Benefits in
the computation-limited setting
|
cs.LG cs.IT math.IT math.ST stat.TH
|
Consider the problem of joint parameter estimation and prediction in a Markov
random field: i.e., the model parameters are estimated on the basis of an
initial set of data, and then the fitted model is used to perform prediction
(e.g., smoothing, denoising, interpolation) on a new noisy observation. Working
under the restriction of limited computation, we analyze a joint method in
which the \emph{same convex variational relaxation} is used to construct an
M-estimator for fitting parameters, and to perform approximate marginalization
for the prediction step. The key result of this paper is that in the
computation-limited setting, using an inconsistent parameter estimator (i.e.,
an estimator that returns the ``wrong'' model even in the infinite data limit)
can be provably beneficial, since the resulting errors can partially compensate
for errors made by using an approximate prediction technique. En route to this
result, we analyze the asymptotic properties of M-estimators based on convex
variational relaxations, and establish a Lipschitz stability property that
holds for a broad class of variational methods. We show that joint
estimation/prediction based on the reweighted sum-product algorithm
substantially outperforms a commonly used heuristic based on ordinary
sum-product.
|