| id | title | categories | abstract |
|---|---|---|---|
cs/0508085
|
On the Asymptotic Performance of Iterative Decoders for Product Codes
|
cs.IT cs.DM math.IT
|
We consider hard-decision iterative decoders for product codes over the
erasure channel, which employ repeated rounds of decoding rows and columns
alternatingly. We derive the exact asymptotic probability of decoding failure
as a function of the error-correction capabilities of the row and column codes,
the number of decoding rounds, and the channel erasure probability. We examine
both the case of codes capable of correcting a constant number of errors and
the case of codes capable of correcting a constant fraction of their length.
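The alternating row/column procedure this abstract analyzes can be sketched for the erasure channel. The snippet below is an illustrative toy (the function name, parameters, and erasure pattern are invented): it assumes only that a row (column) code correcting t erasures can fill in any row (column) containing at most t erasures, and decoding fails exactly when a "stopping" pattern of erasures survives all rounds.

```python
def iterative_erasure_decode(erased, t_row, t_col, rounds):
    """Alternating row/column erasure decoding of a product code.

    erased: list of lists of bools, True marking an erased symbol.
    Returns the erasure pattern left after the given number of rounds
    (all False => decoding success)."""
    e = [row[:] for row in erased]
    n_rows, n_cols = len(e), len(e[0])
    for _ in range(rounds):
        # Row decoding: fill any row within the row code's capability.
        for i in range(n_rows):
            if 0 < sum(e[i]) <= t_row:
                e[i] = [False] * n_cols
        # Column decoding: same for columns.
        for j in range(n_cols):
            if 0 < sum(e[i][j] for i in range(n_rows)) <= t_col:
                for i in range(n_rows):
                    e[i][j] = False
        if not any(any(row) for row in e):
            break
    return e

# Invented example pattern: rows with a single erasure are filled first,
# which then brings every column within the column code's capability.
pattern = [
    [True,  True,  False, False],
    [False, True,  False, False],
    [False, False, False, True ],
    [False, False, False, False],
]
remaining = iterative_erasure_decode(pattern, t_row=1, t_col=1, rounds=3)
```

A fully erased 2x2 block, by contrast, is a stopping set for t_row = t_col = 1 and never clears, which is the kind of event whose asymptotic probability the abstract characterizes.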
|
cs/0508088
|
Special Cases of Encodings by Generalized Adaptive Codes
|
cs.IT math.IT
|
Adaptive (variable-length) codes associate variable-length codewords to
symbols being encoded depending on the previous symbols in the input data
string. This class of codes has been presented in [Dragos Trinca,
cs.DS/0505007] as a new class of non-standard variable-length codes.
Generalized adaptive codes (GA codes, for short) have also been presented in
[Dragos Trinca, cs.DS/0505007] not only as a new class of non-standard
variable-length codes, but also as a natural generalization of adaptive codes
of any order. This paper is intended to continue developing the theory of
variable-length codes by establishing several interesting connections between
adaptive codes and other classes of codes. The connections are discussed not
only from a theoretical point of view (by proving new results), but also from
an applied one (by proposing several applications). First, we prove that
adaptive Huffman encodings and Lempel-Ziv encodings are particular cases of
encodings by GA codes. Second, we show that any (n,1,m) convolutional code
satisfying certain conditions can be modelled as an adaptive code of order m.
Third, we describe a cryptographic scheme based on the connection between
adaptive codes and convolutional codes, and present an insightful analysis of
this scheme. Finally, we conclude by generalizing adaptive codes to
(p,q)-adaptive codes, and discussing connections between adaptive codes and
time-varying codes.
|
cs/0508092
|
Summarizing Reports on Evolving Events; Part I: Linear Evolution
|
cs.CL cs.IR
|
We present an approach to summarization from multiple documents that report
on events evolving through time, taking into account the different document
sources. We distinguish between linear and non-linear evolution of an event.
According to our approach, each document is represented by a collection of
messages which are then used in order to instantiate the cross-document
relations that determine the summary content. The paper presents the
summarization system that implements this approach through a case study on
linear evolution.
|
cs/0508093
|
Performance of PPM Multipath Synchronization in the Limit of Large
Bandwidth
|
cs.IT math.IT
|
The acquisition, or synchronization, of the multipath profile for an
ultrawideband pulse position modulation (PPM) communication system is
considered. Synchronization is critical for the proper operation of PPM-based
systems. For the multipath channel, it is assumed that channel gains are known, but path
delays are unknown. In the limit of large bandwidth, W, it is assumed that the
number of paths, L, grows. The delay spread of the channel, M, is proportional
to the bandwidth. The rate of growth of L versus M determines whether
synchronization can occur. It is shown that if L/sqrt(M) --> 0, then the
maximum likelihood synchronizer cannot acquire any of the paths and
alternatively if L/M --> 0, the maximum likelihood synchronizer is guaranteed
to miss at least one path.
|
cs/0508094
|
Conference Key Agreement and Quantum Sharing of Classical Secrets with
Noisy GHZ States
|
cs.IT cs.CR math.IT
|
We propose a wide class of distillation schemes for multi-partite entangled
states that are CSS-states. Our proposal provides not only superior efficiency,
but also new insights on the connection between CSS-states and bipartite graph
states. We then consider the applications of our distillation schemes for two
cryptographic tasks--namely, (a) conference key agreement and (b) quantum
sharing of classical secrets. In particular, we construct
``prepare-and-measure'' protocols. We also study the yield of those protocols
and the threshold value of the fidelity above which the protocols can function
securely. Surprisingly, our protocols will function securely even when the
initial state does not violate the standard Bell-inequalities for GHZ states.
Experimental realization involving only bi-partite entanglement is also
suggested.
|
cs/0508095
|
Capacity of Ultra Wide Band Wireless Ad Hoc Networks
|
cs.IT cs.NI math.IT
|
Throughput capacity is a critical parameter for the design and evaluation of
ad-hoc wireless networks. Consider n identical randomly located nodes, on a
unit area, forming an ad-hoc wireless network. Assuming a fixed per node
transmission capability of T bits per second at a fixed range, it has been
shown that the uniform throughput capacity per node r(n) is Theta((T)/(sqrt{n
log n})), a decreasing function of node density n.
However, an alternative communication model may also be considered, with each
node constrained to a maximum transmit power P_0 and capable of utilizing W Hz
of bandwidth. In the limiting case W --> infinity, such as in Ultra
Wide Band (UWB) networks, the uniform throughput per node is
O((n log n)^{(alpha-1)/2}) (upper bound) and
Omega(n^{(alpha-1)/2} / (log n)^{(alpha+1)/2}) (achievable lower bound).
These bounds demonstrate that throughput increases with node density $n$, in
contrast to previously published results. This is the result of the large
bandwidth, and the assumed power and rate adaptation, which alleviate
interference. Thus, the effect of physical layer properties on the capacity of
ad hoc wireless networks is demonstrated. Further, the promise of UWB as a
physical layer technology for ad-hoc networks is justified.
|
cs/0508096
|
On Multiple User Channels with Causal State Information at the
Transmitters
|
cs.IT math.IT
|
We extend Shannon's result on the capacity of channels with state information
to multiple user channels. More specifically, we characterize the capacity
(region) of degraded broadcast channels and physically degraded relay channels
where the channel state information is causally available at the transmitters.
We also obtain inner and outer bounds on the capacity region for multiple
access channels with causal state information at the transmitters.
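As background, Shannon's single-user result referenced above can be stated compactly (a sketch of the known result, not this paper's multiuser contribution): when the state $S$ is known causally at the transmitter, capacity equals that of an ordinary memoryless channel whose inputs are strategies $t : \mathcal{S} \to \mathcal{X}$ mapping the current state to a channel input,

$$C = \max_{p(t)} I(T; Y),$$

where $T$ denotes the random strategy fed to the channel.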
|
cs/0508098
|
An Explicit Construction of Universally Decodable Matrices
|
cs.IT cs.DM math.IT
|
Universally decodable matrices can be used for coding purposes when
transmitting over slow fading channels. These matrices are parameterized by
positive integers $L$ and $n$ and a prime power $q$. Based on Pascal's triangle
we give an explicit construction of universally decodable matrices for any
non-zero integers $L$ and $n$ and any prime power $q$ where $L \leq q+1$. This
is the largest set of possible parameter values since for any list of
universally decodable matrices the value $L$ is upper bounded by $q+1$, except
for the trivial case $n = 1$. For the proof of our construction we use
properties of Hasse derivatives, and it turns out that our construction has
connections to Reed-Solomon codes, Reed-Muller codes, and so-called
repeated-root cyclic codes. Additionally, we show how universally decodable
matrices can be modified so that they remain universally decodable matrices.
|
cs/0508099
|
Search Process and Probabilistic Bifix Approach
|
cs.IT cs.CV math.IT
|
An analytical approach to a search process is a mathematical prerequisite for
digital synchronization acquisition analysis and optimization. A search is
performed for an arbitrary set of sequences within random but not equiprobable
L-ary data. This paper derives in detail an expression for the probability
distribution function, from which other statistical parameters - expected value
and variance - can be obtained. The probabilistic nature of (cross-) bifix
indicators is shown and application examples are outlined, ranging beyond the
usual telecommunication field.
|
cs/0508100
|
A primer on Answer Set Programming
|
cs.AI cs.LO
|
An introduction to the syntax and semantics of Answer Set Programming, intended
as a handout for [under]graduate students taking Artificial Intelligence or
Logic Programming classes.
|
cs/0508101
|
Maximum Weight Matching via Max-Product Belief Propagation
|
cs.IT cs.AI math.IT
|
Max-product "belief propagation" is an iterative, local, message-passing
algorithm for finding the maximum a posteriori (MAP) assignment of a discrete
probability distribution specified by a graphical model. Despite the
spectacular success of the algorithm in many application areas such as
iterative decoding, computer vision and combinatorial optimization which
involve graphs with many cycles, theoretical results about both correctness and
convergence of the algorithm are known in only a few cases (Weiss-Freeman,
Wainwright, Yedidia-Weiss-Freeman, Richardson-Urbanke).
In this paper we consider the problem of finding the Maximum Weight Matching
(MWM) in a weighted complete bipartite graph. We define a probability
distribution on the bipartite graph whose MAP assignment corresponds to the
MWM. We use the max-product algorithm for finding the MAP of this distribution
or equivalently, the MWM on the bipartite graph. Even though the underlying
bipartite graph has many short cycles, we find that surprisingly, the
max-product algorithm always converges to the correct MAP assignment as long as
the MAP assignment is unique. We provide a bound on the number of iterations
required by the algorithm and evaluate the computational cost of the algorithm.
We find that for a graph of size $n$, the computational cost of the algorithm
scales as $O(n^3)$, which is the same as the computational cost of the best
known algorithm. Finally, we establish the precise relation between the
max-product algorithm and the celebrated {\em auction} algorithm proposed by
Bertsekas. This suggests possible connections between dual algorithm and
max-product algorithm for discrete optimization problems.
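The closing remark about Bertsekas' auction algorithm can be made concrete with a minimal sketch (names and parameters invented, not the paper's formulation). Each free row bids for its most valuable column at current prices, raising that column's price by its value margin plus eps; with integer weights and eps < 1/n, the final assignment attains the maximum total weight.

```python
import itertools
import random

def auction_matching(w, eps):
    """Bertsekas-style auction for maximum weight perfect matching on a
    complete bipartite graph with weight matrix w[i][j]."""
    n = len(w)
    prices = [0.0] * n
    owner = [None] * n        # owner[j]: row currently holding column j
    assigned = [None] * n     # assigned[i]: column held by row i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # Net value of each column for bidder i at current prices.
        values = [w[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        best = values[j_best]
        second = max(v for j, v in enumerate(values) if j != j_best)
        prices[j_best] += best - second + eps  # bid increment
        if owner[j_best] is not None:          # outbid the previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned

# Invented 4x4 integer-weight instance; eps = 0.2 < 1/4 guarantees optimality.
random.seed(0)
w = [[random.randint(0, 20) for _ in range(4)] for _ in range(4)]
match = auction_matching(w, eps=0.2)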
|
cs/0508102
|
Investigations of Process Damping Forces in Metal Cutting
|
cs.CE
|
Using finite element software developed for metal cutting by Third Wave
Systems we investigate the forces involved in chatter, a self-sustained
oscillation of the cutting tool. The phenomenon is decomposed into a vibrating
tool cutting a flat-surface workpiece, and a motionless tool cutting a
workpiece with a wavy surface. While cutting the wavy surface, the shear plane
was seen to oscillate in advance of the oscillation of the depth of cut, as
were the cutting, thrust, and shear-plane forces. The vibrating tool was used to
investigate process damping through the interaction of the relief face of the
tool and the workpiece. Crushing forces are isolated and compared to the
contact length between the tool and workpiece. We found that the wavelength
dependence of the forces depended on the relative size of the wavelength to the
length of the relief face of the tool. The results indicate that the damping
force from crushing will be proportional to the cutting speed for short tools,
and inversely proportional for long tools.
|
cs/0508103
|
Corpus-based Learning of Analogies and Semantic Relations
|
cs.LG cs.CL cs.IR
|
We present an algorithm for learning from unlabeled text, based on the Vector
Space Model (VSM) of information retrieval, that can solve verbal analogy
questions of the kind found in the SAT college entrance exam. A verbal analogy
has the form A:B::C:D, meaning "A is to B as C is to D"; for example,
mason:stone::carpenter:wood. SAT analogy questions provide a word pair, A:B,
and the problem is to select the most analogous word pair, C:D, from a set of
five choices. The VSM algorithm correctly answers 47% of a collection of 374
college-level analogy questions (random guessing would yield 20% correct; the
average college-bound senior high school student answers about 57% correctly).
We motivate this research by applying it to a difficult problem in natural
language processing, determining semantic relations in noun-modifier pairs. The
problem is to classify a noun-modifier pair, such as "laser printer", according
to the semantic relation between the noun (printer) and the modifier (laser).
We use a supervised nearest-neighbour algorithm that assigns a class to a given
noun-modifier pair by finding the most analogous noun-modifier pair in the
training data. With 30 classes of semantic relations, on a collection of 600
labeled noun-modifier pairs, the learning algorithm attains an F value of 26.5%
(random guessing: 3.3%). With 5 classes of semantic relations, the F value is
43.2% (random: 20%). The performance is state-of-the-art for both verbal
analogies and noun-modifier relations.
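The VSM selection step can be illustrated with a toy sketch: each word pair is mapped to a vector of corpus-derived pattern frequencies, and the candidate with the highest cosine to the stem pair is selected. All vectors and pairs below are invented for illustration; the actual algorithm derives such vectors from unlabeled text.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical relation vectors: entries might count joining patterns
# such as "X cuts Y" or "X works with Y" in a corpus (numbers invented).
stem_vec = [8, 1, 5, 0]                      # mason:stone
choices = {
    ("carpenter", "wood"): [7, 2, 6, 0],
    ("teacher", "chalk"):  [0, 6, 1, 3],
    ("doctor", "patient"): [1, 5, 0, 4],
}
best = max(choices, key=lambda pair: cosine(stem_vec, choices[pair]))
```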
|
cs/0508104
|
A Generalised Hadamard Transform
|
cs.IT cs.DM math.IT
|
A Generalised Hadamard Transform for multi-phase or multilevel signals is
introduced, which includes the Fourier, Generalised, Discrete Fourier,
Walsh-Hadamard and Reverse Jacket Transforms. The jacket construction is
formalised and shown to admit a tensor product decomposition. Primary matrices
under this decomposition are identified. New examples of primary jacket
matrices of orders 8 and 12 are presented.
|
cs/0508107
|
New Upper Bounds on A(n,d)
|
cs.IT cs.DM math.IT
|
Upper bounds on the maximum number of codewords in a binary code of a given
length and minimum Hamming distance are considered. New bounds are derived by a
combination of linear programming and counting arguments. Some of these bounds
improve on the best known analytic bounds. Several new record bounds are
obtained for codes with small lengths.
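For context, one classical analytic bound of the kind such results are compared against is the sphere-packing (Hamming) bound, sketched below (a standard textbook bound, not one of the paper's new bounds):

```python
from math import comb

def hamming_upper_bound(n, d):
    """Sphere-packing upper bound on A(n, d) for binary codes:
    balls of radius t = floor((d-1)/2) around codewords are disjoint,
    so A(n, d) <= 2^n / |ball|."""
    t = (d - 1) // 2
    ball = sum(comb(n, i) for i in range(t + 1))
    return (2 ** n) // ball
```

For example, the bound gives A(7,3) <= 16, which the Hamming code attains with equality.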
|
cs/0508114
|
A Family of Binary Sequences with Optimal Correlation Property and Large
Linear Span
|
cs.CR cs.IT math.IT
|
A family of binary sequences is presented and proved to have optimal
correlation property and large linear span. It includes the small set of Kasami
sequences, No sequence set and TN sequence set as special cases. An explicit
lower bound expression on the linear span of sequences in the family is given.
With suitable choices of parameters, it is proved that the family has
exponentially larger linear spans than both No sequences and TN sequences. A
class of ideal autocorrelation sequences is also constructed and proved to have
large linear span.
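For intuition about ideal autocorrelation, the sketch below (tap positions chosen for a degree-4 primitive recurrence; not the paper's construction) generates a period-15 m-sequence, the classical building block of Kasami-type families, whose periodic autocorrelation is 15 at shift 0 and -1 at every other shift:

```python
def lfsr_msequence(taps, degree, length):
    """Binary m-sequence from a Fibonacci LFSR with 1-indexed feedback taps."""
    state = [1] * degree            # any nonzero initial state
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]   # shift in the feedback bit
    return out

def periodic_autocorr(seq, shift):
    """Periodic autocorrelation after mapping {0,1} -> {+1,-1}."""
    n = len(seq)
    s = [1 - 2 * b for b in seq]
    return sum(s[i] * s[(i + shift) % n] for i in range(n))

# Taps [1, 4] realize a primitive degree-4 recurrence, so the period is 15.
seq = lfsr_msequence([1, 4], 4, 15)
```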
|
cs/0508115
|
New Sequence Sets with Zero-Correlation Zone
|
cs.IT math.IT
|
A method for constructing sets of sequences with zero-correlation zone (ZCZ
sequences) and sequence sets with low cross correlation is proposed. The method
is to use families of short sequences and complete orthogonal sequence sets to
derive families of long sequences with desired correlation properties. It
unifies the works of Matsufuji and Torii \emph{et al.}, and offers more
choices of set parameters. In particular, ZCZ sequence sets
generated by the method can achieve a related ZCZ bound. Furthermore, the
proposed method can be utilized to derive new ZCZ sets with both longer ZCZ and
larger set size from known ZCZ sets. These sequence sets are applicable in
broadband satellite IP networks.
|
cs/0508117
|
Long-term neuronal behavior caused by two synaptic modification
mechanisms
|
cs.NE cs.CE
|
We report the first results of simulating the coupling of neuronal,
astrocyte, and cerebrovascular activity. It is suggested that the dynamics of
the system is different from systems that only include neurons. In the
neuron-vascular coupling, the distribution of synapse strengths affects neuronal
behavior and thus the balance of blood flow; oscillations are induced in the
neuron-to-astrocyte coupling.
|
cs/0508118
|
Unified Theory of Source Coding: Part I -- Two Terminal Problems
|
cs.IT math.IT
|
Since the publication of Shannon's theory of one terminal source coding, a
number of interesting extensions have been derived by researchers such as
Slepian-Wolf, Wyner, Ahlswede-K\"{o}rner, Wyner-Ziv and Berger-Yeung.
Specifically, the achievable rate or rate-distortion region has been described
by a first order information-theoretic functional of the source statistics in
each of the above cases. At the same time several problems have also remained
unsolved. Notable two terminal examples include the joint distortion problem,
where both sources are reconstructed under a combined distortion criterion, as
well as the partial side information problem, where one source is reconstructed
under a distortion criterion using information about the other (side
information) available at a certain rate (partially). In this paper we solve
both of these open problems. Specifically, we give an infinite order
description of the achievable rate-distortion region in each case. In our
analysis we set the above problems in a general framework and formulate a
unified methodology that solves not only the problems at hand but any two
terminal problem with noncooperative encoding. The key to such unification is
held by a fundamental source coding principle which we derive by extending the
typicality arguments of Shannon and Wyner-Ziv. Finally, we demonstrate the
expansive scope of our technique by re-deriving known coding theorems. We shall
observe that our infinite order descriptions simplify to the expected first
order in the known special cases.
|
cs/0508119
|
Unified Theory of Source Coding: Part II -- Multiterminal Problems
|
cs.IT math.IT
|
In the first paper of this two part communication, we solved in a unified
framework a variety of two terminal source coding problems with noncooperative
encoders, thereby consolidating works of Shannon, Slepian-Wolf, Wyner,
Ahlswede-K\"{o}rner, Wyner-Ziv, Berger {\em et al.} and Berger-Yeung. To
achieve such unification we made use of a fundamental principle that
dissociates bulk of the analysis from the distortion criterion at hand (if any)
and extends the typicality arguments of Shannon and Wyner-Ziv. In this second
paper, we generalize the fundamental principle for any number of sources and on
its basis exhaustively solve all multiterminal source coding problems with
noncooperative encoders and one decoder. The distortion criteria, when
applicable, are required to apply to single letters and be bounded. Our
analysis includes cases where side information is, respectively, partially
available, completely available and altogether unavailable at the decoder. As
seen in our first paper, the achievable regions permit infinite order
information-theoretic descriptions. We also show that the entropy-constrained
multiterminal estimation problem can be solved as a special case of our theory.
|
cs/0508120
|
Iterative Algorithm for Finding Frequent Patterns in Transactional
Databases
|
cs.DB
|
A high-performance algorithm for searching for frequent patterns (FPs) in
transactional databases is presented. The search for FPs is carried out by
using an iterative sieve algorithm that computes a set of enclosed cycles. In
each inner cycle of a given level, FPs composed of the corresponding number of
elements are generated. The assigned
number of enclosed cycles (the parameter of the problem) defines the maximum
length of the desired FPs. The efficiency of the algorithm is produced by (i)
the extremely simple logical searching scheme, (ii) the avoidance of recursive
procedures, and (iii) the usage of only one-dimensional arrays of integers.
|
cs/0508121
|
How Good is Phase-Shift Keying for Peak-Limited Rayleigh Fading Channels
in the Low-SNR Regime?
|
cs.IT math.IT
|
This paper investigates the achievable information rate of phase-shift keying
(PSK) over frequency non-selective Rayleigh fading channels without channel
state information (CSI). The fading process exhibits general temporal
correlation characterized by its spectral density function. We consider both
discrete-time and continuous-time channels, and find their asymptotics at low
signal-to-noise ratio (SNR). Compared to known capacity upper bounds under peak
constraints, these asymptotics usually lead to negligible rate loss in the
low-SNR regime for slowly time-varying fading channels. We further specialize
to case studies of Gauss-Markov and Clarke's fading models.
|
cs/0508122
|
Streaming and Sublinear Approximation of Entropy and Information
Distances
|
cs.DS cs.IT math.IT
|
In many problems in data mining and machine learning, data items that need to
be clustered or classified are not points in a high-dimensional space, but are
distributions (points on a high dimensional simplex). For distributions,
natural measures of distance are not the $\ell_p$ norms and variants, but
information-theoretic measures like the Kullback-Leibler distance, the
Hellinger distance, and others. Efficient estimation of these distances is a
key component in algorithms for manipulating distributions. Thus, sublinear
resource constraints, either in time (property testing) or space (streaming)
are crucial.
We start by resolving two open questions regarding property testing of
distributions. Firstly, we show a tight bound for estimating bounded, symmetric
f-divergences between distributions in a general property testing (sublinear
time) framework (the so-called combined oracle model). This yields optimal
algorithms for estimating such well known distances as the Jensen-Shannon
divergence and the Hellinger distance. Secondly, we close a $(\log n)/H$ gap
between upper and lower bounds for estimating entropy $H$ in this model. In a
stream setting (sublinear space), we give the first algorithm for estimating
the entropy of a distribution. Our algorithm runs in polylogarithmic space and
yields an asymptotic constant factor approximation scheme. We also provide
other results along the space/time/approximation tradeoff curve.
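As a baseline for the streaming result, the naive plug-in estimator computes the empirical entropy in one pass but needs space linear in the alphabet size; the polylogarithmic-space algorithm improves on exactly this (names invented):

```python
from collections import Counter
from math import log2

def plugin_entropy(stream):
    """Empirical (plug-in) entropy of a token stream, in bits.
    One pass over the data, but O(|alphabet|) space -- the baseline
    that a polylog-space streaming algorithm improves upon."""
    counts = Counter(stream)
    n = sum(counts.values())
    return -sum((c / n) * log2(c / n) for c in counts.values())
```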
|
cs/0508124
|
Coding Schemes for Line Networks
|
cs.IT cs.DC cs.NI math.IT
|
We consider a simple network, where a source and destination node are
connected with a line of erasure channels. It is well known that in order to
achieve the min-cut capacity, the intermediate nodes are required to process
the information. We propose coding schemes for this setting, and discuss each
scheme in terms of complexity, delay, achievable rate, memory requirement, and
adaptability to unknown channel parameters. We also briefly discuss how these
schemes can be extended to more general networks.
|
cs/0508126
|
A Closed-Form Solution for the Finite Length Constant Modulus Receiver
|
cs.GT cs.IT math.IT
|
In this paper, a closed-form solution minimizing the Godard or Constant
Modulus (CM) cost function under the practical conditions of finite SNR and
finite equalizer length is derived. While previous work has been reported by
Zeng et al., IEEE Trans. Information Theory. 1998, to establish the link
between the constant modulus and Wiener receivers, we show that under the
Gaussian approximation of intersymbol interference at the output of the
equalizer, the CM finite-length receiver is equivalent to the nonblind MMSE
equalizer up to a complex gain factor. Some simulation results are provided to
support the Gaussian approximation assumption.
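For contrast with the closed-form solution, the standard stochastic-gradient minimization of the Godard/CM cost (the classical CMA iteration, not this paper's receiver) can be sketched in a real-valued BPSK toy setting with invented channel and step-size parameters:

```python
import random

def cma_equalize(x, n_taps, mu, R=1.0):
    """Stochastic-gradient descent on the CM cost E[(y^2 - R)^2]
    for a real (BPSK) signal: w <- w - mu * (y^2 - R) * y * x."""
    w = [0.0] * n_taps
    w[n_taps // 2] = 1.0                  # center-spike initialization
    for t in range(n_taps, len(x)):
        window = x[t - n_taps:t][::-1]
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = (y * y - R) * y               # instantaneous CM gradient term
        w = [wi - mu * e * xi for wi, xi in zip(w, window)]
    return w

def dispersion(w, x):
    """Empirical CM cost of equalizer w on signal x (R = 1)."""
    n = len(w)
    ys = [sum(wi * xi for wi, xi in zip(w, x[t - n:t][::-1]))
          for t in range(n, len(x))]
    return sum((y * y - 1.0) ** 2 for y in ys) / len(ys)

# BPSK through a mild ISI channel h = [1, 0.4] (invented toy setting).
random.seed(1)
s = [random.choice([-1.0, 1.0]) for _ in range(3000)]
x = [s[t] + 0.4 * s[t - 1] for t in range(1, len(s))]
w = cma_equalize(x, n_taps=5, mu=0.001)
```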
|
cs/0508127
|
On context-tree prediction of individual sequences
|
cs.IT math.IT
|
Motivated by the evident success of context-tree based methods in lossless
data compression, we explore, in this paper, methods of the same spirit in
universal prediction of individual sequences. By context-tree prediction, we
refer to a family of prediction schemes, where at each time instant $t$, after
having observed all outcomes of the data sequence $x_1,...,x_{t-1}$, but not
yet $x_t$, the prediction is based on a ``context'' (or a state) that consists
of the $k$ most recent past outcomes $x_{t-k},...,x_{t-1}$, where the choice of
$k$ may depend on the contents of a possibly longer, though limited, portion of
the observed past, $x_{t-k_{\max}},...,x_{t-1}$. This is different from the
study reported in [1], where general finite-state predictors as well as
``Markov'' (finite-memory) predictors of fixed order, were studied in the
regime of individual sequences.
Another important difference between this study and [1] is the asymptotic
regime. While in [1], the resources of the predictor (i.e., the number of
states or the memory size) were kept fixed regardless of the length $N$ of the
data sequence, here we investigate situations where the number of contexts or
states is allowed to grow concurrently with $N$. We are primarily interested in
the following fundamental question: What is the critical growth rate of the
number of contexts, below which the performance of the best context-tree
predictor is still universally achievable, but above which it is not? We show
that this critical growth rate is linear in $N$. In particular, we propose a
universal context-tree algorithm that essentially achieves optimum performance
as long as the growth rate is sublinear, and show that, on the other hand, this
is impossible in the linear case.
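A minimal illustration of the context-tree idea (an invented toy, not the paper's universal algorithm): among contexts of length at most k_max, use the longest one seen before in the data and predict its most frequent successor.

```python
from collections import defaultdict

def context_tree_predict(history, k_max):
    """Predict the next symbol from the longest context (up to k_max)
    observed earlier in `history`, breaking to shorter contexts if the
    longest one is unseen; within a context, pick the most frequent
    successor."""
    # counts[context][symbol]: times `symbol` followed `context`
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(len(history)):
        for k in range(1, k_max + 1):
            if t - k >= 0:
                ctx = tuple(history[t - k:t])
                counts[ctx][history[t]] += 1
    for k in range(min(k_max, len(history)), 0, -1):
        ctx = tuple(history[-k:])
        if ctx in counts:
            succ = counts[ctx]
            return max(succ, key=succ.get)
    return None

# On a periodic sequence the longest context determines the next symbol.
nxt = context_tree_predict([0, 1, 2, 0, 1, 2, 0, 1], 3)
```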
|
cs/0508129
|
Temporal Phylogenetic Networks and Logic Programming
|
cs.LO cs.AI cs.PL
|
The concept of a temporal phylogenetic network is a mathematical model of
evolution of a family of natural languages. It takes into account the fact that
languages can trade their characteristics with each other when linguistic
communities are in contact, and also that a contact is only possible when the
languages are spoken at the same time. We show how computational methods of
answer set programming and constraint logic programming can be used to generate
plausible conjectures about contacts between prehistoric linguistic
communities, and illustrate our approach by applying it to the evolutionary
history of Indo-European languages.
To appear in Theory and Practice of Logic Programming (TPLP).
|
cs/0508130
|
A Fresh Look at the Reliability of Long-term Digital Storage
|
cs.DL cs.DB cs.OS
|
Many emerging Web services, such as email, photo sharing, and web site
archives, need to preserve large amounts of quickly-accessible data
indefinitely into the future. In this paper, we make the case that these
applications' demands on large scale storage systems over long time horizons
require us to re-evaluate traditional storage system designs. We examine
threats to long-lived data from an end-to-end perspective, taking into account
not just hardware and software faults but also faults due to humans and
organizations. We present a simple model of long-term storage failures that
helps us reason about the various strategies for addressing these threats in a
cost-effective manner. Using this model we show that the most important
strategies for increasing the reliability of long-term storage are detecting
latent faults quickly, automating fault repair to make it faster and cheaper,
and increasing the independence of data replicas.
|
cs/0508132
|
Planning with Preferences using Logic Programming
|
cs.AI
|
We present a declarative language, PP, for the high-level specification of
preferences between possible solutions (or trajectories) of a planning problem.
This novel language allows users to elegantly express non-trivial,
multi-dimensional preferences and priorities over such preferences. The
semantics of PP allows the identification of most preferred trajectories for a
given goal. We also provide an answer set programming implementation of
planning problems with PP preferences.
|
cs/0509001
|
Asymptotic Behavior of Error Exponents in the Wideband Regime
|
cs.IT math.IT
|
In this paper, we complement Verd\'{u}'s work on spectral efficiency in the
wideband regime by investigating the fundamental tradeoff between rate and
bandwidth when a constraint is imposed on the error exponent. Specifically, we
consider both AWGN and Rayleigh-fading channels. For the AWGN channel model,
the optimal values of $R_z(0)$ and $\dot{R_z}(0)$ are calculated, where
$R_z(1/B)$ is the maximum rate at which information can be transmitted over a
channel with bandwidth $B/2$ when the error-exponent is constrained to be
greater than or equal to $z.$ Based on this calculation, we say that a sequence
of input distributions is near optimal if both $R_z(0)$ and $\dot{R_z}(0)$ are
achieved. We show that QPSK, a widely-used signaling scheme, is near-optimal
within a large class of input distributions for the AWGN channel. Similar
results are also established for a fading channel where full CSI is available
at the receiver.
|
cs/0509002
|
Component Based Programming in Scientific Computing: The Viable Approach
|
cs.CE
|
Computational scientists are facing a new era where the old ways of
developing and reusing code have to be left behind and a few daring steps are
to be made towards new horizons. The present work analyzes the needs that drive
this change, the factors that contribute to the inertia of the community and
slow the transition, the status and perspective of present attempts, the
principal, practical, and technical problems that are to be addressed in the
short and long run.
|
cs/0509003
|
COMODI: Architecture for a Component-Based Scientific Computing System
|
cs.CE
|
The COmputational MODule Integrator (COMODI) is an initiative aiming at a
component based framework, component developer tool and component repository
for scientific computing. We identify the main ingredients to a solution that
would be sufficiently appealing to scientists and engineers to consider
alternatives to their deeply rooted programming traditions. The overall
structure of the complete solution is sketched with special emphasis on the
Component Developer Tool standing at the basis of COMODI.
|
cs/0509005
|
Combining Structured Corporate Data and Document Content to Improve
Expertise Finding
|
cs.IR
|
In this paper, we present an algorithm for automatically building expertise
evidence for finding experts within an organization by combining structured
corporate information with different content. We also describe our test data
collection and our evaluation method. Evaluation of the algorithm shows that
using organizational structure leads to a significant improvement in the
precision of finding an expert. Furthermore, we evaluate the impact of using
different data sources on the quality of the results and conclude that Expert
Finding is not a "one engine fits all" solution. It requires an analysis of the
information space into which a solution will be placed and the appropriate
selection and weighting scheme of the data sources.
|
cs/0509006
|
Optimal space-time codes for the MIMO amplify-and-forward cooperative
channel
|
cs.IT math.IT
|
In this work, we extend the non-orthogonal amplify-and-forward (NAF)
cooperative diversity scheme to the MIMO channel. A family of space-time block
codes for a half-duplex MIMO NAF fading cooperative channel with N relays is
constructed. The code construction is based on the non-vanishing determinant
criterion (NVD) and is shown to achieve the optimal diversity-multiplexing
tradeoff (DMT) of the channel. We provide a general explicit algebraic
construction, followed by some examples. In particular, in the single relay
case, it is proved that the Golden code and the 4x4 Perfect code are optimal
for the single-antenna and two-antenna case, respectively. Simulation results
reveal that a significant gain (up to 10dB) can be obtained with the proposed
codes, especially in the single-antenna case.
|
cs/0509007
|
Non-Data-Aided Parameter Estimation in an Additive White Gaussian Noise
Channel
|
cs.IT math.IT
|
Non-data-aided (NDA) parameter estimation is considered for
binary-phase-shift-keying transmission in an additive white Gaussian noise
channel. Cramer-Rao lower bounds (CRLBs) for signal amplitude, noise variance,
channel reliability constant and bit-error rate are derived and it is shown how
these parameters relate to the signal-to-noise ratio (SNR). An alternative
derivation of the iterative maximum likelihood (ML) SNR estimator is presented
together with a novel, low complexity NDA SNR estimator. The performance of the
proposed estimator is compared to previously suggested estimators and the CRLB.
The results show that the proposed estimator performs close to the iterative ML
estimator at significantly lower computational complexity.
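For reference, a classical low-complexity NDA SNR estimator for BPSK in AWGN is the moments-based M2M4 estimator; the sketch below is this well-known baseline, not the estimator proposed in the paper, and all signal parameters are illustrative:

```python
import math
import random

def m2m4_snr(y):
    """Moments-based (M2M4) NDA SNR estimate for real-valued BPSK in AWGN.
    For y = +/-A + n with noise variance s2:
      E[y^2] = A^2 + s2,   E[y^4] = A^4 + 6 A^2 s2 + 3 s2^2,
    hence A^2 = sqrt((3 M2^2 - M4) / 2) and SNR = A^2 / s2."""
    m2 = sum(v * v for v in y) / len(y)
    m4 = sum(v ** 4 for v in y) / len(y)
    a2 = math.sqrt(max(0.0, (3 * m2 * m2 - m4) / 2))  # signal power estimate
    noise = max(m2 - a2, 1e-12)                       # noise power estimate
    return a2 / noise

rng = random.Random(1)
A, sigma = 1.0, 0.5  # true SNR = A^2 / sigma^2 = 4 (about 6 dB)
y = [A * rng.choice((-1, 1)) + rng.gauss(0, sigma) for _ in range(200000)]
print(m2m4_snr(y))   # close to 4
```

The estimate uses only the second and fourth sample moments, so no data decisions or iterations are needed.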
|
cs/0509008
|
Joint Equalization and Decoding for Nonlinear Two-Dimensional
Intersymbol Interference Channels with Application to Optical Storage
|
cs.IT math.IT
|
An algorithm that performs joint equalization and decoding for nonlinear
two-dimensional intersymbol interference channels is presented. The algorithm
performs sum-product message-passing on a factor graph that represents the
underlying system. The two-dimensional optical storage (TWODOS) technology is
an example of a system with nonlinear two-dimensional intersymbol interference.
Simulations for the nonlinear channel model of TWODOS show a significant
performance improvement over the uncoded system. Noise tolerance thresholds
for the algorithm for the TWODOS channel, computed using density evolution, are
also presented and accurately predict the limiting performance of the algorithm
as the codeword length increases.
|
cs/0509009
|
Joint Equalization and Decoding for Nonlinear Two-Dimensional
Intersymbol Interference Channels
|
cs.IT math.IT
|
An algorithm that performs joint equalization and decoding for channels with
nonlinear two-dimensional intersymbol interference is presented. The algorithm
performs sum-product message-passing on a factor graph that represents the
underlying system. The two-dimensional optical storage (TwoDOS) technology is
an example of a system with nonlinear two-dimensional intersymbol interference.
Simulations for the nonlinear channel model of TwoDOS show a significant
performance improvement over the uncoded system. Noise tolerance thresholds
for the TwoDOS channel computed using density evolution are also presented.
|
cs/0509010
|
Minimum Mean-Square-Error Equalization using Priors for Two-Dimensional
Intersymbol Interference
|
cs.IT math.IT
|
Joint equalization and decoding schemes are described for two-dimensional
intersymbol interference (ISI) channels. Equalization is performed using the
minimum mean-square-error (MMSE) criterion. Low-density parity-check codes are
used for error correction. The MMSE schemes are the extension of those proposed
by Tuechler et al. (2002) for one-dimensional ISI channels. Extrinsic
information transfer charts, density evolution, and bit-error rate versus
signal-to-noise ratio curves are used to study the performance of the schemes.
|
cs/0509011
|
Clustering Mixed Numeric and Categorical Data: A Cluster Ensemble
Approach
|
cs.AI
|
Clustering is a widely used technique in data mining applications for
discovering patterns in underlying data. Most traditional clustering algorithms
are limited to handling datasets that contain either numeric or categorical
attributes. However, datasets with mixed types of attributes are common in real
life data mining applications. In this paper, we propose a novel
divide-and-conquer technique to solve this problem. First, the original mixed
dataset is divided into two sub-datasets: the pure categorical dataset and the
pure numeric dataset. Next, existing well established clustering algorithms
designed for different types of datasets are employed to produce corresponding
clusters. Last, the clustering results on the categorical and numeric datasets
are combined as a new categorical dataset, on which the categorical data
clustering algorithm is used to obtain the final clusters. Our contribution in
this paper is an algorithmic framework for the mixed-attribute clustering
problem into which existing clustering algorithms can be easily integrated, so
that the capabilities of different kinds of clustering algorithms and the
characteristics of different types of datasets can be fully exploited.
Comparisons with other
clustering algorithms on real life datasets illustrate the superiority of our
approach.
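A minimal sketch of the combination step described above, assuming the labels from the numeric and categorical base clusterers are already available; the toy k-modes implementation and data are ours, not prescribed by the paper:

```python
import random
from collections import Counter

def k_modes(rows, k, iters=20, seed=0):
    """Minimal k-modes: k-means over categorical tuples, with Hamming
    distance and per-attribute modes as cluster centers."""
    rng = random.Random(seed)
    centers = rng.sample(rows, k)
    dist = lambda r, c: sum(a != b for a, b in zip(r, c))
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in rows:
            clusters[min(range(k), key=lambda j: dist(r, centers[j]))].append(r)
        new = [centers[j] if not cl else
               tuple(Counter(col).most_common(1)[0][0] for col in zip(*cl))
               for j, cl in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return [min(range(k), key=lambda j: dist(r, centers[j])) for r in rows]

# Ensemble step: labels from a numeric clusterer and a categorical clusterer
# form a two-attribute categorical dataset, which is clustered again by k-modes.
numeric_labels     = [0, 0, 1, 1, 0, 1]
categorical_labels = [0, 0, 1, 1, 1, 1]
combined = list(zip(numeric_labels, categorical_labels))
final = k_modes(combined, k=2)
print(final)
```

Any categorical clustering algorithm could replace k-modes in the final step; the framework only requires that the combined label matrix be treated as categorical data.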
|
cs/0509012
|
Kriging Scenario For Capital Markets
|
cs.CE
|
An introduction to numerical statistics.
|
cs/0509013
|
On the variational distance of independently repeated experiments
|
cs.IT math.IT
|
Let P and Q be two probability distributions which differ only for values
with non-zero probability. We show that the variational distance between the
n-fold product distributions P^n and Q^n cannot grow faster than the square
root of n.
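The bound can be checked by brute force on a small binary example (an illustration only; the paper's result is the general square-root-of-n bound):

```python
import math
from itertools import product

def tv_product(P, Q, n):
    """Variational (total variation) distance between the n-fold
    product distributions P^n and Q^n, by exhaustive enumeration."""
    d = 0.0
    for seq in product(range(len(P)), repeat=n):
        p = math.prod(P[i] for i in seq)
        q = math.prod(Q[i] for i in seq)
        d += abs(p - q)
    return 0.5 * d

P, Q = [0.5, 0.5], [0.6, 0.4]
for n in (1, 4, 16):
    # the ratio d(P^n, Q^n) / sqrt(n) stays bounded as n grows
    print(n, tv_product(P, Q, n), tv_product(P, Q, n) / math.sqrt(n))
```

Enumeration is exponential in n, so this check is only feasible for small n; it merely illustrates the monotone, sub-square-root growth of the distance.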
|
cs/0509014
|
Density Evolution for Asymmetric Memoryless Channels
|
cs.IT math.IT
|
Density evolution is one of the most powerful analytical tools for
low-density parity-check (LDPC) codes and graph codes with message passing
decoding algorithms. With channel symmetry as one of its fundamental
assumptions, density evolution (DE) has been widely and successfully applied to
different channels, including binary erasure channels, binary symmetric
channels, binary additive white Gaussian noise channels, etc. This paper
generalizes density evolution for non-symmetric memoryless channels, which in
turn broadens the applications to general memoryless channels, e.g. z-channels,
composite white Gaussian noise channels, etc. The central theorem underpinning
this generalization is the convergence to perfect projection for any fixed size
supporting tree. A new iterative formula of the same complexity is then
presented and the necessary theorems for the performance concentration theorems
are developed. Several properties of the new density evolution method are
explored, including stability results for general asymmetric memoryless
channels. Simulations, code optimizations, and possible new applications
suggested by this new density evolution method are also provided. This result
is also used to prove the typicality of linear LDPC codes among the coset code
ensemble when the minimum check node degree is sufficiently large. It is shown
that the convergence to perfect projection is essential to the belief
propagation algorithm even when only symmetric channels are considered. Hence
the proof of the convergence to perfect projection serves also as a completion
of the theory of classical density evolution for symmetric memoryless channels.
|
cs/0509015
|
Optimal Prefix Codes with Fewer Distinct Codeword Lengths are Faster to
Construct
|
cs.DS cs.IT math.IT
|
A new method for constructing minimum-redundancy binary prefix codes is
described. Our method does not explicitly build a Huffman tree; instead it uses
a property of optimal prefix codes to compute the codeword lengths
corresponding to the input weights. Let $n$ be the number of weights and $k$ be
the number of distinct codeword lengths as produced by the algorithm for the
optimum codes. The running time of our algorithm is $O(k \cdot n)$. Following
our previous work in \cite{be}, no algorithm can possibly construct optimal
prefix codes in $o(k \cdot n)$ time. When the given weights are presorted, our
algorithm performs $O(9^k \cdot \log^{2k}{n})$ comparisons.
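For contrast, the textbook two-queue Huffman merge computes the same optimal codeword lengths in linear time once the weights are sorted; the sketch below is this standard baseline, not the paper's $O(k \cdot n)$ algorithm (and its leaf-id bookkeeping is kept naive for clarity):

```python
from collections import deque

def huffman_lengths(weights):
    """Codeword lengths of an optimal binary prefix code, for weights
    presorted in ascending order, via the classic two-queue Huffman merge.
    Tracks, for each merged node, which leaves it contains and bumps
    their depths (simple but quadratic in the worst case)."""
    n = len(weights)
    if n == 1:
        return [1]
    leaves = deque((w, [i]) for i, w in enumerate(weights))
    merged = deque()
    depth = [0] * n

    def pop_min():
        if not merged or (leaves and leaves[0][0] <= merged[0][0]):
            return leaves.popleft()
        return merged.popleft()

    while len(leaves) + len(merged) > 1:
        w1, ids1 = pop_min()
        w2, ids2 = pop_min()
        for i in ids1 + ids2:
            depth[i] += 1          # every leaf under the new node gets deeper
        merged.append((w1 + w2, ids1 + ids2))
    return depth

print(huffman_lengths([1, 1, 2, 4, 8]))  # [4, 4, 3, 2, 1]
```

The resulting lengths always satisfy the Kraft equality for an optimal code; any sorted weight list works as input.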
|
cs/0509017
|
Traders imprint themselves by adaptively updating their own avatar
|
cs.MA cs.CE
|
Simulations of artificial stock markets were considered as early as 1964 and
multi-agent ones were introduced as early as 1989. Starting in the early 1990s,
collaborations of economists and physicists produced increasingly realistic
simulation platforms. Currently, the market stylized facts are easily
reproduced, and one must now address the realistic details of market
microstructure and of trader behaviour. This calls for new methods and
tools capable of bridging smoothly between simulations and experiments in
economics.
We propose here the following Avatar-Based Method (ABM). The subjects
implement and maintain their Avatars (programs encoding their personal decision
making procedures) on NatLab, a market simulation platform. Once these
procedures are fed in a computer edible format, they can be operationally used
as such without the need for belabouring, interpreting or conceptualising them.
Thus ABM short-circuits the usual behavioural economics experiments that search
for the psychological mechanisms underlying the subjects' behaviour. Finally,
ABM maintains a level of objectivity close to that of classical behaviourism
while extending its scope to the subjects' decision-making mechanisms.
We report on experiments where Avatars designed and maintained by humans from
different backgrounds (including real traders) compete in a continuous
double-auction market. We hope this unbiased way of capturing the adaptive
evolution of real subjects' behaviour may lead to a new kind of behavioural
economics experiments with a high degree of reliability, analysability and
reproducibility.
|
cs/0509020
|
Transitive Text Mining for Information Extraction and Hypothesis
Generation
|
cs.IR cs.AI
|
Transitive text mining - also named Swanson Linking (SL) after its primary
and principal researcher - tries to establish meaningful links between
literature sets which are virtually disjoint in the sense that each does not
mention the main concept of the other. If successful, SL may give rise to the
development of new hypotheses. In this communication we describe our approach
to transitive text mining which employs co-occurrence analysis of the medical
subject headings (MeSH), the descriptors assigned to papers indexed in PubMed.
In addition, we will outline the current state of our web-based information
system which will enable our users to perform literature-driven hypothesis
building on their own.
|
cs/0509021
|
The Throughput-Reliability Tradeoff in MIMO Channels
|
cs.IT math.IT
|
In this paper, an outage limited MIMO channel is considered. We build on
Zheng and Tse's elegant formulation of the diversity-multiplexing tradeoff to
develop a better understanding of the asymptotic relationship between the
probability of error, transmission rate, and signal-to-noise ratio. In
particular, we identify the limitation imposed by the multiplexing gain notion
and develop a new formulation for the throughput-reliability tradeoff that
avoids this limitation. The new characterization is then used to elucidate the
asymptotic trends exhibited by the outage probability curves of MIMO channels.
|
cs/0509022
|
Achievable Rates for Pattern Recognition
|
cs.IT cs.CV math.IT
|
Biological and machine pattern recognition systems face a common challenge:
Given sensory data about an unknown object, classify the object by comparing
the sensory data with a library of internal representations stored in memory.
In many cases of interest, the number of patterns to be discriminated and the
richness of the raw data force recognition systems to internally represent
memory and sensory information in a compressed format. However, these
representations must preserve enough information to accommodate the variability
and complexity of the environment, or else recognition will be unreliable.
Thus, there is an intrinsic tradeoff between the amount of resources devoted to
data representation and the complexity of the environment in which a
recognition system may reliably operate.
In this paper we describe a general mathematical model for pattern
recognition systems subject to resource constraints, and show how the
aforementioned resource-complexity tradeoff can be characterized in terms of
three rates related to number of bits available for representing memory and
sensory data, and the number of patterns populating a given statistical
environment. We prove single-letter information theoretic bounds governing the
achievable rates, and illustrate the theory by analyzing the elementary cases
where the pattern data is either binary or Gaussian.
|
cs/0509025
|
A formally verified proof of the prime number theorem
|
cs.AI cs.LO cs.SC
|
The prime number theorem, established by Hadamard and de la Vallée Poussin
independently in 1896, asserts that the density of primes in the positive
integers is asymptotic to 1 / ln x. Whereas their proofs made serious use of
the methods of complex analysis, elementary proofs were provided by Selberg and
Erdős in 1948. We describe a formally verified version of Selberg's proof,
obtained using the Isabelle proof assistant.
|
cs/0509028
|
Projecting the Forward Rate Flow onto a Finite Dimensional Manifold
|
cs.CE cs.IT math.IT
|
Given a Heath-Jarrow-Morton (HJM) interest rate model $\mathcal{M}$ and a
parametrized family of finite dimensional forward rate curves $\mathcal{G}$,
this paper provides a technique for projecting the infinite dimensional forward
rate curve $r_{t}$ given by $\mathcal{M}$ onto the finite dimensional manifold
$\mathcal{G}$. The Stratonovich dynamics of the projected finite dimensional
forward curve are derived, and it is shown that, under suitable regularity
conditions, the given Stratonovich differential equation has a unique strong
solution. Moreover, this projection leads to an efficient algorithm for
implicit parametric estimation of the infinite dimensional HJM model. The
feasibility of this method is demonstrated by applying the generalized method
of moments.
|
cs/0509029
|
Quickest detection of a minimum of disorder times
|
cs.CE cs.IT math.IT
|
A multi-source quickest detection problem is considered. Assume there are two
independent Poisson processes $X^{1}$ and $X^{2}$ with disorder times
$\theta_{1}$ and $\theta_{2}$, respectively; that is, the intensities of $X^1$
and $X^2$ change at random unobservable times $\theta_1$ and $\theta_2$,
respectively. $\theta_1$ and $\theta_2$ are independent of each other and are
exponentially distributed. Define $\theta \triangleq \theta_1 \wedge
\theta_2=\min\{\theta_{1},\theta_{2}\}$ . For any stopping time $\tau$ that is
measurable with respect to the filtration generated by the observations define
a penalty function of the form \[ R_{\tau}=\mathbb{P}(\tau<\theta)+c
\mathbb{E}[(\tau-\theta)^{+}], \] where $c>0$ and $(\tau-\theta)^{+}$ is the
positive part of $\tau-\theta$. It is of interest to find a stopping time
$\tau$ that minimizes the above performance index. Since both observations
$X^{1}$ and $X^{2}$ reveal information about the disorder time $\theta$, even
this simple problem is more involved than solving the disorder problems for
$X^{1}$ and $X^{2}$ separately. This problem is formulated in terms of a three
dimensional sufficient statistic, and the corresponding optimal stopping
problem is examined. A two dimensional optimal stopping problem whose optimal
stopping time turns out to coincide with the optimal stopping time of the
original problem for some range of parameters is also solved. The value
function of this problem serves as a tight upper bound for the original
problem's value function. The two solutions are characterized by iterating
suitable functional operators.
|
cs/0509032
|
A Simple Model to Generate Hard Satisfiable Instances
|
cs.AI cond-mat.stat-mech cs.CC
|
In this paper, we try to further demonstrate that the models of random CSP
instances proposed by [Xu and Li, 2000; 2003] are of theoretical and practical
interest. Indeed, these models, called RB and RD, present several nice
features. First, it is quite easy to generate random instances of any arity
since no particular structure has to be integrated, or property enforced, in
such instances. Then, the existence of an asymptotic phase transition can be
guaranteed while applying a limited restriction on domain size and on
constraint tightness. In that case, a threshold point can be precisely located
and all instances are guaranteed to be hard at the threshold, i.e., to have
an exponential tree-resolution complexity. Next, a formal analysis shows that
it is possible to generate forced satisfiable instances whose hardness is
similar to unforced satisfiable ones. This analysis is supported by some
representative results taken from an intensive experimentation that we have
carried out, using complete and incomplete search methods.
|
cs/0509033
|
K-Histograms: An Efficient Clustering Algorithm for Categorical Dataset
|
cs.AI
|
Clustering categorical data is an integral part of data mining and has
attracted much attention recently. In this paper, we present k-histogram, a new
efficient algorithm for clustering categorical data. The k-histogram algorithm
extends the k-means algorithm to categorical domain by replacing the means of
clusters with histograms, and dynamically updates histograms in the clustering
process. Experimental results on real datasets show that the k-histogram
algorithm produces better clustering results than the k-modes algorithm, the
work most closely related to ours.
|
cs/0509037
|
Friends for Free: Self-Organizing Artificial Social Networks for Trust
and Cooperation
|
cs.MA
|
By harvesting friendship networks from e-mail contacts or instant message
"buddy lists" Peer-to-Peer (P2P) applications can improve performance in low
trust environments such as the Internet. However, natural social networks are
not always suitable, reliable or available. We propose an algorithm (SLACER)
that allows peer nodes to create and manage their own friendship networks.
We evaluate performance using a canonical test application, requiring
cooperation between peers for socially optimal outcomes. The Artificial Social
Networks (ASN) produced are connected, cooperative and robust - possessing many
of the desirable properties of human friendship networks, such as trust between
friends (directly linked peers) and short paths linking everyone via a chain of
friends.
In addition to new application possibilities, SLACER could supply ASN to P2P
applications that currently depend on human social networks thus transforming
them into fully autonomous, self-managing systems.
|
cs/0509039
|
Coding for the feedback Gel'fand-Pinsker channel and the feedforward
Wyner-Ziv source
|
cs.IT math.IT
|
We consider both channel coding and source coding, with perfect past
feedback/feedforward, in the presence of side information. It is first observed
that feedback does not increase the capacity of the Gel'fand-Pinsker channel,
nor does feedforward improve the achievable rate-distortion performance in the
Wyner-Ziv problem. We then focus on the Gaussian case, showing that, as in the
absence of side information, feedback/feedforward allows the respective
performance limits to be attained efficiently. In particular, we derive schemes via
variations on that of Schalkwijk and Kailath. These variants, which are as
simple as their origins and require no binning, are shown to achieve,
respectively, the capacity of Costa's channel, and the Wyner-Ziv rate
distortion function. Finally, we consider the finite-alphabet setting and
derive schemes for both the channel and the source coding problems that attain
the fundamental limits, using variations on schemes of Ahlswede and Ooi and
Wornell, and of Martinian and Wornell, respectively.
|
cs/0509040
|
Authoring case based training by document data extraction
|
cs.AI cs.IR
|
In this paper, we propose a scalable approach to modeling based upon word
processing documents, and we describe the tool Phoenix, which provides the
technical infrastructure.
For our training environment d3web.Train, we developed a tool to extract case
knowledge from existing documents, usually dismissal records, extending Phoenix
to d3web.CaseImporter. Independent authors used this tool to develop training
systems, observing a significant decrease in settling-in time and in the time
necessary for developing a case.
|
cs/0509041
|
Efficient Reconciliation of Correlated Continuous Random Variables using
LDPC Codes
|
cs.IT math.IT
|
This paper investigates an efficient and practical information reconciliation
method in the case where two parties have access to correlated continuous
random variables. We show that reconciliation is a special case of channel
coding and that existing coded modulation techniques can be adapted for
reconciliation. We describe an explicit reconciliation method based on LDPC
codes in the case of correlated Gaussian variables. We believe that the
proposed method can improve the efficiency of quantum key distribution
protocols based on continuous-spectrum quantum states.
|
cs/0509043
|
Optimal Power Control for Multiuser CDMA Channels
|
cs.IT math.IT
|
In this paper, we define the power region as the set of power allocations for
K users such that everybody meets a minimum signal-to-interference ratio (SIR).
The SIR is modeled in a multiuser CDMA system with fixed linear receiver and
signature sequences. We show that the power region is convex in linear and
logarithmic scale. It furthermore has a componentwise minimal element. Power
constraints are included by the intersection with the set of all viable power
adjustments.
In this framework, we aim at minimizing the total expended power by
minimizing a componentwise monotone functional. If the feasible power region is
nonempty, the minimum is attained. Otherwise, as a solution to balance
conflicting interests, we suggest the projection of the minimum point in the
power region onto the set of viable power settings. Finally, with an
appropriate utility function, the problem of minimizing the total expended
power can be seen as finding the Nash bargaining solution, which sheds light on
power assignment from a game theoretic point of view. Convexity and
componentwise monotonicity are essential prerequisites for this result.
|
cs/0509044
|
Accumulate-Repeat-Accumulate Codes: Systematic Codes Achieving the
Binary Erasure Channel Capacity with Bounded Complexity
|
cs.IT math.IT
|
The paper introduces ensembles of accumulate-repeat-accumulate (ARA) codes
which asymptotically achieve capacity on the binary erasure channel (BEC) with
{\em bounded complexity} per information bit. It also introduces symmetry
properties which play a central role in the construction of capacity-achieving
ensembles for the BEC. The results here improve on the tradeoff between
performance and complexity provided by the first capacity-achieving ensembles
of irregular repeat-accumulate (IRA) codes with bounded complexity per
information bit; these IRA ensembles were previously constructed by Pfister,
Sason and Urbanke. The superiority of ARA codes with moderate to large block
length is exemplified by computer simulations which compare their performance
with those of previously reported capacity-achieving ensembles of LDPC and IRA
codes. The ARA codes also have the advantage of being systematic.
|
cs/0509045
|
On Hats and other Covers
|
cs.IT math.IT
|
We study a game puzzle that has enjoyed recent popularity among
mathematicians, computer scientists, coding theorists and even the mass press.
In the game, $n$ players are fitted with randomly assigned colored hats.
Individual players can see their teammates' hat colors, but not their own.
Based on this information, and without any further communication, each player
must attempt to guess his hat color, or pass. The team wins if there is at
least one correct guess, and no incorrect ones. The goal is to devise guessing
strategies that maximize the team winning probability. We show that for the
case of two hat colors, and for any value of $n$, playing strategies are
equivalent to binary covering codes of radius one. This link, in particular
with Hamming codes, had been observed for values of $n$ of the form $2^m-1$. We
extend the analysis to games with hats of $q$ colors, $q\geq 2$, where
1-coverings are not sufficient to characterize the best strategies. Instead, we
introduce the more appropriate notion of a {\em strong covering}, and show
efficient constructions of these coverings, which achieve winning probabilities
approaching unity. Finally, we briefly discuss results on variants of the
problem, including arbitrary input distributions, randomized playing
strategies, and symmetric strategies.
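The covering-code link can be checked exhaustively for the smallest nontrivial case: with n = 3 players and two colors, the radius-1 covering {000, 111} (the length-3 repetition code, which is perfect) induces the strategy "if both visible hats match, guess the opposite color; otherwise pass," winning with probability 3/4:

```python
from itertools import product

def play(hats):
    """Strategy induced by the covering code {000, 111}: each player bets
    that the full configuration is NOT a codeword. Returns True iff the
    team wins (at least one guess, and no wrong guesses)."""
    guesses = []
    for i in range(3):
        others = [hats[j] for j in range(3) if j != i]
        guesses.append(1 - others[0] if others[0] == others[1] else None)
    made = [(i, g) for i, g in enumerate(guesses) if g is not None]
    return bool(made) and all(g == hats[i] for i, g in made)

wins = sum(play(h) for h in product((0, 1), repeat=3))
print(wins / 8)  # 0.75
```

The team loses exactly on the two monochrome configurations (the codewords), where all three players guess wrong at once; errors are concentrated there, which is precisely the covering-code idea.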
|
cs/0509046
|
On the number of t-ary trees with a given path length
|
cs.DM cs.IT math.IT
|
We show that the number of $t$-ary trees with path length equal to $p$ is
$\exp(h(t^{-1})t\log t \frac{p}{\log p}(1+o(1)))$, where
$h(x) = -x\log x - (1-x)\log(1-x)$ is the binary entropy
function. Besides its intrinsic combinatorial interest, the question recently
arose in the context of information theory, where the number of $t$-ary trees
with path length $p$ estimates the number of universal types, or, equivalently,
the number of different possible Lempel-Ziv'78 dictionaries for sequences of
length $p$ over an alphabet of size $t$.
|
cs/0509047
|
Secure multiplex coding to attain the channel capacity in wiretap
channels
|
cs.IT cs.CR math.IT
|
It is known that a message can be transmitted safely against any wiretapper
via a noisy channel without a secret key if the coding rate is less than the
so-called secrecy capacity $C_S$, which is usually smaller than the channel
capacity $C$. In order to remove the loss $C - C_S$, we propose a multiplex
coding scheme with plural independent messages. In this paper, it is shown that
the proposed multiplex coding scheme can attain the channel capacity as the
total rate of the plural messages while providing perfect secrecy for each
message. The
coding theorem is proved by extending Hayashi's proof, in which the coding of
the channel resolvability is applied to wiretap channels.
|
cs/0509048
|
Capacity of Complexity-Constrained Noise-Free CDMA
|
cs.IT math.IT
|
An interference-limited noise-free CDMA downlink channel operating under a
complexity constraint on the receiver is introduced. According to this
paradigm, detected bits, obtained by performing hard decisions directly on the
channel's matched filter output, must be the same as the transmitted binary
inputs. This channel setting, allowing the use of the simplest receiver scheme,
seems to be worthless, making reliable communication at any rate impossible. We
prove, by adopting notions from statistical mechanics, that in the large-system limit
such a complexity-constrained CDMA channel gives rise to a non-trivial
Shannon-theoretic capacity, rigorously analyzed and corroborated using
finite-size channel simulations.
|
cs/0509049
|
On the Achievable Information Rates of CDMA Downlink with Trivial
Receivers
|
cs.IT math.IT
|
A noisy CDMA downlink channel operating under a strict complexity constraint
on the receiver is introduced. According to this constraint, detected bits,
obtained by performing hard decisions directly on the channel's matched filter
output, must be the same as the transmitted binary inputs. This channel
setting, allowing the use of the simplest receiver scheme, seems to be
worthless, making reliable communication at any rate impossible. However,
recently this communication paradigm was shown to yield valuable information
rates in the case of a noiseless channel. This finding calls for the
investigation of this attractive complexity-constrained transmission scheme for
the more practical noisy channel case. By adopting the statistical mechanics
notion of metastable states of the renowned Hopfield model, it is proved that
under a bounded noise assumption such complexity-constrained CDMA channel gives
rise to a non-trivial Shannon-theoretic capacity, rigorously analyzed and
corroborated using finite-size channel simulations. For unbounded noise the
channel's outage capacity is addressed and specifically described for the
popular additive white Gaussian noise.
|
cs/0509050
|
Effect of door delay on aircraft evacuation time
|
cs.MA
|
The recent commercial launch of twin-deck Very Large Transport Aircraft
(VLTA) such as the Airbus A380 has raised questions concerning the speed at
which they may be evacuated. The abnormal height of emergency exits on the
upper deck has led to speculation that emotional factors such as fear may lead
to door delay, and thus play a significant role in increasing overall
evacuation time. Full-scale evacuation tests are financially expensive and
potentially hazardous, and systematic studies of the evacuation of VLTA are
rare. Here we present a computationally cheap agent-based framework for the
general simulation of aircraft evacuation, and apply it to the particular case
of the Airbus A380. In particular, we investigate the effect of door delay, and
conclude that even a moderate average delay can lead to evacuation times that
exceed the maximum for safety certification. The model suggests practical ways
to minimise evacuation time, as well as providing a general framework for the
simulation of evacuation.
|
cs/0509053
|
Underwater Hacker Missile Wars: A Cryptography and Engineering Contest
|
cs.CR cs.CE
|
For a recent student conference, the authors developed a day-long design
problem and competition suitable for engineering, mathematics and science
undergraduates. The competition included a cryptography problem, for which a
workshop was run during the conference. This paper describes the competition,
focusing on the cryptography problem and the workshop. Notes from the workshop
and code for the computer programs are made available via the Internet. The
results of a personal self-evaluation (PSE) are described.
|
cs/0509055
|
Learning Optimal Augmented Bayes Networks
|
cs.LG
|
Naive Bayes is a simple Bayesian classifier with strong independence
assumptions among the attributes. This classifier, despite its strong
independence assumptions, often performs well in practice. It is believed that
relaxing the independence assumptions of a naive Bayes classifier may improve
the classification accuracy of the resulting structure. While finding an
optimal unconstrained Bayesian Network (for most any reasonable scoring
measure) is an NP-hard problem, it is possible to learn in polynomial time
optimal networks obeying various structural restrictions. Several authors have
examined the possibilities of adding augmenting arcs between attributes of a
Naive Bayes classifier. Friedman, Geiger and Goldszmidt define the TAN
structure in which the augmenting arcs form a tree on the attributes, and
present a polynomial time algorithm that learns an optimal TAN with respect to
MDL score. Keogh and Pazzani define Augmented Bayes Networks in which the
augmenting arcs form a forest on the attributes (a collection of trees, hence a
relaxation of the structural restriction of TAN), and present heuristic search
methods for learning good, though not optimal, augmenting arc sets. The
authors, however, evaluate the learned structure only in terms of observed
misclassification error and not against a scoring metric, such as MDL. In this
paper, we present a simple, polynomial time greedy algorithm for learning an
optimal Augmented Bayes Network with respect to MDL score.
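For context, the TAN structure of Friedman, Geiger and Goldszmidt is typically learned Chow-Liu style: weight each attribute pair by its conditional mutual information given the class and take a maximum-weight spanning tree. A minimal sketch (the toy data and implementation details are illustrative, not from the paper):

```python
import math
from collections import Counter
from itertools import combinations

def cond_mutual_info(xi, xj, c):
    """Empirical I(Xi; Xj | C) in nats, from aligned sample lists."""
    n = len(c)
    pc, pic = Counter(c), Counter(zip(xi, c))
    pjc, pijc = Counter(zip(xj, c)), Counter(zip(xi, xj, c))
    mi = 0.0
    for (a, b, k), nabk in pijc.items():
        p_abk = nabk / n
        mi += p_abk * math.log(p_abk * (pc[k] / n) /
                               ((pic[(a, k)] / n) * (pjc[(b, k)] / n)))
    return mi

def tan_edges(attrs, c):
    """Maximum-weight spanning tree (Prim) over attributes, weighted by
    I(Xi; Xj | C); rooted arbitrarily, its edges are the augmenting arcs."""
    m = len(attrs)
    w = {(i, j): cond_mutual_info(attrs[i], attrs[j], c)
         for i, j in combinations(range(m), 2)}
    in_tree, edges = {0}, []
    while len(in_tree) < m:
        i, j = max(((i, j) for i, j in w
                    if (i in in_tree) ^ (j in in_tree)), key=w.get)
        edges.append((i, j))
        in_tree |= {i, j}
    return edges

# toy data: X1 and X2 are dependent even given the class; X3 is not
C  = [0, 0, 0, 0, 1, 1, 1, 1] * 10
X1 = [0, 1, 0, 1, 0, 1, 0, 1] * 10
X2 = X1[:]
X3 = [0, 0, 1, 1, 0, 0, 1, 1] * 10
print(tan_edges([X1, X2, X3], C))
```

The tree always has exactly m-1 augmenting arcs; relaxing the tree to a forest, as in Keogh and Pazzani's Augmented Bayes Networks, simply drops the low-weight edges.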
|
cs/0509058
|
Interactive Unawareness Revisited
|
cs.AI cs.LO
|
We analyze a model of interactive unawareness introduced by Heifetz, Meier
and Schipper (HMS). We consider two axiomatizations for their model, which
capture different notions of validity. These axiomatizations allow us to
compare the HMS approach to both the standard (S5) epistemic logic and two
other approaches to unawareness: that of Fagin and Halpern and that of Modica
and Rustichini. We show that the differences between the HMS approach and the
others are mainly due to the notion of validity used and the fact that the HMS
model is based on a 3-valued propositional logic.
|
cs/0509061
|
Guarantees for the Success Frequency of an Algorithm for Finding
Dodgson-Election Winners
|
cs.DS cs.MA
|
In the year 1876 the mathematician Charles Dodgson, who wrote fiction under
the now more famous name of Lewis Carroll, devised a beautiful voting system
that has long fascinated political scientists. However, determining the winner
of a Dodgson election is known to be complete for the \Theta_2^p level of the
polynomial hierarchy. This implies that unless P=NP no polynomial-time solution
to this problem exists, and unless the polynomial hierarchy collapses to NP the
problem is not even in NP. Nonetheless, we prove that when the number of voters
is much greater than the number of candidates--although the number of voters
may still be polynomial in the number of candidates--a simple greedy algorithm
very frequently finds the Dodgson winners in such a way that it ``knows'' that
it has found them, and furthermore the algorithm never incorrectly declares a
nonwinner to be a winner.
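The greedy idea can be conveyed with a minimal sketch, assuming a simplified setting: strict rankings, and a candidate is accepted when every pairwise deficit can be closed by voters who rank the rival immediately above it (one adjacent swap each). The function name and the first-feasible tie handling are illustrative, not the paper's exact GreedyWinner procedure.

```python
def greedy_dodgson_winner(profile, candidates):
    """Hedged sketch of a greedy search for a Dodgson-style winner.

    profile: list of strict rankings (lists of candidates, best first).
    A candidate is returned if each pairwise deficit against a rival can
    be closed using voters who rank that rival immediately above it, so
    each such voter contributes exactly one adjacent swap.
    """
    n = len(profile)
    majority = n // 2 + 1
    for c in candidates:
        feasible = True
        for d in candidates:
            if d == c:
                continue
            prefer_c = sum(1 for r in profile if r.index(c) < r.index(d))
            deficit = majority - prefer_c
            if deficit <= 0:
                continue  # c already beats d in a pairwise contest
            # voters where d sits directly above c: one swap converts each
            adjacent = sum(1 for r in profile if r.index(d) == r.index(c) - 1)
            if adjacent < deficit:
                feasible = False
                break
        if feasible:
            return c
    return None  # the greedy search certifies no winner
```

With many voters and few candidates, such adjacent voters are plentiful with high probability, which is the intuition behind the frequent-success guarantee.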
|
cs/0509062
|
Capacity-Achieving Codes with Bounded Graphical Complexity on Noisy
Channels
|
cs.IT math.IT
|
We introduce a new family of concatenated codes with an outer low-density
parity-check (LDPC) code and an inner low-density generator matrix (LDGM) code,
and prove that these codes can achieve capacity under any memoryless
binary-input output-symmetric (MBIOS) channel using maximum-likelihood (ML)
decoding with bounded graphical complexity, i.e., the number of edges per
information bit in their graphical representation is bounded. In particular, we
also show that these codes can achieve capacity on the binary erasure channel
(BEC) under belief propagation (BP) decoding with bounded decoding complexity
per information bit per iteration for all erasure probabilities in (0, 1). By
deriving and analyzing the average weight distribution (AWD) and the
corresponding asymptotic growth rate of these codes with a rate-1 inner LDGM
code, we also show that these codes achieve the Gilbert-Varshamov bound with
asymptotically high probability. This result can be attributed to the presence
of the inner rate-1 LDGM code, which is demonstrated to help eliminate high
weight codewords in the LDPC code while maintaining a vanishingly small amount
of low weight codewords.
|
cs/0509064
|
On joint coding for watermarking and encryption
|
cs.IT cs.CR math.IT
|
In continuation to earlier works where the problem of joint information
embedding and lossless compression (of the composite signal) was studied in the
absence \cite{MM03} and in the presence \cite{MM04} of attacks, here we
consider the additional ingredient of protecting the secrecy of the watermark
against an unauthorized party, which has no access to a secret key shared by
the legitimate parties. In other words, we study the problem of joint coding
for three objectives: information embedding, compression, and encryption. Our
main result is a coding theorem that provides a single--letter characterization
of the best achievable tradeoffs among the following parameters: the distortion
between the composite signal and the covertext, the distortion in
reconstructing the watermark by the legitimate receiver, the compressibility of
the composite signal (with and without the key), and the equivocation of the
watermark, as well as its reconstructed version, given the composite signal. In
the attack--free case, if the key is independent of the covertext, this coding
theorem gives rise to a {\it threefold} separation principle, which states that
asymptotically, for long block codes, no optimality is lost by first applying a
rate--distortion code to the watermark source, then encrypting the compressed
codeword, and finally, embedding it into the covertext using the embedding
scheme of \cite{MM03}. In the more general case, however, this separation
principle is no longer valid, as the key plays an additional role of side
information used by the embedding unit.
|
cs/0509065
|
On Deciding Deep Holes of Reed-Solomon Codes
|
cs.IT math.IT
|
For generalized Reed-Solomon codes, it has been proved \cite{GuruswamiVa05}
that the problem of determining if a received word is a deep hole is
co-NP-complete. The reduction relies on the fact that the evaluation set of the
code can be exponential in the length of the code -- a property that practical
codes do not usually possess. In this paper, we first present a much simpler
proof of the same result. We then consider the problem for standard
Reed-Solomon codes, i.e., codes whose evaluation set consists of all the
nonzero elements of the field. We reduce the problem of identifying deep holes to
deciding whether an absolutely irreducible hypersurface over a finite field
contains a rational point whose coordinates are pairwise distinct and nonzero.
By applying the Schmidt and Cafure-Matera estimates of rational points on
algebraic varieties, we prove that the received vector $(f(\alpha))_{\alpha \in
\F_q}$ for Reed-Solomon $[q,k]_q$, $k < q^{1/7 - \epsilon}$, cannot be a deep
hole, whenever $f(x)$ is a polynomial of degree $k+d$ for $1\leq d < q^{3/13
-\epsilon}$.
|
cs/0509068
|
Channel Uncertainty in Ultra Wideband Communication Systems
|
cs.IT math.IT
|
Wideband systems operating over multipath channels may spread their power
over bandwidth if they use duty cycle. Channel uncertainty limits the
achievable data rates of power-constrained wideband systems; duty-cycle
transmission reduces the channel uncertainty because the receiver has to
estimate the channel only when transmission takes place. The optimal choice of
the fraction of time used for transmission depends on the spectral efficiency
of the signal modulation. The general principle is demonstrated by comparing
the channel conditions that allow different modulations to achieve the capacity
in the limit. Direct sequence spread spectrum and pulse position modulation
systems with duty cycle achieve the channel capacity, if the increase of the
number of channel paths with the bandwidth is not too rapid. The higher
spectral efficiency of the spread spectrum modulation lets it achieve the
channel capacity in the limit, in environments where pulse position modulation
with non-vanishing symbol time cannot be used because of the large number of
channel paths.
|
cs/0509071
|
CP-nets and Nash equilibria
|
cs.GT cs.AI
|
We relate here two formalisms that are used for different purposes in
reasoning about multi-agent systems. One of them is strategic games, which are
used to capture the idea that agents interact with each other while pursuing
their own interests. The other is CP-nets, which were introduced to express
qualitative and conditional preferences of the users and which aim at
facilitating the process of preference elicitation. To relate these two
formalisms we introduce a natural, qualitative, extension of the notion of a
strategic game. We show then that the optimal outcomes of a CP-net are exactly
the Nash equilibria of an appropriately defined strategic game in the above
sense. This allows us to use the techniques of game theory to search for
optimal outcomes of CP-nets and vice-versa, to use techniques developed for
CP-nets to search for Nash equilibria of the considered games.
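The correspondence described above can be illustrated with a small sketch for an acyclic CP-net given as conditional preference tables (the names and table encoding are hypothetical): an outcome in which every variable takes its most-preferred value given its parents' values is an optimal outcome, i.e., a Nash equilibrium of the associated qualitative game.

```python
import itertools

def optimal_outcomes(variables, domains, parents, cpt):
    """Enumerate the optimal outcomes of an acyclic CP-net.

    cpt[v] maps a tuple of parent values to a ranked list of values for
    v, best first.  An outcome is optimal when every variable takes its
    most-preferred value given its parents -- by the correspondence in
    the abstract, exactly the Nash equilibria of the associated game.
    """
    result = []
    for values in itertools.product(*(domains[v] for v in variables)):
        assign = dict(zip(variables, values))
        if all(assign[v] == cpt[v][tuple(assign[p] for p in parents[v])][0]
               for v in variables):
            result.append(assign)
    return result
```

Brute-force enumeration is exponential in the number of variables; it is shown only to make the optimality condition concrete.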
|
cs/0509072
|
Folksonomy as a Complex Network
|
cs.IR cs.DL physics.soc-ph
|
Folksonomy is an emerging technology that classifies information on the WWW
by tagging bookmarks, photos, and other web-based content. It is organized by
every user, not only by the authors of the content and professional editors.
This study examines folksonomy as a complex network. The results indicate that
the network composed of folksonomy tags displays both small-world and
scale-free properties. However, these statistics capture only a local and
static slice of the vast and still-evolving body of folksonomy.
|
cs/0509073
|
Distance-Increasing Maps of All Length by Simple Mapping Algorithms
|
cs.IT cs.DM math.IT
|
Distance-increasing maps from binary vectors to permutations, namely DIMs,
are useful for the construction of permutation arrays. While a simple mapping
algorithm defining DIMs of even length is known, existing DIMs of odd length
are either constructed recursively by merging shorter DIMs or defined by much
more complicated mapping algorithms. In this paper, DIMs of all lengths defined
by simple mapping algorithms are presented.
|
cs/0509075
|
On the Capacity of Doubly Correlated MIMO Channels
|
cs.IT math.IT
|
In this paper, we analyze the capacity of multiple-input multiple-output
(MIMO) Rayleigh-fading channels in the presence of spatial fading correlation
at both the transmitter and the receiver, assuming that the channel is unknown
at the transmitter and perfectly known at the receiver. We first derive the
determinant representation for the exact characteristic function of the
capacity, which is then used to determine the trace representations for the
mean, variance, skewness, kurtosis, and other higher-order statistics (HOS).
These results allow us to exactly evaluate two relevant information-theoretic
capacity measures--ergodic capacity and outage capacity--and the HOS of the
capacity for such a MIMO channel. The analytical framework presented in the
paper is valid for arbitrary numbers of antennas and generalizes the previously
known results for independent and identically distributed or one-sided
correlated MIMO channels to the case when fading correlation exists on both
sides. We verify our analytical results by comparing them with Monte Carlo
simulations for a correlation model based on realistic channel measurements as
well as a classical exponential correlation model.
|
cs/0509077
|
Capacity Limits of Cognitive Radio with Distributed and Dynamic Spectral
Activity
|
cs.IT math.IT
|
We investigate the capacity of opportunistic communication in the presence of
dynamic and distributed spectral activity, i.e. when the time varying spectral
holes sensed by the cognitive transmitter are correlated but not identical to
those sensed by the cognitive receiver. Using the information theoretic
framework of communication with causal and non-causal side information at the
transmitter and/or the receiver, we obtain analytical capacity expressions and
the corresponding numerical results. We find that cognitive radio communication
is robust to dynamic spectral environments even when the communication occurs
in bursts of only 3-5 symbols. The value of handshake overhead is investigated
for both lightly loaded and heavily loaded systems. We find that the capacity
benefits of overhead information flow from the transmitter to the receiver are
negligible, while feedback information overhead in the opposite direction
significantly improves capacity.
|
cs/0509078
|
On the Feedback Capacity of Stationary Gaussian Channels
|
cs.IT math.IT
|
The capacity of stationary additive Gaussian noise channels with feedback is
characterized as the solution to a variational problem. Toward this end, it is
proved that the optimal feedback coding scheme is stationary. When specialized
to the first-order autoregressive moving-average noise spectrum, this
variational characterization yields a closed-form expression for the feedback
capacity. In particular, this result shows that the celebrated
Schalkwijk--Kailath coding scheme achieves the feedback capacity for the
first-order autoregressive moving-average Gaussian channel, resolving a
long-standing open problem studied by Butman, Schalkwijk--Tiernan, Wolfowitz,
Ozarow, Ordentlich, Yang--Kavcic--Tatikonda, and others.
|
cs/0509079
|
The WSSUS Pulse Design Problem in Multicarrier Transmission
|
cs.IT math.IT
|
Optimal link adaptation to the scattering function of wide-sense stationary
uncorrelated scattering (WSSUS) mobile communication channels is still an
unsolved problem despite its importance for next-generation system design. In
multicarrier transmission such link adaptation is performed by pulse shaping,
i.e. by properly adjusting the transmit and receive filters. For example,
pulse-shaped Offset-QAM systems have recently been shown to outperform
standard cyclic-prefix OFDM (while operating at higher spectral efficiency). In
this paper we
establish a general mathematical framework for joint transmitter and receiver
pulse shape optimization for so-called Weyl--Heisenberg or Gabor signaling with
respect to the scattering function of the WSSUS channel. In our framework the
pulse shape optimization problem is translated to an optimization problem over
trace class operators which in turn is related to fidelity optimization in
quantum information processing. By convexity relaxation the problem is shown to
be equivalent to a \emph{convex constraint quasi-convex maximization problem}
thereby revealing the non-convex nature of the overall WSSUS pulse design
problem. We present several iterative algorithms for optimization providing
applicable results even for large--scale problem constellations. We show that
with transmitter-side knowledge of the channel statistics a gain of $3 - 6$ dB
in SINR can be expected.
|
cs/0509080
|
Capacity and Character Expansions: Moment generating function and other
exact results for MIMO correlated channels
|
cs.IT cond-mat.mes-hall cond-mat.stat-mech hep-lat math-ph math.IT math.MP
|
We apply a promising new method from the field of representations of Lie
groups to calculate integrals over unitary groups, which are important for
multi-antenna communications. To demonstrate the power and simplicity of this
technique, we first re-derive a number of results that have been used recently
in the community of wireless information theory, using only a few simple steps.
In particular, we derive the joint probability distribution of eigenvalues of
the matrix GG*, with G a semicorrelated Gaussian random matrix or a Gaussian
random matrix with a non-zero mean (and G* its Hermitian conjugate). These
joint probability distribution functions can then be used to calculate the
moment generating function of the mutual information for Gaussian channels with
multiple antennas on both ends with this probability distribution of their
channel matrices G. We then turn to the previously unsolved problem of
calculating the moment generating function of the mutual information of MIMO
(multiple input-multiple output) channels, which are correlated at both the
receiver and the transmitter. From this moment generating function we obtain
the ergodic average of the mutual information and study the outage probability.
These methods can be applied to a number of other problems. As a particular
example, we examine unitary encoded space-time transmission of MIMO systems and
we derive the received signal distribution when the channel matrix is
correlated at the transmitter end.
|
cs/0509081
|
Automatic Face Recognition System Based on Local Fourier-Bessel Features
|
cs.CV
|
We present an automatic face verification system inspired by known properties
of biological systems. In the proposed algorithm the whole image is converted
from the spatial to polar frequency domain by a Fourier-Bessel Transform (FBT).
The use of the whole image is compared with the case where only face image
regions (local analysis) are considered. The resulting representations are
embedded in
a dissimilarity space, where each image is represented by its distance to all
the other images, and a Pseudo-Fisher discriminator is built. Verification test
results on the FERET database showed that the local-based algorithm outperforms
the global-FBT version. The local-FBT algorithm performed on par with
state-of-the-art methods under different testing conditions, indicating that
the proposed system
is highly robust for expression, age, and illumination variations. We also
evaluated the performance of the proposed system under strong occlusion
conditions and found that it is highly robust for up to 50% of face occlusion.
Finally, we fully automated the verification system by implementing face and
eye detection algorithms. Under this condition, the local approach was only
slightly superior to the global approach.
|
cs/0509082
|
Face Recognition Based on Polar Frequency Features
|
cs.CV
|
A novel biologically motivated face recognition algorithm based on polar
frequency is presented. Polar frequency descriptors are extracted from face
images by Fourier-Bessel transform (FBT). Next, the Euclidean distance between
all images is computed and each image is now represented by its dissimilarity
to the other images. A Pseudo-Fisher Linear Discriminant was built on this
dissimilarity space. The performance of Discrete Fourier transform (DFT)
descriptors, and a combination of both feature types was also evaluated. The
algorithms were tested on a 40- and 1196-subjects face database (ORL and FERET,
respectively). With 5 images per subject in the training and test datasets,
error rate on the ORL database was 3.8, 1.25 and 0.2% for the FBT, DFT, and the
combined classifier, respectively, as compared to 2.6% achieved by the best
previous algorithm. The most informative polar frequency features were
concentrated at low-to-medium angular frequencies coupled to low radial
frequencies. On the FERET database, where an affine normalization
pre-processing was applied, the FBT algorithm outperformed only the PCA in a
rank recognition test. However, it achieved performance comparable to
state-of-the-art methods when evaluated by verification tests. These results
indicate the high informative value of the polar frequency content of face
images in relation to recognition and verification tasks, and that the
Cartesian frequency content can complement information about the subjects'
identity, but possibly only when the images are not pre-normalized. Possible
implications for human face recognition are discussed.
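A rough sketch of the dissimilarity-space idea described above, with a nearest-mean rule standing in for the Pseudo-Fisher Linear Discriminant (the array layout and helper names are assumptions, not the paper's implementation):

```python
import numpy as np

def dissimilarity_space(feats, prototypes):
    # Represent each image by its Euclidean distances to all prototype
    # images; rows of both arrays are per-image descriptors (e.g. FBT
    # coefficients -- names and shapes here are illustrative).
    diff = feats[:, None, :] - prototypes[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

def nearest_mean_classify(train_dis, train_labels, test_dis):
    # Stand-in for the Pseudo-Fisher discriminant: assign a test vector
    # to the class whose mean dissimilarity vector is closest.
    classes = sorted(set(train_labels))
    means = np.stack([train_dis[np.array(train_labels) == c].mean(axis=0)
                      for c in classes])
    d = ((test_dis[:, None, :] - means[None, :, :]) ** 2).sum(axis=2)
    return [classes[i] for i in d.argmin(axis=1)]
```

The point of the representation is that classification happens in the space of distances to other images rather than in the raw descriptor space.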
|
cs/0509083
|
Face Verification in Polar Frequency Domain: a Biologically Motivated
Approach
|
cs.CV
|
We present a novel local-based face verification system whose components are
analogous to those of biological systems. In the proposed system, after global
registration and normalization, three eye regions are converted from the
spatial to polar frequency domain by a Fourier-Bessel Transform. The resulting
representations are embedded in a dissimilarity space, where each image is
represented by its distance to all the other images. In this dissimilarity
space a Pseudo-Fisher discriminator is built. ROC and equal error rate
verification test results on the FERET database showed that the system
performed at least as well as state-of-the-art methods and better than a
system based on polar Fourier features. The local-based system is especially
robust to
facial expression and age variations, but sensitive to registration errors.
|
cs/0509085
|
An Improved Lower Bound to the Number of Neighbors Required for the
Asymptotic Connectivity of Ad Hoc Networks
|
cs.NI cs.IT math.IT
|
Xue and Kumar have established that the number of neighbors required for
connectivity of wireless networks with N uniformly distributed nodes must grow
as log(N), and they also established that the actual number required lies
between 0.074log(N) and 5.1774log(N). In this short paper, by recognizing that
connectivity results for networks where the nodes are distributed according to
a Poisson point process can often be applied to the problem for a network with
N nodes, we are able to improve the lower bound. In particular, we show that a
network with nodes distributed in a unit square according to a 2D Poisson point
process of parameter N will be asymptotically disconnected with probability one
if the number of neighbors is less than 0.129log(N). Moreover, a similar
number of neighbors is not sufficient for an asymptotically connected network
with N nodes distributed uniformly in a unit square, hence improving the lower
bound.
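The k-nearest-neighbor connectivity notion used above can be probed with a toy Monte Carlo check; this is illustrative only, not the Poisson-process argument of the paper.

```python
import random

def knn_connected(n, k, rng=random):
    """Drop n uniform points in the unit square, link each to its k
    nearest neighbors, and test connectivity of the resulting
    undirected graph with a depth-first search."""
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [set() for _ in range(n)]
    for i, p in enumerate(pts):
        near = sorted((j for j in range(n) if j != i),
                      key=lambda j: (pts[j][0] - p[0]) ** 2
                                    + (pts[j][1] - p[1]) ** 2)
        for j in near[:k]:
            adj[i].add(j)
            adj[j].add(i)
    seen, stack = {0}, [0]
    while stack:
        for j in adj[stack.pop()]:
            if j not in seen:
                seen.add(j)
                stack.append(j)
    return len(seen) == n
```

Averaging such trials over many random placements gives an empirical feel for how the connectivity threshold scales with the neighbor count.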
|
cs/0509086
|
Statistical Mechanical Approach to Lossy Data Compression:Theory and
Practice
|
cs.IT math.IT
|
The encoder and decoder for lossy data compression of binary memoryless
sources are developed on the basis of a specific type of nonmonotonic
perceptron.
Statistical mechanical analysis indicates that the potential ability of the
perceptron-based code saturates the theoretically achievable limit in most
cases although exactly performing the compression is computationally difficult.
To resolve this difficulty, we provide a computationally tractable
approximation algorithm using belief propagation (BP), which is a current
standard algorithm of probabilistic inference. Introducing several
approximations and heuristics, the BP-based algorithm exhibits performance that
is close to the achievable limit in a practical time scale in optimal cases.
|
cs/0509087
|
On Time-Variant Distortions in Multicarrier Transmission with
Application to Frequency Offsets and Phase Noise
|
cs.IT math.IT
|
Owing to their time-variant behavior, phase noise and frequency offsets are
among the most limiting disturbances in practical OFDM designs and are
therefore intensively studied by many authors. In this paper we present a
generalized
framework for the prediction of uncoded system performance in the presence of
time-variant distortions including the transmitter and receiver pulse shapes as
well as the channel. Therefore, unlike existing studies, our approach can be
employed for more general multicarrier schemes. To show the usefulness of our
approach, we apply the results to OFDM in the context of frequency offset and
Wiener phase noise, yielding improved bounds on the uncoded performance. In
particular, we obtain exact formulas for the averaged performance in AWGN and
time-invariant multipath channels.
|
cs/0509088
|
Business intelligence systems and user's parameters: an application to a
documents' database
|
cs.DB
|
This article presents early results of our research in the area of modeling
Business Intelligence Systems. The basic idea of this research area is
presented first. We then show the necessity of including certain user
parameters in the information systems underlying Business Intelligence
systems, in order to obtain better responses from such systems. We identify
two main types of attributes that can be missing from a database and show why
they need to be included. A user model based on the cognitive evolution of the
user is presented. This model, when used together with a good definition of
the information needs of the user (the decision maker), will accelerate the
decision-making process.
|
cs/0509089
|
Semantics of UML 2.0 Activity Diagram for Business Modeling by Means of
Virtual Machine
|
cs.CE cs.PL
|
The paper proposes a more formalized definition of UML 2.0 Activity Diagram
semantics. A subset of activity diagram constructs relevant for business
process modeling is considered. The semantics definition is based on the
original token flow methodology, but a more constructive approach is used. The
Activity Diagram Virtual Machine is defined by means of a metamodel, with
operations defined by a mix of pseudocode and OCL pre- and postconditions. A
formal procedure is described which builds the virtual machine for any activity
diagram. The relatively complicated original token movement rules in control
nodes and edges are combined into paths from one action to another. A new
approach is the use of different (push and pull) engines, which move tokens
along the paths. Pull engines are used for paths containing join nodes, where
the movement of several tokens must be coordinated. The proposed virtual
machine approach makes the activity semantics definition more transparent, as
the token movement can be easily traced. However, the main benefit of the
approach is the possibility to use the defined virtual machine as a basis for
UML activity diagram based workflow or simulation engine.
|
cs/0509092
|
Automatic extraction of paraphrastic phrases from medium size corpora
|
cs.CL cs.AI
|
This paper presents a versatile system intended to acquire paraphrastic
phrases from a representative corpus. In order to decrease the time spent on
the elaboration of resources for NLP systems (for example Information
Extraction, IE hereafter), we suggest using a machine learning system that
helps define new templates and associated resources. This knowledge is
automatically derived from the text collection, in interaction with a large
semantic network.
|
cs/0509093
|
On the Outage Capacity of Correlated Multiple-Path MIMO Channels
|
cs.IT math.IT
|
The use of multi-antenna arrays in both transmission and reception has been
shown to dramatically increase the throughput of wireless communication
systems. As a result there has been considerable interest in characterizing the
ergodic average of the mutual information for realistic correlated channels.
Here, an approach is presented that provides analytic expressions not only for
the average, but also the higher cumulant moments of the distribution of the
mutual information for zero-mean Gaussian (multiple-input multiple-output) MIMO
channels with the most general multipath covariance matrices when the channel
is known at the receiver. These channels include multi-tap delay paths, as well
as general channels with covariance matrices that cannot be written as a
Kronecker product, such as dual-polarized antenna arrays with general
correlations at both transmitter and receiver ends. The mathematical methods
are formally valid for large antenna numbers, in which limit it is shown that
all higher cumulant moments of the distribution, other than the first two,
scale to zero. Thus, it is confirmed that the distribution of the mutual
information
tends to a Gaussian, which enables one to calculate the outage capacity. These
results are quite accurate even in the case of a few antennas, which makes this
approach applicable to realistic situations.
|
cs/0509096
|
Performance Analysis and Enhancement of Multiband OFDM for UWB
Communications
|
cs.IT math.IT
|
In this paper, we analyze the frequency-hopping orthogonal frequency-division
multiplexing (OFDM) system known as Multiband OFDM for high-rate wireless
personal area networks (WPANs) based on ultra-wideband (UWB) transmission.
Besides considering the standard, we also propose and study system performance
enhancements through the application of Turbo and Repeat-Accumulate (RA) codes,
as well as OFDM bit-loading. Our methodology consists of (a) a study of the
channel model developed under IEEE 802.15 for UWB from a frequency-domain
perspective suited for OFDM transmission, (b) development and quantification of
appropriate information-theoretic performance measures, (c) comparison of these
measures with simulation results for the Multiband OFDM standard proposal as
well as our proposed extensions, and (d) the consideration of the influence of
practical, imperfect channel estimation on the performance. We find that the
current Multiband OFDM standard sufficiently exploits the frequency selectivity
of the UWB channel, and that the system performs in the vicinity of the channel
cutoff rate. Turbo codes and a reduced-complexity clustered bit-loading
algorithm improve the system power efficiency by over 6 dB at a data rate of
480 Mbps.
|
cs/0509097
|
Iterative Algebraic Soft-Decision List Decoding of Reed-Solomon Codes
|
cs.IT math.IT
|
In this paper, we present an iterative soft-decision decoding algorithm for
Reed-Solomon codes offering both complexity and performance advantages over
previously known decoding algorithms. Our algorithm is a list decoding
algorithm which combines two powerful soft decision decoding techniques which
were previously regarded in the literature as competitive, namely, the
Koetter-Vardy algebraic soft-decision decoding algorithm and belief-propagation
based on adaptive parity check matrices, recently proposed by Jiang and
Narayanan. Building on the Jiang-Narayanan algorithm, we present a
belief-propagation based algorithm with a significant reduction in
computational complexity. We introduce the concept of using a
belief-propagation based decoder to enhance the soft-input information prior to
decoding with an algebraic soft-decision decoder. Our algorithm can also be
viewed as an interpolation multiplicity assignment scheme for algebraic
soft-decision decoding of Reed-Solomon codes.
|
cs/0509098
|
Applications of correlation inequalities to low density graphical codes
|
cs.IT cond-mat.stat-mech math.IT
|
This contribution is based on the contents of a talk delivered at the
Next-SigmaPhi conference held in Crete in August 2005. It is addressed to an
audience of physicists with diverse horizons and does not assume any background
in communications theory. Capacity approaching error correcting codes for
channel communication known as Low Density Parity Check (LDPC) codes have
attracted considerable attention from coding theorists in the last decade.
Surprisingly strong connections with the theory of diluted spin glasses have
been discovered. In this work we elucidate one new connection, namely that a
class of correlation inequalities valid for Gaussian spin glasses can be
applied to the theoretical analysis of LDPC codes. This allows for a rigorous
comparison between the so-called (optimal) maximum a posteriori and the
computationally efficient belief propagation decoders. The main ideas of the
proofs are explained and we refer to recent works for the more lengthy
technical details.
|
cs/0510001
|
Retinal Vessel Segmentation Using the 2-D Morlet Wavelet and Supervised
Classification
|
cs.CV
|
We present a method for automated segmentation of the vasculature in retinal
images. The method produces segmentations by classifying each image pixel as
vessel or non-vessel, based on the pixel's feature vector. Feature vectors are
composed of the pixel's intensity and continuous two-dimensional Morlet wavelet
transform responses taken at multiple scales. The Morlet wavelet is capable of
tuning to specific frequencies, thus allowing noise filtering and vessel
enhancement in a single step. We use a Bayesian classifier with
class-conditional probability density functions (likelihoods) described as
Gaussian mixtures, yielding a fast classification, while being able to model
complex decision surfaces and compare its performance with the linear minimum
squared error classifier. The probability distributions are estimated based on
a training set of labeled pixels obtained from manual segmentations. The
method's performance is evaluated on publicly available DRIVE and STARE
databases of manually labeled non-mydriatic images. On the DRIVE database, it
achieves an area under the receiver operating characteristic (ROC) curve of
0.9598, slightly superior to that reported by the method of Staal et al.
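A sketch of a 2-D Morlet-like kernel of the kind described above; the parameterization (carrier frequency k0, elongation factor) is an assumption, not the paper's exact wavelet.

```python
import numpy as np

def morlet_kernel(half, scale, theta, k0=3.0, elong=4.0):
    """Illustrative 2-D Morlet-like kernel: an anisotropic Gaussian
    envelope modulated by a complex exponential, oriented at angle
    theta, tunable in scale for elongated-structure (vessel)
    enhancement.  Names and conventions here are assumptions."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = xs * np.cos(theta) + ys * np.sin(theta)   # rotated coordinates
    yr = -xs * np.sin(theta) + ys * np.cos(theta)
    env = np.exp(-0.5 * ((xr / (elong * scale)) ** 2 + (yr / scale) ** 2))
    wav = env * np.exp(1j * k0 * yr / scale)
    return wav - wav.mean()  # enforce an approximately zero-mean filter
```

Per-pixel features would then be the magnitudes of convolutions with such kernels at several scales and orientations, stacked with the pixel intensity.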
|
cs/0510002
|
Optimal Relay Functionality for SNR Maximization in Memoryless Relay
Networks
|
cs.IT math.IT
|
We explore the SNR-optimal relay functionality in a \emph{memoryless} relay
network, i.e. a network where, during each channel use, the signal transmitted
by a relay depends only on the last received symbol at that relay. We develop a
generalized notion of SNR for the class of memoryless relay functions. The
solution to the generalized SNR optimization problem leads to the novel concept
of minimum mean square uncorrelated error estimation (MMSUEE). For the elemental
case of a single relay, we show that MMSUEE is the SNR-optimal memoryless relay
function regardless of the source and relay transmit power, and the modulation
scheme. This scheme, that we call estimate and forward (EF), is also shown to
be SNR-optimal with PSK modulation in a parallel relay network. We demonstrate
that EF performs better than the best of amplify and forward (AF) and
demodulate and forward (DF), in both parallel and serial relay networks. We
also determine that AF is near-optimal at low transmit power in a parallel
network, while DF is near-optimal at high transmit power in a serial network.
For hybrid networks that contain both serial and parallel elements, and when
robust performance is desired, the advantage of EF over the best of AF and DF
is found to be significant. Error probabilities are provided to substantiate
the performance gain obtained through SNR optimality. We also show that, for
\emph{Gaussian} inputs, AF, DF and EF become identical.
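To convey the flavor of a soft-estimate relay, here is a hedged sketch using the conditional-mean (MMSE) estimate for BPSK over AWGN; the paper's MMSUE criterion differs in detail, so this is illustration only.

```python
import math

def ef_relay_output(y, noise_var, gain=1.0):
    """Illustrative estimate-and-forward relay for BPSK over AWGN: the
    conditional-mean estimate of x in {-1, +1} given y = x + n is
    tanh(2y / noise_var).  This MMSE form is shown only to convey that
    the relay forwards a soft estimate rather than an amplified (AF)
    or hard-demodulated (DF) symbol; the gain would in practice be
    chosen to satisfy the relay's average power constraint."""
    return gain * math.tanh(2.0 * y / noise_var)
```

At high SNR the tanh saturates toward a hard decision (DF-like behavior), while at low SNR it is nearly linear in y (AF-like behavior), which matches the regimes noted in the abstract.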
|
cs/0510003
|
Generalized ABBA Space-Time Block Codes
|
cs.IT math.IT
|
Linear space-time block codes (STBCs) of unitary rate and full diversity,
systematically constructed over arbitrary constellations for any number of
transmit antennas are introduced. The codes are obtained by generalizing the
existing ABBA STBCs, a.k.a. quasi-orthogonal STBCs (QO-STBCs). Furthermore, a
fully orthogonal (symbol-by-symbol) decoder for the new generalized ABBA
(GABBA) codes is provided. This remarkably low-complexity decoder relies on
partition orthogonality properties of the code structure to decompose the
received signal vector into lower-dimension tuples, each dependent only on
certain subsets of the transmitted symbols. Orthogonal decodability results
from the nested application of this technique, with no matrix inversion or
iterative signal processing required. The exact bit-error-rate probability of
GABBA codes over generalized fading channels with maximum likelihood (ML)
decoding is evaluated analytically and compared against simulation results
obtained with the proposed orthogonal decoder. The comparison reveals that the
proposed GABBA solution, despite its very low complexity, achieves nearly the
same performance as the bound corresponding to the ML-decoded system,
especially in systems with large numbers of antennas.
|
cs/0510005
|
Taylor series expansions for the entropy rate of Hidden Markov Processes
|
cs.IT cond-mat.stat-mech math.IT
|
Finding the entropy rate of Hidden Markov Processes is an active research
topic, of both theoretical and practical importance. A recently used approach
is studying the asymptotic behavior of the entropy rate in various regimes. In
this paper we generalize and prove a previous conjecture relating the entropy
rate to entropies of finite systems. Building on our new theorems, we establish
series expansions for the entropy rate in two different regimes. We also study
the radius of convergence of the two series expansions.
|
cs/0510008
|
Accurate and robust image superresolution by neural processing of local
image representations
|
cs.CV cs.NE
|
Image superresolution involves the processing of an image sequence to
generate a still image with higher resolution. Classical approaches, such as
Bayesian MAP methods, require iterative minimization procedures, with high
computational costs. Recently, the authors proposed a method to tackle this
problem, based on the use of a hybrid MLP-PNN architecture. In this paper, we
present a novel superresolution method, based on an evolution of this concept,
to incorporate the use of local image models. A neural processing stage
receives as input the value of model coefficients on local windows. The data
dimensionality is first reduced by application of PCA. An MLP, trained on
synthetic sequences with various amounts of noise, estimates the
high-resolution image data. The effect of varying the dimension of the network
input space is examined, showing a complex, structured behavior. Quantitative
results are presented showing the accuracy and robustness of the proposed
method.
|
cs/0510009
|
Tree-Based Construction of LDPC Codes Having Good Pseudocodeword Weights
|
cs.IT math.IT
|
We present a tree-based construction of LDPC codes that have minimum
pseudocodeword weight equal to or almost equal to the minimum distance, and
perform well with iterative decoding. The construction involves enumerating a
$d$-regular tree for a fixed number of layers and employing a connection
algorithm based on permutations or mutually orthogonal Latin squares to close
the tree. Methods are presented for degrees $d=p^s$ and $d = p^s+1$, for $p$ a
prime. One class corresponds to the well-known finite-geometry and finite
generalized quadrangle LDPC codes; the other codes presented are new. We also
present some bounds on pseudocodeword weight for $p$-ary LDPC codes. Treating
these codes as $p$-ary LDPC codes rather than binary LDPC codes improves their
rates, minimum distances, and pseudocodeword weights, thereby giving a new
importance to the finite geometry LDPC codes where $p > 2$.
|
cs/0510013
|
Safe Data Sharing and Data Dissemination on Smart Devices
|
cs.CR cs.DB
|
The erosion of trust put in traditional database servers, the growing
interest for different forms of data dissemination and the concern for
protecting children from suspicious Internet content are different factors that
lead to moving access control from servers to clients. Several encryption
schemes can be used to serve this purpose but all suffer from a static way of
sharing data. In a previous paper, we devised smarter client-based access
control managers exploiting hardware security elements on client devices. The
goal is to evaluate dynamic and personalized access control
rules on a ciphered XML input document, with the benefit of dissociating access
rights from encryption. In this demonstration, we validate our solution using a
real smart card platform and explain how we deal with the constraints usually
met on hardware security elements (small memory and low throughput). Finally,
we illustrate the generality of the approach and the ease of its deployment
through two different applications: a collaborative application and a parental
control application on video streams.
|