| id | title | categories | abstract |
|---|---|---|---|
cs/0510015
|
Word sense disambiguation criteria: a systematic study
|
cs.CL
|
This article describes the results of a systematic in-depth study of the
criteria used for word sense disambiguation. Our study is based on 60 target
words: 20 nouns, 20 adjectives and 20 verbs. Our results are not always in line
with some practices in the field. For example, we show that omitting
non-content words decreases performance and that bigrams yield better results
than unigrams.
|
cs/0510016
|
From finite-system entropy to entropy rate for a Hidden Markov Process
|
cs.IT math-ph math.IT math.MP
|
A recent result presented the expansion for the entropy rate of a Hidden
Markov Process (HMP) as a power series in the noise variable $\epsilon$. The
coefficients of the expansion around the noiseless ($\epsilon = 0$) limit were
calculated up to 11th order, using a conjecture that relates the entropy rate
of a HMP to the entropy of a process of finite length (which is calculated
analytically). In this communication we generalize and prove the validity of
the conjecture, and discuss the theoretical and practical consequences of our
new theorem.
|
cs/0510020
|
Sur le statut r\'{e}f\'{e}rentiel des entit\'{e}s nomm\'{e}es (On the referential status of named entities)
|
cs.AI cs.IR
|
We show in this paper that, on the one hand, named entities can be designated
using different denominations and that, on the other hand, names denoting
named entities are polysemous. The analysis cannot be limited to reference
resolution but should take into account naming strategies, which are mainly
based on two linguistic operations: synecdoche and metonymy. Lastly, we present
a model that explicitly represents the different denominations in discourse,
unifying the way to represent linguistic knowledge and world knowledge.
|
cs/0510021
|
A Unified Power Control Algorithm for Multiuser Detectors in Large
Systems: Convergence and Performance
|
cs.IT math.IT
|
A unified approach to energy-efficient power control, applicable to a large
family of receivers including the matched filter, the decorrelator, the
(linear) minimum-mean-square-error detector (MMSE), and the individually and
jointly optimal multiuser detectors, has recently been proposed for
code-division-multiple-access (CDMA) networks. This unified power control (UPC)
algorithm exploits the linear relationship that has been shown to exist between
the transmit power and the output signal-to-interference-plus-noise ratio (SIR)
in large systems. Based on this principle and by computing the multiuser
efficiency, the UPC algorithm updates the users' transmit powers in an
iterative way to achieve the desired target SIR. In this paper, the convergence
of the UPC algorithm is proved for the matched filter, the decorrelator, and
the MMSE detector. In addition, the performance of the algorithm in finite-size
systems is studied and compared with that of existing power control schemes.
The UPC algorithm is particularly suitable for systems with randomly generated
long spreading sequences (i.e., sequences whose period is longer than one
symbol duration).
|
cs/0510023
|
On the capacity of mobile ad hoc networks with delay constraints
|
cs.IT cs.NI math.IT
|
Previous work on ad hoc network capacity has focused primarily on
source-destination throughput requirements for different models and
transmission scenarios, with an emphasis on delay tolerant applications. In
such problems, network capacity enhancement is achieved as a tradeoff with
transmission delay. In this paper, the capacity of ad hoc networks supporting
delay sensitive traffic is studied. First, a general framework is proposed for
characterizing the interactions between the physical and the network layer in
an ad hoc network. Then, CDMA ad hoc networks, in which advanced signal
processing techniques such as multiuser detection are relied upon to enhance
the user capacity, are analyzed. The network capacity is characterized using a
combination of geometric arguments and large scale analysis, for several
network scenarios employing matched filters, decorrelators and
minimum-mean-square-error receivers. Insight into the network performance for
finite systems is also provided by means of simulations. Both analysis and
simulations show a significant network capacity gain for ad hoc networks
employing multiuser detectors, compared with those using matched filter
receivers, as well as very good performance even under tight delay and
transmission power requirements.
|
cs/0510025
|
Practical Semantic Analysis of Web Sites and Documents
|
cs.IR
|
As Web sites are now ordinary products, it is necessary to make explicit the
notion of quality of a Web site. The quality of a site may be linked to its
ease of access and also to other criteria, such as the fact that the site is
up to date and coherent. This last quality is difficult to ensure because
sites may be updated very frequently, may have many authors, and may be
partially generated, and in this context proof-reading is very difficult. The
same piece of information may be found in different occurrences, but also in
data or meta-data, leading to the need for consistency checking. In this paper
we make a parallel between programs and Web sites. We present some examples of
semantic constraints that one would like to specify (constraints between the
meaning of categories and sub-categories in a thematic directory, consistency
between the organization chart and the rest of the site in an academic site).
We briefly present Natural Semantics, a way to specify the semantics of
programming languages that inspires our work. Then we propose a specification
language for semantic constraints in Web sites that, in conjunction with the
well-known ``make'' program, makes it possible to generate site verification
tools by compiling the specification into Prolog code. We apply our method to a large
XML document which is the scientific part of our institute activity report,
tracking errors or inconsistencies and also constructing some indicators that
can be used by the management of the institute.
|
cs/0510026
|
A decision support system for ship identification based on the curvature
scale space representation
|
cs.CV
|
In this paper, a decision support system for ship identification is
presented. The system receives as input a silhouette of the vessel to be
identified, previously extracted from a side view of the object. This view
could have been acquired with imaging sensors operating at different spectral
ranges (CCD, FLIR, image intensifier). The input silhouette is preprocessed and
compared to those stored in a database, retrieving a small number of potential
matches ranked by their similarity to the target silhouette. This set of
potential matches is presented to the system operator, who makes the final ship
identification. This system makes use of an evolved version of the Curvature
Scale Space (CSS) representation. In the proposed approach, it is curvature
extrema, instead of zero crossings, that are tracked during silhouette
evolution, hence improving robustness and making it possible to cope successfully
with cases where the standard CSS representation is found to be unstable. Also, the
use of local curvature was replaced with the more robust concept of lobe
concavity, with significant additional gains in performance. Experimental
results on actual operational imagery prove the excellent performance and
robustness of the developed method.
|
cs/0510027
|
A Market Test for the Positivity of Arrow-Debreu Prices
|
cs.CE
|
We derive tractable necessary and sufficient conditions for the absence of
buy-and-hold arbitrage opportunities in a perfectly liquid, one period market.
We formulate the positivity of Arrow-Debreu prices as a generalized moment
problem to show that this no arbitrage condition is equivalent to the positive
semidefiniteness of matrices formed by the market price of tradeable securities
and their products. We apply this result to a market with multiple assets and
basket call options.
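The abstract's no-arbitrage condition reduces to checking that certain matrices built from market prices are positive semidefinite. A minimal sketch of such a PSD check (the matrix entries below are purely illustrative, not market data or the paper's construction):

```python
import numpy as np

def is_psd(m, tol=1e-9):
    """Check positive semidefiniteness via the spectrum of a symmetric matrix."""
    return bool(np.min(np.linalg.eigvalsh(m)) >= -tol)

# Hypothetical second-moment matrix formed from prices of two assets and
# their products (illustrative numbers only).
M = np.array([[1.0, 0.5, 0.3],
              [0.5, 0.4, 0.2],
              [0.3, 0.2, 0.3]])
print(is_psd(M))  # PSD here is the certificate of no buy-and-hold arbitrage
```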
|
cs/0510029
|
Conditionally independent random variables
|
cs.IT math.IT
|
In this paper we investigate the notion of conditional independence and prove
several information inequalities for conditionally independent random
variables.
|
cs/0510030
|
A Near Maximum Likelihood Decoding Algorithm for MIMO Systems Based on
Semi-Definite Programming
|
cs.IT math.IT
|
In Multi-Input Multi-Output (MIMO) systems, Maximum-Likelihood (ML) decoding
is equivalent to finding the closest lattice point in an N-dimensional complex
space. In general, this problem is known to be NP hard. In this paper, we
propose a quasi-maximum likelihood algorithm based on Semi-Definite Programming
(SDP). We introduce several SDP relaxation models for MIMO systems, with
increasing complexity. We use interior-point methods for solving the models and
obtain a near-ML performance with polynomial computational complexity. Lattice
basis reduction is applied to further reduce the computational complexity of
solving these models. The proposed relaxation models are also used for soft
output decoding in MIMO systems.
|
cs/0510032
|
Polar Polytopes and Recovery of Sparse Representations
|
cs.IT math.IT
|
Suppose we have a signal y which we wish to represent using a linear
combination of a number of basis atoms a_i, y=sum_i x_i a_i = Ax. The problem
of finding the minimum L0 norm representation for y is a hard problem. The
Basis Pursuit (BP) approach proposes to find the minimum L1 norm representation
instead, which corresponds to a linear program (LP) that can be solved using
modern LP techniques, and several recent authors have given conditions for the
BP (minimum L1 norm) and sparse (minimum L0 solutions) representations to be
identical. In this paper, we explore this sparse representation problem using
the geometry of convex polytopes, as recently introduced into the field by
Donoho. By considering the dual LP we find that the so-called polar polytope P*
of the centrally-symmetric polytope P whose vertices are the atom pairs +-a_i
is particularly helpful in providing us with geometrical insight into
optimality conditions given by Fuchs and Tropp for non-unit-norm atom sets. In
exploring this geometry we are able to tighten some of these earlier results,
showing for example that the Fuchs condition is both necessary and sufficient
for L1-unique-optimality, and that there are situations where Orthogonal
Matching Pursuit (OMP) can eventually find all L1-unique-optimal solutions with
m nonzeros even if ERC fails for m, if allowed to run for more than m steps.
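The greedy algorithm mentioned at the end, Orthogonal Matching Pursuit, can be sketched in a few lines (a generic OMP sketch with an illustrative random dictionary; this is not the paper's polytope analysis, and recovery of the true support is not guaranteed in general):

```python
import numpy as np

def omp(A, y, steps):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-fit by least squares on the chosen atom set."""
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        k = int(np.argmax(np.abs(A.T @ residual)))
        if k not in support:
            support.append(k)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x, support

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)                 # unit-norm atoms
x_true = np.zeros(40)
x_true[[3, 17, 31]] = [1.0, -2.0, 1.5]         # sparse ground truth
y = A @ x_true
x_hat, support = omp(A, y, steps=3)
print(sorted(support), np.linalg.norm(y - A @ x_hat))
```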
|
cs/0510033
|
Coding for the Optical Channel: the Ghost-Pulse Constraint
|
cs.IT cs.DM math.IT
|
We consider a number of constrained coding techniques that can be used to
mitigate a nonlinear effect in the optical fiber channel that causes the
formation of spurious pulses, called ``ghost pulses.'' Specifically, if $b_1
b_2 ... b_{n}$ is a sequence of bits sent across an optical channel, such that
$b_k=b_l=b_m=1$ for some $k,l,m$ (not necessarily all distinct) but $b_{k+l-m}
= 0$, then the ghost-pulse effect causes $b_{k+l-m}$ to change to 1, thereby
creating an error. We design and analyze several coding schemes using binary
and ternary sequences constrained so as to avoid patterns that give rise to
ghost pulses. We also discuss the design of encoders and decoders for these
coding schemes.
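The ghost-pulse condition as stated can be verified by brute force on short sequences. A naive O(n^3) sketch of such a checker (our illustration, not the paper's coding schemes):

```python
def ghost_pulse_violations(bits):
    """Return index triples (k, l, m) with b_k = b_l = b_m = 1 but
    b_{k+l-m} = 0: positions where the nonlinear ghost-pulse effect
    would flip a 0 to a 1, creating an error."""
    n = len(bits)
    ones = [i for i, b in enumerate(bits) if b == 1]
    bad = []
    for k in ones:
        for l in ones:
            for m in ones:
                t = k + l - m
                if 0 <= t < n and bits[t] == 0:
                    bad.append((k, l, m))
    return bad

# 1,1,0,1 is unsafe (e.g. k=l=1, m=0 hits the 0 at position 2),
# while 1,0,1,0,1 avoids the pattern entirely.
print(ghost_pulse_violations([1, 1, 0, 1]))
print(ghost_pulse_violations([1, 0, 1, 0, 1]))
```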
|
cs/0510034
|
COMODI: On the Graphical User Interface
|
cs.HC cs.CE cs.MS
|
We propose a series of features for the graphical user interface (GUI) of the
COmputational MOdule Integrator (COMODI) \cite{Synasc05a}\cite{COMODI}. In view
of the special requirements that a COMODI-type framework for scientific
computing imposes, and drawing inspiration from existing solutions that provide
advanced graphical visual programming environments, we identify the elements
and associated behaviors that will have to find their way into the first
release of COMODI.
|
cs/0510035
|
Design and Performance Analysis of a New Class of Rate Compatible Serial
Concatenated Convolutional Codes
|
cs.IT math.IT
|
In this paper, we provide a performance analysis of a new class of serial
concatenated convolutional codes (SCCC) where the inner encoder can be
punctured beyond the unitary rate. The puncturing of the inner encoder is not
limited to inner coded bits, but extended to systematic bits. Moreover, it is
split into two different puncturings, corresponding to inner code systematic
bits and parity bits. We derive analytical upper bounds on the
error probability of this particular code structure and address suitable design
guidelines for the inner code puncturing patterns. We show that the percentage
of systematic and parity bits to be deleted strongly depends on the SNR region
of interest. In particular, to lower the error floor it is advantageous to put
more puncturing on inner systematic bits. Furthermore, we show that puncturing
of inner systematic bits should be interleaver dependent. Based on these
considerations, we derive design guidelines to obtain well-performing
rate-compatible SCCC families. Throughout the paper, the performance of the
proposed codes is compared with analytical bounds and with the performance of
PCCCs and SCCCs proposed in the literature.
|
cs/0510036
|
Semantic Optimization Techniques for Preference Queries
|
cs.DB cs.AI cs.LO
|
Preference queries are relational algebra or SQL queries that contain
occurrences of the winnow operator ("find the most preferred tuples in a given
relation"). Such queries are parameterized by specific preference relations.
Semantic optimization techniques make use of integrity constraints holding in
the database. In the context of semantic optimization of preference queries, we
identify two fundamental properties: containment of preference relations
relative to integrity constraints and satisfaction of order axioms relative to
integrity constraints. We show numerous applications of those notions to
preference query evaluation and optimization. As integrity constraints, we
consider constraint-generating dependencies, a class generalizing functional
dependencies. We demonstrate that the problems of containment and satisfaction
of order axioms can be captured as specific instances of constraint-generating
dependency entailment. This makes it possible to formulate necessary and
sufficient conditions for the applicability of our techniques as constraint
validity problems. We characterize the computational complexity of such
problems.
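The winnow operator quoted above ("find the most preferred tuples in a given relation") can be sketched directly: keep the tuples not dominated under the preference relation. The car relation and the preference below are hypothetical illustrations, not from the paper:

```python
def winnow(relation, prefers):
    """Winnow operator: keep the tuples of `relation` that are not dominated
    by any other tuple, where prefers(t1, t2) means 't1 is preferred to t2'."""
    return [t for t in relation
            if not any(prefers(u, t) for u in relation if u != t)]

# Hypothetical relation of cars (make, price); prefer the cheaper car
# when the make is the same.
cars = [("vw", 9000), ("vw", 7000), ("bmw", 15000)]
prefers = lambda a, b: a[0] == b[0] and a[1] < b[1]
print(winnow(cars, prefers))  # [('vw', 7000), ('bmw', 15000)]
```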
|
cs/0510037
|
Hi\'{e}rarchisation des r\`{e}gles d'association en fouille de textes (Hierarchical organization of association rules in text mining)
|
cs.IR cs.AI
|
Extraction of association rules is widely used as a data mining method.
However, one of the limits of this approach comes from the large number of
extracted rules and the difficulty for a human expert to deal with all of
them. We propose to solve this problem by structuring the set of rules into a
hierarchy. The expert can then explore the rules, moving from a rule to a more
general one when going up in the hierarchy, or to a more specific one when
going down. Rules are structured at two levels. The global level aims at
building a hierarchy from the set of extracted rules; to this end we define a
first type of rule subsumption relying on Galois lattices. The second level
consists of a local, more detailed analysis of each rule: it generates, for a
given rule, a set of generalization rules structured into a local hierarchy.
This leads to the definition of a second type of subsumption, which comes from
inductive logic programming and integrates a terminological model.
|
cs/0510038
|
Learning Unions of $\omega(1)$-Dimensional Rectangles
|
cs.LG
|
We consider the problem of learning unions of rectangles over the domain
$[b]^n$, in the uniform distribution membership query learning setting, where
both b and n are "large". We obtain poly$(n, \log b)$-time algorithms for the
following classes:
- poly$(n \log b)$-way Majority of $O(\frac{\log(n \log b)} {\log \log(n \log
b)})$-dimensional rectangles.
- Union of poly$(\log(n \log b))$ many $O(\frac{\log^2 (n \log b)} {(\log
\log(n \log b) \log \log \log (n \log b))^2})$-dimensional rectangles.
- poly$(n \log b)$-way Majority of poly$(n \log b)$-Or of disjoint
$O(\frac{\log(n \log b)} {\log \log(n \log b)})$-dimensional rectangles.
Our main algorithmic tool is an extension of Jackson's boosting- and
Fourier-based Harmonic Sieve algorithm [Jackson 1997] to the domain $[b]^n$,
building on work of [Akavia, Goldwasser, Safra 2003]. Other ingredients used to
obtain the results stated above are techniques from exact learning [Beimel,
Kushilevitz 1998] and ideas from recent work on learning augmented $AC^{0}$
circuits [Jackson, Klivans, Servedio 2002] and on representing Boolean
functions as thresholds of parities [Klivans, Servedio 2001].
|
cs/0510040
|
The "...system of constraints"
|
cs.IT math.IT
|
This paper proposes that the mathematical relationship between an entropy
distribution and its limit offers some new insight into system performance.
This relationship is used to quantify variation among the entities of a system,
where variation is defined as tolerance, option, specification or
implementation variation among the entities of a system. Variation has a
significant and increasing impact on communications system performance. This
paper introduces means to identify, quantify and reduce such performance
variations.
|
cs/0510043
|
On Minimal Pseudo-Codewords of Tanner Graphs from Projective Planes
|
cs.IT cs.DM math.IT
|
We would like to better understand the fundamental cone of Tanner graphs
derived from finite projective planes. Towards this goal, we discuss bounds on
the AWGNC and BSC pseudo-weight of minimal pseudo-codewords of such Tanner
graphs, on one hand, and study the structure of minimal pseudo-codewords, on
the other.
|
cs/0510044
|
Belief Propagation Based Multi-User Detection
|
cs.IT math.IT
|
We apply belief propagation (BP) to multi--user detection in a spread
spectrum system, under the assumption of Gaussian symbols. We prove that BP is
convergent and that it estimates the correct conditional expectation of the
input symbols. It is therefore an optimal (minimum mean square error)
detection algorithm. This suggests the possibility of designing BP detection
algorithms for more general systems. As a byproduct we rederive the Tse-Hanly
formula for minimum mean square error without any recourse to random matrix
theory.
|
cs/0510045
|
Why We Can Not Surpass Capacity: The Matching Condition
|
cs.IT math.IT
|
We show that iterative coding systems cannot surpass capacity using only
quantities which naturally appear in density evolution. Although the result in
itself is trivial, the method which we apply shows that in order to achieve
capacity the various components in an iterative coding system have to be
perfectly matched. This generalizes the perfect matching condition which was
previously known for the case of transmission over the binary erasure channel
to the general class of binary-input memoryless output-symmetric channels.
Potential applications of this perfect matching condition are the construction
of capacity-achieving degree distributions and the determination of the number
of required iterations as a function of the multiplicative gap to capacity.
|
cs/0510047
|
Geometrical relations between space time block code designs and
complexity reduction
|
cs.IT math.IT
|
In this work, the geometric relation between space time block code design for
the coherent channel and its non-coherent counterpart is exploited to get an
analogue of the information theoretic inequality $I(X;S)\le I((X,H);S)$ in
terms of diversity. It provides a lower bound on the performance of
non-coherent codes when used in coherent scenarios. This leads in turn to a
code design decomposition result splitting coherent code design into two
complexity-reduced subtasks. Moreover, a geometrical criterion for
high-performance space time code design is derived.
|
cs/0510049
|
Bounds on the Pseudo-Weight of Minimal Pseudo-Codewords of Projective
Geometry Codes
|
cs.IT cs.DM math.IT
|
In this paper we focus our attention on a family of finite geometry codes,
called type-I projective geometry low-density parity-check (PG-LDPC) codes,
that are constructed based on the projective planes PG(2,q). In particular, we
study their minimal codewords and pseudo-codewords, as it is known that these
vectors characterize completely the code performance under maximum-likelihood
decoding and linear programming decoding, respectively. The main results of
this paper consist of upper and lower bounds on the pseudo-weight of the
minimal pseudo-codewords of type-I PG-LDPC codes.
|
cs/0510050
|
Integration of the DOLCE top-level ontology into the OntoSpec
methodology
|
cs.AI
|
This report describes a new version of the OntoSpec methodology for ontology
building. Defined by the LaRIA Knowledge Engineering Team (University of
Picardie Jules Verne, Amiens, France), OntoSpec aims at helping builders to
model ontological knowledge (upstream of formal representation). The
methodology relies on a set of rigorously-defined modelling primitives and
principles. Its application leads to the elaboration of a semi-informal
ontology, which is independent of knowledge representation languages. We
recently enriched the OntoSpec methodology by endowing it with a new resource,
the DOLCE top-level ontology defined at the LOA (IST-CNR, Trento, Italy). The
goal of this integration is to provide modellers with additional help in
structuring application ontologies, while maintaining independence
vis-\`{a}-vis formal representation languages. In this report, we first provide
an overview of the OntoSpec methodology's general principles and then describe
the DOLCE re-engineering process. A complete version of DOLCE-OS (i.e. a
specification of DOLCE in the semi-informal OntoSpec language) is presented in
an appendix.
|
cs/0510054
|
The Nature of Novelty Detection
|
cs.IR cs.CL
|
Sentence level novelty detection aims at reducing redundant sentences from a
sentence list. In the task, sentences appearing later in the list with no new
meanings are eliminated. Aiming at a better accuracy for detecting redundancy,
this paper reveals the nature of the novelty detection task currently
overlooked by the Novelty community: Novelty as a combination of the partial
overlap (PO, two sentences sharing common facts) and complete overlap (CO, the
first sentence covers all the facts of the second sentence) relations. By
formalizing novelty detection as a combination of the two relations between
sentences, new viewpoints toward techniques dealing with Novelty are proposed.
Among the methods discussed, the similarity, overlap, pool and language
modeling approaches are commonly used. Furthermore, a novel approach, the
selected pool method, is provided, which follows immediately from the nature
of the task. Experimental results obtained on all three currently available
novelty datasets show that the selected pool method is significantly better
than, or no worse than, the current methods. Knowledge about the nature of the
task also affects the
evaluation methodologies. We propose new evaluation measures for Novelty
according to the nature of the task, as well as possible directions for future
study.
|
cs/0510055
|
Degrees of Freedom in Multiuser MIMO
|
cs.IT math.IT
|
We explore the available degrees of freedom for various multiuser MIMO
communication scenarios such as the multiple access, broadcast, interference,
relay, X and Z channels. For the two user MIMO interference channel, we find a
general inner bound and a genie-aided outer bound that give us the exact number
of degrees of freedom in many cases. We also study a share-and-transmit scheme
for transmitter cooperation. For the share-and-transmit scheme, we show how the
gains of transmitter cooperation are entirely offset by the cost of enabling
that cooperation so that the available degrees of freedom are not increased.
|
cs/0510056
|
First-Order Modeling and Stability Analysis of Illusory Contours
|
cs.CV cs.AI
|
In visual cognition, illusions help elucidate certain intriguing latent
perceptual functions of the human vision system, and their proper mathematical
modeling and computational simulation are therefore deeply beneficial to both
biological and computer vision. Inspired by prior work, the current
paper proposes a first-order energy-based model for analyzing and simulating
illusory contours. The lower complexity of the proposed model facilitates
rigorous mathematical analysis on the detailed geometric structures of illusory
contours. After being asymptotically approximated by classical active contours,
the proposed model is then robustly computed using the celebrated level-set
method of Osher and Sethian (J. Comput. Phys., 79:12-49, 1988) with a natural
supervising scheme. Potential cognitive implications of the mathematical
results are addressed, and generic computational examples are demonstrated and
discussed.
|
cs/0510058
|
Precoding for 2x2 Doubly-Dispersive WSSUS Channels
|
cs.IT math.IT
|
Optimal link adaptation to the scattering function of wide-sense stationary
uncorrelated scattering (WSSUS) mobile communication channels is still an
unsolved problem despite its importance for next-generation system design. In
multicarrier transmission such link adaptation is performed by pulse shaping,
which in turn is equivalent to precoding with respect to the second order
channel statistics. In the present framework a translation of the precoder
optimization problem into an optimization problem over trace class operators is
used. This problem, which is also well known in the context of quantum
information theory, is unsolved in general due to its non-convex nature.
However, in very low dimensions the problem formulation reveals an additional
analytic structure which admits a solution for the optimal precoder and
multiplexing scheme. Hence, in this contribution the analytic solution of the
problem for the 2x2 doubly-dispersive WSSUS channel is presented.
|
cs/0510059
|
Cybercars : Past, Present and Future of the Technology
|
cs.RO
|
The automobile has become the dominant transport mode in the world in the last
century. In order to meet a continuously growing demand for transport, one
solution is to move vehicle control towards full driving automation, which
removes the driver from the control loop to improve efficiency and reduce
accidents. Recent work shows that there are several realistic paths towards
this deployment: driving assistance on passenger cars, automated commercial
vehicles on dedicated infrastructures, and new forms of urban transport
(car-sharing and cybercars). Cybercars have already been put into operation in
Europe, and it seems that this approach could lead the way towards full
automation on most urban, and later interurban, infrastructures.
The European project CyberCars has brought many improvements in the technology
needed to operate cybercars over the last three years. A new, larger European
project is now being prepared to carry this work further in order to meet more
ambitious objectives in terms of safety and efficiency. This paper presents
past and present technologies and focuses on future developments.
|
cs/0510060
|
Optimal Transmit Covariance for Ergodic MIMO Channels
|
cs.IT math.IT
|
In this paper we consider the computation of channel capacity for ergodic
multiple-input multiple-output channels with additive white Gaussian noise. Two
scenarios are considered. Firstly, a time-varying channel is considered in
which both the transmitter and the receiver have knowledge of the channel
realization. The optimal transmission strategy is water-filling over space and
time. It is shown that this may be achieved in a causal, indeed instantaneous
fashion. In the second scenario, only the receiver has perfect knowledge of the
channel realization, while the transmitter has knowledge of the channel gain
probability law. In this case we determine an optimality condition on the input
covariance for ergodic Gaussian vector channels with arbitrary channel
distribution under the condition that the channel gains are independent of the
transmit signal. Using this optimality condition, we find an iterative
algorithm for numerical computation of optimal input covariance matrices.
Applications to correlated Rayleigh and Ricean channels are given.
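The first scenario's optimal strategy, water-filling, admits a compact numerical sketch: allocate p_i = max(0, mu - 1/g_i) and pick the water level mu by bisection so the powers meet the budget. The channel gains and power budget below are illustrative, not from the paper:

```python
import numpy as np

def waterfill(gains, total_power, iters=100):
    """Water-filling power allocation over parallel subchannels:
    p_i = max(0, mu - 1/g_i), with mu chosen so sum(p_i) = total_power."""
    inv = 1.0 / np.asarray(gains, dtype=float)   # inverse channel gains
    lo, hi = 0.0, total_power + inv.max()        # bracket for the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

p = waterfill([2.0, 1.0, 0.1], total_power=1.0)
print(p, p.sum())   # stronger subchannels receive more power
```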
|
cs/0510062
|
Using Interval Particle Filtering for Markerless 3D Human Motion
Capture
|
cs.AI
|
In this paper we present a new approach for markerless human motion capture
from conventional camera feeds. The aim of our study is to recover the 3D
positions of key points of the body that can serve for gait analysis. Our
approach is based on foreground segmentation, an articulated body model and
particle filters. In order to be generic and simple, no restrictive dynamic
modelling is used. A new modified particle filtering algorithm, which we call
Interval Particle Filtering, is introduced and used to search the model
configuration space efficiently. It reorganizes the configuration search space
in an optimal, deterministic way and proved efficient in tracking natural
human movement. Results for human motion capture from a single camera are
presented and compared to results obtained from a marker-based system. The
system proved able to track motion successfully even under partial occlusion.
|
cs/0510063
|
Markerless Human Motion Capture for Gait Analysis
|
cs.AI
|
The aim of our study is to detect balance disorders and a tendency to fall in
the elderly, based on gait parameters. In this paper we present a new tool for
gait analysis based on markerless human motion capture from camera feeds. The
system introduced here recovers the 3D positions of several key points of the
human body while walking. Foreground segmentation, an articulated body model
and particle filtering are the basic elements of our approach. No dynamic
model is used; the system can thus be described as generic and simple to
implement. A modified particle filtering algorithm, which we call Interval
Particle Filtering, is used to reorganise and search through the model's
configuration search space in a deterministic, optimal way. This algorithm was
able to track human movement successfully. Results from processing a single
camera feed are shown and compared to results obtained using a marker-based
human motion capture system.
|
cs/0510067
|
On the Spread of Random Interleaver
|
cs.IT math.IT
|
For a given blocklength we determine the number of interleavers which have
spread equal to two. Using this, we find out the probability that a randomly
chosen interleaver has spread two. We show that as blocklength increases, this
probability increases but very quickly converges to the value $1-e^{-2} \approx
0.8647$. Subsequently, we determine a lower bound on the probability of an
interleaver having spread at least $s$. We show that this lower bound converges
to the value $e^{-2(s-2)^{2}}$, as the blocklength increases.
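The 1-e^{-2} limit is easy to check numerically. Assuming the usual "new" spread definition, spread(pi) = min over i != j of |i-j| + |pi(i)-pi(j)| (our assumption, not quoted from the abstract), the spread is always at least 2 and equals 2 exactly when some adjacent pair of positions maps to adjacent positions. A Monte Carlo sketch with illustrative parameters:

```python
import random

def has_spread_two(perm):
    """Spread = min over i != j of |i-j| + |pi(i)-pi(j)| is always >= 2;
    it equals 2 iff some adjacent positions map to adjacent positions."""
    return any(abs(perm[i] - perm[i + 1]) == 1 for i in range(len(perm) - 1))

random.seed(1)
n, trials = 1000, 2000
hits = 0
for _ in range(trials):
    perm = list(range(n))
    random.shuffle(perm)
    hits += has_spread_two(perm)
print(hits / trials)   # approaches 1 - exp(-2) ~ 0.8647 for large blocklength
```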
|
cs/0510068
|
Ultra Wideband Impulse Radio Systems with Multiple Pulse Types
|
cs.IT math.IT
|
In an ultra wideband (UWB) impulse radio (IR) system, a number of pulses,
each transmitted in an interval called a "frame", are employed to represent one
information symbol. Conventionally, a single type of UWB pulse is used in all
frames of all users. In this paper, IR systems with multiple types of UWB
pulses are considered, where different types of pulses can be used in different
frames by different users. Both stored-reference (SR) and transmitted-reference
(TR) systems are considered. First, the spectral properties of a multi-pulse IR
system with polarity randomization are investigated. It is shown that the
average power spectral density is the average of the spectral contents of
different pulse shapes. Then, approximate closed-form expressions for the bit
error probability of a multi-pulse SR-IR system are derived for RAKE receivers
in asynchronous multiuser environments. The effects of both inter-frame
interference (IFI) and multiple-access interference (MAI) are analyzed. The
theoretical and simulation results indicate that SR-IR systems that are more
robust against IFI and MAI than a "conventional" SR-IR system can be designed
with multiple types of ultra-wideband pulses. Finally, extensions to
multi-pulse TR-IR systems are briefly described.
|
cs/0510070
|
On Coding for Reliable Communication over Packet Networks
|
cs.IT cs.NI math.IT
|
We present a capacity-achieving coding scheme for unicast or multicast over
lossy packet networks. In the scheme, intermediate nodes perform additional
coding yet neither decode nor wait for a block of packets before sending
out coded packets. Rather, whenever they have a transmission opportunity, they
send out coded packets formed from random linear combinations of previously
received packets. All coding and decoding operations have polynomial
complexity.
We show that the scheme is capacity-achieving as long as packets received on
a link arrive according to a process that has an average rate. Thus, packet
losses on a link may exhibit correlation in time or with losses on other links.
In the special case of Poisson traffic with i.i.d. losses, we give error
exponents that quantify the rate of decay of the probability of error with
coding delay. Our analysis of the scheme shows that it is not only
capacity-achieving, but that the propagation of packets carrying "innovative"
information follows the propagation of jobs through a queueing network, and
therefore fluid flow models yield good approximations. We consider networks
with both lossy point-to-point and broadcast links, allowing us to model both
wireline and wireless packet networks.
|
cs/0510071
|
A Simple Cooperative Diversity Method Based on Network Path Selection
|
cs.IT math.IT
|
Cooperative diversity has been recently proposed as a way to form virtual
antenna arrays that provide dramatic gains in slow fading wireless
environments. However most of the proposed solutions require distributed
space-time coding algorithms, the careful design of which is left for future
investigation if there is more than one cooperative relay. We propose a novel
scheme that alleviates these problems and provides diversity gains on the
order of the number of relays in the network. Our scheme first selects the best
relay from a set of M available relays and then uses this best relay for
cooperation between the source and the destination. We develop and analyze a
distributed method to select the best relay that requires no topology
information and is based on local measurements of the instantaneous channel
conditions. This method also requires no explicit communication among the
relays. The success (or failure) in selecting the best available path depends
on the statistics of the wireless channel, and a methodology to evaluate
performance for any kind of wireless channel statistics is provided.
Information-theoretic analysis of outage probability shows that our scheme
achieves the same diversity-multiplexing tradeoff as more complex protocols
that require coordination and distributed space-time coding for M nodes, such
as those proposed in [7]. The simplicity of the technique allows for immediate
implementation in existing radio hardware, and its adoption
could provide for improved flexibility, reliability and efficiency in future 4G
wireless systems.
|
cs/0510072
|
On Interleaving Techniques for MIMO Channels and Limitations of Bit
Interleaved Coded Modulation
|
cs.IT math.IT
|
It is shown that while the mutual information curves for coded modulation
(CM) and bit interleaved coded modulation (BICM) overlap in the case of a
single input single output channel, the same is not true in multiple input
multiple output (MIMO) channels. A method for mitigating fading in the presence
of multiple transmit antennas, named coordinate interleaving (CI), is presented
as a generalization of component interleaving for a single transmit antenna.
The extent of any advantages of CI over BICM, relative to CM, is analyzed from
a mutual information perspective; the analysis is based on an equivalent
parallel channel model for CI. Several expressions for mutual information in
the presence of CI and multiple transmit and receive antennas are derived.
Results show that CI gives higher mutual information compared to that of BICM
if proper signal mappings are used. Effects like constellation rotation in the
presence of CI are also considered and illustrated; it is shown that
constellation rotation can increase the constrained capacity.
|
cs/0510075
|
On-Off Frequency-Shift-Keying for Wideband Fading Channels
|
cs.IT math.IT
|
M-ary On-Off Frequency-Shift-Keying (OOFSK) is a digital modulation format in
which M-ary FSK signaling is overlaid on On/Off keying. This paper investigates
the potential of this modulation format in the context of wideband fading
channels. First, it is assumed that the receiver uses energy detection for the
reception of OOFSK signals. Capacity expressions are obtained for the cases in
which the receiver has perfect and imperfect fading side information. Power
efficiency is investigated when the transmitter is subject to a peak-to-average
power ratio (PAR) limitation or a peak power limitation. It is shown that under
a PAR limitation, it is extremely power inefficient to operate in the very low
SNR regime. On the other hand, if there is only a peak power limitation, it is
demonstrated that power efficiency improves as one operates with smaller SNR
and vanishing duty factor. Also studied are the capacity improvements that
accrue when the receiver can track phase shifts in the channel or if the
received signal has a specular component. To take advantage of those features,
the phase of the modulation is also allowed to carry information.
|
cs/0510076
|
Applying Evolutionary Optimisation to Robot Obstacle Avoidance
|
cs.AI cs.RO
|
This paper presents an artificial evolution-based method for stereo image
analysis and its application to real-time obstacle detection and avoidance for
a mobile robot. It uses the Parisian approach, which consists here in splitting
the representation of the robot's environment into a large number of simple
primitives, the "flies", which are evolved following a biologically inspired
scheme and give a fast, low-cost solution to the obstacle detection problem in
mobile robotics.
|
cs/0510077
|
Connection state overhead in a dynamic linear network
|
cs.IT cs.NI math.IT
|
We consider a dynamical linear network where nearest neighbours communicate
via links whose states form binary (open/closed) valued independent and
identically distributed Markov processes. Our main result is a tight
information-theoretic lower bound on the network traffic required by the
connection state overhead, or the information required for all nodes to know
their connected neighbourhood. These results, and especially their possible
generalisations to more realistic network models, could give us valuable
understanding of the unavoidable protocol overheads in rapidly changing Ad hoc
and sensor networks.
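For a single link modeled as a two-state Markov process, the entropy rate (the rate at which fresh state information is generated, a natural ingredient of such overhead bounds) is straightforward to compute. A minimal sketch, illustrative rather than the paper's actual bound:

```python
import math

def binary_entropy(p):
    # H_b(p) in bits.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def markov_entropy_rate(p01, p10):
    # Two-state (closed/open) Markov link with transition probabilities
    # p01 = P(closed -> open) and p10 = P(open -> closed).
    # Stationary probability of the open state: p01 / (p01 + p10).
    # Entropy rate = expected entropy of the next transition.
    pi_open = p01 / (p01 + p10)
    return pi_open * binary_entropy(p10) + (1 - pi_open) * binary_entropy(p01)

# A fully random link generates one bit of fresh state information per slot;
# a slowly changing link generates much less.
print(markov_entropy_rate(0.5, 0.5))  # 1.0
print(markov_entropy_rate(0.1, 0.1))  # well below 1 bit per slot
```

Slower link dynamics mean less fresh state information per slot, which is why rapidly changing ad hoc and sensor networks pay the largest overhead.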
|
cs/0510078
|
Vector Gaussian Multiple Description with Individual and Central
Receivers
|
cs.IT math.IT
|
L multiple descriptions of a vector Gaussian source for individual and
central receivers are investigated. The sum rate of the descriptions with
covariance distortion measure constraints, in a positive semidefinite ordering,
is exactly characterized. For two descriptions, the entire rate region is
characterized. Jointly Gaussian descriptions are optimal in achieving the
limiting rates. The key component of the solution is a novel
information-theoretic inequality that is used to lower bound the achievable
multiple description rates.
|
cs/0510079
|
Evidence with Uncertain Likelihoods
|
cs.AI
|
An agent often has a number of hypotheses, and must choose among them based
on observations, or outcomes of experiments. Each of these observations can be
viewed as providing evidence for or against various hypotheses. All the
attempts to formalize this intuition up to now have assumed that associated
with each hypothesis h there is a likelihood function \mu_h, which is a
probability measure that intuitively describes how likely each observation is,
conditional on h being the correct hypothesis. We consider an extension of this
framework where there is uncertainty as to which of a number of likelihood
functions is appropriate, and discuss how one formal approach to defining
evidence, which views evidence as a function from priors to posteriors, can be
generalized to accommodate this uncertainty.
|
cs/0510080
|
When Ignorance is Bliss
|
cs.AI cs.LG
|
It is commonly-accepted wisdom that more information is better, and that
information should never be ignored. Here we argue, using both a Bayesian and a
non-Bayesian analysis, that in some situations you are better off ignoring
information if your uncertainty is represented by a set of probability
measures. These include situations in which the information is relevant for the
prediction task at hand. In the non-Bayesian analysis, we show how ignoring
information avoids dilation, the phenomenon that additional pieces of
information sometimes lead to an increase in uncertainty. In the Bayesian
analysis, we show that for small sample sizes and certain prediction tasks, the
Bayesian posterior based on a noninformative prior yields worse predictions
than simply ignoring the given information.
|
cs/0510083
|
Neuronal Spectral Analysis of EEG and Expert Knowledge Integration for
Automatic Classification of Sleep Stages
|
cs.AI
|
Being able to analyze and interpret signal coming from electroencephalogram
(EEG) recording can be of high interest for many applications including medical
diagnosis and Brain-Computer Interfaces. Indeed, human experts are today able
to extract from this signal many hints related to physiological as well as
cognitive states of the recorded subject, and it would be very useful to
perform such a task automatically, but today no completely automatic system
exists. In previous studies, we have compared human expertise and automatic
processing tools, including artificial neural networks (ANN), to better
understand the competences of each and determine which aspects are difficult
to integrate into a fully automatic system. In this paper, we bring more
elements to that study by reporting the main results of a practical experiment
carried out in a hospital for sleep pathology study. An EEG recording was
studied and labeled by a human expert and an ANN. We describe here the
characteristics of the experiment and both the human and neural analysis
procedures, compare their performances, and point out the main limitations
which arise from this study.
|
cs/0510084
|
R\'{e}flexions sur la question fr\'{e}quentielle en traitement du signal
|
cs.CE cs.IR math-ph math.MP math.SP
|
New definitions are suggested for frequencies which may be instantaneous or
not. The Heisenberg-Gabor inequality and the Shannon sampling theorem are
briefly discussed.
|
cs/0510085
|
Canonical time-frequency, time-scale, and frequency-scale
representations of time-varying channels
|
cs.IT math.IT
|
Mobile communication channels are often modeled as linear time-varying
filters or, equivalently, as time-frequency integral operators with finite
support in time and frequency. Such a characterization inherently assumes the
signals are narrowband and may not be appropriate for wideband signals. In this
paper time-scale characterizations are examined that are useful in wideband
time-varying channels, for which a time-scale integral operator is physically
justifiable. A review of these time-frequency and time-scale characterizations
is presented. Both the time-frequency and time-scale integral operators have a
two-dimensional discrete characterization which motivates the design of
time-frequency or time-scale rake receivers. These receivers have taps for both
time and frequency (or time and scale) shifts of the transmitted signal. A
general theory of these characterizations which generates, as specific cases,
the discrete time-frequency and time-scale models is presented here. The
interpretation of these models, namely, that they can be seen to arise from
processing assumptions on the transmit and receive waveforms is discussed. Out
of this discussion a third model arises: a frequency-scale continuous channel
model with an associated discrete frequency-scale characterization.
|
cs/0510089
|
Automata-based adaptive behavior for economic modeling using game theory
|
cs.MA cs.DM
|
In this paper, we deal with some specific domains of application of game
theory, one of the major classes of models in the new approaches to modelling
in the economic domain. For that, we use genetic automata, which allow us to
build adaptive strategies for the players. We explain how the proposed
automata-based formalism - a matrix representation of automata with
multiplicities - allows us to define a semi-distance between strategy
behaviors. With these tools, we are able to generate an automatic process to
compute emergent systems of entities whose behaviors are represented by these
genetic automata.
|
cs/0510091
|
An efficient memetic, permutation-based evolutionary algorithm for
real-world train timetabling
|
cs.AI
|
Train timetabling is a difficult and very tightly constrained combinatorial
problem that deals with the construction of train schedules. We focus on the
particular problem of local reconstruction of the schedule following a small
perturbation, seeking minimisation of the total accumulated delay by adapting
times of departure and arrival for each train and allocation of resources
(tracks, routing nodes, etc.). We describe a permutation-based evolutionary
algorithm that relies on a semi-greedy heuristic to gradually reconstruct the
schedule by inserting trains one after the other following the permutation.
This algorithm can be hybridised with the ILOG commercial MIP programming tool
CPLEX in a coarse-grained manner: the evolutionary part is used to quickly
obtain a good but suboptimal solution and this intermediate solution is refined
using CPLEX. Experimental results are presented on a large real-world case
involving more than one million variables and two million constraints. Results
are surprisingly good as the evolutionary algorithm, alone or hybridised,
produces excellent solutions much faster than CPLEX alone.
|
cs/0510095
|
Rate Region of the Quadratic Gaussian Two-Encoder Source-Coding Problem
|
cs.IT math.IT
|
We determine the rate region of the quadratic Gaussian two-encoder
source-coding problem. This rate region is achieved by a simple architecture
that separates the analog and digital aspects of the compression. Furthermore,
this architecture requires higher rates to send a Gaussian source than it does
to send any other source with the same covariance. Our techniques can also be
used to determine the sum rate of some generalizations of this classical
problem. Our approach involves coupling the problem to a quadratic Gaussian
``CEO problem.''
|
cs/0511001
|
Capacity with Causal and Non-Causal Side Information - A Unified View
|
cs.IT math.IT
|
We identify the common underlying form of the capacity expression that is
applicable to both cases where causal or non-causal side information is made
available to the transmitter. Using this common form we find that for the
single user channel, the multiple access channel, the degraded broadcast
channel, and the degraded relay channel, the sum capacity with causal and
non-causal side information are identical when all the transmitter side
information is also made available to all the receivers. A genie-aided
outer bound is developed which states that when a genie provides $n$ bits of
side information to a receiver, the resulting capacity improvement cannot be
more than $n$ bits. Combining these two results, we are able to bound the
relative
capacity advantage of non-causal side information over causal side information
for both single user as well as various multiple user communication scenarios.
Applications of these capacity bounds are demonstrated through examples of
random access channels. Interestingly, the capacity results indicate that the
excessive MAC layer overheads common in present wireless systems may be avoided
through coding across multiple access blocks. It is also shown that even one
bit of side information at the transmitter can result in unbounded capacity
improvement. As a side result, we obtain the sum capacity for a multiple access
channel when the side information available to the transmitter is causal and
possibly correlated to the side information available to the receiver.
|
cs/0511002
|
Bibliographic Classification using the ADS Databases
|
cs.IR cs.DL
|
We discuss two techniques used to characterize bibliographic records based on
their similarity to and relationship with the contents of the NASA Astrophysics
Data System (ADS) databases. The first method has been used to classify input
text as being relevant to one or more subject areas based on an analysis of the
frequency distribution of its individual words. The second method has been used
to classify existing records as being relevant to one or more databases based
on the distribution of the papers citing them. Both techniques have proven to
be valuable tools in assigning new and existing bibliographic records to
different disciplines within the ADS databases.
|
cs/0511003
|
Optimal Prefix Codes for Infinite Alphabets with Nonlinear Costs
|
cs.IT cs.DS math.IT
|
Let $P = \{p(i)\}$ be a measure of strictly positive probabilities on the set
of nonnegative integers. Although the countable number of inputs prevents usage
of the Huffman algorithm, there are nontrivial $P$ for which known methods find
a source code that is optimal in the sense of minimizing expected codeword
length. For some applications, however, a source code should instead minimize
one of a family of nonlinear objective functions, $\beta$-exponential means,
those of the form $\log_a \sum_i p(i) a^{n(i)}$, where $n(i)$ is the length of
the $i$th codeword and $a$ is a positive constant. Applications of such
minimizations include a novel problem of maximizing the chance of message
receipt in single-shot communications ($a<1$) and a previously known problem of
minimizing the chance of buffer overflow in a queueing system ($a>1$). This
paper introduces methods for finding codes optimal for such exponential means.
One method applies to geometric distributions, while another applies to
distributions with lighter tails. The latter algorithm is applied to Poisson
distributions and both are extended to alphabetic codes, as well as to
minimizing maximum pointwise redundancy. The aforementioned application of
minimizing the chance of buffer overflow is also considered.
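The $\beta$-exponential mean objective itself is simple to evaluate for a given code. A small sketch (the code lengths below form a standard Huffman code for a dyadic source, chosen purely for illustration):

```python
import math

def exponential_mean(probs, lengths, a):
    # beta-exponential mean of codeword lengths:
    #     L_a = log_a( sum_i p(i) * a**n(i) )
    # where n(i) is the length of the i-th codeword and a > 0, a != 1.
    total = sum(p * a ** n for p, n in zip(probs, lengths))
    return math.log(total, a)

probs = [0.5, 0.25, 0.125, 0.125]
lengths = [1, 2, 3, 3]  # a Huffman code for this dyadic source

# As a -> 1 the objective approaches the ordinary expected length (1.75 here);
# a > 1 penalizes long codewords more heavily (buffer-overflow regime),
# a < 1 rewards short codewords (single-shot receipt regime).
print(exponential_mean(probs, lengths, a=1.0001))  # ~1.75
print(exponential_mean(probs, lengths, a=2))       # 2.0
```

This makes concrete why the optimal code depends on $a$: the two regimes $a<1$ and $a>1$ weight the tail of the length distribution in opposite directions.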
|
cs/0511004
|
Evolutionary Computing
|
cs.AI
|
Evolutionary computing (EC) is an exciting development in Computer Science.
It amounts to building, applying and studying algorithms based on the Darwinian
principles of natural selection. In this paper we briefly introduce the main
concepts behind evolutionary computing. We present the main components of all
evolutionary algorithms (EA), sketch the differences between different types of
EAs and survey application areas ranging from optimization, modeling and
simulation to entertainment.
|
cs/0511005
|
The egalitarian effect of search engines
|
cs.CY cs.IR physics.soc-ph
|
Search engines have become key media for our scientific, economic, and social
activities by enabling people to access information on the Web in spite of its
size and complexity. On the down side, search engines bias the traffic of users
according to their page-ranking strategies, and some have argued that they
create a vicious cycle that amplifies the dominance of established and already
popular sites. We show that, contrary to these prior claims and our own
intuition, the use of search engines actually has an egalitarian effect. We
reconcile theoretical arguments with empirical evidence showing that the
combination of retrieval by search engines and search behavior by users
mitigates the attraction of popular pages, directing more traffic toward less
popular sites, even in comparison to what would be expected from users randomly
surfing the Web.
|
cs/0511008
|
Analysis of Stochastic Service Guarantees in Communication Networks: A
Basic Calculus
|
cs.PF cs.IT cs.NI math.IT
|
A basic calculus is presented for stochastic service guarantee analysis in
communication networks. Central to the calculus are two definitions,
maximum-(virtual)-backlog-centric (m.b.c) stochastic arrival curve and
stochastic service curve, which respectively generalize arrival curve and
service curve in the deterministic network calculus framework. With m.b.c
stochastic arrival curve and stochastic service curve, various basic results
are derived under the (min, +) algebra for the general case analysis, which are
crucial to the development of stochastic network calculus. These results
include (i) superposition of flows, (ii) concatenation of servers, (iii) output
characterization, (iv) per-flow service under aggregation, and (v) stochastic
backlog and delay guarantees. In addition, to perform independent case
analysis, stochastic strict server is defined, which uses an ideal service
process and an impairment process to characterize a server. The concept of
stochastic strict server not only allows us to improve the basic results (i) --
(v) under the independent case, but also provides a convenient way to find the
stochastic service curve of a server. Moreover, an approach is introduced to
find the m.b.c stochastic arrival curve of a flow and the stochastic service
curve of a server.
|
cs/0511009
|
Mismatched codebooks and the role of entropy-coding in lossy data
compression
|
cs.IT math.IT math.PR
|
We introduce a universal quantization scheme based on random coding, and we
analyze its performance. This scheme consists of a source-independent random
codebook (typically _mismatched_ to the source distribution), followed by
optimal entropy-coding that is _matched_ to the quantized codeword distribution.
A single-letter formula is derived for the rate achieved by this scheme at a
given distortion, in the limit of large codebook dimension. The rate reduction
due to entropy-coding is quantified, and it is shown that it can be arbitrarily
large. In the special case of "almost uniform" codebooks (e.g., an i.i.d.
Gaussian codebook with large variance) and difference distortion measures, a
novel connection is drawn between the compression achieved by the present
scheme and the performance of "universal" entropy-coded dithered lattice
quantizers. This connection generalizes the "half-a-bit" bound on the
redundancy of dithered lattice quantizers. Moreover, it demonstrates a strong
notion of universality where a single "almost uniform" codebook is near-optimal
for _any_ source and _any_ difference distortion measure.
|
cs/0511011
|
The Impact of Social Networks on Multi-Agent Recommender Systems
|
cs.LG cs.CC cs.MA
|
Awerbuch et al.'s approach to distributed recommender systems (DRSs) is to
have agents sample products at random while randomly querying one another for
the best item they have found; we improve upon this by adding a communication
network. Agents can only communicate with their immediate neighbors in the
network, but neighboring agents may or may not represent users with common
interests. We define two network structures: in the ``mailing-list model,''
agents representing similar users form cliques, while in the ``word-of-mouth
model'' the agents are distributed randomly in a scale-free network (SFN). In
both models, agents tell their neighbors about satisfactory products as they
are found. In the word-of-mouth model, knowledge of items propagates only
through interested agents, and the SFN parameters affect the system's
performance. We include a summary of our new results on the character and
parameters of random subgraphs of SFNs, in particular SFNs with power-law
degree distributions down to minimum degree 1. These networks are not as
resilient as Cohen et al. originally suggested. In the case of the widely-cited
``Internet resilience'' result, high failure rates actually lead to the
orphaning of half of the surviving nodes after 60% of the network has failed
and the complete disintegration of the network at 90%. We show that given an
appropriate network, the communication network reduces the number of sampled
items, the number of messages sent, and the amount of ``spam.'' We conclude
that in many cases DRSs will be useful for sharing information in a multi-agent
learning system.
|
cs/0511012
|
Parameters Affecting the Resilience of Scale-Free Networks to Random
Failures
|
cs.NI cs.AR cs.MA
|
It is commonly believed that scale-free networks are robust to massive
numbers of random node deletions. For example, Cohen et al. study scale-free
networks including some which approximate the measured degree distribution of
the Internet. Their results suggest that if each node in this network failed
independently with probability 0.99, the remaining network would continue to
have a giant component. In this paper, we show that a large and important
subclass of scale-free networks is not robust to massive numbers of random
node deletions for practical purposes. In particular, we study finite
scale-free networks which have minimum node degree of 1 and a power-law degree
distribution beginning with nodes of degree 1 (power-law networks). We show
that, in a power-law network approximating the Internet's reported
distribution, when the probability of deletion of each node is 0.5 only about
25% of the surviving nodes in the network remain connected in a giant
component, and the giant component does not persist beyond a critical failure
rate of 0.9. The new result is partially due to improved analytical
accommodation of the large number of degree-0 nodes that result after node
deletions. Our results apply to finite power-law networks with a wide range of
power-law exponents, including Internet-like networks. We give both analytical
and empirical evidence that such networks are not generally robust to massive
random node deletions.
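The classical way to estimate such a failure threshold from a degree distribution is the Molloy-Reed criterion. The rough sketch below applies it to a truncated power law with minimum degree 1; note that this infinite-size criterion is more optimistic than the finite-size analysis described above, which is part of the paper's point:

```python
def powerlaw_moments(alpha, kmin=1, kmax=1000):
    # Truncated power-law degree distribution: p(k) proportional to k^(-alpha)
    # for k = kmin..kmax. Returns <k> and <k^2>.
    ks = range(kmin, kmax + 1)
    weights = [k ** -alpha for k in ks]
    z = sum(weights)
    k1 = sum(k * w for k, w in zip(ks, weights)) / z
    k2 = sum(k * k * w for k, w in zip(ks, weights)) / z
    return k1, k2

def critical_failure_rate(alpha, kmax):
    # Molloy-Reed criterion: after random deletion with survival probability
    # phi, a giant component persists iff phi > <k> / (<k^2> - <k>),
    # so the critical failure rate is 1 - <k> / (<k^2> - <k>).
    k1, k2 = powerlaw_moments(alpha, kmax=kmax)
    return 1.0 - k1 / (k2 - k1)

# A finite degree cutoff keeps <k^2> bounded, pulling the threshold below 1.
qc = critical_failure_rate(alpha=2.2, kmax=1000)
print(qc)
```

The exponent 2.2 and cutoff 1000 are illustrative choices; increasing the cutoff inflates $\langle k^2 \rangle$ and pushes the predicted threshold toward 1, whereas the paper's finite-network analysis finds breakdown at substantially lower failure rates.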
|
cs/0511013
|
K-ANMI: A Mutual Information Based Clustering Algorithm for Categorical
Data
|
cs.AI cs.DB
|
Clustering categorical data is an integral part of data mining and has
attracted much attention recently. In this paper, we present k-ANMI, a new
efficient algorithm for clustering categorical data. The k-ANMI algorithm works
in a way that is similar to the popular k-means algorithm, and the goodness of
clustering in each step is evaluated using a mutual information based criterion
(namely, Average Normalized Mutual Information-ANMI) borrowed from cluster
ensemble. Experimental results on real datasets show that the k-ANMI algorithm
is competitive with state-of-the-art categorical data clustering algorithms
with respect to clustering accuracy.
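The building block of the ANMI criterion is normalized mutual information between two labelings. A minimal sketch using one common normalization (the geometric mean of the entropies; the paper's exact variant may differ):

```python
import math
from collections import Counter

def mutual_information(a, b):
    # Mutual information (nats) between two labelings of the same objects.
    n = len(a)
    ca, cb, cab = Counter(a), Counter(b), Counter(zip(a, b))
    mi = 0.0
    for (x, y), nxy in cab.items():
        mi += (nxy / n) * math.log((nxy * n) / (ca[x] * cb[y]))
    return mi

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def nmi(a, b):
    # Normalized mutual information: MI scaled to [0, 1] by the geometric
    # mean of the two labelings' entropies.
    ha, hb = entropy(a), entropy(b)
    if ha == 0 or hb == 0:
        return 0.0
    return mutual_information(a, b) / math.sqrt(ha * hb)

x = [0, 0, 1, 1, 2, 2]
print(nmi(x, x))                   # identical labelings score 1.0
print(nmi(x, [0, 1, 2, 0, 1, 2]))  # an unrelated labeling scores much lower
```

Averaging this quantity between a candidate clustering and each clustering in an ensemble yields the ANMI score that k-ANMI greedily improves, k-means style.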
|
cs/0511015
|
Towards a Hierarchical Model of Consciousness, Intelligence, Mind and
Body
|
cs.AI
|
This article has been withdrawn.
|
cs/0511016
|
How to make the top ten: Approximating PageRank from in-degree
|
cs.IR physics.soc-ph
|
PageRank has become a key element in the success of search engines, allowing
the most important hits to be ranked in the top screen of results. One key
aspect
that distinguishes PageRank from other prestige measures such as in-degree is
its global nature. From the information provider perspective, this makes it
difficult or impossible to predict how their pages will be ranked. Consequently
a market has emerged for the optimization of search engine results. Here we
study the accuracy with which PageRank can be approximated by in-degree, a
local measure made freely available by search engines. Theoretical and
empirical analyses lead us to conclude that given the weak degree correlations in
the Web link graph, the approximation can be relatively accurate, giving
service and information providers an effective new marketing tool.
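The comparison between PageRank and in-degree can be made concrete on a toy graph using plain power iteration (the standard PageRank formulation with damping factor 0.85, not the paper's Web-scale analysis; the edge list is a hypothetical example):

```python
def pagerank(links, n, d=0.85, iters=100):
    # Power iteration for PageRank on a directed graph given as a list of
    # (source, target) edges over nodes 0..n-1.
    out_deg = [0] * n
    for s, _ in links:
        out_deg[s] += 1
    pr = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) / n] * n
        for s, t in links:
            new[t] += d * pr[s] / out_deg[s]
        # Rank held by dangling nodes is redistributed uniformly.
        dangling = sum(pr[i] for i in range(n) if out_deg[i] == 0)
        pr = [v + d * dangling / n for v in new]
    return pr

edges = [(0, 1), (2, 1), (3, 1), (1, 2), (3, 2), (2, 0)]
pr = pagerank(edges, n=4)
indeg = [sum(1 for _, t in edges if t == i) for i in range(4)]
print(pr, indeg)  # node 1 has both the top in-degree and the top PageRank here
```

On this small example the two rankings agree at the top, which is the kind of local-for-global substitution whose accuracy on the real Web graph the abstract quantifies.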
|
cs/0511019
|
A Counterexample to Cover's 2P Conjecture on Gaussian Feedback Capacity
|
cs.IT math.IT
|
We provide a counterexample to Cover's conjecture that the feedback capacity
$C_\textrm{FB}$ of an additive Gaussian noise channel under power constraint
$P$ be no greater than the nonfeedback capacity $C$ of the same channel under
power constraint $2P$, i.e., $C_\textrm{FB}(P) \le C(2P)$.
|
cs/0511022
|
Does a Plane Imitate a Bird? Does Computer Vision Have to Follow
Biological Paradigms?
|
cs.NE
|
We posit a new paradigm for image information processing. For the last 25
years, this task has usually been approached in the frame of Treisman's
two-stage paradigm [1]. The latter supposes an unsupervised, bottom-up directed
process
of preliminary information pieces gathering at the lower processing stages and
a supervised, top-down directed process of information pieces binding and
grouping at the higher stages. It is acknowledged that these sub-processes
interact and intertwine in a tricky and complicated manner.
Notwithstanding the prevalence of this paradigm in biological and computer
vision, we nevertheless propose to replace it with a new one, which we would
like to designate as a two-part paradigm. In it, information contained in an
image is initially extracted in an independent top-down manner by one part of
the system, and then it is examined and interpreted by another, separate system
part. We argue that the new paradigm seems to be more plausible than its
forerunner. We provide evidence from human attention vision studies and
insights from Kolmogorov complexity theory to support our arguments. We also
provide some reasons in favor of separate image interpretation issues.
|
cs/0511024
|
Heat kernel expansion for a family of stochastic volatility models :
delta-geometry
|
cs.CE
|
In this paper, we study a family of stochastic volatility processes; this
family features a mean reversion term for the volatility and a double CEV-like
exponent that generalizes SABR and Heston's models. We derive approximated
closed form formulas for the digital prices, the local and implied
volatilities. Our formulas are efficient for small maturities.
Our method is based on differential geometry, especially small-time
diffusions on Riemannian spaces. This geometrical point of view can be extended
to other processes and is very accurate in producing varied smiles for small
maturities and small moneyness.
|
cs/0511026
|
A Decision Theoretic Framework for Real-Time Communication
|
cs.IT math.IT
|
We consider a communication system in which the outputs of a Markov source
are encoded and decoded in \emph{real-time} by a finite memory receiver, and
the distortion measure does not tolerate delays. The objective is to choose
designs, i.e. real-time encoding, decoding and memory update strategies that
minimize a total expected distortion measure. This is a dynamic team problem
with non-classical information structure [Witsenhausen:1971]. We use the
structural results of [Teneketzis:2004] to develop a sequential decomposition
for the finite and infinite horizon problems. Thus, we obtain a systematic
methodology for the determination of jointly optimal encoding, decoding, and
memory update strategies for real-time point-to-point communication systems.
|
cs/0511027
|
Discrete Network Dynamics. Part 1: Operator Theory
|
cs.NE
|
An operator algebra implementation of Markov chain Monte Carlo algorithms for
simulating Markov random fields is proposed. It allows the dynamics of networks
whose nodes have discrete state spaces to be specified by the action of an
update operator that is composed of creation and annihilation operators. This
formulation of discrete network dynamics has properties that are similar to
those of a quantum field theory of bosons, which allows reuse of many
conceptual and theoretical structures from QFT. The equilibrium behaviour of
one of these generalised MRFs and of the adaptive cluster expansion network
(ACEnet) are shown to be equivalent, which provides a way of unifying these two
theories.
|
cs/0511028
|
MIMO Diversity in the Presence of Double Scattering
|
cs.IT math.IT
|
The potential benefits of multiple-antenna systems may be limited by two
types of channel degradation: rank deficiency and spatial fading correlation of
the channel. In this paper, we assess the effects of these degradations on the
diversity performance of multiple-input multiple-output (MIMO) systems, with an
emphasis on orthogonal space-time block codes, in terms of the symbol error
probability, the effective fading figure (EFF), and the capacity at low
signal-to-noise ratio (SNR). In particular, we consider a general family of
MIMO channels known as double-scattering channels, which encompasses a variety
of propagation environments from independent and identically distributed
Rayleigh to degenerate keyhole or pinhole cases by embracing both
rank-deficient and spatial correlation effects. It is shown that a MIMO system
with $n_T$ transmit and $n_R$ receive antennas achieves the diversity of order
$\frac{n_T n_S n_R}{\max(n_T,n_S,n_R)}$ in a double-scattering channel with
$n_S$ effective scatterers. We also quantify the combined effect of the spatial
correlation and the lack of scattering richness on the EFF and the low-SNR
capacity in terms of the correlation figures of transmit, receive, and
scatterer correlation matrices. We further show the monotonicity properties of
these performance measures with respect to the strength of spatial correlation,
characterized by the eigenvalue majorization relations of the correlation
matrices.
|
cs/0511029
|
Non-coherent Rayleigh fading MIMO channels: Capacity Supremum
|
cs.IT math.IT
|
This paper investigates the limits of information transfer over a fast
Rayleigh fading MIMO channel, where neither the transmitter nor the receiver
has knowledge of the channel state information (CSI) beyond the fading
statistics. We develop a scalar channel model, owing to the absence of phase
information in non-coherent Rayleigh fading, and derive a capacity supremum in
the number of receive antennas at any signal-to-noise ratio (SNR) using
Lagrange optimisation. Also, we conceptualise the discrete nature of the
optimal input distribution by posing the optimisation on the channel mutual
information for $N$ discrete inputs. Furthermore, we derive an expression for
the asymptotic capacity when the input power is large, and compare with the
existing capacity results when the receiver is equipped with a large number of
antennas.
|
cs/0511032
|
Spatiotemporal sensitivity and visual attention for efficient rendering
of dynamic environments
|
cs.GR cs.CV
|
We present a method to accelerate global illumination computation in dynamic
environments by taking advantage of limitations of the human visual system. A
model of visual attention is used to locate regions of interest in a scene and
to modulate spatiotemporal sensitivity. The method is applied in the form of a
spatiotemporal error tolerance map. Perceptual acceleration combined with good
sampling protocols provide a global illumination solution feasible for use in
animation. Results indicate an order of magnitude improvement in computational
speed. The method is adaptable and can also be used in image-based rendering,
geometry level of detail selection, realistic image synthesis, video telephony
and video compression.
|
cs/0511036
|
A Capacity Achieving and Low Complexity Multilevel Coding Scheme for ISI
Channels
|
cs.IT math.IT
|
We propose a computationally efficient multilevel coding scheme to achieve
the capacity of an ISI channel using layers of binary inputs. The transmitter
employs multilevel coding with linear mapping. The receiver uses multistage
decoding where each stage performs a separate linear minimum mean square error
(LMMSE) equalization and decoding. The optimality of the scheme is due to the
fact that the LMMSE equalizer is information lossless in an ISI channel when
signal to noise ratio is sufficiently low. The computational complexity is low
and scales linearly with the length of the channel impulse response and the
number of layers. The decoder at each layer sees an equivalent AWGN channel,
which makes coding straightforward.
|
cs/0511037
|
Trellis Pruning for Peak-to-Average Power Ratio Reduction
|
cs.IT math.IT
|
This paper introduces a new trellis pruning method which uses nonlinear
convolutional coding for peak-to-average power ratio (PAPR) reduction of
filtered QPSK and 16-QAM modulations. The Nyquist filter is viewed as a
convolutional encoder that controls the analog waveforms of the filter output
directly. Pruning some edges of the encoder trellis can effectively reduce the
PAPR. The only tradeoff is a slightly lower channel capacity and increased
complexity. The paper presents simulation results of the pruning action and the
resulting PAPR, and also discusses the decoding algorithm and the capacity of
the filtered and pruned QPSK and 16-QAM modulations on the AWGN channel.
Simulation results show that the pruning method reduces the PAPR significantly
without much damage to capacity.
|
cs/0511038
|
Towards a unified theory of logic programming semantics: Level mapping
characterizations of selector generated models
|
cs.AI cs.LO
|
Currently, the variety of expressive extensions and different semantics
created for logic programs with negation is diverse and heterogeneous, and
there is a lack of comprehensive comparative studies which map out the
multitude of perspectives in a uniform way. Most recently, however, new
methodologies have been proposed which allow one to derive uniform
characterizations of different declarative semantics for logic programs with
negation. In this paper, we study the relationship between two of these
approaches, namely the level mapping characterizations due to [Hitzler and
Wendt 2005], and the selector generated models due to [Schwarz 2004]. We will
show that the latter can be captured by means of the former, thereby supporting
the claim that level mappings provide a very flexible framework which is
applicable to very diversely defined semantics.
|
cs/0511039
|
The Generalized Area Theorem and Some of its Consequences
|
cs.IT math.IT
|
There is a fundamental relationship between belief propagation and maximum a
posteriori decoding. The case of transmission over the binary erasure channel
was investigated in detail in a companion paper. This paper investigates the
extension to general memoryless channels (paying special attention to the
binary case). An area theorem for transmission over general memoryless channels
is introduced and some of its many consequences are discussed. We show that
this area theorem gives rise to an upper bound on the maximum a posteriori
threshold for sparse graph codes. In situations where this bound is tight, the
extrinsic soft bit estimates delivered by the belief propagation decoder
coincide with the correct a posteriori probabilities above the maximum a
posteriori threshold. More generally, it is conjectured that the fundamental
relationship between the maximum a posteriori and the belief propagation
decoder which was observed for transmission over the binary erasure channel
carries over to the general case. We finally demonstrate that in order for the
design rate of an ensemble to approach the capacity under belief propagation
decoding the component codes have to be perfectly matched, a statement which is
well known for the special case of transmission over the binary erasure
channel.
|
cs/0511040
|
Design and Analysis of Nonbinary LDPC Codes for Arbitrary
Discrete-Memoryless Channels
|
cs.IT math.IT
|
We present an analysis, under iterative decoding, of coset LDPC codes over
GF(q), designed for use over arbitrary discrete-memoryless channels
(particularly nonbinary and asymmetric channels). We use a random-coset
analysis to produce an effect that is similar to output-symmetry with binary
channels. We show that the random selection of the nonzero elements of the
GF(q) parity-check matrix induces a permutation-invariance property on the
densities of the decoder messages, which simplifies their analysis and
approximation. We generalize several properties, including symmetry and
stability from the analysis of binary LDPC codes. We show that under a Gaussian
approximation, the entire (q-1)-dimensional distribution of the vector messages
is described by a single scalar parameter (like the distributions of binary
LDPC messages). We apply this property to develop EXIT charts for our codes. We
use appropriately designed signal constellations to obtain substantial shaping
gains. Simulation results indicate that our codes outperform multilevel codes
at short block lengths. We also present simulation results for the AWGN
channel, including results within 0.56 dB of the unconstrained Shannon limit
(i.e. not restricted to any signal constellation) at a spectral efficiency of 6
bits/s/Hz.
|
cs/0511042
|
Dimensions of Neural-symbolic Integration - A Structured Survey
|
cs.AI cs.LO cs.NE
|
Research on integrated neural-symbolic systems has made significant progress
in the recent past. In particular the understanding of ways to deal with
symbolic knowledge within connectionist systems (also called artificial neural
networks) has reached a critical mass which enables the community to strive for
applicable implementations and use cases. Recent work has covered a great
variety of logics used in artificial intelligence and provides a multitude of
techniques for dealing with them within the context of artificial neural
networks. We present a comprehensive survey of the field of neural-symbolic
integration, including a new classification of systems according to their
architectures and abilities.
|
cs/0511046
|
Generalized Kasami Sequences: The Large Set
|
cs.IT cs.CR math.IT
|
In this paper, new binary sequence families $\mathcal{F}^k$ of period $2^n-1$
are constructed for even $n$ and any $k$ with ${\rm gcd}(k,n)=2$ if $n/2$ is
odd or ${\rm gcd}(k,n)=1$ if $n/2$ is even. The distribution of their
correlation values is completely determined. These families have maximum
correlation $2^{n/2+1}+1$ and family size $2^{3n/2}+2^{n/2}$ for odd $n/2$ or
$2^{3n/2}+2^{n/2}-1$ for even $n/2$. The construction of the large set of
Kasami sequences which is exactly the $\mathcal{F}^{k}$ with $k=n/2+1$ is
generalized.
|
cs/0511047
|
The Secret Key-Private Key Capacity Region for Three Terminals
|
cs.IT math.IT
|
We consider a model for secrecy generation, with three terminals, by means of
public interterminal communication, and examine the problem of characterizing
all the rates at which all three terminals can generate a ``secret key,'' and
-- simultaneously -- two designated terminals can generate a ``private key''
which is effectively concealed from the remaining terminal; both keys are also
concealed from an eavesdropper that observes the public communication. Inner
and outer bounds for the ``secret key--private key capacity region'' are
derived. Under a certain special condition, these bounds coincide to yield the
(exact) secret key--private key capacity region.
|
cs/0511048
|
Joint Network-Source Coding: An Achievable Region with Diversity Routing
|
cs.IT math.IT
|
We are interested in how best to communicate a (usually real-valued) source
to a number of destinations (sinks) over a network with capacity constraints,
under a collective fidelity metric over all the sinks, a problem which we call joint
network-source coding. Unlike the lossless network coding problem, lossy
reconstruction of the source at the sinks is permitted. We make a first attempt
to characterize the set of all distortions achievable by a set of sinks in a
given network. While the entire region of all achievable distortions remains
largely an open problem, we find a large, non-trivial subset of it using ideas
in multiple description coding. The achievable region is derived over all
balanced multiple-description codes and over all network flows, while the
network nodes are allowed to forward and duplicate data packets.
|
cs/0511050
|
Secret Key and Private Key Constructions for Simple Multiterminal Source
Models
|
cs.IT cs.CR math.IT
|
This work is motivated by recent results of Csiszar and Narayan (IEEE Trans.
on Inform. Theory, Dec. 2004), which highlight innate connections between
secrecy generation by multiple terminals and multiterminal Slepian-Wolf
near-lossless data compression (sans secrecy restrictions). We propose a new
approach for constructing secret and private keys based on the long-known
Slepian-Wolf code for sources connected by a virtual additive noise channel,
due to Wyner (IEEE Trans. on Inform. Theory, Jan. 1974). Explicit procedures
for such constructions, and their substantiation, are provided.
|
cs/0511051
|
The Private Key Capacity Region for Three Terminals
|
cs.IT cs.CR math.IT
|
We consider a model with three terminals and examine the problem of
characterizing the largest rates at which two pairs of terminals can
simultaneously generate private keys, each of which is effectively concealed
from the remaining terminal.
|
cs/0511052
|
Mining Cellular Automata Databases through PCA Models
|
cs.DM cs.DB
|
Cellular automata (CA) are discrete dynamical systems that evolve following
simple and local rules. Despite this local simplicity, knowledge discovery in
CA is an NP problem. This is the main motivation for using data mining
techniques to study CA. Principal Component Analysis (PCA) is a useful tool
for data mining because it provides a compact and optimal description of data
sets. This feature has been explored to compute the best subspace, which
maximizes the projection of the I/O patterns of CA onto the principal axes.
The stability of the principal components against the input patterns is the
main result of this approach. In this paper we perform such an analysis, but
in the presence of noise which randomly reverses the CA output values with
probability $p$. As expected, the number of principal components increases
when the pattern size is increased. However, it seems to remain stable when
the pattern size is unchanged but the noise intensity gets larger. We describe
our experiments and point out further work using KL transform theory and
parameter sensitivity analysis.
|
cs/0511054
|
Eigenvalue Distributions of Sums and Products of Large Random Matrices
via Incremental Matrix Expansions
|
cs.IT math.IT
|
This paper uses an incremental matrix expansion approach to derive asymptotic
eigenvalue distributions (a.e.d.'s) of sums and products of large random
matrices. We show that the result can be derived directly as a consequence of
two common assumptions, and matches the results obtained from using R- and
S-transforms in free probability theory. We also give a direct derivation of
the a.e.d. of the sum of certain random matrices which are not free. This is
used to determine the asymptotic signal-to-interference ratio of a multiuser
CDMA system with a minimum mean-square error linear receiver.
|
cs/0511056
|
Improved Upper Bounds on Stopping Redundancy
|
cs.IT cs.DM math.IT
|
Let C be a linear code with length n and minimum distance d. The stopping
redundancy of C is defined as the minimum number of rows in a parity-check
matrix for C such that the smallest stopping sets in the corresponding Tanner
graph have size d. We derive new upper bounds on the stopping redundancy of
linear codes in general, and of maximum distance separable (MDS) codes
specifically, and show how they improve upon previously known results. For MDS
codes, the new bounds are found by upper bounding the stopping redundancy by a
combinatorial quantity closely related to Turan numbers. (The Turan number,
T(v,k,t), is the smallest number of t-subsets of a v-set, such that every
k-subset of the v-set contains at least one of the t-subsets.) We further show
that the stopping redundancy of MDS codes is T(n,d-1,d-2)(1+O(n^{-1})) for
fixed d, and is at most T(n,d-1,d-2)(3+O(n^{-1})) for fixed code dimension
k=n-d+1. For d=3,4, we prove that the stopping redundancy of MDS codes is equal
to T(n,d-1,d-2), for which exact formulas are known. For d=5, we show that the
stopping redundancy of MDS codes is either T(n,4,3) or T(n,4,3)+1.
|
cs/0511057
|
Quantized Indexing: Beyond Arithmetic Coding
|
cs.IT cs.DM math.CO math.IT
|
Quantized Indexing is a fast and space-efficient form of enumerative
(combinatorial) coding, the strongest among asymptotically optimal universal
entropy coding algorithms. The present advance in enumerative coding is similar
to that made by arithmetic coding with respect to its unlimited precision
predecessor, Elias coding. The arithmetic precision, execution time, table
sizes and coding delay are all reduced by a factor O(n) at a redundancy below
2*log(e)/2^g bits/symbol (for n input symbols and g-bit QI precision). Due to
its tighter enumeration, QI output redundancy is below that of arithmetic
coding (which can be derived as a lower accuracy approximation of QI). The
relative compression gain vanishes in large n and in high entropy limits and
increases for shorter outputs and for less predictable data. QI is
significantly faster than the fastest arithmetic coders, from factor 6 in high
entropy limit to over 100 in low entropy limit (`typically' 10-20 times
faster). These speedups are the result of using only 3 adds, 1 shift, and 2
array lookups (all in 32-bit precision) per less probable symbol, and no coding
operations for the most probable symbol. Further, the exact enumeration
algorithm is sharpened and its lattice walks formulation is generalized. A new
numeric type with a broader applicability, sliding window integer, is
introduced.
|
cs/0511058
|
On-line regression competitive with reproducing kernel Hilbert spaces
|
cs.LG
|
We consider the problem of on-line prediction of real-valued labels, assumed
bounded in absolute value by a known constant, of new objects from known
labeled objects. The prediction algorithm's performance is measured by the
squared deviation of the predictions from the actual labels. No stochastic
assumptions are made about the way the labels and objects are generated.
Instead, we are given a benchmark class of prediction rules some of which are
hoped to produce good predictions. We show that for a wide range of
infinite-dimensional benchmark classes one can construct a prediction algorithm
whose cumulative loss over the first N examples does not exceed the cumulative
loss of any prediction rule in the class plus O(sqrt(N)); the main differences
from the known results are that we do not impose any upper bound on the norm of
the considered prediction rules and that we achieve an optimal leading term in
the excess loss of our algorithm. If the benchmark class is "universal" (dense
in the class of continuous functions on each compact set), this provides an
on-line non-stochastic analogue of universally consistent prediction in
non-parametric statistics. We use two proof techniques: one is based on the
Aggregating Algorithm and the other on the recently developed method of
defensive forecasting.
|
cs/0511060
|
On Quadratic Inverses for Quadratic Permutation Polynomials over Integer
Rings
|
cs.IT math.IT
|
An interleaver is a critical component for the channel coding performance of
turbo codes. Algebraic constructions are of particular interest because they
admit analytical designs and simple, practical hardware implementation. Sun and
Takeshita have recently shown that the class of quadratic permutation
polynomials over integer rings provides excellent performance for turbo codes.
In this correspondence, a necessary and sufficient condition is proven for the
existence of a quadratic inverse polynomial for a quadratic permutation
polynomial over an integer ring. Further, a simple construction is given for
the quadratic inverse. All but one of the quadratic interleavers proposed
earlier by Sun and Takeshita are found to admit a quadratic inverse, although
none were explicitly designed to do so. An explanation is offered for the
observation that restriction to a quadratic inverse polynomial does not narrow
the pool of good quadratic interleavers for turbo codes.
|
cs/0511064
|
The consistency principle for a digitization procedure. An algorithm for
building normal digital spaces of continuous n-dimensional objects
|
cs.CV cs.DM
|
This paper considers conditions that allow one to preserve important
topological and geometric properties in the process of digitization. For this
purpose, we introduce a triplet {C,M,D} consisting of a continuous object C, an
intermediate model M, which is a collection of subregions whose union is C, a
digital model D, which is the intersection graph of M, and apply the
consistency principle and criteria of similarity to M in order to make its
mathematical structure consistent with the natural structure of D.
Specifically, this paper introduces a locally centered lump collection of
subregions and shows that for any locally centered lump cover of an
n-dimensional continuous manifold, the digital model of the manifold is a
digital normal n-dimensional space. In addition, we give examples of locally
centered lump tilings of two-manifolds. We propose an algorithm for
constructing normal digital models of continuous objects.
|
cs/0511065
|
Performance Analysis of MIMO-MRC in Double-Correlated Rayleigh
Environments
|
cs.IT math.IT
|
We consider multiple-input multiple-output (MIMO) transmit beamforming
systems with maximum ratio combining (MRC) receivers. The operating environment
is Rayleigh-fading with both transmit and receive spatial correlation. We
present exact expressions for the probability density function (p.d.f.) of the
output signal-to-noise ratio (SNR), as well as the system outage probability.
The results are based on explicit closed-form expressions which we derive for
the p.d.f. and c.d.f. of the maximum eigenvalue of double-correlated complex
Wishart matrices. For systems with two antennas at either the transmitter or
the receiver, we also derive exact closed-form expressions for the symbol error
rate (SER). The new expressions are used to prove that MIMO-MRC achieves the
maximum available spatial diversity order, and to demonstrate the effect of
spatial correlation. The analysis is validated through comparison with
Monte-Carlo simulations.
|
cs/0511067
|
Effects of Initial Stance of Quadruped Trotting on Walking Stability
|
cs.RO
|
It is very important for a quadruped walking machine to keep its stability
in high-speed walking. It has been indicated that the moment around the
supporting diagonal line of a quadruped in trotting gait largely influences
walking stability. In this paper, the moment around the supporting diagonal
line of a quadruped in trotting gait is modeled and its effects on body
attitude are analyzed. The degree of influence varies with different initial
stances of the quadruped, and we obtain the optimal initial stance of the
quadruped in trotting gait with maximal walking stability. Simulation results
are presented. Keywords: quadruped, trotting, attitude, walking stability.
|
cs/0511068
|
An Agent-based Manufacturing Management System for Production and
Logistics within Cross-Company Regional and National Production Networks
|
cs.RO
|
The goal is the development of simultaneous, dynamic, technological as well
as logistical real-time planning and organizational control of production by
the production units themselves, working in the production network, using
multi-agent technology. The design of the multi-agent-based manufacturing
management system, the models of the individual agents, algorithms for the
agent-based, decentralized dispatching of orders, strategies and data
management concepts, as well as their integration into the SCM based on the
described solution, are explained in the following.
Keywords: production engineering and management, dynamic manufacturing
planning and control, multi-agent systems (MAS), supply chain management
(SCM), e-manufacturing
|
cs/0511069
|
Nonlinear Receding-Horizon Control of Rigid Link Robot Manipulators
|
cs.RO
|
The approximate nonlinear receding-horizon control law is used to treat the
trajectory tracking control problem of rigid link robot manipulators. The
derived nonlinear predictive law uses a quadratic performance index of the
predicted tracking error and the predicted control effort. A key feature of
this control law is that its implementation requires no online optimization,
and asymptotic tracking of smooth reference trajectories is guaranteed. It is
shown that this controller achieves the position tracking objectives via link
position measurements. The convergence of the output tracking error to the
origin is proved. To enhance the robustness of the closed-loop system with
respect to payload uncertainties and viscous friction, an integral action is
introduced in the loop. A nonlinear observer is used to estimate velocity.
Simulations of a two-link rigid robot are performed to validate the
performance of the proposed controller.
Keywords: receding-horizon control, nonlinear observer, robot manipulators,
integral action, robustness.
|
cs/0511070
|
A particle can carry more than one bit of information
|
cs.IT math.IT
|
It is believed that a particle cannot carry more than one bit of information.
It is pointed out that a particle, or a single-particle quantum state, can
carry more than one bit of information. This implies that the minimum energy
cost of transmitting a bit will be less than the accepted limit kT log 2.
|
cs/0511072
|
Explicit Codes Achieving List Decoding Capacity: Error-correction with
Optimal Redundancy
|
cs.IT math.IT
|
We present error-correcting codes that achieve the information-theoretically
best possible trade-off between the rate and error-correction radius.
Specifically, for every $0 < R < 1$ and $\eps> 0$, we present an explicit
construction of error-correcting codes of rate $R$ that can be list decoded in
polynomial time up to a fraction $(1-R-\eps)$ of {\em worst-case} errors. At
least theoretically, this meets one of the central challenges in algorithmic
coding theory.
Our codes are simple to describe: they are {\em folded Reed-Solomon codes},
which are in fact {\em exactly} Reed-Solomon (RS) codes, but viewed as a code
over a larger alphabet by careful bundling of codeword symbols. Given the
ubiquity of RS codes, this is an appealing feature of our result, and in fact
our methods directly yield better decoding algorithms for RS codes when errors
occur in {\em phased bursts}.
The alphabet size of these folded RS codes is polynomial in the block length.
We are able to reduce this to a constant (depending on $\eps$) using ideas
concerning ``list recovery'' and expander-based codes from
\cite{GI-focs01,GI-ieeejl}. Concatenating the folded RS codes with suitable
inner codes also gives us polynomial time constructible binary codes that can
be efficiently list decoded up to the Zyablov bound, i.e., up to twice the
radius achieved by the standard GMD decoding of concatenated codes.
|
cs/0511073
|
Stochastic Process Semantics for Dynamical Grammar Syntax: An Overview
|
cs.AI cs.LO nlin.AO
|
We define a class of probabilistic models in terms of an operator algebra of
stochastic processes, and a representation for this class in terms of
stochastic parameterized grammars. A syntactic specification of a grammar is
mapped to semantics given in terms of a ring of operators, so that grammatical
composition corresponds to operator addition or multiplication. The operators
are generators for the time-evolution of stochastic processes. Within this
modeling framework one can express data clustering models, logic programs,
ordinary and stochastic differential equations, graph grammars, and stochastic
chemical reaction kinetics. This mathematical formulation connects these
apparently distant fields to one another and to mathematical methods from
quantum field theory and operator algebra.
|
cs/0511074
|
Every Sequence is Decompressible from a Random One
|
cs.IT cs.CC math.IT
|
Kucera and Gacs independently showed that every infinite sequence is Turing
reducible to a Martin-Lof random sequence. This result is extended by showing
that every infinite sequence S is Turing reducible to a Martin-Lof random
sequence R such that the asymptotic number of bits of R needed to compute n
bits of S, divided by n, is precisely the constructive dimension of S. It is
shown that this is the optimal ratio of query bits to computed bits achievable
with Turing reductions. As an application of this result, a new
characterization of constructive dimension is given in terms of Turing
reduction compression ratios.
|
cs/0511075
|
Identifying Interaction Sites in "Recalcitrant" Proteins: Predicted
Protein and RNA Binding Sites in Rev Proteins of HIV-1 and EIAV Agree with
Experimental Data
|
cs.LG cs.AI
|
Protein-protein and protein nucleic acid interactions are vitally important
for a wide range of biological processes, including regulation of gene
expression, protein synthesis, and replication and assembly of many viruses. We
have developed machine learning approaches for predicting which amino acids of
a protein participate in its interactions with other proteins and/or nucleic
acids, using only the protein sequence as input. In this paper, we describe an
application of classifiers trained on datasets of well-characterized
protein-protein and protein-RNA complexes for which experimental structures are
available. We apply these classifiers to the problem of predicting protein and
RNA binding sites in the sequence of a clinically important protein for which
the structure is not known: the regulatory protein Rev, essential for the
replication of HIV-1 and other lentiviruses. We compare our predictions with
published biochemical, genetic and partial structural information for HIV-1 and
EIAV Rev and with our own published experimental mapping of RNA binding sites
in EIAV Rev. The predicted and experimentally determined binding sites are in
very good agreement. The ability to predict reliably the residues of a protein
that directly contribute to specific binding events - without the requirement
for structural information regarding either the protein or complexes in which
it participates - can potentially generate new disease intervention strategies.
|
cs/0511076
|
Using phonetic constraints in acoustic-to-articulatory inversion
|
cs.CL
|
The goal of this work is to recover articulatory information from the speech
signal by acoustic-to-articulatory inversion. One of the main difficulties with
inversion is that the problem is underdetermined, and inversion methods
generally offer no guarantee of the phonetic realism of the inverse solutions.
A way to address this issue is to use additional phonetic constraints.
Knowledge of the phonetic characteristics of French vowels enables the
derivation of reasonable articulatory domains in the space of Maeda
parameters: given the formant frequencies (F1,F2,F3) of a speech sample, and
thus the vowel identity, an "ideal" articulatory domain can be derived. The
space of formant frequencies is partitioned into vowels, using either
speaker-specific data or generic information on formants. Then, to each
articulatory vector can be associated a phonetic score varying with the
distance to the "ideal domain" associated with the corresponding vowel.
Inversion experiments were conducted on isolated vowels and vowel-to-vowel
transitions. Articulatory parameters were compared with those obtained without
using these constraints and with those measured from X-ray data.
|
cs/0511078
|
Uniqueness of Nonextensive Entropy under Renyi's Recipe
|
cs.IT math.IT
|
By replacing linear averaging in Shannon entropy with Kolmogorov-Nagumo
average (KN-averages) or quasilinear mean and further imposing the additivity
constraint, R\'{e}nyi proposed the first formal generalization of Shannon
entropy. Using R\'{e}nyi's recipe, one can obtain only two information
measures: Shannon and R\'{e}nyi entropy. Indeed, using this formalism R\'{e}nyi
characterized these additive entropies in terms of axioms of quasilinear mean.
As additivity is a characteristic property of Shannon entropy,
pseudo-additivity of the form $x \oplus_{q} y = x + y + (1-q)x y$ is a
characteristic property of nonextensive (or Tsallis) entropy. One can apply
R\'{e}nyi's recipe in the nonextensive case by replacing the linear averaging
in Tsallis entropy with KN-averages and thereby imposing the constraint of
pseudo-additivity. In this paper we show that nonextensive entropy is unique
under R\'{e}nyi's recipe, and thereby give a characterization.
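The pseudo-additivity identity quoted above can be checked numerically for independent distributions, whose joint distribution is the outer product (the distributions and the value of q below are arbitrary examples):

```python
def tsallis_entropy(p, q):
    """Tsallis (nonextensive) entropy: S_q(p) = (1 - sum_i p_i**q) / (q - 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

# Two independent distributions; their joint distribution is the outer product.
pA = [0.2, 0.3, 0.5]
pB = [0.6, 0.4]
q = 1.7
pAB = [a * b for a in pA for b in pB]

sA, sB, sAB = tsallis_entropy(pA, q), tsallis_entropy(pB, q), tsallis_entropy(pAB, q)
# Pseudo-additivity: S_q(A,B) = S_q(A) + S_q(B) + (1 - q) * S_q(A) * S_q(B)
assert abs(sAB - (sA + sB + (1.0 - q) * sA * sB)) < 1e-12
```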
|
cs/0511079
|
An elitist approach for extracting automatically well-realized speech
sounds with high confidence
|
cs.CL
|
This paper presents an "elitist approach" for automatically extracting
well-realized speech sounds with high confidence. The approach uses a speech
recognition system based on Hidden Markov Models (HMMs), which are trained on
speech sounds that are systematically well detected through an iterative
procedure. The results show that, using the HMMs defined in the training
phase, the speech recognizer reliably detects specific speech sounds with a
low error rate.
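The iterative "keep only well-detected samples, then retrain" loop can be sketched generically; the scoring function and threshold below are placeholders standing in for the HMM-based detection confidences used in the paper, which would change after each retraining.

```python
def elitist_selection(samples, score, threshold, max_iters=10):
    """Iteratively discard samples the current model scores below the
    confidence threshold. In the paper's setting, score would come from
    HMMs retrained on the kept set at each iteration (sketch only: here
    the score function is fixed, so the loop stabilizes after one pass)."""
    kept = list(samples)
    for _ in range(max_iters):
        new_kept = [s for s in kept if score(s) >= threshold]
        if len(new_kept) == len(kept):  # converged: selection is stable
            break
        kept = new_kept
    return kept

# Toy example: the confidence is the sample value itself.
samples = [0.9, 0.2, 0.75, 0.4, 0.95]
assert elitist_selection(samples, score=lambda s: s, threshold=0.5) == [0.9, 0.75, 0.95]
```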
|
cs/0511081
|
Writing on Fading Paper and Causal Transmitter CSI
|
cs.IT math.IT
|
A wideband fading channel is considered with causal channel state information
(CSI) at the transmitter and no receiver CSI. A simple orthogonal code with
energy detection rule at the receiver (similar to [6]) is shown to achieve the
capacity of this channel in the limit of large bandwidth. This code transmits
energy only when the channel gain is large enough. In this limit, this capacity
without any receiver CSI is the same as the capacity with full receiver CSI--a
phenomenon also true for dirty paper coding. For Rayleigh fading, this capacity
(per unit time) is proportional to the logarithm of the bandwidth. Our coding
scheme is motivated from the Gel'fand-Pinsker [2,3] coding and dirty paper
coding [4]. Nonetheless, for our case, only causal CSI is required at the
transmitter in contrast with dirty-paper coding and Gel'fand-Pinsker coding,
where non-causal CSI is required.
Then we consider a general discrete channel with i.i.d. states. Each input
has an associated cost and a zero cost input "0" exists. The channel state is
assumed to be known at the transmitter in a causal manner. Capacity per unit
cost is found for this channel and a simple orthogonal code is shown to achieve
this capacity. Later, a novel orthogonal coding scheme is proposed for the case
of causal transmitter CSI and a condition for equivalence of capacity per unit
cost for causal and non-causal transmitter CSI is derived. Finally, some
connections are made to the case of non-causal transmitter CSI in [8].
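The "transmit energy only when the channel gain is large enough" rule can be illustrated with a toy causal policy: the threshold, the energy budget, and the unit-energy bursts below are illustrative simplifications, not the paper's capacity-achieving orthogonal code.

```python
import random

def flash_signaling(gains, energy_budget, threshold):
    """Causal policy sketch: spend one unit of energy only in slots whose
    (causally observed) channel power gain exceeds the threshold, until
    the energy budget is exhausted."""
    spent, used = 0.0, []
    for t, g in enumerate(gains):
        if g > threshold and spent + 1.0 <= energy_budget:
            used.append(t)
            spent += 1.0
    return used

random.seed(1)
# Rayleigh fading: the power gain |h|^2 is exponentially distributed.
gains = [random.expovariate(1.0) for _ in range(2000)]
used = flash_signaling(gains, energy_budget=20.0, threshold=3.0)

assert all(gains[t] > 3.0 for t in used)  # energy sent only on strong slots
assert len(used) <= 20                    # budget respected
```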
|