| id | title | categories | abstract |
|---|---|---|---|
0705.4676
|
Recursive n-gram hashing is pairwise independent, at best
|
cs.DB cs.CL
|
Many applications use sequences of n consecutive symbols (n-grams). Hashing
these n-grams can be a performance bottleneck. For more speed, recursive hash
families compute hash values by updating previous values. We prove that
recursive hash families cannot be more than pairwise independent. While hashing
by irreducible polynomials is pairwise independent, our implementations either
run in time O(n) or use an exponential amount of memory. As a more scalable
alternative, we make hashing by cyclic polynomials pairwise independent by
ignoring n-1 bits. Experimentally, we show that hashing by cyclic polynomials
is twice as fast as hashing by irreducible polynomials. We also show that
randomized Karp-Rabin hash families are not pairwise independent.
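To make the recursive-hashing idea concrete, here is a minimal sketch of hashing by cyclic polynomials (a Buzhash-style scheme). The 19-bit word size, the random byte table `h1`, and the helper names are illustrative assumptions, not the paper's implementation; note that, per the abstract, this plain scheme is only pairwise independent after ignoring n-1 bits.

```python
import random

# Sketch of recursive n-gram hashing by cyclic polynomials (Buzhash-style).
# The 19-bit word size and the random byte table are illustrative choices.
L = 19
MASK = (1 << L) - 1

def rotl(x, r):
    """Rotate an L-bit word left by r bits."""
    r %= L
    return ((x << r) | (x >> (L - r))) & MASK

random.seed(0)
h1 = [random.getrandbits(L) for _ in range(256)]  # random value per byte

def hash_ngram(ngram):
    """Direct (non-recursive) hash of one n-gram."""
    h = 0
    for s in ngram:
        h = rotl(h, 1) ^ h1[s]
    return h

def update(h, outgoing, incoming, n):
    """O(1) recursive update: slide the window by one symbol."""
    return rotl(h, 1) ^ rotl(h1[outgoing], n) ^ h1[incoming]

data = b"recursive n-gram hashing"
n = 5
h = hash_ngram(data[:n])
hashes = [h]
for i in range(1, len(data) - n + 1):
    h = update(h, data[i - 1], data[i + n - 1], n)
    hashes.append(h)
```

By construction the recursively updated value agrees with the direct hash of every window, which is exactly what makes cyclic-polynomial hashing a recursive family.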
|
0706.0022
|
Modeling Computations in a Semantic Network
|
cs.AI
|
Semantic network research has seen a resurgence from its early history in the
cognitive sciences with the inception of the Semantic Web initiative. The
Semantic Web effort has brought forth an array of technologies that support the
encoding, storage, and querying of the semantic network data structure at the
world stage. Currently, the popular conception of the Semantic Web is that of a
data modeling medium where real and conceptual entities are related in
semantically meaningful ways. However, new models have emerged that explicitly
encode procedural information within the semantic network substrate. With these
new technologies, the Semantic Web has evolved from a data modeling medium to a
computational medium. This article provides a classification of existing
computational modeling efforts and the requirements of supporting technologies
that will aid in the further growth of this burgeoning domain.
|
0706.0225
|
On the End-to-End Distortion for a Buffered Transmission over Fading
Channel
|
cs.IT math.IT
|
In this paper, we study the end-to-end distortion/delay tradeoff for an
analogue source transmitted over a fading channel. The analogue source is
quantized and stored in a buffer until it is transmitted. There are two extreme
cases as far as buffer delay is concerned: no delay and infinite delay. We
observe that there is a significant power gain by introducing a buffer delay.
Our goal is to investigate the situation between these two extremes. Using the
recently proposed \emph{effective capacity} concept, we derive a closed-form
formula for this tradeoff. For the SISO case, an asymptotically tight upper
bound for our distortion-delay curve is derived, which approaches the
infinite-delay lower bound as $\mathcal{D}_\infty \exp(\frac{C}{\tau_n})$,
where $\tau_n$ is the normalized delay and $C$ is a constant. For the more
general MIMO channel, we compute the distortion SNR exponent -- the
exponential decay rate of the expected distortion in the high SNR regime.
Numerical results demonstrate that introducing a small amount of delay can
save significant transmission power.
|
0706.0280
|
Multi-Agent Modeling Using Intelligent Agents in the Game of Lerpa
|
cs.MA cs.GT
|
Game theory has many limitations implicit in its application. By utilizing
multiagent modeling, it is possible to solve a number of problems that are
unsolvable using traditional game theory. In this paper, reinforcement
learning is applied to neural networks to create intelligent agents.
|
0706.0300
|
Automatic Detection of Pulmonary Embolism using Computational
Intelligence
|
cs.CV
|
This article describes the implementation of a system designed to
automatically detect the presence of pulmonary embolism in lung scans. These
images are first segmented, then aligned, and features are extracted using
PCA. A neural network was trained using the Hybrid Monte Carlo method,
resulting in a committee of 250 neural networks that obtains good results.
|
0706.0323
|
Multiplication of free random variables and the S-transform: the case of
vanishing mean
|
math.OA cs.IT math.IT math.PR
|
This note extends Voiculescu's S-transform based analytical machinery for
free multiplicative convolution to the case where the mean of the probability
measures vanishes. We show that with the right interpretation of the
S-transform in the case of vanishing mean, the usual formula makes perfectly
good sense.
|
0706.0457
|
Challenges and Opportunities of Evolutionary Robotics
|
cs.NE cs.RO
|
Robotic hardware designs are becoming more complex as the variety and number
of on-board sensors increase and as greater computational power is provided in
ever-smaller packages on-board robots. These advances in hardware, however, do
not automatically translate into better software for controlling complex
robots. Evolutionary techniques hold the potential to solve many difficult
problems in robotics which defy simple conventional approaches, but present
many challenges as well. Numerous disciplines including artificial life,
cognitive science and neural networks, rule-based systems, behavior-based
control, genetic algorithms and other forms of evolutionary computation have
contributed to shaping the current state of evolutionary robotics. This paper
provides an overview of developments in the emerging field of evolutionary
robotics, and discusses some of the opportunities and challenges which
currently face practitioners in the field.
|
0706.0465
|
Virtual Sensor Based Fault Detection and Classification on a Plasma Etch
Reactor
|
cs.AI cs.CV
|
The SEMATECH sponsored J-88-E project teaming Texas Instruments with
NeuroDyne (et al.) focused on Fault Detection and Classification (FDC) on a Lam
9600 aluminum plasma etch reactor, used in the process of semiconductor
fabrication. Fault classification was accomplished by implementing a series of
virtual sensor models which used data from real sensors (Lam Station sensors,
Optical Emission Spectroscopy, and RF Monitoring) to predict recipe setpoints
and wafer state characteristics. Fault detection and classification were
performed by comparing predicted recipe and wafer state values with expected
values. Models utilized include linear PLS, Polynomial PLS, and Neural Network
PLS. Prediction of recipe setpoints based upon sensor data provides a
capability for cross-checking that the machine is maintaining the desired
setpoints. Wafer state characteristics such as Line Width Reduction and
Remaining Oxide were estimated on-line using these same process sensors (Lam,
OES, RFM). Wafer-to-wafer measurement of these characteristics in a production
setting (where typically this information may be only sparsely available, if
at all, and only after batch-processing runs with numerous wafers have been
completed) would provide important information to the operator as to whether
the process is producing wafers within acceptable bounds of product quality.
Production
yield is increased, and correspondingly per unit cost is reduced, by providing
the operator with the opportunity to adjust the process or machine before
etching more wafers.
|
0706.0534
|
Compressed Regression
|
stat.ML cs.IT math.IT
|
Recent research has studied the role of sparsity in high dimensional
regression and signal reconstruction, establishing theoretical limits for
recovering sparse models from sparse data. This line of work shows that
$\ell_1$-regularized least squares regression can accurately estimate a sparse
linear model from $n$ noisy examples in $p$ dimensions, even if $p$ is much
larger than $n$. In this paper we study a variant of this problem where the
original $n$ input variables are compressed by a random linear transformation
to $m \ll n$ examples in $p$ dimensions, and establish conditions under which a
sparse linear model can be successfully recovered from the compressed data. A
primary motivation for this compression procedure is to anonymize the data and
preserve privacy by revealing little information about the original data. We
characterize the number of random projections that are required for
$\ell_1$-regularized compressed regression to identify the nonzero coefficients
in the true model with probability approaching one, a property called
``sparsistence.'' In addition, we show that $\ell_1$-regularized compressed
regression asymptotically predicts as well as an oracle linear model, a
property called ``persistence.'' Finally, we characterize the privacy
properties of the compression procedure in information-theoretic terms,
establishing upper bounds on the mutual information between the compressed and
uncompressed data that decay to zero.
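A minimal sketch of the compress-then-regress pipeline described above: compress the $n$ examples to $m$ rows with a random Gaussian projection, then solve the $\ell_1$-regularized least squares problem on the compressed data. The dimensions, the ISTA solver, and the recovery threshold are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Illustrative sketch: compress n examples to m rows with a random Gaussian
# projection, then run l1-regularized least squares (via ISTA) on the
# compressed data and read off the recovered sparse support.
rng = np.random.default_rng(0)
n, p, m, k = 200, 50, 120, 3          # examples, dims, compressed rows, sparsity

X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:k] = [3.0, -2.0, 1.5]           # true sparse coefficients
y = X @ beta + 0.1 * rng.standard_normal(n)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random compression matrix
Xc, yc = Phi @ X, Phi @ y                        # compressed data

def ista(A, b, lam, iters=5000):
    """Iterative soft-thresholding for min 0.5*||A w - b||^2 + lam*||w||_1."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of grad
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        w = w - A.T @ (A @ w - b) / L            # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # shrink
    return w

w = ista(Xc, yc, lam=1.0)
support = np.flatnonzero(np.abs(w) > 0.1)        # recovered nonzeros
```

With these sizes the compressed problem still identifies the three true coefficients, which is the "sparsistence" property in miniature.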
|
0706.0585
|
A Novel Model of Working Set Selection for SMO Decomposition Methods
|
cs.LG cs.AI
|
In the process of training Support Vector Machines (SVMs) by decomposition
methods, working set selection is an important technique, and several
promising schemes have been employed in this field. To improve working set
selection, we propose a new model for working set selection in sequential
minimal optimization (SMO) decomposition methods. In this model, the working
set B is selected without reselection. Some properties are established by
simple proofs, and
experiments demonstrate that the proposed method is in general faster than
existing methods.
|
0706.0682
|
Code spectrum and reliability function: Gaussian channel
|
cs.IT math.IT
|
A new approach for upper bounding the channel reliability function using the
code spectrum is described. It allows both the low and high rate cases to be
treated in a unified way. In particular, the earlier known upper bounds are
improved, and
a new derivation of the sphere-packing bound is presented.
|
0706.0685
|
Non-Parametric Field Estimation using Randomly Deployed, Noisy, Binary
Sensors
|
cs.IT math.IT
|
The reconstruction of a deterministic data field from binary-quantized noisy
observations of sensors which are randomly deployed over the field domain is
studied. The study focuses on the extremes of lack of deterministic control in
the sensor deployment, lack of knowledge of the noise distribution, and lack of
sensing precision and reliability. Such adverse conditions are motivated by
possible real-world scenarios where a large collection of low-cost, crudely
manufactured sensors are mass-deployed in an environment where little can be
assumed about the ambient noise. A simple estimator that reconstructs the
entire data field from these unreliable, binary-quantized, noisy observations
is proposed. Technical conditions for the almost sure and integrated mean
squared error (MSE) convergence of the estimate to the data field, as the
number of sensors tends to infinity, are derived and their implications are
discussed. For finite-dimensional, bounded-variation, and
Sobolev-differentiable function classes, specific integrated MSE decay rates
are derived. For the first and third function classes these rates are found to
be minimax order optimal with respect to infinite precision sensing and known
noise distribution.
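A hedged toy illustration of reconstructing a field from randomly placed one-bit sensors (this is a simplification, not the paper's estimator): assume each sensor reports the bit 1{f(x)+w > 0} with noise w uniform on (-a, a) and a known, so E[bit | x] = (f(x)+a)/(2a) and a local average of bits can be inverted. The sinusoidal field, bin count, and noise model are all assumptions.

```python
import numpy as np

# Toy reconstruction from noisy one-bit sensors at random positions.
# Assumption: w ~ Uniform(-a, a) with |f| < a, so E[bit] = (f+a)/(2a).
rng = np.random.default_rng(1)
a = 2.0                                   # noise half-width, assumed known here
f = lambda x: np.sin(2 * np.pi * x)       # field to reconstruct

N = 200_000                               # randomly deployed sensors
x = rng.uniform(0.0, 1.0, N)
bits = (f(x) + rng.uniform(-a, a, N) > 0).astype(float)

bins = 20
idx = np.minimum((x * bins).astype(int), bins - 1)
est = np.empty(bins)
for b in range(bins):
    est[b] = a * (2 * bits[idx == b].mean() - 1)   # invert E[bit] per bin

centers = (np.arange(bins) + 0.5) / bins
mse = np.mean((est - f(centers)) ** 2)
```

The integrated MSE shrinks as the sensor count grows, mirroring the convergence statements in the abstract.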
|
0706.0720
|
Universal Quantile Estimation with Feedback in the
Communication-Constrained Setting
|
cs.IT math.IT
|
We consider the following problem of decentralized statistical inference:
given i.i.d. samples from an unknown distribution, estimate an arbitrary
quantile subject to limits on the number of bits exchanged. We analyze a
standard fusion-based architecture, in which each of $m$ sensors transmits a
single bit to the fusion center, which in turn is permitted to send some number
$k$ bits of feedback. Supposing that each of $m$ sensors receives $n$
observations, the optimal centralized protocol yields mean-squared error
decaying as $O(1/[n m])$. We develop and analyze the performance of
various decentralized protocols in comparison to this centralized
gold-standard. First, we describe a decentralized protocol based on $k =
\log(m)$ bits of feedback that is strongly consistent, and achieves the
same asymptotic MSE as the centralized optimum. Second, we describe and analyze
a decentralized protocol based on only a single bit ($k=1$) of feedback. For
step sizes independent of $m$, it achieves an asymptotic MSE of order
$O(1/[n \sqrt{m}])$, whereas for step sizes decaying as $1/\sqrt{m}$, it
achieves the same $O(1/[n m])$ decay in MSE as the centralized optimum.
Our theoretical results are complemented by simulations, illustrating the
tradeoffs between these different protocols.
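A much-simplified caricature of the one-bit idea (not the paper's protocol): at each round every sensor sends the single bit 1{X <= theta}, and the center nudges its estimate toward the target quantile with a decaying stochastic-approximation step. The Gaussian source, step schedule, and round count are assumptions for illustration.

```python
import numpy as np

# Simplified one-bit quantile estimation via stochastic approximation.
# Each round: m sensors each compare one fresh sample to theta and send
# one bit; the center moves theta toward the alpha-quantile.
rng = np.random.default_rng(4)
m, alpha = 50, 0.9
theta = 0.0
for t in range(1, 2001):
    samples = rng.standard_normal(m)            # fresh i.i.d. observations
    frac_below = np.mean(samples <= theta)      # aggregate of one-bit messages
    theta += (alpha - frac_below) / np.sqrt(t)  # decaying correction step
```

For a standard normal source the 0.9-quantile is about 1.28, and the iterate settles near it.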
|
0706.0869
|
Position Coding
|
cs.IT math.CO math.IT
|
A position coding pattern is an array of symbols in which subarrays of a
certain fixed size appear at most once. So, each subarray uniquely identifies a
location in the larger array, which means there is a bijection of some sort
from this set of subarrays to a set of coordinates. The key to Fly Pentop
Computer paper and other examples of position codes is a method to read the
subarray and then convert it to coordinates. Position coding makes use of ideas
from discrete mathematics and number theory. In this paper, we will describe
the underlying mathematics of two position codes, one being the Anoto code that
is the basis of "Fly paper". Then, we will present two new codes, one of which
uses binary wavelets as part of the bijection.
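A one-dimensional toy analogue of a position code (not the Anoto code, which is 2-D and more involved): in a binary de Bruijn sequence of order n, every length-n window occurs exactly once per period, so reading any n consecutive symbols identifies the position.

```python
# Toy 1-D position code built from a de Bruijn sequence: every length-n
# window occurs exactly once per period, so a window pins down a position.

def de_bruijn(k, n):
    """FKM construction of a de Bruijn sequence over alphabet {0..k-1}."""
    a = [0] * (k * n)
    seq = []
    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)
    db(1, 1)
    return seq

n = 4
s = de_bruijn(2, n)                      # period 2**n = 16
ext = s + s[:n - 1]                      # unwrap the cyclic sequence

# position decoder: map each window to the index where it starts
lookup = {tuple(ext[i:i + n]): i for i in range(len(s))}

def locate(window):
    """Recover the position from one observed length-n window."""
    return lookup[tuple(window)]
```

The real 2-D codes replace the sequence by an array whose fixed-size subarrays are all distinct, but the window-to-coordinate bijection works the same way.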
|
0706.0870
|
Inferring the Composition of a Trader Population in a Financial Market
|
cs.CE nlin.AO
|
We discuss a method for predicting financial movements and finding pockets of
predictability in the price-series, which is built around inferring the
heterogeneity of trading strategies in a multi-agent trader population. This
work explores extensions to our previous framework (arXiv:physics/0506134).
Here we allow for more intelligent agents possessing a richer strategy set, and
we no longer constrain the estimate for the heterogeneity of the agents to a
probability space. We also introduce a scheme which allows the incorporation of
models with a wide variety of agent types, and discuss a mechanism for the
removal of bias from relevant parameters.
|
0706.1001
|
Epistemic Analysis of Strategic Games with Arbitrary Strategy Sets
|
cs.GT cs.AI
|
We provide here an epistemic analysis of arbitrary strategic games based on
the possibility correspondences. Such an analysis calls for the use of
transfinite iterations of the corresponding operators. Our approach is based on
Tarski's Fixpoint Theorem and applies both to the notions of rationalizability
and the iterated elimination of strictly dominated strategies.
|
0706.1051
|
Improved Neural Modeling of Real-World Systems Using Genetic Algorithm
Based Variable Selection
|
cs.NE
|
Neural network models of real-world systems, such as industrial processes,
made from sensor data must often rely on incomplete data. System states may not
all be known, sensor data may be biased or noisy, and it is not often known
which sensor data may be useful for predictive modelling. Genetic algorithms
may be used to help to address this problem by determining the near optimal
subset of sensor variables most appropriate to produce good models. This paper
describes the use of genetic search to optimize variable selection to determine
inputs into the neural network model. We discuss genetic algorithm
implementation issues including data representation types and genetic operators
such as crossover and mutation. We present the use of this technique for neural
network modelling of a typical industrial application, a liquid fed ceramic
melter, and detail the results of the genetic search to optimize the neural
network model for this application.
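A hedged sketch of GA-based variable selection in the spirit of the abstract. For speed the "model" here is ordinary least squares rather than a neural network; chromosomes are bit masks over candidate sensor variables, fitness is validation MSE plus a small per-input penalty, and all parameters are illustrative assumptions.

```python
import numpy as np

# GA variable selection sketch: bit-mask chromosomes, one-point crossover,
# bit-flip mutation, elitism. OLS stands in for the neural network model.
rng = np.random.default_rng(2)
n, p = 300, 12
X = rng.standard_normal((n, p))
y = 2 * X[:, 0] - 3 * X[:, 4] + X[:, 7] + 0.1 * rng.standard_normal(n)
Xtr, ytr, Xva, yva = X[:200], y[:200], X[200:], y[200:]

def fitness(mask):
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    w, *_ = np.linalg.lstsq(Xtr[:, cols], ytr, rcond=None)
    mse = np.mean((Xva[:, cols] @ w - yva) ** 2)
    return mse + 0.01 * cols.size         # prefer fewer sensor inputs

pop = rng.integers(0, 2, (30, p))
for _ in range(40):                       # generations
    pop = pop[np.argsort([fitness(m) for m in pop])]
    children = []
    for _ in range(15):
        a, b = pop[rng.integers(0, 10)], pop[rng.integers(0, 10)]
        cut = int(rng.integers(1, p))     # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        flip = rng.random(p) < 0.05       # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([pop[:15], *children])  # elitism: keep best 15

best = min(pop, key=fitness)
selected = set(np.flatnonzero(best))
```

The search reliably retains the three informative variables, since dropping any of them costs far more validation error than the per-input penalty saves.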
|
0706.1061
|
Design, Implementation, and Cooperative Coevolution of an Autonomous/
Teleoperated Control System for a Serpentine Robotic Manipulator
|
cs.NE cs.RO
|
Design, implementation, and machine learning issues associated with
developing a control system for a serpentine robotic manipulator are explored.
The controller developed provides autonomous control of the serpentine robotic
manipulator during operation of the manipulator within an enclosed environment
such as an underground storage tank. The controller algorithms make use of both
low-level joint angle control employing force/position feedback constraints,
and high-level coordinated control of end-effector positioning. This approach
has resulted in both high-level full robotic control and low-level telerobotic
control modes, and provides a high level of dexterity for the operator.
|
0706.1119
|
Cointegration of the Daily Electric Power System Load and the Weather
|
cs.CE
|
The paper makes a thermal predictive analysis of the electric power system
security for a day ahead. This predictive analysis is set as a thermal
computation of the expected security. This computation is obtained by
cointegrating the daily electric power system load and the weather, by finding
the daily electric power system thermodynamics and by introducing tests for
this thermodynamics. The predictive analysis made shows the electricity
consumers' wisdom.
|
0706.1137
|
Automatically Restructuring Practice Guidelines using the GEM DTD
|
cs.AI
|
This paper describes a system capable of semi-automatically filling an XML
template from free texts in the clinical domain (practice guidelines). The XML
template includes semantic information not explicitly encoded in the text
(pairs of conditions and actions/recommendations). Therefore, there is a need
to compute the exact scope of conditions over text sequences expressing the
required actions. We present a system developed for this task. We show that it
yields good performance when applied to the analysis of French practice
guidelines.
|
0706.1169
|
Vector Precoding for Wireless MIMO Systems: A Replica Analysis
|
cs.IT cond-mat.stat-mech math.IT
|
We apply the replica method to analyze vector precoding, a method to reduce
transmit power in antenna array communications. The analysis applies to a very
general class of channel matrices. The statistics of the channel matrix enter
the transmitted energy per symbol via its R-transform. We find that vector
precoding performs much better for complex than for real alphabets. As a
byproduct, we find a nonlinear precoding method with polynomial complexity that
outperforms NP-hard Tomlinson-Harashima precoding for binary modulation on
complex channels if the number of transmit antennas is slightly larger than
twice the number of receive antennas.
|
0706.1179
|
Collaborative product and process model: Multiple Viewpoints approach
|
cs.OH cs.IR
|
The design and development of complex products invariably involves many
actors who have different points of view on the problem they are addressing,
the product being developed, and the process by which it is being developed.
The actors' viewpoints approach was designed to provide an organisational
framework in which these different perspectives or points of view, and their
relationships, could be explicitly gathered and formatted (organised by the
focus of each actor's activity). The approach acknowledges the inevitability
of multiple interpretations of product information as different views,
promotes the gathering of actors' interests, and encourages the retrieval of
adequate information while providing support for integration through PLM
and/or SCM collaboration. In this paper, we
present our multiple viewpoints approach, and we illustrate it by an industrial
example on cyclone vessel product.
|
0706.1290
|
Temporal Reasoning without Transitive Tables
|
cs.AI
|
Representing and reasoning about qualitative temporal information is an
essential part of many artificial intelligence tasks. Many models have been
proposed in the literature for representing such temporal information. All
derive from a point-based or an interval-based framework. One fundamental
reasoning task that arises in applications of these frameworks is given by the
following scheme: given possibly indefinite and incomplete knowledge of the
binary relationships between some temporal objects, find the consistent
scenarios involving all these objects. All these models require transitive
tables -- or, similarly, inference rules -- for solving such tasks. We have
defined an alternative model, S-languages, to represent qualitative temporal
information, based on only two relations: \emph{precedence} and
\emph{simultaneity}. In this paper, we show how this model makes it possible
to avoid transitive tables and inference rules when handling this kind of
problem.
|
0706.1399
|
Duality and Stability Regions of Multi-rate Broadcast and Multiple
Access Networks
|
cs.IT math.IT
|
We characterize stability regions of two-user fading Gaussian multiple access
(MAC) and broadcast (BC) networks with centralized scheduling. The data to be
transmitted to the users is encoded into codewords of fixed length. The rates
of the codewords used are restricted to a fixed set of finite cardinality. With
successive decoding and interference cancellation at the receivers, we find the
set of arrival rates that can be stabilized over the MAC and BC networks. In
MAC and BC networks with average power constraints, we observe that the duality
property that relates the MAC and BC information-theoretic capacity regions
extends to their stability regions as well. In MAC and BC networks with peak
power constraints, the union of stability regions of dual MAC networks is found
to be strictly contained in the BC stability region.
|
0706.1410
|
Evolutionary Mesh Numbering: Preliminary Results
|
cs.NA cs.NE math.NA math.OC
|
Mesh numbering is a critical issue in Finite Element Methods, as the
computational cost of one analysis is highly dependent on the order of the
nodes of the mesh. This paper presents some preliminary investigations on the
problem of mesh numbering using Evolutionary Algorithms. Three conclusions can
be drawn from these experiments. First, the results of the state-of-the-art
method used in FEM software packages (Gibbs' method) can be consistently
improved; second, none of the crossover operators tried so far (either general
or problem specific) proved useful; third, though the general tendency in
Evolutionary Computation seems to be hybridization with other methods
(deterministic or heuristic), none of the attempts presented here has yet met
with success. The good news, however, is that this algorithm allows an
improvement over the standard heuristic method of between 12% and 20% on both
the 1545-node and 5453-node meshes used as test beds. Finally, a curious
interaction between the selection scheme and the use of a problem-specific
mutation operator was observed, which calls for further investigation.
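To make the cost being optimized concrete: for an ordering of the mesh nodes, the matrix bandwidth is the largest numbering gap across any mesh edge. Below, a plain swap-mutation hill-climber stands in for the paper's evolutionary algorithm, and a 6x6 grid is an illustrative stand-in for a real FEM mesh; both are assumptions for the sketch.

```python
import numpy as np

# Mesh numbering in miniature: bandwidth(order) = max |pos[i]-pos[j]| over
# mesh edges. A swap-mutation hill-climber improves a random numbering.
rng = np.random.default_rng(3)
w = 6
edges = [(r * w + c, r * w + c + 1) for r in range(w) for c in range(w - 1)]
edges += [(r * w + c, (r + 1) * w + c) for r in range(w - 1) for c in range(w)]

def bandwidth(order):
    pos = np.empty(w * w, dtype=int)
    pos[order] = np.arange(w * w)         # pos[node] = its number
    return max(abs(int(pos[i]) - int(pos[j])) for i, j in edges)

order = rng.permutation(w * w)            # random initial numbering
start = best = bandwidth(order)
for _ in range(5000):                     # swap-mutation hill climbing
    i, j = rng.integers(0, w * w, 2)
    order[i], order[j] = order[j], order[i]
    b = bandwidth(order)
    if b <= best:
        best = b
    else:                                 # undo a worsening swap
        order[i], order[j] = order[j], order[i]
```

Row-by-row numbering achieves bandwidth 6 on this grid, which is optimal; the point of evolutionary search is to approach such orderings on irregular meshes where no simple rule applies.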
|
0706.1588
|
Detection of Gauss-Markov Random Fields with Nearest-Neighbor Dependency
|
cs.IT math.IT
|
The problem of hypothesis testing against independence for a Gauss-Markov
random field (GMRF) is analyzed. Assuming an acyclic dependency graph, an
expression for the log-likelihood ratio of detection is derived. Assuming
random placement of nodes over a large region according to the Poisson or
uniform distribution and nearest-neighbor dependency graph, the error exponent
of the Neyman-Pearson detector is derived using large-deviations theory. The
error exponent is expressed as a dependency-graph functional and the limit is
evaluated through a special law of large numbers for stabilizing graph
functionals. The exponent is analyzed for different values of the variance
ratio and correlation. It is found that a more correlated GMRF has a higher
exponent at low values of the variance ratio whereas the situation is reversed
at high values of the variance ratio.
|
0706.1700
|
Information Criteria and Arithmetic Codings : An Illustration on Raw
Images
|
cs.IT math.IT
|
In this paper we give a short theoretical description of the general
predictive adaptive arithmetic coding technique. The links between this
technique and the work of J. Rissanen in the 1980s, in particular the BIC
information criterion used in parametric model selection problems, are
established. We also design lossless and lossy coding techniques for images.
The lossless technique uses a mix of fixed-length coding and arithmetic coding
and provides better compression results than either method on its own. That
technique is also seen to have an interesting application in the domain of
statistics, since it gives a data-driven procedure for the non-parametric
histogram selection problem. The lossy technique uses only predictive adaptive
arithmetic codes and shows how a good choice of the order of prediction might
lead to better results in terms of compression. We illustrate those coding
techniques on a raw grayscale image.
|
0706.1716
|
Modeling and analysis using hybrid Petri nets
|
cs.IT math.IT
|
This paper is devoted to the use of hybrid Petri nets (PNs) for modeling and
control of hybrid dynamic systems (HDS). Modeling, analysis and control of HDS
attract ever more attention from researchers, and several works have been
devoted
to these topics. We consider in this paper the extensions of the PN formalism
(initially conceived for modeling and analysis of discrete event systems) in
the direction of hybrid modeling. We present, first, the continuous PN models.
These models are obtained from discrete PNs by the fluidification of the
markings. They constitute the first steps in the extension of PNs toward hybrid
modeling. Then, we present two hybrid PN models, which differ in the class of
HDS they can deal with. The first one is used for deterministic HDS modeling,
whereas the second one can deal with HDS with nondeterministic behavior.
Keywords: Hybrid dynamic systems; D-elementary hybrid Petri nets; Hybrid
automata; Controller synthesis
|
0706.1751
|
MacWilliams Identity for Codes with the Rank Metric
|
cs.IT math.IT
|
The MacWilliams identity, which relates the weight distribution of a code to
the weight distribution of its dual code, is useful in determining the weight
distribution of codes. In this paper, we derive the MacWilliams identity for
linear codes with the rank metric, and our identity has a different form than
that by Delsarte. Using our MacWilliams identity, we also derive related
identities for rank metric codes. These identities parallel the binomial and
power moment identities derived for codes with the Hamming metric.
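For reference, the classical Hamming-metric identity that the rank-metric result parallels: for a linear code $C \subseteq \mathbb{F}_q^n$ with weight enumerator $W_C(x,y) = \sum_{c \in C} x^{\,n-\mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)}$, the dual code's enumerator satisfies

```latex
W_{C^\perp}(x, y) \;=\; \frac{1}{|C|}\, W_C\bigl(x + (q-1)\,y,\; x - y\bigr).
```

The rank-metric identity of the abstract plays the same role with rank weight in place of Hamming weight, though its form differs from Delsarte's.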
|
0706.1860
|
FIPA-based Interoperable Agent Mobility Proposal
|
cs.MA cs.NI
|
This paper presents a proposal for a flexible agent mobility architecture
based on IEEE-FIPA standards and intended to be one of them. This proposal is a
first step towards interoperable mobility mechanisms, which are needed for
future agent migration between different kinds of platforms. Our proposal is
presented as a flexible and robust architecture that has been successfully
implemented in the JADE and AgentScape platforms. It is based on an open set of
protocols, allowing new protocols and future improvements to be accommodated in
the architecture. With this proposal we demonstrate that a standard
architecture for agent mobility capable of supporting several agent platforms
can be defined and implemented.
|
0706.1926
|
Towards understanding and modelling office daily life
|
cs.CV cs.CY
|
Measuring and modeling human behavior is a very complex task. In this paper
we present our initial thoughts on modeling and automatic recognition of some
human activities in an office. We argue that to successfully model human
activities, we need to consider both individual behavior and group dynamics. To
demonstrate these theoretical approaches, we introduce an experimental system
for analyzing everyday activity in our office.
|
0706.2033
|
Power Allocation for Discrete-Input Delay-Limited Fading Channels
|
cs.IT math.IT
|
We consider power allocation algorithms for fixed-rate transmission over
Nakagami-m non-ergodic block-fading channels with perfect transmitter and
receiver channel state information and discrete input signal constellations,
under both short- and long-term power constraints. Optimal power allocation
schemes are shown to be direct applications of previous results in the
literature. We show that the SNR exponent of the optimal short-term scheme is
given by m times the Singleton bound. We also illustrate the significant gains
available by employing long-term power constraints. In particular, we analyze
the optimal long-term solution, showing that zero outage can be achieved
provided that the corresponding short-term SNR exponent with the same system
parameters is strictly greater than one. Conversely, if the short-term SNR
exponent is smaller than one, we show that zero outage cannot be achieved. In
this case, we derive the corresponding long-term SNR exponent as a function of
the Singleton bound. Due to the nature of the expressions involved, the
complexity of optimal schemes may be prohibitive for system implementation. We
therefore propose simple sub-optimal power allocation schemes whose outage
probability performance is very close to the minimum outage probability
obtained by optimal schemes. We also show the applicability of these techniques
to practical systems employing orthogonal frequency division multiplexing.
|
0706.2040
|
Getting started in probabilistic graphical models
|
q-bio.QM cs.LG physics.soc-ph stat.ME stat.ML
|
Probabilistic graphical models (PGMs) have become a popular tool for
computational analysis of biological data in a variety of domains. But, what
exactly are they and how do they work? How can we use PGMs to discover patterns
that are biologically relevant? And to what extent can PGMs help us formulate
new hypotheses that are testable at the bench? This note sketches out some
answers and illustrates the main ideas behind the statistical approach to
biological pattern discovery.
|
0706.2310
|
Space-time coding techniques with bit-interleaved coded modulations for
MIMO block-fading channels
|
cs.IT math.IT
|
The space-time bit-interleaved coded modulation (ST-BICM) is an efficient
technique to obtain high diversity and coding gain on a block-fading MIMO
channel. Its maximum-likelihood (ML) performance is computed under ideal
interleaving conditions, which enables a global optimization taking into
account channel coding. Thanks to a diversity upperbound derived from the
Singleton bound, an appropriate choice of the time dimension of the space-time
coding is possible, which maximizes diversity while minimizing complexity.
Based on the analysis, an optimized interleaver and a set of linear precoders,
called dispersive nucleo algebraic (DNA) precoders, are proposed. The proposed
precoders have good performance with respect to the state of the art and exist
for any number of transmit antennas and any time dimension. With turbo codes,
they exhibit a frame error rate which does not increase with frame length.
|
0706.2331
|
Pricing American Options for Jump Diffusions by Iterating Optimal
Stopping Problems for Diffusions
|
cs.CE
|
We approximate the price of the American put for jump diffusions by a
sequence of functions, which are computed iteratively. This sequence converges
to the price function uniformly and exponentially fast. Each element of the
approximating sequence solves an optimal stopping problem for geometric
Brownian motion, and can be numerically computed using the classical finite
difference methods. We prove the convergence of this numerical scheme and
present examples to illustrate its performance.
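As a concrete instance of the diffusion subproblem that each iteration solves, here is an American put under geometric Brownian motion priced on a CRR binomial tree. This is a standard alternative to the finite-difference schemes the abstract mentions, there are no jumps in this sketch, and the parameters are illustrative.

```python
import math

# American put under GBM on a Cox-Ross-Rubinstein binomial tree: backward
# induction taking the max of continuation value and immediate exercise.
def american_put(S0, K, r, sigma, T, steps=500):
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1 / u
    disc = math.exp(-r * dt)
    p = (math.exp(r * dt) - d) / (u - d)          # risk-neutral up probability
    # option values at maturity
    values = [max(K - S0 * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        values = [
            max(
                disc * (p * values[j + 1] + (1 - p) * values[j]),  # continue
                K - S0 * u**j * d**(i - j),                        # exercise
            )
            for j in range(i + 1)
        ]
    return values[0]

price = american_put(S0=100, K=100, r=0.05, sigma=0.2, T=1.0)
```

For these parameters the price comes out near 6.09, slightly above the Black-Scholes European put value of about 5.57, reflecting the early-exercise premium.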
|
0706.2434
|
Interference and Outage in Clustered Wireless Ad Hoc Networks
|
cs.IT math.IT
|
In the analysis of large random wireless networks, the underlying node
distribution is almost ubiquitously assumed to be the homogeneous Poisson point
process. In this paper, the node locations are assumed to form a Poisson
clustered process on the plane. We derive the distributional properties of the
interference and provide upper and lower bounds for its CCDF. We consider the
probability of successful transmission in an interference limited channel when
fading is modeled as Rayleigh. We provide a numerically integrable expression
for the outage probability and closed-form upper and lower bounds. We show that
when the transmitter-receiver distance is large, the success probability is
greater than that of a Poisson arrangement. These results characterize the
performance of the system under geographical or MAC-induced clustering. We
obtain the maximum intensity of transmitting nodes for a given outage
constraint, i.e., the transmission capacity (of this spatial arrangement) and
show that it is equal to that of a Poisson arrangement of nodes. For the
analysis, techniques from stochastic geometry are used, in particular the
probability generating functional of Poisson cluster processes, the Palm
characterization of Poisson cluster processes and the Campbell-Mecke theorem.
|
0706.2746
|
Abstract Storage Devices
|
cs.DM cs.CC cs.IT math.IT
|
A quantum storage device differs radically from a conventional physical
storage device. Its state can be set to any value in a certain (infinite) state
space, but in general every possible read operation yields only partial
information about the stored state.
The purpose of this paper is to initiate the study of a combinatorial
abstraction, called abstract storage device (ASD), which models deterministic
storage devices with the property that only partial information about the state
can be read, but that there is a degree of freedom as to which partial
information should be retrieved.
This concept leads to a number of interesting problems which we address, like
the reduction of one device to another device, the equivalence of devices,
direct products of devices, as well as the factorization of a device into
primitive devices. We prove that every ASD has an equivalent ASD with a minimal
number of states and possible read operations. Also, we prove that the
reducibility problem for ASD's is NP-complete, that the equivalence problem is
at least as hard as the graph isomorphism problem, and that the factorization
into binary-output devices (if it exists) is unique.
|
0706.2795
|
Dirty-paper Coding without Channel Information at the Transmitter and
Imperfect Estimation at the Receiver
|
cs.IT math.IT
|
In this paper, we examine the effects of imperfect channel estimation at the
receiver and no channel knowledge at the transmitter on the capacity of the
fading Costa's channel with channel state information non-causally known at the
transmitter. We derive the optimal Dirty-paper coding (DPC) scheme and its
corresponding achievable rates with the assumption of Gaussian inputs. Our
results, for uncorrelated Rayleigh fading, provide intuitive insights on the
impact of the channel estimate and the channel characteristics (e.g. SNR,
fading process, channel training) on the achievable rates. These are useful in
practical scenarios of multiuser wireless communications (e.g. Broadcast
Channels) and information embedding applications (e.g. robust watermarking). We
also study optimal training design adapted to each application. We provide
numerical results for a single-user fading Costa's channel with
maximum-likelihood (ML) channel estimation. These illustrate an interesting
practical trade-off between the amount of training and its impact on the
interference cancellation performance of the DPC scheme.
|
0706.2797
|
Entity Extraction in Evolving Collections
|
cs.IR
|
The goal of our work is to use a set of reports and extract named entities,
in our case the names of Industrial or Academic partners. Starting with an
initial list of entities, we use a first set of documents to identify syntactic
patterns that are then validated in a supervised learning phase on a set of
annotated documents. The complete collection is then explored. This approach is
similar to the ones used in data extraction from semi-structured documents
(wrappers) and does not require any linguistic resources, nor a large training
set. As our collection of documents will evolve over the years, we hope that
the performance of the extraction would improve with the increased size of the
training set.
|
0706.2809
|
On the Outage Capacity of a Practical Decoder Using Channel Estimation
Accuracy
|
cs.IT math.IT
|
The optimal decoder achieving the outage capacity under imperfect channel
estimation is investigated. First, by searching into the family of nearest
neighbor decoders, which can be easily implemented on most practical coded
modulation systems, we derive a decoding metric that minimizes the average of
the transmission error probability over all channel estimation errors. This
metric, for arbitrary memoryless channels, achieves the capacity of a composite
(more noisy) channel. Next, according to the notion of estimation-induced
outage capacity (EIO capacity) introduced in our previous work, we characterize
maximal achievable information rates associated with the proposed decoder. The
performance of the proposed decoding metric over uncorrelated Rayleigh fading
MIMO channels is compared to both the classical mismatched maximum-likelihood
(ML) decoder and the theoretical limits given by the EIO capacity (i.e. the
best decoder in presence of channel estimation errors). Numerical results show
that the derived metric provides significant gains, in terms of achievable
information rates and bit error rate (BER), in a bit interleaved coded
modulation (BICM) framework, without introducing any additional decoding
complexity.
|
0706.2906
|
Capacity Scaling for MIMO Two-Way Relaying
|
cs.IT math.IT
|
A multiple input multiple output (MIMO) two-way relay channel is considered,
where two sources want to exchange messages with each other using multiple
relay nodes, and both the sources and relay nodes are equipped with multiple
antennas. Both sources are assumed to have an equal number of antennas and
have perfect channel state information (CSI) for all the channels of the MIMO
two-way relay channel, whereas each relay node is assumed to have either CSI
for its transmit and receive channels (the coherent case) or no CSI for any of
the channels (the non-coherent case). The main results in this paper are on the
scaling behavior of the capacity region of the MIMO two-way relay channel with
increasing number of relay nodes. In the coherent case, the capacity region of
the MIMO two-way relay channel is shown to scale linearly with the number of
antennas at source nodes and logarithmically with the number of relay nodes. In
the non-coherent case, the capacity region is shown to scale linearly with the
number of antennas at the source nodes and logarithmically with the signal to
noise ratio.
|
0706.2926
|
Reducing the Error Floor
|
cs.IT math.IT
|
We discuss how the loop calculus approach of [Chertkov, Chernyak '06],
enhanced by the pseudo-codeword search algorithm of [Chertkov, Stepanov '06]
and the facet-guessing idea from [Dimakis, Wainwright '06], improves decoding
of graph based codes in the error-floor domain. The utility of the new, Linear
Programming based, decoding is demonstrated via analysis and simulations of the
model $[155,64,20]$ code.
|
0706.2963
|
Outage Behavior of Discrete Memoryless Channels Under Channel Estimation
Errors
|
cs.IT math.IT
|
Classically, communication systems are designed assuming perfect channel
state information at the receiver and/or transmitter. However, in many
practical situations, only an estimate of the channel is available that differs
from the true channel. We address this channel mismatch scenario by using the
notion of estimation-induced outage capacity, for which we provide an
associated coding theorem and its strong converse, assuming a discrete
memoryless channel. We illustrate our ideas via numerical simulations for
transmissions over Ricean fading channels under a quality of service (QoS)
constraint, using a rate-limited feedback channel and maximum likelihood (ML)
channel estimation. Our results provide intuitive insights on the impact of the
channel estimate and the channel characteristics (SNR, Ricean K-factor,
training sequence length, feedback rate, etc.) on the mean outage capacity.
|
0706.3009
|
Application of a design space exploration tool to enhance interleaver
generation
|
cs.AR cs.IT math.IT
|
This paper presents a methodology to efficiently explore the design space of
communication adapters. In most digital signal processing (DSP) applications,
the overall performance of the system is significantly affected by
communication architectures, as a consequence the designers need specifically
optimized adapters. By explicitly modeling these communications within an
effective graph-theoretic model and analysis framework, we automatically
generate an optimized architecture, named Space-Time AdapteR (STAR). Our design
flow takes as input a C description of input/output data scheduling and user
requirements (throughput, latency, parallelism, etc.), and formalizes
communication constraints through a Resource Constraints Graph (RCG). Design
space exploration is then performed through associated tools, to synthesize a
STAR component under time-to-market constraints. The proposed approach has been
tested to design an industrial data mixing block example: an Ultra-Wideband
interleaver.
|
0706.3060
|
N-Body Simulations on GPUs
|
cs.CE cs.DC
|
Commercial graphics processors (GPUs) have high compute capacity at very low
cost, which makes them attractive for general purpose scientific computing. In
this paper we show how graphics processors can be used for N-body simulations
to obtain improvements in performance over current generation CPUs. We have
developed a highly optimized algorithm for performing the O(N^2) force
calculations that constitute the major part of stellar and molecular dynamics
simulations. In some of the calculations, we achieve sustained performance of
nearly 100 GFlops on an ATI X1900XTX. The performance on GPUs is comparable to
specialized processors such as GRAPE-6A and MDGRAPE-3, but at a fraction of the
cost. Furthermore, the wide availability of GPUs has significant implications
for cluster computing and distributed computing efforts like Folding@Home.
|
0706.3104
|
Group Testing with Random Pools: optimal two-stage algorithms
|
cs.DS cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT
|
We study Probabilistic Group Testing of a set of N items each of which is
defective with probability p. We focus on the double limit of small defect
probability, p<<1, and large number of variables, N>>1, taking either p->0
after $N\to\infty$ or $p=1/N^{\beta}$ with $\beta\in(0,1/2)$. In both settings
the optimal number of tests which are required to identify with certainty the
defectives via a two-stage procedure, $\bar T(N,p)$, is known to scale as
$Np|\log p|$. Here we determine the sharp asymptotic value of $\bar
T(N,p)/(Np|\log p|)$ and construct a class of two-stage algorithms over which
this optimal value is attained. This is done by choosing a proper bipartite
regular graph (of tests and variable nodes) for the first stage of the
detection. Furthermore we prove that this optimal value is also attained on
average over a random bipartite graph where all variables have the same degree,
while the tests have Poisson-distributed degrees. Finally, we improve the
existing upper and lower bound for the optimal number of tests in the case
$p=1/N^{\beta}$ with $\beta\in[1/2,1)$.
|
0706.3129
|
Closed-Form Density of States and Localization Length for a
Non-Hermitian Disordered System
|
cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT nlin.SI
|
We calculate the Lyapunov exponent for the non-Hermitian Zakharov-Shabat
eigenvalue problem corresponding to the attractive non-linear Schroedinger
equation with a Gaussian random pulse as initial value function. Using an
extension of the Thouless formula to non-Hermitian random operators, we
calculate the corresponding average density of states. We analyze two cases,
one with circularly symmetric complex Gaussian pulses and the other with real
Gaussian pulses. We discuss the implications in the context of the information
transmission through non-linear optical fibers.
|
0706.3170
|
Asymptotic Analysis of General Multiuser Detectors in MIMO DS-CDMA
Channels
|
cs.IT math.IT
|
We analyze a MIMO DS-CDMA channel with a general multiuser detector including
a nonlinear multiuser detector, using the replica method. In the many-user
limit, the MIMO DS-CDMA channel with the multiuser detector is decoupled into a
bank of single-user SIMO Gaussian channels if a spatial spreading scheme is
employed. On the other hand, it is decoupled into a bank of single-user MIMO
Gaussian channels if a spatial spreading scheme is not employed. The spectral
efficiency of the MIMO DS-CDMA channel with the spatial spreading scheme is
comparable with that of the MIMO DS-CDMA channel using an optimal space-time
block code without the spatial spreading scheme. In the case of the QPSK data
modulation scheme the spectral efficiency of the MIMO DS-CDMA channel with the
MMSE detector shows {\it waterfall} behavior and is very close to the
corresponding sum capacity when the system load is just below the transition
point of the {\it waterfall} behavior. Our result implies that the performance
of a multiuser detector taking the data modulation scheme into consideration
can be far superior to that of linear multiuser detectors.
|
0706.3188
|
A tutorial on conformal prediction
|
cs.LG stat.ML
|
Conformal prediction uses past experience to determine precise levels of
confidence in new predictions. Given an error probability $\epsilon$, together
with a method that makes a prediction $\hat{y}$ of a label $y$, it produces a
set of labels, typically containing $\hat{y}$, that also contains $y$ with
probability $1-\epsilon$. Conformal prediction can be applied to any method for
producing $\hat{y}$: a nearest-neighbor method, a support-vector machine, ridge
regression, etc.
Conformal prediction is designed for an on-line setting in which labels are
predicted successively, each one being revealed before the next is predicted.
The most novel and valuable feature of conformal prediction is that if the
successive examples are sampled independently from the same distribution, then
the successive predictions will be right $1-\epsilon$ of the time, even though
they are based on an accumulating dataset rather than on independent datasets.
In addition to the model under which successive examples are sampled
independently, other on-line compression models can also use conformal
prediction. The widely used Gaussian linear model is one of these.
This tutorial presents a self-contained account of the theory of conformal
prediction and works through several numerical examples. A more comprehensive
treatment of the topic is provided in "Algorithmic Learning in a Random World",
by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).
|
0706.3295
|
Lower bounds on the minimum average distance of binary codes
|
cs.IT math.CO math.IT
|
New lower bounds on the minimum average Hamming distance of binary codes are
derived. The bounds are obtained using a linear programming approach.
|
0706.3430
|
The Impact of Channel Feedback on Opportunistic Relay Selection for
Hybrid-ARQ in Wireless Networks
|
cs.IT math.IT
|
This paper presents a decentralized relay selection protocol for a dense
wireless network and describes channel feedback strategies that improve its
performance. The proposed selection protocol supports hybrid
automatic-repeat-request transmission where relays forward parity information
to the destination in the event of a decoding error. Channel feedback is
employed for refining the relay selection process and for selecting an
appropriate transmission mode in a proposed adaptive modulation transmission
framework. An approximation of the throughput of the proposed adaptive
modulation strategy is presented, and the dependence of the throughput on
system parameters such as the relay contention probability and the adaptive
modulation switching point is illustrated via maximization of this
approximation. Simulations show that the throughput of the proposed selection
strategy is comparable to that yielded by a centralized selection approach that
relies on geographic information.
|
0706.3480
|
Tight Bounds on the Average Length, Entropy, and Redundancy of
Anti-Uniform Huffman Codes
|
cs.IT math.IT
|
In this paper we consider the class of anti-uniform Huffman codes and derive
tight lower and upper bounds on the average length, entropy, and redundancy of
such codes in terms of the alphabet size of the source. The Fibonacci
distributions are introduced, which play a fundamental role in AUH codes. It is
shown that such distributions maximize the average length and the entropy of
the code for a given alphabet size. Another previously known bound on the
entropy for given average length follows immediately from our results.
|
0706.3502
|
Approximately-Universal Space-Time Codes for the Parallel, Multi-Block
and Cooperative-Dynamic-Decode-and-Forward Channels
|
cs.IT cs.DM cs.NI math.IT
|
Explicit codes are constructed that achieve the diversity-multiplexing gain
tradeoff of the cooperative-relay channel under the dynamic decode-and-forward
protocol for any network size and for all numbers of transmit and receive
antennas at the relays.
A particularly simple code construction that makes use of the Alamouti code
as a basic building block is provided for the single relay case.
Along the way, we prove that space-time codes previously constructed in the
literature for the block-fading and parallel channels are approximately
universal, i.e., they achieve the DMT for any fading distribution. It is shown
how approximate universality of these codes leads to the first DMT-optimum code
construction for the general, MIMO-OFDM channel.
|
0706.3639
|
A Collection of Definitions of Intelligence
|
cs.AI
|
This paper is a survey of a large number of informal definitions of
``intelligence'' that the authors have collected over the years. Naturally,
compiling a complete list would be impossible as many definitions of
intelligence are buried deep inside articles and books. Nevertheless, the
70-odd definitions presented here are, to the authors' knowledge, the largest
and best-referenced collection there is.
|
0706.3679
|
Scale-sensitive Psi-dimensions: the Capacity Measures for Classifiers
Taking Values in R^Q
|
cs.LG
|
Bounds on the risk play a crucial role in statistical learning theory. They
usually involve as capacity measure of the model studied the VC dimension or
one of its extensions. In classification, such "VC dimensions" exist for models
taking values in {0, 1}, {1,..., Q} and R. We introduce the generalizations
appropriate for the missing case, that of models with values in R^Q. This
provides us with a new guaranteed risk for M-SVMs which appears superior to the
existing one.
|
0706.3710
|
Optimal Constellations for the Low SNR Noncoherent MIMO Block Rayleigh
Fading Channel
|
cs.IT math.IT
|
Reliable communication over the discrete-input/continuous-output noncoherent
multiple-input multiple-output (MIMO) Rayleigh block fading channel is
considered when the signal-to-noise ratio (SNR) per degree of freedom is low.
Two key problems are posed and solved to obtain the optimum discrete input. In
both problems, the average and peak power per space-time slot of the input
constellation are constrained. In the first one, the peak power to average
power ratio (PPAPR) of the input constellation is held fixed, while in the
second problem, the peak power is fixed independently of the average power. In
the first PPAPR-constrained problem, the mutual information, which grows as
O(SNR^2), is maximized up to second order in SNR. In the second
peak-constrained problem, where the mutual information behaves as O(SNR), the
structure of constellations that are optimal up to first order, or
equivalently, that minimize energy/bit, is explicitly characterized.
Furthermore, among constellations that are first-order optimal, those that
maximize the mutual information up to second order, or equivalently, the
wideband slope, are characterized. In both PPAPR-constrained and
peak-constrained problems, the optimal constellations are obtained in
closed-form as solutions to non-convex optimizations, and interestingly, they
are found to be identical. Due to its special structure, the common solution is
referred to as Space Time Orthogonal Rank one Modulation, or STORM. In both
problems, it is seen that STORM provides a sharp characterization of the
behavior of noncoherent MIMO capacity.
|
0706.3752
|
Secure Nested Codes for Type II Wiretap Channels
|
cs.IT cs.CR math.IT
|
This paper considers the problem of secure coding design for a type II
wiretap channel, where the main channel is noiseless and the eavesdropper
channel is a general binary-input symmetric-output memoryless channel. The
proposed secure error-correcting code has a nested code structure. Two secure
nested coding schemes are studied for a type II Gaussian wiretap channel. The
nesting is based on cosets of a good code sequence for the first scheme and on
cosets of the dual of a good code sequence for the second scheme. In each case,
the corresponding achievable rate-equivocation pair is derived based on the
threshold behavior of good code sequences. The two secure coding schemes
together establish an achievable rate-equivocation region, which almost covers
the secrecy capacity-equivocation region in this case study. The proposed
secure coding scheme is extended to a type II binary symmetric wiretap channel.
A new achievable perfect secrecy rate, which improves upon the previously
reported result by Thangaraj et al., is derived for this channel.
|
0706.3753
|
Multiple Access Channels with Generalized Feedback and Confidential
Messages
|
cs.IT math.IT
|
This paper considers the problem of secret communication over a multiple
access channel with generalized feedback. Two trusted users send independent
confidential messages to an intended receiver, in the presence of a passive
eavesdropper. In this setting, an active cooperation between two trusted users
is enabled through using channel feedback in order to improve the communication
efficiency. Based on rate-splitting and decode-and-forward strategies,
achievable secrecy rate regions are derived for both discrete memoryless and
Gaussian channels. Results show that channel feedback improves the achievable
secrecy rates.
|
0706.3834
|
Design of optimal convolutional codes for joint decoding of correlated
sources in wireless sensor networks
|
cs.IT math.IT
|
We consider a wireless sensors network scenario where two nodes detect
correlated sources and deliver them to a central collector via a wireless link.
Differently from the Slepian-Wolf approach to distributed source coding, in the
proposed scenario the sensing nodes do not perform any pre-compression of the
sensed data. Original data are instead independently encoded by means of
low-complexity convolutional codes. The decoder performs joint decoding with
the aim of exploiting the inherent correlation between the transmitted sources.
Complexity at the decoder is kept low thanks to the use of an iterative joint
decoding scheme, where the output of each decoder is fed to the other decoder's
input as a priori information. For this scheme, we derive a novel analytical
framework for evaluating an upper bound of joint-detection packet error
probability and for deriving the optimum coding scheme. Experimental results
confirm the validity of the analytical framework, and show that recursive codes
allow a noticeable performance gain with respect to non-recursive coding
schemes. Moreover, the proposed recursive coding scheme makes it possible to
approach the ideal Slepian-Wolf scheme performance over the AWGN channel, and to clearly
outperform it over fading channels on account of diversity gain due to
correlation of information.
|
0706.3846
|
Opportunistic Scheduling and Beamforming for MIMO-SDMA Downlink Systems
with Linear Combining
|
cs.IT math.IT
|
Opportunistic scheduling and beamforming schemes are proposed for multiuser
MIMO-SDMA downlink systems with linear combining in this work. Signals received
from all antennas of each mobile terminal (MT) are linearly combined to improve
the {\em effective} signal-to-noise-interference ratios (SINRs). By exploiting
limited feedback on the effective SINRs, the base station (BS) schedules
simultaneous data transmission on multiple beams to the MTs with the largest
effective SINRs. Utilizing the extreme value theory, we derive the asymptotic
system throughputs and scaling laws for the proposed scheduling and beamforming
schemes with different linear combining techniques. Computer simulations
confirm that the proposed schemes can substantially improve the system
throughput.
|
0706.4323
|
Theory of Finite or Infinite Trees Revisited
|
cs.LO cs.AI
|
We present in this paper a first-order axiomatization of an extended theory
$T$ of finite or infinite trees, built on a signature containing an infinite
set of function symbols and a relation $\fini(t)$ that makes it possible to
distinguish between finite and infinite trees. We show that $T$ has at least
one model and
prove its completeness by giving not only a decision procedure, but a full
first-order constraint solver which gives clear and explicit solutions for any
first-order constraint satisfaction problem in $T$. The solver is given in the
form of 16 rewriting rules which transform any first-order constraint $\phi$
into an equivalent disjunction $\phi'$ of simple formulas such that $\phi'$ is
either the formula $\true$ or the formula $\false$ or a formula having at least
one free variable, being equivalent neither to $\true$ nor to $\false$ and
where the solutions of the free variables are expressed in a clear and explicit
way. The correctness of our rules implies the completeness of $T$. We also
describe an implementation of our algorithm in CHR (Constraint Handling Rules)
and compare the performance with an implementation in C++ and that of a recent
decision procedure for decomposable theories.
|
0706.4375
|
A Robust Linguistic Platform for Efficient and Domain specific Web
Content Analysis
|
cs.AI
|
Web semantic access in specific domains calls for specialized search engines
with enhanced semantic querying and indexing capacities, which pertain both to
information retrieval (IR) and to information extraction (IE). A rich
linguistic analysis is required either to identify the relevant semantic units
to index and weight them according to linguistic specific statistical
distribution, or as the basis of an information extraction process. Recent
developments make Natural Language Processing (NLP) techniques reliable enough
to process large collections of documents and to enrich them with semantic
annotations. This paper focuses on the design and the development of a text
processing platform, Ogmios, which has been developed in the ALVIS project. The
Ogmios platform exploits existing NLP modules and resources, which may be tuned
to specific domains, and produces linguistically annotated documents. We show
how the three constraints of genericity, domain semantic awareness and
performance can be handled all together.
|
0707.0050
|
Non-atomic Games for Multi-User Systems
|
cs.IT cs.GT math.IT
|
In this contribution, the performance of a multi-user system is analyzed in
the context of frequency selective fading channels. Using game theoretic tools,
a useful framework is provided in order to determine the optimal power
allocation when users know only their own channel (while perfect channel state
information is assumed at the base station). We consider the realistic case of
frequency selective channels for uplink CDMA. This scenario illustrates the
case of decentralized schemes, where limited information on the network is
available at the terminal. Various receivers are considered, namely the Matched
filter, the MMSE filter and the optimum filter. The goal of this paper is to
derive simple expressions for the non-cooperative Nash equilibrium as the
number of mobiles becomes large and the spreading length increases. To that end
two asymptotic methodologies are combined. The first is asymptotic random
matrix theory which allows us to obtain explicit expressions of the impact of
all other mobiles on any given tagged mobile. The second is the theory of
non-atomic games which computes good approximations of the Nash equilibrium as
the number of mobiles grows.
|
0707.0181
|
Location and Spectral Estimation of Weak Wave Packets on Noise
Background
|
cs.CE
|
A method for the location and spectral estimation of weak signals on a noise
background is considered. The method is based on an autoregressive model of the
sought signal, optimized with respect to model order and noise dispersion. A
new approach to model order determination is offered, and the resulting
estimate of the noise dispersion is close to the true one. The optimized model
defines a function describing changes in the spectral and dynamic features of
the empirical data. Analyzing the signal as a dynamic invariant with respect to
the linear shift transformation yields a model consistency function. Together,
these two functions enable the detection of short-time and nonstationary wave
packets at signal-to-noise ratios of -20 dB and above.
|
0707.0234
|
Selection Relaying at Low Signal to Noise Ratios
|
cs.IT math.IT
|
Performance of cooperative diversity schemes at Low Signal to Noise Ratios
(LSNR) was recently studied by Avestimehr et al. [1], who emphasized the
importance of diversity gain over multiplexing gain at low SNRs. It has also
been pointed out that continuous energy transfer to the channel is necessary
for achieving the max-flow min-cut bound at LSNR. Motivated by this we propose
the use of Selection Decode and Forward (SDF) at LSNR and analyze its
performance in terms of the outage probability. We also propose an energy
optimization scheme which further brings down the outage probability.
|
0707.0285
|
A Generalized Sampling Theorem for Frequency Localized Signals
|
cs.IT math.IT
|
A generalized sampling theorem for frequency localized signals is presented.
The generalization in the proposed model of sampling is twofold: (1) It applies
to various prefilters effecting a "soft" bandlimitation, (2) an approximate
reconstruction from sample values rather than a perfect one is obtained (though
the former might be "practically perfect" in many cases). For an arbitrary
finite-energy signal the frequency localization is performed by a prefilter
realizing a crosscorrelation with a function of prescribed properties. The
range of the filter, the so-called localization space, is described in some
detail. Regular sampling is applied and a reconstruction formula is given. For
the reconstruction error a general error estimate is derived and connections
between a critical sampling interval and notions of "soft bandwidth" for the
prefilter are indicated. Examples based on the sinc-function, Gaussian
functions and B-splines are discussed.
|
0707.0323
|
Interference Alignment and the Degrees of Freedom for the K User
Interference Channel
|
cs.IT math.IT
|
While the best known outer bound for the K user interference channel states
that there cannot be more than K/2 degrees of freedom, it has been conjectured
that in general the constant interference channel with any number of users has
only one degree of freedom. In this paper, we explore the spatial degrees of
freedom per orthogonal time and frequency dimension for the K user wireless
interference channel where the channel coefficients take distinct values across
frequency slots but are fixed in time. We answer five closely related
questions. First, we show that K/2 degrees of freedom can be achieved by
channel design, i.e. if the nodes are allowed to choose the best constant,
finite and nonzero channel coefficient values. Second, we show that if channel
coefficients cannot be controlled by the nodes but are selected by nature,
i.e., randomly drawn from a continuous distribution, the total number of
spatial degrees of freedom for the K user interference channel is almost surely
K/2 per orthogonal time and frequency dimension. Thus, only half the spatial
degrees of freedom are lost due to distributed processing of transmitted and
received signals on the interference channel. Third, we show that interference
alignment and zero forcing suffice to achieve all the degrees of freedom in all
cases. Fourth, we show that the degrees of freedom $D$ directly lead to an
$\mathcal{O}(1)$ capacity characterization of the form
$C(SNR)=D\log(1+SNR)+\mathcal{O}(1)$ for the multiple access channel, the
broadcast channel, the 2 user interference channel, the 2 user MIMO X channel
and the 3 user interference channel with M>1 antennas at each node. Fifth, we
characterize the degree of freedom benefits from cognitive sharing of messages
on the 3 user interference channel.
|
0707.0336
|
Pricing Options on Defaultable Stocks
|
cs.CE
|
In this note, we develop stock option price approximations for a model which
takes both the risk of default and the stochastic volatility into account. We
also let the intensity of defaults be influenced by the volatility. We show
that it might be possible to infer the risk neutral default intensity from the
stock option prices. Our option price approximation has a rich implied
volatility surface structure and fits observed implied volatilities well. Our
calibration exercise shows that an effective hazard rate from bonds issued by a
company can be used to explain the implied volatility skew of the option
prices issued by the same company.
|
0707.0421
|
The $k$-anonymity Problem is Hard
|
cs.DB cs.CC cs.DS
|
The problem of publishing personal data without giving up privacy is becoming
increasingly important. An interesting formalization recently proposed is the
k-anonymity. This approach requires that the rows in a table are clustered in
sets of size at least k and that all the rows in a cluster become the same
tuple, after the suppression of some records. The natural optimization problem,
where the goal is to minimize the number of suppressed entries, is known to be
NP-hard when the values are over a ternary alphabet, k = 3, and the rows have
unbounded length. In this paper we give a lower bound on the approximation
factor that any polynomial-time algorithm can achieve on two restrictions of
the problem, namely (i) when the record values are over a binary alphabet and
k = 3, and (ii) when the records have length at most 8 and k = 4, showing that
these restrictions of the problem are APX-hard.
|
0707.0454
|
Optimal Strategies for Gaussian Jamming in Block-Fading Channels under
Delay and Power Constraints
|
cs.IT math.IT
|
Without assuming any knowledge of the source's codebook and its output
signals, we formulate a Gaussian jamming problem in block fading channels as a
two-player zero-sum game. The outage probability is adopted as the objective
function, which the transmitter aims to minimize and the jammer aims to
maximize by selecting their power control strategies. Optimal power control
strategies for each player are obtained under both short-term and long-term
power constraints. For the latter case, we first prove the non-existence of a
Nash equilibrium, and then provide a complete solution for both maxmin and
minimax problems. Numerical results demonstrate a sharp difference between the
outage probabilities of the minimax and maxmin solutions.
|
0707.0459
|
Physical Network Coding in Two-Way Wireless Relay Channels
|
cs.IT cs.NI math.IT
|
It has recently been recognized that wireless networks represent a
fertile ground for devising communication modes based on network coding. A
particularly suitable application of the network coding arises for the two--way
relay channels, where two nodes communicate with each other assisted by a
third, relay node. Such a scenario enables application of \emph{physical
network coding}, where the network coding is either done (a) jointly with the
channel coding or (b) through physical combining of the communication flows
over the multiple access channel. In this paper we first group the existing
schemes for physical network coding into two generic schemes, termed 3--step
and 2--step scheme, respectively. We investigate the conditions for
maximization of the two--way rate for each individual scheme: (1) the
Decode--and--Forward (DF) 3--step scheme and (2) three different 2--step
schemes: Amplify--and--Forward (AF), JDF and Denoise--and--Forward (DNF). While
the DNF scheme has a potential to offer the best two--way rate, the most
interesting result of the paper is that, for some SNR configurations of the
source--relay links, JDF yields identical maximal two--way rate as the upper
bound on the rate for DNF.
|
0707.0463
|
Blind Estimation of Multiple Carrier Frequency Offsets
|
cs.IT math.IT
|
Multiple carrier-frequency offsets (CFO) arise in a distributed antenna
system, where data are transmitted simultaneously from multiple antennas. In
such systems the received signal contains multiple CFOs due to mismatch between
the local oscillators of transmitters and receiver. This results in a
time-varying rotation of the data constellation, which needs to be compensated
for at the receiver before symbol recovery. This paper proposes a new approach
for blind CFO estimation and symbol recovery. The received base-band signal is
over-sampled, and its polyphase components are used to formulate a virtual
Multiple-Input Multiple-Output (MIMO) problem. By applying blind MIMO system
estimation techniques, the system response is estimated and used to
subsequently transform the multiple CFOs estimation problem into many
independent single CFO estimation problems. Furthermore, an initial estimate of
the CFO is obtained from the phase of the MIMO system response. The Cramer-Rao
Lower bound is also derived, and the large sample performance of the proposed
estimator is compared to the bound.
|
0707.0476
|
Fractional Power Control for Decentralized Wireless Networks
|
cs.IT math.IT
|
We consider a new approach to power control in decentralized wireless
networks, termed fractional power control (FPC). Transmission power is chosen
as the current channel quality raised to an exponent -s, where s is a constant
between 0 and 1. The choices s = 1 and s = 0 correspond to the familiar cases
of channel inversion and constant power transmission, respectively. Choosing s
in (0,1) allows all intermediate policies between these two extremes to be
evaluated, and we see that usually neither extreme is ideal. We derive
closed-form approximations for the outage probability relative to a target SINR
in a decentralized (ad hoc or unlicensed) network as well as for the resulting
transmission capacity, which is the number of users/m^2 that can achieve this
SINR on average. Using these approximations, which are quite accurate over
typical system parameter values, we prove that using an exponent of 1/2
minimizes the outage probability, meaning that the inverse square root of the
channel strength is a sensible transmit power scaling for networks with a
relatively low density of interferers. We also show numerically that this
choice of s is robust to a wide range of variations in the network parameters.
Intuitively, s=1/2 balances between helping disadvantaged users while making
sure they do not flood the network with interference.
|
0707.0479
|
Precoding for the AWGN Channel with Discrete Interference
|
cs.IT math.IT
|
For a state-dependent DMC with input alphabet $\mathcal{X}$ and state
alphabet $\mathcal{S}$ where the i.i.d. state sequence is known causally at the
transmitter, it is shown that by using at most
$|\mathcal{X}||\mathcal{S}|-|\mathcal{S}|+1$ out of
$|\mathcal{X}|^{|\mathcal{S}|}$ input symbols of Shannon's
\emph{associated} channel, the capacity is achievable. As an example of
state-dependent channels with side information at the transmitter, $M$-ary
signal transmission over AWGN channel with additive $Q$-ary interference where
the sequence of i.i.d. interference symbols is known causally at the
transmitter is considered. For the special case where the Gaussian noise power
is zero, a sufficient condition, which is independent of interference, is given
for the capacity to be $\log_2 M$ bits per channel use. The problem of
maximization of the transmission rate under the constraint that the channel
input given any current interference symbol is uniformly distributed over the
channel input alphabet is investigated. For this setting, the general structure
of a communication system with optimal precoding is proposed.
|
0707.0498
|
The Role of Time in the Creation of Knowledge
|
cs.LG cs.AI cs.IT math.IT
|
In this paper I assume that in humans the creation of knowledge depends on a
discrete time, or stage, sequential decision-making process subjected to a
stochastic, information transmitting environment. For each time-stage, this
environment randomly transmits Shannon type information-packets to the
decision-maker, who examines each of them for relevancy and then determines his
optimal choices. Using this set of relevant information-packets, the
decision-maker adapts, over time, to the stochastic nature of his environment,
and optimizes the subjective expected rate-of-growth of knowledge. The
decision-maker's optimal actions lead to a decision function that involves,
over time, his view of the subjective entropy of the environmental process and
other important parameters at each time-stage of the process. Using this model
of human behavior, one could create psychometric experiments using computer
simulation and real decision-makers, to play programmed games to measure the
resulting human performance.
|
0707.0500
|
Location-Aided Fast Distributed Consensus in Wireless Networks
|
cs.IT math.IT
|
Existing works on distributed consensus explore linear iterations based on
reversible Markov chains, which contribute to the slow convergence of the
algorithms. It has been observed that by overcoming the diffusive behavior of
reversible chains, certain nonreversible chains lifted from reversible ones mix
substantially faster than the original chains. In this paper, we investigate
the idea of accelerating distributed consensus via lifting Markov chains, and
propose a class of Location-Aided Distributed Averaging (LADA) algorithms for
wireless networks, where nodes' coarse location information is used to
construct nonreversible chains that facilitate distributed computing and
cooperative processing. First, two general pseudo-algorithms are presented to
illustrate the notion of distributed averaging through chain-lifting. These
pseudo-algorithms are then respectively instantiated through one LADA algorithm
on grid networks, and one on general wireless networks. For a $k\times k$ grid
network, the proposed LADA algorithm achieves an $\epsilon$-averaging time of
$O(k\log(\epsilon^{-1}))$. Based on this algorithm, in a wireless network with
transmission range $r$, an $\epsilon$-averaging time of
$O(r^{-1}\log(\epsilon^{-1}))$ can be attained through a centralized algorithm.
Subsequently, we present a fully-distributed LADA algorithm for wireless
networks, which utilizes only the direction information of neighbors to
construct nonreversible chains. It is shown that this distributed LADA
algorithm achieves the same scaling law in averaging time as the centralized
scheme. Finally, we propose a cluster-based LADA (C-LADA) algorithm, which,
requiring no central coordination, provides the additional benefit of reduced
message complexity compared with the distributed LADA algorithm.
|
0707.0514
|
Phase space methods and psychoacoustic models in lossy transform coding
|
cs.IT cs.SD math.IT
|
I present a method for lossy transform coding of digital audio that uses the
Weyl symbol calculus for constructing the encoding and decoding transformation.
The method establishes a direct connection between a time-frequency
representation of the signal dependent threshold of masked noise and the
encode/decode pair. The formalism also offers a time-frequency measure of
perceptual entropy.
|
0707.0548
|
From Royal Road to Epistatic Road for Variable Length Evolution
Algorithm
|
cs.NE
|
Although there are some real-world applications where the use of variable
length representations (VLR) in Evolutionary Algorithms is natural and
suitable, an academic framework is lacking for such representations. In this
work we
propose a family of tunable fitness landscapes based on VLR of genotypes. The
fitness landscapes we propose possess a tunable degree of both neutrality and
epistasis; they are inspired, on the one hand, by the Royal Road fitness
landscapes and, on the other hand, by the NK fitness landscapes. These
landscapes thus offer a scale of continuity from Royal Road functions, with
neutrality and no epistasis, to landscapes with a large amount of epistasis and
no redundancy. To gain insight into these fitness landscapes, we first use
standard tools such as adaptive walks and correlation length. Second, we
evaluate the performances of evolutionary algorithms on these landscapes for
various values of the neutral and the epistatic parameters; the results allow
us to correlate the performances with the expected degrees of neutrality and
epistasis.
|
0707.0568
|
Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems
based on Game Theory-Part I: Nash Equilibria
|
cs.IT cs.GT math.IT
|
In this two-parts paper we propose a decentralized strategy, based on a
game-theoretic formulation, to find out the optimal precoding/multiplexing
matrices for a multipoint-to-multipoint communication system composed of a set
of wideband links sharing the same physical resources, i.e., time and
bandwidth. We assume, as optimality criterion, the achievement of a Nash
equilibrium and consider two alternative optimization problems: 1) the
competitive maximization of mutual information on each link, given constraints
on the transmit power and on the spectral mask imposed by the radio spectrum
regulatory bodies; and 2) the competitive maximization of the transmission
rate, using finite order constellations, under the same constraints as above,
plus a constraint on the average error probability. In Part I of the paper, we
start by showing that the solution set of both noncooperative games is always
nonempty and contains only pure strategies. Then, we prove that the optimal
precoding/multiplexing scheme for both games leads to a channel diagonalizing
structure, so that both matrix-valued problems can be recast in a simpler
unified vector power control game, with no performance penalty. Thus, we study
this simpler game and derive sufficient conditions ensuring the uniqueness of
the Nash equilibrium. Interestingly, although derived under stronger
constraints, incorporating for example spectral mask constraints, our
uniqueness conditions have broader validity than previously known conditions.
Finally, we assess the goodness of the proposed decentralized strategy by
comparing its performance with the performance of a Pareto-optimal centralized
scheme. To reach the Nash equilibria of the game, in Part II, we propose
alternative distributed algorithms, along with their convergence conditions.
|
0707.0641
|
Where are Bottlenecks in NK Fitness Landscapes?
|
cs.NE
|
Usually the offspring-parent fitness correlation is used to visualize and
analyze some characteristics of fitness landscapes, such as evolvability. In
this paper, we introduce a more general representation of this correlation, the
Fitness Cloud (FC). We use the bottleneck metaphor to emphasize fitness levels
in a landscape that cause a local search process to slow down. For a local
search heuristic such as hill-climbing or simulated annealing, the FC allows
one to visualize the bottlenecks and neutrality of landscapes. To confirm the
relevance of the FC representation, we show where the bottlenecks are in the
well-known NK fitness landscape and also how to use neutrality information from
the FC to combine a neutral operator with a local search heuristic.
|
0707.0643
|
Scuba Search : when selection meets innovation
|
cs.NE
|
We propose a new search heuristic using the scuba diving metaphor. This
approach is based on the concept of evolvability and tends to exploit
neutrality in the fitness landscape. Despite the fact that natural evolution
does not directly select for evolvability, the basic idea behind the scuba
search heuristic is to explicitly push evolvability to increase. The search
process switches between two phases: Conquest-of-the-Waters and
Invasion-of-the-Land. A comparative study of the new algorithm and standard
local search heuristics on the NKq-landscapes has shown the advantages and
limits of the scuba search. To highlight qualitative differences between
neutral search processes, the space is changed into a connected graph to
visualize the pathways that the search is likely to follow.
|
0707.0649
|
Sphere Lower Bound for Rotated Lattice Constellations in Fading Channels
|
cs.IT math.IT
|
We study the error probability performance of rotated lattice constellations
in frequency-flat Nakagami-$m$ block-fading channels. In particular, we use the
sphere lower bound on the underlying infinite lattice as a performance
benchmark. We show that the sphere lower bound has full diversity. We observe
that optimally rotated lattices with largest known minimum product distance
perform very close to the lower bound, while the ensemble of random rotations
is shown to lack diversity and perform far from it.
|
0707.0652
|
How to use the Scuba Diving metaphor to solve problem with neutrality ?
|
cs.NE
|
We propose a new search heuristic using the scuba diving metaphor. This
approach is based on the concept of evolvability and tends to exploit
neutrality, which exists in many real-world problems. Despite the fact that
natural evolution does not directly select for evolvability, the basic idea
behind the scuba search heuristic is to explicitly push evolvability to
increase. A comparative study of the scuba algorithm and standard local search
heuristics has shown the advantages and limitations of the scuba search. In
order to tune neutrality, we use the NKq fitness landscapes and a family of
travelling salesman problems (TSP) where cities are randomly placed on a
lattice and where travel distance between cities is computed with the Manhattan
metric. In this last problem the amount of neutrality varies with the city
concentration on the grid; assuming a concentration below one, this TSP
reasonably remains an NP-hard problem.
|
0707.0701
|
Clustering and Feature Selection using Sparse Principal Component
Analysis
|
cs.AI cs.LG cs.MS
|
In this paper, we study the application of sparse principal component
analysis (PCA) to clustering and feature selection problems. Sparse PCA seeks
sparse factors, or linear combinations of the data variables, explaining a
maximum amount of variance in the data while having only a limited number of
nonzero coefficients. PCA is often used as a simple clustering technique and
sparse factors allow us here to interpret the clusters in terms of a reduced
set of variables. We begin with a brief introduction and motivation on sparse
PCA and detail our implementation of the algorithm in d'Aspremont et al.
(2005). We then apply these results to some classic clustering and feature
selection problems arising in biology.
|
0707.0704
|
Model Selection Through Sparse Maximum Likelihood Estimation
|
cs.AI cs.LG
|
We consider the problem of estimating the parameters of a Gaussian or binary
distribution in such a way that the resulting undirected graphical model is
sparse. Our approach is to solve a maximum likelihood problem with an added
l_1-norm penalty term. The problem as formulated is convex but the memory
requirements and complexity of existing interior point methods are prohibitive
for problems with more than tens of nodes. We present two new algorithms for
solving problems with at least a thousand nodes in the Gaussian case. Our first
algorithm uses block coordinate descent, and can be interpreted as recursive
l_1-norm penalized regression. Our second algorithm, based on Nesterov's first
order method, yields a complexity estimate with a better dependence on problem
size than existing interior point methods. Using a log determinant relaxation
of the log partition function (Wainwright & Jordan (2006)), we show that these
same algorithms can be used to solve an approximate sparse maximum likelihood
problem for the binary case. We test our algorithms on synthetic data, as well
as on gene expression and senate voting records data.
|
0707.0705
|
Optimal Solutions for Sparse Principal Component Analysis
|
cs.AI cs.LG
|
Given a sample covariance matrix, we examine the problem of maximizing the
variance explained by a linear combination of the input variables while
constraining the number of nonzero coefficients in this combination. This is
known as sparse principal component analysis and has a wide array of
applications in machine learning and engineering. We formulate a new
semidefinite relaxation to this problem and derive a greedy algorithm that
computes a full set of good solutions for all target numbers of nonzero
coefficients, with total complexity O(n^3), where n is the number of variables.
We then use the same relaxation to derive sufficient conditions for global
optimality of a solution, which can be tested in O(n^3) per pattern. We discuss
applications in subset selection and sparse recovery and show on artificial
examples and biological data that our algorithm does provide globally optimal
solutions in many cases.
|
0707.0724
|
Workspace Analysis of the Parallel Module of the VERNE Machine
|
cs.RO physics.class-ph
|
The paper addresses geometric aspects of a spatial three-degree-of-freedom
parallel module, which is the parallel module of a hybrid serial-parallel
5-axis machine tool. This parallel module consists of a moving platform that is
connected to a fixed base by three non-identical legs. Each leg is made up of
one prismatic joint and two pairs of spherical joints, connected in such a way
that the combined effects of the three legs lead to an over-constrained
mechanism with complex motion. This motion is defined as a simultaneous
combination of rotation and translation. A method for computing the complete
workspace of the VERNE parallel module for various tool lengths is presented.
An algorithm describing this method is also introduced.
|
0707.0745
|
Semantic Information Retrieval from Distributed Heterogeneous Data
Sources
|
cs.DB
|
Information retrieval from distributed heterogeneous data sources remains a
challenging issue. As the number of data sources increases more intelligent
retrieval techniques, focusing on information content and semantics, are
required. Currently, ontologies are widely used for managing semantic
knowledge, especially in the field of bioinformatics. In this paper we describe
an ontology assisted system that allows users to query distributed
heterogeneous data sources by hiding details like location, information
structure, access pattern and semantic structure of the data. Our goal is to
provide an integrated view on biomedical information sources for the
Health-e-Child project with the aim to overcome the lack of sufficient
semantic-based reformulation techniques for querying distributed data sources.
In particular, this paper examines the problem of query reformulation across
biomedical data sources, based on merged ontologies and the underlying
heterogeneous descriptions of the respective data sources.
|
0707.0763
|
The Requirements for Ontologies in Medical Data Integration: A Case
Study
|
cs.DB
|
Evidence-based medicine is critically dependent on three sources of
information: a medical knowledge base, the patient's medical record and
knowledge of available resources, including where appropriate, clinical
protocols. Patient data is often scattered in a variety of databases and may,
in a distributed model, be held across several disparate repositories.
Consequently, addressing the needs of an evidence-based medicine community
presents issues of biomedical data integration, clinical interpretation and
knowledge management. This paper outlines how the Health-e-Child project has
approached the challenge of requirements specification for (bio-) medical data
integration, from the level of cellular data, through disease to that of
patient and population. The approach is illuminated through the requirements
elicitation and analysis of Juvenile Idiopathic Arthritis (JIA), one of three
diseases being studied in the EC-funded Health-e-Child project.
|
0707.0764
|
p-Adic Degeneracy of the Genetic Code
|
q-bio.GN cs.IT math.IT physics.bio-ph
|
Degeneracy of the genetic code is a biological way to minimize effects of the
undesirable mutation changes. Degeneration has a natural description on the
5-adic space of 64 codons $\mathcal{C}_5 (64) = \{n_0 + n_1 5 + n_2 5^2
: n_i = 1, 2, 3, 4 \} ,$ where $n_i$ are digits related to nucleotides as
follows: C = 1, A = 2, T = U = 3, G = 4. The smallest 5-adic distance between
codons joins them into 16 quadruplets, which under 2-adic distance decay into
32 doublets. p-Adically close codons are assigned to one of 20 amino acids,
which are building blocks of proteins, or code termination of protein
synthesis. We show that genetic code multiplets are made of the p-adically
nearest codons.
|
0707.0799
|
A New Family of Unitary Space-Time Codes with a Fast Parallel Sphere
Decoder Algorithm
|
cs.IT math.IT
|
In this paper we propose a new design criterion and a new class of unitary
signal constellations for differential space-time modulation for
multiple-antenna systems over Rayleigh flat-fading channels with unknown fading
coefficients. Extensive simulations show that the new codes have significantly
better performance than existing codes. We have compared the performance of our
codes with differential detection schemes using orthogonal design, Cayley
differential codes, fixed-point-free group codes and product of groups and for
the same bit error rate, our codes allow a signal-to-noise ratio smaller by as
much as 10 dB. The design of the new codes is accomplished in a systematic way
through the optimization of a performance index that closely describes the bit
error rate as a function of the signal to noise ratio. The new performance
index is computationally simple and we have derived analytical expressions for
its gradient with respect to constellation parameters. Decoding of the proposed
constellations is reduced to a set of one-dimensional closest point problems
that we solve using parallel sphere decoder algorithms. This decoding strategy
can also improve efficiency of existing codes.
|
0707.0802
|
Very fast watermarking by reversible contrast mapping
|
cs.MM cs.CR cs.CV cs.IT math.IT
|
Reversible contrast mapping (RCM) is a simple integer transform that applies
to pairs of pixels. For some pairs of pixels, RCM is invertible, even if the
least significant bits (LSBs) of the transformed pixels are lost. The data
space occupied by the LSBs is suitable for data hiding. The embedded
information bit-rates of the proposed spatial domain reversible watermarking
scheme are close to the highest bit-rates reported so far. The scheme does not
need additional data compression, and, in terms of mathematical complexity, it
appears to be the lowest complexity one proposed up to now. A very fast lookup
table implementation is proposed. Robustness against cropping can be ensured as
well.
|
0707.0805
|
A New Generalization of Chebyshev Inequality for Random Vectors
|
math.ST cs.LG math.PR stat.AP stat.TH
|
In this article, we derive a new generalization of Chebyshev inequality for
random vectors. We demonstrate that the new generalization is much less
conservative than the classical generalization.
|
0707.0808
|
The Cyborg Astrobiologist: Porting from a wearable computer to the
Astrobiology Phone-cam
|
cs.CV astro-ph cs.AI cs.CE cs.HC cs.NI cs.RO cs.SE
|
We have used a simple camera phone to significantly improve an `exploration
system' for astrobiology and geology. This camera phone will make it much
easier to develop and test computer-vision algorithms for future planetary
exploration. We envision that the `Astrobiology Phone-cam' exploration system
can be fruitfully used in other problem domains as well.
|
0707.0860
|
On the Minimum Number of Transmissions in Single-Hop Wireless Coding
Networks
|
cs.IT cs.NI math.IT
|
The advent of network coding presents promising opportunities in many areas
of communication and networking. It has been recently shown that network coding
technique can significantly increase the overall throughput of wireless
networks by taking advantage of their broadcast nature. In wireless networks,
each transmitted packet is broadcasted within a certain area and can be
overheard by the neighboring nodes. When a node needs to transmit packets, it
employs the opportunistic coding approach that uses the knowledge of what the
node's neighbors have heard in order to reduce the number of transmissions.
With this approach, each transmitted packet is a linear combination of the
original packets over a certain finite field.
In this paper, we focus on the fundamental problem of finding the optimal
encoding for the broadcasted packets that minimizes the overall number of
transmissions. We show that this problem is NP-complete over GF(2) and
establish several fundamental properties of the optimal solution. We also
propose a simple heuristic solution for the problem based on graph coloring and
present some empirical results for random settings.
|
0707.0871
|
Optimal Linear Precoding Strategies for Wideband Non-Cooperative Systems
based on Game Theory-Part II: Algorithms
|
cs.IT cs.GT math.IT
|
In this two-part paper, we address the problem of finding the optimal
precoding/multiplexing scheme for a set of non-cooperative links sharing the
same physical resources, e.g., time and bandwidth. We consider two alternative
optimization problems: P.1) the maximization of mutual information on each
link, given constraints on the transmit power and spectral mask; and P.2) the
maximization of the transmission rate on each link, using finite order
constellations, under the same constraints as in P.1, plus a constraint on the
maximum average error probability on each link. Aiming at finding decentralized
strategies, we adopted as optimality criterion the achievement of a Nash
equilibrium and thus we formulated both problems P.1 and P.2 as strategic
noncooperative (matrix-valued) games. In Part I of this two-part paper, after
deriving the optimal structure of the linear transceivers for both games, we
provided a unified set of sufficient conditions that guarantee the uniqueness
of the Nash equilibrium. In this Part II, we focus on the achievement of the
equilibrium and propose alternative distributed iterative algorithms that solve
both games. Specifically, the new proposed algorithms are the following: 1) the
sequential and simultaneous iterative waterfilling based algorithms,
incorporating spectral mask constraints; 2) the sequential and simultaneous
gradient projection based algorithms, establishing an interesting link with
variational inequality problems. Our main contribution is to provide sufficient
conditions for the global convergence of all the proposed algorithms which,
although derived under stronger constraints, incorporating for example spectral
mask constraints, have a broader validity than the convergence conditions known
in the current literature for the sequential iterative waterfilling algorithm.
|
0707.0878
|
Risk Analysis in Robust Control -- Making the Case for Probabilistic
Robust Control
|
math.OC cs.SY math.ST stat.TH
|
This paper offers a critical view of the "worst-case" approach that is the
cornerstone of robust control design. It is our contention that a blind
acceptance of worst-case scenarios may lead to designs that are actually more
dangerous than designs based on probabilistic techniques with a built-in risk
factor. The real issue is one of modeling. If one accepts that no mathematical
model of uncertainties is perfect then a probabilistic approach can lead to
more reliable control even if it cannot guarantee stability for all possible
cases. Our presentation is based on case analysis. We first establish that
worst-case is not necessarily "all-encompassing." In fact, we show that for
some uncertain control problems to have a conventional robust control solution
it is necessary to make assumptions that leave out some feasible cases. Once we
establish that point, we argue that it is not uncommon for the risk of
unaccounted cases in worst-case design to be greater than that of the accepted
risk in a probabilistic approach. With an example, we quantify the risks and
show that worst-case can be significantly more risky. Finally, we join our
analysis with existing results on computational complexity and probabilistic
robustness to argue that the deterministic worst-case analysis is not
necessarily the better tool.
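The risk comparison this abstract describes can be illustrated with a toy Monte Carlo computation; the second-order system and the Gaussian uncertainty below are hypothetical, chosen only to show how a probabilistic robustness level is estimated when the worst case over an unbounded uncertainty support is unstable.

```python
import numpy as np

def is_stable(a):
    # Characteristic polynomial s^2 + a s + 1 (toy example):
    # Hurwitz-stable exactly when a > 0.
    return a > 0

def prob_stability(sampler, n=20000, seed=0):
    # Monte Carlo estimate of the probability that a sampled
    # uncertainty realization leaves the closed loop stable.
    rng = np.random.default_rng(seed)
    return sum(is_stable(sampler(rng)) for _ in range(n)) / n

# Uncertain damping a ~ N(1, 0.3): the worst case over the support is
# unstable (a can be negative), yet instability has probability ~4e-4.
p = prob_stability(lambda rng: rng.normal(1.0, 0.3))
```

A worst-case design over the full support is infeasible here, while the probabilistic view certifies stability with a small, quantified risk.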
|
0707.0895
|
Segmentation and Context of Literary and Musical Sequences
|
cs.CL physics.data-an
|
We apply a segmentation algorithm, based on the calculation of the
Jensen-Shannon divergence between probability distributions, to two symbolic
sequences of literary and musical origin. The first sequence represents the
successive appearance of characters in a theatrical play, and the second
represents the succession of tones from the twelve-tone scale in a keyboard
sonata. The algorithm divides the sequences into segments of maximal
compositional divergence between them. For the play, these segments are related
to changes in the frequency of appearance of different characters and in the
geographical setting of the action. For the sonata, the segments correspond to
tonal domains and reveal in detail the characteristic tonal progression of this
kind of musical composition.
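A minimal version of the divergence computation at the core of this method can be sketched as follows; the single-split search is an assumption (such splits are applied recursively in the full scheme), and all names are illustrative.

```python
import math
from collections import Counter

def js_divergence(p, q):
    # Jensen-Shannon divergence (in bits) between two discrete
    # distributions given as {symbol: probability} dicts.
    support = set(p) | set(q)
    m = {s: 0.5 * (p.get(s, 0.0) + q.get(s, 0.0)) for s in support}
    def kl(a):
        return sum(a[s] * math.log2(a[s] / m[s])
                   for s in support if a.get(s, 0.0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def freqs(seq):
    # Empirical symbol distribution of a sequence.
    return {s: k / len(seq) for s, k in Counter(seq).items()}

def best_split(seq, min_len=2):
    # One level of segmentation: the cut maximizing the JS divergence
    # between the symbol distributions of the two resulting segments.
    best_i, best_d = None, -1.0
    for i in range(min_len, len(seq) - min_len + 1):
        d = js_divergence(freqs(seq[:i]), freqs(seq[i:]))
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d

best_split("AAAAABBBBB")  # -> (5, 1.0): disjoint alphabets give 1 bit
```

In the literary and musical applications, the symbols would be characters of the play or tones of the twelve-tone scale.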
|
0707.0909
|
Spectrum Sensing in Cognitive Radios Based on Multiple Cyclic
Frequencies
|
cs.IT math.IT
|
Cognitive radios sense the radio spectrum in order to find unused frequency
bands and use them in an agile manner. Transmission by the primary user must be
detected reliably even in the low signal-to-noise ratio (SNR) regime and in the
face of shadowing and fading. Communication signals are typically
cyclostationary, with many periodic statistical properties related to, for
example, the symbol rate, the coding and modulation schemes, and the guard
periods. These properties can be exploited in designing a detector, and for
distinguishing between the primary and secondary users' signals. In this paper,
a generalized likelihood ratio test (GLRT) for detecting the presence of
cyclostationarity using multiple cyclic frequencies is proposed. Distributed
decision making is employed by combining the quantized local test statistics
from many secondary users. User cooperation allows for mitigating the effects
of shadowing and provides a larger footprint for the cognitive radio system.
Simulation examples demonstrate the resulting performance gains in the low SNR
regime and the benefits of cooperative detection.
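The cyclostationary features such a detector exploits can be illustrated by estimating cyclic autocorrelations at candidate cyclic frequencies. This is a simplified stand-in for the GLRT statistic in the abstract (no quantization or multi-user combining), and all names, the chosen lags, and the toy rectangular-pulse signal are assumptions.

```python
import numpy as np

def cyclic_autocorr(x, alpha, lag=0):
    # Estimate the cyclic autocorrelation R_x^alpha(lag) at cyclic
    # frequency alpha (in cycles per sample).
    n = np.arange(len(x) - lag)
    prod = x[lag:] * np.conj(x[:len(x) - lag])
    return np.mean(prod * np.exp(-2j * np.pi * alpha * n))

def multi_cycle_statistic(x, alphas, lags=(1,)):
    # Sum of squared cyclic-correlation magnitudes over several cyclic
    # frequencies and lags -- large when the signal is cyclostationary
    # at those frequencies, near zero for stationary noise.
    return sum(abs(cyclic_autocorr(x, a, l)) ** 2
               for a in alphas for l in lags)
```

For a rectangular-pulse signal with 4 samples per symbol, the statistic at cyclic frequency 1/4 (the symbol rate) is markedly larger than for white noise, which is what enables detection in the low-SNR regime.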
|
0707.0969
|
Resource Allocation for Wireless Fading Relay Channels: Max-Min Solution
|
cs.IT math.IT
|
As a basic information-theoretic model for fading relay channels, the
parallel relay channel is first studied, for which lower and upper bounds on
the capacity are derived. For the parallel relay channel with degraded
subchannels, the capacity is established, and is further demonstrated via the
Gaussian case, for which the synchronized and asynchronized capacities are
obtained. The capacity achieving power allocation at the source and relay nodes
among the subchannels is characterized. The fading relay channel is then
studied, for which resource allocations that maximize the achievable rates are
obtained for both the full-duplex and half-duplex cases. Capacities are
established for fading relay channels that satisfy certain conditions.
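The max-min flavor of the resource allocation can be illustrated with a toy two-hop bottleneck: split a total power budget between source and relay so the weaker of the two hop rates is maximized. The decode-and-forward-style model and all names are assumptions, not the paper's channel model.

```python
import numpy as np

def max_min_split(g_sr, g_rd, p_total):
    # Maximize min( log2(1 + g_sr * p_s), log2(1 + g_rd * p_r) )
    # subject to p_s + p_r = p_total. One rate increases and the other
    # decreases in p_s, so the optimum equalizes the two hop SNRs:
    # g_sr * p_s = g_rd * p_r.
    p_s = p_total * g_rd / (g_sr + g_rd)
    p_r = p_total - p_s
    return p_s, p_r, np.log2(1.0 + g_sr * p_s)
```

Giving more power to the weaker hop until the two rates meet is the same equalization principle that underlies max-min allocations across subchannels.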
|