| id | title | categories | abstract |
|---|---|---|---|
0812.2726
|
Universal Behavior in Large-scale Aggregation of Independent Noisy
Observations
|
cs.IT math.IT
|
Aggregation of noisy observations involves a difficult tradeoff between
observation quality, which can be increased by increasing the number of
observations, and aggregation quality, which decreases if the number of
observations is too large. We clarify this behavior for a prototypical system in
which arbitrarily large numbers of observations exceeding the system capacity
can be aggregated using lossy data compression. We show the existence of a
scaling relation between the collective error and the system capacity, and show
that large scale lossy aggregation can outperform lossless aggregation above a
critical level of observation noise. Further, we show that universal results
for the scaling and the critical noise value, which are independent of system
capacity, can be obtained by considering the asymptotic behavior as the system
capacity increases toward infinity.
|
0812.2785
|
Prediction of Platinum Prices Using Dynamically Weighted Mixture of
Experts
|
cs.AI
|
Neural networks are powerful tools for classification and regression in
static environments. This paper describes a technique for creating an ensemble
of neural networks that adapts dynamically to changing conditions. The model
separates the input space into four regions and each network is given a weight
in each region based on its performance on samples from that region. The
ensemble adapts dynamically by constantly adjusting these weights based on the
current performance of the networks. The data set used is a collection of
financial indicators with the goal of predicting the future platinum price. An
ensemble with no weightings does not improve on the naive estimate of no weekly
change; our weighting algorithm gives an average percentage error of 63% for
twenty weeks of prediction.
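The regional weighting scheme described above might be sketched as follows; the experts, the region partition, and the exponential weight-decay rule are all assumptions for illustration, not the paper's exact update.

```python
import numpy as np

class RegionalEnsemble:
    """Dynamically weighted mixture of experts (sketch).

    Hypothetical simplification of the paper's scheme: the input space is
    split into regions, each expert holds one weight per region, and weights
    are decayed according to each expert's recent error in that region.
    """

    def __init__(self, experts, n_regions, region_of, lr=0.5):
        self.experts = experts          # list of callables x -> prediction
        self.region_of = region_of      # callable x -> region index
        self.w = np.ones((len(experts), n_regions))
        self.lr = lr

    def predict(self, x):
        r = self.region_of(x)
        preds = np.array([e(x) for e in self.experts])
        w = self.w[:, r] / self.w[:, r].sum()   # normalize within the region
        return float(w @ preds)

    def update(self, x, target):
        # Exponentially down-weight experts in proportion to their error
        # in the region this sample fell into.
        r = self.region_of(x)
        for i, e in enumerate(self.experts):
            self.w[i, r] *= np.exp(-self.lr * abs(e(x) - target))
```

Normalizing weights per region lets an expert dominate where it is locally accurate while remaining down-weighted elsewhere.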
|
0812.2874
|
A Data Model for Integrating Heterogeneous Medical Data in the
Health-e-Child Project
|
cs.DB
|
There has been much research activity in recent years on providing the
data infrastructures needed for the provision of personalised healthcare. In
particular the requirement of integrating multiple, potentially distributed,
heterogeneous data sources in the medical domain for the use of clinicians has
set challenging goals for the healthgrid community. The approach advocated in
this paper surrounds the provision of an Integrated Data Model plus links
to/from ontologies to homogenize biomedical (from genomic, through cellular,
disease, patient and population-related) data in the context of the EC
Framework 6 Health-e-Child project. Clinical requirements are identified, the
design approach in constructing the model is detailed and the integrated model
described in the context of examples taken from that project. Pointers are
given to future work relating the model to medical ontologies and challenges to
the use of fully integrated models and ontologies are identified.
|
0812.2879
|
Ontology Assisted Query Reformulation Using Semantic and Assertion
Capabilities of OWL-DL Ontologies
|
cs.DB
|
End users of recent biomedical information systems are often unaware of the
storage structure and access mechanisms of the underlying data sources and can
require simplified mechanisms for writing domain specific complex queries. This
research aims to assist users and their applications in formulating queries
without requiring complete knowledge of the information structure of underlying
data sources. To achieve this, query reformulation techniques and algorithms
have been developed that can interpret ontology-based search criteria and
associated domain knowledge in order to reformulate a relational query. These
query reformulation algorithms exploit the semantic relationships and assertion
capabilities of OWL-DL based domain ontologies for query reformulation. In this
paper, this approach is applied to the integrated database schema of the EU
funded Health-e-Child (HeC) project with the aim of providing ontology assisted
query reformulation techniques to simplify the global access that is needed to
millions of medical records across the UK and Europe.
|
0812.2892
|
Sparse Component Analysis (SCA) in Random-valued and Salt and Pepper
Noise Removal
|
cs.CV
|
In this paper, we propose a new method for impulse noise removal from images.
It uses the sparsity of images in the Discrete Cosine Transform (DCT) domain.
The zeros in this domain give us the exact mathematical equation to reconstruct
the pixels that are corrupted by random-valued impulse noise. The proposed
method can also detect and correct the corrupted pixels. Moreover, in the simpler
case where salt-and-pepper noise occupies only the brightest and darkest pixel
values in the image, we propose a simpler version of our method. In addition to the proposed
method, we suggest a combination of the traditional median filter method with
our method to yield better results when the percentage of the corrupted samples
is high.
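For the simpler salt-and-pepper case, a hedged sketch of the detect-and-correct idea combined with a median filter; the 3x3 window and the restriction to uncorrupted neighbors are assumptions, not the paper's exact procedure:

```python
import numpy as np

def remove_salt_pepper(img):
    # Assume salt-and-pepper noise occupies the extreme intensities (0 and 255),
    # as in the paper's simpler case; only those pixels are treated as corrupted.
    noisy = (img == 0) | (img == 255)
    out = img.astype(float).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(noisy)):
        # Replace each detected pixel with the median of its clean neighbors.
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        window = img[y0:y1, x0:x1]
        clean = window[(window != 0) & (window != 255)]
        if clean.size:
            out[y, x] = np.median(clean)
    return out.astype(img.dtype)
```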
|
0812.2926
|
New parallel programming language design: a bridge between brain models
and multi-core/many-core computers?
|
cs.PL cs.AI
|
The recurrent theme of this paper is that sequences of long temporal patterns,
as opposed to sequences of simple statements, are to be fed into computation
devices, be they (newly proposed) models for brain activity or
multi-core/many-core computers. In such models, parts of these long temporal
patterns are already committed while others are predicted. This combination of
matching patterns and making predictions appears as a key element in producing
intelligent processing in brain models and getting efficient speculative
execution on multi-core/many-core computers. A bridge between these far-apart
models of computation could be provided by appropriate design of massively
parallel, interactive programming languages. Agapia is a recently proposed
language of this kind, where user controlled long high-level temporal
structures occur at the interaction interfaces of processes. In this paper
Agapia is used to link HTMs brain models with TRIPS multi-core/many-core
architectures.
|
0812.2969
|
A Growing Self-Organizing Network for Reconstructing Curves and Surfaces
|
cs.NE cs.AI
|
Self-organizing networks such as Neural Gas, Growing Neural Gas and many
others have been adopted in actual applications for both dimensionality
reduction and manifold learning. Typically, in these applications, the
structure of the adapted network yields a good estimate of the topology of the
unknown subspace from where the input data points are sampled. The approach
presented here takes a different perspective, namely by assuming that the input
space is a manifold of known dimension. In return, the new type of growing
self-organizing network presented gains the ability to adapt itself in a way that
may guarantee the effective and stable recovery of the exact topological
structure of the input manifold.
|
0812.2971
|
Cyclotomic FFT of Length 2047 Based on a Novel 11-point Cyclic
Convolution
|
cs.IT math.IT
|
In this manuscript, we propose a novel 11-point cyclic convolution algorithm
based on alternate Fourier transform. With the proposed bilinear form, we
construct a length-2047 cyclotomic FFT.
|
0812.2991
|
Analyse et structuration automatique des guides de bonnes pratiques
cliniques : essai d'évaluation
|
cs.AI
|
Health practice guidelines are supposed to unify practices and propose
recommendations to physicians. This paper describes GemFrame, a system capable
of semi-automatically filling an XML template from free texts in the clinical
domain. The XML template includes semantic information not explicitly encoded
in the text (pairs of conditions and actions/recommendations). Therefore,
there is a need to compute the exact scope of conditions over text sequences
expressing the required actions. We present a system developed for this task.
We show that it yields good performance when applied to the analysis of French
practice guidelines. We conclude with a precise evaluation of the tool.
|
0812.3066
|
Beyond Bandlimited Sampling: Nonlinearities, Smoothness and Sparsity
|
cs.IT math.IT
|
Sampling theory has benefited from a surge of research in recent years, due
in part to the intense research in wavelet theory and the connections made
between the two fields. In this survey we present several extensions of the
Shannon theorem, that have been developed primarily in the past two decades,
which treat a wide class of input signals as well as nonideal sampling and
nonlinear distortions. This framework is based on viewing sampling in a broader
sense of projection onto appropriate subspaces, and then choosing the subspaces
to yield interesting new possibilities. For example, our results can be used to
uniformly sample non-bandlimited signals, and to perfectly compensate for
nonlinear effects.
|
0812.3070
|
A Computational Model to Disentangle Semantic Information Embedded in
Word Association Norms
|
cs.CL cs.AI physics.data-an physics.soc-ph
|
Two well-known databases of semantic relationships between pairs of words
used in psycholinguistics, feature-based and association-based, are studied as
complex networks. We propose an algorithm to disentangle feature based
relationships from free association semantic networks. The algorithm uses the
rich topology of the free association semantic network to produce a new set of
relationships between words similar to those observed in feature production
norms.
|
0812.3120
|
Mode Switching for MIMO Broadcast Channel Based on Delay and Channel
Quantization
|
cs.IT math.IT
|
Imperfect channel state information degrades the performance of
multiple-input multiple-output (MIMO) communications; its effects on single-user
(SU) and multi-user (MU) MIMO transmissions are quite different. In particular,
MU-MIMO suffers from residual inter-user interference due to imperfect channel
state information while SU-MIMO only suffers from a power loss. This paper
compares the throughput loss of both SU and MU MIMO on the downlink due to
delay and channel quantization. Accurate closed-form approximations are derived
for the achievable rates for both SU and MU MIMO. It is shown that SU-MIMO is
relatively robust to delayed and quantized channel information, while MU MIMO
with zero-forcing precoding loses spatial multiplexing gain with a fixed delay
or fixed codebook size. Based on derived achievable rates, a mode switching
algorithm is proposed that switches between SU and MU MIMO modes to improve the
spectral efficiency, based on the average signal-to-noise ratio (SNR), the
normalized Doppler frequency, and the channel quantization codebook size. The
operating regions for SU and MU modes with different delays and codebook sizes
are determined, which can be used to select the preferred mode. It is shown
that the MU mode is active only when the normalized Doppler frequency is very
small and the codebook size is large.
|
0812.3124
|
Achievable Throughput of Multi-mode Multiuser MIMO with Imperfect CSI
Constraints
|
cs.IT math.IT
|
For the multiple-input multiple-output (MIMO) broadcast channel with
imperfect channel state information (CSI), neither the capacity nor the optimal
transmission technique has been fully characterized. In this paper, we derive
achievable ergodic rates for a MIMO fading broadcast channel when CSI is
delayed and quantized. It is shown that we should not support too many users
with spatial division multiplexing due to the residual inter-user interference
caused by imperfect CSI. Based on the derived achievable rates, we propose a
multi-mode transmission strategy to maximize the throughput, which adaptively
adjusts the number of active users based on the channel statistics information.
|
0812.3145
|
Binary Classification Based on Potentials
|
cs.LG
|
We introduce a simple and computationally trivial method for binary
classification based on the evaluation of potential functions. We demonstrate
that despite the conceptual and computational simplicity of the method its
performance can match or exceed that of standard Support Vector Machine
methods.
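A minimal sketch of a potential-function classifier; the Gaussian potential and the parameter `gamma` are assumptions, since the abstract does not specify the potential used:

```python
import numpy as np

def potential_classify(X_train, y_train, X_test, gamma=1.0):
    # Each training point emits a potential (Gaussian assumed here); a test
    # point is assigned to the class with the larger accumulated potential.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    pos = K[:, y_train == 1].sum(axis=1)
    neg = K[:, y_train == -1].sum(axis=1)
    return np.where(pos >= neg, 1, -1)
```

Evaluating the potentials requires no training step at all, which matches the abstract's claim of computational triviality.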
|
0812.3147
|
Comparison of Binary Classification Based on Signed Distance Functions
with Support Vector Machines
|
cs.LG cs.CG
|
We investigate the performance of a simple signed distance function (SDF)
based method by direct comparison with standard SVM packages, as well as
K-nearest neighbor and RBFN methods. We present experimental results comparing
the SDF approach with other classifiers on both synthetic geometric problems
and five benchmark clinical microarray data sets. On both geometric problems
and microarray data sets, the non-optimized SDF based classifiers perform just
as well or slightly better than well-developed, standard SVM methods. These
results demonstrate the potential accuracy of SDF-based methods on some types
of problems.
|
0812.3226
|
BiopSym: a simulator for enhanced learning of ultrasound-guided prostate
biopsy
|
cs.RO
|
This paper describes a simulator of ultrasound-guided prostate biopsies for
cancer diagnosis. When performing biopsy series, the clinician has to move the
ultrasound probe and to mentally integrate the real-time bi-dimensional images
into a three-dimensional (3D) representation of the anatomical environment.
Such a 3D representation is necessary to sample the prostate regularly in order
to maximize the probability of detecting a cancer, if any. To make the training
of young physicians easier and faster, we developed a simulator that combines
images computed from recorded three-dimensional ultrasound data with haptic
feedback. The paper presents the first version of this simulator.
|
0812.3232
|
Maximum Sum-Rate of MIMO Multiuser Scheduling with Linear Receivers
|
cs.IT math.IT
|
We analyze scheduling algorithms for multiuser communication systems with
users having multiple antennas and linear receivers. When there is no feedback
of channel information, we consider a common round robin scheduling algorithm,
and derive new exact and high signal-to-noise ratio (SNR) maximum sum-rate
results for the maximum ratio combining (MRC) and minimum mean squared error
(MMSE) receivers. We also present new analysis of MRC, zero forcing (ZF) and
MMSE receivers in the low SNR regime. When there are limited feedback
capabilities in the system, we consider a common practical scheduling scheme
based on signal-to-interference-and-noise ratio (SINR) feedback at the
transmitter. We derive new accurate approximations for the maximum sum-rate,
for the cases of MRC, ZF and MMSE receivers. We also derive maximum sum-rate
scaling laws, which reveal that the maximum sum-rate of all three linear
receivers converge to the same value for a large number of users, but at
different rates.
|
0812.3285
|
On Successive Refinement for the Kaspi/Heegard-Berger Problem
|
cs.IT math.IT
|
Consider a source that produces independent copies of a triplet of jointly
distributed random variables, $\{X_{i},Y_{i},Z_{i}\}_{i=1}^{\infty}$. The
process $\{X_{i}\}$ is observed at the encoder, and is supposed to be
reproduced at two decoders, where $\{Y_{i}\}$ and $\{Z_{i}\}$ are observed, in
either a causal or non-causal manner. The communication between the encoder and
the decoders is carried in two successive stages. In the first stage, the
transmission is available to both decoders and the source is reconstructed
according to the received bit-stream and the individual side information (SI).
In the second stage, additional information is sent to both decoders and the
source reconstructions are refined according to the transmissions at both
stages and the available SI. It is desired to find the necessary and sufficient
conditions on the communication rates between the encoder and decoders, so that
the distortions incurred (at each stage) will not exceed given thresholds. For
the case of non-degraded causal SI at the decoders, an exact single-letter
characterization of the achievable region is derived for the case of pure
source-coding. Then, for the case of communication carried over independent
DMS's with random states known causally/non-causally at the encoder and with
causal SI about the source at the decoders, a single-letter characterization of
all achievable distortion in both stages is provided and it is shown that the
separation theorem holds. Finally, for non-causal degraded SI, inner and outer
bounds to the achievable rate-distortion region are derived. These bounds are
shown to be tight for certain cases of reconstruction requirements at the
decoders, thereby shedding some light on the problem of successive refinement
with non-degraded SI at the decoders.
|
0812.3306
|
Worst-Case Optimal Adaptive Prefix Coding
|
cs.IT math.IT
|
A common complaint about adaptive prefix coding is that it is much slower
than static prefix coding. Karpinski and Nekrich recently took an important
step towards resolving this: they gave an adaptive Shannon coding algorithm
that encodes each character in $O(1)$ amortized time and decodes it in
$O(\log H)$ amortized time, where $H$ is the empirical entropy of the input
string $s$. For comparison, Gagie's adaptive Shannon coder and both Knuth's and
Vitter's adaptive Huffman coders all use $\Theta(H)$ amortized time for each
character. In this paper we give an adaptive Shannon coder that both encodes
and decodes each character in $O(1)$ worst-case time. As with both previous
adaptive Shannon coders, we store $s$ in at most $(H + 1)|s| + o(|s|)$ bits.
We also show that this encoding length is worst-case optimal up to the lower
order term.
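For context, the static Shannon code assigns each character a codeword of length ceil(log2(1/p)); an adaptive coder maintains such a code as the empirical frequencies evolve. This sketch shows only the static length computation, not the adaptive data structures:

```python
import math
from collections import Counter

def shannon_code_lengths(s):
    # Static Shannon code lengths ceil(log2(1/p_c)) per character c.
    # Summed over s, the total is at most (H + 1)|s| bits, matching the
    # encoding-length bound quoted above.
    n = len(s)
    return {c: math.ceil(math.log2(n / k)) for c, k in Counter(s).items()}
```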
|
0812.3404
|
Diversity-Multiplexing Tradeoff for the MIMO Static Half-Duplex Relay
|
cs.IT math.IT
|
In this work, we investigate the diversity-multiplexing tradeoff (DMT) of the
multiple-antenna (MIMO) static half-duplex relay channel. A general expression
is derived for the DMT upper bound, which can be achieved by a
compress-and-forward protocol at the relay, under certain assumptions. The DMT
expression is given as the solution of a minimization problem in general, and
an explicit expression is found when the relay channel is symmetric in terms of
number of antennas, i.e. the source and the destination have n antennas each,
and the relay has m antennas. It is observed that the static half-duplex DMT
matches the full-duplex DMT when the relay has a single antenna, and is
strictly below the full-duplex DMT when the relay has multiple antennas.
Besides, the derivation of the upper bound involves a new asymptotic study of
spherical integrals (that is, integrals with respect to the Haar measure on the
unitary group U(n)), which is a topic of mathematical interest in itself.
|
0812.3429
|
Quantum Predictive Learning and Communication Complexity with Single
Input
|
quant-ph cs.LG
|
We define a new model of quantum learning that we call Predictive Quantum
(PQ). This is a quantum analogue of PAC, where during the testing phase the
student is only required to answer a polynomial number of testing queries.
We demonstrate a relational concept class that is efficiently learnable in
PQ, while in any "reasonable" classical model an exponential amount of training
data would be required. This is the first unconditional separation between
quantum and classical learning.
We show that our separation is the best possible in several ways; in
particular, there is no analogous result for a functional class, as well as for
several weaker versions of quantum learning. In order to demonstrate tightness
of our separation we consider a special case of one-way communication that we
call single-input mode, where Bob receives no input. Somewhat surprisingly,
this setting becomes nontrivial when relational communication tasks are
considered. In particular, any problem with two-sided input can be transformed
into a single-input relational problem of equal classical one-way cost. We show
that the situation is different in the quantum case, where the same
transformation can make the communication complexity exponentially larger. This
happens if and only if the original problem has exponential gap between quantum
and classical one-way communication costs. We believe that these auxiliary
results might be of independent interest.
|
0812.3447
|
Completion Time Minimization and Robust Power Control in Wireless Packet
Networks
|
cs.IT math.IT
|
A wireless packet network is considered in which each user transmits a stream
of packets to its destination. The transmit power of each user interferes with
the transmission of all other users. A convex cost function of the completion
times of the user packets is minimized by optimally allocating the users'
transmission power subject to their respective power constraints. At all ranges
of SINR, completion time minimization can be formulated as a convex
optimization problem and hence can be efficiently solved. In particular,
although the feasible rate region of the wireless network is non-convex, its
corresponding completion time region is shown to be convex. When channel
knowledge is imperfect, robust power control is considered based on the channel
fading distribution subject to outage probability constraints. The problem is
shown to be convex when the fading distribution is log-concave in exponentiated
channel power gains; e.g., when each user is under independent Rayleigh,
Nakagami, or log-normal fading. Applying the optimization frameworks in a
wireless cellular network, the average completion time is significantly reduced
as compared to full power transmission.
|
0812.3465
|
Linearly Parameterized Bandits
|
cs.LG
|
We consider bandit problems involving a large (possibly infinite) collection
of arms, in which the expected reward of each arm is a linear function of an
$r$-dimensional random vector $\mathbf{Z} \in \mathbb{R}^r$, where $r \geq 2$.
The objective is to minimize the cumulative regret and Bayes risk. When the set
of arms corresponds to the unit sphere, we prove that the regret and Bayes risk
are of order $\Theta(r \sqrt{T})$, by establishing a lower bound for an
arbitrary policy, and showing that a matching upper bound is obtained through a
policy that alternates between exploration and exploitation phases. The
phase-based policy is also shown to be effective if the set of arms satisfies a
strong convexity condition. For the case of a general set of arms, we describe
a near-optimal policy whose regret and Bayes risk admit upper bounds of the
form $O(r \sqrt{T} \log^{3/2} T)$.
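An explore-then-commit simplification of the phase-based policy (the paper alternates exploration and exploitation phases; the single exploration phase, the noise level, and the linear reward model here are assumptions for illustration):

```python
import numpy as np

def explore_then_commit(arms, z, T):
    # Sketch of the phase-based idea: first pull each arm once to estimate
    # Z by least squares, then pull the arm that looks best under the
    # estimate. Rewards are assumed to be u @ z plus small Gaussian noise.
    rng = np.random.default_rng(0)
    X, y, total = [], [], 0.0
    for t in range(T):
        if t < len(arms):
            u = arms[t]                                   # exploration
        else:
            z_hat, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)
            u = max(arms, key=lambda a: float(a @ z_hat))  # exploitation
        reward = float(u @ z) + 0.01 * rng.standard_normal()
        X.append(u)
        y.append(reward)
        total += reward
    return total
```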
|
0812.3478
|
Automatic Construction of Lightweight Domain Ontologies for Chemical
Engineering Risk Management
|
cs.AI
|
The need for domain ontologies in mission critical applications such as risk
management and hazard identification is becoming more and more pressing. Most
research on ontology learning conducted in the academia remains unrealistic for
real-world applications. One of the main problems is the dependence on
non-incremental, rare knowledge and textual resources, and manually-crafted
patterns and rules. This paper reports work in progress aiming to address such
undesirable dependencies during ontology construction. Initial experiments
using a working prototype of the system revealed promising potentials in
automatically constructing high-quality domain ontologies using real-world
texts.
|
0812.3550
|
XML Static Analyzer User Manual
|
cs.PL cs.DB cs.LO cs.SE
|
This document describes how to use the XML static analyzer in practice. It
provides informal documentation for using the XML reasoning solver
implementation. The solver allows automated verification of properties that are
expressed as logical formulas over trees. A logical formula may for instance
express structural constraints or navigation properties (like e.g. path
existence and node selection) in finite trees. Logical formulas can be
expressed using the syntax of XPath expressions, DTD, XML Schemas, and Relax NG
definitions.
|
0812.3632
|
Optimal detection of homogeneous segment of observations in stochastic
sequence
|
math.PR cs.IT math.IT math.ST stat.TH
|
A Markov process is observed. At a random moment $\theta$ the distribution of
the observed sequence changes. Using the probability maximizing approach, the
optimal stopping rule for detecting the change is identified. An explicit
solution is obtained in some cases.
|
0812.3642
|
MIMO Two-way Relay Channel: Diversity-Multiplexing Tradeoff Analysis
|
cs.IT math.IT
|
A multi-hop two-way relay channel is considered in which all the terminals
are equipped with multiple antennas. Assuming independent quasi-static Rayleigh
fading channels and channel state information available at the receivers, we
characterize the optimal diversity-multiplexing gain tradeoff (DMT) curve for a
full-duplex relay terminal. It is shown that the optimal DMT can be achieved by
a compress-and-forward type relaying strategy in which the relay quantizes its
received signal and transmits the corresponding channel codeword. It is
noteworthy that, with this transmission protocol, the two transmissions in
opposite directions can achieve their respective single user optimal DMT
performances simultaneously, despite the interference they cause to each other.
Motivated by the optimality of this scheme in the case of the two-way relay
channel, a novel dynamic compress-and-forward (DCF) protocol is proposed for
the one-way multi-hop MIMO relay channel for a half-duplex relay terminal, and
this scheme is shown to achieve the optimal DMT performance.
|
0812.3648
|
A New Method for Knowledge Representation in Expert System's (XMLKR)
|
cs.DC cs.AI
|
Knowledge representation is an essential part of an expert system, because it
provides the framework on which the system is built and through which its
knowledge is modeled and used. Many methods exist for knowledge
representation, but each has its problems. In this paper we introduce XMLKR, a
new object-oriented method for knowledge representation based on the XML
language, and we discuss the advantages and disadvantages of this method.
|
0812.3709
|
Minimum Expected Distortion in Gaussian Source Coding with Fading Side
Information
|
cs.IT math.IT
|
An encoder, subject to a rate constraint, wishes to describe a Gaussian
source under squared error distortion. The decoder, besides receiving the
encoder's description, also observes side information consisting of
the uncompressed source symbols subject to slow fading and noise. The decoder knows
the fading realization but the encoder knows only its distribution. The
rate-distortion function that simultaneously satisfies the distortion
constraints for all fading states was derived by Heegard and Berger. A layered
encoding strategy is considered in which each codeword layer targets a given
fading state. When the side-information channel has two discrete fading states,
the expected distortion is minimized by optimally allocating the encoding rate
between the two codeword layers. For multiple fading states, the minimum
expected distortion is formulated as the solution of a convex optimization
problem with linearly many variables and constraints. Through a limiting
process on the primal and dual solutions, it is shown that single-layer rate
allocation is optimal when the fading probability density function is
continuous and quasiconcave (e.g., Rayleigh, Rician, Nakagami, and log-normal).
In particular, under Rayleigh fading, the optimal single codeword layer targets
the least favorable state as if the side information was absent.
|
0812.3715
|
Business processes integration and performance indicators in a PLM
|
cs.DB
|
In an increasingly competitive economic environment, the effective
management of information and knowledge is a strategic issue for industrial
enterprises. In the global marketplace, companies must use reactive strategies
and reduce their product development cycles. In this context, PLM (Product
Lifecycle Management) is considered a key component of the information
system. The aim of this paper is to present an approach for integrating business
processes in a PLM system. This approach is implemented in the automotive sector
with a second-tier subcontractor.
|
0812.3742
|
Quickest Change Detection of a Markov Process Across a Sensor Array
|
cs.IT math.IT math.ST stat.TH
|
Recent attention in quickest change detection in the multi-sensor setting has
been on the case where the densities of the observations change at the same
instant at all the sensors due to the disruption. In this work, a more general
scenario is considered where the change propagates across the sensors, and its
propagation can be modeled as a Markov process. A centralized, Bayesian version
of this problem, with a fusion center that has perfect information about the
observations and a priori knowledge of the statistics of the change process, is
considered. The problem of minimizing the average detection delay subject to
false alarm constraints is formulated as a partially observable Markov decision
process (POMDP). Insights into the structure of the optimal stopping rule are
presented. In the limiting case of rare disruptions, we show that the structure
of the optimal test reduces to thresholding the a posteriori probability of the
hypothesis that no change has happened. We establish the asymptotic optimality
(in the vanishing false alarm probability regime) of this threshold test under
a certain condition on the Kullback-Leibler (K-L) divergence between the post-
and the pre-change densities. In the special case of near-instantaneous change
propagation across the sensors, this condition reduces to the mild condition
that the K-L divergence be positive. Numerical studies show that this low
complexity threshold test results in a substantial improvement in performance
over naive tests such as a single-sensor test or a test that wrongly assumes
that the change propagates instantaneously.
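In the single-sensor limiting case, the posterior-threshold test described above reduces to the classical Shiryaev recursion; this sketch assumes a geometric change-point prior with parameter `rho` and hypothetical pre- and post-change densities `f0`, `f1`:

```python
import math

def shiryaev_stop(obs, f0, f1, rho, threshold):
    # Recursively update the posterior probability p that a change has
    # already occurred, and stop the first time p crosses the threshold --
    # i.e. threshold the a posteriori probability of "no change yet".
    p = 0.0
    for k, x in enumerate(obs):
        prior = p + (1.0 - p) * rho        # change may occur at this step
        num = prior * f1(x)
        p = num / (num + (1.0 - prior) * f0(x))
        if p >= threshold:
            return k
    return None

def gauss_pdf(mean):
    # Unit-variance Gaussian density, used here only to build an example.
    return lambda x: math.exp(-0.5 * (x - mean) ** 2) / math.sqrt(2 * math.pi)
```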
|
0812.3788
|
Foundations of SPARQL Query Optimization
|
cs.DB
|
The SPARQL query language is a recent W3C standard for processing RDF data, a
format that has been developed to encode information in a machine-readable way.
We investigate the foundations of SPARQL query optimization and (a) provide
novel complexity results for the SPARQL evaluation problem, showing that the
main source of complexity is operator OPTIONAL alone; (b) propose a
comprehensive set of algebraic query rewriting rules; (c) present a framework
for constraint-based SPARQL optimization based upon the well-known chase
procedure for Conjunctive Query minimization. In this line, we develop two
novel termination conditions for the chase. They subsume the strongest
conditions known so far and do not increase the complexity of the recognition
problem, thus making a larger class of both Conjunctive and SPARQL queries
amenable to constraint-based optimization. Our results are of immediate
practical interest and might empower any SPARQL query optimizer.
|
0812.3873
|
The K-Receiver Broadcast Channel with Confidential Messages
|
cs.IT math.IT
|
The secrecy capacity region for the K-receiver degraded broadcast channel
(BC) is given for confidential messages sent to the receivers and to be kept
secret from an external wiretapper. Superposition coding and Wyner's random
code partitioning are used to show the achievable rate tuples. Error
probability analysis and equivocation calculation are also provided. In the
converse proof, a new definition for the auxiliary random variables is used,
which is different from either the case of the 2-receiver BC without common
message or the K-receiver BC with common message, both with an external
wiretapper; or the K-receiver BC without a wiretapper.
|
0812.3890
|
Optimal Relay-Subset Selection and Time-Allocation in Decode-and-Forward
Cooperative Networks
|
cs.IT math.IT
|
We present the optimal relay-subset selection and transmission-time allocation
for a decode-and-forward, half-duplex cooperative network of arbitrary size.
resource allocation is obtained by maximizing over the rates obtained for each
possible subset of active relays, and the unique time allocation for each set
can be obtained by solving a linear system of equations. We also present a
simple recursive algorithm for the optimization problem which reduces the
computational load of finding the required matrix inverses, and reduces the
number of required iterations. Our results, in terms of outage rate, confirm
the benefit of adding potential relays to a small network and the diminishing
marginal returns for a larger network. We also show that optimizing over the
channel resources ensures that more relays are active over a larger SNR range,
and that linear network constellations significantly outperform grid
constellations. Through simulations, the optimization is shown to be robust to
node numbering.
|
0812.4012
|
De Bruijn Graph Homomorphisms and Recursive De Bruijn Sequences
|
math.CO cs.IT math.IT
|
This paper presents a method to find new De Bruijn cycles based on ones of
lesser order. This is done by mapping a De Bruijn cycle to several
vertex-disjoint cycles in a De Bruijn digraph of higher order and connecting these
cycles into one full cycle. We characterize homomorphisms between De Bruijn
digraphs of different orders that allow this construction. These maps
generalize the well-known D-morphism of Lempel between De Bruijn digraphs of
consecutive orders. Also, an efficient recursive algorithm that yields an
exponential number of nonbinary De Bruijn cycles is implemented.
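As a quick sanity check (not part of the paper's construction; the helper name and interface are our own), a candidate cycle can be verified to be a De Bruijn cycle by confirming that every length-n window occurs exactly once when the sequence is read cyclically:

```python
def is_de_bruijn(seq, n, k=2):
    """Check that every length-n window over a k-ary alphabet
    occurs exactly once when seq is read cyclically."""
    L = len(seq)
    if L != k ** n:
        return False
    windows = {tuple(seq[(i + j) % L] for j in range(n)) for i in range(L)}
    return len(windows) == k ** n

# The classic order-3 binary cycle 00010111 passes; a period-8
# sequence with a repeated window (101 appears twice) does not.
print(is_de_bruijn([0, 0, 0, 1, 0, 1, 1, 1], 3))  # True
print(is_de_bruijn([0, 0, 1, 1, 0, 1, 0, 1], 3))  # False
```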
|
0812.4044
|
The Offset Tree for Learning with Partial Labels
|
cs.LG cs.AI
|
We present an algorithm, called the Offset Tree, for learning to make
decisions in situations where the payoff of only one choice is observed, rather
than all choices. The algorithm reduces this setting to binary classification,
allowing one to reuse any existing, fully supervised binary classification
algorithm in this partial information setting. We show that the Offset Tree is
an optimal reduction to binary classification. In particular, it has regret at
most $(k-1)$ times the regret of the binary classifier it uses (where $k$ is
the number of choices), and no reduction to binary classification can do
better. This reduction is also computationally optimal, both at training and
test time, requiring just $O(\log_2 k)$ work to train on an example or make a
prediction.
Experiments with the Offset Tree show that it generally performs better than
several alternative approaches.
|
0812.4170
|
Finding Still Lifes with Memetic/Exact Hybrid Algorithms
|
cs.NE cs.AI
|
The maximum density still life problem (MDSLP) is a hard constraint
optimization problem based on Conway's Game of Life. It is a prime example of a
weighted constrained optimization problem that has recently been tackled in the
constraint-programming community. Bucket elimination (BE) is a complete
technique commonly used to solve this kind of constraint satisfaction problem.
When the memory required to apply BE is too high, a heuristic method based on
it (denominated mini-buckets) can be used to calculate bounds for the optimal
solution. Nevertheless, the curse of dimensionality makes these techniques
impractical for large-size problems. In response to this situation, we present
a memetic algorithm for the MDSLP in which BE is used as a mechanism for
recombining solutions, providing the best possible child from the parental set.
Subsequently, a multi-level model in which this exact/metaheuristic hybrid is
further hybridized with branch-and-bound techniques and mini-buckets is
studied. Extensive experimental results analyze the performance of these models
and multi-parent recombination. The resulting algorithm consistently finds
optimal patterns for all instances solved to date in less time than current
approaches. Moreover, it is shown that this proposal provides new best known
solutions for very large instances.
|
0812.4235
|
Client-server multi-task learning from distributed datasets
|
cs.LG cs.AI
|
A client-server architecture to simultaneously solve multiple learning tasks
from distributed datasets is described. In such an architecture, each client is
associated with an individual learning task and the associated dataset of
examples. The goal of the architecture is to perform information fusion from
multiple datasets while preserving privacy of individual data. The role of the
server is to collect data in real-time from the clients and codify the
information in a common database. The information coded in this database can be
used by all the clients to solve their individual learning task, so that each
client can exploit the informative content of all the datasets without actually
having access to private data of others. The proposed algorithmic framework,
based on regularization theory and kernel methods, uses a suitable class of
mixed effect kernels. The new method is illustrated through a simulated music
recommendation system.
|
0812.4332
|
Content-based and Algorithmic Classifications of Journals: Perspectives
on the Dynamics of Scientific Communication and Indexer Effects
|
physics.data-an cs.DL cs.IR physics.soc-ph
|
The aggregated journal-journal citation matrix, based on the Journal Citation
Reports (JCR) of the Science Citation Index, can be decomposed by indexers
and/or algorithmically. In this study, we test the results of two recently
available algorithms for the decomposition of large matrices against two
content-based classifications of journals: the ISI Subject Categories and the
field/subfield classification of Glaenzel & Schubert (2003). The content-based
schemes allow for the attribution of more than a single category to a journal,
whereas the algorithms maximize the ratio of within-category citations over
between-category citations in the aggregated category-category citation matrix.
By adding categories, indexers generate between-category citations, which may
enrich the database, for example, in the case of inter-disciplinary
developments. The consequent indexer effects are significant in sparse areas of
the matrix more than in denser ones. Algorithmic decompositions, on the other
hand, are more heavily skewed towards a relatively small number of categories,
whereas this skew is deliberately counteracted in the case of content-based
classifications. Because of the indexer effects, science policy studies and the
sociology of science should be careful when using content-based
classifications, which are made for bibliographic disclosure, and not for the
purpose of analyzing latent structures in scientific communications. Despite
the large differences among them, the four classification schemes enable us to
generate surprisingly similar maps of science at the global level. Erroneous
classifications are cancelled as noise at the aggregate level, but may disturb
the evaluation locally.
|
0812.4334
|
Multi-User SISO Precoding based on Generalized Multi-Unitary
Decomposition for Single-carrier Transmission in Frequency Selective Channel
|
cs.IT math.IT
|
In this paper, we propose to exploit the richly scattered multi-path nature
of a frequency selective channel to provide additional degrees of freedom for
designing effective precoding schemes for multi-user communications. We design
the precoding matrix for multi-user communications based on the Generalized
Multi-Unitary Decomposition (GMUD), where the channel matrix H is transformed
into P_i*R_r*Q_i^H. An advantage of GMUD is that multiple pairs of unitary
matrices P_i and Q_i can be obtained with a single R_r. Since a column of
Q_i can be used as the transmission beam of a particular user, multiple
solutions of Q_i provide a large selection of transmission beams, which can be
exploited to achieve high degrees of orthogonality between the multipaths, as
well as between the interfering users. Hence the proposed precoding technique
based on GMUD achieves better performance than precoding based on singular
value decomposition.
|
0812.4360
|
Driven by Compression Progress: A Simple Principle Explains Essential
Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention,
Curiosity, Creativity, Art, Science, Music, Jokes
|
cs.AI cs.NE
|
I argue that data becomes temporarily interesting by itself to some
self-improving, but computationally limited, subjective observer once he learns
to predict or compress the data in a better way, thus making it subjectively
simpler and more beautiful. Curiosity is the desire to create or discover more
non-random, non-arbitrary, regular data that is novel and surprising not in the
traditional sense of Boltzmann and Shannon but in the sense that it allows for
compression progress because its regularity was not yet known. This drive
maximizes interestingness, the first derivative of subjective beauty or
compressibility, that is, the steepness of the learning curve. It motivates
exploring infants, pure mathematicians, composers, artists, dancers, comedians,
yourself, and (since 1990) artificial systems.
|
0812.4446
|
The Latent Relation Mapping Engine: Algorithm and Experiments
|
cs.CL cs.AI cs.LG
|
Many AI researchers and cognitive scientists have argued that analogy is the
core of cognition. The most influential work on computational modeling of
analogy-making is Structure Mapping Theory (SMT) and its implementation in the
Structure Mapping Engine (SME). A limitation of SME is the requirement for
complex hand-coded representations. We introduce the Latent Relation Mapping
Engine (LRME), which combines ideas from SME and Latent Relational Analysis
(LRA) in order to remove the requirement for hand-coded representations. LRME
builds analogical mappings between lists of words, using a large corpus of raw
text to automatically discover the semantic relations among the words. We
evaluate LRME on a set of twenty analogical mapping problems, ten based on
scientific analogies and ten based on common metaphors. LRME achieves
human-level performance on the twenty problems. We compare LRME with a variety
of alternative approaches and find that they are not able to reach the same
level of performance.
|
0812.4460
|
Emergence of Spontaneous Order Through Neighborhood Formation in
Peer-to-Peer Recommender Systems
|
cs.AI cs.IR cs.MA
|
The advent of the Semantic Web necessitates paradigm shifts away from
centralized client/server architectures towards decentralization and
peer-to-peer computation, making the existence of central authorities
superfluous and even impossible. At the same time, recommender systems are
gaining considerable impact in e-commerce, providing people with
recommendations that are personalized and tailored to their very needs. These
recommender systems have traditionally been deployed with stark centralized
scenarios in mind, operating in closed communities detached from their host
network's outer perimeter. We aim at marrying these two worlds, i.e.,
decentralized peer-to-peer computing and recommender systems, in one
agent-based framework. Our architecture features an epidemic-style protocol
maintaining neighborhoods of like-minded peers in a robust, self-organizing
fashion. In order to demonstrate our architecture's scalability and
robustness, and its convergence towards high-quality
recommendations, we conduct offline experiments on top of the popular MovieLens
dataset.
|
0812.4461
|
Mining User Profiles to Support Structure and Explanation in Open Social
Networking
|
cs.IR
|
The proliferation of media sharing and social networking websites has brought
with it vast collections of site-specific user generated content. The result is
a Social Networking Divide in which the concepts and structure common across
different sites are hidden. The knowledge and structures from one social site
are not adequately exploited to provide new information and resources to the
same or different users in comparable social sites. For music bloggers, this
latent structure forces them to select sub-optimal blogrolls. However, by
integrating the social activities of music bloggers and listeners, we are able
to overcome this limitation: improving the quality of the blogroll
neighborhoods, in terms of similarity, by 85 percent when using tracks and by
120 percent when integrating tags from another site.
|
0812.4471
|
Diversity-Multiplexing Tradeoff of Network Coding with Bidirectional
Random Relaying
|
cs.IT math.IT
|
This paper develops a diversity-multiplexing tradeoff (DMT) over a
bidirectional random relay set in a wireless network where the distribution of
all nodes is a stationary Poisson point process. This is a nontrivial extension
of the DMT because it requires consideration of the cooperation (or lack
thereof) of relay nodes, the traffic pattern and the time allocation between
the forward and reverse traffic directions. We then use this tradeoff to
compare the DMTs of traditional time-division multihop (TDMH) and network
coding (NC). Our main results are the derivations of the DMT for both TDMH and
NC. This shows, surprisingly, that if relay nodes collaborate NC does not
always have a better DMT than TDMH since it is difficult to simultaneously
achieve bidirectional transmit diversity for both source nodes. In fact, for
certain traffic patterns NC can have a worse DMT due to suboptimal time
allocation between the forward and reverse transmission directions.
|
0812.4487
|
New Sequences Design from Weil Representation with Low Two-Dimensional
Correlation in Both Time and Phase Shifts
|
cs.IT cs.DM math.IT math.RT
|
For a given prime $p$, a new construction of families of complex-valued
sequences of period $p$ with efficient implementation is given by applying both
multiplicative characters and additive characters of finite field
$\mathbb{F}_p$. Such a signal set consists of $p^2(p-2)$ time-shift distinct
sequences; the magnitude of the two-dimensional autocorrelation function (i.e.,
the ambiguity function) in both time and phase of each sequence is upper
bounded by $2\sqrt{p}$ at any shift not equal to $(0, 0)$, and the magnitude of
the ambiguity function of any pair of phase-shift distinct sequences is upper
bounded by $4\sqrt{p}$. Furthermore, the magnitude of their Fourier transform
spectrum is less than or equal to 2. A proof is given through finding a simple
elementary construction for the sequences constructed from the Weil
representation by Gurevich, Hadani and Sochen. An open problem for directly
establishing these assertions without involving the Weil representation is
addressed.
|
0812.4514
|
Quantum generalized Reed-Solomon codes: Unified framework for quantum
MDS codes
|
quant-ph cs.IT math.IT
|
We construct a new family of quantum MDS codes from classical generalized
Reed-Solomon codes and derive the necessary and sufficient condition under
which these quantum codes exist. We also give code bounds and show how to
construct them analytically. We find that existing quantum MDS codes can be
unified under these codes in the sense that whenever a quantum MDS code exists,
a quantum code of this type with the same parameters also exists. Thus, as
far as is known at present, they are the most important family of quantum MDS
codes.
|
0812.4523
|
System Theoretic Viewpoint on Modeling of Complex Systems: Design,
Synthesis, Simulation, and Control
|
cs.CE
|
We consider the basic features of complex dynamic and control systems,
including systems having hierarchical structure. Special attention is paid to
the problems of design and synthesis of complex systems and control models, and
to the development of simulation techniques and systems. A model of a complex
system is proposed and briefly analyzed.
|
0812.4542
|
Assessing scientific research performance and impact with single indices
|
cs.IR physics.soc-ph
|
We provide a comprehensive and critical review of the h-index and its most
important modifications proposed in the literature, as well as of other similar
indicators measuring research output and impact. Extensions of some of these
indices are presented and illustrated.
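For concreteness (the textbook definition, not a contribution of the review): the h-index is the largest h such that at least h papers have at least h citations each, which can be computed directly from a citation list:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cs = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(cs, start=1):
        if c >= i:   # the i-th most-cited paper still has >= i citations
            h = i
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([0, 0]))            # 0
```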
|
0812.4580
|
Feature Markov Decision Processes
|
cs.AI cs.IT cs.LG math.IT
|
General purpose intelligent learning agents cycle through (complex, non-MDP)
sequences of observations, actions, and rewards. On the other hand,
reinforcement learning is well-developed for small finite state Markov Decision
Processes (MDPs). So far it is an art performed by human designers to extract
the right state representation out of the bare observations, i.e. to reduce the
agent setup to the MDP framework. Before we can think of mechanizing this
search for suitable MDPs, we need a formal objective criterion. The main
contribution of this article is to develop such a criterion. I also integrate
the various parts into one learning algorithm. Extensions to more realistic
dynamic Bayesian networks are developed in a companion article.
|
0812.4581
|
Feature Dynamic Bayesian Networks
|
cs.AI cs.IT cs.LG math.IT
|
Feature Markov Decision Processes (PhiMDPs) are well-suited for learning
agents in general environments. Nevertheless, unstructured (Phi)MDPs are
limited to relatively simple environments. Structured MDPs like Dynamic
Bayesian Networks (DBNs) are used for large-scale real-world problems. In this
article I extend PhiMDP to PhiDBN. The primary contribution is to derive a cost
criterion that allows one to automatically extract the most relevant features from
the environment, leading to the "best" DBN representation. I discuss all
building blocks required for a complete general learning algorithm.
|
0812.4614
|
I, Quantum Robot: Quantum Mind control on a Quantum Computer
|
quant-ph cs.AI cs.LO cs.RO
|
The logic which describes quantum robots is not orthodox quantum logic, but a
deductive calculus which reproduces the quantum tasks (computational processes,
and actions) taking into account quantum superposition and quantum
entanglement. A way toward the realization of intelligent quantum robots is to
adopt a quantum metalanguage to control quantum robots. A physical
implementation of a quantum metalanguage might be the use of coherent states in
brain signals.
|
0812.4627
|
Bayesian Compressive Sensing via Belief Propagation
|
cs.IT math.IT
|
Compressive sensing (CS) is an emerging field based on the revelation that a
small collection of linear projections of a sparse signal contains enough
information for stable, sub-Nyquist signal acquisition. When a statistical
characterization of the signal is available, Bayesian inference can complement
conventional CS methods based on linear programming or greedy algorithms. We
perform approximate Bayesian inference using belief propagation (BP) decoding,
which represents the CS encoding matrix as a graphical model. Fast computation
is obtained by reducing the size of the graphical model with sparse encoding
matrices. To decode a length-N signal containing K large coefficients, our
CS-BP decoding algorithm uses O(K log N) measurements and O(N log^2 N)
computation. Finally, although we focus on a two-state mixture Gaussian model,
CS-BP is easily adapted to other signal models.
|
0812.4642
|
Error-Trellis State Complexity of LDPC Convolutional Codes Based on
Circulant Matrices
|
cs.IT math.IT
|
Let H(D) be the parity-check matrix of an LDPC convolutional code
corresponding to the parity-check matrix H of a QC code obtained using the
method of Tanner et al. We see that the entries in H(D) are all monomials and
several rows (columns) have monomial factors. Let us cyclically shift the rows
of H. Then the parity-check matrix H'(D) corresponding to the modified matrix
H' defines another convolutional code. However, its free distance is
lower-bounded by the minimum distance of the original QC code. Also, each row
(column) of H'(D) has a factor different from the one in H(D). We show that the
state-space complexity of the error-trellis associated with H'(D) can be
significantly reduced by controlling the row shifts applied to H with the
error-correction capability being preserved.
|
0812.4803
|
Technical Report: Achievable Rates for the MAC with Correlated
Channel-State Information
|
cs.IT math.IT
|
In this paper we provide an achievable rate region for the discrete
memoryless multiple access channel with correlated state information known
non-causally at the encoders using a random binning technique. This result is a
generalization of the random binning technique used by Gel'fand and Pinsker for
the problem with non-causal channel state information at the encoder in
point-to-point communication.
|
0812.4826
|
Delay-Throughput Tradeoff for Supportive Two-Tier Networks
|
cs.IT cs.NI math.IT
|
Consider a static wireless network that has two tiers with different
priorities: a primary tier vs. a secondary tier. The primary tier consists of
randomly distributed legacy nodes of density $n$, which have an absolute
priority to access the spectrum. The secondary tier consists of randomly
distributed cognitive nodes of density $m=n^\beta$ with $\beta\geq 2$, which
can only access the spectrum opportunistically to limit the interference to the
primary tier. By allowing the secondary tier to route the packets for the
primary tier, we show that the primary tier can achieve a throughput scaling of
$\lambda_p(n)=\Theta(1/\log n)$ per node and a delay-throughput tradeoff of
$D_p(n)=\Theta(\sqrt{n^\beta\log n}\lambda_p(n))$ for $\lambda_p(n)=O(1/\log
n)$, while the secondary tier still achieves the same optimal delay-throughput
tradeoff as a stand-alone network.
|
0812.4889
|
Statistical Physics of Signal Estimation in Gaussian Noise: Theory and
Examples of Phase Transitions
|
cs.IT math.IT
|
We consider the problem of signal estimation (denoising) from a statistical
mechanical perspective, using a relationship between the minimum mean square
error (MMSE), of estimating a signal, and the mutual information between this
signal and its noisy version. The paper consists of essentially two parts. In
the first, we derive several statistical-mechanical relationships between a few
important quantities in this problem area, such as the MMSE, the differential
entropy, the Fisher information, the free energy, and a generalized notion of
temperature. We also draw analogies and differences between certain relations
pertaining to the estimation problem and the parallel relations in
thermodynamics and statistical physics. In the second part of the paper, we
provide several application examples, where we demonstrate how certain analysis
tools that are customary in statistical physics, prove useful in the analysis
of the MMSE. In most of these examples, the corresponding
statistical-mechanical systems turn out to consist of strong interactions that
cause phase transitions, which in turn are reflected as irregularities and
discontinuities (similar to threshold effects) in the behavior of the MMSE.
|
0812.4937
|
Efficient Interpolation in the Guruswami-Sudan Algorithm
|
cs.IT cs.DM math.AC math.IT
|
A novel algorithm is proposed for the interpolation step of the
Guruswami-Sudan list decoding algorithm. The proposed method is based on the
binary exponentiation algorithm, and can be considered as an extension of the
Lee-O'Sullivan algorithm. The algorithm is shown to achieve both asymptotic
and practical performance gains compared to the iterative interpolation
algorithm. Further complexity reduction is achieved by integrating the proposed
method with re-encoding. The key contribution of the paper, which enables the
complexity reduction, is a novel randomized ideal multiplication algorithm.
|
0812.4952
|
Importance Weighted Active Learning
|
cs.LG
|
We present a practical and statistically consistent scheme for actively
learning binary classifiers under general loss functions. Our algorithm uses
importance weighting to correct sampling bias, and by controlling the variance,
we are able to give rigorous label complexity bounds for the learning process.
Experiments on passively labeled data show that this approach reduces the label
complexity required to achieve good predictive performance on many learning
problems.
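A minimal sketch of the bias-correction idea alone (not the authors' algorithm, which also controls the query probabilities and variance): each label is queried with some probability p_i, and queried losses are reweighted by 1/p_i, which makes the average-loss estimate unbiased. For a tiny example with hypothetical losses and probabilities, unbiasedness can be verified exactly by enumerating all query patterns:

```python
from itertools import product

def iw_estimate(losses, probs, queried):
    """Importance-weighted mean loss: each queried example i
    contributes loss_i / p_i; unqueried examples contribute 0."""
    n = len(losses)
    return sum(l / p for l, p, q in zip(losses, probs, queried) if q) / n

def expected_estimate(losses, probs):
    """Exact expectation of the estimator over all query patterns."""
    n = len(losses)
    total = 0.0
    for pattern in product([0, 1], repeat=n):
        weight = 1.0
        for p, q in zip(probs, pattern):
            weight *= p if q else (1.0 - p)
        total += weight * iw_estimate(losses, probs, pattern)
    return total

losses = [1.0, 0.0, 1.0]   # hypothetical per-example losses
probs = [0.5, 0.25, 0.8]   # hypothetical query probabilities
print(round(expected_estimate(losses, probs), 6))  # 0.666667, the true mean loss 2/3
```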
|
0812.4985
|
On the Capacity of Partially Cognitive Radios
|
cs.IT math.IT
|
This paper considers the problem of cognitive radios with partial-message
information. Here, an interference channel setting is considered where one
transmitter (the "cognitive" one) knows the message of the other ("legitimate"
user) partially. An outer bound on the capacity region of this channel is found
for the "weak" interference case (where the interference from the cognitive
transmitter to the legitimate receiver is weak). This outer bound is shown for
both the discrete-memoryless and the Gaussian channel cases. An achievable
region is subsequently determined for a mixed interference Gaussian cognitive
radio channel, where the interference from the legitimate transmitter to the
cognitive receiver is "strong". It is shown that, for a class of mixed Gaussian
cognitive radio channels, portions of the outer bound are achievable thus
resulting in a characterization of a part of this channel's capacity region.
|
0812.4986
|
An Array Algebra
|
cs.DB
|
This is a proposal of an algebra which aims at distributed array processing.
The focus lies on re-arranging and distributing array data, which may be
multi-dimensional. The context of the work is scientific processing; thus, the
core science operations are assumed to be taken care of in external libraries
or languages. A main design driver is the desire to carry over some of the
strategies of the relational algebra into the array domain.
|
0812.5026
|
Group representation design of digital signals and sequences
|
cs.IT cs.DM math.IT math.RT
|
In this survey a novel system, called the oscillator system, consisting of
order p^3 functions (signals) on the finite field F_{p}, is described and
studied. The new functions are proved to satisfy good auto-correlation,
cross-correlation and low peak-to-average power ratio properties. Moreover, the
oscillator system is closed under the operation of discrete Fourier transform.
Applications of the oscillator system for discrete radar and digital
communication theory are explained. Finally, an explicit algorithm to construct
the oscillator system is presented.
|
0812.5032
|
A New Clustering Algorithm Based Upon Flocking On Complex Network
|
cs.LG cs.AI cs.CV physics.soc-ph
|
We propose a model based upon flocking on a complex network, and develop two
clustering algorithms on the basis of it. In the algorithms, a weighted,
directed \textit{k}-nearest neighbor (knn) graph is first produced over all
data points in a dataset, each of which is regarded as an agent that can move
in space; a time-varying complex network is then created by adding long-range
links for each data point. Furthermore, each data point is acted upon not only
by its \textit{k} nearest neighbors but also by \textit{r} long-range
neighbors, through fields that they jointly establish in space, so it takes a
step along the direction of the vector sum of all fields. More importantly,
these long-range links provide hidden information to each data point as it
moves, and at the same time accelerate its convergence towards a center. As
the data points move in space according to the proposed model, points that
belong to the same class gradually gather at the same position, whereas those
that belong to different classes move away from one another. Consequently, the
experimental results demonstrate that data
points in datasets are clustered reasonably and efficiently, and the rates of
convergence of clustering algorithms are fast enough. Moreover, the comparison
with other algorithms also provides an indication of the effectiveness of the
proposed approach.
|
0812.5064
|
A Novel Clustering Algorithm Based Upon Games on Evolving Network
|
cs.LG cs.CV cs.GT nlin.AO
|
This paper introduces a model based upon games on an evolving network, and
develops three clustering algorithms according to it. In the clustering
algorithms, data points for clustering are regarded as players who can make
decisions in games. On the network describing relationships among data points,
an edge-removing-and-rewiring (ERR) function is employed to explore the
neighborhood of a data point, which removes edges connecting to neighbors with
small payoffs, and creates new edges to neighbors with larger payoffs. As such,
the connections among data points vary over time. During the evolution of
network, some strategies are spread in the network. As a consequence, clusters
are formed automatically, in which data points with the same evolutionarily
stable strategy are collected as a cluster, so the number of evolutionarily
stable strategies indicates the number of clusters. Moreover, the experimental
results have demonstrated that data points in datasets are clustered reasonably
and efficiently, and the comparison with other algorithms also provides an
indication of the effectiveness of the proposed algorithms.
|
0812.5104
|
On Quantum and Classical Error Control Codes: Constructions and
Applications
|
cs.IT math.IT quant-ph
|
It is conjectured that quantum computers are able to solve certain problems
more quickly than any deterministic or probabilistic computer. A quantum
computer exploits the rules of quantum mechanics to speed up computations.
However, it is a formidable task to build a quantum computer, since the quantum
mechanical systems storing the information unavoidably interact with their
environment. Therefore, one has to mitigate the resulting noise and decoherence
effects to avoid computational errors.
In this work, I study various aspects of quantum error control codes -- the
key component of fault-tolerant quantum information processing. I present the
fundamental theory and necessary background of quantum codes and construct many
families of quantum block and convolutional codes over finite fields, in
addition to families of subsystem codes over symmetric and asymmetric channels.
Particularly, many families of quantum BCH, RS, duadic, and convolutional
codes are constructed over finite fields. Families of subsystem codes and a
class of optimal MDS subsystem codes are derived over asymmetric and symmetric
quantum channels. In addition, propagation rules and tables of upper bounds on
subsystem code parameters are established. Classes of quantum and classical
LDPC codes based on finite geometries and Latin squares are constructed.
|
0901.0015
|
Maximum Entropy on Compact Groups
|
cs.IT math.IT math.PR
|
On a compact group the Haar probability measure plays the role of uniform
distribution. The entropy and rate distortion theory for this uniform
distribution is studied. New results and simplified proofs on convergence of
convolutions on compact groups are presented and they can be formulated as
entropy increases to its maximum. Information theoretic techniques and Markov
chains play a crucial role. The convergence results are also formulated via
rate distortion functions. The rate of convergence is shown to be exponential.
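The finite cyclic group Z_n gives a toy version of the phenomenon (the paper's setting is general compact groups; this discrete example is our own): convolving a distribution with itself pushes its entropy up towards that of the uniform (Haar) measure, log2(n). A small sketch:

```python
import math

def convolve(p, q):
    """Convolution of two pmfs on the cyclic group Z_n."""
    n = len(p)
    return [sum(p[j] * q[(k - j) % n] for j in range(n)) for k in range(n)]

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

p = [0.7, 0.1, 0.1, 0.1]   # a non-uniform pmf on Z_4
p2 = convolve(p, p)        # distribution of the sum of two independent copies
# Entropy increases under convolution, bounded by log2(4) = 2 bits.
print(entropy(p) < entropy(p2) <= math.log2(4))  # True
```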
|
0901.0042
|
A family of asymptotically good quantum codes based on code
concatenation
|
quant-ph cs.IT math.IT
|
We explicitly construct an infinite family of asymptotically good
concatenated quantum stabilizer codes where the outer code uses CSS-type
quantum Reed-Solomon code and the inner code uses a set of special quantum
codes. In the field of quantum error-correcting codes, this is the first time
that a family of asymptotically good quantum codes is derived from bad codes.
This construction fills a gap in quantum coding theory.
|
0901.0044
|
Information Inequalities for Joint Distributions, with Interpretations
and Applications
|
cs.IT math.CO math.IT math.PR
|
Upper and lower bounds are obtained for the joint entropy of a collection of
random variables in terms of an arbitrary collection of subset joint entropies.
These inequalities generalize Shannon's chain rule for entropy as well as
inequalities of Han, Fujishige and Shearer. A duality between the upper and
lower bounds for joint entropy is developed. All of these results are shown to
be special cases of general new results for submodular functions -- thus, the
inequalities presented constitute a richly structured class of Shannon-type
inequalities. The new inequalities are applied to obtain new results in
combinatorics, such as bounds on the number of independent sets in an arbitrary
graph and the number of zero-error source-channel codes, as well as new
determinantal inequalities in matrix theory. A new inequality for relative
entropies is also developed, along with interpretations in terms of hypothesis
testing. Finally, revealing connections of the results to literature in
economics, computer science, and physics are explored.
|
0901.0055
|
Entropy and set cardinality inequalities for partition-determined
functions
|
cs.IT math.CO math.IT math.NT math.PR
|
A new notion of partition-determined functions is introduced, and several
basic inequalities are developed for the entropy of such functions of
independent random variables, as well as for cardinalities of compound sets
obtained using these functions. Here a compound set means a set obtained by
varying each argument of a function of several variables over a set associated
with that argument, where all the sets are subsets of an appropriate algebraic
structure so that the function is well defined. On the one hand, the entropy
inequalities developed for partition-determined functions imply entropic
analogues of general inequalities of Pl\"unnecke-Ruzsa type. On the other hand,
the cardinality inequalities developed for compound sets imply several
inequalities for sumsets, including for instance a generalization of
inequalities proved by Gyarmati, Matolcsi and Ruzsa (2010). We also provide
partial progress towards a conjecture of Ruzsa (2007) for sumsets in nonabelian
groups. All proofs are elementary and rely on properly developing certain
information-theoretic inequalities.
|
0901.0062
|
Cores of Cooperative Games in Information Theory
|
cs.IT cs.GT math.IT
|
Cores of cooperative games are ubiquitous in information theory, and arise
most frequently in the characterization of fundamental limits in various
scenarios involving multiple users. Examples include classical settings in
network information theory such as Slepian-Wolf source coding and multiple
access channels, classical settings in statistics such as robust hypothesis
testing, and new settings at the intersection of networking and statistics such
as distributed estimation problems for sensor networks. Cooperative game theory
allows one to understand aspects of all of these problems from a fresh and
unifying perspective that treats users as players in a game, sometimes leading
to new insights. At the heart of these analyses are fundamental dualities that
have been long studied in the context of cooperative games; for information
theoretic purposes, these are dualities between information inequalities on the
one hand and properties of rate, capacity or other resource allocation regions
on the other.
|
0901.0065
|
Exact Histogram Specification Optimized for Structural Similarity
|
cs.CV cs.MM
|
An exact histogram specification (EHS) method modifies its input image to
have a specified histogram. Applications of EHS include image (contrast)
enhancement (e.g., by histogram equalization) and histogram watermarking.
Performing EHS on an image, however, reduces its visual quality. Starting from
the output of a generic EHS method, we iteratively maximize the structural
similarity index (SSIM) between the original image (before EHS) and the result
of EHS. Essential in this process is the computationally simple and
accurate formula we derive for SSIM gradient. As it is based on gradient
ascent, the proposed EHS always converges. Experimental results confirm that
while obtaining the histogram exactly as specified, the proposed method
invariably outperforms the existing methods in terms of visual quality of the
result. The computational complexity of the proposed method is shown to be of
the same order as that of the existing methods.
Index terms: histogram modification, histogram equalization, optimization for
perceptual visual quality, structural similarity gradient ascent, histogram
watermarking, contrast enhancement.
|
0901.0118
|
On the Stability Region of Amplify-and-Forward Cooperative Relay
Networks
|
cs.IT math.IT
|
This paper considers an amplify-and-forward relay network with fading states.
Amplify-and-forward scheme (along with its variations) is the core mechanism
for enabling cooperative communication in wireless networks, and hence
understanding the network stability region under amplify-and-forward scheme is
very important. However, in a relay network employing amplify-and-forward, the
interaction between nodes is described in terms of real-valued ``packets''
(signals) instead of discrete packets (bits). This restrains the relay nodes
from re-encoding the packets at desired rates. Hence, the stability analysis
for relay networks employing amplify-and-forward scheme is by no means a
straightforward extension of that in packet-based networks. In this paper, the
stability region of a four-node relay network is characterized, and a simple
throughput optimal algorithm with joint scheduling and rate allocation is
proposed.
|
0901.0163
|
Limited-Rate Channel State Feedback for Multicarrier Block Fading
Channels
|
cs.IT math.IT
|
The capacity of a fading channel can be substantially increased by feeding
back channel state information from the receiver to the transmitter. With
limited-rate feedback what state information to feed back and how to encode it
are important open questions. This paper studies power loading in a
multicarrier system using no more than one bit of feedback per sub-channel. The
sub-channels can be correlated and full channel state information is assumed at
the receiver.
|
0901.0168
|
Coding for Two-User SISO and MIMO Multiple Access Channels
|
cs.IT math.IT
|
Constellation Constrained (CC) capacity regions of a two-user SISO Gaussian
Multiple Access Channel (GMAC) with finite complex input alphabets and
continuous output are computed in this paper. When both the users employ the
same code alphabet, it is well known that an appropriate rotation between the
alphabets provides unique decodability to the receiver. For such a set-up, a
metric is proposed to compute the angle(s) of rotation between the alphabets
such that the CC capacity region is maximally enlarged. Subsequently, code
pairs based on Trellis Coded Modulation (TCM) are designed for the two-user
GMAC with $M$-PSK and $M$-PAM alphabet pairs for arbitrary values of $M$ and it
is proved that, for certain angles of rotation, Ungerboeck labelling on the
trellis of each user maximizes the guaranteed squared Euclidean distance of the
\textit{sum trellis}. Hence, such a labelling scheme can be used systematically
to construct trellis code pairs for a two-user GMAC to achieve sum rates close
to the sum capacity of the channel. More importantly, it is shown for the first
time that ML decoding complexity at the destination is significantly reduced
when $M$-PAM alphabet pairs are employed with \textit{almost} no loss in the
sum capacity. \indent A two-user Multiple Input Multiple Output (MIMO) fading
MAC with $N_{t}$ antennas at both the users and a single antenna at the
destination has also been considered with the assumption that the destination
has the perfect knowledge of channel state information and the two users have
the perfect knowledge of only the phase components of their channels. For such
a set-up, two distinct classes of Space Time Block Code (STBC) pairs derived
from the well known class of real orthogonal designs are proposed such that the
STBC pairs are information lossless and have low ML decoding complexity.
|
0901.0170
|
Pedestrian Traffic: on the Quickest Path
|
physics.soc-ph cs.MA physics.comp-ph
|
When a large group of pedestrians moves around a corner, most pedestrians do
not follow the shortest path, which is to stay as close as possible to the
inner wall, but try to minimize the travel time. For this they accept a longer
path with some distance to the corner, to avoid large densities, and thereby
succeed in maintaining a comparatively high speed. In many models of
pedestrian dynamics the basic rule of motion is often either "move as far as
possible toward the destination" or - reformulated - "of all coordinates
accessible in this time step move to the one with the smallest distance to the
destination". On top of this rule, modifications are placed to make the motion
more realistic. These modifications usually focus on local behavior and neglect
long-ranged effects. Compared to real pedestrians, this leads to agents in a
simulation valuing the shortest path far more than the quickest. So, in a
situation such as the movement of a large crowd around a corner, one needs an
additional element in a model of pedestrian dynamics that makes the agents
deviate from the rule of the shortest path. In this work it is shown how this
can be achieved by using a flood-fill dynamic potential field method, where
during the filling process the value of a field cell is increased not by 1, but
by a larger value, if it is occupied by an agent. This idea may be an obvious
one; however, the tricky part - and therefore, in a strict sense, the
contribution of this work - is a) to minimize unrealistic artifacts, as naive
flood fill metrics deviate considerably from the Euclidean metric and in this
respect yield large errors, b) to do this with limited computational effort, and
c) to keep agents' movement at very low densities unaltered.
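The filling rule described above can be sketched as follows (an illustration only, not the authors' implementation: the 4-connected neighborhood and the occupancy penalty of 5 are assumptions, and since the increment is no longer uniform, the sketch uses Dijkstra's algorithm rather than plain breadth-first flooding to keep the field consistent):

```python
import heapq

def flood_fill_potential(grid, goal, occupied_cost=5):
    """Dynamic potential field: fill distances outward from the goal cell,
    where stepping onto an agent-occupied cell raises the field value by
    occupied_cost instead of 1. grid[y][x] is True if an agent is there."""
    rows, cols = len(grid), len(grid[0])
    dist = [[float('inf')] * cols for _ in range(rows)]
    gy, gx = goal
    dist[gy][gx] = 0
    heap = [(0, gy, gx)]
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue                      # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < rows and 0 <= nx < cols:
                step = occupied_cost if grid[ny][nx] else 1
                if d + step < dist[ny][nx]:
                    dist[ny][nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist
```

Agents then descend this field; cells behind a dense crowd receive high values, so a quickest path around the congestion becomes preferable to the geometrically shortest one.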
|
0901.0213
|
Filtering Microarray Correlations by Statistical Literature Analysis
Yields Potential Hypotheses for Lactation Research
|
cs.DL cs.DB
|
Our results demonstrate that a previously reported protein name
co-occurrence method (5-mention PubGene), which was not based on a hypothesis
testing framework, is generally more statistically significant than the 99th
percentile of a Poisson distribution-based method of calculating co-occurrence.
It agrees with previous methods that use natural language processing to extract
protein-protein interactions from text, as more than 96% of the interactions
found by natural language processing methods overlap with the results from the
5-mention PubGene method. However, less than 2% of the gene co-expressions
analyzed by microarray were found by direct co-occurrence or interaction
information extraction from the literature. At the same time, by combining
microarray and literature analyses, we derive a novel set of 7 potential
functional protein-protein interactions that had not been previously described
in the literature.
|
0901.0220
|
Comments on "Broadcast Channels with Arbitrarily Correlated Sources"
|
cs.IT math.IT
|
The Marton-Gelfand-Pinsker inner bound on the capacity region of broadcast
channels was extended by Han-Costa to include arbitrarily correlated sources
where the capacity region is replaced by an admissible source region. The main
arguments of Han-Costa are correct but unfortunately the authors overlooked an
inequality in their derivation. The corrected region is presented and the
absence of the omitted inequality is shown to sometimes admit sources that are
not admissible.
|
0901.0222
|
Dynamic Muscle Fatigue Evaluation in Virtual Working Environment
|
cs.RO
|
Musculoskeletal disorder (MSD) is one of the major health problems in
mechanical work, especially in manual handling jobs. Muscle fatigue is believed
to be the main cause of MSD. Posture analysis techniques have been used to
expose the MSD risks of a job, but most conventional methods are only
suitable for static posture analysis. Meanwhile, subjective influences from
the inspectors can result in differences in the risk assessment. Another
disadvantage is that the evaluation has to take place in the workshop, so
it is impossible to avoid some design defects before data collection in the
field environment, and it is time consuming. In order to enhance the efficiency
of ergonomic MSD risk evaluation and avoid subjective influences, in this paper
we develop a new muscle fatigue model and a new fatigue index to evaluate human
muscle fatigue during manual handling jobs. Our new fatigue model is
closely related to the muscle load during the working procedure, so it can be
used to evaluate the dynamic working process. The muscle fatigue model is
mathematically validated; it remains to be experimentally validated and
integrated into a virtual working environment to evaluate muscle fatigue
and predict MSD risks quickly and objectively.
|
0901.0252
|
MIMO decoding based on stochastic reconstruction from multiple
projections
|
cs.IT cs.LG math.IT
|
Least squares (LS) fitting is one of the most fundamental techniques in
science and engineering. It is used to estimate parameters from multiple noisy
observations. In many problems the parameters are known a priori to be bounded
integer valued, or to come from a finite set of values on an arbitrary finite
lattice. In this case finding the closest vector becomes an NP-hard problem. In
this paper we propose a novel algorithm, the Tomographic Least Squares Decoder
(TLSD), that not only solves the integer least squares (ILS) problem better
than other sub-optimal techniques, but is also capable of providing the
a-posteriori probability distribution for each element of the solution vector.
The algorithm is based on reconstruction of the vector from multiple
two-dimensional projections. The projections are carefully chosen to provide
low computational complexity. Unlike other iterative techniques, such as belief
propagation, the proposed algorithm has guaranteed convergence. We also provide
simulated experiments comparing the algorithm to other sub-optimal algorithms.
|
0901.0269
|
Random Linear Network Coding For Time Division Duplexing: Energy
Analysis
|
cs.IT math.IT
|
We study the energy performance of random linear network coding for time
division duplexing channels. We assume a packet erasure channel with nodes that
cannot transmit and receive information simultaneously. The sender transmits
coded data packets back-to-back before stopping to wait for the receiver to
acknowledge the number of degrees of freedom, if any, that are required to
decode correctly the information. Our analysis shows that, in terms of mean
energy consumed, there is an optimal number of coded data packets to send
before stopping to listen. This number depends on the energy needed to transmit
each coded packet and the acknowledgment (ACK), probabilities of packet and ACK
erasure, and the number of degrees of freedom that the receiver requires to
decode the data. We show that its energy performance is superior to that of a
full-duplex system. We also study the performance of our scheme when the number
of coded packets is chosen to minimize the mean time to complete transmission
as in [1]. Energy performance under this optimization criterion is found to be
close to optimal, thus providing a good trade-off between energy and time
required to complete transmissions.
|
0901.0275
|
Physical-Layer Security: Combining Error Control Coding and Cryptography
|
cs.IT cs.CR math.IT
|
In this paper we consider tandem error control coding and cryptography in the
setting of the {\em wiretap channel} due to Wyner. In a typical communications
system a cryptographic application is run at a layer above the physical layer
and assumes the channel is error free. However, in any real application the
channels for friendly users and passive eavesdroppers are not error free and
Wyner's wiretap model addresses this scenario. Using this model, we show the
security of a common cryptographic primitive, i.e. a keystream generator based
on linear feedback shift registers (LFSR), can be strengthened by exploiting
properties of the physical layer. A passive eavesdropper can be made to
experience greater difficulty in cracking an LFSR-based cryptographic system
insomuch that the computational complexity of discovering the secret key
increases by orders of magnitude, or is altogether infeasible. This result is
shown for two fast correlation attacks originally presented by Meier and
Staffelbach, in the context of channel errors due to the wiretap channel model.
|
0901.0296
|
Experience versus Talent Shapes the Structure of the Web
|
cs.CY cs.IR physics.soc-ph
|
We use sequential large-scale crawl data to empirically investigate and
validate the dynamics that underlie the evolution of the structure of the web.
We find that the overall structure of the web is defined by an intricate
interplay between experience or entitlement of the pages (as measured by the
number of inbound hyperlinks a page already has), inherent talent or fitness of
the pages (as measured by the likelihood that someone visiting the page would
give a hyperlink to it), and the continual high rates of birth and death of
pages on the web. We find that the web is conservative in judging talent and
the overall fitness distribution is exponential, showing low variability. The
small variance in talent, however, is enough to lead to experience
distributions with high variance: The preferential attachment mechanism
amplifies these small biases and leads to heavy-tailed power-law (PL) inbound
degree distributions over all pages, as well as over pages that are of the same
age. The balancing act between experience and talent on the web allows newly
introduced pages with novel and interesting content to grow quickly and surpass
older pages. In this regard, it is much like what we observe in high-mobility
and meritocratic societies: People with entitlement continue to have access to
the best resources, but there is just enough screening for fitness that allows
for talented winners to emerge and join the ranks of the leaders. Finally, we
show that the fitness estimates have potential practical applications in
ranking query results.
|
0901.0317
|
Design of a P System based Artificial Graph Chemistry
|
cs.NE cs.AI
|
Artificial Chemistries (ACs) are symbolic chemical metaphors for the
exploration of Artificial Life, with specific focus on the origin of life. In
this work we define a P system based artificial graph chemistry to understand
the principles leading to the evolution of life-like structures in an AC setup
and to develop a unified framework to characterize and classify symbolic
artificial chemistries by devising appropriate formalism to capture semantic
and organizational information. An extension of the P system is considered by
associating probabilities with the rules providing the topological framework
for the evolution of a labeled undirected graph based molecular reaction
semantics.
|
0901.0318
|
Thoughts on an Unified Framework for Artificial Chemistries
|
cs.AI cs.IT math.IT nlin.AO
|
Artificial Chemistries (ACs) are symbolic chemical metaphors for the
exploration of Artificial Life, with specific focus on the problem of
biogenesis or the origin of life. This paper presents the authors' thoughts
towards defining a unified framework to characterize and classify symbolic
artificial chemistries by devising appropriate formalism to capture semantic
and organizational information. We identify three basic high-level abstractions
in the initial proposal for this framework, viz., information, computation, and
communication. We present an analysis of two important notions of information,
namely, Shannon's Entropy and Algorithmic Information, and discuss inductive
and deductive approaches for defining the framework.
|
0901.0339
|
Resolution-based Query Answering for Semantic Access to Relational
Databases: A Research Note
|
cs.LO cs.DB
|
We address the problem of semantic querying of relational databases (RDB)
modulo knowledge bases using very expressive knowledge representation
formalisms, such as full first-order logic or its various fragments. We propose
to use a first-order logic (FOL) reasoner for computing schematic answers to
deductive queries, with the subsequent instantiation of these schematic answers
using a conventional relational DBMS. In this research note, we outline the
main idea of this technique -- using abstractions of databases and constrained
clauses for deriving schematic answers. The proposed method can be directly
used with regular RDB, including legacy databases. Moreover, we propose it as a
potential basis for an efficient Web-scale semantic search technology.
|
0901.0358
|
Weighted Naive Bayes Model for Semi-Structured Document Categorization
|
cs.IR
|
The aim of this paper is the supervised classification of semi-structured
data. A formal model based on bayesian classification is developed while
addressing the integration of the document structure into classification tasks.
We define what we call the structural context of occurrence for unstructured
data, and we derive a recursive formulation in which parameters are used to
weight the contribution of a structural element relative to the others. A
simplified version of this formal model is implemented to carry out textual
document classification experiments. First results show, for an ad hoc weighting
strategy, that the structural context of word occurrences has a significant
impact on classification results compared to the performance of a simple
multinomial naive Bayes classifier. The proposed implementation competes on the
Reuters-21578 data with the SVM classifier associated or not with the splitting
of structural components. These results encourage exploring the learning of
acceptable weighting strategies for this model, in particular boosting
strategies.
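The weighted formulation can be illustrated with a small sketch (illustrative only, not the paper's exact model: the flat per-element weights, the pooled per-class word counts, and the Laplace smoothing are simplifying assumptions):

```python
import math
from collections import defaultdict

def train(docs):
    """docs: list of (label, {element: [words]}) semi-structured documents.
    Returns per-class word counts pooled over all structural elements,
    class priors, and the vocabulary."""
    counts = defaultdict(lambda: defaultdict(int))
    priors = defaultdict(int)
    vocab = set()
    for label, parts in docs:
        priors[label] += 1
        for words in parts.values():
            for w in words:
                counts[label][w] += 1
                vocab.add(w)
    return counts, priors, vocab

def classify(parts, counts, priors, vocab, weights):
    """Weighted multinomial naive Bayes: the log-likelihood of the words in
    structural element e is scaled by weights[e] (all weights 1.0 recovers
    the plain multinomial naive Bayes classifier)."""
    total = sum(priors.values())
    best, best_score = None, float('-inf')
    for label in priors:
        n = sum(counts[label].values())
        score = math.log(priors[label] / total)
        for elem, words in parts.items():
            w_e = weights.get(elem, 1.0)
            for w in words:
                p = (counts[label][w] + 1) / (n + len(vocab))  # Laplace
                score += w_e * math.log(p)
        if score > best_score:
            best, best_score = label, score
    return best
```

Boosting-style learning of the weights, as suggested above, would then tune `weights` on held-out data instead of fixing them by hand.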
|
0901.0401
|
From Physics to Economics: An Econometric Example Using Maximum Relative
Entropy
|
q-fin.ST cs.IT math.IT physics.data-an physics.pop-ph stat.CO stat.ME
|
Econophysics is based on the premise that some ideas and methods from
physics can be applied to economic situations. We intend to show in this paper
how a physics concept such as entropy can be applied to an economic problem. In
so doing, we demonstrate how information in the form of observable data and
moment constraints is introduced into the method of Maximum relative Entropy
(MrE). A general example of updating with data and moments is shown. Two
specific econometric examples are solved in detail which can then be used as
templates for real world problems. A numerical example is compared to a large
deviation solution which illustrates some of the advantages of the MrE method.
|
0901.0492
|
Transmission Capacities for Overlaid Wireless Ad Hoc Networks with
Outage Constraints
|
cs.IT math.IT
|
We study the transmission capacities of two coexisting wireless networks (a
primary network vs. a secondary network) that operate in the same geographic
region and share the same spectrum. We define transmission capacity as the
product among the density of transmissions, the transmission rate, and the
successful transmission probability (1 minus the outage probability). The
primary (PR) network has a higher priority to access the spectrum without
particular considerations for the secondary (SR) network, where the SR network
limits its interference to the PR network by carefully controlling the density
of its transmitters. Assuming that the nodes are distributed according to
Poisson point processes and the two networks use different transmission ranges,
we quantify the transmission capacities for both of these two networks and
discuss their tradeoff based on asymptotic analyses. Our results show that if
the PR network permits a small increase of its outage probability, the sum
transmission capacity of the two networks (i.e., the overall spectrum
efficiency per unit area) will be boosted significantly over that of a single
network.
|
0901.0521
|
On Multipath Fading Channels at High SNR
|
cs.IT math.IT
|
This work studies the capacity of multipath fading channels. A noncoherent
channel model is considered, where neither the transmitter nor the receiver is
cognizant of the realization of the path gains, but both are cognizant of their
statistics. It is shown that if the delay spread is large in the sense that the
variances of the path gains decay exponentially or slower, then capacity is
bounded in the signal-to-noise ratio (SNR). For such channels, capacity does
not tend to infinity as the SNR tends to infinity. In contrast, if the
variances of the path gains decay faster than exponentially, then capacity is
unbounded in the SNR. It is further demonstrated that if the number of paths is
finite, then at high SNR capacity grows double-logarithmically with the SNR,
and the capacity pre-loglog, defined as the limiting ratio of capacity to
log(log(SNR)) as SNR tends to infinity, is 1 irrespective of the number of
paths.
|
0901.0536
|
Polar Codes: Characterization of Exponent, Bounds, and Constructions
|
cs.IT math.IT
|
Polar codes were recently introduced by Ar\i kan. They achieve the capacity
of arbitrary symmetric binary-input discrete memoryless channels under a low
complexity successive cancellation decoding strategy. The original polar code
construction is closely related to the recursive construction of Reed-Muller
codes and is based on the $2 \times 2$ matrix $\bigl[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\bigr]$. It was
shown by Ar\i kan and Telatar that this construction achieves an error exponent
of $\frac12$, i.e., that for sufficiently large blocklengths the error
probability decays exponentially in the square root of the length. It was
already mentioned by Ar\i kan that in principle larger matrices can be used to
construct polar codes. A fundamental question then is to see whether there
exist matrices with exponent exceeding $\frac12$. We first show that any $\ell
\times \ell$ matrix none of whose column permutations is upper triangular
polarizes symmetric channels. We then characterize the exponent of a given
square matrix and derive upper and lower bounds on achievable exponents. Using
these bounds we show that there are no matrices of size less than 15 with
exponents exceeding $\frac12$. Further, we give a general construction based on
BCH codes which for large $n$ achieves exponents arbitrarily close to 1 and
which exceeds $\frac12$ for size 16.
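The recursive construction from the $2 \times 2$ kernel can be sketched as its n-th Kronecker power (a sketch only: a complete polar code additionally freezes the low-quality synthetic channels and applies a bit-reversal permutation, both omitted here):

```python
import numpy as np

# The 2x2 Arikan kernel discussed above.
G2 = np.array([[1, 0],
               [1, 1]])

def polar_generator(n):
    """n-th Kronecker power of G2: the 2^n x 2^n polar transform matrix."""
    G = np.array([[1]])
    for _ in range(n):
        G = np.kron(G, G2)
    return G % 2

def polar_encode(u, n):
    """Encode a length-2^n binary vector u over GF(2)
    (no index selection or bit-reversal permutation)."""
    return u.dot(polar_generator(n)) % 2
```

Larger $\ell \times \ell$ kernels, as studied in the paper, would simply replace `G2` in the Kronecker recursion.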
|
0901.0541
|
Linear Transformations and Restricted Isometry Property
|
cs.IT math.IT
|
The Restricted Isometry Property (RIP) introduced by Cand\`es and Tao is a
fundamental property in compressed sensing theory. It says that if a sampling
matrix satisfies the RIP of certain order proportional to the sparsity of the
signal, then the original signal can be reconstructed even if the sampling
matrix provides a sample vector which is much smaller in size than the original
signal. This short note addresses the problem of how a linear transformation
will affect the RIP. This problem arises from the consideration of extending
the sensing matrix and the use of compressed sensing in different bases. As an
application, the result is applied to the redundant dictionary setting in
compressed sensing.
|
0901.0573
|
Asymptotic stability and capacity results for a broad family of power
adjustment rules: Expanded discussion
|
cs.IT math.FA math.IT
|
In any wireless communication environment in which a transmitter creates
interference to the others, a system of non-linear equations arises. Its form
(for 2 terminals) is p1=g1(p2;a1) and p2=g2(p1;a2), with p1, p2 power levels;
a1, a2 quality-of-service (QoS) targets; and g1, g2 functions akin to
"interference functions" in Yates (JSAC, 13(7):1341-1348, 1995). Two
fundamental questions are: (1) does the system have a solution? and, if so, (2)
what is it? (Yates, 1995) shows that IF the system has a solution, AND the
"interference functions" satisfy some simple properties, a "greedy" power
adjustment process will always converge to a solution. We show that, if the
power-adjustment functions have similar properties to those of (Yates, 1995),
and satisfy a condition of the simple form gi(1,1,...,1)<1, then the system has
a unique solution that can be found iteratively. As examples, feasibility
conditions for macro-diversity and multiple-connection receptions are given.
Informally speaking, we complement (Yates, 1995) by adding the feasibility
condition it lacked. Our analysis is based on norm concepts and Banach's
contraction-mapping principle.
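The greedy adjustment process can be sketched as a fixed-point iteration (the two linear interference functions below are invented for illustration; they satisfy the stated condition g_i(1,1) < 1, so the iteration contracts to the unique solution):

```python
def iterate_power(g, p0, tol=1e-10, max_iter=1000):
    """Greedy power adjustment: repeatedly apply p <- g(p).
    Under a contraction condition of the form g_i(1,...,1) < 1 the
    iteration converges to the unique fixed point of the system."""
    p = list(p0)
    for _ in range(max_iter):
        q = g(p)
        if max(abs(a - b) for a, b in zip(p, q)) < tol:
            return q
        p = q
    return p

# Illustrative 2-user linear interference functions (coefficients invented):
#   g1(p) = 0.3*p2 + 0.1,   g2(p) = 0.2*p1 + 0.2
# Both satisfy g_i(1, 1) < 1, so a unique solution exists.
g = lambda p: [0.3 * p[1] + 0.1, 0.2 * p[0] + 0.2]
```

Solving the linear system directly gives the fixed point p1 = 0.16/0.94, p2 = 0.2*p1 + 0.2, which the iteration reaches from any starting point.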
|
0901.0595
|
Capacity regions of two new classes of 2-receiver broadcast channels
|
cs.IT math.IT
|
Motivated by a simple broadcast channel, we generalize the notions of a less
noisy receiver and a more capable receiver to an essentially less noisy
receiver and an essentially more capable receiver respectively. We establish
the capacity regions of these classes by building on existing techniques to
obtain the characterization of the capacity region for certain new and
interesting classes of broadcast channels. We also establish the relationships
between the new classes and the existing classes.
|
0901.0597
|
On the Optimal Convergence Probability of Univariate Estimation of
Distribution Algorithms
|
cs.NE cs.AI
|
In this paper, we obtain bounds on the probability of convergence to the
optimal solution for the compact Genetic Algorithm (cGA) and the Population
Based Incremental Learning (PBIL). We also give a sufficient condition for
convergence of these algorithms to the optimal solution and compute a range of
possible values of the parameters of these algorithms for which they converge
to the optimal solution with a given confidence level.
|
0901.0598
|
A Step Forward in Studying the Compact Genetic Algorithm
|
cs.NE cs.AI
|
The compact Genetic Algorithm (cGA) is an Estimation of Distribution
Algorithm that generates offspring population according to the estimated
probabilistic model of the parent population instead of using traditional
recombination and mutation operators. The cGA only needs a small amount of
memory; therefore, it may be quite useful in memory-constrained applications.
This paper introduces a theoretical framework for studying the cGA from the
convergence point of view, in which we model the cGA by a Markov process and
approximate its behavior using an Ordinary Differential Equation (ODE). Then,
we prove that the corresponding ODE converges to local optima and stays there.
Consequently, we conclude that the cGA will converge to the local optima of the
function to be optimized.
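The cGA update analyzed here can be sketched as follows (a standard textbook-style sketch, not the paper's code; the probability vector is stored as integer counts k/pop_size so the arithmetic stays exact):

```python
import random

def cga(fitness, n_bits, pop_size=50, seed=0):
    """Compact GA: keep a probability vector p (stored as integer counts
    k_i/pop_size); each step, sample two individuals from p, compare their
    fitness, and shift every differing bit of p toward the winner by
    1/pop_size, until p has fully converged to a 0/1 vector."""
    rng = random.Random(seed)
    k = [pop_size // 2] * n_bits          # p_i = k_i / pop_size, start near 0.5
    while any(0 < ki < pop_size for ki in k):
        a = [1 if rng.random() * pop_size < ki else 0 for ki in k]
        b = [1 if rng.random() * pop_size < ki else 0 for ki in k]
        if fitness(b) > fitness(a):
            a, b = b, a                    # a is the winner
        for i in range(n_bits):
            if a[i] != b[i]:
                k[i] += 1 if a[i] == 1 else -1
    return [1 if ki == pop_size else 0 for ki in k]
```

Only the 2n counts are kept in memory, which is the memory advantage the abstract refers to; the ODE approximation in the paper describes the expected drift of exactly this update.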
|
0901.0608
|
Multicasting correlated multi-source to multi-sink over a network
|
cs.IT math.IT
|
The problem of network coding with multicast of a single source to multisink
was first studied by Ahlswede, Cai, Li and Yeung in 2000, in which they
established the celebrated max-flow min-cut theorem on non-physical
information flow over a network of independent channels. On the other hand, in
1980, Han studied the case with correlated multisource and a single sink
from the viewpoint of polymatroidal functions, in which a necessary and
sufficient condition was demonstrated for reliable transmission over the
network. This paper presents an attempt to unify both cases, leading to a
necessary and sufficient condition for reliable transmission over a
network multicasting correlated multisource to multisink. Here, the problem of
separation of source coding and channel coding is also discussed.
|
0901.0633
|
Optimal control as a graphical model inference problem
|
math.OC cs.SY
|
We reformulate a class of non-linear stochastic optimal control problems
introduced by Todorov (2007) as a Kullback-Leibler (KL) minimization problem.
As a result, the optimal control computation reduces to an inference
computation and approximate inference methods can be applied to efficiently
compute approximate optimal controls. We show how this KL control theory
contains the path integral control method as a special case. We provide an
example of a block stacking task and a multi-agent cooperative game where we
demonstrate how approximate inference can be successfully applied to instances
that are too complex for exact computation. We discuss the relation of the KL
control approach to other inference approaches to control.
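As a toy illustration of the linearly solvable structure (a simplified sketch, not the paper's block-stacking example): in Todorov's formulation the desirability z(x) = exp(-V(x)) of a first-exit problem satisfies a linear fixed-point equation z = exp(-q) ⊙ (P z) under the passive dynamics P, so the optimal value function can be computed by plain linear iteration, i.e. by an inference-style message-passing computation.

```python
import numpy as np

# 1-D chain of 6 states; state 5 is the absorbing goal (zero cost),
# interior states pay unit state cost; passive dynamics = random walk.
N, goal = 6, 5
q = np.ones(N); q[goal] = 0.0
P = np.zeros((N, N))
for i in range(N - 1):
    P[i, max(i - 1, 0)] += 0.5
    P[i, i + 1] += 0.5
P[goal, goal] = 1.0

# Linear fixed-point iteration for the desirability z = exp(-V):
#   z_i = exp(-q_i) * sum_j P_ij z_j,  with z pinned to 1 at the goal.
z = np.ones(N)
for _ in range(2000):
    z = np.exp(-q) * (P @ z)
    z[goal] = 1.0
V = -np.log(z)
print(np.round(V, 2))   # cost-to-go decreases toward the goal

# The optimal controlled dynamics tilt P toward high desirability:
u_star = P * z                      # unnormalized u*(j|i) ∝ P_ij z_j
u_star /= u_star.sum(axis=1, keepdims=True)
```

The KL-minimization view enters through u*: the optimal controlled transition kernel is the passive kernel reweighted by the desirability of the next state, which is exactly the posterior of an inference problem.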
|
0901.0643
|
An Information Theoretic Analysis of Single Transceiver Passive RFID
Networks
|
cs.IT math.IT
|
In this paper, we study single transceiver passive RFID networks by modeling
the underlying physical system as a special cascade of a certain broadcast
channel (BCC) and a multiple access channel (MAC), using a "nested codebook"
structure in between. The particular application differentiates this
communication setup from an ordinary cascade of a BCC and a MAC, and requires
certain structures such as "nested codebooks", impurity channels or additional
power constraints. We investigate this problem both for discrete alphabets,
where we characterize the achievable rate region, as well as for continuous
alphabets with additive Gaussian noise, where we provide the capacity region.
Hence, we establish the maximal achievable error-free communication rates for
this particular problem, which constitute the fundamental limit achievable by
any TDMA-based RFID protocol, as well as the achievable rate region for any
RFID protocol in the case of continuous alphabets under additive Gaussian
noise.
|
0901.0702
|
Multidimensional Flash Codes
|
cs.IT math.IT
|
Flash memory is a non-volatile computer memory composed of blocks of cells,
wherein each cell can take on q different levels corresponding to the number of
electrons it contains. Increasing the cell level is easy; however, reducing a
cell level forces all the other cells in the same block to be erased. This
erasing operation is undesirable and therefore has to be used as infrequently
as possible. We consider the problem of designing codes for this purpose, where
k bits are stored using a block of n cells with q levels each. The goal is to
maximize the number of bit writes before an erase operation is required. We
present an efficient construction of codes that can store an arbitrary number
of bits. Our construction can be viewed as an extension to multiple dimensions
of the earlier work of Jiang and Bruck, where single-dimensional codes that can
store only 2 bits were proposed.
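As a toy baseline (far simpler than the multidimensional construction described above, and not taken from the paper): the most basic flash code stores a single bit as the parity of the total number of cell increments, raising the lowest non-full cell on every flip, which yields n(q-1) bit writes between erases.

```python
class UnaryFlashCode:
    """Toy flash code: one bit stored as the parity of total cell
    increments.  Each bit flip raises the lowest non-full cell by one
    level; an erase is needed only after n*(q-1) writes."""
    def __init__(self, n, q):
        self.n, self.q = n, q
        self.cells = [0] * n
    def read(self):
        return sum(self.cells) % 2
    def write(self, bit):
        if bit == self.read():
            return True                  # no cell change needed
        for i in range(self.n):
            if self.cells[i] < self.q - 1:
                self.cells[i] += 1       # record the flip
                return True
        return False                     # block full: erase required

code = UnaryFlashCode(n=4, q=4)
writes = 0
while code.write(1 - code.read()):       # keep flipping the stored bit
    writes += 1
print(writes)   # n*(q-1) = 12 flips before an erase is forced
```

The design problem in the abstract is to do much better than this for k bits jointly, deferring the erase as long as possible; the multidimensional construction generalizes the 2-bit codes of Jiang and Bruck.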
|
0901.0733
|
Contextual hypotheses and semantics of logic programs
|
cs.LO cs.AI
|
Logic programming has developed as a rich field, built over a logical
substratum whose main constituent is a nonclassical form of negation, sometimes
coexisting with classical negation. The field has seen the advent of a number
of alternative semantics, with Kripke-Kleene semantics, the well-founded
semantics, the stable model semantics, and the answer-set semantics standing
out as the most successful. We show that all aforementioned semantics are
particular cases of a generic semantics, in a framework where classical
negation is the unique form of negation and where the literals in the bodies of
the rules can be `marked' to indicate that they can be the targets of
hypotheses. A particular semantics then amounts to choosing a particular
marking scheme and choosing a particular set of hypotheses. When a literal
belongs to the chosen set of hypotheses, all marked occurrences of that literal
in the body of a rule are assumed to be true, whereas the occurrences of that
literal that have not been marked in the body of the rule are to be derived in
order to contribute to the firing of the rule. Hence the notion of hypothetical
reasoning that is presented in this framework is not based on making global
assumptions, but more subtly on making local, contextual assumptions, taking
effect as indicated by the chosen marking scheme on the basis of the chosen set
of hypotheses. Our approach offers a unified view on the various semantics
proposed in logic programming, classical in that only classical negation is
used, and links the semantics of logic programs to mechanisms that endow
rule-based systems with the power to harness hypothetical reasoning.
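The marking mechanism can be sketched with a naive positive-only forward chainer (an illustrative simplification: the actual framework handles classical negation and recovers the full range of semantics). A marked body literal is satisfied either by derivation or by membership in the chosen hypothesis set, while an unmarked literal must be derived.

```python
def consequences(rules, hypotheses):
    """rules: list of (head, body) pairs, where body is a list of
    (literal, marked) pairs.  A marked literal may be assumed true when
    it belongs to the hypothesis set; an unmarked one must be derived."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head in facts:
                continue
            if all(lit in facts or (marked and lit in hypotheses)
                   for lit, marked in body):
                facts.add(head)          # rule fires
                changed = True
    return facts

# q is marked in p's body but unmarked in r's, and has no rule of its
# own, so only p benefits from hypothesizing q.
rules = [('p', [('q', True)]),
         ('r', [('q', False)])]
print(consequences(rules, hypotheses={'q'}))   # {'p'}
print(consequences(rules, hypotheses=set()))   # set()
```

This captures the local, contextual character of the assumptions: hypothesizing q helps exactly those rule bodies where q occurs marked, leaving unmarked occurrences to ordinary derivation.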
|