| id | title | categories | abstract |
|---|---|---|---|
0901.2698
|
On integral probability metrics, \phi-divergences and binary
classification
|
cs.IT math.IT
|
A class of distance measures on probabilities -- the integral probability
metrics (IPMs) -- is addressed: these include the Wasserstein distance, Dudley
metric, and Maximum Mean Discrepancy. IPMs have thus far mostly been used in
more abstract settings, for instance as theoretical tools in mass
transportation problems, and in metrizing the weak topology on the set of all
Borel probability measures defined on a metric space. Practical applications of
IPMs are less common, with some exceptions in the kernel machines literature.
The present work contributes a number of novel properties of IPMs, which should
help make IPMs more widely used in practice, for instance in areas where
$\phi$-divergences are currently popular.
First, to understand the relation between IPMs and $\phi$-divergences, the
necessary and sufficient conditions under which these classes intersect are
derived: the total variation distance is shown to be the only non-trivial
$\phi$-divergence that is also an IPM. This shows that IPMs are essentially
different from $\phi$-divergences. Second, empirical estimates of several IPMs
from finite i.i.d. samples are obtained, and their consistency and convergence
rates are analyzed. These estimators are shown to be easily computable, with
better rates of convergence than estimators of $\phi$-divergences. Third, a
novel interpretation is provided for IPMs by relating them to binary
classification, where it is shown that the IPM between class-conditional
distributions is the negative of the optimal risk associated with a binary
classifier. In addition, the smoothness of an appropriate binary classifier is
proved to be inversely related to the distance between the class-conditional
distributions, measured in terms of an IPM.
|
0901.2764
|
Dirty Paper Coding for Fading Channels with Partial Transmitter Side
Information
|
cs.IT math.IT
|
The problem of Dirty Paper Coding (DPC) over the Fading Dirty Paper Channel
(FDPC) Y = H(X + S)+Z, a more general version of Costa's channel, is studied
for the case in which there is partial and perfect knowledge of the fading
process H at the transmitter (CSIT) and the receiver (CSIR), respectively. A
key step in this problem is to determine the optimal inflation factor (under
Costa's choice of auxiliary random variable) when there is only partial CSIT.
Towards this end, two iterative numerical algorithms are proposed. Both of
these algorithms are seen to yield a good choice for the inflation factor.
Finally, the high-SNR (signal-to-noise ratio) behavior of the achievable rate
over the FDPC is dealt with. It is proved that FDPC (with t transmit and r
receive antennas) achieves the largest possible scaling factor of min(t,r) log
SNR even with no CSIT. Furthermore, in the high SNR regime, the optimality of
Costa's choice of auxiliary random variable is established even when there is
partial (or no) CSIT in the special case of FDPC with t <= r. Using the
high-SNR scaling-law result of the FDPC (mentioned before), it is shown that a
DPC-based multi-user transmission strategy, unlike other beamforming-based
multi-user strategies, can achieve a single-user sum-rate scaling factor over
the multiple-input multiple-output Gaussian Broadcast Channel with partial (or
no) CSIT.
|
0901.2768
|
FRFD MIMO Systems: Precoded V-BLAST with Limited Feedback Versus
Non-orthogonal STBC MIMO
|
cs.IT math.IT
|
Full-rate (FR) and full-diversity (FD) are attractive features in MIMO
systems. We refer to systems which achieve both FR and FD simultaneously as
FRFD systems. Non-orthogonal STBCs can achieve FRFD without feedback, but their
ML decoding complexities are high. V-BLAST without precoding achieves FR but
not FD. FRFD can be achieved in V-BLAST through precoding given full channel
state information at the transmitter (CSIT). However, with limited feedback
precoding, V-BLAST achieves FD, but with some rate loss. Our contribution in
this paper is two-fold: $i)$ we propose a limited feedback (LFB) precoding
scheme which achieves FRFD in $2\times 2$, $3\times 3$ and $4\times 4$ V-BLAST
systems (we refer to this scheme as FRFD-VBLAST-LFB scheme), and $ii)$
comparing the performances of the FRFD-VBLAST-LFB scheme and non-orthogonal
STBCs without feedback (e.g., Golden code, perfect codes) under ML decoding, we
show that in a $2\times 2$ MIMO system with 4-QAM/16-QAM, the FRFD-VBLAST-LFB scheme
outperforms the Golden code by about 0.6 dB; in $3\times 3$ and $4\times 4$
MIMO systems, the performance of FRFD-VBLAST-LFB scheme is comparable to the
performance of perfect codes. The FRFD-VBLAST-LFB scheme is attractive because
1) ML decoding becomes less complex compared to that of non-orthogonal STBCs,
2) the number of feedback bits required to achieve the above performance is
small, 3) in slow-fading, it is adequate to send feedback bits only
occasionally, and 4) in most practical wireless systems a feedback channel is
available (e.g., for adaptive modulation and rate/power control).
|
0901.2804
|
The Secrecy Capacity for a 3-Receiver Broadcast Channel with Degraded
Message Sets
|
cs.IT math.IT
|
This paper has been withdrawn by the author due to some errors.
|
0901.2838
|
Analytical Solution of Covariance Evolution for Regular LDPC Codes
|
cs.IT math.IT
|
The covariance evolution is a system of differential equations for the
covariance of the number of edges connected to the nodes of each residual
degree. Solving the covariance evolution, we can derive distributions of the
number of check nodes of residual degree 1, which helps us estimate the block
error probability of finite-length LDPC codes. Amraoui et al.\
resorted to numerical computations to solve the covariance evolution. In this
paper, we give the analytical solution of the covariance evolution.
|
0901.2850
|
On finitely recursive programs
|
cs.AI cs.LO
|
Disjunctive finitary programs are a class of logic programs admitting
function symbols and hence infinite domains. They have very good computational
properties, for example ground queries are decidable while in the general case
the stable model semantics is highly undecidable. In this paper we prove that a
larger class of programs, called finitely recursive programs, preserves most of
the good properties of finitary programs under the stable model semantics,
namely: (i) finitely recursive programs enjoy a compactness property; (ii)
inconsistency checking and skeptical reasoning are semidecidable; (iii)
skeptical resolution is complete for normal finitely recursive programs.
Moreover, we show how to check inconsistency and answer skeptical queries using
finite subsets of the ground program instantiation. We achieve this by
extending the splitting sequence theorem by Lifschitz and Turner: We prove that
if the input program P is finitely recursive, then the partial stable models
determined by any smooth splitting omega-sequence converge to a stable model of
P.
|
0901.2864
|
An extension of the order bound for AG codes
|
math.NT cs.IT math.AG math.IT
|
The most successful method to obtain lower bounds for the minimum distance of
an algebraic geometric code is the order bound, which generalizes the Feng-Rao
bound. We provide a significant extension of the bound that improves the order
bounds by Beelen and by Duursma and Park. We include an exhaustive numerical
comparison of the different bounds for 10168 two-point codes on the Suzuki
curve of genus g=124 over the field of 32 elements. Keywords: algebraic
geometric code, order bound, Suzuki curve.
|
0901.2903
|
Entropy Measures vs. Algorithmic Information
|
cs.IT cs.CC math.IT
|
Algorithmic entropy and Shannon entropy are two conceptually different
information measures, as the former is based on the size of programs and the
latter on probability distributions. However, it is known that, for any
recursive probability distribution, the expected value of algorithmic entropy
equals its Shannon entropy, up to a constant that depends only on the
distribution. We study whether a similar relationship holds for R\'{e}nyi and
Tsallis entropies of order $\alpha$, showing that it only holds for R\'{e}nyi
and Tsallis entropies of order 1 (i.e., for Shannon entropy). Regarding a
time-bounded analogue of this relationship, we show that, for distributions
whose cumulative probability distribution is computable in time $t(n)$, the
expected value of time-bounded algorithmic entropy (where the allotted time is
$nt(n)\log(nt(n))$) is in the same range as the unbounded version. So, for
these distributions, Shannon entropy captures the notion of computationally
accessible information. We prove that, for the universal time-bounded
distribution $\mathbf{m}^t(x)$, Tsallis and R\'{e}nyi entropies converge if and
only if $\alpha$ is greater than 1.
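As a concrete illustration of the entropies compared in this abstract, the following minimal Python sketch (not from the paper) computes Shannon, R\'enyi, and Tsallis entropies in nats and shows both families approaching Shannon entropy as $\alpha \to 1$:

```python
import math

def shannon(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, alpha):
    """Renyi entropy of order alpha (nats), alpha != 1."""
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)

def tsallis(p, alpha):
    """Tsallis entropy of order alpha, alpha != 1."""
    return (1.0 - sum(pi ** alpha for pi in p)) / (alpha - 1.0)

p = [0.5, 0.25, 0.25]
# As alpha -> 1, both renyi(p, alpha) and tsallis(p, alpha)
# approach shannon(p); for alpha > 1 the Renyi entropy is smaller.
```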
|
0901.2911
|
Gibbs Free Energy Analysis of a Quantum Analog of the Classical Binary
Symmetric Channel
|
physics.gen-ph cond-mat.stat-mech cs.IT math.IT
|
The Gibbs free energy properties of a quantum {\it send, receive}
communications system are studied. The communications model resembles the
classical Ising model of spins on a lattice in that the joint state of the
quantum system is the product of sender and receiver states. However, the
system differs from the classical case in that the sender and receiver spin
states are quantum superposition states coupled by a Hamiltonian operator. A
basic understanding of these states is directly relevant to communications
theory and indirectly relevant to computation since the product states form a
basis for entangled states. Highlights of the study include an exact method for
decimation for quantum spins. The main result is that the minimum Gibbs free
energy of the quantum system in the product state is higher (lower capacity)
than a classical system with the same parameter values. The result is both
surprising and not. The channel characteristics of the quantum system in the
product state are markedly inferior to those of the classical Ising system.
Intuitively, it would seem that capacity should suffer as a result. Yet, one
would expect entangled states, built from product states, to have better
correlation properties.
|
0901.2912
|
Weighted $\ell_1$ Minimization for Sparse Recovery with Prior
Information
|
cs.IT math.IT
|
In this paper we study the compressed sensing problem of recovering a sparse
signal from a system of underdetermined linear equations when we have prior
information about the probability of each entry of the unknown signal being
nonzero. In particular, we focus on a model where the entries of the unknown
vector fall into two sets, each with a different probability of being nonzero.
We propose a weighted $\ell_1$ minimization recovery algorithm and analyze its
performance using a Grassmann angle approach. We compute explicitly the
relationship between the system parameters (the weights, the number of
measurements, the size of the two sets, the probabilities of being non-zero) so
that an iid random Gaussian measurement matrix along with weighted $\ell_1$
minimization recovers almost all such sparse signals with overwhelming
probability as the problem dimension increases. This allows us to compute the
optimal weights. We also provide simulations to demonstrate the advantages of
the method over conventional $\ell_1$ optimization.
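A minimal sketch of weighted $\ell_1$ minimization as a linear program (illustrative only; the sets, weights, and problem sizes below are invented, and this is not the paper's analysis): minimize $\sum_i w_i t_i$ subject to $-t \le x \le t$ and $Ax = b$, with smaller weights on the set believed more likely to be nonzero.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 12, 24
A = rng.standard_normal((m, n))

# Ground-truth sparse signal: support inside the "likely" set T1 = {0..11}.
x0 = np.zeros(n)
x0[[0, 3, 5, 7]] = rng.standard_normal(4)
b = A @ x0

# Smaller weight on the likely set T1, larger weight on T2 = {12..23}.
w = np.concatenate([np.full(n // 2, 1.0), np.full(n // 2, 4.0)])

# LP variables are [x; t]: minimize w.t  s.t.  x - t <= 0, -x - t <= 0, A x = b.
c = np.concatenate([np.zeros(n), w])
I = np.eye(n)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
              bounds=(None, None))
x_hat = res.x[:n]
```

The constraints force $t_i \ge |x_i|$, so the LP optimum equals the weighted $\ell_1$ minimizer over the feasible set $Ax=b$.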
|
0901.2922
|
Scheduling in Multi-hop Wireless Networks with Priorities
|
cs.IT math.IT
|
In this paper we consider prioritized maximal scheduling in multi-hop
wireless networks, where the scheduler chooses a maximal independent set
greedily according to a sequence specified by certain priorities. We show that
if the probability distributions of the priorities are properly chosen, we can
achieve the optimal (maximum) stability region using an i.i.d. random priority
assignment process, for any set of arrival processes that satisfy the law of
large numbers. The pre-computation of the priorities is, in general, NP-hard,
but there exists a polynomial-time approximation scheme (PTAS) to achieve any
fraction of the optimal stability region. We next focus on the simple case of
static priorities and specify a greedy priority assignment algorithm, which can
achieve the same fraction of the optimal stability region as the state-of-the-art
result for Longest Queue First (LQF) schedulers. We also show that this
algorithm can be easily adapted to satisfy delay constraints in the large
deviations regime, and therefore, supports Quality of Service (QoS) for each
link.
|
0901.2924
|
Universal Complex Structures in Written Language
|
physics.soc-ph cs.CL
|
Quantitative linguistics has provided us with a number of empirical laws that
characterise the evolution of languages and competition amongst them. In terms
of language usage, one of the most influential results is Zipf's law of word
frequencies. Zipf's law appears to be universal, and may not even be unique to
human language. However, there is ongoing controversy over whether Zipf's law
is a good indicator of complexity. Here we present an alternative approach that
puts Zipf's law in the context of critical phenomena (the cornerstone of
complexity in physics) and establishes the presence of a large scale
"attraction" between successive repetitions of words. Moreover, this phenomenon
is scale-invariant and universal -- the pattern is independent of word
frequency and is observed in texts by different authors and written in
different languages. There is evidence, however, that the shape of the scaling
relation changes for words that play a key role in the text, implying the
existence of different "universality classes" in the repetition of words. These
behaviours exhibit striking parallels with complex catastrophic phenomena.
|
0901.2934
|
Noisy DPC and Application to a Cognitive Channel
|
cs.IT math.IT
|
In this paper, we first consider a channel that is contaminated by two
independent Gaussian noises $S \sim N(0,Q)$ and $Z_0 \sim N(0,N_0)$. The capacity of
this channel is computed when independent noisy versions of $S$ are known to
the transmitter and/or receiver. It is shown that the channel capacity is
greater than the capacity when $S$ is completely unknown, but is less than the
capacity when $S$ is perfectly known at the transmitter or receiver. For
example, if there is one noisy version of $S$ known at the transmitter only,
the capacity is $0.5\log(1+P/(Q(N_1/(Q+N_1))+N_0))$, where $P$ is the input
power constraint and $N_1$ is the power of the noise corrupting $S$. We then
consider a Gaussian cognitive interference channel (IC) and propose a causal
noisy dirty paper coding (DPC) strategy. We compute the achievable region using
this noisy DPC strategy and quantify the regions when it achieves the upper
bound on the rate.
|
0901.2954
|
An Upper Limit of AC Huffman Code Length in JPEG Compression
|
cs.IT cs.CC cs.CE cs.CV math.IT
|
A strategy for computing upper code-length limits of AC Huffman codes for an
8x8 block in JPEG Baseline coding is developed. The method is based on a
geometric interpretation of the DCT, and the calculated limits are within
14% of the maximum code lengths. The proposed strategy can be adapted to other
transform coding methods, e.g., MPEG-2 and MPEG-4 video compression, to
calculate close upper code-length limits for the respective processing blocks.
|
0901.3017
|
Statistical analysis of the Indus script using $n$-grams
|
cs.CL
|
The Indus script is one of the major undeciphered scripts of the ancient
world. The small size of the corpus, the absence of bilingual texts, and the
lack of definite knowledge of the underlying language have frustrated efforts at
decipherment since the discovery of the remains of the Indus civilisation.
Recently, some researchers have questioned the premise that the Indus script
encodes spoken language. Building on previous statistical approaches, we apply
the tools of statistical language processing, specifically $n$-gram Markov
chains, to analyse the Indus script for syntax. Our main results are that the
script has well-defined signs which begin and end texts, that there is
directionality and strong correlations in the sign order, and that there are
groups of signs which appear to have identical syntactic function. All these
require no {\it a priori} suppositions regarding the syntactic or semantic
content of the signs, but follow directly from the statistical analysis. Using
information theoretic measures, we find the information in the script to be
intermediate between that of a completely random and a completely fixed
ordering of signs. Our study reveals that the Indus script is a structured sign
system showing features of a formal language, but, at present, cannot
conclusively establish that it encodes {\it natural} language. Our $n$-gram
Markov model is useful for predicting signs which are missing or illegible in a
corpus of Indus texts. This work forms the basis for the development of a
stochastic grammar which can be used to explore the syntax of the Indus script
in greater detail.
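The bigram part of such an $n$-gram Markov analysis can be sketched in a few lines of Python (the sign IDs and toy corpus below are invented for illustration, not real Indus sign data):

```python
from collections import Counter

# Toy corpus of sign sequences (hypothetical sign IDs).
texts = [
    ["S1", "S2", "S3"],
    ["S1", "S2", "S4"],
    ["S1", "S2", "S3"],
    ["S5", "S2", "S3"],
]

# Count adjacent sign pairs (bigrams) over the corpus.
bigrams = Counter()
for t in texts:
    for a, b in zip(t, t[1:]):
        bigrams[(a, b)] += 1

def predict_next(sign):
    """Most likely successor of `sign` under the bigram model,
    or None if the sign never precedes another (text-final)."""
    candidates = {b: c for (a, b), c in bigrams.items() if a == sign}
    return max(candidates, key=candidates.get) if candidates else None
```

The same counts, normalized, give the Markov transition probabilities used to fill in missing or illegible signs.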
|
0901.3056
|
Factorization of Joint Probability Mass Functions into Parity Check
Interactions
|
cs.IT cs.DM math.IT math.PR
|
We show that any joint probability mass function (PMF) can be expressed as a
product of parity check factors and factors of degree one with the help of some
auxiliary variables, if the alphabet size is appropriate for defining a parity
check equation. In other words, marginalization of a joint PMF is equivalent to
a soft decoding task as long as a finite field can be constructed over the
alphabet of the PMF. In factor graph terminology this claim means that a factor
graph representing such a joint PMF always has an equivalent Tanner graph. We
provide a systematic method based on the Hilbert space of PMFs and orthogonal
projections for obtaining this factorization.
|
0901.3130
|
Secure Communication in the Low-SNR Regime: A Characterization of the
Energy-Secrecy Tradeoff
|
cs.IT math.IT
|
Secrecy capacity of a multiple-antenna wiretap channel is studied in the low
signal-to-noise ratio (SNR) regime. Expressions for the first and second
derivatives of the secrecy capacity with respect to SNR at SNR = 0 are derived.
Transmission strategies required to achieve these derivatives are identified.
In particular, it is shown that it is optimal in the low-SNR regime to transmit
in the maximum-eigenvalue eigenspace of $H_m^* H_m - (N_m/N_e) H_e^* H_e$, where
$H_m$ and $H_e$ denote the channel matrices associated with the legitimate
receiver and eavesdropper, respectively, and $N_m$ and $N_e$ are the noise
variances at the receiver and eavesdropper, respectively. Energy efficiency is analyzed by
finding the minimum bit energy required for secure and reliable communications,
and the wideband slope. Increased bit energy requirements under secrecy
constraints are quantified. Finally, the impact of fading is investigated.
|
0901.3132
|
Low-SNR Analysis of Interference Channels under Secrecy Constraints
|
cs.IT math.IT
|
In this paper, we study the secrecy rates over weak Gaussian interference
channels for different transmission schemes. We focus on the low-SNR regime and
obtain the minimum bit energy $(E_b/N_0)_{\min}$ values, and the wideband slope
regions for both TDMA and multiplexed transmission schemes. We show that
secrecy constraints introduce a penalty in both the minimum bit energy and the
slope regions. Additionally, we identify under what conditions TDMA or
multiplexed transmission is optimal. Finally, we show that TDMA is more likely
to be optimal in the presence of secrecy constraints.
|
0901.3134
|
Energy Efficiency of Fixed-Rate Wireless Transmissions under Queueing
Constraints and Channel Uncertainty
|
cs.IT math.IT
|
Energy efficiency of fixed-rate transmissions is studied in the presence of
queueing constraints and channel uncertainty. It is assumed that neither the
transmitter nor the receiver has channel side information prior to
transmission. The channel coefficients are estimated at the receiver via
minimum mean-square-error (MMSE) estimation with the aid of training symbols.
It is further assumed that the system operates under statistical queueing
constraints in the form of limitations on buffer violation probabilities. The
optimal fraction of power allocated to training is identified. Spectral
efficiency--bit energy tradeoff is analyzed in the low-power and wideband
regimes by employing the effective capacity formulation. In particular, it is
shown that the bit energy increases without bound in the low-power regime as
the average power vanishes. On the other hand, it is proven that the bit energy
diminishes to its minimum value in the wideband regime as the available
bandwidth increases. For this case, expressions for the minimum bit energy and
wideband slope are derived. Overall, energy costs of channel uncertainty and
queueing constraints are identified.
|
0901.3150
|
Matrix Completion from a Few Entries
|
cs.LG stat.ML
|
Let M be a random (alpha n) x n matrix of rank r<<n, and assume that a
uniformly random subset E of its entries is observed. We describe an efficient
algorithm that reconstructs M from |E| = O(rn) observed entries with relative
root mean square error RMSE <= C(rn/|E|)^0.5 . Further, if r=O(1), M can be
reconstructed exactly from |E| = O(n log(n)) entries. These results apply
beyond random matrices to general low-rank incoherent matrices.
This settles (in the case of bounded rank) a question left open by Candes and
Recht and improves over the guarantees for their reconstruction algorithm. The
complexity of our algorithm is O(|E|r log(n)), which opens the way to its use
for massive data sets. In the process of proving these statements, we obtain a
generalization of a celebrated result by Friedman-Kahn-Szemeredi and Feige-Ofek
on the spectrum of sparse random matrices.
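A minimal numpy sketch of the spectral step that underlies this kind of reconstruction (rescale the zero-filled observed matrix so it is an unbiased estimate of M, then project onto rank r by truncated SVD); this is only the initialization idea, not the paper's full algorithm, and the sizes and sampling rate are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 60, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-r matrix

mask = rng.random((n, n)) < 0.9          # observe ~90% of the entries
M_obs = np.where(mask, M, 0.0)           # zero-fill the unobserved entries

# Rescale so E[M_scaled] = M, then keep the top-r singular directions.
M_scaled = M_obs * (mask.size / mask.sum())
U, s, Vt = np.linalg.svd(M_scaled, full_matrices=False)
M_hat = (U[:, :r] * s[:r]) @ Vt[:r, :]

rel_rmse = np.linalg.norm(M_hat - M) / np.linalg.norm(M)
```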
|
0901.3170
|
On linear balancing sets
|
cs.IT cs.DM math.IT
|
Let n be an even positive integer and F be the field \GF(2). A word in F^n is
called balanced if its Hamming weight is n/2. A subset C \subseteq F^n is
called a balancing set if for every word y \in F^n there is a word x \in C such
that y + x is balanced. It is shown that most linear subspaces of F^n of
dimension slightly larger than 3/2\log_2(n) are balancing sets. A
generalization of this result to linear subspaces that are "almost balancing"
is also presented. On the other hand, it is shown that the problem of deciding
whether a given set of vectors in F^n spans a balancing set is NP-hard. An
application of linear balancing sets is presented for designing efficient
error-correcting coding schemes in which the codewords are balanced.
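For small n, the balancing-set property can be checked by brute force. A sketch in Python (words as n-bit integers; the generator sets below are illustrative, not from the paper):

```python
def span(gens, n):
    """All XOR-combinations of the generator words (ints of n bits)."""
    words = {0}
    for g in gens:
        words |= {w ^ g for w in words}
    return words

def is_balancing(gens, n):
    """True iff for every y in F_2^n some x in the span of gens
    makes y + x balanced, i.e. wt(y ^ x) = n/2."""
    C = span(gens, n)
    half = n // 2
    return all(
        any(bin(y ^ x).count("1") == half for x in C)
        for y in range(1 << n)
    )

# For n = 4: the span of {1000, 1100, 1010} is balancing, while the
# span of {1111} is not (no balanced word can be reached from 1000).
```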
|
0901.3192
|
End-to-End Outage Minimization in OFDM Based Linear Relay Networks
|
cs.IT math.IT
|
Multi-hop relaying is an economically efficient architecture for coverage
extension and throughput enhancement in future wireless networks. OFDM, on the
other hand, is a spectrally efficient physical layer modulation technique for
broadband transmission. As a natural consequence of combining OFDM with
multi-hop relaying, the allocation of per-hop subcarrier power and per-hop
transmission time is crucial in optimizing the network performance. This paper
is concerned with the end-to-end information outage in an OFDM based linear
relay network. Our goal is to find an optimal power and time adaptation policy
to minimize the outage probability under a long-term total power constraint. We
solve the problem in two steps. First, for any given channel realization, we
derive the minimum short-term power required to meet a target transmission
rate. We show that it can be obtained through two nested bisection loops. To
reduce computational complexity and signalling overhead, we also propose a
sub-optimal algorithm. In the second step, we determine a power threshold to
switch transmission on and off so that the long-term total power constraint is
satisfied. Numerical examples are provided to illustrate the performance of the
proposed power and time adaptation schemes with respect to other resource
adaptation schemes.
|
0901.3196
|
Statistical Performance Analysis of MDL Source Enumeration in Array
Processing
|
cs.IT math.IT
|
In this correspondence, we focus on the performance analysis of the
widely-used minimum description length (MDL) source enumeration technique in
array processing. Unfortunately, available theoretical analysis exhibit
deviation from the simulation results. We present an accurate and insightful
performance analysis for the probability of missed detection. We also show that
the statistical performance of the MDL is approximately the same under both
deterministic and stochastic signal models. Simulation results show the
superiority of the proposed analysis over available results.
|
0901.3197
|
A Low Density Lattice Decoder via Non-Parametric Belief Propagation
|
cs.IT math.IT
|
The recent work of Sommer, Feder and Shalvi presented a new family of codes
called low density lattice codes (LDLC) that can be decoded efficiently and
approach the capacity of the AWGN channel. A linear time iterative decoding
scheme which is based on a message-passing formulation on a factor graph is
given.
In the current work we report our theoretical findings regarding the relation
between the LDLC decoder and belief propagation. We show that the LDLC decoder
is an instance of non-parametric belief propagation and further connect it to
the Gaussian belief propagation algorithm. Our new results enable borrowing
knowledge from the non-parametric and Gaussian belief propagation domains into
the LDLC domain. Specifically, we give more general convergence conditions for
convergence of the LDLC decoder (under the same assumptions of the original
LDLC convergence analysis). We discuss how to extend the LDLC decoder from
Latin square to full rank, non-square matrices. We propose an efficient
construction of sparse generator matrix and its matching decoder. We report
preliminary experimental results which show our decoder has comparable symbol
to error rate compared to the original LDLC decoder.%
|
0901.3202
|
Model-Consistent Sparse Estimation through the Bootstrap
|
cs.LG stat.ML
|
We consider the least-square linear regression problem with regularization by
the $\ell^1$-norm, a problem usually referred to as the Lasso. In this paper,
we first present a detailed asymptotic analysis of model consistency of the
Lasso in low-dimensional settings. For various decays of the regularization
parameter, we compute asymptotic equivalents of the probability of correct
model selection. For a specific rate decay, we show that the Lasso selects all
the variables that should enter the model with probability tending to one
exponentially fast, while it selects all other variables with strictly positive
probability. We show that this property implies that if we run the Lasso for
several bootstrapped replications of a given sample, then intersecting the
supports of the Lasso bootstrap estimates leads to consistent model selection.
This novel variable selection procedure, referred to as the Bolasso, is
extended to high-dimensional settings by a provably consistent two-step
procedure.
|
0901.3291
|
Approaching the linguistic complexity
|
cs.CL physics.data-an
|
We analyze the rank-frequency distributions of words in selected English and
Polish texts. We compare scaling properties of these distributions in both
languages. We also study a few small corpora of Polish literary texts and find
that for a corpus consisting of texts written by different authors the basic
scaling regime is broken more strongly than in the case of comparable corpus
consisting of texts written by the same author. Similarly, for a corpus
consisting of texts translated into Polish from other languages the scaling
regime is broken more strongly than for a comparable corpus of native Polish
texts. Moreover, based on the British National Corpus, we consider the
rank-frequency distributions of the grammatically basic forms of words (lemmas)
tagged with their proper part of speech. We find that these distributions do
not scale if each part of speech is analyzed separately. The only part of
speech that independently develops a trace of scaling is verbs.
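A rank-frequency analysis of the kind described here is easy to sketch: count word frequencies, sort them by rank, and fit the slope of log-frequency against log-rank (Zipf's law predicts a slope near -1). The toy text below is illustrative only:

```python
import math
from collections import Counter

text = ("the quick brown fox jumps over the lazy dog the fox and the dog "
        "ran over the quick brown log and the fox slept").split()

# Rank-frequency list: frequencies sorted in descending order.
freqs = sorted(Counter(text).values(), reverse=True)
ranks = range(1, len(freqs) + 1)

# Least-squares slope of log f vs log r.
xs = [math.log(r) for r in ranks]
ys = [math.log(f) for f in freqs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
```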
|
0901.3314
|
Sending a Bi-Variate Gaussian over a Gaussian MAC
|
cs.IT math.IT
|
We study the power versus distortion trade-off for the distributed
transmission of a memoryless bi-variate Gaussian source over a two-to-one
average-power limited Gaussian multiple-access channel. In this problem, each
of two separate transmitters observes a different component of a memoryless
bi-variate Gaussian source. The two transmitters then describe their source
component to a common receiver via an average-power constrained Gaussian
multiple-access channel. From the output of the multiple-access channel, the
receiver wishes to reconstruct each source component with the least possible
expected squared-error distortion. Our interest is in characterizing the
distortion pairs that are simultaneously achievable on the two source
components.
We present sufficient conditions and necessary conditions for the
achievability of a distortion pair. These conditions are expressed as a
function of the channel signal-to-noise ratio (SNR) and of the source
correlation. In several cases the necessary conditions and sufficient
conditions are shown to agree. In particular, we show that if the channel SNR
is below a certain threshold, then an uncoded transmission scheme is optimal.
We also derive the precise high-SNR asymptotics of an optimal scheme.
|
0901.3403
|
Distributed Compressive Sensing
|
cs.IT math.IT
|
Compressive sensing is a signal acquisition framework based on the revelation
that a small collection of linear projections of a sparse signal contains
enough information for stable recovery. In this paper we introduce a new theory
for distributed compressive sensing (DCS) that enables new distributed coding
algorithms for multi-signal ensembles that exploit both intra- and inter-signal
correlation structures. The DCS theory rests on a new concept that we term the
joint sparsity of a signal ensemble. Our theoretical contribution is to
characterize the fundamental performance limits of DCS recovery for jointly
sparse signal ensembles in the noiseless measurement setting; our result
connects single-signal, joint, and distributed (multi-encoder) compressive
sensing. To demonstrate the efficacy of our framework and to show that
additional challenges such as computational tractability can be addressed, we
study in detail three example models for jointly sparse signals. For these
models, we develop practical algorithms for joint recovery of multiple signals
from incoherent projections. In two of our three models, the results are
asymptotically best-possible, meaning that both the upper and lower bounds
match the performance of our practical algorithms. Moreover, simulations
indicate that the asymptotics take effect with just a moderate number of
signals. DCS is immediately applicable to a range of problems in sensor arrays
and networks.
|
0901.3408
|
Limits of Deterministic Compressed Sensing Considering Arbitrary
Orthonormal Basis for Sparsity
|
cs.IT math.IT
|
It was previously shown that proper random linear samples of a finite discrete
signal (vector) which has a sparse representation in an orthonormal basis make
it possible (with probability 1) to recover the original signal. Moreover, the
choice of the linear samples does not depend on the sparsity domain. In this
paper, we will show that the replacement of random linear samples with
deterministic functions of the signal (not necessarily linear) will not result
in unique reconstruction of k-sparse signals except for k=1. We will show that
there exist deterministic nonlinear sampling functions for unique
reconstruction of 1-sparse signals while deterministic linear samples fail to
do so.
|
0901.3467
|
Erasure Codes with a Banded Structure for Hybrid Iterative-ML Decoding
|
cs.IT math.IT
|
This paper presents new FEC codes for the erasure channel, LDPC-Band, that
have been designed so as to optimize a hybrid iterative-Maximum Likelihood (ML)
decoding. Indeed, these codes simultaneously feature a sparse parity check
matrix, which allows an efficient use of iterative LDPC decoding, and a
generator matrix with a band structure, which allows fast ML decoding on the
erasure channel. The combination of these two decoding algorithms leads to
erasure codes achieving a very good trade-off between complexity and erasure
correction capability.
|
0901.3475
|
Efficient decoding algorithm using triangularity of $\mathbf{R}$ matrix of
QR-decomposition
|
cs.IT math.IT
|
An efficient decoding algorithm named `divided decoder' is proposed in this
paper. Divided decoding can be combined with any decoder that uses
QR-decomposition and offers different trade-offs between performance and
complexity. It also allows various combinations of two or more different
search algorithms, and hence gives the algorithms that use it flexibility in
error rate and complexity. We calculate diversity orders and upper bounds on
error rates for typical models when these models are solved by divided
decoding with a sphere decoder, and discuss the effect of divided decoding on
complexity. Simulation results for divided decoding combined with a sphere
decoder under different splitting indices agree with the theoretical
analysis.
|
0901.3574
|
Automating Access Control Logics in Simple Type Theory with LEO-II
|
cs.LO cs.AI
|
Garg and Abadi recently proved that prominent access control logics can be
translated in a sound and complete way into modal logic S4. We have previously
outlined how normal multimodal logics, including monomodal logics K and S4, can
be embedded in simple type theory (which is also known as higher-order logic)
and we have demonstrated that the higher-order theorem prover LEO-II can
automate reasoning in and about them. In this paper we combine these results
and describe a sound and complete embedding of different access control logics
in simple type theory. Employing this framework we show that the off-the-shelf
theorem prover LEO-II can be applied to automate reasoning in prominent access
control logics.
|
0901.3580
|
Feedback Capacity of the Gaussian Interference Channel to Within 1.7075
Bits: the Symmetric Case
|
cs.IT math.IT
|
We characterize the symmetric capacity to within 1.7075 bits/s/Hz for the
two-user Gaussian interference channel with feedback. The result makes use of a
deterministic model to provide insights into the Gaussian channel. We derive a
new outer bound to show that a proposed achievable scheme can achieve the
symmetric capacity to within 1.7075 bits for all channel parameters. From this
result, we show that feedback provides unbounded gain, i.e., the gain becomes
arbitrarily large for certain channel parameters. This result is surprising
because feedback has so far been known to provide only power gain (bounded
gain) in the context of multiple access channels and broadcast channels.
|
0901.3585
|
Resource Adaptive Agents in Interactive Theorem Proving
|
cs.LO cs.AI
|
We introduce a resource adaptive agent mechanism which supports the user in
interactive theorem proving. The mechanism uses a two layered architecture of
agent societies to suggest appropriate commands together with possible command
argument instantiations. Experiments with this approach show that its
effectiveness can be further improved by introducing a resource concept. In
this paper we provide an abstract view on the overall mechanism, motivate the
necessity of an appropriate resource concept and discuss its realization within
the agent architecture.
|
0901.3590
|
On the Dual Formulation of Boosting Algorithms
|
cs.LG cs.CV
|
We study boosting algorithms from a new perspective. We show that the
Lagrange dual problems of AdaBoost, LogitBoost and soft-margin LPBoost with
generalized hinge loss are all entropy maximization problems. By looking at the
dual problems of these boosting algorithms, we show that the success of
boosting algorithms can be understood in terms of maintaining a better margin
distribution by maximizing margins and at the same time controlling the margin
variance. We also theoretically prove that, approximately, AdaBoost maximizes
the average margin rather than the minimum margin. The duality formulation
also enables us to develop column-generation-based optimization algorithms,
which are totally corrective. We show that they exhibit classification results
almost identical to those of standard stage-wise additive boosting
algorithms, but with much faster convergence rates. Therefore fewer weak
classifiers are needed to build the ensemble using our proposed optimization
technique.
|
0901.3596
|
Joint source-channel with side information coding error exponents
|
cs.IT math.IT
|
In this paper, we study upper and lower bounds on the joint source-channel
coding error exponent with decoder side-information. The results in this
paper are non-trivial extensions of Csiszar's classical paper [5]. Unlike the
joint source-channel coding result in [5], it is not obvious whether
the lower and upper bounds are equivalent even if the channel coding
error exponent is known. For a class of channels, including the symmetric
channels, we apply a game-theoretic result to establish the existence of a
saddle point and hence prove that the lower and upper bounds are the same if
the channel coding error exponent is known. More interestingly, we show that
encoder side-information does not increase the error exponents in this case.
|
0901.3608
|
A remark on higher order RUE-resolution with EXTRUE
|
cs.AI cs.LO
|
We show that a prominent counterexample for the completeness of first order
RUE-resolution does not apply to the higher order RUE-resolution approach
EXTRUE.
|
0901.3630
|
Decay of Correlations in Low Density Parity Check Codes: Low Noise
Regime
|
cs.IT math.IT
|
Consider transmission over a binary-input additive white Gaussian noise
channel using a fixed low-density parity check code. We consider the posterior
measure over the code bits and the corresponding correlation between two code bits,
averaged over the noise realizations. We show that for low enough noise
variance this average correlation decays exponentially fast with the graph
distance between the code bits. One consequence of this result is that for low
enough noise variance the GEXIT functions (further averaged over a standard
code ensemble) of the belief propagation and optimal decoders are the same.
|
0901.3751
|
Sorting improves word-aligned bitmap indexes
|
cs.DB
|
Bitmap indexes must be compressed to reduce input/output costs and minimize
CPU usage. To accelerate logical operations (AND, OR, XOR) over bitmaps, we use
techniques based on run-length encoding (RLE), such as Word-Aligned Hybrid
(WAH) compression. These techniques are sensitive to the order of the rows: a
simple lexicographical sort can divide the index size by 9 and make indexes
several times faster. We investigate row-reordering heuristics. Simply
permuting the columns of the table can increase the sorting efficiency by 40%.
Secondary contributions include efficient algorithms to construct and aggregate
bitmaps. The effect of word length is also reviewed by constructing 16-bit,
32-bit and 64-bit indexes. Using 64-bit CPUs, we find that 64-bit indexes are
slightly faster than 32-bit indexes despite being nearly twice as large.
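The effect of row order on RLE-based compression can be seen with a toy run counter; the sketch below counts runs per bit column directly and is not the WAH codec itself (the row data is a contrived worst case, chosen by us for illustration):

```python
def total_runs(rows):
    """Total number of RLE runs, summed over all bit columns of the index."""
    runs = 0
    for c in range(len(rows[0])):
        col = [r[c] for r in rows]
        runs += 1 + sum(col[i] != col[i - 1] for i in range(1, len(col)))
    return runs

# a worst-case row order for RLE, and the same rows lexicographically sorted
rows = ["10", "01", "10", "01", "10", "01"]
print(total_runs(rows), total_runs(sorted(rows)))  # 12 runs vs 4 runs
```

Sorting groups identical rows together, so each bit column changes value fewer times, which is exactly what run-length encoders exploit.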
|
0901.3762
|
Enhancing the capabilities of LIGO time-frequency plane searches through
clustering
|
gr-qc astro-ph.IM cs.CV physics.data-an
|
One class of gravitational wave signals LIGO is searching for consists of
short duration bursts of unknown waveforms. Potential sources include core
collapse supernovae, gamma ray burst progenitors, and mergers of binary black
holes or neutron stars. We present a density-based clustering algorithm to
improve the performance of time-frequency searches for such gravitational-wave
bursts when they are extended in time and/or frequency, and not sufficiently
well known to permit matched filtering. We have implemented this algorithm as
an extension to the QPipeline, a gravitational-wave data analysis pipeline for
the detection of bursts, which currently determines the statistical
significance of events based solely on the peak significance observed in
minimum uncertainty regions of the time-frequency plane. Density based
clustering improves the performance of such a search by considering the
aggregate significance of arbitrarily shaped regions in the time-frequency
plane and rejecting the isolated minimum uncertainty features expected from the
background detector noise. In this paper, we present test results for simulated
signals and demonstrate that density based clustering improves the performance
of the QPipeline for signals extended in time and/or frequency.
|
0901.3769
|
Deceptiveness and Neutrality - the ND family of fitness landscapes
|
cs.AI
|
When a considerable number of mutations have no effect on fitness values,
the fitness landscape is said to be neutral. In order to study the interplay
between neutrality, which exists in many real-world applications, and the
performance of metaheuristics, it is useful to design landscapes whose
neutral degree distribution can be tuned precisely. Even though many neutral
landscape models have already been designed, none of them is general enough
to create landscapes with specific neutral degree distributions. We propose
three steps to design such landscapes: first, using an algorithm, we
construct a landscape whose distribution roughly fits the target one; then we
use a simulated annealing heuristic to bring the two distributions closer;
finally, we assign fitness values to each neutral network. Using this new
family of fitness landscapes, we are then able to highlight the interplay
between deceptiveness and neutrality.
|
0901.3795
|
On a random number of disorders
|
math.PR cs.IT math.IT math.ST stat.TH
|
We observe a random sequence with the following properties: it consists of
three segments, each a homogeneous Markov process. Each segment has its own
one-step transition probability law, and the length of each segment is
unknown and random. That is, at two random successive moments (which may
coincide, and may also equal zero) the source of observations changes, and
the first observation in the new segment is chosen according to the new
transition probability, starting from the last state of the previous segment.
As a result, the number of homogeneous segments is random. The transition
probabilities of each process are known and an a priori distribution of the
disorder moments is given. Previous research on this problem has addressed
various questions concerning the distribution changes. The random number of
distributional segments creates new problems in the solutions, compared with
the analysis of the model with a deterministic number of segments. Two cases
are presented in detail. In the first, the objective is to stop on or between
the disorder moments, while in the second the objective is to find the
strategy that immediately detects the distribution changes. Both problems are
reformulated as optimal stopping of the observed sequences. A detailed
analysis of the problem is presented to show the form of the optimal decision
function.
|
0901.3809
|
Interference channel capacity region for randomized fixed-composition
codes
|
cs.IT math.IT
|
Randomized fixed-composition codes with optimal decoding error exponents were
studied in \cite{Raul_ISIT,Raul_journal} for the finite alphabet interference
channel (IFC) with two transmitter-receiver pairs. In this paper we
investigate the capacity region of the randomized fixed-composition coding
scheme and give a complete characterization of it. The inner bound is derived
by showing the existence of a positive error exponent within the capacity
region; a simple universal decoding rule is given. The tight outer bound is
derived by extending a technique, first developed in \cite{Dueck_RC} for
single input-output channels, to interference channels. It is shown that even
with a sophisticated time-sharing scheme among randomized fixed-composition
codes, the capacity region of randomized fixed-composition coding is no
bigger than the known Han-Kobayashi \cite{Han_Kobayashi} capacity region.
This suggests that the average behavior of random codes is not sufficient to
obtain new capacity regions.
|
0901.3820
|
On the rate distortion function of Bernoulli Gaussian sequences
|
cs.IT math.IT
|
In this paper, we study the rate distortion function of the i.i.d. sequence
of products of a Bernoulli $p$ random variable and a Gaussian random variable
$\sim N(0,1)$. We use a new technique in the derivation of the lower
bound in which we establish the duality between channel coding and lossy source
coding in the strong sense. We improve the lower bound on the rate distortion
function over the best known lower bound by $p\log_2\frac{1}{p}$ if distortion
$D$ is small. This has some interesting implications on sparse signals where
$p$ is small since the known gap between the lower and upper bound is $H(p)$.
This improvement in the lower bound shows that the lower and upper bounds are
almost identical for sparse signals with small distortion because
$\lim\limits_{p\to 0}\frac{p\log_2\frac{1}{p}}{H(p)}=1$.
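The closing limit is easy to check numerically; a small sketch (function names are ours):

```python
import math

def binary_entropy(p):
    """H(p) = -p log2 p - (1 - p) log2(1 - p), in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gap_ratio(p):
    """p log2(1/p) divided by H(p); the abstract states this tends to 1
    as p -> 0, i.e. the lower-bound improvement closes most of the H(p) gap."""
    return p * math.log2(1 / p) / binary_entropy(p)

for p in (1e-1, 1e-3, 1e-6, 1e-9):
    print(p, gap_ratio(p))  # the ratio climbs toward 1 as p shrinks
```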
|
0901.3839
|
Remembering what we like: Toward an agent-based model of Web traffic
|
cs.HC cs.CY cs.IR cs.MA physics.soc-ph
|
Analysis of aggregate Web traffic has shown that PageRank is a poor model of
how people actually navigate the Web. Using the empirical traffic patterns
generated by a thousand users over the course of two months, we characterize
the properties of Web traffic that cannot be reproduced by Markovian models, in
which destinations are independent of past decisions. In particular, we show
that the diversity of sites visited by individual users is smaller and more
broadly distributed than predicted by the PageRank model; that link traffic is
more broadly distributed than predicted; and that the time between consecutive
visits to the same site by a user is less broadly distributed than predicted.
To account for these discrepancies, we introduce a more realistic navigation
model in which agents maintain individual lists of bookmarks that are used as
teleportation targets. The model can also account for branching, a traffic
property caused by browser features such as tabs and the back button. The model
reproduces aggregate traffic patterns such as site popularity, while also
generating more accurate predictions of diversity, link traffic, and return
time distributions. This model for the first time allows us to capture the
extreme heterogeneity of aggregate traffic measurements while explaining the
more narrowly focused browsing patterns of individual users.
|
0901.3880
|
Capacity Scaling of Single-source Wireless Networks: Effect of Multiple
Antennas
|
cs.IT math.IT
|
We consider a wireless network in which a single source node located at the
center of a unit area having $m$ antennas transmits messages to $n$ randomly
located destination nodes in the same area having a single antenna each. To
achieve the sum-rate proportional to $m$ by transmit beamforming, channel state
information (CSI) is essentially required at the transmitter (CSIT), which is
hard to obtain in practice because of the time-varying nature of the channels
and feedback overhead. We show that, even without CSIT, the achievable sum-rate
scales as $\Theta(m\log m)$ if cooperation between receivers is allowed. By
deriving the cut-set upper bound, we also show that $\Theta(m\log m)$ scaling
is optimal. Specifically, for $n=\omega(m^2)$, the simple TDMA-based
quantize-and-forward is enough to achieve the capacity scaling. For
$n=\omega(m)$ and $n=\operatorname{O}(m^2)$, on the other hand, we apply the
hierarchical cooperation to achieve the capacity scaling.
|
0901.3910
|
Simulation of mitochondrial metabolism using multi-agents system
|
q-bio.SC cs.MA q-bio.QM
|
Metabolic pathways describe chains of enzymatic reactions. Modelling them is
a key point in understanding living systems. An enzymatic reaction is an
interaction between one or several metabolites (substrates) and an enzyme (a
simple protein or an enzymatic complex built of several subunits). In our
Mitochondria in Silico Project, MitoScop, we study the metabolism of the
mitochondrion, an intra-cellular organelle. Many ordinary differential
equation models are available in the literature. They fit experimental
results on flux values inside the metabolic pathways well, but many features
are difficult to transcribe with such models: localization of enzymes, rules
about the reaction scheduler, etc. Moreover, a model of a significant part of
mitochondrial metabolism can become very complex and contain more than 50
equations. In this context, multi-agent systems appear as an alternative for
modelling metabolic pathways. First, we have considered membrane design. The
mitochondrion is a particular case because the inner mitochondrial space,
i.e. the matrix space, is delimited by two membranes: the inner and the outer
one. In addition to matrix enzymes, other enzymes are located inside the
membranes or in the inter-membrane space. An analysis of mitochondrial
metabolism must take this kind of architecture into account.
|
0901.3923
|
Model-Based Event Detection in Wireless Sensor Networks
|
cs.NI cs.CV
|
In this paper we present an application of techniques from statistical signal
processing to the problem of event detection in wireless sensor networks used
for environmental monitoring. The proposed approach uses the well-established
Principal Component Analysis (PCA) technique to build a compact model of the
observed phenomena that is able to capture daily and seasonal trends in the
collected measurements. We then use the divergence between actual measurements
and model predictions to detect the existence of discrete events within the
collected data streams. Our preliminary results show that this event detection
mechanism is sensitive enough to detect the onset of rain events using the
temperature modality of a wireless sensor network.
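A minimal sketch of the PCA-residual idea on synthetic data (the signal model, window length, number of components, and the injected "event" are illustrative assumptions of ours, not the paper's deployment):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(200)

# synthetic "daily cycle" training windows: shared sinusoid + sensor noise
train = np.array([np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(200)
                  for _ in range(30)])

# compact PCA model of normal behaviour: mean + top-3 principal components
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
components = Vt[:3]

def residual(x):
    """Divergence between a measurement window and its PCA reconstruction."""
    z = x - mean
    return np.linalg.norm(z - components.T @ (components @ z))

normal = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(200)
event = normal.copy()
event[80:120] -= 2.0  # abrupt shift, standing in for a rain-induced drop

print(residual(normal), residual(event))  # the event residual is far larger
```

Thresholding the residual then flags windows that the model of normal trends cannot reconstruct.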
|
0901.3939
|
Effectively Searching Maps in Web Documents
|
cs.DL cs.IR
|
Maps are an important source of information in archaeology and other
sciences. Users want to search for historical maps to determine recorded
history of the political geography of regions at different eras, to find out
where exactly archaeological artifacts were discovered, etc. Currently, they
have to use a generic search engine and add the term map along with other
keywords to search for maps. This crude method will generate a significant
number of false positives that the user will need to cull through to get the
desired results. To reduce their manual effort, we propose an automatic map
identification, indexing, and retrieval system that enables users to search and
retrieve maps appearing in a large corpus of digital documents using simple
keyword queries. We identify features that can help in distinguishing maps from
other figures in digital documents and show how a Support-Vector-Machine-based
classifier can be used to identify maps. We propose map-level-metadata e.g.,
captions, references to the maps in text, etc. and document-level metadata,
e.g., title, abstract, citations, how recent the publication is, etc. and show
how they can be automatically extracted and indexed. Our novel ranking
algorithm weights different metadata fields differently and also uses the
document-level metadata to help rank retrieved maps. Empirical evaluations show
which features should be selected and which metadata fields should be weighted
more. We also demonstrate improved retrieval results in comparison to
adaptations of existing methods for map retrieval. Our map search engine has
been deployed in an online map-search system that is part of the Blind-Review
digital library system.
|
0901.3948
|
OFDM Channel Estimation Based on Adaptive Thresholding for Sparse Signal
Detection
|
cs.IT math.IT
|
Wireless OFDM channels can be approximated by a time varying filter with
sparse time domain taps. Recent achievements in sparse signal processing such
as compressed sensing have facilitated the use of sparsity in estimation, which
improves the performance significantly. The problem of these sparse-based
methods is the need for a stable transformation matrix which is not fulfilled
in the current transmission setups. To assist the analog filtering at the
receiver, the transmitter leaves some of the subcarriers at both edges of the
bandwidth unused which results in an ill-conditioned DFT submatrix. To overcome
this difficulty we propose Adaptive Thresholding for Sparse Signal Detection
(ATSSD). Simulation results confirm that the proposed method works well in
time-invariant and, especially, time-varying channels, where other methods
may not work as well.
|
0901.3950
|
Efficient Sampling of Sparse Wideband Analog Signals
|
cs.IT math.IT
|
Periodic nonuniform sampling is a known method to sample spectrally sparse
signals below the Nyquist rate. This strategy relies on the implicit assumption
that the individual samplers are exposed to the entire frequency range. This
assumption becomes impractical for wideband sparse signals. The current paper
proposes an alternative sampling stage that does not require a full-band front
end. Instead, signals are captured with an analog front end that consists of a
bank of multipliers and lowpass filters whose cutoff is much lower than the
Nyquist rate. The problem of recovering the original signal from the low-rate
samples can be studied within the framework of compressive sampling. An
appropriate parameter selection ensures that the samples uniquely determine the
analog input. Moreover, the analog input can be stably reconstructed with
digital algorithms. Numerical experiments support the theoretical analysis.
|
0901.3984
|
Stop the Chase
|
cs.DB
|
The chase procedure, an algorithm proposed 25+ years ago to fix constraint
violations in database instances, has been successfully applied in a variety of
contexts, such as query optimization, data exchange, and data integration. Its
practicability, however, is limited by the fact that - for an arbitrary set of
constraints - it might not terminate; even worse, chase termination is an
undecidable problem in general. In response, the database community has
proposed sufficient restrictions on top of the constraints that guarantee chase
termination on any database instance. In this paper, we propose a novel
sufficient termination condition, called inductive restriction, which strictly
generalizes previous conditions, but can be checked as efficiently.
Furthermore, we motivate and study the problem of data-dependent chase
termination and, as a key result, present sufficient termination conditions
w.r.t. fixed instances. They are strictly more general than inductive
restriction and might guarantee termination although the chase does not
terminate in the general case.
|
0901.3987
|
Improved Delay Estimates for a Queueing Model for Random Linear Coding
for Unicast
|
cs.IT math.IT
|
Consider a lossy communication channel for unicast with zero-delay feedback.
For this communication scenario, a simple retransmission scheme is optimum with
respect to delay. An alternative approach is to use random linear coding in
automatic repeat-request (ARQ) mode. We extend the work of Shrader and
Ephremides, by deriving an expression for the delay of random linear coding
over field of infinite size. Simulation results for various field sizes are
also provided.
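The retransmission baseline mentioned above is easy to simulate; a sketch under a memoryless erasure assumption (parameter and function names are ours):

```python
import random

def avg_transmissions(erasure_p, trials=20000, seed=1):
    """Mean number of transmissions of plain ARQ with zero-delay feedback:
    keep retransmitting until the erasure channel delivers the packet."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        n = 1
        while rng.random() < erasure_p:  # packet erased, retransmit
            n += 1
        total += n
    return total / trials

avg = avg_transmissions(0.5)  # geometric delay, 1/(1 - 0.5) = 2 in expectation
```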
|
0901.3990
|
Du corpus au dictionnaire
|
cs.CL cs.IR
|
In this article, we propose an automatic process to build multi-lingual
lexico-semantic resources. The goal of these resources is to browse
semantically textual information contained in texts of different languages.
This method uses a mathematical model called Atlas s\'emantiques in order to
represent the different senses of each word. It uses the linguistic relations
between words to create graphs that are projected into a semantic space. These
projections constitute semantic maps that denote the sense trends of each given
word. This model is fed with syntactic relations between words extracted from a
corpus. Therefore, the lexico-semantic resource produced describes all the
words and all their meanings observed in the corpus. The sense trends are
expressed by syntactic contexts, typical for a given meaning. The link between
each sense trend and the utterances used to build the sense trend are also
stored in an index. Thus all the instances of a word in a particular sense
are linked and can be browsed easily. By using several corpora in different
languages, several resources are built that correspond with each other across
languages. This makes it possible to browse information across languages
thanks to translations of syntactic contexts (even if some of them are
partial).
|
0901.4004
|
Mining for adverse drug events with formal concept analysis
|
cs.AI
|
Pharmacovigilance databases consist of several case reports involving drugs
and adverse events (AEs). Some methods are applied consistently to highlight
all signals, i.e. all statistically significant associations between a drug
and an AE. These methods are appropriate for the verification of more complex
relationships involving one or several drug(s) and AE(s) (e.g. syndromes or
interactions), but do not address their identification. We propose a method
for the extraction of these relationships based on Formal Concept Analysis
(FCA) associated with disproportionality measures. This method identifies all
sets of drugs and AEs which are potential signals, syndromes or interactions.
Compared to a previous experiment with disproportionality analysis without
FCA, the addition of FCA was more efficient at identifying false positives
related to concomitant drugs.
|
0901.4012
|
Cross-situational and supervised learning in the emergence of
communication
|
cs.LG
|
Scenarios for the emergence or bootstrap of a lexicon involve the repeated
interaction between at least two agents who must reach a consensus on how to
name N objects using H words. Here we consider minimal models of two types of
learning algorithms: cross-situational learning, in which the individuals
determine the meaning of a word by looking for something in common across all
observed uses of that word, and supervised operant conditioning learning, in
which there is strong feedback between individuals about the intended meaning
of the words. Despite the stark differences between these learning schemes, we
show that they yield the same communication accuracy in the realistic limits of
large N and H, which coincides with the result of the classical occupancy
problem of randomly assigning N objects to H words.
|
0901.4068
|
On the Sum Capacity of A Class of Cyclically Symmetric Deterministic
Interference Channels
|
cs.IT math.IT
|
Certain deterministic interference channels have been shown to accurately
model Gaussian interference channels in the asymptotic low-noise regime.
Motivated by this correspondence, we investigate a K user-pair, cyclically
symmetric, deterministic interference channel in which each receiver
experiences interference only from its neighboring transmitters (Wyner model).
We establish the sum capacity for a large set of channel parameters, thus
generalizing previous results for the 2-pair case.
|
0901.4129
|
Quasi-Cyclic LDPC Codes: Influence of Proto- and Tanner-Graph Structure
on Minimum Hamming Distance Upper Bounds
|
cs.IT cs.DM math.IT
|
Quasi-cyclic (QC) low-density parity-check (LDPC) codes are an important
instance of proto-graph-based LDPC codes. In this paper we present upper bounds
on the minimum Hamming distance of QC LDPC codes and study how these upper
bounds depend on graph structure parameters (like variable degrees, check node
degrees, girth) of the Tanner graph and of the underlying proto-graph.
Moreover, for several classes of proto-graphs we present explicit QC LDPC code
constructions that achieve (or come close to) the respective minimum Hamming
distance upper bounds. Because of the tight algebraic connection between QC
codes and convolutional codes, we can state similar results for the free
Hamming distance of convolutional codes. In fact, some QC code statements are
established by first proving the corresponding convolutional code statements
and then using a result by Tanner that says that the minimum Hamming distance
of a QC code is upper bounded by the free Hamming distance of the convolutional
code that is obtained by "unwrapping" the QC code.
|
0901.4134
|
Distributed Lossy Averaging
|
cs.IT math.IT
|
An information theoretic formulation of the distributed averaging problem
previously studied in computer science and control is presented. We assume a
network with m nodes each observing a WGN source. The nodes communicate and
perform local processing with the goal of computing the average of the sources
to within a prescribed mean squared error distortion. The network rate
distortion function R^*(D) for a 2-node network with correlated Gaussian
sources is established. A general cutset lower bound on R^*(D) is established
and shown to be achievable to within a factor of 2 via a centralized protocol
over a star network. A lower bound on the network rate distortion function for
distributed weighted-sum protocols, which is larger in order than the cutset
bound by a factor of log m is established. An upper bound on the network rate
distortion function for gossip-based weighted-sum protocols, which is only log
log m larger in order than the lower bound for a complete graph network, is
established. The results suggest that using distributed protocols results in a
factor of log m increase in order relative to centralized protocols.
|
0901.4137
|
Practical Robust Estimators for the Imprecise Dirichlet Model
|
math.ST cs.LG stat.ML stat.TH
|
Walley's Imprecise Dirichlet Model (IDM) for categorical i.i.d. data extends
the classical Dirichlet model to a set of priors. It overcomes several
fundamental problems which other approaches to uncertainty suffer from. Yet, to
be useful in practice, one needs efficient ways of computing the imprecise
(robust) sets or intervals. The main objective of this work is to
derive exact, conservative, and approximate, robust and credible interval
estimates under the IDM for a large class of statistical estimators, including
the entropy and mutual information.
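For a flavour of the quantities involved, the IDM replaces a point estimate of a category probability with an interval of lower and upper expectations; a minimal sketch of that basic interval (the paper's robust estimators for entropy and mutual information are more involved than this):

```python
def idm_interval(counts, i, s=2.0):
    """Walley's IDM probability interval for category i: lower and upper
    expectations over the set of Dirichlet priors with hyperparameter s."""
    n = sum(counts)
    return counts[i] / (n + s), (counts[i] + s) / (n + s)

lo, hi = idm_interval([7, 3], 0)  # interval around the point estimate 7/10
print(lo, hi)
```

The interval always contains the relative-frequency estimate and shrinks as the sample grows, which is what makes the resulting inferences robust to the choice of prior.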
|
0901.4147
|
Determination of Minimal Sets of Control Places for Safe Petri Nets
|
cs.IT math.IT
|
Our objective is to design a controlled system using a simple method for
discrete event systems based on Petri nets. It is possible to construct the
Petri net model of a system and of the specification separately. By
synchronous composition of both models, the desired closed-loop model is
deduced. Often, uncontrollable transitions lead to forbidden states. The
problem of forbidden states is solved using linear constraints: a set of
linear constraints makes it possible to forbid the reachability of these
states. Generally, the number of these forbidden states, and consequently the
number of constraints, is large and leads to a great number of control
places. A systematic method to reduce the size and the number of constraints
for safe Petri nets is given. Using a method based on Petri net invariants,
maximally permissive controllers are determined. The size of the controller
is close to the size of the specified model, and it can be implemented on a
PLC in a structural way.
|
0901.4180
|
Google distance between words
|
cs.CL
|
Cilibrasi and Vitanyi have demonstrated that it is possible to extract the
meaning of words from the world-wide web. To achieve this, they rely on the
number of webpages that are found through a Google search containing a given
word and they associate the page count to the probability that the word appears
on a webpage. Thus, conditional probabilities allow them to correlate one word
with another word's meaning. Furthermore, they have developed a similarity
distance function that gauges how closely related a pair of words is. We
present a specific counterexample to the triangle inequality for this
similarity distance function.
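The similarity distance in question is the Normalized Google Distance (NGD). A minimal sketch of the formula, with hypothetical page counts standing in for real Google hit counts:

```python
import math

def ngd(f_x, f_y, f_xy, n):
    """Normalized Google Distance computed from page counts.

    f_x, f_y: number of pages containing each word alone;
    f_xy: number of pages containing both words; n: total pages indexed.
    """
    log_fx, log_fy = math.log(f_x), math.log(f_y)
    return (max(log_fx, log_fy) - math.log(f_xy)) / (math.log(n) - min(log_fx, log_fy))

# Hypothetical counts: words that always co-occur are at distance 0.
print(ngd(1000, 1000, 1000, 10**9))  # -> 0.0
```

Because NGD is only a similarity measure and not a metric, triples of words can be found (as the paper shows with a concrete counterexample) for which the triangle inequality fails.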
|
0901.4192
|
Fixing Convergence of Gaussian Belief Propagation
|
cs.IT cs.LG math.IT stat.CO
|
Gaussian belief propagation (GaBP) is an iterative message-passing algorithm
for inference in Gaussian graphical models. It is known that when GaBP
converges it converges to the correct MAP estimate of the Gaussian random
vector and simple sufficient conditions for its convergence have been
established. In this paper we develop a double-loop algorithm for forcing
convergence of GaBP. Our method computes the correct MAP estimate even in cases
where standard GaBP would not have converged. We further extend this
construction to compute least-squares solutions of over-constrained linear
systems. We believe that our construction has numerous applications, since the
GaBP algorithm is linked to solution of linear systems of equations, which is a
fundamental problem in computer science and engineering. As a case study, we
discuss the linear detection problem. We show that using our new construction,
we are able to force convergence of Montanari's linear detection algorithm, in
cases where it would originally fail. As a consequence, we are able to increase
significantly the number of users that can transmit concurrently.
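Because GaBP is linked to solving linear systems Ax = b, the plain (single-loop) algorithm that this construction builds on can be sketched as follows; the tridiagonal test matrix is illustrative, and this is standard GaBP rather than the paper's double-loop convergence fix:

```python
def gabp_solve(A, b, iters=50):
    """Plain Gaussian belief propagation for A x = b (A symmetric).

    Converges to the exact means when the graph of A is a tree or when A
    is sufficiently diagonally dominant; otherwise it may diverge, which
    is the failure mode the double-loop construction addresses.
    """
    n = len(A)
    P = [[0.0] * n for _ in range(n)]   # precision message P[i][j]: i -> j
    mu = [[0.0] * n for _ in range(n)]  # mean message mu[i][j]: i -> j
    for _ in range(iters):
        newP = [row[:] for row in P]
        newmu = [row[:] for row in mu]
        for i in range(n):
            for j in range(n):
                if i == j or A[i][j] == 0.0:
                    continue
                # Aggregate node potential plus all incoming messages except j's.
                P_ex = A[i][i] + sum(P[k][i] for k in range(n) if k not in (i, j))
                mu_ex = (b[i] + sum(P[k][i] * mu[k][i]
                                    for k in range(n) if k not in (i, j))) / P_ex
                newP[i][j] = -A[i][j] ** 2 / P_ex
                newmu[i][j] = P_ex * mu_ex / A[i][j]
        P, mu = newP, newmu
    # Final marginal means are the solution estimate.
    x = []
    for i in range(n):
        K = A[i][i] + sum(P[k][i] for k in range(n) if k != i)
        x.append((b[i] + sum(P[k][i] * mu[k][i] for k in range(n) if k != i)) / K)
    return x
```

On a tridiagonal (chain, hence tree-structured) system the iteration reproduces the exact solution of the linear system.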
|
0901.4205
|
On the small weight codewords of the functional codes C_2(Q), Q a
non-singular quadric
|
math.AG cs.IT math.IT
|
We study the small weight codewords of the functional code C_2(Q), with Q a
non-singular quadric of PG(N,q). We prove that the small weight codewords
correspond to the intersections of Q with the singular quadrics of PG(N,q)
consisting of two hyperplanes. We also calculate the number of codewords having
these small weights.
|
0901.4224
|
Geospatial semantics: beyond ontologies, towards an enactive approach
|
cs.AI cs.DB
|
Current approaches to semantics in the geospatial domain are mainly based on
ontologies. However, since ontologies continue to build entirely on the symbolic
methodology, they suffer from the classical problems affecting representational
theories, e.g. the symbol grounding problem. We argue for an enactive approach
to semantics, where meaning is considered to be an emergent feature arising
context-dependently in action. Since representational theories are unable to
deal with context, a new formalism is required toward a contextual theory of
concepts. SCOP is considered a promising formalism in this sense and is briefly
described.
|
0901.4267
|
LR-aided MMSE lattice decoding is DMT optimal for all approximately
universal codes
|
cs.IT math.IT
|
Currently, for the nt x nr MIMO channel, all explicitly constructed space-time
(ST) designs that achieve optimality with respect to the diversity-multiplexing
tradeoff (DMT) are known to do so only when decoded using maximum-likelihood
(ML) decoding, which may incur prohibitive decoding complexity. In this paper
we prove that MMSE regularized lattice decoding, as well as the computationally
efficient lattice reduction (LR) aided MMSE decoder, allows for efficient and
DMT optimal decoding of any approximately universal lattice-based code. The
result identifies for the first time an explicitly constructed encoder and a
computationally efficient decoder that achieve DMT optimality for all
multiplexing gains and all channel dimensions. The results hold irrespective of
the fading statistics.
|
0901.4272
|
Dynamic Control of a Flow-Rack Automated Storage and Retrieval System
|
cs.IT math.IT
|
In this paper we propose a control scheme based on coloured Petri net (CPN)
for a flow-rack automated storage and retrieval system. The AS/RS is modelled
using Coloured Petri nets, the developed model has been used to capture and
provide the rack state. We introduce in the control system an optimization
module as a decision process which performs a real-time optimization working on
a discrete events time scale. The objective is to find bin locations for the
retrieval requests by minimizing the total number of retrieval cycles for a
batch of requests and thereby increase the system throughput. By solving the
optimization model, the proposed method determines, according to the customer
requests and the rack state, the best bin locations for retrieval, i.e. those
that satisfy the customer requests while requiring the minimum number of
retrieval cycles.
|
0901.4275
|
Informative Sensing
|
cs.IT math.IT
|
Compressed sensing is a recent set of mathematical results showing that
sparse signals can be exactly reconstructed from a small number of linear
measurements. Interestingly, for ideal sparse signals with no measurement
noise, random measurements allow perfect reconstruction while measurements
based on principal component analysis (PCA) or independent component analysis
(ICA) do not. At the same time, for other signal and noise distributions, PCA
and ICA can significantly outperform random projections in terms of enabling
reconstruction from a small number of measurements. In this paper we ask: given
the distribution of signals we wish to measure, what are the optimal set of
linear projections for compressed sensing? We consider the problem of finding a
small number of linear projections that are maximally informative about the
signal. Formally, we use the InfoMax criterion and seek to maximize the mutual
information between the signal, x, and the (possibly noisy) projection y=Wx. We
show that in general the optimal projections are not the principal components
of the data nor random projections, but rather a seemingly novel set of
projections that capture what is still uncertain about the signal, given
knowledge of the distribution. We present analytic solutions for certain special
cases including natural images. In particular, for natural images, the
near-optimal projections are bandwise random, i.e., incoherent to the sparse
bases at a particular frequency band but with more weights on the
low-frequencies, which has a physical relation to the multi-resolution
representation of images.
|
0901.4375
|
Extracting Spooky-activation-at-a-distance from Considerations of
Entanglement
|
physics.data-an cs.CL quant-ph
|
Following an early claim by Nelson & McEvoy \cite{Nelson:McEvoy:2007}
suggesting that word associations can display `spooky action at a distance
behaviour', a serious investigation of the potentially quantum nature of such
associations is currently underway. This paper presents a simple quantum model
of a word association system. It is shown that a quantum model of word
entanglement can recover aspects of both the Spreading Activation equation and
the Spooky-activation-at-a-distance equation, both of which are used to model
the activation level of words in human memory.
|
0901.4379
|
Ergodic Interference Alignment
|
cs.IT math.IT
|
This paper develops a new communication strategy, ergodic interference
alignment, for the K-user interference channel with time-varying fading. At any
particular time, each receiver will see a superposition of the transmitted
signals plus noise. The standard approach to such a scenario results in each
transmitter-receiver pair achieving a rate proportional to 1/K its
interference-free ergodic capacity. However, given two well-chosen time
indices, the channel coefficients from interfering users can be made to exactly
cancel. By adding up these two observations, each receiver can obtain its
desired signal without any interference. If the channel gains have independent,
uniform phases, this technique allows each user to achieve at least 1/2 its
interference-free ergodic capacity at any signal-to-noise ratio. Prior
interference alignment techniques were only able to attain this performance as
the signal-to-noise ratio tended to infinity. Extensions are given for the case
where each receiver wants a message from more than one transmitter as well as
the "X channel" case (with two receivers) where each transmitter has an
independent message for each receiver. Finally, it is shown how to generalize
this strategy beyond Gaussian channel models. For a class of finite field
interference channels, this approach yields the ergodic capacity region.
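The core pairing trick can be illustrated numerically: choose two time slots in which every cross-channel gain flips sign (a phase offset of pi) while the direct gain stays the same, then add the two received signals. The complex gains and symbols below are illustrative, and noise is omitted:

```python
# Receiver 1 in a 3-user example: y = h*x1 + g2*x2 + g3*x3.
h = 1.2 + 0.5j                      # direct gain, identical in both matched slots
g2, g3 = 0.8 - 0.3j, -0.4 + 0.9j    # cross gains in slot t1 (illustrative)
x1, x2, x3 = 1 + 1j, -2 + 0.5j, 0.3 - 0.7j

y_t1 = h * x1 + g2 * x2 + g3 * x3
y_t2 = h * x1 + (-g2) * x2 + (-g3) * x3  # matched slot t2: cross gains negated

estimate = (y_t1 + y_t2) / 2  # interference cancels exactly
assert abs(estimate - h * x1) < 1e-12
```

Since each desired symbol occupies two slots, each user attains half its interference-free rate, matching the 1/2 factor in the abstract.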
|
0901.4420
|
Some Generalizations of the Capacity Theorem for AWGN Channels
|
cs.IT math.IT
|
The channel capacity theorem for additive white Gaussian noise channel
(AWGN), widely known as the Shannon-Hartley Law, expresses the information
capacity of a channel bandlimited in the conventional Fourier domain in terms
of the signal-to-noise ratio in it. In this letter generalized versions of the
Shannon-Hartley Law using the linear canonical transform (LCT) are presented.
The channel capacity for AWGN channels is found to be a function of the LCT
parameters.
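For reference, the classical Fourier-domain case that the letter generalizes can be sketched with illustrative numbers:

```python
import math

def awgn_capacity(bandwidth_hz, snr):
    """Shannon-Hartley law: C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

# A 3 kHz channel at 30 dB SNR (illustrative values): roughly 29.9 kbit/s.
c = awgn_capacity(3000.0, 1000.0)
```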
|
0901.4466
|
Physarum boats: If plasmodium sailed it would never leave a port
|
cs.RO q-bio.CB
|
Plasmodium of \emph{Physarum polycephalum} is a single huge cell (visible to
the naked eye) with a myriad of nuclei. The plasmodium is a promising substrate
for non-classical, nature-inspired computing devices. It is capable of
approximating shortest paths, computing planar proximity graphs and plane
tessellations, and exhibiting primitive memory and decision-making. These
unique properties make the plasmodium an ideal candidate for the role of an
amorphous biological robot with massively parallel information processing and
distributed inputs and outputs. We show that when adhered to a light-weight
object resting on a water surface, the plasmodium can propel the object by
oscillating its protoplasmic pseudopodia. In laboratory and computational
experiments we study the phenomenology of the plasmodium-floater system and
possible mechanisms for controlling the motion of objects propelled by
on-board plasmodium.
|
0901.4467
|
Efficient LDPC Codes over GF(q) for Lossy Data Compression
|
cs.IT math.IT
|
In this paper we consider the lossy compression of a binary symmetric source.
We present a scheme that provides a low-complexity lossy compressor with near
optimal empirical performance. The proposed scheme is based on b-reduced
ultra-sparse LDPC codes over GF(q). Encoding is performed by the Reinforced
Belief Propagation algorithm, a variant of Belief Propagation. The
computational complexity at the encoder is O(<d>.n.q.log q), where <d> is the
average degree of the check nodes. For our code ensemble, decoding can be
performed iteratively following the inverse steps of the leaf removal
algorithm. For a sparse parity-check matrix the number of needed operations is
O(n).
|
0901.4551
|
Robust Key Agreement Schemes
|
cs.IT math.IT
|
This paper considers a key agreement problem in which two parties aim to
agree on a key by exchanging messages in the presence of adversarial tampering.
The aim of the adversary is to disrupt the key agreement process, but there are
no secrecy constraints (i.e. we do not insist that the key is kept secret from
the adversary). The main results of the paper are coding schemes and bounds on
maximum key generation rates for this problem.
|
0901.4571
|
Everyone is a Curator: Human-Assisted Preservation for ORE Aggregations
|
cs.DL cs.IR
|
The Open Archives Initiative (OAI) has recently created the Object Reuse and
Exchange (ORE) project that defines Resource Maps (ReMs) for describing
aggregations of web resources. These aggregations are susceptible to many of
the same preservation challenges that face other web resources. In this paper,
we investigate how the aggregations of web resources can be preserved outside
of the typical repository environment and instead rely on the thousands of
interactive users in the web community and the Web Infrastructure (the
collection of web archives, search engines, and personal archiving services) to
facilitate preservation. Inspired by Web 2.0 services such as digg,
del.icio.us, and Yahoo! Buzz, we have developed a lightweight system called
ReMember that attempts to harness the collective abilities of the web community
for preservation purposes instead of solely placing the burden of curatorial
responsibilities on a small number of experts.
|
0901.4591
|
Network Coding-Based Protection Strategy Against Node Failures
|
cs.IT cs.CR cs.NI math.IT
|
The enormous increase in the usage of communication networks has made
protection against node and link failures essential in the deployment of
reliable networks. To prevent loss of data due to node failures, a network
protection strategy is proposed that aims to withstand such failures.
Particularly, a protection strategy against any single node failure is designed
for a given network with a set of $n$ disjoint paths between senders and
receivers. Network coding and reduced capacity are deployed in this strategy
without adding extra working paths to the readily available connection paths.
This strategy treats protection against node failures as protection
against multiple link failures. In addition, the encoding and decoding
operational aspects of the premeditated protection strategy are demonstrated.
|
0901.4612
|
Network Coding Capacity: A Functional Dependence Bound
|
cs.IT math.IT
|
Explicit characterization and computation of the multi-source network coding
capacity region (or even bounds on it) is a long-standing open problem. In fact,
finding the capacity region requires determination of the set of all entropic
vectors $\Gamma^{*}$, which is known to be an extremely hard problem. On the
other hand, calculating the explicitly known linear programming bound is very
hard in practice due to an exponential growth in complexity as a function of
network size. We give a new, easily computable outer bound, based on
characterization of all functional dependencies in networks. We also show that
the proposed bound is tighter than some known bounds.
|
0901.4648
|
On The Positive Definiteness of Polarity Coincidence Correlation
Coefficient Matrix
|
cs.IT math.IT
|
Polarity coincidence correlator (PCC), when used to estimate the covariance
matrix on an element-by-element basis, may not yield a positive semi-definite
(PSD) estimate. Devlin et al. [1] claimed that element-wise PCC is not
guaranteed to be PSD in dimensions p>3 for real signals. However, no
justification or proof was available on this issue. In this letter, it is
proved that for real signals with p<=3 and for complex signals with p<=2, a PSD
estimate is guaranteed. Counterexamples are presented for higher dimensions
which yield invalid covariance estimates.
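The scalar estimator in question can be sketched as follows: for zero-mean Gaussian data the arcsine law gives rho = sin(pi/2 * (2p - 1)), where p is the empirical frequency of polarity coincidence. The sample data are illustrative; the paper's point is that assembling these element-wise estimates into a p x p matrix need not yield a PSD matrix for p > 3 (real case):

```python
import math

def pcc(x, y):
    """Element-wise polarity coincidence correlation estimate."""
    sign = lambda v: 1.0 if v >= 0 else -1.0
    p = sum(sign(a) == sign(b) for a, b in zip(x, y)) / len(x)
    return math.sin(math.pi / 2 * (2 * p - 1))

x = [0.3, -1.2, 0.7, -0.1, 2.4]
print(pcc(x, x))                # perfectly correlated -> 1.0
print(pcc(x, [-v for v in x]))  # sign-flipped -> -1.0
```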
|
0901.4694
|
Limit on the Addressability of Fault-Tolerant Nanowire Decoders
|
cs.AR cs.DM cs.IT math.IT
|
Although prone to fabrication error, the nanowire crossbar is a promising
candidate component for next generation nanometer-scale circuits. In the
nanowire crossbar architecture, nanowires are addressed by controlling voltages
on the mesowires. For area efficiency, we are interested in the maximum number
of nanowires $N(m,e)$ that can be addressed by $m$ mesowires, in the face of up
to $e$ fabrication errors. Asymptotically tight bounds on $N(m,e)$ are
established in this paper. In particular, it is shown that $N(m,e) = \Theta(2^m
/ m^{e+1/2})$. Interesting observations are made on the equivalence between
this problem and the problem of constructing optimal EC/AUED codes,
superimposed distance codes, pooling designs, and diffbounded set systems.
Results in this paper also improve upon those in the EC/AUED codes literature.
|
0901.4723
|
On Algorithms Based on Joint Estimation of Currents and Contrast in
Microwave Tomography
|
math.NA cs.IT math.IT
|
This paper deals with improvements to the contrast source inversion method
which is widely used in microwave tomography. First, the method is reviewed and
weaknesses of both the criterion form and the optimization strategy are
underlined. Then, two new algorithms are proposed. Both of them are based on
the same criterion, similar but more robust than the one used in contrast
source inversion. The first technique keeps the main characteristics of the
contrast source inversion optimization scheme but is based on a better
exploitation of the conjugate gradient algorithm. The second technique is based
on a preconditioned conjugate gradient algorithm and performs simultaneous
updates of sets of unknowns that are normally processed sequentially. Both
techniques are shown to be more efficient than original contrast source
inversion.
|
0901.4761
|
A Knowledge Discovery Framework for Learning Task Models from User
Interactions in Intelligent Tutoring Systems
|
cs.AI
|
Domain experts should provide relevant domain knowledge to an Intelligent
Tutoring System (ITS) so that it can guide a learner during problem-solving
learning activities. However, for many ill-defined domains, the domain
knowledge is hard to define explicitly. In previous works, we showed how
sequential pattern mining can be used to extract a partial problem space from
logged user interactions, and how it can support tutoring services during
problem-solving exercises. This article describes an extension of this approach
to extract a problem space that is richer and more adapted for supporting
tutoring services. We combined sequential pattern mining with (1) dimensional
pattern mining, (2) time intervals, (3) the automatic clustering of valued
actions, and (4) closed sequence mining. Some tutoring services have been
implemented and an experiment has been conducted in a tutoring system.
|
0901.4784
|
On the Entropy of Written Spanish
|
cs.CL cs.IT math.IT
|
This paper reports on results on the entropy of the Spanish language. They
are based on an analysis of natural language for n-word symbols (n = 1 to 18),
trigrams, digrams, and characters. The results obtained in this work are based
on the analysis of twelve different literary works in Spanish, as well as a
279917 word news file provided by the Spanish press agency EFE. Entropy values
are calculated by a direct method using computer processing and the probability
law of large numbers. Three samples of artificial Spanish language produced by
a first-order model software source are also analyzed and compared with natural
Spanish language.
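The direct method amounts to computing the plug-in entropy of empirical symbol frequencies. A minimal sketch over characters and n-character blocks (the sample strings are illustrative):

```python
import math
from collections import Counter

def empirical_entropy(symbols):
    """Plug-in entropy estimate of a symbol sequence, in bits per symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

print(empirical_entropy("abab"))  # two equiprobable symbols -> 1.0

# n-gram entropies: apply the same estimator to n-character blocks and
# divide by n to get an estimated rate in bits per character.
text = "el veloz murcielago"
bigrams = [text[i:i + 2] for i in range(len(text) - 1)]
rate = empirical_entropy(bigrams) / 2
```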
|
0901.4830
|
On the Relationship Between the Multi-antenna Secrecy Communications and
Cognitive Radio Communications
|
cs.IT math.IT
|
This paper studies the capacity of the multi-antenna or multiple-input
multiple-output (MIMO) secrecy channels with multiple eavesdroppers having
single/multiple antennas. It is known that the MIMO secrecy capacity is
achievable with the optimal transmit covariance matrix that maximizes the
minimum difference between the channel mutual information of the secrecy user
and those of the eavesdroppers. The MIMO secrecy capacity computation can thus
be formulated as a non-convex max-min problem, which cannot be solved
efficiently by standard convex optimization techniques. To handle this
difficulty, we explore a relationship between the MIMO secrecy channel and the
recently developed MIMO cognitive radio (CR) channel, in which the
multi-antenna secondary user transmits over the same spectrum simultaneously
with multiple primary users, subject to the received interference power
constraints at the primary users, or the so-called ``interference temperature
(IT)'' constraints. By constructing an auxiliary CR MIMO channel that has the
same channel responses as the MIMO secrecy channel, we prove that the optimal
transmit covariance matrix to achieve the secrecy capacity is the same as that
to achieve the CR spectrum sharing capacity with properly selected IT
constraints. Based on this relationship, several algorithms are proposed to
solve the non-convex secrecy capacity computation problem by transforming it
into a sequence of CR spectrum sharing capacity computation problems that are
convex. For the case with single-antenna eavesdroppers, the proposed algorithms
obtain the exact capacity of the MIMO secrecy channel, while for the case with
multi-antenna eavesdroppers, the proposed algorithms obtain both upper and
lower bounds on the MIMO secrecy capacity.
|
0901.4876
|
Non-Confluent NLC Graph Grammar Inference by Compressing Disjoint
Subgraphs
|
cs.LG cs.DM
|
Grammar inference deals with determining (preferably simple) models/grammars
consistent with a set of observations. There is a large body of research on
grammar inference within the theory of formal languages. However, there is
surprisingly little known on grammar inference for graph grammars. In this
paper we take a further step in this direction and work within the framework of
node label controlled (NLC) graph grammars. Specifically, we characterize,
given a set of disjoint and isomorphic subgraphs of a graph $G$, whether or not
there is an NLC graph grammar rule which can generate these subgraphs to obtain
$G$. This generalizes previous results by assuming that the set of isomorphic
subgraphs is disjoint instead of non-touching. This naturally leads us to
consider the more involved ``non-confluent'' graph grammar rules.
|
0901.4898
|
Effective Delay Control in Online Network Coding
|
cs.IT math.IT
|
Motivated by streaming applications with stringent delay constraints, we
consider the design of online network coding algorithms with timely delivery
guarantees. Assuming that the sender is providing the same data to multiple
receivers over independent packet erasure channels, we focus on the case of
perfect feedback and heterogeneous erasure probabilities. Based on a general
analytical framework for evaluating the decoding delay, we show that existing
ARQ schemes fail to ensure that receivers with weak channels are able to
recover from packet losses within reasonable time. To overcome this problem, we
re-define the encoding rules in order to break the chains of linear
combinations that cannot be decoded after one of the packets is lost. Our
results show that sending uncoded packets at key times ensures that all the
receivers are able to meet specific delay requirements with very high
probability.
|
0901.4934
|
A historical perspective on developing foundations iInfo(TM) information
systems: iConsult(TM) and iEntertain(TM) apps using iDescribers(TM)
information integration for iOrgs(TM) information systems
|
cs.DC cs.DB cs.LO
|
Technology now at hand can integrate all kinds of digital information for
individuals, groups, and organizations so their information usefully links
together. iInfo(TM) information integration works by making connections
including examples like the following:
- A statistical connection between "being in a traffic jam" and "driving in
downtown Trenton between 5PM and 6PM on a weekday."
- A terminological connection between "MSR" and "Microsoft Research."
- A causal connection between "joining a group" and "being a member of the
group."
- A syntactic connection between "a pin dropped" and "a dropped pin."
- A biological connection between "a dolphin" and "a mammal".
- A demographic connection between "undocumented residents of California" and
"7% of the population of California."
- A geographical connection between "Leeds" and "England."
- A temporal connection between "turning on a computer" and "joining an
on-line discussion."
By making these connections, iInfo offers tremendous value for individuals,
families, groups, and organizations in making more effective use of information
technology.
In practice, integrated information is invariably pervasively inconsistent.
Therefore iInfo must be able to make connections even in the face of
inconsistency. The business of iInfo is not to make difficult decisions like
deciding the ultimate truth or probability of propositions. Instead it provides
means for processing information and carefully recording its provenance
including arguments (including arguments about arguments) for and against
propositions, which is used by iConsult(TM) and iEntertain(TM) apps in iOrgs(TM)
Information Systems.
A historical perspective on the above questions is highly pertinent to the
current quest to develop foundations for privacy-friendly client-cloud
computing.
|
0901.4953
|
A Keygraph Classification Framework for Real-Time Object Detection
|
cs.CV
|
In this paper, we propose a new approach for keypoint-based object detection.
Traditional keypoint-based methods consist in classifying individual points and
using pose estimation to discard misclassifications. Since a single point
carries no relational features, such methods inherently restrict the usage of
structural information to the pose estimation phase. Therefore, the classifier
considers purely appearance-based feature vectors, thus requiring
computationally expensive feature extraction or complex probabilistic modelling
to achieve satisfactory robustness. In contrast, our approach consists in
classifying graphs of keypoints, which incorporates structural information
during the classification phase and allows the extraction of simpler feature
vectors that are naturally robust. In the present work, 3-vertices graphs have
been considered, though the methodology is general and larger order graphs may
be adopted. Successful experimental results obtained for real-time object
detection in video sequences are reported.
|
0901.4963
|
How Emotional Mechanism Helps Episodic Learning in a Cognitive Agent
|
cs.AI
|
In this paper we propose the CTS (Conscious Tutoring System) technology, a
biologically plausible cognitive agent based on human brain functions. This
agent is capable of learning and remembering events and any related information
such as corresponding procedures, stimuli and their emotional valences. Our
proposed episodic memory and episodic learning mechanisms are closer to the
current multiple-trace theory in neuroscience than the mechanisms incorporated
in other cognitive agents, because they are inspired by it [5]. In particular,
in our model emotions play a role in the encoding and
remembering of events. This allows the agent to improve its behavior by
remembering previously selected behaviors which are influenced by its emotional
mechanism. Moreover, the architecture incorporates a realistic memory
consolidation process based on a data mining algorithm.
|
0902.0026
|
Beyond Nyquist: Efficient Sampling of Sparse Bandlimited Signals
|
cs.IT math.IT
|
Wideband analog signals push contemporary analog-to-digital conversion
systems to their performance limits. In many applications, however, sampling at
the Nyquist rate is inefficient because the signals of interest contain only a
small number of significant frequencies relative to the bandlimit, although the
locations of the frequencies may not be known a priori. For this type of sparse
signal, other sampling strategies are possible. This paper describes a new type
of data acquisition system, called a random demodulator, that is constructed
from robust, readily available components. Let K denote the total number of
frequencies in the signal, and let W denote its bandlimit in Hz. Simulations
suggest that the random demodulator requires just O(K log(W/K)) samples per
second to stably reconstruct the signal. This sampling rate is exponentially
lower than the Nyquist rate of W Hz. In contrast with Nyquist sampling, one
must use nonlinear methods, such as convex programming, to recover the signal
from the samples taken by the random demodulator. This paper provides a
detailed theoretical analysis of the system's performance that supports the
empirical observations.
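The front end of such a system can be sketched in discrete time: multiply the Nyquist-rate signal by a pseudorandom +/-1 chipping sequence, then integrate and dump at the low rate R. This is an idealized discrete model with illustrative names; the convex-programming recovery step is omitted:

```python
import random

def random_demodulator(signal, r, seed=0):
    """Idealized discrete-time random demodulator front end.

    signal: W Nyquist-rate samples (W divisible by r);
    r: number of output measurements (the sub-Nyquist sample count).
    """
    w = len(signal)
    rng = random.Random(seed)
    chips = [rng.choice((-1.0, 1.0)) for _ in range(w)]  # pseudorandom waveform
    mixed = [s * c for s, c in zip(signal, chips)]
    block = w // r
    # Integrate-and-dump: accumulate each block of `block` mixed samples.
    return [sum(mixed[i * block:(i + 1) * block]) for i in range(r)]

y = random_demodulator([1.0] * 8, 4)
assert len(y) == 4  # 8 Nyquist-rate samples compressed to 4 measurements
```

The whole chain is linear in the input, which is what makes recovery by convex programming possible.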
|
0902.0043
|
Cut-Simulation and Impredicativity
|
cs.LO cs.AI
|
We investigate cut-elimination and cut-simulation in impredicative
(higher-order) logics. We illustrate that adding simple axioms such as Leibniz
equations to a calculus for an impredicative logic -- in our case a sequent
calculus for classical type theory -- is like adding cut. The phenomenon
equally applies to prominent axioms like Boolean- and functional
extensionality, induction, choice, and description. This calls for the
development of calculi where these principles are built-in instead of being
treated axiomatically.
|
0902.0058
|
The second weight of generalized Reed-Muller codes in most cases
|
cs.IT math.IT
|
The second weight of the Generalized Reed-Muller code of order $d$ over the
finite field with $q$ elements is now known for $d < q$ and $d > (n-1)(q-1)$. In
this paper, we determine the second weight for the other values of $d$ which
are not multiple of $q-1$ plus 1. For the special case $d=a(q-1)+1$ we give an
estimate.
|
0902.0133
|
New Algorithms and Lower Bounds for Sequential-Access Data Compression
|
cs.IT math.IT
|
This thesis concerns sequential-access data compression, i.e., compression by
algorithms that read the input one or more times from beginning to end. In one
chapter we
consider adaptive prefix coding, for which we must read the input character by
character, outputting each character's self-delimiting codeword before reading
the next one. We show how to encode and decode each character in constant
worst-case time while producing an encoding whose length is worst-case optimal.
In another chapter we consider one-pass compression with memory bounded in
terms of the alphabet size and context length, and prove a nearly tight
tradeoff between the amount of memory we can use and the quality of the
compression we can achieve. In a third chapter we consider compression in the
read/write streams model, which allows us passes and memory both
polylogarithmic in the size of the input. We first show how to achieve
universal compression using only one pass over one stream. We then show that
one stream is not sufficient for achieving good grammar-based compression.
Finally, we show that two streams are necessary and sufficient for achieving
entropy-only bounds.
|
0902.0189
|
The Ergodic Capacity of The MIMO Wire-Tap Channel
|
cs.IT math.IT
|
This paper has been withdrawn to provide a more rigorous proof of the
converse of Theorem 1 and Lemma 1 as well.
|
0902.0221
|
Over-enhancement Reduction in Local Histogram Equalization using its
Degrees of Freedom
|
cs.CV cs.MM
|
A well-known issue of local (adaptive) histogram equalization (LHE) is
over-enhancement (i.e., generation of spurious details) in homogeneous areas of
the image. In this paper, we show that the LHE problem has many solutions due
to the ambiguity in ranking pixels with the same intensity. The LHE solution
space can be searched for the images having the maximum PSNR or structural
similarity (SSIM) with the input image. As compared to the results of the prior
art, these solutions are more similar to the input image while offering the
same local contrast.
Index Terms: histogram modification or specification, contrast enhancement
|
0902.0271
|
Asymmetric numeral systems
|
cs.IT cs.CR math.GM math.IT
|
In this paper we present a new approach to entropy coding: a family of
generalizations of standard numeral systems, which are optimal for encoding
sequences of equiprobable symbols, into asymmetric numeral systems, which are
optimal for freely chosen probability distributions of symbols. The approach
has some similarities to range coding, but instead of encoding a symbol by
choosing a range, we spread these ranges uniformly over the whole interval.
This leads to a simpler encoder: instead of using two states to define a range,
we need only one. The approach is very universal: it ranges from extremely
precise encoding (ABS) to extremely fast encoding with the possibility of
additionally encrypting the data (ANS). This encryption uses the key to
initialize a random number generator, which is used to calculate the coding
tables. Such preinitialized encryption has an additional advantage: it is
resistant to brute-force attack, since checking a key requires performing the
whole initialization. We also present an application to a new approach to error
correction: after an error, there is in each step a chosen probability of
observing that something went wrong. In this way we can get near Shannon's
limit for any noise level, with expected linear correction time.
|
0902.0320
|
Planar Graphical Models which are Easy
|
cond-mat.stat-mech cond-mat.dis-nn cs.CC cs.IT math-ph math.IT math.MP
|
We describe a rich family of binary variables statistical mechanics models on
a given planar graph which are equivalent to Gaussian Grassmann Graphical
models (free fermions) defined on the same graph. Calculation of the partition
function (weighted counting) for such a model is easy (of polynomial
complexity) as reducible to evaluation of a Pfaffian of a matrix of size equal
to twice the number of edges in the graph. In particular, this approach touches
upon Holographic Algorithms of Valiant and utilizes the Gauge Transformations
discussed in our previous works.
|
0902.0337
|
Stability and Delay of Zero-Forcing SDMA with Limited Feedback
|
cs.IT math.IT
|
This paper addresses the stability and queueing delay of Space Division
Multiple Access (SDMA) systems with bursty traffic, where zero-forcing
beamforming enables simultaneous transmission to multiple mobiles. Computing
beamforming vectors relies on quantized channel state information (CSI)
feedback (limited feedback) from mobiles. Define the stability region for SDMA
as the set of multiuser packet-arrival rates for which the steady-state queue
lengths are finite. Given perfect CSI feedback and equal power allocation over
scheduled queues, the stability region is proved to be a convex polytope having
the derived vertices. For any set of arrival rates in the stability region,
multiuser queues are shown to be stabilized by a joint queue-and-beamforming
control policy that maximizes the departure-rate-weighted sum of queue lengths.
The stability region for limited feedback is found to be the perfect-CSI region
multiplied by one minus a small factor. The required number of feedback bits
per mobile is proved to scale logarithmically with the inverse of the above
factor as well as linearly with the number of transmit antennas minus one. The
effects of limited feedback on queueing delay are also quantified. For Poisson
arrival processes, CSI quantization errors are shown to multiply average
queueing delay by a factor larger than one. This factor can be controlled by
adjusting the number of feedback bits per mobile following the derived
relationship. For general arrival processes, CSI errors are found to increase
Kingman's bound on the tail probability of the instantaneous delay by one plus
a small factor. The required number of feedback bits per mobile is shown to
scale logarithmically with this factor.
|
0902.0354
|
Optimum Power and Rate Allocation for Coded V-BLAST
|
cs.IT math.IT
|
An analytical framework for minimizing the outage probability of a coded
spatial multiplexing system while keeping the rate close to the capacity is
developed. Based on this framework, specific strategies of optimum power and
rate allocation for the coded V-BLAST architecture are obtained and its
performance is analyzed. A fractional waterfilling algorithm, which is shown to
optimize both the capacity and the outage probability of the coded V-BLAST, is
proposed. Compact, closed-form expressions for the optimum allocation of the
average power are given. The uniform allocation of average power is shown to be
near optimum at moderate to high SNR for the coded V-BLAST with the average
rate allocation (when per-stream rates are set to match the per-stream
capacity). The results reported also apply to multiuser detection and channel
equalization relying on successive interference cancelation.
|
0902.0392
|
Tree Exploration for Bayesian RL Exploration
|
stat.ML cs.LG
|
Research in reinforcement learning has produced algorithms for optimal
decision making under uncertainty that fall within two main types. The first
employs a Bayesian framework, where optimality improves with increased
computational time. This is because the resulting planning task takes the form
of a dynamic programming problem on a belief tree with an infinite number of
states. The second type employs relatively simple algorithms which are shown to
suffer small regret within a distribution-free framework. This paper presents a
lower bound and a high probability upper bound on the optimal value function
for the nodes in the Bayesian belief tree, which are analogous to similar
bounds in POMDPs. The bounds are then used to create more efficient strategies
for exploring the tree. The resulting algorithms are compared with the
distribution-free algorithm UCB1, as well as a simpler baseline algorithm on
multi-armed bandit problems.
|
0902.0417
|
Decoding Network Codes by Message Passing
|
cs.IT math.IT
|
In this paper, we show how to construct a factor graph from a network code.
This provides a systematic framework for decoding using message passing
algorithms. The proposed message passing decoder exploits knowledge of the
underlying communications network topology to simplify decoding. For uniquely
decodable linear network codes on networks with error-free links, only the
message supports (rather than the message values themselves) are required to be
passed. This proposed simplified support message algorithm is an instance of
the sum-product algorithm. Our message-passing framework provides a basis for
the design of network codes and control of network topology with a view toward
quantifiable complexity reduction in the sink terminals.
|