| id | title | categories | abstract |
|---|---|---|---|
1004.4758
|
A Design of Paraunitary Polyphase Matrices of Rational Filter Banks
Based on (P,Q) Shift-Invariant Systems
|
cs.IT math.IT
|
In this paper we present a method to design paraunitary polyphase matrices of
critically sampled rational filter banks. The method is based on (P,Q)
shift-invariant systems, so any rational splitting of the frequency spectrum
can be achieved. Ideal (P,Q) shift-invariant systems with the smallest P and Q
that map a band of the input spectrum to the output spectrum are obtained. A
new set of filters characterizing a (P,Q) shift-invariant system is derived,
and the ideal frequency spectra of these filters are obtained from the ideal
$(P,Q)$ shift-invariant systems. Actual paraunitary polyphase matrices are then
obtained by minimizing the stopband energies of these filters with respect to
the parameters of the paraunitary polyphase matrices.
|
1004.4793
|
Logical methods of object recognition on satellite images using spatial
constraints
|
cs.CV
|
A logical approach to object recognition in images is proposed. The main idea
of the approach is to perform object recognition as logical inference on a set
of rules describing the object's shape.
|
1004.4801
|
Ontology-based inference for causal explanation
|
cs.AI
|
We define an inference system to capture explanations based on causal
statements, using an ontology in the form of an IS-A hierarchy. We first
introduce a simple logical language which makes it possible to express that a
fact causes another fact and that a fact explains another fact. We present a
set of formal inference patterns from causal statements to explanation
statements. We introduce an elementary ontology which gives greater
expressiveness to the system while staying close to propositional reasoning. We
provide an inference system that captures the patterns discussed, firstly in a
purely propositional framework, then in a datalog (limited predicate)
framework.
|
1004.4815
|
Universal A Posteriori Metrics Game
|
cs.IT math.IT math.PR
|
Over binary input channels, the uniform distribution is a universal prior, in
the sense that it maximizes the worst-case mutual information over all binary
input channels, ensuring at least 94.2% of the capacity. In this paper, we
address a similar question, but with respect to a universal generalized linear
decoder. We look for the best collection of finitely many a posteriori metrics
that maximizes the worst-case mismatched mutual information achieved by
decoding with these metrics (instead of an optimal decoder such as Maximum
Likelihood (ML) tuned to the true channel). It is shown that for binary input
and output channels, two metrics suffice to achieve the same performance as an
optimal decoder. In particular, this implies that there exists a generalized
linear decoder that achieves at least 94.2% of the compound capacity on any
compound set, without knowledge of the underlying set.
|
1004.4824
|
Growth and structure of Slovenia's scientific collaboration network
|
physics.soc-ph cond-mat.stat-mech cs.DB
|
We study the evolution of Slovenia's scientific collaboration network from
1960 to the present with yearly resolution. For each year the network was
constructed from the publication records of Slovene scientists, with two
scientists connected if, up to and including the given year, they had
coauthored at least one paper. Starting with no more than 30 scientists
averaging 1.5 collaborators in 1960, the network today consists of 7380
individuals with, on average, 10.7 collaborators. We show that, despite the
broad range of research fields covered, the networks form "small worlds": the
average path between any pair of scientists scales logarithmically with size
once the largest component becomes large enough. Moreover, we show that network
growth is governed by near-linear preferential attachment, giving rise to a
log-normal distribution of collaborators per author, and that the average
starting year is roughly inversely proportional to the number of collaborators
eventually acquired. Understandably, not all who became active early have
gathered many collaborators to date. We also give results for the clustering
coefficient and the diameter of the network over time, and compare our
conclusions with those reported previously.
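The small-world quantities discussed above (average path length and clustering coefficient) can be computed directly by breadth-first search. The following self-contained sketch does so on a toy coauthorship network; the authors, papers, and numbers are illustrative, not the Slovene data:

```python
from collections import deque
from itertools import combinations

def average_path_length(adj):
    """Mean shortest-path length over all connected ordered pairs (BFS per node)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for v, d in dist.items():
            if v != src:
                total += d
                pairs += 1
    return total / pairs

def clustering_coefficient(adj):
    """Average local clustering: fraction of a node's neighbour pairs that are linked."""
    coeffs = []
    for u, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        coeffs.append(2 * links / (len(nbrs) * (len(nbrs) - 1)))
    return sum(coeffs) / len(coeffs)

# Toy coauthorship network: each paper induces a clique among its authors.
papers = [("A", "B", "C"), ("C", "D"), ("D", "E", "F"), ("A", "F")]
adj = {}
for authors in papers:
    for a, b in combinations(authors, 2):
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)

print(round(average_path_length(adj), 3))
print(round(clustering_coefficient(adj), 3))
```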
|
1004.4826
|
Impact of Channel Asymmetry on Base Station Cooperative Transmission
with Limited Feedback
|
cs.IT math.IT
|
Base station (BS) cooperative transmission, also known as coordinated
multi-point transmission (CoMP), is an effective way to avoid inter-cell
interference in universal frequency reuse cellular systems. To realize the
promised benefit, however, substantial feedback overhead is required to gather
the channel information. In this paper, we analyze the impact of channel
asymmetry, which is inherent in CoMP systems, on downlink BS cooperative
transmission with limited feedback. We analyze the per-user rate loss of a
multi-user CoMP system caused by quantization. Per-cell quantization of
multicell channels is considered, which quantizes the local channel and cross
channel separately and is more feasible in practice. From both analytical and
simulation results, we provide a complete picture of the critical factors that
lead to the performance loss. Specifically, we show that the per-user rate loss
caused by limited feedback depends on the locations of its paired users, in
addition to its own signal-to-noise ratio and the quantization errors as in
single-cell multi-user multiple-antenna systems. This implies that the
quantization accuracy required for the local and cross channels of each user
depends on the locations of both the user itself and its paired users.
|
1004.4830
|
Performance Evaluation of SCM-WDM System Using Different Linecoding
|
cs.OH cs.IT math.IT
|
This paper investigates the theoretical performance of a subcarrier
multiplexed (SCM) wavelength division multiplexing (WDM) optical transmission
system in the presence of optical beat interference (OBI), which occurs during
the photodetection process. We present a comparison aimed at improving the
performance of the SCM-WDM system in the presence of OBI. Non-return-to-zero
(NRZ), Manchester, and Miller code (MC) line coding schemes are used for the
performance investigation. A suitable signal bandwidth is selected, and 200 kHz
is taken as the channel bandwidth. The power spectra of the signal and cross
components for these line coding schemes are analyzed. Comparison results are
evaluated in terms of the signal-to-OBI ratio, called the signal-to-interference
ratio (SIR), for the three line coding schemes. It is found that Miller coding
yields a significant increase in SIR over NRZ and Manchester coding at the same
data rate. For example, with 10 subcarriers, the achievable SIR is about -24 dB
for the Miller coded system, compared to -46 dB for the NRZ coded system and
-49 dB for the Manchester coded system. The results agree satisfactorily with
the expected results.
|
1004.4848
|
Punctuation effects in English and Esperanto texts
|
cs.CL physics.data-an
|
A statistical physics study of punctuation effects on sentence lengths is
presented for written texts: {\it Alice in Wonderland} and {\it Through the
Looking Glass}. The translation of the first text into Esperanto is also
considered, as a test of the role of punctuation in defining a style and for
contrasting natural and artificial, but written, languages. Several log-log
plots of the sentence length-rank relationship are presented for the major
punctuation marks. Different power laws are observed, with characteristic
exponents. The exponent can take a value much less than unity ($ca.$ 0.50 or
0.30) depending on how a sentence is defined. The texts are also mapped into
time series based on word frequencies. The quantitative differences between
the original and translated texts are very minute at the exponent level. It is
argued that sentences seem to be more reliable than word distributions for
discussing an author's style.
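The rank-length exponent discussed above can be estimated by ordinary least squares in log-log coordinates. A minimal sketch on synthetic sentence lengths drawn exactly from a power law (the data are illustrative, not taken from the novels):

```python
import math

def rank_length_exponent(lengths):
    """Fit s(r) ~ r^(-alpha): slope of log(length) vs log(rank) by least squares."""
    ranked = sorted(lengths, reverse=True)
    xs = [math.log(r) for r in range(1, len(ranked) + 1)]
    ys = [math.log(s) for s in ranked]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope

# Synthetic sentence lengths drawn exactly from a power law with exponent 0.5.
lengths = [100 / math.sqrt(r) for r in range(1, 201)]
print(round(rank_length_exponent(lengths), 2))
```

On real text, `lengths` would be the number of words between successive occurrences of the chosen punctuation mark.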
|
1004.4864
|
Polynomial Learning of Distribution Families
|
cs.LG cs.DS
|
The question of polynomial learnability of probability distributions,
particularly Gaussian mixture distributions, has recently received significant
attention in theoretical computer science and machine learning. However,
despite major progress, the general question of polynomial learnability of
Gaussian mixture distributions has remained open. The current work resolves
the question of polynomial learnability for Gaussian mixtures in high dimension
with an arbitrary fixed number of components. The result on learning Gaussian
mixtures relies on an analysis of distributions belonging to what we call
"polynomial families" in low dimension. These families are characterized by
their moments being polynomial in parameters and include almost all common
probability distributions as well as their mixtures and products. Using tools
from real algebraic geometry, we show that parameters of any distribution
belonging to such a family can be learned in polynomial time and using a
polynomial number of sample points. The result on learning polynomial families
is quite general and is of independent interest. To estimate parameters of a
Gaussian mixture distribution in high dimensions, we provide a deterministic
algorithm for dimensionality reduction. This allows us to reduce learning a
high-dimensional mixture to a polynomial number of parameter estimations in low
dimension. Combining this reduction with the results on polynomial families
yields our result on learning arbitrary Gaussian mixtures in high dimensions.
|
1004.4880
|
ECME Thresholding Methods for Sparse Signal Reconstruction
|
cs.IT math.IT
|
We propose a probabilistic framework for interpreting and developing hard
thresholding sparse signal reconstruction methods and present several new
algorithms based on this framework. The measurements follow an underdetermined
linear model, where the regression-coefficient vector is the sum of an unknown
deterministic sparse signal component and a zero-mean white Gaussian component
with an unknown variance. We first derive an expectation-conditional
maximization either (ECME) iteration that guarantees convergence to a local
maximum of the likelihood function of the unknown parameters for a given signal
sparsity level. To analyze the reconstruction accuracy, we introduce the
minimum sparse subspace quotient (SSQ), a more flexible measure of the sampling
operator than the well-established restricted isometry property (RIP). We prove
that, if the minimum SSQ is sufficiently large, ECME achieves perfect or
near-optimal recovery of sparse or approximately sparse signals, respectively.
We also propose a double overrelaxation (DORE) thresholding scheme for
accelerating the ECME iteration. If the signal sparsity level is unknown, we
introduce an unconstrained sparsity selection (USS) criterion for its selection
and show that, under certain conditions, applying this criterion is equivalent
to finding the sparsest solution of the underlying underdetermined linear
system. Finally, we present our automatic double overrelaxation (ADORE)
thresholding method that utilizes the USS criterion to select the signal
sparsity level. We apply the proposed schemes to reconstruct sparse and
approximately sparse signals from tomographic projections and compressive
samples.
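As a rough illustration of hard-thresholding recovery (plain iterative hard thresholding, not the paper's ECME/DORE iterations), the sketch below recovers a 1-sparse signal from a tiny orthonormal system, where a unit step converges in one iteration:

```python
def iht(A, y, k, iters=200, step=0.5):
    """Plain iterative hard thresholding: x <- H_k(x + step * A^T (y - A x))."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] + step * g[j] for j in range(n)]
        # Hard threshold: keep the k largest-magnitude entries, zero the rest.
        keep = set(sorted(range(n), key=lambda j: -abs(x[j]))[:k])
        x = [x[j] if j in keep else 0.0 for j in range(n)]
    return x

A = [[0.5, 0.5, 0.5, 0.5],
     [0.5, -0.5, 0.5, -0.5],
     [0.5, 0.5, -0.5, -0.5],
     [0.5, -0.5, -0.5, 0.5]]   # orthonormal, so step=1.0 recovers exactly
y = [1.0, -1.0, 1.0, -1.0]     # equals A applied to the sparse signal [0, 2, 0, 0]
print(iht(A, y, k=1, step=1.0))
```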
|
1004.4882
|
Properties of Codes in the Johnson Scheme
|
cs.IT math.IT
|
Codes that attain the sphere-packing bound are called perfect codes. The
most important metrics in coding theory on which perfect codes are defined are
the Hamming metric and the Johnson metric. While for the Hamming metric all
perfect codes over finite fields are known, for the Johnson metric Delsarte
conjectured in the 1970s that there are no nontrivial perfect codes. A general
nonexistence proof remains an open problem. In this work we examine constant
weight codes as well as doubly constant weight codes, and reduce the range of
parameters in which perfect codes may exist in both cases. We start with
constant weight codes. We introduce an improvement of Roos' bound for
one-perfect codes, and present some new divisibility conditions based on the
connection between perfect codes in the Johnson graph J(n,w) and block designs.
Next, we consider binomial moments for perfect codes and show which parameters
can be excluded for one-perfect codes. We examine two-perfect codes in J(2w,w)
and present necessary conditions for the existence of such codes. We prove that
there are no two-perfect codes in J(2w,w) with length less than 2.5*10^{15}.
Next we examine perfect doubly constant weight codes. We present a family of
parameters for codes whose sphere size divides the size of the whole space. We
then prove a bound on the length of such codes, similar to Roos' bound for
perfect codes in the Johnson graph. Finally, we describe Steiner systems and
doubly Steiner systems, which are strongly connected with constant weight and
doubly constant weight codes, respectively. We provide an anticode-based proof
of a bound on the length of a Steiner system, prove that a doubly Steiner
system is a diameter perfect code, and present a bound on the length of a
doubly Steiner system.
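The divisibility conditions mentioned above stem from the sphere-packing identity: for an e-perfect code in J(n,w), the ball size, the sum over i <= e of C(w,i)C(n-w,i), must divide C(n,w). A short sketch of this basic screen (illustrative only; the paper's conditions are far more refined):

```python
from math import comb

def johnson_sphere(n, w, e):
    """Size of a radius-e ball in the Johnson graph J(n, w)."""
    return sum(comb(w, i) * comb(n - w, i) for i in range(e + 1))

def divisibility_ok(n, w, e):
    """Necessary condition for an e-perfect code in J(n, w): |ball| divides C(n, w)."""
    return comb(n, w) % johnson_sphere(n, w, e) == 0

# Screen small parameters for 1-perfect candidates surviving the divisibility test.
candidates = [(n, w) for n in range(4, 25) for w in range(2, n // 2 + 1)
              if divisibility_ok(n, w, 1)]
print(candidates[:5])
```

Parameters surviving this test are only candidates; the paper's further conditions (Roos' bound, binomial moments) exclude most of them.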
|
1004.4917
|
On the Capacity of Compound State-Dependent Channels with States Known
at the Transmitter
|
cs.IT math.IT math.PR
|
This paper investigates the capacity of compound state-dependent channels
with non-causal state information available at only the transmitter. A new
lower bound on the capacity of this class of channels is derived. This bound is
shown to be tight for the special case of compound channels with stochastic
degraded components, yielding the full characterization of the capacity.
Specific results are derived for the compound Gaussian Dirty-Paper (GDP)
channel. This model consists of an additive white Gaussian noise (AWGN) channel
corrupted by an additive Gaussian interfering signal, known at the transmitter
only, where the input and the state signals are affected by fading coefficients
whose realizations are unknown at the transmitter. Our bounds are shown to be
tight for specific cases. Applications of these results arise in a variety of
wireless scenarios, such as multicast channels, cognitive radio, and problems
with interference cancellation.
|
1004.4944
|
Outer Bounds for the Interference Channel with a Cognitive Relay
|
cs.IT math.IT
|
In this paper, we first present an outer bound for a general interference
channel with a cognitive relay, i.e., a relay that has non-causal knowledge of
both independent messages transmitted in the interference channel. This outer
bound reduces to the capacity region of the deterministic broadcast channel and
of the deterministic cognitive interference channel through nulling of certain
channel inputs. It does not, however, reduce to that of certain deterministic
interference channels for which capacity is known. As such, we subsequently
tighten the bound for channels whose outputs satisfy an "invertibility"
condition. This second outer bound now reduces to the capacity of this special
class of deterministic interference channels. The second outer bound is further
tightened for the high SNR deterministic approximation of the Gaussian
interference channel with a cognitive relay by exploiting the special structure
of the interference. We provide an example that suggests that this third bound
is tight in at least some parameter regimes for the high SNR deterministic
approximation of the Gaussian channel. Another example shows that the third
bound is capacity in the special case where there are no direct links between
the non-cognitive transmitters.
|
1004.4949
|
Reed Muller Sensing Matrices and the LASSO
|
cs.IT math.IT
|
We construct two families of deterministic sensing matrices where the columns
are obtained by exponentiating codewords in the quaternary Delsarte-Goethals
code $DG(m,r)$. This method of construction results in sensing matrices with
low coherence and spectral norm. The first family, which we call
Delsarte-Goethals frames, are $2^m$ - dimensional tight frames with redundancy
$2^{rm}$. The second family, which we call Delsarte-Goethals sieves, are
obtained by subsampling the column vectors in a Delsarte-Goethals frame.
Different rows of a Delsarte-Goethals sieve may not be orthogonal, and we
present an effective algorithm for identifying all pairs of non-orthogonal
rows. These pairs turn out to be duplicate measurements, and eliminating them
leads to a tight frame. Experimental results suggest that all $DG(m,r)$ sieves
with $m\leq 15$ and $r\geq2$ are tight frames; there are no duplicate rows. For
both families of sensing matrices, we measure accuracy of reconstruction
(statistical 0-1 loss) and complexity (average reconstruction time) as a
function of the sparsity level $k$. Our results show that DG frames and sieves
outperform random Gaussian matrices in terms of noiseless and noisy signal
recovery using the LASSO.
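Coherence, one of the two matrix properties highlighted above, is simply the largest absolute inner product between distinct normalized columns. A minimal sketch on four unit vectors in R^2 at angles 0°, 45°, 90°, and 135° (illustrative, not a Delsarte-Goethals construction):

```python
import math

def coherence(cols):
    """Max absolute inner product between distinct l2-normalized columns."""
    normed = []
    for c in cols:
        nrm = math.sqrt(sum(x * x for x in c))
        normed.append([x / nrm for x in c])
    best = 0.0
    for i in range(len(normed)):
        for j in range(i + 1, len(normed)):
            ip = abs(sum(a * b for a, b in zip(normed[i], normed[j])))
            best = max(best, ip)
    return best

# Four equally spaced unit vectors in the plane; the closest pair is 45 deg apart.
cols = [[math.cos(t), math.sin(t)] for t in
        (0, math.pi / 4, math.pi / 2, 3 * math.pi / 4)]
print(round(coherence(cols), 4))
```

Low coherence is what makes a deterministic sensing matrix competitive with random ensembles for sparse recovery.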
|
1004.4965
|
Many-to-Many Graph Matching: a Continuous Relaxation Approach
|
stat.ML cs.CV
|
Graphs provide an efficient tool for object representation in various
computer vision applications. Once graph-based representations are constructed,
an important question is how to compare graphs. This problem is often
formulated as a graph matching problem where one seeks a mapping between
vertices of two graphs which optimally aligns their structure. In the classical
formulation of graph matching, only one-to-one correspondences between vertices
are considered. However, in many applications, graphs cannot be matched
perfectly and it is more interesting to consider many-to-many correspondences
where clusters of vertices in one graph are matched to clusters of vertices in
the other graph. In this paper, we formulate the many-to-many graph matching
problem as a discrete optimization problem and propose an approximate algorithm
based on a continuous relaxation of the combinatorial problem. We compare our
method with other existing methods on several benchmark computer vision
datasets.
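The classical one-to-one formulation referenced above can be made concrete by brute force on small graphs: search over vertex permutations for the one aligning the most edges (illustrative only; the paper's contribution is the many-to-many continuous relaxation, not this enumeration):

```python
from itertools import permutations

def best_matching(A, B):
    """Brute-force one-to-one graph matching: find the permutation of B's
    vertices maximizing the number of edges aligned with A's edges."""
    n = len(A)
    best_perm, best_score = None, -1
    for perm in permutations(range(n)):
        # Each undirected edge contributes twice to this sum.
        score = sum(A[i][j] * B[perm[i]][perm[j]]
                    for i in range(n) for j in range(n))
        if score > best_score:
            best_score, best_perm = score, perm
    return best_perm, best_score // 2

# A path 0-1-2-3 versus the same path with relabelled vertices (0-2-1-3).
A = [[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]]
B = [[0, 0, 1, 0], [0, 0, 1, 1], [1, 1, 0, 0], [0, 1, 0, 0]]
perm, aligned = best_matching(A, B)
print(aligned)
```

The factorial cost of this enumeration is exactly what continuous relaxations are designed to avoid.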
|
1004.4968
|
On the Achievable Rate Regions for a Class of Cognitive Radio Channels:
Interference Channel with Degraded Message Sets with Unidirectional
Destination Cooperation
|
cs.IT math.IT
|
This paper considers the capacity gains from unidirectional destination
cooperation in cognitive radio channels. We propose a novel channel, the
interference channel with degraded message sets with unidirectional destination
cooperation (IC-DMS-UDC), which allows the receiver of the cognitive radio
(secondary user) to participate in relaying information for the primary system
(legitimate user). Our main result is an achievable rate region that combines
Gel'fand-Pinsker coding with the partial-decode-and-forward strategy employed
in the relay channel. A numerical evaluation of the region in the Gaussian case
is also provided to demonstrate the improvements.
|
1004.5026
|
Compressed Sensing: How sharp is the Restricted Isometry Property?
|
cs.IT math.IT
|
Compressed Sensing (CS) seeks to recover an unknown vector with $N$ entries
by making far fewer than $N$ measurements; it posits that the number of
compressed sensing measurements should be comparable to the information content
of the vector, not simply $N$. CS combines the important task of compression
directly with the measurement task. Since its introduction in 2004 there have
been hundreds of manuscripts on CS, a large fraction of which develop
algorithms to recover a signal from its compressed measurements. Because of the
paradoxical nature of CS -- exact reconstruction from seemingly undersampled
measurements -- it is crucial for acceptance of an algorithm that rigorous
analyses verify the degree of undersampling the algorithm permits. The
Restricted Isometry Property (RIP) has become the dominant tool used for the
analysis in such cases. We present here an asymmetric form of RIP which gives
tighter bounds than the usual symmetric one. We give the best known bounds on
the RIP constants for matrices from the Gaussian ensemble. Our derivations
illustrate the way in which the combinatorial nature of CS is controlled. Our
quantitative bounds on the RIP allow precise statements as to how aggressively
a signal can be undersampled, the essential question for practitioners. We also
document the extent to which RIP gives precise information about the true
performance limits of CS, by comparing with approaches from high-dimensional
geometry.
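The asymmetric form of RIP can be probed empirically: for the Gaussian ensemble, track how far the extreme eigenvalues of small Gram submatrices fall below and rise above 1, giving separate lower and upper constants. A Monte Carlo sketch at sparsity 2 (illustrative only; the paper derives analytic bounds):

```python
import math
import random

def rip2_constants(m, n, trials=300, seed=1):
    """Monte Carlo sketch of asymmetric RIP constants at sparsity k = 2 for an
    m x n Gaussian matrix with N(0, 1/m) entries: track the extreme eigenvalues
    of 2x2 Gram matrices of randomly chosen column pairs."""
    rng = random.Random(seed)
    A = [[rng.gauss(0.0, 1.0 / math.sqrt(m)) for _ in range(n)] for _ in range(m)]
    lo, hi = 1.0, 1.0
    for _ in range(trials):
        i, j = rng.sample(range(n), 2)
        g11 = sum(row[i] * row[i] for row in A)
        g22 = sum(row[j] * row[j] for row in A)
        g12 = sum(row[i] * row[j] for row in A)
        t, d = g11 + g22, g11 * g22 - g12 * g12
        disc = math.sqrt(t * t - 4.0 * d)   # spread of the 2x2 Gram eigenvalues
        lo, hi = min(lo, (t - disc) / 2.0), max(hi, (t + disc) / 2.0)
    return 1.0 - lo, hi - 1.0               # lower / upper RIP constants

dL, dU = rip2_constants(m=100, n=400)
print(0.0 < dL < 1.0 and dU > 0.0)
```

Tracking the two deviations separately is the essence of the asymmetric formulation; the symmetric RIP constant would be max(dL, dU).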
|
1004.5049
|
The Burbea-Rao and Bhattacharyya centroids
|
cs.IT cs.CG math.IT
|
We study the centroid with respect to the class of information-theoretic
Burbea-Rao divergences that generalize the celebrated Jensen-Shannon divergence
by measuring the non-negative Jensen difference induced by a strictly convex
and differentiable function. Although those Burbea-Rao divergences are
symmetric by construction, they are not metric since they fail to satisfy the
triangle inequality. We first explain how a particular symmetrization of
Bregman divergences called Jensen-Bregman distances yields exactly those
Burbea-Rao divergences. We then define skew Burbea-Rao divergences, and show
that in limiting cases they amount to computing Bregman divergences. We then
prove that Burbea-Rao centroids are unique, and can be arbitrarily finely
approximated by a generic iterative concave-convex optimization algorithm with
a guaranteed convergence property. In the second part of the paper, we consider
the Bhattacharyya distance, commonly used to measure the degree of overlap
between probability distributions. We show that Bhattacharyya distances between
members of the same statistical exponential family amount to calculating a
Burbea-Rao divergence in disguise.
Thus we get an efficient algorithm for computing the Bhattacharyya centroid of
a set of parametric distributions belonging to the same exponential families,
improving over former specialized methods found in the literature that were
limited to univariate or "diagonal" multivariate Gaussians. To illustrate the
performance of our Bhattacharyya/Burbea-Rao centroid algorithm, we present
experimental performance results for $k$-means and hierarchical clustering
methods of Gaussian mixture models.
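The Jensen-Shannon divergence generalized above is itself the Jensen difference of the Shannon entropy, i.e. the prototypical Burbea-Rao divergence. A minimal sketch for discrete distributions:

```python
import math

def entropy(p):
    """Shannon entropy in nats, skipping zero-probability outcomes."""
    return -sum(x * math.log(x) for x in p if x > 0)

def jensen_shannon(p, q):
    """Jensen difference of the (concave) entropy:
    H((p+q)/2) - (H(p) + H(q)) / 2, the prototypical Burbea-Rao divergence."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return entropy(m) - (entropy(p) + entropy(q)) / 2

p = [0.5, 0.5, 0.0]
q = [0.0, 0.5, 0.5]
print(round(jensen_shannon(p, q), 4))
```

Replacing the entropy with any other strictly concave differentiable function gives another member of the Burbea-Rao family; the divergence is symmetric but, as the abstract notes, not a metric.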
|
1004.5051
|
Tailored RF pulse optimization for magnetization inversion at ultra high
field
|
cs.CE cs.NE physics.med-ph
|
The radiofrequency (RF) transmit field is severely inhomogeneous at ultrahigh
field due to both RF penetration and RF coil design issues. This particularly
impairs image quality for sequences that use inversion pulses such as
magnetization prepared rapid acquisition gradient echo and limits the use of
quantitative arterial spin labeling sequences such as flow-attenuated inversion
recovery. Here we have used a search algorithm to produce inversion pulses
tailored to the heterogeneity of the RF transmit field at 7 T. This yielded a
slice-selective inversion pulse that worked well (good slice profile and
uniform inversion) over the range of RF amplitudes typically obtained in the
head at 7 T, while still maintaining an experimentally achievable pulse length
and amplitude. The pulses were based on the frequency offset corrected
inversion technique, as well as time dilation of functions, but the RF
amplitude, frequency sweep, and gradient functions were all generated using a
genetic algorithm with an evaluation function that took into account both the
desired inversion profile and the transmit field inhomogeneity.
|
1004.5070
|
Multichannel Sampling of Pulse Streams at the Rate of Innovation
|
cs.IT math.IT
|
We consider minimal-rate sampling schemes for infinite streams of delayed and
weighted versions of a known pulse shape. The minimal sampling rate for these
parametric signals is referred to as the rate of innovation and is equal to the
number of degrees of freedom per unit time. Although sampling of infinite pulse
streams was treated in previous works, either the rate of innovation was not
achieved, or the pulse shape was limited to Diracs. In this paper we propose a
multichannel architecture for sampling pulse streams with arbitrary shape,
operating at the rate of innovation. Our approach is based on modulating the
input signal with a set of properly chosen waveforms, followed by a bank of
integrators. This architecture is motivated by recent work on sub-Nyquist
sampling of multiband signals. We show that the pulse stream can be recovered
from the proposed minimal-rate samples using standard tools taken from spectral
estimation in a stable way even at high rates of innovation. In addition, we
address practical implementation issues, such as reduction of hardware
complexity and immunity to failure in the sampling channels. The resulting
scheme is flexible and exhibits better noise robustness than previous
approaches.
|
1004.5071
|
Dimensions of Formality: A Case Study for MKM in Software Engineering
|
cs.DL cs.AI cs.SE
|
We study the formalization of a collection of documents created for a
Software Engineering project from an MKM perspective. We analyze how document
and collection markup formats can cope with an open-ended, multi-dimensional
space of primary and secondary classifications and relationships. We show that
RDFa-based extensions of MKM formats, employing flexible "metadata"
relationships referencing specific vocabularies for distinct dimensions, are
well-suited to encode this and to put it into service. This formalized
knowledge can be used for enriching interactive document browsing, for enabling
multi-dimensional metadata queries over documents and collections, and for
exporting Linked Data to the Semantic Web and thus enabling further reuse.
|
1004.5094
|
Fastest Distributed Consensus Problem on Branches of an Arbitrary
Connected Sensor Network
|
cs.IT cs.DC cs.DM math.IT
|
This paper studies the fastest distributed consensus averaging problem on
branches of an arbitrary connected sensor network. In previous works, full
knowledge of the sensor network's connectivity topology was required to
determine the optimal weights and convergence rate of the distributed consensus
averaging algorithm over the network. Here, for the first time, the optimal
weights are determined analytically for the edges of certain types of branches,
independent of the rest of the network. The solution procedure consists of
stratification of the associated connectivity graph of the branches and
Semidefinite Programming (SDP), in particular solving the slackness conditions,
where the optimal weights are obtained by inductive comparison of the
characteristic polynomials arising from the slackness conditions. Several
examples and numerical results are provided to confirm the optimality of the
obtained weights.
|
1004.5108
|
Analyzing Random Network Coding with Differential Equations and
Differential Inclusions
|
cs.IT cs.NI math.DS math.IT
|
We develop a framework based on differential equations (DE) and differential
inclusions (DI) for analyzing Random Network Coding (RNC), as well as a
nonlinear variant referred to as Random Coupon (RC), in a wireless network. The
DEDI framework serves as a powerful numerical and analytical tool to study RNC.
We demonstrate its versatility by proving theoretical results on multicast
information flows in a wireless network using RNC or RC. We also demonstrate
the accuracy and flexibility of the performance analysis enabled by this
framework via illustrative examples of networks with multiple multicast
sessions, user cooperation and arbitrary topologies.
|
1004.5132
|
The Two-User Deterministic Interference Channel with Rate-Limited
Feedback
|
cs.IT math.IT
|
In this paper we study the effect of rate-limited feedback on the sum-rate
capacity of the deterministic interference channel. We characterize the
sum-rate capacity of this channel in the symmetric case and show that having
feedback links can increase the sum-rate capacity by at most the rate of the
available feedback. Our proof includes a novel upper-bound on the sum-rate
capacity and a set of new achievability strategies.
|
1004.5157
|
Deriving Good LDPC Convolutional Codes from LDPC Block Codes
|
cs.IT math.IT
|
Low-density parity-check (LDPC) convolutional codes are capable of achieving
excellent performance with low encoding and decoding complexity. In this paper
we discuss several graph-cover-based methods for deriving families of
time-invariant and time-varying LDPC convolutional codes from LDPC block codes
and show how earlier proposed LDPC convolutional code constructions can be
presented within this framework. Some of the constructed convolutional codes
significantly outperform the underlying LDPC block codes. We investigate some
possible reasons for this "convolutional gain," and we also discuss the ---
mostly moderate --- decoder cost increase that is incurred by going from LDPC
block to LDPC convolutional codes.
|
1004.5168
|
Efficient and Effective Spam Filtering and Re-ranking for Large Web
Datasets
|
cs.IR
|
The TREC 2009 web ad hoc and relevance feedback tasks used a new document
collection, the ClueWeb09 dataset, which was crawled from the general Web in
early 2009. This dataset contains 1 billion web pages, a substantial fraction
of which are spam --- pages designed to deceive search engines so as to deliver
an unwanted payload. We examine the effect of spam on the results of the TREC
2009 web ad hoc and relevance feedback tasks, which used the ClueWeb09 dataset.
We show that a simple content-based classifier with minimal training is
efficient enough to rank the "spamminess" of every page in the dataset using a
standard personal computer in 48 hours, and effective enough to yield
significant and substantive improvements in the fixed-cutoff precision (estP10)
as well as rank measures (estR-Precision, StatMAP, MAP) of nearly all submitted
runs. Moreover, using a set of "honeypot" queries, the labeling of training
data may be reduced to an entirely automatic process. The results of classical
information retrieval methods are particularly enhanced by filtering --- from
among the worst to among the best.
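A content-based spam scorer of the kind described can be sketched, in miniature, as per-token log-odds with add-one smoothing (a hypothetical toy model, not the classifier or training data used in the paper):

```python
import math
from collections import Counter

def train(spam_docs, ham_docs):
    """Per-token log-odds with add-one smoothing (a minimal content classifier)."""
    spam = Counter(w for d in spam_docs for w in d.split())
    ham = Counter(w for d in ham_docs for w in d.split())
    vocab = set(spam) | set(ham)
    ns, nh = sum(spam.values()), sum(ham.values())
    return {w: math.log((spam[w] + 1) / (ns + len(vocab))) -
               math.log((ham[w] + 1) / (nh + len(vocab))) for w in vocab}

def spamminess(model, doc):
    """Sum of log-odds of known tokens; higher means spammier."""
    return sum(model.get(w, 0.0) for w in doc.split())

model = train(
    spam_docs=["cheap pills buy now", "buy cheap watches now"],
    ham_docs=["relevance feedback for ad hoc retrieval", "web retrieval tasks"],
)
print(spamminess(model, "buy cheap pills") > 0)
print(spamminess(model, "ad hoc retrieval") < 0)
```

Scoring every page once and ranking by this number is what makes such a classifier cheap enough to run over a billion-page collection.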
|
1004.5181
|
Analysis of Feedback Overhead for MIMO Beamforming over Time-Varying
Channels
|
cs.IT math.IT
|
In this paper, the required feedback overhead for multiple-input
multiple-output (MIMO) beamforming over time-varying channels is quantified in
terms of the entropy of the feedback messages. When each transmit antenna has
its own power amplifier with an individual power limit, it is known that only
phase steering information is needed to form the optimal transmit beamforming
vector. Since wireless fading channels are temporally correlated, previously
reported feedback messages can be used as prior information to efficiently
encode the current feedback message. Thus, phase tracking information, i.e.,
the difference between the phase steering information in adjacent feedback
slots, suffices as a feedback message. We show that while the entropy of the
phase steering information is constant, the entropy of the phase tracking
information is a function of the temporal correlation parameter. For the phase
tracking information, upper bounds on the entropy are derived using the
Gaussian entropy and the von Mises entropy, via the theory of maximum entropy
distributions. The derived results quantify the reduction in feedback overhead
of phase tracking information over phase steering information. From an
application perspective, the signal-to-noise ratio (SNR) gain of phase tracking
beamforming over phase steering beamforming is evaluated using Monte Carlo
simulation. We also show that the derived entropies can determine the
appropriate duration of the feedback reports with respect to the channel
variation rate.
|
1004.5189
|
Rate-distortion function via minimum mean square error estimation
|
cs.IT math.IT
|
We derive a simple general parametric representation of the rate-distortion
function of a memoryless source, where both the rate and the distortion are
given by integrals whose integrands include the minimum mean square error
(MMSE) of the distortion $\Delta=d(X,Y)$ based on the source symbol $X$, with
respect to a certain joint distribution of these two random variables. At first
glance, these relations may seem somewhat similar to the I-MMSE relations due
to Guo, Shamai and Verd\'u, but they are, in fact, quite different. The new
relations among rate, distortion, and MMSE are discussed from several aspects,
and more importantly, it is demonstrated that they can sometimes be rather
useful for obtaining non-trivial upper and lower bounds on the rate-distortion
function, as well as for determining the exact asymptotic behavior for very low
and for very large distortion. Analogous MMSE relations hold for channel
capacity as well.
|
1004.5194
|
Clustering processes
|
cs.LG cs.IT math.IT stat.ML
|
The problem of clustering is considered, for the case when each data point is
a sample generated by a stationary ergodic process. We propose a very natural
asymptotic notion of consistency, and show that simple consistent algorithms
exist, under most general non-parametric assumptions. The notion of consistency
is as follows: two samples should be put into the same cluster if and only if
they were generated by the same distribution. With this notion of consistency,
clustering generalizes such classical statistical problems as homogeneity
testing and process classification. We show that, for the case of a known
number of clusters, consistency can be achieved under the only assumption that
the joint distribution of the data is stationary ergodic (no parametric or
Markovian assumptions, no assumptions of independence, neither between nor
within the samples). If the number of clusters is unknown, consistency can be
achieved under appropriate assumptions on the mixing rates of the processes
(again, no parametric or independence assumptions). In both cases we give
examples of simple (at most quadratic in each argument) algorithms which are
consistent.
|
1004.5195
|
On Perfect Codes in the Johnson Graph
|
cs.IT math.IT
|
In this paper we consider the existence of nontrivial perfect codes in the
Johnson graph J(n,w). We present combinatorial and number theory techniques to
provide necessary conditions for existence of such codes and reduce the range
of parameters in which 1-perfect and 2-perfect codes may exist.
|
1004.5214
|
Split-Extended LDPC codes for coded cooperation
|
cs.IT math.IT
|
We propose a new code design that aims to distribute an LDPC code over a
relay channel. It is based on a split-and-extend approach, which allows the
relay to split the set of bits connected to some parity-check of the LDPC code
into two or several subsets. Subsequently, the sums of bits within each subset
are used in a repeat-accumulate manner in order to generate extra bits sent
from the relay toward the destination. We show that the proposed design yields
LDPC codes with enhanced correction capacity and can be advantageously applied
to existing codes, which allows for addressing cooperation issues for evolving
standards. Finally, we derive density evolution equations for the proposed
design, and we show that Split-Extended LDPC codes can approach very closely
the capacity of the Gaussian relay channel.
|
1004.5215
|
System Dynamics Modelling of the Processes Involving the Maintenance of
the Naive T Cell Repertoire
|
cs.AI q-bio.CB
|
The study of immune system aging, i.e., immunosenescence, is a relatively new
research topic. It deals with understanding the processes of immunodegradation
that indicate signs of functionality loss possibly leading to death. Even
though it is not possible to prevent immunosenescence, there is great benefit
in comprehending its causes, which may help to reverse some of the damage done
and thus improve life expectancy. One of the main factors influencing the
process of immunosenescence is the number and phenotypical variety of naive T
cells in an individual. This work presents a review of immunosenescence,
proposes system dynamics modelling of the processes involving the maintenance
of the naive T cell repertoire and presents some preliminary results.
|
1004.5216
|
Optimized puncturing distributions for irregular non-binary LDPC codes
|
cs.IT math.IT
|
In this paper we design non-uniform bit-wise puncturing distributions for
irregular non-binary LDPC (NB-LDPC) codes. The puncturing distributions are
optimized by minimizing the decoding threshold of the punctured LDPC code, the
threshold being computed with a Monte-Carlo implementation of Density
Evolution. First, we show that Density Evolution computed with Monte-Carlo
simulations provides accurate (very close) and precise (small variance)
estimates of NB-LDPC code ensemble thresholds. Based on the proposed method, we
analyze several puncturing distributions for regular and semi-regular codes,
obtained either by clustering punctured bits, or spreading them over the
symbol-nodes of the Tanner graph. Finally, optimized puncturing distributions
for non-binary LDPC codes with small maximum degree are presented, which
exhibit a gap between 0.2 and 0.5 dB to the channel capacity, for punctured
rates varying from 0.5 to 0.9.
|
1004.5217
|
Analysis of Quasi-Cyclic LDPC codes under ML decoding over the erasure
channel
|
cs.IT math.IT
|
In this paper, we show that Quasi-Cyclic LDPC codes can efficiently
accommodate the hybrid iterative/ML decoding over the binary erasure channel.
We demonstrate that the quasi-cyclic structure of the parity-check matrix can
be advantageously used in order to significantly reduce the complexity of the
ML decoding. This is achieved by a simple row/column permutation that
transforms a QC matrix into a pseudo-band form. Based on this approach, we
propose a class of QC-LDPC codes with almost ideal error correction performance
under the ML decoding, while the required number of row/symbol operations
scales as $k\sqrt{k}$, where $k$ is the number of source symbols.
|
1004.5222
|
The Application of a Dendritic Cell Algorithm to a Robotic Classifier
|
cs.AI cs.NE cs.RO
|
The dendritic cell algorithm is an immune-inspired technique for processing
time-dependant data. Here we propose it as a possible solution for a robotic
classification problem. The dendritic cell algorithm is implemented on a real
robot and an investigation is performed into the effects of varying the
migration threshold median for the cell population. The algorithm performs well
on a classification task with very little tuning. Ways of extending the
implementation to allow it to be used as a classifier within the field of
robotic security are suggested.
|
1004.5229
|
Optimism in Reinforcement Learning and Kullback-Leibler Divergence
|
cs.LG math.ST stat.ML stat.TH
|
We consider model-based reinforcement learning in finite Markov Decision
Processes (MDPs), focusing on so-called optimistic strategies. In MDPs,
optimism can be implemented by carrying out extended value iterations under a
constraint of consistency with the estimated model transition probabilities.
The UCRL2 algorithm by Auer, Jaksch and Ortner (2009), which follows this
strategy, has recently been shown to guarantee near-optimal regret bounds. In
this paper, we strongly argue in favor of using the Kullback-Leibler (KL)
divergence for this purpose. By studying the linear maximization problem under
KL constraints, we provide an efficient algorithm, termed KL-UCRL, for
solving KL-optimistic extended value iteration. Using recent deviation bounds
on the KL divergence, we prove that KL-UCRL provides the same guarantees as
UCRL2 in terms of regret. However, numerical experiments on classical
benchmarks show a significantly improved behavior, particularly when the MDP
has reduced connectivity. To support this observation, we provide elements of
comparison between the two algorithms based on geometric considerations.
|
1004.5262
|
On Application of the Local Search and the Genetic Algorithms Techniques
to Some Combinatorial Optimization Problems
|
cs.NE math.OC
|
In this paper, an approach to solving several combinatorial optimization
problems using the local search and the genetic algorithm techniques is
proposed. Initially, this approach was developed to overcome some
difficulties inhibiting the application of the above-mentioned techniques to
problems of the Questionnaire Theory. But when the algorithms were developed,
it became clear that they could also be successfully applied to the Minimum
Set Cover, the 0-1 Knapsack, and probably other combinatorial optimization
problems.
|
1004.5274
|
Robustness maximization of parallel multichannel systems
|
cs.IT math.IT
|
Bit error rate (BER) minimization and SNR-gap maximization, two robustness
optimization problems, are solved, under average power and bit-rate
constraints, according to the waterfilling policy. Under peak-power constraint
the solutions differ and this paper gives bit-loading solutions of both
robustness optimization problems over independent parallel channels. The study
is based on an analytical approach with a generalized Lagrangian relaxation
tool and on a greedy-type algorithmic approach. Tight BER expressions are used
for square
and rectangular quadrature amplitude modulations. Integer bit solution of
analytical continuous bit-rates is performed with a new generalized secant
method. The asymptotic convergence of both robustness optimizations is proved
for both analytical and algorithmic approaches. We also prove that, in
conventional margin maximization problem, the equivalence between SNR-gap
maximization and power minimization does not hold with peak-power limitation.
Based on a defined dissimilarity measure, bit-loading solutions are compared
over power line communication channel for multicarrier systems. Simulation
results confirm the asymptotic convergence of both allocation policies. In the
non-asymptotic regime, the allocation policies can be interchanged depending on the
robustness measure and the operating point of the communication system. The low
computational effort of the suboptimal solution based on analytical approach
leads to a good trade-off between performance and complexity.
|
1004.5305
|
Compressed Sensing with off-axis frequency-shifting holography
|
physics.optics cs.CV physics.med-ph
|
This work reveals an experimental microscopy acquisition scheme successfully
combining Compressed Sensing (CS) and digital holography in off-axis and
frequency-shifting conditions. CS is a recent data acquisition theory involving
signal reconstruction from randomly undersampled measurements, exploiting the
fact that most images present some compact structure and redundancy. We propose
a genuine CS-based imaging scheme for sparse gradient images, acquiring a
diffraction map of the optical field with holographic microscopy and recovering
the signal from as little as 7% of random measurements. We report experimental
results demonstrating how CS can lead to an elegant and effective way to
reconstruct images, opening the door for new microscopy applications.
|
1004.5326
|
Designing neural networks that process mean values of random variables
|
cond-mat.dis-nn cs.AI cs.LG
|
We introduce a class of neural networks derived from probabilistic models in
the form of Bayesian networks. By imposing additional assumptions about the
nature of the probabilistic models represented in the networks, we derive
neural networks with standard dynamics that require no training to determine
the synaptic weights, that perform accurate calculation of the mean values of
the random variables, that can pool multiple sources of evidence, and that deal
cleanly and consistently with inconsistent or contradictory evidence. The
presented neural networks capture many properties of Bayesian networks,
providing distributed versions of probabilistic models.
|
1004.5339
|
Query strategy for sequential ontology debugging
|
cs.LO cs.AI
|
Debugging of ontologies is an important prerequisite for their wide-spread
application, especially in areas that rely upon everyday users to create and
maintain knowledge bases, as in the case of the Semantic Web. Recent approaches
use diagnosis methods to identify causes of inconsistent or incoherent
ontologies. However, in most debugging scenarios these methods return many
alternative diagnoses, thus placing the burden of fault localization on the
user. This paper demonstrates how the target diagnosis can be identified by
performing a sequence of observations, that is, by querying an oracle about
entailments of the target ontology. We exploit a-priori probabilities of
typical user errors to formulate information-theoretic concepts for query
selection. Our evaluation showed that the proposed method significantly reduces
the number of required queries compared to myopic strategies. We experimented
with different probability distributions of user errors and different qualities
of the a-priori probabilities. Our measurements showed the advantage of the
information-theoretic approach to query selection even in cases where only a
rough estimate of the priors is available.
|
1004.5351
|
Isometric Embeddings in Imaging and Vision: Facts and Fiction
|
cs.CV math.CV math.DG
|
We explore the practicability of Nash's Embedding Theorem in vision and
imaging sciences. In particular, we investigate the relevance of a result of
Burago and Zalgaller regarding the existence of isometric embeddings of
polyhedral surfaces in $\mathbb{R}^3$ and we show that their proof does not
extend directly to higher dimensions.
|
1004.5367
|
Multiplicatively Repeated Non-Binary LDPC Codes
|
cs.IT math.IT
|
We propose non-binary LDPC codes concatenated with multiplicative repetition
codes. By multiplicatively repeating the (2,3)-regular non-binary LDPC mother
code of rate 1/3, we construct rate-compatible codes of lower rates 1/6, 1/9,
1/12,... Surprisingly, such simple low-rate non-binary LDPC codes outperform
the best known low-rate binary LDPC codes. Moreover, we propose a decoding
algorithm for the proposed codes, which allows them to be decoded with almost
the same computational complexity as that of the mother code.
|
1004.5370
|
Self-Taught Hashing for Fast Similarity Search
|
cs.IR
|
The ability to perform fast similarity search at large scale is of great importance
to many Information Retrieval (IR) applications. A promising way to accelerate
similarity search is semantic hashing which designs compact binary codes for a
large number of documents so that semantically similar documents are mapped to
similar codes (within a short Hamming distance). Although some recently
proposed techniques are able to generate high-quality codes for documents known
in advance, obtaining the codes for previously unseen documents remains a
very challenging problem. In this paper, we emphasise this issue and propose a
novel Self-Taught Hashing (STH) approach to semantic hashing: we first find the
optimal $l$-bit binary codes for all documents in the given corpus via
unsupervised learning, and then train $l$ classifiers via supervised learning
to predict the $l$-bit code for any query document unseen before. Our
experiments on three real-world text datasets show that the proposed approach
using binarised Laplacian Eigenmap (LapEig) and linear Support Vector Machine
(SVM) outperforms state-of-the-art techniques significantly.
|
1004.5421
|
Interference Mitigation through Limited Transmitter Cooperation
|
cs.IT math.IT
|
Interference limits performance in wireless networks, and cooperation among
receivers or transmitters can help mitigate interference by forming distributed
MIMO systems. Earlier work shows how limited receiver cooperation helps
mitigate interference. The scenario with transmitter cooperation, however, is
more difficult to tackle. In this paper we study the two-user Gaussian
interference channel with conferencing transmitters to make progress in
this direction. We characterize the capacity region to within 6.5 bits/s/Hz,
regardless of channel parameters. Based on the constant-to-optimality result,
we show that there is an interesting reciprocity between the scenario with
conferencing transmitters and the scenario with conferencing receivers, and
their capacity regions are within a constant gap to each other. Hence in the
interference-limited regime, the behavior of the benefit brought by transmitter
cooperation is the same as that by receiver cooperation.
|
1004.5424
|
Graphic Symbol Recognition using Graph Based Signature and Bayesian
Network Classifier
|
cs.CV cs.GR
|
We present a new approach for recognition of complex graphic symbols in
technical documents. Graphic symbol recognition is a well known challenge in
the field of document image analysis and is at heart of most graphic
recognition systems. Our method uses a structural approach for symbol
representation and a statistical classifier for symbol recognition. In our system,
we represent symbols by their graph based signatures: a graphic symbol is
vectorized and is converted to an attributed relational graph, which is used
for computing a feature vector for the symbol. This signature corresponds to
geometry and topology of the symbol. We learn a Bayesian network to encode
joint probability distribution of symbol signatures and use it in a supervised
learning scenario for graphic symbol recognition. We have evaluated our method
on synthetically deformed and degraded images of pre-segmented 2D architectural
and electronic symbols from GREC databases and have obtained encouraging
recognition rates.
|
1004.5427
|
Employing fuzzy intervals and loop-based methodology for designing
structural signature: an application to symbol recognition
|
cs.CV cs.GR
|
Motivation of our work is to present a new methodology for symbol
recognition. We support structural methods for representing visual associations
in graphic documents. The proposed method employs a structural approach for
symbol representation and a statistical classifier for recognition. We
vectorize a graphic symbol, encode its topological and geometrical information
by an ARG and compute a signature from this structural graph. To address the
sensitivity of structural representations to deformations and degradations, we
use data adapted fuzzy intervals while computing structural signature. The
joint probability distribution of signatures is encoded by a Bayesian network.
This network in fact serves as a mechanism for pruning irrelevant features and
choosing a subset of interesting features from structural signatures, for
underlying symbol set. Finally we deploy the Bayesian network in supervised
learning scenario for recognizing query symbols. We have evaluated the
robustness of our method against noise, on synthetically deformed and degraded
images of pre-segmented 2D architectural and electronic symbols from GREC
databases and have obtained encouraging recognition rates. A second set of
experimentation was carried out for evaluating the performance of our method
against context noise, i.e., symbols cropped from complete documents. The results
support the use of our signature by a symbol spotting system.
|
1004.5429
|
On Distance Properties of Quasi-Cyclic Protograph-Based LDPC Codes
|
cs.IT math.IT
|
Recent work has shown that properly designed protograph-based LDPC codes may
have minimum distance linearly increasing with block length. This notion rests
on ensemble arguments over all possible expansions of the base protograph. When
implementation complexity is considered, the expansion is typically chosen to
be quite orderly. For example, protograph expansion by cyclically shifting
connections creates a quasi-cyclic (QC) code. Other recent work has provided
upper bounds on the minimum distance of QC codes. In this paper, these bounds
are expanded upon to cover puncturing and tightened in several specific cases.
We then evaluate our upper bounds for the most prominent protograph code thus
far, one proposed for deep-space usage in the CCSDS experimental standard, the
code known as AR4JA.
|
1004.5442
|
Multiple-Relaxation-Time Lattice Boltzmann Approach to Compressible
Flows with Flexible Specific-Heat Ratio and Prandtl Number
|
cond-mat.soft cs.CE nlin.CG physics.comp-ph physics.flu-dyn stat.CO
|
A new multiple-relaxation-time lattice Boltzmann scheme for compressible
flows with arbitrary specific heat ratio and Prandtl number is presented. In
the new scheme, which is based on a two-dimensional 16-discrete-velocity model,
the moment space and the corresponding transformation matrix are constructed
according to the seven-moment relations associated with the local equilibrium
distribution function. In the continuum limit, the model recovers the
compressible Navier-Stokes equations with flexible specific-heat ratio and
Prandtl number. Numerical experiments show that compressible flows with strong
shocks can be simulated by the present model up to Mach numbers $Ma \sim 5$.
|
1004.5479
|
On Minimax Robust Detection of Stationary Gaussian Signals in White
Gaussian Noise
|
cs.IT math.IT math.ST stat.TH
|
The problem of detecting a wide-sense stationary Gaussian signal process
embedded in white Gaussian noise, where the power spectral density of the
signal process exhibits uncertainty, is investigated. The performance of
minimax robust detection is characterized by the exponential decay rate of the
miss probability under a Neyman-Pearson criterion with a fixed false alarm
probability, as the length of the observation interval grows without bound. A
dominance condition is identified for the uncertainty set of spectral density
functions, and it is established that, under the dominance condition, the
resulting minimax problem possesses a saddle point, which is achievable by the
likelihood ratio tests matched to a so-called dominated power spectral density
in the uncertainty set. No convexity condition on the uncertainty set is
required to establish this result.
|
1004.5500
|
Simple Type Theory as Framework for Combining Logics
|
cs.LO cs.AI
|
Simple type theory is suited as a framework for combining classical and
non-classical logics. This claim is based on the observation that various
prominent logics, including (quantified) multimodal logics and intuitionistic
logics, can be elegantly embedded in simple type theory. Furthermore, simple
type theory is sufficiently expressive to model combinations of embedded logics
and it has a well understood semantics. Off-the-shelf reasoning systems for
simple type theory exist that can be uniformly employed for reasoning within
and about combinations of logics.
|
1004.5529
|
High-Rate Vector Quantization for the Neyman-Pearson Detection of
Correlated Processes
|
cs.IT math.IT math.PR math.ST stat.TH
|
This paper investigates the effect of quantization on the performance of the
Neyman-Pearson test. It is assumed that a sensing unit observes samples of a
correlated stationary ergodic multivariate process. Each sample is passed
through an N-point quantizer and transmitted to a decision device which
performs a binary hypothesis test. For any false alarm level, it is shown that
the miss probability of the Neyman-Pearson test converges to zero exponentially
as the number of samples tends to infinity, assuming that the observed process
satisfies certain mixing conditions. The main contribution of this paper is to
provide a compact closed-form expression of the error exponent in the high-rate
regime, i.e., when the number N of quantization levels tends to infinity,
generalizing previous results of Gupta and Hero to the case of non-independent
observations. If d represents the dimension of one sample, it is proved that
the error exponent converges at rate N^{2/d} to the one obtained in the absence
of quantization. As an application, relevant high-rate quantization strategies
which lead to a large error exponent are determined. Numerical results indicate
that the proposed quantization rule can yield better performance than existing
ones in terms of detection error.
|
1004.5538
|
Bayesian estimation of regularization and PSF parameters for Wiener-Hunt
deconvolution
|
stat.CO cs.CV physics.data-an stat.ME
|
This paper tackles the problem of image deconvolution with joint estimation
of PSF parameters and hyperparameters. Within a Bayesian framework, the
solution is inferred via a global a posteriori law for unknown parameters and
object. The estimate is chosen as the posterior mean, numerically calculated by
means of a Monte-Carlo Markov chain algorithm. The estimates are efficiently
computed in the Fourier domain and the effectiveness of the method is shown on
simulated examples. Results show precise estimates for PSF parameters and
hyperparameters as well as precise image estimates including restoration of
high-frequencies and spatial details, within a global and coherent approach.
|
1004.5540
|
Strong Secrecy for Erasure Wiretap Channels
|
cs.IT math.IT
|
We show that duals of certain low-density parity-check (LDPC) codes, when
used in a standard coset coding scheme, provide strong secrecy over the binary
erasure wiretap channel (BEWC). This result hinges on a stopping set analysis
of ensembles of LDPC codes with block length $n$ and girth $\geq 2k$, for some
$k \geq 2$. We show that if the minimum left degree of the ensemble is
$l_\mathrm{min}$, the expected probability of block error is
$\mathcal{O}(\frac{1}{n^{\lceil l_\mathrm{min} k /2 \rceil - k}})$ when the erasure
probability $\epsilon < \epsilon_\mathrm{ef}$, where $\epsilon_\mathrm{ef}$
depends on the degree distribution of the ensemble. As long as $l_\mathrm{min}
> 2$ and $k > 2$, the dual of this LDPC code provides strong secrecy over a
BEWC of erasure probability greater than $1 - \epsilon_\mathrm{ef}$.
|
1004.5551
|
Entanglement Transmission over Arbitrarily Varying Quantum Channels
|
quant-ph cs.IT math.IT
|
We derive a regularized formula for the common randomness assisted
entanglement transmission capacity of finite arbitrarily varying quantum
channels (AVQC's). For finite AVQC's with positive capacity for classical
message transmission we show, by derandomization through classical forward
communication, that the random capacity for entanglement transmission equals
the deterministic capacity for entanglement transmission. This is a quantum
version of the famous Ahlswede dichotomy. In the infinite case, we derive a
similar result for certain classes of AVQC's. At last, we give two possible
definitions of symmetrizability of an AVQC.
|
1004.5570
|
Optimal computation of symmetric Boolean functions in Tree networks
|
cs.IT cs.NI math.IT
|
In this paper, we address the scenario where nodes with sensor data are
connected in a tree network, and every node wants to compute a given symmetric
Boolean function of the sensor data. We first consider the problem of computing
a function of two nodes with integer measurements. We allow for block
computation to enhance data fusion efficiency, and determine the minimum
worst-case total number of bits to be exchanged to perform the desired
computation. We establish lower bounds using fooling sets, and provide a novel
scheme which attains the lower bounds, using information theoretic tools. For a
class of functions called sum-threshold functions, this scheme is shown to be
optimal. We then turn to tree networks and derive a lower bound for the number
of bits exchanged on each link by viewing it as a two node problem. We show
that the protocol of recursive in-network aggregation achieves this lower bound
in the case of sum-threshold functions. Thus we have provided a communication
and in-network computation strategy that is optimal for each link. All the
results can be extended to the case of non-binary alphabets. In the case of
general graphs, we present a cut-set lower bound, and an achievable scheme
based on aggregation along trees. For complete graphs, the complexity of this
scheme is no more than twice that of the optimal scheme.
|
1004.5571
|
Optimal ordering of transmissions for computing Boolean threshold
functions
|
cs.IT cs.NI math.IT
|
We address a sequential decision problem that arises in the computation of
symmetric Boolean functions of distributed data. We consider a collocated
network, where each node's transmissions can be heard by every other node. Each
node has a Boolean measurement and we wish to compute a given Boolean function
of these measurements. We suppose that the measurements are independent and
Bernoulli distributed. Thus, the problem of optimal computation becomes the
problem of optimally ordering the nodes' transmissions so as to minimize the total
expected number of bits. We solve the ordering problem for the class of Boolean
threshold functions. The optimal ordering is dynamic, i.e., it could
potentially depend on the values of previously transmitted bits. Further, it
depends only on the ordering of the marginal probabilities, but not on their
exact values. This provides an elegant structure for the optimal strategy. For
the case where each node has a block of measurements, the problem is
significantly harder, and we conjecture the optimal strategy.
|
1004.5588
|
On Achieving Local View Capacity Via Maximal Independent Graph
Scheduling
|
cs.IT math.IT
|
"If we know more, we can achieve more." This adage also applies to
communication networks, where more information about the network state
translates into higher sum-rates. In this paper, we formalize this increase of
sum-rate with increased knowledge of the network state. The knowledge of
network state is measured in terms of the number of hops, h, of information
available to each transmitter and is labeled as h-local view. To understand how
much capacity is lost due to limited information, we propose to use the metric
of normalized sum-capacity, which is the h-local view sum-capacity divided by
global-view sum-capacity. For the cases of one- and two-local view, we
characterize the normalized sum-capacity for many classes of deterministic and
Gaussian interference networks. In many cases, a scheduling scheme called
maximal independent graph scheduling is shown to achieve normalized
sum-capacity. We also show that its generalization for 1-local view, labeled
coded set scheduling, achieves normalized sum-capacity in some cases where its
uncoded counterpart fails to do so.
|
1004.5601
|
Near MDS poset codes and distributions
|
cs.IT math.IT
|
We study $q$-ary codes with distance defined by a partial order of the
coordinates of the codewords. Maximum Distance Separable (MDS) codes in the
poset metric have been studied in a number of earlier works. We consider codes
that are close to MDS codes by the value of their minimum distance. For such
codes, we determine their weight distribution, and in the particular case of
the "ordered metric" characterize distributions of points in the unit cube
defined by the codes. We also give some constructions of codes in the ordered
Hamming space.
|
1005.0027
|
Learning from Multiple Outlooks
|
cs.LG
|
We propose a novel problem formulation of learning a single task when the
data are provided in different feature spaces. Each such space is called an
outlook, and is assumed to contain both labeled and unlabeled data. The
objective is to take advantage of the data from all the outlooks to better
classify each of the outlooks. We devise an algorithm that computes optimal
affine mappings from different outlooks to a target outlook by matching moments
of the empirical distributions. We further derive a probabilistic
interpretation of the resulting algorithm and a sample complexity bound
indicating how many samples are needed to adequately find the mapping. We
report the results of extensive experiments on activity recognition tasks that
show the value of the proposed approach in boosting performance.
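The moment-matching idea can be illustrated with a minimal sketch (this is not the authors' algorithm, which computes optimal affine mappings; here only the per-dimension mean and standard deviation of a hypothetical source outlook are matched to a target outlook):

```python
import numpy as np

def moment_matching_map(X_src, X_tgt):
    """Fit a per-dimension affine map x -> a*x + b matching the empirical
    mean and standard deviation of a source outlook to a target outlook.
    Illustrative stand-in for the optimal affine mapping of the paper."""
    mu_s, sd_s = X_src.mean(0), X_src.std(0) + 1e-12
    mu_t, sd_t = X_tgt.mean(0), X_tgt.std(0)
    a = sd_t / sd_s
    b = mu_t - a * mu_s
    return lambda X: X * a + b

rng = np.random.default_rng(0)
X_src = rng.normal(loc=5.0, scale=2.0, size=(1000, 3))  # toy source outlook
X_tgt = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))  # toy target outlook
mapped = moment_matching_map(X_src, X_tgt)(X_src)
```

After mapping, the first two moments of the transported source data coincide with those of the target outlook by construction.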
|
1005.0047
|
A Geometric View of Conjugate Priors
|
cs.LG
|
In Bayesian machine learning, conjugate priors are popular, mostly due to
mathematical convenience. In this paper, we show that there are deeper reasons
for choosing a conjugate prior. Specifically, we formulate the conjugate prior
in the form of Bregman divergence and show that it is the inherent geometry of
conjugate priors that makes them appropriate and intuitive. This geometric
interpretation allows one to view the hyperparameters of conjugate priors as
the {\it effective} sample points, thus providing additional intuition. We use
this geometric understanding of conjugate priors to derive the hyperparameters
and expression of the prior used to couple the generative and discriminative
components of a hybrid model for semi-supervised learning.
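The "hyperparameters as effective sample points" reading can be made concrete with the textbook Beta-Bernoulli conjugate pair (a standard example, not taken from the paper):

```python
import numpy as np

# Beta-Bernoulli: the Beta(a, b) hyperparameters behave like pseudo-counts
# of previously seen heads and tails -- "effective sample points".
a, b = 3.0, 2.0                         # hypothetical prior pseudo-counts
data = np.array([1, 1, 0, 1, 0, 1])     # observed coin flips
heads, tails = data.sum(), len(data) - data.sum()

# Conjugacy: the posterior is again a Beta, with the counts simply added.
post_a, post_b = a + heads, b + tails
posterior_mean = post_a / (post_a + post_b)
```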
|
1005.0052
|
On the Joint Decoding of LDPC Codes and Finite-State Channels via Linear
Programming
|
cs.IT math.IT
|
In this paper, the linear programming (LP) decoder for binary linear codes,
introduced by Feldman, et al. is extended to joint-decoding of binary-input
finite-state channels. In particular, we provide a rigorous definition of LP
joint-decoding pseudo-codewords (JD-PCWs) that enables evaluation of the
pairwise error probability between codewords and JD-PCWs. This leads naturally
to a provable upper bound on decoder failure probability. If the channel is a
finite-state intersymbol interference channel, then the LP joint decoder also
has the maximum-likelihood (ML) certificate property and all integer-valued
solutions are codewords. In this case, the performance loss relative to ML
decoding can be explained completely by fractional-valued JD-PCWs.
|
1005.0063
|
Large Margin Multiclass Gaussian Classification with Differential
Privacy
|
stat.ML cs.CR cs.LG
|
As increasing amounts of sensitive personal information are aggregated into
data repositories, it has become important to develop mechanisms for processing
the data without revealing information about individual data instances. The
differential privacy model provides a framework for the development and
theoretical analysis of such mechanisms. In this paper, we propose an algorithm
for learning a discriminatively trained multi-class Gaussian classifier that
satisfies differential privacy using a large margin loss function with a
perturbed regularization term. We present a theoretical upper bound on the
excess risk of the classifier introduced by the perturbation.
|
1005.0069
|
Perturbation Resilience and Superiorization of Iterative Algorithms
|
math.OC cs.CV physics.med-ph
|
Iterative algorithms aimed at solving some problems are discussed. For
certain problems, such as finding a common point in the intersection of a
finite number of convex sets, there often exist iterative algorithms that
impose very little demand on computer resources. For other problems, such as
finding that point in the intersection at which the value of a given function
is optimal, algorithms tend to need more computer memory and longer execution
time. A methodology is presented whose aim is to produce automatically for an
iterative algorithm of the first kind a "superiorized version" of it that
retains its computational efficiency but nevertheless goes a long way towards
solving an optimization problem. This is possible to do if the original
algorithm is "perturbation resilient," which is shown to be the case for
various projection algorithms for solving the consistent convex feasibility
problem. The superiorized versions of such algorithms use perturbations that
drive the process in the direction of the optimizer of the given function.
After presenting these intuitive ideas in a precise mathematical form, they are
illustrated in image reconstruction from projections for two different
projection algorithms superiorized for the function whose value is the total
variation of the image.
|
1005.0072
|
HyberLoc: Providing Physical Layer Location Privacy in Hybrid Sensor
Networks
|
cs.IT cs.CR math.IT
|
In many applications of hybrid wireless sensor networks, sensor nodes are
deployed in hostile environments where trusted and untrusted nodes co-exist.
In anchor-based hybrid networks, it becomes important to allow trusted nodes
to gain full access to the location information transmitted in beacon frames
while, at the same time, preventing untrusted nodes from using this
information. The main challenge is that untrusted nodes can measure the
physical signal transmitted from anchor nodes, even if these nodes encrypt
their transmissions. Using the measured signal strength, untrusted nodes can
still trilaterate the location of anchor nodes. In this paper, we propose
HyberLoc, an algorithm that provides anchor physical-layer location privacy
in anchor-based hybrid sensor
networks. The idea is for anchor nodes to dynamically change their transmission
power following a certain probability distribution, degrading the localization
accuracy at un-trusted nodes while maintaining high localization accuracy at
trusted nodes. Given an average power constraint, our analysis shows that the
discretized exponential distribution is the distribution that maximizes
location uncertainty at the untrusted nodes. Detailed evaluation through
analysis, simulation, and implementation shows that HyberLoc gives trusted
nodes up to 3.5 times better localization accuracy as compared to untrusted
nodes.
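As a rough numerical sketch (the power levels and average-power budget below are hypothetical, and this is not the paper's derivation), one can tune the rate of a discretized exponential distribution over candidate transmit-power levels so that its mean meets the average power constraint:

```python
import numpy as np

levels = np.linspace(1.0, 10.0, 10)   # hypothetical transmit-power levels
p_avg = 4.0                           # hypothetical average-power constraint

def mean_power(lam):
    """Mean of the discretized exponential pmf p_i ~ exp(-lam * P_i)."""
    w = np.exp(-lam * levels)
    return (w * levels).sum() / w.sum()

# The mean is decreasing in lambda, so bisection finds the rate that
# exactly meets the average-power constraint.
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if mean_power(mid) < p_avg else (mid, hi)
lam = 0.5 * (lo + hi)

pmf = np.exp(-lam * levels)
pmf /= pmf.sum()
```

An anchor drawing its transmit power from `pmf` then satisfies the average-power budget while randomizing the received signal strength seen by untrusted nodes.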
|
1005.0075
|
Distributive Stochastic Learning for Delay-Optimal OFDMA Power and
Subband Allocation
|
cs.LG
|
In this paper, we consider the distributive queue-aware power and subband
allocation design for a delay-optimal OFDMA uplink system with one base
station, $K$ users and $N_F$ independent subbands. Each mobile has an uplink
queue with heterogeneous packet arrivals and delay requirements. We model the
problem as an infinite horizon average reward Markov Decision Problem (MDP)
where the control actions are functions of the instantaneous Channel State
Information (CSI) as well as the joint Queue State Information (QSI). To
address the distributive requirement and the issue of exponential memory
requirement and computational complexity, we approximate the subband allocation
Q-factor by the sum of the per-user subband allocation Q-factor and derive a
distributive online stochastic learning algorithm to estimate the per-user
Q-factor and the Lagrange multipliers (LM) simultaneously and determine the
control actions using an auction mechanism. We show that under the proposed
auction mechanism, the distributive online learning converges almost surely
(with probability 1). For illustration, we apply the proposed distributive
stochastic learning framework to an application example with exponential packet
size distribution. We show that the delay-optimal power control has the {\em
multi-level water-filling} structure where the CSI determines the instantaneous
power allocation and the QSI determines the water-level. The proposed algorithm
has linear signaling overhead and computational complexity $\mathcal O(KN)$,
which is desirable from an implementation perspective.
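The multi-level water-filling structure builds on classic water-filling; a minimal sketch of the latter (with the water level set by a simple total-power budget rather than by the QSI, as a simplification of the paper's policy) is:

```python
import numpy as np

def water_filling(gains, p_total, iters=60):
    """Classic water-filling: p_k = max(0, mu - 1/g_k), with the water
    level mu found by bisection so that sum(p) == p_total.  In the
    paper's delay-optimal policy the water level is driven by the QSI;
    here it is set only by a total-power budget (illustration only)."""
    inv = 1.0 / np.asarray(gains, float)   # noise-to-gain ratios
    lo, hi = 0.0, inv.max() + p_total
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - inv).sum() > p_total:
            hi = mu          # water level too high
        else:
            lo = mu          # water level too low
    return np.maximum(0.0, 0.5 * (lo + hi) - inv)

p = water_filling(gains=[2.0, 1.0, 0.25], p_total=2.0)
```

Strong subbands receive more power, and subbands whose inverse gain exceeds the water level receive none.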
|
1005.0080
|
Electronic Geometry Textbook: A Geometric Textbook Knowledge Management
System
|
cs.AI cs.MS
|
Electronic Geometry Textbook is a knowledge management system that manages
geometric textbook knowledge to enable users to construct and share dynamic
geometry textbooks interactively and efficiently. Based on a knowledge base
organizing and storing the knowledge represented in specific languages, the
system implements interfaces for maintaining the data representing that
knowledge as well as relations among those data, for automatically generating
readable documents for viewing or printing, and for automatically discovering
the relations among knowledge data. An interface has been developed for users
to create geometry textbooks with automatic checking, in real time, of the
consistency of the structure of each resulting textbook. By integrating an
external geometric theorem prover and an external dynamic geometry software
package, the system offers the facilities for automatically proving theorems
and generating dynamic figures in the created textbooks. This paper provides a
comprehensive account of the current version of Electronic Geometry Textbook.
|
1005.0089
|
The Exact Closest String Problem as a Constraint Satisfaction Problem
|
cs.AI
|
We report (to our knowledge) the first evaluation of Constraint Satisfaction
as a computational framework for solving closest string problems. We show that
careful consideration of symbol occurrences can provide search heuristics that
provide several orders of magnitude speedup at and above the optimal distance.
We also report (to our knowledge) the first analysis and evaluation -- using
any technique -- of the computational difficulties involved in the
identification of all closest strings for a given input set. We describe
algorithms for web-scale distributed solution of closest string problems, both
purely based on AI backtrack search and also hybrid numeric-AI methods.
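For scale, an exhaustive baseline for the exact closest-string problem, including enumeration of all optimal centres, can be sketched as follows (a brute-force reference point only, not the CSP search evaluated in the paper; the input strings are hypothetical):

```python
from itertools import product

def closest_strings(strings, alphabet):
    """Exhaustively find the minimum radius r and ALL strings whose
    maximum Hamming distance to the input set equals r."""
    n = len(strings[0])

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    best_r, best = n + 1, []
    for cand in product(alphabet, repeat=n):
        c = "".join(cand)
        r = max(hamming(c, s) for s in strings)
        if r < best_r:
            best_r, best = r, [c]
        elif r == best_r:
            best.append(c)
    return best_r, best

radius, centres = closest_strings(["acta", "agta", "actg"], "acgt")
```

The |alphabet|^n candidate space explodes quickly, which is exactly why heuristic CSP search matters for realistic instances.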
|
1005.0104
|
Joint Structured Models for Extraction from Overlapping Sources
|
cs.AI
|
We consider the problem of jointly training structured models for extraction
from sources whose instances enjoy partial overlap. This has important
applications like user-driven ad-hoc information extraction on the web. Such
applications present new challenges in terms of the number of sources and their
arbitrary pattern of overlap not seen by earlier collective training schemes
applied on two sources. We present an agreement-based learning framework and
alternatives within it to trade-off tractability, robustness to noise, and
extent of agreement. We provide a principled scheme to discover low-noise
agreement sets in unlabeled data across the sources. Through extensive
experiments over 58 real datasets, we establish that our method of additively
rewarding agreement over maximal segments of text provides the best trade-offs,
and also scores over alternatives such as collective inference, staged
training, and multi-view learning.
|
1005.0117
|
On the Separation of Lossy Source-Network Coding and Channel Coding in
Wireline Networks
|
cs.IT math.IT
|
This paper proves the separation between source-network coding and channel
coding in networks of noisy, discrete, memoryless channels. We show that the
set of achievable distortion matrices in delivering a family of dependent
sources across such a network equals the set of achievable distortion matrices
for delivering the same sources across a distinct network which is built by
replacing each channel by a noiseless, point-to-point bit-pipe of the
corresponding capacity. Thus a code that applies source-network coding across
links that are made almost lossless through the application of independent
channel coding across each link asymptotically achieves the optimal performance
across the network as a whole.
|
1005.0125
|
Adaptive Bases for Reinforcement Learning
|
cs.LG cs.AI
|
We consider the problem of reinforcement learning using function
approximation, where the approximating basis can change dynamically while
interacting with the environment. A motivation for such an approach is to
maximize the fit of the value function to the problem at hand. Three errors are
considered: approximation square error, Bellman residual, and projected Bellman
residual. Algorithms under the actor-critic framework are presented, and shown
to converge. The advantage of such an adaptive basis is demonstrated in
simulations.
|
1005.0167
|
A digital interface for Gaussian relay and interference networks:
Lifting codes from the discrete superposition model
|
cs.IT math.IT
|
For every Gaussian network, there exists a corresponding deterministic
network called the discrete superposition network. We show that this discrete
superposition network provides a near-optimal digital interface for operating a
class consisting of many Gaussian networks in the sense that any code for the
discrete superposition network can be naturally lifted to a corresponding code
for the Gaussian network, while achieving a rate that is no more than a
constant number of bits less than the rate it achieves for the discrete
superposition network. This constant depends only on the number of nodes in the
network and not on the channel gains or SNR. Moreover the capacities of the two
networks are within a constant of each other, again independent of channel
gains and SNR. We show that the class of Gaussian networks for which this
interface property holds includes relay networks with a single
source-destination pair, interference networks, multicast networks, and the
counterparts of these networks with multiple transmit and receive antennas.
The code for the Gaussian relay network can be obtained from any code for the
discrete superposition network simply by pruning it. This lifting scheme
establishes that the superposition model can indeed potentially serve as a
strong surrogate for designing codes for Gaussian relay networks.
We present similar results for the K x K Gaussian interference network, MIMO
Gaussian interference networks, MIMO Gaussian relay networks, and multicast
networks, with the constant gap depending additionally on the number of
antennas in case of MIMO networks.
|
1005.0188
|
Generative and Latent Mean Map Kernels
|
cs.LG stat.ML
|
We introduce two kernels that extend the mean map, which embeds probability
measures in Hilbert spaces. The generative mean map kernel (GMMK) is a smooth
similarity measure between probabilistic models. The latent mean map kernel
(LMMK) generalizes the non-iid formulation of Hilbert space embeddings of
empirical distributions in order to incorporate latent variable models. When
comparing certain classes of distributions, the GMMK exhibits beneficial
regularization and generalization properties not shown for previous generative
kernels. We present experiments comparing support vector machine performance
using the GMMK and LMMK between hidden Markov models to the performance of
other methods on discrete and continuous observation sequence data. The results
suggest that, in many cases, the GMMK has generalization error competitive with
or better than other methods.
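The underlying mean map embedding can be illustrated with its empirical version (the GMMK and LMMK replace the empirical measures with model-based and latent-variable ones; this sketch is only the common starting point):

```python
import numpy as np

def mean_map_kernel(X, Y, gamma=1.0):
    """Empirical mean map kernel between two samples: the mean of an RBF
    base kernel over all cross pairs, i.e. the inner product of the two
    empirical kernel mean embeddings in the RKHS."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2).mean()

rng = np.random.default_rng(1)
A = rng.normal(0.0, 1.0, size=(50, 2))   # sample from distribution P
B = rng.normal(3.0, 1.0, size=(50, 2))   # sample from distribution Q
k_AA = mean_map_kernel(A, A)
k_AB = mean_map_kernel(A, B)
k_BB = mean_map_kernel(B, B)
```

The quantity `k_AA + k_BB - 2 * k_AB` is the (biased) squared maximum mean discrepancy between the two samples.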
|
1005.0198
|
Personnalisation de Syst\`emes OLAP Annot\'es
|
cs.DB
|
This paper deals with personalization of annotated OLAP systems. Data
constellation is extended to support annotations and user preferences.
Annotations reflect the decision-maker experience whereas user preferences
enable users to focus on the most interesting data. User preferences also
enable annotated, contextual recommendations that assist the decision-maker
during multidimensional navigation.
|
1005.0201
|
Personnalisation de bases de donn\'ees multidimensionnelles
|
cs.DB
|
This paper deals with decision support systems resting on multidimensional
modelling of data. Moreover, we intend to offer a set of concepts and
mechanisms for personalized multidimensional database specifications. This
personalization consists in associating weights to different components of a
multidimensional schema. Personalization specifications are specified through
the use of a language based on the Event-Condition-Action principle. This
personalisation determines how multidimensional data are displayed as well as
how they are analysed (through drilling or rotating operations).
|
1005.0202
|
Dictionary Optimization for Block-Sparse Representations
|
cs.IT math.IT
|
Recent work has demonstrated that using a carefully designed dictionary
instead of a predefined one, can improve the sparsity in jointly representing a
class of signals. This has motivated the derivation of learning methods for
designing a dictionary which leads to the sparsest representation for a given
set of signals. In some applications, the signals of interest can have further
structure, so that they can be well approximated by a union of a small number
of subspaces (e.g., face recognition and motion segmentation). This implies the
existence of a dictionary which enables block-sparse representations of the
input signals once its atoms are properly sorted into blocks. In this paper, we
propose an algorithm for learning a block-sparsifying dictionary of a given set
of signals. We do not require prior knowledge of the association of signals
into groups (subspaces). Instead, we develop a method that automatically
detects the underlying block structure. This is achieved by iteratively
alternating between updating the block structure of the dictionary and updating
the dictionary atoms to better fit the data. Our experiments show that for
block-sparse data the proposed algorithm significantly improves the dictionary
recovery ability and lowers the representation error compared to dictionary
learning methods that do not employ block structure.
|
1005.0212
|
Construction graphique d'entrep\^ots et de magasins de donn\'ees
|
cs.DB
|
Nowadays, decisional systems have become a significant research topic in
databases. Data warehouses and data marts are the main elements of such
systems. This paper presents our decision support system. We present
graphical interfaces which help the administrator to build data warehouses and
data marts. We present a data warehouse building interface based on an
object-oriented conceptual model. This model allows the warehouse data
historisation at three levels: attribute, class and environment. Also, we
present a data mart building interface which allows warehouse data to be
reorganised through a multidimensional object-oriented model.
|
1005.0213
|
Alg\`ebre OLAP et langage graphique
|
cs.DB
|
This article deals with OLAP systems based on a multidimensional model. The
conceptual model we provide represents data through a constellation
(multi-fact schema) composed of several multi-hierarchy dimensions. In this
model, data are displayed through multidimensional tables. We define a query
algebra handling these tables. This user-oriented algebra is composed of a
closed core of OLAP operators as well as advanced operators dedicated to
complex analyses. Finally, we specify a graphical OLAP language based on this
algebra. This language facilitates the analyses of decision makers.
|
1005.0214
|
Mod\'elisation et extraction de donn\'ees pour un entrep\^ot objet
|
cs.DB
|
This paper describes an object-oriented model for designing complex and
time-variant data warehouse data. The main contribution is the warehouse class
concept, which extends the class concept by temporal and archive filters as
well as a mapping function. Filters allow the keeping of relevant data changes
whereas the mapping function defines the warehouse class schema from a global
data source schema. The approach takes into account static properties as well
as dynamic properties. The behaviour extraction is based on the use-matrix
concept.
|
1005.0217
|
Analyse multigraduelle OLAP
|
cs.DB
|
Decisional systems are based on multidimensional databases improving OLAP
analyses. The paper describes a new OLAP operator named "BLEND" to perform
multigradual analyses. The operation transforms multidimensional structures
during querying in order to analyse measures according to various granularity
levels, which are reorganised into a single parameter. We study valid
combinations of the operation in the context of strict hierarchies. Initial
experiments implement the operation in an R-OLAP framework, showing the low
cost of this operation.
|
1005.0218
|
Contraintes pour mod\`ele et langage multidimensionnels
|
cs.DB
|
This paper defines a constraint-based model dedicated to multidimensional
databases. The model we define represents data through a constellation of
facts (subjects of analysis) associated with dimensions (axes of analysis),
which are possibly shared. Each dimension is organised according to several
hierarchies (views of analysis) integrating several levels of data
granularity. In order to ensure data consistency, we introduce 5 semantic
constraints (exclusion, inclusion, partition, simultaneity, totality) which
can be intra-dimension or inter-dimension; the intra-dimension constraints
allow the expression of constraints between hierarchies within the same
dimension whereas the inter-dimension constraints concern hierarchies of
distinct dimensions. We also study the repercussions of these constraints on
multidimensional manipulations and we provide extensions of the
multidimensional operators.
|
1005.0219
|
Mod\'elisation et manipulation de donn\'ees historis\'ees et archiv\'ees
dans un entrep\^ot orient\'e objet
|
cs.DB
|
This paper deals with temporal and archive object-oriented data warehouse
modelling and querying. First, we define a data model describing
warehouses as central repositories of complex and temporal data extracted from
one information source. The model is based on the concepts of warehouse object
and environment. A warehouse object is composed of one current state, several
past states (modelling value changes) and several archive states (summarising
some value changes). An environment defines temporal parts in a warehouse
schema according to a relevant granularity (attribute, class or graph).
Second, we provide a query algebra dedicated to data warehouses. This
algebra, which is based on common object algebras, integrates temporal
operators and operators for querying object states. Another important
contribution concerns dedicated operators allowing users to transform
warehouse objects into temporal series, as well as operators facilitating
analytical processing.
|
1005.0220
|
Elaboration d'entrep\^ots de donn\'ees complexes
|
cs.DB
|
In this paper, we study the data warehouse modelling used in decision support
systems. We provide an object-oriented data warehouse model allowing data
warehouse description as a central repository of relevant, complex and temporal
data. Our model integrates three concepts such as warehouse object, environment
and warehouse class. Each warehouse object is composed of one current state,
several past states (modelling its detailed evolutions) and several archive
states (modelling its evolutions within a summarised form). The environment
concept defines temporal parts in the data warehouse schema with significant
granularities (attribute, class, graph). Finally, we provide five functions
for defining the data warehouse structures and two functions for organising
the warehouse class inheritance hierarchy.
|
1005.0224
|
Towards Conceptual Multidimensional Design in Decision Support Systems
|
cs.DB
|
Multidimensional databases efficiently support on-line analytical processing
(OLAP). In this paper, we present a model dedicated to multidimensional
databases. The approach we present designs decisional information through a
constellation of facts and dimensions. Each dimension is possibly shared
between several facts and it is organised according to multiple hierarchies. In
addition, we define a comprehensive query algebra regrouping the most popular
multidimensional operations of current commercial systems and research
approaches. We introduce new operators dedicated to a constellation. Finally,
we describe a prototype that allows managers to query constellations of facts,
dimensions and multiple hierarchies.
|
1005.0267
|
Recovery of sparsest signals via $\ell^q$-minimization
|
cs.IT math.IT
|
In this paper, it is proved that every $s$-sparse vector ${\bf x}\in {\mathbb
R}^n$ can be exactly recovered from the measurement vector ${\bf z}={\bf A}
{\bf x}\in {\mathbb R}^m$ via some $\ell^q$-minimization with $0< q\le 1$, as
soon as each $s$-sparse vector ${\bf x}\in {\mathbb R}^n$ is uniquely
determined by the measurement ${\bf z}$.
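One concrete way to attempt such an $\ell^q$-minimization numerically is iteratively reweighted least squares (a standard solver choice, not prescribed by the paper); a toy sketch with a hypothetical 1-sparse signal:

```python
import numpy as np

# IRLS for l^q minimisation subject to A x = z (Chartrand-style
# eps-smoothed reweighting; illustrative, not the paper's contribution).
rng = np.random.default_rng(0)
m, n, q = 6, 12, 0.5
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[3] = 1.0                          # 1-sparse ground truth
z = A @ x_true

x = A.T @ np.linalg.solve(A @ A.T, z)    # minimum-l2-norm initialisation
eps = 1.0
for it in range(300):
    w = (x ** 2 + eps) ** (1 - q / 2)    # weights ~ |x|^(2-q)
    AW = A * w                           # A @ diag(w)
    x = w * (A.T @ np.linalg.solve(AW @ A.T, z))
    if it % 30 == 29:
        eps = max(eps / 10, 1e-12)       # anneal the smoothing
```

On this well-conditioned toy instance the iteration drives the off-support entries to (numerical) zero while keeping the measurement constraint satisfied.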
|
1005.0268
|
Node-Context Network Clustering using PARAFAC Tensor Decomposition
|
cs.IR
|
We describe a clustering method for labeled link networks (semantic graphs)
that groups important (highly connected) nodes together with their relevant
link labels by using PARAFAC tensor decomposition. In this kind of network,
the adjacency matrix cannot fully describe the network structure. We
therefore expand the matrix into a 3-way adjacency tensor, which captures not
only which nodes a node connects to but also through which link labels. By
applying the PARAFAC decomposition to this tensor, we obtain, for each
decomposition group, two score-weighted lists: one of nodes and one of link
labels. The clustering step that extracts the important nodes along with
their relevant labels then reduces to sorting these lists in decreasing order
of score. To test the method, we construct a labeled link network from a blog
dataset, where blogs are the nodes and labeled links are the words they
share. The similarity between our results and standard measures looks
promising, at about 0.87 for the two most important tasks: finding the words
most relevant to a blog query and finding the blogs most similar to a blog
query.
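The core mechanics can be sketched with a toy rank-1 CP (PARAFAC) fit by alternating contractions (the node and label scores below are hypothetical; real data would use the full multi-component decomposition):

```python
import numpy as np

# Toy rank-1 tensor T = a (x) a (x) c: node x node x link-label scores.
rng = np.random.default_rng(0)
a0 = np.array([3.0, 1.0, 0.5, 0.2])   # hypothetical "node" scores
c0 = np.array([2.0, 0.1, 0.7])        # hypothetical "link label" scores
T = np.einsum('i,j,k->ijk', a0, a0, c0)

# Alternating contractions (rank-1 CP power iteration).
b, c = rng.random(4), rng.random(3)
for _ in range(50):
    a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
    b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
    c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)

ranking = np.argsort(-a)              # most important nodes first
```

Sorting the recovered factor vectors in decreasing order of score is exactly the clustering/ranking step described above.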
|
1005.0291
|
The Compound Multiple Access Channel with Partially Cooperating Encoders
|
cs.IT math.IT
|
The goal of this paper is to provide a rigorous information-theoretic
analysis of subnetworks of interference networks. We prove two coding theorems
for the compound multiple-access channel with an arbitrary number of channel
states. The channel state information at the transmitters is such that each
transmitter has a finite partition of the set of states and knows which element
of the partition the actual state belongs to. The receiver may have arbitrary
channel state information. The first coding theorem is for the case that both
transmitters have a common message and that each has an additional common
message. The second coding theorem is for the case where rate-constrained, but
noiseless transmitter cooperation is possible. This cooperation may be used to
exchange information about channel state information as well as the messages to
be transmitted. The cooperation protocol used here generalizes Willems'
conferencing. We show how this models base station cooperation in modern
wireless cellular networks used for interference coordination and capacity
enhancement. In particular, the coding theorem for the cooperative case shows
how much cooperation is necessary in order to achieve maximal capacity in the
network considered.
|
1005.0340
|
Statistical Learning in Automated Troubleshooting: Application to LTE
Interference Mitigation
|
cs.LG
|
This paper presents a method for automated healing as part of off-line
automated troubleshooting. The method combines statistical learning with
constraint optimization. The automated healing aims at locally optimizing radio
resource management (RRM) or system parameters of cells with poor performance
in an iterative manner. The statistical learning processes the data using
Logistic Regression (LR) to extract closed form (functional) relations between
Key Performance Indicators (KPIs) and Radio Resource Management (RRM)
parameters. These functional relations are then processed by an optimization
engine which proposes new parameter values. The advantage of the proposed
formulation is the small number of iterations required by the automated healing
method to converge, making it suitable for off-line implementation. The
proposed method is applied to heal an Inter-Cell Interference Coordination
(ICIC) process in a 3G Long Term Evolution (LTE) network which is based on
soft-frequency reuse scheme. Numerical simulations illustrate the benefits of
the proposed approach.
|
1005.0375
|
Performance Analysis of Cognitive Radio Systems under QoS Constraints
and Channel Uncertainty
|
cs.IT math.IT
|
In this paper, performance of cognitive transmission over time-selective flat
fading channels is studied under quality of service (QoS) constraints and
channel uncertainty. Cognitive secondary users (SUs) are assumed to initially
perform channel sensing to detect the activities of the primary users, and then
attempt to estimate the channel fading coefficients through training. Energy
detection is employed for channel sensing, and different minimum
mean-square-error (MMSE) estimation methods are considered for channel
estimation. In both channel sensing and estimation, erroneous decisions can be
made, and hence, channel uncertainty is not completely eliminated. In this
setting, performance is studied and interactions between channel sensing and
estimation are investigated.
Following the channel sensing and estimation tasks, SUs engage in data
transmission. The transmitter, being unaware of the channel fading
coefficients, is assumed to send the data at fixed power and rate levels that
depend on the
channel sensing results. Under these assumptions, a state-transition model is
constructed by considering the reliability of the transmissions, channel
sensing decisions and their correctness, and the evolution of primary user
activity which is modeled as a two-state Markov process. In the data
transmission phase, an average power constraint on the secondary users is
considered to limit the interference to the primary users, and statistical
limitations on the buffer lengths are imposed to take into account the QoS
constraints of the secondary traffic. The maximum throughput under these
statistical QoS constraints is identified by finding the effective capacity of
the cognitive radio channel. Numerical results are provided for the power and
rate policies.
|
1005.0390
|
Machine Learning for Galaxy Morphology Classification
|
astro-ph.GA cs.LG
|
In this work, decision tree learning algorithms and fuzzy inference systems
are applied to galaxy morphology classification. In particular, the CART, the
C4.5, the Random Forest and fuzzy logic algorithms are studied, and reliable
classifiers are developed to distinguish among spiral galaxies, elliptical
galaxies, and star/unknown galactic objects. Morphology information for the
training and testing datasets is obtained from the Galaxy Zoo project while the
corresponding photometric and spectra parameters are downloaded from the SDSS
DR7 catalogue.
|
1005.0404
|
Approximate Capacity of Gaussian Interference-Relay Networks with Weak
Cross Links
|
cs.IT math.IT
|
In this paper we study a Gaussian relay-interference network, in which relay
(helper) nodes are to facilitate competing information flows over a wireless
network. We focus on a two-stage relay-interference network where there are
weak cross-links, causing the networks to behave like a chain of Z Gaussian
channels. For these Gaussian ZZ and ZS networks, we establish an approximate
characterization of the rate region. The outer bounds to the capacity region
are established using genie-aided techniques that yield bounds sharper than the
traditional cut-set outer bounds. For the inner bound of the ZZ network, we
propose a new interference management scheme, termed interference
neutralization, which is implemented using structured lattice codes. This
technique allows for over-the-air interference removal, without the
transmitters having complete access to the interfering signals. For both the ZZ
and ZS networks, we establish a new network decomposition technique that
(approximately) achieves the capacity region. We use insights gained from an
exact characterization of the corresponding linear deterministic version of the
problems, in order to establish the approximate characterization for Gaussian
networks.
|
1005.0416
|
Incremental Sampling-based Algorithms for Optimal Motion Planning
|
cs.RO
|
During the last decade, incremental sampling-based motion planning
algorithms, such as the Rapidly-exploring Random Trees (RRTs) have been shown
to work well in practice and to possess theoretical guarantees such as
probabilistic completeness. However, no theoretical bounds on the quality of
the solution obtained by these algorithms have been established so far. The
first contribution of this paper is a negative result: it is proven that, under
mild technical conditions, the cost of the best path in the RRT converges
almost surely to a non-optimal value. Second, a new algorithm is considered,
called the Rapidly-exploring Random Graph (RRG), and it is shown that the cost
of the best path in the RRG converges to the optimum almost surely. Third, a
tree version of RRG is introduced, called the RRT$^*$ algorithm, which
preserves the asymptotic optimality of RRG while maintaining a tree structure
like RRT. The analysis of the new algorithms hinges on novel connections
between sampling-based motion planning algorithms and the theory of random
geometric graphs. In terms of computational complexity, it is shown that the
number of simple operations required by both the RRG and RRT$^*$ algorithms is
asymptotically within a constant factor of that required by RRT.
|
1005.0419
|
Capacity-Equivocation Region of the Gaussian MIMO Wiretap Channel
|
cs.IT cs.CR math.IT
|
We study the Gaussian multiple-input multiple-output (MIMO) wiretap channel,
which consists of a transmitter, a legitimate user, and an eavesdropper. In
this channel, the transmitter sends a common message to both the legitimate
user and the eavesdropper. In addition to this common message, the legitimate
user receives a private message, which is desired to be kept hidden as much as
possible from the eavesdropper. We obtain the entire capacity-equivocation
region of the Gaussian MIMO wiretap channel. This region contains all
achievable common message, private message, and private message's equivocation
(secrecy) rates. In particular, we show the sufficiency of jointly Gaussian
auxiliary random variables and channel input to evaluate the existing
single-letter description of the capacity-equivocation region due to
Csiszar-Korner.
|
1005.0426
|
Security in Distributed Storage Systems by Communicating a Logarithmic
Number of Bits
|
cs.CR cs.IT math.IT
|
We investigate the problem of maintaining an encoded distributed storage
system when some nodes contain adversarial errors. Using the error-correction
capabilities that are built into the existing redundancy of the system, we
propose a simple linear hashing scheme to detect errors in the storage nodes.
Our main result is that for storing a data object of total size $\size$ using
an $(n,k)$ MDS code over a finite field $\F_q$, up to
$t_1=\lfloor(n-k)/2\rfloor$ errors can be detected, with probability of failure
smaller than $1/ \size$, by communicating only $O(n(n-k)\log \size)$ bits to a
trusted verifier. Our result constructs small projections of the data that
preserve the errors with high probability and builds on a pseudorandom
generator that fools linear functions. The transmission rate achieved by our
scheme is asymptotically equal to the min-cut capacity between the source and
any receiver.
|
1005.0437
|
A Unifying View of Multiple Kernel Learning
|
stat.ML cs.LG
|
Recent research on multiple kernel learning has led to a number of
approaches for combining kernels in regularized risk minimization. The proposed
approaches include different formulations of objectives and varying
regularization strategies. In this paper we present a unifying general
optimization criterion for multiple kernel learning and show how existing
formulations are subsumed as special cases. We also derive the criterion's dual
representation, which is suitable for general smooth optimization algorithms.
Finally, we evaluate multiple kernel learning in this framework analytically
using a Rademacher complexity bound on the generalization error and empirically
in a set of experiments.
|
1005.0498
|
Classes of lower bounds on outage error probability and MSE in Bayesian
parameter estimation
|
cs.IT math.IT
|
In this paper, new classes of lower bounds on the outage error probability
and on the mean-square-error (MSE) in Bayesian parameter estimation are
proposed. The minima of the h-outage error probability and the MSE are obtained
by the generalized maximum a-posteriori probability and the minimum MSE (MMSE)
estimators, respectively. However, computation of these estimators and their
corresponding performance is usually not tractable and thus, lower bounds on
these terms can be very useful for performance analysis. The proposed class of
lower bounds on the outage error probability is derived using Holder's
inequality. This class is utilized to derive a new class of Bayesian MSE
bounds. It is shown that for unimodal symmetric conditional probability
density functions (pdf), the tightest outage error probability lower bound in
the proposed class attains the minimum outage error probability, and the
tightest MSE bound coincides with the MMSE performance. In addition, it is
shown that
the proposed MSE bounds are always tighter than the Ziv-Zakai lower bound
(ZZLB). The proposed bounds are compared with other existing performance lower
bounds via some examples.
|
1005.0527
|
Detecting the Most Unusual Part of Two and Three-dimensional Digital
Images
|
physics.data-an cs.CV physics.med-ph
|
The purpose of this paper is to introduce an algorithm that can detect the
most unusual part of a digital image in probabilistic setting. The most unusual
part of a given shape is defined as a part of the image that has the maximal
distance to all non-intersecting shapes with the same form. The method is
tested on two and three-dimensional images and has shown very good results
without any predefined model. A version of the method independent of the
contrast of the image is considered and is found to be useful for finding the
most unusual part (and the most similar part) of the image conditioned on a
given
image. The results can be used to scan large image databases, as for example
medical databases.
|
1005.0530
|
Feature Selection with Conjunctions of Decision Stumps and Learning from
Microarray Data
|
cs.LG cs.AI stat.ML
|
One of the objectives of designing feature selection learning algorithms is
to obtain classifiers that depend on a small number of attributes and have
verifiable future performance guarantees. There are few, if any, approaches
that successfully address the two goals simultaneously. Performance guarantees
become crucial for tasks such as microarray data analysis due to very small
sample sizes resulting in limited empirical evaluation. To the best of our
knowledge, such algorithms that give theoretical bounds on the future
performance have not been proposed so far in the context of the classification
of gene expression data. In this work, we investigate the premise of learning a
conjunction (or disjunction) of decision stumps in Occam's Razor, Sample
Compression, and PAC-Bayes learning settings for identifying a small subset of
attributes that can be used to perform reliable classification tasks. We apply
the proposed approaches for gene identification from DNA microarray data and
compare our results to those of well known successful approaches proposed for
the task. We show that our algorithm not only finds hypotheses with a much
smaller number of genes while giving competitive classification accuracy, but
also has tight risk guarantees on future performance, unlike other approaches.
The proposed approaches are general and extensible in terms of both designing
novel algorithms and application to other domains.
|
1005.0545
|
Capacity of a Class of Broadcast Relay Channels
|
cs.IT math.IT
|
Consider the broadcast relay channel (BRC), which consists of a source
sending information over a two-user broadcast channel in the presence of two
relay nodes that help the transmission to the destinations. Clearly, this
network with five nodes involves all the problems encountered in relay and
broadcast channels. New inner bounds on the capacity region of this class of
channels are derived. These results can be seen as a generalization, and
hence unification, of previous work on this topic. Our bounds are based on
the idea of recombination of message bits and various effective coding
strategies for relay and broadcast channels. A capacity result is obtained
for the semi-degraded BRC-CR, where one relay channel is degraded while the
other one is reversely degraded. Inner and upper bounds are also presented
for the degraded BRC with common relay (BRC-CR), where both the relay and
broadcast channels are degraded; these bounds yield the capacity for the
Gaussian case. Applications of these results arise in the context of
opportunistic cooperation in cellular networks.
|
1005.0605
|
An approach to visualize the course of solving of a research task in
humans
|
cs.AI
|
A technique to study the dynamics of solving of a research task is suggested.
The research task was based on specially developed software, Right-Wrong
Responder (RWR), with the participants having to reveal the response logic of
the program. The participants interacted with the program in the form of a
semi-binary dialogue, which implies the feedback responses of only two kinds -
"right" or "wrong". The technique has been applied to a small pilot group of
volunteer participants. Some of them have successfully solved the task
(solvers) and some have not (non-solvers). At the beginning of the work, the
solvers made more wrong moves than the non-solvers, and they made fewer wrong
moves closer to the finish of the work. A phase portrait of the work in both
solvers
and non-solvers showed definite cycles that may correspond to sequences of
partially true hypotheses that may be formulated by the participants during the
solving of the task.
|