| id | title | categories | abstract |
|---|---|---|---|
1109.5302
|
Simultaneous Codeword Optimization (SimCO) for Dictionary Update and
Learning
|
cs.LG cs.IT math.IT
|
We consider the data-driven dictionary learning problem. The goal is to seek
an over-complete dictionary from which every training signal can be best
approximated by a linear combination of only a few codewords. This task is
often achieved by iteratively executing two operations: sparse coding and
dictionary update. In the literature, there are two benchmark mechanisms to
update a dictionary. The first approach, such as the MOD algorithm, is
characterized by searching for the optimal codewords while fixing the sparse
coefficients. In the second approach, represented by the K-SVD method, one
codeword and the related sparse coefficients are simultaneously updated while
all other codewords and coefficients remain unchanged. We propose a novel
framework that generalizes the aforementioned two methods. The unique feature
of our approach is that one can update an arbitrary set of codewords and the
corresponding sparse coefficients simultaneously: when sparse coefficients are
fixed, the underlying optimization problem is similar to that in the MOD
algorithm; when only one codeword is selected for update, it can be proved that
the proposed algorithm is equivalent to the K-SVD method; and more importantly,
our method allows us to update all codewords and all sparse coefficients
simultaneously, hence the term simultaneous codeword optimization (SimCO).
Under the proposed framework, we design two algorithms, namely, primitive and
regularized SimCO. We implement these two algorithms based on a simple gradient
descent mechanism. Simulations are provided to demonstrate the performance of
the proposed algorithms, as compared with the two baseline algorithms, MOD and
K-SVD. Results show that regularized SimCO is particularly appealing in terms
of both learning performance and running speed.
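As a rough, hedged sketch (not the authors' implementation), the core of a gradient-based dictionary update with a fixed sparsity pattern might look as follows; the step size, iteration count, and per-signal least-squares coefficient refit are illustrative assumptions:

```python
# A minimal sketch of a SimCO-style dictionary update: gradient descent on the
# codewords of ||X - D A||_F^2 with the sparse coefficients re-fit by least
# squares on a fixed support. Step size and iteration count are assumptions.
import numpy as np

def dictionary_update(X, D, support, n_iters=50, step=0.1):
    """X: (d, n) signals; D: (d, K) unit-norm dictionary; support: (K, n) bool mask."""
    K, n = support.shape
    for _ in range(n_iters):
        A = np.zeros((K, n))
        for j in range(n):                     # refit coefficients on the support
            idx = np.flatnonzero(support[:, j])
            A[idx, j], *_ = np.linalg.lstsq(D[:, idx], X[:, j], rcond=None)
        G = -(X - D @ A) @ A.T                 # gradient of the Frobenius objective
        D = D - step * G
        D /= np.linalg.norm(D, axis=0, keepdims=True)  # keep codewords unit norm
    return D, A
```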
|
1109.5311
|
Bias Plus Variance Decomposition for Survival Analysis Problems
|
cs.LG stat.ML
|
The bias-variance decomposition of the expected error, defined for regression
and classification problems, is an important tool for studying and comparing
different algorithms and for finding the best areas for their application. Here
the decomposition is introduced for the survival analysis problem. In our
experiments, we study the bias and variance parts of the expected error for two
algorithms: original Cox proportional hazard regression and CoxPath, a path
algorithm for L1-regularized Cox regression, on a series of increasing
training sets. The experiments demonstrate that, contrary to expectations,
CoxPath does not necessarily have an advantage over Cox regression.
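For the classical regression case that the decomposition extends, a Monte Carlo estimate over repeated training sets can serve as a hedged illustration (the `fit` and `sample_train` callables are hypothetical placeholders; the survival-analysis version replaces the squared error with an error suited to censored outcomes):

```python
# Sketch: estimate bias^2 and variance of a learner's predictions by refitting
# on independently drawn training sets and comparing to test targets.
import numpy as np

def bias_variance(fit, sample_train, X_test, y_test, n_rounds=100):
    """fit(train_set) -> predictor; sample_train() -> a fresh training set."""
    preds = np.array([fit(sample_train())(X_test) for _ in range(n_rounds)])
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - y_test) ** 2)     # systematic part of the error
    variance = np.mean(preds.var(axis=0))          # sensitivity to the training set
    expected_err = np.mean((preds - y_test) ** 2)  # approx. bias2 + variance (+ noise)
    return bias2, variance, expected_err
```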
|
1109.5319
|
Optimal Foraging of Renewable Resources
|
cs.RO
|
Consider a team of agents in the plane searching for and visiting target
points that appear in a bounded environment according to a stochastic renewal
process with a known absolutely continuous spatial distribution. Agents must
detect targets with limited-range onboard sensors. It is desired to minimize
the system time, i.e., the expected waiting time between the appearance of a
target point and the instant it is visited. When the sensing radius is small,
the system time is dominated by time spent searching, and it is shown that the
optimal policy
requires the agents to search a region at a relative frequency proportional to
the square root of its renewal rate. On the other hand, when targets appear
frequently, the system time is dominated by time spent servicing known targets,
and it is shown that the optimal policy requires the agents to service a region
at a relative frequency proportional to the cube root of its renewal rate.
Furthermore, the algorithms presented for this case recover the optimal
performance achieved by agents with full information about the environment.
Simulation results verify the theoretical performance of the algorithms.
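The two allocation rules reduce to simple arithmetic; a minimal sketch (function and variable names are illustrative):

```python
# Relative visit frequencies across regions: proportional to the square root of
# the renewal rate in the search-dominated regime (small sensing radius) and to
# the cube root in the service-dominated regime (frequent targets).
import numpy as np

def relative_frequencies(renewal_rates, regime="search"):
    rates = np.asarray(renewal_rates, dtype=float)
    w = np.sqrt(rates) if regime == "search" else np.cbrt(rates)
    return w / w.sum()

print(relative_frequencies([1.0, 4.0, 9.0], regime="search"))    # 1:2:3 ratios
print(relative_frequencies([1.0, 8.0, 27.0], regime="service"))  # 1:2:3 ratios
```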
|
1109.5322
|
Synthesis of Optimal Ensemble Controls for Linear Systems using the
Singular Value Decomposition
|
math.OC cs.SY
|
An emerging and challenging area in mathematical control theory called
Ensemble Control encompasses a class of problems that involves the guidance of
an uncountably infinite collection of structurally identical dynamical systems,
which are indexed by a parameter set, by applying the same open-loop control.
The subject originates from the study of complex spin dynamics in Nuclear
Magnetic Resonance (NMR) spectroscopy and imaging (MRI). A fundamental question
concerns ensemble controllability, which determines the existence of controls
that transfer the system between desired initial and target states. For
ensembles of finite-dimensional time-varying linear systems, the necessary and
sufficient controllability conditions and analytical optimal control laws have
been shown to depend on the singular system of the operator characterizing the
system dynamics. Because analytical solutions are available only in the
simplest cases, there is a need to develop numerical methods for synthesizing
these controls. We introduce a direct, accurate, and computationally efficient
algorithm based on the singular value decomposition (SVD) that approximates
ensemble controls of minimum norm for such systems. This method enables the
application of ensemble control to engineering problems involving complex,
time-varying, and high-dimensional linear dynamic systems.
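The central numerical step, computing a minimum-norm solution through a truncated SVD, can be sketched as follows (here `L` stands for a discretized version of the operator mapping controls to states, an assumption made for illustration):

```python
# Minimum-norm least-squares solve via truncated SVD: negligible singular
# values are discarded, and the solution is assembled from the retained
# singular triples.
import numpy as np

def min_norm_control(L, b, tol=1e-10):
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    keep = s > tol * s[0]                       # truncate negligible directions
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])
```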
|
1109.5323
|
Squiggle - A Glyph Recognizer for Gesture Input
|
cs.HC cs.CV
|
Squiggle is a template-based glyph recognizer in the lineage of `$1
Recognizer' and `Protractor'. It seeks a good-fit linear affine mapping between
the input and template glyphs which are represented as a list of milestone
points along the glyph path. The algorithm can recognize input glyphs invariant
of rotation, scaling, skew, and reflection symmetries. In practice the
algorithm is fast and robust enough to recognize user-generated glyphs as they
are being drawn in real time, and to project `shadows' of the matching
templates as feedback.
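A hedged sketch of the core fitting step (milestone points are assumed to be resampled into correspondence; names are illustrative):

```python
# Fit the linear affine map sending template milestone points onto the input
# glyph by least squares; the residual norm scores the match, and the
# best-scoring template wins.
import numpy as np

def affine_fit_score(template, glyph):
    """template, glyph: (n, 2) arrays of corresponding milestone points."""
    T = np.hstack([template, np.ones((len(template), 1))])  # homogeneous coords
    M, *_ = np.linalg.lstsq(T, glyph, rcond=None)           # 3x2 affine map
    return np.linalg.norm(T @ M - glyph)                    # lower = better fit

def recognize(glyph, templates):
    """templates: dict name -> (n, 2) array; returns the best-matching name."""
    return min(templates, key=lambda name: affine_fit_score(templates[name], glyph))
```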
|
1109.5329
|
Space Weather Prediction with Exascale Computing
|
astro-ph.SR cs.CE physics.plasm-ph physics.space-ph
|
Space weather refers to conditions on the Sun, in the interplanetary space
and in the Earth space environment that can influence the performance and
reliability of space-borne and ground-based technological systems and can
endanger human life or health. Adverse conditions in the space environment can
cause disruption of satellite operations, communications, navigation, and
electric power distribution grids, leading to a variety of socioeconomic
losses. The conditions in space are also linked to the Earth climate. The
activity of the Sun affects the total amount of heat and light reaching the
Earth and the amount of cosmic rays arriving in the atmosphere, a phenomenon
linked with the amount of cloud cover and precipitation. Given these great
impacts on society, space weather is attracting growing attention and is the
subject of international efforts worldwide. We focus here on the steps
necessary for achieving a true physics-based ability to predict the arrival and
consequences of major space weather storms. Great disturbances in the space
environment are common, but their precise arrival and impact on human
activities vary greatly. Simulating such a system is a grand challenge,
requiring computing resources at the limit of what is possible not only with
current technology but also with the foreseeable future generations of
supercomputers.
|
1109.5336
|
Achievable Rates for K-user Gaussian Interference Channels
|
cs.IT math.IT
|
The aim of this paper is to study the achievable rates for the $K$-user
Gaussian interference channel at any SNR using a combination of lattice and
algebraic codes. Lattice codes are first used to transform the Gaussian
interference channel (G-IFC) into a discrete input-output noiseless channel,
and subsequently algebraic codes are developed to achieve good rates over this
new alphabet. In this context, a quantity called efficiency is introduced which
reflects the effectiveness of the algebraic coding strategy. The paper first
addresses the problem of finding high efficiency algebraic codes. A combination
of these codes with Construction-A lattices is then used to achieve nontrivial
rates for the original Gaussian interference channel.
|
1109.5346
|
Polar codes for degradable quantum channels
|
quant-ph cs.IT math.IT
|
Channel polarization is a phenomenon in which a particular recursive encoding
induces a set of synthesized channels from many instances of a memoryless
channel, such that a fraction of the synthesized channels becomes near perfect
for data transmission and the other fraction becomes near useless for this
task. Mahdavifar and Vardy have recently exploited this phenomenon to construct
codes that achieve the symmetric private capacity for private data transmission
over a degraded wiretap channel. In the current paper, we build on their work
and demonstrate how to construct quantum wiretap polar codes that achieve the
symmetric private capacity of a degraded quantum wiretap channel with a
classical eavesdropper. Due to the Schumacher-Westmoreland correspondence
between quantum privacy and quantum coherence, we can construct quantum polar
codes by operating these quantum wiretap polar codes in superposition, much
like Devetak's technique for demonstrating the achievability of the coherent
information rate for quantum data transmission. Our scheme achieves the
symmetric coherent information rate for quantum channels that are degradable
with a classical environment. This condition on the environment may seem
restrictive, but we show that many quantum channels satisfy this criterion,
including amplitude damping channels, photon-detected jump channels, dephasing
channels, erasure channels, and cloning channels. Our quantum polar coding
scheme has the desirable properties of being channel-adapted and symmetric
capacity-achieving along with having an efficient encoder, but we have not
demonstrated that the decoding is efficient. Also, the scheme may require
entanglement assistance, but we show that the rate of entanglement consumption
vanishes in the limit of large blocklength if the channel is degradable with
a classical environment.
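For intuition about the polarization phenomenon the construction builds on, the classical binary erasure channel case admits a two-line recursion (this is the standard classical illustration, not the quantum construction itself):

```python
# Bhattacharyya parameter evolution under polar coding for a binary erasure
# channel: each level splits Z into a worsened branch (2Z - Z^2) and an
# improved branch (Z^2), driving channels towards Z ~ 1 or Z ~ 0.
def polarize(z0=0.5, levels=10):
    zs = [z0]
    for _ in range(levels):
        zs = [f(z) for z in zs for f in (lambda z: 2*z - z*z, lambda z: z*z)]
    return zs

zs = polarize()
print(sum(z < 1e-3 for z in zs) / len(zs))  # fraction of near-perfect channels
```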
|
1109.5348
|
Dynkin Game of Stochastic Differential Equations with Random
Coefficients, and Associated Backward Stochastic Partial Differential
Variational Inequality
|
math.OC cs.SY math.AP
|
A Dynkin game is considered for stochastic differential equations with random
coefficients. We first apply Qiu and Tang's maximum principle for backward
stochastic partial differential equations to generalize the Krylov estimate for
the distribution of a Markov process to that of a non-Markov process, and
establish a generalized It\^o-Kunita-Wentzell formula allowing the test
function to be a random field of It\^o's type that takes values in a suitable
Sobolev space. We then prove a verification theorem showing that the Nash
equilibrium point and the value of the Dynkin game are characterized by the
strong solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation,
which in this case is a backward stochastic partial differential variational
inequality (BSPDVI, for short) with two obstacles. We obtain an existence and
uniqueness result and a comparison theorem for the strong solution of the
BSPDVI. Moreover, we study the monotonicity of the strong solution of the
BSPDVI via the comparison theorem and define the free boundaries. Finally, we
identify the counterparts
for an optimal stopping time problem as a special Dynkin game.
|
1109.5351
|
Data processing inequalities based on a certain structured class of
information measures with application to estimation theory
|
cs.IT math.IT
|
We study data processing inequalities that are derived from a certain class
of generalized information measures, where a series of convex functions and
multiplicative likelihood ratios are nested alternately. While these
information measures can be viewed as a special case of the most general
Zakai-Ziv generalized information measure, this special nested structure calls
for attention and motivates our study. Specifically, a certain choice of the
convex functions leads to an information measure that extends the notion of the
Bhattacharyya distance (or the Chernoff divergence): While the ordinary
Bhattacharyya distance is based on the (weighted) geometric mean of two
replicas of the channel's conditional distribution, the more general
information measure allows an arbitrary number of such replicas. We apply the
data processing inequality induced by this information measure to a detailed
study of lower bounds of parameter estimation under additive white Gaussian
noise (AWGN) and show that in certain cases, tighter bounds can be obtained by
using more than two replicas. While the resulting lower bound may not compete
favorably with the best bounds available for the ordinary AWGN channel, the
advantage of the new lower bound, relative to the other bounds, becomes
significant in the presence of channel uncertainty, like unknown fading. This
different behavior in the presence of channel uncertainty is explained by the
convexity property of the information measure.
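For reference, the two classical quantities that the nested measures generalize can be written as follows (standard definitions, with $s$ the Chernoff weight):

```latex
% Ordinary Bhattacharyya distance and Chernoff divergence; the paper's nested
% information measures extend the two-replica geometric mean to an arbitrary
% number of replicas of the channel's conditional distribution.
\[
  B(p,q) = -\ln \int \sqrt{p(x)\,q(x)}\,dx, \qquad
  C_s(p,q) = -\ln \int p(x)^{s}\,q(x)^{1-s}\,dx, \quad s \in (0,1).
\]
```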
|
1109.5370
|
Higher-Order Markov Tag-Topic Models for Tagged Documents and Images
|
cs.CV cs.AI cs.IR cs.LG
|
This paper studies the topic modeling problem of tagged documents and images.
Higher-order relations among tagged documents and images are major and
ubiquitous characteristics, and play positive roles in extracting reliable and
interpretable topics. In this paper, we propose the tag-topic models (TTM) to
depict such higher-order topic structural dependencies within the Markov random
field (MRF) framework. First, we use the novel factor graph representation of
latent Dirichlet allocation (LDA)-based topic models from the MRF perspective,
and present an efficient loopy belief propagation (BP) algorithm for
approximate inference and parameter estimation. Second, we propose the factor
hypergraph representation of TTM, and focus on both pairwise and higher-order
relation modeling among tagged documents and images. An efficient loopy BP
algorithm is developed to learn TTM, which encourages topic labeling
smoothness among tagged documents and images. Extensive experimental results
confirm that incorporating higher-order relations is effective in enhancing
the overall topic modeling performance, when compared with current
state-of-the-art topic models, in many text and image mining tasks of broad
interest such as word and link prediction, document classification, and tag
recommendation.
|
1109.5373
|
Degrees of Freedom Region of the MIMO Interference Channel with Output
Feedback and Delayed CSIT
|
cs.IT math.IT
|
The two-user multiple-input multiple-output (MIMO) interference channel (IC)
with arbitrary number of antennas at each terminal is considered and the
degrees of freedom (DoF) region is characterized in the presence of noiseless
channel output feedback from each receiver to its respective transmitter and
availability of delayed channel state information at the transmitters (CSIT).
It is shown that having output feedback and delayed CSIT can strictly enlarge
the DoF region of the MIMO IC when compared to the case in which only delayed
CSIT is present. The proposed coding schemes that achieve the corresponding DoF
region with feedback and delayed CSIT utilize both resources, i.e., feedback
and delayed CSIT in a non-trivial manner. It is also shown that the DoF region
with local feedback and delayed CSIT is equal to the DoF region with global
feedback and delayed CSIT, i.e., local feedback and delayed CSIT is equivalent
to global feedback and delayed CSIT from the perspective of the degrees of
freedom region. The converse is proved for a stronger setting in which the
channels to the two receivers need not be statistically equivalent.
|
1109.5375
|
Singular gradient flow of the distance function and homotopy equivalence
|
math.AP cs.SY math.DG math.OC
|
It is a generally shared opinion that significant information about the
topology of a bounded domain $\Omega$ of a Riemannian manifold $M$ is encoded
in the properties of the distance function from the boundary of $\Omega$,
$d_{\partial\Omega}:\Omega\rightarrow [0,\infty)$. To confirm such an idea we
propose an approach based on the invariance of the singular set of the
distance function with respect to the generalized gradient flow of
$d_{\partial\Omega}$. As an application, we deduce that such a singular set has
the same homotopy type as $\Omega$.
|
1109.5382
|
Discrete-Time Block Models for Transmission Line Channels: Static and
Doubly Selective Cases
|
cs.IT math.IT
|
Most methodologies for modeling Transmission Line (TL) based channels define
the input-output relationship in the frequency domain (FD) and handle the TL
resorting to a two-port network (2PN) formalism. These techniques have not yet
been formally mapped into a discrete-time (DT) block model, which is useful to
simulate and estimate the channel response as well as to design optimal
precoding strategies. TL methods also fall short when they are applied to Time
Varying (TV) systems, such as the power line channel. The objective of this
paper is to establish if and how one can introduce a DT block model for the
power line channel. We prove that it is possible to use Lifting and Trailing
Zeros (L&TZ) techniques to derive a DT block model that maps the TL-based
input-output description directly into a time-domain (TD) block channel model.
More specifically, we find an interesting relationship between the elements of
an ABCD matrix, defined in the FD, and filtering kernels that allow an elegant
representation of the channel in the TD. The same formalism is valid for both
the Linear Time Invariant (LTI) and the Linear TV (LTV) cases, and bridges
communications and signal processing methodologies with circuits and systems
analysis tools.
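As a hedged illustration of the lifting idea for the LTI case (block length and impulse response are made up for the example), a block of inputs padded with trailing zeros sees the channel as a single tall matrix:

```python
# Lifting with trailing zeros: an LTI channel with impulse response h acts on a
# block of N inputs (padded by L-1 trailing zeros) as a convolution matrix, so
# consecutive blocks do not interfere.
import numpy as np

def lifted_channel_matrix(h, N):
    """Returns the (N+L-1) x N matrix H with y_block = H @ x_block."""
    L = len(h)
    H = np.zeros((N + L - 1, N))
    for k in range(N):
        H[k:k + L, k] = h          # shifted copies of the impulse response
    return H

H = lifted_channel_matrix(np.array([1.0, 0.5, 0.25]), N=4)
x = np.ones(4)
print(np.allclose(H @ x, np.convolve([1.0, 0.5, 0.25], x)))  # True
```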
|
1109.5396
|
Degrees of Freedom of Interference Channels with CoMP Transmission and
Reception
|
cs.IT math.IT
|
We study the Degrees of Freedom (DoF) of the K-user interference channel with
coordinated multi-point (CoMP) transmission and reception. Each message is
jointly transmitted by M_t successive transmitters, and is jointly received by
M_r successive receivers. We refer to this channel as the CoMP channel with a
transmit cooperation order of M_t and receive cooperation order of M_r. Since
the channel has a total of K transmit antennas and K receive antennas, the
maximum possible DoF is equal to K. We show that the CoMP channel has K DoF if
and only if M_t + M_r is greater than or equal to K+1. For the general case, we
derive an outer bound that states that the DoF is bounded above by the ceiling
of (K+M_t+M_r-2)/2. For the special case with only CoMP transmission, i.e., M_r
= 1, we propose a scheme that can achieve (K+M_t-1)/2 DoF for all K < 10, and
conjecture that the result holds true for all K. The achievability proofs are
based on the notion of algebraic independence from algebraic geometry.
|
1109.5404
|
Towards Optimal Learning of Chain Graphs
|
stat.ML cs.AI math.ST stat.TH
|
In this paper, we extend Meek's conjecture (Meek 1997) from directed and
acyclic graphs to chain graphs, and prove that the extended conjecture is true.
Specifically, we prove that if a chain graph H is an independence map of the
independence model induced by another chain graph G, then (i) G can be
transformed into H by a sequence of directed and undirected edge additions and
feasible splits and mergings, and (ii) after each operation in the sequence H
remains an independence map of the independence model induced by G. Our result
has the same important consequence for learning chain graphs from data as the
proof of Meek's conjecture in (Chickering 2002) had for learning Bayesian
networks from data: It makes it possible to develop efficient and
asymptotically correct learning algorithms under mild assumptions.
|
1109.5415
|
Shannon Meets Nyquist: Capacity of Sampled Gaussian Channels
|
cs.IT math.IT
|
We explore two fundamental questions at the intersection of sampling theory
and information theory: how channel capacity is affected by sampling below the
channel's Nyquist rate, and what sub-Nyquist sampling strategy should be
employed to maximize capacity. In particular, we derive the capacity of sampled
analog channels for three prevalent sampling strategies: sampling with
filtering, sampling with filter banks, and sampling with modulation and filter
banks. These sampling mechanisms subsume most nonuniform sampling techniques
applied in practice. Our analyses illuminate interesting connections between
under-sampled channels and multiple-input multiple-output channels. The optimal
sampling structures are shown to extract the frequencies with the highest
SNR from each aliased frequency set, while suppressing aliasing and out-of-band
noise. We also highlight connections between undersampled channel capacity and
minimum mean-squared error (MSE) estimation from sampled data. In particular,
we show that the filters maximizing capacity and the ones minimizing MSE are
equivalent under both filtering and filter-bank sampling strategies. These
results demonstrate the effect upon channel capacity of sub-Nyquist sampling
techniques, and characterize the tradeoff between information rate and sampling
rate.
|
1109.5420
|
Incremental Relaying for the Gaussian Interference Channel with a
Degraded Broadcasting Relay
|
cs.IT math.IT
|
This paper studies incremental relay strategies for a two-user Gaussian
relay-interference channel with an in-band-reception and
out-of-band-transmission relay, where the link between the relay and the two
receivers is modelled as a degraded broadcast channel. It is shown that
generalized hash-and-forward (GHF) can achieve the capacity region of this
channel to within a constant number of bits in a certain weak relay regime,
where the transmitter-to-relay link gains are not unboundedly stronger than the
interference links between the transmitters and the receivers. The GHF relaying
strategy is ideally suited for the broadcasting relay because it can be
implemented in an incremental fashion, i.e., the relay message to one receiver
is a degraded version of the message to the other receiver. A
generalized-degree-of-freedom (GDoF) analysis in the high signal-to-noise ratio
(SNR) regime reveals that in the symmetric channel setting, each common relay
bit can improve the sum rate roughly by either one bit or two bits
asymptotically depending on the operating regime, and the rate gain can be
interpreted as coming solely from the improvement of the common message rates,
or alternatively in the very weak interference regime as solely coming from the
rate improvement of the private messages. Further, this paper studies an
asymmetric case in which the relay has only a single link to one of the
destinations. It is shown that with only one relay-destination link, the
approximate capacity region can be established for a larger regime of channel
parameters. Further, from a GDoF point of view, the sum-capacity gain due to
the relay can now be thought of as coming from either signal relaying only, or
interference forwarding only.
|
1109.5426
|
The capacity for the linear time-invariant Gaussian relay channel
|
cs.IT math.IT
|
In this paper, the Gaussian relay channel with linear time-invariant (LTI) relay
filtering is considered. Based on spectral theory for stationary processes, the
maximum achievable rate for this subclass of linear Gaussian relay operation is
obtained in a finite-letter characterization. The maximum rate can be achieved by
dividing the overall frequency band into at most eight subbands and by making
the relay behave as an instantaneous amplify-and-forward relay at each subband.
Numerical results are provided to evaluate the performance of LTI relaying.
|
1109.5430
|
Recovery of Block-Sparse Representations from Noisy Observations via
Orthogonal Matching Pursuit
|
cs.IT math.IT
|
We study the problem of recovering the sparsity pattern of block-sparse
signals from noise-corrupted measurements. A simple, efficient recovery method,
namely, a block-version of the orthogonal matching pursuit (OMP) method, is
considered in this paper and its behavior for recovering the block-sparsity
pattern is analyzed. We provide sufficient conditions under which the
block-version of the OMP can successfully recover the block-sparse
representations in the presence of noise. Our analysis reveals that exploiting
block-sparsity can improve the recovery ability and lead to a guaranteed
recovery for a higher sparsity level. Numerical results are presented to
corroborate our theoretical claim.
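A minimal numpy sketch of the block-version of OMP analyzed here (block layout and stopping rule are simplified assumptions):

```python
# Block orthogonal matching pursuit: repeatedly pick the block of columns whose
# correlation with the residual has the largest l2 norm, then re-fit the signal
# by least squares on the selected blocks.
import numpy as np

def block_omp(A, y, block_size, n_blocks_to_pick):
    m, n = A.shape
    blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
    chosen, residual = [], y.copy()
    for _ in range(n_blocks_to_pick):
        scores = [-np.inf if i in chosen else np.linalg.norm(A[:, b].T @ residual)
                  for i, b in enumerate(blocks)]
        chosen.append(int(np.argmax(scores)))
        cols = np.concatenate([blocks[i] for i in chosen])
        x_s, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
        residual = y - A[:, cols] @ x_s
    x = np.zeros(n)
    x[np.concatenate([blocks[i] for i in chosen])] = x_s
    return x
```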
|
1109.5433
|
Optimal Precoding Design and Power Allocation for Decentralized
Detection of Deterministic Signals
|
cs.IR
|
We consider a decentralized detection problem in a power-constrained wireless
sensor network (WSN), in which a number of sensor nodes collaborate to detect
the presence of a deterministic vector signal. The signal to be detected is
assumed known \emph{a priori}. Given a constraint on the total amount of
transmit power, we investigate the optimal linear precoding design for each
sensor node. More specifically, in order to achieve the best detection
performance, shall sensor nodes transmit their raw data to the fusion center
(FC), or transmit compressed versions of their original data? The optimal power
allocation among sensors is studied as well. Also, assuming a fixed total
transmit power, we examine how the detection performance behaves with the
number of sensors in the network. A new concept, "detection outage," is
proposed to quantify the reliability of the overall detection system. Finally,
decentralized detection with unknown signals is studied. Numerical experiments
are conducted to corroborate our theoretical analysis and to illustrate the
performance of the proposed algorithm.
|
1109.5453
|
Posterior Mean Super-resolution with a Causal Gaussian Markov Random
Field Prior
|
cs.CV
|
We propose a Bayesian image super-resolution (SR) method with a causal
Gaussian Markov random field (MRF) prior. SR is a technique to estimate a
spatially high-resolution image from given multiple low-resolution images. An
MRF model with the line process supplies a preferable prior for natural images
with edges. We improve the existing image transformation model, the compound
MRF model, and its hyperparameter prior model. We also derive the optimal
estimator -- not the joint maximum a posteriori (MAP) or marginalized maximum
likelihood (ML), but the posterior mean (PM) -- from the objective function of
the L2-norm (mean square error)-based peak signal-to-noise ratio (PSNR). Point
estimates such as MAP and ML are generally not stable in ill-posed
high-dimensional problems because of overfitting, while PM is a stable
estimator because all the parameters in the model are evaluated as
distributions. The estimator is numerically determined by using variational
Bayes. Variational Bayes is a widely used method that approximately determines
a complicated posterior distribution, but it is generally hard to use because
it needs the conjugate prior. We solve this problem with simple Taylor
approximations. Experimental results have shown that the proposed method is
more accurate than, or comparable to, existing methods.
|
1109.5454
|
The ubiquity of small-world networks
|
nlin.AO cs.SI physics.soc-ph
|
Small-world networks, as introduced by Watts and Strogatz, are a class of networks that are
highly clustered, like regular lattices, yet have small characteristic path
lengths, like random graphs. These characteristics result in networks with
unique properties of regional specialization with efficient information
transfer. Social networks are intuitive examples of this organization with
cliques or clusters of friends being interconnected, but each person is really
only 5-6 people away from anyone else. While this qualitative definition has
prevailed in network science theory, in practice the standard quantitative
approach is to compare path length (a surrogate measure of distributed
processing) and clustering (a surrogate measure of regional specialization) to
an equivalent random network. It is demonstrated here that comparing network
clustering to that of a random network can result in aberrant findings and
networks once thought to exhibit small-world properties may not. We propose a
new small-world metric, {\omega} (omega), which compares network clustering to
an equivalent lattice network and path length to a random network, as Watts and
Strogatz originally described. Example networks are presented that would be
interpreted as small-world when clustering is compared to a random network but
are not small-world according to {\omega}. These findings have significant
implications in network science as small-world networks have unique topological
properties, and it is critical to accurately distinguish them from networks
without simultaneous high clustering and low path length.
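A hedged sketch of the proposed metric using networkx reference graphs (recent networkx versions also ship this directly as `nx.omega`; the iteration counts here are illustrative):

```python
# omega compares clustering to an equivalent lattice and path length to an
# equivalent random graph: ~0 indicates small-world, >0 random-like, <0
# lattice-like.
import networkx as nx

def small_world_omega(G, niter=5, seed=0):
    rand = nx.random_reference(G, niter=niter, seed=seed)
    latt = nx.lattice_reference(G, niter=niter, seed=seed)
    L = nx.average_shortest_path_length(G)
    C = nx.average_clustering(G)
    L_rand = nx.average_shortest_path_length(rand)
    C_latt = nx.average_clustering(latt)
    return L_rand / L - C / C_latt

print(small_world_omega(nx.watts_strogatz_graph(100, 6, 0.1, seed=1)))
```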
|
1109.5460
|
The scaling of human mobility by taxis is exponential
|
physics.soc-ph cs.SI
|
As a significant factor in urban planning, traffic forecasting, and the
prediction of epidemics, modeling patterns of human mobility has drawn
intensive attention from researchers for decades. Power-law distributions and
their variations are observed in quite a few real-world human mobility
datasets, such as the movements of bank notes, the tracked locations of cell
phone users, and the trajectories of vehicles. In this paper, we build models
for 20 million
trajectories with fine granularity collected from more than 10 thousand taxis
in Beijing. In contrast to most models observed in human mobility data, the
taxis' traveling displacements in urban areas tend to follow an exponential
distribution instead of a power-law. Similarly, the elapsed time can also be
well approximated by an exponential distribution. Notably, analysis of the
inter-event time indicates the bursty nature of human mobility, similar to
many other human activities.
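Fitting the exponential model in question is elementary; a hedged sketch (a comparison against a power-law fit would use the same log-likelihood machinery):

```python
# Maximum-likelihood exponential fit to displacement data: the rate is the
# reciprocal of the sample mean.
import numpy as np

def fit_exponential(displacements):
    d = np.asarray(displacements, dtype=float)
    lam = 1.0 / d.mean()                          # MLE for p(d) = lam * exp(-lam*d)
    log_lik = len(d) * np.log(lam) - lam * d.sum()
    return lam, log_lik
```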
|
1109.5466
|
Optimal Sensor Placement for Intruder Detection
|
cs.SY math.OC
|
We consider the centralized detection of an intruder, whose location is
modeled as uniform across a specified set of points, using an optimally placed
team of sensors. These sensors make conditionally independent observations. The
local detectors at the sensors are also assumed to be identical, with detection
probability $(P_{D})$ and false alarm probability $(P_{F})$. We formulate
the problem as an N-ary hypothesis testing problem, jointly optimizing the
sensor placement and detection policies at the fusion center. We prove that
uniform sensor placement is never strictly optimal when the number of sensors
$(M)$ equals the number of placement points $(N)$. We prove that for $N_{2} >
N_{1} > M$, where $N_{1},N_{2}$ are numbers of placement points, the framework
utilizing $M$ sensors and $N_{1}$ placement points has the same optimal
placement structure as the one utilizing $M$ sensors and $N_{2}$ placement
points. For $M\leq 5$ and for fixed $P_{D}$, increasing $P_{F}$ leads to
optimal placements that are higher in the majorization-based placement scale.
Similarly, for $M\leq 5$ and for fixed $P_{F}$, increasing $P_{D}$ leads
to optimal placements that are higher in the majorization-based placement
scale. For $M>5$, this result does not necessarily hold and we provide a simple
counterexample. It is conjectured that the set of optimal placements for a
given $(M,N)$ can always be placed on a majorization-based placement scale.
|
1109.5482
|
Social Learning in a Changing World
|
cs.SI physics.soc-ph
|
We study a model of learning on social networks in dynamic environments,
describing a group of agents who are each trying to estimate an underlying
state that varies over time, given access to weak signals and the estimates of
their social network neighbors.
We study three models of agent behavior. In the "fixed response" model,
agents use a fixed linear combination to incorporate information from their
peers into their own estimate. This can be thought of as an extension of the
DeGroot model to a dynamic setting. In the "best response" model, players
calculate minimum variance linear estimators of the underlying state.
We show that regardless of the initial configuration, fixed response dynamics
converge to a steady state, and that the same holds for best response on the
complete graph. We show that best response dynamics can, in the long term, lead
to estimators with higher variance than is achievable using well chosen fixed
responses.
The "penultimate prediction" model is an elaboration of the best response
model. While this model only slightly complicates the computations required of
the agents, we show that in some cases it greatly increases the efficiency of
learning, and on complete graphs is in fact optimal, in a strong sense.
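As a hedged sketch of the simplest of the three models (the weight matrix, mixing parameter, and noise scale are illustrative assumptions):

```python
# One step of "fixed response" dynamics: each agent combines its neighbours'
# previous estimates (through a fixed row-stochastic weight matrix W) with its
# own fresh noisy signal about the current underlying state theta.
import numpy as np

def fixed_response_step(x, W, theta, a=0.3, noise=1.0, rng=np.random.default_rng(0)):
    signals = theta + noise * rng.standard_normal(len(x))
    return (1 - a) * (W @ x) + a * signals
```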
|
1109.5484
|
Two-hop Communication with Energy Harvesting
|
cs.IT math.IT
|
Communication nodes with the ability to harvest energy from the environment
have the potential to operate beyond the timeframe limited by the finite
capacity of their batteries and, accordingly, to extend the overall network
lifetime. However, the optimization of the communication system in the presence
of energy harvesting devices requires a new paradigm in terms of power
allocation since the energy becomes available over time. In this paper, we
consider the problem of two-hop relaying in the presence of energy harvesting
nodes. We identify the optimal offline transmission scheme for energy
harvesting source and relay when the relay operates in the full-duplex mode. In
the case of a half-duplex relay, we provide the optimal transmission scheme
when the source has a single energy packet.
|
1109.5488
|
MassChroQ: A versatile tool for mass spectrometry quantification
|
q-bio.QM cs.CE
|
Recently, many software tools have been developed to perform quantification
in LC-MS analyses. However, most of them are specific to either a
quantification strategy (e.g. label-free or isotopic labelling) or a
mass-spectrometry system (e.g. high or low resolution).
In this context, we have developed MassChroQ, a versatile software that
performs LC-MS data alignment and peptide quantification by peak area
integration on extracted ion chromatograms. MassChroQ is suitable for
quantification with or without labelling and is not limited to high resolution
systems. Peptides of interest (for example all the identified peptides) can be
determined automatically or manually by providing targeted m/z and retention
time values. It can handle large experiments that include protein or peptide
fractionation (such as SDS-PAGE or 2D-LC). It is fully configurable. Every processing
step is traceable, the produced data are in open standard format and its
modularity allows easy integration into proteomic pipelines. The output results
are ready for use in statistical analyses.
Evaluation of MassChroQ on complex label-free data obtained from low and high
resolution mass spectrometers showed low CVs for technical reproducibility
(1.4%) and high coefficients of correlation to protein quantity (0.98).
MassChroQ is freely available under the GNU General Public Licence v3.0 at
http://pappso.inra.fr/bioinfo/masschroq/.
|
1109.5490
|
A General Framework for the Optimization of Energy Harvesting
Communication Systems with Battery Imperfections
|
cs.IT math.IT
|
Energy harvesting has emerged as a powerful technology for complementing
current battery-powered communication systems in order to extend their
lifetime. In this paper a general framework is introduced for the optimization
of communication systems in which the transmitter is able to harvest energy
from its environment. Assuming that the energy arrival process is known
non-causally at the transmitter, the structure of the optimal transmission
scheme, which maximizes the amount of transmitted data by a given deadline, is
identified. Our framework includes models with continuous energy arrival as
well as battery constraints. A battery that suffers from energy leakage is
studied further, and the optimal transmission scheme is characterized for a
constant leakage rate.
|
1109.5526
|
Are random axioms useful?
|
math.LO cs.IT cs.LO math.IT
|
The famous G\"odel incompleteness theorem says that for every sufficiently
rich formal theory (containing formal arithmetic in some natural sense) there
exist true unprovable statements. Such statements would be natural candidates
for being added as axioms, but where can we obtain them? One classical (and
well studied) approach is to add (to some theory T) an axiom that claims the
consistency of T. In this note we discuss the other one (motivated by Chaitin's
version of the G\"odel theorem) and show that it is not really useful (in the
sense that it does not help us to prove new interesting theorems), at least if
we are not limiting the proof complexity. We also discuss some related
questions.
|
1109.5560
|
Temporal effects in the growth of networks
|
physics.soc-ph cond-mat.stat-mech cs.DL cs.SI
|
We show that to explain the growth of the citation network by preferential
attachment (PA), one has to accept that individual nodes exhibit heterogeneous
fitness values that decay with time. While previous PA-based models assumed
either heterogeneity or decay in isolation, we propose a simple analytically
treatable model that combines these two factors. Depending on the input
assumptions, the resulting degree distribution shows an exponential, log-normal
or power-law decay, which makes the model an apt candidate for modeling a wide
range of real systems.
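A hedged simulation sketch combining the two factors (the exponential fitness decay and the uniform fitness distribution are illustrative assumptions):

```python
# Growth with preferential attachment where a node's attractiveness is its
# degree times a random fitness that decays exponentially with age.
import numpy as np

def grow(n_nodes, m=2, tau=20.0, rng=np.random.default_rng(0)):
    deg = np.ones(n_nodes)          # existing nodes start with unit weight
    eta = rng.random(n_nodes)       # heterogeneous fitness values
    birth = np.arange(n_nodes)      # node i is born at time i
    for t in range(m + 1, n_nodes):
        age = t - birth[:t]
        w = deg[:t] * eta[:t] * np.exp(-age / tau)   # decaying attractiveness
        targets = rng.choice(t, size=m, replace=False, p=w / w.sum())
        deg[targets] += 1
        deg[t] += m
    return deg
```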
|
1109.5589
|
A General Framework for Performance Analysis of Spatial Modulation over
Correlated Fading Channels
|
cs.IT math.IT
|
We present a general method for the error analysis of spatial modulation (SM)
systems over correlated and uncorrelated Rayleigh and Rician fading channels.
The proposed method, making use of the properties of proper complex random
variables and vectors, provides an exact upper bound for the class of fading
channels considered for any number of transmit and receive antennas and for a
wide family of linear modulation alphabets. Theoretical derivations are
validated via simulation results.
|
1109.5593
|
Markov dynamics as a zooming lens for multiscale community detection:
non clique-like communities and the field-of-view limit
|
physics.soc-ph cs.SI
|
In recent years, there has been a surge of interest in community detection
algorithms for complex networks. A variety of computational heuristics, some
with a long history, have been proposed for the identification of communities
or, alternatively, of good graph partitions. In most cases, the algorithms
maximize a particular objective function, thereby finding the `right' split
into communities. Although a thorough comparison of algorithms is still
lacking, there has been an effort to design benchmarks, i.e., random graph
models with known community structure against which algorithms can be
evaluated. However, popular community detection methods and benchmarks normally
assume an implicit notion of community based on clique-like subgraphs, a form
of community structure that is not always characteristic of real networks.
Specifically, networks that emerge from geometric constraints can have natural
non clique-like substructures with large effective diameters, which can be
interpreted as long-range communities. In this work, we show that long-range
communities escape detection by popular methods, which are blinded by a
restricted `field-of-view' limit, an intrinsic upper scale on the communities
they can detect. The field-of-view limit means that long-range communities tend
to be overpartitioned. We show how by adopting a dynamical perspective towards
community detection (Delvenne et al. (2010) PNAS 107:12755-12760; Lambiotte et
al. (2008) arXiv:0812.1770), in which the evolution of a Markov process on the
graph is used as a zooming lens over the structure of the network at all
scales, one can detect both clique- or non clique-like communities without
imposing an upper scale to the detection. Consequently, the performance of
algorithms on inherently low-diameter, clique-like benchmarks may not always be
indicative of equally good results in real networks with local, sparser
connectivity.
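For concreteness, the dynamical quantity swept over time in this approach can be written as follows (notation paraphrased from Delvenne et al., so treat it as a hedged sketch): the stability of a partition with indicator matrix $H$ at Markov time $t$ is

```latex
% pi: stationary distribution of the random walk; Pi = diag(pi); P(t): the
% t-time transition matrix of the Markov process; H: node-to-community
% indicator matrix. Larger t probes coarser community structure.
\[
  r(t; H) \;=\; \operatorname{trace}\!\left[\, H^{\top}\!\left( \Pi\, P(t) - \pi\, \pi^{\top} \right) H \,\right].
\]
```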
|
1109.5647
|
Making Gradient Descent Optimal for Strongly Convex Stochastic
Optimization
|
cs.LG math.OC
|
Stochastic gradient descent (SGD) is a simple and popular method to solve
stochastic optimization problems which arise in machine learning. For strongly
convex problems, its convergence rate was known to be O(\log(T)/T), by running
SGD for T iterations and returning the average point. However, recent results
showed that using a different algorithm, one can get an optimal O(1/T) rate.
This might lead one to believe that standard SGD is suboptimal, and maybe
should even be replaced as a method of choice. In this paper, we investigate
the optimality of SGD in a stochastic setting. We show that for smooth
problems, the algorithm attains the optimal O(1/T) rate. However, for
non-smooth problems, the convergence rate with averaging might really be
\Omega(\log(T)/T), and this is not just an artifact of the analysis. On the
flip side, we show that a simple modification of the averaging step suffices to
recover the O(1/T) rate, and no other change of the algorithm is necessary. We
also present experimental results which support our findings, and point out
open problems.
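As a hedged sketch of the modified averaging step (the gradient oracle is a placeholder; averaging the last half of the iterates is one instance of the suffix averaging the paper analyzes):

```python
# SGD with step size 1/(lambda*t) for a lambda-strongly-convex objective,
# returning the average of only the last half of the iterates instead of all
# of them.
import numpy as np

def sgd_suffix_average(grad, w0, lam, T, rng=np.random.default_rng(0)):
    """grad(w, rng) returns a stochastic (sub)gradient at w."""
    w, suffix = np.asarray(w0, dtype=float), []
    for t in range(1, T + 1):
        w = w - (1.0 / (lam * t)) * grad(w, rng)
        if t > T // 2:
            suffix.append(w.copy())
    return np.mean(suffix, axis=0)
```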
|
1109.5663
|
The Deterministic Part of IPC-4: An Overview
|
cs.AI
|
We provide an overview of the organization and results of the deterministic
part of the 4th International Planning Competition, i.e., of the part concerned
with evaluating systems doing deterministic planning. IPC-4 attracted even more
competing systems than its already large predecessors, and the competition
event was revised in several important respects. After giving an introduction
to the IPC, we briefly explain the main differences between the deterministic
part of IPC-4 and its predecessors. We then introduce formally the language
used, called PDDL2.2 that extends PDDL2.1 by derived predicates and timed
initial literals. We list the competing systems and overview the results of the
competition. The entire set of data is far too large to be presented in full.
We provide a detailed summary; the complete data is available in an online
appendix. We explain how we awarded the competition prizes.
|
1109.5664
|
Deterministic Feature Selection for $k$-means Clustering
|
cs.LG cs.DS
|
We study feature selection for $k$-means clustering. Although the literature
contains many methods with good empirical performance, algorithms with provable
theoretical behavior have only recently been developed. Unfortunately, these
algorithms are randomized and fail with, say, a constant probability. We
address this issue by presenting a deterministic feature selection algorithm
for $k$-means with theoretical guarantees. At the heart of our algorithm lies a
deterministic method for decompositions of the identity.
|
1109.5665
|
PDDL2.1 - The Art of the Possible? Commentary on Fox and Long
|
cs.AI
|
PDDL2.1 was designed to push the envelope of what planning algorithms can do,
and it has succeeded. It adds two important features: durative actions, which
take time (and may have continuous effects); and objective functions for
measuring the quality of plans. The concept of durative actions is flawed, and
the treatment of their semantics reveals too strong an attachment to the way
many contemporary planners work. Future PDDL innovators should focus on
producing a clean semantics for additions to the language, and let planner
implementers worry about coupling their algorithms to problems expressed in the
latest version of the language.
|
1109.5666
|
The Case for Durative Actions: A Commentary on PDDL2.1
|
cs.AI
|
The addition of durative actions to PDDL2.1 sparked some controversy. Fox and
Long argued that actions should be considered as instantaneous, but can start
and stop processes. Ultimately, a limited notion of durative actions was
incorporated into the language. I argue that this notion is still impoverished,
and that the underlying philosophical position of regarding durative actions as
being a shorthand for a start action, process, and stop action ignores the
realities of modelling and execution for complex systems.
|
1109.5711
|
Engineering a Conformant Probabilistic Planner
|
cs.AI
|
We present a partial-order, conformant, probabilistic planner, Probapop, which
competed in the blind track of the Probabilistic Planning Competition in IPC-4.
We explain how we adapt distance based heuristics for use with probabilistic
domains. Probapop also incorporates heuristics based on probability of success.
We explain the successes and difficulties encountered during the design and
implementation of Probapop.
|
1109.5712
|
Cooperative Information Sharing to Improve Distributed Learning in
Multi-Agent Systems
|
cs.MA
|
Effective coordination of agents' actions in partially-observable domains is a
major challenge of multi-agent systems research. To address this, many
researchers have developed techniques that allow the agents to make decisions
based on estimates of the states and actions of other agents that are typically
learnt using some form of machine learning algorithm. Nevertheless, many of
these approaches fail to provide an actual means by which the necessary
information is made available so that the estimates can be learnt. To this end,
we argue that cooperative communication of state information between agents is
one such mechanism. However, in a dynamically changing environment, the
accuracy and timeliness of this communicated information determine the fidelity
of the learned estimates and the usefulness of the actions taken based on
these. Given this, we propose a novel information-sharing protocol,
post-task-completion sharing, for the distribution of state information. We
then show, through a formal analysis, the improvement in the quality of
estimates produced using our strategy over the widely used protocol of sharing
information between nearest neighbours. Moreover, communication heuristics
designed around our information-sharing principle are subjected to empirical
evaluation along with other benchmark strategies (including Littman's Q-routing
and Stone's TPOT-RL) in a simulated call-routing application. These studies,
conducted across a range of environmental settings, show that, compared to the
different benchmarks used, our strategy generates an improvement of up to 60%
in the call connection rate and of more than 1000% in the ability to connect
long-distance calls, while incurring as little as 0.25 of the message overhead.
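For context, the Q-routing benchmark mentioned above updates per-neighbour delivery-time estimates from downstream feedback; a hedged sketch of that update (the data layout and learning rate are assumptions):

```python
# Q-routing update (after Boyan & Littman): node x sent a packet bound for d to
# neighbour y; y reports its best remaining-time estimate, and x nudges its own
# estimate towards the observed queueing + transmission delay plus that report.
def q_routing_update(Q, x, y, d, queue_delay, tx_delay, eta=0.5):
    """Q[node][dest][neighbour] -> estimated remaining delivery time."""
    t_remaining = min(Q[y][d].values()) if Q[y][d] else 0.0
    target = queue_delay + tx_delay + t_remaining
    Q[x][d][y] += eta * (target - Q[x][d][y])
```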
|
1109.5713
|
Where 'Ignoring Delete Lists' Works: Local Search Topology in Planning
Benchmarks
|
cs.AI
|
Between 1998 and 2004, the planning community saw vast progress in terms
of the sizes of benchmark examples that domain-independent planners can tackle
successfully. The key technique behind this progress is the use of heuristic
functions based on relaxing the planning task at hand, where the relaxation is
to assume that all delete lists are empty. The unprecedented success of such
methods, in many commonly used benchmark examples, calls for an understanding
of what classes of domains these methods are well suited for. In the
investigation at hand, we derive a formal background to such an understanding.
We perform a case study covering a range of 30 commonly used STRIPS and ADL
benchmark domains, including all examples used in the first four international
planning competitions. We *prove* connections between domain structure and
local search topology -- heuristic cost surface properties -- under an
idealized version of the heuristic functions used in modern planners. The
idealized heuristic function is called h^+, and differs from the practically
used functions in that it returns the length of an *optimal* relaxed plan,
which is NP-hard to compute. We identify several key characteristics of the
topology under h^+, concerning the existence/non-existence of unrecognized dead
ends, as well as the existence/non-existence of constant upper bounds on the
difficulty of escaping local minima and benches. These distinctions divide the
(set of all) planning domains into a taxonomy of classes of varying h^+
topology. As it turns out, many of the 30 investigated domains lie in classes
with a relatively easy topology. Most particularly, 12 of the domains lie in
classes where FF's search algorithm, provided with h^+, is a polynomial solving
mechanism. We also present results relating h^+ to its approximation as
implemented in FF. The behavior regarding dead ends is provably the same. We
summarize the results of an empirical investigation showing that, in many
domains, the topological qualities of h^+ are largely inherited by the
approximation. The overall investigation gives a rare example of a successful
analysis of the connections between typical-case problem structure, and search
performance. The theoretical investigation also gives hints on how the
topological phenomena might be automatically recognizable by domain analysis
techniques. We outline some preliminary steps we made in that direction.
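To make the relaxation concrete, here is a hedged sketch of relaxed reachability, the fixpoint underlying these heuristics (h^+ itself, the optimal relaxed plan length, is NP-hard, and this layer count is only a crude stand-in for the approximations planners actually use):

```python
# Ignoring delete lists, facts only accumulate, so reachability in the relaxed
# task is a simple fixpoint; the number of layers is a rough distance estimate.
def relaxed_reachability(init, goal, actions):
    """actions: iterable of (preconditions, add_effects) pairs of frozensets."""
    facts, layers = set(init), 0
    while not goal <= facts:
        new = {f for pre, add in actions if pre <= facts for f in add}
        if new <= facts:
            return None          # goal unreachable even without delete lists
        facts |= new
        layers += 1
    return layers
```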
|
1109.5714
|
Binary Encodings of Non-binary Constraint Satisfaction Problems:
Algorithms and Experimental Results
|
cs.AI
|
A non-binary Constraint Satisfaction Problem (CSP) can be solved directly
using extended versions of binary techniques. Alternatively, the non-binary
problem can be translated into an equivalent binary one. In this case, it is
generally accepted that the translated problem can be solved by applying
well-established techniques for binary CSPs. In this paper we evaluate the
applicability of the latter approach. We demonstrate that the use of standard
techniques for binary CSPs in the encodings of non-binary problems is
problematic and results in models that are very rarely competitive with the
non-binary representation. To overcome this, we propose specialized arc
consistency and search algorithms for binary encodings, and we evaluate them
theoretically and empirically. We consider three binary representations: the
hidden variable encoding, the dual encoding, and the double encoding.
Theoretical and empirical results show that, for certain classes of non-binary
constraints, binary encodings are a competitive option, and in many cases, a
better one than the non-binary representation.
|
1109.5716
|
Distributed Reasoning in a Peer-to-Peer Setting: Application to the
Semantic Web
|
cs.AI
|
In a peer-to-peer inference system, each peer can reason locally but can also
solicit some of its acquaintances, which are peers sharing part of its
vocabulary. In this paper, we consider peer-to-peer inference systems in which
the local theory of each peer is a set of propositional clauses defined upon a
local vocabulary. An important characteristic of peer-to-peer inference systems
is that the global theory (the union of all peer theories) is not known (as
opposed to partition-based reasoning systems). The main contribution of this
paper is to provide the first consequence finding algorithm in a peer-to-peer
setting: DeCA. It is anytime and computes consequences gradually from the
solicited peer to increasingly distant peers. We exhibit a sufficient
condition on the acquaintance graph of the peer-to-peer inference system for
guaranteeing the completeness of this algorithm. Another important contribution
is to apply this general distributed reasoning setting to the setting of the
Semantic Web through the Somewhere semantic peer-to-peer data management
system. The last contribution of this paper is to provide an experimental
analysis of the scalability of the peer-to-peer infrastructure that we propose,
on large networks of 1000 peers.
|
1109.5717
|
Dynamic Local Search for the Maximum Clique Problem
|
cs.AI
|
In this paper, we introduce DLS-MC, a new stochastic local search algorithm
for the maximum clique problem. DLS-MC alternates between phases of iterative
improvement, during which suitable vertices are added to the current clique,
and plateau search, during which vertices of the current clique are swapped
with vertices not contained in the current clique. The selection of vertices is
solely based on vertex penalties that are dynamically adjusted during the
search, and a perturbation mechanism is used to overcome search stagnation. The
behaviour of DLS-MC is controlled by a single parameter, penalty delay, which
controls the frequency at which vertex penalties are reduced. We show
empirically that DLS-MC achieves substantial performance improvements over
state-of-the-art algorithms for the maximum clique problem over a large range
of the commonly used DIMACS benchmark instances.
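A hedged sketch of the penalty-guided expansion move (the data structures and tie-breaking are illustrative; penalty-decrease scheduling, plateau moves, and perturbation are omitted):

```python
# One iterative-improvement step of a DLS-MC-flavoured search: among vertices
# adjacent to the whole current clique, add one of minimum penalty, then bump
# its penalty so the search diversifies over time.
import random

def expand_step(adj, clique, penalty, rng=random.Random(0)):
    """adj: dict vertex -> set of neighbours; clique: set; penalty: dict."""
    candidates = [v for v in adj if v not in clique
                  and all(u in adj[v] for u in clique)]
    if not candidates:
        return None                  # clique is maximal; plateau/perturb next
    best = min(penalty[v] for v in candidates)
    v = rng.choice([c for c in candidates if penalty[c] == best])
    clique.add(v)
    penalty[v] += 1
    return v
```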
|
1109.5720
|
SLPA: Uncovering Overlapping Communities in Social Networks via A
Speaker-listener Interaction Dynamic Process
|
cs.SI cs.DS physics.soc-ph
|
Overlap is one of the characteristics of social networks, in which a person
may belong to more than one social group. For this reason, discovering
overlapping structures is necessary for realistic social analysis. In this
paper, we present a novel, general framework to detect and analyze both
individual overlapping nodes and entire communities. In this framework, nodes
exchange labels according to dynamic interaction rules. A specific
implementation called Speaker-listener Label Propagation Algorithm (SLPA)
demonstrates an excellent performance in identifying both overlapping nodes and
overlapping communities with different degrees of diversity.
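A hedged sketch of the speaker-listener dynamic (the iteration count and threshold are illustrative; drawing uniformly from the label memory is equivalent to drawing in proportion to label frequency):

```python
# SLPA-style label propagation: every listener collects one label from each
# neighbour (spoken from that neighbour's memory), remembers the most popular
# one, and overlapping communities are read off by thresholding the memories.
import random
from collections import Counter

def slpa(adj, n_iters=20, threshold=0.1, rng=random.Random(0)):
    memory = {v: [v] for v in adj}            # each node starts with its own label
    for _ in range(n_iters):
        for listener in adj:
            heard = [rng.choice(memory[nbr]) for nbr in adj[listener]]
            if heard:
                memory[listener].append(Counter(heard).most_common(1)[0][0])
    return {v: {lab for lab, c in Counter(mem).items() if c / len(mem) >= threshold}
            for v, mem in memory.items()}
```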
|
1109.5730
|
Hypothesize and Bound: A Computational Focus of Attention Mechanism for
Simultaneous 3D Shape Reconstruction, Pose Estimation and Classification from
a Single 2D Image
|
cs.CV cs.CG
|
This article presents a mathematical framework to simultaneously tackle the
problems of 3D reconstruction, pose estimation and object classification, from
a single 2D image. In sharp contrast with state-of-the-art methods that rely
primarily on 2D information and solve each of these three problems separately
or iteratively, we propose a mathematical framework that incorporates prior
"knowledge" about the 3D shapes of different object classes and solves these
problems jointly and simultaneously, using a hypothesize-and-bound (H&B)
algorithm. In the proposed H&B algorithm one hypothesis is defined for each
possible pair [object class, object pose], and the algorithm selects the
hypothesis H that maximizes a function L(H) encoding how well each hypothesis
"explains" the input image. To find this maximum efficiently, the function L(H)
is not evaluated exactly for each hypothesis H, but rather upper and lower
bounds for it are computed at a much lower cost. In order to obtain bounds for
L(H) that are tight yet inexpensive to compute, we extend the theory of shapes
described in [14] to handle projections of shapes. This extension allows us to
define a probabilistic relationship between the prior knowledge given in 3D and
the 2D input image. This relationship is derived from first principles and is
proven to be the only relationship having the properties that we intuitively
expect from a "projection." In addition to the efficiency and optimality
characteristics of H&B algorithms, the proposed framework has the desirable
property of integrating information in the 2D image with information in the 3D
prior to estimate the optimal reconstruction. While this article focuses
primarily on the problem mentioned above, we believe that the theory presented
herein has multiple other potential applications.
|
1109.5732
|
Representing Conversations for Scalable Overhearing
|
cs.AI
|
Open distributed multi-agent systems are gaining interest in the academic
community and in industry. In such open settings, agents are often coordinated
using standardized agent conversation protocols. The representation of such
protocols (for analysis, validation, monitoring, etc.) is an important aspect of
multi-agent applications. Recently, Petri nets have been shown to be an
interesting approach to such representation, and radically different approaches
using Petri nets have been proposed. However, their relative strengths and
weaknesses have not been examined. Moreover, their scalability and suitability
for different tasks have not been addressed. This paper addresses both these
challenges. First, we analyze existing Petri net representations in terms of
their scalability and appropriateness for overhearing, an important task in
monitoring open multi-agent systems. Then, building on the insights gained, we
introduce a novel representation using Colored Petri nets that explicitly
represent legal joint conversation states and messages. This representation
approach offers significant improvements in scalability and is particularly
suitable for overhearing. Furthermore, we show that this new representation
offers a comprehensive coverage of all conversation features of FIPA
conversation standards. We also present a procedure for transforming AUML
conversation protocol diagrams (a standard human-readable representation), to
our Colored Petri net representation.
|
1109.5750
|
Improving Heuristics Through Relaxed Search - An Analysis of TP4 and
HSP*a in the 2004 Planning Competition
|
cs.AI
|
The hm admissible heuristics for (sequential and temporal) regression
planning are defined by a parameterized relaxation of the optimal cost function
in the regression search space, where the parameter m offers a trade-off
between the accuracy and computational cost of the heuristic. Existing methods
for computing the hm heuristic require time exponential in m, limiting them to
small values (m <= 2). The hm heuristic can also be viewed as the optimal
cost function in a relaxation of the search space: this paper presents relaxed
search, a method for computing this function partially by searching in the
relaxed space. The relaxed search method, because it computes hm only
partially, is computationally cheaper and therefore usable for higher values of
m. The (complete) hm heuristic is combined with partial hm heuristics, for m =
3,..., computed by relaxed search, resulting in a more accurate heuristic.
This use of the relaxed search method to improve on the hm heuristic is
evaluated by comparing two optimal temporal planners: TP4, which does not use
it, and HSP*a, which uses it but is otherwise identical to TP4. The comparison
is made on the domains used in the 2004 International Planning Competition, in
which both planners participated. Relaxed search is found to be cost effective
in some of these domains, but not all. Analysis reveals a characterization of
the domains in which relaxed search can be expected to be cost effective, in
terms of two measures on the original and relaxed search spaces. In the domains
where relaxed search is cost effective, expanding small states is
computationally cheaper than expanding large states and small states tend to
have small successor states.
|
1109.5779
|
The Degrees of Freedom Region of the MIMO Interference Channel with
Shannon Feedback
|
cs.IT math.IT
|
The two-user multiple-input multiple-output (MIMO) fast-fading interference
channel (IC) with an arbitrary number of antennas at each of the four terminals
is studied under the settings of Shannon feedback, limited Shannon feedback,
and output feedback, wherein all or certain channel matrices and outputs, or
just the channel outputs, respectively, are available to the transmitters with
a finite delay. For most numbers of antennas at the four terminals, the DoF
regions with Shannon feedback and with the limited Shannon feedback settings
considered here are shown to be identical, and equal to the DoF region with
just delayed channel state information at the transmitters (delayed CSIT);
however, this is not always the case. For a specific class of MIMO ICs
characterized by a
certain relationship between the numbers of antennas at the four nodes, the DoF
regions with Shannon and the limited Shannon feedback settings, while again
being identical, are strictly bigger than the DoF region with just delayed
CSIT. To realize these DoF gains with Shannon or limited Shannon feedback, a
new retrospective interference alignment scheme is developed wherein
transmitter cooperation made possible by output feedback in addition to delayed
CSIT is employed to effect a more efficient form of interference alignment than
is feasible with previously known schemes that use just delayed CSIT. The DoF
region for just output feedback, in which each transmitter has delayed
knowledge of only the receivers' outputs, is also obtained for all but a class
of MIMO ICs that satisfy one of two inequalities involving the numbers of
antennas.
|
1109.5790
|
The Degrees of Freedom of the 2-Hop, 2-User Interference Channel with
Feedback
|
cs.IT math.IT
|
The layered two-hop, two-flow interference network is considered that
consists of two sources, two relays and two destinations with the first hop
network between the sources and the relays and the second hop network between
relays and destinations both being i.i.d. Rayleigh fading Gaussian interference
channels. Two feedback models are studied. In the first one, called the delayed
channel state information at the sources (delayed CSI-S) model, the sources
know all channel coefficients with a finite delay but the relays have no side
information whatsoever. In the second feedback model, referred to as the
limited Shannon feedback model, the relays know the first hop channel
coefficients instantaneously and the second hop channel with a finite delay;
moreover, one relay knows the received signal of one of the destinations with
a finite delay and the other relay knows the received signal of the other
destination with a finite delay, but there is no side information at the
sources whatsoever. It is
shown in this paper that under both these settings, the layered two-hop,
two-flow interference channel has 4/3 degrees of freedom. The result is
obtained by developing a broadcast-channel-type upper-bound and new
achievability schemes based on the ideas of retrospective interference
alignment and retro-cooperative interference alignment, respectively.
|
1109.5796
|
Genetic Testing for Complex Diseases: a Simulation Study Perspective
|
stat.AP cs.CE q-bio.GN q-bio.QM
|
It is widely recognized nowadays that complex diseases are caused by, amongst
others, multiple genetic factors. The recent advent of the genome-wide
association study (GWA) has triggered a wave of research aimed at discovering
genetic factors underlying common complex diseases. While the number of
reported susceptibility variants is increasing steadily, the application of
such findings to disease prognosis for the general population is still
unclear, and there are doubts about whether the size of the contribution of
such factors is significant. In this respect, some recent simulation-based
studies have shed more light on the prospects of genetic tests. In this report,
we discuss several aspects of simulation-based studies: their parameters, their
assumptions, and the information they provide.
|
1109.5798
|
Object-oriented semantics of English in natural language understanding
system
|
cs.CL
|
A new approach to the problem of natural language understanding is proposed.
The knowledge domain under consideration is the social behavior of people.
English sentences are translated into a set of predicates of a semantic database,
which describe persons, occupations, organizations, projects, actions, events,
messages, machines, things, animals, location and time of actions, relations
between objects, thoughts, cause-and-effect relations, abstract objects. There
is a knowledge base containing the description of semantics of objects
(functions and structure), actions (motives and causes), and operations.
|
1109.5827
|
Security and complexity of the McEliece cryptosystem based on QC-LDPC
codes
|
cs.CR cs.IT math.IT
|
In the context of public key cryptography, the McEliece cryptosystem
represents a very smart solution based on the hardness of the decoding problem,
which is believed to be able to resist the advent of quantum computers. Despite
this, the original McEliece cryptosystem, based on Goppa codes, has encountered
limited interest in practical applications, partly because of some constraints
imposed by this very special class of codes. We have recently introduced a
variant of the McEliece cryptosystem employing low-density parity-check (LDPC)
codes, which are state-of-the-art codes now used in many telecommunication
standards and applications. In this paper, we discuss the possible use of a
bit-flipping decoder in this context, which gives a significant advantage in
terms of complexity. We also provide theoretical arguments and practical tools
for estimating the trade-off between security and complexity, in such a way as
to give a simple procedure for the system design.
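For readers unfamiliar with bit-flipping decoding, a minimal Gallager-style
sketch in Python conveys why its complexity is low; this is a textbook
variant, not the specific decoder analyzed in the paper.

    import numpy as np

    def bit_flip_decode(H, y, max_iter=50):
        # H: (m, n) 0/1 parity-check matrix; y: length-n 0/1 received word
        x = y.copy()
        for _ in range(max_iter):
            syndrome = H.dot(x) % 2
            if not syndrome.any():
                return x, True          # all parity checks satisfied
            unsat = H.T.dot(syndrome)   # unsatisfied checks touching each bit
            x = (x + (unsat == unsat.max())) % 2  # flip the most-suspect bits
        return x, False

Each iteration costs only sparse matrix-vector products, which is what makes
this class of decoders attractive for the decryption step.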
|
1109.5894
|
Learning Item Trees for Probabilistic Modelling of Implicit Feedback
|
cs.LG stat.ML
|
User preferences for items can be inferred from either explicit feedback,
such as item ratings, or implicit feedback, such as rental histories. Research
in collaborative filtering has concentrated on explicit feedback, resulting in
the development of accurate and scalable models. However, since explicit
feedback is often difficult to collect, it is important to develop effective
models that take advantage of the more widely available implicit feedback. We
introduce a probabilistic approach to collaborative filtering with implicit
feedback based on modelling the user's item selection process. In the interests
of scalability, we restrict our attention to tree-structured distributions over
items and develop a principled and efficient algorithm for learning item trees
from data. We also identify a problem with a widely used protocol for
evaluating implicit feedback models and propose a way of addressing it using a
small quantity of explicit feedback data.
|
1109.5920
|
Models and Strategies for Variants of the Job Shop Scheduling Problem
|
cs.AI
|
Recently, a variety of constraint programming and Boolean satisfiability
approaches to scheduling problems have been introduced. They have in common the
use of relatively simple propagation mechanisms and an adaptive way to focus on
the most constrained part of the problem. In some cases, these methods compare
favorably to more classical constraint programming methods relying on
propagation algorithms for global unary or cumulative resource constraints and
dedicated search heuristics. In particular, we described an approach that
combines restarting with a generic adaptive heuristic and solution-guided
branching on a simple model based on a decomposition of disjunctive
constraints. In this paper, we introduce an adaptation of this technique for an
important subclass of job shop scheduling problems (JSPs), where the objective
function involves minimization of earliness/tardiness costs. We further show
that our technique can be improved by adding domain specific information for
one variant of the JSP (involving time lag constraints). In particular we
introduce a dedicated greedy heuristic, and an improved model for the case
where the maximal time lag is 0 (also referred to as no-wait JSPs).
|
1109.5938
|
Thresholding-based reconstruction of compressed correlated signals
|
cs.NI cs.IT math.IT
|
We consider the problem of recovering a set of correlated signals (e.g.,
images from different viewpoints) from a few linear measurements per signal. We
assume that each sensor in a network acquires a compressed signal in the form
of linear measurements and sends it to a joint decoder for reconstruction. We
propose a novel joint reconstruction algorithm that exploits correlation among
underlying signals. Our correlation model considers geometrical transformations
between the supports of the different signals. The proposed joint decoder
estimates the correlation and reconstructs the signals using a simple
thresholding algorithm. We give both theoretical and experimental evidence to
show that our method largely outperforms independent decoding in terms of
support recovery and reconstruction quality.
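A minimal sketch helps fix ideas; the version below pools correlation
evidence across signals to form a common support, standing in for the paper's
correlation estimation step (the geometric-transformation model itself is
omitted, so this illustrates the thresholding decoder, not the full
algorithm).

    import numpy as np

    def joint_threshold_recover(A, Y, k):
        # A: (m, n) measurement matrix; Y: (m, L) stacked measurements of
        # L correlated signals; k: assumed common sparsity level
        pooled = np.sum(np.abs(A.T @ Y), axis=1)  # evidence pooled over signals
        support = np.argsort(pooled)[-k:]         # thresholding step
        X = np.zeros((A.shape[1], Y.shape[1]))
        X[support, :] = np.linalg.lstsq(A[:, support], Y, rcond=None)[0]
        return X

Pooling the evidence is what lets a joint decoder beat independent decoding
when the signals share (transformed versions of) the same support.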
|
1109.5951
|
An Approximation of the Universal Intelligence Measure
|
cs.AI
|
The Universal Intelligence Measure is a recently proposed formal definition
of intelligence. It is mathematically specified, extremely general, and
captures the essence of many informal definitions of intelligence. It is based
on Hutter's Universal Artificial Intelligence theory, an extension of Ray
Solomonoff's pioneering work on universal induction. Since the Universal
Intelligence Measure is only asymptotically computable, building a practical
intelligence test from it is not straightforward. This paper studies the
practical issues involved in developing a real-world UIM-based performance
metric. Based on our investigation, we develop a prototype implementation which
we use to evaluate a number of different artificial agents.
|
1109.5966
|
Minimum settling time control design through direct search optimization
|
math.OC cs.SY
|
The aim of this work is to design controllers through explicit minimization
of the settling time of a closed-loop response, by using a class of methods
adequate for this objective. To the best of our knowledge, all the methods
available in the literature do not minimize directly the settling time but only
related objective functions. Indeed, the settling time objective function is
not only non-smooth but also discontinuous. Therefore we propose to use direct
search methods, which do not use any gradient information. An important reason
is a recent result showing that some direct search methods are guaranteed to
converge on such discontinuous objective functions. The proposed approach is
self-standing but can also improve the solutions obtained with the alternatives
of the literature, which lead to good solutions but suboptimal in terms of the
settling time. Note also that this approach is very flexible and can be adapted
to a broad range of objectives as well as nonlinear systems or controllers, as
long as the time response can be simulated.
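As an illustration, the sketch below minimizes the settling time of a
simulated closed loop with a derivative-free compass (pattern) search; the
first-order plant, the PI controller, and the 2% settling band are
illustrative assumptions, not the setups studied in the paper.

    import numpy as np

    def settling_time(theta, t_end=20.0, dt=0.01, band=0.02):
        # unit-step settling time of a PI loop on the toy plant x' = -x + u,
        # simulated by forward Euler
        kp, ki = theta
        x = integ = 0.0
        n = int(t_end / dt)
        y = np.empty(n)
        for k in range(n):
            e = 1.0 - x
            integ += ki * e * dt
            x += (-x + kp * e + integ) * dt
            y[k] = x
        outside = np.flatnonzero(np.abs(y - 1.0) > band)
        if outside.size == n:
            return t_end                 # never settles within the horizon
        return 0.0 if outside.size == 0 else (outside[-1] + 1) * dt

    def compass_search(f, x0, step=0.5, min_step=1e-3):
        # derivative-free pattern search: usable on this discontinuous,
        # non-smooth objective where gradient methods are not
        x = np.asarray(x0, dtype=float)
        fx = f(x)
        while step > min_step:
            improved = False
            for d in np.vstack([np.eye(x.size), -np.eye(x.size)]):
                cand = x + step * d
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
            if not improved:
                step /= 2                # contract when no axis move helps
        return x, fx

    gains, ts = compass_search(settling_time, [1.0, 1.0])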
|
1109.5993
|
Optimally sparse approximations of 3D functions by compactly supported
shearlet frames
|
math.FA cs.IT cs.NA math.IT
|
We study efficient and reliable methods of capturing and sparsely
representing anisotropic structures in 3D data. As a model class for
multidimensional data with anisotropic features, we introduce generalized
three-dimensional cartoon-like images. This function class will have two
smoothness parameters: one parameter \beta controlling classical smoothness and
one parameter \alpha controlling anisotropic smoothness. The class then
consists of piecewise C^\beta-smooth functions with discontinuities on a
piecewise C^\alpha-smooth surface. We introduce a pyramid-adapted, hybrid
shearlet system for the three-dimensional setting and construct frames for
L^2(R^3) with this particular shearlet structure. For the smoothness range
1 < \alpha \leq \beta \leq 2, we show that pyramid-adapted shearlet systems provide a
nearly optimally sparse approximation rate within the generalized cartoon-like
image model class measured by means of non-linear N-term approximations.
|
1109.6018
|
User-level sentiment analysis incorporating social networks
|
cs.CL cs.IR physics.data-an physics.soc-ph
|
We show that information about social relationships can be used to improve
user-level sentiment analysis. The main motivation behind our approach is that
users that are somehow "connected" may be more likely to hold similar opinions;
therefore, relationship information can complement what we can extract about a
user's viewpoints from their utterances. Employing Twitter as a source for our
experimental data, and working within a semi-supervised framework, we propose
models that are induced either from the Twitter follower/followee network or
from the network in Twitter formed by users referring to each other using "@"
mentions. Our transductive learning results reveal that incorporating
social-network information can indeed lead to statistically significant
sentiment-classification improvements over the performance of an approach based
on Support Vector Machines having access only to textual features.
|
1109.6029
|
An Improved Search Algorithm for Optimal Multiple-Sequence Alignment
|
cs.AI
|
Multiple sequence alignment (MSA) is a ubiquitous problem in computational
biology. Although it is NP-hard to find an optimal solution for an arbitrary
number of sequences, due to the importance of this problem researchers are
trying to push the limits of exact algorithms further. Since MSA can be cast as
a classical path finding problem, it is attracting a growing number of AI
researchers interested in heuristic search algorithms as a challenge with
actual practical relevance. In this paper, we first review two previous,
complementary lines of research. Based on Hirschberg's algorithm, Dynamic
Programming needs O(kN^(k-1)) space to store both the search frontier and the
nodes needed to reconstruct the solution path, for k sequences of length N.
Best first search, on the other hand, has the advantage of bounding the search
space that has to be explored using a heuristic. However, it is necessary to
maintain all explored nodes up to the final solution in order to prevent the
search from re-expanding them at higher cost. Earlier approaches to reduce the
Closed list are either incompatible with pruning methods for the Open list, or
must retain at least the boundary of the Closed list. In this article, we
present an algorithm that attempts to combine the respective advantages; like
A* it uses a heuristic for pruning the search space, but reduces both the
maximum Open and Closed size to O(kN^(k-1)), as in Dynamic Programming. The
underlying idea is to conduct a series of searches with successively increasing
upper bounds, but using the DP ordering as the key for the Open priority queue.
With a suitable choice of thresholds, in practice, a running time below four
times that of A* can be expected. In our experiments we show that our algorithm
outperforms one of the currently most successful algorithms for optimal
multiple sequence alignments, Partial Expansion A*, both in time and memory.
Moreover, we apply a refined heuristic based on optimal alignments not only of
pairs of sequences, but of larger subsets. This idea is not new; however, to
make it practically relevant we show that it is equally important to bound the
heuristic computation appropriately, or the overhead can obliterate any
possible gain. Furthermore, we discuss a number of improvements in time and
space efficiency with regard to practical implementations. Our algorithm, used
in conjunction with higher-dimensional heuristics, is able to calculate for the
first time the optimal alignment for almost all of the problems in Reference 1
of the benchmark database BAliBASE.
|
1109.6030
|
Probabilistic Hybrid Action Models for Predicting Concurrent
Percept-driven Robot Behavior
|
cs.AI
|
This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic
causal model for predicting the behavior generated by modern percept-driven
robot plans. PHAMs represent aspects of robot behavior that cannot be
represented by most action models used in AI planning: the temporal structure
of continuous control processes, their non-deterministic effects, several modes
of their interferences, and the achievement of triggering conditions in
closed-loop robot plans.
The main contributions of this article are: (1) PHAMs, a model of concurrent
percept-driven behavior, its formalization, and proofs that the model generates
probably, qualitatively accurate predictions; and (2) a resource-efficient
inference method for PHAMs based on sampling projections from probabilistic
action models and state descriptions. We show how PHAMs can be applied to
planning the course of action of an autonomous robot office courier based on
analytical and experimental results.
|
1109.6033
|
Generative Prior Knowledge for Discriminative Classification
|
cs.AI
|
We present a novel framework for integrating prior knowledge into
discriminative classifiers. Our framework allows discriminative classifiers
such as Support Vector Machines (SVMs) to utilize prior knowledge specified in
the generative setting. The dual objective of fitting the data and respecting
prior knowledge is formulated as a bilevel program, which is solved
(approximately) via iterative application of second-order cone programming. To
test our approach, we consider the problem of using WordNet (a semantic
database of English language) to improve low-sample classification accuracy of
newsgroup categorization. WordNet is viewed as an approximate, but readily
available source of background knowledge, and our framework is capable of
utilizing it in a flexible way.
|
1109.6037
|
The Control Theory of Motion-Based Communication: Problems in Teaching
Robots to Dance
|
cs.SY
|
The paper describes results on two components of a research program focused
on motion-based communication mediated by the dynamics of a control system.
Specifically we are interested in how mobile agents engaged in a shared
activity such as dance can use motion as a medium for transmitting certain
types of messages. The first part of the paper adopts the terminology of motion
description languages and deconstructs an elementary form of the well-known
popular dance, Salsa, in terms of four motion primitives (dance steps). Several
notions of dance complexity are introduced. We describe an experiment in which
ten performances by an actual pair of dancers are evaluated by judges and then
compared in terms of proposed complexity metrics. An energy metric is also
defined. Values of this metric are obtained by summing the lengths of motion
segments executed by wheeled robots replicating the movements of the human
dancers in each of the ten dance performances. Of all the metrics that are
considered in this experiment, energy is the most closely correlated with the
human judges' assessments of performance quality.
The second part of the paper poses a general class of dual-objective motion
control problems in which a primary objective (artistic execution of a dance
step or efficient movement toward a specified terminal state) is combined with
a communication objective. Solutions of varying degrees of explicitness can be
given in several classes of problems of communicating through the dynamics of
finite dimensional linear control systems. In this setting it is shown that the
cost of adding a communication component to motions that steer a system between
prescribed pairs of states is independent of those states. At the same time,
the optimal encoding problem itself is shown to be a problem of packing
geometric objects, and it remains open.
|
1109.6046
|
Improving the Usability of Privacy Settings in Facebook
|
cs.CR cs.CY cs.SI
|
The ever increasing popularity of Facebook and other Online Social Networks
has left a wealth of personal and private data on the web, aggregated and
readily accessible for broad and automatic retrieval. Protection from both
undesired recipients as well as harvesting through crawlers is implemented by
simple access control at the provider, configured by manual authorization
through the publishing user. Several studies demonstrate that standard settings
directly cause an unnoticed over-sharing and that the users have trouble
understanding and configuring adequate settings. Using the three simple
principles of color coding, ease of access, and application of common
practices, we developed a new privacy interface that increases the usability
significantly. The results of our user study underline the extent of the
initial problem and document that our interface enables faster, more precise
authorisation and leads to increased intelligibility.
|
1109.6051
|
The Fast Downward Planning System
|
cs.AI
|
Fast Downward is a classical planning system based on heuristic search. It
can deal with general deterministic planning problems encoded in the
propositional fragment of PDDL2.2, including advanced features like ADL
conditions and effects and derived predicates (axioms). Like other well-known
planners such as HSP and FF, Fast Downward is a progression planner, searching
the space of world states of a planning task in the forward direction. However,
unlike other PDDL planning systems, Fast Downward does not use the
propositional PDDL representation of a planning task directly. Instead, the
input is first translated into an alternative representation called
multi-valued planning tasks, which makes many of the implicit constraints of a
propositional planning task explicit. Exploiting this alternative
representation, Fast Downward uses hierarchical decompositions of planning
tasks for computing its heuristic function, called the causal graph heuristic,
which is very different from traditional HSP-like heuristics based on ignoring
negative interactions of operators.
In this article, we give a full account of Fast Downward's approach to solving
multi-valued planning tasks. We extend our earlier discussion of the causal
graph heuristic to tasks involving axioms and conditional effects and present
some novel techniques for search control that are used within Fast Downward's
best-first search algorithm: preferred operators transfer the idea of helpful
actions from local search to global best-first search, deferred evaluation of
heuristic functions mitigates the negative effect of large branching factors on
search performance, and multi-heuristic best-first search combines several
heuristic evaluation functions within a single search algorithm in an
orthogonal way. We also describe efficient data structures for fast state
expansion (successor generators and axiom evaluators) and present a new
non-heuristic search algorithm called focused iterative-broadening search,
which utilizes the information encoded in causal graphs in a novel way.
Fast Downward has proven remarkably successful: It won the "classical" (i.e.,
propositional, non-optimising) track of the 4th International Planning
Competition at ICAPS 2004, following in the footsteps of planners such as FF
and LPG. Our experiments show that it also performs very well on the benchmarks
of the earlier planning competitions and provide some insights about the
usefulness of the new search enhancements.
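Among the search enhancements, deferred heuristic evaluation is easy to
convey in code: a successor is queued under its parent's heuristic value and
is evaluated itself only if it is ever expanded. The generic best-first loop
below illustrates that idea only; it is not Fast Downward's actual
implementation.

    import heapq
    from itertools import count

    def deferred_best_first(start, successors, heuristic, is_goal):
        # states must be hashable; successors(s) yields (action, next_state)
        tie = count()                        # FIFO tie-breaking
        frontier = [(heuristic(start), next(tie), start, [])]
        seen = set()
        while frontier:
            _, _, state, plan = heapq.heappop(frontier)
            if state in seen:
                continue
            seen.add(state)
            if is_goal(state):
                return plan
            h = heuristic(state)             # evaluated only upon expansion
            for action, nxt in successors(state):
                if nxt not in seen:
                    heapq.heappush(frontier, (h, next(tie), nxt, plan + [action]))
        return None

States that are generated but never expanded never pay for a heuristic
computation, which is what mitigates large branching factors.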
|
1109.6052
|
Asynchronous Partial Overlay: A New Algorithm for Solving Distributed
Constraint Satisfaction Problems
|
cs.AI
|
Distributed Constraint Satisfaction (DCSP) has long been considered an
important problem in multi-agent systems research. This is because many
real-world problems can be represented as constraint satisfaction and these
problems often present themselves in a distributed form. In this article, we
present a new complete, distributed algorithm called Asynchronous Partial
Overlay (APO) for solving DCSPs that is based on a cooperative mediation
process. The primary ideas behind this algorithm are that agents, when acting
as a mediator, centralize small, relevant portions of the DCSP, that these
centralized subproblems overlap, and that agents increase the size of their
subproblems along critical paths within the DCSP as the problem solving
unfolds. We present empirical evidence that shows that APO outperforms other
known, complete DCSP techniques.
|
1109.6101
|
Channel Quantization for Physical Layer Network-Coded Two-Way Relaying
|
cs.IT math.IT
|
The design of modulation schemes for the physical layer network-coded two way
relaying scenario is considered with the protocol which employs two phases:
Multiple access (MA) Phase and Broadcast (BC) phase. It was observed by
Koike-Akino et al. that adaptively changing the network coding map used at the
relay according to the channel conditions greatly reduces the impact of
multiple access interference which occurs at the relay during the MA phase. In
other words, the set of all possible channel realizations (the complex plane)
is quantized into a finite number of regions, with a specific network coding
map giving the best performance in a particular region. We highlight the issues
associated with the scheme proposed by Koike-Akino et al. and propose a scheme
which solves these issues. We obtain a quantization of the set of all possible
channel realizations analytically for the case when $M$-PSK (for $M$ any power
of 2) is the signal set used during the MA phase. It is shown that the complex
plane can be classified into two regions: a region in which any network coding
map which satisfies the so called exclusive law gives the same best performance
and a region in which the choice of the network coding map affects the
performance, which is further quantized based on the choice of the network
coding map which optimizes the performance. The quantization thus obtained
analytically coincides, when specialized to $M=4$, with the one obtained by
Koike-Akino et al. using computer search for the 4-PSK signal set.
|
1109.6112
|
A Visual Entity-Relationship Model for Constraint-Based University
Timetabling
|
cs.AI cs.PL
|
University timetabling (UTT) is a complex problem due to its combinatorial
nature but also the type of constraints involved. The holy grail of
(constraint) programming: "the user states the problem, the program solves it"
remains a challenge since solution quality is tightly coupled with deriving
"effective models", best handled by technology experts. In this paper, focusing
on the field of university timetabling, we introduce a visual graphic
communication tool that lets the user specify her problem in an abstract
manner, using a visual entity-relationship model. The entities are nodes of
mainly two types: resource nodes (lecturers, assistants, student groups) and
event nodes (lectures, lab sessions, tutorials). The links between the nodes
signify a desired relationship between them. The visual modeling abstraction
focuses on the nature of the entities and their relationships and abstracts
from an actual constraint model.
|
1109.6126
|
The Statistical Coherence-based Theory of Robust Recovery of Sparsest
Overcomplete Representation
|
cs.IT math.IT
|
The recovery of the sparsest overcomplete representation has recently
attracted intensive research activity owing to its important potential in
many applied fields such as signal processing, medical imaging, and
communication. The problem can be stated as follows: seek the sparse
coefficient vector x of a given noisy observation y over a redundant
dictionary D such that y = Dx + e, where e is the corrupting error. Elad et
al. derived a worst-case result, which gives a condition for stable recovery
of the sparsest overcomplete representation over D in terms of the mutual
coherence of D. Although this condition is easy to evaluate for any given
matrix, it cannot provide a realistic guide in many cases. On the other hand,
most popular analyses of sparse reconstruction rely heavily on the so-called
RIP (Restricted Isometry Property) for matrices developed by Candes et al.,
which is usually very difficult or impossible to verify for a given
measurement matrix. In this article, we introduce a simple and efficient way
of determining the ability of a given dictionary D to recover sparse signals,
based on a statistical analysis of the coherence coefficients mu_ij, the
inner products between any two different columns of the measurement matrix D.
The key mechanism behind the proposed paradigm is the analysis of the
statistical distribution (the mean and covariance) of mu_ij. We prove that if
the means of the mu_ij are zero, and their covariances are as small as
possible, one can faithfully recover approximately sparse signals from a
minimal number of noisy measurements with overwhelming probability. The
resulting theory is not only suitable for almost all models - e.g., Gaussian
and frequency measurements - discussed in the literature of compressed
sampling, but also provides a framework for new measurement strategies as
well.
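In code, the statistics in question are straightforward to compute; the
normalization convention below is an assumption made for illustration.

    import numpy as np

    def coherence_stats(D):
        # empirical mean, variance and worst case of the coherence
        # coefficients between distinct unit-normalized columns of D
        Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
        G = Dn.T @ Dn
        off = G[~np.eye(G.shape[0], dtype=bool)]
        return off.mean(), off.var(), np.abs(off).max()

For instance, an i.i.d. Gaussian matrix with m rows yields a mean near zero
and a variance on the order of 1/m, which is the favorable regime described
above, whereas the worst-case maximum used in coherence-based conditions can
be far more pessimistic.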
|
1109.6202
|
On Variable Density Compressive Sampling
|
cs.IT math.IT
|
We advocate an optimization procedure for variable density sampling in the
context of compressed sensing. In this perspective, we introduce a minimization
problem for the coherence between the sparsity and sensing bases, whose
solution provides an optimized sampling profile. This minimization problem is
solved with the use of convex optimization algorithms. We also propose a
refinement of our technique when prior information is available on the signal
support in the sparsity basis. The effectiveness of the method is confirmed by
numerical experiments. Our results also provide a theoretical underpinning to
state-of-the-art variable density Fourier sampling procedures used in magnetic
resonance imaging.
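The flavor of a coherence-driven profile can be conveyed with the heuristic
sketch below, in which rows of the sensing basis are drawn with probability
proportional to their squared coherence with the sparsity basis; this stands
in for, and is not equivalent to, the convex optimization procedure advocated
above.

    import numpy as np

    def variable_density_rows(Phi, Psi, m, seed=0):
        # Phi: (N, N) sensing basis (rows = candidate measurements)
        # Psi: (N, N) sparsity basis (columns = atoms); both orthonormal
        rng = np.random.default_rng(seed)
        mu = np.max(np.abs(Phi @ Psi), axis=1)  # per-row coherence profile
        p = mu**2 / np.sum(mu**2)               # heuristic sampling density
        return rng.choice(Phi.shape[0], size=m, replace=False, p=p)

With Phi the Fourier basis and Psi a wavelet basis, this concentrates samples
at low frequencies, qualitatively matching the variable density Fourier
sampling used in magnetic resonance imaging.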
|
1109.6206
|
A Framework for Prefetching Relevant Web Pages using Predictive
Prefetching Engine (PPE)
|
cs.IR
|
This paper presents a framework for increasing the relevancy of the web pages
retrieved by the search engine. The approach introduces a Predictive
Prefetching Engine (PPE) which makes use of various data mining algorithms on
the log maintained by the search engine. The underlying premise of the approach
is that in the case of cluster accesses, the next pages requested by users of
the Web server are typically based on the current and previous pages requested.
Based on this premise, rules are drawn which then guide the prefetching of the
desired pages. To carry out the task of prefetching the more relevant
pages, agents have been introduced.
|
1109.6222
|
Robust Sparse Analysis Regularization
|
cs.IT math.IT
|
This paper investigates the theoretical guarantees of L1-analysis
regularization when solving linear inverse problems. Most previous works in
the literature have mainly focused on the sparse synthesis prior where the
sparsity is measured as the L1 norm of the coefficients that synthesize the
signal from a given dictionary. In contrast, the more general analysis
regularization minimizes the L1 norm of the correlations between the signal and
the atoms in the dictionary, where these correlations define the analysis
support. The corresponding variational problem encompasses several well-known
regularizations such as the discrete total variation and the Fused Lasso. Our
main contributions consist in deriving sufficient conditions that guarantee
exact or partial analysis support recovery of the true signal in presence of
noise. More precisely, we give a sufficient condition to ensure that a signal
is the unique solution of the L1-analysis regularization in the noiseless case.
The same condition also guarantees exact analysis support recovery and
L2-robustness of the L1-analysis minimizer vis-a-vis a sufficiently small noise
in the measurements. This condition turns out to be sharp for the robustness of the
analysis support. To show partial support recovery and L2-robustness to an
arbitrary bounded noise, we introduce a stronger sufficient condition. When
specialized to the L1-synthesis regularization, our results recover some
corresponding recovery and robustness guarantees previously known in the
literature. From this perspective, our work is a generalization of these
results. We finally illustrate these theoretical findings on several examples
to study the robustness of the 1-D total variation and Fused Lasso
regularizations.
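In standard notation, with y = Phi x0 + w the noisy measurements and D the
dictionary whose correlations with the signal define the analysis support,
the variational problem discussed above reads

    \min_{x \in \mathbb{R}^N} \; \frac{1}{2}\,\| y - \Phi x \|_2^2
        \;+\; \lambda\, \| D^{*} x \|_1 ,

where choosing D^* as a finite-difference operator recovers the discrete
total variation, while the synthesis prior corresponds to setting x = D\alpha
and placing the L1 norm on the coefficients \alpha instead.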
|
1109.6263
|
The Pollution Effect: Optimizing Keyword Auctions by Favoring Relevant
Advertising
|
cs.GT cs.CY cs.IR
|
Most search engines sell slots to place advertisements on the search results
page through keyword auctions. Advertisers offer bids for how much they are
willing to pay when someone enters a search query, sees the search results, and
then clicks on one of their ads. Search engines typically order the
advertisements for a query by a combination of the bids and expected
clickthrough rates for each advertisement. In this paper, we extend a model of
Yahoo's and Google's advertising auctions to include an effect where repeatedly
showing less relevant ads has a persistent impact on all advertising on the
search engine, an impact we designate as the pollution effect. In Monte-Carlo
simulations using distributions fitted to Yahoo data, we show that a modest
pollution effect is sufficient to dramatically change the advertising rank
order that yields the optimal advertising revenue for a search engine. In
addition, if a pollution effect exists, it is possible to maximize revenue
while also increasing advertiser and publisher utility. Our results suggest
that search engines could benefit from making relevant advertisements less
expensive and irrelevant advertisements more costly for advertisers than is the
current practice.
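The mechanism can be conveyed with a toy Monte-Carlo loop; the single ad
slot, the multiplicative decay model, and all parameter values below are
illustrative assumptions, not the distributions fitted to Yahoo data in the
paper.

    import numpy as np

    def simulate_revenue(bids, ctrs, relevance, pollution=0.02,
                         queries=10000, seed=1):
        # one slot; the ad is chosen once by the conventional bid x CTR rule
        rng = np.random.default_rng(seed)
        top = int(np.argmax(np.asarray(bids) * np.asarray(ctrs)))
        health = 1.0                       # engine-wide CTR multiplier
        revenue = 0.0
        for _ in range(queries):
            if rng.random() < health * ctrs[top]:
                revenue += bids[top]       # pay-per-click revenue
            if relevance[top] < 0.5:
                health *= 1.0 - pollution  # irrelevant impressions pollute
        return revenue

Comparing this revenue against a run in which the ranking rule is biased
toward high-relevance ads reproduces, qualitatively, the reordering effect
described above.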
|
1109.6269
|
Precoder Design for Physical Layer Multicasting
|
cs.IT math.IT
|
This paper studies the instantaneous rate maximization and the weighted sum
delay minimization problems over a K-user multicast channel, where multiple
antennas are available at the transmitter as well as at all the receivers.
Motivated by the degree of freedom optimality and the simplicity offered by
linear precoding schemes, we consider the design of linear precoders using the
aforementioned two criteria. We first consider the scenario wherein the linear
precoder can be any complex-valued matrix subject to rank and power
constraints. We propose cyclic alternating ascent based precoder design
algorithms and establish their convergence to respective stationary points.
Simulation results reveal that our proposed algorithms considerably outperform
known competing solutions. We then consider a scenario in which the linear
precoder can be formed by selecting and concatenating precoders from a given
finite codebook of precoding matrices, subject to rank and power constraints.
We show that under this scenario, the instantaneous rate maximization problem
is equivalent to a robust submodular maximization problem which is strongly
NP-hard. We propose a deterministic approximation algorithm and show that it
yields a bicriteria approximation. For the weighted sum delay minimization
problem we propose a simple deterministic greedy algorithm, which at each step
entails approximately maximizing a submodular set function subject to multiple
knapsack constraints, and establish its performance guarantee.
|
1109.6276
|
Lattices for Physical-layer Secrecy: A Computational Perspective
|
cs.IT cs.CR math.IT
|
In this paper, we use the hardness of quantization over general lattices as
the basis of developing a physical layer secrecy system. Assuming that the
channel state observed by the legitimate receiver and the eavesdropper are
distinct, this asymmetry is used to develop a cryptosystem that resembles the
McEliece cryptosystem, designed to be implemented at the physical layer. We
ensure that the legitimate receiver observes a specific lattice over which
decoding is known to be possible in polynomial time, while the eavesdropper
observes a lattice over which decoding will prove to have the complexity of
lattice quantization over a general lattice.
|
1109.6297
|
Low-rank data modeling via the Minimum Description Length principle
|
cs.IT cs.MM math.IT stat.ML
|
Robust low-rank matrix estimation is a topic of increasing interest, with
promising applications in a variety of fields, from computer vision to data
mining and recommender systems. Recent theoretical results establish the
ability of such data models to recover the true underlying low-rank matrix when
a large portion of the measured matrix is either missing or arbitrarily
corrupted. However, if low rank is not a hypothesis about the true nature of
the data, but a device for extracting regularity from it, no current guidelines
exist for choosing the rank of the estimated matrix. In this work we address
this problem by means of the Minimum Description Length (MDL) principle -- a
well established information-theoretic approach to statistical inference -- as
a guideline for selecting a model for the data at hand. We demonstrate the
practical usefulness of our formal approach with results for complex background
extraction in video sequences.
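A crude version of the rank-selection rule can be sketched as follows; the
two-part codelength used here (fixed-precision factors plus a Gaussian code
for the residual) is a simplification of the codes developed in the paper.

    import numpy as np

    def mdl_rank(Y, bits_per_param=16):
        m, n = Y.shape
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        best_k, best_len = 0, np.inf
        for k in range(1, len(s) + 1):
            resid = Y - (U[:, :k] * s[:k]) @ Vt[:k]
            var = max(np.mean(resid**2), 1e-12)
            model_bits = bits_per_param * k * (m + n + 1)   # cost of factors
            data_bits = 0.5 * m * n * np.log2(2 * np.pi * np.e * var)
            if model_bits + data_bits < best_len:
                best_k, best_len = k, model_bits + data_bits
        return best_k

The selected rank is the one at which spending more bits on the factors stops
paying for itself in residual compression, which is the MDL trade-off in its
simplest form.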
|
1109.6299
|
Sensitivity Analysis for Declarative Relational Query Languages with
Ordinal Ranks
|
cs.DB
|
We present sensitivity analysis for results of query executions in a
relational model of data extended by ordinal ranks. The underlying model of
data results from the ordinary Codd model of data in which we consider
ordinal ranks of tuples in data tables expressing degrees to which tuples match
queries. In this setting, we show that ranks assigned to tuples are insensitive
to small changes, i.e., small changes in the input data do not yield large
changes in the results of queries.
|
1109.6303
|
Reduced-Dimension Multiuser Detection
|
cs.IT math.IT
|
We present a reduced-dimension multiuser detector (RD-MUD) structure for
synchronous systems that significantly decreases the number of required
correlation branches at the receiver front-end, while still achieving
performance similar to that of the conventional matched-filter (MF) bank.
RD-MUD exploits the fact that, in some wireless systems, the number of active
users may be small relative to the total number of users in the system. Hence,
the ideas of analog compressed sensing may be used to reduce the number of
correlators. The correlating signals used by each correlator are chosen as an
appropriate linear combination of the users' spreading waveforms. We derive the
probability-of-symbol-error when using two methods for recovery of active users
and their transmitted symbols: the reduced-dimension decorrelating (RDD)
detector, which combines subspace projection and thresholding to determine
active users and sign detection for data recovery, and the reduced-dimension
decision-feedback (RDDF) detector, which combines decision-feedback matching
pursuit for active user detection and sign detection for data recovery. We
derive probability of error bounds for both detectors, and show that the number
of correlators needed to achieve a small probability-of-symbol-error is on the
order of the logarithm of the number of users in the system. The theoretical
performance results are validated via numerical simulations.
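The decision chain of the RDD detector can be sketched compactly; the BPSK
symbols and the given M x N matrix B mapping user symbols to correlator
outputs are simplifying assumptions that gloss over the subspace-projection
details.

    import numpy as np

    def rdd_detect(B, y, k):
        # y: M correlator outputs, M << N users; at most k users active
        stats = B.T @ y                           # per-user decision statistics
        active = np.argsort(np.abs(stats))[-k:]   # thresholding step
        symbols = np.sign(stats[active])          # sign detection (BPSK)
        return sorted(active.tolist()), symbols

The RDDF detector replaces the one-shot thresholding with an iterative,
decision-feedback selection of active users.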
|
1109.6310
|
The Dispersion of Joint Source-Channel Coding
|
cs.IT math.IT
|
In this work we investigate the behavior of the distortion threshold that can
be guaranteed in joint source-channel coding, to within a prescribed
excess-distortion probability. We show that the gap between this threshold and
the optimal average distortion is governed by a constant that we call the joint
source-channel dispersion. This constant can be easily computed, since it is
the sum of the source and channel dispersions, previously derived. The
resulting performance is shown to be better than that of any separation-based
scheme. For the proof, we use unequal error protection channel coding, thus we
also evaluate the dispersion of that setting.
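Schematically, in standard notation with one source sample per channel use,
capacity C, rate-distortion function R(d), and source and channel dispersions
V_S and V_C, the result says that the excess-distortion probability over n
channel uses behaves as

    \varepsilon \;\approx\; Q\!\left( \frac{ n\,( C - R(d) ) }
        { \sqrt{ n \,( V_S + V_C ) } } \right),

so the joint source-channel dispersion is the sum V_S + V_C; the precise
statement and its regularity conditions are those given in the paper.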
|
1109.6340
|
Negotiating Socially Optimal Allocations of Resources
|
cs.MA
|
A multiagent system may be thought of as an artificial society of autonomous
software agents and we can apply concepts borrowed from welfare economics and
social choice theory to assess the social welfare of such an agent society. In
this paper, we study an abstract negotiation framework where agents can agree
on multilateral deals to exchange bundles of indivisible resources. We then
analyse how these deals affect social welfare for different instances of the
basic framework and different interpretations of the concept of social welfare
itself. In particular, we show how certain classes of deals are both sufficient
and necessary to guarantee that a socially optimal allocation of resources will
be reached eventually.
|
1109.6341
|
Domain Adaptation for Statistical Classifiers
|
cs.LG cs.CL
|
The most basic assumption used in statistical learning theory is that
training data and test data are drawn from the same underlying distribution.
Unfortunately, in many applications, the "in-domain" test data is drawn from a
distribution that is related, but not identical, to the "out-of-domain"
distribution of the training data. We consider the common case in which labeled
out-of-domain data is plentiful, but labeled in-domain data is scarce. We
introduce a statistical formulation of this problem in terms of a simple
mixture model and present an instantiation of this framework to maximum entropy
classifiers and their linear chain counterparts. We present efficient inference
algorithms for this special case based on the technique of conditional
expectation maximization. Our experimental results show that our approach leads
to improved performance on three real world tasks on four different data sets
from the natural language processing domain.
|
1109.6344
|
Admissible and Restrained Revision
|
cs.AI
|
As partial justification of their framework for iterated belief revision
Darwiche and Pearl convincingly argued against Boutilier's natural revision and
provided a prototypical revision operator that fits into their scheme. We show
that the Darwiche-Pearl arguments lead naturally to the acceptance of a smaller
class of operators which we refer to as admissible. Admissible revision ensures
that the penultimate input is not ignored completely, thereby eliminating
natural revision, but includes the Darwiche-Pearl operator, Nayak's
lexicographic revision operator, and a newly introduced operator called
restrained revision. We demonstrate that restrained revision is the most
conservative of admissible revision operators, effecting as few changes as
possible, while lexicographic revision is the least conservative, and point out
that restrained revision can also be viewed as a composite operator, consisting
of natural revision preceded by an application of a "backwards revision"
operator previously studied by Papini. Finally, we propose the establishment of
a principled approach for choosing an appropriate revision operator in
different contexts and discuss future work.
|
1109.6345
|
On Graphical Modeling of Preference and Importance
|
cs.AI
|
In recent years, CP-nets have emerged as a useful tool for supporting
preference elicitation, reasoning, and representation. CP-nets capture and
support reasoning with qualitative conditional preference statements,
statements that are relatively natural for users to express. In this paper, we
extend the CP-nets formalism to handle another class of very natural
qualitative statements one often uses in expressing preferences in daily life -
statements of relative importance of attributes. The resulting formalism,
TCP-nets, maintains the spirit of CP-nets, in that it remains focused on using
only simple and natural preference statements, uses the ceteris paribus
semantics, and utilizes a graphical representation of this information to
reason about its consistency and to perform, possibly constrained, optimization
using it. The extra expressiveness it provides allows us to better model
tradeoffs users would like to make, more faithfully representing their
preferences.
|
1109.6346
|
The Planning Spectrum - One, Two, Three, Infinity
|
cs.AI
|
Linear Temporal Logic (LTL) is widely used for defining conditions on the
execution paths of dynamic systems. In the case of dynamic systems that allow
for nondeterministic evolutions, one has to specify, along with an LTL formula
f, which are the paths that are required to satisfy the formula. Two extreme
cases are the universal interpretation A.f, which requires that the formula be
satisfied for all execution paths, and the existential interpretation E.f,
which requires that the formula be satisfied for some execution path.
When LTL is applied to the definition of goals in planning problems on
nondeterministic domains, these two extreme cases are too restrictive. It is
often impossible to develop plans that achieve the goal in all the
nondeterministic evolutions of a system, and it is too weak to require that the
goal is satisfied by some execution.
In this paper we explore alternative interpretations of an LTL formula that
are between these extreme cases. We define a new language that permits an
arbitrary combination of the A and E quantifiers, thus allowing, for instance,
to require that each finite execution can be extended to an execution
satisfying an LTL formula (AE.f), or that there is some finite execution whose
extensions all satisfy an LTL formula (EA.f). We show that only eight of these
combinations of path quantifiers are relevant, corresponding to an alternation
of the quantifiers of length one (A and E), two (AE and EA), three (AEA and
EAE), and infinity ((AE)* and (EA)*). We also present a planning algorithm for
the new language that is based on an automata-theoretic approach, and study its
complexity.
|
1109.6348
|
Fault Tolerant Boolean Satisfiability
|
cs.AI
|
A delta-model is a satisfying assignment of a Boolean formula for which any
small alteration, such as a single bit flip, can be repaired by flips to some
small number of other bits, yielding a new satisfying assignment. These
satisfying assignments represent robust solutions to optimization problems
(e.g., scheduling) where it is possible to recover from unforeseen events
(e.g., a resource becoming unavailable). The concept of delta-models was
introduced by Ginsberg, Parkes and Roy (AAAI 1998), where it was proved that
finding delta-models for general Boolean formulas is NP-complete. In this
paper, we extend that result by studying the complexity of finding delta-models
for classes of Boolean formulas which are known to have polynomial time
satisfiability solvers. In particular, we examine 2-SAT, Horn-SAT, Affine-SAT,
dual-Horn-SAT, 0-valid and 1-valid SAT. We see a wide variation in the
complexity of finding delta-models, e.g., while 2-SAT and Affine-SAT have
polynomial time tests for delta-models, testing whether a Horn-SAT formula has
one is NP-complete.
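A brute-force checker makes the definition concrete; the DIMACS-style clause
encoding and the repair budget below are illustrative, and the check is
exponential, so it is only meant to pin down the property on tiny instances.

    from itertools import combinations

    def satisfies(cnf, assign):
        # cnf: list of clauses, each a list of nonzero ints (negative =
        # negated variable); assign: dict variable -> bool
        return all(any(assign[abs(l)] == (l > 0) for l in c) for c in cnf)

    def is_delta_model(cnf, assign, repair_budget=1):
        if not satisfies(cnf, assign):
            return False
        variables = sorted(assign)
        for v in variables:                        # adversary flips one bit
            broken = dict(assign)
            broken[v] = not broken[v]
            ok = satisfies(cnf, broken)
            others = [u for u in variables if u != v]
            for r in range(1, repair_budget + 1):  # repair with <= budget flips
                if ok:
                    break
                for combo in combinations(others, r):
                    trial = dict(broken)
                    for u in combo:
                        trial[u] = not trial[u]
                    if satisfies(cnf, trial):
                        ok = True
                        break
            if not ok:
                return False
        return True

The polynomial-time tests mentioned for 2-SAT and Affine-SAT exploit the
structure of those clause classes rather than this kind of enumeration.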
|
1109.6361
|
Cognitive Principles in Robust Multimodal Interpretation
|
cs.AI
|
Multimodal conversational interfaces provide a natural means for users to
communicate with computer systems through multiple modalities such as speech
and gesture. To build effective multimodal interfaces, automated interpretation
of user multimodal inputs is important. Inspired by the previous investigation
on cognitive status in multimodal human machine interaction, we have developed
a greedy algorithm for interpreting user referring expressions (i.e.,
multimodal reference resolution). This algorithm incorporates the cognitive
principles of Conversational Implicature and Givenness Hierarchy and applies
constraints from various sources (e.g., temporal, semantic, and contextual) to
resolve references. Our empirical results have shown the advantage of this
algorithm in efficiently resolving a variety of user references. Because of its
simplicity and generality, this approach has the potential to improve the
robustness of multimodal input interpretation.
|
1109.6371
|
Multi-User MIMO with outdated CSI: Training, Feedback and Scheduling
|
cs.IT math.IT
|
Conventional MU-MIMO techniques, e.g. Linear Zero-Forced Beamforming (LZFB),
require sufficiently accurate channel state information at the transmitter
(CSIT) in order to realize spectral efficient transmission (degree of freedom
gains). In practical settings, however, CSIT accuracy can be limited by a
number of issues including CSI estimation, CSI feedback delay from user
terminals to base stations, and the time/frequency coherence of the channel.
The latter aspects of CSIT-feedback delay and channel-dynamics can lead to
significant challenges in the deployment of efficient MU-MIMO systems. Recently
it has been shown by Maddah-Ali and Tse (MAT) that degree of freedom gains can
be realized by MU-MIMO even when the knowledge of CSIT is completely outdated.
Specifically, outdated CSIT, albeit perfect, is known only after the
transmissions have taken place. This aspect of insensitivity to CSIT-feedback
delay is of particular interest since it allows one to reconsider MU-MIMO
design in dynamic channel conditions. Indeed, as we show, with appropriate
scheduling, and even in the context of CSI estimation and feedback errors, the
proposed MAT scheme can have performance advantages over conventional MU-MIMO
in such scenarios.
|
1109.6390
|
Performance of Orthogonal Matching Pursuit for Multiple Measurement
Vectors
|
cs.IT math.IT
|
In this paper, we consider orthogonal matching pursuit (OMP) algorithm for
multiple measurement vectors (MMV) problem. The robustness of OMP-MMV is studied
under general perturbations---when the measurement vectors as well as the
sensing matrix are incorporated with additive noise. The main result shows that
although exact recovery of the sparse solutions is unrealistic in the noisy
scenario, recovery of the support set of the solutions is guaranteed under
suitable conditions. Specifically, a sufficient condition is derived that
guarantees exact recovery of the sparse solutions in the noiseless scenario.
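A standard formulation of OMP for the MMV setting is sketched below; the
stopping rule and the perturbation handling analyzed in the paper are
omitted, and selection by row-wise correlation energy is the convention
assumed here.

    import numpy as np

    def omp_mmv(A, Y, k):
        # A: (m, n) sensing matrix; Y: (m, L) measurement vectors sharing a
        # common k-sparse support
        n = A.shape[1]
        support, R = [], Y.copy()
        for _ in range(k):
            scores = np.linalg.norm(A.T @ R, axis=1)  # energy across vectors
            scores[support] = -np.inf                 # never re-pick a column
            support.append(int(np.argmax(scores)))
            X_s = np.linalg.lstsq(A[:, support], Y, rcond=None)[0]
            R = Y - A[:, support] @ X_s               # joint residual update
        X = np.zeros((n, Y.shape[1]))
        X[support] = X_s
        return X, sorted(support)

Support recovery succeeds exactly when the k selected indices match the true
common support, which is the event the sufficient conditions above control.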
|
1109.6391
|
Distributed Algorithms for Consensus and Coordination in the Presence of
Packet-Dropping Communication Links - Part I: Statistical Moments Analysis
Approach
|
cs.SY math.OC
|
This two-part paper discusses robustification methodologies for
linear-iterative distributed algorithms for consensus and coordination problems
in multicomponent systems, in which unreliable communication links may drop
packets. We consider a setup where communication links between components can
be asymmetric (i.e., component j might be able to send information to component
i, but not necessarily vice-versa), so that the information exchange between
components in the system is in general described by a directed graph that is
assumed to be strongly connected. In the absence of communication link
failures, each component i maintains two auxiliary variables and updates each
of their values to be a linear combination of their corresponding previous
values and the corresponding previous values of neighboring components (i.e.,
components that send information to node i). By appropriately initializing
these two (decoupled) iterations, the system components can asymptotically
calculate variables of interest in a distributed fashion; in particular, the
average of the initial conditions can be calculated as a function that involves
the ratio of these two auxiliary variables. The focus of this paper is to
robustify this double-iteration algorithm against communication link failures.
We achieve this by modifying the double-iteration algorithm (by introducing
some additional auxiliary variables) and prove that the modified
double-iteration converges almost surely to average consensus. In the first
part of the paper, we study the first and second moments of the two iterations,
and use them to establish convergence, and illustrate the performance of the
algorithm with several numerical examples. In the second part, in order to
establish the convergence of the algorithm, we use coefficients of ergodicity
commonly used in analyzing inhomogeneous Markov chains.
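In the failure-free case the two decoupled iterations can be stated in a few
lines; this is a sketch of the baseline algorithm only, while the robustified
variant with the additional auxiliary variables is the contribution of the
paper.

    import numpy as np

    def ratio_consensus(P, x0, iters=200):
        # P: column-stochastic matrix with positive diagonal, conforming to
        # the strongly connected directed communication graph;
        # x0: initial values held by the nodes
        y = np.asarray(x0, dtype=float).copy()  # value iteration
        z = np.ones_like(y)                     # weight iteration
        for _ in range(iters):
            y = P @ y                           # both iterations use the
            z = P @ z                           # same linear updates
        return y / z                            # node-wise ratio -> average

Under these conditions the ratio y/z at every node converges to the average
of the initial values, which is the function of the two auxiliary variables
referred to above.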
|
1109.6392
|
Distributed Algorithms for Consensus and Coordination in the Presence of
Packet-Dropping Communication Links - Part II: Coefficients of Ergodicity
Analysis Approach
|
cs.SY math.OC
|
In this two-part paper, we consider multicomponent systems in which each
component can iteratively exchange information with other components in its
neighborhood in order to compute, in a distributed fashion, the average of the
components' initial values or some other quantity of interest (i.e., some
function of these initial values). In particular, we study an iterative
algorithm for computing the average of the initial values of the nodes. In this
algorithm, each component maintains two sets of variables that are updated via
two identical linear iterations. The average of the initial values of the nodes
can be asymptotically computed by each node as the ratio of two of the
variables it maintains. In the first part of this paper, we show how the update
rules for the two sets of variables can be enhanced so that the algorithm
becomes tolerant to communication links that may drop packets, independently
among them and independently between different transmission times. In this
second part, by rewriting the collective dynamics of both iterations, we show
that the resulting system is mathematically equivalent to a finite inhomogeneous
Markov chain whose transition matrix takes one of finitely many values at each
step. Then, by using a coefficients of ergodicity approach, a method commonly
used for convergence analysis of Markov chains, we prove convergence of the
robustified consensus scheme. The analysis suggests that similar convergence
should hold under more general conditions as well.
|
1109.6401
|
An Interpretation of Belief Functions by means of a Probabilistic
Multi-modal Logic
|
cs.LO cs.AI math.LO
|
While belief functions may be seen formally as a generalization of
probabilistic distributions, the question of the interactions between belief
functions and probability is still an issue in practice. This question is
difficult, since the contexts of use of these theory are notably different and
the semantics behind these theories are not exactly the same. A prominent issue
is increasingly regarded by the community, that is the management of the
conflicting information. Recent works have introduced new rules for handling
the conflict redistribution while combining belief functions. The notion of
conflict, or its cancellation by an hypothesis of open world, seems by itself
to prevent a direct interpretation of belief function in a probabilistic
framework. This paper addresses the question of a probabilistic interpretation
of belief functions. It first introduces and implements a theoretically
grounded rule, which is in essence an adaptive conjunctive rule. It is shown
how this rule is derived from a logical interpretation of the belief functions
by means of a probabilistic multimodal logic; in addition, a concept of source
independence is introduced, based on a principle of entropy maximization.
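
For context, the following sketch implements the classical unnormalized
conjunctive rule for two mass functions, exposing the conflict mass assigned
to the empty set; the paper's adaptive rule redistributes this conflict
differently, so the example (with an illustrative two-element frame) is
background rather than the proposed rule.

```python
# Unnormalized conjunctive combination of two mass functions over a small
# frame; the mass landing on the empty set measures the conflict that the
# adaptive rules discussed above must redistribute.
from itertools import product

def conjunctive(m1, m2):
    """m1, m2: dicts mapping frozenset focal elements to masses summing to 1."""
    out = {}
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        c = a & b                       # intersection of focal elements
        out[c] = out.get(c, 0.0) + wa * wb
    return out

m1 = {frozenset({'rain'}): 0.7, frozenset({'rain', 'sun'}): 0.3}
m2 = {frozenset({'sun'}): 0.6, frozenset({'rain', 'sun'}): 0.4}
combined = conjunctive(m1, m2)
print(combined.get(frozenset(), 0.0))   # conflict mass: 0.7 * 0.6 = 0.42
```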
|
1109.6402
|
Extension of Boolean algebra by a Bayesian operator; application to the
definition of a Deterministic Bayesian Logic
|
math.LO cs.AI cs.LO
|
This work contributes to the domains of Boolean algebra and of Bayesian
probability, by proposing an algebraic extension of Boolean algebras, which
implements an operator for the Bayesian conditional inference and is closed
under this operator. It has been known since the work of Lewis (Lewis'
triviality) that it is not possible to construct such a conditional operator
within the space
of events. Nevertheless, this work proposes an answer which complements Lewis'
triviality, by the construction of a conditional operator outside the space of
events, thus resulting in an algebraic extension. In particular, it is proved
that any probability defined on a Boolean algebra may be extended to its
algebraic extension in compliance with the multiplicative definition of the
conditional probability. In the last part of this paper, a new bivalent logic
is introduced on the basis of this algebraic extension, and basic properties
are derived.
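
A hedged restatement of the compliance requirement, in our own notation, may
help: the extension adjoins conditional elements (B | A) outside the original
event space, and the extended probability must agree with the multiplicative
definition of conditioning on the original events.

```latex
% Notation ours: \bar{P} denotes the probability extended to the
% algebraic extension containing the conditional elements (B | A).
\[
  \bar{P}\bigl((B \mid A)\bigr)\, P(A) \;=\; P(A \wedge B),
  \qquad\text{so that}\qquad
  \bar{P}\bigl((B \mid A)\bigr) \;=\; \frac{P(A \wedge B)}{P(A)}
  \quad (P(A) > 0).
\]
% Lewis' triviality forbids this when (B | A) is required to live inside
% the original Boolean algebra of events, but permits it in a proper
% algebraic extension, which is the construction of this paper.
```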
|
1109.6437
|
An Error Probability Approach to MIMO Wiretap Channels
|
cs.IT math.IT
|
We consider MIMO (Multiple Input Multiple Output) wiretap channels, where a
legitimate transmitter Alice is communicating with a legitimate receiver Bob in
the presence of an eavesdropper Eve, and communication is done via MIMO
channels. We suppose that Alice's strategy is to use a codebook with a lattice
structure, which then allows her to perform coset encoding. We analyze Eve's
probability of correctly decoding the message Alice intended for Bob, and
from minimizing this probability, we derive a code design criterion for MIMO
lattice wiretap codes. The case of block fading channels is treated similarly,
and fast fading channels are derived as a particular case. The Alamouti code is
carefully studied as an illustration of the analysis provided.
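
As a toy illustration of coset encoding (the general idea only; the paper
works with MIMO lattice codes such as Alamouti-based constructions), the
sketch below uses the nested one-dimensional lattices qZ inside Z: the message
labels a coset, and the transmitter sends a random point of that coset.

```python
# One-dimensional coset encoding over the nested lattices q*Z within Z;
# an illustrative stand-in for the MIMO lattice codes of the paper.
import random

q = 4                     # index of the coarse lattice q*Z in the fine lattice Z

def coset_encode(message, span=3):
    """Message in {0,...,q-1} picks the coset message + q*Z; the transmitter
    sends a random point of that coset (the randomness confuses Eve)."""
    return message + q * random.randint(-span, span)

def coset_decode(point):
    return point % q      # Bob recovers the coset label, i.e. the message

msg = 2
x = coset_encode(msg)
assert coset_decode(x) == msg
print(msg, x, coset_decode(x))
```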
|
1109.6440
|
Extropy: Complementary Dual of Entropy
|
cs.IT math.IT math.PR math.ST physics.data-an stat.TH
|
This article provides a completion to theories of information based on
entropy, resolving a longstanding question in its axiomatization as proposed by
Shannon and pursued by Jaynes. We show that Shannon's entropy function has a
complementary dual function which we call "extropy." The entropy and the
extropy of a binary distribution are identical. However, the measure bifurcates
into a pair of distinct measures for any quantity that is not merely an event
indicator. As with entropy, the maximum extropy distribution is also the
uniform distribution, and both measures are invariant with respect to
permutations of their mass functions. However, they behave quite differently in
their assessments of the refinement of a distribution, the axiom which
concerned Shannon and Jaynes. Their duality is specified via the relationship
among the entropies and extropies of coarse and fine partitions. We also
analyze the extropy function for densities, showing that relative extropy
constitutes a dual to the Kullback-Leibler divergence, widely recognized as the
continuous entropy measure. These results are unified within the general
structure of Bregman divergences. In this context they identify half the $L_2$
metric as the extropic dual to the entropic directed distance. We describe a
statistical application to the scoring of sequential forecast distributions
which provoked the discovery.
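
Assuming the standard definition J(p) = -sum_i (1 - p_i) log(1 - p_i), the
following sketch numerically checks two of the stated properties: entropy and
extropy coincide on binary distributions, and both are maximized by the
uniform distribution.

```python
# Numerical check of two properties stated above, under the assumed
# definition J(p) = -sum_i (1 - p_i) * log(1 - p_i) of extropy.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                        # drop zero-mass terms (0 log 0 = 0)
    return -np.sum(p * np.log(p))

def extropy(p):
    q = 1.0 - np.asarray(p, dtype=float)
    q = q[q > 0]
    return -np.sum(q * np.log(q))

binary = [0.3, 0.7]
print(np.isclose(entropy(binary), extropy(binary)))   # True: identical
tern, uni = [0.2, 0.3, 0.5], [1/3, 1/3, 1/3]
print(extropy(tern) < extropy(uni))                   # True: uniform maximizes
print(entropy(tern) < entropy(uni))                   # True: uniform maximizes
```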
|
1109.6441
|
Memetic Algorithms: Parametrization and Balancing Local and Global
Search
|
cs.NE
|
This is a preprint of a book chapter from the Handbook of Memetic Algorithms,
Studies in Computational Intelligence, Vol. 379, ISBN 978-3-642-23246-6,
Springer, edited by F. Neri, C. Cotta, and P. Moscato. It is devoted to the
parametrization of memetic algorithms and how to find a good balance between
global and local search.
|
1109.6442
|
ABHIVYAKTI: A Vision Based Intelligent System for Elder and Sick Persons
|
cs.CV
|
This paper describes an intelligent system, ABHIVYAKTI, which is pervasive in
nature and based on computer vision, and which is easy to use and deploy.
Elderly and sick people who are unable to talk or walk depend on other human
beings and need continuous monitoring; our system gives the sick or elderly
person the flexibility to announce his or her need to a caretaker who is not
nearby by simply showing a particular gesture to the system. The system uses
fingertip detection techniques to acquire gestures, and Artificial Neural
Networks (ANNs) are used for gesture recognition.
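
A purely illustrative sketch of such a pipeline is given below; the
fingertip-based features, the nearest-centroid stand-in for the ANN, and the
need labels are all hypothetical and not taken from the described system.

```python
# Hypothetical gesture-to-need pipeline: fingertip-derived features are
# mapped to a need label; a nearest-centroid rule stands in for the ANN.
import numpy as np

NEEDS = {1: "water", 2: "food", 3: "medicine", 4: "call caretaker"}

def classify(feature_vec, centroids):
    """Nearest-centroid stand-in for the ANN gesture recognizer."""
    d = np.linalg.norm(centroids - feature_vec, axis=1)
    return int(np.argmin(d)) + 1

# One centroid per gesture class; first feature ~ detected fingertip count.
centroids = np.array([[1.0, 0.2], [2.0, 0.4], [3.0, 0.6], [4.0, 0.8]])
print(NEEDS[classify(np.array([2.1, 0.35]), centroids)])  # -> "food"
```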
|
1109.6505
|
The Reliability Value of Storage in a Volatile Environment
|
math.OC cs.SY
|
This paper examines the value of storage in securing reliability of a system
with uncertain supply and demand, and supply friction. The storage is
frictionless as a supply source, but once used, it cannot be filled up
instantaneously. The focus application is a power supply network in which the
base supply and demand are assumed to match perfectly, while deviations from
the base are modeled as random shocks with stochastic arrivals. Due to
friction, the random surge shocks cannot be tracked by the main supply sources.
Storage, when available, can be used to compensate, fully or partially, for the
surge in demand or loss of supply. The problem of optimal utilization of
storage with the objective of maximizing system reliability is formulated as
minimization of the expected discounted cost of blackouts over an infinite
horizon. It is shown that when the stage cost is linear in the size of the
blackout, the optimal policy is myopic in the sense that all shocks are
compensated by storage up to the available level of storage. However, when the
stage cost is strictly convex, it may be optimal to curtail some of the demand
and allow a small current blackout in the interest of maintaining a higher
level of reserve to avoid a large blackout in the future. We examine the value
of storage capacity in improving the system's reliability, as well as the
effects of the associated optimal policies under different stage costs on the
probability distribution of blackouts.
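
The following minimal simulation sketch illustrates the myopic policy under a
linear stage cost, the case in which it is shown to be optimal: every shock is
compensated from storage up to the available level, and any unmet remainder is
a blackout. The refill model and all parameters are illustrative assumptions.

```python
# Monte Carlo sketch of the myopic storage policy against random surge
# shocks; stage cost c(b) is linear by default (the myopic-optimal case).
import random

def simulate(T=10_000, capacity=5.0, refill=0.5, shock_p=0.3,
             shock_max=3.0, cost=lambda b: b, seed=0):
    rng = random.Random(seed)
    level, total_cost = capacity, 0.0
    for _ in range(T):
        level = min(capacity, level + refill)      # frictional refill
        if rng.random() < shock_p:                 # random surge shock
            shock = rng.uniform(0.0, shock_max)
            discharge = min(shock, level)          # myopic: compensate fully
            level -= discharge
            total_cost += cost(shock - discharge)  # blackout = unmet shock
    return total_cost / T

print(simulate())                       # avg blackout cost, linear stage cost
print(simulate(cost=lambda b: b ** 2))  # strictly convex: myopic may be suboptimal
```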
|
1109.6510
|
Exact Performance Analysis of Partial Relay Selection Based on Shadowing
Side Information over Generalized Composite Fading Channels
|
cs.IT math.IT math.PR math.ST stat.TH
|
Relay technology has recently gained great interest in millimeter wave (60
GHz or above) radio frequencies as a promising transmission technique improving
the quality of service, providing high data rate, and extending the coverage
area without additional transmit power in deeply shadowed fading environments.
The performance of relay-based systems considerably depends on which relay
selection protocols (RSPs) are used. These RSPs typically use channel side
information (CSI). Specifically, the relay terminal (RT) is chosen among
all available RTs by a central entity (CE) which receives all RTs' CSI via
feedback channels. However, at millimeter wave radio frequencies the CSI
varies much faster than it does at frequencies below 6 GHz under the same
mobility conditions, which results in a serious problem: because the feedback
channels have a backhaul/transmission delay, the CSI at the CE is inaccurate
for the RSP. Fortunately, however, the shadowing side information (SSI) varies
very slowly in comparison with the CSI. In this context, we propose in this
paper a partial RSP for dual-hop amplify-and-forward relaying systems which
utilizes only the SSI of the RTs instead of their CSI. Then, for the
performance analysis, we obtain an exact average unified performance (AUP) of
the proposed SSI-based partial RSP for a variety of shadowed fading
environments. In
particular, we offer a generic AUP expression whose special cases include the
average bit error probability (ABEP) analysis for binary modulation schemes,
the ergodic capacity analysis and the moments-generating function (MGF)-based
characterization. The correctness of our new theoretical results is validated
with some selected numerical examples in an extended generalized-K fading
environment.
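
As a minimal illustration of the selection step, the sketch below has the CE
rank relays by slowly varying shadowing gains alone, with illustrative
log-normal shadowing in dB; the performance analysis itself is not reproduced.

```python
# SSI-based partial relay selection: rank relays by shadowing gains only
# (no fast CSI); the shadowing model and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def select_relay(num_relays=8, shadow_std_db=8.0):
    ssi_db = rng.normal(0.0, shadow_std_db, size=num_relays)  # shadowing, dB
    return int(np.argmax(ssi_db)), ssi_db

best, ssi = select_relay()
print(best, np.round(ssi, 1))   # index of the selected RT and all SSI values
```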
|
1109.6541
|
On the Achievable DoF and User Scaling Law of Opportunistic Interference
Alignment in 3-Transmitter MIMO Interference Channels
|
cs.IT math.IT
|
In this paper, we propose opportunistic interference alignment (OIA) schemes
for three-transmitter multiple-input multiple-output (MIMO) interference
channels (ICs). In the proposed OIA, each transmitter has its own user group
and selects a single user who has the most aligned interference signals. The
user dimensions provided by multiple users are exploited to align interfering
signals. Contrary to conventional IA, perfect channel state information of all
channel links is not required at the transmitter, and each user just feeds back
one scalar value to indicate how well the interfering channels are aligned. We
prove that each transmitter can achieve the same degrees of freedom (DoF) as
in the interference-free case via user selection in our system model, in which
the number of receive antennas is twice the number of transmit antennas. Using
the geometric interpretation, we find the required user scaling to obtain an
arbitrary non-zero DoF. Two OIA schemes are proposed and compared with various
user selection schemes in terms of achievable rate/DoF and complexity.
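
To make the feedback scalar concrete, the following sketch uses one plausible
misalignment measure, the chordal distance between the column spaces of the
two interfering channel matrices, and selects the user for whom the
interference is most aligned; the metric and parameters are illustrative, not
necessarily those of the proposed schemes.

```python
# Opportunistic user selection: each user compresses its two interfering
# channel subspaces into one scalar; the transmitter picks the minimizer.
import numpy as np

rng = np.random.default_rng(1)

def misalignment(H1, H2):
    """Chordal distance between the column spaces of H1 and H2 (smaller =
    interference better aligned, hence cheaper to null jointly)."""
    Q1, _ = np.linalg.qr(H1)
    Q2, _ = np.linalg.qr(H2)
    s = np.linalg.svd(Q1.conj().T @ Q2, compute_uv=False)  # cos of angles
    return H1.shape[1] - np.sum(s ** 2)

Nt, Nr, num_users = 2, 4, 20            # Nr = 2 * Nt, as in the system model
scores = [misalignment(rng.standard_normal((Nr, Nt)),
                       rng.standard_normal((Nr, Nt)))
          for _ in range(num_users)]
print(int(np.argmin(scores)), min(scores))   # selected user, its feedback scalar
```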
|