id | title | categories | abstract
|---|---|---|---|
1105.6170
|
How Many Transmit Antennas to Use in a MIMO Interference Channel
|
cs.IT math.IT
|
The problem of finding the optimal number of data streams to transmit in a
multi-user MIMO scenario, where both the transmitters and the receivers are
equipped with multiple antennas, is considered. Without channel state
information at any transmitter, and with a zero-forcing receiver, each user is
shown to maximize its own outage capacity by transmitting a single data
stream, provided there is a sufficient number of users. Transmitting a single
data stream is also shown to be optimal in terms of maximizing the sum of the
outage capacities in the presence of a sufficient number of users.
|
1105.6176
|
Energy-Delay Considerations in Coded Packet Flows
|
cs.IT math.IT
|
We consider a line of terminals which is connected by packet erasure channels
and where random linear network coding is carried out at each node prior to
transmission. In particular, we address an online approach in which each
terminal has local information to be conveyed to the base station at the end of
the line and provide a queueing theoretic analysis of this scenario. First, a
genie-aided scenario is considered, and the average delay and average
transmission energy are analyzed as functions of the link erasure
probabilities and the Poisson arrival rates at each node. We then assume that
nodes cannot send and receive at the same time. The transmitting nodes in the
network send coded data packets before stopping to wait for the receiving
nodes to acknowledge the number of degrees of freedom, if any, that are still
required to decode the information correctly. We analyze this problem for an infinite queue
size at the terminals and show that there is an optimal number of coded data
packets at each node, in terms of average completion time or transmission
energy, to be sent before stopping to listen.
|
1105.6199
|
Transmission Schemes based on Sum Rate Analysis in Distributed Antenna
Systems
|
cs.IT math.IT
|
In this paper, we study single cell multi-user downlink distributed antenna
systems (DAS) where antenna ports are geographically separated in a cell.
First, we derive an expression of the ergodic sum rate for the DAS in the
presence of pathloss. Then, we propose a transmission selection scheme based on
the derived expressions which does not require channel state information at the
transmitter. Utilizing the knowledge of distance information from a user to
each distributed antenna (DA) port, we consider the optimization of pairings of
DA ports and users to maximize the system performance. Based on the ergodic sum
rate expressions, the proposed scheme chooses the best mode maximizing the
ergodic sum rate among mode candidates. In our proposed scheme, the number of
mode candidates is greatly reduced compared to that of ideal mode selection.
In addition, we analyze the signal-to-noise ratio cross-over point for
different modes using the sum rate expressions. Through Monte Carlo
simulations, we show the accuracy of our derivations for the ergodic sum rate.
Moreover, simulation results with the pathloss modeling confirm that the
proposed scheme produces an average sum rate identical to that of ideal mode
selection with a significantly reduced number of candidates.
|
1105.6205
|
Cloud-based Evolutionary Algorithms: An algorithmic study
|
cs.NE cs.DC cs.NI
|
After a proof of concept using Dropbox(tm), a free storage and
synchronization service, showed that an evolutionary algorithm using several
dissimilar computers connected via WiFi or Ethernet had a good scaling behavior
in terms of evaluations per second, it remains to be proved whether that
effect also translates into better algorithmic performance. In this paper we
test several different, and difficult, problems, and examine what effects the
automatic load-balancing and asynchrony have on the speed with which these
problems are solved.
|
1105.6213
|
Using Context to Improve the Evaluation of Information Retrieval Systems
|
cs.IR
|
Evaluation plays a crucial role in the development of information retrieval
tools, since it provides evidence that can improve both the performance of
these tools and the quality of the results they return. However, the classic
evaluation approaches have limitations and shortcomings, especially regarding
the consideration of the user, the measurement of the adequacy between the
query and the returned documents, and the consideration of the
characteristics, specifications and behaviors of the search tool. Therefore,
we believe that exploiting contextual elements could be a very good way to
evaluate search tools. This paper thus presents a new approach that takes the
context into account during the evaluation process at three complementary
levels. The experiments presented at the end of this article show the
applicability of the proposed approach to real search tools. The tests were
performed with the most popular search engines (namely Google, Bing and
Yahoo), selected in particular for their high selectivity. The results
revealed that the ability of these engines to reject dead links, redundant
results and parasite pages depends strongly on how queries are formulated and
on the policies of the sites offering this information regarding how they
present their content. The relevance evaluation of the results provided by
these engines, first using the users' judgments and then using an automatic
method that takes the query context into account, also showed a general
decline in perceived relevance as the number of considered results increases.
|
1105.6224
|
Upper and Lower Bounds on the Minimum Distance of Expander Codes
|
cs.IT math.IT
|
The minimum distance of expander codes over GF(q) is studied. A new upper
bound on the minimum distance of expander codes is derived. The bound is shown
to lie below the Varshamov-Gilbert (VG) bound for q >= 32. Lower bounds on
the minimum distance of some families of expander codes are obtained. A lower
bound on the minimum distance of low-density parity-check (LDPC) codes with a
Reed--Solomon constituent code over GF(q) is obtained. The bound is shown to be
very close to the VG bound and to lie above the upper bound for expander codes.
|
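For context, the VG benchmark that the bounds above are compared against can be evaluated directly. A minimal sketch using the standard q-ary entropy form of the asymptotic VG (Gilbert-Varshamov) rate guarantee; the function name is illustrative, not taken from the paper:

```python
import math

def gv_rate(q: int, delta: float) -> float:
    """Asymptotic Varshamov-Gilbert rate guarantee for q-ary codes:
    R >= 1 - H_q(delta), where H_q is the q-ary entropy function,
    valid for relative distance 0 <= delta <= 1 - 1/q."""
    if delta == 0.0:
        return 1.0
    h_q = (delta * math.log(q - 1, q)
           - delta * math.log(delta, q)
           - (1 - delta) * math.log(1 - delta, q))
    return 1.0 - h_q

# Larger alphabets guarantee a higher rate at the same relative distance.
print(gv_rate(2, 0.5))   # ~0.0: binary codes at relative distance 1/2
print(gv_rate(32, 0.3))  # ~0.53 over GF(32)
```

The abstract's claim is that the new upper bound for expander codes falls below this curve once q >= 32, while the LDPC/Reed-Solomon lower bound stays close to it.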
1105.6245
|
Confidence sets for network structure
|
stat.ME cs.SI physics.soc-ph
|
Latent variable models are frequently used to identify structure in
dichotomous network data, in part because they give rise to a Bernoulli product
likelihood that is both well understood and consistent with the notion of
exchangeable random graphs. In this article we propose conservative confidence
sets that hold with respect to these underlying Bernoulli parameters as a
function of any given partition of network nodes, enabling us to assess
estimates of 'residual' network structure, that is, structure that cannot be
explained by known covariates and thus cannot be easily verified by manual
inspection. We demonstrate the proposed methodology by analyzing student
friendship networks from the National Longitudinal Survey of Adolescent Health
that include race, gender, and school year as covariates. We employ a
stochastic expectation-maximization algorithm to fit a logistic regression
model that includes these explanatory variables as well as a latent stochastic
blockmodel component and additional node-specific effects. Although
maximum-likelihood estimates do not appear consistent in this context, we are
able to evaluate confidence sets as a function of different blockmodel
partitions, which enables us to qualitatively assess the significance of
estimated residual network structure relative to a baseline, which models
covariates but lacks block structure.
|
1105.6265
|
Hierarchical structure in phonographic market
|
q-fin.GN cs.SI physics.soc-ph stat.AP
|
I find a topological arrangement of assets traded in a phonographic market
which has a meaningful economic taxonomy associated with it. I continue using
the Minimal Spanning Tree and the Life-time Of Correlations between assets,
but now outside the stock markets. This is the first attempt to use these
methods on the phonographic market, where we have artists instead of stocks.
The value of an artist is defined by record sales. The graph is obtained
starting from the matrix of correlation coefficients computed between the
world's 30 most popular artists by considering the synchronous time evolution
of the difference of the logarithm of weekly record sales. This method
provides the hierarchical structure of the phonographic market and information
on which music genres are meaningful according to customers.
|
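The pipeline this abstract describes (log-differences of weekly sales, correlation matrix, correlation-to-distance map, minimal spanning tree) can be illustrated with synthetic data. Everything below (artist count, the random-walk sales series, the Prim's-algorithm implementation) is illustrative, not the paper's data or code; d = sqrt(2(1 - rho)) is the standard metric used in MST asset taxonomies:

```python
import math, random

def log_diff(series):
    """Week-over-week difference of the logarithm of sales."""
    return [math.log(b) - math.log(a) for a, b in zip(series, series[1:])]

def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(vx * vy)

def mst_edges(dist):
    """Prim's algorithm on a full distance matrix; returns the tree edges."""
    n = len(dist)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = min(((i, j) for i in in_tree
                    for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Synthetic weekly sales for 5 hypothetical artists (geometric random walks).
random.seed(1)
sales = []
for _ in range(5):
    s, series = 1000.0, []
    for _ in range(30):
        s *= math.exp(random.gauss(0, 0.05))
        series.append(s)
    sales.append(series)

returns = [log_diff(s) for s in sales]
rho = [[corr(a, b) for b in returns] for a in returns]
# Standard correlation-to-distance map: d = sqrt(2(1 - rho)), clipped at 0.
dist = [[math.sqrt(max(0.0, 2 * (1 - r))) for r in row] for row in rho]
tree = mst_edges(dist)
print(len(tree))  # 4 edges connect the 5 artists in a spanning tree
```

The hierarchical taxonomy the abstract refers to is then read off the tree: artists joined by short edges form correlated clusters.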
1105.6277
|
Incremental Top-k List Comparison Approach to Robust Multi-Structure
Model Fitting
|
cs.CV
|
Random hypothesis sampling lies at the core of many popular robust fitting
techniques such as RANSAC. In this paper, we propose a novel hypothesis
sampling scheme based on incremental computation of distances between partial
rankings (top-$k$ lists) derived from residual sorting information. Our method
simultaneously (1) guides the sampling such that hypotheses corresponding to
all true structures can be quickly retrieved and (2) filters the hypotheses
such that only a small but very promising subset remains. This permits the usage
of simple agglomerative clustering on the surviving hypotheses for accurate
model selection. The outcome is a highly efficient multi-structure robust
estimation technique. Experiments on synthetic and real data show the superior
performance of our approach over previous methods.
|
1105.6288
|
Analysis of Overlapped Chunked Codes with Small Chunks over Line
Networks
|
cs.IT math.IT
|
To lower the complexity of network codes over packet line networks with
arbitrary schedules, chunked codes (CC) and overlapped chunked codes (OCC) were
proposed in earlier works. These codes have been previously analyzed for
relatively large chunks. In this paper, we prove that for smaller chunks, CC
and OCC asymptotically approach the capacity with an arbitrarily small but
non-zero constant gap. We also show that, unlike the case of large chunks, the
larger the overlap size, the better the tradeoff between the speed of
convergence and the message or packet error rate. This implies that OCC are
superior to CC for shorter chunks. Simulations consistent with the theoretical
results are also presented, suggesting great potential for the application of
OCC for multimedia transmission over packet networks.
|
1105.6307
|
Crawling Facebook for Social Network Analysis Purposes
|
cs.SI cs.CY physics.soc-ph
|
We describe our work in the collection and analysis of massive data
describing the connections between participants in online social networks.
Alternative approaches to social network data collection are defined and
evaluated in practice, against the popular Facebook Web site. Thanks to our
ad-hoc, privacy-compliant crawlers, two large samples, comprising millions of
connections, have been collected; the data is anonymous and organized as an
undirected graph. We describe a set of tools that we developed to analyze
specific properties of such social-network graphs, including, among others, degree
distribution, centrality measures, scaling laws and distribution of friendship.
|
1105.6314
|
Activity-Based Search for Black-Box Constraint-Programming Solvers
|
cs.AI cs.MS
|
Robust search procedures are a central component in the design of black-box
constraint-programming solvers. This paper proposes activity-based search, the
idea of using the activity of variables during propagation to guide the search.
Activity-based search was compared experimentally to impact-based search and
the WDEG heuristic. Experimental results on a variety of benchmarks show that
activity-based search is more robust than other heuristics and may produce
significant improvements in performance.
|
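The core idea of activity-based search can be sketched as: bump a variable's activity whenever propagation filters its domain, decay the activities of untouched variables, and branch on the most active variable. A hypothetical toy sketch (the class name, the decay value 0.999 and the event interface are assumptions, not the authors' solver code):

```python
GAMMA = 0.999  # decay factor applied to untouched variables (illustrative)

class ActivityHeuristic:
    """Toy activity-based variable-ordering heuristic."""

    def __init__(self, variables):
        self.activity = {v: 0.0 for v in variables}

    def on_propagation(self, filtered_vars):
        """Called after a propagation step: bump variables whose domain
        was filtered, decay all the others."""
        for v in self.activity:
            if v in filtered_vars:
                self.activity[v] += 1.0
            else:
                self.activity[v] *= GAMMA

    def select(self, unbound):
        """Branch on the most active unbound variable."""
        return max(unbound, key=lambda v: self.activity[v])

h = ActivityHeuristic(["x", "y", "z"])
h.on_propagation({"x", "y"})
h.on_propagation({"x"})
print(h.select(["x", "y", "z"]))  # "x": filtered most often, so most active
```

Real solvers additionally break ties and combine activity with domain size, but the bump-and-decay loop above is the essence of the heuristic.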
1105.6326
|
Two Unicast Information Flows over Linear Deterministic Networks
|
cs.IT math.IT
|
We investigate the two unicast flow problem over layered linear deterministic
networks with an arbitrary number of nodes. When the minimum cut value between
each source-destination pair is constrained to be 1, it is obvious that the
triangular rate region {(R_1,R_2):R_1,R_2> 0, R_1+R_2< 1} can be achieved, and
that one cannot achieve beyond the square rate region {(R_1,R_2):R_1,R_2> 0,
R_1< 1,R_2< 1}. Analogous to the work by Wang and Shroff for wired networks, we
provide the necessary and sufficient conditions for the capacity region to be
the triangular region and the necessary and sufficient conditions for it to be
the square region. Moreover, we completely characterize the capacity region and
conclude that there are exactly three more possible capacity regions of this
class of networks, in contrast to the result in wired networks where only the
triangular and square rate regions are possible. Our achievability scheme is
based on linear coding over an extension field with at most four nodes
performing special linear coding operations, namely interference neutralization
and zero forcing, while all other nodes perform random linear coding.
|
1105.6368
|
Message-Passing Estimation from Quantized Samples
|
cs.IT math.IT math.ST stat.TH
|
Estimation of a vector from quantized linear measurements is a common problem
for which simple linear techniques are suboptimal -- sometimes greatly so. This
paper develops generalized approximate message passing (GAMP) algorithms for
minimum mean-squared error estimation of a random vector from quantized linear
measurements, notably allowing the linear expansion to be overcomplete or
undercomplete and the scalar quantization to be regular or non-regular. GAMP is
a recently-developed class of algorithms that uses Gaussian approximations in
belief propagation and allows arbitrary separable input and output channels.
Scalar quantization of measurements is incorporated into the output channel
formalism, leading to the first tractable and effective method for
high-dimensional estimation problems involving non-regular scalar quantization.
Non-regular quantization is empirically demonstrated to greatly improve
rate-distortion performance in some problems with oversampling or with
undersampling combined with a sparsity-inducing prior. Under the assumption of
a Gaussian measurement matrix with i.i.d. entries, the asymptotic error
performance of GAMP can be accurately predicted and tracked through the state
evolution formalism. We additionally use state evolution to design MSE-optimal
scalar quantizers for GAMP signal reconstruction and empirically demonstrate
the superior error performance of the resulting quantizers.
|
1105.6374
|
Universality for the Noisy Slepian-Wolf Problem Via Spatial Coupling
|
cs.IT math.IT
|
We consider a noisy Slepian-Wolf problem where two correlated sources are
separately encoded and transmitted over two independent binary memoryless
symmetric channels. Each channel capacity is assumed to be characterized by a
single parameter which is not known at the transmitter. The receiver has
knowledge of both the source correlation and the channel parameters. We call a
system universal if it retains near-capacity performance without channel
knowledge at the transmitter. Kudekar et al. recently showed that terminated
low-density parity-check (LDPC) convolutional codes (a.k.a. spatially-coupled
LDPC ensembles) can have belief-propagation thresholds that approach their
maximum a-posteriori thresholds. This was proven for binary erasure channels
and shown empirically for binary memoryless symmetric channels. They also
conjectured that the principle of spatial coupling is very general and the
phenomenon of threshold saturation applies to a very broad class of graphical
models. In this work, we derive an area theorem for the joint decoder and
empirically show that threshold saturation occurs for this problem. As a
result, we demonstrate near-universal performance for this problem using the
proposed spatially-coupled coding system. A similar result is also discussed
briefly for the 2-user multiple-access channel.
|
1106.0016
|
A New Position Control Strategy for VTOL UAVs using IMU and GPS
measurements
|
math.OC cs.SY
|
We propose a new position control strategy for VTOL-UAVs using IMU and GPS
measurements. Since there is no sensor that measures the attitude, our approach
does not rely on the knowledge (or reconstruction) of the system orientation as
usually done in the existing literature. Instead, IMU and GPS measurements are
directly incorporated in the control law. An important feature of the proposed
strategy is that the accelerometer is used to measure the apparent
acceleration of the vehicle, as opposed to only measuring the gravity vector,
which would otherwise lead to unexpected performance when the vehicle is
accelerating (i.e. not in a hover configuration). Simulation results are
provided to demonstrate the performance of the proposed position control
strategy in the presence of noise and disturbances.
|
1106.0027
|
On the geometry of wireless network multicast in 2-D
|
cs.IT cs.NI math.IT
|
We provide a geometric solution to the problem of optimal relay positioning
to maximize the multicast rate for low-SNR networks. The networks we consider
consist of a single source, multiple receivers and a single intermediate,
locatable node acting as the relay. We construct the network hypergraph of the
system nodes from the underlying information-theoretic model of the low-SNR
regime, which operates using superposition coding and FDMA in conjunction
(which we call the "achievable hypergraph model"). We make the following
contributions. 1) We show
that the problem of optimal relay positioning maximizing the multicast rate can
be completely decoupled from the flow optimization by noticing and exploiting
geometric properties of multicast flow. 2) All the flow maximizing the
multicast rate is sent over at most two paths, in succession. The relay
position is dependent only on one path (out of the two), irrespective of the
number of receiver nodes in the system. Subsequently, we propose simple and
efficient geometric algorithms to compute the optimal relay position. 3)
Finally, we show that in our model at the optimal relay position, the
difference between the maximized multicast rate and the cut-set bound is
minimum. We solve the problem for all (Ps,Pr) pairs of source and relay
transmit powers and the path loss exponent \alpha greater than 2.
|
1106.0032
|
Multiterminal Source Coding with an Entropy-Based Distortion Measure
|
cs.IT math.IT
|
In this paper, we consider a class of multiterminal source coding problems,
each subject to distortion constraints computed using a specific,
entropy-based, distortion measure. We provide the achievable rate distortion
region for two cases and, in so doing, we demonstrate a relationship between
the lossy multiterminal source coding problems with our specific distortion
measure and (1) the canonical Slepian-Wolf lossless distributed source coding
network, and (2) the Ahlswede-K\"{o}rner-Wyner source coding with side
information problem in which only one of the sources is recovered losslessly.
|
1106.0041
|
Assessing the consistency of community structure in complex networks
|
cs.SI nlin.AO physics.soc-ph
|
In recent years, community structure has emerged as a key component of
complex network analysis. As more data has been collected, researchers have
begun investigating changing community structure across multiple networks.
Several methods exist to analyze changing communities, but most of these are
limited to evolution of a single network over time. In addition, most of the
existing methods are more concerned with change at the community level than at
the level of the individual node. In this paper, we introduce scaled
inclusivity, which is a method to quantify the change in community structure
across networks. Scaled inclusivity evaluates the consistency of the
classification of every node in a network independently. In addition, the
method can be applied cross-sectionally as well as longitudinally. In this
paper, we calculate the scaled inclusivity for a set of simulated networks of
United States cities and a set of real networks consisting of teams that play
in the top division of American college football. We found that scaled
inclusivity yields reasonable results for the consistency of individual nodes
in both sets of networks. We propose that scaled inclusivity may provide a
useful way to quantify the change in a network's community structure.
|
1106.0057
|
Absorbing Set Spectrum Approach for Practical Code Design
|
cs.IT math.IT
|
This paper focuses on controlling the absorbing set spectrum for a class of
regular LDPC codes known as separable, circulant-based (SCB) codes. For a
specified circulant matrix, SCB codes all share a common mother matrix,
examples of which are array-based LDPC codes and many common quasi-cyclic
codes. SCB codes retain the standard properties of quasi-cyclic LDPC codes such
as girth, code structure, and compatibility with efficient decoder
implementations. In this paper, we define a cycle consistency matrix (CCM) for
each absorbing set of interest in an SCB LDPC code. For an absorbing set to be
present in an SCB LDPC code, the associated CCM must not be full column-rank.
Our approach selects rows and columns from the SCB mother matrix to
systematically eliminate dominant absorbing sets by forcing the associated CCMs
to be full column-rank. We use the CCM approach to select rows from the SCB
mother matrix to design SCB codes of column weight 5 that avoid all low-weight
absorbing sets (4, 8), (5, 9), and (6, 8). Simulation results demonstrate that
the newly designed code has a steeper error-floor slope and provides at least
one order of magnitude of improvement in the low error rate region as compared
to an elementary array-based code.
|
1106.0061
|
Error Probability Bounds for Binary Relay Trees with Crummy Sensors
|
cs.IT math.IT
|
We study the detection error probability associated with balanced binary
relay trees, in which sensor nodes fail with some probability. We consider N
identical and independent crummy sensors, represented by leaf nodes of the
tree. The root of the tree represents the fusion center, which makes the final
decision between two hypotheses. Every other node is a relay node, which fuses
at most two binary messages into one binary message and forwards the new
message to its parent node. We derive tight upper and lower bounds for the
total error probability at the fusion center as functions of N and characterize
how fast the total error probability converges to 0 with respect to N. We show
that the convergence of the total error probability is sub-linear, with the
same decay exponent as that in a balanced binary relay tree without sensor
failures. We also show that the total error probability converges to 0, even if
the individual sensors have total error probabilities that converge to 1/2 and
failure probabilities that converge to 1, provided that the convergence
rates are sufficiently slow.
|
1106.0070
|
Modeling and Information Rates for Synchronization Error Channels
|
cs.IT math.IT
|
We propose a new channel model for channels with synchronization errors.
Using this model, we give simple, non-trivial and, in some cases, tight lower
bounds on the capacity for certain synchronization error channels.
|
1106.0075
|
Windowed Decoding of Spatially Coupled Codes
|
cs.IT math.IT
|
Spatially coupled codes have been of interest recently owing to their
superior performance over memoryless binary-input channels. The performance is
good both asymptotically, since the belief propagation thresholds approach
capacity, as well as for finite lengths, since degree-2 variables that result
in high error floors can be completely avoided. However, to realize the
promised good performance, one needs large blocklengths. This in turn implies a
large latency and decoding complexity. For the memoryless binary erasure
channel, we consider the decoding of spatially coupled codes through a windowed
decoder that aims to retain many of the attractive features of belief
propagation, while trying to reduce complexity further. We characterize the
performance of this scheme by defining thresholds on channel erasure rates that
guarantee a target erasure rate. We give analytical lower bounds on these
thresholds and show that the performance approaches that of belief propagation
exponentially fast in the window size. We give numerical results including the
thresholds computed using density evolution and the erasure rate curves for
finite-length spatially coupled codes.
|
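The density-evolution thresholds mentioned at the end of this abstract are, for the binary erasure channel, obtained from a one-dimensional recursion. A minimal sketch for a regular (3,6) ensemble, using the standard (uncoupled, non-windowed) textbook recursion rather than the paper's windowed-decoder analysis:

```python
def de_fixed_point(eps, dv=3, dc=6, iters=10000):
    """Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
    x_{t+1} = eps * (1 - (1 - x_t)**(dc - 1))**(dv - 1),
    where x_t is the erasure probability of a variable-to-check message."""
    x = eps
    for _ in range(iters):
        x = eps * (1 - (1 - x) ** (dc - 1)) ** (dv - 1)
    return x

def bp_threshold(dv=3, dc=6, tol=1e-4):
    """Largest channel erasure rate for which DE converges to zero,
    located by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if de_fixed_point(mid, dv, dc) < 1e-9:
            lo = mid  # decoding succeeds at mid: the threshold is above
        else:
            hi = mid
    return (lo + hi) / 2

print(bp_threshold())  # ~0.4294 for the (3,6)-regular ensemble
```

Spatial coupling and windowed decoding modify this recursion (coupled positions, a sliding window), but the threshold-by-bisection methodology is the same.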
1106.0086
|
Generating Functional Analysis of Iterative Algorithms for Compressed
Sensing
|
cs.IT cond-mat.dis-nn math.IT
|
It has been shown that the approximate message passing algorithm is effective
in reconstruction problems for compressed sensing. To evaluate the dynamics of
such an algorithm, state evolution (SE) has been proposed. If an algorithm can
cancel the correlation between the present messages and their past values, SE
can accurately track its dynamics via a simple one-dimensional map. In this
paper, we focus on the dynamics of algorithms which cannot cancel the
correlation and evaluate them by the generating functional analysis (GFA),
which allows us to study the dynamics in an exact way in the large system
limit.
|
1106.0107
|
Handwritten Character Recognition of South Indian Scripts: A Review
|
cs.CV cs.CL cs.CY
|
Handwritten character recognition has always been a frontier area of research
in the field of pattern recognition and image processing, and there is a large
demand for OCR on handwritten documents. Even though sufficient studies have
been performed on foreign scripts such as Chinese, Japanese and Arabic
characters, only very few works can be traced for handwritten character
recognition of Indian scripts, especially the South Indian scripts. This paper
provides an overview of offline handwritten character recognition in South
Indian scripts, namely Malayalam, Tamil, Kannada and Telugu.
|
1106.0113
|
The BG-simulation for Byzantine Mobile Robots
|
cs.DC cs.RO
|
This paper investigates the task solvability of mobile robot systems subject
to Byzantine faults. We first consider the gathering problem, which requires
all robots to meet in finite time at a non-predefined location. It is known
that the solvability of Byzantine gathering strongly depends on a number of
system attributes, such as synchrony, the number of Byzantine robots,
scheduling strategy, obliviousness, orientation of local coordinate systems and
so on. However, the complete characterization of the attributes making
Byzantine gathering solvable still remains open.
In this paper, we show strong impossibility results of Byzantine gathering.
Namely, we prove that Byzantine gathering is impossible even if we assume one
Byzantine fault, an atomic execution system, the n-bounded centralized
scheduler, non-oblivious robots, instantaneous movements and a common
orientation of local coordinate systems (where n denotes the number of correct
robots). These hypotheses are much weaker than those used in previous work,
inducing a much stronger impossibility result.
At the core of our impossibility result is a reduction from the distributed
consensus problem in asynchronous shared-memory systems. In more detail, we
newly construct a generic reduction scheme based on the distributed
BG-simulation. Interestingly, because of its versatility, we can easily extend
our impossibility result for general pattern formation problems.
|
1106.0117
|
A Nonlinear Approach to Interference Alignment
|
cs.IT math.IT
|
The Cadambe-Jafar (CJ) alignment strategy for the K-user scalar
frequency-selective fading Gaussian channel, with encoding over blocks of 2n+1
random channel coefficients (subcarriers), is considered. The linear
zero-forcing (LZF) strategy is compared with a novel approach based on lattice
alignment and lattice decoding (LD). Although both LZF and LD achieve the same
degrees of freedom, it is shown that LD can achieve very significant
improvements in terms of error rates at practical SNRs with respect to the
conventional LZF proposed in the literature. We also show that these gains are
realized provided that the channel gains are controlled to be near constant,
for example, by means of power control and opportunistic carrier and user
selection strategies. In the presence of relatively small variations in the
normalized channel coefficient amplitudes, the CJ alignment strategy yields
very disappointing results at finite SNRs, and the gain of LD over LZF is
significantly reduced. In light of these results, the practical applicability
of the CJ alignment scheme remains questionable, in particular for Rayleigh
fading channels, where channel inversion power control leads to unbounded
average transmit power.
|
1106.0118
|
1st International Workshop on Distributed Evolutionary Computation in
Informal Environments
|
cs.NE cs.DC cs.ET cs.NI
|
Online conference proceedings for the IWDECIE workshop, taking place in New
Orleans on June 5th, 2011. The workshop focuses on non-conventional
implementations of bioinspired algorithms and their conceptual implications.
|
1106.0171
|
Proposal of Pattern Recognition as a necessary and sufficient Principle
to Cognitive Science
|
cs.AI
|
Despite the prevalence of the Computational Theory of Mind and the
Connectionist Model, the establishment of the key principles of Cognitive
Science remains controversial and inconclusive. This paper proposes the
concept of Pattern Recognition as a Necessary and Sufficient Principle for
general cognitive science modeling, in a very ambitious scientific proposal. A
formal physical definition of the pattern recognition concept is also proposed
to fill several key conceptual gaps in the field.
|
1106.0178
|
Achievable Rates of MIMO Systems with Linear Precoding and Iterative
LMMSE Detection
|
cs.IT math.IT
|
We establish area theorems for iterative detection over coded linear systems
(including multiple-input multiple-output (MIMO) channels,
inter-symbol-interference (ISI) channels, and orthogonal frequency-division
multiplexing (OFDM) systems). We propose a linear precoding technique that
asymptotically ensures the Gaussianness of the messages passed in iterative
detection, as the transmission block length tends to infinity. We show that the
proposed linear precoding scheme with iterative linear minimum mean-square
error (LMMSE) detection is potentially information lossless, under various
assumptions on the availability of channel state information at the transmitter
(CSIT). Numerical results are provided to verify our analysis.
|
1106.0190
|
Evolution of Things
|
cs.NE
|
Evolution is one of the major omnipresent powers in the universe that has
been studied for about two centuries. Recent scientific and technical
developments make it possible to make the transition from passively
understanding to actively mastering evolution. As of today, the only area where
human experimenters can design and manipulate evolutionary processes in full is
that of Evolutionary Computing, where evolutionary processes are carried out in
a digital space, inside computers, in simulation. We argue that in the near
future it will be possible to move evolutionary computing outside such
imaginary spaces and make it physically embodied. In other words, we envision
the "Evolution of Things", rather than just the evolution of code, leading to a
new field of Embodied Artificial Evolution (EAE). The main objective of the
present paper is to offer an umbrella term and vision in order to aid the
development of this high potential research area. To this end, we introduce the
notion of EAE, discuss a few examples and applications, and elaborate on the
expected benefits as well as the grand challenges this developing field will
have to address.
|
1106.0217
|
Using Lotkaian Informetrics for Ranking in Digital Libraries
|
cs.IR cs.DL
|
The purpose of this paper is to propose the use of models, theories and laws
in bibliometrics and scientometrics to enhance information retrieval processes,
especially ranking. A common pattern in many man-made data sets is Lotka's Law
which follows the well-known power-law distributions. These informetric
distributions can be used to give an alternative order to large and scattered
result sets and can be applied as a new ranking mechanism. The
polyrepresentation of information in Digital Library systems is used to enhance
the retrieval quality, to overcome the drawbacks of the typical term-based
ranking approaches and to enable users to explore retrieved document sets from
a different perspective.
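As a rough illustration of the re-ranking idea (not the paper's system), one can fit Lotka's law f(x) = C / x^n to author-productivity data and reorder a scattered result set by the productivity of each document's author. All names (`lotka_fit`, `informetric_rank`) and the data are hypothetical:

```python
import math

def lotka_fit(freqs):
    """Fit Lotka's law f(x) = C / x**n by least squares in log-log space.
    freqs maps productivity x (papers per author) to the number of authors."""
    xs = [math.log(x) for x in freqs]
    ys = [math.log(f) for f in freqs.values()]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    n = -slope                    # Lotka exponent (about 2 for classic data)
    C = math.exp(my + n * mx)     # scale constant
    return C, n

def informetric_rank(results, author_counts):
    """Reorder a scattered result set by descending author productivity."""
    return sorted(results, key=lambda doc: -author_counts.get(doc["author"], 0))

# Hypothetical frequency data following f(x) ~ 1/x^2.
C, n = lotka_fit({1: 1000, 2: 250, 4: 62, 8: 16})
ranked = informetric_rank([{"id": "d1", "author": "A"},
                           {"id": "d2", "author": "B"}],
                          {"A": 1, "B": 8})
```

The fitted exponent n comes out near 2, the classic Lotka value, and the more productive author's document is ranked first.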
|
1106.0218
|
The Good Old Davis-Putnam Procedure Helps Counting Models
|
cs.AI
|
As was shown recently, many important AI problems require counting the number
of models of propositional formulas. The problem of counting models of such
formulas is, according to present knowledge, computationally intractable in the
worst case. Based on the Davis-Putnam procedure, we present an algorithm, CDP,
that computes the exact number of models of a propositional CNF or DNF formula
F. Let m and n be the number of clauses and variables of F, respectively, and
let p denote the probability that a literal l of F occurs in a clause C of F,
then the average running time of CDP is shown to be O(nm^d), where
d=-1/log(1-p). The practical performance of CDP has been estimated in a series
of experiments on a wide variety of CNF formulas.
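A minimal sketch of Davis-Putnam-style exact model counting (the idea behind CDP, not its actual implementation): split on a variable, and credit 2^f models whenever the clause set empties with f variables still unassigned. Clause encoding and names are illustrative:

```python
def count_models(clauses, n_vars):
    """Exact #SAT by Davis-Putnam splitting. Clauses are lists of nonzero
    ints; literal v means variable v is true, -v means it is false."""
    def simplify(cls, lit):
        out = []
        for c in cls:
            if lit in c:
                continue                  # clause satisfied, drop it
            r = [l for l in c if l != -lit]
            if not r:
                return None               # empty clause: contradiction
            out.append(r)
        return out

    def count(cls, free):
        if cls is None:
            return 0
        if not cls:
            return 2 ** free              # remaining variables are free
        v = abs(cls[0][0])                # split on a variable in some clause
        return count(simplify(cls, v), free - 1) + \
               count(simplify(cls, -v), free - 1)

    return count(clauses, n_vars)

models = count_models([[1, 2]], 2)        # x1 or x2: three satisfying assignments
```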
|
1106.0219
|
Identifying Mislabeled Training Data
|
cs.AI
|
This paper presents a new approach to identifying and eliminating mislabeled
training instances for supervised learning. The goal of this approach is to
improve classification accuracies produced by learning algorithms by improving
the quality of the training data. Our approach uses a set of learning
algorithms to create classifiers that serve as noise filters for the training
data. We evaluate single algorithm, majority vote and consensus filters on five
datasets that are prone to labeling errors. Our experiments illustrate that
filtering significantly improves classification accuracy for noise levels up to
30 percent. An analytical and empirical evaluation of the precision of our
approach shows that consensus filters are conservative at throwing away good
data at the expense of retaining bad data and that majority filters are better
at detecting bad data at the expense of throwing away good data. This suggests
that for situations in which there is a paucity of data, consensus filters are
preferable, whereas majority vote filters are preferable for situations with an
abundance of data.
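The majority and consensus filtering rules can be sketched as follows, assuming the committee's predictions on the training set are already available; the function name and data are hypothetical, not taken from the paper:

```python
def filter_mislabeled(labels, committee_preds, scheme="majority"):
    """Flag training instances whose label disagrees with the noise-filter
    committee. 'majority' flags when most classifiers disagree with the label;
    'consensus' only when all do (more conservative about discarding data)."""
    flagged = []
    for i, y in enumerate(labels):
        votes = [p[i] != y for p in committee_preds]   # True = disagreement
        if scheme == "consensus":
            bad = all(votes)
        else:
            bad = sum(votes) > len(votes) / 2
        if bad:
            flagged.append(i)
    return flagged

# Hypothetical labels and predictions from a three-classifier committee.
labels = [0, 1, 0, 1]
preds = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 1, 1]]
```

On this toy data the majority filter flags instances 1 and 3, while the consensus filter flags only instance 1, mirroring the conservative/aggressive trade-off described above.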
|
1106.0220
|
Committee-Based Sample Selection for Probabilistic Classifiers
|
cs.AI
|
In many real-world learning tasks, it is expensive to acquire a sufficient
number of labeled examples for training. This paper investigates methods for
reducing annotation cost by `sample selection'. In this approach, during
training the learning program examines many unlabeled examples and selects for
labeling only those that are most informative at each stage. This avoids
redundantly labeling examples that contribute little new information. Our work
follows on previous research on Query By Committee, extending the
committee-based paradigm to the context of probabilistic classification. We
describe a family of empirical methods for committee-based sample selection in
probabilistic classification models, which evaluate the informativeness of an
example by measuring the degree of disagreement between several model variants.
These variants (the committee) are drawn randomly from a probability
distribution conditioned by the training set labeled so far. The method was
applied to the real-world natural language processing task of stochastic
part-of-speech tagging. We find that all variants of the method achieve a
significant reduction in annotation cost, although their computational
efficiency differs. In particular, the simplest variant, a two member committee
with no parameters to tune, gives excellent results. We also show that sample
selection yields a significant reduction in the size of the model used by the
tagger.
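One common way to quantify committee disagreement is vote entropy; the sketch below is illustrative (names, committee, and measure are assumptions, not necessarily the paper's exact method) and selects the unlabeled examples the committee disagrees on most:

```python
import math

def vote_entropy(committee_labels):
    """Disagreement of a committee on one example: entropy of the empirical
    vote distribution (0 = full agreement)."""
    counts = {}
    for y in committee_labels:
        counts[y] = counts.get(y, 0) + 1
    k = len(committee_labels)
    return -sum(c / k * math.log(c / k, 2) for c in counts.values())

def select_for_labeling(candidates, committee, budget):
    """Pick the `budget` unlabeled examples the committee disagrees on most."""
    scored = [(vote_entropy([m(x) for m in committee]), x) for x in candidates]
    scored.sort(key=lambda t: -t[0])
    return [x for _, x in scored[:budget]]

# Hypothetical committee of three model variants (threshold classifiers).
committee = [lambda x: x > 0, lambda x: x > 1, lambda x: x > 2]
picked = select_for_labeling([-5, 1.5, 10], committee, budget=1)
```

The committee agrees on -5 and 10 but splits on 1.5, so that example is selected for annotation.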
|
1106.0221
|
Evolutionary Algorithms for Reinforcement Learning
|
cs.LG cs.AI cs.NE
|
There are two distinct approaches to solving reinforcement learning problems,
namely, searching in value function space and searching in policy space.
Temporal difference methods and evolutionary algorithms are well-known examples
of these approaches. Kaelbling, Littman and Moore recently provided an
informative survey of temporal difference methods. This article focuses on the
application of evolutionary algorithms to the reinforcement learning problem,
emphasizing alternative policy representations, credit assignment methods, and
problem-specific genetic operators. Strengths and weaknesses of the
evolutionary approach to reinforcement learning are presented, along with a
survey of representative applications.
|
1106.0222
|
Markov Localization for Mobile Robots in Dynamic Environments
|
cs.AI cs.RO
|
Localization, that is the estimation of a robot's location from sensor data,
is a fundamental problem in mobile robotics. This paper presents a version of
Markov localization which provides accurate position estimates and which is
tailored towards dynamic environments. The key idea of Markov localization is
to maintain a probability density over the space of all locations of a robot in
its environment. Our approach represents this space metrically, using a
fine-grained grid to approximate densities. It is able to globally localize the
robot from scratch and to recover from localization failures. It is robust to
approximate models of the environment (such as occupancy grid maps) and noisy
sensors (such as ultrasound sensors). Our approach also includes a filtering
technique which allows a mobile robot to reliably estimate its position even in
densely populated environments in which crowds of people block the robot's
sensors for extended periods of time. The method described here has been
implemented and tested in several real-world applications of mobile robots,
including the deployments of two mobile robots as interactive museum
tour-guides.
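A toy grid-based Markov localization cycle on a circular 1-D corridor (a drastically simplified sketch of the idea, with hypothetical names and a hand-made motion/sensor model):

```python
def markov_localize(belief, motion, p_move, likelihood):
    """One predict/update cycle of grid-based Markov localization on a
    circular 1-D corridor: shift the belief by a noisy motion model, then
    reweight each cell by the sensor likelihood and renormalize."""
    n = len(belief)
    # Predict: the robot moves `motion` cells with probability p_move,
    # otherwise it stays put.
    predicted = [p_move * belief[(i - motion) % n] + (1 - p_move) * belief[i]
                 for i in range(n)]
    # Update: multiply by the per-cell measurement likelihood (Bayes rule).
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    z = sum(posterior)
    return [p / z for p in posterior]

# Global localization from scratch: uniform prior, one informative reading.
belief = markov_localize([0.25] * 4, motion=1, p_move=1.0,
                         likelihood=[0.1, 0.9, 0.1, 0.1])
```

Starting from a uniform prior, a single sensor reading concentrates the probability mass on cell 1; repeating the cycle with consistent readings sharpens the estimate further.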
|
1106.0223
|
Decentralized Markets versus Central Control: A Comparative Study
|
cs.MA cs.AI
|
Multi-Agent Systems (MAS) promise to offer solutions to problems where
established, older paradigms fall short. In order to validate such claims that
are repeatedly made in software agent publications, empirical in-depth studies
of advantages and weaknesses of multi-agent solutions versus conventional ones
in practical applications are needed. Climate control in large buildings is one
application area where multi-agent systems, and market-oriented programming in
particular, have been reported to be very successful, although central control
solutions are still the standard practice. We have therefore constructed and
implemented a variety of market designs for this problem, as well as different
standard control engineering solutions. This article gives a detailed analysis
and comparison, so as to learn about differences between standard versus agent
approaches, and yielding new insights about benefits and limitations of
computational markets. An important outcome is that "local information plus
market communication produces global control".
|
1106.0224
|
Reasoning about Minimal Belief and Negation as Failure
|
cs.AI
|
We investigate the problem of reasoning in the propositional fragment of
MBNF, the logic of minimal belief and negation as failure introduced by
Lifschitz, which can be considered as a unifying framework for several
nonmonotonic formalisms, including default logic, autoepistemic logic,
circumscription, epistemic queries, and logic programming. We characterize the
complexity and provide algorithms for reasoning in propositional MBNF. In
particular, we show that entailment in propositional MBNF lies at the third
level of the polynomial hierarchy, hence it is harder than reasoning in all the
above mentioned propositional formalisms for nonmonotonic reasoning. We also
prove the exact correspondence between negation as failure in MBNF and negative
introspection in Moore's autoepistemic logic.
|
1106.0225
|
Randomized Algorithms for the Loop Cutset Problem
|
cs.AI
|
We show how to find a minimum weight loop cutset in a Bayesian network with
high probability. Finding such a loop cutset is the first step in the method of
conditioning for inference. Our randomized algorithm for finding a loop cutset
outputs a minimum loop cutset after O(c 6^k k n) steps with probability at
least 1 - (1 - 6^{-k})^{c 6^k}, where c > 1 is a constant specified by the user, k is
the minimal size of a minimum weight loop cutset, and n is the number of
vertices. We also show empirically that a variant of this algorithm often finds
a loop cutset that is closer to the minimum weight loop cutset than the ones
found by the best deterministic algorithms known.
|
1106.0229
|
OBDD-based Universal Planning for Synchronized Agents in
Non-Deterministic Domains
|
cs.AI
|
Recently model checking representation and search techniques were shown to be
efficiently applicable to planning, in particular to non-deterministic
planning. Such planning approaches use Ordered Binary Decision Diagrams (OBDDs)
to encode a planning domain as a non-deterministic finite automaton and then
apply fast algorithms from model checking to search for a solution. OBDDs can
effectively scale and can provide universal plans for complex planning domains.
We are particularly interested in addressing the complexities arising in
non-deterministic, multi-agent domains. In this article, we present UMOP, a new
universal OBDD-based planning framework for non-deterministic, multi-agent
domains. We introduce a new planning domain description language, NADL, to
specify non-deterministic, multi-agent domains. The language contributes the
explicit definition of controllable agents and uncontrollable environment
agents. We describe the syntax and semantics of NADL and show how to build an
efficient OBDD-based representation of an NADL description. The UMOP planning
system uses NADL and different OBDD-based universal planning algorithms. It
includes the previously developed strong and strong cyclic planning algorithms.
In addition, we introduce our new optimistic planning algorithm that relaxes
optimality guarantees and generates plausible universal plans in some domains
where neither a strong nor a strong cyclic solution exists. We present empirical results
applying UMOP to domains ranging from deterministic and single-agent with no
environment actions to non-deterministic and multi-agent with complex
environment actions. UMOP is shown to be a rich and efficient planning system.
|
1106.0230
|
Planning Graph as a (Dynamic) CSP: Exploiting EBL, DDB and other CSP
Search Techniques in Graphplan
|
cs.AI
|
This paper reviews the connections between Graphplan's planning-graph and the
dynamic constraint satisfaction problem and motivates the need for adapting CSP
search techniques to the Graphplan algorithm. It then describes how explanation
based learning, dependency directed backtracking, dynamic variable ordering,
forward checking, sticky values and random-restart search strategies can be
adapted to Graphplan. Empirical results are provided to demonstrate that these
augmentations improve Graphplan's performance significantly (up to 1000x
speedups) on several benchmark problems. Special attention is paid to the
explanation-based learning and dependency directed backtracking techniques as
they are empirically found to be most useful in improving the performance of
Graphplan.
|
1106.0233
|
Space Efficiency of Propositional Knowledge Representation Formalisms
|
cs.AI
|
We investigate the space efficiency of a Propositional Knowledge
Representation (PKR) formalism. Intuitively, the space efficiency of a
formalism F in representing a certain piece of knowledge A, is the size of the
shortest formula of F that represents A. In this paper we assume that knowledge
is either a set of propositional interpretations (models) or a set of
propositional formulae (theorems). We provide a formal way of talking about the
relative ability of PKR formalisms to compactly represent a set of models or a
set of theorems. We introduce two new compactness measures, the corresponding
classes, and show that the relative space efficiency of a PKR formalism in
representing models/theorems is directly related to such classes. In
particular, we consider formalisms for nonmonotonic reasoning, such as
circumscription and default logic, as well as belief revision operators and the
stable model semantics for logic programs with negation. One interesting result
is that formalisms with the same time complexity do not necessarily belong to
the same space efficiency class.
|
1106.0234
|
Value-Function Approximations for Partially Observable Markov Decision
Processes
|
cs.AI
|
Partially observable Markov decision processes (POMDPs) provide an elegant
mathematical framework for modeling complex decision and planning problems in
stochastic domains in which states of the system are observable only
indirectly, via a set of imperfect or noisy observations. The modeling
advantage of POMDPs, however, comes at a price -- exact methods for solving
them are computationally very expensive and thus applicable in practice only to
very simple problems. We focus on efficient approximation (heuristic) methods
that attempt to alleviate the computational problem and trade off accuracy for
speed. We have two objectives here. First, we survey various approximation
methods, analyze their properties and relations and provide some new insights
into their differences. Second, we present a number of new approximation
methods and novel refinements of existing techniques. The theoretical results
are supported by experiments on a problem from the agent navigation domain.
|
1106.0235
|
Robust Agent Teams via Socially-Attentive Monitoring
|
cs.MA cs.AI
|
Agents in dynamic multi-agent environments must monitor their peers to
execute individual and group plans. A key open question is how much monitoring
of other agents' states is required to be effective: The Monitoring Selectivity
Problem. We investigate this question in the context of detecting failures in
teams of cooperating agents, via Socially-Attentive Monitoring, which focuses
on monitoring for failures in the social relationships between the agents. We
empirically and analytically explore a family of socially-attentive teamwork
monitoring algorithms in two dynamic, complex, multi-agent domains, under
varying conditions of task distribution and uncertainty. We show that a
centralized scheme using a complex algorithm trades correctness for
completeness and requires monitoring all teammates. In contrast, a simple
distributed teamwork monitoring algorithm results in correct and complete
detection of teamwork failures, despite relying on limited, uncertain
knowledge, and monitoring only key agents in a team. In addition, we report on
the design of a socially-attentive monitoring system and demonstrate its
generality in monitoring several coordination relationships, diagnosing
detected failures, and both on-line and off-line applications.
|
1106.0237
|
On Deducing Conditional Independence from d-Separation in Causal Graphs
with Feedback (Research Note)
|
cs.AI
|
Pearl and Dechter (1996) claimed that the d-separation criterion for
conditional independence in acyclic causal networks also applies to networks of
discrete variables that have feedback cycles, provided that the variables of
the system are uniquely determined by the random disturbances. I show by
example that this is not true in general. Some condition stronger than
uniqueness is needed, such as the existence of a causal dynamics guaranteed to
lead to the unique solution.
|
1106.0238
|
What's in an Attribute? Consequences for the Least Common Subsumer
|
cs.AI
|
Functional relationships between objects, called `attributes', are of
considerable importance in knowledge representation languages, including
Description Logics (DLs). A study of the literature indicates that papers have
made, often implicitly, different assumptions about the nature of attributes:
whether they are always required to have a value, or whether they can be
partial functions. The work presented here is the first explicit study of this
difference for subclasses of the CLASSIC DL, involving the same-as concept
constructor. It is shown that although determining subsumption between concept
descriptions has the same complexity (though requiring different algorithms),
the story is different in the case of determining the least common subsumer
(lcs). For attributes interpreted as partial functions, the lcs exists and can
be computed relatively easily; even in this case our results correct and extend
three previous papers about the lcs of DLs. In the case where attributes must
have a value, the lcs may not exist, and even if it exists it may be of
exponential size. Interestingly, it is possible to decide in polynomial time if
the lcs exists.
|
1106.0239
|
The Complexity of Reasoning with Cardinality Restrictions and Nominals
in Expressive Description Logics
|
cs.AI
|
We study the complexity of the combination of the Description Logics ALCQ and
ALCQI with a terminological formalism based on cardinality restrictions on
concepts. These combinations can naturally be embedded into C^2, the two
variable fragment of predicate logic with counting quantifiers, which yields
decidability in NExpTime. We show that this approach leads to an optimal
solution for ALCQI, as ALCQI with cardinality restrictions has the same
complexity as C^2 (NExpTime-complete). In contrast, we show that for ALCQ, the
problem can be solved in ExpTime. This result is obtained by a reduction of
reasoning with cardinality restrictions to reasoning with the (in general
weaker) terminological formalism of general axioms for ALCQ extended with
nominals. Using the same reduction, we show that, for the extension of ALCQI
with nominals, reasoning with general axioms is a NExpTime-complete problem.
Finally, we sharpen this result and show that pure concept satisfiability for
ALCQI with nominals is NExpTime-complete. Without nominals, this problem is
known to be PSpace-complete.
|
1106.0240
|
Backbone Fragility and the Local Search Cost Peak
|
cs.AI
|
The local search algorithm WSat is one of the most successful algorithms for
solving the satisfiability (SAT) problem. It is notably effective at solving
hard Random 3-SAT instances near the so-called `satisfiability threshold', but
still shows a peak in search cost near the threshold and large variations in
cost over different instances. We make a number of significant contributions to
the analysis of WSat on high-cost random instances, using the
recently-introduced concept of the backbone of a SAT instance. The backbone is
the set of literals which are entailed by an instance. We find that the number
of solutions predicts the cost well for small-backbone instances but is much
less relevant for the large-backbone instances which appear near the threshold
and dominate in the overconstrained region. We show a very strong correlation
between search cost and the Hamming distance to the nearest solution early in
WSat's search. This pattern leads us to introduce a measure of the backbone
fragility of an instance, which indicates how persistent the backbone is as
clauses are removed. We propose that high-cost random instances for local
search are those with very large backbones which are also backbone-fragile. We
suggest that the decay in cost beyond the satisfiability threshold is due to
increasing backbone robustness (the opposite of backbone fragility). Our
hypothesis makes three correct predictions. First, that the backbone robustness
of an instance is negatively correlated with the local search cost when other
factors are controlled for. Second, that backbone-minimal instances (which are
3-SAT instances altered so as to be more backbone-fragile) are unusually hard
for WSat. Third, that the clauses most often unsatisfied during search are
those whose deletion has the most effect on the backbone. In understanding the
pathologies of local search methods, we hope to contribute to the development
of new and better techniques.
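For reference, the WSat procedure itself can be sketched in a few lines (an illustrative reimplementation, not the authors' code): repeatedly pick an unsatisfied clause and flip either a random variable in it (noise) or the variable whose flip leaves the fewest clauses unsatisfied (greedy):

```python
import random

def wsat(clauses, n_vars, p_noise=0.5, max_flips=10000, seed=0):
    """Minimal WSat sketch. Clauses are lists of nonzero ints; returns a
    satisfying assignment (1-indexed list of bools) or None on failure."""
    rng = random.Random(seed)
    assign = [rng.choice([True, False]) for _ in range(n_vars + 1)]

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    def unsat_clauses():
        return [c for c in clauses if not any(sat(l) for l in c)]

    for _ in range(max_flips):
        unsat = unsat_clauses()
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p_noise:
            v = abs(rng.choice(clause))         # noise step
        else:                                   # greedy step: minimize breakage
            def cost(lit):
                w = abs(lit)
                assign[w] = not assign[w]
                bad = len(unsat_clauses())
                assign[w] = not assign[w]
                return bad
            v = abs(min(clause, key=cost))
        assign[v] = not assign[v]
    return None

cnf = [[1, 2], [-1, 2], [1, -2]]                # unique model: x1 = x2 = True
model = wsat(cnf, 2)
```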
|
1106.0241
|
An Application of Reinforcement Learning to Dialogue Strategy Selection
in a Spoken Dialogue System for Email
|
cs.AI
|
This paper describes a novel method by which a spoken dialogue system can
learn to choose an optimal dialogue strategy from its experience interacting
with human users. The method is based on a combination of reinforcement
learning and performance modeling of spoken dialogue systems. The reinforcement
learning component applies Q-learning (Watkins, 1989), while the performance
modeling component applies the PARADISE evaluation framework (Walker et al.,
1997) to learn the performance function (reward) used in reinforcement
learning. We illustrate the method with a spoken dialogue system named ELVIS
(EmaiL Voice Interactive System), that supports access to email over the phone.
We conduct a set of experiments for training an optimal dialogue strategy on a
corpus of 219 dialogues in which human users interact with ELVIS over the
phone. We then test that strategy on a corpus of 18 dialogues. We show that
ELVIS can learn to optimize its strategy selection for agent initiative, for
reading messages, and for summarizing email folders.
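The Q-learning component (Watkins, 1989) can be illustrated on a toy two-state task standing in for a dialogue; the task, names, and parameters below are invented for illustration and are not taken from ELVIS:

```python
import random

def q_learning(states, actions, step, episodes=500, alpha=0.5, gamma=0.9,
               eps=0.2, seed=0):
    """Tabular Q-learning with epsilon-greedy exploration. `step(s, a)`
    returns (next_state, reward, done); states/actions are small finite sets."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s, done = states[0], False
        while not done:
            a = (rng.choice(actions) if rng.random() < eps
                 else max(actions, key=lambda b: Q[(s, b)]))
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(Q[(s2, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s = s2
    return Q

# Toy two-state "dialogue": act well in both states to earn the final reward.
def step(s, a):
    if a == "good":
        return (1, 0.0, False) if s == 0 else (1, 1.0, True)
    return (s, 0.0, True)          # a poor strategy ends the episode unrewarded

Q = q_learning([0, 1], ["good", "bad"], step)
```

After training, the learned Q-values rank the "good" strategy above the "bad" one in both states, with the value in state 0 discounted by gamma.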
|
1106.0242
|
Nonapproximability Results for Partially Observable Markov Decision
Processes
|
cs.AI
|
We show that for several variations of partially observable Markov decision
processes, polynomial-time algorithms for finding control policies are unlikely
to have, or provably do not have, guarantees of finding policies within a
constant factor or a constant summand of optimal. Here "unlikely" means "unless
some complexity
classes collapse," where the collapses considered are P=NP, P=PSPACE, or P=EXP.
Until or unless these collapses are shown to hold, any control-policy designer
must choose between such performance guarantees and efficient computation.
|
1106.0243
|
On Reasonable and Forced Goal Orderings and their Use in an
Agenda-Driven Planning Algorithm
|
cs.AI
|
The paper addresses the problem of computing goal orderings, which is one of
the longstanding issues in AI planning. It makes two new contributions. First,
it formally defines and discusses two different goal orderings, which are
called the reasonable and the forced ordering. Both orderings are defined for
simple STRIPS operators as well as for more complex ADL operators supporting
negation and conditional effects. The complexity of these orderings is
investigated and their practical relevance is discussed. Secondly, two
different methods to compute reasonable goal orderings are developed. One of
them is based on planning graphs, while the other investigates the set of
actions directly. Finally, it is shown how the ordering relations, which have
been derived for a given set of goals G, can be used to compute a so-called
goal agenda that divides G into an ordered set of subgoals. Any planner can
then, in principle, use the goal agenda to plan for increasing sets of
subgoals. This can lead to an exponential complexity reduction, as the solution
to a complex planning problem is found by solving easier subproblems. Since
only a polynomial overhead is caused by the goal agenda computation, a
potential exists to dramatically speed up planning algorithms as we demonstrate
in the empirical evaluation, where we use this method in the IPP planner.
|
1106.0244
|
Asimovian Adaptive Agents
|
cs.AI
|
The goal of this research is to develop agents that are adaptive and
predictable and timely. At first blush, these three requirements seem
contradictory. For example, adaptation risks introducing undesirable side
effects, thereby making agents' behavior less predictable. Furthermore,
although formal verification can assist in ensuring behavioral predictability,
it is known to be time-consuming. Our solution to the challenge of satisfying
all three requirements is the following. Agents have finite-state automaton
plans, which are adapted online via evolutionary learning (perturbation)
operators. To ensure that critical behavioral constraints are always satisfied,
agents' plans are first formally verified. They are then reverified after every
adaptation. If reverification concludes that constraints are violated, the
plans are repaired. The main objective of this paper is to improve the
efficiency of reverification after learning, so that agents have a sufficiently
rapid response time. We present two solutions: positive results that certain
learning operators are a priori guaranteed to preserve useful classes of
behavioral assurance constraints (which implies that no reverification is
needed for these operators), and efficient incremental reverification
algorithms for those learning operators that have negative a priori results.
|
1106.0245
|
A Model of Inductive Bias Learning
|
cs.AI
|
A major problem in machine learning is that of inductive bias: how to choose
a learner's hypothesis space so that it is large enough to contain a solution
to the problem being learnt, yet small enough to ensure reliable generalization
from reasonably-sized training sets. Typically such bias is supplied by hand
through the skill and insights of experts. In this paper a model for
automatically learning bias is investigated. The central assumption of the
model is that the learner is embedded within an environment of related learning
tasks. Within such an environment the learner can sample from multiple tasks,
and hence it can search for a hypothesis space that contains good solutions to
many of the problems in the environment. Under certain restrictions on the set
of all hypothesis spaces available to the learner, we show that a hypothesis
space that performs well on a sufficiently large number of training tasks will
also perform well when learning novel tasks in the same environment. Explicit
bounds are also derived demonstrating that learning multiple tasks within an
environment of related tasks can potentially give much better generalization
than learning a single task.
|
1106.0246
|
Mean Field Methods for a Special Class of Belief Networks
|
cs.AI
|
The chief aim of this paper is to propose mean-field approximations for a
broad class of Belief networks, of which sigmoid and noisy-or networks can be
seen as special cases. The approximations are based on a powerful mean-field
theory suggested by Plefka. We show that Saul, Jaakkola and Jordan's approach
is the first-order approximation in Plefka's approach, via a variational
derivation. The application of Plefka's theory to belief networks is not
computationally tractable. To tackle this problem we propose new approximations
based on Taylor series. Small scale experiments show that the proposed schemes
are attractive.
|
1106.0247
|
On the Compilability and Expressive Power of Propositional Planning
Formalisms
|
cs.AI
|
The recent approaches of extending the GRAPHPLAN algorithm to handle more
expressive planning formalisms raise the question of what the formal meaning of
"expressive power" is. We formalize the intuition that expressive power is a
measure of how concisely planning domains and plans can be expressed in a
particular formalism by introducing the notion of "compilation schemes" between
planning formalisms. Using this notion, we analyze the expressiveness of a
large family of propositional planning formalisms, ranging from basic STRIPS to
a formalism with conditional effects, partial state specifications, and
propositional formulae in the preconditions. One of the results is that
conditional effects cannot be compiled away if plan size should grow only
linearly but can be compiled away if we allow for polynomial growth of the
resulting plans. This result confirms that the recently proposed extensions to
the GRAPHPLAN algorithm concerning conditional effects are optimal with respect
to the "compilability" framework. Another result is that general propositional
formulae cannot be compiled into conditional effects if the plan size should be
preserved linearly. This implies that allowing general propositional formulae
in preconditions and effect conditions adds another level of difficulty in
generating a plan.
|
1106.0248
|
Technical Paper Recommendation: A Study in Combining Multiple
Information Sources
|
cs.IR
|
The growing need to manage and exploit the proliferation of online data
sources is opening up new opportunities for bringing people closer to the
resources they need. For instance, consider a recommendation service through
which researchers can receive daily pointers to journal papers in their fields
of interest. We survey some of the known approaches to the problem of technical
paper recommendation and ask how they can be extended to deal with multiple
information sources. More specifically, we focus on a variant of this problem -
recommending conference paper submissions to reviewing committee members -
which offers us a testbed to try different approaches. Using WHIRL - an
information integration system - we are able to implement different
recommendation algorithms derived from information retrieval principles. We
also use a novel autonomous procedure for gathering reviewer interest
information from the Web. We evaluate our approach and compare it to other
methods using preference data provided by members of the AAAI-98 conference
reviewing committee along with data about the actual submissions.
|
1106.0249
|
Partial-Order Planning with Concurrent Interacting Actions
|
cs.AI
|
In order to generate plans for agents with multiple actuators, agent teams,
or distributed controllers, we must be able to represent and plan using
concurrent actions with interacting effects. This has historically been
considered a challenging task requiring a temporal planner with the ability to
reason explicitly about time. We show that with simple modifications, the
STRIPS action representation language can be used to represent interacting
actions. Moreover, algorithms for partial-order planning require only small
modifications in order to be applied in such multiagent domains. We demonstrate
this fact by developing a sound and complete partial-order planner for planning
with concurrent interacting actions, POMP, that extends existing partial-order
planners in a straightforward way. These results open the way to the use of
partial-order planners for the centralized control of cooperative multiagent
systems.
|
1106.0250
|
Planning by Rewriting
|
cs.AI
|
Domain-independent planning is a hard combinatorial problem. Taking into
account plan quality makes the task even more difficult. This article
introduces Planning by Rewriting (PbR), a new paradigm for efficient
high-quality domain-independent planning. PbR exploits declarative
plan-rewriting rules and efficient local search techniques to transform an
easy-to-generate, but possibly suboptimal, initial plan into a high-quality
plan. In addition to addressing the issues of planning efficiency and plan
quality, this framework offers a new anytime planning algorithm. We have
implemented this planner and applied it to several existing domains. The
experimental results show that the PbR approach provides significant savings in
planning effort while generating high-quality plans.
|
1106.0251
|
Speeding Up the Convergence of Value Iteration in Partially Observable
Markov Decision Processes
|
cs.AI
|
Partially observable Markov decision processes (POMDPs) have recently become
popular among many AI researchers because they serve as a natural model for
planning under uncertainty. Value iteration is a well-known algorithm for
finding optimal policies for POMDPs. It typically takes a large number of
iterations to converge. This paper proposes a method for accelerating the
convergence of value iteration. The method has been evaluated on an array of
benchmark problems and was found to be very effective: It enabled value
iteration to converge after only a few iterations on all the test problems.
|
1106.0252
|
Conformant Planning via Symbolic Model Checking
|
cs.AI
|
We tackle the problem of planning in nondeterministic domains, by presenting
a new approach to conformant planning. Conformant planning is the problem of
finding a sequence of actions that is guaranteed to achieve the goal despite
the nondeterminism of the domain. Our approach is based on the representation
of the planning domain as a finite state automaton. We use Symbolic Model
Checking techniques, in particular Binary Decision Diagrams, to compactly
represent and efficiently search the automaton. In this paper we make the
following contributions. First, we present a general planning algorithm for
conformant planning, which applies to fully nondeterministic domains, with
uncertainty in the initial condition and in action effects. The algorithm is
based on a breadth-first, backward search, and returns conformant plans of
minimal length, if a solution to the planning problem exists, otherwise it
terminates concluding that the problem admits no conformant solution. Second,
we provide a symbolic representation of the search space based on Binary
Decision Diagrams (BDDs), which is the basis for search techniques derived from
symbolic model checking. The symbolic representation makes it possible to
analyze potentially large sets of states and transitions in a single
computation step, thus providing for an efficient implementation. Third, we
present CMBP (Conformant Model Based Planner), an efficient implementation of
the data structures and algorithm described above, directly based on BDD
manipulations, which allows for a compact representation of the search layers
and an efficient implementation of the search steps. Finally, we present an
experimental comparison of our approach with the state-of-the-art conformant
planners CGP, QBFPLAN and GPT. Our analysis includes all the planning problems
from the distribution packages of these systems, plus other problems defined to
stress a number of specific factors. Our approach appears to be the most
effective: CMBP is strictly more expressive than QBFPLAN and CGP and, in all
the problems where a comparison is possible, CMBP outperforms its competitors,
sometimes by orders of magnitude.
|
1106.0253
|
AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential
Reasoning in Large Bayesian Networks
|
cs.AI
|
Stochastic sampling algorithms, while an attractive alternative to exact
algorithms in very large Bayesian network models, have been observed to perform
poorly in evidential reasoning with extremely unlikely evidence. To address
this problem, we propose an adaptive importance sampling algorithm, AIS-BN,
that shows promising convergence rates even under extreme conditions and seems
to outperform the existing sampling algorithms consistently. Three sources of
this performance improvement are (1) two heuristics for initialization of the
importance function that are based on the theoretical properties of importance
sampling in finite-dimensional integrals and the structural advantages of
Bayesian networks, (2) a smooth learning method for the importance function,
and (3) a dynamic weighting function for combining samples from different
stages of the algorithm. We tested the performance of the AIS-BN algorithm
along with two state-of-the-art, general-purpose sampling algorithms, likelihood
weighting (Fung and Chang, 1989; Shachter and Peot, 1989) and self-importance
sampling (Shachter and Peot, 1989). We used in our tests three large real
Bayesian network models available to the scientific community: the CPCS network
(Pradhan et al., 1994), the PathFinder network (Heckerman, Horvitz, and
Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, and Druzdzel,
1997), with evidence as unlikely as 10^-41. While the AIS-BN algorithm always
performed better than the other two algorithms, in the majority of the test
cases it achieved orders of magnitude improvement in precision of the results.
Improvement in speed given a desired precision is even more dramatic, although
we are unable to report numerical results here, as the other algorithms almost
never achieved the precision reached even by the first few iterations of the
AIS-BN algorithm.
|
1106.0254
|
Conflict-Directed Backjumping Revisited
|
cs.AI
|
In recent years, many improvements to backtracking algorithms for solving
constraint satisfaction problems have been proposed. The techniques for
improving backtracking algorithms can be conveniently classified as look-ahead
schemes and look-back schemes. Unfortunately, look-ahead and look-back schemes
are not entirely orthogonal as it has been observed empirically that the
enhancement of look-ahead techniques is sometimes counterproductive to the
effects of look-back techniques. In this paper, we focus on the relationship
between the two most important look-ahead techniques---using a variable
ordering heuristic and maintaining a level of local consistency during the
backtracking search---and the look-back technique of conflict-directed
backjumping (CBJ). We show that there exists a "perfect" dynamic variable
ordering such that CBJ becomes redundant. We also show theoretically that the
higher the level of local consistency maintained in the backtracking search,
the smaller the improvement that backjumping provides. Our theoretical
results partially explain why a backtracking algorithm doing more in the
look-ahead phase cannot benefit more from the backjumping look-back scheme.
Finally, we show empirically that adding CBJ to a backtracking algorithm that
maintains generalized arc consistency (GAC), an algorithm that we refer to as
GAC-CBJ, can still provide orders of magnitude speedups. Our empirical results
contrast with Bessiere and Regin's conclusion (1996) that CBJ is useless to an
algorithm that maintains arc consistency.
|
1106.0256
|
Grounding the Lexical Semantics of Verbs in Visual Perception using
Force Dynamics and Event Logic
|
cs.AI
|
This paper presents an implemented system for recognizing the occurrence of
events described by simple spatial-motion verbs in short image sequences. The
semantics of these verbs is specified with event-logic expressions that
describe changes in the state of force-dynamic relations between the
participants of the event. An efficient finite representation is introduced for
the infinite sets of intervals that occur when describing liquid and
semi-liquid events. Additionally, an efficient procedure using this
representation is presented for inferring occurrences of compound events,
described with event-logic expressions, from occurrences of primitive events.
Using force dynamics and event logic to specify the lexical semantics of events
allows the system to be more robust than prior systems based on motion profile.
|
1106.0257
|
Popular Ensemble Methods: An Empirical Study
|
cs.AI
|
An ensemble consists of a set of individually trained classifiers (such as
neural networks or decision trees) whose predictions are combined when
classifying novel instances. Previous research has shown that an ensemble is
often more accurate than any of the single classifiers in the ensemble. Bagging
(Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two
relatively new but popular methods for producing ensembles. In this paper we
evaluate these methods on 23 data sets using both neural networks and decision
trees as our classification algorithm. Our results clearly indicate a number of
conclusions. First, while Bagging is almost always more accurate than a single
classifier, it is sometimes much less accurate than Boosting. On the other
hand, Boosting can create ensembles that are less accurate than a single
classifier -- especially when using neural networks. Analysis indicates that
the performance of the Boosting methods is dependent on the characteristics of
the data set being examined. In fact, further results show that Boosting
ensembles may overfit noisy data sets, thus decreasing their performance.
Finally, consistent with previous studies, our work suggests that most of the
gain in an ensemble's performance comes in the first few classifiers combined;
however, relatively large gains can be seen up to 25 classifiers when Boosting
decision trees.
|
1106.0263
|
Indirect stabilization of weakly coupled systems with hybrid boundary
conditions
|
math.OC cs.SY math.AP
|
We investigate stability properties of indirectly damped systems of evolution
equations in Hilbert spaces, under new compatibility assumptions. We prove
polynomial decay for the energy of solutions and optimize our results by
interpolation techniques, obtaining a full range of power-like decay rates. In
particular, we give explicit estimates with respect to the initial data. We
discuss several applications to hyperbolic systems with {\em hybrid} boundary
conditions, including the coupling of two wave equations subject to Dirichlet
and Robin type boundary conditions, respectively.
|
1106.0264
|
Achievable Degrees of Freedom of the K-user Interference Channel with
Partial Cooperation
|
cs.IT math.IT
|
In this paper, we consider the K-user interference channel with partial
cooperation, where a strict subset of the K users cooperate. For the K-user
interference channel with cooperating subsets of length M, the outer bound of
the total degrees of freedom is KM/(M+1). In this paper, we propose a signal
space-based interference alignment scheme that proves the achievability of
these degrees of freedom for the case K=M+2. The proposed scheme consists of a
design for the transmit precoding matrices and a processing algorithm which we
call the Successive Interference Alignment (SIA) algorithm. The decoder of each
message uses the SIA algorithm to process the signals received by the M
cooperating receivers in order to get the maximum available degrees of freedom.
|
1106.0281
|
The media effect in Axelrod's model explained
|
physics.soc-ph cs.SI
|
We revisit the problem of introducing an external global field -- the mass
media -- in Axelrod's model of social dynamics, where in addition to their
nearest neighbors, the agents can interact with a virtual neighbor whose
cultural features are fixed from the outset. The finding that this apparently
homogenizing field actually increases the cultural diversity has been
considered a puzzle since the phenomenon was first reported more than a decade
ago. Here we offer a simple explanation for it, which is based on the
pedestrian observation that Axelrod's model exhibits more cultural diversity,
i.e., more distinct cultural domains, when the agents are allowed to interact
solely with the media field than when they can interact with their neighbors as
well. In this perspective, it is the local homogenizing interactions that work
towards making the absorbing configurations less fragmented as compared with
the extreme situation in which the agents interact with the media only.
|
1106.0284
|
An Evolutionary Algorithm with Advanced Goal and Priority Specification
for Multi-objective Optimization
|
cs.AI
|
This paper presents an evolutionary algorithm with a new goal-sequence
domination scheme for better decision support in multi-objective optimization.
The approach allows the inclusion of advanced hard/soft priority and constraint
information on each objective component, and is capable of incorporating
multiple specifications with overlapping or non-overlapping objective functions
via logical 'OR' and 'AND' connectives to drive the search towards multiple
regions of trade-off. In addition, we propose a dynamic sharing scheme that is
simple and adaptively estimated according to the on-line population
distribution without needing any a priori parameter setting. Each feature in
the proposed algorithm is examined to show its respective contribution, and the
performance of the algorithm is compared with other evolutionary optimization
methods. It is shown that the proposed algorithm has performed well in the
diversity of evolutionary search and uniform distribution of non-dominated
individuals along the final trade-offs, without significant computational
effort. The algorithm is also applied to the design optimization of a practical
servo control system for hard disk drives with a single voice-coil-motor
actuator. Results of the evolutionary designed servo control system show a
superior closed-loop performance compared to classical PID or RPT approaches.
|
1106.0285
|
The GRT Planning System: Backward Heuristic Construction in Forward
State-Space Planning
|
cs.AI
|
This paper presents GRT, a domain-independent heuristic planning system for
STRIPS worlds. GRT solves problems in two phases. In the pre-processing phase,
it estimates the distance between each fact and the goals of the problem, in a
backward direction. Then, in the search phase, these estimates are used in
order to further estimate the distance between each intermediate state and the
goals, thus guiding the search process in a forward direction on a best-first
basis. The paper presents the benefits from the adoption of opposite directions
between the preprocessing and the search phases, discusses some difficulties
that arise in the pre-processing phase and introduces techniques to cope with
them. Moreover, it presents several methods of improving the efficiency of the
heuristic, by enriching the representation and by reducing the size of the
problem. Finally, a method of overcoming local optimal states, based on domain
axioms, is proposed. According to it, difficult problems are decomposed into
easier sub-problems that have to be solved sequentially. The performance
results from various domains, including those of the recent planning
competitions, show that GRT is among the fastest planners.
|
1106.0286
|
Popularity versus Similarity in Growing Networks
|
physics.soc-ph cond-mat.stat-mech cs.NI cs.SI
|
Popularity is attractive -- this is the formula underlying preferential
attachment, a popular explanation for the emergence of scaling in growing
networks. If new connections are made preferentially to more popular nodes,
then the resulting distribution of the number of connections that nodes have
follows power laws observed in many real networks. Preferential attachment has
been directly validated for some real networks, including the Internet.
Preferential attachment can also be a consequence of different underlying
processes based on node fitness, ranking, optimization, random walks, or
duplication. Here we show that popularity is just one dimension of
attractiveness. Another dimension is similarity. We develop a framework where
new connections, instead of preferring popular nodes, optimize certain
trade-offs between popularity and similarity. The framework admits a geometric
interpretation, in which popularity preference emerges from local optimization.
As opposed to preferential attachment, the optimization framework accurately
describes large-scale evolution of technological (Internet), social (web of
trust), and biological (E.coli metabolic) networks, predicting the probability
of new links in them with a remarkable precision. The developed framework can
thus be used for predicting new links in evolving networks, and provides a
different perspective on preferential attachment as an emergent phenomenon.
|
1106.0288
|
Emergence of Bursts and Communities in Evolving Weighted Networks
|
physics.soc-ph cs.SI
|
Understanding the patterns of human dynamics and social interaction, and the
way they lead to the formation of an organized and functional society are
important issues especially for techno-social development. Addressing these
issues of social networks has recently become possible through large scale data
analysis of e.g. mobile phone call records, which has revealed the existence of
modular or community structure with many links between nodes of the same
community and relatively few links between nodes of different communities. The
weights of links, e.g. the number of calls between two users, and the network
topology are found correlated such that intra-community links are stronger
compared to the weak inter-community links. This is known as Granovetter's "The
strength of weak ties" hypothesis. In addition to this inhomogeneous community
structure, the temporal patterns of human dynamics turn out to be inhomogeneous
or bursty, characterized by the heavy tailed distribution of inter-event time
between two consecutive events. In this paper, we study how the community
structure and the bursty dynamics emerge together in an evolving weighted
network model. The principal mechanisms behind these patterns are social
interaction by cyclic closure, i.e. links to friends of friends and the focal
closure, i.e. links to individuals sharing similar attributes or interests, and
human dynamics by task handling process. These three mechanisms have been
implemented as a network model with local attachment, global attachment, and
priority-based queuing processes. By comprehensive numerical simulations we
show that the interplay of these mechanisms leads to the emergence of heavy
tailed inter-event time distribution and the evolution of Granovetter-type
community structure. Moreover, the numerical results are found to be in
qualitative agreement with empirical results from a mobile phone call dataset.
|
1106.0296
|
The Emergence of Leadership in Social Networks
|
physics.soc-ph cs.SI q-fin.GN
|
We study a networked version of the minority game in which agents can choose
to follow the choices made by a neighbouring agent in a social network. We show
that for a wide variety of networks a leadership structure always emerges, with
most agents following the choice made by a few agents. We find a suitable
parameterisation which highlights the universal aspects of the behaviour and
which also indicates where results depend on the type of social network.
|
1106.0297
|
Hypergraphs and City Street Networks
|
physics.soc-ph cs.SI
|
The map of a city's streets constitutes a particular case of spatial complex
network. However, a city is not limited to its topology: it is above all a
geometrical object whose particularity is to organize into short and long axes
called streets. In this article we present and discuss two algorithms aiming at
recovering the notion of street from a graph representation of a city. Then we
show that the length of the so-called streets scales logarithmically. This
phenomenon leads us to assume that a city is shaped by a logic of extension and
division of space.
|
1106.0304
|
Using Ontologies for the Design of Data Warehouses
|
cs.DB
|
Obtaining an implementation of a data warehouse is a complex task that forces
designers to acquire wide knowledge of the domain, thus requiring a high level
of expertise and making it an error-prone task. Based on our experience, we
have detected a set of situations we have faced in real-world projects
in which we believe that the use of ontologies will improve several aspects of
the design of data warehouses. The aim of this article is to describe several
shortcomings of current data warehouse design approaches and discuss the
benefit of using ontologies to overcome them. This work is a starting point for
discussing the convenience of using ontologies in data warehouse design.
|
1106.0346
|
Entropy-based Classification of 'Retweeting' Activity on Twitter
|
cs.SI cs.CY
|
Twitter is used for a variety of reasons, including information
dissemination, marketing, political organizing and to spread propaganda,
spamming, promotion, conversations, and so on. Characterizing these activities
and categorizing associated user generated content is a challenging task. We
present an information-theoretic approach to the classification of user activity on
Twitter. We focus on tweets that contain embedded URLs and study their
collective `retweeting' dynamics. We identify two features, time-interval and
user entropy, which we use to classify retweeting activity. We achieve good
separation of different activities using just these two features and are able
to categorize content based on the collective user response it generates.
We have identified five distinct categories of retweeting activity on
Twitter: automatic/robotic activity, newsworthy information dissemination,
advertising and promotion, campaigns, and parasitic advertisement. In the
course of our investigations, we have shown how Twitter can be exploited for
promotional and spam-like activities. The content-independent, entropy-based
activity classification method is computationally efficient, scalable and
robust to sampling and missing data. It has many applications, including
automatic spam-detection, trend identification, trust management,
user-modeling, social search and content classification on online social media.
|
1106.0357
|
Learning Hierarchical Sparse Representations using Iterative Dictionary
Learning and Dimension Reduction
|
cs.LG cs.AI cs.CV
|
This paper introduces an elemental building block which combines Dictionary
Learning and Dimension Reduction (DRDL). We show how this foundational element
can be used to iteratively construct a Hierarchical Sparse Representation (HSR)
of a sensory stream. We compare our approach to existing models showing the
generality of our simple prescription. We then perform preliminary experiments
using this framework, illustrating with the example of an object recognition
task using standard datasets. This work introduces the very first steps towards
an integrated framework for designing and analyzing various computational tasks
from learning to attention to action. The ultimate goal is building a
mathematically rigorous, integrated theory of intelligence.
|
1106.0359
|
Composite Social Network for Predicting Mobile Apps Installation
|
cs.SI cs.HC physics.soc-ph
|
We have carefully instrumented a large portion of the population living in a
university graduate dormitory by giving participants Android smart phones
running our sensing software. In this paper, we propose the novel problem of
predicting the installation of mobile applications (known as "apps") using
social networks and explain its challenges. Modern smart phones, like the ones
used in
our study, are able to collect different social networks using built-in
sensors (e.g., the Bluetooth proximity network, call log network, etc.). While
this information is accessible to app market makers such as the iPhone
AppStore, it has not yet been studied how app market makers can use this
information for
marketing research and strategy development. We develop a simple computational
model to better predict app installation by using a composite network computed
from the different networks sensed by phones. Our model also captures
individual variance and exogenous factors in app adoption. We show the
importance of considering all these factors in predicting app installations,
and we observe the surprising result that app installation is indeed
predictable. We also show that our model achieves the best results compared
with generic approaches: our results are four times better than a random
guess, and predict almost 45% of all apps users install with almost 45%
precision (F1 score = 0.43).
|
1106.0365
|
Lower Bounds for Sparse Recovery
|
cs.DS cs.IT math.IT
|
We consider the following k-sparse recovery problem: design an m x n matrix
A, such that for any signal x, given Ax we can efficiently recover x'
satisfying
||x-x'||_1 <= C min_{k-sparse x"} ||x-x"||_1.
It is known that there exist matrices A with this property that have only O(k
log (n/k)) rows.
In this paper we show that this bound is tight. Our bound holds even for the
more general /randomized/ version of the problem, where A is a random variable
and the recovery algorithm is required to work for any fixed x with constant
probability (over A).
|
1106.0371
|
A Novel Image Segmentation Enhancement Technique based on Active Contour
and Topological Alignments
|
cs.CV
|
Topological alignments and snakes are used in image processing, particularly
in locating object boundaries. Both have their own advantages and limitations.
To improve the overall image boundary detection system, we focused on
developing a novel algorithm for image processing. The proposed algorithm is
based on the active contour method, used in conjunction with the topological
alignments method, to enhance the image detection approach. The algorithm
presents a novel technique that incorporates the advantages of both
topological alignments and snakes: the initial segmentation produced by
topological alignments is first transformed into the input of the snake model,
which then evolves toward the boundary of the object of interest. The results
show that the algorithm can deal with low-contrast images and shaped cells,
and demonstrate segmentation accuracy under weak image boundaries, which are
responsible for the lack of accuracy in existing image detection techniques.
We achieved better segmentation and boundary detection for the image, as well
as the ability of the system to handle low contrast and to deal with over- and
under-segmentation.
|
1106.0380
|
A Note on Multiple-Access Channels with Strictly-Causal State
Information
|
cs.IT math.IT
|
We propose a new inner bound on the capacity region of a memoryless
multiple-access channel that is governed by a memoryless state that is known
strictly causally to the encoders. The new inner bound contains the previous
bounds, and we provide an example demonstrating that the inclusion can be
strict.
A variation on this example is then applied to the case where the channel is
governed by two independent state sequences, where each transmitter knows one
of the states strictly causally. The example proves that, as conjectured by Li
et al., an inner bound that they derived for this scenario can indeed be
strictly better than previous bounds.
|
1106.0390
|
Asymmetric random matrices: What do we need them for?
|
physics.data-an cs.CE q-fin.ST
|
Complex systems are typically represented by large ensembles of observations.
Correlation matrices provide an efficient formal framework to extract
information from such multivariate ensembles and identify in a quantifiable way
patterns of activity that are reproducible with statistically significant
frequency compared to a reference chance probability, usually provided by
random matrices as fundamental reference. The character of the problem and
especially the symmetries involved must guide the choice of random matrices to
be used for the definition of a baseline reference. For standard correlation
matrices this is the Wishart ensemble of symmetric random matrices. The real
world complexity however often shows asymmetric information flows and therefore
more general correlation matrices are required to adequately capture the
asymmetry. Here we first summarize the relevant theoretical concepts. We then
present some examples of human brain activity where asymmetric time-lagged
correlations are evident and hence highlight the need for further theoretical
developments.
|
1106.0411
|
Quantum-Like Uncertain Conditionals for Text Analysis
|
cs.CL quant-ph
|
Simple representations of documents based on the occurrences of terms are
ubiquitous in areas like Information Retrieval, and also frequent in Natural
Language Processing. In this work we propose a logical-probabilistic approach
to the analysis of natural language text based on the concept of the Uncertain
Conditional, on top of a formulation of lexical measurements inspired by the
theoretical concept of ideal quantum measurements. The proposed concept can be
used for generating topic-specific representations of text, aiming to match in
a simple way the perception of a user with a pre-established idea of what the
usage of terms in the text should be. A simple example is developed with two
versions of a text in two languages, showing how regularities in the use of
terms are detected and easily represented.
|
1106.0419
|
Mean field solutions of kinetic exchange opinion models
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We present here the exact solution of an infinite range, discrete, opinion
formation model. The model shows an active-absorbing phase transition, similar
to that numerically found in its recently proposed continuous version
(Lallouache et al., Phys. Rev E 82, 056112 (2010)). Apart from the two-agent
interactions here we also report the effect of having three agent interactions.
The phase diagram has a continuous transition line (two agent interaction
dominated) and a discontinuous transition line (three agent interaction
dominated) separated by a tricritical point.
|
1106.0423
|
Physarum Can Compute Shortest Paths
|
cs.DS cs.CE cs.ET cs.SY math.DS math.OC physics.bio-ph
|
Physarum Polycephalum is a slime mold that is apparently able to solve
shortest path problems.
A mathematical model has been proposed by biologists to describe the feedback
mechanism used by the slime mold to adapt its tubular channels while foraging
between two food sources s0 and s1. We prove that, under this model, the mass
of the
mold will eventually converge to the shortest s0 - s1 path of the network that
the mold lies on, independently of the structure of the network or of the
initial mass distribution.
This matches the experimental observations by the biologists and can be seen
as an example of a "natural algorithm", that is, an algorithm developed by
evolution over millions of years.
|
1106.0436
|
Linear-algebraic list decoding of folded Reed-Solomon codes
|
cs.IT cs.DS math.IT
|
Folded Reed-Solomon codes are an explicit family of codes that achieve the
optimal trade-off between rate and error-correction capability: specifically,
for any $\eps > 0$, the author and Rudra (2006,08) presented an $n^{O(1/\eps)}$
time algorithm to list decode appropriate folded RS codes of rate $R$ from a
fraction $1-R-\eps$ of errors. The algorithm is based on multivariate
polynomial interpolation and root-finding over extension fields. It was noted
by Vadhan that interpolating a linear polynomial suffices if one settles for a
smaller decoding radius (but still enough for a statement of the above form).
Here we give a simple linear-algebra based analysis of this variant that
eliminates the need for the computationally expensive root-finding step over
extension fields (and indeed any mention of extension fields). The entire list
decoding algorithm is linear-algebraic, solving one linear system for the
interpolation step, and another linear system to find a small subspace of
candidate solutions. Except for the step of pruning this subspace, the
algorithm can be implemented to run in {\em quadratic} time. The theoretical
drawbacks of folded RS codes are that both the decoding complexity and the proven
worst-case list-size bound are $n^{\Omega(1/\eps)}$. By combining the above
idea with a pseudorandom subset of all polynomials as messages, we get a Monte
Carlo construction achieving a list size bound of $O(1/\eps^2)$ which is quite
close to the existential $O(1/\eps)$ bound (however, the decoding complexity
remains $n^{\Omega(1/\eps)}$). Our work highlights that constructing an
explicit {\em subspace-evasive} subset that has small intersection with
low-dimensional subspaces could lead to explicit codes with better
list-decoding guarantees.
|
1106.0438
|
Analytical approach to model of scientific revolutions
|
physics.soc-ph cs.SI
|
The model of scientific paradigms spreading throughout the community of
agents with memory is analyzed using the master equation. The case of two
competing ideas is considered for various networks of interactions, including
agents placed on Erd\H{o}s-R\'{e}nyi graphs or complete graphs. The pace of
adopting a new idea by a community is analyzed, along with the distribution of
periods after which a new idea replaces the old one. For the chain topology, the
approach is extended to the more general case in which more than two ideas
compete. Our analytical results are in agreement with numerical simulations.
|
1106.0439
|
Model of communities isolation at hierarchical modular networks
|
physics.soc-ph cs.SI
|
The model of community isolation was extended to the case when individuals
are randomly placed at nodes of hierarchical modular networks. It was shown
that the average number of blocked nodes (individuals) increases in time as a
power function, with the exponent depending on network parameters. The
distribution of the time at which the first isolated cluster appears is unimodal
and non-Gaussian. The developed analytical approach is in good agreement with the
simulation data.
|
1106.0468
|
From Boolean Functional Equations to Control Software
|
cs.SY cs.LO cs.SE
|
Many software as well as digital hardware automatic synthesis methods define the
set of implementations meeting the given system specifications with a boolean
relation K. In such a context a fundamental step in the software (hardware)
synthesis process is finding effective solutions to the functional equation
defined by K. This entails finding a (set of) boolean function(s) F (typically
represented using OBDDs, Ordered Binary Decision Diagrams) such that: 1) for
all x for which K is satisfiable, K(x, F(x)) = 1 holds; 2) the implementation
of F is efficient with respect to given implementation parameters such as code
size or execution time. While this problem has been widely studied in digital
hardware synthesis, little has been done in a software synthesis context.
Unfortunately the approaches developed for hardware synthesis cannot be
directly used in a software context. This motivates investigation of effective
methods to solve the above problem when F has to be implemented with software.
In this paper we present an algorithm that, from an OBDD representation for K,
generates a C code implementation for F that has the same size as the OBDD for
F and a WCET (Worst Case Execution Time) of at most O(nr), where n = |x| is the
number of arguments of the functions in F and r is the number of functions in F.
|
1106.0483
|
Learning unbelievable marginal probabilities
|
cs.AI cs.LG
|
Loopy belief propagation performs approximate inference on graphical models
with loops. One might hope to compensate for the approximation by adjusting
model parameters. Learning algorithms for this purpose have been explored
previously, and the claim has been made that every set of locally consistent
marginals can arise from belief propagation run on a graphical model. On the
contrary, here we show that many probability distributions have marginals that
cannot be reached by belief propagation using any set of model parameters or
any learning algorithm. We call such marginals `unbelievable.' This problem
occurs whenever the Hessian of the Bethe free energy is not positive-definite
at the target marginals. All learning algorithms for belief propagation
necessarily fail in these cases, producing beliefs or sets of beliefs that may
even be worse than the pre-learning approximation. We then show that averaging
inaccurate beliefs, each obtained from belief propagation using model
parameters perturbed about some learned mean values, can achieve the
unbelievable marginals.
|
1106.0488
|
A Half-Duplex Cooperative Scheme with Partial Decode-Forward Relaying
|
cs.IT math.IT
|
In this paper, we present a new cooperative communication scheme consisting
of two users in half-duplex mode communicating with one destination over a
discrete memoryless channel. The users encode messages in independent blocks
and divide the transmission of each block into 3 time slots with variable
durations. Cooperation is performed by partial decode-forward relaying over
these 3 time slots. During the first two time slots, each user alternatively
transmits and decodes, while during the last time slot, both users cooperate to
send information to the destination. An achievable rate region for this scheme
is derived using superposition encoding and joint maximum likelihood (ML)
decoding across the 3 time slots. An example of the Gaussian channel is treated
in detail and its achievable rate region is given explicitly. Results show that
the proposed half-duplex scheme achieves a significantly larger rate region than
the classical multiple access channel and approaches the performance of a
full-duplex cooperative scheme as the inter-user channel quality increases.
|
1106.0489
|
Recovery from Link Failures in Networks with Arbitrary Topology via
Diversity Coding
|
cs.NI cs.IT math.IT
|
Link failures in wide area networks are common. To recover from such
failures, a number of methods such as SONET rings, protection cycles, and
source rerouting have been investigated. Two important considerations in such
approaches are the recovery time and the needed spare capacity to complete the
recovery. Usually, these techniques attempt to achieve a recovery time less
than 50 ms. In this paper we introduce an approach that provides link failure
recovery in a hitless manner, or without any appreciable delay. This is
achieved by means of a method called diversity coding. We present an algorithm
for the design of an overlay network to achieve recovery from single link
failures in arbitrary networks via diversity coding. This algorithm is designed
to minimize spare capacity for recovery. We compare this algorithm against
conventional techniques in terms of recovery time, spare capacity, and a joint
metric called Quality of Recovery (QoR), which incorporates both the spare
capacity percentages and worst-case recovery times. Based on these results, we
conclude that the proposed technique
provides much shorter recovery times while achieving similar extra capacity, or
better QoR performance overall.
|
1106.0518
|
Submodular Functions Are Noise Stable
|
cs.LG cs.CC cs.GT
|
We show that all non-negative submodular functions have high {\em
noise-stability}. As a consequence, we obtain a polynomial-time learning
algorithm for this class with respect to any product distribution on
$\{-1,1\}^n$ (for any constant accuracy parameter $\epsilon$). Our algorithm
also succeeds in the agnostic setting. Previous work on learning submodular
functions required either query access or strong assumptions about the types of
submodular functions to be learned (and did not hold in the agnostic setting).
|
1106.0541
|
Sum rate analysis of a reduced feedback OFDMA system employing joint
scheduling and diversity
|
cs.IT math.IT
|
We consider joint scheduling and diversity to enhance the benefits of
multiuser diversity in an \OFDMA{} system. The \OFDMA{} spectrum is assumed to
consist of $\Nrb$ resource blocks and the reduced feedback scheme consists of
each user feeding back channel quality information (\CQI) for only the
best-$\NFb$ resource blocks. Assuming largest normalized \CQI{} scheduling and
a general value for $\NFb$, we develop a unified framework to analyze the sum
rate of the system for both the quantized and non-quantized \CQI{} feedback
schemes. Based on this framework, we provide closed-form expressions for the
sum rate for three different multi-antenna transmitter schemes: transmit
antenna selection (\TAS), orthogonal space time block codes (\OSTBC) and cyclic
delay diversity (\CDD). Furthermore, we approximate the sum rate expression and
determine the feedback ratio $(\frac{\NFb}{\Nrb})$ required to achieve a sum
rate comparable to the sum rate obtained by a full feedback scheme.
|
1106.0560
|
Collective response of human populations to large-scale emergencies
|
physics.soc-ph cs.SI physics.data-an
|
Despite recent advances in uncovering the quantitative features of stationary
human activity patterns, many applications, from pandemic prediction to
emergency response, require an understanding of how these patterns change when
the population encounters unfamiliar conditions. To explore societal response
to external perturbations we identified real-time changes in communication and
mobility patterns in the vicinity of eight emergencies, such as bomb attacks
and earthquakes, comparing these with eight non-emergencies, like concerts and
sporting events. We find that communication spikes accompanying emergencies are
both spatially and temporally localized, but information about emergencies
spreads globally, resulting in communication avalanches that engage in a
significant manner the social network of eyewitnesses. These results offer a
quantitative view of behavioral changes in human activity under extreme
conditions, with potential long-term impact on emergency detection and
response.
|
1106.0566
|
The Impact of Mutation Rate on the Computation Time of Evolutionary
Dynamic Optimization
|
cs.AI cs.CC
|
Mutation has traditionally been regarded as an important operator in
evolutionary algorithms. In particular, there have been many experimental
studies which showed the effectiveness of adapting mutation rates for various
static optimization problems. Given the perceived effectiveness of adaptive and
self-adaptive mutation for static optimization problems, there have been
speculations that adaptive and self-adaptive mutation can benefit dynamic
optimization problems even more since adaptation and self-adaptation are
capable of following a dynamic environment. However, few theoretical results
are available in analyzing rigorously evolutionary algorithms for dynamic
optimization problems. It is unclear when adaptive and self-adaptive mutation
rates are likely to be useful for evolutionary algorithms in solving dynamic
optimization problems. This paper provides the first rigorous analysis of
adaptive mutation and its impact on the computation times of evolutionary
algorithms in solving certain dynamic optimization problems. More specifically,
for both individual-based and population-based EAs, we have shown that any
time-variable mutation rate scheme will not significantly outperform a fixed
mutation rate on some dynamic optimization problem instances. The proofs also
offer some insights into conditions under which any time-variable mutation
scheme is unlikely to be useful and into the relationships between the problem
characteristics and algorithmic features (e.g., different mutation schemes).
|
1106.0596
|
Finding and testing network communities by lumped Markov chains
|
physics.soc-ph cs.SI
|
Identifying communities (or clusters), namely groups of nodes with
comparatively strong internal connectivity, is a fundamental task for deeply
understanding the structure and function of a network. Yet, there is a lack of
formal criteria for defining communities and for testing their significance. We
propose a sharp definition which is based on a significance threshold. By means
of a lumped Markov chain model of a random walker, a quality measure called
"persistence probability" is associated to a cluster. Then the cluster is
defined as an "$\alpha$-community" if such a probability is not smaller than
$\alpha$. Consistently, a partition composed of $\alpha$-communities is an
"$\alpha$-partition". These definitions turn out to be very effective for
finding and testing communities. If a set of candidate partitions is available,
setting the desired $\alpha$-level allows one to immediately select the
$\alpha$-partition with the finest decomposition. Simultaneously, the
persistence probabilities quantify the significance of each single community.
Given its ability in individually assessing the quality of each cluster, this
approach can also disclose single well-defined communities even in networks
which overall do not possess a definite clusterized structure.
|
1106.0599
|
Research on the visitor flow pattern of Expo 2010
|
physics.soc-ph cs.SI stat.AP
|
Expo 2010 Shanghai China was a successful, splendid and unforgettable event,
leaving us with valuable experiences. The visitor flow pattern of the Expo is
investigated in this paper. The Hurst exponent, mean value and standard
deviation of visitor volume prove that the visitor flow is fractal with
long-term stability and correlation, as well as obvious fluctuation over short
periods. The time series of visitor volume is then converted to a complex network
by the visibility algorithm. It can be inferred from the topological properties of
the visibility graph that the network is scale-free, small-world and
hierarchically constructed, confirming that the time series is fractal and that a
close relationship exists between the visitor volumes on different days.
Furthermore, extreme visitor volumes inevitably occur in the original visitor
flow, and these extreme points tend to appear in groups.
|
1106.0622
|
Control-constrained parabolic optimal control problems on evolving
surfaces - theory and variational discretization
|
math.OC cs.SY math.AP math.NA
|
We consider control-constrained linear-quadratic optimal control problems on
evolving surfaces. In order to formulate well-posed problems, we prove
existence and uniqueness of weak solutions for the state equation, in the sense
of vector-valued distributions. We then carry out and prove convergence of the
variational discretization of a distributed optimal control problem. In the
process, we investigate the convergence of a fully discrete approximation of
the state equation, and obtain optimal orders of convergence under weak
regularity assumptions. We conclude with a numerical example.
|
1106.0664
|
The Complexity of Reasoning about Spatial Congruence
|
cs.AI
|
In the recent literature of Artificial Intelligence, an intensive research
effort has been spent, for various algebras of qualitative relations used in
the representation of temporal and spatial knowledge, on the problem of
classifying the computational complexity of reasoning problems for subsets of
algebras. The main purpose of this research is to describe a restricted set
of maximal tractable subalgebras, ideally in an exhaustive fashion with respect
to the hosting algebras. In this paper we introduce a novel algebra for
reasoning about Spatial Congruence, show that the satisfiability problem in the
spatial algebra MC-4 is NP-complete, and present a complete classification of
tractability in the algebra, based on the identification of three maximal
tractable subclasses, one containing the basic relations. The three algebras
are formed by 14, 10 and 9 relations out of 16 which form the full algebra.
|