| id | title | categories | abstract |
|---|---|---|---|
1110.6221
|
Optimal control with reset-renewable resources
|
math.OC cs.RO
|
We consider both discrete and continuous control problems constrained by a
fixed budget of some resource, which may be renewed upon entering a preferred
subset of the state space. In the discrete case, we consider both deterministic
and stochastic shortest path problems with full budget resets in all preferred
nodes. In the continuous case, we derive augmented PDEs of optimal control,
which are then solved numerically on the extended state space with a
full/instantaneous budget reset on the preferred subset. We introduce an
iterative algorithm for solving these problems efficiently. The method's
performance is demonstrated on a range of computational examples, including
optimal path planning with constraints on prolonged visibility by a static
enemy observer.
In addition, we develop an algorithm that works on the original state
space to solve a related but simpler problem: finding the subsets of the domain
"reachable-within-the-budget".
This manuscript is an extended version of the paper accepted for publication
by SIAM J. on Control and Optimization. In the journal version, Section 3 and
the Appendix were omitted due to space limitations.
|
1110.6251
|
Unique Decoding of Plane AG Codes via Interpolation
|
cs.IT math.IT
|
We present a unique decoding algorithm of algebraic geometry codes on plane
curves, Hermitian codes in particular, from an interpolation point of view. The
algorithm successfully corrects errors of weight up to half of the order bound
on the minimum distance of the AG code. The decoding algorithm is the first to
combine some features of the interpolation based list decoding with the
performance of the syndrome decoding with majority voting scheme. The regular
structure of the algorithm allows a straightforward parallel implementation.
|
1110.6267
|
An empirical analysis of the relationship between web usage and academic
performance in undergraduate students
|
cs.SI cs.CY
|
The use of the internet, and in particular web browsing, offers many
potential advantages for educational institutions as students have access to a
wide range of information previously not available. However, there are
potential negative effects due to factors such as time-wasting and asocial
behaviour.
In this study, we conducted an empirical investigation of the academic
performance and the web-usage patterns of 2153 undergraduate students. Data
from university proxy logs allowed us to examine usage patterns, which we then
compared to the students' academic performance.
The results show that there is a small but significant (both statistically
and educationally) association between heavier web browsing and poorer academic
results (lower average mark, higher failure rates). In addition, among good
students, the proportion of students who are relatively light users of the
internet is significantly greater than would be expected by chance.
|
1110.6287
|
Deciding of HMM parameters based on number of critical points for
gesture recognition from motion capture data
|
cs.LG
|
This paper presents a method of choosing the number of states of an HMM based
on the number of critical points of the motion capture data. The choice of
Hidden Markov Model (HMM) parameters is crucial for the recognizer's
performance, as it is the first step of the training and cannot be corrected
automatically within the HMM. In this article we define a predictor of the
number of states based on the number of critical points of the sequence and
test its effectiveness against sample data.
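The abstract does not give the predictor's formula, so the sketch below is only a hypothetical illustration of the idea: count the interior local extrema (critical points) of a 1-D motion-capture channel and map that count to an HMM state count. The linear mapping, function names, and sample signal are all assumptions, not the paper's method.

```python
# Hypothetical sketch: estimate an HMM state count from the number of
# critical points of a 1-D signal. The mapping below is illustrative only.

def count_critical_points(seq):
    """Count interior local minima and maxima of a numeric sequence."""
    count = 0
    for i in range(1, len(seq) - 1):
        # a sign change of the discrete derivative marks a critical point
        if (seq[i] - seq[i - 1]) * (seq[i + 1] - seq[i]) < 0:
            count += 1
    return count

def predict_num_states(seq, base=1):
    """Illustrative predictor: one state per critical point, plus a base state."""
    return base + count_critical_points(seq)

signal = [0.0, 1.0, 0.5, 1.5, 0.2, 0.8, 0.1]
print(count_critical_points(signal), predict_num_states(signal))  # 5 6
```

Plateaus (equal neighbors) yield a zero product and are not counted, which keeps the predictor insensitive to constant segments of the capture data.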
|
1110.6290
|
Modelling Constraint Solver Architecture Design as a Constraint Problem
|
cs.AI
|
Designing component-based constraint solvers is a complex problem. Some
components are required, some are optional and there are interdependencies
between the components. Because of this, previous approaches to solver design
and modification have been ad-hoc and limited. We present a system that
transforms a description of the components and the characteristics of the
target constraint solver into a constraint problem. Solving this problem yields
the description of a valid solver. Our approach represents a significant step
towards the automated design and synthesis of constraint solvers that are
specialised for individual constraint problem classes or instances.
|
1110.6293
|
The Cubical Homology of Trace Monoids
|
math.AT cs.MA math.KT
|
This article contains an overview of the author's results in a field of
algebraic topology applied in computer science. The relationship between the
cubical homology groups of generalized tori and homology groups of partial
trace monoid actions is described. Algorithms for computing the homology groups
of asynchronous systems, Petri nets, and Mazurkiewicz trace languages are
shown.
|
1110.6296
|
Implications for compressed sensing of a new sampling theorem on the
sphere
|
cs.IT astro-ph.IM math.IT
|
A sampling theorem on the sphere has been developed recently, requiring half
as many samples as alternative equiangular sampling theorems on the sphere. A
reduction by a factor of two in the number of samples required to represent a
band-limited signal on the sphere exactly has important implications for
compressed sensing, both in terms of the dimensionality and sparsity of
signals. We illustrate the impact of this property with an inpainting problem
on the sphere, where we show the superior reconstruction performance when
adopting the new sampling theorem compared to the alternative.
|
1110.6297
|
Sampling theorems and compressive sensing on the sphere
|
cs.IT astro-ph.IM math.IT
|
We discuss a novel sampling theorem on the sphere developed by McEwen & Wiaux
recently through an association between the sphere and the torus. To represent
a band-limited signal exactly, this new sampling theorem requires less than
half the number of samples of other equiangular sampling theorems on the
sphere, such as the canonical Driscoll & Healy sampling theorem. A reduction in
the number of samples required to represent a band-limited signal on the sphere
has important implications for compressive sensing, both in terms of the
dimensionality and sparsity of signals. We illustrate the impact of this
property with an inpainting problem on the sphere, where we show superior
reconstruction performance when adopting the new sampling theorem.
|
1110.6298
|
A novel sampling theorem on the sphere
|
cs.IT astro-ph.IM math.IT
|
We develop a novel sampling theorem on the sphere and corresponding fast
algorithms by associating the sphere with the torus through a periodic
extension. The fundamental property of any sampling theorem is the number of
samples required to represent a band-limited signal. To represent exactly a
signal on the sphere band-limited at L, all sampling theorems on the sphere
require O(L^2) samples. However, our sampling theorem requires less than half
the number of samples of other equiangular sampling theorems on the sphere and
an asymptotically identical, but smaller, number of samples than the
Gauss-Legendre sampling theorem. The complexity of our algorithms scales as
O(L^3); however, the continual use of fast Fourier transforms reduces the
constant prefactor associated with the asymptotic scaling considerably,
resulting in algorithms that are fast. Furthermore, we do not require any
precomputation and our algorithms apply to both scalar and spin functions on
the sphere without any change in computational complexity or computation time.
We make our implementation of these algorithms available publicly and perform
numerical experiments demonstrating their speed and accuracy up to very high
band-limits. Finally, we highlight the advantages of our sampling theorem in
the context of potential applications, notably in the field of compressive
sampling.
|
1110.6317
|
Risk-sensitive Markov control processes
|
math.OC cs.CE math.DS stat.ML
|
We introduce a general framework for measuring risk in the context of Markov
control processes with risk maps on general Borel spaces that generalize known
concepts of risk measures in mathematical finance, operations research and
behavioral economics. Within the framework, applying weighted norm spaces to
incorporate also unbounded costs, we study two types of infinite-horizon
risk-sensitive criteria, discounted total risk and average risk, and solve the
associated optimization problems by dynamic programming. For the discounted
case, we propose a new discount scheme, which is different from the
conventional form but consistent with the existing literature, while for the
average risk criterion, we state Lyapunov-like stability conditions that
generalize known conditions for Markov chains to ensure the existence of
solutions to the optimality equation.
|
1110.6384
|
Backdoors to Acyclic SAT
|
cs.DS cs.AI cs.CC math.CO
|
Backdoor sets, a notion introduced by Williams et al. in 2003, are certain
sets of key variables of a CNF formula F that make it easy to solve the
formula; by assigning truth values to the variables in a backdoor set, the
formula gets reduced to one or several polynomial-time solvable formulas. More
specifically, a weak backdoor set of F is a set X of variables such that there
exists a truth assignment t to X that reduces F to a satisfiable formula F[t]
that belongs to a polynomial-time decidable base class C. A strong backdoor set
is a set X of variables such that for all assignments t to X, the reduced
formula F[t] belongs to C.
We study the problem of finding backdoor sets of size at most k with respect
to the base class of CNF formulas with acyclic incidence graphs, taking k as
the parameter. We show that
1. the detection of weak backdoor sets is W[2]-hard in general but
fixed-parameter tractable for r-CNF formulas, for any fixed r>=3, and
2. the detection of strong backdoor sets is fixed-parameter approximable.
Result 1 is the first positive one for a base class that does not have a
characterization with obstructions of bounded size. Result 2 is the first
positive one for a base class for which strong backdoor sets are more powerful
than deletion backdoor sets.
Not only SAT, but also #SAT can be solved in polynomial time for CNF formulas
with acyclic incidence graphs. Hence Result 2 establishes a new structural
parameter that makes #SAT fixed-parameter tractable and that is incomparable
with known parameters such as treewidth and clique-width.
We obtain the algorithms by a combination of an algorithmic version of the
Erd\"os-P\'osa Theorem, Courcelle's model checking for monadic second order
logic, and new combinatorial results on how disjoint cycles can interact with
the backdoor set.
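To make the weak-backdoor definition above concrete, here is a brute-force sketch (exponential only in |X|): try every assignment to the candidate set X, reduce the CNF formula, and test whether the reduced formula's variable-clause incidence graph is acyclic. For brevity it only checks class membership and that no clause is already falsified; a complete check would also run the base class's polynomial-time SAT test on F[t]. The function names and example formula are illustrative, not from the paper.

```python
# Brute-force weak-backdoor check against the acyclic base class.
from itertools import product

def reduce_formula(clauses, assignment):
    """Drop satisfied clauses, remove falsified literals (literals are +/- ints)."""
    reduced = []
    for clause in clauses:
        if any(assignment.get(abs(l)) == (l > 0) for l in clause):
            continue  # clause satisfied by the assignment
        reduced.append([l for l in clause if abs(l) not in assignment])
    return reduced

def incidence_acyclic(clauses):
    """Union-find cycle check on the bipartite variable-clause incidence graph."""
    parent = {}
    def find(x):
        if parent.setdefault(x, x) != x:
            parent[x] = find(parent[x])
        return parent[x]
    for ci, clause in enumerate(clauses):
        for var in set(abs(l) for l in clause):
            a, b = find(('c', ci)), find(('v', var))
            if a == b:
                return False  # this edge closes a cycle
            parent[a] = b
    return True

def is_weak_backdoor(clauses, X):
    """Does some assignment to X reduce the formula into the acyclic class?"""
    for values in product([True, False], repeat=len(X)):
        reduced = reduce_formula(clauses, dict(zip(X, values)))
        if [] not in reduced and incidence_acyclic(reduced):
            return True
    return False

clauses = [[1, 2], [-1, 2], [1, -2]]  # incidence graph of F itself is cyclic
print(is_weak_backdoor(clauses, [1]), is_weak_backdoor(clauses, []))  # True False
```

The paper's contribution is precisely to avoid this exponential enumeration via fixed-parameter algorithms; the sketch only illustrates the object being searched for.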
|
1110.6387
|
Backdoors to Satisfaction
|
cs.DS cs.AI cs.CC math.CO
|
A backdoor set is a set of variables of a propositional formula such that
fixing the truth values of the variables in the backdoor set moves the formula
into some polynomial-time decidable class. If we know a small backdoor set we
can reduce the question of whether the given formula is satisfiable to the same
question for one or several easy formulas that belong to the tractable class
under consideration. In this survey we review parameterized complexity results
for problems that arise in the context of backdoor sets, such as the problem of
finding a backdoor set of size at most k, parameterized by k. We also discuss
recent results on backdoor sets for problems that are beyond NP.
|
1110.6426
|
A Distributed Power Control and Transmission Rate Allocation Algorithm
over Multiple Channels
|
math.DS cs.IT math.IT
|
In this paper, we consider multiple channels and wireless nodes with multiple
transceivers. Each node assigns one transmitter at each available channel. For
each assigned transmitter the node decides the power level and data rate of
transmission in a distributed fashion, such that certain Quality of Service
(QoS) demands for the wireless node are satisfied. More specifically, we
investigate the case in which the average SINR over all channels for each
communication pair is kept above a certain threshold. A joint distributed power
and rate control algorithm for each transmitter is proposed that dynamically
adjusts the data rate to meet a target SINR at each channel, and to update the
power levels allowing for variable desired SINRs. The algorithm is fully
distributed and requires only local interference measurements. The performance
of the proposed algorithm is shown through illustrative examples.
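The abstract does not state the algorithm's equations, so as an illustration of the flavor of distributed SINR-tracking power control it builds on, here is the classic Foschini-Miljanic update, in which each transmitter scales its power by target-SINR / measured-SINR using only a local measurement. The gain matrix, noise level, and SINR targets are made-up numbers, and this is not the paper's joint power-and-rate scheme.

```python
# Illustrative distributed power control (Foschini-Miljanic style update).

def sinr(gains, powers, i, noise=0.1):
    """Measured SINR of link i (gains[i][i] is the direct gain)."""
    interference = sum(gains[i][j] * powers[j]
                       for j in range(len(powers)) if j != i)
    return gains[i][i] * powers[i] / (noise + interference)

def power_update(gains, powers, targets, noise=0.1):
    """One synchronous round: scale each power toward its SINR target."""
    return [powers[i] * targets[i] / sinr(gains, powers, i, noise)
            for i in range(len(powers))]

gains = [[1.0, 0.1], [0.1, 1.0]]   # direct gains 1.0, cross gains 0.1
targets = [2.0, 2.0]               # desired SINR per link
powers = [1.0, 1.0]
for _ in range(50):
    powers = power_update(gains, powers, targets)
print([round(sinr(gains, powers, i), 3) for i in range(2)])  # [2.0, 2.0]
```

When the target vector is feasible, the iteration converges geometrically to the minimal power vector meeting all targets, which is the sense in which such updates are "fully distributed" while satisfying QoS demands.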
|
1110.6437
|
Anthropic decision theory
|
physics.data-an cs.AI hep-th physics.pop-ph
|
This paper sets out to resolve how agents ought to act in the Sleeping Beauty
problem and various related anthropic (self-locating belief) problems, not
through the calculation of anthropic probabilities, but through finding the
correct decision to make. It creates an anthropic decision theory (ADT) that
decides these problems from a small set of principles. By doing so, it
demonstrates that the attitude of agents with regards to each other (selfish or
altruistic) changes the decisions they reach, and that it is very important to
take this into account. To illustrate ADT, it is then applied to two major
anthropic problems and paradoxes, the Presumptuous Philosopher and Doomsday
problems, thus resolving some issues about the probability of human extinction.
|
1110.6476
|
Key Generation Using External Source Excitation: Capacity, Reliability,
and Secrecy Exponent
|
cs.IT math.IT
|
We study the fundamental limits to secret key generation from an excited
distributed source (EDS). In an EDS a pair of terminals observe dependent
sources of randomness excited by a pre-arranged signal. We first determine the
secret key capacity for such systems with one-way public messaging. We then
characterize a tradeoff between the secret key rate and exponential bounds on
the probability of key agreement failure and on the secrecy of the key
generated. We find that there is a fundamental tradeoff between reliability and
secrecy.
We then explore this framework within the context of reciprocal wireless
channels. In this setting, the users transmit pre-arranged excitation signals
to each other. When the fading is Rayleigh, the observations of the users are
jointly Gaussian sources. We show that an on-off excitation signal with an
SNR-dependent duty cycle achieves the secret key capacity of this system.
Furthermore, we characterize a fundamental metric -- minimum energy per key bit
for reliable key generation -- and show that in contrast to conventional AWGN
channels, there is a non-zero threshold SNR that achieves the minimum energy
per key bit. The capacity achieving on-off excitation signal achieves the
minimum energy per key bit at any SNR below the threshold. Finally, we build
off our error exponent results to investigate the energy required to generate a
key using a finite block length. Again we find that on-off excitation signals
yield an improvement when compared to constant excitation signals. In addition
to Rayleigh fading, we analyze the performance of a system based on binary
channel phase quantization.
|
1110.6483
|
Iris Codes Classification Using Discriminant and Witness Directions
|
cs.NE cs.AI cs.CV
|
The main topic discussed in this paper is how to use intelligence for
biometric decision defuzzification. A neural training model is proposed and
tested here as a possible solution for dealing with natural fuzzification that
appears between the intra- and inter-class distribution of scores computed
during iris recognition tests. It is shown here that the use of proposed neural
network support leads to an improvement in the artificial perception of the
separation between the intra- and inter-class score distributions by moving
them away from each other.
|
1110.6487
|
On the Feedback Capacity of the Fully Connected $K$-User Interference
Channel
|
cs.IT math.IT
|
The symmetric K-user interference channel with fully connected topology is
considered, in which (a) each receiver suffers interference from all other
(K-1) transmitters, and (b) each transmitter has causal and noiseless feedback
from its respective receiver. The number of generalized degrees of freedom
(GDoF) is characterized in terms of \alpha, where the interference-to-noise
ratio (INR) is given by INR=SNR^\alpha. It is shown that the per-user GDoF of
this network is the same as that of the 2-user interference channel with
feedback, except for \alpha=1, for which the existence of feedback does not
help in terms of GDoF. The coding scheme proposed for this network, termed cooperative
interference alignment, is based on two key ingredients, namely, interference
alignment and interference decoding. Moreover, an approximate characterization
is provided for the symmetric feedback capacity of the network, when the SNR
and INR are far apart from each other.
|
1110.6589
|
A cognitive diversity framework for radar target classification
|
cs.AI
|
Classification of targets by radar has proved to be notoriously difficult
with the best systems still yet to attain sufficiently high levels of
performance and reliability. In the current contribution we explore a new
design of radar-based target recognition, where angular diversity is used in a
cognitive manner to attain better performance. Performance is benchmarked
against conventional classification schemes. The proposed scheme can easily be
extended to cognitive target recognition based on multiple diversity
strategies.
|
1110.6590
|
New constructions of WOM codes using the Wozencraft ensemble
|
cs.IT math.IT
|
In this paper we give several new constructions of WOM codes. The novelty in
our constructions is the use of the so-called Wozencraft ensemble of linear
codes. Specifically, we obtain the following results.
We give an explicit construction of a two-write Write-Once-Memory (WOM for
short) code that approaches capacity, over the binary alphabet. More formally,
for every \epsilon>0, 0<p<1 and n =(1/\epsilon)^{O(1/p\epsilon)} we give a
construction of a two-write WOM code of length n and capacity
H(p)+1-p-\epsilon. Since the capacity of a two-write WOM code is max_p
(H(p)+1-p), we get a code that is \epsilon-close to capacity. Furthermore,
encoding and decoding can be done in time O(n^2 poly(log n)) and time
O(n poly(log n)), respectively, and in logarithmic space.
We obtain a new encoding scheme for 3-write WOM codes over the binary
alphabet. Our scheme achieves rate 1.809-\epsilon, when the block length is
exp(1/\epsilon). This gives a better rate than what could be achieved using
previous techniques.
We highlight a connection to linear seeded extractors for bit-fixing sources.
In particular we show that obtaining such an extractor with seed length O(log
n) can lead to improved parameters for 2-write WOM codes. We then give an
application of existing constructions of extractors to the problem of designing
encoding schemes for memory with defects.
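The capacity expression quoted above, max_p (H(p)+1-p), can be checked numerically; the short script below grid-searches over p and recovers the maximizer p = 1/3 with maximum value log2(3) ≈ 1.585, the known two-write WOM capacity.

```python
# Numerical sanity check of max_p (H(p) + 1 - p) for two-write WOM codes.
import math

def binary_entropy(p):
    """Binary entropy H(p) in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def two_write_rate(p):
    """The rate expression from the construction above."""
    return binary_entropy(p) + 1 - p

# grid search over (0, 1); the maximum is attained at p = 1/3
best_p = max((i / 10000 for i in range(1, 10000)), key=two_write_rate)
print(round(best_p, 4), round(two_write_rate(best_p), 4))  # 0.3333 1.585
```

This confirms why the construction's rate H(p)+1-p-\epsilon at p near 1/3 is \epsilon-close to capacity.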
|
1110.6591
|
On some quasigroup cryptographical primitives
|
math.GR cs.CR cs.IT math.IT
|
We propose modifications of known quasigroup based stream ciphers. Systems of
orthogonal n-ary groupoids are used.
|
1110.6647
|
On Predictive Modeling for Optimizing Transaction Execution in Parallel
OLTP Systems
|
cs.DB
|
A new emerging class of parallel database management systems (DBMS) is
designed to take advantage of the partitionable workloads of on-line
transaction processing (OLTP) applications. Transactions in these systems are
optimized to execute to completion on a single node in a shared-nothing cluster
without needing to coordinate with other nodes or use expensive concurrency
control measures. But some OLTP applications cannot be partitioned such that
all of their transactions execute within a single-partition in this manner.
These distributed transactions access data not stored within their local
partitions and subsequently require more heavy-weight concurrency control
protocols. Further difficulties arise when the transaction's execution
properties, such as the number of partitions it may need to access or whether
it will abort, are not known beforehand. The DBMS could mitigate these
performance issues if it is provided with additional information about
transactions. Thus, in this paper we present a Markov model-based approach for
automatically selecting which optimizations a DBMS could use, namely (1) more
efficient concurrency control schemes, (2) intelligent scheduling, (3) reduced
undo logging, and (4) speculative execution. To evaluate our techniques, we
implemented our models and integrated them into a parallel, main-memory OLTP
DBMS to show that we can improve the performance of applications with diverse
workloads.
|
1110.6648
|
View Selection in Semantic Web Databases
|
cs.DB
|
We consider the setting of a Semantic Web database, containing both explicit
data encoded in RDF triples, and implicit data, implied by the RDF semantics.
Based on a query workload, we address the problem of selecting a set of views
to be materialized in the database, minimizing a combination of query
processing, view storage, and view maintenance costs. Starting from an existing
relational view selection method, we devise new algorithms for recommending
view sets, and show that they scale significantly beyond the existing
relational ones when adapted to the RDF context. To account for implicit
triples in query answers, we propose a novel RDF query reformulation algorithm
and an innovative way of incorporating it into view selection in order to avoid
a combinatorial explosion in the complexity of the selection process. The
interest of our techniques is demonstrated through a set of experiments.
|
1110.6649
|
Building Wavelet Histograms on Large Data in MapReduce
|
cs.DB
|
MapReduce is becoming the de facto framework for storing and processing
massive data, due to its excellent scalability, reliability, and elasticity. In
many MapReduce applications, obtaining a compact accurate summary of data is
essential. Among various data summarization tools, histograms have proven to be
particularly important and useful for summarizing data, and the wavelet
histogram is one of the most widely used histograms. In this paper, we
investigate the problem of building wavelet histograms efficiently on large
datasets in MapReduce. We measure the efficiency of the algorithms by both
end-to-end running time and communication cost. We demonstrate that
straightforward adaptations of existing exact and approximate methods for
building wavelet histograms to MapReduce clusters are highly inefficient. To
address this, we design
new algorithms for computing exact and approximate wavelet histograms and
discuss their implementation in MapReduce. We illustrate our techniques in
Hadoop, and compare to baseline solutions with extensive experiments performed
in a heterogeneous Hadoop cluster of 16 nodes, using large real and synthetic
datasets, up to hundreds of gigabytes. The results suggest significant (often
orders of magnitude) performance improvement achieved by our new algorithms.
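The paper's MapReduce algorithms are not spelled out in the abstract; as background, a wavelet histogram on a single machine amounts to a Haar transform of the frequency vector followed by keeping the k largest-magnitude coefficients. The sketch below shows only that single-machine core (un-normalized Haar transform, made-up data), not the distributed computation that is the paper's contribution.

```python
# Single-machine core of a wavelet histogram: Haar transform + top-k coefficients.

def haar_transform(freqs):
    """Un-normalized Haar decomposition of a power-of-two length vector."""
    coeffs = []
    data = list(freqs)
    while len(data) > 1:
        averages = [(data[2*i] + data[2*i+1]) / 2 for i in range(len(data)//2)]
        details = [(data[2*i] - data[2*i+1]) / 2 for i in range(len(data)//2)]
        coeffs = details + coeffs   # coarser details go to the front
        data = averages
    return data + coeffs  # overall average first, then detail coefficients

def top_k_summary(freqs, k):
    """Keep the k largest-magnitude coefficients; they form the histogram."""
    coeffs = haar_transform(freqs)
    keep = sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]),
                  reverse=True)[:k]
    return {i: coeffs[i] for i in sorted(keep)}

freqs = [8, 6, 2, 4, 1, 1, 0, 4]
print(top_k_summary(freqs, 3))  # {0: 3.25, 2: 2.0, 7: -2.0}
```

Reconstructing from only the kept coefficients (zeroing the rest) yields the standard wavelet-histogram approximation of the frequency vector.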
|
1110.6650
|
Summarization and Matching of Density-Based Clusters in Streaming
Environments
|
cs.DB
|
Density-based cluster mining is known to serve a broad range of applications
ranging from stock trade analysis to moving object monitoring. Although methods
for efficient extraction of density-based clusters have been studied in the
literature, the problem of summarizing and matching of such clusters with
arbitrary shapes and complex cluster structures remains unsolved. Therefore,
the goal of our work is to extend the state of the art of density-based
cluster mining in streams beyond cluster extraction to also support analysis
and management of the extracted clusters. Our work solves three major technical
challenges. First, we propose a novel multi-resolution cluster summarization
method, called Skeletal Grid Summarization (SGS), which captures the key
features of density-based clusters, covering both their external shape and
internal cluster structures. Second, in order to summarize the extracted
clusters in real-time, we present an integrated computation strategy C-SGS,
which piggybacks the generation of cluster summarizations within the online
clustering process. Lastly, we design a mechanism to efficiently execute
cluster matching queries, which identify, among clusters extracted earlier in
the stream history, those similar to a given cluster of the analyst's interest. Our
experimental study using real streaming data shows the clear superiority of our
proposed methods in both efficiency and effectiveness for cluster summarization
and cluster matching queries to other potential alternatives.
|
1110.6651
|
Multilingual Schema Matching for Wikipedia Infoboxes
|
cs.DB
|
Recent research has taken advantage of Wikipedia's multilingualism as a
resource for cross-language information retrieval and machine translation, as
well as proposed techniques for enriching its cross-language structure. The
availability of documents in multiple languages also opens up new opportunities
for querying structured Wikipedia content, and in particular, to enable answers
that straddle different languages. As a step towards supporting such queries,
in this paper, we propose a method for identifying mappings between attributes
from infoboxes that come from pages in different languages. Our approach finds
mappings in a completely automated fashion. Because it does not require
training data, it is scalable: not only can it be used to find mappings between
many language pairs, but it is also effective for languages that are
under-represented and lack sufficient training samples. Another important
benefit of our approach is that it does not depend on syntactic similarity
between attribute names, and thus, it can be applied to language pairs that
have distinct morphologies. We have performed an extensive experimental
evaluation using a corpus consisting of pages in Portuguese, Vietnamese, and
English. The results show that not only does our approach obtain high precision
and recall, but it also outperforms state-of-the-art techniques. We also
present a case study which demonstrates that the multilingual mappings we
derive lead to substantial improvements in answer quality and coverage for
structured queries over Wikipedia content.
|
1110.6652
|
Controlling False Positives in Association Rule Mining
|
cs.DB
|
Association rule mining is an important problem in the data mining area. It
enumerates and tests a large number of rules on a dataset and outputs rules
that satisfy user-specified constraints. Due to the large number of rules being
tested, rules that do not represent real systematic effect in the data can
satisfy the given constraints purely by random chance. Hence association rule
mining often suffers from a high risk of false positive errors. There is a
lack of comprehensive studies on controlling false positives in association
rule mining. In this paper, we adopt three multiple testing correction
approaches---the direct adjustment approach, the permutation-based approach and
the holdout approach---to control false positives in association rule mining,
and conduct extensive experiments to study their performance. Our results show
that (1) Numerous spurious rules are generated if no correction is made. (2)
The three approaches can control false positives effectively. Among the three
approaches, the permutation-based approach has the highest power of detecting
real association rules, but it is very computationally expensive. We employ
several techniques to reduce its cost effectively.
|
1110.6654
|
Pointwise Relations between Information and Estimation in Gaussian Noise
|
cs.IT math.IT
|
Many of the classical and recent relations between information and estimation
in the presence of Gaussian noise can be viewed as identities between
expectations of random quantities. These include the I-MMSE relationship of Guo
et al.; the relative entropy and mismatched estimation relationship of
Verd\'{u}; the relationship between causal estimation and mutual information of
Duncan, and its extension to the presence of feedback by Kadota et al.; the
relationship between causal and non-causal estimation of Guo et al., and its
mismatched version of Weissman. We dispense with the expectations and explore
the nature of the pointwise relations between the respective random quantities.
The pointwise relations that we find are as succinctly stated as - and give
considerable insight into - the original expectation identities.
As an illustration of our results, consider Duncan's 1970 discovery that the
mutual information is equal to the causal MMSE in the AWGN channel, which can
equivalently be expressed saying that the difference between the input-output
information density and half the causal estimation error is a zero mean random
variable (regardless of the distribution of the channel input). We characterize
this random variable explicitly, rather than merely its expectation. Classical
estimation and information theoretic quantities emerge with new and surprising
roles. For example, the variance of this random variable turns out to be given
by the causal MMSE (which, in turn, is equal to the mutual information by
Duncan's result).
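For concreteness, Duncan's identity referenced above can be stated for the continuous-time AWGN channel $dY_t = X_t\,dt + dW_t$ observed over $[0,T]$ at unit SNR (conventions for the SNR scaling and the factor 1/2 vary across the literature):

```latex
% Duncan's identity: mutual information equals half the causal MMSE
I\bigl(X_0^T; Y_0^T\bigr)
  = \tfrac{1}{2}\,\mathbb{E}\!\int_0^T
    \bigl(X_t - \mathbb{E}[X_t \mid Y_0^t]\bigr)^2 \,dt .

% Pointwise form studied in the paper: the random variable
i\bigl(X_0^T; Y_0^T\bigr)
  - \tfrac{1}{2}\int_0^T \bigl(X_t - \mathbb{E}[X_t \mid Y_0^t]\bigr)^2 \,dt
% has zero mean for any input distribution; per the abstract, its variance
% is itself given by the causal MMSE.
```

Here $i(\cdot;\cdot)$ denotes the input-output information density, whose expectation is the mutual information $I(\cdot;\cdot)$.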
|
1110.6698
|
An algebraic approach to source coding with side information using list
decoding
|
cs.IT math.IT
|
Existing literature on source coding with side information (SCSI) mostly uses
the state-of-the-art channel codes namely LDPC codes, turbo codes, and their
variants and assume classical unique decoding. In this paper, we present an
algebraic approach to SCSI based on the list decoding of the underlying channel
codes. We show that the theoretical limit of SCSI can be achieved in the
proposed list decoding based framework when the correlation between the source
and side information is $q$-ary symmetric. We argue that, as opposed to
channel coding, the correct sequence from the list produced by the list
decoder can effectively be recovered in the case of SCSI with a few CRC
symbols. The CRC
symbols, which allow the decoder to identify the correct sequence, incur
negligible overhead for large block lengths. More importantly, these CRC
symbols are not subject to noise since we are dealing with a virtual noisy
channel rather than a real noisy channel. Finally, we present a guideline for
designing constructive SCSI schemes for non-binary and binary sources using
Reed Solomon codes and BCH codes, respectively. This guideline allows us to
design a SCSI scheme for any arbitrary $q$-ary symmetric correlation without
resorting to simulation.
|
1110.6739
|
The Binary Perfect Phylogeny with Persistent characters
|
cs.DS cs.CE
|
The binary perfect phylogeny model is too restrictive to model biological
events such as back mutations. In this paper we consider a natural
generalization of the model that allows a special type of back mutation. We
investigate the problem of reconstructing a near perfect phylogeny over a
binary set of characters where characters are persistent: characters can be
gained and lost at most once. Based on this notion, we define the problem of
the Persistent Perfect Phylogeny (referred to as P-PP). We restate the P-PP
problem as a special case of the Incomplete Directed Perfect Phylogeny, called
Incomplete Perfect Phylogeny with Persistent Completion (referred to as IP-PP),
where the instance is an incomplete binary matrix M having some missing
entries, denoted by symbol ?, that must be determined (or completed) as 0 or 1
so that M admits a binary perfect phylogeny. We show that the IP-PP problem can
be reduced to a problem over an edge colored graph since the completion of each
column of the input matrix can be represented by a graph operation. Based on
this graph formulation, we develop an exact algorithm for solving the P-PP
problem that is exponential in the number of characters and polynomial in the
number of species.
|
1110.6755
|
PAC-Bayes-Bernstein Inequality for Martingales and its Application to
Multiarmed Bandits
|
cs.LG
|
We develop a new tool for data-dependent analysis of the
exploration-exploitation trade-off in learning under limited feedback. Our tool
is based on two main ingredients. The first ingredient is a new concentration
inequality that makes it possible to control the concentration of weighted
averages of multiple (possibly uncountably many) simultaneously evolving and
interdependent martingales. The second ingredient is an application of this
inequality to the exploration-exploitation trade-off via importance weighted
sampling. We apply the new tool to the stochastic multiarmed bandit problem;
however, the main importance of this paper is the development and understanding
of the new tool rather than the improvement of existing algorithms for
stochastic multiarmed bandits. In follow-up work we demonstrate that the new
tool can
improve over state-of-the-art in structurally richer problems, such as
stochastic multiarmed bandits with side information (Seldin et al., 2011a).
|
1110.6778
|
Towards Optimal CSI Allocation in Multicell MIMO Channels
|
cs.IT math.IT
|
In this work, we consider the joint precoding across K transmitters (TXs),
sharing the knowledge of the user's data symbols to be transmitted towards K
single-antenna receivers (RXs). We consider a distributed channel state
information (DCSI) configuration where each TX has its own local estimate of
the overall multiuser MIMO channel. The focus of this work is on the
optimization of the allocation of the CSI feedback subject to a constraint on
the total sharing through the backhaul network. Building upon the Wyner model,
we derive a new approach to allocate the CSI feedback while making efficient
use of the pathloss structure to reduce the amount of feedback necessary. We
show that the proposed CSI allocation achieves good performance with only a
number of CSI bits per TX which does not scale with the number of cooperating
TXs, thus making the joint transmission from a large number of TXs more
practical than previously thought. Indeed, the proposed CSI allocation reduces
the cooperation to a local scale, which also allows for a reduced allocation of
the user's data symbols. We further show that the approach can be extended to a
more general class of channels: exponentially decaying channels, which
accurately model the cooperation of TXs located in a one-dimensional space.
Finally,
we verify by simulations that the proposed CSI allocation leads to very little
performance losses.
|
1110.6787
|
Spatially Coupled Repeat-Accumulate Codes
|
cs.IT math.IT
|
In this paper we propose a new class of spatially coupled codes based on
repeat-accumulate protographs. We show that spatially coupled repeat-accumulate
codes have several advantages over spatially coupled low-density parity-check
codes including simpler encoders and slightly higher code rates than spatially
coupled low-density parity-check codes with similar thresholds and decoding
complexity (as measured by the Tanner graph edge density).
|
1110.6832
|
Multicommodity Flows and Cuts in Polymatroidal Networks
|
cs.DS cs.DM cs.IT math.IT
|
We consider multicommodity flow and cut problems in {\em polymatroidal}
networks where there are submodular capacity constraints on the edges incident
to a node. Polymatroidal networks were introduced by Lawler and Martel and
Hassin in the single-commodity setting and are closely related to the
submodular flow model of Edmonds and Giles; the well-known maxflow-mincut
theorem holds in this more general setting. Polymatroidal networks for the
multicommodity case have not, as far as the authors are aware, been previously
explored. Our work is primarily motivated by applications to information flow
in wireless networks. We also consider the notion of undirected polymatroidal
networks and observe that they provide a natural way to generalize flows and
cuts in edge and node capacitated undirected networks.
We establish poly-logarithmic flow-cut gap results in several scenarios that
have been previously considered in the standard network flow models where
capacities are on the edges or nodes. Our results have already found
applications in wireless network information flow and we anticipate more in the
future. On the technical side our key tools are the formulation and analysis of
the dual of the flow relaxations via continuous extensions of submodular
functions, in particular the Lov\'asz extension. For directed graphs we rely on
a simple yet useful reduction from polymatroidal networks to standard networks.
For undirected graphs we rely on the interplay between the Lov\'asz extension
of a submodular function and line embeddings with low average distortion
introduced by Matousek and Rabinovich; this connection is inspired by, and
generalizes, the work of Feige, Hajiaghayi and Lee on node-capacitated
multicommodity flows and cuts. The applicability of embeddings to polymatroidal
networks is of independent mathematical interest.
|
1110.6850
|
Virtual communities? The Middle East revolutions at the Guardian forum:
Comment Is Free
|
physics.soc-ph cs.SI
|
We investigate the possibility of virtual community formation in an online
social network under a rapid increase in the activity of members and newcomers.
We study the evolution of the activity of online users at the Guardian's
Comment Is Free forum, covering topics related to the Middle East turmoil
during the period from 1 January 2010 to 28 March 2011. Despite a threefold
upsurge of forum users and the formation of a giant component, the main network
characteristics, i.e. degree and weight distribution and clustering
coefficient, remained almost unchanged.
|
1110.6864
|
Asymptotics for numbers of line segments and lines in a square grid
|
math.NT cs.IT math.CO math.IT
|
We present an asymptotic formula for the number of line segments connecting
q+1 points of an n x n square grid, and a sharper formula assuming the Riemann
hypothesis. We also present asymptotic formulas for the number of lines through
at least q points and, respectively, through exactly q points of the grid. The
well-known case q=2 is thus generalized.
|
1110.6886
|
PAC-Bayesian Inequalities for Martingales
|
cs.LG cs.IT math.IT stat.ML
|
We present a set of high-probability inequalities that control the
concentration of weighted averages of multiple (possibly uncountably many)
simultaneously evolving and interdependent martingales. Our results extend the
PAC-Bayesian analysis in learning theory from the i.i.d. setting to martingales
opening the way for its application to importance weighted sampling,
reinforcement learning, and other interactive learning domains, as well as many
other domains in probability theory and statistics, where martingales are
encountered.
We also present a comparison inequality that bounds the expectation of a
convex function of a martingale difference sequence shifted to the [0,1]
interval by the expectation of the same function of independent Bernoulli
variables. This inequality is applied to derive a tighter analog of
Hoeffding-Azuma's inequality.
|
1110.6916
|
Multi-Terminal Source Coding With Action Dependent Side Information
|
cs.IT math.IT
|
We consider multi-terminal source coding with a single encoder and multiple
decoders where either the encoder or the decoders can take cost constrained
actions which affect the quality of the side information present at the
decoders. For the scenario where decoders take actions, we characterize the
rate-cost trade-off region for lossless source coding, and give an
achievability scheme for lossy source coding for two decoders which is optimum
for a variety of special cases of interest. For the case where the encoder
takes actions, we characterize the rate-cost trade-off for a class of lossless
source coding scenarios with multiple decoders. Finally, we also consider
extensions to other multi-terminal source coding settings with actions, and
characterize the rate-distortion-cost tradeoff for a case of successive
refinement with actions.
|
1111.0024
|
Text-Independent Speaker Recognition for Low SNR Environments with
Encryption
|
cs.SD cs.CR cs.LG eess.AS
|
Recognition systems are commonly designed to authenticate users at the access
control levels of a system. A number of voice recognition methods have been
developed using a pitch estimation process that is very vulnerable in low
Signal-to-Noise Ratio (SNR) environments; thus, these programs fail to provide
the desired level of accuracy and robustness. Also, most text-independent
speaker recognition programs are incapable of coping with unauthorized attempts
to gain access by tampering with the samples or reference database. The
proposed text-independent voice recognition system makes use of multilevel
cryptography to preserve data integrity while in transit or storage. Encryption
and decryption follow a transform based approach layered with pseudorandom
noise addition whereas for pitch detection, a modified version of the
autocorrelation pitch extraction algorithm is used. The experimental results
show that the proposed algorithm can decrypt the signal under test with
exponentially decreasing Mean Square Error over an increasing range of SNR.
Further, it outperforms the conventional algorithms in actual identification
tasks even in noisy environments. The recognition rate thus obtained using the
proposed method is compared with other conventional methods used for speaker
identification.
|
1111.0033
|
Optimized reduction of uncertainty in bursty human dynamics
|
physics.soc-ph cs.SI
|
Human dynamics is known to be inhomogeneous and bursty but the detailed
understanding of the role of human factors in bursty dynamics is still lacking.
In order to investigate their role we devise an agent-based model, where an
agent in an uncertain situation tries to reduce the uncertainty by
communicating with information providers while having to wait for responses.
Here the waiting time can be considered a cost. We show that the
optimal choice of the waiting time under uncertainty gives rise to the bursty
dynamics, characterized by the heavy-tailed distribution of optimal waiting
time. We find that in all cases the efficiency for communication is relevant to
the scaling behavior of the optimal waiting time distribution. On the other
hand, the cost turns out to be irrelevant in some cases, depending on the
degree of uncertainty and efficiency.
|
1111.0034
|
Diffusion Adaptation Strategies for Distributed Optimization and
Learning over Networks
|
math.OC cs.IT cs.LG cs.SI math.IT physics.soc-ph
|
We propose an adaptive diffusion mechanism to optimize a global cost function
in a distributed manner over a network of nodes. The cost function is assumed
to consist of a collection of individual components. Diffusion adaptation
allows the nodes to cooperate and diffuse information in real-time; it also
helps alleviate the effects of stochastic gradient noise and measurement noise
through a continuous learning process. We analyze the mean-square-error
performance of the algorithm in some detail, including its transient and
steady-state behavior. We also apply the diffusion algorithm to two problems:
distributed estimation with sparse parameters and distributed localization.
Compared to well-studied incremental methods, diffusion methods do not require
the use of a cyclic path over the nodes and are robust to node and link
failure. Diffusion methods also endow networks with adaptation abilities that
enable the individual nodes to continue learning even when the cost function
changes with time. Examples involving such dynamic cost functions with moving
targets are common in the context of biological networks.
|
1111.0039
|
Reasoning with Very Expressive Fuzzy Description Logics
|
cs.AI
|
It is widely recognized today that the management of imprecision and
vagueness will yield more intelligent and realistic knowledge-based
applications. Description Logics (DLs) are a family of knowledge representation
languages that have gained considerable attention over the last decade, mainly
due to their decidability and the existence of empirically high-performance
reasoning algorithms. In this paper, we extend the well-known fuzzy ALC DL to
the fuzzy SHIN DL, which extends the fuzzy ALC DL with transitive role axioms
(S), inverse roles (I), role hierarchies (H) and number restrictions (N). We
illustrate why transitive role axioms are difficult to handle in the presence
of fuzzy interpretations and how to handle them properly. Then we extend these
results by adding role hierarchies and finally number restrictions. The main
contributions of the paper are the decidability proof of the fuzzy DL languages
fuzzy-SI and fuzzy-SHIN, as well as decision procedures for the knowledge base
satisfiability problem of fuzzy-SI and fuzzy-SHIN.
|
1111.0040
|
New Inference Rules for Max-SAT
|
cs.AI
|
Exact Max-SAT solvers, compared with SAT solvers, apply little inference at
each node of the proof tree. Commonly used SAT inference rules like unit
propagation produce a simplified formula that preserves satisfiability but,
unfortunately, solving the Max-SAT problem for the simplified formula is not
equivalent to solving it for the original formula. In this paper, we define a
number of original inference rules that, besides being applied efficiently,
transform Max-SAT instances into equivalent Max-SAT instances which are easier
to solve. The soundness of the rules, which can be seen as refinements of unit
resolution adapted to Max-SAT, is proved in a novel and simple way via an
integer programming transformation. With the aim of finding out how powerful
the inference rules are in practice, we have developed a new Max-SAT solver,
called MaxSatz, which incorporates those rules, and performed an experimental
investigation. The results provide empirical evidence that MaxSatz is very
competitive, at least, on random Max-2SAT, random Max-3SAT, Max-Cut, and Graph
3-coloring instances, as well as on the benchmarks from the Max-SAT Evaluation
2006.
|
1111.0041
|
On the Formal Semantics of Speech-Act Based Communication in an
Agent-Oriented Programming Language
|
cs.AI cs.MA cs.PL
|
Research on agent communication languages has typically taken the speech acts
paradigm as its starting point. Despite their manifest attractions, speech-act
models of communication have several serious disadvantages as a foundation for
communication in artificial agent systems. In particular, it has proved to be
extremely difficult to give a satisfactory semantics to speech-act based agent
communication languages. In part, the problem is that speech-act semantics
typically make reference to the "mental states" of agents (their beliefs,
desires, and intentions), and there is in general no way to attribute such
attitudes to arbitrary computational agents. In addition, agent programming
languages have only had their semantics formalised for abstract, stand-alone
versions, neglecting aspects such as communication primitives. With respect to
communication, implemented agent programming languages have tended to be rather
ad hoc. This paper addresses both of these problems, by giving semantics to
speech-act based messages received by an AgentSpeak agent. AgentSpeak is a
logic-based agent programming language which incorporates the main features of
the PRS model of reactive planning systems. The paper builds upon a structural
operational semantics to AgentSpeak that we developed in previous work. The
main contributions of this paper are as follows: an extension of our earlier
work on the theoretical foundations of AgentSpeak interpreters; a
computationally grounded semantics for (the core) performatives used in
speech-act based agent communication languages; and a well-defined extension of
AgentSpeak that supports agent communication.
|
1111.0043
|
Obtaining Reliable Feedback for Sanctioning Reputation Mechanisms
|
cs.AI
|
Reputation mechanisms offer an effective alternative to verification
authorities for building trust in electronic markets with moral hazard. Future
clients guide their business decisions by considering the feedback from past
transactions; if truthfully exposed, cheating behavior is sanctioned and thus
becomes irrational.
It therefore becomes important to ensure that rational clients have the right
incentives to report honestly. As an alternative to side-payment schemes that
explicitly reward truthful reports, we show that honesty can emerge as a
rational behavior when clients have a repeated presence in the market. To this
end we describe a mechanism that supports an equilibrium where truthful
feedback is obtained. Then we characterize the set of Pareto-optimal equilibria
of the mechanism, and derive an upper bound on the percentage of false reports
that can be recorded by the mechanism. An important role in the existence of
this bound is played by the fact that rational clients can establish a
reputation for reporting honestly.
|
1111.0044
|
Probabilistic Planning via Heuristic Forward Search and Weighted Model
Counting
|
cs.AI
|
We present a new algorithm for probabilistic planning with no observability.
Our algorithm, called Probabilistic-FF, extends the heuristic forward-search
machinery of Conformant-FF to problems with probabilistic uncertainty about
both the initial state and action effects. Specifically, Probabilistic-FF
combines Conformant-FF's techniques with a powerful machinery for weighted model
counting in (weighted) CNFs, serving to elegantly define both the search space
and the heuristic function. Our evaluation of Probabilistic-FF shows its fine
scalability in a range of probabilistic domains, constituting a several orders
of magnitude improvement over previous results in this area. We use a
problematic case to point out the main open issue to be addressed by further
research.
|
1111.0045
|
Query-time Entity Resolution
|
cs.DB cs.AI
|
Entity resolution is the problem of reconciling database references
corresponding to the same real-world entities. Given the abundance of publicly
available databases that have unresolved entities, we motivate the problem of
query-time entity resolution: quick and accurate resolution for answering
queries over such unclean databases at query time. Since collective entity
resolution approaches --- where related references are resolved jointly ---
have been shown to be more accurate than independent attribute-based resolution
for off-line entity resolution, we focus on developing new algorithms for
collective resolution for answering entity resolution queries at query-time.
For this purpose, we first formally show that, for collective resolution,
precision and recall for individual entities follow a geometric progression as
neighbors at increasing distances are considered. Unfolding this progression
leads naturally to a two stage expand and resolve query processing strategy. In
this strategy, we first extract the related records for a query using two novel
expansion operators, and then resolve the extracted records collectively. We
then show how the same strategy can be adapted for query-time entity resolution
by identifying and resolving only those database references that are the most
helpful for processing the query. We validate our approach on two large
real-world publication databases where we show the usefulness of collective
resolution and at the same time demonstrate the need for adaptive strategies
for query processing. We then show how the same queries can be answered in
real-time using our adaptive approach while preserving the gains of collective
resolution. In addition to experiments on real datasets, we use synthetically
generated data to empirically demonstrate the validity of the performance
trends predicted by our analysis of collective entity resolution over a wide
range of structural characteristics in the data.
|
1111.0048
|
Individual and Domain Adaptation in Sentence Planning for Dialogue
|
cs.CL
|
One of the biggest challenges in the development and deployment of spoken
dialogue systems is the design of the spoken language generation module. This
challenge arises from the need for the generator to adapt to many features of
the dialogue domain, user population, and dialogue context. A promising
approach is trainable generation, which uses general-purpose linguistic
knowledge that is automatically adapted to the features of interest, such as
the application domain, individual user, or user group. In this paper we
present and evaluate a trainable sentence planner for providing restaurant
information in the MATCH dialogue system. We show that trainable sentence
planning can produce complex information presentations whose quality is
comparable to the output of a template-based generator tuned to this domain. We
also show that our method easily supports adapting the sentence planner to
individuals, and that the individualized sentence planners generally perform
better than models trained and tested on a population of individuals. Previous
work has documented and utilized individual preferences for content selection,
but to our knowledge, these results provide the first demonstration of
individual preferences for sentence planning operations, affecting the content
order, discourse structure and sentence structure of system responses. Finally,
we evaluate the contribution of different feature sets, and show that, in our
application, n-gram features often do as well as features based on higher-level
linguistic representations.
|
1111.0049
|
Conjunctive Query Answering for the Description Logic SHIQ
|
cs.AI
|
Conjunctive queries play an important role as an expressive query language
for Description Logics (DLs). Although modern DLs usually provide for
transitive roles, conjunctive query answering over DL knowledge bases is only
poorly understood if transitive roles are admitted in the query. In this paper,
we consider unions of conjunctive queries over knowledge bases formulated in
the prominent DL SHIQ and allow transitive roles in both the query and the
knowledge base. We show decidability of query answering in this setting and
establish two tight complexity bounds: regarding combined complexity, we prove
that there is a deterministic algorithm for query answering that needs time
single exponential in the size of the KB and double exponential in the size of
the query, which is optimal. Regarding data complexity, we prove containment in
co-NP.
|
1111.0051
|
Qualitative System Identification from Imperfect Data
|
cs.AI
|
Experience in the physical sciences suggests that the only realistic means of
understanding complex systems is through the use of mathematical models.
Typically, this has come to mean the identification of quantitative models
expressed as differential equations. Quantitative modelling works best when the
structure of the model (i.e., the form of the equations) is known; and the
primary concern is one of estimating the values of the parameters in the model.
For complex biological systems, the model-structure is rarely known and the
modeler has to deal with both model-identification and parameter-estimation. In
this paper we are concerned with providing automated assistance to the first of
these problems. Specifically, we examine the identification by machine of the
structural relationships between experimentally observed variables. These
relationships will be expressed in the form of qualitative abstractions of a
quantitative model. Such qualitative models may not only provide clues to the
precise quantitative model, but also assist in understanding the essence of
that model. Our position in this paper is that background knowledge
incorporating system modelling principles can be used to constrain effectively
the set of good qualitative models. Utilising the model-identification
framework provided by Inductive Logic Programming (ILP) we present empirical
support for this position using a series of increasingly complex artificial
datasets. The results are obtained with qualitative and quantitative data
subject to varying amounts of noise and different degrees of sparsity. The
results also point to the presence of a set of qualitative states, which we
term kernel subsets, that may be necessary for a qualitative model-learner to
learn correct models. We demonstrate scalability of the method to biological
system modelling by identification of the glycolysis metabolic pathway from
data.
|
1111.0053
|
Exploiting Subgraph Structure in Multi-Robot Path Planning
|
cs.AI
|
Multi-robot path planning is difficult due to the combinatorial explosion of
the search space with every new robot added. Complete search of the combined
state-space soon becomes intractable. In this paper we present a novel form of
abstraction that allows us to plan much more efficiently. The key to this
abstraction is the partitioning of the map into subgraphs of known structure
with entry and exit restrictions which we can represent compactly. Planning
then becomes a search in the much smaller space of subgraph configurations.
Once an abstract plan is found, it can be quickly resolved into a correct (but
possibly sub-optimal) concrete plan without the need for further search. We
prove that this technique is sound and complete and demonstrate its practical
effectiveness on a real map.
A contending solution, prioritised planning, is also evaluated and shown to
have similar performance albeit at the cost of completeness. The two approaches
are not necessarily conflicting; we demonstrate how they can be combined into a
single algorithm which outperforms either approach alone.
|
1111.0054
|
CTL Model Update for System Modifications
|
cs.AI cs.SE
|
Model checking is a promising technology, which has been applied for
verification of many hardware and software systems. In this paper, we introduce
the concept of model update towards the development of an automatic system
modification tool that extends model checking functions. We define primitive
update operations on the models of Computation Tree Logic (CTL) and formalize
the principle of minimal change for CTL model update. These primitive update
operations, together with the underlying minimal change principle, serve as the
foundation for CTL model update. Essential semantic and computational
characterizations are provided for our CTL model update approach. We then
describe a formal algorithm that implements this approach. We also illustrate
two case studies of CTL model updates for the well-known microwave oven example
and the Andrew File System 1, from which we further propose a method to
optimize the update results in complex system modifications.
|
1111.0055
|
Extended RDF as a Semantic Foundation of Rule Markup Languages
|
cs.AI
|
Ontologies and automated reasoning are the building blocks of the Semantic
Web initiative. Derivation rules can be included in an ontology to define
derived concepts, based on base concepts. For example, rules make it possible
to define
the extension of a class or property, based on a complex relation between the
extensions of the same or other classes and properties. On the other hand, the
inclusion of negative information both in the form of negation-as-failure and
explicit negative information is also needed to enable various forms of
reasoning. In this paper, we extend RDF graphs with weak and strong negation,
as well as derivation rules. The ERDF stable model semantics of the extended
framework (Extended RDF) is defined, extending RDF(S) semantics. A distinctive
feature of our theory, which is based on Partial Logic, is that both truth and
falsity extensions of properties and classes are considered, allowing for truth
value gaps. Our framework supports both closed-world and open-world reasoning
through the explicit representation of the particular closed-world assumptions
and the ERDF ontological categories of total properties and total classes.
|
1111.0056
|
The Complexity of Planning Problems With Simple Causal Graphs
|
cs.AI
|
We present three new complexity results for classes of planning problems with
simple causal graphs. First, we describe a polynomial-time algorithm that uses
macros to generate plans for the class 3S of planning problems with binary
state variables and acyclic causal graphs. This implies that plan generation
may be tractable even when a planning problem has an exponentially long minimal
solution. We also prove that the problem of plan existence for planning
problems with multi-valued variables and chain causal graphs is NP-hard.
Finally, we show that plan existence for planning problems with binary state
variables and polytree causal graphs is NP-complete.
|
1111.0059
|
Loosely Coupled Formulations for Automated Planning: An Integer
Programming Perspective
|
cs.AI
|
We represent planning as a set of loosely coupled network flow problems,
where each network corresponds to one of the state variables in the planning
domain. The network nodes correspond to the state variable values and the
network arcs correspond to the value transitions. The planning problem is to
find a path (a sequence of actions) in each network such that, when merged,
they constitute a feasible plan. In this paper we present a number of integer
programming formulations that model these loosely coupled networks with varying
degrees of flexibility. Since merging may introduce exponentially many ordering
constraints we implement a so-called branch-and-cut algorithm, in which these
constraints are dynamically generated and added to the formulation when needed.
Our results are very promising: they improve upon previous
planning-as-integer-programming approaches and lay the foundation for integer
programming approaches to cost-optimal planning.
|
1111.0060
|
A Constraint Programming Approach for Solving a Queueing Control Problem
|
cs.AI
|
In a facility with front room and back room operations, it is useful to
switch workers between the rooms in order to cope with changing customer
demand. Assuming stochastic customer arrival and service times, we seek a
policy for switching workers such that the expected customer waiting time is
minimized while the expected back room staffing is sufficient to perform all
work. Three novel constraint programming models and several shaving procedures
for these models are presented. Experimental results show that a model based on
closed-form expressions together with a combination of shaving procedures is
the most efficient. This model is able to find and prove optimal solutions for
many problem instances within a reasonable run-time. Previously, the only
available approach was a heuristic algorithm. Furthermore, a hybrid method
combining the heuristic and the best constraint programming method is shown to
perform as well as the heuristic in terms of solution quality over time, while
achieving the same performance in terms of proving optimality as the pure
constraint programming model. This is the first work of which we are aware that
solves such queueing-based problems with constraint programming.
|
1111.0062
|
Optimal and Approximate Q-value Functions for Decentralized POMDPs
|
cs.AI
|
Decision-theoretic planning is a popular approach to sequential decision
making problems, because it treats uncertainty in sensing and acting in a
principled way. In single-agent frameworks like MDPs and POMDPs, planning can
be carried out by resorting to Q-value functions: an optimal Q-value function
Q* is computed in a recursive manner by dynamic programming, and then an
optimal policy is extracted from Q*. In this paper we study whether similar
Q-value functions can be defined for decentralized POMDP models (Dec-POMDPs),
and how policies can be extracted from such value functions. We define two
forms of the optimal Q-value function for Dec-POMDPs: one that gives a
normative description as the Q-value function of an optimal pure joint policy
and another one that is sequentially rational and thus gives a recipe for
computation. This computation, however, is infeasible for all but the smallest
problems. Therefore, we analyze various approximate Q-value functions that
allow for efficient computation. We describe how they relate, and we prove that
they all provide an upper bound to the optimal Q-value function Q*. Finally,
unifying some previous approaches for solving Dec-POMDPs, we describe a family
of algorithms for extracting policies from such Q-value functions, and perform
an experimental evaluation on existing test problems, including a new
firefighting benchmark problem.
|
1111.0065
|
Communication-Based Decomposition Mechanisms for Decentralized MDPs
|
cs.AI
|
Multi-agent planning in stochastic environments can be framed formally as a
decentralized Markov decision problem. Many real-life distributed problems that
arise in manufacturing, multi-robot coordination and information gathering
scenarios can be formalized using this framework. However, finding the optimal
solution in the general case is hard, limiting the applicability of recently
developed algorithms. This paper provides a practical approach for solving
decentralized control problems when communication among the decision makers is
possible, but costly. We develop the notion of communication-based mechanism
that allows us to decompose a decentralized MDP into multiple single-agent
problems. In this framework, referred to as decentralized semi-Markov decision
process with direct communication (Dec-SMDP-Com), agents operate separately
between communications. We show that finding an optimal mechanism is equivalent
to solving optimally a Dec-SMDP-Com. We also provide a heuristic search
algorithm that converges on the optimal decomposition. Restricting the
decomposition to specific types of local behaviors significantly reduces the
complexity of planning. In particular, we present a polynomial-time
algorithm for the case in which individual agents perform goal-oriented
behaviors between communications. The paper concludes with an additional
tractable algorithm that enables the introduction of human knowledge, thereby
reducing the overall problem to finding the best time to communicate. Empirical
results show that these approaches provide good approximate solutions.
|
1111.0067
|
A General Theory of Additive State Space Abstractions
|
cs.AI
|
Informally, a set of abstractions of a state space S is additive if the
distance between any two states in S is always greater than or equal to the sum
of the corresponding distances in the abstract spaces. The first known additive
abstractions, called disjoint pattern databases, were experimentally
demonstrated to produce state of the art performance on certain state spaces.
However, previous applications were restricted to state spaces with special
properties, which precludes disjoint pattern databases from being defined for
several commonly used testbeds, such as Rubik's Cube, TopSpin and the Pancake
puzzle. In this paper we give a general definition of additive abstractions
that can be applied to any state space and prove that heuristics based on
additive abstractions are consistent as well as admissible. We use this new
definition to create additive abstractions for these testbeds and show
experimentally that well chosen additive abstractions can reduce search time
substantially for the (18,4)-TopSpin puzzle and by three orders of magnitude
over state of the art methods for the 17-Pancake puzzle. We also derive a way
of testing if the heuristic value returned by additive abstractions is provably
too low and show that the use of this test can reduce search time for the
15-puzzle and TopSpin by roughly a factor of two.
|
1111.0068
|
First Order Decision Diagrams for Relational MDPs
|
cs.AI
|
Markov decision processes capture sequential decision making under
uncertainty, where an agent must choose actions so as to optimize long term
reward. The paper studies efficient reasoning mechanisms for Relational Markov
Decision Processes (RMDP) where world states have an internal relational
structure that can be naturally described in terms of objects and relations
among them. Two contributions are presented. First, the paper develops First
Order Decision Diagrams (FODD), a new compact representation for functions over
relational structures, together with a set of operators to combine FODDs, and
novel reduction techniques to keep the representation small. Second, the paper
shows how FODDs can be used to develop solutions for RMDPs, where reasoning is
performed at the abstract level and the resulting optimal policy is independent
of domain size (number of objects) or instantiation. In particular, a variant
of the value iteration algorithm is developed by using special operations over
FODDs, and the algorithm is shown to converge to the optimal policy.
|
1111.0073
|
Diffusion and Contagion in Networks with Heterogeneous Agents and
Homophily
|
physics.soc-ph cs.SI
|
We study how a behavior (an idea, buying a product, having a disease,
adopting a cultural fad or a technology) spreads among agents in a social
network that exhibits segregation or homophily (the tendency of agents to
associate with others similar to themselves). Individuals are distinguished by
their types (e.g., race, gender, age, wealth, religion, profession, etc.)
which, together with biased interaction patterns, induce heterogeneous rates of
adoption. We identify the conditions under which a behavior diffuses and
becomes persistent in the population. These conditions relate to the level of
homophily in a society, the underlying proclivities of various types for
adoption or infection, as well as how each type interacts with its own type. In
particular, we show that homophily can facilitate diffusion from a small
initial seed of adopters.
|
1111.0084
|
Lattice codes for the Gaussian relay channel: Decode-and-Forward and
Compress-and-Forward
|
cs.IT math.IT
|
Lattice codes are known to achieve capacity in the Gaussian point-to-point
channel, achieving the same rates as independent, identically distributed
(i.i.d.) random Gaussian codebooks. Lattice codes are also known to outperform
random codes for certain channel models that are able to exploit their
linearity. In this work, we show that lattice codes may be used to achieve the
same performance as known i.i.d. Gaussian random coding techniques for the
Gaussian relay channel, and show several examples of how this may be combined
with the linearity of lattice codes in multi-source relay networks. In
particular, we present a nested lattice list decoding technique, by which,
lattice codes are shown to achieve the Decode-and-Forward (DF) rate of single
source, single destination Gaussian relay channels with one or more relays. We
next present two examples of how this DF scheme may be combined with the
linearity of lattice codes to achieve new rate regions which for some channel
conditions outperform analogous known Gaussian random coding techniques in
multi-source relay channels. That is, we derive a new achievable rate region
for the two-way relay channel with direct links and compare it to existing
schemes, and derive another achievable rate region for the multiple access
relay channel. We furthermore present a lattice Compress-and-Forward (CF)
scheme for the Gaussian relay channel which exploits a lattice Wyner-Ziv
binning scheme and achieves the same rate as the Cover-El Gamal CF rate
evaluated for Gaussian random codes. These results suggest that
structured/lattice codes may be used to mimic, and sometimes outperform, random
Gaussian codes in general Gaussian networks.
|
1111.0107
|
Correlated multiplexity and connectivity of multiplex random networks
|
physics.soc-ph cs.SI
|
Nodes in a complex networked system often engage in more than one type of
interactions among them; they form a multiplex network with multiple types of
links. In real-world complex systems, a node's degree for one type of link and
that for another are not randomly distributed but correlated, which we term
correlated multiplexity. In this paper we study a simple model of multiplex
random networks and demonstrate that the correlated multiplexity can
drastically affect the properties of giant component in the network.
Specifically, when the degrees of a node for different interactions in a duplex
Erdos-Renyi network are maximally correlated, the network contains the giant
component for any nonzero link density. In contrast, when the degrees of a
node are maximally anti-correlated, the emergence of giant component is
significantly delayed, yet the entire network becomes connected into a single
component at a finite link density. We also discuss the mixing patterns and the
cases with imperfect correlated multiplexity.
|
1111.0129
|
Output Feedback Tracking Control for a Class of Uncertain Systems
subject to Unmodeled Dynamics and Delay at Input
|
cs.SY math.OC
|
Besides parametric uncertainties and disturbances, the unmodeled dynamics and
time delay at the input are often present in practical systems, which cannot be
ignored in some cases. This paper aims to solve output feedback tracking
control problem for a class of nonlinear uncertain systems subject to unmodeled
high-frequency gains and time delay at the input. By the additive
decomposition, the uncertain system is transformed to an uncertainty-free
system, where the uncertainties, disturbance and effect of unmodeled dynamics
plus time delay are lumped into a new disturbance at the output. Subsequently,
additive decomposition is used to decompose the transformed system, which
simplifies the tracking controller design. To demonstrate the effectiveness,
the proposed control scheme is applied to three benchmark examples.
|
1111.0158
|
Applying Fuzzy ID3 Decision Tree for Software Effort Estimation
|
cs.SE cs.AI
|
Web effort estimation is the process of predicting the effort and cost, in
terms of money, schedule, and staff, of a software project. Many estimation
models have been proposed over the last three decades, and estimation is
considered essential for budgeting, risk analysis, project planning and
control, and project improvement investment analysis. In
this paper, we investigate the use of Fuzzy ID3 decision tree for software cost
estimation; it is designed by integrating the principles of ID3 decision tree
and the fuzzy set-theoretic concepts, enabling the model to handle uncertain
and imprecise data when describing the software projects, which can improve
greatly the accuracy of obtained estimates. MMRE and Pred are used as measures
of prediction accuracy for this study. A series of experiments is reported
using two different software projects datasets namely, Tukutuku and COCOMO'81
datasets. The results are compared with those produced by the crisp version of
the ID3 decision tree.
|
1111.0207
|
Geometric protean graphs
|
physics.soc-ph cs.SI math.CO
|
We study the link structure of on-line social networks (OSNs), and introduce
a new model for such networks which may help infer their hidden underlying
reality. In the geo-protean (GEO-P) model for OSNs nodes are identified with
points in Euclidean space, and edges are stochastically generated by a mixture
of the relative distance of nodes and a ranking function. With high
probability, the GEO-P model generates graphs satisfying many observed
properties of OSNs, such as power law degree distributions, the small world
property, densification power law, and bad spectral expansion. We introduce the
dimension of an OSN based on our model, and examine this new parameter using
actual OSN data. We discuss how the geo-protean model may eventually be used as
a tool to group users with similar attributes using only the link structure of
the network.
|
1111.0219
|
On Optimum Causal Cognitive Spectrum Reutilization Strategy
|
cs.IT math.IT
|
In this paper we study opportunistic transmission strategies for cognitive
radios (CR) in which causal noisy observations of the primary user's (PU)
state are available. The PU is assumed to operate in a slotted manner,
according to a two-state Markov model. The objective is to maximize the
utilization ratio (UR), i.e., the relative number of PU-idle slots used by the
CR, while keeping the interference ratio (IR), i.e., the relative number of
PU-active slots used by the CR, below a certain level. We introduce an a-posteriori LLR-based
cognitive transmission strategy and show that this strategy is optimum in the
sense of maximizing UR given a certain maximum allowed IR. Two methods for
calculating threshold for this strategy in practical situations are presented.
One of them performs well in higher SNRs but might have too large IR at low
SNRs and low PU activity levels, and the other is proven to never violate the
allowed IR at the price of a reduced UR. In addition, an upper-bound for the UR
of any CR strategy operating in the presence of Markovian PU is presented.
Simulation results have shown a more than 116% improvement in UR at SNR of -3dB
and IR level of 10% with PU state estimation. Thus, this opportunistic CR
mechanism possesses high potential in practical scenarios in which no
information about the true state of the PU is available.
|
1111.0235
|
New Methods for Handling Singular Sample Covariance Matrices
|
math.PR cs.IT math.IT math.ST stat.TH
|
The estimation of a covariance matrix from an insufficient amount of data is
one of the most common problems in fields as diverse as multivariate
statistics, wireless communications, signal processing, biology, learning
theory and finance. In a joint work of Marzetta, Tucci and Simon, a new
approach to handle singular covariance matrices was suggested. The main idea
was to use dimensionality reduction in conjunction with an average over the
Stiefel manifold. In this paper we continue with this research and we consider
some new approaches to solve this problem. One of the methods is called the
Ewens estimator and uses a randomization of the sample covariance matrix over
all the permutation matrices with respect to the Ewens measure. The techniques
used to attack this problem are broad and run from random matrix theory to
combinatorics.
|
1111.0253
|
Nearly Complete Graphs Decomposable into Large Induced Matchings and
their Applications
|
math.CO cs.DS cs.IT math.IT
|
We describe two constructions of (very) dense graphs which are edge disjoint
unions of large {\em induced} matchings. The first construction exhibits graphs
on $N$ vertices with ${N \choose 2}-o(N^2)$ edges, which can be decomposed into
pairwise disjoint induced matchings, each of size $N^{1-o(1)}$. The second
construction provides a covering of all edges of the complete graph $K_N$ by
two graphs, each being the edge disjoint union of at most $N^{2-\delta}$
induced matchings, where $\delta > 0.058$. This disproves (in a strong form) a
conjecture of Meshulam, substantially improves a result of Birk, Linial and
Meshulam on communicating over a shared channel, and (slightly) extends the
analysis of H{\aa}stad and Wigderson of the graph test of Samorodnitsky and
Trevisan for linearity. Additionally, our constructions settle a combinatorial
question of Vempala regarding a candidate rounding scheme for the directed
Steiner tree problem.
|
1111.0268
|
Topology on locally finite metric spaces
|
math.MG cs.CV cs.DM math.AT math.CO
|
The necessity of a theory of General Topology and, most of all, of Algebraic
Topology on locally finite metric spaces comes from many areas of research in
both Applied and Pure Mathematics: Molecular Biology, Mathematical Chemistry,
Computer Science, Topological Graph Theory and Metric Geometry. In this paper
we propose the basic notions of such a theory and some applications: we replace
the classical notions of continuous function, homeomorphism and homotopic
equivalence with the notions of NPP-function, NPP-local-isomorphism and
NPP-homotopy (NPP stands for Nearest Point Preserving); we also introduce the
notion of NPP-isomorphism. We construct three invariants under NPP-isomorphisms
and, in particular, we define the fundamental group of a locally finite metric
space. As first applications, we propose the following: motivated by the
longstanding question whether there is a purely metric condition which extends
the notion of amenability of a group to any metric space, we propose the
property SN (Small Neighborhood); motivated by some applicative problems in
Computer Science, we prove the analog of the Jordan curve theorem in $\mathbb
Z^2$; motivated by a question asked during a lecture at Lausanne, we extend to
any locally finite metric space a recent inequality of P.N.Jolissaint and
Valette regarding the $\ell_p$-distortion.
|
1111.0284
|
A topological interpretation of the walk distances
|
math.CO cs.DM cs.SI math.MG
|
The walk distances in graphs have no direct interpretation in terms of walk
weights, since they are introduced via the \emph{logarithms} of walk weights.
Only in the limiting cases where the logarithms vanish such representations
follow straightforwardly. The interpretation proposed in this paper rests on
the identity $\ln\det B=\tr\ln B$ applied to the cofactors of the matrix
$I-tA,$ where $A$ is the weighted adjacency matrix of a weighted multigraph and
$t$ is a sufficiently small positive parameter. In addition, this
interpretation is based on the power series expansion of the logarithm of a
matrix. Kasteleyn (1967) was probably the first to apply the foregoing approach
to expanding the determinant of $I-A$. We show that using a certain linear
transformation the same approach can be extended to the cofactors of $I-tA,$
which provides a topological interpretation of the walk distances.
|
1111.0307
|
Swayed by Friends or by the Crowd?
|
cs.SI cs.CY physics.soc-ph
|
We have conducted three empirical studies of the effects of friend
recommendations and general ratings on how online users make choices. These two
components of social influence were investigated through user studies on
Mechanical Turk. We find that for a user deciding between two choices an
additional rating star has a much larger effect than an additional friend's
recommendation on the probability of selecting an item. Equally important,
negative opinions from friends are more influential than positive opinions, and
people exhibit more random behavior in their choices when the decision involves
less cost and risk. Our results can be generalized across different
demographics, implying that individuals trade off recommendations from friends
and ratings in a similar fashion.
|
1111.0352
|
Revisiting k-means: New Algorithms via Bayesian Nonparametrics
|
cs.LG stat.ML
|
Bayesian models offer great flexibility for clustering
applications---Bayesian nonparametrics can be used for modeling infinite
mixtures, and hierarchical Bayesian models can be utilized for sharing clusters
across multiple data sets. For the most part, such flexibility is lacking in
classical clustering methods such as k-means. In this paper, we revisit the
k-means clustering algorithm from a Bayesian nonparametric viewpoint. Inspired
by the asymptotic connection between k-means and mixtures of Gaussians, we show
that a Gibbs sampling algorithm for the Dirichlet process mixture approaches a
hard clustering algorithm in the limit, and further that the resulting
algorithm monotonically minimizes an elegant underlying k-means-like clustering
objective that includes a penalty for the number of clusters. We generalize
this analysis to the case of clustering multiple data sets through a similar
asymptotic argument with the hierarchical Dirichlet process. We also discuss
further extensions that highlight the benefits of our analysis: i) a spectral
relaxation involving thresholded eigenvectors, and ii) a normalized cut graph
clustering algorithm that does not fix the number of clusters in the graph.
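The hard-clustering limit sketched in this abstract can be illustrated by a small DP-means-style procedure (a hedged illustration in the spirit of the described analysis, not the paper's exact algorithm): a point whose squared distance to every existing mean exceeds a penalty lambda seeds a new cluster, so the objective implicitly pays lambda per cluster.

```python
import numpy as np

def dp_means(X, lam, max_iter=100):
    """Hard-clustering limit of DP-mixture Gibbs sampling (sketch)."""
    centers = np.array([X.mean(axis=0)])  # start with one global cluster
    for _ in range(max_iter):
        labels = np.empty(len(X), dtype=int)
        for i, x in enumerate(X):
            d2 = ((centers - x) ** 2).sum(axis=1)
            j = int(d2.argmin())
            if d2[j] > lam:           # too far from every mean: new cluster
                centers = np.vstack([centers, x])
                j = len(centers) - 1
            labels[i] = j
        # drop empty clusters, reindex labels, and recompute means
        kept = np.unique(labels)
        new_centers = np.array([X[labels == j].mean(axis=0) for j in kept])
        remap = {old: new for new, old in enumerate(kept)}
        labels = np.array([remap[j] for j in labels])
        if new_centers.shape == centers.shape and np.allclose(new_centers, centers):
            centers = new_centers
            break
        centers = new_centers
    return labels, centers

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 10.0], [10.0, 11.0]])
labels, centers = dp_means(X, lam=4.0)
```

On this toy data the penalty lam=4.0 separates the two well-spaced groups into two clusters; larger lam merges clusters, smaller lam creates more, which is the cluster-count penalty the abstract describes.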
|
1111.0356
|
Toric complete intersection codes
|
math.AG cs.IT math.IT
|
In this paper we give lower bounds for the minimum distance of evaluation
codes constructed from complete intersections in toric varieties. This
generalizes the results of Gold-Little-Schenck and Ballico-Fontanari who
considered evaluation codes on complete intersections in the projective space.
|
1111.0379
|
Fast reconstruction of phylogenetic trees using locality-sensitive
hashing
|
q-bio.PE cs.CE
|
We present the first sub-quadratic time algorithm that with high probability
correctly reconstructs phylogenetic trees for short sequences generated by a
Markov model of evolution. Due to rapid expansion in sequence databases, such
very fast algorithms are becoming necessary. Other fast heuristics have been
developed for building trees from very large alignments (Price et al, and Brown
et al), but they lack theoretical performance guarantees. Our new algorithm
runs in $O(n^{1+\gamma(g)}\log^2n)$ time, where $\gamma$ is an increasing
function of an upper bound on the branch lengths in the phylogeny, the upper
bound $g$ must be below $1/2-\sqrt{1/8} \approx 0.15$, and $\gamma(g)<1$ for all
$g$. For phylogenies with very short branches, the running time of our
algorithm is close to linear. For example, if all branch lengths correspond to
a mutation probability of less than 0.02, the running time of our algorithm is
roughly $O(n^{1.2}\log^2n)$. Via a prototype and a sequence of large-scale
experiments, we show that many large phylogenies can be reconstructed fast,
without compromising reconstruction accuracy.
|
1111.0414
|
Sufficient Conditions on the Existence of Switching Observers for
Nonlinear Time-Varying Systems
|
math.OC cs.SY
|
We derive sufficient conditions for the solvability of the observer design
problem for a wide class of nonlinear time-varying systems, including those
having triangular structure. We establish that, under weaker assumptions than
those imposed in the existing works in the literature, it is possible to
construct a switching sequence of time-varying noncausal dynamics that
achieves state determination for our system.
|
1111.0432
|
Approximate Stochastic Subgradient Estimation Training for Support
Vector Machines
|
cs.LG cs.AI
|
Subgradient algorithms for training support vector machines have been quite
successful for solving large-scale and online learning problems. However, they
have been restricted to linear kernels and strongly convex formulations. This
paper describes efficient subgradient approaches without such limitations. Our
approaches make use of randomized low-dimensional approximations to nonlinear
kernels, and minimization of a reduced primal formulation using an algorithm
based on robust stochastic approximation, which do not require strong
convexity. Experiments illustrate that our approaches produce solutions of
comparable prediction accuracy with the solutions acquired from existing SVM
solvers, but often in much shorter time. We also suggest efficient prediction
schemes that depend only on the dimension of kernel approximation, not on the
number of support vectors.
|
1111.0466
|
Kernel diff-hash
|
cs.CV cs.AI
|
This paper presents a kernel formulation of the recently introduced diff-hash
algorithm for the construction of similarity-sensitive hash functions. Our
kernel diff-hash algorithm shows superior performance on the problem of
image feature descriptor matching.
|
1111.0499
|
Evaluating geometric queries using few arithmetic operations
|
cs.DS cs.DB math.AG
|
Let $\cp:=(P_1,...,P_s)$ be a given family of $n$-variate polynomials with
integer coefficients and suppose that the degrees and logarithmic heights of
these polynomials are bounded by $d$ and $h$, respectively. Suppose furthermore
that for each $1\leq i\leq s$ the polynomial $P_i$ can be evaluated using $L$
arithmetic operations (additions, subtractions, multiplications and the
constants 0 and 1). Assume that the family $\cp$ is in a suitable sense
\emph{generic}. We construct a database $\cal D$, supported by an algebraic
computation tree, such that for each $x\in [0,1]^n$ the query for the signs of
$P_1(x),...,P_s(x)$ can be answered using $h d^{\cO(n^2)}$ comparisons and $nL$
arithmetic operations between real numbers. The arithmetic-geometric tools
developed for the construction of $\cal D$ are then employed to exhibit example
classes of systems of $n$ polynomial equations in $n$ unknowns whose
consistency may be checked using only few arithmetic operations, admitting
however an exponential number of comparisons.
|
1111.0500
|
Development of a Cost-efficient Autonomous MAV for an Unstructured
Indoor Environment
|
cs.RO
|
Performing rescue and surveillance operations with autonomous ground and
aerial vehicles is becoming an increasingly common task. Employing unmanned
robot systems makes these operations more efficient, safe and reliable,
especially in hazardous areas. This work is devoted to the development of a
cost-efficient micro aerial vehicle in quadrocopter form for developmental
purposes within indoor scenarios. It has been constructed with off-the-shelf
components available for mini helicopters. Additional sensors and electronics
are incorporated into this aerial vehicle to stabilize its flight behavior and
to provide the capability of autonomous navigation in a partially unstructured
indoor environment.
|
1111.0508
|
Geometric Graph Properties of the Spatial Preferred Attachment model
|
cs.SI math.CO physics.soc-ph
|
The spatial preferred attachment (SPA) model is a model for networked
information spaces such as domains of the World Wide Web, citation graphs, and
on-line social networks. It uses a metric space to model the hidden attributes
of the vertices. Thus, vertices are elements of a metric space, and link
formation depends on the metric distance between vertices. We show, through
theoretical analysis and simulation, that for graphs formed according to the
SPA model it is possible to infer the metric distance between vertices from the
link structure of the graph. Precisely, the estimate is based on the number of
common neighbours of a pair of vertices, a measure known as {\sl co-citation}.
To be able to calculate this estimate, we derive a precise relation between the
number of common neighbours and metric distance. We also analyze the
distribution of {\sl edge lengths}, where the length of an edge is the metric
distance between its end points. We show that this distribution has three
different regimes, and that the tail of this distribution follows a power law.
|
1111.0547
|
Interstellar Communication: The Case for Spread Spectrum
|
astro-ph.IM cs.IT math.IT physics.pop-ph
|
Spread spectrum, widely employed in modern digital wireless terrestrial radio
systems, chooses a signal with a noise-like character and much higher bandwidth
than necessary. This paper advocates spread spectrum modulation for
interstellar communication, motivated by robust immunity to radio-frequency
interference (RFI) of technological origin in the vicinity of the receiver
while preserving full detection sensitivity in the presence of natural sources
of noise. Receiver design for noise immunity alone provides no basis for
choosing a signal with any specific character, therefore failing to reduce
ambiguity. By adding RFI to noise immunity as a design objective, the
conjunction of choice of signal (by the transmitter) together with optimum
detection for noise immunity (in the receiver) leads through simple
probabilistic argument to the conclusion that the signal should possess the
statistical properties of a burst of white noise, and also have a large
time-bandwidth product. Thus spread spectrum also provides an implicit
coordination between transmitter and receiver by reducing the ambiguity as to
the signal character. This strategy requires the receiver to guess the specific
noise-like signal, and it is contended that this is feasible if an appropriate
pseudorandom signal is generated algorithmically. For example, conceptually
simple algorithms like the binary expansion of common irrational numbers like
Pi are shown to be suitable. Due to its deliberately wider bandwidth, spread
spectrum is more susceptible to dispersion and distortion in propagation
through the interstellar medium, desirably reducing ambiguity in parameters
like bandwidth and carrier frequency. This suggests a promising new direction
in interstellar communication using spread spectrum modulation techniques.
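As a hedged illustration of the modulation idea (not this paper's receiver design), the direct-sequence sketch below spreads each data bit over a long pseudorandom +/-1 chip sequence and recovers it by correlating against the known sequence; random chips stand in for a sequence that, as the abstract suggests, could be generated algorithmically, e.g. from the binary expansion of Pi.

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for an algorithmically generated noise-like chip sequence
chips = rng.choice([-1.0, 1.0], size=128)

def spread(bits, chips):
    # each data bit (+1/-1) modulates the entire chip sequence,
    # giving a wideband, noise-like transmitted signal
    return np.concatenate([b * chips for b in bits])

def despread(signal, chips):
    # correlate each chip-length block against the known sequence;
    # the processing gain (here 128) suppresses wideband noise
    blocks = signal.reshape(-1, len(chips))
    return np.sign(blocks @ chips)

bits = np.array([1.0, -1.0, 1.0, 1.0])
tx = spread(bits, chips)
rx = tx + rng.normal(scale=1.0, size=tx.shape)  # strong additive noise
recovered = despread(rx, chips)
```

Because the correlator sums 128 chips, the per-bit signal term grows linearly while the noise term grows only as its square root, which is the detection-sensitivity argument behind the large time-bandwidth product.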
|
1111.0567
|
A Primal Dual Algorithm for a Heterogeneous Traveling Salesman Problem
|
cs.DM cs.DS cs.RO math.CO
|
Surveillance applications require a collection of heterogeneous vehicles to
visit a set of targets. In this article, we consider a fundamental routing
problem that arises in these applications involving two vehicles. Specifically,
we consider a routing problem where there are two heterogeneous vehicles that
start from distinct initial locations, and a set of targets. The objective is
to find a tour for each vehicle such that each of the targets is visited at
least once by a vehicle and the sum of the distances traveled by the vehicles
is a minimum. We present a primal-dual algorithm for a variant of this routing
problem that provides an approximation ratio of 2.
|
1111.0594
|
Exploring Oracle RDBMS latches using Solaris DTrace
|
cs.DB cs.DC cs.PF
|
The rise of hundred-core technologies brings the problem of interprocess
synchronization in database engines back to the forefront. Spinlocks are
widely used in contemporary DBMSs to synchronize processes at microsecond
timescales. Latches are Oracle RDBMS-specific spinlocks. Latch contention is
commonly observed in contemporary high-concurrency OLTP environments.
In contrast to system spinlocks used in operating system kernels, latches
work in user context. Such user-level spinlocks are affected by context
preemption and multitasking. Until recently there were no direct methods to
measure the effectiveness of user spinlocks. This became possible with the
emergence of the Solaris 10 Dynamic Tracing framework. DTrace allows tracing
and profiling of both the OS and user applications.
This work investigates the possibilities of diagnosing and tuning Oracle
latches. It explores the contemporary latch implementation and spin-blocking
strategies, and analyzes the corresponding statistics counters. A mathematical
model is developed to analytically estimate the effect of tuning the
_SPIN_COUNT value.
|
1111.0595
|
An achievable region for the double unicast problem based on a minimum
cut analysis
|
cs.IT math.IT
|
We consider the multiple unicast problem under network coding over directed
acyclic networks when there are two source-terminal pairs, $s_1-t_1$ and
$s_2-t_2$. Current characterizations of the multiple unicast capacity region in
this setting have a large number of inequalities, which makes them hard to
explicitly evaluate. In this work we consider a slightly different problem. We
assume that we only know certain minimum cut values for the network, e.g.,
mincut$(S_i, T_j)$, where $S_i \subseteq \{s_1, s_2\}$ and $T_j \subseteq
\{t_1, t_2\}$ for different subsets $S_i$ and $T_j$. Based on these values, we
propose an achievable rate region for this problem based on linear codes.
Towards this end, we begin by defining a base region where both sources are
multicast to both the terminals. Following this we enlarge the region by
appropriately encoding the information at the source nodes, such that terminal
$t_i$ is only guaranteed to decode information from the intended source $s_i$,
while decoding a linear function of the other source. The rate region takes
different forms depending upon the relationship of the different cut values in
the network.
|
1111.0654
|
Distributed Lossy Source Coding Using Real-Number Codes
|
cs.IT cs.CV cs.NI math.IT
|
We show how real-number codes can be used to compress correlated sources, and
establish a new framework for lossy distributed source coding, in which we
quantize compressed sources instead of compressing quantized sources. This
change in the order of binning and quantization blocks makes it possible to
model correlation between continuous-valued sources more realistically and
correct quantization error when the sources are completely correlated. The
encoding and decoding procedures are described in detail, for discrete Fourier
transform (DFT) codes. The reconstructed signal, in the mean squared error
sense, is seen to be better than that of the conventional approach.
|
1111.0663
|
On Identity Testing of Tensors, Low-rank Recovery and Compressed Sensing
|
cs.CC cs.IT math.IT
|
We study the problem of obtaining efficient, deterministic, black-box
polynomial identity testing algorithms for depth-3 set-multilinear circuits
(over arbitrary fields). This class of circuits has an efficient,
deterministic, white-box polynomial identity testing algorithm (due to Raz and
Shpilka), but has no known such black-box algorithm. We recast this problem as
a question of finding a low-dimensional subspace H, spanned by rank 1 tensors,
such that any non-zero tensor in the dual space ker(H) has high rank. We obtain
explicit constructions of essentially optimal-size hitting sets for tensors of
degree 2 (matrices), and obtain quasi-polynomial sized hitting sets for
arbitrary tensors (but this second hitting set is less explicit).
We also show connections to the task of performing low-rank recovery of
matrices, which is studied in the field of compressed sensing. Low-rank
recovery asks (say, over the reals) to recover a matrix M from few
measurements, under the promise that M is rank <=r. We also give a formal
connection between low-rank recovery and the task of sparse (vector) recovery:
any sparse-recovery algorithm that exactly recovers vectors of length n and
sparsity 2r, using m non-adaptive measurements, yields a low-rank recovery
scheme for exactly recovering nxn matrices of rank <=r, making 2nm non-adaptive
measurements. Furthermore, if the sparse-recovery algorithm runs in time \tau,
then the low-rank recovery algorithm runs in time O(rn^2+n\tau). We obtain this
reduction using linear-algebraic techniques, and not using convex optimization,
which is more commonly seen in compressed sensing algorithms. By using a dual
Reed-Solomon code, we are able to (deterministically) construct low-rank
recovery schemes taking 4nr measurements over the reals, such that the
measurements can be all rank-1 matrices, or all sparse matrices.
|
1111.0683
|
A Sieve Method for Consensus-type Network Tomography
|
math.OC cs.SY
|
In this note, we examine the problem of identifying the interaction geometry
among a known number of agents, adopting a consensus-type algorithm for their
coordination. The proposed identification process is facilitated by introducing
"ports" for stimulating a subset of network vertices via an appropriately
defined interface and observing the network's response at another set of
vertices. It is first noted that under the assumption of controllability and
observability of the corresponding steered-and-observed network, the proposed
procedure identifies a number of important features of the network using the
spectrum of the graph Laplacian. We then proceed to use degree-based graph
reconstruction methods to propose a sieve method for further characterization
of the underlying network. An example demonstrates the application of the
proposed method.
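As a toy illustration of stimulating a consensus network and identifying it from responses, here is a numpy sketch that recovers the graph Laplacian from one-step consensus dynamics. Full state access is assumed here for simplicity; the paper's port-based setting observes only a subset of vertices and works from the Laplacian spectrum, so this is not the proposed sieve method itself:

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

def identify_laplacian(step, n, eps):
    """Stimulate the discrete consensus dynamics x_{k+1} = (I - eps*L) x_k
    with n independent initial conditions (the "ports") and recover L from
    the one-step responses."""
    rng = np.random.default_rng(0)
    X0 = rng.random((n, n)) + np.eye(n)           # n independent stimulations
    X1 = np.column_stack([step(x) for x in X0.T])  # observed responses
    return (np.eye(n) - X1 @ np.linalg.inv(X0)) / eps
```

The diagonal of the recovered Laplacian gives the degree sequence, which is the input to the degree-based graph reconstruction step described above.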
|
1111.0689
|
Symmetrical Multilevel Diversity Coding and Subset Entropy Inequalities
|
cs.IT math.IT
|
Symmetrical multilevel diversity coding (SMDC) is a classical model for
coding over distributed storage. In this setting, a simple separate encoding
strategy known as superposition coding was shown to be optimal in terms of
achieving the minimum sum rate (Roche, Yeung, and Hau, 1997) and the entire
admissible rate region (Yeung and Zhang, 1999) of the problem. The proofs
utilized carefully constructed induction arguments, for which the classical
subset entropy inequality of Han (1978) played a key role. This paper includes
two parts. In the first part the existing optimality proofs for classical SMDC
are revisited, with a focus on their connections to subset entropy
inequalities. First, a new sliding-window subset entropy inequality is
introduced and then used to establish the optimality of superposition coding
for achieving the minimum sum rate under a weaker source-reconstruction
requirement. Second, a subset entropy inequality recently proved by Madiman and
Tetali (2010) is used to develop a new structural understanding of the proof of
Yeung and Zhang on the optimality of superposition coding for achieving the
entire admissible rate region. Building on the connections between classical
SMDC and the subset entropy inequalities developed in the first part, in the
second part the optimality of superposition coding is further extended to the
cases where there is either an additional all-access encoder (SMDC-A) or an
additional secrecy constraint (S-SMDC).
|
1111.0700
|
Finite Alphabet Control of Logistic Networks with Discrete Uncertainty
|
math.OC cs.SY
|
We consider logistic networks in which the control and disturbance inputs
take values in finite sets. We derive a necessary and sufficient condition for
the existence of robustly control invariant (hyperbox) sets. We show that a
stronger version of this condition is sufficient to guarantee robust global
attractivity, and we construct a counterexample demonstrating that it is not
necessary. Being constructive, our proofs of sufficiency allow us to extract
the corresponding robust control laws and to establish the invariance of
certain sets. Finally, we highlight parallels between our results and existing
results in the literature, and we conclude our study with two simple
illustrative examples.
|
1111.0708
|
Bayesian Causal Induction
|
stat.ML cs.AI
|
Discovering causal relationships is a hard task, often hindered by the need
for intervention, and often requiring large amounts of data to resolve
statistical uncertainty. However, humans quickly arrive at useful causal
relationships. One possible reason is that humans extrapolate from past
experience to new, unseen situations: that is, they encode beliefs over causal
invariances, allowing for sound generalization from the observations they
obtain from directly acting in the world.
Here we outline a Bayesian model of causal induction where beliefs over
competing causal hypotheses are modeled using probability trees. Based on this
model, we illustrate why, in the general case, we need interventions plus
constraints on our causal hypotheses in order to extract causal information
from our experience.
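A toy sketch of Bayesian updating over two competing causal hypotheses after interventions; the hypotheses and the likelihood values are invented for illustration (the paper's model uses probability trees, which this simple two-hypothesis update does not capture):

```python
def posterior_over_causes(data, prior=0.5):
    """Toy Bayesian causal induction over binary X, Y.
    H1: X -> Y with P(Y=1 | do(X=1)) = 0.9.
    H2: Y is independent of X, with P(Y=1) = 0.5.
    Each observation y is the outcome of Y after the intervention do(X=1);
    the numbers are illustrative, not from the paper."""
    p1 = prior
    for y in data:
        l1 = 0.9 if y else 0.1   # likelihood under H1
        l2 = 0.5                 # likelihood under H2
        p1 = l1 * p1 / (l1 * p1 + l2 * (1 - p1))
    return p1
```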
|
1111.0711
|
Hierarchical and High-Girth QC LDPC Codes
|
cs.IT math.IT
|
We present a general approach to designing capacity-approaching high-girth
low-density parity-check (LDPC) codes that are friendly to hardware
implementation. Our methodology starts by defining a new class of
"hierarchical" quasi-cyclic (HQC) LDPC codes that generalizes the structure of
quasi-cyclic (QC) LDPC codes. Whereas the parity check matrices of QC LDPC
codes are composed of circulant sub-matrices, those of HQC LDPC codes are
composed of a hierarchy of circulant sub-matrices that are in turn constructed
from circulant sub-matrices, and so on, through some number of levels. We show
how to map any class of codes defined using a protograph into a family of HQC
LDPC codes. Next, we present a girth-maximizing algorithm that optimizes the
degrees of freedom within the family of codes to yield a high-girth HQC LDPC
code. Finally, we discuss how certain characteristics of a code protograph will
lead to inevitable short cycles, and show that these short cycles can be
eliminated using a "squashing" procedure that results in a high-girth QC LDPC
code, although not a hierarchical one. We illustrate our approach with designed
examples of girth-10 QC LDPC codes obtained from protographs of one-sided
spatially-coupled codes.
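As background for the hierarchical construction, a short numpy sketch of the ordinary QC structure that HQC codes generalize: a parity-check matrix assembled from circulant (cyclically shifted identity) sub-matrices. The base matrix of shift exponents below is illustrative, not a designed code:

```python
import numpy as np

def circulant(p, shift):
    """p x p identity cyclically shifted by `shift`; shift < 0 denotes an
    all-zero block, a common convention for QC base matrices."""
    if shift < 0:
        return np.zeros((p, p), dtype=int)
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_parity_check(exponents, p):
    """Assemble a QC LDPC parity-check matrix from a base matrix of shifts."""
    return np.block([[circulant(p, e) for e in row] for row in exponents])

# Illustrative 2x3 base matrix; an HQC code would build each block
# recursively from circulants through several levels.
H = qc_parity_check([[0, 1, 2], [2, 0, -1]], p=5)
```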
|
1111.0712
|
Online Learning with Preference Feedback
|
cs.LG cs.AI
|
We propose a new online learning model for learning with preference feedback.
The model is especially suited for applications like web search and recommender
systems, where preference data is readily available from implicit user feedback
(e.g. clicks). In particular, at each time step a potentially structured object
(e.g. a ranking) is presented to the user in response to a context (e.g.
query), providing him or her with some unobserved amount of utility. As
feedback the algorithm receives an improved object that would have provided
higher utility. We propose a learning algorithm with provable regret bounds for
this online learning setting and demonstrate its effectiveness on a web-search
application. The new learning model also applies to many other interactive
learning problems and admits several interesting extensions.
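A minimal sketch of one natural algorithm for this setting: a perceptron-style update that moves the utility estimate toward the improved object returned as feedback. The feature map, object set, and full-information feedback used below are illustrative assumptions, not necessarily the paper's algorithm:

```python
import numpy as np

def preference_perceptron(contexts, feedback, phi, n_objects, T, dim):
    """Online learning from preference feedback: present the object with
    highest estimated utility under w, receive an improved object from the
    user, and update w toward it."""
    w = np.zeros(dim)
    presented = []
    for t in range(T):
        x = contexts[t % len(contexts)]
        y = max(range(n_objects), key=lambda c: w @ phi(x, c))  # best guess
        y_bar = feedback(x, y)             # user's improved object
        w += phi(x, y_bar) - phi(x, y)     # move estimate toward feedback
        presented.append(y)
    return w, presented
```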
|
1111.0727
|
Self-Interference Cancellation in Multi-hop Full-Duplex Networks via
Structured Signaling
|
cs.IT math.IT
|
This paper discusses transmission strategies for dealing with the problem of
self-interference in multi-hop wireless networks in which the nodes communicate
in a full-duplex mode. An information-theoretic study of the simplest such
multi-hop network: the two-hop source-relay-destination network, leads to a
novel transmission strategy called structured self-interference cancellation
(or just "structured cancellation" for short). In the structured cancellation
strategy, the source refrains from transmitting on certain signal levels, and
the relay structures its transmit signal such that it can learn the residual
self-interference channel, and undo the self-interference, by observing the
portion of its own transmit signal that appears at the signal levels left empty
by the source. It is shown that in certain nontrivial regimes, the structured
cancellation strategy outperforms not only half-duplex but also full-duplex
schemes in which time-orthogonal training is used for estimating the residual
self-interference channel.
|
1111.0737
|
Practical design of multi-channel oversampled warped cosine-modulated
filter banks
|
cs.IT math.IT
|
A practical approach to optimal design of multichannel oversampled warped
cosine-modulated filter banks (CMFBs) is proposed. A warped CMFB is obtained
by an allpass transformation of a uniform CMFB. The paper addresses the
problems of minimizing amplitude distortion and suppressing the aliasing
components that emerge due to oversampling of the filter bank channel
signals. The proposed optimization-based design considerably reduces
distortions of the overall filter bank transfer function, taking the channel
subsampling ratios into account. A Matlab implementation of the proposed
warped CMFB design method is available in a public GitHub repository.
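For intuition, the frequency warping induced by the first-order allpass transformation z^-1 -> (z^-1 - a)/(1 - a*z^-1), which turns a uniform CMFB into a warped one, can be sketched as follows; the formula is the standard allpass phase map, and the value of a is illustrative:

```python
import numpy as np

def warped_freq(w, a):
    """Map a uniform frequency w (rad) to its warped counterpart under a
    first-order allpass transformation with coefficient a (|a| < 1)."""
    return w + 2 * np.arctan2(a * np.sin(w), 1 - a * np.cos(w))
```

For a = 0 the map is the identity (uniform CMFB), and for a > 0 low frequencies are stretched, giving the non-uniform channel widths of the warped bank.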
|
1111.0753
|
Towards "Intelligent Compression" in Streams: A Biased Reservoir
Sampling based Bloom Filter Approach
|
cs.IR cs.DS
|
With the explosion of information stored world-wide, data-intensive computing
has become a central area of research. Efficient management and processing of
this massive, exponentially growing amount of data from diverse sources, such
as telecommunication call data records, online transaction records, etc., has
become a necessity. Removing redundancy from such huge (multi-billion record)
datasets, resulting in resource and compute efficiency for downstream
processing, constitutes an important area of study. "Intelligent
compression", or deduplication in streaming scenarios, for the precise
identification and elimination of duplicates from an unbounded data stream,
is a greater challenge given the real-time nature of data arrival. Stable
Bloom Filters (SBF) address this problem to a certain extent. However, SBF
suffers from a high false negative rate (FNR) and a slow convergence rate,
thereby rendering it inefficient for applications with low FNR tolerance. In
this paper, we present a novel Reservoir Sampling based Bloom Filter (RSBF)
data structure, based on the combined concepts of reservoir sampling and
Bloom filters, for the approximate detection of duplicates in data streams.
Using detailed theoretical analysis, we prove analytical bounds on its false
positive rate (FPR), false negative rate (FNR), and convergence rates with
low memory requirements. We show that RSBF offers the currently lowest FNR
and convergence rates, better than those of SBF while using the same memory.
Using empirical analysis on real-world datasets (3 million records) and
synthetic datasets with around 1 billion records, we demonstrate up to 2x
improvement in FNR with better convergence rates as compared to SBF, while
exhibiting a comparable FPR. To the best of our knowledge, this is the first
attempt to integrate the reservoir sampling method with Bloom filters for
deduplication in streaming scenarios.
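A minimal sketch of the general idea, a Bloom filter whose set bits are probabilistically decayed, reservoir-style, so that the filter tracks the recent stream. This is not the paper's exact RSBF construction; the sizes, hash choice, and eviction rule are illustrative assumptions:

```python
import hashlib
import random

class ReservoirBloomFilter:
    """Stream deduplication sketch: a Bloom filter with reservoir-style
    decay of set bits (illustrative, not the paper's exact RSBF)."""

    def __init__(self, m=1024, k=3, decay=0.0, seed=0):
        self.bits = [0] * m
        self.m, self.k = m, k
        self.decay = decay            # prob. of clearing one random bit per insert
        self.rng = random.Random(seed)

    def _indices(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def seen_and_add(self, item):
        """Return True if `item` looks like a duplicate, then record it."""
        idx = list(self._indices(item))
        duplicate = all(self.bits[j] for j in idx)
        if self.rng.random() < self.decay:   # biased reservoir-style eviction
            self.bits[self.rng.randrange(self.m)] = 0
        for j in idx:
            self.bits[j] = 1
        return duplicate
```

With decay = 0 this degenerates to a plain Bloom filter (no false negatives, growing FPR); a positive decay trades a bounded FNR for a stable FPR on unbounded streams, which is the regime the paper analyzes.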
|
1111.0794
|
Global Exponential Observers for Two Classes of Nonlinear Systems
|
math.OC cs.SY
|
This paper develops sufficient conditions for the existence of global
exponential observers for two classes of nonlinear systems: (i) the class of
systems with a globally asymptotically stable compact set, and (ii) the class
of systems that evolve on an open set. In the first class, the derived
continuous-time observer also leads to the construction of a robust global
sampled-data exponential observer, under additional conditions. Two
illustrative examples of applications of the general results are presented, one
is a system with monotone nonlinearities and the other is the chemostat system.
|
1111.0854
|
The Homology Groups of a Partial Trace Monoid Action
|
math.AT cs.MA
|
The aim of this paper is to investigate the homology groups of mathematical
models of concurrency. We study the Baues-Wirsching homology groups of a small
category associated with a partial monoid action on a set. We prove that these
groups can be reduced to the Leech homology groups of the monoid. For a trace
monoid with an action on a set, we will build a cubical complex of free Abelian
groups with homology groups isomorphic to the integral homology groups of the
action category. This allows us to solve a problem posed by the author in
2004: constructing an algorithm for computing the homology groups of CE nets.
We describe the algorithm and give examples of calculating the homology groups.
|
1111.0860
|
Clause/Term Resolution and Learning in the Evaluation of Quantified
Boolean Formulas
|
cs.AI
|
Resolution is the rule of inference at the basis of most procedures for
automated reasoning. In these procedures, the input formula is first translated
into an equisatisfiable formula in conjunctive normal form (CNF) and then
represented as a set of clauses. Deduction starts by inferring new clauses by
resolution, and goes on until the empty clause is generated or satisfiability
of the set of clauses is proven, e.g., because no new clauses can be generated.
In this paper, we restrict our attention to the problem of evaluating
Quantified Boolean Formulas (QBFs). In this setting, the above outlined
deduction process is known to be sound and complete if given a formula in CNF
and if a form of resolution, called Q-resolution, is used. We introduce
Q-resolution on terms, to be used for formulas in disjunctive normal form. We
show that the computation performed by most of the available procedures for
QBFs --based on the Davis-Logemann-Loveland procedure (DLL) for propositional
satisfiability-- corresponds to a tree in which Q-resolution on terms and
clauses alternate. This poses the theoretical bases for the introduction of
learning, corresponding to recording Q-resolution formulas associated with the
nodes of the tree. We discuss the problems related to the introduction of
learning in DLL based procedures, and present solutions extending
state-of-the-art proposals coming from the literature on propositional
satisfiability. Finally, we show that our DLL based solver extended with
learning, performs significantly better on benchmarks used in the 2003 QBF
solvers comparative evaluation.
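For concreteness, the clause-level resolution rule underlying the procedures discussed above can be sketched as follows. This is plain propositional resolution on clauses as sets of integer literals; Q-resolution additionally applies universal reduction, which is omitted here:

```python
def resolve(c1, c2):
    """All resolvents of two clauses, each a set of integer literals where
    -v denotes the negation of variable v. Tautological resolvents
    (containing both v and -v) are discarded, as in Q-resolution."""
    resolvents = []
    for lit in c1:
        if -lit in c2:                       # complementary pair found
            r = (c1 - {lit}) | (c2 - {-lit})
            if not any(-x in r for x in r):  # drop tautologies
                resolvents.append(frozenset(r))
    return resolvents
```

Deriving the empty clause, as in resolving {x} with {-x}, is the refutation that DLL-based QBF procedures implicitly build and that learning records at the nodes of the search tree.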
|
1111.0873
|
Collective Energy Foraging of Robot Swarms and Robot Organisms
|
cs.RO
|
Cooperation and competition among stand-alone swarm agents increase
collective fitness of the whole system. A principally new kind of collective
systems is demonstrated by some bacteria and fungi, when they build symbiotic
organisms. Symbiotic life forms develop new functional and self-developmental
capabilities, which allow better survival of swarm agents in different
environments. In this paper we consider an energy foraging scenario for two
robotic species: swarm robots and a symbiotic robot organism. It is indicated
that the aggregation of microrobots into a robot organism can provide better
functional fitness for the whole group. A prototype of microrobots capable of
autonomous aggregation and disaggregation is shown.
|
1111.0885
|
Graph Regularized Nonnegative Matrix Factorization for Hyperspectral
Data Unmixing
|
cs.CV
|
Spectral unmixing is an important tool in hyperspectral data analysis for
estimating endmembers and abundance fractions in a mixed pixel. This paper
examines the applicability of a recently developed algorithm called graph
regularized nonnegative matrix factorization (GNMF) for this aim. The proposed
approach exploits the intrinsic geometrical structure of the data besides
considering positivity and full additivity constraints. Simulated data, based
on measured spectral signatures, are used to evaluate the proposed
algorithm. Results in terms of abundance angle distance (AAD) and spectral
angle distance (SAD) show that this method can effectively unmix hyperspectral
data.
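A minimal numpy sketch of graph-regularized NMF with multiplicative updates, following the common GNMF formulation min ||X - WH||^2 + lam * Tr(H L H^T) with L = D - A the Laplacian of a graph over pixels. The regularization weight, initialization, and iteration count are illustrative assumptions, not the paper's exact settings; columns of W play the role of endmembers and rows of H the abundances:

```python
import numpy as np

def gnmf(X, A, r, lam=0.1, iters=200, seed=0):
    """Graph-regularized NMF via multiplicative updates.
    X: (bands x pixels) nonnegative data, A: pixel adjacency matrix,
    r: number of endmembers. Returns nonnegative W (bands x r), H (r x pixels)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 1e-3
    H = rng.random((r, n)) + 1e-3
    D = np.diag(A.sum(axis=1))          # degree matrix; L = D - A
    eps = 1e-9
    for _ in range(iters):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ A) / (W.T @ W @ H + lam * H @ D + eps)
    return W, H
```

The graph term pulls abundance vectors of neighboring pixels together, which is how the intrinsic geometrical structure of the data enters the factorization.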
|