| id | title | categories | abstract |
|---|---|---|---|
1106.0665
|
Infinite-Horizon Policy-Gradient Estimation
|
cs.AI
|
Gradient-based approaches to direct policy search in reinforcement learning
have received much recent attention as a means to solve problems of partial
observability and to avoid some of the problems associated with policy
degradation in value-function methods. In this paper we introduce GPOMDP, a
simulation-based algorithm for generating a {\em biased} estimate of the
gradient of the {\em average reward} in Partially Observable Markov Decision
Processes (POMDPs) controlled by parameterized stochastic policies. A similar
algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The
algorithm's chief advantages are that it requires storage of only twice the
number of policy parameters, uses one free parameter $\beta\in [0,1)$ (which
has a natural interpretation in terms of bias-variance trade-off), and requires
no knowledge of the underlying state. We prove convergence of GPOMDP, and show
how the correct choice of the parameter $\beta$ is related to the {\em mixing
time} of the controlled POMDP. We briefly describe extensions of GPOMDP to
controlled Markov chains, continuous state, observation and control spaces,
multiple-agents, higher-order derivatives, and a version for training
stochastic policies with internal states. In a companion paper (Baxter,
Bartlett, & Weaver, 2001) we show how the gradient estimates generated by
GPOMDP can be used in both a traditional stochastic gradient algorithm and a
conjugate-gradient procedure to find local optima of the average reward.
|
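To make the estimator concrete, here is a minimal sketch of the GPOMDP recursion on a toy one-observation environment (the Bernoulli policy, the reward function, and all parameter values are illustrative assumptions, not the paper's experimental setup):

```python
import numpy as np

def gpomdp(theta, beta=0.9, T=20000, seed=0):
    """GPOMDP sketch: biased estimate of the average-reward gradient.
    Storage is twice the number of policy parameters: the eligibility
    trace z and the running estimate delta."""
    rng = np.random.default_rng(seed)
    z = np.zeros_like(theta)       # eligibility trace, discounted by beta
    delta = np.zeros_like(theta)   # running gradient estimate
    for t in range(T):
        p = 1.0 / (1.0 + np.exp(-theta[0]))  # toy Bernoulli policy
        a = rng.random() < p                 # sample an action
        r = 1.0 if a else 0.0                # toy reward: 1 for action 1
        grad_logp = np.array([1.0 - p if a else -p])  # score vector
        z = beta * z + grad_logp
        delta += (r * z - delta) / (t + 1)   # running average of r_t * z_t
    return delta

est = gpomdp(np.array([0.0]))
```

As the abstract notes, only `z` and `delta` are stored, i.e., twice the number of policy parameters; increasing `beta` toward 1 lowers the bias at the cost of higher variance.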
1106.0666
|
Experiments with Infinite-Horizon, Policy-Gradient Estimation
|
cs.AI cs.LG
|
In this paper, we present algorithms that perform gradient ascent of the
average reward in a partially observable Markov decision process (POMDP). These
algorithms are based on GPOMDP, an algorithm introduced in a companion paper
(Baxter and Bartlett, this volume), which computes biased estimates of the
performance gradient in POMDPs. The algorithm's chief advantages are that it
uses only one free parameter beta, which has a natural interpretation in terms
of bias-variance trade-off, it requires no knowledge of the underlying state,
and it can be applied to infinite state, control and observation spaces. We
show how the gradient estimates produced by GPOMDP can be used to perform
gradient ascent, both with a traditional stochastic-gradient algorithm, and
with an algorithm based on conjugate-gradients that utilizes gradient
information to bracket maxima in line searches. Experimental results are
presented illustrating both the theoretical results of (Baxter and Bartlett,
this volume) on a toy problem, and practical aspects of the algorithms on a
number of more realistic problems.
|
1106.0667
|
Reasoning within Fuzzy Description Logics
|
cs.AI
|
Description Logics (DLs) are suitable, well-known logics for managing
structured knowledge. They allow reasoning about individuals and
well-defined concepts, i.e., sets of individuals with common properties.
Experience in using DLs in applications has shown that in many cases we
would like to extend their capabilities. In particular, their use in the
context of Multimedia Information Retrieval (MIR) leads to the conviction
that such DLs should allow the treatment of the inherent imprecision in
multimedia object content representation and retrieval. In this paper we
present a fuzzy extension of ALC, combining Zadeh's fuzzy logic with a
classical DL. In particular, concepts become fuzzy and, thus, reasoning
about imprecise concepts is supported. We define its syntax and semantics,
describe its properties, and present a constraint propagation calculus for
reasoning in it.
|
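For intuition, the Zadeh connectives underlying such a fuzzy semantics can be sketched in a few lines (the concept names and membership degrees below are invented for illustration, not taken from the paper):

```python
# Zadeh's fuzzy connectives: conjunction = min, disjunction = max,
# negation = 1 - x; membership degrees live in [0, 1].
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

# Degree to which an individual belongs to (Tall AND (NOT Child)):
tall, child = 0.8, 0.3
degree = f_and(tall, f_not(child))   # min(0.8, 1 - 0.3) = 0.7
```

Reasoning about imprecise concepts then amounts to propagating such degrees through concept expressions, which is what the calculus in the paper formalizes.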
1106.0668
|
An Analysis of Reduced Error Pruning
|
cs.AI
|
Top-down induction of decision trees has been observed to suffer from the
inadequate functioning of the pruning phase. In particular, it is known that
the size of the resulting tree grows linearly with the sample size, even though
the accuracy of the tree does not improve. Reduced Error Pruning is an
algorithm that has been used as a representative technique in attempts to
explain the problems of decision tree learning. In this paper we present
analyses of Reduced Error Pruning in three different settings. First we study
the basic algorithmic properties of the method, properties that hold
independent of the input decision tree and pruning examples. Then we examine a
situation that intuitively should lead to the subtree under consideration
being replaced by a leaf node, one in which the class label and attribute values of
the pruning examples are independent of each other. This analysis is conducted
under two different assumptions. The general analysis shows that the pruning
probability of a node fitting pure noise is bounded by a function that
decreases exponentially as the size of the tree grows. In a specific analysis
we assume that the examples are distributed uniformly to the tree. This
assumption lets us approximate the number of subtrees that are pruned because
they do not receive any pruning examples. This paper clarifies the different
variants of the Reduced Error Pruning algorithm, brings new insight to its
algorithmic properties, analyses the algorithm under fewer imposed
assumptions than before, and includes the previously overlooked empty
subtrees in the analysis.
|
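A minimal bottom-up sketch of Reduced Error Pruning over a toy tree representation (the `Node` class and helper names are illustrative assumptions, not the paper's formalization):

```python
class Node:
    """Decision-tree node: internal nodes split on `attr`, leaves carry `label`."""
    def __init__(self, attr=None, children=None, label=None):
        self.attr, self.children, self.label = attr, children or {}, label

def predict(node, x):
    while node.label is None:
        node = node.children[x[node.attr]]
    return node.label

def error(node, examples):
    return sum(predict(node, x) != y for x, y in examples)

def majority(labels):
    return max(set(labels), key=labels.count) if labels else 0

def rep(node, examples):
    """Reduced Error Pruning: prune children bottom-up, then replace this
    subtree by a majority leaf if that does not increase error on the
    pruning examples."""
    if node.label is not None:
        return node
    for val, child in node.children.items():
        subset = [(x, y) for x, y in examples if x[node.attr] == val]
        node.children[val] = rep(child, subset)
    leaf = Node(label=majority([y for _, y in examples]))
    return leaf if error(leaf, examples) <= error(node, examples) else node
```

Note that a subtree receiving no pruning examples is collapsed to a leaf here, which is exactly the empty-subtree case the analysis above brings into focus.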
1106.0669
|
GIB: Imperfect Information in a Computationally Challenging Game
|
cs.AI
|
This paper investigates the problems arising in the construction of a program
to play the game of contract bridge. These problems include both the difficulty
of solving the game's perfect information variant, and techniques needed to
address the fact that bridge is not, in fact, a perfect information game. GIB,
the program being described, involves five separate technical advances:
partition search, the practical application of Monte Carlo techniques to
realistic problems, a focus on achievable sets to solve problems inherent in
the Monte Carlo approach, an extension of alpha-beta pruning from total orders
to arbitrary distributive lattices, and the use of squeaky wheel optimization
to find approximately optimal solutions to cardplay problems. GIB is currently
believed to be of approximately expert caliber, and is currently the strongest
computer bridge program in the world.
|
1106.0671
|
Domain Filtering Consistencies
|
cs.AI
|
Enforcing local consistencies is one of the main features of constraint
reasoning. Which level of local consistency should be used when searching for
solutions in a constraint network is a basic question. Arc consistency and
partial forms of arc consistency have been widely studied, and have been known
for some time through the forward checking or the MAC search algorithms. Until
recently, stronger forms of local consistency remained limited to those that
change the structure of the constraint graph, and thus, could not be used in
practice, especially on large networks. This paper focuses on the local
consistencies that are stronger than arc consistency, without changing the
structure of the network, i.e., only removing inconsistent values from the
domains. In the last five years, several such local consistencies have been
proposed by us or by others. We make an overview of all of them, and highlight
some relations between them. We compare them both theoretically and
experimentally, considering their pruning efficiency and the time required to
enforce them.
|
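As a baseline for the stronger consistencies surveyed above, plain arc consistency can be enforced with the classic AC-3 scheme (a generic sketch; the dictionary-based representation of domains and binary constraints is an assumption for illustration):

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: repeatedly remove values with no support until a fixed point.
    domains: {var: set of values}; constraints: {(x, y): predicate(vx, vy)}."""
    arcs = {(x, y) for (x, y) in constraints} | {(y, x) for (x, y) in constraints}
    def check(x, y, vx, vy):
        if (x, y) in constraints:
            return constraints[(x, y)](vx, vy)
        return constraints[(y, x)](vy, vx)
    queue = deque(arcs)
    while queue:
        x, y = queue.popleft()
        removed = {vx for vx in domains[x]
                   if not any(check(x, y, vx, vy) for vy in domains[y])}
        if removed:
            domains[x] -= removed               # only values are filtered;
            queue.extend((z, x) for (z, w) in arcs if w == x and z != y)
    return domains
```

The consistencies discussed in the paper prune strictly more values than this, while likewise leaving the constraint graph's structure untouched.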
1106.0672
|
Policy Recognition in the Abstract Hidden Markov Model
|
cs.AI
|
In this paper, we present a method for recognising an agent's behaviour in
dynamic, noisy, uncertain domains, and across multiple levels of abstraction.
We term this problem on-line plan recognition under uncertainty and view it
generally as probabilistic inference on the stochastic process representing the
execution of the agent's plan. Our contributions in this paper are twofold. In
terms of probabilistic inference, we introduce the Abstract Hidden Markov Model
(AHMM), a novel type of stochastic process, provide its dynamic Bayesian
network (DBN) structure and analyse the properties of this network. We then
describe an application of the Rao-Blackwellised Particle Filter to the AHMM
which allows us to construct an efficient, hybrid inference method for this
model. In terms of plan recognition, we propose a novel plan recognition
framework based on the AHMM as the plan execution model. The Rao-Blackwellised
hybrid inference for AHMM can take advantage of the independence properties
inherent in a model of plan execution, leading to an algorithm for online
probabilistic plan recognition that scales well with the number of levels in
the plan hierarchy. This illustrates that while stochastic models for plan
execution can be complex, they exhibit special structures which, if exploited,
can lead to efficient plan recognition algorithms. We demonstrate the
usefulness of the AHMM framework via a behaviour recognition system in a
complex spatial environment using distributed video surveillance data.
|
1106.0673
|
Computational Approach to Anaphora Resolution in Spanish Dialogues
|
cs.CL
|
This paper presents an algorithm for identifying noun-phrase antecedents of
pronouns and adjectival anaphors in Spanish dialogues. We believe that anaphora
resolution requires numerous sources of information in order to find the
correct antecedent of the anaphor. These sources can be of different kinds,
e.g., linguistic information, discourse/dialogue structure information, or
topic information. For this reason, our algorithm uses various different kinds
of information (hybrid information). The algorithm is based on linguistic
constraints and preferences and uses an anaphoric accessibility space within
which the algorithm finds the noun phrase. We present some experiments related
to this algorithm and this space using a corpus of 204 dialogues. The algorithm
is implemented in Prolog. According to this study, 95.9% of antecedents were
located in the proposed space, a precision of 81.3% was obtained for pronominal
anaphora resolution, and 81.5% for adjectival anaphora.
|
1106.0675
|
The FF Planning System: Fast Plan Generation Through Heuristic Search
|
cs.AI
|
We describe and evaluate the algorithmic techniques that are used in the FF
planning system. Like the HSP system, FF relies on forward state space search,
using a heuristic that estimates goal distances by ignoring delete lists.
Unlike HSP's heuristic, our method does not assume facts to be independent. We
introduce a novel search strategy that combines hill-climbing with systematic
search, and we show how other powerful heuristic information can be extracted
and used to prune the search space. FF was the most successful automatic
planner at the recent AIPS-2000 planning competition. We review the results of
the competition, give data for other benchmark domains, and investigate the
reasons for the runtime performance of FF compared to HSP.
|
1106.0676
|
Optimizing Dialogue Management with Reinforcement Learning: Experiments
with the NJFun System
|
cs.LG cs.AI
|
Designing the dialogue policy of a spoken dialogue system involves many
nontrivial choices. This paper presents a reinforcement learning approach for
automatically optimizing a dialogue policy, which addresses the technical
challenges in applying reinforcement learning to a working dialogue system with
human users. We report on the design, construction and empirical evaluation of
NJFun, an experimental spoken dialogue system that provides users with access
to information about fun things to do in New Jersey. Our results show that by
optimizing its performance via reinforcement learning, NJFun measurably
improves system performance.
|
1106.0678
|
ATTac-2000: An Adaptive Autonomous Bidding Agent
|
cs.AI
|
The First Trading Agent Competition (TAC) was held from June 22nd to July
8th, 2000. TAC was designed to create a benchmark problem in the complex domain
of e-marketplaces and to motivate researchers to apply unique approaches to a
common task. This article describes ATTac-2000, the first-place finisher in
TAC. ATTac-2000 uses a principled bidding strategy that includes several
elements of adaptivity. In addition to the success at the competition, isolated
empirical results are presented indicating the robustness and effectiveness of
ATTac-2000's adaptive strategy.
|
1106.0679
|
Efficient Methods for Qualitative Spatial Reasoning
|
cs.AI
|
The theoretical properties of qualitative spatial reasoning in the RCC8
framework have been analyzed extensively. However, no empirical investigation
has been made yet. Our experiments show that the adaptation of the algorithms
used for qualitative temporal reasoning can solve large RCC8 instances, even if
they are in the phase transition region -- provided that one uses the maximal
tractable subsets of RCC8 that have been identified by us. In particular, we
demonstrate that the orthogonal combination of heuristic methods is successful
in solving almost all apparently hard instances in the phase transition region
up to a certain size in reasonable time.
|
1106.0680
|
Learning Geometrically-Constrained Hidden Markov Models for Robot
Navigation: Bridging the Topological-Geometrical Gap
|
cs.AI cs.RO
|
Hidden Markov models (HMMs) and partially observable Markov decision
processes (POMDPs) provide useful tools for modeling dynamical systems. They
are particularly useful for representing the topology of environments such as
road networks and office buildings, which are typical for robot navigation and
planning. The work presented here describes a formal framework for
incorporating readily available odometric information and geometrical
constraints into both the models and the algorithm that learns them. By taking
advantage of such information, learning HMMs/POMDPs can be made to generate
better solutions and require fewer iterations, while being robust in the face
of data reduction. Experimental results, obtained from both simulated and real
robot data, demonstrate the effectiveness of the approach.
|
1106.0681
|
Accelerating Reinforcement Learning through Implicit Imitation
|
cs.LG cs.AI
|
Imitation can be viewed as a means of enhancing learning in multiagent
environments. It augments an agent's ability to learn useful behaviors by
making intelligent use of the knowledge implicit in behaviors demonstrated by
cooperative teachers or other more experienced agents. We propose and study a
formal model of implicit imitation that can accelerate reinforcement learning
dramatically in certain cases. Roughly, by observing a mentor, a
reinforcement-learning agent can extract information about its own capabilities
in, and the relative value of, unvisited parts of the state space. We study two
specific instantiations of this model, one in which the learning agent and the
mentor have identical abilities, and one designed to deal with agents and
mentors with different action sets. We illustrate the benefits of implicit
imitation by integrating it with prioritized sweeping, and demonstrating
improved performance and convergence through observation of single and multiple
mentors. Though we make some stringent assumptions regarding observability and
possible interactions, we briefly comment on extensions of the model that relax
these restrictions.
|
1106.0706
|
Actor-network procedures: Modeling multi-factor authentication, device
pairing, social interactions
|
cs.CR cs.CY cs.LO cs.SI
|
As computation spreads from computers to networks of computers, and migrates
into cyberspace, it ceases to be globally programmable, but it remains
programmable indirectly: network computations cannot be controlled, but they
can be steered by local constraints on network nodes. The tasks of
"programming" global behaviors through local constraints belong to the area of
security. The "program particles" that assure that a system of local
interactions leads towards some desired global goals are called security
protocols. As computation spreads beyond cyberspace, into physical and social
spaces, new security tasks and problems arise. As networks are extended by
physical sensors and controllers, including the humans, and interlaced with
social networks, the engineering concepts and techniques of computer security
blend with the social processes of security. These new connectors for
computational and social software require a new "discipline of programming" of
global behaviors through local constraints. Since the new discipline seems to
be emerging from a combination of established models of security protocols with
older methods of procedural programming, we use the name procedures for these
new connectors, that generalize protocols. In the present paper we propose
actor-networks as a formal model of computation in heterogeneous networks of
computers, humans and their devices; and we introduce Procedure Derivation
Logic (PDL) as a framework for reasoning about security in actor-networks. On
the way, we survey the guiding ideas of Protocol Derivation Logic (also PDL)
that evolved through our work on security over the last 10 years. Both formalisms are
geared towards graphic reasoning and tool support. We illustrate their workings
by analysing a popular form of two-factor authentication, and a multi-channel
device pairing procedure, devised for this occasion.
|
1106.0707
|
Efficient Reinforcement Learning Using Recursive Least-Squares Methods
|
cs.LG cs.AI
|
The recursive least-squares (RLS) algorithm is one of the most well-known
algorithms used in adaptive filtering, system identification and adaptive
control. Its popularity is mainly due to its fast convergence speed, which is
considered to be optimal in practice. In this paper, RLS methods are used to
solve reinforcement learning problems, where two new reinforcement learning
algorithms using linear value function approximators are proposed and analyzed.
The two algorithms are called RLS-TD(lambda) and Fast-AHC (Fast Adaptive
Heuristic Critic), respectively. RLS-TD(lambda) can be viewed as the extension
of RLS-TD(0) from lambda=0 to general lambda within interval [0,1], so it is a
multi-step temporal-difference (TD) learning algorithm using RLS methods. The
convergence with probability one and the limit of convergence of RLS-TD(lambda)
are proved for ergodic Markov chains. Compared to the existing LS-TD(lambda)
algorithm, RLS-TD(lambda) has advantages in computation and is more suitable
for online learning. The effectiveness of RLS-TD(lambda) is analyzed and
verified by learning prediction experiments of Markov chains with a wide range
of parameter settings. The Fast-AHC algorithm is derived by applying the
proposed RLS-TD(lambda) algorithm in the critic network of the adaptive
heuristic critic method. Unlike conventional AHC algorithm, Fast-AHC makes use
of RLS methods to improve the learning-prediction efficiency in the critic.
Learning control experiments of the cart-pole balancing and the acrobot
swing-up problems are conducted to compare the data efficiency of Fast-AHC with
conventional AHC. From the experimental results, it is shown that the data
efficiency of learning control can also be improved by using RLS methods in the
learning-prediction process of the critic. The performance of Fast-AHC is also
compared with that of the AHC method using LS-TD(lambda). Furthermore, it is
demonstrated in the experiments that different initial values of the variance
matrix in RLS-TD(lambda) are required to get better performance not only in
learning prediction but also in learning control. The experimental results are
analyzed based on the existing theoretical work on the transient phase of
forgetting factor RLS methods.
|
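A compact sketch of the RLS-TD(lambda) update with linear value-function approximation, following the standard recursive least-squares form (the two-state test chain and the initial variance `p0` are illustrative assumptions; `p0` plays the role of the initial variance matrix the abstract discusses):

```python
import numpy as np

def rls_td(transitions, phi, gamma=0.9, lam=0.7, p0=100.0):
    """RLS-TD(lambda) sketch: recursive least-squares temporal-difference
    learning with linear features phi(s)."""
    n = phi(transitions[0][0]).shape[0]
    theta = np.zeros(n)
    P = p0 * np.eye(n)                       # initial variance matrix
    z = np.zeros(n)
    for s, r, s_next in transitions:
        x, x_next = phi(s), phi(s_next)
        z = gamma * lam * z + x              # eligibility trace
        d = x - gamma * x_next               # TD feature difference
        k = P @ z / (1.0 + d @ P @ z)        # RLS gain
        theta = theta + k * (r - d @ theta)  # correct by the TD error
        P = P - np.outer(k, d @ P)           # rank-one update of P
    return theta

# Two-state deterministic chain: s0 -(r=1)-> s1 -(r=0)-> s0 -> ...
traj = [(0, 1.0, 1), (1, 0.0, 0)] * 200
phi = lambda s: np.eye(2)[s]
theta = rls_td(traj, phi)
```

On this deterministic chain the recursion should recover the discounted values V(s0) = 1/0.19 and V(s1) = 0.9/0.19 up to the small bias introduced by the prior `p0 * I`.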
1106.0708
|
Optimal Sensor Configurations for Rectangular Target Detection
|
math.OC cs.MA cs.RO cs.SY
|
We find optimal search strategies in which targets are observed from several
different angles. Targets are assumed to exhibit rectangular symmetry and have
a uniformly-distributed orientation. By rectangular symmetry, it is meant that
one side of a target is the mirror image of its opposite side. Finding an
optimal solution is generally a hard problem. Fortunately, symmetry principles
allow analytical and intuitive solutions to be found. One such optimal search
strategy consists of choosing n angles evenly separated on the half-circle and
leads to a lower bound on the probability of not detecting targets. As no prior
knowledge of the target orientation is required, such search strategies are
also robust, a desirable feature in search and detection missions.
|
1106.0718
|
Probabilistic Management of OCR Data using an RDBMS
|
cs.DB cs.DL cs.IR
|
The digitization of scanned forms and documents is changing the data sources
that enterprises manage. To integrate these new data sources with enterprise
data, the current state-of-the-art approach is to convert the images to ASCII
text using optical character recognition (OCR) software and then to store the
resulting ASCII text in a relational database. The OCR problem is challenging,
and so the output of OCR often contains errors. In turn, queries on the output
of OCR may fail to retrieve relevant answers. State-of-the-art OCR programs,
e.g., the OCR powering Google Books, use a probabilistic model that captures
many alternatives during the OCR process. Only when the results of OCR are
stored in the database, do these approaches discard the uncertainty. In this
work, we propose to retain the probabilistic models produced by the OCR process in
a relational database management system. A key technical challenge is that the
probabilistic data produced by OCR software is very large (a single book blows
up to 2GB from 400kB as ASCII). As a result, a baseline solution that
integrates these models with an RDBMS is over 1000x slower versus standard text
processing for single table select-project queries. However, many applications
may have quality-performance needs that are in between these two extremes of
ASCII and the complete model output by the OCR software. Thus, we propose a
novel approximation scheme called Staccato that allows a user to trade recall
for query performance. Additionally, we provide a formal analysis of our
scheme's properties, and describe how we integrate our scheme with
standard-RDBMS text indexing.
|
1106.0730
|
Rademacher complexity of stationary sequences
|
stat.ML cs.LG
|
We show how to control the generalization error of time series models wherein
past values of the outcome are used to predict future values. The results are
based on a generalization of standard i.i.d. concentration inequalities to
dependent data without the mixing assumptions common in the time series
setting. Our proof and the result are simpler than previous analyses with
dependent data or stochastic adversaries which use sequential Rademacher
complexities rather than the expected Rademacher complexity for i.i.d.
processes. We also derive empirical Rademacher results without mixing
assumptions resulting in fully calculable upper bounds.
|
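For reference, the empirical Rademacher complexity whose bounds the paper derives can be estimated by Monte Carlo for a finite function class (a generic illustration of the quantity itself, not of the paper's dependent-data machinery):

```python
import numpy as np

def empirical_rademacher(F, n_draws=4000, seed=0):
    """Monte Carlo estimate of E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ]
    for a finite class. F is an (m, n) array: m functions evaluated on
    n sample points."""
    rng = np.random.default_rng(seed)
    m, n = F.shape
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        total += np.max(F @ sigma) / n           # best correlation with noise
    return total / n_draws
```

A class containing only the zero function has complexity zero; richer classes correlate better with random signs and so pay a larger generalization penalty.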
1106.0733
|
Short-term Performance Limits of MIMO Systems with Side Information at
the Transmitter
|
cs.IT math.IT
|
The fundamental performance limits of space-time block code (STBC) designs
when perfect channel information is available at the transmitter (CSIT) are
studied in this report. With CSIT, the transmitter can perform various
techniques such as rate adaption, power allocation, or beamforming. Previously,
the exploration of these fundamental results assumed long-term constraints, for
example, channel codes can have infinite decoding delay, and power or rate is
normalized over infinite channel-uses. With long-term constraints, the
transmitter can operate at a rate lower than the instantaneous mutual
information and error-free transmission can be supported. In this report, we
focus on the performance limits of short-term behavior for STBC systems. We
assume that the system has a block power constraint, a block rate constraint, and
finite decoding delay. With these constraints, although the transmitter can
perform rate adaption, power control, or beamforming, we show that
decoding-error is unavoidable. In the high SNR regime, the diversity gain is
upperbounded by the product of the number of transmit antennas, receive
antennas, and independent fading block channels that messages spread over. In
other words, fading cannot be completely combatted with short-term constraints.
The proof is based on a sphere-packing argument.
|
1106.0776
|
Semantics for Possibilistic Disjunctive Programs
|
cs.AI cs.LO cs.PL
|
In this paper, a possibilistic disjunctive logic programming approach for
modeling uncertain, incomplete and inconsistent information is defined. This
approach introduces the use of possibilistic disjunctive clauses which are able
to capture incomplete information and incomplete states of a knowledge base at
the same time.
By considering a possibilistic logic program as a possibilistic logic theory,
we define a possibilistic logic programming semantics based on answer sets
and the proof theory of possibilistic logic. We show that this possibilistic
semantics for disjunctive logic programs can be characterized by a
fixed-point operator. It is also shown that the suggested possibilistic
semantics can be computed by a resolution algorithm together with the
consideration of optimal refutations from a possibilistic logic theory.
In order to manage inconsistent possibilistic logic programs, a preference
criterion between inconsistent possibilistic models is defined; in addition,
the approach of cuts for restoring consistency of an inconsistent possibilistic
knowledge base is adopted. The approach is illustrated in a medical scenario.
|
1106.0800
|
Optimal Reinforcement Learning for Gaussian Systems
|
stat.ML cs.LG
|
The exploration-exploitation trade-off is among the central challenges of
reinforcement learning. The optimal Bayesian solution is intractable in
general. This paper studies to what extent analytic statements about optimal
learning are possible if all beliefs are Gaussian processes. A first order
approximation of learning of both loss and dynamics, for nonlinear,
time-varying systems in continuous time and space, subject to a relatively weak
restriction on the dynamics, is described by an infinite-dimensional partial
differential equation. An approximate finite-dimensional projection gives an
impression for how this result may be helpful.
|
1106.0823
|
Recovering Epipolar Geometry from Images of Smooth Surfaces
|
cs.CV cs.AI
|
We present four methods for recovering the epipolar geometry from images of
smooth surfaces. Existing methods for recovering epipolar geometry rely on
corresponding feature points, which cannot be found in such images. The
first method is based on finding corresponding characteristic points created
by illumination (ICPM - illumination characteristic points' method (PM)).
The second method is based on corresponding tangency points created by
tangents from epipoles to the outlines of smooth bodies (OTPM - outline
tangent PM). These two methods are exact and give correct results for real
images, because the positions of the corresponding illumination
characteristic points and corresponding outlines are known with small
errors. However, the second method is limited either to special types of
scenes or to restricted camera motion. We also consider two more methods,
termed CCPM (curve characteristic PM) and CTPM (curve tangent PM), which
search for the epipolar geometry of images of smooth bodies based on a set
of level curves with constant illumination intensity. The CCPM method
searches for corresponding points on isophoto curves with the help of the
correlation of curvatures between these lines. The CTPM method is based on
the property that the epipolar line tangential to an isophoto curve maps
into the epipolar line tangential to the corresponding isophoto curve. The
standard method (SM) is based on knowledge of pairs of almost exact
corresponding points. The methods have been implemented and tested against
SM on pairs of real images. Unfortunately, the last two methods give only a
finite set of solutions that includes the "good" solution; the exception is
the case of "epipoles at infinity". The main reason is the inaccuracy of the
constant-brightness assumption for smooth bodies. Outlines and illumination
characteristic points, however, are not affected by this inaccuracy, so the
first pair of methods gives exact results.
|
1106.0831
|
Optimal Real-time Spectrum Sharing between Cooperative Relay and Ad-hoc
Networks
|
cs.IT math.IT
|
Optimization based spectrum sharing strategies have been widely studied.
However, these strategies usually require a great amount of real-time
computation and incur significant signaling delay, and are thus hard to
realize in practical scenarios. This paper investigates optimal real-time spectrum
sharing between a cooperative relay network (CRN) and a nearby ad-hoc network.
Specifically, we optimize the spectrum access and resource allocation
strategies of the CRN so that the average traffic collision time between the
two networks can be minimized while maintaining a required throughput for the
CRN. The development is first for a frame-level setting, and then is extended
to an ergodic setting. For the latter setting, we propose an appealing optimal
real-time spectrum sharing strategy via Lagrangian dual optimization. The
proposed method only involves a small amount of real-time computation and
negligible control delay, and thus is suitable for practical implementations.
Simulation results are presented to demonstrate the efficiency of the proposed
strategies.
|
1106.0843
|
A Novel Adaptive Channel Equalization Method Using Variable Step-Size
Partial Rank Algorithm
|
cs.IT cs.SD math.IT
|
Recently a framework has been introduced within which a large number of
classical and modern adaptive filter algorithms can be viewed as special cases.
Variable Step-Size (VSS) normalized least mean square (VSSNLMS) and VSS Affine
Projection Algorithms (VSSAPA) are two particular examples of the adaptive
algorithms that can be covered by this generic adaptive filter. In this paper,
we introduce a new VSS Partial Rank (VSSPR) adaptive algorithm based on the
generic VSS adaptive filter and use it for channel equalization. The proposed
algorithm performs very well in attenuating noise and inter-symbol interference
(ISI) in comparison with the standard NLMS and the recently introduced AP
algorithms.
|
1106.0869
|
Impact of Mobility on MIMO Green Wireless Systems
|
cs.IT math.IT
|
This paper studies the impact of mobility on the power consumption of
wireless networks. With increasing mobility, we show that the network should
dedicate a non negligible fraction of the useful rate to estimate the different
degrees of freedom. In order to keep the rate constant, we quantify the
increase of power required for several cases of interest. In the case of a
point to point MIMO link, we calculate the minimum transmit power required for
a target rate and outage probability as a function of the coherence time and
the number of antennas. Interestingly, the results show that there is an
optimal number of antennas to be used for a given coherence time and power
consumption. This provides a lower bound limit on the minimum power required
for maintaining a green network.
|
1106.0870
|
The finite-step realizability of the joint spectral radius of a pair of
$d\times d$ matrices one of which being rank-one
|
math.OC cs.SY math.DS
|
We study the finite-step realizability of the joint/generalized spectral
radius of a pair of real $d\times d$ matrices, one of which has rank 1. Then we
prove that there always exists a finite-length word for which the spectral
finiteness property holds for the set of matrices under consideration. This
implies that stability is algorithmically decidable in our case.
|
1106.0872
|
Model of Opinion Spreading in Social Networks
|
physics.soc-ph cs.SI
|
We propose a new model that captures the main difference between
information and opinion spreading. In information spreading, additional
exposure to a given piece of information has only a small effect. In contrast,
when an actor is exposed to two actors holding an opinion, the probability of
adopting that opinion is significantly higher than after contact with only one
such actor (called "the 0-1-2 effect" by J. Kleinberg). In each time step, for
every actor that does not yet have an opinion, we randomly choose two of its
network neighbors. If one of them holds an opinion, the actor adopts it with
some low probability; if both do, with a higher probability. Opinion spreading
was simulated on different real-world social networks and on comparable random
scale-free networks. The results show that the small-world structure has a
crucial impact on the tipping-point time. The "0-1-2" effect causes a
significant difference in the ability of individual actors to start opinion
spreading: whether an actor is an influencer depends on its topological
position in the network.
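The two-neighbor update rule above can be sketched as follows (a minimal illustration; the particular graph, the probabilities `p1 < p2`, and the synchronous sweep order are assumptions for the example, not taken from the paper):

```python
import random

def simulate(neighbors, p1, p2, seeds, steps, seed=0):
    """Simulate the '0-1-2' opinion-spreading model.

    neighbors: dict mapping node -> list of its neighbors.
    p1, p2: adoption probabilities when 1 or 2 of the two sampled
            neighbors hold the opinion (the 0-1-2 effect: p2 > p1).
    Returns the number of actors holding the opinion after `steps` sweeps.
    """
    rng = random.Random(seed)
    opinion = {v: False for v in neighbors}
    for v in seeds:
        opinion[v] = True
    for _ in range(steps):
        for v in neighbors:
            if opinion[v] or len(neighbors[v]) < 2:
                continue
            a, b = rng.sample(neighbors[v], 2)   # two random neighbors
            k = opinion[a] + opinion[b]          # 0, 1 or 2 opinionated
            if (k == 1 and rng.random() < p1) or (k == 2 and rng.random() < p2):
                opinion[v] = True
    return sum(opinion.values())
```

On a real network one would load the adjacency structure into the `neighbors` dict and vary `p1`, `p2` to probe the tipping-point time.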
|
1106.0895
|
Computable Bounds for Rate Distortion with Feed-Forward for Stationary
and Ergodic Sources
|
cs.IT math.IT
|
In this paper we consider the rate distortion problem for discrete-time,
ergodic, and stationary sources with feed-forward at the receiver. We derive a
sequence of achievable and computable rates that converge to the feed-forward
rate distortion function. We show that, for ergodic and stationary sources, the
rate $R_n(D)=\frac{1}{n}\min I(\hat{X}^n\rightarrow X^n)$ is achievable
for any $n$, where the minimization is taken over the conditional transition
probability $p(\hat{x}^n|x^n)$ such that $\mathbb{E}[d(X^n,\hat{X}^n)]\leq D$.
The limit of $R_n(D)$ exists and equals the feed-forward rate distortion
function. We follow Gallager's proof for the setting without feed-forward and,
with appropriate modification, obtain our result. We provide an algorithm for
calculating $R_n(D)$ using the alternating minimization procedure, and present
several numerical examples. We also present a dual form for the optimization of
$R_n(D)$, and transform it into a geometric programming problem.
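The alternating minimization mentioned above builds on the classical Blahut-Arimoto iteration for the rate-distortion function without feed-forward; the following is a sketch of that classical iteration only (not the paper's feed-forward algorithm), with the Lagrange slope `s` trading rate against distortion:

```python
import numpy as np

def blahut_arimoto(p_x, dist, s, iters=200):
    """Classical Blahut-Arimoto iteration for the rate-distortion function.

    p_x:  source distribution, shape (m,).
    dist: distortion matrix d(x, xhat), shape (m, k).
    s:    positive Lagrange slope; larger s targets smaller distortion.
    Returns a point (R, D) on the R(D) curve, in nats (iters >= 1).
    """
    m, k = dist.shape
    q = np.full(k, 1.0 / k)                      # reproduction marginal
    for _ in range(iters):
        # minimize over the test channel p(xhat|x) for fixed q
        w = q * np.exp(-s * dist)                # shape (m, k)
        cond = w / w.sum(axis=1, keepdims=True)
        # minimize over q for fixed channel
        q = p_x @ cond
    joint = p_x[:, None] * cond
    D = float((joint * dist).sum())
    R = float((joint * np.log(cond / q)).sum())
    return R, D
```

For a binary symmetric source with Hamming distortion this reproduces the textbook curve $R(D)=\ln 2 - H_b(D)$.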
|
1106.0934
|
Phase transition in the Sznajd model with independence
|
physics.soc-ph cs.SI
|
We propose a model of opinion dynamics which describes two major types of
social influence: conformity and independence. Conformity in our model is
described by the so-called outflow dynamics (known as the Sznajd model).
Following sociologists' suggestions, we also introduce a second type of social
influence, known in social psychology as independence. Various social
experiments have shown that the level of conformity depends on the society. We
introduce this level as a parameter of the model and show that there is a
continuous phase transition between conformity and independence.
|
1106.0941
|
Link Delay Estimation via Expander Graphs
|
cs.NI cs.IT math.IT
|
One of the purposes of network tomography is to infer the status of
parameters (e.g., delay) for the links inside a network through end-to-end
probing between (external) boundary nodes along predetermined routes. In this
work, we apply concepts from compressed sensing and expander graphs to the
delay estimation problem. We first show that a relative majority of network
topologies are not expanders for existing expansion criteria. Motivated by this
challenge, we then relax such criteria, enabling us to acquire simulation
evidence that link delays can be estimated for 30% more networks. That is, our
relaxation expands the list of identifiable networks with bounded estimation
error by 30%. We conduct a simulation performance analysis of delay estimation
and congestion detection on the basis of l1 minimization, demonstrating that
accurate estimation is feasible for an increasing proportion of networks.
|
1106.0954
|
Bits from Photons: Oversampled Image Acquisition Using Binary Poisson
Statistics
|
cs.IT cs.MM math.IT
|
We study a new image sensor that is reminiscent of traditional photographic
film. Each pixel in the sensor has a binary response, giving only a one-bit
quantized measurement of the local light intensity. To analyze its performance,
we formulate the oversampled binary sensing scheme as a parameter estimation
problem based on quantized Poisson statistics. We show that, with a
single-photon quantization threshold and large oversampling factors, the
Cram\'er-Rao lower bound (CRLB) of the estimation variance approaches that of
an ideal unquantized sensor, that is, as if there were no quantization in the
sensor measurements. Furthermore, the CRLB is shown to be asymptotically
achievable by the maximum likelihood estimator (MLE). By showing that the
log-likelihood function of our problem is concave, we guarantee the global
optimality of iterative algorithms in finding the MLE. Numerical results on
both synthetic data and images taken by a prototype sensor verify our
theoretical analysis and demonstrate the effectiveness of our image
reconstruction algorithm. They also suggest the potential application of the
oversampled binary sensing scheme in high dynamic range photography.
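For the single-photon-threshold case discussed above, the MLE admits a simple closed form; the sketch below simulates one pixel under the standard binary Poisson model (the oversampling factor and intensity values are arbitrary choices for illustration):

```python
import math
import random

def binary_sensor_mle(lam, K, rng):
    """Simulate K one-bit sub-pixels with a single-photon threshold
    and return the MLE of the light intensity lam.

    Each sub-pixel sees Poisson(lam/K) photons and reports 1 iff it
    detects at least one, so P(b = 1) = 1 - exp(-lam/K).  With this
    threshold the log-likelihood is concave in lam and the MLE has the
    closed form lam_hat = -K * log(1 - S/K), S = number of lit pixels.
    """
    p = 1.0 - math.exp(-lam / K)
    S = sum(rng.random() < p for _ in range(K))
    if S == K:                        # saturated frame: MLE is unbounded
        return float("inf")
    return -K * math.log(1.0 - S / K)

rng = random.Random(1)
lam_hat = binary_sensor_mle(5.0, 1000, rng)
```

Averaged over many frames, `lam_hat` concentrates around the true intensity, consistent with the CRLB approaching the unquantized case for large oversampling.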
|
1106.0962
|
An efficient circle detection scheme in digital images using ant system
algorithm
|
cs.CV
|
Detection of geometric features in digital images is an important exercise in
image analysis and computer vision. Hough-transform techniques for circle
detection require a huge memory space for data processing, and hence a lot of
time for computing the locations in the data space and for writing to and
searching through the memory. In this paper we propose a novel and efficient
scheme for detecting circles in edge-detected grayscale digital images. We use
the ant-system algorithm for this purpose, which has not yet found much
application in this field. The main feature of this scheme is that it can
detect both intersecting and non-intersecting circles with a time efficiency
that makes it useful in real-time applications. We build an ant system of a new
type which finds closed loops in the image and then tests them for circularity.
|
1106.0967
|
Hashing Algorithms for Large-Scale Learning
|
stat.ML cs.LG
|
In this paper, we first demonstrate that b-bit minwise hashing, whose
estimators are positive definite kernels, can be naturally integrated with
learning algorithms such as SVM and logistic regression. We adopt a simple
scheme to transform the nonlinear (resemblance) kernel into linear (inner
product) kernel; and hence large-scale problems can be solved extremely
efficiently. Our method provides a simple effective solution to large-scale
learning in massive and extremely high-dimensional datasets, especially when
data do not fit in memory.
We then compare b-bit minwise hashing with the Vowpal Wabbit (VW) algorithm
(which is related to the Count-Min (CM) sketch). Interestingly, VW has the same
variances as random projections. Our theoretical and empirical comparisons
illustrate that usually $b$-bit minwise hashing is significantly more accurate
(at the same storage) than VW (and random projections) in binary data.
Furthermore, $b$-bit minwise hashing can be combined with VW to achieve further
improvements in terms of training speed, especially when $b$ is large.
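A minimal sketch of b-bit minwise hashing as described above. The salted-hash "permutations" and the small-set approximation of the collision correction are simplifying assumptions for illustration, not the paper's exact estimator (which corrects for terms depending on the set sizes):

```python
import random

def bbit_minhash(sets, k=200, b=2, seed=0):
    """b-bit minwise-hash signatures for a list of sets.

    Uses k salted hash functions as stand-ins for random permutations
    and keeps only the lowest b bits of each minimum hash value.
    """
    rng = random.Random(seed)
    salts = [rng.getrandbits(64) for _ in range(k)]
    mask = (1 << b) - 1
    sigs = []
    for s in sets:
        sig = []
        for salt in salts:
            mn = min(hash((salt, x)) & 0xFFFFFFFF for x in s)
            sig.append(mn & mask)    # keep only the lowest b bits
        sigs.append(sig)
    return sigs

def estimate_resemblance(sig1, sig2, b=2):
    """Small-set approximation: subtract the 2^-b chance that two
    independent b-bit values collide by accident, then rescale."""
    p = sum(a == c for a, c in zip(sig1, sig2)) / len(sig1)
    base = 2.0 ** -b
    return max(0.0, (p - base) / (1.0 - base))
```

The b-bit signatures can then be one-hot expanded into sparse feature vectors and fed to a linear SVM or logistic regression, which is the integration the paper studies.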
|
1106.0969
|
Long-Term Proportional Fair QoS Profile Follower Sub-carrier Allocation
Algorithm in Dynamic OFDMA Systems
|
cs.NI cs.IT math.IT
|
In this paper, a Long-Term Proportional Fair (LTPF) resource allocation
algorithm for dynamic OFDMA systems is presented. It provides a long-term QoS
guarantee (mainly satisfaction of throughput requirements) to each individual
user and follows every user's QoS profile over the long term by incremental
optimization of proportional fairness and overall system rate maximization.
The LTPF algorithm dynamically allocates the OFDMA sub-carriers to the users in
such a way that, in the long term, the individual QoS requirements are met and
fairness among the users is maintained even under heterogeneous traffic
conditions. Rather than maintaining each user's instantaneous QoS, the emphasis
is on following the mean QoS profile of all users over the long term, so as to
retain the objectives of both proportional fairness and multi-user raw-rate
maximization. Compared to algorithms that perform proportional-fair
optimization and raw-rate maximization independently, this algorithm attempts
to provide both kinds of optimization simultaneously and to reach an optimum
point in the long term by exploiting the time-diversity gain of the mobile
wireless environment.
|
1106.0987
|
Nearest Prime Simplicial Complex for Object Recognition
|
cs.LG cs.AI cs.CG cs.CV
|
The structure representation of data distribution plays an important role in
understanding the underlying mechanism of generating data. In this paper, we
propose nearest prime simplicial complex approaches (NSC) by utilizing
persistent homology to capture such structures. Assuming that each class is
represented with a prime simplicial complex, we classify unlabeled samples
based on the nearest projection distances from the samples to the simplicial
complexes. We also extend the extrapolation ability of these complexes with a
projection constraint term. Experiments on simulated and practical datasets
indicate that, compared with several published algorithms, the proposed NSC
approaches achieve promising performance without losing the structure
representation.
|
1106.0989
|
A study of the singularity locus in the joint space of planar parallel
manipulators: special focus on cusps and nodes
|
cs.RO
|
Cusps and nodes on plane sections of the singularity locus in the joint space
of parallel manipulators play an important role in nonsingular assembly-mode
changing motions. This paper analyses in detail such points, both in the joint
space and in the workspace. It is shown that a cusp (resp. a node) defines a
point of tangency (resp. a crossing point) in the workspace between the
singular curves and the curves associated with the so-called characteristics
surfaces. The study is conducted on a planar 3-RPR manipulator for illustrative
purposes.
|
1106.1017
|
MMSE of "Bad" Codes
|
cs.IT math.IT
|
We examine codes, over the additive Gaussian noise channel, designed for
reliable communication at some specific signal-to-noise ratio (SNR) and
constrained by the permitted minimum mean-square error (MMSE) at lower SNRs.
The maximum possible rate is below point-to-point capacity, and hence these are
non-optimal codes (alternatively referred to as "bad" codes). We show that the
maximum possible rate is the one attained by superposition codebooks. Moreover,
the MMSE and mutual information behavior as a function of SNR, for any code
attaining the maximum rate under the MMSE constraint, is known for all SNR. We
also provide a lower bound on the MMSE for finite length codes, as a function
of the error probability of the code.
|
1106.1113
|
Complexity Analysis of Vario-eta through Structure
|
cs.LG cs.DM
|
Graph-based representations of images have recently acquired an important
role for classification purposes within the context of machine learning
approaches. The underlying idea is to consider that relevant information of an
image is implicitly encoded into the relationships between more basic entities
that compose by themselves the whole image. The classification problem is then
reformulated in terms of an optimization problem usually solved by a
gradient-based search procedure. Vario-eta through structure is an approximate
second order stochastic optimization technique that achieves a good trade-off
between speed of convergence and the computational effort required. However,
the robustness of this technique for large-scale problems has not yet been
assessed. In this paper we first provide a theoretical justification of the
assumptions made by this optimization procedure. Second, a complexity
analysis of the algorithm is performed to prove its suitability for large-scale
learning problems.
|
1106.1151
|
Reconstruction from anisotropic random measurements
|
math.ST cs.IT math.FA math.IT stat.TH
|
Random matrices are widely used in sparse recovery problems, and the relevant
properties of matrices with i.i.d. entries are well understood. The current
paper discusses the recently introduced Restricted Eigenvalue (RE) condition,
which is among the most general assumptions on the matrix, guaranteeing
recovery. We prove a reduction principle showing that the RE condition can be
guaranteed by checking the restricted isometry on a certain family of
low-dimensional subspaces. This principle allows us to establish the RE
condition for several broad classes of random matrices with dependent entries,
including random matrices with subgaussian rows and non-trivial covariance
structure, as well as matrices with independent rows, and uniformly bounded
entries.
|
1106.1157
|
Bayesian and L1 Approaches to Sparse Unsupervised Learning
|
cs.LG cs.AI stat.ML
|
The use of L1 regularisation for sparse learning has generated immense
research interest, with successful application in such diverse areas as signal
acquisition, image coding, genomics and collaborative filtering. While existing
work highlights the many advantages of L1 methods, in this paper we find that
L1 regularisation often dramatically underperforms in terms of predictive
performance when compared with other methods for inferring sparsity. We focus
on unsupervised latent variable models, and develop L1 minimising factor
models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like
sparsity induced through spike-and-slab distributions. These spike-and-slab
Bayesian factor models encourage sparsity while accounting for uncertainty in a
principled manner and avoiding unnecessary shrinkage of non-zero values. We
demonstrate on a number of data sets that in practice spike-and-slab Bayesian
methods outperform L1 minimisation, even under a fixed computational budget. We
thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant
applications, particularly when we care about generalising to previously unseen
data, and provide an alternative that, over many varying conditions, provides
improved generalisation performance.
|
1106.1194
|
Constructing Runge-Kutta Methods with the Use of Artificial Neural
Networks
|
cs.NE math.NA
|
A methodology that can generate the optimal coefficients of a numerical
method with the use of an artificial neural network is presented in this work.
The network can be designed to produce a finite difference algorithm that
solves a specific system of ordinary differential equations numerically. The
case we are examining here concerns an explicit two-stage Runge-Kutta method
for the numerical solution of the two-body problem. Following the
implementation of the network, the latter is trained to obtain the optimal
values for the coefficients of the Runge-Kutta method. The comparison of the
new method to others that are well known in the literature proves its
efficiency and demonstrates the capability of the network to provide efficient
algorithms for specific problems.
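For reference, a generic explicit two-stage Runge-Kutta scheme applied to the planar two-body problem might look as follows. This uses the classical midpoint coefficients, not the network-optimized coefficients the paper trains; the orbit and step size are illustrative choices:

```python
import math

def kepler_rhs(state):
    """Planar two-body (Kepler) ODE with GM = 1; state = (x, y, vx, vy)."""
    x, y, vx, vy = state
    r3 = (x * x + y * y) ** 1.5
    return (vx, vy, -x / r3, -y / r3)

def rk2_step(f, state, h):
    """One step of an explicit two-stage Runge-Kutta (midpoint) method."""
    k1 = f(state)
    mid = tuple(s + 0.5 * h * k for s, k in zip(state, k1))
    k2 = f(mid)
    return tuple(s + h * k for s, k in zip(state, k2))

def energy(state):
    """Conserved total energy of the Kepler problem (kinetic + potential)."""
    x, y, vx, vy = state
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)
```

A problem-tuned method, as in the paper, would replace the fixed stage coefficients (0.5, 1.0) with values learned by the network for the two-body right-hand side.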
|
1106.1207
|
Stability Analysis of Linear Time-Invariant Distributed-Order Systems
|
cs.SY math.OC
|
The bounded-input bounded-output stability condition for linear time-invariant
(LTI) distributed-order systems over the integration interval $(0,1)$ is
established for the first time. Two cases of the weighting function of the
distributed order are investigated, and sufficient and necessary conditions of
stability for these two types of distributed-order systems are derived. Based
on the complex integration analysis, time-domain responses of distributed-order
systems are also given by analytical method, and numerical examples are
presented to illustrate the proposed conditions.
|
1106.1211
|
Stability of fractional-order linear time-invariant system with
noncommensurate orders
|
cs.SY math.OC
|
Bounded-input bounded-output stability conditions for fractional-order linear
time-invariant (LTI) systems with multiple noncommensurate orders are
established in this paper. The orders are noncommensurate when they do not
share a common divisor. Sufficient and necessary conditions of stability are
derived for this kind of fractional-order LTI system with multiple
noncommensurate orders. Based on the numerical inverse Laplace transform
technique, time-domain responses for a fractional-order system with double
noncommensurate orders are presented to illustrate the obtained stability
results.
|
1106.1216
|
Using More Data to Speed-up Training Time
|
cs.LG stat.ML
|
In many recent applications, data is plentiful. By now, we have a rather
clear understanding of how more data can be used to improve the accuracy of
learning algorithms. Recently, there has been a growing interest in
understanding how more data can be leveraged to reduce the required training
runtime. In this paper, we study the runtime of learning as a function of the
number of available training examples, and underscore the main high-level
techniques. We provide some initial positive results showing that the runtime
can decrease exponentially while only requiring a polynomial growth of the
number of examples, and spell-out several interesting open problems.
|
1106.1220
|
Impulse response of a generalized fractional second order filter
|
cs.SY math.OC
|
The impulse response of a generalized fractional second order filter of the
form ${{({{s}^{2\alpha}}+a{{s}^{\alpha}}+b)}^{-\gamma}}$ is derived, where
$0<\alpha \le 1$, $0<\gamma <2$. The asymptotic properties of the impulse
responses are obtained for two cases, and both cases exhibit similar behavior
as the value of $\gamma$ changes. It is shown that only when
${{({{s}^{2\alpha}}+a{{s}^{\alpha}}+b)}^{-1}}$ is critically stable does the
generalized fractional second order filter
${{({{s}^{2\alpha}}+a{{s}^{\alpha}}+b)}^{-\gamma}}$ exhibit different
properties as the value of $\gamma$ changes. Finally, numerical examples of
the impulse response are provided to verify the proposed concepts.
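As a sanity check (an illustration, not a result from the paper): for $\alpha = 1$ and $\gamma = 1$ the filter reduces to the classical second-order transfer function, whose underdamped impulse response follows from the inverse Laplace transform:

```latex
% Special case \alpha = 1, \gamma = 1: the classical second-order filter,
% assuming the underdamped case a^2 < 4b
H(s) = \frac{1}{s^2 + a s + b}, \qquad
h(t) = \frac{1}{\omega_d}\, e^{-a t/2} \sin(\omega_d t), \qquad
\omega_d = \sqrt{b - a^2/4}.
```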
|
1106.1224
|
Robust stability for fractional-order systems with structured and
unstructured uncertainties
|
cs.SY math.OC
|
The issues of robust stability for two types of uncertain fractional-order
systems of order $\alpha \in (0,1)$ are dealt with in this paper. For the
polytope-type uncertainty case, a less conservative sufficient condition of
robust stability is given; for the norm-bounded uncertainty case, a sufficient
and necessary condition of robust stability is presented. Both of these
conditions can be checked by solving sets of linear matrix inequalities. Two
numerical examples are presented to confirm the proposed conditions.
|
1106.1226
|
Sufficient and Necessary Condition of Admissibility for Fractional-order
Singular System
|
cs.SY math.OC
|
This paper has been withdrawn. This paper focuses on the admissibility
condition for fractional-order singular system with order $\alpha \in (0,1)$.
The definitions of regularity, impulse-free and admissibility are given first,
then a sufficient and necessary condition of admissibility for fractional-order
singular system is established. A numerical example is included to illustrate
the proposed condition.
|
1106.1250
|
Optimal Repair of MDS Codes in Distributed Storage via Subspace
Interference Alignment
|
cs.IT math.IT
|
It is well known that an (n,k) code can be used to store 'k' units of
information in 'n' unit-capacity disks of a distributed data storage system. If
the code used is maximum distance separable (MDS), then the system can tolerate
any (n-k) disk failures, since the original information can be recovered from
any k surviving disks. The focus of this paper is the design of a systematic
MDS code with the additional property that a single disk failure can be
repaired with minimum repair bandwidth, i.e., with the minimum possible amount
of data to be downloaded for recovery of the failed disk. Previously, a lower
bound of (n-1)/(n-k) units on the repair bandwidth for a single disk failure in
an (n,k) MDS code was established by Dimakis et al. Recently, the existence of
asymptotic codes achieving this lower bound for arbitrary (n,k) has been
established by drawing connections to interference alignment. While the
existence of asymptotic constructions achieving this lower bound has been
shown, finite code constructions achieving it existed in previous
literature only for the special (high-redundancy) scenario where $k \leq
\max(n/2,3)$. The question of existence of finite codes for arbitrary values of
(n,k) achieving the lower bound on the repair bandwidth remained open. In this
paper, by using permutation coding sub-matrices, we provide the first known
finite MDS code which achieves the optimal repair bandwidth of (n-1)/(n-k) for
arbitrary (n,k), for recovery of a failed systematic disk. We also generalize
our permutation matrix based constructions by developing a novel framework for
repair-bandwidth-optimal MDS codes based on the idea of subspace interference
alignment - a concept previously introduced by Suh and Tse in the context of
wireless cellular networks.
|
1106.1325
|
Shearlets and Optimally Sparse Approximations
|
math.FA cs.IT cs.NA math.IT
|
Multivariate functions are typically governed by anisotropic features such as
edges in images or shock fronts in solutions of transport-dominated equations.
One major goal both for the purpose of compression as well as for an efficient
analysis is the provision of optimally sparse approximations of such functions.
Recently, cartoon-like images were introduced in 2D and 3D as a suitable model
class, and approximation properties were measured by considering the decay rate
of the $L^2$ error of the best $N$-term approximation. Shearlet systems are to
date the only representation system that provides optimally sparse
approximations of this model class in both 2D and 3D. Moreover, in contrast
to all other directional representation systems, a theory for compactly
supported shearlet frames was derived which moreover also satisfy this
optimality benchmark. This chapter shall serve as an introduction to and a
survey about sparse approximations of cartoon-like images by band-limited and
also compactly supported shearlet frames as well as a reference for the
state-of-the-art of this research field.
|
1106.1351
|
Worst-Case SINR Constrained Robust Coordinated Beamforming for Multicell
Wireless Systems
|
cs.IT math.IT
|
Multicell coordinated beamforming (MCBF) has been recognized as a promising
approach to enhancing the system throughput and spectrum efficiency of wireless
cellular systems. In contrast to the conventional single-cell beamforming (SBF)
design, MCBF jointly optimizes the beamforming vectors of cooperative base
stations (BSs) (via a central processing unit(CPU)) in order to mitigate the
intercell interference. While most of the existing designs assume that the CPU
has the perfect knowledge of the channel state information (CSI) of mobile
stations (MSs), this paper takes into account the inevitable CSI errors at the
CPU, and studies the robust MCBF design problem. Specifically, we consider the
worst-case robust design formulation that minimizes the weighted sum
transmission power of BSs subject to worst-case
signal-to-interference-plus-noise ratio (SINR) constraints on MSs. The
associated optimization problem is challenging because it involves infinitely
many nonconvex SINR constraints. In this paper, we show that the worst-case
SINR constraints can be reformulated as linear matrix inequalities, and the
approximation method known as semidefinite relaxation can be used to
efficiently handle the worst-case robust MCBF problem. Simulation results show
that the proposed robust MCBF design can provide guaranteed SINR performance
for the MSs
and outperforms the robust SBF design.
|
1106.1356
|
Hierarchy of protein loop-lock structures: a new server for the
decomposition of a protein structure into a set of closed loops
|
physics.chem-ph cs.CE q-bio.QM
|
HoPLLS (Hierarchy of protein loop-lock structures)
(http://leah.haifa.ac.il/~skogan/Apache/mydata1/main.html) is a web server that
identifies closed loops - a structural basis for protein domain hierarchy. The
server is based on the loop-and-lock theory for structural organisation of
natural proteins. We describe this web server, the algorithms for the
decomposition of a 3D protein into loops and the results of scientific
investigations into a structural "alphabet" of loops and locks.
|
1106.1379
|
A Unified Framework for Approximating and Clustering Data
|
cs.LG
|
Given a set $F$ of $n$ positive functions over a ground set $X$, we consider
the problem of computing $x^*$ that minimizes the expression $\sum_{f\in
F}f(x)$, over $x\in X$. A typical application is \emph{shape fitting}, where we
wish to approximate a set $P$ of $n$ elements (say, points) by a shape $x$ from
a (possibly infinite) family $X$ of shapes. Here, each point $p\in P$
corresponds to a function $f$ such that $f(x)$ is the distance from $p$ to $x$,
and we seek a shape $x$ that minimizes the sum of distances from each point in
$P$. In the $k$-clustering variant, each $x\in X$ is a tuple of $k$ shapes, and
$f(x)$ is the distance from $p$ to its closest shape in $x$.
Our main result is a unified framework for constructing {\em coresets} and
{\em approximate clustering} for such general sets of functions. To achieve our
results, we forge a link between the classic and well defined notion of
$\varepsilon$-approximations from the theory of PAC Learning and VC dimension,
to the relatively new (and not so consistent) paradigm of coresets, which are
some kind of "compressed representation" of the input set $F$. Using
traditional techniques, a coreset usually implies an LTAS (linear time
approximation scheme) for the corresponding optimization problem, which can be
computed in parallel, via one pass over the data, and using only
polylogarithmic space (i.e., in the streaming model).
We show how to generalize the results of our framework to squared distances
(as in $k$-means), distances to the $q$th power, and deterministic
constructions.
|
1106.1401
|
Volatility of Power Grids under Real-Time Pricing
|
cs.SY math.DS math.OC q-fin.PR
|
The paper proposes a framework for modeling and analysis of the dynamics of
supply, demand, and clearing prices in power system with real-time retail
pricing and information asymmetry. Real-time retail pricing is characterized by
passing on the real-time wholesale electricity prices to the end consumers, and
is shown to create a closed-loop feedback system between the physical layer and
the market layer of the power system. In the absence of a carefully designed
control law, such direct feedback between the two layers could increase
volatility and lower the system's robustness to uncertainty in demand and
generation. A new notion of generalized price-elasticity is introduced, and it
is shown that price volatility can be characterized in terms of the system's
maximal relative price elasticity, defined as the maximal ratio of the
generalized price-elasticity of consumers to that of the producers. As this
ratio increases, the system becomes more volatile, and eventually, unstable. As
new demand response technologies and distributed storage increase the
price-elasticity of demand, the architecture under examination is likely to
lead to increased volatility and possibly instability. This highlights the need
for assessing architecture systematically and in advance, in order to optimally
strike the trade-offs between volatility, economic efficiency, and system
reliability.
|
1106.1412
|
Control for Schroedinger operators on tori
|
math.AP cs.SY math-ph math.MP math.OC
|
A well known result of Jaffard states that an arbitrary region on a torus
controls, in the L2 sense, solutions of the free stationary and dynamical
Schroedinger equations. In this note we show that the same result is valid in
the presence of a smooth time-independent potential. The methods apply to
continuous potentials as well and we conjecture that the L2 control is valid
for any bounded time dependent potential.
|
1106.1414
|
Exact Free Distance and Trapping Set Growth Rates for LDPC Convolutional
Codes
|
cs.IT math.IT
|
Ensembles of (J,K)-regular low-density parity-check convolutional (LDPCC)
codes are known to be asymptotically good, in the sense that the minimum free
distance grows linearly with the constraint length. In this paper, we use a
protograph-based analysis of terminated LDPCC codes to obtain an upper bound on
the free distance growth rate of ensembles of periodically time-varying LDPCC
codes. This bound is compared to a lower bound and evaluated numerically. It is
found that, for a sufficiently large period, the bounds coincide. This approach
is then extended to obtain bounds on the trapping set numbers, which define the
size of the smallest, non-empty trapping sets, for these asymptotically good,
periodically time-varying LDPCC code ensembles.
|
1106.1424
|
Fixed-delay Events in Generalized Semi-Markov Processes Revisited
|
cs.SY cs.PF math.OC
|
We study long run average behavior of generalized semi-Markov processes with
both fixed-delay events as well as variable-delay events. We show that allowing
two fixed-delay events and one variable-delay event may cause an unstable
behavior of a GSMP. In particular, we show that a frequency of a given state
may not be defined for almost all runs (or, more generally, an invariant
measure may not exist). We use this observation to disprove several results
from the literature. Next we study GSMPs with at most one fixed-delay event
combined with
an arbitrary number of variable-delay events. We prove that such a GSMP always
possesses an invariant measure which means that the frequencies of states are
always well defined and we provide algorithms for approximation of these
frequencies. Additionally, we show that the positive results remain valid even
if we allow an arbitrary number of reasonably restricted fixed-delay events.
|
1106.1445
|
From Classical to Quantum Shannon Theory
|
quant-ph cs.IT math.IT
|
The aim of this book is to develop "from the ground up" many of the major,
exciting, pre- and post-millennium developments in the general area of study
known as quantum Shannon theory. As such, we spend a significant amount of time
on quantum mechanics for quantum information theory (Part II), we give a
careful study of the important unit protocols of teleportation, super-dense
coding, and entanglement distribution (Part III), and we develop many of the
tools necessary for understanding information transmission or compression (Part
IV). Parts V and VI are the culmination of this book, where all of the tools
developed come into play for understanding many of the important results in
quantum Shannon theory.
|
1106.1474
|
Simple Bounds for Recovering Low-complexity Models
|
cs.IT math.IT
|
This note presents a unified analysis of the recovery of simple objects from
random linear measurements. When the linear functionals are Gaussian, we show
that an s-sparse vector in R^n can be efficiently recovered from 2s log n
measurements with high probability, and a rank-r, n by n matrix can be
efficiently recovered from r(6n-5r) measurements with high probability. For
sparse vectors,
this is within an additive factor of the best known nonasymptotic bounds. For
low-rank matrices, this matches the best known bounds. We present a parallel
analysis for block sparse vectors obtaining similarly tight bounds. In the case
of sparse and block sparse signals, we additionally demonstrate that our bounds
are only slightly weakened when the measurement map is a random sign matrix.
Our results are based on analyzing a particular dual point which certifies
optimality conditions of the respective convex programming problem. Our
calculations rely only on standard large deviation inequalities and our
analysis is self-contained.
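To illustrate the $2s\log n$ measurement scaling for Gaussian maps, here is a small recovery sketch using orthogonal matching pursuit, a greedy stand-in for the convex programs analyzed in the note; the dimensions and random seed are arbitrary choices:

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily pick s columns of A to explain y."""
    residual, support = y.copy(), []
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # re-fit on support so far
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, s = 200, 5
m = int(2 * s * np.log(n))                  # ~ 2 s log n Gaussian measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=s, replace=False)] = 1.0
x_hat = omp(A, A @ x_true, s)               # noiseless recovery
```

With roughly $2s\log n$ Gaussian measurements, noiseless recovery of the s-sparse vector typically succeeds, matching the scaling of the bound in the abstract.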
|
1106.1478
|
Consistent Query Answering under Spatial Semantic Constraints
|
cs.DB
|
Consistent query answering is an inconsistency tolerant approach to obtaining
semantically correct answers from a database that may be inconsistent with
respect to its integrity constraints. In this work we formalize the notion of
consistent query answer for spatial databases and spatial semantic integrity
constraints. In order to do this, we first characterize conflicting spatial
data, and next, we define admissible instances that restore consistency while
staying close to the original instance. In this way we obtain a repair
semantics, which is used as an instrumental concept to define and possibly
derive consistent query answers. We then concentrate on a class of spatial
denial constraints and spatial queries for which there exists an efficient
strategy to compute consistent query answers. This study applies inconsistency
tolerance in spatial databases, raising research issues that shift the goal from
the consistency of a spatial database to the consistency of query answering.
|
1106.1510
|
Towards OWL-based Knowledge Representation in Petrology
|
cs.AI
|
This paper presents our work on development of OWL-driven systems for formal
representation and reasoning about terminological knowledge and facts in
petrology. The long-term aim of our project is to provide solid foundations for
a large-scale integration of various kinds of knowledge, including basic terms,
rock classification algorithms, findings and reports. We describe three steps
we have taken towards that goal here. First, we develop a semi-automated
procedure for transforming a database of igneous rock samples to texts in a
controlled natural language (CNL), and then a collection of OWL ontologies.
Second, we create an OWL ontology of important petrology terms currently
described in natural language thesauri. We describe a prototype of a tool for
collecting definitions from domain experts. Third, we present an approach to
formalization of current industrial standards for classification of rock
samples, which requires linear equations in OWL 2. In conclusion, we discuss a
range of opportunities arising from the use of semantic technologies in
petrology and outline the future work in this area.
|
1106.1521
|
A Linear-Time Approximation of the Earth Mover's Distance
|
cs.IR
|
Color descriptors are one of the important features used in content-based
image retrieval. The Dominant Color Descriptor (DCD) represents a few
perceptually dominant colors in an image through color quantization. For image
retrieval based on DCD, the earth mover's distance and the optimal color
composition distance are proposed to measure the dissimilarity between two
images. Although providing good retrieval results, both methods are too
time-consuming to be used in a large image database. To solve the problem, we
propose a new distance function that calculates an approximate earth mover's
distance in linear time. To calculate the dissimilarity in linear time, the
proposed approach employs a space-filling curve over the multidimensional color
space. To improve the accuracy, the proposed approach uses multiple curves and
adjusts the color positions. As a result, our approach achieves
order-of-magnitude time improvement but incurs small errors. We have performed
extensive experiments to show the effectiveness and efficiency of the proposed
approach. The results reveal that our approach achieves almost the same results
as the EMD in linear time.
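The core reduction behind such linear-time approximations is that, once a space-filling curve has mapped colors to one dimension, the EMD between two equal-mass 1-D histograms is simply the L1 distance between their running sums. A minimal sketch of that 1-D step only (the paper's multi-curve and color-position-adjustment refinements are omitted):

```python
def emd_1d(weights_a, weights_b):
    """Exact EMD between two histograms on the same unit-spaced 1-D grid
    with equal total mass: the L1 distance between their running sums."""
    assert abs(sum(weights_a) - sum(weights_b)) < 1e-9
    cdf_a = cdf_b = total = 0.0
    for wa, wb in zip(weights_a, weights_b):
        cdf_a += wa
        cdf_b += wb
        total += abs(cdf_a - cdf_b)
    return total

# moving one unit of mass across two grid cells costs 2
print(emd_1d([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))  # 2.0
```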
|
1106.1523
|
A Novel Combined Term Suggestion Service for Domain-Specific Digital
Libraries
|
cs.DL cs.IR
|
Interactive query expansion can assist users during their query formulation
process. We conducted a user study with over 4,000 unique visitors and four
different design approaches for a search term suggestion service. As a basis
for our evaluation we have implemented services which use three different
vocabularies: (1) user search terms, (2) terms from a terminology service and
(3) thesaurus terms. Additionally, we have created a new combined service which
utilizes thesaurus terms and terms from a domain-specific search term
recommender. Our results show that the thesaurus-based method is clearly used
more often than the other single-method implementations. We interpret
this as a strong indicator that term suggestion mechanisms should be
domain-specific to be close to the user terminology. Our novel combined
approach which interconnects a thesaurus service with additional statistical
relations outperformed all other implementations. All our observations show
that domain-specific vocabulary can support the user in finding alternative
concepts and formulating queries.
|
1106.1570
|
A Neural Network Model for Construction Projects Site Overhead Cost
Estimating in Egypt
|
cs.NE
|
Estimating of the overhead costs of building construction projects is an
important task in the management of these projects. The quality of construction
management depends heavily on their accurate cost estimation. Construction
costs prediction is a very difficult and sophisticated task especially when
using manual calculation methods. This paper uses Artificial Neural Network
(ANN) approach to develop a parametric cost-estimating model for site overhead
cost in Egypt. Fifty-two real-life cases of building projects
constructed in Egypt during the seven-year period 2002-2009 were used as
training materials. The neural network architecture is presented for the
estimation of the site overhead costs as a percentage from the total project
price.
|
1106.1577
|
Market efficiency, anticipation and the formation of bubbles-crashes
|
physics.soc-ph cs.SI q-fin.GN
|
A dynamical model is introduced for the formation of bullish or bearish
trends driving an asset price in a given market. Initially, each agent decides
to buy or sell according to its personal opinion, which results from the
combination of its own private information, the public information and its own
analysis. It then adjusts this opinion through the market as it sequentially
observes the behavior of a randomly selected group of other agents. Its
choice is then determined by a local majority rule including itself. Whenever
the selected group is at a tie, i.e., it is undecided on what to do, the choice
is determined by the local group belief with respect to the anticipated trend
at that time. These local adjustments create a dynamic that leads the market
price formation. In the case of balanced anticipations the market is found to be
efficient: it succeeds in making the "right price" emerge from the sequential
aggregation of all the local individual pieces of information, which together
contain the fundamental value. However, when a leading optimistic
belief prevails, the same efficient market mechanisms are found to produce a
bullish dynamic even though most agents hold bearish private information. The
market then yields a wider and wider discrepancy between the fundamental value
and the market value, which in turn creates a speculative bubble. Nevertheless,
there exists a limit to the growth of the bubble at which private opinions take
over again and at once invert the trend, triggering a sudden bearish turn.
Moreover, in the case of a drastic shift in collective expectations, a huge
drop in price levels may also occur extremely fast, putting the market out of
control: a market crash.
|
1106.1595
|
Transmission with Energy Harvesting Nodes in Fading Wireless Channels:
Optimal Policies
|
cs.IT cs.NI math.IT
|
Wireless systems comprised of rechargeable nodes have a significantly
prolonged lifetime and are sustainable. A distinct characteristic of these
systems is the fact that the nodes can harvest energy throughout the duration
in which communication takes place. As such, transmission policies of the nodes
need to adapt to these harvested energy arrivals. In this paper, we consider
optimization of point-to-point data transmission with an energy harvesting
transmitter which has a limited battery capacity, communicating in a wireless
fading channel. We consider two objectives: maximizing the throughput by a
deadline, and minimizing the transmission completion time of the communication
session. We optimize these objectives by controlling the time sequence of
transmit powers subject to energy storage capacity and causality constraints.
We, first, study optimal offline policies. We introduce a directional
water-filling algorithm which provides a simple and concise interpretation of
the necessary optimality conditions. We show the optimality of an adaptive
directional water-filling algorithm for the throughput maximization problem. We
solve the transmission completion time minimization problem by utilizing its
equivalence to its throughput maximization counterpart. Next, we consider
online policies. We use stochastic dynamic programming to solve for the optimal
online policy that maximizes the average number of bits delivered by a deadline
under stochastic fading and energy arrival processes with causal channel state
feedback. We also propose near-optimal policies with reduced complexity, and
numerically study their performances along with the performances of the offline
and online optimal policies under various different configurations.
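As background for the directional water-filling idea, ordinary water-filling over a set of fading states can be sketched as follows. This deliberately ignores the energy-causality and battery constraints that distinguish the paper's directional variant; the function name and bisection tolerance are our choices:

```python
def water_fill(gains, power, tol=1e-10):
    """Classic water-filling: split `power` across channels with gains h_i
    to maximize sum(log(1 + p_i * h_i)); p_i = max(0, w - 1/h_i), with the
    water level w found by bisection on the total-power constraint."""
    lo, hi = 0.0, power + max(1.0 / h for h in gains)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        used = sum(max(0.0, mid - 1.0 / h) for h in gains)
        if used > power:
            hi = mid
        else:
            lo = mid
    w = (lo + hi) / 2
    return [max(0.0, w - 1.0 / h) for h in gains]

# stronger channels get more power
print(water_fill([1.0, 0.5, 2.0], 3.0))
```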
|
1106.1622
|
Large-Scale Convex Minimization with a Low-Rank Constraint
|
cs.LG stat.ML
|
We address the problem of minimizing a convex function over the space of
large matrices with low rank. While this optimization problem is hard in
general, we propose an efficient greedy algorithm and derive its formal
approximation guarantees. Each iteration of the algorithm involves
(approximately) finding the left and right singular vectors corresponding to
the largest singular value of a certain matrix, which can be calculated in
linear time. This leads to an algorithm which can scale to large matrices
arising in several applications such as matrix completion for collaborative
filtering and robust low rank matrix approximation.
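For the special case of the squared Frobenius loss, a greedy iteration of this kind reduces to repeatedly extracting the top singular pair of the residual by power iteration. The sketch below illustrates only that simplest instance, not the paper's general convex setting or its approximation guarantees:

```python
import numpy as np

def greedy_low_rank(M, rank, power_iters=200):
    """Greedy minimization of f(X) = 0.5 * ||X - M||_F^2 over rank-`rank`
    matrices: each step finds the top singular pair of the residual
    (the negative gradient at X) by power iteration and adds it to X."""
    X = np.zeros_like(M, dtype=float)
    for _ in range(rank):
        G = M - X                              # negative gradient of f
        v = np.random.default_rng(0).normal(size=M.shape[1])
        for _ in range(power_iters):           # power iteration on G^T G
            v = G.T @ (G @ v)
            v /= np.linalg.norm(v)
        u = G @ v
        sigma = np.linalg.norm(u)
        u /= sigma
        X += sigma * np.outer(u, v)            # optimal rank-1 step here
    return X
```

On an exactly rank-r matrix, r greedy steps recover it (up to power-iteration accuracy).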
|
1106.1631
|
The combined effect of connectivity and dependency links on percolation
of networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
Percolation theory is extensively studied in statistical physics and
mathematics with applications in diverse fields. However, the research is
focused on systems with only one type of links, connectivity links. We review a
recently developed mathematical framework for analyzing percolation properties
of realistic scenarios of networks having links of two types, connectivity and
dependency links. This formalism was applied to study
Erd\H{o}s-R\'{e}nyi (ER) networks that also include dependency
links. For an ER network with average degree $k$ that is composed of dependency
clusters of size $s$, the fraction of nodes that belong to the giant component,
$P_\infty$, is given by $ P_\infty=p^{s-1}[1-\exp{(-kpP_\infty)}]^s $ where
$1-p$ is the initial fraction of randomly removed nodes. Here, we apply the
formalism to the study of random-regular (RR) networks and find a formula for
the size of the giant component in the percolation process:
$P_\infty=p^{s-1}(1-r^k)^s$ where $r$ is the solution of
$r=p^s(r^{k-1}-1)(1-r^k)+1$. These general results coincide, for $s=1$, with
the known equations for percolation in ER and RR networks respectively without
dependency links. In contrast to $s=1$, where the percolation transition is
second order, for $s>1$ it is of first order. Comparing the percolation
behavior of ER and RR networks we find a remarkable difference regarding their
resilience. We show, analytically and numerically, that in ER networks with low
connectivity degree or large dependency clusters, removal of even a finite
number (zero fraction) of the network nodes will trigger a cascade of failures
that fragments the whole network. This result is in contrast to RR networks
where such cascades and full fragmentation can be triggered only by removal of
a finite fraction of nodes in the network.
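The ER giant-component equation above is a fixed-point relation and can be solved by simple iteration; a minimal sketch (the function name is ours):

```python
import math

def giant_component(p, k, s, iters=1000):
    """Iterate P = p**(s-1) * (1 - exp(-k*p*P))**s to its fixed point:
    the giant-component fraction of an ER network with mean degree k and
    dependency clusters of size s, after removing a fraction 1-p of nodes."""
    P = 1.0                                    # start from the trivial bound
    for _ in range(iters):
        P = p ** (s - 1) * (1.0 - math.exp(-k * p * P)) ** s
    return P

# s = 1 recovers ordinary ER percolation, P = 1 - exp(-k*p*P)
print(giant_component(1.0, 4.0, 1))
```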
|
1106.1634
|
Repair Optimal Erasure Codes through Hadamard Designs
|
cs.IT cs.DC cs.NI math.IT
|
In distributed storage systems that employ erasure coding, the issue of
minimizing the total {\it communication} required to exactly rebuild a storage
node after a failure arises. This repair bandwidth depends on the structure of
the storage code and the repair strategies used to restore the lost data.
Designing high-rate maximum-distance separable (MDS) codes that achieve the
optimum repair communication has been a well-known open problem. In this work,
we use Hadamard matrices to construct the first explicit 2-parity MDS storage
code with optimal repair properties for all single node failures, including the
parities. Our construction relies on a novel method of achieving perfect
interference alignment over finite fields with a finite file size, or number of
extensions. We generalize this construction to design $m$-parity MDS codes that
achieve the optimum repair communication for single systematic node failures
and show that there is an interesting connection between our $m$-parity codes
and the systematic-repair optimal permutation-matrix based codes of Tamo {\it
et al.} \cite{Tamo} and Cadambe {\it et al.} \cite{PermCodes_ISIT, PermCodes}.
|
1106.1636
|
A Sequence of Relaxations Constraining Hidden Variable Models
|
cs.AI cs.SI physics.soc-ph quant-ph stat.ML
|
Many widely studied graphical models with latent variables lead to nontrivial
constraints on the distribution of the observed variables. Inspired by the Bell
inequalities in quantum mechanics, we refer to any linear inequality whose
violation rules out some latent variable model as a "hidden variable test" for
that model. Our main contribution is to introduce a sequence of relaxations
which provides progressively tighter hidden variable tests. We demonstrate
applicability to mixtures of sequences of i.i.d. variables, Bell inequalities,
and homophily models in social networks. For the last, we demonstrate that our
method provides a test that is able to rule out latent homophily as the sole
explanation for correlations on a real social network that are known to be due
to influence.
|
1106.1651
|
Sparse Principal Component of a Rank-deficient Matrix
|
cs.IT cs.LG cs.SY math.IT math.OC
|
We consider the problem of identifying the sparse principal component of a
rank-deficient matrix. We introduce auxiliary spherical variables and prove
that there exists a set of candidate index-sets (that is, sets of indices to
the nonzero elements of the vector argument) whose size is polynomially
bounded, in terms of rank, and contains the optimal index-set, i.e. the
index-set of the nonzero elements of the optimal solution. Finally, we develop
an algorithm that computes the optimal sparse principal component in polynomial
time for any sparsity degree.
|
1106.1652
|
Distributed Storage Codes through Hadamard Designs
|
cs.IT cs.DC cs.NI math.IT
|
In distributed storage systems that employ erasure coding, the issue of
minimizing the total {\it repair bandwidth} required to exactly regenerate a
storage node after a failure arises. This repair bandwidth depends on the
structure of the storage code and the repair strategies used to restore the
lost data. Minimizing it requires that undesired data during a repair align in
the smallest possible spaces, using the concept of interference alignment (IA).
Here, a points-on-a-lattice representation of the symbol extension IA of
Cadambe {\it et al.} provides cues to perfect IA instances which we combine
with fundamental properties of Hadamard matrices to construct a new storage
code with favorable repair properties. Specifically, we build an explicit
$(k+2,k)$ storage code over $\mathbb{GF}(3)$, whose single systematic node
failures can be repaired with bandwidth that matches exactly the theoretical
minimum. Moreover, the repair of single parity node failures generates at most
the same repair bandwidth as any systematic node failure. Our code can tolerate
any single node failure and any pair of failures that involves at most one
systematic failure.
|
1106.1674
|
Moment based estimation of stochastic Kronecker graph parameters
|
stat.ML cs.SI
|
Stochastic Kronecker graphs supply a parsimonious model for large sparse real
world graphs. They can specify the distribution of a large random graph using
only three or four parameters. Those parameters have however proved difficult
to choose in specific applications. This article looks at method of moments
estimators that are computationally much simpler than maximum likelihood. The
estimators are fast and, in our examples, they typically yield Kronecker
parameters with expected feature counts closer to a given graph than we get
from KronFit. The improvement was especially prominent for the number of
triangles in the graph.
|
1106.1684
|
Max-Margin Stacking and Sparse Regularization for Linear Classifier
Combination and Selection
|
cs.LG
|
The main principle of stacked generalization (or Stacking) is using a
second-level generalizer to combine the outputs of base classifiers in an
ensemble. In this paper, we investigate different combination types under the
stacking framework, namely weighted sum (WS), class-dependent weighted sum
(CWS) and linear stacked generalization (LSG). For learning the weights, we
propose using regularized empirical risk minimization with the hinge loss. In
addition, we propose using group sparsity for regularization to facilitate
classifier selection. We performed experiments using two different ensemble
setups with differing diversities on 8 real-world datasets. Results show the
power of regularized learning with the hinge loss function. Using sparse
regularization, we are able to reduce the number of selected classifiers of the
diverse ensemble without sacrificing accuracy. With the non-diverse ensembles,
we even gain accuracy on average by using sparse regularization.
|
1106.1697
|
Model-free control of non-minimum phase systems and switched systems
|
math.OC cs.SY
|
This brief presents a simple derivation of the standard model-free control
for the non-minimum phase systems. The robustness of the proposed method is
studied in simulation considering the case of switched systems.
|
1106.1703
|
Structural Controllability of Switched Linear Systems
|
cs.SY math.OC
|
This paper studies the structural controllability of a class of uncertain
switched linear systems, where the parameters of subsystems state matrices are
either unknown or zero. The structural controllability is a generalization of
the traditional controllability concept for dynamical systems, and purely based
on the interconnection relation between the state variables and inputs through
non-zero elements in the state matrices. In order to illustrate such a
relationship, two kinds of graphic representations of switched linear systems
are proposed, based on which graph theory based necessary and sufficient
characterizations of the structural controllability for switched linear systems
are presented. Finally, the paper concludes with discussions on the results and
future work.
|
1106.1716
|
Predicting growth fluctuation in network economy
|
cs.AI q-bio.GN
|
This study presents a method to predict the growth fluctuation of firms
interdependent in a network economy. The risk of downward growth fluctuation of
firms is calculated from the statistics on Japanese industry.
|
1106.1770
|
Reinforcement learning based sensing policy optimization for energy
efficient cognitive radio networks
|
cs.LG
|
This paper introduces a machine learning based collaborative multi-band
spectrum sensing policy for cognitive radios. The proposed sensing policy
guides secondary users to focus the search of unused radio spectrum to those
frequencies that persistently provide them high data rate. The proposed policy
is based on machine learning, which makes it adaptive with the temporally and
spatially varying radio spectrum. Furthermore, there is no need for dynamic
modeling of the primary activity since it is implicitly learned over time.
Energy efficiency is achieved by minimizing the number of sensors assigned to
each subband under a constraint on the miss detection probability. It is important
to control the missed detections because they cause collisions with primary
transmissions and lead to retransmissions at both the primary and secondary
user. Simulations show that the proposed machine learning based sensing policy
improves the overall throughput of the secondary network and improves the
energy efficiency while controlling the miss detection probability.
|
1106.1788
|
Uniform Null Controllability for a Degenerating Reaction-Diffusion
System Approximating a Simplified Cardiac Model
|
math.OC cs.SY
|
This paper is devoted to the analysis of the uniform null controllability for
a family of nonlinear reaction-diffusion systems approximating a
parabolic-elliptic system which models the electrical activity of the heart.
The uniform, with respect to the degenerating parameter, null controllability
of the approximating system by means of a single control is shown. The proof is
based on the combination of Carleman estimates and weighted energy
inequalities.
|
1106.1791
|
A Characterization of Entropy in Terms of Information Loss
|
cs.IT math-ph math.IT math.MP quant-ph
|
There are numerous characterizations of Shannon entropy and Tsallis entropy
as measures of information obeying certain properties. Using work by Faddeev
and Furuichi, we derive a very simple characterization. Instead of focusing on
the entropy of a probability measure on a finite set, this characterization
focuses on the `information loss', or change in entropy, associated with a
measure-preserving function. Information loss is a special case of conditional
entropy: namely, it is the entropy of a random variable conditioned on some
function of that variable. We show that Shannon entropy gives the only concept
of information loss that is functorial, convex-linear and continuous. This
characterization naturally generalizes to Tsallis entropy as well.
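The information-loss quantity, H(X) - H(f(X)) for a function f applied to a random variable X, can be illustrated with a tiny discrete example (helper names are ours):

```python
from collections import defaultdict
from math import log2

def entropy(dist):
    """Shannon entropy in bits of a dict {outcome: probability}."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def pushforward(dist, f):
    """Distribution of f(X) when X is distributed as `dist`."""
    out = defaultdict(float)
    for x, p in dist.items():
        out[f(x)] += p
    return dict(out)

# X uniform on {0,1,2,3}; f merges {0,1} and {2,3} into single outcomes
X = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
loss = entropy(X) - entropy(pushforward(X, lambda x: x // 2))
print(loss)   # 1.0 bit lost: exactly H(X | f(X))
```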
|
1106.1796
|
Accelerating Reinforcement Learning by Composing Solutions of
Automatically Identified Subtasks
|
cs.AI
|
This paper discusses a system that accelerates reinforcement learning by
using transfer from related tasks. Without such transfer, even if two tasks are
very similar at some abstract level, an extensive re-learning effort is
required. The system achieves much of its power by transferring parts of
previously learned solutions rather than a single complete solution. The system
exploits strong features in the multi-dimensional function produced by
reinforcement learning in solving a particular task. These features are stable
and easy to recognize early in the learning process. They generate a
partitioning of the state space and thus the function. The partition is
represented as a graph. This is used to index and compose functions stored in a
case base to form a close approximation to the solution of the new task.
Experiments demonstrate that function composition often produces more than an
order of magnitude increase in learning rate compared to a basic reinforcement
learning algorithm.
|
1106.1797
|
Parameter Learning of Logic Programs for Symbolic-Statistical Modeling
|
cs.AI
|
We propose a logical/mathematical framework for statistical parameter
learning of parameterized logic programs, i.e. definite clause programs
containing probabilistic facts with a parameterized distribution. It extends
the traditional least Herbrand model semantics in logic programming to
distribution semantics, possible world semantics with a probability
distribution which is unconditionally applicable to arbitrary logic programs
including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM
algorithm, the graphical EM algorithm, that runs for a class of parameterized
logic programs representing sequential decision processes where each decision
is exclusive and independent. It runs on a new data structure called support
graphs describing the logical relationship between observations and their
explanations, and learns parameters by computing inside and outside probability
generalized for logic programs. The complexity analysis shows that when
combined with OLDT search for all explanations for observations, the graphical
EM algorithm, despite its generality, has the same time complexity as existing
EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside
algorithm for PCFGs, and the one for singly connected Bayesian networks that
have been developed independently in each research field. Learning experiments
with PCFGs using two corpora of moderate size indicate that the graphical EM
algorithm can significantly outperform the Inside-Outside algorithm.
|
1106.1799
|
Finding a Path is Harder than Finding a Tree
|
cs.AI
|
I consider the problem of learning an optimal path graphical model from data
and show the problem to be NP-hard for the maximum likelihood and minimum
description length approaches and a Bayesian approach. This hardness result
holds despite the fact that the problem is a restriction of the polynomially
solvable problem of finding the optimal tree graphical model.
|
1106.1800
|
Extensions of Simple Conceptual Graphs: the Complexity of Rules and
Constraints
|
cs.AI
|
Simple conceptual graphs are considered as the kernel of most knowledge
representation formalisms built upon Sowa's model. Reasoning in this model can
be expressed by a graph homomorphism called projection, whose semantics is
usually given in terms of positive, conjunctive, existential FOL. We present
here a family of extensions of this model, based on rules and constraints,
keeping graph homomorphism as the basic operation. We focus on the formal
definitions of the different models obtained, including their operational
semantics and relationships with FOL, and we analyze the decidability and
complexity of the associated problems (consistency and deduction). As soon as
rules are involved in reasoning, these problems become undecidable, but we
exhibit a condition under which they fall in the polynomial hierarchy. These
results extend and complete the ones already published by the authors. Moreover
we systematically study the complexity of some particular cases obtained by
restricting the form of constraints and/or rules.
|
1106.1802
|
Fusions of Description Logics and Abstract Description Systems
|
cs.AI
|
Fusions are a simple way of combining logics. For normal modal logics,
fusions have been investigated in detail. In particular, it is known that,
under certain conditions, decidability transfers from the component logics to
their fusion. Though description logics are closely related to modal logics,
they are not necessarily normal. In addition, ABox reasoning in description
logics is not covered by the results from modal logics. In this paper, we
extend the decidability transfer results from normal modal logics to a large
class of description logics. To cover different description logics in a uniform
way, we introduce abstract description systems, which can be seen as a common
generalization of description and modal logics, and show the transfer results
in this general setting.
|
1106.1803
|
Improving the Efficiency of Inductive Logic Programming Through the Use
of Query Packs
|
cs.AI
|
Inductive logic programming, or relational learning, is a powerful paradigm
for machine learning or data mining. However, in order for ILP to become
practically useful, the efficiency of ILP systems must improve substantially.
To this end, the notion of a query pack is introduced: it structures sets of
similar queries. Furthermore, a mechanism is described for executing such query
packs. A complexity analysis shows that considerable efficiency improvements
can be achieved through the use of this query pack execution mechanism. This
claim is supported by empirical results obtained by incorporating support for
query pack execution in two existing learning systems.
|
1106.1804
|
A Critical Assessment of Benchmark Comparison in Planning
|
cs.AI
|
Recent trends in planning research have led to empirical comparison becoming
commonplace. The field has started to settle into a methodology for such
comparisons, which for obvious practical reasons requires running a subset of
planners on a subset of problems. In this paper, we characterize the
methodology and examine eight implicit assumptions about the problems, planners
and metrics used in many of these comparisons. The problem assumptions are:
PR1) the performance of a general purpose planner should not be
penalized/biased if executed on a sampling of problems and domains, PR2) minor
syntactic differences in representation do not affect performance, and PR3)
problems should be solvable by STRIPS capable planners unless they require ADL.
The planner assumptions are: PL1) the latest version of a planner is the best
one to use, PL2) default parameter settings approximate good performance, and
PL3) time cut-offs do not unduly bias outcome. The metrics assumptions are: M1)
performance degrades similarly for each planner when run on degraded runtime
environments (e.g., machine platform) and M2) the number of plan steps
distinguishes performance. We find that most of these assumptions are not
supported empirically; in particular, that planners are affected differently by
these assumptions. We conclude with a call to the community to devote research
resources to improving the state of the practice and especially to enhancing
the available benchmark problems.
|
1106.1811
|
Caching Stars in the Sky: A Semantic Caching Approach to Accelerate
Skyline Queries
|
cs.DB
|
Multi-criteria decision making has been made possible with the advent of
skyline queries. However, processing such queries for high dimensional datasets
remains a time consuming task. Real-time applications are thus infeasible,
especially for non-indexed skyline techniques where the datasets arrive online.
In this paper, we propose a caching mechanism that uses the semantics of
previous skyline queries to improve the processing time of a new query. In
addition to exact queries, utilizing such special semantics allow accelerating
related queries. We achieve this by generating partial result sets guaranteed
to be in the skyline sets. We also propose an index structure for efficient
organization of the cached queries. Experiments on synthetic and real datasets
show the effectiveness and scalability of our proposed methods.
|
1106.1813
|
SMOTE: Synthetic Minority Over-sampling Technique
|
cs.AI
|
An approach to the construction of classifiers from imbalanced datasets is
described. A dataset is imbalanced if the classification categories are not
approximately equally represented. Often real-world data sets are predominately
composed of "normal" examples with only a small percentage of "abnormal" or
"interesting" examples. It is also the case that the cost of misclassifying an
abnormal (interesting) example as a normal example is often much higher than
the cost of the reverse error. Under-sampling of the majority (normal) class
has been proposed as a good means of increasing the sensitivity of a classifier
to the minority class. This paper shows that a combination of our method of
over-sampling the minority (abnormal) class and under-sampling the majority
(normal) class can achieve better classifier performance (in ROC space) than
only under-sampling the majority class. This paper also shows that a
combination of our method of over-sampling the minority class and
under-sampling the majority class can achieve better classifier performance (in
ROC space) than varying the loss ratios in Ripper or class priors in Naive
Bayes. Our method of over-sampling the minority class involves creating
synthetic minority class examples. Experiments are performed using C4.5, Ripper
and a Naive Bayes classifier. The method is evaluated using the area under the
Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
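The synthetic-example generation at the heart of the method can be sketched as follows. This is a simplified illustration only: the neighbour search is brute-force, and the paper's pairing with majority-class under-sampling and its classifier experiments are omitted:

```python
import random

def smote(minority, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: pick a minority point, pick one of its k
    nearest minority neighbours, and interpolate at a uniformly random
    position on the segment between them."""
    rng = rng or random.Random(0)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: dist2(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
new = smote(pts, 10, k=2)   # 10 synthetic minority points
```

Because each synthetic point lies on a segment between two minority points, the new samples stay inside the minority class's convex hull.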
|
1106.1814
|
When do Numbers Really Matter?
|
cs.AI
|
Common wisdom has it that small distinctions in the probabilities
(parameters) quantifying a belief network do not matter much for the results of
probabilistic queries. Yet, one can develop realistic scenarios under which
small variations in network parameters can lead to significant changes in
computed queries. A pending theoretical question is then to analytically
characterize parameter changes that do or do not matter. In this paper, we
study the sensitivity of probabilistic queries to changes in network parameters
and prove some tight bounds on the impact that such parameters can have on
queries. Our analytic results pinpoint some interesting situations under which
parameter changes do or do not matter. These results are important for
knowledge engineers as they help them identify influential network parameters.
They also help explain some of the previous experimental results and
observations with regard to network robustness against parameter changes.
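The paper's analytic bounds are not reproduced here, but the phenomenon is easy to see in a hypothetical two-node network X -> Y: the same absolute parameter change of 0.01 barely moves the query when the parameter is moderate, yet changes the query odds eleven-fold when the parameter is near-extreme:

```python
def posterior(p_x, p_y_x, p_y_nx):
    """P(X=1 | Y=1) in a two-node network X -> Y, via Bayes' rule."""
    num = p_x * p_y_x
    return num / (num + (1 - p_x) * p_y_nx)

def odds(p):
    return p / (1 - p)

# Moderate parameter: P(Y=1 | X=0) goes from 0.20 to 0.21
a, b = posterior(0.5, 0.9, 0.20), posterior(0.5, 0.9, 0.21)
# Near-extreme parameter: same absolute change, 0.001 to 0.011
c, d = posterior(0.5, 0.9, 0.001), posterior(0.5, 0.9, 0.011)
print(round(odds(a) / odds(b), 2))  # 1.05: the query odds barely move
print(round(odds(c) / odds(d), 2))  # 11.0: the query odds change 11-fold
```

This is why absolute parameter changes are a poor predictor of query sensitivity: what matters is the relative (odds) change in the parameter, which is exactly the kind of characterization the paper makes precise.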
|
1106.1816
|
Monitoring Teams by Overhearing: A Multi-Agent Plan-Recognition Approach
|
cs.AI
|
Recent years have seen an increasing need for on-line monitoring of teams of
cooperating agents, e.g., for visualization, or performance tracking. However,
in monitoring deployed teams, we often cannot rely on the agents to always
communicate their state to the monitoring system. This paper presents a
non-intrusive approach to monitoring by 'overhearing', where the monitored
team's state is inferred (via plan-recognition) from team-members' routine
communications, exchanged as part of their coordinated task execution, and
observed (overheard) by the monitoring system. Key challenges in this approach
include the demanding run-time requirements of monitoring, the scarceness of
observations (increasing monitoring uncertainty), and the need to scale-up
monitoring to address potentially large teams. To address these, we present a
set of complementary novel techniques, exploiting knowledge of the social
structures and procedures in the monitored team: (i) an efficient probabilistic
plan-recognition algorithm, well-suited for processing communications as
observations; (ii) an approach to exploiting knowledge of the team's social
behavior to predict future observations during execution (reducing monitoring
uncertainty); and (iii) monitoring algorithms that trade expressivity for
scalability, representing only certain useful monitoring hypotheses, but
allowing for any number of agents and their different activities to be
represented in a single coherent entity. We present an empirical evaluation of
these techniques, in combination and apart, in monitoring a deployed team of
agents, running on machines physically distributed across the country, and
engaged in complex, dynamic task execution. We also compare the performance of
these techniques to human expert and novice monitors, and show that the
techniques presented are capable of monitoring at human-expert levels, despite
the difficulty of the task.
|
1106.1817
|
Automatically Training a Problematic Dialogue Predictor for a Spoken
Dialogue System
|
cs.AI
|
Spoken dialogue systems promise efficient and natural access to a large
variety of information sources and services from any phone. However, current
spoken dialogue systems are deficient in their strategies for preventing,
identifying and repairing problems that arise in the conversation. This paper
reports results on automatically training a Problematic Dialogue Predictor to
predict problematic human-computer dialogues using a corpus of 4692 dialogues
collected with the 'How May I Help You' (SM) spoken dialogue system. The
Problematic Dialogue Predictor can be immediately applied to the system's
decision of whether to transfer the call to a human customer care agent, or be
used as a cue for the system's dialogue manager to modify its behavior to
repair problems, or perhaps even to prevent them. We show that a Problematic
Dialogue Predictor using automatically-obtainable features from the first two
exchanges in the dialogue can predict problematic dialogues 13.2% more
accurately than the baseline.
|
1106.1818
|
Inducing Interpretable Voting Classifiers without Trading Accuracy for
Simplicity: Theoretical Results, Approximation Algorithms
|
cs.AI
|
Recent advances in the study of voting classification algorithms have brought
empirical and theoretical results clearly showing the discrimination power of
ensemble classifiers. It has been previously argued that the search of this
classification power in the design of the algorithms has marginalized the need
to obtain interpretable classifiers. Therefore, the question of whether one
might have to dispense with interpretability in order to keep classification
strength is being raised in a growing number of machine learning or data mining
papers. The purpose of this paper is to study the problem both theoretically
and empirically. First, we provide numerous results giving insight into
the hardness of the simplicity-accuracy tradeoff for voting classifiers. Then
we provide an efficient "top-down and prune" induction heuristic, WIDC, mainly
derived from recent results on the weak learning and boosting frameworks. It is
to our knowledge the first attempt to build a voting classifier as a base
formula using the weak learning framework (the one which was previously highly
successful for decision tree induction), and not the strong learning framework
(as usual for such classifiers with boosting-like approaches). While it uses a
well-known induction scheme previously successful in other classes of concept
representations, thus making it easy to implement and compare, WIDC also relies
on recent or new results we give about particular cases of boosting known as
partition boosting and ranking loss boosting. Experimental results on
thirty-one domains, most of which are readily available, tend to display the
ability of WIDC to produce small, accurate, and interpretable decision
committees.
|
1106.1819
|
A Knowledge Compilation Map
|
cs.AI
|
We propose a perspective on knowledge compilation which calls for analyzing
different compilation approaches according to two key dimensions: the
succinctness of the target compilation language, and the class of queries and
transformations that the language supports in polytime. We then provide a
knowledge compilation map, which analyzes a large number of existing target
compilation languages according to their succinctness and their polytime
transformations and queries. We argue that such analysis is necessary for
placing new compilation approaches within the context of existing ones. We also
go beyond classical, flat target compilation languages based on CNF and DNF,
and consider a richer, nested class based on directed acyclic graphs (such as
OBDDs), which we show to include a relatively large number of target
compilation languages.
|
1106.1820
|
Inferring Strategies for Sentence Ordering in Multidocument News
Summarization
|
cs.AI
|
The problem of organizing information for multidocument summarization so that
the generated summary is coherent has received relatively little attention.
While sentence ordering for single document summarization can be determined
from the ordering of sentences in the input article, this is not the case for
multidocument summarization where summary sentences may be drawn from different
input articles. In this paper, we propose a methodology for studying the
properties of ordering information in the news genre and describe experiments
done on a corpus of multiple acceptable orderings we developed for the task.
Based on these experiments, we implemented a strategy for ordering information
that combines constraints from chronological order of events and topical
relatedness. Evaluation of our augmented algorithm shows a significant
improvement of the ordering over two baseline strategies.
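A toy sketch of the combined strategy described above (the sentences, topics, and dates below are invented for illustration): sort by event chronology, breaking ties by topic so that topically related sentences stay adjacent:

```python
# Hypothetical summary sentences as (topic, event_day, text) records
sentences = [
    ("rescue", 2, "Crews freed the trapped miners."),
    ("quake", 1, "An earthquake struck the region on Monday."),
    ("rescue", 3, "All were reported in stable condition."),
    ("quake", 1, "The quake measured 6.1 in magnitude."),
]

# Chronological order first; ties broken by topic to keep related sentences together
ordered = [text for _, _, text in sorted(sentences, key=lambda s: (s[1], s[0]))]
print("\n".join(ordered))
```

The paper's strategy is richer than a lexicographic sort (topical relatedness is inferred from the input articles, not given), but the sketch shows why either constraint alone is insufficient: pure chronology can interleave topics, while pure topical grouping can scramble the event sequence.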
|
1106.1821
|
Collective Intelligence, Data Routing and Braess' Paradox
|
cs.AI
|
We consider the problem of designing the utility functions of the
utility-maximizing agents in a multi-agent system so that they work
synergistically to maximize a global utility. The particular problem domain we
explore is the control of network routing by placing agents on all the routers
in the network. Conventional approaches to this task have the agents all use
the Ideal Shortest Path routing Algorithm (ISPA). We demonstrate that in many
cases, due to the side-effects of one agent's actions on another agent's
performance, having agents use ISPA's is suboptimal as far as global aggregate
cost is concerned, even when they are only used to route infinitesimally small
amounts of traffic. The utility functions of the individual agents are not
"aligned" with the global utility, intuitively speaking. As a particular
example of this we present an instance of Braess' paradox in which adding new
links to a network whose agents all use the ISPA results in a decrease in
overall throughput. We also demonstrate that load-balancing, in which the
agents' decisions are collectively made to optimize the global cost incurred by
all traffic currently being routed, is suboptimal as far as global cost
averaged across time is concerned. This is also due to 'side-effects', in this
case of current routing decisions on future traffic. The mathematics of
Collective Intelligence (COIN) is concerned precisely with the issue of
avoiding such deleterious side-effects in multi-agent systems, both over time
and space. We present key concepts from that mathematics and use them to derive
an algorithm whose ideal version should have better performance than that of
having all agents use the ISPA, even in the infinitesimal limit. We present
experiments verifying this, and also showing that a machine-learning-based
version of this COIN algorithm in which costs are only imprecisely estimated
via empirical means (a version potentially applicable in the real world) also
outperforms the ISPA, despite having access to less information than does the
ISPA. In particular, this COIN algorithm almost always avoids Braess' paradox.
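The textbook instance of Braess' paradox (not the paper's specific network) can be checked by hand. With unit total flow, two parallel routes each cost 1.5 at equilibrium; adding a free shortcut drives every driver onto a route that costs 2.0:

```python
def route_costs(f_top, f_bottom, f_zigzag):
    """Per-driver latency of the three S->T routes in the classic 4-node
    Braess network. Edges S->A and B->T have latency equal to their flow;
    edges A->T and S->B have fixed latency 1; the shortcut A->B is free."""
    flow_sa = f_top + f_zigzag      # total flow on edge S->A
    flow_bt = f_bottom + f_zigzag   # total flow on edge B->T
    top = flow_sa + 1               # route S->A->T
    bottom = 1 + flow_bt            # route S->B->T
    zigzag = flow_sa + flow_bt      # route S->A->B->T (shortcut costs 0)
    return top, bottom, zigzag

# Without the shortcut, flow splits evenly and each driver pays 1.5
print(route_costs(0.5, 0.5, 0.0)[:2])  # (1.5, 1.5)
# With the shortcut, the only equilibrium routes everyone through it:
print(route_costs(0.0, 0.0, 1.0))      # (2.0, 2.0, 2.0) -- worse for everyone
```

At the second equilibrium no single driver can do better by switching (every route costs 2.0), yet all drivers are worse off than before the link was added: exactly the misalignment between individual and global utility that the COIN framework targets.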
|
1106.1822
|
Efficient Solution Algorithms for Factored MDPs
|
cs.AI
|
This paper addresses the problem of planning under uncertainty in large
Markov Decision Processes (MDPs). Factored MDPs represent a complex state space
using state variables and the transition model using a dynamic Bayesian
network. This representation often allows an exponential reduction in the
representation size of structured MDPs, but the complexity of exact solution
algorithms for such MDPs can grow exponentially in the representation size. In
this paper, we present two approximate solution algorithms that exploit
structure in factored MDPs. Both use an approximate value function represented
as a linear combination of basis functions, where each basis function involves
only a small subset of the domain variables. A key contribution of this paper
is that it shows how the basic operations of both algorithms can be performed
efficiently in closed form, by exploiting both additive and context-specific
structure in a factored MDP. A central element of our algorithms is a novel
linear program decomposition technique, analogous to variable elimination in
Bayesian networks, which reduces an exponentially large LP to a provably
equivalent, polynomial-sized one. One algorithm uses approximate linear
programming, and the second uses approximate dynamic programming. Our dynamic
programming algorithm is novel in that it uses an approximation based on
max-norm, a technique that more directly minimizes the terms that appear in
error bounds for approximate MDP algorithms. We provide experimental results on
problems with over 10^40 states, demonstrating a promising indication of the
scalability of our approach, and compare our algorithm to an existing
state-of-the-art approach, showing, in some problems, exponential gains in
computation time.
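The paper's basis-function and LP-decomposition machinery is not reproduced here, but the representation-size claim is easy to quantify. Assuming 133 binary state variables (just over 10^40 joint states, matching the scale reported above) where each variable's next value depends on at most 3 parents in the DBN:

```python
n, k = 133, 3                 # 133 binary state variables, <= 3 parents each
states = 2 ** n               # about 1.1e40 joint states
flat = states * states        # entries in a full (flat) transition matrix
factored = n * (2 ** k) * 2   # one small CPT (2^k parent rows, 2 outcomes) per variable
print(f"states: {states:.2e}  flat entries: {flat:.2e}  factored entries: {factored}")
```

A few thousand CPT entries versus roughly 10^80 matrix entries: the factored model is what makes such problems representable at all, while the algorithms above address the remaining difficulty that exact solution cost can still blow up in the factored representation.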
|
1106.1853
|
Intelligent decision: towards interpreting the Pe Algorithm
|
cs.AI
|
Human intelligence lies in the algorithm, the nature of the algorithm lies in
classification, and classification is equivalent to outlier detection. Many
algorithms have been proposed to detect outliers, along with many definitions.
An unsatisfying point is that these definitions seem vague, which makes the
solutions ad hoc. We analyze the nature of outliers and give two clear
definitions. We then develop an efficient RDD algorithm, which converts the
outlier problem into a pattern-and-degree problem. Furthermore, the IIR
algorithm introduces a collapse mechanism, which can be combined seamlessly
with the RDD algorithm and serve the final decision. Both algorithms originate
from our study of general AI. The combined edition is named the Pe algorithm,
which is the basis of intelligent decision-making. Here we introduce the
longest k-turn subsequence problem and a corresponding solution as an example
to illustrate the function of the Pe algorithm in detecting curve-type
outliers. We also compare the IIR and Pe algorithms, from which a better
understanding of both can be gained. A short discussion of intelligence is
added to demonstrate the function of the Pe algorithm. Experimental results
indicate the algorithm's robustness.
|
1106.1879
|
Second-Order Resolvability, Intrinsic Randomness, and Fixed-Length
Source Coding for Mixed Sources: Information Spectrum Approach
|
cs.IT math.IT
|
The second-order achievable asymptotics in typical random number generation
problems such as resolvability, intrinsic randomness, fixed-length source
coding are considered. In these problems, several researchers have derived the
first-order and the second-order achievability rates for general sources using
the information spectrum methods. Although these formulas are general, their
computation is quite hard. Hence, attempts to address the explicit computation
of achievable rates are meaningful. In particular, for i.i.d. sources,
the second-order achievable rates have earlier been determined simply by using
the asymptotic normality. In this paper, we consider mixed sources of two
i.i.d. sources. The mixed source is a typical example of a nonergodic source,
and its self-information does not satisfy asymptotic normality. Nonetheless, we
can explicitly compute the second-order achievable rates for these sources on
the basis of two-peak asymptotic normality. In addition, extensions of our
results to more general mixed sources, such as a mixture of countably infinite
i.i.d. sources or Markovian sources, and a continuous mixture of i.i.d.
sources, are considered.
|
1106.1887
|
Learning the Dependence Graph of Time Series with Latent Factors
|
cs.LG
|
This paper considers the problem of learning, from samples, the dependency
structure of a system of linear stochastic differential equations, when some of
the variables are latent. In particular, we observe the time evolution of some
variables, and never observe other variables; from this, we would like to find
the dependency structure between the observed variables, separating out the
spurious interactions caused by marginalizing out the latent variables' time
series. We develop a new method, based on convex optimization,
to do so in the case when the number of latent variables is smaller than the
number of observed ones. For the case when the dependency structure between the
observed variables is sparse, we theoretically establish a high-dimensional
scaling result for structure recovery. We verify our theoretical result with
both synthetic and real data (from the stock market).
|