| id | title | categories | abstract |
|---|---|---|---|
1301.2012 | Error Correction in Learning using SVMs | cs.LG | This paper is concerned with learning binary classifiers under adversarial
label-noise. We introduce the problem of error-correction in learning where the
goal is to recover the original clean data from a label-manipulated version of
it, given (i) no constraints on the adversary other than an upper bound on the
number of errors, and (ii) some regularity properties for the original data. We
present a simple and practical error-correction algorithm called SubSVMs that
learns individual SVMs on several small-size (log-size), class-balanced, random
subsets of the data and then reclassifies the training points using a majority
vote. Our analysis reveals the need for the two main ingredients of SubSVMs,
namely class-balanced sampling and subsampled bagging. Experimental results on
synthetic as well as benchmark UCI data demonstrate the effectiveness of our
approach. In addition to noise-tolerance, log-size subsampled bagging also
yields significant run-time benefits over standard SVMs.
|
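The SubSVMs recipe summarized above (class-balanced sampling of log-size subsets plus majority-vote relabeling) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: to stay self-contained we substitute a nearest-centroid classifier for the base SVMs, and all function names and parameter defaults are our own assumptions.

```python
import math
import random

def class_balanced_subsets(X, y, n_subsets, subset_size):
    """Draw random subsets containing equal numbers of points per class."""
    pos = [i for i, lab in enumerate(y) if lab == 1]
    neg = [i for i, lab in enumerate(y) if lab == -1]
    half = subset_size // 2
    return [random.sample(pos, half) + random.sample(neg, half)
            for _ in range(n_subsets)]

def centroid_classifier(X, y, idx):
    """Stand-in base learner (the paper trains an SVM per subset):
    classify by the nearer class centroid of the subset."""
    def mean(rows):
        return [sum(c) / len(rows) for c in zip(*rows)]
    mu_pos = mean([X[i] for i in idx if y[i] == 1])
    mu_neg = mean([X[i] for i in idx if y[i] == -1])
    def predict(x):
        d = lambda m: sum((a - b) ** 2 for a, b in zip(x, m))
        return 1 if d(mu_pos) < d(mu_neg) else -1
    return predict

def relabel_by_majority_vote(X, y, n_subsets=25, subset_size=None):
    """Re-classify every training point by majority vote over base
    learners trained on small class-balanced random subsets."""
    if subset_size is None:
        subset_size = 2 * max(2, int(math.log2(len(X))))  # "log-size"
    models = [centroid_classifier(X, y, idx)
              for idx in class_balanced_subsets(X, y, n_subsets, subset_size)]
    return [1 if sum(m(x) for m in models) > 0 else -1 for x in X]
```

On two well-separated clusters with a few adversarially flipped labels, the majority vote recovers the clean labeling while each base learner only ever sees a handful of points.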
1301.2015 | Heteroscedastic Relevance Vector Machine | stat.ML cs.LG | In this work we propose a heteroscedastic generalization of the Relevance
Vector Machine (RVM), a fast Bayesian framework for regression, building on
several recent related works. We use variational approximation and expectation
propagation to tackle the problem. The work is still in progress; we are
examining the results and comparing them with previous works.
|
1301.2020 | Towards the full information chain theory: expected loss and information
relevance | physics.data-an cs.IT math.IT | When additional information sources are available, an important question for
an agent solving a certain problem is how to optimally use the information the
sources are capable of providing. A framework that relates information accuracy
on the source side to information relevance on the problem side is proposed. An
optimal information acquisition problem is formulated as that of question
selection to maximize the loss reduction for the problem solved by the agent. A
duality relationship between pseudoenergy (accuracy related) quantities on the
source side and loss (relevance related) quantities on the problem side is
observed.
|
1301.2030 | The One-Bit Null Space Learning Algorithm and its Convergence | cs.IT math.IT | This paper proposes a new algorithm for MIMO cognitive radio Secondary Users
(SU) to learn the null space of the interference channel to the Primary User
(PU) without burdening the PU with any knowledge or explicit cooperation with
the SU.
The knowledge of this null space enables the SU to transmit in the same band
simultaneously with the PU by utilizing spatial dimensions separate from those used by the
PU. Specifically, the SU transmits in the null space of the interference
channel to the PU. We present a new algorithm, called the One-Bit Null Space
Learning Algorithm (OBNSLA), in which the SU learns the PU's null space by
observing a binary function that indicates whether the interference it inflicts
on the PU has increased or decreased in comparison to the SU's previous
transmitted signal. This function is obtained by listening to the PU
transmitted signal or control channel and extracting information from it about
whether the PU's Signal to Interference plus Noise power Ratio (SINR) has
increased or decreased.
In addition to introducing the OBNSLA, this paper provides a thorough
convergence analysis of this algorithm. The OBNSLA is shown to have a linear
convergence rate and an asymptotic quadratic convergence rate. Finally, we
derive bounds on the interference that the SU inflicts on the PU as a function
of a parameter determined by the SU. This lets the SU control the maximum level
of interference, which enables it to protect the PU completely blindly with
minimum complexity. The asymptotic analysis and the derived bounds also apply
to the recently proposed Blind Null Space Learning Algorithm.
|
1301.2032 | Training Effective Node Classifiers for Cascade Classification | cs.CV cs.LG stat.ML | Cascade classifiers are widely used in real-time object detection. Different
from conventional classifiers that are designed for a low overall
classification error rate, a classifier in each node of the cascade is required
to achieve an extremely high detection rate and moderate false positive rate.
Although there are a few reported methods addressing this requirement in the
context of object detection, there is no principled feature selection method
that explicitly takes into account this asymmetric node learning objective. We
provide such an algorithm here. We show that a special case of the biased
minimax probability machine has the same formulation as the linear asymmetric
classifier (LAC) of Wu et al (2005). We then design a new boosting algorithm
that directly optimizes the cost function of LAC. The resulting
totally-corrective boosting algorithm is implemented by the column generation
technique in convex optimization. Experimental results on object detection
verify the effectiveness of the proposed boosting algorithm as a node
classifier in cascade object detection, and show performance better than that
of the current state-of-the-art.
|
1301.2041 | Importance of Symbol Equity in Coded Modulation for Power Line
Communications | cs.IT math.IT | The use of multiple frequency shift keying modulation with permutation codes
addresses the problem of permanent narrowband noise disturbance in a power line
communications system. In this paper, we extend this coded modulation scheme
based on permutation codes to general codes and introduce an additional new
parameter that more precisely captures a code's performance against permanent
narrowband noise. As a result, we define a new class of codes, namely,
equitable symbol weight codes, which are optimal with respect to this measure.
|
1301.2055 | A Cascading Failure Model by Quantifying Interactions | physics.soc-ph cs.SI cs.SY | Cascading failures triggered by trivial initial events are encountered in
many complex systems. It is the interaction and coupling between components of
the system that causes cascading failures. We propose a simple model to
simulate cascading failure by using the matrix that determines how components
interact with each other. A careful comparison is made between the original
cascades and the simulated cascades by the proposed model. It is seen that the
model can capture general features of the original cascades, suggesting that
the interaction matrix can well reflect the relationship between components. An
index is also defined to identify important links, and its distribution follows
a clear power law. By eliminating a small number of the most important links,
the risk of cascading failures can be significantly mitigated, in sharp
contrast to removing the same number of links at random.
|
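A minimal sketch of the interaction-matrix idea described above. This is one plausible reading of the abstract, not the paper's exact model: we assume entry A[i][j] gives the probability that the failure of component i triggers component j in the next generation, and propagate failures generation by generation.

```python
import random

def simulate_cascade(A, initial_failures, seed=0):
    """Propagate failures through an interaction matrix.

    A[i][j] is the (assumed) probability that the failure of component i
    triggers the failure of component j in the next generation.
    Returns the set of all failed components when the cascade stops.
    """
    rng = random.Random(seed)
    n = len(A)
    failed = set(initial_failures)
    frontier = set(initial_failures)
    while frontier:
        nxt = set()
        for i in frontier:
            for j in range(n):
                if j not in failed and rng.random() < A[i][j]:
                    nxt.add(j)
        failed |= nxt
        frontier = nxt
    return failed
```

With a deterministic chain 0 → 1 → 2 (all trigger probabilities 1), zeroing the single entry A[0][1] stops the whole cascade at the initial failure, which illustrates why eliminating a few important links mitigates risk far more than removing random ones.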
1301.2086 | API Blender: A Uniform Interface to Social Platform APIs | cs.SE cs.SI | With the growing success of the social Web, most Web developers have to
interact with at least one social Web platform, which implies studying the
related API specifications. These are often only informally described, may
contain errors, lack harmonization, and generally speaking make the developer's
work difficult. Most attempts to solve this problem, proposing formal
description languages for Web service APIs, have had limited success outside of
B2B applications; we believe this is due to their top-down nature. In addition, a
programmer dealing with one or several of these APIs has to deal with a number
of related tasks such as data integration, requests chaining, or policy
management, that are cumbersome to implement. Inspired by the SPORE project, we
present API Blender, an open-source solution to describe, interact with, and
integrate the most common social Web APIs. In this perspective, we first
introduce two new lightweight description formats for requests and services and
demonstrate their relevance with respect to current platform APIs. We present
our Python implementation of API Blender and its features regarding
authentication, policy management and multi-platform data integration.
|
1301.2115 | Domain Generalization via Invariant Feature Representation | stat.ML cs.LG | This paper investigates domain generalization: How to take knowledge acquired
from an arbitrary number of related domains and apply it to previously unseen
domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based
optimization algorithm that learns an invariant transformation by minimizing
the dissimilarity across domains, whilst preserving the functional relationship
between input and output variables. A learning-theoretic analysis shows that
reducing dissimilarity improves the expected generalization ability of
classifiers on new domains, motivating the proposed algorithm. Experimental
results on synthetic and real-world datasets demonstrate that DICA successfully
learns invariant features and improves classifier performance in practice.
|
1301.2130 | Distributed soft thresholding for sparse signal recovery | cs.IT cs.DC math.IT math.OC | In this paper, we address the problem of distributed sparse recovery of
signals acquired via compressed measurements in a sensor network. We propose a
new class of distributed algorithms to solve Lasso regression problems, when
the communication to a fusion center is not possible, e.g., due to
communication cost or privacy reasons. More precisely, we introduce a
distributed iterative soft thresholding algorithm (DISTA) that consists of
three steps: an averaging step, a gradient step, and a soft thresholding
operation. We prove the convergence of DISTA in networks represented by regular
graphs, and we compare it with existing methods in terms of performance,
memory, and complexity.
|
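The three steps named in the abstract (averaging, gradient, soft thresholding) can be sketched as below. This is a schematic reconstruction of a DISTA-like iteration, not the authors' algorithm: the exact ordering and weighting of the steps, the step sizes, and all names are our assumptions.

```python
def soft_threshold(v, lam):
    """Elementwise soft thresholding, the proximal operator of lam*||.||_1."""
    return [(1 if x > 0 else -1) * max(abs(x) - lam, 0.0) for x in v]

def dista_step(x_nodes, neighbors, A_nodes, y_nodes, tau, lam):
    """One sketched iteration at every node: average the estimates of the
    node and its neighbors, take a gradient step on the local least-squares
    term 0.5 * ||A_v x - y_v||^2, then soft-threshold."""
    new = []
    for v in range(len(x_nodes)):
        nbrs = list(neighbors[v]) + [v]
        p = len(x_nodes[v])
        # averaging step over the node's neighborhood
        avg = [sum(x_nodes[u][k] for u in nbrs) / len(nbrs) for k in range(p)]
        A, y = A_nodes[v], y_nodes[v]
        # gradient of the local least-squares term at the averaged point
        resid = [sum(A[i][k] * avg[k] for k in range(p)) - y[i]
                 for i in range(len(y))]
        grad = [sum(A[i][k] * resid[i] for i in range(len(y)))
                for k in range(p)]
        # soft-thresholding step
        new.append(soft_threshold([avg[k] - tau * grad[k] for k in range(p)],
                                  tau * lam))
    return new
```

On a single node with identity measurements, iterating this update converges to the lasso solution (here, shrinking the measurement 1.0 by the threshold to 0.5), which is the behavior a distributed iterative soft thresholding scheme should reproduce across a network.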
1301.2137 | A Forgetting-based Approach to Merging Knowledge Bases | cs.AI | This paper presents a novel approach to merging multiple knowledge bases
based on variable forgetting, a useful tool for resolving contradictions by
filtering out given variables. The paper first builds a relationship between
belief merging and variable forgetting by using dilation; variable forgetting
is then applied to capture the belief merging operation. Finally, some new
merging operators are developed by modifying the candidate variables to remedy
the shortcomings of traditional merging operators. Unlike the model selection
performed by traditional merging operators, the variable selection in these
new operators can provide intuitive information about an atomic variable
across the whole set of knowledge bases.
|
1301.2138 | On the Degrees of Freedom of the K-User Time Correlated Broadcast
Channel with Delayed CSIT | cs.IT math.IT | The Degrees of Freedom (DoF) of a K-User MISO Broadcast Channel (BC) is
studied when the Transmitter (TX) has access to a delayed channel estimate in
addition to an imperfect estimate of the current channel. The current estimate
could be for example obtained from prediction applied on past estimates, in the
case where feedback delay is within the coherence time. Building on previous
recent works on this setting with two users, the estimation error of the
current channel is characterized by its scaling with the transmit power P as
P^{-\alpha}, where \alpha=1 (resp. \alpha=0) corresponds to an estimate being
essentially perfect (resp. useless) in terms of DoF. In this work, we
contribute to the characterization of the DoF region in such a setting by
deriving an outer bound
for the DoF region and by providing an achievable DoF region. The achievable
DoF is obtained by developing a new alignment scheme, called the K\alpha-MAT
scheme, which builds upon both the principle of the MAT alignment scheme from
Maddah-Ali and Tse and Zero-Forcing to achieve a larger DoF when the delayed
CSIT received is correlated with the instantaneous channel state.
|
1301.2146 | A Paraconsistent Tableau Algorithm Based on Sign Transformation in
Semantic Web | cs.AI | In an open, constantly changing and collaborative environment like the
forthcoming Semantic Web, it is reasonable to expect that knowledge sources
will contain noise and inaccuracies. It is well known that description logic,
the logical foundation of the Semantic Web, lacks the ability to tolerate
inconsistent or incomplete data, and recently proposed paraconsistent
approaches for the Semantic Web still have relatively weak reasoning ability.
In this paper, we present a tableau algorithm based on sign transformation for
the Semantic Web that offers stronger reasoning ability. We prove that the
tableau algorithm is decidable and coincides with the classical tableau
algorithm on consistent knowledge bases.
|
1301.2150 | An Evidential Interpretation of the 1st and 2nd Laws of Thermodynamics | physics.data-an cond-mat.stat-mech cs.IT math.IT | I argue here that both the first and second laws of thermodynamics, generally
understood to be quintessentially physical in nature, can be equally well
described as being about certain types of information without the need to
invoke physical manifestations for information. In particular, I show that the
statistician's familiar likelihood principle is a general conservation
principle on a par with the first law, and that likelihood itself involves a
form of irrecoverable information loss that can be expressed in the form of
(one version of) the second law. Each of these principles involves a particular
type of information, and requires its own form of bookkeeping to properly
account for information accumulation. I illustrate both sets of books with a
simple coin-tossing (binomial) experiment. In thermodynamics, absolute
temperature T is the link that relates energy-based and entropy-based
bookkeeping systems. I consider the information-based analogue of this link,
denoted here as E, and show that E has a meaningful interpretation in its own
right in connection with statistical inference. These results contribute to a
growing body of theory at the intersection of thermodynamics, information
theory and statistical inference, and suggest a novel framework in which E
itself for the first time plays a starring role.
|
1301.2158 | Artificial Intelligence Framework for Simulating Clinical
Decision-Making: A Markov Decision Process Approach | cs.AI stat.ML | In the modern healthcare system, rapidly expanding costs/complexity, the
growing myriad of treatment options, and exploding information streams that
often do not effectively reach the front lines hinder the ability to make
optimal treatment decisions over time. The goal in this paper is to develop a
general purpose (non-disease-specific) computational/artificial intelligence
(AI) framework to address these challenges. This serves two potential
functions: 1) a simulation environment for exploring various healthcare
policies, payment methodologies, etc., and 2) the basis for clinical artificial
intelligence - an AI that can think like a doctor. This approach combines
Markov decision processes and dynamic decision networks to learn from clinical
data and develop complex plans via simulation of alternative sequential
decision paths while capturing the sometimes conflicting, sometimes synergistic
interactions of various components in the healthcare system. It can operate in
partially observable environments (in the case of missing observations or data)
by maintaining belief states about patient health status and functions as an
online agent that plans and re-plans. This framework was evaluated using real
patient data from an electronic health record. Such an AI framework easily
outperforms the current treatment-as-usual (TAU) case-rate/fee-for-service
models of healthcare (Cost per Unit Change: $189 vs. $497) while obtaining a
30-35% increase in patient outcomes. Tweaking certain model parameters further
enhances this advantage, obtaining roughly 50% more improvement for roughly
half the costs. Given careful design and problem formulation, an AI simulation
framework can approximate optimal decisions even in complex and uncertain
environments. Future work is described that outlines potential lines of
research and integration of machine learning algorithms for personalized
medicine.
|
1301.2165 | List Decoding of Lifted Gabidulin Codes via the Pl\"ucker Embedding | cs.IT math.IT | Codes in the Grassmannian have recently found an application in random
network coding. All the codewords in such codes are subspaces of $\F_q^n$ with
a given dimension.
In this paper, we consider the problem of list decoding of a certain family
of codes in the Grassmannian, called lifted Gabidulin codes.
For this purpose we use the Pl\"ucker embedding of the Grassmannian. We
describe a way of representing a subset of the Pl\"ucker coordinates of lifted
Gabidulin codes as linear block codes. The union of the parity-check equations
of these block codes and the equations which arise from the description of a
ball around a subspace in the Pl\"ucker coordinates describe the list of
codewords with distance less than a given parameter from the received word.
|
1301.2172 | Content-Based Video Browsing by Text Region Localization and
Classification | cs.MM cs.IR | The amount of digital video data is increasing around the world, which
highlights the need for efficient algorithms that can index, retrieve and
browse this data by content. This can be achieved through semantic
descriptions captured automatically from the video structure. Among such
descriptions, text within video provides rich features that enable effective
video indexing and browsing. Unlike most video text detection and extraction
methods, which treat video sequences as collections of still images, we
propose in this paper a spatiotemporal video-text localization and
identification approach that proceeds in two main steps: text region
localization and text region classification. In the first step, we detect the
significant appearance of new objects in a frame by split-and-merge processes
applied to binarized edge frame-pair differences. Detected objects are, a
priori, considered as text. They are then filtered according to both local
contrast variation and texture criteria in order to retain the effective ones.
The resulting text regions are classified based on a visual grammar descriptor
containing a set of semantic text class regions characterized by visual
features. A visual table of contents is then generated from the extracted text
regions occurring within the video sequence, enriched by a semantic
identification. Experiments performed on a variety of video sequences show the
efficiency of our approach.
|
1301.2173 | AViTExt: Automatic Video Text Extraction, A new Approach for video
content indexing Application | cs.MM cs.IR | In this paper, we propose a spatiotemporal video-text detection technique
that proceeds in two principal steps: potential text region detection and a
filtering process. In the first step, we dynamically divide each pair of
consecutive video frames into sub-blocks in order to detect change. A
significant difference between homologous blocks implies the appearance of an
important object, which may be a text region. Temporal redundancy is then used
to filter these regions and form effective text regions. Experiments conducted
on a variety of video sequences show the effectiveness of our approach, which
obtains a precision rate of 89.39% and a recall of 90.19%.
|
1301.2180 | The Streaming-DMT of Fading Channels | cs.IT math.IT | We consider the sequential transmission of a stream of messages over a
block-fading multi-input-multi-output (MIMO) channel. A new message arrives at
the beginning of each coherence block, and the decoder is required to output
each message sequentially, after a delay of $T$ coherence blocks. In the
special case when $T=1$, the setup reduces to the quasi-static fading channel.
We establish the optimal diversity-multiplexing tradeoff (DMT) in the high
signal-to-noise-ratio (SNR) regime, and show that it equals $T$ times the DMT
of the quasi-static channel. The converse is based on utilizing the delay
constraint to amplify a local outage event associated with a message, globally
across all the coherence blocks. This approach appears to be new. We propose
two coding schemes that achieve the optimal DMT. The first scheme involves
interleaving of messages, such that each message is transmitted across $T$
consecutive coherence blocks. This scheme requires the knowledge of the delay
constraint at both the encoder and decoder. Our second coding scheme involves a
sequential tree code and is delay-universal, i.e., the knowledge of the decoding
delay is not required by the encoder. However, in this scheme we require the
coherence block-length to increase as $\log\mathrm{({SNR})}$, in order to
attain the optimal DMT. Finally, we discuss the case when multiple messages
arrive at uniform intervals {\em within} each coherence period. Through a
simple example we exhibit the sub-optimality of interleaving, and propose
another scheme that achieves the optimal DMT.
|
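As a numerical illustration of the main result stated above (streaming DMT equals T times the quasi-static DMT): the quasi-static MIMO DMT of Zheng and Tse is the piecewise-linear function through the points (k, (m−k)(n−k)) for integer multiplexing gains k. The helper below is our sketch, not the paper's code.

```python
def quasi_static_dmt(m, n, r):
    """Optimal DMT of an m x n quasi-static MIMO channel (Zheng-Tse):
    the piecewise-linear function through the points (k, (m-k)(n-k)),
    for multiplexing gain r in [0, min(m, n)]."""
    k = int(r)
    if k >= min(m, n):
        return 0.0
    d0 = (m - k) * (n - k)          # value at integer point k
    d1 = (m - k - 1) * (n - k - 1)  # value at integer point k + 1
    return d0 + (r - k) * (d1 - d0)

def streaming_dmt(m, n, r, T):
    """The abstract's result: with a decoding delay of T coherence
    blocks, the streaming DMT equals T times the quasi-static DMT."""
    return T * quasi_static_dmt(m, n, r)
```

For a 2x2 channel, the quasi-static diversity at zero multiplexing gain is 4, so a delay of T = 3 blocks yields a diversity of 12, and T = 1 recovers the quasi-static channel as the abstract notes.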
1301.2182 | Dynamic Triggering Mechanisms for Event-Triggered Control | cs.SY math.OC | In this paper, we present a new class of event triggering mechanisms for
event-triggered control systems. This class is characterized by the
introduction of an internal dynamic variable, which motivates the proposed name
of dynamic event triggering mechanism. The stability of the resulting closed
loop system is proved and the influence of design parameters on the decay rate
of the Lyapunov function is discussed. For linear systems, we establish a lower
bound on the inter-execution time as a function of the parameters. The
influence of these parameters on a quadratic integral performance index is also
studied. Some simulation results are provided for illustration of the
theoretical claims.
|
1301.2194 | Network-based clustering with mixtures of L1-penalized Gaussian
graphical models: an empirical investigation | stat.ML cs.LG stat.ME | In many applications, multivariate samples may harbor previously unrecognized
heterogeneity at the level of conditional independence or network structure.
For example, in cancer biology, disease subtypes may differ with respect to
subtype-specific interplay between molecular components. Then, both subtype
discovery and estimation of subtype-specific networks present important and
related challenges. To enable such analyses, we put forward a mixture model
whose components are sparse Gaussian graphical models. This brings together
model-based clustering and graphical modeling to permit simultaneous estimation
of cluster assignments and cluster-specific networks. We carry out estimation
within an L1-penalized framework, and investigate several specific penalization
regimes. We present empirical results on simulated data and provide general
recommendations for the formulation and use of mixtures of L1-penalized
Gaussian graphical models.
|
1301.2200 | A Visual Grammar Approach for TV Program Identification | cs.MM cs.IR | Automatic identification of TV programs within TV streams is an important
task for archive exploitation. This paper proposes a new spatial-temporal
approach to identify programs in TV streams in two main steps: First, a
reference catalogue for video grammars of visual jingles is constructed. We
exploit visual grammars characterizing instances of the same program type in
order to identify the various program types in the TV stream. The role of video
grammar is to represent the visual invariants for each visual jingle using a
set of descriptors appropriate for each TV program. Secondly, programs in TV
streams are identified by examining the similarity of the video signal to the
visual grammars in the catalogue. The main idea of the identification process
is to compare the visual similarity of the video-signal signature in the TV
stream to the catalogue elements. After presenting the proposed approach, the
paper overviews the encouraging experimental results on several streams
extracted from different channels and composed of several programs.
|
1301.2215 | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2012), 5th International Workshop, September 4, 2012, Budapest,
Hungary | cs.AI | This volume contains the papers presented at the fifth workshop on Answer Set
Programming and Other Computing Paradigms (ASPOCP 2012) held on September 4th,
2012 in Budapest, co-located with the 28th International Conference on Logic
Programming (ICLP 2012). It thus continues a series of previous events
co-located with ICLP, aiming at facilitating the discussion about crossing the
boundaries of current ASP techniques in theory, solving, and applications, in
combination with or inspired by other computing paradigms.
|
1301.2218 | Estimation from Relative Measurements in Mobile Networks with Markovian
Switching Topology: Clock Skew and Offset Estimation for Time Synchronization | cs.SY | We analyze a distributed algorithm for estimation of scalar parameters
belonging to nodes in a mobile network from noisy relative measurements. The
motivation comes from the problem of clock skew and offset estimation for the
purpose of time synchronization. The time variation of the network was modeled
as a Markov chain. The estimates are shown to be mean square convergent under
fairly weak assumptions on the Markov chain, as long as the union of the graphs
is connected. Expressions for the asymptotic mean and correlation are also
provided. The Markovian switching topology model of mobile networks is
justified for certain node mobility models through empirically estimated
conditional entropy measures.
|
1301.2223 | Disruptions in the U.S. Airport Network | physics.soc-ph cs.SI | Our project analyzes the United States domestic airport network. We attempt
to determine which airports are most vital in maintaining the underlying
infrastructure for all domestic flights within the United States. To perform
our analysis, we use data from the first quarter of 2010 and use several
methods and algorithms that are frequently used in network science. Using these
statistics, we identified the most important airports in the United States and
investigated the role and significance that these airports play in maintaining
the structure of the entire domestic airport network. Some of these airports
include Denver International and Ted Stevens Anchorage International. We also
identified any structural holes and suggested improvements that can be made to
the network. Finally, through our analysis, we developed a disaster response
algorithm that calculates flight path reroutes in emergency situations.
|
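One standard way to quantify which airports are "most vital" to the underlying infrastructure (our illustration; the abstract does not specify which network statistics were used) is to measure how many origin-destination pairs become disconnected when each airport is closed:

```python
from collections import deque

def connected_pairs(adj, removed=frozenset()):
    """Count unordered pairs of airports that remain connected when the
    airports in `removed` are closed (BFS over each component)."""
    nodes = [v for v in adj if v not in removed]
    seen, pairs = set(), 0
    for s in nodes:
        if s in seen:
            continue
        comp = {s}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in removed and v not in comp:
                    comp.add(v)
                    q.append(v)
        seen |= comp
        pairs += len(comp) * (len(comp) - 1) // 2
    return pairs

def most_vital(adj):
    """Rank airports by how many O-D pairs disconnect when each is closed.
    (Pairs involving the closed airport itself are counted too; the
    ranking is what matters here.)"""
    base = connected_pairs(adj)
    impact = {v: base - connected_pairs(adj, frozenset([v])) for v in adj}
    return sorted(impact, key=impact.get, reverse=True)
```

On a toy hub-and-spoke network, closing the hub disconnects every remaining pair, so the hub tops the ranking, mirroring the outsized role the study attributes to hubs such as Denver International.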
1301.2236 | User Profile-Driven Data Warehouse Summary for Adaptive OLAP Queries | cs.DB | Data warehousing is an essential element of decision support systems. It aims
at enabling users to make better and faster daily business decisions. To
improve such decision support systems and to give increasingly relevant
information to the user, the need to integrate users' profiles into
the data warehouse process becomes crucial. In this paper, we propose to
exploit users' preferences as a basis for adapting OLAP (On-Line Analytical
Processing) queries to the user. For this, we present a user profile-driven
data warehouse approach that allows defining a user's profile composed of
his/her identifier and a set of his/her preferences. Our approach is based on
a general
data warehouse architecture and an adaptive OLAP analysis system. Our main idea
consists in creating a data warehouse materialized view for each user with
respect to his/her profile. This task is performed off-line when the user
defines his/her profile for the first time. Then, when a user query is
submitted to the data warehouse, the system deals with his/her data warehouse
materialized view instead of the whole data warehouse. In other words, the data
warehouse view summarizes the data warehouse content for the user by taking into
account his/her preferences. Moreover, we are implementing our data warehouse
personalization approach under the SQL Server 2005 DBMS (DataBase Management
System).
|
1301.2237 | Wyner's Common Information: Generalizations and A New Lossy Source
Coding Interpretation | cs.IT math.IT | Wyner's common information was originally defined for a pair of dependent
discrete random variables. Its significance is largely reflected in, hence also
confined to, several existing interpretations in various source coding
problems. This paper attempts to both generalize its definition and to expand
its practical significance by providing a new operational interpretation. The
generalization is twofold: the number of dependent variables can be
arbitrary, as can the alphabets of those random variables. New properties are
determined for the generalized Wyner's common information of N dependent
variables. More importantly, a lossy source coding interpretation of Wyner's
common information is developed using the Gray-Wyner network. In particular, it
is established that the common information equals the smallest common
message rate when the total rate is arbitrarily close to the rate distortion
function with joint decoding. A surprising observation is that such equality
holds independent of the values of distortion constraints as long as the
distortions are within some distortion region. Examples about the computation
of common information are given, including that of a pair of dependent Gaussian
random variables.
|
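For the bivariate Gaussian example mentioned at the end of the abstract, Wyner's common information has a known closed form: (1/2) log((1+ρ)/(1−ρ)) nats for correlation coefficient ρ. The helper below is our sketch of that computation.

```python
import math

def wyner_common_information_gaussian(rho):
    """Wyner's common information of a pair of jointly Gaussian random
    variables with correlation coefficient rho, in nats:
    (1/2) * log((1 + |rho|) / (1 - |rho|))."""
    r = abs(rho)
    if not r < 1:
        raise ValueError("correlation must satisfy |rho| < 1")
    return 0.5 * math.log((1 + r) / (1 - r))
```

At ρ = 0 the variables are independent and the common information vanishes; as |ρ| → 1 it diverges, reflecting that nearly identical variables share an unboundedly precise common part.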
1301.2247 | Evolutionary dynamics of group interactions on structured populations: A
review | physics.soc-ph cond-mat.stat-mech cs.SI nlin.AO q-bio.PE | Interactions among living organisms, from bacteria colonies to human
societies, are inherently more complex than interactions among particles and
nonliving matter. Group interactions are a particularly important and
widespread class, representative of which is the public goods game. In
addition, methods of statistical physics have proven valuable for studying
pattern formation, equilibrium selection, and self-organisation in evolutionary
games. Here we review recent advances in the study of evolutionary dynamics of
group interactions on structured populations, including lattices, complex
networks and coevolutionary models. We also compare these results with those
obtained on well-mixed populations. The review particularly highlights that the
study of the dynamics of group interactions, like several other important
equilibrium and non-equilibrium dynamical processes in biological, economical
and social sciences, benefits from the synergy between statistical physics,
network science and evolutionary game theory.
|
1301.2252 | A Factorized Variational Technique for Phase Unwrapping in Markov Random
Fields | cs.CV | Some types of medical and topographic imaging devices produce images in
which the pixel values are "phase-wrapped", i.e. measured modulo a known
scalar.
Phase unwrapping can be viewed as the problem of inferring the number of shifts
between each and every pair of neighboring pixels, subject to an a priori
preference for smooth surfaces, and subject to a zero curl constraint, which
requires that the shifts must sum to 0 around every loop. We formulate phase
unwrapping as a mean field inference problem in a Markov network, where the
prior favors the zero curl constraint. We compare our mean field technique with
the least squares method on a synthetic 100x100 image, and give results on a
512x512 synthetic aperture radar image from Sandia National Laboratories.
|
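To make the notion of "phase-wrapped" concrete, here is a 1D toy example (our illustration; the paper's setting is 2D images, where the zero-curl constraint around loops is what makes the inference hard). In 1D there are no loops, so greedily choosing the integer number of period-shifts between consecutive samples suffices:

```python
import math

def unwrap_1d(wrapped, period=2 * math.pi):
    """Greedy 1D phase unwrapping: between consecutive samples, pick the
    integer number of period-shifts that makes the jump smallest. This is
    the 1D analogue of inferring shifts between neighboring pixels; no
    curl constraint is needed in 1D."""
    out = [wrapped[0]]
    for w in wrapped[1:]:
        k = round((out[-1] - w) / period)  # shift count for this pair
        out.append(w + k * period)
    return out
```

A smoothly increasing true phase that has been wrapped modulo 2π is recovered exactly, up to the (unobservable) global offset of the first sample.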
1301.2253 | Efficient Approximation for Triangulation of Minimum Treewidth | cs.DS cs.AI | We present four novel approximation algorithms for finding triangulation of
minimum treewidth. Two of the algorithms improve on the running times of
algorithms by Robertson and Seymour, and Becker and Geiger that approximate the
optimum by factors of 4 and 3 2/3, respectively. A third algorithm is faster
than those but gives an approximation factor of 4 1/2. The last algorithm is
yet faster, producing factor-O(lg k) approximations in polynomial time. Finding
triangulations of minimum treewidth for graphs is central to many problems in
computer science. Real-world problems in artificial intelligence, VLSI design
and databases are efficiently solvable if we have an efficient approximation
algorithm for them. We report on experimental results confirming the
effectiveness of our algorithms for large graphs associated with real-world
problems.
|
1301.2254 | Markov Chain Monte Carlo using Tree-Based Priors on Model Structure | cs.AI | We present a general framework for defining priors on model structure and
sampling from the posterior using the Metropolis-Hastings algorithm. The key
idea is that structure priors are defined via a probability tree and that the
proposal mechanism for the Metropolis-Hastings algorithm operates by traversing
this tree, thereby defining a cheaply computable acceptance probability. We
have applied this approach to Bayesian net structure learning using a number of
priors and tree traversal strategies. Our results show that these must be
chosen appropriately for this approach to be successful.
|
1301.2255 | Graphical readings of possibilistic logic bases | cs.AI | Possibility theory offers either a qualitative or a numerical framework for
representing uncertainty, in terms of dual measures of possibility and
necessity. This leads to the existence of two kinds of possibilistic causal
graphs where the conditioning is either based on the minimum, or the product
operator. Benferhat et al. (1999) have investigated the connections between
min-based graphs and possibilistic logic bases (made of classical formulas
weighted in terms of certainty). This paper deals with a more difficult issue:
the product-based graphical representations of possibilistic bases, which
provide an easy structural reading of possibilistic bases. Moreover, this
paper also provides another reading of possibilistic bases in terms of
comparative preferences of the form "in the context p, q is preferred to not
q". This enables us to make explicit the preferences underlying a set of goals with
different levels of priority.
|
1301.2256 | Pre-processing for Triangulation of Probabilistic Networks | cs.AI cs.DS | The currently most efficient algorithm for inference with a probabilistic
network builds upon a triangulation of a network's graph. In this paper, we
show that pre-processing can help in finding good triangulations
for probabilistic networks, that is, triangulations with a minimal maximum
clique size. We provide a set of rules for stepwise reducing a graph, without
losing optimality. This reduction allows us to solve the triangulation problem
on a smaller graph. From the smaller graph's triangulation, a triangulation of
the original graph is obtained by reversing the reduction steps. Our
experimental results show that the graphs of some well-known real-life
probabilistic networks can be triangulated optimally just by preprocessing; for
other networks, huge reductions in their graph's size are obtained.
|
1301.2257 | A Calculus for Causal Relevance | cs.AI | This paper presents a sound and complete calculus for causal relevance, based
on Pearl's functional models semantics. The calculus consists of axioms and
rules of inference for reasoning about causal relevance relationships. We extend
the set of known axioms for causal relevance with three new axioms, and
introduce two new rules of inference for reasoning about specific subclasses
of models. These subclasses give a more refined characterization of causal models
than the one given in Halpern's axiomatization of counterfactual
reasoning. Finally, we show how the calculus for causal relevance can be used in
the task of identifying causal structure from non-observational data.
|
1301.2258 | Instrumentality Tests Revisited | cs.AI stat.ME | An instrument is a random variable that allows the identification of
parameters in linear models when the error terms are not uncorrelated. It is a
popular method used in economics and the social sciences that reduces the
problem of identification to the problem of finding the appropriate instruments.
A few years ago, Pearl introduced a necessary test for instruments that allows
the researcher to discard those candidates that fail the test. In this paper, we
make a detailed study of Pearl's test and the general model for instruments. The
results of this study include a novel interpretation of Pearl's test, a general
theory of instrumental tests, and an affirmative answer to a previous conjecture.
We also present new instrumentality tests for the cases of discrete and
continuous variables.
|
1301.2259 | UCP-Networks: A Directed Graphical Representation of Conditional
Utilities | cs.AI | We propose a new directed graphical representation of utility functions,
called UCP-networks, that combines aspects of two existing graphical models:
generalized additive models and CP-networks. The network decomposes a utility
function into a number of additive factors, with the directionality of the arcs
reflecting conditional dependence of preference statements - in the underlying
(qualitative) preference ordering - under a ceteris paribus (all else
being equal) interpretation. This representation is arguably natural in many
settings. Furthermore, the strong CP-semantics ensures that computation of
optimization and dominance queries is very efficient. We also demonstrate the
value of this representation in decision making. Finally, we describe an
interactive elicitation procedure that takes advantage of the linear nature of
the constraints on "tradeoff weights" imposed by a UCP-network. This procedure
allows the network to be refined until the regret of the decision with minimax
regret (with respect to the incompletely specified utility function) falls
below a specified threshold (e.g., the cost of further questioning).
|
1301.2260 | Confidence Inference in Bayesian Networks | cs.AI | We present two sampling algorithms for probabilistic confidence inference in
Bayesian networks. These two algorithms (we call them AIS-BN-mu and
AIS-BN-sigma algorithms) guarantee that estimates of posterior probabilities
are with a given probability within a desired precision bound. Our algorithms
are based on recent advances in sampling algorithms for (1) estimating the mean
of bounded random variables and (2) adaptive importance sampling in Bayesian
networks. In addition to providing a simple stopping rule for sampling, the
AIS-BN-mu and AIS-BN-sigma algorithms are capable of guiding the learning
process in the AIS-BN algorithm. An empirical evaluation of the proposed
algorithms shows excellent performance, even for very unlikely evidence.
|
1301.2261 | Semi-Instrumental Variables: A Test for Instrument Admissibility | stat.ME cs.AI stat.AP | In a causal graphical model, an instrument for a variable X and its effect Y
is a random variable that is a cause of X and independent of all the causes of
Y except X (Pearl 1995; Spirtes et al. 2000). Instrumental variables can be
used to estimate how the distribution of an effect will respond to a
manipulation of its causes, even in the presence of unmeasured common causes
(confounders). In typical instrumental variable estimation, instruments are
chosen based on domain knowledge. There is currently no statistical test for
validating a variable as an instrument. In this paper, we introduce the concept
of semi-instrument, which generalizes the concept of instrument. We show that
in the framework of additive models, under certain conditions, we can test
whether a variable is semi-instrumental. Moreover, adding some distribution
assumptions, we can test whether two semi-instruments are instrumental. We give
algorithms to estimate the p-value that a random variable is semi-instrumental,
and the p-value that two semi-instruments are both instrumental. These
algorithms can be used to test the experts' choice of instruments, or to
identify instruments automatically.
|
1301.2262 | Conditions Under Which Conditional Independence and Scoring Methods Lead
to Identical Selection of Bayesian Network Models | cs.AI cs.LG stat.ML | It is often stated in papers tackling the task of inferring Bayesian network
structures from data that there are these two distinct approaches: (i) Apply
conditional independence tests when testing for the presence or otherwise of
edges; (ii) Search the model space using a scoring metric. Here I argue that
for complete data and a given node ordering this division is a myth, by showing
that cross entropy methods for checking conditional independence are
mathematically identical to methods based upon discriminating between models by
their overall goodness-of-fit logarithmic scores.
|
1301.2263 | Linearity Properties of Bayes Nets with Binary Variables | cs.AI | It is "well known" that in linear models: (1) testable constraints on the
marginal distribution of observed variables distinguish certain cases in which
an unobserved cause jointly influences several observed variables; (2) the
technique of "instrumental variables" sometimes permits an estimation of the
influence of one variable on another even when the association between the
variables may be confounded by unobserved common causes; (3) the association
(or conditional probability distribution of one variable given another) of two
variables connected by a path or trek can be computed directly from the
parameter values associated with each edge in the path or trek; (4) the
association of two variables produced by multiple treks can be computed from
the parameters associated with each trek; and (5) the independence of two
variables conditional on a third implies the corresponding independence of the
sums of the variables over all units conditional on the sums over all units of
each of the original conditioning variables. These properties are exploited in
search procedures. It is also known that properties (2)-(5) do not hold for all
Bayes nets with binary variables. We show that (1) holds for all Bayes nets
with binary variables and (5) holds for all singly trek-connected Bayes nets of
that kind. We further show that all five properties hold for Bayes nets with
any DAG and binary variables parameterized with noisy-or and noisy-and gates.
|
1301.2264 | Using Bayesian Networks to Identify the Causal Effect of Speeding in
Individual Vehicle/Pedestrian Collisions | cs.AI stat.AP | On roads showing significant violations of posted speed limits, one measure
of the safety effect of speeding is the difference between the road's actual
accident count and the count that would have occurred if the posted speed limit
had been strictly obeyed. An estimate of this accident reduction can be had by
computing the probability that speeding was a necessary condition for each of a
set of accidents. This is an instance of assessing individual probabilities of
causation, which is generally not possible absent prior knowledge of causal
structure. For traffic accidents such prior knowledge is often available and
this paper illustrates how, for a commonly occurring class of
vehicle/pedestrian accidents, approaches to uncertainty and causal analyses
appearing in the accident reconstruction literature can be unified using
Bayesian networks. Measured skidmarks, pedestrian throw distances, and
pedestrian injury severity are treated as evidence, and using the Gibbs
Sampling routine BUGS, the posterior probability distribution over exogenous
variables, such as the vehicle's initial speed, location, and driver reaction
time, is computed. This posterior distribution is then used to compute the
"probability of necessity" for speeding.
|
1301.2265 | Hybrid Processing of Beliefs and Constraints | cs.AI | This paper explores algorithms for processing probabilistic and deterministic
information when the former is represented as a belief network and the latter
as a set of boolean clauses. The motivating tasks are 1. evaluating belief
networks having a large number of deterministic relationships and 2. evaluating
probabilities of complex boolean queries over a belief network. We propose a
parameterized family of variable elimination algorithms that exploit both types
of information, and that allows varying levels of constraint propagation
inferences. The complexity of the scheme is controlled by the induced-width of
the graph augmented by the dependencies introduced by the boolean
constraints. A preliminary empirical evaluation demonstrates the effect of
constraint propagation on probabilistic computation.
|
1301.2266 | Variational MCMC | cs.LG stat.CO stat.ML | We propose a new class of learning algorithms that combines variational
approximation and Markov chain Monte Carlo (MCMC) simulation. Naive algorithms
that use the variational approximation as proposal distribution can perform
poorly because this approximation tends to underestimate the true variance and
other features of the data. We solve this problem by introducing more
sophisticated MCMC algorithms. One of these algorithms is a mixture of two MCMC
kernels: a random walk Metropolis kernel and a block Metropolis-Hastings (MH)
kernel with a variational approximation as proposal distribution. The MH kernel
allows one to locate regions of high probability efficiently. The Metropolis
kernel allows us to explore the vicinity of these regions. This algorithm
outperforms variational approximations because it yields slightly better
estimates of the mean and considerably better estimates of higher moments, such
as covariances. It also outperforms standard MCMC algorithms because it locates
the regions of high probability quickly, thus speeding up convergence. We
demonstrate this algorithm on the problem of Bayesian parameter estimation for
logistic (sigmoid) belief networks.
|
1301.2267 | Efficient Stepwise Selection in Decomposable Models | cs.AI cs.DS | In this paper, we present an efficient way of performing stepwise selection
in the class of decomposable models. The main contribution of the paper is a
simple characterization of the edges that can be added to a decomposable model
while keeping the resulting model decomposable and an efficient algorithm for
enumerating all such edges for a given model in essentially O(1) time per edge.
We also discuss how backward selection can be performed efficiently using our
data structures. We also analyze the complexity of the complete stepwise
selection procedure, including the complexity of choosing which of the eligible
edges to add to (or delete from) the current model, with the aim of minimizing
the Kullback-Leibler distance of the resulting model from the saturated model
for the data.
|
1301.2268 | Incorporating Expressive Graphical Models in Variational Approximations:
Chain-Graphs and Hidden Variables | cs.AI cs.LG | Global variational approximation methods in graphical models allow efficient
approximate inference of complex posterior distributions by using a simpler
model. The choice of the approximating model determines a tradeoff between the
complexity of the approximation procedure and the quality of the approximation.
In this paper, we consider variational approximations based on two classes of
models that are richer than standard Bayesian networks, Markov networks or
mixture models. As such, these classes allow us to find better tradeoffs in the
spectrum of approximations. The first class of models are chain graphs, which
capture distributions that are partially directed. The second class of models
are directed graphs (Bayesian networks) with additional latent variables. Both
classes allow representation of multi-variable dependencies that cannot be
easily represented within a Bayesian network.
|
1301.2269 | Learning the Dimensionality of Hidden Variables | cs.LG cs.AI stat.ML | A serious problem in learning probabilistic models is the presence of hidden
variables. These variables are not observed, yet interact with several of the
observed variables. Detecting hidden variables poses two problems: determining
the relations to other variables in the model and determining the number of
states of the hidden variable. In this paper, we address the latter problem in
the context of Bayesian networks. We describe an approach that utilizes a
score-based agglomerative state-clustering. As we show, this approach allows us
to efficiently evaluate models with a range of cardinalities for the hidden
variable. We show how to extend this procedure to deal with multiple
interacting hidden variables. We demonstrate the effectiveness of this approach
by evaluating it on synthetic and real-life data. We show that our approach
learns models with hidden variables that generalize better and have better
structure than previous approaches.
|
1301.2270 | Multivariate Information Bottleneck | cs.LG cs.AI stat.ML | The Information bottleneck method is an unsupervised non-parametric data
organization technique. Given a joint distribution P(A,B), this method
constructs a new variable T that extracts partitions, or clusters, over the
values of A that are informative about B. The information bottleneck has
already been applied to document classification, gene expression, neural code,
and spectral analysis. In this paper, we introduce a general principled
framework for multivariate extensions of the information bottleneck method.
This allows us to consider multiple systems of data partitions that are
inter-related. Our approach utilizes Bayesian networks for specifying the
systems of clusters and what information each captures. We show that this
construction provides insight about bottleneck variations and enables us to
characterize solutions of these variations. We also present a general framework
for iterative algorithms for constructing solutions, and apply it to several
examples.
|
1301.2271 | A Comparison of Axiomatic Approaches to Qualitative Decision Making
Using Possibility Theory | cs.AI | In this paper we analyze two recent axiomatic approaches proposed by Dubois
et al and by Giang and Shenoy to qualitative decision making where uncertainty
is described by possibility theory. Both axiomatizations are inspired by von
Neumann and Morgenstern's system of axioms for the case of probability theory.
We show that our approach naturally unifies two axiomatic systems that
correspond respectively to pessimistic and optimistic decision criteria
proposed by Dubois et al. The simplifying unification is achieved by (i)
replacing axioms that are supposed to reflect two informational attitudes
(uncertainty aversion and uncertainty attraction) by an axiom that imposes
order on the set of standard lotteries and (ii) using a binary utility scale in
which each utility level is represented by a pair of numbers.
|
1301.2272 | Enumerating Markov Equivalence Classes of Acyclic Digraph Models | cs.AI | Graphical Markov models determined by acyclic digraphs (ADGs), also called
directed acyclic graphs (DAGs), are widely studied in statistics, computer
science (as Bayesian networks), operations research (as influence diagrams),
and many related fields. Because different ADGs may determine the same Markov
equivalence class, it long has been of interest to determine the efficiency
gained in model specification and search by working directly with Markov
equivalence classes of ADGs rather than with ADGs themselves. A computer
program was written to enumerate the equivalence classes of ADG models as
specified by Pearl & Verma's equivalence criterion. The program counted
equivalence classes for models up to and including 10 vertices. The ratio of
number of classes to ADGs appears to approach an asymptote of about 0.267.
Classes were analyzed according to number of edges and class size. By edges,
the distribution of number of classes approaches a Gaussian shape. By class
size, classes of size 1 are most common, with the proportions for larger sizes
initially decreasing but then following a more irregular pattern. The maximum
number of classes generated by any undirected graph was found to increase
approximately factorially. The program also includes a new variation of an
orderly algorithm for generating undirected graphs.
|
1301.2273 | Robust Combination of Local Controllers | cs.AI cs.SY | Planning problems are hard; motion planning, for example, is PSPACE-hard. Such
problems are even more difficult in the presence of uncertainty. Although
Markov Decision Processes (MDPs) provide a formal framework for such problems,
finding solutions to high dimensional continuous MDPs is usually difficult,
especially when the actions and time measurements are continuous. Fortunately,
problem-specific knowledge allows us to design controllers that are good
locally, though having no global guarantees. We propose a method of
nonparametrically combining local controllers to obtain globally good
solutions. We apply this formulation to two types of problems: motion planning
(stochastic shortest path) and discounted MDPs. For motion planning, we argue
that the usual MDP optimality criterion (expected cost) may not be practically
relevant. We propose an alternative: finding the minimum cost path, subject to
the constraint that the robot must reach the goal with high probability. For
this problem, we prove that a polynomial number of samples is sufficient to
obtain a high probability path. For discounted MDPs, we propose a formulation
that explicitly deals with model uncertainty, i.e., the problem introduced when
transition probabilities are not known exactly. We formulate the problem as a
robust linear program which directly incorporates this type of uncertainty.
|
1301.2274 | Similarity Measures on Preference Structures, Part II: Utility Functions | cs.AI | In previous work (Ha 1998) we presented a case-based approach to
eliciting and reasoning with preferences. A key issue in this approach is the
definition of similarity between user preferences. We introduced the
probabilistic distance as a measure of similarity on user preferences, and
provided an algorithm to compute the distance between two partially specified
value functions. This is for the case of decision making under
certainty. In this paper we address the more challenging issue of computing
the probabilistic distance in the case of decision making under
uncertainty. We provide an algorithm to compute the probabilistic distance
between two partially specified utility functions. We demonstrate the use
of this algorithm with a medical data set of partially specified patient
preferences, where none of the other existing distance measures appear
definable. Using this data set, we also demonstrate that the case-based
approach to preference elicitation is applicable in domains with uncertainty.
Finally, we provide a comprehensive analytical comparison of the probabilistic
distance with some existing distance measures on preferences.
|
1301.2275 | Causes and Explanations: A Structural-Model Approach --- Part 1: Causes | cs.AI | We propose a new definition of actual causes, using structural equations to
model counterfactuals. We show that the definitions yield a plausible and
elegant account of causation that handles well examples which have caused
problems for other definitions and resolves major difficulties in the
traditional account. In a companion paper, we show how the definition of
causality can be used to give an elegant definition of (causal) explanation.
|
1301.2277 | A Clustering Approach to Solving Large Stochastic Matching Problems | cs.AI cs.DS | In this work we focus on efficient heuristics for solving a class of
stochastic planning problems that arise in a variety of business, investment,
and industrial applications. The problem is best described in terms of future
buy and sell contracts. By buying less reliable, but less expensive, buy
(supply) contracts, a company or a trader can cover a position of more reliable
and more expensive sell contracts. The goal is to maximize the expected net
gain (profit) by constructing a close-to-optimum portfolio out of the available
buy and sell contracts. This stochastic planning problem can be formulated as a
two-stage stochastic linear programming problem with recourse. However, this
formalization leads to solutions that are exponential in the number of possible
failure combinations. Thus, this approach is not feasible for large scale
problems. In this work we investigate heuristic approximation techniques
alleviating the efficiency problem. We primarily focus on the clustering
approach and devise heuristics for finding clusterings leading to good
approximations. We illustrate the quality and feasibility of the approach
through experimental data.
|
1301.2278 | Discovering Multiple Constraints that are Frequently Approximately
Satisfied | cs.LG stat.ML | Some high-dimensional data sets can be modelled by assuming that there are
many different linear constraints, each of which is Frequently Approximately
Satisfied (FAS) by the data. The probability of a data vector under the model
is then proportional to the product of the probabilities of its constraint
violations. We describe three methods of learning products of constraints using
a heavy-tailed probability distribution for the violations.
|
1301.2279 | A Bayesian Approach to Tackling Hard Computational Problems | cs.AI | We are developing a general framework for using learned Bayesian models for
decision-theoretic control of search and reasoning algorithms. We illustrate the
approach on the specific task of controlling both general and domain-specific
solvers on a hard class of structured constraint satisfaction problems. A
successful strategy for reducing the high (and even infinite) variance in
running time typically exhibited by backtracking search algorithms is to cut
off and restart the search if a solution is not found within a certain amount of
time. Previous work on restart strategies has employed fixed cutoff values.
We show how to create a dynamic cut off strategy by learning a Bayesian model
that predicts the ultimate length of a trial based on observing the early
behavior of the search algorithm. Furthermore, we describe the general
conditions under which a dynamic restart strategy can outperform the
theoretically optimal fixed strategy.
|
1301.2280 | Estimating Well-Performing Bayesian Networks using Bernoulli Mixtures | cs.LG cs.AI stat.ML | A novel method for estimating Bayesian network (BN) parameters from data is
presented which provides improved performance on test data. Previous research
has shown the value of representing conditional probability distributions
(CPDs) via neural networks (Neal 1992), noisy-OR gates (Neal 1992, Diez 1993)
and decision trees (Friedman and Goldszmidt 1996). The Bernoulli mixture network
(BMN) explicitly represents the CPDs of discrete BN nodes as mixtures of local
distributions, each having a different set of parents. This increases the space
of possible structures which can be considered, enabling the CPDs to have
finer-grained dependencies. The resulting estimation procedure induces a
model that is better able to emulate the underlying interactions occurring in
the data than conventional conditional Bernoulli network models. The results for
artificially generated data indicate that overfitting is best reduced by
restricting the complexity of candidate mixture substructures local to each
node. Furthermore, mixtures of very simple substructures can perform almost as
well as more complex ones. The BMN is also applied to data collected from an
online adventure game with an application to keyhole plan recognition. The
results show that the BMN-based model brings a dramatic improvement in
performance over a conventional BN model.
|
1301.2281 | Graphical Models for Game Theory | cs.GT cs.AI | In this work, we introduce graphical models for multi-player game theory, and
give powerful algorithms for computing their Nash equilibria in certain cases.
An n-player game is given by an undirected graph on n nodes and a set of n
local matrices. The interpretation is that the payoff to player i is determined
entirely by the actions of player i and his neighbors in the graph, and thus
the payoff matrix to player i is indexed only by these players. We thus view
the global n-player game as being composed of interacting local games, each
involving many fewer players. Each player's action may have global impact, but
it occurs through the propagation of local influences. Our main technical result
is an efficient algorithm for computing Nash equilibria when the underlying
graph is a tree (or can be turned into a tree with few node mergings). The
algorithm runs in time polynomial in the size of the representation (the graph
and the associated local game matrices), and comes in two related but distinct
flavors. The first version involves an approximation step, and computes a
representation of all approximate Nash equilibria (of which there may be an
exponential number in general). The second version allows the exact computation
of Nash equilibria at the expense of weakened complexity bounds. The algorithm
requires only local message-passing between nodes (and thus can be implemented
by the players themselves in a distributed manner). Despite an analogy to
inference in Bayes nets that we develop, the analysis of our algorithm is more
involved than that for the polytree algorithm, owing partially to the fact
that we must either compute, or select from, an exponential number of potential
solutions. We discuss a number of extensions, such as the computation of
equilibria with desirable global properties (e.g. maximizing global return),
and directions for further research.
|
1301.2282 | On characterizing Inclusion of Bayesian Networks | cs.AI | Every directed acyclic graph (DAG) over a finite non-empty set of variables
(= nodes) N induces an independence model over N, which is a list of
conditional independence statements over N. The inclusion problem is how to
characterize (in graphical terms) whether all independence statements in the
model induced by a DAG K are in the model induced by a second DAG L. Meek
(1997) conjectured that this inclusion holds iff there exists a sequence of
DAGs from L to K such that only certain 'legal' arrow reversal and 'legal'
arrow adding operations are performed to get the next DAG in the sequence.In
this paper we give several characterizations of inclusion of DAG models and
verify Meek's conjecture in the case that the DAGs K and L differ in at most
one adjacency. As a warm-up, a rigorous proof of the well-known graphical
characterizations of equivalence of DAGs, which is a highly related problem, is
given.
|
1301.2283 | Improved learning of Bayesian networks | cs.LG cs.AI stat.ML | The search space of Bayesian Network structures is usually defined as Acyclic
Directed Graphs (DAGs) and the search is done by local transformations of DAGs.
But the space of Bayesian Networks is ordered by DAG Markov model inclusion and
it is natural to consider that a good search policy should take this into
account. A first attempt to do this (Chickering 1996) used equivalence
classes of DAGs instead of DAGs themselves. This approach produces better results
but it is significantly slower. We present a compromise between these two
approaches. It uses DAGs to search the space in such a way that the ordering by
inclusion is taken into account. This is achieved by repetitive usage of local
moves within the equivalence class of DAGs. We show that this new approach
produces better results than the original DAGs approach without substantial
change in time complexity. We present empirical results, within the framework
of heuristic search and Markov Chain Monte Carlo, provided through the Alarm
dataset.
|
1301.2284 | Classifier Learning with Supervised Marginal Likelihood | cs.LG stat.ML | It has been argued that in supervised classification tasks, in practice it
may be more sensible to perform model selection with respect to some more
focused model selection score, like the supervised (conditional) marginal
likelihood, than with respect to the standard marginal likelihood criterion.
However, for most Bayesian network models, computing the supervised marginal
likelihood score takes exponential time with respect to the amount of observed
data. In this paper, we consider diagnostic Bayesian network classifiers where
the significant model parameters represent conditional distributions for the
class variable, given the values of the predictor variables, in which case the
supervised marginal likelihood can be computed in linear time with respect to
the data. As the number of model parameters grows in this case exponentially
with respect to the number of predictors, we focus on simple diagnostic models
where the number of relevant predictors is small, and suggest two approaches
for applying this type of model in classification. The first approach is based
on mixtures of simple diagnostic models, while in the second approach we apply
the small predictor sets of the simple diagnostic models for augmenting the
Naive Bayes classifier.
|
1301.2285 | Plausible reasoning from spatial observations | cs.AI | This article deals with plausible reasoning from incomplete knowledge about
large-scale spatial properties. The available information, consisting of a set
of pointwise observations, is extrapolated to neighbour points. We make use of
belief functions to represent the influence of the knowledge at a given point
on another point; the quantitative strength of this influence decreases as the
distance between the two points increases. These influences are then
aggregated using a variant of Dempster's rule of combination which takes into
account the relative dependence between observations.
|
1301.2286 | Iterative Markov Chain Monte Carlo Computation of Reference Priors and
Minimax Risk | cs.LG stat.ML | We present an iterative Markov chain Monte Carlo algorithm for
computing reference priors and minimax risk for general parametric families.
Our approach uses MCMC techniques based on the Blahut-Arimoto algorithm for
computing channel capacity in information theory. We give a statistical
analysis of the algorithm, bounding the number of samples required for the
stochastic algorithm to closely approximate the deterministic algorithm in
each iteration. Simulations are presented for several examples from
exponential families. Although we focus on applications to reference priors
and minimax risk, the methods and analysis we develop are applicable to a much
broader class of optimization problems and iterative algorithms.
|
1301.2287 | Hypothesis Management in Situation-Specific Network Construction | cs.AI | This paper considers the problem of knowledge-based model construction in the
presence of uncertainty about the association of domain entities to random
variables. Multi-entity Bayesian networks (MEBNs) are defined as a
representation for knowledge in domains characterized by uncertainty in the
number of relevant entities, their interrelationships, and their association
with observables. An MEBN implicitly specifies a probability distribution in
terms of a hierarchically structured collection of Bayesian network fragments
that together encode a joint probability distribution over arbitrarily many
interrelated hypotheses. Although a finite query-complete model can always be
constructed, association uncertainty typically makes exact model construction
and evaluation intractable. The objective of hypothesis management is to
balance tractability against accuracy. We describe an application to the
problem of using intelligence reports to infer the organization and activities
of groups of military vehicles. Our approach is compared to related work in the
tracking and fusion literature.
|
1301.2288 | Inference in Hybrid Networks: Theoretical Limits and Practical
Algorithms | cs.AI | An important subclass of hybrid Bayesian networks are those that represent
Conditional Linear Gaussian (CLG) distributions --- a distribution with a
multivariate Gaussian component for each instantiation of the discrete
variables. In this paper we explore the problem of inference in CLGs. We show
that inference in CLGs can be significantly harder than inference in Bayes
Nets. In particular, we prove that even if the CLG is restricted to an
extremely simple structure of a polytree in which every continuous node has at
most one discrete ancestor, the inference task is NP-hard. To deal with the
often prohibitive computational cost of the exact inference algorithm for CLGs,
we explore several approximate inference algorithms. These algorithms try to
find a small subset of Gaussians which are a good approximation to the full
mixture distribution. We consider two Monte Carlo approaches and a novel
approach that enumerates mixture components in order of prior probability. We
compare these methods on a variety of problems and show that our novel
algorithm is very promising for large, hybrid diagnosis problems.
|
1301.2289 | Exact Inference in Networks with Discrete Children of Continuous Parents | cs.AI | Many real life domains contain a mixture of discrete and continuous variables
and can be modeled as hybrid Bayesian Networks. An important subclass of hybrid
BNs are conditional linear Gaussian (CLG) networks, where the conditional
distribution of the continuous variables given an assignment to the discrete
variables is a multivariate Gaussian. Lauritzen's extension to the clique tree
algorithm can be used for exact inference in CLG networks. However, many
domains also include discrete variables that depend on continuous ones, and CLG
networks do not allow such dependencies to be represented. No exact inference
algorithm has been proposed for these enhanced CLG networks. In this paper, we
generalize Lauritzen's algorithm, providing the first "exact" inference
algorithm for augmented CLG networks - networks where continuous nodes are
conditional linear Gaussians but that also allow discrete children of continuous
parents. Our algorithm is exact in the sense that it computes the exact
distributions over the discrete nodes, and the exact first and second moments
of the continuous ones, up to the accuracy obtained by numerical integration
used within the algorithm. When the discrete children are modeled with softmax
CPDs (as is the case in many real world domains) the approximation of the
continuous distributions using the first two moments is particularly accurate.
Our algorithm is simple to implement and often comparable in its complexity to
Lauritzen's algorithm. We show empirically that it achieves substantially
higher accuracy than previous approximate algorithms.
|
1301.2290 | Probabilistic Logic Programming under Inheritance with Overriding | cs.AI | We present probabilistic logic programming under inheritance with overriding.
This approach is based on new notions of entailment for reasoning with
conditional constraints, which are obtained from the classical notion of
logical entailment by adding the principle of inheritance with overriding. This
is done by using recent approaches to probabilistic default reasoning with
conditional constraints. We analyze the semantic properties of the new
entailment relations. We also present algorithms for probabilistic logic
programming under inheritance with overriding, and program transformations for
increased efficiency.
|
1301.2291 | Solving Influence Diagrams using HUGIN, Shafer-Shenoy and Lazy
Propagation | cs.AI | In this paper we compare three different architectures for the evaluation of
influence diagrams: HUGIN, Shafer-Shenoy, and Lazy Evaluation architecture. The
computational complexity of the architectures is compared on the Limited
Memory Influence Diagram (LIMID): a diagram where only the requisite
information for the computation of the optimal policies is depicted. Because
the requisite information is explicitly represented in the LIMID, the
evaluation can take advantage of it, and significant computational savings can
be obtained. In this paper we show how the obtained savings are considerably
increased when the computations performed on the LIMID follow the Lazy
Evaluation scheme.
|
1301.2292 | A Bayesian Multiresolution Independence Test for Continuous Variables | cs.AI cs.LG | In this paper we present a method of computing the posterior probability
of conditional independence of two or more continuous variables from data,
examined at several resolutions. Our approach is motivated by the observation
that the appearance of continuous data varies widely at various resolutions,
producing very different independence estimates between the variables
involved. Therefore, it is difficult to ascertain independence without
examining data at several carefully selected resolutions. In our paper, we
accomplish this using the exact computation of the posterior probability of
independence, calculated analytically given a resolution. At each examined
resolution, we assume a multinomial distribution with Dirichlet priors for the
discretized table parameters, and compute the posterior using Bayesian
integration. Across resolutions, we use a search procedure to approximate the
Bayesian integral of probability over an exponential number of possible
histograms. Our method generalizes to an arbitrary number of variables in a
straightforward manner. The test is suitable for Bayesian network learning
algorithms that use independence tests to infer the network structure, in
domains that contain any mix of continuous, ordinal and categorical variables.
|
1301.2293 | Aggregating Learned Probabilistic Beliefs | cs.AI | We consider the task of aggregating beliefs of several experts. We assume
that these beliefs are represented as probability distributions. We argue that
the evaluation of any aggregation technique depends on the semantic context of
this task. We propose a framework, in which we assume that nature generates
samples from a `true' distribution and different experts form their beliefs
based on the subsets of the data they have a chance to observe. Naturally, the
ideal aggregate distribution would be the one learned from the combined sample
sets. Such a formulation leads to a natural way to measure the accuracy of the
aggregation mechanism. We show that the well-known aggregation operator LinOP
is ideally suited for that task. We propose a LinOP-based learning algorithm,
inspired by the techniques developed for Bayesian learning, which aggregates
the experts' distributions represented as Bayesian networks. Our preliminary
experiments show that this algorithm performs well in practice.
|
1301.2294 | Expectation Propagation for approximate Bayesian inference | cs.AI cs.LG | This paper presents a new deterministic approximation technique in Bayesian
networks. This method, "Expectation Propagation", unifies two previous
techniques: assumed-density filtering, an extension of the Kalman filter, and
loopy belief propagation, an extension of belief propagation in Bayesian
networks. All three algorithms try to recover an approximate distribution which
is close in KL divergence to the true distribution. Loopy belief propagation,
because it propagates exact belief states, is useful for a limited class of
belief networks, such as those which are purely discrete. Expectation
Propagation approximates the belief states by only retaining certain
expectations, such as mean and variance, and iterates until these expectations
are consistent throughout the network. This makes it applicable to hybrid
networks with discrete and continuous nodes. Expectation Propagation also
extends belief propagation in the opposite direction - it can propagate richer
belief states that incorporate correlations between nodes. Experiments with
Gaussian mixture models show Expectation Propagation to be convincingly better
than methods with similar computational cost: Laplace's method, variational
Bayes, and Monte Carlo. Expectation Propagation also provides an efficient
algorithm for training Bayes point machine classifiers.
|
1301.2295 | Recognition Networks for Approximate Inference in BN20 Networks | cs.AI | We propose using recognition networks for approximate inference in Bayesian
networks (BNs). A recognition network is a multilayer perceptron (MLP) trained
to predict posterior marginals given observed evidence in a particular BN. The
input to the MLP is a vector of the states of the evidential nodes. The
activity of an output unit is interpreted as a prediction of the posterior
marginal of the corresponding variable. The MLP is trained using samples
generated from the corresponding BN. We evaluate a recognition network that
was trained to do inference in a large Bayesian network, similar in structure
and complexity to the Quick Medical Reference, Decision Theoretic (QMR-DT).
Our network is a binary, two-layer, noisy-OR network containing over 4000
potentially observable nodes and over 600 unobservable, hidden nodes. In real
medical diagnosis, most observables are unavailable, and there is a complex
and unknown bias that selects which ones are provided. We incorporate a very
basic type of selection bias in our network: a known preference that available
observables are positive rather than negative. Even this simple bias has a
significant effect on the posterior. We compare the performance of our
recognition network to state-of-the-art approximate inference algorithms on a
large set of test cases. In order to evaluate the effect of our simplistic
model of the selection bias, we evaluate algorithms using a variety of
incorrectly modeled observation biases. Recognition networks perform well
using both correct and incorrect observation biases.
|
1301.2296 | The Factored Frontier Algorithm for Approximate Inference in DBNs | cs.AI | The Factored Frontier (FF) algorithm is a simple approximate inference
algorithm for Dynamic Bayesian Networks (DBNs). It is very similar to the
fully factorized version of the Boyen-Koller (BK) algorithm, but instead of
doing an exact update at every step followed by marginalisation (projection),
it always works with factored distributions. Hence it can be applied to models
for which the exact update step is intractable. We show that FF is equivalent
to (one iteration of) loopy belief propagation (LBP) on the original DBN, and
that BK is equivalent to (one iteration of) LBP on a DBN where we cluster some
of the nodes. We then show empirically that by iterating, LBP can improve on
the accuracy of both FF and BK. We compare these algorithms on two real-world
DBNs: the first is a model of a water treatment plant, and the second is a
coupled HMM, used to model freeway traffic.
|
1301.2297 | A Case Study in Knowledge Discovery and Elicitation in an Intelligent
Tutoring Application | cs.AI | Most successful Bayesian network (BN) applications to date have been built
through knowledge elicitation from experts. This is difficult and time
consuming, which has led to recent interest in automated methods for learning
BNs from data. We present a case study in the construction of a BN in an
intelligent tutoring application, specifically decimal misconceptions. We
describe the BN construction using expert elicitation and then investigate how
certain existing automated knowledge discovery methods might support the BN
knowledge engineering process.
|
1301.2298 | Lattice Particle Filters | cs.AI cs.CV | A standard approach to approximate inference in state-space models is to
apply a particle filter, e.g., the Condensation Algorithm. However, the
performance of particle filters often varies significantly due to their
stochastic nature. We present a class of algorithms, called lattice particle
filters, that circumvent this difficulty by placing the particles
deterministically according to a Quasi-Monte Carlo integration rule. We
describe a practical realization of this idea, discuss its theoretical
properties, and its efficiency. Experimental results with a synthetic 2D
tracking problem show that the lattice particle filter is equivalent to a
conventional particle filter that has between 10 and 60% more particles,
depending on their "sparsity" in the state-space. We also present results on
inferring 3D human motion from moving light displays.
|
1301.2299 | Approximating MAP using Local Search | cs.AI | MAP is the problem of finding a most probable instantiation of a set of
variables in a Bayesian network, given evidence. Unlike computing marginals,
posteriors, and MPE (a special case of MAP), the time and space complexity of
MAP is not only exponential in the network treewidth, but also in a larger
parameter known as the "constrained" treewidth. In practice, this means that
computing MAP can be orders of magnitude more expensive than
computing posteriors or MPE. Thus, practitioners generally avoid MAP
computations, resorting instead to approximating them by the most likely value
for each MAP variable separately, or by MPE. We present a method for
approximating MAP using local search. This method has space complexity which
is exponential only in the treewidth, as is the complexity of each search
step. We investigate the effectiveness of different local search methods and
several initialization strategies and compare them to other approximation
schemes. Experimental results show that local search provides a much more
accurate approximation of MAP, while requiring few search steps. Practically,
this means that the complexity of local search is often exponential only in
treewidth as opposed to the constrained treewidth, making approximating MAP as
efficient as other computations.
|
1301.2300 | Direct and Indirect Effects | cs.AI stat.ME | The direct effect of one event on another can be defined and measured by
holding constant all intermediate variables between the two. Indirect effects
present conceptual and practical difficulties (in nonlinear models), because
they cannot be isolated by holding certain variables constant. This paper
shows a way of defining any path-specific effect that does not invoke blocking
the remaining paths. This permits the assessment of a more natural type of
direct and indirect effects, one that is applicable in both linear and
nonlinear models. The paper establishes conditions under which such
assessments can be estimated consistently from experimental and
nonexperimental data, and thus extends path-analytic techniques to nonlinear
and nonparametric models.
|
1301.2301 | Sufficiency, Separability and Temporal Probabilistic Models | cs.AI | Suppose we are given the conditional probability of one variable given some
other variables. Normally the full joint distribution over the conditioning
variables is required to determine the probability of the conditioned
variable. Under what circumstances are the marginal distributions over the
conditioning variables sufficient to determine the probability of the
conditioned variable? Sufficiency in this sense is equivalent to additive
separability of the conditional probability distribution. Such separability
structure is natural and can be exploited for efficient inference.
Separability has a natural generalization to conditional separability.
Separability provides a precise notion of weakly interacting subsystems in
temporal probabilistic models. Given a system that is decomposed into
separable subsystems, exact marginal probabilities over subsystems at future
points in time can be computed by propagating marginal subsystem
probabilities, rather than complete system joint probabilities. Thus,
separability can make exact prediction tractable. However, observations can
break separability, so exact monitoring of dynamic systems remains hard.
|
1301.2302 | Toward General Analysis of Recursive Probability Models | cs.AI | There is increasing interest within the research community in the design and
use of recursive probability models. Although there still remains concern about
computational complexity costs and the fact that computing exact solutions can
be intractable for many nonrecursive models and impossible in the general case
for recursive problems, several research groups are actively developing
computational techniques for recursive stochastic languages. We have developed
an extension to the traditional lambda-calculus as a framework for families of
Turing complete stochastic languages. We have also developed a class of exact
inference algorithms based on the traditional reductions of the
lambda-calculus. We further propose that using the de Bruijn notation (a
lambda-calculus notation with nameless dummies) supports effective caching in
such systems (caching being an essential component of efficient computation).
Finally, our extension to the lambda-calculus offers a foundation and general
theory for the construction of recursive stochastic modeling languages as well
as promise for effective caching and efficient approximation algorithms for
inference.
|
1301.2303 | Probabilistic Models for Unified Collaborative and Content-Based
Recommendation in Sparse-Data Environments | cs.IR cs.LG stat.ML | Recommender systems leverage product and community information to target
products to consumers. Researchers have developed collaborative recommenders,
content-based recommenders, and (largely ad-hoc) hybrid systems. We propose a
unified probabilistic framework for merging collaborative and content-based
recommendations. We extend Hofmann's [1999] aspect model to incorporate
three-way co-occurrence data among users, items, and item content. The relative
influence of collaboration data versus content data is not imposed as an
exogenous parameter, but rather emerges naturally from the given data sources.
Global probabilistic models coupled with standard Expectation Maximization (EM)
learning algorithms tend to drastically overfit in sparse-data situations, as
is typical in recommendation applications. We show that secondary content
information can often be used to overcome sparsity. Experiments on data from
the ResearchIndex library of Computer Science publications show that
appropriate mixture models incorporating secondary data produce significantly
better quality recommenders than k-nearest neighbors (k-NN). Global
probabilistic models also allow more general inferences than local methods like
k-NN.
|
1301.2304 | Vector-space Analysis of Belief-state Approximation for POMDPs | cs.AI | We propose a new approach to value-directed belief state approximation for
POMDPs. The value-directed model allows one to choose approximation methods for
belief state monitoring that have a small impact on decision quality. Using a
vector space analysis of the problem, we devise two new search procedures for
selecting an approximation scheme that have much better computational
properties than existing methods. Though these provide looser error bounds, we
show empirically that they have a similar impact on decision quality in
practice, and run up to two orders of magnitude more quickly.
|
1301.2305 | Value-Directed Sampling Methods for POMDPs | cs.AI | We consider the problem of approximate belief-state monitoring using particle
filtering for the purposes of implementing a policy for a partially-observable
Markov decision process (POMDP). While particle filtering has become a
widely-used tool in AI for monitoring dynamical systems, rather scant attention
has been paid to its use in the context of decision making. Assuming the
existence of a value function, we derive error bounds on decision quality
associated with filtering using importance sampling. We also describe an
adaptive procedure that can be used to dynamically determine the number of
samples required to meet specific error bounds. Empirical evidence is offered
supporting this technique as a profitable means of directing sampling effort
where it is needed to distinguish policies.
|
1301.2306 | A Mixed Graphical Model for Rhythmic Parsing | cs.AI cs.SD | A method is presented for the rhythmic parsing problem: Given a sequence of
observed musical note onset times, we estimate the corresponding notated rhythm
and tempo process. A graphical model is developed that represents the
simultaneous evolution of tempo and rhythm and relates these hidden quantities
to observations. The rhythm variables are discrete and the tempo and
observation variables are continuous. We show how to compute the globally most
likely configuration of the tempo and rhythm variables given an observation of
note onset times. Preliminary experiments are presented on a small data set. A
generalization to arbitrary conditional Gaussian distributions is outlined.
|
1301.2307 | Decision-Theoretic Planning with Concurrent Temporally Extended Actions | cs.AI | We investigate a model for planning under uncertainty with temporally
extended actions, where multiple actions can be taken concurrently at each
decision epoch. Our model is based on the options framework, and combines it
with factored state space models, where the set of options can be partitioned
into classes that affect disjoint state variables. We show that the set of
decision epochs for concurrent options defines a semi-Markov decision process,
if the underlying temporally extended actions being parallelized are
restricted to Markov options. This property allows us to use SMDP algorithms
for computing the value function over concurrent options. The concurrent
options model allows overlapping execution of options in order to achieve
higher performance or in order to perform a complex task. We describe a simple
experiment using a navigation task which illustrates how concurrent options
result in a faster plan when compared to the case when only one option is
taken at a time.
|
1301.2308 | A Tractable POMDP for a Class of Sequencing Problems | cs.AI | We consider a partially observable Markov decision problem (POMDP) that
models a class of sequencing problems. Although POMDPs are typically
intractable, our formulation admits tractable solution. Instead of maintaining
a value function over a high-dimensional set of belief states, we reduce the
state space to one of smaller dimension, in which grid-based dynamic
programming techniques are effective. We develop an error bound for the
resulting approximation, and discuss an application of the model to a problem
in targeted advertising.
|
1301.2309 | Symmetric Collaborative Filtering Using the Noisy Sensor Model | cs.IR cs.LG | Collaborative filtering is the process of making recommendations regarding
the potential preference of a user, for example shopping on the Internet,
based on the preference ratings of the user and a number of other users for
various items. This paper considers collaborative filtering based on explicit
multi-valued ratings. To evaluate the algorithms, we consider only pure
collaborative filtering, using ratings exclusively, and no other information
about the people or items. Our approach is to predict a user's preferences
regarding a particular item by using other people who rated that item and
other items rated by the user as noisy sensors. The noisy sensor model uses
Bayes' theorem to compute the probability distribution for the user's rating
of a new item. We give two variant models: in one, we learn a classical
normal linear regression model of how users rate items; in another, we assume
different users rate items the same, but the accuracy of the sensors needs to
be learned. We compare these variant models with state-of-the-art techniques
and show how they are significantly better, whether a user has rated only two
items or many. We report empirical results using the EachMovie database
(http://research.compaq.com/SRC/eachmovie/) of movie ratings. We also show
that by considering item similarity along with user similarity, the accuracy
of the prediction increases.
|
1301.2310 | Policy Improvement for POMDPs Using Normalized Importance Sampling | cs.AI cs.LG | We present a new method for estimating the expected return of a POMDP from
experience. The method does not assume any knowledge of the POMDP and allows
the experience to be gathered from an arbitrary sequence of policies. The
return is estimated for any new policy of the POMDP. We motivate the estimator
from function-approximation and importance sampling points-of-view and derive
its theoretical properties. Although the estimator is biased, it has low
variance and the bias is often irrelevant when the estimator is used for
pair-wise comparisons. We conclude by extending the estimator to policies with
memory and compare its performance in a greedy search algorithm to REINFORCE
algorithms showing an order of magnitude reduction in the number of trials
required.
|
1301.2311 | Maximum Likelihood Bounded Tree-Width Markov Networks | cs.LG cs.AI stat.ML | Chow and Liu (1968) studied the problem of learning a maximum likelihood
Markov tree. We generalize their work to more complex Markov networks by
considering the problem of learning a maximum likelihood Markov network of
bounded complexity. We discuss how tree-width is in many ways the appropriate
measure of complexity and thus analyze the problem of learning a maximum
likelihood Markov network of bounded tree-width. Similar to the work of Chow
and Liu, we are able to formalize the learning problem as a combinatorial
optimization problem on graphs. We show that learning a maximum likelihood
Markov network of bounded tree-width is equivalent to finding a maximum weight
hypertree. This equivalence gives rise to global, integer-programming based,
approximation algorithms with provable performance guarantees for the learning
problem. This contrasts with heuristic local-search algorithms which were
previously suggested (e.g. by Malvestuto 1991). The equivalence also allows us
to study the computational hardness of the learning problem. We show that
learning a maximum likelihood Markov network of bounded tree-width is NP-hard,
and discuss the hardness of approximation.
|
1301.2312 | Causal Discovery from Changes | cs.AI | We propose a new method of discovering causal structures, based on the
detection of local, spontaneous changes in the underlying data-generating
model. We analyze the classes of structures that are equivalent relative to a
stream of distributions produced by local changes, and devise algorithms that
output graphical representations of these equivalence classes. We present
experimental results, using simulated data, and examine the errors associated
with detection of changes and recovery of structures.
|
1301.2313 | Bayesian Error-Bars for Belief Net Inference | cs.AI | A Bayesian Belief Network (BN) is a model of a joint distribution over a set
of n variables, with a DAG structure to represent the immediate dependencies
between the variables, and a set of parameters (aka CPTables) to represent
the local conditional probabilities of a node, given each assignment to its
parents. In many situations, these parameters are themselves random variables
- this may reflect the uncertainty of the domain expert, or may come from a
training sample used to estimate the parameter values. The distribution over
these "CPtable variables" induces a distribution over the response the BN
will return to any "What is Pr(H | E)?" query. This paper investigates the
variance of this response, showing first that it is asymptotically normal,
then providing its mean and asymptotic variance. We then present an effective
general algorithm for computing this variance, which has the same complexity
as simply computing the (mean value of) the response itself - i.e., O(n 2^w),
where n is the number of variables and w is the effective treewidth. Finally,
we provide empirical evidence that this algorithm, which incorporates
assumptions and approximations, works effectively in practice, given only
small samples.
|
1301.2314 | Analysing Sensitivity Data from Probabilistic Networks | cs.AI | With the advance of efficient analytical methods for sensitivity analysis of
probabilistic networks, the interest in the sensitivities revealed by
real-life networks is rekindled. As the amount of data resulting from a
sensitivity analysis of even a moderately-sized network is already
overwhelming, methods for extracting relevant information are called for. One
such method is to study the derivative of the sensitivity functions yielded
for a network's parameters. We further propose to build upon the concept of
admissible deviation, that is, the extent to which a parameter can deviate
from the true value without inducing a change in the most likely outcome. We
illustrate these concepts by means of a sensitivity analysis of a real-life
probabilistic network in oncology.
|
1301.2315 | The Optimal Reward Baseline for Gradient-Based Reinforcement Learning | cs.LG cs.AI stat.ML | There exist a number of reinforcement learning algorithms which learn by
climbing the gradient of expected reward. Their long-run convergence has been
proved, even in partially observable environments with non-deterministic
actions, and without the need for a system model. However, the variance of the
gradient estimator has been found to be a significant practical problem. Recent
approaches have discounted future rewards, introducing a bias-variance
trade-off into the gradient estimate. We incorporate a reward baseline into
the learning system, and show that it affects variance without introducing
further bias. In particular, as we approach the zero-bias, high-variance
parameterization, the optimal (or variance minimizing) constant reward
baseline is equal to the long-term average expected reward. Modified
policy-gradient algorithms are presented, and a number of experiments
demonstrate their improvement over previous work.
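The variance effect of a constant reward baseline can be seen on a toy problem. The sketch below (a hypothetical two-armed bandit with reward means chosen for illustration; not the paper's experiments) compares the empirical variance of a REINFORCE-style gradient estimate with a zero baseline against one set to the long-term average expected reward:

```python
import math
import random
import statistics

random.seed(0)

# Two-armed bandit with a sigmoid policy: pi(arm 1) = sigmoid(theta).
# One-sample REINFORCE gradient estimate for theta, with baseline b:
#   g = (r - b) * d/dtheta log pi(a)
def grad_estimates(theta, baseline, trials=20000):
    p1 = 1.0 / (1.0 + math.exp(-theta))      # probability of pulling arm 1
    ests = []
    for _ in range(trials):
        a = 1 if random.random() < p1 else 0
        # Arm 1 pays mean reward 1, arm 0 pays mean 0 (unit-variance noise).
        r = random.gauss(1.0 if a == 1 else 0.0, 1.0)
        dlogpi = (1.0 - p1) if a == 1 else -p1   # d/dtheta log pi(a)
        ests.append((r - baseline) * dlogpi)
    return ests

# At theta = 0 the policy is uniform, so the long-term average expected
# reward is 0.5; using it as the baseline shrinks the estimator's variance.
var_zero = statistics.pvariance(grad_estimates(0.0, 0.0))
var_avg = statistics.pvariance(grad_estimates(0.0, 0.5))
print(var_avg < var_zero)  # → True
```

Both estimators have the same mean (the baseline adds no bias); only their spread differs.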
|
1301.2316 | Cross-covariance modelling via DAGs with hidden variables | cs.LG stat.ML | DAG models with hidden variables present many difficulties that are not
present when all nodes are observed. In particular, fully observed DAG models
are identified and correspond to well-defined sets of distributions, whereas
this is not true if nodes are unobserved. In this paper we characterize exactly
the set of distributions given by a class of one-dimensional Gaussian latent
variable models. These models relate two blocks of observed variables, modeling
only the cross-covariance matrix. We describe the relation of this model to the
singular value decomposition of the cross-covariance matrix. We show that,
although the model is underidentified, useful information may be extracted. We
further consider an alternative parametrization in which one latent variable is
associated with each block. Our analysis leads to some novel covariance
equivalence results for Gaussian hidden variable models.
|
1301.2317 | Belief Optimization for Binary Networks: A Stable Alternative to Loopy
Belief Propagation | cs.AI cs.LG | We present a novel inference algorithm for arbitrary, binary, undirected
graphs. Unlike loopy belief propagation, which iterates fixed point equations,
we directly descend on the Bethe free energy. The algorithm consists of two
phases, first we update the pairwise probabilities, given the marginal
probabilities at each unit, using an analytic expression. Next, we update the
marginal probabilities, given the pairwise probabilities by following the
negative gradient of the Bethe free energy. Both steps are guaranteed to
decrease the Bethe free energy, and since it is lower bounded, the algorithm is
guaranteed to converge to a local minimum. We also show that the Bethe free
energy is equal to the TAP free energy up to second order in the weights. In
experiments we confirm that when belief propagation converges it usually finds
identical solutions as our belief optimization method. However, in cases where
belief propagation fails to converge, belief optimization continues to converge
to reasonable beliefs. The stable nature of belief optimization makes it
ideally suited for learning graphical models from data.
|
1301.2318 | Statistical Modeling in Continuous Speech Recognition (CSR)(Invited
Talk) | cs.LG cs.AI stat.ML | Automatic continuous speech recognition (CSR) is sufficiently mature that a
variety of real world applications are now possible including large vocabulary
transcription and interactive spoken dialogues. This paper reviews the
evolution of the statistical modelling techniques which underlie current-day
systems, specifically hidden Markov models (HMMs) and N-grams. Starting from a
description of the speech signal and its parameterisation, the various
modelling assumptions and their consequences are discussed. It then describes
various techniques by which the effects of these assumptions can be mitigated.
Despite the progress that has been made, the limitations of current modelling
techniques are still evident. The paper therefore concludes with a brief review
of some of the more fundamental modelling work now in progress.
|
1301.2319 | Planning and Acting under Uncertainty: A New Model for Spoken Dialogue
Systems | cs.AI | Uncertainty plays a central role in spoken dialogue systems. Some stochastic
models like Markov decision process (MDP) are used to model the dialogue
manager. But the partially observable system state and user intention hinder
the natural representation of the dialogue state. An MDP-based system degrades
quickly when uncertainty about a user's intention increases. We propose a novel
dialogue model based on the partially observable Markov decision process
(POMDP). We use hidden system states and user intentions as the state set,
parser results and low-level information as the observation set, domain actions
and dialogue repair actions as the action set. Here the low-level information
is extracted from different input modalities, including speech, keyboard, mouse,
etc., using Bayesian networks. Because of the limitation of the exact
algorithms, we focus on heuristic approximation algorithms and their
applicability in POMDP for dialogue management. We also propose two methods for
grid point selection in grid-based approximation algorithms.
|
1301.2320 | Using Temporal Data for Making Recommendations | cs.IR cs.AI cs.LG | We treat collaborative filtering as a univariate time series estimation
problem: given a user's previous votes, predict the next vote. We describe two
families of methods for transforming data to encode time order in ways amenable
to off-the-shelf classification and density estimation tools, and examine the
results of using these approaches on several real-world data sets. The
improvements in predictive accuracy we realize recommend the use of other
predictive algorithms that exploit the temporal order of data.
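A minimal sketch of the encoding idea: each user's vote history is turned into sliding-window (previous k votes → next vote) examples, which any off-the-shelf classifier or density estimator could consume. Here a trivial per-context majority predictor stands in for the classifier, and the vote histories are hypothetical:

```python
from collections import Counter, defaultdict

def windows(seq, k):
    """Encode a vote sequence as (previous-k-votes, next-vote) pairs."""
    return [(tuple(seq[i - k:i]), seq[i]) for i in range(k, len(seq))]

def fit(sequences, k):
    """Count next-vote frequencies for each length-k context."""
    table = defaultdict(Counter)
    for seq in sequences:
        for ctx, nxt in windows(seq, k):
            table[ctx][nxt] += 1
    return table

def predict(table, recent):
    """Majority vote for the context; None if the context was never seen."""
    ctx = tuple(recent)
    if ctx in table:
        return table[ctx].most_common(1)[0][0]
    return None

# Hypothetical per-user vote histories (e.g. ratings on a 1-5 scale).
train = [[3, 4, 5, 3, 4, 5, 3, 4], [1, 2, 1, 2, 1, 2]]
model = fit(train, k=2)
print(predict(model, [3, 4]))  # → 5
```

The same `windows` transform could feed a real classifier; the time order of the votes is what the encoding preserves.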
|
1301.2335 | New digital signature protocol based on elliptic curves | cs.CR cs.IT math.IT | In this work, a new digital signature based on elliptic curves is presented.
We establish its efficiency and security. The method, derived from a variant
of the ElGamal signature scheme, can be seen as a secure alternative protocol if
known systems are completely broken.
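The abstract gives no protocol details, so the following is only a hedged illustration of the general shape of an elliptic-curve signature in the ElGamal family (ECDSA-style), on a textbook-sized curve y² = x³ + 2x + 2 over F₁₇ with group order 19; real deployments use curves with roughly 256-bit parameters:

```python
# Toy curve parameters: y^2 = x^3 + 2x + 2 over F_17, cyclic group of
# prime order n = 19 generated by G = (5, 1). Illustration only.
p, a, n, G = 17, 2, 19, (5, 1)

def add(P, Q):
    """Point addition; None plays the role of the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None
    if P == Q:
        lam = (3 * P[0] * P[0] + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P); k >>= 1
    return R

def sign(d, h, k):
    """Sign message hash h with private key d and per-signature nonce k."""
    r = mul(k, G)[0] % n
    s = pow(k, -1, n) * (h + d * r) % n
    return (r, s)

def verify(Q, h, sig):
    r, s = sig
    w = pow(s, -1, n)
    P = add(mul(h * w % n, G), mul(r * w % n, Q))
    return P is not None and P[0] % n == r

d = 7                       # private key (hypothetical)
Q = mul(d, G)               # public key
sig = sign(d, h=10, k=5)    # h stands in for a hash of the message
print(verify(Q, 10, sig))   # → True
```

A tampered hash fails verification, which is the property any such scheme must provide.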
|
1301.2342 | A Linear Time Algorithm for the Feasibility of Pebble Motion on Graphs | cs.DS cs.RO | Given a connected, undirected, simple graph $G = (V, E)$ and $p \le |V|$
pebbles labeled $1,..., p$, a configuration of these $p$ pebbles is an
injective map assigning the pebbles to vertices of $G$. Let $S$ and $D$ be two
such configurations. From a configuration, pebbles can move on $G$ as follows:
In each step, at most one pebble may move from the vertex it currently occupies
to an adjacent unoccupied vertex, yielding a new configuration. A natural
question in this setting is the following: Is configuration $D$ reachable from
$S$ and if so, how? We show that the feasibility of this problem can be decided
in time $O(|V| + |E|)$.
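The paper's linear-time feasibility test is not reproduced here; as a baseline for intuition, a brute-force BFS over whole configurations decides the same reachability question on tiny graphs, at exponential cost:

```python
from collections import deque

def reachable(adj, S, D):
    """Brute-force BFS over pebble configurations (exponential in p) --
    a sanity-check baseline, not the paper's O(|V|+|E|) algorithm."""
    S, D = tuple(S), tuple(D)
    seen, queue = {S}, deque([S])
    while queue:
        cfg = queue.popleft()
        if cfg == D:
            return True
        occupied = set(cfg)
        for i, v in enumerate(cfg):           # move pebble i+1 from v ...
            for u in adj[v]:                  # ... to an adjacent vertex u
                if u not in occupied:         # only if u is unoccupied
                    nxt = cfg[:i] + (u,) + cfg[i + 1:]
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append(nxt)
    return False

# Path graph 0-1-2-3 with two pebbles: shifting both right is feasible,
# but swapping them is not (pebbles cannot pass each other on a path).
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(reachable(path, (0, 1), (2, 3)))  # → True
print(reachable(path, (0, 1), (1, 0)))  # → False
```

The swap example is exactly the kind of infeasible instance a constant-time certificate from the paper's algorithm would reject without search.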
|
1301.2343 | Planning by Prioritized Sweeping with Small Backups | cs.AI cs.LG | Efficient planning plays a crucial role in model-based reinforcement
learning. Traditionally, the main planning operation is a full backup based on
the current estimates of the successor states. Consequently, its computation
time is proportional to the number of successor states. In this paper, we
introduce a new planning backup that uses only the current value of a single
successor state and has a computation time independent of the number of
successor states. This new backup, which we call a small backup, opens the door
to a new class of model-based reinforcement learning methods that exhibit much
finer control over their planning process than traditional methods. We
empirically demonstrate that this increased flexibility allows for more
efficient planning by showing that an implementation of prioritized sweeping
based on small backups achieves a substantial performance improvement over
classical implementations.
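A minimal sketch of the contrast (toy MDP and bookkeeping of my own devising, not taken from the paper): a full backup recomputes the expectation over all successors, while a small backup folds in the changed value of a single successor in O(1) by remembering which value it last used:

```python
gamma = 0.9

def full_backup(P, R, V, s, a):
    """Classical full backup: cost grows with the number of successors."""
    return sum(prob * (R[(s, a, s2)] + gamma * V[s2])
               for s2, prob in P[(s, a)].items())

def small_backup(Q, used, P, R, V, s, a, s2):
    """Incorporate only successor s2's current value; O(1) per call."""
    prob = P[(s, a)][s2]
    Q[(s, a)] += prob * gamma * (V[s2] - used[(s, a, s2)])
    used[(s, a, s2)] = V[s2]   # remember the value just incorporated

# Tiny MDP: state 0, action 'a', successors 1 and 2 with equal probability.
P = {(0, 'a'): {1: 0.5, 2: 0.5}}
R = {(0, 'a', 1): 1.0, (0, 'a', 2): 0.0}
V = {1: 0.0, 2: 0.0}
Q = {(0, 'a'): full_backup(P, R, V, 0, 'a')}
used = {(0, 'a', 1): V[1], (0, 'a', 2): V[2]}

V[1] = 2.0                                   # one successor's value changes
small_backup(Q, used, P, R, V, 0, 'a', 1)    # O(1) incremental update
print(abs(Q[(0, 'a')] - full_backup(P, R, V, 0, 'a')) < 1e-9)  # → True
```

Because each small backup touches one successor, a prioritized-sweeping planner can schedule exactly the updates whose priorities warrant them, which is the finer control the abstract refers to.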
|
1301.2351 | Application of Hopfield Network to Saccades | cs.CV q-bio.NC | Human eye movement mechanisms (saccades) are very useful for scene analysis,
including object representation and pattern recognition. In this letter, a
Hopfield neural network to emulate saccades is proposed. The network uses an
energy function that includes location and identification tasks. Computer
simulation shows that the network performs those tasks cooperatively. The
result suggests that the network is applicable to shift-invariant pattern
recognition.
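The letter's saccade-specific energy terms (location plus identification) are not spelled out in the abstract; the sketch below shows only the generic Hopfield machinery it builds on: Hebbian weights, sequential updates, and energy descent during recall of a stored pattern:

```python
def train(patterns):
    """Hebbian weights for a Hopfield network over +/-1 patterns."""
    m = len(patterns[0])
    W = [[0.0] * m for _ in range(m)]
    for pat in patterns:
        for i in range(m):
            for j in range(m):
                if i != j:
                    W[i][j] += pat[i] * pat[j] / len(patterns)
    return W

def energy(W, s):
    """Hopfield energy; each single-unit update never increases it."""
    m = len(s)
    return -0.5 * sum(W[i][j] * s[i] * s[j]
                      for i in range(m) for j in range(m))

def recall(W, s, sweeps=5):
    """Deterministic sequential (asynchronous) update sweeps."""
    s = list(s)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

stored = [1, 1, 1, 1, -1, -1, -1, -1]
W = train([stored])
noisy = [1, 1, 1, -1, -1, -1, -1, -1]        # one bit corrupted
print(recall(W, noisy) == stored)            # → True
print(energy(W, noisy) > energy(W, stored))  # → True (recall descends)
```

In the letter's setting, the energy function would additionally encode the location and identification tasks, so that minimization performs both cooperatively.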
|