| id | title | categories | abstract |
|---|---|---|---|
1207.3742
|
Communities in Affiliation Networks with Attitudinal Actors
|
cs.SI physics.soc-ph
|
Our aim here is to argue for the significance of cultural considerations of
overlapping inter-attitudinal patterns alongside well-established structural
considerations of interorganizational networks based on overlapping membership
patterns. In particular, we examine how the methodological incorporation of
cultural attributes or attitudes into analytical sociology might enhance our
understanding of structural community categorizations in interorganizational
networks. For this purpose, we analyze data from the International Peace
Protest Survey (IPPS) on the worldwide peace protests of February 15, 2003, in
order to demonstrate the added value offered by considering the
culture-structure duality in participation studies.
|
1207.3745
|
Influence of opinion dynamics on the evolution of games
|
physics.soc-ph cond-mat.stat-mech cs.SI q-bio.PE
|
Under certain circumstances, such as lack of information or bounded
rationality, human players can decide which strategy to choose in a game on
the basis of simple opinions. These opinions can be modified after each round
by observing one's own or others' payoff results, but can also be modified
after exchanging impressions with other players. In this way, the update of
the strategies can become a question that goes beyond simple evolutionary
rules based on fitness and become a social issue. In this work, we explore
this scenario by coupling a game with an opinion dynamics model. The opinion
is represented by a continuous variable that corresponds to the agents'
certainty about which strategy is best. Opinions are transformed into actions
by making the selection of a strategy a stochastic event with a probability
regulated by the opinion. A certain regard for the previous round's payoff is
included, but the main update rules of the opinion are given by a model
inspired by social exchanges. We find that the fixed points of the coupled
model's dynamics differ from those of the evolutionary game or the opinion
model alone. Furthermore, new features emerge, such as the resilience of the
fraction of cooperators to the topology of the social interaction network or
to the presence of a small fraction of extremist players.
|
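The coupling described above can be sketched as follows. This is a minimal toy model under assumed parameters (payoff values, averaging rate `mu`, payoff-nudge rate `eta`), not the paper's exact rules: each agent's opinion in [0, 1] is the probability of cooperating in a Prisoner's Dilemma round, and opinions are updated by social averaging plus a small nudge toward the better-paying action.

```python
import random

# Standard Prisoner's Dilemma payoffs (assumed values for illustration).
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def payoff(a, b):
    """Row player's payoff for actions a, b (1 = cooperate, 0 = defect)."""
    return [[P, T], [S, R]][a][b]

def step(opinions, mu=0.3, eta=0.05, rng=random):
    """One round: random pairing, stochastic action choice, opinion update."""
    n = len(opinions)
    order = list(range(n))
    rng.shuffle(order)
    for i, j in zip(order[::2], order[1::2]):
        ai = int(rng.random() < opinions[i])  # opinion -> stochastic action
        aj = int(rng.random() < opinions[j])
        pi, pj = payoff(ai, aj), payoff(aj, ai)
        # Social exchange: partial averaging of the two opinions.
        oi, oj = opinions[i], opinions[j]
        opinions[i] += mu * (oj - oi)
        opinions[j] += mu * (oi - oj)
        # Payoff regard: nudge both opinions toward the better-paying action.
        if pi != pj:
            better = ai if pi > pj else aj
            for k in (i, j):
                opinions[k] = min(1.0, max(0.0,
                    opinions[k] + eta * (better - opinions[k])))

random.seed(1)
ops = [random.random() for _ in range(100)]
for _ in range(200):
    step(ops)
print(sum(ops) / len(ops))  # average certainty toward cooperation
```

A network topology could be added by restricting the random pairing to neighbors on a graph.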
1207.3749
|
Preliminary Design of Debris Removal Missions by Means of Simplified
Models for Low-Thrust, Many-Revolution Transfers
|
math.OC cs.NE
|
This paper presents a novel approach for the preliminary design of
low-thrust, many-revolution transfers. The main feature of the novel approach
is a considerable reduction in the control parameters and a consequent gain in
computational speed. Each spiral is built by using a predefined pattern for
thrust direction and switching structure. The pattern is then optimised to
minimise propellant consumption and transfer time. The variation of the orbital
elements due to the thrust is computed analytically from a first-order solution
of the perturbed Keplerian motion. The proposed approach allows for a realistic
estimation of {\Delta}V and time of flight required to transfer a spacecraft
between two arbitrary orbits. Eccentricity and plane changes are both accounted
for. The novel approach is applied here to the design of missions for the
removal of space debris by means of an Ion Beam Shepherd Spacecraft. In
particular, two slightly different variants of the proposed low-thrust control
model are used for the different phases of the mission. Thanks to their low
computational cost they can be included in a multiobjective optimisation
problem in which the sequence and timing of the removal of five pieces of
debris are optimised to minimise propellant consumption and mission duration.
|
1207.3760
|
Towards a Self-Organized Agent-Based Simulation Model for Exploration of
Human Synaptic Connections
|
cs.NE cs.AI cs.LG nlin.AO
|
In this paper, we present the early design of our self-organized agent-based
simulation model for the exploration of synaptic connections that faithfully
generates what is observed in natural situations. While we take inspiration
from neuroscience, our intent is not to create a veridical model of processes
in neurodevelopmental biology, nor to represent a real biological system.
Instead, our goal is to design a simulation model that learns to act in the
same way as the human nervous system, using findings from reflex-methodology
studies on human subjects in order to estimate unknown connections.
|
1207.3772
|
Surrogate Losses in Passive and Active Learning
|
math.ST cs.LG stat.ML stat.TH
|
Active learning is a type of sequential design for supervised machine
learning, in which the learning algorithm sequentially requests the labels of
selected instances from a large pool of unlabeled data points. The objective is
to produce a classifier of relatively low risk, as measured under the 0-1 loss,
ideally using fewer label requests than the number of random labeled data
points sufficient to achieve the same. This work investigates the potential
uses of surrogate loss functions in the context of active learning.
Specifically, it presents an active learning algorithm based on an arbitrary
classification-calibrated surrogate loss function, along with an analysis of
the number of label requests sufficient for the classifier returned by the
algorithm to achieve a given risk under the 0-1 loss. Interestingly, these
results cannot be obtained by simply optimizing the surrogate risk via active
learning to an extent sufficient to provide a guarantee on the 0-1 loss, as is
common practice in the analysis of surrogate losses for passive learning. Some
of the results have additional implications for the use of surrogate losses in
passive learning.
|
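The distinction between a surrogate risk and the 0-1 risk can be illustrated concretely. The sketch below is not the paper's algorithm; it is a toy 1-D threshold problem (all data and parameters assumed) comparing the empirical 0-1 risk with two classification-calibrated surrogates, hinge and logistic, over a grid of candidate thresholds.

```python
import math
import random

# Toy data: label is the sign of x plus Gaussian noise, so the Bayes
# threshold is 0.
random.seed(0)
data = []
for _ in range(400):
    x = random.uniform(-2, 2)
    y = 1 if x + random.gauss(0, 0.3) > 0 else -1
    data.append((x, y))

def risks(theta):
    """Empirical 0-1, hinge, and logistic risks of sign(x - theta)."""
    zero_one = hinge = logistic = 0.0
    for x, y in data:
        margin = y * (x - theta)          # positive margin = correct
        zero_one += (margin <= 0)
        hinge += max(0.0, 1.0 - margin)
        logistic += math.log1p(math.exp(-margin))
    n = len(data)
    return zero_one / n, hinge / n, logistic / n

grid = [i / 50 - 1.0 for i in range(101)]  # thresholds in [-1, 1]
best = {name: min(grid, key=lambda t: risks(t)[k])
        for k, name in enumerate(["0-1", "hinge", "logistic"])}
print(best)
```

On this calibrated toy problem all three minimizers should land near the Bayes threshold; the paper's point is that such agreement cannot be taken for granted in the active learning setting.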
1207.3790
|
Accuracy Measures for the Comparison of Classifiers
|
cs.LG
|
The selection of the best classification algorithm for a given dataset is a
very widespread problem. It is also a complex one, in the sense that it
requires several important methodological choices. Among them, in this work we
focus on the measure used to assess classification performance and rank the
algorithms. We present the most popular measures and discuss their properties.
Despite the numerous measures proposed over the years, many of them turn out
to be equivalent in this specific case, to have interpretation problems, or to
be unsuitable for our purpose. Consequently, the classic overall success rate
or marginal rates should be preferred for this specific task.
|
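The two measures recommended above can be computed directly from a confusion matrix; a minimal sketch (the toy matrix is invented for illustration):

```python
# C[i][j] counts instances of true class i predicted as class j.
def overall_success_rate(C):
    """Fraction of all instances on the diagonal (correctly classified)."""
    total = sum(sum(row) for row in C)
    return sum(C[i][i] for i in range(len(C))) / total

def marginal_rates(C):
    """Per-class recall (row-wise) and precision (column-wise)."""
    k = len(C)
    recall = [C[i][i] / sum(C[i]) for i in range(k)]
    precision = [C[i][i] / sum(C[j][i] for j in range(k)) for i in range(k)]
    return recall, precision

C = [[50, 10], [5, 35]]          # toy 2-class confusion matrix
print(overall_success_rate(C))   # 0.85
rec, prec = marginal_rates(C)
print(rec, prec)
```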
1207.3809
|
Image Labeling on a Network: Using Social-Network Metadata for Image
Classification
|
cs.CV cs.SI physics.soc-ph
|
Large-scale image retrieval benchmarks invariably consist of images from the
Web. Many of these benchmarks are derived from online photo sharing networks,
like Flickr, which in addition to hosting images also provide a highly
interactive social community. Such communities generate rich metadata that can
naturally be harnessed for image classification and retrieval. Here we study
four popular benchmark datasets, extending them with social-network metadata,
such as the groups to which each image belongs, the comment thread associated
with the image, who uploaded it, their location, and their network of friends.
Since these types of data are inherently relational, we propose a model that
explicitly accounts for the interdependencies between images sharing common
properties. We model the task as a binary labeling problem on a network, and
use structured learning techniques to learn model parameters. We find that
social-network metadata are useful in a variety of classification tasks, in
many cases outperforming methods based on image content.
|
1207.3837
|
How Random are Online Social Interactions?
|
cs.CY cs.SI physics.soc-ph
|
The massive amount of data that social media generates has facilitated the
study of online human behavior on a scale unimaginable a few years ago. At the
same time, the much-discussed apparent randomness with which people interact
online makes it appear as if these studies cannot reveal predictive social
behaviors that could be used for developing better platforms and services. We
use two large social databases to measure the mutual information entropy that
both individual and group actions generate as they evolve over time. We show
that users' interaction sequences have strong deterministic components, in
contrast with existing assumptions and models. In addition, we show that
individual interactions are more predictable when users act on their own
rather than when attending group activities.
|
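The core idea, that interaction sequences are less random than they appear, can be illustrated with a toy entropy calculation (not the paper's estimator or data): compare the marginal entropy of a symbolic action sequence with its first-order conditional entropy. A large drop indicates a strong deterministic component.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (bits) of an empirical distribution."""
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def conditional_entropy(seq):
    """H(X_t | X_{t-1}) estimated from bigram counts."""
    bigrams = Counter(zip(seq, seq[1:]))
    prev = Counter(seq[:-1])
    total = len(seq) - 1
    h = 0.0
    for (a, b), c in bigrams.items():
        p_ab = c / total            # joint probability of the bigram
        p_b_given_a = c / prev[a]   # transition probability
        h -= p_ab * math.log2(p_b_given_a)
    return h

seq = list("abababababcabababababc")  # highly regular action sequence
print(entropy(Counter(seq)))          # marginal entropy
print(conditional_entropy(seq))       # much lower: strong determinism
```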
1207.3850
|
On Capacity and Optimal Scheduling for the Half-Duplex Multiple-Relay
Channel
|
cs.IT math.IT
|
We study the half-duplex multiple-relay channel (HD-MRC) where every node can
either transmit or listen but cannot do both at the same time. We obtain a
capacity upper bound based on a max-flow min-cut argument and achievable
transmission rates based on the decode-forward (DF) coding strategy, for both
the discrete memoryless HD-MRC and the phase-fading HD-MRC. We discover that
both the upper bound and the achievable rates are functions of the
transmit/listen state (a description of which nodes transmit and which
receive). More precisely, they are functions of the time fraction of the
different states, which we term a schedule. We formulate the optimal scheduling
problem to find an optimal schedule that maximizes the DF rate. The optimal
scheduling problem turns out to be a maximin optimization, for which we propose
an algorithmic solution. We demonstrate our approach on a four-node
multiple-relay channel, obtaining closed-form solutions in certain scenarios.
Furthermore, we show that for the received signal-to-noise ratio degraded
phase-fading HD-MRC, the optimal scheduling problem can be simplified to a max
optimization.
|
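The maximin structure of the scheduling problem can be sketched on a toy instance. The rate expressions and numbers below are invented for illustration (they are not the paper's): with two transmit/listen states used for fractions t and 1-t of the time, each cut's rate is linear in t, and the optimal schedule maximizes the minimum cut rate.

```python
def df_rate(t, r1=(2.0, 0.5), r2=(0.4, 3.0)):
    """min over two cuts of t*state1_rate + (1-t)*state2_rate
    (hypothetical per-state rates r1, r2 for each cut)."""
    cut1 = t * r1[0] + (1 - t) * r1[1]
    cut2 = t * r2[0] + (1 - t) * r2[1]
    return min(cut1, cut2)

# Maximin over the time fraction t by grid search; for linear cut rates the
# optimum sits where the two cuts intersect.
grid = [i / 1000 for i in range(1001)]
t_star = max(grid, key=df_rate)
print(t_star, df_rate(t_star))
```

For these numbers the two lines 0.5 + 1.5t and 3.0 - 2.6t cross at t = 2.5/4.1, which the grid search recovers.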
1207.3855
|
Hybrid Grey Interval Relation Decision-Making in Artistic Talent
Evaluation of Player
|
cs.AI
|
This paper proposes a grey interval relation TOPSIS method for decision
making in which all of the attribute weights and attribute values are given by
interval grey numbers. In this paper, all of the subjective and objective
weights are obtained as interval grey numbers, and decision making is based on
four methods, such as the relative approach degree of grey TOPSIS, the
relative approach degree of grey incidence, and the relative approach degree
method using maximum entropy estimation with the 2-dimensional Euclidean
distance. A multiple attribute decision-making example for the evaluation of
the artistic talent of Kayagum (stringed Korean harp) players is given to show
the practicability of the proposed approach.
|
1207.3859
|
Approximate Message Passing with Consistent Parameter Estimation and
Applications to Sparse Learning
|
cs.IT cs.LG math.IT
|
We consider the estimation of an i.i.d. (possibly non-Gaussian) vector $\xbf
\in \R^n$ from measurements $\ybf \in \R^m$ obtained by a general cascade model
consisting of a known linear transform followed by a probabilistic
componentwise (possibly nonlinear) measurement channel. A novel method, called
adaptive generalized approximate message passing (Adaptive GAMP), that enables
joint learning of the statistics of the prior and measurement channel along
with estimation of the unknown vector $\xbf$ is presented. The proposed
algorithm is a generalization of a recently-developed EM-GAMP that uses
expectation-maximization (EM) iterations where the posteriors in the E-steps
are computed via approximate message passing. The methodology can be applied to
a large class of learning problems including the learning of sparse priors in
compressed sensing or identification of linear-nonlinear cascade models in
dynamical systems and neural spiking processes. We prove that for large i.i.d.
Gaussian transform matrices the asymptotic componentwise behavior of the
adaptive GAMP algorithm is predicted by a simple set of scalar state evolution
equations. In addition, we show that when a certain maximum-likelihood
estimation can be performed in each step, the adaptive GAMP method can yield
asymptotically consistent parameter estimates, which implies that the algorithm
achieves a reconstruction quality equivalent to the oracle algorithm that knows
the correct parameter values. Remarkably, this result applies to essentially
arbitrary parametrizations of the unknown distributions, including ones that
are nonlinear and non-Gaussian. The adaptive GAMP methodology thus provides a
systematic, general and computationally efficient method applicable to a large
range of complex linear-nonlinear models with provable guarantees.
|
1207.3863
|
Qualitative Approximate Behavior Composition
|
cs.AI
|
The behavior composition problem involves automatically building a controller
that is able to realize a desired, but unavailable, target system (e.g., a
house surveillance) by suitably coordinating a set of available components
(e.g., video cameras, blinds, lamps, a vacuum cleaner, phones, etc.). Previous
work has almost exclusively aimed at bringing about the desired component in
its totality, which is highly unsatisfactory for unsolvable problems. In this
work, we develop an approach for approximate behavior composition without
departing from the classical setting, thus making the problem applicable to a
much wider range of cases. Based on the notion of simulation, we characterize
what a maximal controller and the "closest" implementable target module
(optimal approximation) are, and show how these can be computed using ATL model
checking technology for a special case. We show the uniqueness of optimal
approximations, and prove their soundness and completeness with respect to
their imported controllers.
|
1207.3868
|
Impact of Different Spreading Codes Using FEC on DWT Based MC-CDMA
System
|
cs.IT cs.PF math.IT
|
The effect of different spreading codes in a DWT-based MC-CDMA wireless
communication system is investigated. In this paper, we present the Bit Error
Rate (BER) performance of different spreading codes (Walsh-Hadamard codes,
orthogonal Gold codes, and Golay complementary sequences) using Forward Error
Correction (FEC) in the proposed system. The data are analyzed and compared
among the different spreading codes in both coded and uncoded cases. It is
found via computer simulation that the performance of the proposed coded
system is much better than that of the uncoded system irrespective of the
spreading code, and that all the spreading codes behave approximately
similarly in both coded and uncoded cases for all modulation schemes.
|
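A generic Monte Carlo sketch of the coded-versus-uncoded BER comparison made above (this is not the paper's simulator: it uses plain BPSK over AWGN and a simple rate-1/3 repetition code at the same per-symbol SNR, purely to show the shape of such an experiment):

```python
import math
import random

def ber(snr_db, n_bits=20000, repeat=1, rng=random):
    """Monte Carlo bit error rate for BPSK over AWGN; `repeat` noisy
    copies of each symbol are combined by majority vote (per-copy SNR is
    kept fixed, so this is not an energy-fair comparison)."""
    sigma = math.sqrt(1.0 / (2 * 10 ** (snr_db / 10)))
    errors = 0
    for _ in range(n_bits):
        bit = rng.randint(0, 1)
        s = 1.0 if bit else -1.0
        votes = sum((s + rng.gauss(0, sigma)) > 0 for _ in range(repeat))
        decided = 1 if votes > repeat / 2 else 0
        errors += (decided != bit)
    return errors / n_bits

random.seed(7)
uncoded = ber(4.0, repeat=1)
coded = ber(4.0, repeat=3)
print(uncoded, coded)  # the repetition-coded link shows fewer errors
```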
1207.3869
|
Automated Inference System for End-To-End Diagnosis of Network
Performance Issues in Client-Terminal Devices
|
cs.NI cs.AI cs.PF
|
Traditional network diagnosis methods for Client-Terminal Device (CTD)
problems tend to be labor-intensive and time-consuming, and contribute to
increased customer dissatisfaction. In this paper, we propose an automated
solution for rapidly diagnosing the root causes of network performance issues
in CTDs. Based on a new intelligent inference technique, we create the
Intelligent Automated Client Diagnostic (IACD) system, which relies only on
the collection of Transmission Control Protocol (TCP) packet traces. Using
soft-margin Support Vector Machine (SVM) classifiers, the system (i)
distinguishes link problems from client problems and (ii) identifies
characteristics unique to the specific fault to report the root cause. The
modular design of the system enables support for new access link and fault
types. Experimental evaluation demonstrated the capability of the IACD system
to distinguish between faulty and healthy links and to diagnose client faults
with 98% accuracy. The system can perform fault diagnosis independent of the
user's specific TCP implementation, enabling diagnosis of a diverse range of
client devices.
|
1207.3871
|
Performance Analysis of Wavelet Based MC-CDMA System with Implementation
of Various Antenna Diversity Schemes
|
cs.IT cs.PF math.IT
|
The impact of using a wavelet-based technique on the performance of an
MC-CDMA wireless communication system has been investigated. The system under
study incorporates Walsh-Hadamard codes to discriminate the message signal for
each individual user. A computer program written in Matlab is developed, and
this simulation study is carried out with the implementation of various
antenna diversity schemes and fading (Rayleigh and Rician) channels. Computer
simulation results demonstrate that the proposed wavelet-based MC-CDMA system
performs best with the Alamouti scheme (two transmit antennas and one receive
antenna) under AWGN and Rician channels.
|
1207.3874
|
Reasoning about Agent Programs using ATL-like Logics
|
cs.AI
|
We propose a variant of Alternating-time Temporal Logic (ATL) grounded in the
agents' operational know-how, as defined by their libraries of abstract plans.
Inspired by ATLES, a variant itself of ATL, it is possible in our logic to
explicitly refer to "rational" strategies for agents developed under the
Belief-Desire-Intention agent programming paradigm. This allows us to express
and verify properties of BDI systems using ATL-type logical frameworks.
|
1207.3875
|
Transmission of Voice Signal: BER Performance Analysis of Different FEC
Schemes Based OFDM System over Various Channels
|
cs.IT cs.PF math.IT
|
In this paper, we investigate the impact of Forward Error Correction (FEC)
codes, namely the Cyclic Redundancy Code and the Convolutional Code, on the
performance of an OFDM wireless communication system for speech signal
transmission over both AWGN and fading (Rayleigh and Rician) channels in terms
of Bit Error Probability. The simulation has been done in conjunction with
QPSK digital modulation and compared with uncoded results. In the fading
channels, it is found via computer simulation that the convolutionally
interleaved OFDM system outperforms both the CRC-interleaved OFDM system and
the uncoded OFDM system.
|
1207.3877
|
A New Determinant Inequality of Positive Semi-Definite Matrices
|
cs.IT math.IT
|
We discover and prove a new determinant inequality for positive semidefinite
matrices. This new inequality is useful for attacking and solving a variety of
optimization problems arising in the design of wireless communication systems.
|
1207.3882
|
WEP: An Energy Efficient Protocol for Cluster Based Heterogeneous
Wireless Sensor Network
|
cs.IT cs.PF math.IT
|
We develop an energy-efficient routing protocol, called the weighted election
protocol (WEP), to enhance the stability period of wireless sensor networks
(WSNs). It introduces a scheme that combines a clustering strategy with a
chain routing algorithm to satisfy both energy and stability-period
constraints in heterogeneous WSN environments. Simulation results show that
WEP performs better than LEACH, SEP, and HEARP in terms of stability period
and network lifetime. It is also found that a longer stability period depends
strongly on higher values of extra energy in the heterogeneous settings.
|
1207.3884
|
Effect of Interleaved FEC Code on Wavelet Based MC-CDMA System with
Alamouti STBC in Different Modulation Schemes
|
cs.IT cs.PF math.IT
|
In this paper, the impact of a Forward Error Correction (FEC) code, namely a
Trellis code with an interleaver, on the performance of a wavelet-based
MC-CDMA wireless communication system with the Alamouti antenna diversity
scheme has been investigated in terms of Bit Error Rate (BER) as a function of
Signal-to-Noise Ratio (SNR) per bit. Simulation of the system under study has
been done in M-ary modulation schemes (MPSK, MQAM, and DPSK) over AWGN and
Rayleigh fading channels, incorporating the Walsh-Hadamard code as the
orthogonal spreading code to discriminate the message signal for each
individual user. It is observed via computer simulation that the proposed
interleaved coded system outperforms the uncoded system in all modulation
schemes over the Rayleigh fading channel.
|
1207.3911
|
On Dimension Bounds for Auxiliary Quantum Systems
|
quant-ph cs.IT math.IT
|
Expressions of several capacity regions in quantum information theory involve
an optimization over auxiliary quantum registers. Evaluating such expressions
requires bounds on the dimension of the Hilbert space of these auxiliary
registers, for which no non-trivial technique is known; we lack a quantum
analog of the Carath\'{e}odory theorem. In this paper, we develop a new
non-Carath\'{e}odory-type tool for evaluating expressions involving a single
quantum auxiliary register and several classical random variables. As we show,
such expressions appear in problems of entanglement-assisted Gray-Wyner and
entanglement-assisted channel simulation, where the question of whether
entanglement helps in these settings is related to that of evaluating
expressions with a single quantum auxiliary register. To evaluate such
expressions, we argue that developing a quantum analog of the Carath\'{e}odory
theorem requires a better understanding of a notion which we call ``quantum
conditioning." We then proceed by proving a few results about quantum
conditioning, one of which is that quantum conditioning is strictly richer than
the usual classical conditioning.
|
1207.3914
|
Largenet2: an object-oriented programming library for simulating large
adaptive networks
|
physics.comp-ph cs.DS cs.SI physics.soc-ph
|
The largenet2 C++ library provides an infrastructure for the simulation of
large dynamic and adaptive networks with discrete node and link states. The
library is released as free software. It is available at
http://rincedd.github.com/largenet2. Largenet2 is licensed under the Creative
Commons Attribution-NonCommercial 3.0 Unported License.
|
1207.3932
|
Automatic Segmentation of Manipuri (Meiteilon) Word into Syllabic Units
|
cs.CL
|
The automatic segmentation of Manipuri (Meiteilon) words into syllabic units
is demonstrated in this paper. Manipuri is a scheduled Indian language of
Tibeto-Burman origin and is highly agglutinative. The language uses two
scripts: the Bengali script and Meitei Mayek. The present work is based on the
latter. An algorithm is designed to identify mainly the syllables of words of
Manipuri origin. The algorithm achieves a Recall of 74.77, a Precision of
91.21, and an F-Score of 82.18, which is a reasonable result for the first
attempt of this kind for the language.
|
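A hypothetical illustration of rule-based syllabification (toy Latin transliteration only; this is not the Meitei Mayek script, and the rule is invented, not the paper's algorithm): segment a word with a simple rule that closes a syllable after each vowel and, when two or more consonants follow, keeps the last one as the next syllable's onset.

```python
VOWELS = set("aeiou")

def syllabify(word):
    """Greedy vowel-anchored syllabification over a toy CV alphabet."""
    syllables, current = [], ""
    i = 0
    while i < len(word):
        current += word[i]
        if word[i] in VOWELS:
            # Count the consonant run after this vowel.
            j = i + 1
            while j < len(word) and word[j] not in VOWELS:
                j += 1
            cons = j - (i + 1)
            # Keep one consonant as the next syllable's onset, unless the
            # word ends here (then the whole coda stays in this syllable).
            take = max(0, cons - 1) if j < len(word) else cons
            current += word[i + 1:i + 1 + take]
            syllables.append(current)
            current = ""
            i += take
        i += 1
    if current:
        syllables.append(current)
    return syllables

print(syllabify("manipuri"))  # ['ma', 'ni', 'pu', 'ri']
```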
1207.3944
|
Polarimetric SAR Image Segmentation with B-Splines and a New Statistical
Model
|
cs.CV stat.ML
|
We present an approach for polarimetric Synthetic Aperture Radar (SAR) image
region boundary detection based on the use of B-Spline active contours and a
new model for polarimetric SAR data: the GHP distribution. In order to detect
the boundary of a region, initial B-Spline curves are specified, either
automatically or manually, and the proposed algorithm uses a deformable
contours technique to find the boundary. In doing this, the parameters of the
polarimetric GHP model for the data are estimated, in order to find the
transition points between the region being segmented and the surrounding area.
This is a local algorithm since it works only on the region to be segmented.
Results of its performance are presented.
|
1207.3961
|
Ensemble Clustering with Logic Rules
|
stat.ML cs.LG
|
In this article, the logic rule ensembles approach to supervised learning is
applied to unsupervised and semi-supervised clustering. Logic rules, obtained
by combining simple conjunctive rules, are used to partition the input space,
and an ensemble of these rules is used to define a similarity matrix.
Similarity partitioning is then used to partition the data in a hierarchical
manner. We use internal and external measures of cluster validity to evaluate
the quality of the clusterings and to identify the number of clusters.
|
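The similarity-matrix construction can be sketched with a toy ensemble (the rules here are single axis-aligned thresholds, a deliberate simplification of the conjunctive rules used above; data and parameters are invented): the similarity of two points is the fraction of rules that place them on the same side of the partition.

```python
import random

random.seed(3)
points = [(0.1, 0.2), (0.15, 0.25), (0.9, 0.8), (0.85, 0.9)]

def random_rule():
    """A random axis-aligned threshold rule partitioning the input space."""
    axis = random.randint(0, 1)
    thresh = random.random()
    # Default args bind axis/thresh at creation time (avoids late binding).
    return lambda p, a=axis, t=thresh: p[a] <= t

rules = [random_rule() for _ in range(200)]
n = len(points)
# similarity = fraction of rules on which the two points agree
sim = [[sum(r(points[i]) == r(points[j]) for r in rules) / len(rules)
        for j in range(n)] for i in range(n)]
for row in sim:
    print([round(v, 2) for v in row])
```

The two left-corner points and the two right-corner points come out highly similar, while cross-pairs do not; a hierarchical partitioning can then be run on this matrix.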
1207.3962
|
Computation of the Hausdorff distance between sets of line segments in
parallel
|
cs.CG cs.CV cs.DC
|
We show that the Hausdorff distance for two sets of non-intersecting line
segments can be computed in parallel in $O(\log^2 n)$ time using O(n)
processors in a CREW-PRAM computation model. We discuss how some parts of the
sequential algorithm can be performed in parallel using previously known
parallel algorithms; and identify the so-far unsolved part of the problem for
the parallel computation, which is the following: Given two sets of
$x$-monotone curve segments, red and blue, for each red segment find its
extremal intersection points with the blue set, i.e. points with the minimal
and maximal $x$-coordinate. Each segment set is assumed to be intersection
free. For this intersection problem we describe a parallel algorithm which
completes the Hausdorff distance computation within the stated time and
processor bounds.
|
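For reference, the quantity being parallelized can be computed sequentially. The sketch below is only a brute-force approximation by dense sampling along each segment (the paper's contribution is the parallel algorithm, not this):

```python
import math

def seg_points(seg, k=200):
    """k+1 evenly spaced sample points along a segment."""
    (x1, y1), (x2, y2) = seg
    return [(x1 + (x2 - x1) * i / k, y1 + (y2 - y1) * i / k)
            for i in range(k + 1)]

def point_seg_dist(p, seg):
    """Euclidean distance from point p to the closest point of seg."""
    (x1, y1), (x2, y2) = seg
    px, py = p[0] - x1, p[1] - y1
    dx, dy = x2 - x1, y2 - y1
    t = 0.0 if dx == dy == 0 else max(
        0.0, min(1.0, (px * dx + py * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - t * dx, py - t * dy)

def directed_hausdorff(A, B):
    """sup over sampled points of A of the distance to the nearest
    segment of B."""
    return max(min(point_seg_dist(p, b) for b in B)
               for a in A for p in seg_points(a))

def hausdorff(A, B):
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

A = [((0, 0), (1, 0))]
B = [((0, 1), (1, 1))]
print(hausdorff(A, B))  # 1.0 for these parallel unit segments
```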
1207.3994
|
Model Selection for Degree-corrected Block Models
|
cs.SI cond-mat.stat-mech math.ST physics.soc-ph stat.ML stat.TH
|
The proliferation of models for networks raises challenging problems of model
selection: the data are sparse and globally dependent, and models are typically
high-dimensional and have large numbers of latent variables. Together, these
issues mean that the usual model-selection criteria do not work properly for
networks. We illustrate these challenges, and show one way to resolve them, by
considering the key network-analysis problem of dividing a graph into
communities or blocks of nodes with homogeneous patterns of links to the rest
of the network. The standard tool for doing this is the stochastic block model,
under which the probability of a link between two nodes is a function solely of
the blocks to which they belong. This imposes a homogeneous degree distribution
within each block; this can be unrealistic, so degree-corrected block models
add a parameter for each node, modulating its over-all degree. The choice
between ordinary and degree-corrected block models matters because they make
very different inferences about communities. We present the first principled
and tractable approach to model selection between standard and degree-corrected
block models, based on new large-graph asymptotics for the distribution of
log-likelihood ratios under the stochastic block model, finding substantial
departures from classical results for sparse graphs. We also develop
linear-time approximations for log-likelihoods under both the stochastic block
model and the degree-corrected model, using belief propagation. Applications to
simulated and real networks show excellent agreement with our approximations.
Our results thus both solve the practical problem of deciding on degree
correction, and point to a general approach to model selection in network
analysis.
|
1207.4028
|
Signal processing with Levy information
|
math.PR cs.IT eess.SP math.IT math.OC q-fin.GN
|
Levy processes, which have stationary independent increments, are ideal for
modelling the various types of noise that can arise in communication channels.
If a Levy process admits exponential moments, then there exists a parametric
family of measure changes called Esscher transformations. If the parameter is
replaced with an independent random variable, the true value of which
represents a "message", then under the transformed measure the original Levy
process takes on the character of an "information process". In this paper we
develop a theory of such Levy information processes. The underlying Levy
process, which we call the fiducial process, represents the "noise type". Each
such noise type is capable of carrying a message of a certain specification. A
number of examples are worked out in detail, including information processes of
the Brownian, Poisson, gamma, variance gamma, negative binomial, inverse
Gaussian, and normal inverse Gaussian type. Although in general there is no
additive decomposition of information into signal and noise, one is led
nevertheless for each noise type to a well-defined scheme for signal detection
and enhancement relevant to a variety of practical situations.
|
1207.4044
|
Designing Information Revelation and Intervention with an Application to
Flow Control
|
cs.GT cs.IT cs.MA cs.NI math.IT
|
There are many familiar situations in which a manager seeks to design a
system in which users share a resource, but outcomes depend on the information
held and actions taken by users. If communication is possible, the manager can
ask users to report their private information and then, using this information,
instruct them on what actions they should take. If the users are compliant,
this reduces the manager's optimization problem to a well-studied problem of
optimal control. However, if the users are self-interested and not compliant,
the problem is much more complicated: when asked to report their private
information, the users might lie; upon receiving instructions, the users might
disobey. Here we ask whether the manager can design the system to get around
both of these difficulties. To do so, the manager must provide for the users
the incentives to report truthfully and to follow the instructions, despite the
fact that the users are self-interested. For a class of environments that
includes many resource allocation games in communication networks, we provide
tools for the manager to design an efficient system. In addition to reports and
recommendations, the design we employ allows the manager to intervene in the
system after the users take actions. In an abstracted environment, we find
conditions under which the manager can achieve the same outcome it could if
users were compliant, and conditions under which it does not. We then apply our
framework and results to design a flow control management system.
|
1207.4074
|
An analytical comparison of coalescent-based multilocus methods: The
three-taxon case
|
math.PR cs.CE cs.DS math.ST q-bio.PE stat.TH
|
Incomplete lineage sorting (ILS) is a common source of gene tree incongruence
in multilocus analyses. A large number of methods have been developed to infer
species trees in the presence of ILS. Here we provide a mathematical analysis
of several coalescent-based methods. Our analysis is performed on a three-taxon
species tree and assumes that the gene trees are correctly reconstructed along
with their branch lengths.
|
1207.4083
|
Optimization of a Finite Frequency-Hopping Ad Hoc Network in Nakagami
Fading
|
cs.IT math.IT
|
This paper considers the analysis and optimization of a frequency-hopping ad
hoc network with a finite number of mobiles and finite spatial extent. The
mobiles communicate using coded continuous-phase frequency-shift keying (CPFSK)
modulation. The performance of the system is a function of the number of
hopping channels, the rate of the error-correction code, and the modulation
index used by the CPFSK modulation. For a given channel model and density of
mobiles, these parameters are jointly optimized by maximizing the
(modulation-constrained) transmission capacity, which is a measure of the
spatial spectral efficiency of the system. The transmission capacity of the
finite network is found by using a recent expression for the spatially averaged
outage probability in the presence of Nakagami fading, which is found in closed
form in the absence of shadowing and can be solved using numerical integration
in the presence of shadowing.
|
1207.4089
|
A Two-Stage Combined Classifier in Scale Space Texture Classification
|
cs.CV cs.LG
|
Textures often show multiscale properties and hence multiscale techniques are
considered useful for texture analysis. Scale-space theory as a biologically
motivated approach may be used to construct multiscale textures. In this paper
various ways are studied to combine features on different scales for texture
classification of small image patches. We use the N-jet of derivatives up to
the second order at different scales to generate distinct pattern
representations (DPR) of feature subsets. Each feature subset in the DPR is
given to a base classifier (BC) of a two-stage combined classifier. The
decisions made by these BCs are combined in two stages over scales and
derivatives. Various combining systems are discussed, along with their
significance and differences. Learning curves are used to evaluate the
performance. We find that for small sample sizes combining classifiers performs
significantly better than combining feature spaces (CFS). It is also shown that combining
classifiers performs better than the support vector machine on CFS in
multiscale texture classification.
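As a toy illustration of combining base-classifier *decisions* rather than feature spaces, a majority vote over per-scale decisions can be sketched as follows; the per-scale classifiers here are placeholders, not the paper's N-jet base classifiers:

```python
from collections import Counter

def majority_vote(decisions):
    """Combine one predicted label per base classifier into a final label.

    decisions: list of labels, one from each per-scale base classifier.
    For ties, the label reaching the winning count first is returned.
    """
    return Counter(decisions).most_common(1)[0][0]
```

For example, if base classifiers at three scales predict `['grass', 'brick', 'grass']`, the combined decision is `'grass'`; the combined-feature-space alternative would instead train one classifier on the concatenation of all per-scale features.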
|
1207.4096
|
The Global Grid
|
cs.SY physics.soc-ph
|
This paper puts forward the vision that a natural future stage of the
electricity network could be a grid spanning the whole planet and connecting
most of the large power plants in the world: this is the "Global Grid". The
main driving force behind the Global Grid will be the harvesting of remote
renewable sources, and its key infrastructure element will be high-capacity,
long-distance transmission lines. Wind farms and solar power plants will supply load
centers with green power over long distances.
This paper focuses on the introduction of the concept, showing that a
globally interconnected network can be technologically feasible and
economically competitive. We further highlight the multiple opportunities
emerging from a global electricity network such as smoothing the renewable
energy supply and electricity demand, reducing the need for bulk storage, and
reducing the volatility of the energy prices. We also discuss possible
investment mechanisms and operating schemes. Among others, we envision in such
a system a global power market and the establishment of two new coordinating
bodies, the "Global Regulator" and the "Global System Operator".
|
1207.4098
|
Automatic Control Software Synthesis for Quantized Discrete Time Hybrid
Systems
|
cs.SY cs.SE
|
Many Embedded Systems are indeed Software Based Control Systems, that is
control systems whose controller consists of control software running on a
microcontroller device. This motivates investigation on Formal Model Based
Design approaches for automatic synthesis of embedded systems control software.
This paper addresses control software synthesis for discrete time nonlinear
systems. We present a methodology to overapproximate the dynamics of a discrete
time nonlinear hybrid system H by means of a discrete time linear hybrid system
L(H), in such a way that controllers for L(H) are guaranteed to be controllers
for H. We present experimental results on the inverted pendulum, a challenging
and meaningful benchmark in nonlinear Hybrid Systems control.
|
1207.4104
|
Outliers and Random Noises in System Identification: a Compressed
Sensing Approach
|
cs.IT math.IT
|
In this paper, we consider robust system identification under sparse outliers
and random noise. In this problem, system parameters are observed through a
Toeplitz matrix. All observations are subject to random noise and a few are
corrupted with outliers. We reduce this problem of system identification to a
sparse error correcting problem using a Toeplitz structured real-numbered
coding matrix. We prove the performance guarantee of Toeplitz structured matrix
in sparse error correction. Thresholds on the percentage of correctable errors
for Toeplitz structured matrices are established. When both outliers and
observation noise are present, we show that the estimation error goes to
zero asymptotically as long as the probability density function of the
observation noise is non-vanishing around zero. No probabilistic assumptions are imposed on
the outliers.
|
1207.4107
|
Exploiting First-Order Regression in Inductive Policy Selection
|
cs.AI
|
We consider the problem of computing optimal generalised policies for
relational Markov decision processes. We describe an approach combining some of
the benefits of purely inductive techniques with those of symbolic dynamic
programming methods. The latter reason about the optimal value function using
first-order decision theoretic regression and formula rewriting, while the
former, when provided with a suitable hypotheses language, are capable of
generalising value functions or policies for small instances. Our idea is to
use reasoning and in particular classical first-order regression to
automatically generate a hypotheses language dedicated to the domain at hand,
which is then used as input by an inductive solver. This approach avoids the
more complex reasoning of symbolic dynamic programming while focusing the
inductive solver's attention on concepts that are specifically relevant to the
optimal value function for the domain considered.
|
1207.4109
|
A Complete Anytime Algorithm for Treewidth
|
cs.DS cs.AI cs.DM
|
In this paper, we present a Branch and Bound algorithm called QuickBB for
computing the treewidth of an undirected graph. This algorithm performs a
search in the space of perfect elimination orderings of the vertices of the graph.
The algorithm uses novel pruning and propagation techniques which are derived
from the theory of graph minors and graph isomorphism. We present a new
algorithm called minor-min-width for computing a lower bound on treewidth that
is used within the branch and bound algorithm and which improves over earlier
available lower bounds. Empirical evaluation of QuickBB on randomly generated
graphs and benchmarks in Graph Coloring and Bayesian Networks shows that it is
consistently better than complete algorithms like QuickTree [Shoikhet and
Geiger, 1997] in terms of CPU time. QuickBB also has good anytime performance,
being able to generate better upper bounds on the treewidth of some graphs whose
optimal treewidth could not previously be computed.
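The minor-min-width lower bound lends itself to a compact sketch: repeatedly record the minimum degree in the graph, then contract the minimum-degree vertex with its lowest-degree neighbour. This is a hedged reconstruction from the general description of the heuristic, assuming an adjacency-dict representation, not the authors' implementation:

```python
def minor_min_width(adj):
    """Lower bound on treewidth via repeated min-degree contraction.

    adj: dict mapping each vertex to the set of its neighbours.
    """
    g = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    lb = 0
    while len(g) > 1:
        v = min(g, key=lambda x: len(g[x]))     # minimum-degree vertex
        lb = max(lb, len(g[v]))                 # its degree bounds the treewidth
        if not g[v]:                            # isolated vertex: just drop it
            del g[v]
            continue
        u = min(g[v], key=lambda x: len(g[x]))  # lowest-degree neighbour
        for w in g[v] - {u}:                    # contract v into u
            g[w].discard(v)
            g[w].add(u)
            g[u].add(w)
        g[u].discard(v)
        del g[v]
    return lb
```

On a 4-cycle this returns 2 and on a path it returns 1, matching the true treewidths of those graphs; contraction (taking minors) can only increase the bound relative to plain min-degree.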
|
1207.4110
|
The Minimum Information Principle for Discriminative Learning
|
cs.LG stat.ML
|
Exponential models of distributions are widely used in machine learning for
classification and modelling. It is well known that they can be interpreted as
maximum entropy models under empirical expectation constraints. In this work,
we argue that for classification tasks, mutual information is a more suitable
information theoretic measure to be optimized. We show how the principle of
minimum mutual information generalizes that of maximum entropy, and provides a
comprehensive framework for building discriminative classifiers. A game
theoretic interpretation of our approach is then given, and several
generalization bounds provided. We present iterative algorithms for solving the
minimum information problem and its convex dual, and demonstrate their
performance on various classification tasks. The results show that minimum
information classifiers outperform the corresponding maximum entropy models.
|
1207.4111
|
Decision Making for Symbolic Probability
|
cs.AI
|
This paper proposes a decision theory for a symbolic generalization of
probability theory (SP). Darwiche and Ginsberg [2,3] proposed SP to relax the
requirement of using numbers for uncertainty while preserving desirable
patterns of Bayesian reasoning. SP represents uncertainty by symbolic supports
that are ordered partially rather than completely as in the case of standard
probability. We show that a preference relation on acts that satisfies a number
of intuitive postulates is represented by a utility function whose domain is a
set of pairs of supports. We argue that a subjective interpretation is as
useful and appropriate for SP as it is for numerical probability. It is useful
because the subjective interpretation provides a basis for uncertainty
elicitation. It is appropriate because we can provide a decision theory that
explains how preference on acts is based on support comparison.
|
1207.4112
|
Algebraic Statistics in Model Selection
|
cs.LG stat.ML
|
We develop the necessary theory in computational algebraic geometry to place
Bayesian networks into the realm of algebraic statistics. We present an
algebra-statistics dictionary focused on statistical modeling. In particular,
we link the notion of effective dimension of a Bayesian network with the
notion of algebraic dimension of a variety. We also obtain the independence and
non-independence constraints on the distributions over the observable variables
implied by a Bayesian network with hidden variables, via a generating set of an
ideal of polynomials associated to the network. These results extend previous
work on the subject. Finally, the relevance of these results for model
selection is discussed.
|
1207.4113
|
On-line Prediction with Kernels and the Complexity Approximation
Principle
|
cs.LG stat.ML
|
The paper describes an application of Aggregating Algorithm to the problem of
regression. It generalizes earlier results concerned with plain linear
regression to kernel techniques and presents an on-line algorithm which
performs nearly as well as any oblivious kernel predictor. The paper contains
the derivation of an estimate on the performance of this algorithm. The
estimate is then used to derive an application of the Complexity Approximation
Principle to kernel methods.
|
1207.4114
|
Metrics for Finite Markov Decision Processes
|
cs.AI
|
We present metrics for measuring the similarity of states in a finite Markov
decision process (MDP). The formulation of our metrics is based on the notion
of bisimulation for MDPs, with an aim towards solving discounted infinite
horizon reinforcement learning tasks. Such metrics can be used to aggregate
states, as well as to better structure other value function approximators
(e.g., memory-based or nearest-neighbor approximators). We provide bounds that
relate our metric distances to the optimal values of states in the given MDP.
|
1207.4115
|
Dynamic Programming for Structured Continuous Markov Decision Problems
|
cs.AI
|
We describe an approach for exploiting structure in Markov Decision Processes
with continuous state variables. At each step of the dynamic programming, the
state space is dynamically partitioned into regions where the value function is
the same throughout the region. We first describe the algorithm for piecewise
constant representations. We then extend it to piecewise linear
representations, using techniques from POMDPs to represent and reason about
linear surfaces efficiently. We show that for complex, structured problems, our
approach exploits the natural structure so that optimal solutions can be
computed efficiently.
|
1207.4116
|
Region-Based Incremental Pruning for POMDPs
|
cs.AI
|
We present a major improvement to the incremental pruning algorithm for
solving partially observable Markov decision processes. Our technique targets
the cross-sum step of the dynamic programming (DP) update, a key source of
complexity in POMDP algorithms. Instead of reasoning about the whole belief
space when pruning the cross-sums, our algorithm divides the belief space into
smaller regions and performs independent pruning in each region. We evaluate
the benefits of the new technique both analytically and experimentally, and
show that it produces very significant performance gains. The results
contribute to the scalability of POMDP algorithms to domains that cannot be
handled by the best existing techniques.
|
1207.4117
|
A Unified framework for order-of-magnitude confidence relations
|
cs.AI
|
The aim of this work is to provide a unified framework for ordinal
representations of uncertainty lying at the crossroads between possibility and
probability theories. Such confidence relations between events are commonly
found in nonmonotonic reasoning, inconsistency management, or qualitative decision
theory. They start either from probability theory, making it more qualitative,
or from possibility theory, making it more expressive. We show these two trends
converge to a class of genuine probability theories. We provide
characterization results for these useful tools that preserve the qualitative
nature of possibility rankings, while enjoying the power of expressivity of
additive representations.
|
1207.4118
|
Iterative Conditional Fitting for Gaussian Ancestral Graph Models
|
stat.ME cs.LG stat.ML
|
Ancestral graph models, introduced by Richardson and Spirtes (2002),
generalize both Markov random fields and Bayesian networks to a class of graphs
with a global Markov property that is closed under conditioning and
marginalization. By design, ancestral graphs encode precisely the conditional
independence structures that can arise from Bayesian networks with selection
and unobserved (hidden/latent) variables. Thus, ancestral graph models provide
a potentially very useful framework for exploratory model selection when
unobserved variables might be involved in the data-generating process but no
particular hidden structure can be specified. In this paper, we present the
Iterative Conditional Fitting (ICF) algorithm for maximum likelihood estimation
in Gaussian ancestral graph models. The name reflects that in each step of the
procedure a conditional distribution is estimated, subject to constraints,
while a marginal distribution is held fixed. This approach is in duality to the
well-known Iterative Proportional Fitting algorithm, in which marginal
distributions are fitted while conditional distributions are held fixed.
|
1207.4119
|
Mixtures of Deterministic-Probabilistic Networks and their AND/OR Search
Space
|
cs.AI
|
The paper introduces mixed networks, a new framework for expressing and
reasoning with probabilistic and deterministic information. The framework
combines belief networks with constraint networks, defining the semantics and
graphical representation. We also introduce the AND/OR search space for
graphical models, and develop a new linear space search algorithm. This
provides the basis for understanding the benefits of processing the constraint
information separately, resulting in the pruning of the search space. When the
constraint part is tractable or has a small number of solutions, using the
mixed representation can be exponentially more effective than using pure belief
networks which model constraints as conditional probability tables.
|
1207.4120
|
Stable Independence and Complexity of Representation
|
cs.AI
|
The representation of independence relations generally builds upon the
well-known semigraphoid axioms of independence. Recently, a representation has
been proposed that captures a set of dominant statements of an independence
relation from which any other statement can be generated by means of the
axioms; the cardinality of this set is taken to indicate the complexity of the
relation. Building upon the idea of dominance, we introduce the concept of
stability to provide for a more compact representation of independence. We give
an associated algorithm for establishing such a representation. We show that,
with our concept of stability, many independence relations are found to be of
lower complexity than with existing representations.
|
1207.4121
|
Propositional and Relational Bayesian Networks Associated with Imprecise
and Qualitative Probabilistic Assessments
|
cs.AI
|
This paper investigates a representation language with flexibility inspired
by probabilistic logic and compactness inspired by relational Bayesian
networks. The goal is to handle propositional and first-order constructs
together with precise, imprecise, indeterminate and qualitative probabilistic
assessments. The paper shows how this can be achieved through the theory of
credal networks. New exact and approximate inference algorithms based on
multilinear programming and iterated/loopy propagation of interval
probabilities are presented; their superior performance, compared to existing
ones, is shown empirically.
|
1207.4122
|
Bayesian Biosurveillance of Disease Outbreaks
|
stat.AP cs.AI cs.CE
|
Early, reliable detection of disease outbreaks is a critical problem today.
This paper reports an investigation of the use of causal Bayesian networks to
model spatio-temporal patterns of a non-contagious disease (respiratory anthrax
infection) in a population of people. The number of parameters in such a
network can become enormous, if not carefully managed. Also, inference needs to
be performed in real time as population data stream in. We describe techniques
we have applied to address both the modeling and inference challenges. A key
contribution of this paper is the explication of assumptions and techniques
that are sufficient to allow the scaling of Bayesian network modeling and
inference to millions of nodes for real-time surveillance applications. The
results reported here provide a proof-of-concept that Bayesian networks can
serve as the foundation of a system that effectively performs Bayesian
biosurveillance of disease outbreaks.
|
1207.4123
|
A Logic Programming Framework for Possibilistic Argumentation with Vague
Knowledge
|
cs.AI
|
Defeasible argumentation frameworks have evolved to become a sound setting to
formalize commonsense, qualitative reasoning from incomplete and potentially
inconsistent knowledge. Defeasible Logic Programming (DeLP) is a defeasible
argumentation formalism based on an extension of logic programming. Although
DeLP has been successfully integrated in a number of different real-world
applications, DeLP cannot deal with explicit uncertainty, nor with vague
knowledge, as defeasibility is directly encoded in the object language. This
paper introduces P-DeLP, a new logic programming language that extends original
DeLP capabilities for qualitative reasoning by incorporating the treatment of
possibilistic uncertainty and fuzzy knowledge. Such features will be formalized
on the basis of PGL, a possibilistic logic based on Godel fuzzy logic.
|
1207.4124
|
Sensitivity Analysis in Bayesian Networks: From Single to Multiple
Parameters
|
cs.AI
|
Previous work on sensitivity analysis in Bayesian networks has focused on
single parameters, where the goal is to understand the sensitivity of queries
to single parameter changes, and to identify single parameter changes that
would enforce a certain query constraint. In this paper, we expand the work to
multiple parameters which may be in the CPT of a single variable, or the CPTs
of multiple variables. Not only do we identify the solution space of multiple
parameter changes that would be needed to enforce a query constraint, but we
also show how to find the optimal solution, that is, the one which disturbs the
current probability distribution the least (with respect to a specific measure
of disturbance). We characterize the computational complexity of our new
techniques and discuss their applications to developing and debugging Bayesian
networks, and to the problem of reasoning about the value (reliability) of new
information.
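The single-parameter case that this work generalizes rests on the fact that a query probability is a linear function of any one CPT parameter (with the complementary entries co-varying). A hypothetical two-node network, with names of my own choosing, shows how the parameter change enforcing a query constraint then reduces to one linear equation:

```python
def theta_for_target(p_a1, p_b1_given_a0, target):
    """Network A -> B, both binary; theta = P(B=1 | A=1) is the tuned parameter.

    P(B=1) = p_a1 * theta + (1 - p_a1) * p_b1_given_a0   (linear in theta)
    Solve P(B=1) = target for theta; return None if infeasible in [0, 1].
    """
    alpha = p_a1                        # coefficient of theta
    beta = (1 - p_a1) * p_b1_given_a0   # constant term
    theta = (target - beta) / alpha
    return theta if 0.0 <= theta <= 1.0 else None
```

With P(A=1) = 0.5 and P(B=1|A=0) = 0.2, enforcing P(B=1) = 0.5 requires theta = 0.8, while P(B=1) = 0.95 is unreachable by tuning this parameter alone. The multiple-parameter setting of the paper replaces this one-dimensional line with a solution space over several CPT entries.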
|
1207.4125
|
Applying Discrete PCA in Data Analysis
|
cs.LG stat.ML
|
Methods for analysis of principal components in discrete data have existed
for some time under various names such as grade of membership modelling,
probabilistic latent semantic analysis, and genotype inference with admixture.
In this paper we explore a number of extensions to the common theory, and
present some applications of these methods to common statistical tasks. We
show that these methods can be interpreted as a discrete version of ICA. We
develop a hierarchical version yielding components at different levels of
detail, and additional techniques for Gibbs sampling. We compare the algorithms
on a text prediction task using support vector machines, and on an information
retrieval task.
|
1207.4126
|
Compact Value-Function Representations for Qualitative Preferences
|
cs.AI
|
We consider the challenge of preference elicitation in systems that help
users discover the most desirable item(s) within a given database. Past work on
preference elicitation focused on structured models that provide a factored
representation of users' preferences. Such models require less information to
construct and support efficient reasoning algorithms. This paper makes two
substantial contributions to this area: (1) Strong representation theorems for
factored value functions. (2) A methodology that utilizes our representation
results to address the problem of optimal item selection.
|
1207.4127
|
On finding minimal w-cutset
|
cs.DS cs.AI
|
The complexity of a reasoning task over a graphical model is tied to the
induced width of the underlying graph. It is well known that conditioning on
(assigning values to) a subset of variables yields a subproblem of reduced
complexity in which the instantiated variables are removed. If the assigned variables
constitute a cycle-cutset, the rest of the network is singly-connected and
therefore can be solved by linear propagation algorithms. A w-cutset is a
generalization of a cycle-cutset defined as a subset of nodes such that the
subgraph with cutset nodes removed has induced-width of w or less. In this
paper we address the problem of finding a minimal w-cutset in a graph. We
relate the problem to that of finding the minimal w-cutset of a tree
decomposition. The latter can be mapped to the well-known set multi-cover
problem. This relationship yields a proof of NP-completeness on one hand and a
greedy algorithm for finding a w-cutset of a tree decomposition on the other.
Empirical evaluation of the algorithms is presented.
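For intuition, the w = 1 special case (a classical cycle-cutset) admits a simple greedy heuristic in the spirit of the cover-based view above: repeatedly remove a highest-degree vertex until no cycle remains. This sketch is an illustrative simplification of mine, not the paper's tree-decomposition-based algorithm:

```python
def has_cycle(g):
    """Detect a cycle in a simple undirected graph (adjacency-dict form)."""
    seen = set()
    for root in g:
        if root in seen:
            continue
        seen.add(root)
        stack = [(root, None)]           # DFS with parent tracking
        while stack:
            v, parent = stack.pop()
            for u in g[v]:
                if u == parent:
                    continue
                if u in seen:            # back edge => cycle
                    return True
                seen.add(u)
                stack.append((u, v))
    return False

def greedy_cycle_cutset(adj):
    """Greedily remove max-degree vertices until the graph is acyclic."""
    g = {v: set(ns) for v, ns in adj.items()}
    cutset = []
    while has_cycle(g):
        v = max(g, key=lambda x: len(g[x]))
        for u in g[v]:
            g[u].discard(v)
        del g[v]
        cutset.append(v)
    return cutset
```

Removing one vertex suffices for a 4-cycle, while the complete graph K4 needs two; the general w-cutset problem asks the analogous question with "acyclic" replaced by "induced width at most w".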
|
1207.4129
|
Recovering Articulated Object Models from 3D Range Data
|
cs.CV
|
We address the problem of unsupervised learning of complex articulated object
models from 3D range data. We describe an algorithm whose input is a set of
meshes corresponding to different configurations of an articulated object. The
algorithm automatically recovers a decomposition of the object into
approximately rigid parts, the location of the parts in the different object
instances, and the articulated object skeleton linking the parts. Our algorithm
first registers all the meshes using an unsupervised non-rigid technique
described in a companion paper. It then segments the meshes using a graphical
model that captures the spatial contiguity of parts. The segmentation is done
using the EM algorithm, iterating between finding a decomposition of the object
into rigid parts, and finding the location of the parts in the object
instances. Although the graphical model is densely connected, the object
decomposition step can be performed optimally and efficiently, allowing us to
identify a large number of object parts while avoiding local maxima. We
demonstrate the algorithm on real-world datasets, recovering a 15-part
articulated model of a human puppet from just 7 different puppet
configurations, as well as a 4-part model of a flexing arm where significant
non-rigid deformation was present.
|
1207.4130
|
Using arguments for making decisions: A possibilistic logic approach
|
cs.AI
|
Humans currently use arguments for explaining choices which are already made,
or for evaluating potential choices. Each potential choice has usually pros and
cons of various strengths. In spite of the usefulness of arguments in a
decision making process, there have been few formal proposals handling this
idea, apart from the works by Fox and Parsons and by Bonet and Geffner. In this
paper we propose a possibilistic logic framework where arguments are built from
an uncertain knowledge base and a set of prioritized goals. The proposed
approach can compute two kinds of decisions by distinguishing between
pessimistic and optimistic attitudes. When the available, maybe uncertain,
knowledge is consistent, as well as the set of prioritized goals (which have to
be fulfilled as far as possible), the method for evaluating decisions on the
basis of arguments agrees with the possibility theory-based approach to
decision-making under uncertainty. Taking advantage of its relation with formal
approaches to defeasible argumentation, the proposed framework can be
generalized in case of partially inconsistent knowledge, or goal bases.
|
1207.4131
|
Exponential Families for Conditional Random Fields
|
cs.LG stat.ML
|
In this paper we define conditional random fields in reproducing kernel Hilbert
spaces and show connections to Gaussian Process classification. More
specifically, we prove decomposition results for undirected graphical models
and we give constructions for kernels. Finally we present efficient means of
solving the optimization problem using reduced rank decompositions and we show
how stationarity can be exploited efficiently in the optimization process.
|
1207.4132
|
MOB-ESP and other Improvements in Probability Estimation
|
cs.LG cs.AI stat.ML
|
A key prerequisite to optimal reasoning under uncertainty in intelligent
systems is to start with good class probability estimates. This paper improves
on the current best probability estimation trees (Bagged-PETs) and also
presents a new ensemble-based algorithm (MOB-ESP). Comparisons are made using
several benchmark datasets and multiple metrics. These experiments show that
MOB-ESP outputs significantly more accurate class probabilities than either the
baseline B-PETs algorithm or the enhanced version presented here (EB-PETs).
These results are based on metrics closely associated with the average accuracy
of the predictions. MOB-ESP also provides much better probability rankings than
B-PETs. The paper further suggests how these estimation techniques can be
applied in concert with a broader category of classifiers.
|
1207.4133
|
"Ideal Parent" Structure Learning for Continuous Variable Networks
|
cs.LG stat.ML
|
In recent years, there is a growing interest in learning Bayesian networks
with continuous variables. Learning the structure of such networks is a
computationally expensive procedure, which limits most applications to
parameter learning. This problem is even more acute when learning networks with
hidden variables. We present a general method for significantly speeding the
structure search algorithm for continuous variable networks with common
parametric distributions. Importantly, our method facilitates the addition of
new hidden variables into the network structure efficiently. We demonstrate the
method on several data sets, both for learning structure on fully observable
data, and for introducing new hidden variables during structure search.
|
1207.4134
|
Bayesian Learning in Undirected Graphical Models: Approximate MCMC
algorithms
|
cs.LG stat.ML
|
Bayesian learning in undirected graphical models, that is, computing posterior
distributions over parameters and predictive quantities, is exceptionally
difficult. We conjecture that for general undirected models, there are no
tractable MCMC (Markov Chain Monte Carlo) schemes giving the correct
equilibrium distribution over parameters. While this intractability, due to the
partition function, is familiar to those performing parameter optimisation,
Bayesian learning of posterior distributions over undirected model parameters
has been unexplored and poses novel challenges. We propose several approximate
MCMC schemes and test them on fully observed binary models (Boltzmann machines)
for a small coronary heart disease data set and larger artificial systems.
While approximations must perform well on the model, their interaction with the
sampling scheme is also important. Samplers based on variational mean-field
approximations generally performed poorly; more advanced methods using loopy
propagation, brief sampling and stochastic dynamics lead to acceptable
parameter posteriors. Finally, we demonstrate these techniques on a Markov
random field with hidden variables.
|
1207.4135
|
Case-Factor Diagrams for Structured Probabilistic Modeling
|
cs.AI
|
We introduce a probabilistic formalism subsuming Markov random fields of
bounded tree width and probabilistic context free grammars. Our models are
based on a representation of Boolean formulas that we call case-factor diagrams
(CFDs). CFDs are similar to binary decision diagrams (BDDs) but are concise for
circuits of bounded tree width (unlike BDDs) and can concisely represent the
set of parse trees over a given string under a given context free grammar (also
unlike BDDs). A probabilistic model consists of a CFD defining a feasible set of
Boolean assignments and a weight (or cost) for each individual Boolean
variable. We give an inside-outside algorithm for simultaneously computing the
marginal of each Boolean variable, and a Viterbi algorithm for finding the
minimum cost variable assignment. Both algorithms run in time proportional to
the size of the CFD.
|
1207.4136
|
Convolutional Factor Graphs as Probabilistic Models
|
cs.AI
|
Based on a recent development in the area of error control coding, we
introduce the notion of convolutional factor graphs (CFGs) as a new class of
probabilistic graphical models. In this context, the conventional factor graphs
are referred to as multiplicative factor graphs (MFGs). This paper shows that
CFGs are natural models for probability functions when summation of independent
latent random variables is involved. In particular, CFGs capture a large class
of linear models, where the linearity is in the sense that the observed
variables are obtained as a linear transformation of the latent variables
taking arbitrary distributions. We use Gaussian models and independent factor
models as examples to demonstrate the use of CFGs. The requirement of a linear
transformation between latent variables (with certain independence restrictions)
and the observed variables, to an extent, limits the modelling flexibility of
CFGs. This structural restriction however provides a powerful analytic tool to
the framework of CFGs; that is, upon taking the Fourier transform of the
function represented by the CFG, the resulting function is represented by an MFG
with identical structure. This Fourier transform duality allows inference
problems on a CFG to be solved on the corresponding dual MFG.
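The core observation, that sums of independent latent variables induce convolutional rather than multiplicative factor combinations, can be illustrated directly; this toy probability-mass convolution is my own example, not the paper's notation:

```python
def convolve_pmf(p, q):
    """PMF of X + Y for independent X ~ p, Y ~ q on supports 0..len-1.

    This is the factor combination a convolutional factor graph encodes;
    after a Fourier transform it becomes a pointwise (multiplicative) product.
    """
    r = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r
```

Two fair coin flips convolve to [0.25, 0.5, 0.25], the distribution of their sum; taking the discrete Fourier transform of each PMF turns the convolution into a pointwise product, which is the duality between CFGs and MFGs that the abstract exploits.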
|
1207.4137
|
An Empirical Evaluation of Possible Variations of Lazy Propagation
|
cs.AI
|
As real-world Bayesian networks continue to grow larger and more complex, it
is important to investigate the possibilities for improving the performance of
existing algorithms of probabilistic inference. Motivated by examples, we
investigate the dependency of the performance of Lazy propagation on the
message computation algorithm. We show how Symbolic Probabilistic Inference
(SPI) and Arc-Reversal (AR) can be used for computation of clique to clique
messages in addition to the traditional use of Variable Elimination (VE).
The paper also presents the results of an empirical evaluation of the
performance of Lazy propagation using VE, SPI, and AR as the message
computation algorithm. The results of the empirical evaluation show that for
most networks, the performance of inference did not depend on the choice of
message computation algorithm, but for some randomly generated networks the
choice had an impact on both space and time performance. In the cases where the
choice had an impact, AR produced the best results.
|
1207.4138
|
Active Model Selection
|
cs.LG stat.ML
|
Classical learning assumes the learner is given a labeled data sample, from
which it learns a model. The field of Active Learning deals with the situation
where the learner begins not with a training sample, but instead with resources
that it can use to obtain information to help identify the optimal model. To
better understand this task, this paper presents and analyses the simplified
"(budgeted) active model selection" version, which captures the pure
exploration aspect of many active learning problems in a clean and simple
problem formulation. Here the learner can use a fixed budget of "model probes"
(where each probe evaluates the specified model on a random indistinguishable
instance) to identify which of a given set of possible models has the highest
expected accuracy. Our goal is a policy that sequentially determines which
model to probe next, based on the information observed so far. We present a
formal description of this task, and show that it is NP-hard in general. We then
investigate a number of algorithms for this task, including several existing
ones (e.g., "Round-Robin", "Interval Estimation", "Gittins") as well as some
novel ones (e.g., "Biased-Robin"), describing first their approximation
properties and then their empirical performance on various problem instances.
We observe empirically that the simple biased-robin algorithm significantly
outperforms the other algorithms in the case of identical costs and priors.
|
1207.4139
|
An Extended Cencov-Campbell Characterization of Conditional Information
Geometry
|
cs.LG stat.ML
|
We formulate and prove an axiomatic characterization of conditional
information geometry, for both the normalized and the nonnormalized cases. This
characterization extends the axiomatic derivation of the Fisher geometry by
Cencov and Campbell to the cone of positive conditional models, and as a
special case to the manifold of conditional distributions. Due to the close
connection between the conditional I-divergence and the product Fisher
information metric the characterization provides a new axiomatic interpretation
of the primal problems underlying logistic regression and AdaBoost.
|
1207.4140
|
Selection of Identifiability Criteria for Total Effects by using Path
Diagrams
|
stat.ME cs.AI stat.AP
|
Pearl has provided the back door criterion, the front door criterion and the
conditional instrumental variable (IV) method as identifiability criteria for
total effects. In some situations, these three criteria can be applied to
identifying total effects simultaneously. For the purpose of increasing
estimation accuracy, this paper compares the three ways of identifying total
effects in terms of the asymptotic variance, and concludes that in some
situations the superior one can be recognized directly from the graph
structure.
|
1207.4141
|
Pre-Selection of Independent Binary Features: An Application to
Diagnosing Scrapie in Sheep
|
cs.AI cs.CE
|
Suppose that the only available information in a multi-class problem are
expert estimates of the conditional probabilities of occurrence for a set of
binary features. The aim is to select a subset of features to be measured in
subsequent data collection experiments. In the absence of any information about
the dependencies between the features, we assume that all features are
conditionally independent and hence choose the Naive Bayes classifier as the
optimal classifier for the problem. Even in this (seemingly trivial) case of
complete knowledge of the distributions, choosing an optimal feature subset is
not straightforward. We discuss the properties and implementation details of
Sequential Forward Selection (SFS) as a feature selection procedure for the
current problem. A sensitivity analysis was carried out to investigate whether
the same features are selected when the probabilities vary around the estimated
values. The procedure is illustrated with a set of probability estimates for
Scrapie in sheep.
|
1207.4142
|
Conditional Chow-Liu Tree Structures for Modeling Discrete-Valued Vector
Time Series
|
cs.LG stat.ML
|
We consider the problem of modeling discrete-valued vector time series data
using extensions of Chow-Liu tree models to capture both dependencies across
time and dependencies across variables. Conditional Chow-Liu tree models are
introduced, as an extension to standard Chow-Liu trees, for modeling
conditional rather than joint densities. We describe learning algorithms for
such models and show how they can be used to learn parsimonious representations
for the output distributions in hidden Markov models. These models are applied
to the important problem of simulating and forecasting daily precipitation
occurrence for networks of rain stations. To demonstrate the effectiveness of
the models, we compare their performance versus a number of alternatives using
historical precipitation data from Southwestern Australia and the Western
United States. We illustrate how the structure and parameters of the models can
be used to provide an improved meteorological interpretation of such data.
|
1207.4143
|
Modeling Waveform Shapes with Random Effects Segmental Hidden Markov
Models
|
stat.AP cs.CE
|
In this paper we describe a general probabilistic framework for modeling
waveforms such as heartbeats from ECG data. The model is based on segmental
hidden Markov models (as used in speech recognition) with the addition of
random effects to the generative model. The random effects component of the
model handles shape variability across different waveforms within a general
class of waveforms of similar shape. We show that this probabilistic model
provides a unified framework for learning these models from sets of waveform
data as well as parsing, classification, and prediction of new waveforms. We
derive a computationally efficient EM algorithm to fit the model on multiple
waveforms, and introduce a scoring method that evaluates a test waveform based
on its shape. Results on two real-world data sets demonstrate that the random
effects methodology leads to improved accuracy (compared to alternative
approaches) on classification and segmentation of real-world waveforms.
|
1207.4144
|
A Generative Bayesian Model for Aggregating Experts' Probabilities
|
cs.LG stat.ML
|
In order to improve forecasts, a decisionmaker often combines probabilities
given by various sources, such as human experts and machine learning
classifiers. When few training data are available, aggregation can be improved
by incorporating prior knowledge about the event being forecasted and about
salient properties of the experts. To this end, we develop a generative
Bayesian aggregation model for probabilistic classification. The model includes
an event-specific prior, measures of individual experts' bias, calibration,
accuracy, and a measure of dependence between experts. Rather than require
absolute measures, we show that aggregation may be expressed in terms of
relative accuracy between experts. The model results in a weighted logarithmic
opinion pool (LogOps) that satisfies consistency criteria such as the external
Bayesian property. We derive analytic solutions for independent and for
exchangeable experts. Empirical tests demonstrate the model's use, comparing
its accuracy with other aggregation methods.
|
1207.4145
|
Joint discovery of haplotype blocks and complex trait associations from
SNP sequences
|
q-bio.GN cs.CE stat.ME
|
Haplotypes, the global patterns of DNA sequence variation, have important
implications for identifying complex traits. Recently, blocks of limited
haplotype diversity have been discovered in human chromosomes, intensifying the
research on modelling the block structure as well as the transitions or
co-occurrence of the alleles in these blocks as a way to compress the
variability and infer the associations more robustly. The haplotype block
structure analysis is typically complicated by the fact that the phase
information for each SNP is missing, i.e., the observed allele pairs are not
given in a consistent order across the sequence. The techniques for
circumventing this require additional information, such as family data, or a
more complex sequencing procedure. In this paper we present a hierarchical
statistical model and the associated learning and inference algorithms that
simultaneously deal with the allele ambiguity per locus, missing data, block
estimation, and the complex trait association. While the block structure may
differ from the structures inferred by other methods, which use the pedigree
information or previously known alleles, the parameters we estimate, including
the learned block structure and the estimated block transitions per locus,
define a good model of variability in the set. The method is completely
data-driven and can detect Crohn's disease from the SNP data taken from the
human chromosome 5q31 with the detection rate of 80% and a small error
variance.
|
1207.4146
|
A Bayesian Approach toward Active Learning for Collaborative Filtering
|
cs.LG cs.IR stat.ML
|
Collaborative filtering is a useful technique for exploiting the preference
patterns of a group of users to predict the utility of items for the active
user. In general, the performance of collaborative filtering depends on the
number of rated examples given by the active user. The more the number of rated
examples given by the active user, the more accurate the predicted ratings will
be. Active learning provides an effective way to acquire the most informative
rated examples from active users. Previous work on active learning for
collaborative filtering only considers the expected loss function based on the
estimated model, which can be misleading when the estimated model is
inaccurate. This paper takes one step further by taking into account the
posterior distribution of the estimated model, which results in a more robust
active learning algorithm. Empirical studies with datasets of movie ratings
show that when the number of ratings from the active user is restricted to be
small, active learning methods only based on the estimated model don't perform
well while the active learning method using the model distribution achieves
substantially better performance.
|
1207.4148
|
Dynamical Systems Trees
|
cs.LG stat.ML
|
We propose dynamical systems trees (DSTs) as a flexible class of models for
describing multiple processes that interact via a hierarchy of aggregating
parent chains. DSTs extend Kalman filters, hidden Markov models and nonlinear
dynamical systems to an interactive group scenario. Various individual
processes interact as communities and sub-communities in a tree structure that
is unrolled in time. To accommodate nonlinear temporal activity, each
individual leaf process is modeled as a dynamical system containing discrete
and/or continuous hidden states with discrete and/or Gaussian emissions.
Subsequent higher level parent processes act like hidden Markov models and
mediate the interaction between leaf processes or between other parent
processes in the hierarchy. Aggregator chains are parents of child processes
that they combine and mediate, yielding a compact overall parameterization. We
provide tractable inference and learning algorithms for arbitrary DST
topologies via an efficient structured mean-field algorithm. The diverse
applicability of DSTs is demonstrated by experiments on gene expression data
and by modeling group behavior in the setting of an American football game.
|
1207.4149
|
From Fields to Trees
|
stat.CO cs.LG
|
We present new MCMC algorithms for computing the posterior distributions and
expectations of the unknown variables in undirected graphical models with
regular structure. For demonstration purposes, we focus on Markov Random Fields
(MRFs). By partitioning the MRFs into non-overlapping trees, it is possible to
compute the posterior distribution of a particular tree exactly by conditioning
on the remaining tree. These exact solutions allow us to construct efficient
blocked and Rao-Blackwellised MCMC algorithms. We show empirically that tree
sampling is considerably more efficient than other partitioned sampling schemes
and the naive Gibbs sampler, even in cases where loopy belief propagation fails
to converge. We prove that tree sampling exhibits lower variance than the naive
Gibbs sampler and other naive partitioning schemes using the theoretical
measure of maximal correlation. We also construct new information theory tools
for comparing different MCMC schemes and show that, under these, tree sampling
is more efficient.
|
1207.4150
|
Solving Factored MDPs with Continuous and Discrete Variables
|
cs.AI
|
Although many real-world stochastic planning problems are more naturally
formulated by hybrid models with both discrete and continuous variables,
current state-of-the-art methods cannot adequately address these problems. We
present the first framework that can exploit problem structure for modeling and
solving hybrid problems efficiently. We formulate these problems as hybrid
Markov decision processes (MDPs with continuous and discrete state and action
variables), which we assume can be represented in a factored way using a hybrid
dynamic Bayesian network (hybrid DBN). This formulation also allows us to apply
our methods to collaborative multiagent settings. We present a new linear
program approximation method that exploits the structure of the hybrid MDP and
lets us compute approximate value functions more efficiently. In particular, we
describe a new factored discretization of continuous variables that avoids the
exponential blow-up of traditional approaches. We provide theoretical bounds on
the quality of such an approximation and on its scale-up potential. We support
our theoretical arguments with experiments on a set of control problems with up
to 28-dimensional continuous state space and 22-dimensional action space.
|
1207.4151
|
PAC-learning bounded tree-width Graphical Models
|
cs.LG cs.DS stat.ML
|
We show that the class of strongly connected graphical models with treewidth
at most k can be properly efficiently PAC-learnt with respect to the
Kullback-Leibler Divergence. Previous approaches to this problem, such as those
of Chow ([1]) and Höffgen ([7]), have shown that this class is PAC-learnable by
reducing it to a combinatorial optimization problem. However, for k > 1, this
problem is NP-complete ([15]), and so unless P=NP, these approaches will take
exponential amounts of time. Our approach differs significantly from these, in
that it first attempts to find approximate conditional independencies by
solving (polynomially many) submodular optimization problems, and then using a
dynamic programming formulation to combine the approximate conditional
independence information to derive a graphical model with underlying graph of
the tree-width specified. This gives us an efficient (polynomial time in the
number of random variables) PAC-learning algorithm which requires only
polynomial number of samples of the true distribution, and only polynomial
running time.
|
1207.4152
|
Maximum Entropy for Collaborative Filtering
|
cs.IR cs.LG
|
Within the task of collaborative filtering two challenges for computing
conditional probabilities exist. First, the amount of training data available
is typically sparse with respect to the size of the domain. Thus, support for
higher-order interactions is generally not present. Second, the variables that
we are conditioning upon vary for each query. That is, users label different
variables during each query. For this reason, there is no consistent input to
output mapping. To address these problems we propose a maximum entropy approach
using a non-standard measure of entropy. This approach can be simplified to
solving a set of linear equations that can be efficiently solved.
|
1207.4153
|
Annealed MAP
|
cs.AI
|
Maximum a Posteriori assignment (MAP) is the problem of finding the most
probable instantiation of a set of variables given the partial evidence on the
other variables in a Bayesian network. MAP has been shown to be an NP-hard
problem [22], even for constrained networks, such as polytrees [18]. Hence,
previous approaches often fail to yield any results for MAP problems in large
complex Bayesian networks. To address this problem, we propose the AnnealedMAP
algorithm, a simulated annealing-based MAP algorithm. The AnnealedMAP algorithm
simulates a non-homogeneous Markov chain whose invariant function is a
probability density that concentrates itself on the modes of the target
density. We tested this algorithm on several real Bayesian networks. The
results show that, while maintaining good quality of the MAP solutions, the
AnnealedMAP algorithm is also able to solve many problems that are beyond the
reach of previous approaches.
|
1207.4154
|
Discretized Approximations for POMDP with Average Cost
|
cs.AI cs.SY math.OC
|
In this paper, we propose a new lower approximation scheme for POMDP with
discounted and average cost criterion. The approximating functions are
determined by their values at a finite number of belief points, and can be
computed efficiently using value iteration algorithms for finite-state MDP.
While for discounted problems several lower approximation schemes have been
proposed earlier, ours seems the first of its kind for average cost problems.
We focus primarily on the average cost case, and we show that the corresponding
approximation can be computed efficiently using multi-chain algorithms for
finite-state MDP. We give a preliminary analysis showing that regardless of the
existence of the optimal average cost J in the POMDP, the approximation
obtained is a lower bound of the liminf optimal average cost function, and can
also be used to calculate an upper bound on the limsup optimal average cost
function, as well as bounds on the cost of executing the stationary policy
associated with the approximation. We show the convergence of the cost
approximation, when the optimal average cost is constant and the optimal
differential cost is continuous.
|
1207.4155
|
Similarity-Driven Cluster Merging Method for Unsupervised Fuzzy
Clustering
|
cs.LG stat.ML
|
In this paper, a similarity-driven cluster merging method is proposed for
unsupervised fuzzy clustering. The cluster merging method is used to resolve
the problem of cluster validation. Starting with an overspecified number of
clusters in the data, pairs of similar clusters are merged based on the
proposed similarity-driven cluster merging criterion. The similarity between
clusters is calculated by a fuzzy cluster similarity matrix, while an adaptive
threshold is used for merging. In addition, a modified generalized objective
function is used for prototype-based fuzzy clustering. The function includes
the p-norm distance measure as well as principal components of the clusters.
The number of the principal components is determined automatically from the
data being clustered. The properties of this unsupervised fuzzy clustering
algorithm are illustrated by several experiments.
|
1207.4156
|
Graph partition strategies for generalized mean field inference
|
cs.LG stat.ML
|
An autonomous variational inference algorithm for arbitrary graphical models
requires the ability to optimize variational approximations over the space of
model parameters as well as over the choice of tractable families used for the
variational approximation. In this paper, we present a novel combination of
graph partitioning algorithms with a generalized mean field (GMF) inference
algorithm. This combination optimizes over disjoint clustering of variables and
performs inference using those clusters. We provide a formal analysis of the
relationship between the graph cut and the GMF approximation, and explore
several graph partition strategies empirically. Our empirical results provide
rather clear support for a weighted version of MinCut as a useful clustering
algorithm for GMF inference, which is consistent with the implications from the
formal analysis.
|
1207.4157
|
An Integrated, Conditional Model of Information Extraction and
Coreference with Applications to Citation Matching
|
cs.LG cs.DL cs.IR stat.ML
|
Although information extraction and coreference resolution appear together in
many applications, most current systems perform them as independent steps. This
paper describes an approach to integrated inference for extraction and
coreference based on conditionally-trained undirected graphical models. We
discuss the advantages of conditional probability training, and of a
coreference model structure based on graph partitioning. On a data set of
research paper citations, we show significant reduction in error by using
extraction uncertainty to improve coreference citation matching accuracy, and
using coreference to improve the accuracy of the extracted fields.
|
1207.4158
|
On the Choice of Regions for Generalized Belief Propagation
|
cs.AI cs.LG
|
Generalized belief propagation (GBP) has proven to be a promising technique
for approximate inference tasks in AI and machine learning. However, the choice
of a good set of clusters to be used in GBP has remained more of an art than a
science to this day. This paper proposes a sequential approach to adding new
clusters of nodes and their interactions (i.e. "regions") to the approximation.
We first review and analyze the recently introduced region graphs and find that
three kinds of operations ("split", "merge" and "death") leave the free energy
and (under some conditions) the fixed points of GBP invariant. This leads to
the notion of "weakly irreducible" regions as the natural candidates to be
added to the approximation. Computational complexity of the GBP algorithm is
controlled by restricting attention to regions with small "region-width".
Combining the above with an efficient (i.e. local in the graph) measure to
predict the improved accuracy of GBP leads to the sequential "region pursuit"
algorithm for adding new regions bottom-up to the region graph. Experiments
show that this algorithm can indeed perform close to optimally.
|
1207.4160
|
Monotonicity in Bayesian Networks
|
cs.AI
|
For many real-life Bayesian networks, common knowledge dictates that the
output established for the main variable of interest increases with higher
values for the observable variables. We define two concepts of monotonicity to
capture this type of knowledge. We say that a network is isotone in
distribution if the probability distribution computed for the output variable
given specific observations is stochastically dominated by any such
distribution given higher-ordered observations; a network is isotone in mode if
a probability distribution given higher observations has a higher mode. We show
that establishing whether a network exhibits any of these properties of
monotonicity is coNP^PP-complete in general, and remains coNP-complete for
polytrees. We present an approximate algorithm for deciding whether a network
is monotone in distribution and illustrate its application to a real-life
network in oncology.
|
1207.4161
|
Identifying Conditional Causal Effects
|
cs.AI stat.ME
|
This paper concerns the assessment of the effects of actions from a
combination of nonexperimental data and causal assumptions encoded in the form
of a directed acyclic graph in which some variables are presumed to be
unobserved. We provide a procedure that systematically identifies causal effects
between two sets of variables conditioned on some other variables, in time
polynomial in the number of variables in the graph. The identifiable
conditional causal effects are expressed in terms of the observed joint
distribution.
|
1207.4162
|
ARMA Time-Series Modeling with Graphical Models
|
stat.AP cs.LG stat.ME
|
We express the classic ARMA time-series model as a directed graphical model.
In doing so, we find that the deterministic relationships in the model make it
effectively impossible to use the EM algorithm for learning model parameters.
To remedy this problem, we replace the deterministic relationships with
Gaussian distributions having a small variance, yielding the stochastic ARMA
model. This modification allows us to use the EM algorithm to learn parameters
and to forecast, even in situations where some data is missing. This
modification, in conjunction with the graphical-model approach, also allows us
to include cross predictors in situations where there are multiple time series
and/or additional nontemporal covariates. More surprisingly, experiments suggest
that the move to stochastic ARMA yields improved accuracy through better
smoothing. We demonstrate improvements afforded by cross prediction and better
smoothing on real data.
|
1207.4164
|
Factored Latent Analysis for far-field tracking data
|
cs.LG stat.ML
|
This paper uses Factored Latent Analysis (FLA) to learn a factorized,
segmental representation for observations of tracked objects over time.
Factored Latent Analysis is latent class analysis in which the observation
space is subdivided and each aspect of the original space is represented by a
separate latent class model. One could simply treat these factors as completely
independent and ignore their interdependencies or one could concatenate them
together and attempt to learn latent class structure for the complete
observation space. Alternatively, FLA allows the interdependencies to be
exploited in estimating an effective model, which is also capable of
representing a factored latent state. In this paper, FLA is used to learn a set
of factored latent classes to represent different modalities of observations of
tracked objects. Different characteristics of the state of tracked objects are
each represented by separate latent class models, including normalized size,
normalized speed, normalized direction, and position. This model also enables
effective temporal segmentation of these sequences. This method is data-driven,
unsupervised using only pairwise observation statistics. This data-driven and
unsupervised activity classification technique exhibits good performance in
multiple challenging environments.
|
1207.4166
|
Heuristic Search Value Iteration for POMDPs
|
cs.AI
|
We present a novel POMDP planning algorithm called heuristic search value
iteration (HSVI). HSVI is an anytime algorithm that returns a policy and a
provable bound on its regret with respect to the optimal policy. HSVI gets its
power by combining two well-known techniques: attention-focusing search
heuristics and piecewise linear convex representations of the value function.
HSVI's soundness and convergence have been proven. On some benchmark problems
from the literature, HSVI displays speedups of greater than 100 with respect to
other state-of-the-art POMDP value iteration algorithms. We also apply HSVI to
a new rover exploration problem 10 times larger than most POMDP problems in the
literature.
|
1207.4167
|
Predictive State Representations: A New Theory for Modeling Dynamical
Systems
|
cs.AI cs.LG
|
Modeling dynamical systems, both for control purposes and to make predictions
about their behavior, is ubiquitous in science and engineering. Predictive
state representations (PSRs) are a recently introduced class of models for
discrete-time dynamical systems. The key idea behind PSRs and the closely
related OOMs (Jaeger's observable operator models) is to represent the state of
the system as a set of predictions of observable outcomes of experiments one
can do in the system. This makes PSRs rather different from history-based
models such as nth-order Markov models and hidden-state-based models such as
HMMs and POMDPs. We introduce an interesting construct, the system-dynamics
matrix, and show how PSRs can be derived simply from it. We also use this
construct to show formally that PSRs are more general than both nth-order
Markov models and HMMs/POMDPs. Finally, we discuss the main difference between
PSRs and OOMs and conclude with directions for future work.
|
1207.4168
|
A New Characterization of Probabilities in Bayesian Networks
|
cs.AI
|
We characterize probabilities in Bayesian networks in terms of algebraic
expressions called quasi-probabilities. These are arrived at by casting
Bayesian networks as noisy AND-OR-NOT networks, and viewing the subnetworks
that lead to a node as arguments for or against a node. Quasi-probabilities are
in a sense the "natural" algebra of Bayesian networks: we can easily compute
the marginal quasi-probability of any node recursively, in a compact form; and
we can obtain the joint quasi-probability of any set of nodes by multiplying
their marginals (using an idempotent product operator). Quasi-probabilities are
easily manipulated to improve the efficiency of probabilistic inference. They
also turn out to be representable as square-wave pulse trains, and joint and
marginal distributions can be computed by multiplication and complementation of
pulse trains.
|
1207.4169
|
The Author-Topic Model for Authors and Documents
|
cs.IR cs.LG stat.ML
|
We introduce the author-topic model, a generative model for documents that
extends Latent Dirichlet Allocation (LDA; Blei, Ng, & Jordan, 2003) to include
authorship information. Each author is associated with a multinomial
distribution over topics and each topic is associated with a multinomial
distribution over words. A document with multiple authors is modeled as a
distribution over topics that is a mixture of the distributions associated with
the authors. We apply the model to a collection of 1,700 NIPS conference papers
and 160,000 CiteSeer abstracts. Exact inference is intractable for these
datasets and we use Gibbs sampling to estimate the topic and author
distributions. We compare the performance with two other generative models for
documents, which are special cases of the author-topic model: LDA (a topic
model) and a simple author model in which each author is associated with a
distribution over words rather than a distribution over topics. We show topics
recovered by the author-topic model, and demonstrate applications to computing
similarity between authors and entropy of author output.
|
1207.4170
|
Evidence-invariant Sensitivity Bounds
|
cs.AI
|
The sensitivities revealed by a sensitivity analysis of a probabilistic
network typically depend on the entered evidence. For a real-life network
therefore, the analysis is performed a number of times, with different
evidence. Although efficient algorithms for sensitivity analysis exist, a
complete analysis is often infeasible because of the large range of possible
combinations of observations. In this paper we present a method for studying
sensitivities that are invariant to the evidence entered. Our method builds
upon the idea of establishing bounds between which a parameter can be varied
without ever inducing a change in the most likely value of a variable of
interest.
|
1207.4172
|
Variational Chernoff Bounds for Graphical Models
|
cs.LG stat.ML
|
Recent research has made significant progress on the problem of bounding log
partition functions for exponential family graphical models. Such bounds have
associated dual parameters that are often used as heuristic estimates of the
marginal probabilities required in inference and learning. However these
variational estimates do not give rigorous bounds on marginal probabilities,
nor do they give estimates for probabilities of more general events than simple
marginals. In this paper we build on this recent work by deriving rigorous
upper and lower bounds on event probabilities for graphical models. Our
approach is based on the use of generalized Chernoff bounds to express bounds
on event probabilities in terms of convex optimization problems; these
optimization problems, in turn, require estimates of generalized log partition
functions. Simulations indicate that this technique can result in useful,
rigorous bounds to complement the heuristic variational estimates, with
comparable computational cost.
|
1207.4173
|
Robustness of Causal Claims
|
cs.AI stat.ME
|
A causal claim is any assertion that invokes causal relationships between
variables, for example that a drug has a certain effect on preventing a
disease. Causal claims are established through a combination of data and a set
of causal assumptions called a causal model. A claim is robust when it is
insensitive to violations of some of the causal assumptions embodied in the
model. This paper gives a formal definition of this notion of robustness and
establishes a graphical condition for quantifying the degree of robustness of a
given causal claim. Algorithms for computing the degree of robustness are also
presented.
|
1207.4174
|
Robust Probabilistic Inference in Distributed Systems
|
cs.AI cs.DC
|
Probabilistic inference problems arise naturally in distributed systems such
as sensor networks and teams of mobile robots. Inference algorithms that use
message passing are a natural fit for distributed systems, but they must be
robust to the failure situations that arise in real-world settings, such as
unreliable communication and node failures. Unfortunately, the popular
sum-product algorithm can yield very poor estimates in these settings because
the nodes' beliefs before convergence can be arbitrarily different from the
correct posteriors. In this paper, we present a new message passing algorithm
for probabilistic inference which provides several crucial guarantees that the
standard sum-product algorithm does not. Not only does it converge to the
correct posteriors, but it is also guaranteed to yield a principled
approximation at any point before convergence. In addition, the computational
complexity of the message passing updates depends only upon the model, and is
independent of the network topology of the distributed system. We demonstrate the
approach with detailed experimental results on a distributed sensor calibration
task using data from an actual sensor network deployment.
|
1207.4175
|
On Modeling Profiles instead of Values
|
cs.AI
|
We consider the problem of estimating the distribution underlying an observed
sample of data. Instead of maximum likelihood, which maximizes the probability
of the observed values, we propose a different estimate, the high-profile
distribution, which maximizes the probability of the observed profile: the
number of symbols appearing any given number of times. We determine the
high-profile distribution of several data samples, establish some of its
general properties, and show that when the number of distinct symbols observed
is small compared to the data size, the high-profile and maximum-likelihood
distributions are roughly the same, but when the number of symbols is large,
the distributions differ, and high-profile better explains the data.
|
1207.4176
|
Learning Diagnostic Policies from Examples by Systematic Search
|
cs.AI
|
A diagnostic policy specifies what test to perform next, based on the results
of previous tests, and when to stop and make a diagnosis. Cost-sensitive
diagnostic policies perform tradeoffs between (a) the cost of tests and (b) the
cost of misdiagnoses. An optimal diagnostic policy minimizes the expected total
cost. We formalize this diagnosis process as a Markov Decision Process (MDP).
We investigate two types of algorithms for solving this MDP: systematic search
based on the AO* algorithm and greedy search (particularly the Value of Information
method). We investigate the issue of learning the MDP probabilities from
examples, but only as they are relevant to the search for good policies. We
neither learn nor assume a Bayesian network for the diagnosis process.
Regularizers
are developed to control overfitting and speed up the search. This research is
the first that integrates overfitting prevention into systematic search. The
paper has two contributions: it discusses the factors that make systematic
search feasible for diagnosis, and it shows experimentally, on benchmark data
sets, that systematic search methods produce better diagnostic policies than
greedy methods.
|
1207.4177
|
Hybrid Influence Diagrams Using Mixtures of Truncated Exponentials
|
cs.AI
|
Mixtures of truncated exponentials (MTE) potentials are an alternative to
discretization for representing continuous chance variables in influence
diagrams. Also, MTE potentials can be used to approximate utility functions.
This paper introduces MTE influence diagrams, which can represent decision
problems without restrictions on the relationships between continuous and
discrete chance variables, without limitations on the distributions of
continuous chance variables, and without limitations on the nature of the
utility functions. In MTE influence diagrams, all probability distributions and
the joint utility function (or its multiplicative factors) are represented by
MTE potentials and decision nodes are assumed to have discrete state spaces.
MTE influence diagrams are solved by variable elimination using a fusion
algorithm.
|
1207.4179
|
Probabilistic index maps for modeling natural signals
|
cs.CV
|
One of the major problems in modeling natural signals is that signals with
very similar structure may locally have completely different measurements,
e.g., images taken under different illumination conditions, or the speech
signal captured in different environments. While there have been many
successful attempts to address these problems in application-specific settings,
we believe that underlying a large set of problems in signal representation is
a representational deficiency of intensity-derived local measurements that are
the basis of most efficient models. We argue that interesting structure in
signals is better captured when the signal is defined as a matrix whose
entries are discrete indices to a separate palette of possible measurements. In
order to model the variability in signal structure, we define a signal class
not by a single index map, but by a probability distribution over the index
maps, which can be estimated from the data, and which we call probabilistic
index maps. The existing algorithm can be adapted to work with this
representation. Furthermore, the probabilistic index map representation leads
to algorithms with computational costs proportional to either the size of the
palette or the log of the size of the palette, making the cost of significantly
increased invariance to non-structural changes quite bearable. We illustrate
the benefits of the probabilistic index map representation in several
applications in computer vision and speech processing.
|