| id | title | categories | abstract |
|---|---|---|---|
1301.0127 | A Semi-automated Statistical Algorithm for Object Separation | cs.CV | We explicate a semi-automated statistical algorithm for object identification
and segregation in both gray scale and color images. The algorithm makes
optimal use of the observation that definite objects in an image are typically
represented by pixel values having narrow Gaussian distributions about
characteristic mean values. Furthermore, for visually distinct objects, the
corresponding Gaussian distributions have negligible overlap with each other
and hence the Mahalanobis distance between these distributions is large. These
statistical facts enable one to sub-divide images using multiple thresholds of
variable sizes, each segregating similar objects. The procedure incorporates
the sensitivity of the human eye to gray pixel values into the variable
threshold size, while mapping the Gaussian distributions into localized
\delta-functions, for object separation. The effectiveness of this recursive
statistical algorithm is demonstrated using a wide variety of images.
|
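The Gaussian-threshold idea described in the abstract above can be sketched as follows. The class means, standard deviations, and the 3-sigma window are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def mahalanobis_1d(mu1, sigma1, mu2, sigma2):
    # Separation of two 1-D Gaussians in units of their pooled spread;
    # a large value means the distributions barely overlap.
    return abs(mu1 - mu2) / np.sqrt(sigma1**2 + sigma2**2)

def separate_objects(image, classes, k=3.0):
    # Label each pixel with the first class whose mean it lies within
    # k standard deviations of; 0 marks unassigned pixels.
    labels = np.zeros(image.shape, dtype=int)
    for idx, (mu, sigma) in enumerate(classes, start=1):
        labels[(labels == 0) & (np.abs(image - mu) <= k * sigma)] = idx
    return labels

# Synthetic image: dark background (class 1) with a bright square (class 2).
rng = np.random.default_rng(0)
img = rng.normal(50, 5, size=(32, 32))
img[8:16, 8:16] = rng.normal(200, 5, size=(8, 8))

d = mahalanobis_1d(50, 5, 200, 5)             # large: negligible overlap
labels = separate_objects(img, [(50, 5), (200, 5)])
```

A large Mahalanobis separation between the two pixel populations is what makes a simple per-class threshold window sufficient here.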
1301.0140 | The idempotent Radon--Nikodym theorem has a converse statement | math.FA cs.IT math.IT | Idempotent integration is an analogue of the Lebesgue integration where
$\sigma$-additive measures are replaced by $\sigma$-maxitive measures. It has
proved useful in many areas of mathematics such as fuzzy set theory,
optimization, idempotent analysis, large deviation theory, or extreme value
theory. Existence of Radon--Nikodym derivatives, which turns out to be crucial
in all of these applications, was proved by Sugeno and Murofushi. Here we show
a converse statement to this idempotent version of the Radon--Nikodym theorem,
i.e. we characterize the $\sigma$-maxitive measures that have the
Radon--Nikodym property.
|
1301.0142 | Semi-Supervised Domain Adaptation with Non-Parametric Copulas | stat.ML cs.LG | A new framework based on the theory of copulas is proposed to address semi-
supervised domain adaptation problems. The presented method factorizes any
multivariate density into a product of marginal distributions and bivariate
copula functions. Therefore, changes in each of these factors can be detected
and corrected to adapt a density model across different learning domains.
Importantly, we introduce a novel vine copula model, which allows for this
factorization in a non-parametric manner. Experimental results on regression
problems with real-world data illustrate the efficacy of the proposed approach
when compared to state-of-the-art techniques.
|
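For context, the factorization referred to in the abstract above follows from the density form of Sklar's theorem; in the bivariate case it reads:

```latex
% Density form of Sklar's theorem (bivariate case): a joint density
% factorizes into its marginals times a copula density evaluated at the CDFs.
f(x, y) \;=\; f_X(x)\, f_Y(y)\, c\bigl(F_X(x),\, F_Y(y)\bigr)
```

Under domain shift, each factor (the marginals or the copula) can change and be re-estimated separately, which is the adaptation idea the abstract describes.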
1301.0148 | Markov Chain Order estimation with Conditional Mutual Information | physics.data-an cs.IT math.IT stat.ME | We introduce the Conditional Mutual Information (CMI) for the estimation of
the Markov chain order. For a Markov chain of $K$ symbols, we define CMI of
order $m$, $I_c(m)$, as the mutual information of two variables in the chain
being $m$ time steps apart, conditioning on the intermediate variables of the
chain. We find approximate analytic significance limits based on the estimation
bias of CMI and develop a randomization significance test of $I_c(m)$, where
the randomized symbol sequences are formed by random permutation of the
components of the original symbol sequence. The significance test is applied
for increasing $m$ and the Markov chain order is estimated by the last order
for which the null hypothesis is rejected. We demonstrate the appropriateness of
CMI-testing on Monte Carlo simulations and compare it to the Akaike and
Bayesian information criteria, the maximal fluctuation method (Peres-Shields
estimator) and a likelihood ratio test for increasing orders using
$\phi$-divergence. The order criterion of CMI-testing turns out to be superior
for orders larger than one, but its effectiveness for large orders depends on
data availability. In view of the results from the simulations, we interpret
the estimated orders by the CMI-testing and the other criteria on genes and
intergenic regions of DNA chains.
|
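A minimal plug-in sketch of the CMI estimator and the randomization test described above; the chain parameters, sequence length, and permutation count are illustrative assumptions:

```python
import numpy as np
from collections import Counter

def entropy(symbols):
    # Plug-in (maximum-likelihood) entropy of a list of hashable items, in nats.
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def cmi(seq, m):
    # I(x_t ; x_{t+m} | x_{t+1}, ..., x_{t+m-1}) via plug-in entropies:
    # I(X;Y|Z) = H(X,Z) + H(Z,Y) - H(X,Z,Y) - H(Z).
    win = [tuple(seq[i:i + m + 1]) for i in range(len(seq) - m)]
    past = [w[:-1] for w in win]    # (x_t, intermediates)
    futu = [w[1:] for w in win]     # (intermediates, x_{t+m})
    mid = [w[1:-1] for w in win]    # intermediates only
    return entropy(past) + entropy(futu) - entropy(win) - entropy(mid)

def significance_test(seq, m, n_perm=200, seed=0):
    # Randomization test: permuting the sequence destroys all temporal
    # dependence, giving a null distribution for the CMI estimate.
    rng = np.random.default_rng(seed)
    obs = cmi(seq, m)
    null = [cmi(list(rng.permutation(seq)), m) for _ in range(n_perm)]
    p = (sum(v >= obs for v in null) + 1) / (n_perm + 1)
    return obs, p

# Illustrative first-order binary chain that repeats its last symbol w.p. 0.9.
rng = np.random.default_rng(1)
seq = [0]
for _ in range(1999):
    seq.append(seq[-1] if rng.random() < 0.9 else 1 - seq[-1])
obs1, p1 = significance_test(seq, 1)
```

For this strongly dependent chain, the order-1 CMI is far above the permutation null, so the test rejects independence at lag 1.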
1301.0167 | Classifier Fusion Method to Recognize Handwritten Kannada Numerals | cs.CV | Optical Character Recognition (OCR) is one of the important fields in image
processing and pattern recognition domain. Handwritten character recognition
has always been a challenging task. Relatively little work has been reported on
the recognition of handwritten characters for the south Indian languages.
Kannada is one such south Indian language and is also one of the official
languages of India. Accurate recognition of Kannada characters is a challenging
task because of the high degree of similarity between the characters. Hence,
good quality features are to be extracted and better classifiers are needed to
improve the accuracy of the OCR for Kannada characters. This paper explores the
effectiveness of feature extraction method like run length count (RLC) and
directional chain code (DCC) for the recognition of handwritten Kannada
numerals. In this paper, a classifier fusion method is implemented to improve
the recognition rate. For the classifier fusion, we have considered K-nearest
neighbour (KNN) and Linear classifier (LC). The novelty of this method is to
achieve better accuracy with few features using a classifier fusion approach.
The proposed method achieves an average recognition rate of 96%.
|
1301.0170 | Noise-Induced Spatial Pattern Formation in Stochastic Reaction-Diffusion
Systems | q-bio.QM cs.SY q-bio.PE | This paper is concerned with stochastic reaction-diffusion kinetics governed
by the reaction-diffusion master equation. Specifically, the primary goal of
this paper is to provide a mechanistic basis of Turing pattern formation that
is induced by intrinsic noise. To this end, we first derive an approximate
reaction-diffusion system by using linear noise approximation. We show that the
approximated system has a certain structure that is associated with a coupled
dynamic multi-agent system. This observation then helps us derive an efficient
computation tool to examine the spatial power spectrum of the intrinsic noise.
We numerically demonstrate that the result is quite effective for analyzing
noise-induced Turing patterns. Finally, we illustrate the theoretical mechanism
behind the noise-induced pattern formation with an $H_2$ norm interpretation of the
multi-agent system.
|
1301.0173 | Knowledge Discovery System For Fiber Reinforced Polymer Matrix Composite
Laminate | cs.AI cs.CE | In this paper a Knowledge Discovery System (KDS) is proposed and implemented
for the extraction of knowledge, namely the mean stiffness of a polymer composite
material when fibers are placed at different orientations. The cosine amplitude
method is implemented for retrieving compatible polymer matrix and
reinforcement fiber falling under the predicted fiber class from the
polymer and reinforcement databases respectively, based on the design
requirements. Fuzzy classification rules to classify fibers into short, medium
and long fiber classes are derived based on the fiber length and the computed
critical length of the fiber. The longitudinal and transverse moduli of a
Polymer Matrix Composite consisting of seven layers with different fiber volume
fractions and different fibers orientations at 0,15,30,45,60,75 and 90 degrees
are analyzed through the Rule-of-Mixture material design model. The analysis
results are represented in different graphical steps and have been measured
with statistical parameters. The data mining application implemented here
focuses on the mechanical problems of material design and analysis. Therefore,
this system is an expert decision support system for optimizing materials
performance and for designing lightweight, strong, and cost-effective polymer
composite materials.
|
1301.0176 | Similarity Measuring Approach for Engineering Materials Selection | cs.AI cs.CE | Advanced engineering materials design involves the exploration of massive
multidimensional feature spaces, the correlation of materials properties and
the processing parameters derived from disparate sources. The search for
alternative materials or processing property strategies, whether through
analytical, experimental or simulation approaches, has been a slow and arduous
task, punctuated by infrequent and often unexpected discoveries. A few systematic
efforts have been made to analyze the trends in data as a basis for
classifications and predictions. This is particularly due to the lack of large
amounts of organized data and, more importantly, the challenge of sifting
through them in a timely and efficient manner. The application of recent
advances in Data Mining to materials informatics represents the state of the art
of computational and experimental approaches for materials discovery. In this
paper a similarity-based engineering materials selection model is proposed and
implemented to select engineering materials based on the composite materials
constraints. The results obtained from this model support effective
decision making in advanced engineering materials design applications.
|
1301.0178 | Efficient Solutions for Weighted Sum Rate Maximization in Multicellular
Networks With Channel Uncertainties | cs.IT math.IT | The important problem of weighted sum rate maximization (WSRM) in a
multicellular environment is intrinsically sensitive to channel estimation
errors. In this paper, we study ways to maximize the weighted sum rate in a
linearly precoded multicellular downlink system where the receivers are
equipped with a single antenna. With perfect channel information available at
the base stations, we first present a novel fast converging algorithm that
solves the WSRM problem. Then, the assumption is relaxed to the case where the
error vectors in the channel estimates are assumed to lie in an uncertainty set
formed by the intersection of finite ellipsoids. As our main contributions, we
present two procedures to solve the intractable nonconvex robust designs based
on the worst case principle. The proposed iterative algorithms solve the
semidefinite programs in each of their steps and provably converge to a locally
optimal solution of the robust WSRM problem. The proposed approaches are
numerically compared against each other to ascertain their robustness towards
channel estimation imperfections. The results clearly indicate the performance
gain compared to the case when channel uncertainties are ignored in the design
process. For certain scenarios, we also quantify the gap between the proposed
approximations and exact solutions.
|
1301.0179 | A Novel Design Specification Distance (DSD) Based K-Mean Clustering
Performance Evaluation on Engineering Materials Database | cs.LG | Organizing data into semantically meaningful groups is one of the fundamental
modes of understanding and learning. Cluster analysis is the formal study of
methods and algorithms for understanding and learning from data. The K-mean
clustering algorithm is one of the most fundamental and simplest clustering
algorithms. When there is no prior knowledge about the distribution of data
sets, K-mean is the first choice for clustering with an initial number of clusters. In this paper a
novel distance metric called Design Specification (DS) distance measure
function is integrated with K-mean clustering algorithm to improve cluster
accuracy. The K-means algorithm with proposed distance measure maximizes the
cluster accuracy to 99.98% at P = 1.525, which is determined through the
iterative procedure. The performance of Design Specification (DS) distance
measure function with K - mean algorithm is compared with the performances of
other standard distance functions such as Euclidean, squared Euclidean, City
Block, and Chebyshev similarity measures deployed with the K-mean algorithm. The
proposed method is evaluated on the engineering materials database. The
experiments on cluster analysis and the outlier profiling show that there is an
excellent improvement in the performance of the proposed method.
|
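The abstract does not give the DS distance function itself, so the sketch below uses a Minkowski distance with a tunable exponent as a hypothetical stand-in, merely to show how a custom measure plugs into Lloyd's K-means iteration; the exponent 1.525 echoes the P value quoted above:

```python
import numpy as np

def kmeans_custom(X, k, dist, n_iter=50):
    # Lloyd's algorithm with a pluggable point-to-centroid distance.
    # Farthest-point initialization keeps the sketch deterministic.
    centers = [X[0]]
    while len(centers) < k:
        d = np.array([min(dist(x, c) for c in centers) for x in X])
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(n_iter):
        d = np.array([[dist(x, c) for c in centers] for x in X])
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

def minkowski(p):
    # Hypothetical stand-in for the paper's DS measure (not specified above).
    return lambda x, c: float((np.abs(x - c) ** p).sum() ** (1.0 / p))

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
labels, centers = kmeans_custom(X, 2, minkowski(1.525))
```

Swapping `minkowski(1.525)` for any other callable of two points is all that is needed to evaluate a different distance function under the same clustering loop.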
1301.0189 | A generalized theory of preferential linking | physics.soc-ph cs.SI | There are diverse mechanisms driving the evolution of social networks. A key
open question in understanding their evolution is: how do various
preferential linking mechanisms produce networks with different features? In
this paper we first empirically study preferential linking phenomena in an
evolving online social network, and find and validate a linear preference. We
propose an analyzable model which captures the real growth process of the
network and reveals the underlying mechanism dominating its evolution.
Furthermore, based on preferential linking, we propose a generalized model
reproducing the evolution of online social networks, present unified analytical
results describing network characteristics for 27 preference scenarios, and
explore the relation between preferential linking mechanism and network
features. We find that within the framework of preferential linking analytical
degree distributions can only be the combinations of finite kinds of functions
which are related to rational, logarithmic and inverse tangent functions, and
extremely complex network structure will emerge even for very simple sublinear
preferential linking. This work not only provides a verifiable origin for the
emergence of various network characteristics in social networks, but also bridges
the micro-level behaviors of individuals and the global organization of social
networks.
|
1301.0207 | Worst-case Asymmetric Distributed Source Coding | cs.IT math.IT | We consider a worst-case asymmetric distributed source coding problem where
an information sink communicates with $N$ correlated information sources to
gather their data. A data-vector $\bar{x} = (x_1, ..., x_N) \sim {\mathcal P}$
is derived from a discrete and finite joint probability distribution ${\mathcal
P} = p(x_1, ..., x_N)$ and component $x_i$ is revealed to the $i^{\textrm{th}}$
source, $1 \le i \le N$. We consider an asymmetric communication scenario where
only the sink is assumed to know distribution $\mathcal P$. We are interested
in computing the minimum number of bits that the sources must send, in the
worst-case, to enable the sink to losslessly learn any $\bar{x}$ revealed to
the sources.
We propose a novel information measure called information ambiguity to
perform the worst-case information-theoretic analysis and prove its various
properties. Then, we provide interactive communication protocols to solve the
above problem in two different communication scenarios. We also investigate the
role of block-coding in the worst-case analysis of distributed compression
problem and prove that it offers almost no compression advantage compared to
the scenarios where this problem is addressed, as in this paper, with only a
single instance of data-vector.
|
1301.0213 | Compressed Sensing with Linear Correlation Between Signal and
Measurement Noise | cs.IT math.IT | Existing convex relaxation-based approaches to reconstruction in compressed
sensing assume that noise in the measurements is independent of the signal of
interest. We consider the case of noise being linearly correlated with the
signal and introduce a simple technique for improving compressed sensing
reconstruction from such measurements. The technique is based on a linear model
of the correlation of additive noise with the signal. The modification of the
reconstruction algorithm based on this model is very simple and has negligible
additional computational cost compared to standard reconstruction algorithms,
but has not appeared in the existing literature. The proposed technique reduces
reconstruction error considerably in the case of linearly correlated
measurements and noise. Numerical experiments confirm the efficacy of the
technique. The technique is demonstrated with application to low-rate
quantization of compressed measurements, which is known to introduce correlated
noise, and improvements in reconstruction error compared to ordinary Basis
Pursuit De-Noising of up to approximately 7 dB are observed for 1 bit/sample
quantization. Furthermore, the proposed method is compared to Binary Iterative
Hard Thresholding which it is demonstrated to outperform in terms of
reconstruction error for sparse signals with a number of non-zero coefficients
greater than approximately 1/10th of the number of compressed measurements.
|
1301.0216 | Applying Strategic Multiagent Planning to Real-World Travel Sharing
Problems | cs.AI | Travel sharing, i.e., the problem of finding parts of routes which can be
shared by several travellers with different points of departure and
destinations, is a complex multiagent problem that requires taking into account
individual agents' preferences to come up with mutually acceptable joint plans.
In this paper, we apply state-of-the-art planning techniques to real-world
public transportation data to evaluate the feasibility of multiagent planning
techniques in this domain. Improving travel sharing technology has great
application value due to its ability to reduce the
environmental impact of travelling while providing benefits to travellers at
the same time. We propose a three-phase algorithm that utilises performant
single-agent planners to find individual plans in a simplified domain first,
then merges them using a best-response planner which ensures resulting
solutions are individually rational, and then maps the resulting plan onto the
full temporal planning domain to schedule actual journeys. The evaluation of
our algorithm on real-world, multi-modal public transportation data for the
United Kingdom shows linear scalability both in the scenario size and in the
number of agents, where trade-offs have to be made between total cost
improvement, the percentage of feasible timetables identified for journeys, and
the prolongation of these journeys. Our system constitutes the first
implementation of strategic multiagent planning algorithms in large-scale
domains and provides insights into the engineering process of translating
general domain-independent multiagent planning algorithms to real-world
applications.
|
1301.0239 | Surprise maximization reveals the community structure of complex
networks | cs.SI cond-mat.stat-mech physics.soc-ph q-bio.MN | How to determine the community structure of complex networks is an open
question. It is critical to establish the best strategies for community
detection in networks of unknown structure. Here, using standard synthetic
benchmarks, we show that none of the algorithms hitherto developed for
community structure characterization perform optimally. Significantly,
evaluating the results according to their modularity, the most popular measure
of the quality of a partition, systematically provides mistaken solutions.
However, a novel quality function, called Surprise, can be used to elucidate
which is the optimal division into communities. Consequently, we show that the
best strategy to find the community structure of all the networks examined
involves choosing among the solutions provided by multiple algorithms the one
with the highest Surprise value. We conclude that Surprise maximization
precisely reveals the community structure of complex networks.
|
1301.0254 | Group theory, group actions, evolutionary algorithms, and global
optimization | cs.NE math.DS math.OC math.RA | In this paper we use groups, group actions, and orbits to understand how
evolutionary algorithms solve nonconvex optimization problems.
|
1301.0259 | Triadic closure dynamics drives scaling-laws in social multiplex
networks | physics.soc-ph cs.SI physics.data-an | Social networks exhibit scaling-laws for several structural characteristics,
such as the degree distribution, the scaling of the attachment kernel, and the
clustering coefficients as a function of node degree. A detailed understanding
of whether and how these scaling laws are interrelated is missing so far, let
alone whether they can be understood through a common dynamical principle. We
propose a simple model for stationary network formation and show that the three
mentioned scaling relations follow as natural consequences of triadic closure.
The validity of the model is tested on multiplex data from a well studied
massive multiplayer online game. We find that the three scaling exponents
observed in the multiplex data for the friendship, communication and trading
networks can simultaneously be explained by the model. These results suggest
that triadic closure could be identified as one of the fundamental dynamical
principles in social multiplex network formation.
|
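A toy simulation of the triadic-closure mechanism described above; this is not the paper's model, and the closure probability, network size, and seed ring are illustrative assumptions. It shows closure raising the clustering coefficient relative to purely random attachment:

```python
import random
from collections import defaultdict

def grow_network(n, m_links, r, seed=0):
    # Each new edge from a random node u goes to a friend-of-a-friend
    # with probability r (triadic closure), else to a uniform random node.
    rng = random.Random(seed)
    adj = defaultdict(set)
    for i in range(n):                      # seed ring so neighbours exist
        adj[i].add((i + 1) % n)
        adj[(i + 1) % n].add(i)
    for _ in range(m_links):
        u = rng.randrange(n)
        fof = {w for v in adj[u] for w in adj[v]} - adj[u] - {u}
        if fof and rng.random() < r:
            v = rng.choice(sorted(fof))     # close a triangle
        else:
            v = rng.randrange(n)            # random attachment
        if v != u:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def avg_clustering(adj):
    # Mean local clustering coefficient over all nodes.
    total = 0.0
    for u, neigh in adj.items():
        nb = list(neigh)
        k = len(nb)
        if k < 2:
            continue
        links = sum(1 for i in range(k) for j in range(i + 1, k)
                    if nb[j] in adj[nb[i]])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

c_closure = avg_clustering(grow_network(200, 2000, r=0.8))
c_random = avg_clustering(grow_network(200, 2000, r=0.0))
```

Even this crude mechanism produces markedly higher clustering than random attachment, which is the qualitative effect the model above builds on.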
1301.0297 | Wyner-Ziv Coding in the Real Field Based on BCH-DFT Codes | cs.IT math.IT | We show how real-number codes can be used to compress correlated sources and
establish a new framework for distributed lossy source coding, in which we
quantize compressed sources instead of compressing quantized sources. This
change in the order of binning and quantization blocks makes it possible to
model correlation between continuous-valued sources more realistically and
compensate for the quantization error when the sources are completely
correlated. We focus on the asymmetric case, i.e., lossy source coding with
side information at the decoder, also known as Wyner-Ziv coding. The encoding
and decoding procedures are described in detail for discrete Fourier transform
(DFT) codes, both for syndrome- and parity-based approaches. We also extend the
parity-based approach to the case where the transmission channel is noisy and
perform distributed joint source-channel coding in this context. The proposed
system is well suited for low-delay communications. Furthermore, the
mean-squared reconstruction error (MSE) is shown to be less than or close to
the quantization error level, the ideal case in coding based on binary codes.
|
1301.0302 | MANCaLog: A Logic for Multi-Attribute Network Cascades (Technical
Report) | cs.AI cs.LO cs.MA cs.SI physics.soc-ph | The modeling of cascade processes in multi-agent systems in the form of
complex networks has in recent years become an important topic of study due to
its many applications: the adoption of commercial products, spread of disease,
the diffusion of an idea, etc. In this paper, we begin by identifying seven
desiderata that a framework for modeling such processes
should satisfy: the ability to represent attributes of both nodes and edges, an
explicit representation of time, the ability to represent non-Markovian
temporal relationships, representation of uncertain information, the ability to
represent competing cascades, allowance of non-monotonic diffusion, and
computational tractability. We then present the MANCaLog language, a formalism
based on logic programming that satisfies all these desiderata, and focus on
algorithms for finding minimal models (from which the outcome of cascades can
be obtained) as well as how this formalism can be applied in real world
scenarios. We are not aware of any other formalism in the literature that meets
all of the above requirements.
|
1301.0306 | Statistical Inference in Large Antenna Arrays under Unknown Noise
Pattern | cs.IT math.IT | In this article, a general information-plus-noise transmission model is
assumed, the receiver end of which is composed of a large number of sensors and
is unaware of the noise pattern. For this model, and under reasonable
assumptions, a set of results is provided for the receiver to perform
statistical eigen-inference on the information part. In particular, we
introduce new methods for the detection, counting, and the power and subspace
estimation of multiple sources composing the information part of the
transmission. The theoretical performance of some of these techniques is also
discussed. An exemplary application of these methods to array processing is
then studied in greater detail, leading in particular to a novel MUSIC-like
algorithm assuming unknown noise covariance.
|
1301.0363 | Employing functional interactions for characterization and detection of
sparse complexes from yeast PPI networks | cs.CE q-bio.MN | Over the last few years, several computational techniques have been devised
to recover protein complexes from the protein interaction (PPI) networks of
organisms. These techniques model "dense" subnetworks within PPI networks as
complexes. However, our comprehensive evaluations revealed that these
techniques fail to reconstruct many 'gold standard' complexes that are "sparse"
in the networks (only 71 recovered out of 123 known yeast complexes embedded in
a network of 9704 interactions among 1622 proteins). In this work, we propose a
novel index called Component-Edge (CE) score to quantitatively measure the
notion of "complex derivability" from PPI networks. Using this index, we
theoretically categorize complexes as "sparse" or "dense" with respect to a
given network. We then devise an algorithm SPARC that selectively employs
functional interactions to improve the CE scores of predicted complexes, and
thereby elevates many of the "sparse" complexes to "dense". This empowers
existing methods to detect these "sparse" complexes. We demonstrate that our
approach is effective in reconstructing a significant number of previously missed
complexes (104 recovered out of the 123 known complexes, a ~47% improvement).
|
1301.0369 | Constacyclic Codes over Finite Fields | cs.IT math.IT math.NT | An equivalence relation called isometry is introduced to classify
constacyclic codes over a finite field; the polynomial generators of
constacyclic codes of length $\ell^tp^s$ are characterized, where $p$ is the
characteristic of the finite field and $\ell$ is a prime different from $p$.
|
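For readers unfamiliar with the term, the standard definition (background, not spelled out in the abstract) is:

```latex
% A \lambda-constacyclic code of length n over \mathbb{F}_q is a linear code
% closed under the \lambda-constacyclic shift
(c_0, c_1, \ldots, c_{n-1}) \;\longmapsto\; (\lambda c_{n-1}, c_0, \ldots, c_{n-2}).
% Equivalently, it is an ideal of \mathbb{F}_q[x]/(x^n - \lambda), and its
% generator polynomial is a divisor of x^n - \lambda.
```

The polynomial generators characterized in the paper are precisely such divisors for lengths of the form $\ell^tp^s$.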
1301.0373 | Compressed Sensing Matrices from Fourier Matrices | cs.IT math.IT math.NA | The class of Fourier matrices is of special importance in compressed sensing
(CS). This paper concerns deterministic construction of compressed sensing
matrices from Fourier matrices. By using Katz' character sum estimation, we are
able to design a deterministic procedure to select rows from a Fourier matrix
to form a good compressed sensing matrix for sparse recovery. The sparsity
bound in our construction is similar to that of binary CS matrices constructed
by DeVore which greatly improves previous results for CS matrices from Fourier
matrices. Our approach also provides more flexibilities in terms of the
dimension of CS matrices. As a consequence, our construction yields
approximately mutually unbiased bases from Fourier matrices, which are of
particular interest to quantum information theory. This paper also contains a
useful improvement to Katz' character sum estimation for quadratic extensions,
with an elementary and transparent proof. Some numerical examples are included.
|
1301.0384 | Spectrum Sharing-based Multi-hop Decode-and-Forward Relay Networks under
Interference Constraints: Performance Analysis and Relay Position
Optimization | cs.IT math.IT | The exact closed-form expressions for outage probability and bit error rate
of spectrum sharing-based multi-hop decode-and-forward (DF) relay networks in
non-identical Rayleigh fading channels are derived. We also provide the
approximate closed-form expression for the system ergodic capacity. Utilizing
these tractable analytical formulas, we can study the impact of key network
parameters on the performance of cognitive multi-hop relay networks under
interference constraints. Using a linear network model, we derive an optimum
relay position scheme by numerically solving an optimization problem of
balancing average signal-to-noise ratio (SNR) of each hop. The numerical
results show that the optimal scheme leads to SNR performance gains of more
than 1 dB. All the analytical expressions are verified by Monte-Carlo
simulations, confirming the advantage of multi-hop DF relaying networks in
cognitive environments.
|
1301.0387 | Chaotic Analog-to-Information Conversion with Chaotic State Modulation | cs.IT math.IT | Chaotic compressive sensing is a nonlinear framework for compressive sensing.
Along the framework, this paper proposes a chaotic analog-to-information
converter, chaotic modulation, to acquire and reconstruct band-limited sparse
analog signals at sub-Nyquist rate. In the chaotic modulation, the sparse
signal is randomized through state modulation of continuous-time chaotic system
and one state output is sampled as compressive measurements. The reconstruction
is achieved through the estimation of the sparse coefficients with the principle
of chaotic impulsive synchronization and Lp-norm regularized nonlinear least
squares. The concept of supreme local Lyapunov exponents (SLLE) is introduced
to study the reconstructability. It is found that the sparse signals are
reconstructable, if the largest SLLE of the error dynamical system is negative.
As examples, the Lorenz system and Liu system excited by the sparse multi-tone
signals are taken to illustrate the principle and the performance.
|
1301.0427 | Zipf's law and L. Levin's probability distributions | cs.IT math.IT | Zipf's law in its basic incarnation is an empirical probability distribution
governing the frequency of usage of words in a language. As Terence Tao
recently remarked, it still lacks a convincing and satisfactory mathematical
explanation.
In this paper I suggest that at least in certain situations, Zipf's law can
be explained as a special case of the a priori distribution introduced and
studied by L. Levin. The Zipf ranking corresponding to diminishing probability
appears then as the ordering determined by the growing Kolmogorov complexity.
One argument justifying this assertion is the appeal to a recent
interpretation by Yu. Manin and M. Marcolli of asymptotic bounds for
error--correcting codes in terms of phase transition. In the respective
partition function, Kolmogorov complexity of a code plays the role of its
energy.
This version contains minor corrections and additions.
|
1301.0432 | A Self-Organizing Neural Scheme for Door Detection in Different
Environments | cs.CV | Doors are important landmarks for indoor mobile robot navigation and also
assist blind people to independently access unfamiliar buildings. Most existing
door detection algorithms are limited to familiar environments
because of restricted assumptions about color, texture and shape. In this paper
we propose a novel approach which employs feature based classification and uses
the Kohonen Self-Organizing Map (SOM) for the purpose of door detection.
Generic and stable features are used for the training of SOM that increase the
performance significantly: concavity, bottom-edge intensity profile and door
edges. To validate the robustness and generalizability of our method, we
collected a large dataset of real world door images from a variety of
environments and different lighting conditions. The algorithm achieves more
than 95% detection accuracy, which demonstrates that our door detection method is generic
and robust with variations of color, texture, occlusions, lighting condition,
scales, and viewpoints.
|
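A minimal Kohonen SOM in the spirit of the abstract above; the 3-D feature vectors are synthetic stand-ins for the paper's concavity, bottom-edge intensity, and door-edge features, and the grid size and decay schedules are illustrative assumptions:

```python
import numpy as np

def train_som(data, grid=(5, 5), n_iter=500, lr0=0.5, sigma0=2.0, seed=0):
    # Minimal 2-D Kohonen Self-Organizing Map with exponentially
    # decaying learning rate and neighbourhood radius.
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, data.shape[1]))
    coords = np.dstack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"))
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        d = ((weights - x) ** 2).sum(axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)   # best-matching unit
        lr = lr0 * np.exp(-t / n_iter)
        sigma = sigma0 * np.exp(-t / n_iter)
        g = np.exp(-((coords - np.array(bmu)) ** 2).sum(axis=2)
                   / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    # Grid coordinates of the unit whose weight vector is closest to x.
    d = ((weights - x) ** 2).sum(axis=2)
    return np.unravel_index(d.argmin(), d.shape)

# Hypothetical 3-D features standing in for (concavity, bottom-edge
# intensity, edge strength); two well-separated synthetic classes.
rng = np.random.default_rng(1)
doors = rng.normal([0.8, 0.2, 0.9], 0.05, (50, 3))
other = rng.normal([0.2, 0.7, 0.1], 0.05, (50, 3))
som = train_som(np.vstack([doors, other]))
```

After training, door-like and non-door-like feature vectors activate different regions of the map, which is the property a SOM-based classifier exploits.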
1301.0435 | Investigating the performance of Correspondence Algorithms in Vision
based Driver-assistance in Indoor Environment | cs.CV cs.RO | This paper presents the experimental comparison of fourteen stereo matching
algorithms in variant illumination conditions. Different adaptations of global
and local stereo matching techniques are chosen for evaluation The variant
strength and weakness of the chosen correspondence algorithms are explored by
employing the methodology of the prediction error strategy. The algorithms are
gauged on the basis of their performance on real world data set taken in
various indoor lighting conditions and at different times of the day
|
1301.0503 | Word Storms: Multiples of Word Clouds for Visual Comparison of Documents | cs.IR cs.DL cs.HC | Word clouds are a popular tool for visualizing documents, but they are not a
good tool for comparing documents, because identical words are not presented
consistently across different clouds. We introduce the concept of word storms,
a visualization tool for analysing corpora of documents. A word storm is a
group of word clouds, in which each cloud represents a single document,
juxtaposed to allow the viewer to compare and contrast the documents. We
present a novel algorithm that creates a coordinated word storm, in which words
that appear in multiple documents are placed in the same location, using the
same color and orientation, in all of the corresponding clouds. In this way,
similar documents are represented by similar-looking word clouds, making them
easier to compare and contrast visually. We evaluate the algorithm in two ways:
first, an automatic evaluation based on document classification; and second, a
user study. The results confirm that unlike standard word clouds, a coordinated
word storm better allows for visual comparison of documents.
|
1301.0528 | Adaptive Electricity Scheduling in Microgrids | cs.SY | Microgrid (MG) is a promising component for future smart grid (SG)
deployment. The balance of supply and demand of electric energy is one of the
most important requirements of MG management. In this paper, we present a novel
framework for smart energy management based on the concept of
quality-of-service in electricity (QoSE). Specifically, the resident
electricity demand is classified into basic usage and quality usage. The basic
usage is always guaranteed by the MG, while the quality usage is controlled
based on the MG state. The microgrid control center (MGCC) aims to minimize the
MG operation cost and maintain the outage probability of quality usage, i.e.,
QoSE, below a target value, by scheduling electricity among renewable energy
resources, energy storage systems, and macrogrid. The problem is formulated as
a constrained stochastic programming problem. The Lyapunov optimization
technique is then applied to derive an adaptive electricity scheduling
algorithm by introducing the QoSE virtual queues and energy storage virtual
queues. The proposed algorithm is an online algorithm since it does not require
any statistics and future knowledge of the electricity supply, demand and price
processes. We derive several "hard" performance bounds for the proposed
algorithm, and evaluate its performance with trace-driven simulations. The
simulation results demonstrate the efficacy of the proposed electricity
scheduling algorithm.
|
1301.0534 | Follow the Leader If You Can, Hedge If You Must | cs.LG stat.ML | Follow-the-Leader (FTL) is an intuitive sequential prediction strategy that
guarantees constant regret in the stochastic setting, but has terrible
performance for worst-case data. Other hedging strategies have better
worst-case guarantees but may perform much worse than FTL if the data are not
maximally adversarial. We introduce the FlipFlop algorithm, which is the first
method that provably combines the best of both worlds.
As part of our construction, we develop AdaHedge, which is a new way of
dynamically tuning the learning rate in Hedge without using the doubling trick.
AdaHedge refines a method by Cesa-Bianchi, Mansour and Stoltz (2007), yielding
slightly improved worst-case guarantees. By interleaving AdaHedge and FTL, the
FlipFlop algorithm achieves regret within a constant factor of the FTL regret,
without sacrificing AdaHedge's worst-case guarantees.
AdaHedge and FlipFlop do not need to know the range of the losses in advance;
moreover, unlike earlier methods, both have the intuitive property that the
issued weights are invariant under rescaling and translation of the losses. The
losses are also allowed to be negative, in which case they may be interpreted
as gains.
|
1301.0542 | Eventual linear convergence of the Douglas Rachford iteration for basis
pursuit | math.NA cs.IT math.IT math.OC | We provide a simple analysis of the Douglas-Rachford splitting algorithm in
the context of $\ell^1$ minimization with linear constraints, and quantify the
asymptotic linear convergence rate in terms of principal angles between
relevant vector spaces. In the compressed sensing setting, we show how to bound
this rate in terms of the restricted isometry constant. More general iterative
schemes obtained by $\ell^2$-regularization and over-relaxation including the
dual split Bregman method are also treated, which answers the question of how to
choose the relaxation and soft-thresholding parameters to accelerate the
asymptotic convergence rate. We make no attempt at characterizing the transient
regime preceding the onset of linear convergence.
|
1301.0550 | Markov Equivalence Classes for Maximal Ancestral Graphs | cs.AI stat.ME | Ancestral graphs are a class of graphs that encode conditional independence
relations arising in DAG models with latent and selection variables,
corresponding to marginalization and conditioning. However, for any ancestral
graph, there may be several other graphs to which it is Markov equivalent. We
introduce a simple representation of a Markov equivalence class of ancestral
graphs, thereby facilitating model search. More specifically, we define a
join operation on ancestral graphs which will associate a unique graph with a
Markov equivalence class. We also extend the separation criterion for ancestral
graphs (which is an extension of d-separation) and provide a proof of the
pairwise Markov property for joined ancestral graphs.
|
1301.0551 | Learning Hierarchical Object Maps Of Non-Stationary Environments with
mobile robots | cs.LG cs.RO stat.ML | Building models, or maps, of robot environments is a highly active research
area; however, most existing techniques construct unstructured maps and assume
static environments. In this paper, we present an algorithm for learning object
models of non-stationary objects found in office-type environments. Our
algorithm exploits the fact that many objects found in office environments look
alike (e.g., chairs, recycling bins). It does so through a two-level
hierarchical representation, which links individual objects with generic shape
templates of object classes. We derive an approximate EM algorithm for learning
shape parameters at both levels of the hierarchy, using local occupancy grid
maps for representing shape. Additionally, we develop a Bayesian model
selection algorithm that enables the robot to estimate the total number of
objects and object templates in the environment. Experimental results using a
real robot equipped with a laser range finder indicate that our approach
performs well at learning object-based maps of simple office environments. The
approach outperforms a previously developed non-hierarchical algorithm that
models objects but lacks class templates.
|
1301.0552 | A constraint satisfaction approach to the robust spanning tree problem
with interval data | cs.AI | Robust optimization is one of the fundamental approaches to deal with
uncertainty in combinatorial optimization. This paper considers the robust
spanning tree problem with interval data, which arises in a variety of
telecommunication applications. It proposes a constraint satisfaction approach
using a combinatorial lower bound, a pruning component that removes infeasible
and suboptimal edges, as well as a search strategy exploring the most uncertain
edges first. The resulting algorithm is shown to produce very dramatic
improvements over the mathematical programming approach of Yaman et al. and to
enlarge considerably the class of problems amenable to effective solutions.
|
1301.0553 | On the Construction of the Inclusion Boundary Neighbourhood for Markov
Equivalence Classes of Bayesian Network Structures | cs.AI | The problem of learning Markov equivalence classes of Bayesian network
structures may be solved by searching for the maximum of a scoring metric in a
space of these classes. This paper deals with the definition and analysis of
one such search space. We use a theoretically motivated neighbourhood, the
inclusion boundary, and represent equivalence classes by essential graphs. We
show that this search space is connected and that the score of the neighbours
can be evaluated incrementally. We devise a practical way of building this
neighbourhood for an essential graph that is purely graphical and does not
explicitly refer to the underlying independences. We find that its size can be
intractable, depending on the complexity of the essential graph of the
equivalence class. The emphasis is put on the potential use of this space with
greedy hill-climbing search.
|
1301.0554 | Tree-dependent Component Analysis | cs.LG stat.ML | We present a generalization of independent component analysis (ICA), where
instead of looking for a linear transform that makes the data components
independent, we look for a transform that makes the data components well fit by
a tree-structured graphical model. Treating the problem as a semiparametric
statistical problem, we show that the optimal transform is found by minimizing
a contrast function based on mutual information, a function that directly
extends the contrast function used for classical ICA. We provide two
approximations of this contrast function, one using kernel density estimation,
and another using kernel generalized variance. This tree-dependent component
analysis framework leads naturally to an efficient general multivariate density
estimation technique where only bivariate density estimation needs to be
performed.
|
1301.0555 | Bipolar Possibilistic Representations | cs.AI | Recently, it has been emphasized that the possibility theory framework allows
us to distinguish between i) what is possible because it is not ruled out by
the available knowledge, and ii) what is possible for sure. This distinction
may be useful when representing knowledge, for modelling values which are not
impossible because they are consistent with the available knowledge on the one
hand, and values guaranteed to be possible because reported from observations
on the other hand. It is also of interest when expressing preferences, to point
out values which are positively desired among those which are not rejected.
This distinction can be encoded by two types of constraints expressed in terms
of necessity measures and in terms of guaranteed possibility functions, which
induce a pair of possibility distributions at the semantic level. A consistency
condition should ensure that what is claimed to be guaranteed as possible is
indeed not impossible. The present paper investigates the representation of
this bipolar view, including the case when it is stated by means of conditional
measures, or by means of comparative context-dependent constraints. The
interest of this bipolar framework, which has been recently stressed for
expressing preferences, is also pointed out in the representation of diagnostic
knowledge.
|
1301.0556 | Learning with Scope, with Application to Information Extraction and
Classification | cs.LG cs.IR stat.ML | In probabilistic approaches to classification and information extraction, one
typically builds a statistical model of words under the assumption that future
data will exhibit the same regularities as the training data. In many data
sets, however, there are scope-limited features whose predictive power is only
applicable to a certain subset of the data. For example, in information
extraction from web pages, word formatting may be indicative of extraction
category in different ways on different web pages. The difficulty with using
such features is capturing and exploiting the new regularities encountered in
previously unseen data. In this paper, we propose a hierarchical probabilistic
model that uses both local/scope-limited features, such as word formatting, and
global features, such as word content. The local regularities are modeled as an
unobserved random parameter which is drawn once for each local data set. This
random parameter is estimated during the inference process and then used to
perform classification with both the local and global features--- a procedure
which is akin to automatically retuning the classifier to the local
regularities on each newly encountered web page. Exact inference is intractable
and we present approximations via point estimates and variational methods.
Empirical results on large collections of web data demonstrate that this method
significantly improves performance from traditional models of global features
alone.
|
1301.0557 | Qualitative MDPs and POMDPs: An Order-Of-Magnitude Approximation | cs.AI | We develop a qualitative theory of Markov Decision Processes (MDPs) and
Partially Observable MDPs that can be used to model sequential decision making
tasks when only qualitative information is available. Our approach is based
upon an order-of-magnitude approximation of both probabilities and utilities,
similar to epsilon-semantics. The result is a qualitative theory that has close
ties with the standard maximum-expected-utility theory and is amenable to
general planning techniques.
|
1301.0558 | Introducing Variable Importance Tradeoffs into CP-Nets | cs.AI | The ability to make decisions and to assess potential courses of action is a
corner-stone of many AI applications, and usually this requires explicit
information about the decision-maker's preferences. In many applications,
preference elicitation is a serious bottleneck. The user either does not have
the time, the knowledge, or the expert support required to specify complex
multi-attribute utility functions. In such cases, a method that is based on
intuitive, yet expressive, preference statements is required. In this paper we
suggest the use of TCP-nets, an enhancement of CP-nets, as a tool for
representing, and reasoning about, qualitative preference statements. We
present and motivate this framework, define its semantics, and show how it can
be used to perform constrained optimization.
|
1301.0559 | Planning under Continuous Time and Resource Uncertainty: A Challenge for
AI | cs.AI | We outline a class of problems, typical of Mars rover operations, that are
problematic for current methods of planning under uncertainty. The existing
methods fail because they suffer from one or more of the following limitations:
1) they rely on very simple models of actions and time, 2) they assume that
uncertainty is manifested in discrete action outcomes, 3) they are only
practical for very small problems. For many real world problems, these
assumptions fail to hold. In particular, when planning the activities for a
Mars rover, none of the above assumptions is valid: 1) actions can be
concurrent and have differing durations, 2) there is uncertainty concerning
action durations and consumption of continuous resources like power, and 3)
typical daily plans involve on the order of a hundred actions. This class of
problems may be of particular interest to the UAI community because both
classical and decision-theoretic planning techniques may be useful in solving
it. We describe the rover problem, discuss previous work on planning under
uncertainty, and present a detailed, but very small, example illustrating some
of the difficulties of finding good plans.
|
1301.0560 | Generalized Instrumental Variables | cs.AI | This paper concerns the assessment of direct causal effects from a
combination of: (i) non-experimental data, and (ii) qualitative domain
knowledge. Domain knowledge is encoded in the form of a directed acyclic graph
(DAG), in which all interactions are assumed linear, and some variables are
presumed to be unobserved. We provide a generalization of the well-known method
of Instrumental Variables, which allows its application to models with few
conditional independences.
|
1301.0561 | Finding Optimal Bayesian Networks | cs.AI | In this paper, we derive optimality results for greedy Bayesian-network
search algorithms that perform single-edge modifications at each step and use
asymptotically consistent scoring criteria. Our results extend those of Meek
(1997) and Chickering (2002), who demonstrate that in the limit of large
datasets, if the generative distribution is perfect with respect to a DAG
defined over the observable variables, such search algorithms will identify
this optimal (i.e. generative) DAG model. We relax their assumption about the
generative distribution, and assume only that this distribution satisfies the
{\em composition property} over the observable variables, which is a more
realistic assumption for real domains. Under this assumption, we guarantee that
the search algorithms identify an {\em inclusion-optimal} model; that is, a
model that (1) contains the generative distribution and (2) has no sub-model
that contains this distribution. In addition, we show that the composition
property is guaranteed to hold whenever the dependence relationships in the
generative distribution can be characterized by paths between singleton
elements in some generative graphical model (e.g. a DAG, a chain graph, or a
Markov network) even when the generative model includes unobserved variables,
and even when the observed data is subject to selection bias.
|
1301.0562 | Continuation Methods for Mixing Heterogenous Sources | cs.LG stat.ML | A number of modern learning tasks involve estimation from heterogeneous
information sources. This includes classification with labeled and unlabeled
data as well as other problems with analogous structure such as competitive
(game theoretic) problems. The associated estimation problems can be typically
reduced to solving a set of fixed point equations (consistency conditions). We
introduce a general method for combining a preferred information source with
another in this setting by evolving continuous paths of fixed points at
intermediate allocations. We explicitly identify critical points along the
unique paths to either increase the stability of estimation or to ensure a
significant departure from the initial source. The homotopy continuation
approach is guaranteed to terminate at the second source, and involves no
combinatorial effort. We illustrate the power of these ideas both in
classification tasks with labeled and unlabeled data, as well as in the context
of a competitive (min-max) formulation of DNA sequence motif discovery.
|
1301.0563 | Interpolating Conditional Density Trees | cs.LG cs.AI stat.ML | Joint distributions over many variables are frequently modeled by decomposing
them into products of simpler, lower-dimensional conditional distributions,
such as in sparsely connected Bayesian networks. However, automatically
learning such models can be very computationally expensive when there are many
datapoints and many continuous variables with complex nonlinear relationships,
particularly when no good ways of decomposing the joint distribution are known
a priori. In such situations, previous research has generally focused on the
use of discretization techniques in which each continuous variable has a single
discretization that is used throughout the entire network. In this paper, we
present and compare a wide variety of tree-based algorithms for learning and
evaluating conditional density estimates over continuous variables. These trees
can be thought of as discretizations that vary according to the particular
interactions being modeled; however, the density within a given leaf of the
tree need not be assumed constant, and we show that such nonuniform leaf
densities lead to more accurate density estimation. We have developed Bayesian
network structure-learning algorithms that employ these tree-based conditional
density representations, and we show that they can be used to practically learn
complex joint probability models over dozens of continuous variables from
thousands of datapoints. We focus on finding models that are simultaneously
accurate, fast to learn, and fast to evaluate once they are learned.
|
1301.0564 | Iterative Join-Graph Propagation | cs.AI | The paper presents an iterative version of join-tree clustering that applies
the message passing of join-tree clustering algorithm to join-graphs rather
than to join-trees, iteratively. It is inspired by the success of Pearl's
belief propagation algorithm as an iterative approximation scheme on one hand,
and by the success of the recently introduced mini-clustering scheme as an
anytime approximation method, on the other. The proposed Iterative Join-graph
Propagation IJGP belongs to the class of generalized belief propagation
methods, recently proposed using analogy with algorithms in statistical
physics. Empirical evaluation of this approach on a number of problem classes
demonstrates that even the most time-efficient variant is almost always
superior to IBP and MC(i), and is sometimes more accurate by as much as several
orders of magnitude.
|
1301.0565 | An Information-Theoretic External Cluster-Validity Measure | cs.LG stat.ML | In this paper we propose a measure of clustering quality or accuracy that is
appropriate in situations where it is desirable to evaluate a clustering
algorithm by somehow comparing the clusters it produces with ``ground truth''
consisting of classes assigned to the patterns by manual means or some other
means in whose veracity there is confidence. Such measures are referred to as
``external''. Our measure also has the characteristic of allowing clusterings
with different numbers of clusters to be compared in a quantitative and
principled way. Our evaluation scheme quantitatively measures how useful the
cluster labels of the patterns are as predictors of their class labels. In
cases where all clusterings to be compared have the same number of clusters,
the measure is equivalent to the mutual information between the cluster labels
and the class labels. In cases where the numbers of clusters are different,
however, it computes the reduction in the number of bits that would be required
to encode (compress) the class labels if both the encoder and decoder have free
access to the cluster labels. To achieve this encoding, the estimated
conditional probabilities of the class labels given the cluster labels must
also be encoded. These estimated probabilities can be seen as a model for the
class labels and their associated code length as a model cost.
|
1301.0566 | Causes and Explanations in the Structural-Model Approach: Tractable
Cases | cs.AI | In this paper, we continue our research on the algorithmic aspects of Halpern
and Pearl's causes and explanations in the structural-model approach. To this
end, we present new characterizations of weak causes for certain classes of
causal models, which show that under suitable restrictions deciding causes and
explanations is tractable. To our knowledge, these are the first explicit
tractability results for the structural-model approach.
|
1301.0567 | The Thing That We Tried Didn't Work Very Well: Deictic Representation
in Reinforcement Learning | cs.LG cs.AI | Most reinforcement learning methods operate on propositional representations
of the world state. Such representations are often intractably large and
generalize poorly. Using a deictic representation is believed to be a viable
alternative: they promise generalization while allowing the use of existing
reinforcement-learning methods. Yet, there are few experiments on learning with
deictic representations reported in the literature. In this paper we explore
the effectiveness of two forms of deictic representation and a na\"{i}ve
propositional representation in a simple blocks-world domain. We find,
empirically, that the deictic representations actually worsen learning
performance. We conclude with a discussion of possible causes of these results
and strategies for more effective learning in domains with objects.
|
1301.0568 | Factorization of Discrete Probability Distributions | cs.AI | We formulate necessary and sufficient conditions for an arbitrary discrete
probability distribution to factor according to an undirected graphical model,
or a log-linear model, or other more general exponential models. This result
generalizes the well known Hammersley-Clifford Theorem.
|
1301.0569 | Statistical Decisions Using Likelihood Information Without Prior
Probabilities | cs.AI | This paper presents a decision-theoretic approach to statistical inference
that satisfies the likelihood principle (LP) without using prior information.
Unlike the Bayesian approach, which also satisfies LP, we do not assume
knowledge of the prior distribution of the unknown parameter. With respect to
information that can be obtained from an experiment, our solution is more
efficient than Wald's minimax solution. However, with respect to information
assumed to be known before the experiment, our solution demands less input than
the Bayesian solution.
|
1301.0570 | Reduction of Maximum Entropy Models to Hidden Markov Models | cs.AI cs.CL | We show that maximum entropy (maxent) models can be modeled with certain
kinds of HMMs, allowing us to construct maxent models with hidden variables,
hidden state sequences, or other characteristics. The models can be trained
using the forward-backward algorithm. While the results are primarily of
theoretical interest, unifying apparently unrelated concepts, we also give
experimental results for a maxent model with a hidden variable on a word
disambiguation task; the model outperforms standard techniques.
|
1301.0571 | Distributed Planning in Hierarchical Factored MDPs | cs.AI | We present a principled and efficient planning algorithm for collaborative
multiagent dynamical systems. All computation, during both the planning and the
execution phases, is distributed among the agents; each agent only needs to
model and plan for a small part of the system. Each of these local subsystems
is small, but once they are combined they can represent an exponentially larger
problem. The subsystems are connected through a subsystem hierarchy.
Coordination and communication between the agents is not imposed, but derived
directly from the structure of this hierarchy. A globally consistent plan is
achieved by a message passing algorithm, where messages correspond to natural
local reward functions and are computed by local linear programs; another
message passing algorithm allows us to execute the resulting policy. When two
portions of the hierarchy share the same structure, our algorithm can reuse
plans and messages to speed up computation.
|
1301.0572 | Expectation Propagation for approximate inference in dynamic Bayesian
networks | cs.AI | We describe expectation propagation for approximate inference in dynamic
Bayesian networks as a natural extension of Pearl's exact belief propagation.
Expectation propagation is a greedy algorithm, converges in many practical
cases, but not always. We derive a double-loop algorithm, guaranteed to
converge to a local minimum of a Bethe free energy. Furthermore, we show that
stable fixed points of (damped) expectation propagation correspond to local
minima of this free energy, but that the converse need not be the case. We
illustrate the algorithms by applying them to switching linear dynamical
systems and discuss implications for approximate inference in general Bayesian
networks.
|
1301.0573 | Coordinates: Probabilistic Forecasting of Presence and Availability | cs.HC cs.AI | We present methods employed in Coordinate, a prototype service that supports
collaboration and communication by learning predictive models that provide
forecasts of users' presence and availability. We describe how data is
collected about user activity and proximity from multiple devices, in addition
to analysis of the content of users' calendars, the time of day, and day of
week. We review applications of presence forecasting embedded in the
Priorities application and then present details of the Coordinate service that
was informed by the earlier efforts.
|
1301.0574 | Unconstrained Influence Diagrams | cs.AI | We extend the language of influence diagrams to cope with decision scenarios
where the order of decisions and observations is not determined. As the
ordering of decisions is dependent on the evidence, a step-strategy of such a
scenario is a sequence of dependent choices of the next action. A strategy is a
step-strategy together with selection functions for decision actions. The
structure of a step-strategy can be represented as a DAG with nodes labeled
with action variables. We introduce the concept of GS-DAG: a DAG incorporating
an optimal step-strategy for any instantiation. We give a method for
constructing GS-DAGs, and we show how to use a GS-DAG for determining an
optimal strategy. Finally we discuss how analysis of relevant past can be used
to reduce the size of the GS-DAG.
|
1301.0575 | CFW: A Collaborative Filtering System Using Posteriors Over Weights Of
Evidence | cs.IR cs.AI | We describe CFW, a computationally efficient algorithm for collaborative
filtering that uses posteriors over weights of evidence. In experiments on real
data, we show that this method predicts as well or better than other methods in
situations where the size of the user query is small. The new approach works
particularly well when the user's query contains low-frequency (unpopular)
items. The approach complements that of dependency networks, which perform
well when the size of the query is large. Also in this paper, we argue that
the use of posteriors over weights of evidence is a natural way to recommend
similar items in the collaborative-filtering task.
|
1301.0576 | A Bayesian Network Scoring Metric That Is Based On Globally Uniform
Parameter Priors | cs.AI | We introduce a new Bayesian network (BN) scoring metric called the Global
Uniform (GU) metric. This metric is based on a particular type of default
parameter prior. Such priors may be useful when a BN developer is not willing
or able to specify domain-specific parameter priors. The GU parameter prior
specifies that every prior joint probability distribution P consistent with a
BN structure S is considered to be equally likely. Distribution P is consistent
with S if P includes just the set of independence relations defined by S. We
show that the GU metric addresses some undesirable behavior of the BDeu and K2
Bayesian network scoring metrics, which also use particular forms of default
parameter priors. A closed form formula for computing GU for special classes of
BNs is derived. Efficiently computing GU for an arbitrary BN remains an open
problem.
|
1301.0577 | Efficient Nash Computation in Large Population Games with Bounded
Influence | cs.GT cs.AI | We introduce a general representation of large-population games in which each
player's influence on the others is centralized and limited, but may otherwise
be arbitrary. This representation significantly generalizes the class known as
congestion games in a natural way. Our main results are provably correct and
efficient algorithms for computing and learning approximate Nash equilibria in
this general framework.
|
1301.0578 | Dimension Correction for Hierarchical Latent Class Models | cs.LG stat.ML | Model complexity is an important factor to consider when selecting among
graphical models. When all variables are observed, the complexity of a model
can be measured by its standard dimension, i.e. the number of independent
parameters. When hidden variables are present, however, standard dimension
might no longer be appropriate. One should instead use effective dimension
(Geiger et al. 1996). This paper is concerned with the computation of effective
dimension. First we present an upper bound on the effective dimension of a
latent class (LC) model. This bound is tight and its computation is easy. We
then consider a generalization of LC models called hierarchical latent class
(HLC) models (Zhang 2002). We show that the effective dimension of an HLC model
can be obtained from the effective dimensions of some related LC models. We
also demonstrate empirically that using effective dimension in place of
standard dimension improves the quality of models learned from data.
|
1301.0579 | Almost-everywhere algorithmic stability and generalization error | cs.LG stat.ML | We explore in some detail the notion of algorithmic stability as a viable
framework for analyzing the generalization error of learning algorithms. We
introduce the new notion of training stability of a learning algorithm and show
that, in a general setting, it is sufficient for good bounds on generalization
error. In the PAC setting, training stability is both necessary and sufficient
for learnability. The approach based on training stability makes no reference
to VC dimension or VC entropy. There is no need to prove uniform convergence,
and generalization error is bounded directly via an extended McDiarmid
inequality. As a result it potentially allows us to deal with a broader class
of learning algorithms than Empirical Risk Minimization. We also explore the
relationships among VC dimension, generalization error, and various notions of
stability. Several examples of learning algorithms are considered.
|
1301.0580 | Value Function Approximation in Zero-Sum Markov Games | cs.AI | This paper investigates value function approximation in the context of
zero-sum Markov games, which can be viewed as a generalization of the Markov
decision process (MDP) framework to the two-agent case. We generalize error
bounds from MDPs to Markov games and describe generalizations of reinforcement
learning algorithms to Markov games. We present a generalization of the optimal
stopping problem to a two-player simultaneous move Markov game. For this
special problem, we provide stronger bounds and can guarantee convergence for
LSTD and temporal difference learning with linear value function approximation.
We demonstrate the viability of value function approximation for Markov games
by using the Least squares policy iteration (LSPI) algorithm to learn good
policies for a soccer domain and a flow control problem.
|
1301.0582 | Monitoring a Complex Physical System using a Hybrid Dynamic Bayes Net | cs.AI | The Reverse Water Gas Shift system (RWGS) is a complex physical system
designed to produce oxygen from the carbon dioxide atmosphere on Mars. If sent
to Mars, it would operate without human supervision, thus requiring a reliable
automated system for monitoring and control. The RWGS presents many challenges
typical of real-world systems, including: noisy and biased sensors, nonlinear
behavior, effects that are manifested over different time granularities, and
unobservability of many important quantities. In this paper we model the RWGS
using a hybrid (discrete/continuous) Dynamic Bayesian Network (DBN), where the
state at each time slice contains 33 discrete and 184 continuous variables. We
show how the system state can be tracked using probabilistic inference over the
model. We discuss how to deal with the various challenges presented by the
RWGS, providing a suite of techniques that are likely to be useful in a wide
range of applications. In particular, we describe a general framework for
dealing with nonlinear behavior using numerical integration techniques,
extending the successful Unscented Filter. We also show how to use a
fixed-point computation to deal with effects that develop at different time
scales, specifically rapid changes occurring during slowly changing processes.
We test our model using real data collected from the RWGS, demonstrating the
feasibility of hybrid DBNs for monitoring complex real-world physical systems.
|
1301.0583 | Polynomial Value Iteration Algorithms for Deterministic MDPs | cs.AI cs.DS | Value iteration is a commonly used and empirically competitive method in
solving many Markov decision process problems. However, it is known that value
iteration has only pseudo-polynomial complexity in general. We establish a
somewhat surprising polynomial bound for value iteration on deterministic
Markov decision (DMDP) problems. We show that the basic value iteration
procedure converges to the highest average reward cycle on a DMDP problem in
Theta(n^2) iterations, or Theta(mn^2) total time, where n denotes the number of
states, and m the number of edges. We give two extensions of value iteration
that solve the DMDP in Theta(mn) time. We explore the analysis of policy
iteration algorithms and report on an empirical study of value iteration
showing that its convergence is much faster on random sparse graphs.
|
1301.0584 | Decayed MCMC Filtering | cs.AI cs.LG cs.SY | Filtering---estimating the state of a partially observable Markov process
from a sequence of observations---is one of the most widely studied problems in
control theory, AI, and computational statistics. Exact computation of the
posterior distribution is generally intractable for large discrete systems and
for nonlinear continuous systems, so a good deal of effort has gone into
developing robust approximation algorithms. This paper describes a simple
stochastic approximation algorithm for filtering called decayed MCMC. The
algorithm applies Markov chain Monte Carlo sampling to the space of state
trajectories using a proposal distribution that favours flips of more recent
state variables. The formal analysis of the algorithm involves a generalization
of standard coupling arguments for MCMC convergence. We prove that for any
ergodic underlying Markov process, the convergence time of decayed MCMC with
inverse-polynomial decay remains bounded as the length of the observation
sequence grows. We show experimentally that decayed MCMC is at least
competitive with other approximation algorithms such as particle filtering.
|
1301.0585 | Formalizing Scenario Analysis | cs.AI | We propose a formal treatment of scenarios in the context of a dialectical
argumentation formalism for qualitative reasoning about uncertain propositions.
Our formalism extends prior work in which arguments for and against uncertain
propositions were presented and compared in interaction spaces called Agoras.
We now define the notion of a scenario in this framework and use it to define a
set of qualitative uncertainty labels for propositions across a collection of
scenarios. This work is intended to lead to a formal theory of scenarios and
scenario analysis.
|
1301.0586 | Staged Mixture Modelling and Boosting | cs.LG stat.ML | In this paper, we introduce and evaluate a data-driven staged mixture
modeling technique for building density, regression, and classification models.
Our basic approach is to sequentially add components to a finite mixture model
using the structural expectation maximization (SEM) algorithm. We show that our
technique is qualitatively similar to boosting. This correspondence is a
natural byproduct of the fact that we use the SEM algorithm to sequentially fit
the mixture model. Finally, in our experimental evaluation, we demonstrate the
effectiveness of our approach on a variety of prediction and density estimation
tasks using real-world data.
|
1301.0587 | Optimal Time Bounds for Approximate Clustering | cs.DS cs.LG stat.ML | Clustering is a fundamental problem in unsupervised learning, and has been
studied widely both as a problem of learning mixture models and as an
optimization problem. In this paper, we study clustering with respect to the
k-median objective function, a natural formulation of clustering in which
we attempt to minimize the average distance to cluster centers. One of the main
contributions of this paper is a simple but powerful sampling technique, which
we call successive sampling, that could be of independent interest. We show
that our sampling procedure can rapidly identify a small set of points (of size
just O(k log(n/k))) that summarize the input points for the purpose of
clustering. Using successive sampling, we develop an algorithm for the k-median
problem that runs in O(nk) time for a wide range of values of k and is
guaranteed, with high probability, to return a solution with cost at most a
constant factor times optimal. We also establish a lower bound of Omega(nk) on
any randomized constant-factor approximation algorithm for the k-median problem
that succeeds with even a negligible (say 1/100) probability. Thus we establish
a tight time bound of Theta(nk) for the k-median problem for a wide range of
values of k. The best previous upper bound for the problem was O~(nk), where
the O~-notation hides polylogarithmic factors in n and k. The best previous
lower bound of Omega(nk) applied only to deterministic k-median algorithms. While we
focus our presentation on the k-median objective, all our upper bounds are
valid for the k-means objective as well. In this context our algorithm compares
favorably to the widely used k-means heuristic, which requires O(nk) time for
just one iteration and provides no useful approximation guarantees.
|
1301.0588 | Expectation-Propagation for the Generative Aspect Model | cs.LG cs.IR stat.ML | The generative aspect model is an extension of the multinomial model for text
that allows word probabilities to vary stochastically across documents.
Previous results with aspect models have been promising, but hindered by the
computational difficulty of carrying out inference and learning. This paper
demonstrates that the simple variational methods of Blei et al (2001) can lead
to inaccurate inferences and biased learning for the generative aspect model.
We develop an alternative approach that leads to higher accuracy at comparable
cost. An extension of Expectation-Propagation is used for inference and then
embedded in an EM algorithm for learning. Experimental results are presented
for both synthetic and real data sets.
|
1301.0589 | Real-valued All-Dimensions search: Low-overhead rapid searching over
subsets of attributes | cs.AI | This paper is about searching the combinatorial space of contingency tables
during the inner loop of a nonlinear statistical optimization. Examples of this
operation in various data analytic communities include searching for nonlinear
combinations of attributes that contribute significantly to a regression
(Statistics), searching for items to include in a decision list (machine
learning) and association rule hunting (Data Mining).
This paper investigates a new, efficient approach to this class of problems,
called RADSEARCH (Real-valued All-Dimensions-tree Search). RADSEARCH finds the
global optimum, and this gives us the opportunity to empirically evaluate the
question: apart from algorithmic elegance, what does this attention to
optimality buy us?
We compare RADSEARCH with other recent successful search algorithms such as
CN2, PRIM, APriori, OPUS and DenseMiner. Finally, we introduce RADREG, a new
regression algorithm for learning real-valued outputs based on RADSEARCHing for
high-order interactions.
|
1301.0590 | Factored Particles for Scalable Monitoring | cs.AI | Exact monitoring in dynamic Bayesian networks is intractable, so approximate
algorithms are necessary. This paper presents a new family of approximate
monitoring algorithms that combine the best qualities of the particle filtering
and Boyen-Koller methods. Our algorithms maintain an approximate representation
of the belief state in the form of sets of factored particles, which correspond to
samples of clusters of state variables. Empirical results show that our
algorithms outperform both ordinary particle filtering and the Boyen-Koller
algorithm on large systems.
|
1301.0591 | Continuous Time Bayesian Networks | cs.AI | In this paper we present a language for finite state continuous time Bayesian
networks (CTBNs), which describe structured stochastic processes that evolve
over continuous time. The state of the system is decomposed into a set of local
variables whose values change over time. The dynamics of the system are
described by specifying the behavior of each local variable as a function of
its parents in a directed (possibly cyclic) graph. The model specifies, at any
given point in time, the distribution over two aspects: when a local variable
changes its value and the next value it takes. These distributions are
determined by the variable's current value and the current values of its
parents in the graph. More formally, each variable is modelled as a finite
state continuous time Markov process whose transition intensities are functions
of its parents. We present a probabilistic semantics for the language in terms
of the generative model a CTBN defines over sequences of events. We list the
types of queries one might ask of a CTBN, discuss the conceptual and
computational difficulties associated with exact inference, and provide an
algorithm for approximate inference which takes advantage of the structure
within the process.
|
1301.0592 | MAP Complexity Results and Approximation Methods | cs.AI | MAP is the problem of finding a most probable instantiation of a set of
variables in a Bayesian network, given some evidence. MAP appears to be a
significantly harder problem than the related problems of computing the
probability of evidence (Pr), or MPE, a special case of MAP. Because of the
complexity of MAP, and the lack of viable algorithms to approximate it, MAP
computations are generally avoided by practitioners. This paper investigates
the complexity of MAP. We show that MAP is complete for NP^PP. We also provide
negative complexity results for elimination-based algorithms. It turns out that
MAP remains hard even when MPE and Pr are easy. We show that MAP is NP-complete
when the networks are restricted to polytrees, and even then cannot be
effectively approximated. Because there is no approximation algorithm with
guaranteed results, we investigate best effort approximations. We introduce a
generic MAP approximation framework. As one instantiation of it, we implement
local search coupled with belief propagation (BP) to approximate MAP. We show how
to extract approximate evidence retraction information from belief propagation
which allows us to perform efficient local search. This allows MAP
approximation even on networks that are too complex to even exactly solve the
easier problems of computing Pr or MPE. Experimental results indicate that
using BP and local search provides accurate MAP estimates in many cases.
|
1301.0593 | Bayesian Network Classifiers in a High Dimensional Framework | cs.LG stat.ML | We present a growing dimension asymptotic formalism. The perspective in this
paper is classification theory and we show that it can accommodate
probabilistic networks classifiers, including naive Bayes model and its
augmented version. When represented as a Bayesian network these classifiers
have an important advantage: The corresponding discriminant function turns out
to be a specialized case of a generalized additive model, which makes it
possible to get closed form expressions for the asymptotic misclassification
probabilities used here as a measure of classification accuracy. Moreover, in
this paper we propose a new quantity for assessing the discriminative power of
a set of features which is then used to elaborate the augmented naive Bayes
classifier. The result is a weighted form of the augmented naive Bayes that
distributes weights among the sets of features according to their
discriminative power. We derive the asymptotic distribution of the sample based
discriminative power and show that it is seriously overestimated in a high
dimensional case. We then apply this result to find the optimal, in a sense of
minimum misclassification probability, type of weighting.
|
1301.0594 | Modelling Information Incorporation in Markets, with Application to
Detecting and Explaining Events | cs.AI q-fin.GN | We develop a model of how information flows into a market, and derive
algorithms for automatically detecting and explaining relevant events. We
analyze data from twenty-two "political stock markets" (i.e., betting markets
on political outcomes) on the Iowa Electronic Market (IEM). We prove that,
under certain efficiency assumptions, prices in such betting markets will on
average approach the correct outcomes over time, and show that IEM data
conforms closely to the theory. We present a simple model of a betting market
where information is revealed over time, and show a qualitative correspondence
between the model and real market data. We also present an algorithm for
automatically detecting significant events and generating semantic explanations
of their origin. The algorithm operates by discovering significant changes in
vocabulary on online news sources (using expected entropy loss) that align with
major price spikes in related betting markets.
|
1301.0596 | From Qualitative to Quantitative Probabilistic Networks | cs.AI | Quantification is well known to be a major obstacle in the construction of a
probabilistic network, especially when relying on human experts for this
purpose. The construction of a qualitative probabilistic network has been
proposed as an initial step in a network's quantification, since the
qualitative network can be used to gain preliminary insight into the projected
network's reasoning behaviour. We extend this idea and present a new type of
network in which both signs and numbers are specified; we further present an
associated algorithm for probabilistic inference. Building upon these
semi-qualitative networks, a probabilistic network can be quantified and
studied in a stepwise manner. As a result, modelling inadequacies can be
detected and amended at an early stage in the quantification process.
|
1301.0597 | Inference with Separately Specified Sets of Probabilities in Credal
Networks | cs.AI | We present new algorithms for inference in credal networks --- directed
acyclic graphs associated with sets of probabilities. Credal networks are here
interpreted as encoding strong independence relations among variables. We first
present a theory of credal networks based on separately specified sets of
probabilities. We also show that inference with polytrees is NP-hard in this
setting. We then introduce new techniques that reduce the computational effort
demanded by inference, particularly in polytrees, by exploring separability of
credal sets.
|
1301.0598 | Asymptotic Model Selection for Naive Bayesian Networks | cs.AI cs.LG | We develop a closed form asymptotic formula to compute the marginal
likelihood of data given a naive Bayesian network model with two hidden states
and binary features. This formula deviates from the standard BIC score. Our
work provides a concrete example that the BIC score is generally not valid for
statistical models that belong to a stratified exponential family. This stands
in contrast to linear and curved exponential families, where the BIC score has
been proven to provide a correct approximation for the marginal likelihood.
|
1301.0599 | Advances in Boosting (Invited Talk) | cs.LG stat.ML | Boosting is a general method of generating many simple classification rules
and combining them into a single, highly accurate rule. In this talk, I will
review the AdaBoost boosting algorithm and some of its underlying theory, and
then look at how this theory has helped us to face some of the challenges of
applying AdaBoost in two domains: In the first of these, we used boosting for
predicting and modeling the uncertainty of prices in complicated, interacting
auctions. The second application was to the classification of caller utterances
in a telephone spoken-dialogue system where we faced two challenges: the need
to incorporate prior knowledge to compensate for initially insufficient data;
and a later need to filter the large stream of unlabeled examples being
collected to select the ones whose labels are likely to be most informative.
|
1301.0600 | An MDP-based Recommender System | cs.LG cs.AI cs.IR | Typical Recommender systems adopt a static view of the recommendation process
and treat it as a prediction problem. We argue that it is more appropriate to
view the problem of generating recommendations as a sequential decision problem
and, consequently, that Markov decision processes (MDP) provide a more
appropriate model for Recommender systems. MDPs introduce two benefits: they
take into account the long-term effects of each recommendation, and they take
into account the expected value of each recommendation. To succeed in practice,
an MDP-based Recommender system must employ a strong initial model; and the
bulk of this paper is concerned with the generation of such a model. In
particular, we suggest the use of an n-gram predictive model for generating the
initial MDP. Our n-gram model induces a Markov-chain model of user behavior
whose predictive accuracy is greater than that of existing predictive models.
We describe our predictive model in detail and evaluate its performance on real
data. In addition, we show how the model can be used in an MDP-based
Recommender system.
|
1301.0601 | Reinforcement Learning with Partially Known World Dynamics | cs.LG stat.ML | Reinforcement learning would enjoy better success on real-world problems if
domain knowledge could be imparted to the algorithm by the modelers. Most
problems have both hidden state and unknown dynamics. Partially observable
Markov decision processes (POMDPs) allow for the modeling of both.
Unfortunately, they do not provide a natural framework in which to specify
knowledge about the domain dynamics. The designer must either admit to knowing
nothing about the dynamics or completely specify the dynamics (thereby turning
it into a planning problem). We propose a new framework called a partially
known Markov decision process (PKMDP) which allows the designer to specify
known dynamics while still leaving portions of the environment's dynamics
unknown. The model represents not only the environment dynamics but also the
agent's knowledge of the dynamics. We present a reinforcement learning algorithm
for this model based on importance sampling. The algorithm incorporates
planning based on the known dynamics and learning about the unknown dynamics.
Our results clearly demonstrate the ability to add domain knowledge and the
resulting benefits for learning.
|
1301.0602 | Unsupervised Active Learning in Large Domains | cs.LG stat.ML | Active learning is a powerful approach to analyzing data effectively. We show
that the feasibility of active learning depends crucially on the choice of
measure with respect to which the query is being optimized. The standard
information gain, for example, does not permit an accurate evaluation with a
small committee, a representative subset of the model space. We propose a
surrogate measure requiring only a small committee and discuss the properties
of this new measure. We devise, in addition, a bootstrap approach for committee
selection. The advantages of this approach are illustrated in the context of
recovering (regulatory) network models.
|
1301.0603 | Real-Time Inference with Large-Scale Temporal Bayes Nets | cs.AI | An increasing number of applications require real-time reasoning under
uncertainty with streaming input. The temporal (dynamic) Bayes net formalism
provides a powerful representational framework for such applications. However,
existing exact inference algorithms for dynamic Bayes nets do not scale to the
size of models required for real world applications which often contain
hundreds or even thousands of variables for each time slice. In addition,
existing algorithms were not developed with real-time processing in mind. We
have developed a new computational approach to support real-time exact
inference in large temporal Bayes nets. Our approach tackles scalability by
recognizing that the complexity of the inference depends on the number of
interface nodes between time slices and by exploiting the distinction between
static and dynamic nodes in order to reduce the number of interface nodes and
to factorize their joint probability distribution. We approach the real-time
issue by organizing temporal Bayes nets into static representations, and then
using the symbolic probabilistic inference algorithm to derive analytic
expressions for the static representations. The parts of these expressions that
do not change at each time step are pre-computed. The remaining parts are
compiled into efficient procedural code so that the memory and CPU resources
required by the inference are small and fixed.
|
1301.0604 | Discriminative Probabilistic Models for Relational Data | cs.LG cs.AI stat.ML | In many supervised learning tasks, the entities to be labeled are related to
each other in complex ways and their labels are not independent. For example,
in hypertext classification, the labels of linked pages are highly correlated.
A standard approach is to classify each entity independently, ignoring the
correlations between them. Recently, Probabilistic Relational Models, a
relational version of Bayesian networks, were used to define a joint
probabilistic model for a collection of related entities. In this paper, we
present an alternative framework that builds on (conditional) Markov networks
and addresses two limitations of the previous approach. First, undirected
models do not impose the acyclicity constraint that hinders representation of
many important relational dependencies in directed models. Second, undirected
models are well suited for discriminative training, where we optimize the
conditional likelihood of the labels given the features, which generally
improves classification accuracy. We show how to train these models
effectively, and how to use approximate probabilistic inference over the
learned model for collective classification of multiple related entities. We
provide experimental results on a webpage classification task, showing that
accuracy can be significantly improved by modeling relational dependencies.
|
1301.0605 | Loopy Belief Propagation and Gibbs Measures | cs.AI | We address the question of convergence in the loopy belief propagation (LBP)
algorithm. Specifically, we relate convergence of LBP to the existence of a
weak limit for a sequence of Gibbs measures defined on the LBP's associated
computation tree. Using tools from the theory of Gibbs measures, we develop
easily testable sufficient conditions for convergence. The failure of
convergence of LBP implies the existence of multiple phases for the associated
Gibbs specification. These results give new insight into the mechanics of the
algorithm.
|
1301.0606 | Anytime State-Based Solution Methods for Decision Processes with
non-Markovian Rewards | cs.AI | A popular approach to solving a decision process with non-Markovian rewards
(NMRDP) is to exploit a compact representation of the reward function to
automatically translate the NMRDP into an equivalent Markov decision process
(MDP) amenable to our favorite MDP solution method. The contribution of this
paper is a representation of non-Markovian reward functions and a translation
into MDP aimed at making the best possible use of state-based anytime
algorithms as the solution method. By explicitly constructing and exploring
only parts of the state space, these algorithms are able to trade computation
time for policy quality, and have proven quite effective in dealing with large
MDPs. Our representation extends future linear temporal logic (FLTL) to express
rewards. Our translation has the effect of embedding model-checking in the
solution method. It results in an MDP of the minimal size achievable without
stepping outside the anytime framework, and consequently in better policies by
the deadline.
|
1301.0607 | Particle Filters in Robotics (Invited Talk) | cs.RO cs.AI | This presentation will introduce the audience to a new, emerging body of
research on sequential Monte Carlo techniques in robotics. In recent years,
particle filters have solved several hard perceptual robotic problems. Early
successes were limited to low-dimensional problems, such as the problem of
robot localization in environments with known maps. More recently, researchers
have begun exploiting structural properties of robotic domains that have led to
successful particle filter applications in spaces with as many as 100,000
dimensions. The presentation will discuss specific tricks necessary to make
these techniques work in real-world domains, and also discuss open challenges
for researchers in the UAI community.
|
1301.0608 | On the Testable Implications of Causal Models with Hidden Variables | cs.AI | The validity of a causal model can be tested only if the model imposes
constraints on the probability distribution that governs the generated data. In
the presence of unmeasured variables, causal models may impose two types of
constraints: conditional independencies, as read through the d-separation
criterion, and functional constraints, for which no general criterion is
available. This paper offers a systematic way of identifying functional
constraints and, thus, facilitates the task of testing causal models as well as
inferring such models from data.
|
1301.0609 | Exploiting Functional Dependence in Bayesian Network Inference | cs.AI | We propose an efficient method for Bayesian network inference in models with
functional dependence. We generalize the multiplicative factorization method
originally designed by Takikawa and D'Ambrosio (1999) for models with
independence of causal influence. Using a hidden variable, we transform a
probability potential into a product of two-dimensional potentials. The
multiplicative factorization yields more efficient inference. For example, in
junction tree propagation it helps to avoid large cliques. In order to keep
potentials small, the number of states of the hidden variable should be
minimized. We transform this problem into a combinatorial problem of minimal
base in a particular space. We present an example of a computerized adaptive
test, in which the factorization method is significantly more efficient than
previous inference methods.
|
1301.0610 | A New Class of Upper Bounds on the Log Partition Function | cs.LG stat.ML | Bounds on the log partition function are important in a variety of contexts,
including approximate inference, model fitting, decision theory, and large
deviations analysis. We introduce a new class of upper bounds on the log
partition function, based on convex combinations of distributions in the
exponential domain, that is applicable to an arbitrary undirected graphical
model. In the special case of convex combinations of tree-structured
distributions, we obtain a family of variational problems, similar to the Bethe
free energy, but distinguished by the following desirable properties: (i) they
are convex, and have a unique global minimum; and (ii) the global minimum gives
an upper bound on the log partition function. The global minimum is defined by
stationary conditions very similar to those defining fixed points of belief
propagation or tree-based reparameterization (Wainwright et al., 2001). As with
BP fixed points, the elements of the minimizing argument can be used as
approximations to the marginals of the original model. The analysis described
here can be extended to structures of higher treewidth (e.g., hypertrees),
thereby making connections with more advanced approximations (e.g., Kikuchi and
variants; Yedidia et al., 2001; Minka, 2001).
|
1301.0611 | Decision Principles to justify Carnap's Updating Method and to Suggest
Corrections of Probability Judgments (Invited Talks) | cs.AI | This paper uses decision-theoretic principles to obtain new insights into the
assessment and updating of probabilities. First, a new foundation of
Bayesianism is given. It does not require infinite atomless uncertainties as
did Savage's classical result, and can therefore be applied to any finite
Bayesian network. Nor does it require linear utility, as did de Finetti's
classical result, and it therefore allows for the empirically and normatively
desirable risk aversion. Finally, by identifying and fixing utility in an
elementary manner, our result can readily be applied to identify methods of
probability updating. Thus, a decision-theoretic foundation is given to the
computationally efficient method of inductive reasoning developed by Rudolf
Carnap. Finally, recent empirical findings on probability assessments are
discussed. This leads to suggestions for correcting biases in probability
assessments, and for an alternative to the Dempster-Shafer belief functions
that avoids the reduction to degeneracy after multiple updatings.
|
1301.0612 | Adaptive Foreground and Shadow Detection in Image Sequences | cs.CV | This paper presents a novel method of foreground segmentation that
distinguishes moving objects from their moving cast shadows in monocular image
sequences. The models of background, edge information, and shadow are set up
and adaptively updated. A Bayesian belief network is proposed to describe the
relationships among the segmentation label, background, intensity, and edge
information. The notion of Markov random field is used to encourage the spatial
connectivity of the segmented regions. The solution is obtained by maximizing
the posterior probability density of the segmentation field.
|
1301.0613 | IPF for Discrete Chain Factor Graphs | cs.LG cs.AI stat.ML | Iterative Proportional Fitting (IPF), combined with EM, is commonly used as
an algorithm for likelihood maximization in undirected graphical models. In
this paper, we present two iterative algorithms that generalize upon IPF. The
first one is for likelihood maximization in discrete chain factor graphs, which
we define as a wide class of discrete variable models including undirected
graphical models and Bayesian networks, but also chain graphs and sigmoid
belief networks. The second one is for conditional likelihood maximization in
standard undirected models and Bayesian networks. In both algorithms, the
iteration steps are expressed in closed form. Numerical simulations show that
the algorithms are competitive with state of the art methods.
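As a concrete reference point for the closed-form iteration steps mentioned above, classical IPF on a two-way contingency table alternates between rescaling rows and columns until both marginals match their targets. The following numpy sketch is illustrative only (the function name and toy targets are not from the paper, which generalizes IPF to chain factor graphs):

```python
import numpy as np

def ipf_2d(seed, row_targets, col_targets, n_iter=100):
    """Classical IPF: rescale a seed table so its row and column
    sums match the target marginals."""
    table = seed.astype(float).copy()
    for _ in range(n_iter):
        # Scale each row to match its target row sum.
        table *= (row_targets / table.sum(axis=1))[:, None]
        # Scale each column to match its target column sum.
        table *= col_targets / table.sum(axis=0)
    return table

seed = np.array([[1.0, 2.0], [3.0, 4.0]])
fitted = ipf_2d(seed,
                row_targets=np.array([0.4, 0.6]),
                col_targets=np.array([0.5, 0.5]))
```

The fitted table matches both marginals while preserving the seed's interaction (odds-ratio) structure, which is the maximum-likelihood property that the paper's generalizations build on.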
|
1301.0614 | Inductive Policy Selection for First-Order MDPs | cs.AI | We select policies for large Markov Decision Processes (MDPs) with compact
first-order representations. We find policies that generalize well as the
number of objects in the domain grows, potentially without bound. Existing
dynamic-programming approaches based on flat, propositional, or first-order
representations either are impractical here or do not naturally scale as the
number of objects grows without bound. We implement and evaluate an alternative
approach that induces first-order policies using training data constructed by
solving small problem instances using PGraphplan (Blum & Langford, 1999). Our
policies are represented as ensembles of decision lists, using a taxonomic
concept language. This approach extends the work of Martin and Geffner (2000)
to stochastic domains, ensemble learning, and a wider variety of problems.
Empirically, we find "good" policies for several stochastic first-order MDPs
that are beyond the scope of previous approaches. We also discuss the
application of this work to the relational reinforcement-learning problem.
|
1301.0633 | Clustered Calibration: An Improvement to Radio Interferometric Direction
Dependent Self-Calibration | astro-ph.IM cs.CE stat.AP | The new generation of radio synthesis arrays, such as LOFAR and SKA, have
been designed to surpass existing arrays in terms of sensitivity, angular
resolution and frequency coverage. This evolution has led to the development of
advanced calibration techniques that ensure the delivery of accurate results at
the lowest possible computational cost. However, the performance of such
calibration techniques is still limited by the compact, bright sources in the
sky, used as calibrators. It is important to have a bright enough source that
is well distinguished from the background noise level in order to achieve
satisfactory results in calibration. We present "clustered calibration" as a
modification to traditional radio interferometric calibration, in order to
accommodate faint sources that are almost below the background noise level into
the calibration process. The main idea is to employ the information of the
bright sources' measured signals as an aid to calibrate fainter sources that
are near the bright sources. In the case where we do not have bright enough
sources, a source cluster could act as a bright source that can be
distinguished from background noise. We construct a number of source clusters
assuming that the signals of the sources belonging to a single cluster are
corrupted by almost the same errors, and each cluster is calibrated as a single
source, using the combined coherencies of its sources simultaneously. This
upgrades the power of an individual faint source by the effective power of its
cluster. We give a performance analysis of clustered calibration to show the
superiority of this approach compared to the traditional unclustered
calibration. We also provide analytical criteria to choose the optimum number
of clusters for a given observation in an efficient manner.
|
1301.0647 | Algebraic Semantics of Similarity-Based Bitten Rough Set Theory | math.LO cs.LO cs.MA | We develop two algebraic semantics for bitten rough set theory (\cite{SW})
over similarity spaces and their abstract granular versions. Connections with
choice based generalized rough semantics developed in \cite{AM69} by the
present author and general cover based rough set theories are also considered.
|
1301.0669 | Constacyclic Codes over $F_p+vF_p$ | cs.IT math.IT | In this paper, we study constacyclic codes over $F_p+vF_p$, where $p$ is an
odd prime and $v^2=v$. The polynomial generators of all constacyclic codes over
$F_p+vF_p$ are characterized and their dual codes are also determined.
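The reason such codes decompose is that $v$ and $1-v$ are orthogonal idempotents, so $F_p+vF_p$ is ring-isomorphic to $F_p \times F_p$ and every code splits into a pair of codes over $F_p$. A small Python sketch of this standard decomposition (the map `phi` and the choice `p = 5` are illustrative, not taken from the paper):

```python
p = 5  # an odd prime

def mul(x, y):
    """Multiply a1 + v*b1 and a2 + v*b2 in F_p + vF_p, using v^2 = v."""
    a1, b1 = x
    a2, b2 = y
    return ((a1 * a2) % p, (a1 * b2 + b1 * a2 + b1 * b2) % p)

def phi(x):
    """CRT-style isomorphism F_p + vF_p -> F_p x F_p,
    evaluating v at 1 (first coordinate) and at 0 (second)."""
    a, b = x
    return ((a + b) % p, a % p)

# phi turns ring multiplication into componentwise multiplication.
x, y = (2, 3), (4, 1)
lhs = phi(mul(x, y))
rhs = tuple((u * w) % p for u, w in zip(phi(x), phi(y)))
```

Under this isomorphism, a constacyclic code over $F_p+vF_p$ corresponds to a pair of constacyclic codes over $F_p$, which is what makes the polynomial generators tractable.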
|
1301.0683 | A Quality and Cost Approach for Comparison of Small-World Networks | cs.SI physics.soc-ph | We propose an approach based on analysis of cost-quality tradeoffs for
comparison of efficiency of various algorithms for small-world network
construction. A number of algorithms for constructing complex small-world
networks, both known from the literature and original, are briefly reviewed and
compared. The networks constructed on the basis of these algorithms have the
basic structure of a 1D regular lattice with additional shortcuts providing the
small-world properties. It is shown that networks proposed in this work have
the best cost-quality ratio in the considered class.
|
1301.0700 | Position and Orientation Estimation of a Rigid Body: Rigid Body
Localization | cs.IT math.IT | Rigid body localization refers to a problem of estimating the position of a
rigid body along with its orientation using anchors. We consider a setup in
which a few sensors are mounted on a rigid body. The absolute position of the
rigid body is not known, but, the relative position of the sensors or the
topology of the sensors on the rigid body is known. We express the absolute
position of the sensors as an affine function of the Stiefel manifold and
propose a simple least-squares (LS) estimator to jointly estimate the
orientation and the position of the rigid body. To account for the
perturbations of the sensors, we also propose a constrained total
least-squares (CTLS) estimator. Analytical
closed-form solutions for the proposed estimators are provided. Simulations are
used to corroborate and analyze the performance of the proposed estimators.
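In the noise-free case, the least-squares orientation estimate coincides with the classical orthogonal Procrustes (Kabsch) solution, obtained from an SVD of the cross-covariance between the known sensor topology and the measured absolute positions. The sketch below illustrates that standard solution, not necessarily the paper's exact Stiefel-constrained estimator; the 2-D topology and pose are made up:

```python
import numpy as np

def estimate_pose(topology, measured):
    """Least-squares rotation Q and translation t aligning a known
    sensor topology (d x m) to measured positions (d x m), via the
    orthogonal Procrustes / Kabsch solution."""
    c0 = topology - topology.mean(axis=1, keepdims=True)
    s0 = measured - measured.mean(axis=1, keepdims=True)
    u, _, vt = np.linalg.svd(s0 @ c0.T)
    # Force a proper rotation (determinant +1).
    d = np.sign(np.linalg.det(u @ vt))
    q = u @ np.diag([1.0] * (u.shape[0] - 1) + [d]) @ vt
    t = measured.mean(axis=1) - q @ topology.mean(axis=1)
    return q, t

# Noise-free sanity check with an assumed square 2-D topology.
theta = 0.3
q_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([1.0, -2.0])
topo = np.array([[0.0, 1.0, 0.0, 1.0],
                 [0.0, 0.0, 1.0, 1.0]])
meas = q_true @ topo + t_true[:, None]
q_est, t_est = estimate_pose(topo, meas)
```

With noisy measurements, this plain LS solution degrades, which is the motivation the abstract gives for the CTLS variant.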
|