| id | title | categories | abstract |
|---|---|---|---|
1212.1824 | Stochastic Gradient Descent for Non-smooth Optimization: Convergence
Results and Optimal Averaging Schemes | cs.LG math.OC stat.ML | Stochastic Gradient Descent (SGD) is one of the simplest and most popular
stochastic optimization methods. While it has already been theoretically
studied for decades, the classical analysis usually required non-trivial
smoothness assumptions, which do not apply to many modern applications of SGD
with non-smooth objective functions such as support vector machines. In this
paper, we investigate the performance of SGD without such smoothness
assumptions, as well as a running average scheme to convert the SGD iterates to
a solution with optimal optimization accuracy. In this framework, we prove that
after T rounds, the suboptimality of the last SGD iterate scales as
O(log(T)/\sqrt{T}) for non-smooth convex objective functions, and O(log(T)/T)
in the non-smooth strongly convex case. To the best of our knowledge, these are
the first bounds of this kind, and almost match the minimax-optimal rates
obtainable by appropriate averaging schemes. We also propose a new and simple
averaging scheme, which not only attains optimal rates, but can also be easily
computed on-the-fly (in contrast, the suffix averaging scheme proposed in
Rakhlin et al. (2011) is not as simple to implement). Finally, we provide some
experimental illustrations.
|
1212.1839 | On Structured Realizability and Stabilizability of Linear Systems | cs.SY math.OC | We study the notion of structured realizability for linear systems defined
over graphs. A stabilizable and detectable realization is structured if the
state-space matrices inherit the sparsity pattern of the adjacency matrix of
the associated graph. In this paper, we demonstrate that not every structured
transfer matrix has a structured realization and we reveal the practical
meaning of this fact. We also uncover a close connection between the structured
realizability of a plant and whether the plant can be stabilized by a
structured controller. In particular, we show that a structured stabilizing
controller can only exist when the plant admits a structured realization.
Finally, we give a parameterization of all structured stabilizing controllers
and show that they always have structured realizations.
|
1212.1863 | Self Authentication of image through Daubechies Transform technique
(SADT) | cs.CR cs.CV | In this paper a 4 x 4 Daubechies transform based authentication technique
termed SADT is proposed to authenticate gray-scale images. The cover image is
transformed into the frequency domain using a 4 x 4 mask in row-major order
via the Daubechies transform, resulting in four frequency subbands AF, HF, VF
and DF. One byte of every band in a mask is embedded with two or four bits of
secret information. Experimental results are computed and compared with
existing authentication techniques such as Li's method [5], SCDFT [6], the
Region-Based method [7] and other similar techniques, based on Mean Square
Error (MSE), Peak Signal to Noise Ratio (PSNR) and Image Fidelity (IF), and
show better performance for SADT.
|
1212.1881 | Deciding Monotone Duality and Identifying Frequent Itemsets in Quadratic
Logspace | cs.DS cs.AI cs.CC cs.DB | The monotone duality problem is defined as follows: Given two monotone
formulas f and g in irredundant DNF, decide whether f and g are dual. This
problem is the same as duality testing for hypergraphs, that is, checking
whether a hypergraph H consists of precisely all minimal transversals of a
simple hypergraph G. By exploiting a recent problem-decomposition method by
Boros and Makino (ICALP 2009), we show that duality testing for hypergraphs,
and thus for monotone DNFs, is feasible in DSPACE[log^2 n], i.e., in quadratic
logspace. As the monotone duality problem is equivalent to a number of problems
in the areas of databases, data mining, and knowledge discovery, the results
presented here yield new complexity results for those problems, too. For
example, it follows from our results that whenever for a Boolean-valued
relation (whose attributes represent items), a number of maximal frequent
itemsets and a number of minimal infrequent itemsets are known, then it can be
decided in quadratic logspace whether there exist additional frequent or
infrequent itemsets.
|
1212.1901 | Kolmogorov Complexity and the Garden of Eden Theorem | nlin.CG cs.IT math.IT | Suppose $\tau$ is a cellular automaton over an amenable group and a finite
alphabet. The celebrated Garden of Eden theorem states that pre-injectivity of
$\tau$ is equivalent to the non-existence of a Garden of Eden configuration. In
this paper we prove that, under some mild restrictions, another equivalent
assertion can be added: the non-existence of a Garden of Eden configuration is
equivalent to the preservation of asymptotic Kolmogorov complexity under the
action of the cellular automaton. This yields a characterisation of the
cellular automata which preserve asymptotic Kolmogorov complexity.
|
1212.1913 | Energy-minimizing error-correcting codes | math.CO cs.IT math.IT | We study a discrete model of repelling particles, and we show using linear
programming bounds that many familiar families of error-correcting codes
minimize a broad class of potential energies when compared with all other codes
of the same size and block length. Examples of these universally optimal codes
include Hamming, Golay, and Reed-Solomon codes, among many others, and this
helps explain their robustness as the channel model varies. Universal
optimality of these codes is equivalent to minimality of their binomial
moments, which has been proved in many cases by Ashikhmin and Barg. We
highlight connections with mathematical physics and the analogy between these
results and previous work by Cohn and Kumar in the continuous setting, and we
develop a framework for optimizing the linear programming bounds. Furthermore,
we show that if these bounds prove a code is universally optimal, then the code
remains universally optimal even if one codeword is removed.
|
1212.1918 | Condens\'es de textes par des m\'ethodes num\'eriques | cs.IR cs.CL | Since information in electronic form is already the standard, and the
variety and quantity of information grow ever larger, summarization or
automatic condensation of texts has become a critical phase of text analysis.
This article describes CORTEX, a system based on numerical methods that
obtains a condensation of a text independently of the topic and the length of
the text. The structure of the system enables it to produce abstracts of texts
in French or Spanish in very short times.
|
1212.1927 | User Taglines: Alternative Presentations of Expertise and Interest in
Social Media | cs.SI | Web applications increasingly show recommended users from social media
along with short descriptions, an attempt to convey relevancy, i.e. why they
are being shown. For example, a Twitter search for a topical keyword shows
expert twitterers in a 'whom to follow' sidebar. Google+ and Facebook also
recommend users to follow or add to a friend circle. The popular Internet
newspaper The Huffington Post shows Twitter influencers/experts beside an
article for authoritative relevant tweets. The state of the art shows user
profile bios as summaries for Twitter experts, but this approach suffers from
the length constraint imposed by user interface (UI) design, missing bios, and
sometimes facetious bios. Alternatively, applications can use human-generated
user summaries, but these do not scale. Therefore, we study the problem of
automatically generating informative expertise summaries, or taglines, for
Twitter experts within the space constraint imposed by UI design. We propose
three methods for expertise summary generation, Occupation-Pattern based,
Link-Triangulation based and User-Classification based, using
knowledge-enhanced computing approaches. We also propose methods for final
summary selection for users with multiple candidate summaries. We evaluate the
proposed approaches by user study across a number of experiments. Our results
show promising quality: 92.8% good summaries with majority agreement in the
best case and 70% with majority agreement in the worst case. Our approaches
also outperform the state of the art by up to 88%. This study has implications
for expert profiling, user presentation, and application design for an
engaging user experience.
|
1212.1936 | High-dimensional sequence transduction | cs.LG | We investigate the problem of transforming an input sequence into a
high-dimensional output sequence in order to transcribe polyphonic audio music
into symbolic notation. We introduce a probabilistic model based on a recurrent
neural network that is able to learn realistic output distributions given the
input and we devise an efficient algorithm to search for the global mode of
that distribution. The resulting method produces musically plausible
transcriptions even under high levels of noise and drastically outperforms
previous state-of-the-art approaches on five datasets of synthesized sounds and
real recordings, approximately halving the test error rate.
|
1212.1940 | Consensus Formation on Simplicial Complex of Opinions | physics.soc-ph cs.SI | The geometric realization of an opinion is considered as a simplex, and the
opinion space of a group of individuals is a simplicial complex whose
topological features are monitored during the process of opinion formation.
The agents are physically located on the nodes of a scale-free network. Social
interactions include all concepts of social dynamics present in the mainstream
models, augmented by four additional interaction mechanisms that depend on the
local properties of opinions and their overlaps. The results pertaining to the
formation of consensus are of particular interest. An analogy with
quantum-mechanical pure states is established through the application of the
high-dimensional combinatorial Laplacian.
|
1212.1942 | Balanced K-SAT and Biased random K-SAT on trees | cond-mat.stat-mech cs.AI cs.CC | We study and solve some variations of the random K-satisfiability problem -
balanced K-SAT and biased random K-SAT - on a regular tree, using techniques we
have developed earlier (arXiv:1110.2065). In both these problems, as well as
variations of these that we have looked at, we find that the SAT-UNSAT
transition obtained on the Bethe lattice matches the exact threshold for the
same model on a random graph for K=2 and is very close to the numerical value
obtained for K=3. For higher K it deviates from the numerical estimates of the
solvability threshold on random graphs, but is very close to the dynamical
1-RSB threshold as obtained from the first non-trivial fixed point of the
survey propagation algorithm.
|
1212.1969 | Joint Secured and Robust Technique for OFDM Systems | cs.IT math.IT | This work presents a novel technique for joint secured and robust
transmission of orthogonal frequency division multiplexing (OFDM) based
communication systems. The proposed system is implemented by developing a new
OFDM symbol structure based on symmetric key cryptography. At the receiver
side, data detection becomes infeasible without the knowledge of the secret
key. For an intruder who tries to detect the data without the knowledge of the
key, the signal will be a noise-like signal. In addition to the system
security, theoretical and simulation results demonstrated that the proposed
system provides time and frequency diversity, which makes the system highly
robust against severe frequency-selective fading as well as other impairments
such as impulsive noise and multiple access interference. For particular
frequency-selective fading channels, the bit error rate (BER) improvement was
about 15 dB at a BER of 10^-4.
|
1212.2002 | A simpler approach to obtaining an O(1/t) convergence rate for the
projected stochastic subgradient method | cs.LG math.OC stat.ML | In this note, we present a new averaging technique for the projected
stochastic subgradient method. By using a weighted average with a weight of t+1
for each iterate w_t at iteration t, we obtain the convergence rate of O(1/t)
with both an easy proof and an easy implementation. The new scheme is compared
empirically to existing techniques, with similar performance behavior.
|
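The averaging scheme in the abstract above (1212.2002) is concrete enough to sketch: weight each iterate w_t by t+1 and maintain the weighted average on the fly. Below is a minimal Python illustration on a toy one-dimensional strongly convex problem; the objective, the step-size schedule 2/(lam*(t+2)), the projection interval and the noise model are our own illustrative assumptions, not taken from the paper.

```python
import random

def projected_sgd_weighted(grad, project, w0, lam, T, seed=0):
    """Projected stochastic subgradient method with a (t+1)-weighted running
    average of the iterates: bar_w = sum_t (t+1) w_t / sum_t (t+1).
    Step size 2/(lam*(t+2)) for a lam-strongly-convex objective (a sketch;
    the paper's exact indexing may differ)."""
    rng = random.Random(seed)
    w = w0
    num = 0.0          # running weighted sum of iterates
    den = 0.0          # running sum of weights
    for t in range(T):
        g = grad(w, rng)                          # noisy subgradient at w_t
        w = project(w - 2.0 / (lam * (t + 2)) * g)
        num += (t + 1) * w                        # weight t+1 for iterate w_t
        den += (t + 1)
    return num / den   # computed on the fly, O(1) memory, no stored iterates

# Toy problem (our assumption): f(w) = 0.5*(w-3)^2 on [0, 10],
# observed through a gradient corrupted by Gaussian noise.
noisy_grad = lambda w, rng: (w - 3.0) + rng.gauss(0.0, 1.0)
proj = lambda w: min(max(w, 0.0), 10.0)
w_bar = projected_sgd_weighted(noisy_grad, proj, w0=0.0, lam=1.0, T=20000)
```

The point of the scheme is that, unlike suffix averaging, nothing needs to be stored or recomputed: two running scalars suffice.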
1212.2005 | The Dynamic Controllability of Conditional STNs with Uncertainty | cs.AI cs.SY | Recent attempts to automate business processes and medical-treatment
processes have uncovered the need for a formal framework that can accommodate
not only temporal constraints, but also observations and actions with
uncontrollable durations. To meet this need, this paper defines a Conditional
Simple Temporal Network with Uncertainty (CSTNU) that combines the simple
temporal constraints from a Simple Temporal Network (STN) with the conditional
nodes from a Conditional Simple Temporal Problem (CSTP) and the contingent
links from a Simple Temporal Network with Uncertainty (STNU). A notion of
dynamic controllability for a CSTNU is defined that generalizes the dynamic
consistency of a CTP and the dynamic controllability of an STNU. The paper also
presents some sound constraint-propagation rules for dynamic controllability
that are expected to form the backbone of a dynamic-controllability-checking
algorithm for CSTNUs.
|
1212.2006 | A Novel Feature-based Bayesian Model for Query Focused Multi-document
Summarization | cs.CL cs.IR | Both supervised learning methods and LDA-based topic models have been
successfully applied in the field of query-focused multi-document
summarization. In this paper, we propose a novel supervised approach that can
incorporate rich sentence features into Bayesian topic models in a principled
way, thus taking advantage of both topic models and feature-based supervised
learning methods. Experiments on TAC2008 and TAC2009 demonstrate the
effectiveness of our approach.
|
1212.2036 | Query-focused Multi-document Summarization: Combining a Novel Topic
Model with Graph-based Semi-supervised Learning | cs.CL cs.IR | Graph-based semi-supervised learning has proven to be an effective approach
for query-focused multi-document summarization. The problem with previous
semi-supervised learning approaches is that sentences are ranked without
considering higher-level information beyond the sentence level. Research on
general summarization has illustrated that adding a topic level can
effectively improve summary quality. Inspired by this, we propose a two-layer
(i.e. sentence layer and topic layer) graph-based semi-supervised learning
approach. At the same time, we propose a novel topic model that makes full use
of the dependence between sentences and words. Experimental results on DUC and
TAC data sets demonstrate the effectiveness of the proposed approach.
|
1212.2044 | Macro-Economic Time Series Modeling and Interaction Networks | cs.NE stat.AP | Macro-economic models describe the dynamics of economic quantities. The
estimations and forecasts produced by such models play a substantial role for
financial and political decisions. In this contribution we describe an approach
based on genetic programming and symbolic regression to identify variable
interactions in large datasets. In the proposed approach multiple symbolic
regression runs are executed for each variable of the dataset to find
potentially interesting models. The result is a variable interaction network
that describes which variables are most relevant for the approximation of each
variable of the dataset. This approach is applied to a macro-economic dataset
with monthly observations of important economic indicators in order to identify
potentially interesting dependencies of these indicators. The resulting
interaction network of macro-economic indicators is briefly discussed and two
of the identified models are presented in detail. The two models approximate
the help wanted index and the CPI inflation in the US.
|
1212.2056 | Soft Constraint Logic Programming for Electric Vehicle Travel
Optimization | cs.AI | Soft Constraint Logic Programming is a natural and flexible declarative
programming formalism, which allows one to model and solve real-life problems
involving constraints of different types.
  In this paper, after providing a slightly more general and elegant
presentation of the framework, we show how it can be applied to the e-mobility
problem of coordinating electric vehicles in order to overcome both energetic
and temporal constraints and thus reduce their running cost. In particular, we
focus on the journey-optimization sub-problem, considering sequences of trips
from one of a user's appointments to the next. Solutions provide the best
alternatives in terms of time and energy consumption, including route
sequences and possible charging events.
|
1212.2065 | A Survey on Information Retrieval, Text Categorization, and Web Crawling | cs.IR | This paper is a survey discussing Information Retrieval concepts, methods,
and applications. It goes deep into the document and query modelling involved
in IR systems, in addition to pre-processing operations such as removing stop
words and searching by synonym techniques. The paper also tackles text
categorization along with its application in neural networks and machine
learning. Finally, the architecture of web crawlers is discussed, shedding
light on how Internet spiders index web documents and how they allow users to
search for items on the web.
|
1212.2071 | A Data Warehouse Design for a Typical University Information System | cs.DB | Presently, large enterprises rely on database systems to manage their data
and information. These databases are useful for conducting daily business
transactions. However, tight competition in the marketplace has led to the
concept of data mining, in which data are analyzed to derive effective
business strategies and discover better ways of carrying out business. In
order to perform data mining, regular databases must be converted into what
are called informational databases, also known as data warehouses. This paper
presents a design model for building a data warehouse for a typical university
information system. It is based on transforming an operational database into
an informational warehouse useful for decision makers to conduct data
analysis, prediction, and forecasting. The proposed model is based on four
stages of data migration: data extraction, data cleansing, data
transformation, and data indexing and loading. The complete system is
implemented under MS Access 2010 and is meant to serve as a repository of data
for data mining operations.
|
1212.2094 | Secondary Access to Spectrum with SINR Requirements Through Constraint
Transformation | cs.NI cs.IT math.IT | In this paper we investigate the problem of allocating spectrum among radio
nodes under SINR requirements. This problem is of special interest in dynamic
spectrum access networks where topology and spectral resources differ with time
and location. The problem is to determine the number of radio nodes that can
transmit simultaneously while still achieving their SINR requirements and then
decide which channels these nodes should transmit on. Previous work has shown
how this can be done for a large spectrum pool where nodes allocate multiple
channels from that pool which renders a linear programming approach feasible
when the pool is large enough. In this paper we extend their work by
considering arbitrary individual pool sizes and allow nodes to only transmit on
one channel. Due to the accumulative nature of interference this problem is a
non-convex integer problem which is NP-hard. However, we introduce a constraint
transformation that transforms the problem to a binary quadratic constraint
problem. Although this problem is still NP-hard, well-known heuristic
algorithms for it exist in the literature. We implement a
heuristic algorithm based on Lagrange relaxation which bounds the solution
value of the heuristic to the optimal value of the constraint transformed
problem. Simulation results show that this approach provides solutions within
an average gap of 10% of solutions obtained by a genetic algorithm for the
original non-convex integer problem.
|
1212.2125 | Sparse Regression Codes for Multi-terminal Source and Channel Coding | cs.IT math.IT | We study a new class of codes for Gaussian multi-terminal source and channel
coding. These codes are designed using the statistical framework of
high-dimensional linear regression and are called Sparse Superposition or
Sparse Regression codes. Codewords are linear combinations of subsets of
columns of a design matrix. These codes were recently introduced by Barron and
Joseph and shown to achieve the channel capacity of AWGN channels with
computationally feasible decoding. They have also recently been shown to
achieve the optimal rate-distortion function for Gaussian sources. In this
paper, we demonstrate how to implement random binning and superposition coding
using sparse regression codes. In particular, with minimum-distance
encoding/decoding it is shown that sparse regression codes attain the optimal
information-theoretic limits for a variety of multi-terminal source and channel
coding problems.
|
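The codeword structure described above (a linear combination of one column per section of a design matrix) is easy to illustrate. The toy sketch below encodes a message by summing one chosen column from each section and decodes by brute-force minimum-distance search over an AWGN channel; the sizes, the Gaussian design and the exhaustive decoder are our own assumptions for illustration only (the cited work uses much larger codes and computationally feasible decoders).

```python
import itertools
import random

def sparse_regression_encode(A, sections, choice):
    """Codeword = sum of one chosen column per section of the design matrix A
    (the 'sparse superposition' structure). A: n x (L*M) matrix as a list of
    rows; sections: L lists of column indices; choice: selected column in
    each section."""
    cols = [sec[c] for sec, c in zip(sections, choice)]
    return [sum(row[j] for j in cols) for row in A]

def min_distance_decode(A, sections, y):
    """Brute-force minimum-distance decoding: try every column choice.
    Only feasible at toy sizes; shown here purely to make the code concrete."""
    best, best_d = None, float("inf")
    for choice in itertools.product(*[range(len(s)) for s in sections]):
        x = sparse_regression_encode(A, sections, choice)
        d = sum((yi - xi) ** 2 for yi, xi in zip(y, x))
        if d < best_d:
            best, best_d = choice, d
    return best

# Toy instance (our choice of sizes): L=3 sections of M=4 columns, n=12,
# i.i.d. Gaussian design matrix.
rng = random.Random(1)
n, L, M = 12, 3, 4
A = [[rng.gauss(0, 1) for _ in range(L * M)] for _ in range(n)]
sections = [list(range(s * M, (s + 1) * M)) for s in range(L)]
msg = (2, 0, 3)                              # one column index per section
x = sparse_regression_encode(A, sections, msg)
y = [xi + rng.gauss(0, 0.1) for xi in x]     # AWGN channel
decoded = min_distance_decode(A, sections, y)
```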
1212.2129 | Online Portfolio Selection: A Survey | q-fin.CP cs.AI cs.CE q-fin.PM | Online portfolio selection is a fundamental problem in computational finance,
which has been extensively studied across several research communities,
including finance, statistics, artificial intelligence, machine learning, and
data mining. This article aims to provide a comprehensive survey and a
structural understanding of published online portfolio selection techniques.
From an online machine learning perspective, we first formulate online
portfolio selection as a sequential decision problem, and then survey a variety
of state-of-the-art approaches, which are grouped into several major
categories, including benchmarks, "Follow-the-Winner" approaches,
"Follow-the-Loser" approaches, "Pattern-Matching" based approaches, and
"Meta-Learning Algorithms". In addition to the problem formulation and related
algorithms, we also discuss the relationship of these algorithms with the
Capital Growth theory in order to better understand the similarities and
differences of their underlying trading ideas. This article aims to provide a
timely and comprehensive survey for both machine learning and data mining
researchers in academia and quantitative portfolio managers in the financial
industry to help them understand the state-of-the-art and facilitate their
research and practical applications. We also discuss some open issues and
evaluate some emerging new trends for future research directions.
|
1212.2136 | A class of random fields on complete graphs with tractable partition
function | cs.LG stat.ML | The aim of this short note is to draw attention to a method by which the
partition function and marginal probabilities for a certain class of random
fields on complete graphs can be computed in polynomial time. This class
includes Ising models with homogeneous pairwise potentials but arbitrary
(inhomogeneous) unary potentials. Similarly, the partition function and
marginal probabilities can be computed in polynomial time for random fields on
complete bipartite graphs, provided they have homogeneous pairwise potentials.
We expect that these tractable classes of large scale random fields can be very
useful for the evaluation of approximation algorithms by providing exact error
estimates.
|
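One plausible concrete instance of the tractability claimed above: for an Ising model on the complete graph with a homogeneous coupling J and arbitrary unary potentials theta_i, the pairwise energy depends only on the total magnetization, and the unary part summed over all spin sets with a fixed number of +1 spins is an elementary symmetric polynomial, computable by dynamic programming. The sketch below is our own formulation (not necessarily the note's exact construction); it computes the partition function in O(n^2) time and checks it against direct enumeration.

```python
import itertools
import math

def partition_function_complete_ising(J, theta):
    """Z = sum over s in {-1,+1}^n of exp(J * sum_{i<j} s_i s_j + sum_i theta_i s_i),
    computed in O(n^2). On the complete graph with homogeneous coupling J the
    pairwise term depends only on m = sum_i s_i, and for fixed k = #(+1 spins)
    the unary factors sum to an elementary symmetric polynomial."""
    n = len(theta)
    # e[k] = elementary symmetric polynomial e_k(w_1..w_n), w_i = exp(2*theta_i)
    e = [1.0] + [0.0] * n
    for th in theta:
        w = math.exp(2.0 * th)
        for k in range(n, 0, -1):            # standard DP, in-place update
            e[k] += w * e[k - 1]
    base = math.exp(-sum(theta))             # baseline: all spins set to -1
    Z = 0.0
    for k in range(n + 1):                   # k = number of +1 spins
        m = 2 * k - n                        # total magnetization
        pair = math.exp(J * (m * m - n) / 2.0)   # sum_{i<j} s_i s_j = (m^2-n)/2
        Z += pair * base * e[k]
    return Z

def partition_function_brute(J, theta):
    """O(2^n) check by direct enumeration, for small n only."""
    n = len(theta)
    Z = 0.0
    for s in itertools.product((-1, 1), repeat=n):
        pair = sum(s[i] * s[j] for i in range(n) for j in range(i + 1, n))
        Z += math.exp(J * pair + sum(t * si for t, si in zip(theta, s)))
    return Z
```

With the partition function at each k in hand, marginals follow by the same bookkeeping, which is what makes such models useful as exact baselines for approximation algorithms.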
1212.2142 | Universality in voting behavior: an empirical analysis | physics.soc-ph cs.SI physics.data-an | Election data represent a precious source of information to study human
behavior at a large scale. In proportional elections with open lists, the
number of votes received by a candidate, rescaled by the average performance of
all competitors in the same party list, has the same distribution regardless of
the country and the year of the election. Here we provide the first thorough
assessment of this claim. We analyzed election datasets of 15 countries with
proportional systems. We confirm that a class of nations with similar election
rules fulfill the universality claim. Discrepancies from this trend in other
countries with open-list elections are always associated with peculiar
differences in the election rules, which matter more than differences between
countries and historical periods. Our analysis shows that the role of parties
in the electoral performance of candidates is crucial: alternative scalings not
taking into account party affiliations lead to poor results.
|
1212.2144 | Design of companding quantizer for Laplacian source using the
approximation of probability density function | math.OC cs.IT math.IT | In this paper, both piecewise-linear and piecewise-uniform approximations of
the probability density function are performed. For the probability density
function approximated in these ways, a compressor function is formed, and on
its basis piecewise-linear and piecewise-uniform companding quantizers are
designed. These companding quantizer models are designed for a Laplacian
source at the input of the quantizer. The performance of the proposed
companding quantizer models is estimated by determining the
signal-to-quantization-noise ratio (SQNR) and the approximation error for both
proposed models, and by their mutual comparison.
|
1212.2145 | A Scale-Space Theory for Text | cs.IR cs.CL | Scale-space theory has been established primarily by the computer vision and
signal processing communities as a well-founded and promising framework for
multi-scale processing of signals (e.g., images). By embedding an original
signal into a family of gradually coarsened signals parameterized by a
continuous scale parameter, it provides a formal framework to capture the
structure of a signal at different scales in a consistent way. In this paper,
we present a scale-space theory for text by integrating semantic and spatial
filters, and demonstrate how natural language documents can be understood,
processed and analyzed at multiple resolutions, and how this scale-space
representation can be used to facilitate a variety of NLP and text analysis
tasks.
|
1212.2150 | Collaborative Competitive filtering II: Optimal Recommendation and
Collaborative Games | cs.IR | Recommender systems have emerged as a new weapon to help online firms to
realize many of their strategic goals (e.g., to improve sales, revenue,
customer experience etc.). However, many existing techniques commonly approach
these goals by seeking to recover preference (e.g., estimating ratings) in a
matrix completion framework. This paper aims to bridge this significant gap
between the clearly-defined strategic objectives and the not-so-well-justified
proxy.
We show it is advantageous to think of a recommender system as an analogy to
a monopoly economic market with the system as the sole seller, users as the
buyers and items as the goods. This new perspective motivates a game-theoretic
formulation for recommendation that enables us to identify the optimal
recommendation policy by explicitly optimizing certain strategic goals. In this
paper, we revisit and extend our prior work, the Collaborative-Competitive
Filtering preference model, towards a game-theoretic framework. The proposed
framework consists of two components. First, a conditional preference model
that characterizes how a user would respond to a recommendation action; Second,
knowing in advance how the user would respond, how a recommender system should
act (i.e., recommend) strategically to maximize its goals. We show how
objectives such as click-through rate, sales revenue and consumption diversity
can be optimized explicitly in this framework. Experiments are conducted on a
commercial recommender system and demonstrate promising results.
|
1212.2153 | Emergence of network features from multiplexity | physics.soc-ph cs.SI | Many biological and man-made networked systems are characterized by the
simultaneous presence of different sub-networks organized in separate layers,
with links and nodes of qualitatively different types. While during the past
few years theoretical studies have examined a variety of structural features of
complex networks, the outstanding question is whether such features
characterize all single layers, or rather emerge as a result of
coarse-graining, i.e. when going from the multilayered to the aggregate network
representation. Here we address this issue with the help of real data. We
analyze the structural properties of an intrinsically multilayered real
network, the European Air Transportation Multiplex Network in which each
commercial airline defines a network layer. We examine how several structural
measures evolve as layers are progressively merged together. In particular, we
discuss how the topology of each layer affects the emergence of structural
properties in the aggregate network.
|
1212.2170 | Stochastic Perron's method for Hamilton-Jacobi-Bellman equations | math.PR cs.SY math.AP math.OC | We show that the value function of a stochastic control problem is the unique
solution of the associated Hamilton-Jacobi-Bellman (HJB) equation, completely
avoiding the proof of the so-called dynamic programming principle (DPP). Using
Stochastic Perron's method we construct a super-solution lying below the value
function and a sub-solution dominating it. A comparison argument easily closes
the proof. The program has the precise meaning of verification for
viscosity-solutions, obtaining the DPP as a conclusion. It also immediately
follows that the weak and strong formulations of the stochastic control problem
have the same value. Using this method we also capture the possible
face-lifting phenomenon in a straightforward manner.
|
1212.2245 | Fast and Robust Linear Motion Deblurring | cs.CV | We investigate efficient algorithmic realisations for robust deconvolution of
grey-value images with known space-invariant point-spread function, with
emphasis on 1D motion blur scenarios. The goal is to make deconvolution
suitable as preprocessing step in automated image processing environments with
tight time constraints. Candidate deconvolution methods are selected for their
restoration quality, robustness and efficiency. Evaluation of restoration
quality and robustness on synthetic and real-world test images leads us to
focus on a combination of Wiener filtering with few iterations of robust and
regularised Richardson-Lucy deconvolution. We discuss algorithmic optimisations
for specific scenarios. In the case of uniform linear motion blur in coordinate
direction, it is possible to achieve real-time performance (less than 50 ms) in
single-threaded CPU computation on images of $256\times256$ pixels. For more
general space-invariant blur settings, still favourable computation times are
obtained. Exemplary parallel implementations demonstrate that the proposed
method also achieves real-time performance for general 1D motion blurs in a
multi-threaded CPU setting, and for general 2D blurs on a GPU.
|
1212.2251 | A Propagation Model for Provenance Views of Public/Private Workflows | cs.DB | We study the problem of concealing functionality of a proprietary or private
module when provenance information is shown over repeated executions of a
workflow which contains both `public' and `private' modules. Our approach is to
use `provenance views' to hide carefully chosen subsets of data over all
executions of the workflow to ensure G-privacy: for each private module and
each input x, the module's output f(x) is indistinguishable from G-1 other
possible values given the visible data in the workflow executions. We show that
G-privacy cannot be achieved simply by combining solutions for individual
private modules; data hiding must also be `propagated' through public modules.
We then examine how much additional data must be hidden and when it is safe to
stop propagating data hiding. The answer depends strongly on the workflow
topology as well as the behavior of public modules on the visible data. In
particular, for a class of workflows (which include the common tree and chain
workflows), taking private solutions for each private module, augmented with a
`public closure' that is `upstream-downstream safe', ensures G-privacy. We
define these notions formally and show that the restrictions are necessary. We
also study the related optimization problems of minimizing the amount of hidden
data.
|
1212.2262 | Bag-of-Words Representation for Biomedical Time Series Classification | cs.LG cs.AI | Automatic analysis of biomedical time series such as electroencephalogram
(EEG) and electrocardiographic (ECG) signals has attracted great interest in
the community of biomedical engineering due to its important applications in
medicine. In this work, a simple yet effective bag-of-words representation that
is able to capture both local and global structure similarity information is
proposed for biomedical time series representation. In particular, similar to
the bag-of-words model used in text document domain, the proposed method treats
a time series as a text document and extracts local segments from the time
series as words. The biomedical time series is then represented as a histogram
of codewords, each entry of which counts how often a codeword appears in the
time series. Although the temporal order of the local segments is ignored, the
bag-of-words representation is able to capture high-level structural
information because both local and global structural information are well
utilized. The performance of the bag-of-words model is validated on three
datasets extracted from real EEG and ECG signals. The experimental results
demonstrate that the proposed method is not only insensitive to parameters of
the bag-of-words model such as local segment length and codebook size, but also
robust to noise.
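A minimal sketch of the representation described above: local segments are extracted from the series and each is assigned to its nearest codeword, producing a normalised histogram. In practice the codebook would be learned (e.g. by clustering training segments); here it is passed in, and the segment length and stride are illustrative choices.

```python
import numpy as np

def bag_of_words(series, codebook, seg_len=16, stride=8):
    """Represent a 1D time series as a normalised histogram over a codebook
    of local segments ('words'), discarding their temporal order."""
    segs = np.array([series[i:i + seg_len]
                     for i in range(0, len(series) - seg_len + 1, stride)])
    # assign each local segment to the nearest codeword (Euclidean distance)
    d = ((segs[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    labels = d.argmin(1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()  # histogram of codeword counts, normalised
```

Two series with similar local shapes then map to similar histograms, regardless of where those shapes occur in time.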
|
1212.2264 | A space efficient streaming algorithm for triangle counting using the
birthday paradox | cs.DS cs.DM cs.SI | We design a space efficient algorithm that approximates the transitivity
(global clustering coefficient) and total triangle count with only a single
pass through a graph given as a stream of edges. Our procedure is based on the
classic probabilistic result, the birthday paradox. When the transitivity is
constant and there are more edges than wedges (common properties for social
networks), we can prove that our algorithm requires $O(\sqrt{n})$ space ($n$ is
the number of vertices) to provide accurate estimates. We run a detailed set of
experiments on a variety of real graphs and demonstrate that the memory
requirement of the algorithm is a tiny fraction of the graph. For example, even
for a graph with 200 million edges, our algorithm stores just 60,000 edges to
give accurate results. Being a single pass streaming algorithm, our procedure
also maintains a real-time estimate of the transitivity/number of triangles of
a graph, by storing a minuscule fraction of edges.
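The paper's birthday-paradox wedge estimator is more subtle, but the basic "keep a small random piece of the stream and rescale" intuition can be shown with a toy edge-sampling estimator. This is not the authors' algorithm: here `p` is an assumed edge-sampling probability, and each triangle survives in the sample with probability p^3.

```python
import numpy as np
from itertools import combinations

def triangles_exact(edges, n):
    """Exact triangle count by checking all vertex triples (toy sizes only)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    return sum(1 for a, b, c in combinations(range(n), 3)
               if b in adj[a] and c in adj[a] and c in adj[b])

def triangles_sampled(edges, n, p, seed=0):
    """Estimate the triangle count from an edge sample kept with prob. p;
    a triangle needs all 3 of its edges to survive, so rescale by 1/p^3."""
    rng = np.random.default_rng(seed)
    kept = [e for e in edges if rng.random() < p]
    return triangles_exact(kept, n) / p ** 3
```

The streaming algorithm in the abstract achieves far better variance than this naive rescaling by sampling wedges rather than independent edges.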
|
1212.2278 | Inverting and Visualizing Features for Object Detection | cs.CV | We introduce algorithms to visualize feature spaces used by object detectors.
The tools in this paper allow a human to put on `HOG goggles' and perceive the
visual world as a HOG based object detector sees it. We found that these
visualizations allow us to analyze object detection systems in new ways and
gain new insight into the detector's failures. For example, when we visualize
the features for high scoring false alarms, we discovered that, although they
are clearly wrong in image space, they do look deceptively similar to true
positives in feature space. This result suggests that many of these false
alarms are caused by our choice of feature space, and indicates that creating a
better learning algorithm or building bigger datasets is unlikely to correct
these errors. By visualizing feature spaces, we can gain a more intuitive
understanding of our detection systems.
|
1212.2287 | Runtime Optimizations for Prediction with Tree-Based Models | cs.DB cs.IR cs.LG | Tree-based models have proven to be an effective solution for web ranking as
well as other problems in diverse domains. This paper focuses on optimizing the
runtime performance of applying such models to make predictions, given an
already-trained model. Although exceedingly simple conceptually, most
implementations of tree-based models do not efficiently utilize modern
superscalar processor architectures. By laying out data structures in memory in
a more cache-conscious fashion, removing branches from the execution flow using
a technique called predication, and micro-batching predictions using a
technique called vectorization, we are able to better exploit modern processor
architectures and significantly improve the speed of tree-based models over
hard-coded if-else blocks. Our work contributes to the exploration of
architecture-conscious runtime implementations of machine learning algorithms.
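A minimal sketch of the predication idea for a single complete tree. The heap-style array layout and depth are illustrative assumptions; production systems apply this to thousands of trees with cache-conscious layouts and vectorized micro-batches.

```python
import numpy as np

# A complete depth-2 decision tree in implicit heap layout: internal node i
# tests x[feat[i]] >= thresh[i]; its children live at 2*i+1 and 2*i+2.
feat = np.array([0, 1, 1])
thresh = np.array([0.5, 0.3, 0.7])
leaf_value = np.array([10.0, 20.0, 30.0, 40.0])  # leaves are nodes 3..6

def predict_predicated(x):
    """Branch-free traversal ('predication'): the comparison outcome (0 or 1)
    is folded into the child index arithmetically instead of via an if/else,
    so the CPU never has to predict a data-dependent branch."""
    i = 0
    for _ in range(2):  # fixed depth, no early exit
        i = 2 * i + 1 + int(x[feat[i]] >= thresh[i])
    return leaf_value[i - 3]
```

Replacing the if/else ladder with index arithmetic is what lets superscalar processors keep their pipelines full on otherwise unpredictable tree traversals.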
|
1212.2309 | Low Rank Mechanism for Optimizing Batch Queries under Differential
Privacy | cs.DB cs.CR | Differential privacy is a promising privacy-preserving paradigm for
statistical query processing over sensitive data. It works by injecting random
noise into each query result, such that it is provably hard for the adversary
to infer the presence or absence of any individual record from the published
noisy results. The main objective in differentially private query processing is
to maximize the accuracy of the query results, while satisfying the privacy
guarantees. Previous work, notably \cite{LHR+10}, has suggested that with an
appropriate strategy, processing a batch of correlated queries as a whole
achieves considerably higher accuracy than answering them individually.
However, to our knowledge there is currently no practical solution to find such
a strategy for an arbitrary query batch; existing methods either return
strategies of poor quality (often worse than naive methods) or require
prohibitively expensive computations for even moderately large domains.
Motivated by this, we propose the \emph{Low-Rank Mechanism} (LRM), the first
practical differentially private technique for answering batch queries with
high accuracy, based on a \emph{low rank approximation} of the workload matrix.
We prove that the accuracy provided by LRM is close to the theoretical lower
bound for any mechanism to answer a batch of queries under differential
privacy. Extensive experiments using real data demonstrate that LRM
consistently outperforms state-of-the-art query processing solutions under
differential privacy, by large margins.
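To make the mechanics concrete: the sketch below factors a workload matrix W ~ B L with a truncated SVD, perturbs only the r "strategy" answers L x, and maps them back through B. This illustrates only the shape of the mechanism; the actual LRM chooses the factorization by optimizing accuracy under differential-privacy sensitivity constraints, which a plain SVD does not do.

```python
import numpy as np

def low_rank_factor(W, r):
    """Factor a workload matrix W ~ B @ L of rank r via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    B = U[:, :r] * s[:r]
    L = Vt[:r]
    return B, L

def answer_batch(W, x, r, scale, seed=0):
    """Answer the whole batch W @ x by adding Laplace noise to the r
    strategy queries L @ x and recombining through B."""
    rng = np.random.default_rng(seed)
    B, L = low_rank_factor(W, r)
    noisy = L @ x + rng.laplace(scale=scale, size=r)
    return B @ noisy
```

When the batch of queries is strongly correlated (low-rank W), noise is paid for only r strategy queries instead of one per query, which is the source of the accuracy gain.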
|
1212.2314 | Tree Projections and Structural Decomposition Methods: Minimality and
Game-Theoretic Characterization | cs.DM cs.AI | Tree projections provide a mathematical framework that encompasses all the
various (purely) structural decomposition methods that have been proposed in
the literature to single out classes of nearly-acyclic (hyper)graphs, such as
the tree decomposition method, which is the most powerful decomposition method
on graphs, and the (generalized) hypertree decomposition method, which is its
natural counterpart on arbitrary hypergraphs. The paper analyzes this
framework, by focusing in particular on "minimal" tree projections, that is, on
tree projections without useless redundancies. First, it is shown that minimal
tree projections enjoy a number of properties that are usually required for
normal form decompositions in various structural decomposition methods. In
particular, they enjoy the same kind of connection properties as (minimal) tree
decompositions of graphs, with the result being tight in the light of the
negative answer that is provided to the open question about whether they enjoy
a slightly stronger notion of connection property, defined to speed-up the
computation of hypertree decompositions. Second, it is shown that tree
projections admit a natural game-theoretic characterization in terms of the
Captain and Robber game. In this game, as for the Robber and Cops game
characterizing tree decompositions, the existence of winning strategies implies
the existence of monotone ones. As a special case, the Captain and Robber game
can be used to characterize the generalized hypertree decomposition method,
where such a game-theoretic characterization was missing and asked for. Besides
their theoretical interest, these results have immediate algorithmic
applications both for the general setting and for structural decomposition
methods that can be recast in terms of tree projections.
|
1212.2316 | Asymptotic Optimality of Equal Power Allocation for Linear Estimation of
WSS Random Processes | cs.IT math.IT | This letter establishes the asymptotic optimality of equal power allocation
for measurements of a continuous wide-sense stationary (WSS) random process
with a square-integrable autocorrelation function when linear estimation is
used on equally-spaced measurements with periodicity meeting the Nyquist
criterion and with the variance of the noise on any sample inversely
proportional to the power expended by the user to obtain that measurement.
|
1212.2338 | Controlled conflict resolution for replicated document | cs.DB | Collaborative working is increasingly popular, but it presents challenges due
to the need for high responsiveness and disconnected work support. To address
these challenges the data is optimistically replicated at the edges of the
network, i.e. personal computers or mobile devices. This replication requires a
merge mechanism that preserves the consistency and structure of the shared data
subject to concurrent modifications. In this paper, we propose a generic design
to ensure eventual consistency (every replica will eventually view the same
data) and to maintain the specific constraints of the replicated data. Our
layered design gives the application engineer complete control over system
scalability and over the behavior of the replicated data in the face of
concurrent modifications. We show that our design allows replication of complex
data types with acceptable performance.
|
1212.2340 | PAC-Bayesian Learning and Domain Adaptation | stat.ML cs.LG | In machine learning, Domain Adaptation (DA) arises when the distribution
generating the test (target) data differs from the one generating the learning
(source) data. It is well known that DA is a hard task even under strong
assumptions, among which the covariate shift, where the source and target
distributions diverge only in their marginals, i.e. they have the same labeling
function. Another popular approach is to consider a hypothesis class that
brings the two distributions closer while implying a low error for both tasks.
This is a VC-dimension approach that restricts the complexity of a hypothesis
class in order to get good generalization. Instead, we propose a PAC-Bayesian
approach that seeks suitable weights to be given to each hypothesis in order to
build a majority vote. We prove a new DA bound in the PAC-Bayesian context.
This leads us to design the first DA PAC-Bayesian algorithm, based on the
minimization of the proposed bound. In doing so, we seek a \rho-weighted
majority vote that takes into account a trade-off between three quantities. The
first two are, as usual in the PAC-Bayesian approach, (a) the complexity of the
majority vote (measured by a Kullback-Leibler divergence) and (b) its empirical
risk (measured by the \rho-average errors on the source sample). The third
quantity is (c) the capacity of the majority vote to distinguish some
structural difference between the source and target samples.
|
1212.2342 | Distributed MIMO coding scheme with low decoding complexity for future
mobile TV broadcasting | cs.NI cs.IT math.IT | A novel distributed space-time block code (STBC) for the next generation
mobile TV broadcasting is proposed. The new code provides efficient performance
within a wide range of power imbalance, showing strong adaptivity to single
frequency network (SFN) broadcasting deployments. The new code outperforms
existing STBCs with equivalent decoding complexity and approaches those with
much higher complexities.
|
1212.2343 | Improved Channel Estimation Methods based on PN sequence for TDS-OFDM | cs.IT cs.NI math.IT | An accurate channel estimation is crucial for the novel time domain
synchronous orthogonal frequency-division multiplexing (TDS-OFDM) scheme in
which pseudo noise (PN) sequences serve as both guard intervals (GI) for OFDM
data symbols and training sequences for synchronization/channel estimation.
This paper studies the channel estimation method based on the cross-correlation
of PN sequences. A theoretical analysis of this estimator is conducted and
several improved estimators are then proposed to reduce the estimation error
floor encountered by the PN-correlation-based estimator. It is shown through
mathematical derivations and simulations that the new estimators approach or
even achieve the Cramer-Rao bound.
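To make the correlator concrete, here is a minimal sketch of the basic PN cross-correlation channel estimate that the paper starts from. A random +/-1 sequence stands in for an actual m-sequence, and the sequence length and tap count are illustrative; the paper's improved estimators are precisely about reducing the residual correlation-noise floor visible in this baseline.

```python
import numpy as np

def pn_channel_estimate(rx, pn, L):
    """Estimate the first L channel taps by cross-correlating the received
    samples with the known PN sequence, whose autocorrelation is ~ N*delta."""
    N = len(pn)
    return np.array([np.dot(rx[k:k + N], pn) / N for k in range(L)])

rng = np.random.default_rng(0)
pn = rng.choice([-1.0, 1.0], size=511)  # stand-in for an m-sequence
h = np.array([1.0, 0.5, 0.25])          # true channel impulse response
rx = np.convolve(pn, h)                 # received = PN sequence through channel
h_hat = pn_channel_estimate(rx, pn, L=3)
```

Because the PN autocorrelation is only approximately a delta, each estimated tap carries leakage from the other taps; that leakage is the estimation error floor the proposed estimators attack.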
|
1212.2345 | Enhanced Mobile Digital Video Broadcasting with Distributed Space-Time
Coding | cs.IT cs.MM math.IT | This paper investigates the distributed space-time (ST) coding proposals for
the future Digital Video Broadcasting--Next Generation Handheld (DVB-NGH)
standard. We first theoretically show that the distributed MIMO scheme is the
best broadcasting scenario in terms of channel capacity. Consequently we
evaluate the performance of several ST coding proposals for DVB-NGH with
practical system specifications and channel conditions. Simulation results
demonstrate that the 3D code is the best ST coding solution for broadcasting in
the distributed MIMO scenario.
|
1212.2390 | On the complexity of learning a language: An improvement of Block's
algorithm | cs.CL cs.LG | Language learning is thought to be a highly complex process. One of the
hurdles in learning a language is to learn the rules of syntax of the language.
Rules of syntax are often ordered in that before one rule can applied one must
apply another. It has been thought that to learn the order of n rules one must
go through all n! permutations. Thus to learn the order of 27 rules would
require 27! steps, or about $1.08889\times10^{28}$ steps. This number is much greater than
the number of seconds since the beginning of the universe! In an insightful
analysis the linguist Block ([Block 86], pp. 62-63, p.238) showed that with the
assumption of transitivity this vast number of learning steps reduces to a mere
377 steps. We present a mathematical analysis of the complexity of Block's
algorithm. The algorithm has a complexity of order n^2 given n rules. In
addition, we improve Block's results exponentially, by introducing an algorithm
that has complexity of order less than n log n.
|
1212.2396 | Source Coding Problems with Conditionally Less Noisy Side Information | cs.IT math.IT | A computable expression for the rate-distortion (RD) function proposed by
Heegard and Berger has eluded information theory for nearly three decades.
Heegard and Berger's single-letter achievability bound is well known to be
optimal for \emph{physically degraded} side information; however, it is not
known whether the bound is optimal for arbitrarily correlated side information
(general discrete memoryless sources). In this paper, we consider a new setup
in which the side information at one receiver is \emph{conditionally less
noisy} than the side information at the other. The new setup includes degraded
side information as a special case, and it is motivated by the literature on
degraded and less noisy broadcast channels. Our key contribution is a converse
proving the optimality of Heegard and Berger's achievability bound in a new
setting. The converse rests upon a certain \emph{single-letterization} lemma,
which we prove using an information theoretic telescoping identity recently
presented by Kramer. We also generalise the above ideas to two different
successive-refinement problems.
|
1212.2398 | An Information Theoretic Algorithm for Finding Periodicities in Stellar
Light Curves | astro-ph.IM cs.IT math.IT | We propose a new information theoretic metric for finding periodicities in
stellar light curves. Light curves are astronomical time series of brightness
over time, and are characterized as being noisy and unevenly sampled. The
proposed metric combines correntropy (generalized correlation) with a periodic
kernel to measure similarity among samples separated by a given period. The new
metric provides a periodogram, called Correntropy Kernelized Periodogram (CKP),
whose peaks are associated with the fundamental frequencies present in the
data. The CKP does not require any resampling, slotting or folding scheme as it
is computed directly from the available samples. CKP is the main part of a
fully-automated pipeline for periodic light curve discrimination to be used in
astronomical survey databases. We show that the CKP method outperformed the
slotted correntropy, and conventional methods used in astronomy for periodicity
discrimination and period estimation tasks, using a set of light curves drawn
from the MACHO survey. The proposed metric achieved 97.2% of true positives
with 0% of false positives at the confidence level of 99% for the periodicity
discrimination task; and 88% of hits with 11.6% of multiples and 0.4% of misses
in the period estimation task.
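As a rough illustration of the metric's construction, the sketch below combines a correntropy-style Gaussian kernel on sample values with a periodic kernel on (unevenly spaced) sample times: pairs of samples separated by near-multiples of a candidate period should have similar magnitudes only when the candidate is a true period. This is an illustrative construction in the spirit of the CKP, not its exact definition; the kernel sizes `sy` and `st` are arbitrary choices here, whereas the paper specifies how to set them.

```python
import numpy as np

def ckp_like(t, y, period, sy=0.5, st=0.3):
    """Correntropy-with-periodic-kernel periodogram value at one candidate
    period: average over all sample pairs of (value similarity) weighted by
    (closeness of their time separation to a multiple of the period)."""
    dt = t[:, None] - t[None, :]
    dy = y[:, None] - y[None, :]
    k_time = np.exp(-2 * np.sin(np.pi * dt / period) ** 2 / st ** 2)
    k_val = np.exp(-dy ** 2 / (2 * sy ** 2))
    return (k_time * k_val).mean()
```

Note that no resampling, slotting or folding is needed: the statistic is computed directly on the irregular samples, which is the property the abstract emphasises.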
|
1212.2414 | Mining Techniques in Network Security to Enhance Intrusion Detection
Systems | cs.CR cs.LG | In intrusion detection systems, classifiers still suffer from several
drawbacks such as data dimensionality and dominance, different network feature
types, and data impact on the classification. In this paper two significant
enhancements are presented to solve these drawbacks. The first enhancement is
an improved feature selection using sequential backward search and information
gain. This, in turn, extracts valuable features that improve the detection
rate and reduce the false positive rate. The second enhancement is
transferring nominal network features to numeric ones by exploiting the
discrete random variable and the probability mass function to solve the problem
of different feature types, the problem of data dominance, and data impact on
the classification. The latter is combined with known normalization methods to
achieve a significant hybrid normalization approach. Finally, an intensive
comparative study confirms the efficiency of these enhancements and shows
better performance compared to other proposed methods.
|
1212.2415 | Robust Face Recognition using Local Illumination Normalization and
Discriminant Feature Point Selection | cs.LG cs.CV | Face recognition systems must be robust to the variation of various factors
such as facial expression, illumination, head pose and aging. Especially, the
robustness against illumination variation is one of the most important problems
to be solved for the practical use of face recognition systems. Gabor wavelet
is widely used in face detection and recognition because it gives the
possibility to simulate the function of human visual system. In this paper, we
propose a method for extracting Gabor wavelet features which is stable under
the variation of local illumination and show experiment results demonstrating
its effectiveness.
|
1212.2425 | Multi-layered Social Networks | cs.SI physics.soc-ph | It is quite obvious that in the real world, more than one kind of
relationship can exist between two actors and that those ties can be so
intertwined that it is impossible to analyse them separately [Fienberg 85],
[Minor 83], [Szell 10]. Social networks with more than one type of relation are
not a completely new concept [Wasserman 94] but they were analysed mainly at
the small scale, e.g. in [McPherson 01], [Padgett 93], and [Entwisle 07]. Just
like in the case of regular single-layered social networks, there is no widely
accepted definition or even a common name. At the beginning, such networks were
called multiplex networks [Haythornthwaite 99], [Monge 03]. The term is
derived from communications theory, which defines multiplex as combining
multiple signals into one in such a way that it is possible to separate them if
needed [Hamill 06]. Recently, the area of multi-layered social network has
started attracting more and more attention in research conducted within
different domains [Kazienko 11a], [Szell 10], [Rodriguez 07], [Rodriguez 09],
and the meaning of multiplex network has expanded and covers not only social
relationships but any kind of connection, e.g. based on geography, occupation,
kinship, hobbies, etc. [Abraham 12]. This essay aims to summarize existing
knowledge about one concept which has many different names, i.e. the concept of
a Multi-layered Social Network, also known as a layered social network,
multi-relational social network, multidimensional social network, or multiplex
social network.
|
1212.2438 | Model-order reduction of biochemical reaction networks | cs.SY cs.SE math.DS physics.chem-ph q-bio.MN | In this paper we propose a model-order reduction method for chemical reaction
networks governed by general enzyme kinetics, including the mass-action and
Michaelis-Menten kinetics. The model-order reduction method is based on the
Kron reduction of the weighted Laplacian matrix which describes the graph
structure of complexes in the chemical reaction network. We apply our method to
a yeast glycolysis model, where the simulation result shows that the transient
behaviour of a number of key metabolites of the reduced-order model is in good
agreement with that of the full-order model.
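A minimal sketch of the Kron reduction step at the heart of the method: eliminating a subset of nodes from a weighted Laplacian via the Schur complement. The 4-node path graph in the usage below is purely illustrative, not the glycolysis network from the paper.

```python
import numpy as np

def kron_reduce(L, keep):
    """Kron reduction of a weighted Laplacian L: eliminate the complement of
    `keep` via the Schur complement L_kk - L_ke @ inv(L_ee) @ L_ek.
    The result is again a Laplacian on the kept nodes."""
    keep = np.asarray(keep)
    elim = np.setdiff1d(np.arange(L.shape[0]), keep)
    Lkk = L[np.ix_(keep, keep)]
    Lke = L[np.ix_(keep, elim)]
    Lee = L[np.ix_(elim, elim)]
    Lek = L[np.ix_(elim, keep)]
    return Lkk - Lke @ np.linalg.solve(Lee, Lek)
```

For the unit-weight path 0-1-2-3, keeping the endpoints yields the Laplacian of a single edge with effective weight 1/3, as expected from series reduction.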
|
1212.2442 | Active Collaborative Filtering | cs.IR cs.LG stat.ML | Collaborative filtering (CF) allows the preferences of multiple users to be
pooled to make recommendations regarding unseen products. We consider in this
paper the problem of online and interactive CF: given the current ratings
associated with a user, what queries (new ratings) would most improve the
quality of the recommendations made? We cast this in terms of expected value of
information (EVOI); but the online computational cost of computing optimal
queries is prohibitive. We show how offline prototyping and computation of
bounds on EVOI can be used to dramatically reduce the required online
computation. The framework we develop is general, but we focus on derivations
and empirical study in the specific case of the multiple-cause vector
quantization model.
|
1212.2444 | On revising fuzzy belief bases | cs.AI | We look at the problem of revising fuzzy belief bases, i.e., belief base
revision in which both formulas in the base as well as revision-input formulas
can come attached with varying truth-degrees. Working within a very general
framework for fuzzy logic which is able to capture a variety of types of
inference under uncertainty, such as truth-functional fuzzy logics and certain
types of probabilistic inference, we show how the idea of rational change from
'crisp' base revision, as embodied by the idea of partial meet revision, can be
faithfully extended to revising fuzzy belief bases. We present and axiomatise
an operation of partial meet fuzzy revision and illustrate how the operation
works in several important special instances of the framework.
|
1212.2445 | Upgrading Ambiguous Signs in QPNs | cs.AI | A qualitative probabilistic network models the probabilistic relationships
between its variables by means of signs. Non-monotonic influences have an
associated ambiguous sign. These ambiguous signs typically lead to
uninformative results upon inference. A non-monotonic influence can, however,
be associated with a, more informative, sign that indicates its effect in the
current state of the network. To capture this effect, we introduce the concept
of situational sign. Furthermore, if the network converts to a state in which
all variables that provoke the non-monotonicity have been observed, a
non-monotonic influence reduces to a monotonic influence. We study the
persistence and propagation of situational signs upon inference and give a
method to establish the sign of a reduced influence.
|
1212.2446 | Parametric Dependability Analysis through Probabilistic Horn Abduction | cs.AI | Dependability modeling and evaluation is aimed at investigating that a system
performs its function correctly in time. A usual way to achieve high
reliability is to design redundant systems that contain several replicas of
the same subsystem or component. State-space methods for dependability analysis
may suffer from the state-space explosion problem in such situations.
Combinatorial models, on the other hand, require the simplified assumption of
statistical independence; however, in case of redundant systems, this does not
guarantee a reduced number of modeled elements. In order to provide a more
compact system representation, parametric system modeling has been investigated
in the literature, in such a way that a set of replicas of a given subsystem is
parameterized so that only one representative instance is explicitly included.
While modeling aspects can be suitably addressed by these approaches,
analytical tools working on parametric characterizations are often more
difficult to be defined and the standard approach is to 'unfold' the parametric
model, in order to exploit standard analysis algorithms working at the unfolded
'ground' level. Moreover, parameterized combinatorial methods still require the
statistical independence assumption. In the present paper we consider the
formalism of Parametric Fault Tree (PFT) and we show how it can be related to
Probabilistic Horn Abduction (PHA). Since PHA is a framework where both
modeling and analysis can be performed in a restricted first-order language, we
aim at showing that converting a PFT into a PHA knowledge base will allow an
approach to dependability analysis directly exploiting parametric
representation. We will show that classical qualitative and quantitative
dependability measures can be characterized within PHA. Furthermore, additional
modeling aspects (such as noisy gates and local dependencies) as well as
additional reliability measures (such as posterior probability analysis) can be
naturally addressed by this conversion. A simple example of a multi-processor
system with several replicated units is used to illustrate the approach.
|
1212.2447 | Bayesian Hierarchical Mixtures of Experts | cs.LG stat.ML | The Hierarchical Mixture of Experts (HME) is a well-known tree-based model
for regression and classification, based on soft probabilistic splits. In its
original formulation it was trained by maximum likelihood, and is therefore
prone to over-fitting. Furthermore the maximum likelihood framework offers no
natural metric for optimizing the complexity and structure of the tree.
Previous attempts to provide a Bayesian treatment of the HME model have relied
either on ad-hoc local Gaussian approximations or have dealt with related
models representing the joint distribution of both input and output variables.
In this paper we describe a fully Bayesian treatment of the HME model based on
variational inference. By combining local and global variational methods we
obtain a rigorous lower bound on the marginal probability of the data under
the model. This bound is optimized during the training phase, and its resulting
value can be used for model order selection. We present results using this
approach for a data set describing robot arm kinematics.
|
1212.2448 | On Triangulating Dynamic Graphical Models | cs.AI | This paper introduces new methodology to triangulate dynamic Bayesian
networks (DBNs) and dynamic graphical models (DGMs). While most methods to
triangulate such networks use some form of constrained elimination scheme based
on properties of the underlying directed graph, we find it useful to view
triangulation and elimination using properties only of the resulting undirected
graph, obtained after the moralization step. We first briefly introduce the
Graphical model toolkit (GMTK) and its notion of dynamic graphical models, one
that slightly extends the standard notion of a DBN. We next introduce the
'boundary algorithm', a method to find the best boundary between partitions in
a dynamic model. We find that using this algorithm, the notions of forward- and
backward-interface become moot - namely, the size and fill-in of the best
forward- and backward- interface are identical. Moreover, we observe that
finding a good partition boundary allows for constrained elimination orders
(and therefore graph triangulations) that are not possible using standard
slice-by-slice constrained eliminations. More interestingly, with certain
boundaries it is possible to obtain constrained elimination schemes that lie
outside the space of possible triangulations using only unconstrained
elimination. Lastly, we report triangulation results on invented graphs,
standard DBNs from the literature, novel DBNs used in speech recognition
research systems, and also random graphs. Using a number of different
triangulation quality measures (max clique size, state-space, etc.), we find
that with our boundary algorithm the triangulation quality can dramatically
improve.
|
1212.2449 | An Empirical Study of w-Cutset Sampling for Bayesian Networks | cs.AI | The paper studies empirically the time-space trade-off between sampling and
inference in a cutset sampling algorithm. The algorithm samples over a
subset of nodes in a Bayesian network and applies exact inference over the
rest. Consequently, while the size of the sampling space decreases, requiring
fewer samples for convergence, the time for generating each single sample
increases. The w-cutset sampling selects a sampling set such that the
induced-width of the network when the sampling set is observed is bounded by w,
thus requiring inference whose complexity is exponential in w. In this paper,
we investigate performance of w-cutset sampling over a range of w values and
measure the accuracy of w-cutset sampling as a function of w. Our experiments
demonstrate that the cutset sampling idea is quite powerful showing that an
optimal balance between inference and sampling benefits substantially from
restricting the cutset size, even at the cost of more complex inference.
|
1212.2450 | A possibilistic handling of partially ordered information | cs.AI | In standard possibilistic logic, prioritized information is encoded by
means of a weighted knowledge base. This paper proposes an extension of
possibilistic logic for dealing with partially ordered information. We show
that all basic notions of standard possibilistic logic (subsumption, syntactic
and semantic inference, etc.) have natural counterparts when dealing with
partially ordered information. We also propose an algorithm which computes the
possibilistic conclusions of a partially ordered knowledge base.
|
1212.2452 | Value Elimination: Bayesian Inference via Backtracking Search | cs.AI | Backtracking search is a powerful algorithmic paradigm that can be used to
solve many problems. It is in a certain sense the dual of variable elimination;
but on many problems, e.g., SAT, it is vastly superior to variable elimination
in practice. Motivated by this we investigate the application of backtracking
search to the problem of Bayesian inference (Bayes). We show that natural
generalizations of known techniques allow backtracking search to achieve
performance guarantees similar to standard algorithms for Bayes, and that there
exist problems on which backtracking can in fact do much better. We also
demonstrate that these ideas can be applied to implement a Bayesian inference
engine whose performance is competitive with standard algorithms. Since
backtracking search can very naturally take advantage of context specific
structure, the potential exists for performance superior to standard algorithms
on many problems.
|
1212.2453 | Web-Based Question Answering: A Decision-Making Perspective | cs.IR cs.CL | We describe an investigation of the use of probabilistic models and
cost-benefit analyses to guide resource-intensive procedures used by a
Web-based question answering system. We first provide an overview of research
on question-answering systems. Then, we present details on AskMSR, a prototype
web-based question answering system. We discuss Bayesian analyses of the
quality of answers generated by the system and show how we can endow the system
with the ability to make decisions about the number of queries issued to a
search engine, given the cost of queries and the expected value of query
results in refining an ultimate answer. Finally, we review the results of a set
of experiments.
|
1212.2455 | New Advances in Inference by Recursive Conditioning | cs.AI | Recursive Conditioning (RC) was introduced recently as the first any-space
algorithm for inference in Bayesian networks which can trade time for space by
varying the size of its cache at the increment needed to store a floating point
number. Under full caching, RC has an asymptotic time and space complexity
which is comparable to mainstream algorithms based on variable elimination and
clustering (exponential in the network treewidth and linear in its size). We
show two main results about RC in this paper. First, we show that its actual
space requirements under full caching are much more modest than those needed by
mainstream methods and study the implications of this finding. Second, we show
that RC can effectively deal with determinism in Bayesian networks by employing
standard logical techniques, such as unit resolution, allowing a significant
reduction in its time requirements in certain cases. We illustrate our results
using a number of benchmark networks, including the very challenging ones that
arise in genetic linkage analysis.
|
1212.2456 | Incremental Compilation of Bayesian networks | cs.AI | Most methods of exact probability propagation in Bayesian networks do not
carry out the inference directly over the network, but over a secondary
structure known as a junction tree or a join tree (JT). The process of
obtaining a JT is usually termed compilation. As compilation is usually
viewed as a whole process, each time the network is modified a new compilation
process has to be carried out. The possibility of reusing an already existing
JT, in order to obtain the new one regarding only the modifications in the
network has received only little attention in the literature. In this paper we
present a method for incremental compilation of a Bayesian network, following
the classical scheme in which triangulation plays the key role. In order to
perform incremental compilation we propose to recompile only those parts of the
JT which may have been affected by the network's modifications. To do so, we
exploit the technique of maximal prime subgraph decomposition in determining
the minimal subgraph(s) that have to be recompiled, and thereby the minimal
subtree(s) of the JT that should be replaced by new subtree(s). We focus on
structural modifications: addition and deletion of links and variables.
|
1212.2457 | Structure-Based Causes and Explanations in the Independent Choice Logic | cs.AI | This paper is directed towards combining Pearl's structural-model approach to
causal reasoning with high-level formalisms for reasoning about actions. More
precisely, we present a combination of Pearl's structural-model approach with
Poole's independent choice logic. We show how probabilistic theories in the
independent choice logic can be mapped to probabilistic causal models. This
mapping provides the independent choice logic with appealing concepts of
causality and explanation from the structural-model approach. We illustrate
this along Halpern and Pearl's sophisticated notions of actual cause,
explanation, and partial explanation. This mapping also adds first-order
modeling capabilities and explicit actions to the structural-model approach.
|
1212.2458 | Inference in Polytrees with Sets of Probabilities | cs.AI stat.CO | Inferences in directed acyclic graphs associated with probability sets and
probability intervals are NP-hard, even for polytrees. In this paper we focus
on such inferences, and propose: 1) a substantial improvement on Tessem's A/R
algorithm for polytrees with probability intervals; 2) a new algorithm for
direction-based local search (in sets of probability) that improves on
existing methods; 3) a collection of branch-and-bound algorithms that
combine the previous techniques. The first two techniques lead to approximate
solutions, while branch-and-bound procedures can produce either exact or
approximate solutions. We report on dramatic improvements over existing
techniques for inference with probability sets and intervals, in some cases
reducing the computational effort by many orders of magnitude.
|
1212.2459 | Symbolic Generalization for On-line Planning | cs.AI | Symbolic representations have been used successfully in off-line planning
algorithms for Markov decision processes. We show that they can also improve
the performance of on-line planners. In addition to reducing computation time,
symbolic generalization can reduce the amount of costly real-world interactions
required for convergence. We introduce Symbolic Real-Time Dynamic Programming
(or sRTDP), an extension of RTDP. After each step of on-line interaction with
an environment, sRTDP uses symbolic model-checking techniques to generalize
its experience by updating a group of states rather than a single state. We
examine two heuristic approaches to dynamic grouping of states and show that
they accelerate the planning process significantly in terms of both CPU time
and the number of steps of interaction with the environment.
|
1212.2460 | The Information Bottleneck EM Algorithm | cs.LG stat.ML | Learning with hidden variables is a central challenge in probabilistic
graphical models that has important implications for many real-life problems.
The classical approach is using the Expectation Maximization (EM) algorithm.
This algorithm, however, can get trapped in local maxima. In this paper we
explore a new approach that is based on the Information Bottleneck principle.
In this approach, we view the learning problem as a tradeoff between two
information theoretic objectives. The first is to make the hidden variables
uninformative about the identity of specific instances. The second is to make
the hidden variables informative about the observed attributes. By exploring
different tradeoffs between these two objectives, we can gradually converge on
a high-scoring solution. As we show, the resulting Information Bottleneck
Expectation Maximization (IB-EM) algorithm manages to find solutions that are
superior to standard EM methods.
|
1212.2461 | Probabilistic Reasoning about Actions in Nonmonotonic Causal Theories | cs.AI | We present the language PC+ for probabilistic reasoning about
actions, which is a generalization of the action language C+ that allows
one to deal with probabilistic as well as nondeterministic effects of actions. We
define a formal semantics of PC+ in terms of probabilistic
transitions between sets of states. Using a concept of a history and its belief
state, we then show how several important problems in reasoning about actions
can be concisely formulated in our formalism.
|
1212.2462 | A New Algorithm for Maximum Likelihood Estimation in Gaussian Graphical
Models for Marginal Independence | stat.ME cs.LG stat.ML | Graphical models with bi-directed edges (<->) represent marginal
independence: the absence of an edge between two vertices indicates that the
corresponding variables are marginally independent. In this paper, we consider
maximum likelihood estimation in the case of continuous variables with a
Gaussian joint distribution, sometimes termed a covariance graph model. We
present a new fitting algorithm which exploits standard regression techniques
and establish its convergence properties. Moreover, we contrast our procedure
to existing estimation methods.
|
1212.2463 | A Simple Insight into Iterative Belief Propagation's Success | cs.AI | In non-ergodic belief networks the posterior belief of many queries given
evidence may become zero. The paper shows that when belief propagation is
applied iteratively over arbitrary networks (the so-called iterative or loopy
belief propagation (IBP)) it is identical to an arc-consistency algorithm
relative to zero-belief queries (namely, assessing zero posterior
probabilities). This implies that zero-belief conclusions derived by belief
propagation converge and are sound. More importantly, it suggests that the
inference power of IBP is as strong and as weak as that of
arc-consistency. This allows the synthesis of belief networks for which belief
propagation is useless on one hand, and focuses the investigation on classes of
belief networks for which belief propagation may be zero-complete. Finally, all
the above conclusions apply also to generalized belief propagation algorithms
that extend loopy belief propagation and allow a crisper understanding of their
power.
|
1212.2464 | A Robust Independence Test for Constraint-Based Learning of Causal
Structure | cs.AI cs.LG stat.ML | Constraint-based (CB) learning is a formalism for learning a causal network
with a database D by performing a series of conditional-independence tests to
infer structural information. This paper considers a new test of independence
that combines ideas from Bayesian learning, Bayesian network inference, and
classical hypothesis testing to produce a more reliable and robust test. The
new test can be calculated in the same asymptotic time and space required for
the standard tests such as the chi-squared test, but it allows the
specification of a prior distribution over parameters and can be used when the
database is incomplete. We prove that the test is correct, and we demonstrate
empirically that, when used with a CB causal discovery algorithm with
noninformative priors, it recovers structural features more reliably and it
produces networks with smaller KL-Divergence, especially as the number of nodes
increases or the number of records decreases. Another benefit is the dramatic
reduction in the probability that a CB algorithm will stall during the search,
providing a remedy for an annoying problem plaguing CB learning when the
database is small.
|
1212.2465 | Loopy Belief Propagation as a Basis for Communication in Sensor Networks | cs.AI cs.NI | Sensor networks are an exciting new kind of computer system. Consisting of a
large number of tiny, cheap computational devices physically distributed in an
environment, they gather and process data about the environment in real time.
One of the central questions in sensor networks is what to do with the data,
i.e., how to reason with it and how to communicate it. This paper argues that
the lessons of the UAI community, in particular that one should produce and
communicate beliefs rather than raw sensor values, are highly relevant to
sensor networks. We contend that loopy belief propagation is particularly well
suited to communicating beliefs in sensor networks, due to its compact
implementation and distributed nature. We investigate the ability of loopy
belief propagation to function under the stressful conditions likely to prevail
in sensor networks. Our experiments show that it performs well and degrades
gracefully. It converges to appropriate beliefs even in highly asynchronous
settings where some nodes communicate far less frequently than others; it
continues to function if some nodes fail to participate in the propagation
process; and it can track changes in the environment that occur while beliefs
are propagating. As a result, we believe that sensor networks present an
important application opportunity for UAI.
|
1212.2466 | On Information Regularization | cs.LG stat.ML | We formulate a principle for classification with the knowledge of the
marginal distribution over the data points (unlabeled data). The principle is
cast in terms of Tikhonov style regularization where the regularization penalty
articulates the way in which the marginal density should constrain otherwise
unrestricted conditional distributions. Specifically, the regularization
penalty penalizes any information introduced between the examples and labels
beyond what is provided by the available labeled examples. The work extends
Szummer and Jaakkola's information regularization (NIPS 2002) to multiple
dimensions, providing a regularizer independent of the covering of the space
used in the derivation. We show in addition how the information regularizer can
be used as a measure of complexity of the classification task with unlabeled
data and prove a relevant sample-complexity bound. We illustrate the
regularization principle in practice by restricting the class of conditional
distributions to be logistic regression models and constructing the
regularization penalty from a finite set of unlabeled examples.
|
1212.2468 | Large-Sample Learning of Bayesian Networks is NP-Hard | cs.LG cs.AI stat.ML | In this paper, we provide new complexity results for algorithms that learn
discrete-variable Bayesian networks from data. Our results apply whenever the
learning algorithm uses a scoring criterion that favors the simplest model able
to represent the generative distribution exactly. Our results therefore hold
whenever the learning algorithm uses a consistent scoring criterion and is
applied to a sufficiently large dataset. We show that identifying high-scoring
structures is hard, even when we are given an independence oracle, an inference
oracle, and/or an information oracle. Our negative results also apply to the
learning of discrete-variable Bayesian networks in which each node has at most
k parents, for all k > 3.
|
1212.2469 | Using the structure of d-connecting paths as a qualitative measure of
the strength of dependence | cs.AI | Pearl's concept of a d-connecting path is one of the foundations of the
modern theory of graphical models: the absence of a d-connecting path in a
DAG indicates that conditional independence will hold in any distribution
factorising according to that graph. In this paper we show that in
singly-connected Gaussian DAGs it is possible to use the form of a d-connection
to obtain qualitative information about the strength of conditional
dependence. More precisely, the squared partial correlations between two given
variables, conditioned on different subsets, may be partially ordered by
examining the relationship between the d-connecting path and the set of
variables conditioned upon.
|
1212.2470 | Reasoning about Bayesian Network Classifiers | cs.LG cs.AI stat.ML | Bayesian network classifiers are used in many fields, and one common class of
classifiers are naive Bayes classifiers. In this paper, we introduce an
approach for reasoning about Bayesian network classifiers in which we
explicitly convert them into Ordered Decision Diagrams (ODDs), which are then
used to reason about the properties of these classifiers. Specifically, we
present an algorithm for converting any naive Bayes classifier into an ODD, and
we show theoretically and experimentally that this algorithm can give us an ODD
that is tractable in size even given an intractable number of instances. Since
ODDs are tractable representations of classifiers, our algorithm allows us to
efficiently test the equivalence of two naive Bayes classifiers and
characterize discrepancies between them. We also show a number of additional
results including a count of distinct classifiers that can be induced by
changing some CPT in a naive Bayes classifier, and the range of allowable
changes to a CPT which keeps the current classifier unchanged.
|
1212.2471 | Monte Carlo Matrix Inversion Policy Evaluation | cs.LG cs.AI cs.NA | In 1950, Forsythe and Leibler (1950) introduced a statistical technique for
finding the inverse of a matrix by characterizing the elements of the matrix
inverse as expected values of a sequence of random walks. Barto and Duff (1994)
subsequently showed relations between this technique and standard dynamic
programming and temporal differencing methods. The advantage of the Monte Carlo
matrix inversion (MCMI) approach is that it scales better with respect to
state-space size than alternative techniques. In this paper, we introduce an
algorithm for performing reinforcement learning policy evaluation using MCMI.
We demonstrate that MCMI improves on runtime over a maximum likelihood
model-based policy evaluation approach and on both runtime and accuracy over
the temporal differencing (TD) policy evaluation approach. We further improve
on MCMI policy evaluation by adding an importance sampling technique to our
algorithm to reduce the variance of our estimator. Lastly, we illustrate
techniques for scaling up MCMI to large state spaces in order to perform policy
improvement.
|
1212.2472 | Budgeted Learning of Naive-Bayes Classifiers | cs.LG stat.ML | Frequently, acquiring training data has an associated cost. We consider the
situation where the learner may purchase data during training, subject to a
budget. In particular, we examine the case where each feature label has an
associated cost, and the total cost of all feature labels acquired during
training must not exceed the budget. This paper compares methods for choosing
which feature label to purchase next, given the budget and the current belief
state of naive Bayes model parameters. Whereas active learning has traditionally
focused on myopic (greedy) strategies for query selection, this paper presents a
tractable method for incorporating knowledge of the budget into the decision
making process, which improves performance.
|
1212.2473 | A Linear Belief Function Approach to Portfolio Evaluation | cs.AI q-fin.ST | By elaborating on the notion of linear belief functions (Dempster 1990; Liu
1996), we propose an elementary approach to knowledge representation for expert
systems using linear belief functions. We show how to use basic matrices to
represent market information and financial knowledge, including complete
ignorance, statistical observations, subjective speculations, distributional
assumptions, linear relations, and empirical asset pricing models. We then
appeal to Dempster's rule of combination to integrate the knowledge for
assessing an overall belief of portfolio performance, and updating the belief
by incorporating additional information. We use an example of three gold stocks
to illustrate the approach.
|
1212.2474 | Learning Riemannian Metrics | cs.LG stat.ML | We propose a solution to the problem of estimating a Riemannian metric
associated with a given differentiable manifold. The metric learning problem is
based on minimizing the relative volume of a given set of points. We derive the
details for a family of metrics on the multinomial simplex. The resulting
metric has applications in text classification and bears some similarity to
TFIDF representation of text documents.
|
1212.2475 | Efficient Gradient Estimation for Motor Control Learning | cs.LG cs.SY | The task of estimating the gradient of a function in the presence of noise is
central to several forms of reinforcement learning, including policy search
methods. We present two techniques for reducing gradient estimation errors in
the presence of observable input noise applied to the control signal. The first
method extends the idea of a reinforcement baseline by fitting a local linear
model to the function whose gradient is being estimated; we show how to find
the linear model that minimizes the variance of the gradient estimate, and how
to estimate the model from data. The second method improves this further by
discounting components of the gradient vector that have high variance. These
methods are applied to the problem of motor control learning, where actuator
noise has a significant influence on behavior. In particular, we apply the
techniques to learn locally optimal controllers for a dart-throwing task using
a simulated three-link arm; we demonstrate that proposed methods significantly
improve the reward function gradient estimate and, consequently, the learning
curve, over existing methods.
|
1212.2476 | Approximate Decomposition: A Method for Bounding and Estimating
Probabilistic and Deterministic Queries | cs.AI | In this paper, we introduce a method for approximating the solution to
inference and optimization tasks in uncertain and deterministic reasoning. Such
tasks are in general intractable for exact algorithms because of the large
number of dependency relationships in their structure. Our method effectively
maps such a dense problem to a sparser one which is in some sense "closest".
Exact methods can be run on the sparser problem to derive bounds on the
original answer, which can be quite sharp. We present empirical results
demonstrating that our method works well on the tasks of belief inference and
finding the probability of the most probable explanation in belief networks,
and finding the cost of the solution that violates the smallest number of
constraints in constraint satisfaction problems. On one large CPCS network, for
example, we were able to calculate upper and lower bounds on the conditional
probability of a variable, given evidence, that were almost identical in the
average case.
|
1212.2477 | 1 Billion Pages = 1 Million Dollars? Mining the Web to Play "Who Wants
to be a Millionaire?" | cs.IR cs.CL | We exploit the redundancy and volume of information on the web to build a
computerized player for the ABC TV game show 'Who Wants To Be A Millionaire?'
The player consists of a question-answering module and a decision-making
module. The question-answering module utilizes question transformation
techniques, natural language parsing, multiple information retrieval
algorithms, and multiple search engines; results are combined in the spirit of
ensemble learning using an adaptive weighting scheme. Empirically, the system
correctly answers about 75% of questions from the Millionaire CD-ROM, 3rd
edition - general-interest trivia questions often about popular culture and
common knowledge. The decision-making module chooses from allowable actions in
the game in order to maximize expected risk-adjusted winnings, where the
estimated probability of answering correctly is a function of past performance
and confidence in correctly answering the current question. When given a
six-question head start (i.e., when starting from the $2,000 level), we find that
the system performs about as well on average as humans starting at the
beginning. Our system demonstrates the potential of simple but well-chosen
techniques for mining answers from unstructured information such as the web.
|
1212.2478 | Preference-based Graphic Models for Collaborative Filtering | cs.IR | Collaborative filtering is a very useful general technique for exploiting the
preference patterns of a group of users to predict the utility of items to a
particular user. Previous research has studied several probabilistic graphic
models for collaborative filtering with promising results. However, while these
models have succeeded in capturing the similarity among users and items in one
way or the other, none of them has considered the fact that users with similar
interests in items can have very different rating patterns; some users tend to
assign a higher rating to all items than other users. In this paper, we propose
and study two new graphic models that address the distinction between user
preferences and ratings. In one model, called the decoupled model, we introduce
two different variables to decouple a user's preferences from his ratings. In
the other, called the preference model, we model the orderings of items
preferred by a user, rather than the user's numerical ratings of items.
Empirical study over two datasets of movie ratings shows that appropriate
modeling of the distinction between user preferences and ratings improves the
performance substantially and consistently. Specifically, the proposed
decoupled model significantly outperforms all five existing approaches that we
compare with, but the preference model is not very successful. These results
suggest that explicit modeling of the underlying user preferences is very
important for collaborative filtering, but we cannot afford to ignore the
rating information completely.
|
1212.2479 | LAYERWIDTH: Analysis of a New Metric for Directed Acyclic Graphs | cs.DS cs.AI cs.DM | We analyze a new property of directed acyclic graphs (DAGs), called
layerwidth, arising from a class of DAGs proposed by Eiter and Lukasiewicz.
This class of DAGs permits certain problems of structural model-based causality
and explanation to be tractably solved. In this paper, we first address an open
question raised by Eiter and Lukasiewicz - the computational complexity of
deciding whether a given graph has a bounded layerwidth. After proving that
this problem is NP-complete, we proceed by proving numerous important
properties of layerwidth that are helpful in efficiently computing the optimal
layerwidth. Finally, we compare this new DAG property to two other important
DAG properties: treewidth and bandwidth.
|
1212.2480 | Approximate Inference and Constrained Optimization | cs.LG cs.AI stat.ML | Loopy and generalized belief propagation are popular algorithms for
approximate inference in Markov random fields and Bayesian networks. Fixed
points of these algorithms correspond to extrema of the Bethe and Kikuchi free
energy. However, belief propagation does not always converge, which explains
the need for approaches that explicitly minimize the Kikuchi/Bethe free energy,
such as CCCP and UPS. Here we describe a class of algorithms that solves this
typically nonconvex constrained minimization of the Kikuchi free energy through
a sequence of convex constrained minimizations of upper bounds on the Kikuchi
free energy. Intuitively one would expect tighter bounds to lead to faster
algorithms, which is indeed convincingly demonstrated in our simulations.
Several ideas are applied to obtain tight convex bounds that yield dramatic
speed-ups over CCCP.
|
1212.2481 | Monte-Carlo optimizations for resource allocation problems in stochastic
network systems | cs.AI | Real-world distributed systems and networks are often unreliable and subject
to random failures of their components. Such stochastic behavior affects
adversely the complexity of optimization tasks performed routinely upon such
systems, in particular, various resource allocation tasks. In this work we
investigate and develop Monte Carlo solutions for a class of two-stage
optimization problems in stochastic networks in which the expected value of
resource allocations before and after stochastic failures needs to be
optimized. The limitation of these problems is that their exact solutions are
exponential in the number of unreliable network components: thus, exact methods
do not scale up well to large networks often seen in practice. We first prove
that Monte Carlo optimization methods can overcome the exponential bottleneck
of exact methods. Next we support our theoretical findings on resource
allocation experiments and show a very good scale-up potential of the new
methods to large stochastic networks.
|
1212.2482 | Implementation and Comparison of Solution Methods for Decision Processes
with Non-Markovian Rewards | cs.AI | This paper examines a number of solution methods for decision processes with
non-Markovian rewards (NMRDPs). They all exploit a temporal logic specification
of the reward function to automatically translate the NMRDP into an equivalent
Markov decision process (MDP) amenable to well-known MDP solution methods. They
differ however in the representation of the target MDP and the class of MDP
solution methods to which they are suited. As a result, they adopt different
temporal logics and different translations. Unfortunately, no implementation of
these methods nor experimental let alone comparative results have ever been
reported. This paper is the first step towards filling this gap. We describe an
integrated system for solving NMRDPs which implements these methods and several
variants under a common interface; we use it to compare the various approaches
and identify the problem features favoring one over the other.
|
1212.2483 | Sufficient Dimensionality Reduction with Irrelevant Statistics | cs.LG stat.ML | The problem of finding a reduced dimensionality representation of categorical
variables while preserving their most relevant characteristics is fundamental
for the analysis of complex data. Specifically, given a co-occurrence matrix of
two variables, one often seeks a compact representation of one variable which
preserves information about the other variable. We have recently introduced
``Sufficient Dimensionality Reduction'' [GT-2003], a method that extracts
continuous reduced dimensional features whose measurements (i.e., expectation
values) capture maximal mutual information among the variables. However, such
measurements often capture information that is irrelevant for a given task.
Widely known examples are illumination conditions, which are irrelevant as
features for face recognition, writing style which is irrelevant as a feature
for content classification, and intonation which is irrelevant as a feature for
speech recognition. Such irrelevance cannot be deduced a priori, since it
depends on the details of the task, and is thus inherently ill defined in the
purely unsupervised case. Separating relevant from irrelevant features can be
achieved using additional side data that contains such irrelevant structures.
This approach was taken in [CT-2002], extending the information bottleneck
method, which uses clustering to compress the data. Here we use this
side-information framework to identify features whose measurements are
maximally informative for the original data set, but carry as little
information as possible on a side data set. In statistical terms this can be
understood as extracting statistics which are maximally sufficient for the
original dataset, while simultaneously maximally ancillary for the side
dataset. We formulate this tradeoff as a constrained optimization problem and
characterize its solutions. We then derive a gradient descent algorithm for
this problem, which is based on the Generalized Iterative Scaling method for
finding maximum entropy distributions. The method is demonstrated on synthetic
data, as well as on real face recognition datasets, and is shown to outperform
standard methods such as oriented PCA.
|
1212.2484 | Decision Making with Partially Consonant Belief Functions | cs.AI | This paper studies decision making for Walley's partially consonant belief
functions (pcb). In a pcb, the set of foci is partitioned. Within each
partition, the foci are nested. The pcb class includes probability functions
and possibility functions as extreme cases. Unlike earlier proposals for a
decision theory with belief functions, we employ an axiomatic approach. We
adopt an axiom system similar in spirit to von Neumann - Morgenstern's linear
utility theory for a preference relation on pcb lotteries. We prove a
representation theorem for this relation. Utility for a pcb lottery is a
combination of linear utility for probabilistic lottery and binary utility for
possibilistic lottery.
|
1212.2485 | Phase Transition of Tractability in Constraint Satisfaction and Bayesian
Network Inference | cs.AI cs.DS | There has been great interest in identifying tractable subclasses of
NP-complete problems and designing efficient algorithms for these tractable
classes. Constraint satisfaction and Bayesian network inference are two
examples of such problems that are of great importance in AI and algorithms. In
this paper we study, under the frameworks of random constraint satisfaction
problems and random Bayesian networks, a typical tractable subclass
characterized by the treewidth of the problems. We show that the property of
having a bounded treewidth for CSPs and Bayesian network inference problem has
a phase transition that occurs while the underlying structures of problems are
still sparse. This implies that algorithms making use of treewidth-based
structural knowledge only work efficiently in a limited range of random
instances.
|
1212.2486 | Extending Factor Graphs so as to Unify Directed and Undirected Graphical
Models | cs.AI | The two most popular types of graphical model are directed models (Bayesian
networks) and undirected models (Markov random fields, or MRFs). Directed and
undirected models offer complementary properties in model construction,
expressing conditional independencies, expressing arbitrary factorizations of
joint distributions, and formulating message-passing inference algorithms. We
show that the strengths of these two representations can be combined in a
single type of graphical model called a 'factor graph'. Every Bayesian network
or MRF can be easily converted to a factor graph that expresses the same
conditional independencies, expresses the same factorization of the joint
distribution, and can be used for probabilistic inference through application
of a single, simple message-passing algorithm. In contrast to chain graphs,
where message-passing is implemented on a hypergraph, message-passing can be
directly implemented on the factor graph. We describe a modified 'Bayes-ball'
algorithm for establishing conditional independence in factor graphs, and we
show that factor graphs form a strict superset of Bayesian networks and MRFs.
In particular, we give an example of a commonly-used 'mixture of experts' model
fragment, whose independencies cannot be represented in a Bayesian network or
an MRF, but can be represented in a factor graph. We finish by giving examples
of real-world problems that are not well suited to representation in Bayesian
networks and MRFs, but are well-suited to representation in factor graphs.
|
1212.2487 | Locally Weighted Naive Bayes | cs.LG stat.ML | Despite its simplicity, the naive Bayes classifier has surprised machine
learning researchers by exhibiting good performance on a variety of learning
problems. Encouraged by these results, researchers have looked to overcome
naive Bayes' primary weakness - attribute independence - and improve the
performance of the algorithm. This paper presents a locally weighted version of
naive Bayes that relaxes the independence assumption by learning local models
at prediction time. Experimental results show that locally weighted naive Bayes
rarely degrades accuracy compared to standard naive Bayes and, in many cases,
improves accuracy dramatically. The main advantage of this method compared to
other techniques for enhancing naive Bayes is its conceptual and computational
simplicity.
|
1212.2488 | A Distance-Based Branch and Bound Feature Selection Algorithm | cs.LG stat.ML | There is no known efficient method for selecting k Gaussian features from n
which achieve the lowest Bayesian classification error. We show an example of
how greedy algorithms faced with this task are led to give results that are not
optimal. This motivates us to propose a more robust approach. We present a
Branch and Bound algorithm for finding a subset of k independent Gaussian
features which minimizes the naive Bayesian classification error. Our algorithm
uses additive monotonic distance measures to produce bounds for the Bayesian
classification error in order to exclude many feature subsets from evaluation,
while still returning an optimal solution. We test our method on synthetic data
as well as data obtained from gene expression profiling.
|
1212.2490 | On the Convergence of Bound Optimization Algorithms | cs.LG stat.ML | Many practitioners who use the EM algorithm complain that it is sometimes
slow. When does this happen, and what can be done about it? In this paper, we
study the general class of bound optimization algorithms - including
Expectation-Maximization, Iterative Scaling and CCCP - and their relationship
to direct optimization algorithms such as gradient-based methods for parameter
learning. We derive a general relationship between the updates performed by
bound optimization methods and those of gradient and second-order methods and
identify analytic conditions under which bound optimization algorithms exhibit
quasi-Newton behavior, and conditions under which they possess poor,
first-order convergence. Based on this analysis, we consider several specific
algorithms, interpret and analyze their convergence properties and provide some
recipes for preprocessing input to these algorithms to yield faster convergence
behavior. We report empirical results supporting our analysis and showing that
simple data preprocessing can result in dramatically improved performance of
bound optimizers in practice.
|
1212.2491 | Automated Analytic Asymptotic Evaluation of the Marginal Likelihood for
Latent Models | cs.LG stat.ML | We present and implement two algorithms for analytic asymptotic evaluation of
the marginal likelihood of data given a Bayesian network with hidden nodes. As
shown by previous work, this evaluation is particularly hard for latent
Bayesian network models, namely networks that include hidden variables, where
asymptotic approximation deviates from the standard BIC score. Our algorithms
solve two central difficulties in asymptotic evaluation of marginal likelihood
integrals, namely, evaluation of regular dimensionality drop for latent
Bayesian network models and computation of non-standard approximation formulas
for singular statistics for these models. The presented algorithms are
implemented in Matlab and Maple and their usage is demonstrated for marginal
likelihood approximations for Bayesian networks with hidden variables.
|
1212.2493 | Decentralized Sensor Fusion With Distributed Particle Filters | cs.AI cs.RO | This paper presents a scalable Bayesian technique for decentralized state
estimation from multiple platforms in dynamic environments. As has long been
recognized, centralized architectures impose severe scaling limitations for
distributed systems due to the enormous communication overheads. We propose a
strictly decentralized approach in which only nearby platforms exchange
information. They do so through an interactive communication protocol aimed at
maximizing information flow. Our approach is evaluated in the context of a
distributed surveillance scenario that arises in a robotic system for playing
the game of laser tag. Our results, both from simulation and using physical
robots, illustrate an unprecedented scaling capability to large teams of
vehicles.
|
1212.2494 | Learning Generative Models of Similarity Matrices | cs.LG stat.ML | We describe a probabilistic (generative) view of affinity matrices along with
inference algorithms for a subclass of problems associated with data
clustering. This probabilistic view is helpful in understanding different
models and algorithms that are based on affinity functions of the data. In
particular, we show how (greedy) inference for a specific probabilistic model is
equivalent to the spectral clustering algorithm. It also provides a framework
for developing new algorithms and extended models. As one case, we present new
generative data clustering models that allow us to infer the underlying
distance measure suitable for the clustering problem at hand. These models seem
to perform well in a larger class of problems for which other clustering
algorithms (including spectral clustering) usually fail. Experimental
evaluation was performed on a variety of point data sets, showing excellent
performance.
|
1212.2495 | Policy-contingent abstraction for robust robot control | cs.RO cs.AI cs.SY | This paper presents a scalable control algorithm that enables a deployed
mobile robot system to make high-level decisions under full consideration of
its probabilistic belief. Our approach is based on insights from the rich
literature of hierarchical controllers and hierarchical MDPs. The resulting
controller has been successfully deployed in a nursing facility near
Pittsburgh, PA. To the best of our knowledge, this work is a unique instance of
applying POMDPs to high-level robotic control problems.
|
1212.2496 | An Axiomatic Approach to Robustness in Search Problems with Multiple
Scenarios | cs.AI | This paper is devoted to the search of robust solutions in state space graphs
when costs depend on scenarios. We first present axiomatic requirements for
preference compatibility with the intuitive idea of robustness. This leads us
to propose the Lorenz dominance rule as a basis for robustness analysis. Then,
after presenting complexity results about the determination of robust
solutions, we propose a new sophistication of A* specially designed to
determine the set of robust paths in a state space graph. The behavior of the
algorithm is illustrated on a small example. Finally, an axiomatic
justification of the refinement of robustness by an OWA criterion is provided.
|