| id | title | categories | abstract |
|---|---|---|---|
1303.0642 | Bayesian Compressed Regression | stat.ML cs.LG | As an alternative to variable selection or shrinkage in high dimensional
regression, we propose to randomly compress the predictors prior to analysis.
This dramatically reduces storage and computational bottlenecks, performing
well when the predictors can be projected to a low dimensional linear subspace
with minimal loss of information about the response. As opposed to existing
Bayesian dimensionality reduction approaches, the exact posterior distribution
conditional on the compressed data is available analytically, speeding up
computation by many orders of magnitude while also bypassing robustness issues
due to convergence and mixing problems with MCMC. Model averaging is used to
reduce sensitivity to the random projection matrix, while accommodating
uncertainty in the subspace dimension. Strong theoretical support is provided
for the approach by showing near parametric convergence rates for the
predictive density in the large p small n asymptotic paradigm. Practical
performance relative to competitors is illustrated in simulations and real data
applications.
|
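The compression step described in the abstract, randomly projecting the p predictors onto a much lower m-dimensional subspace before fitting, can be sketched as follows. This is a minimal illustration on toy data; the Gaussian entries and 1/sqrt(m) scaling are a common random-projection choice, not necessarily the paper's exact construction.

```python
import random

def random_projection(X, m, seed=0):
    """Compress p-dimensional predictor rows of X to m dimensions by
    multiplying with a random Gaussian matrix (entries ~ N(0, 1/m))."""
    rng = random.Random(seed)
    p = len(X[0])
    # m x p projection matrix; the 1/sqrt(m) scale roughly preserves norms
    Phi = [[rng.gauss(0.0, 1.0) / m ** 0.5 for _ in range(p)] for _ in range(m)]
    return [[sum(row[k] * phi[k] for k in range(p)) for phi in Phi] for row in X]

# toy "large p, small n" design: 5 observations, 200 predictors
n, p, m = 5, 200, 10
X = [[((i * p + j) % 7) - 3.0 for j in range(p)] for i in range(n)]
Z = random_projection(X, m)
print(len(Z), len(Z[0]))  # 5 10
```

The regression is then fit on Z instead of X; model averaging over several independent projection matrices (seeds) reduces sensitivity to any single draw.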
1303.0644 | Automatic symmetry based cluster approach for anomalous brain
identification in PET scan image : An Analysis | cs.CV | Medical image segmentation refers to the segmentation of known anatomic
structures from medical images. Medical data are typically complicated and
have exclusive structures. Computer aided diagnosis is used for assisting
doctors in evaluating medical imagery or in recognizing abnormal findings in a
medical image. Integrating specialized knowledge into medical data processing
helps to build a genuinely useful healthcare decision making system. This
paper studies the different symmetry based distances applied in clustering
algorithms and analyzes a symmetry approach for Positron Emission Tomography
(PET) scan image segmentation. Unlike CT and MRI, the PET scan captures the
structure of blood flow to and from organs. PET scans also help in the early
diagnosis of cancer and of heart, brain and gastrointestinal ailments, and in
monitoring the progress of treatment. In this paper, the diagnostic task is
expanded to PET images of various brain functions.
|
1303.0645 | Symmetry Based Cluster Approach for Automatic Recognition of the
Epileptic Focus in Brain Using PET Scan Image : An Analysis | cs.CV | Recognition of the epileptic focal point is an important diagnostic step when
screening epilepsy patients for potential surgical cures. Accurate
localization is challenging because the images have low spatial resolution and
noisy data. Positron Emission Tomography (PET) now addresses these issues by
providing high resolution. This paper focuses on the automated localization of
epileptic seizures in brain functional images using a symmetry based cluster
approach, presenting a fully automated symmetry based brain abnormality
detection method for PET sequences. PET images are spatially normalized to the
Digital Imaging and Communications in Medicine (DICOM) standard and then
trained using the symmetry based cluster approach with the Medical Image
Processing, Analysis & Visualization (MIPAV) tool. Performance is evaluated
using metrics such as diagnostic accuracy. The obtained results assist the
surgeon in the automated identification of the seizure focus.
|
1303.0646 | The Zen of Multidisciplinary Team Recommendation | cs.SI cs.IR physics.soc-ph | In order to accomplish complex tasks, it is often necessary to compose a team
consisting of experts with diverse competencies. However, for proper
functioning, it is also preferable that a team be socially cohesive. A team
recommendation system, which facilitates the search for potential team members,
can be of great help both for (i) individuals who need to seek out
collaborators and (ii) managers who need to build a team for some specific
tasks.
A decision support system which readily helps summarize such metrics, and
possibly rank the teams in a personalized manner according to the end users'
preferences, can be a great tool to navigate what would otherwise be an
information avalanche.
In this work we present a general framework of how to compose such subsystems
together to build a composite team recommendation system, and instantiate it
for a case study of academic teams.
|
1303.0647 | Spatial Fuzzy C Means PET Image Segmentation of Neurodegenerative
Disorder | cs.CV | Nuclear imaging has emerged as a promising research area in the medical
field. Images from each modality present their own challenges. Positron
Emission Tomography (PET) images may help to precisely localize disease,
assisting in planning the right treatment for each case and saving valuable
time. In this paper, a novel Spatial Fuzzy C Means (PET SFCM) clustering
algorithm is introduced for PET scan image datasets. The proposed algorithm
incorporates spatial neighborhood information into traditional FCM and updates
the objective function of each cluster. The algorithm is implemented and
tested on a large collection of data from patients with brain
neurodegenerative disorders such as Alzheimer's disease, demonstrating its
effectiveness on real world patient data sets. Experimental results are
compared with the conventional FCM and K Means clustering algorithms; PET SFCM
provides satisfactory results compared with the other two.
|
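For context, the conventional FCM that PET SFCM extends alternates fuzzy membership and centroid updates. Below is a minimal one-dimensional sketch with invented intensity values and c = 2 clusters; the spatial neighborhood term that distinguishes SFCM is deliberately omitted.

```python
def fcm_1d(values, c=2, m=2.0, iters=50):
    """Minimal fuzzy c-means on a 1-D signal: alternate membership and
    centroid updates minimizing sum_ij u_ij^m * (x_i - v_j)^2."""
    centers = [min(values), max(values)]  # crude two-cluster initialization
    for _ in range(iters):
        U = []
        for x in values:
            d = [abs(x - v) + 1e-9 for v in centers]  # avoid divide-by-zero
            U.append([1.0 / sum((d[j] / d[k]) ** (2 / (m - 1)) for k in range(c))
                      for j in range(c)])
        centers = [sum(U[i][j] ** m * values[i] for i in range(len(values))) /
                   sum(U[i][j] ** m for i in range(len(values)))
                   for j in range(c)]
    return centers, U

# two intensity clusters around 1 and 10
vals = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
centers, U = fcm_1d(vals)
print(sorted(round(v, 1) for v in centers))  # [1.0, 10.0]
```

A spatial variant would additionally smooth each pixel's memberships U over its image neighborhood before updating the centers.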
1303.0663 | Denoising Deep Neural Networks Based Voice Activity Detection | cs.LG cs.SD stat.ML | Recently, the deep-belief-networks (DBN) based voice activity detection (VAD)
has been proposed. It is powerful in fusing the advantages of multiple
features, and achieves the state-of-the-art performance. However, the deep
layers of the DBN-based VAD do not show an apparent superiority to the
shallower layers. In this paper, we propose a denoising-deep-neural-network
(DDNN) based VAD to address the aforementioned problem. Specifically, we
pre-train a deep neural network in a special unsupervised denoising greedy
layer-wise mode, and then fine-tune the whole network in a supervised way by
the common back-propagation algorithm. In the pre-training phase, we take the
noisy speech signals as the visible layer and try to extract a new feature that
minimizes the reconstruction cross-entropy loss between the noisy speech
signals and its corresponding clean speech signals. Experimental results show
that the proposed DDNN-based VAD not only outperforms the DBN-based VAD but
also shows an apparent performance improvement of the deep layers over
shallower layers.
|
1303.0665 | Personalized News Recommendation with Context Trees | cs.IR cs.LG stat.ML | The profusion of online news articles makes it difficult to find interesting
articles, a problem that can be assuaged by using a recommender system to bring
the most relevant news stories to readers. However, news recommendation is
challenging because the most relevant articles are often new content seen by
few users. In addition, they are subject to trends and preference changes over
time, and in many cases we do not have sufficient information to profile the
reader.
In this paper, we introduce a class of news recommendation systems based on
context trees. They can provide high-quality news recommendation to anonymous
visitors based on present browsing behaviour. We show that context-tree
recommender systems provide good prediction accuracy and recommendation
novelty, and they are sufficiently flexible to capture the unique properties of
news articles.
|
1303.0667 | Query Expansion Using Term Distribution and Term Association | cs.IR | Good term selection is an important issue for an automatic query expansion
(AQE) technique. AQE techniques that select expansion terms from the target
corpus usually do so in one of two ways. Distribution based term selection
compares the distribution of a term in the (pseudo) relevant documents with
that in the whole corpus / random distribution. Two well-known
distribution-based methods are based on Kullback-Leibler Divergence (KLD) and
Bose-Einstein statistics (Bo1). Association based term selection, on the other
hand, uses information about how a candidate term co-occurs with the original
query terms. Local Context Analysis (LCA) and Relevance-based Language Model
(RM3) are examples of association-based methods. Our goal in this study is to
investigate how these two classes of methods may be combined to improve
retrieval effectiveness. We propose the following combination-based approach.
Candidate expansion terms are first obtained using a distribution based method.
This set is then refined based on the strength of the association of terms with
the original query terms. We test our methods on 11 TREC collections. The
proposed combinations generally yield better results than each individual
method, as well as other state-of-the-art AQE approaches. En route to our
primary goal, we also propose some modifications to LCA and Bo1 which lead to
improved performance.
|
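The distribution based scoring described above can be illustrated with the KLD criterion: a candidate term is scored by comparing its relative frequency in the (pseudo-)relevant documents against its frequency in the whole corpus. This is a small sketch with invented documents; real implementations add smoothing and further normalization.

```python
import math
from collections import Counter

def kld_scores(relevant_docs, corpus_docs):
    """Score candidate expansion terms by their KLD contribution:
    p_R(t) * log(p_R(t) / p_C(t)), where p_R is the term's relative
    frequency in the (pseudo-)relevant set and p_C in the corpus."""
    rel = Counter(t for d in relevant_docs for t in d)
    cor = Counter(t for d in corpus_docs for t in d)
    n_rel, n_cor = sum(rel.values()), sum(cor.values())
    return {t: (rel[t] / n_rel) * math.log((rel[t] / n_rel) / (cor[t] / n_cor))
            for t in rel}

corpus = [["the", "cat", "sat"], ["the", "dog", "ran"],
          ["nuclear", "reactor", "core"], ["reactor", "safety", "core"]]
pseudo_relevant = corpus[2:]  # top-ranked documents for a reactor query
scores = kld_scores(pseudo_relevant, corpus)
print(max(scores, key=scores.get))
```

An association based refinement, as proposed in the abstract, would then re-rank this candidate set by co-occurrence with the original query terms.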
1303.0669 | Second Order Asymptotics for Random Number Generation | cs.IT math.IT | We treat random number generation from an i.i.d. probability distribution
$P$ to that of $Q$. When $Q$ or $P$ is a uniform distribution, the problems
are well known as the uniform random number generation and the resolvability
problem, respectively, and have been analyzed not only in the first order
asymptotic theory but also in the second order asymptotic theory. On the other
hand, when neither $P$ nor $Q$ is a uniform distribution, the second order
asymptotics has not been treated. In this paper, we focus on the second order
asymptotics of random number generation for arbitrary
probability distributions $P$ and $Q$ on a finite set. In particular, we derive
the optimal second order generation rate under an arbitrary permissible
confidence coefficient.
|
1303.0691 | Learning AMP Chain Graphs and some Marginal Models Thereof under
Faithfulness: Extended Version | stat.ML cs.AI cs.LG | This paper deals with chain graphs under the Andersson-Madigan-Perlman (AMP)
interpretation. In particular, we present a constraint based algorithm for
learning an AMP chain graph that a given probability distribution is faithful to.
Moreover, we show that the extension of Meek's conjecture to AMP chain graphs
does not hold, which compromises the development of efficient and correct
score+search learning algorithms under assumptions weaker than faithfulness.
We also introduce a new family of graphical models that consists of
undirected and bidirected edges. We name this new family maximal
covariance-concentration graphs (MCCGs) because it includes both covariance and
concentration graphs as subfamilies. Moreover, every MCCG can be seen as the
result of marginalizing out some nodes in an AMP CG. We describe global, local
and pairwise Markov properties for MCCGs and prove their equivalence. We
characterize when two MCCGs are Markov equivalent, and show that every Markov
equivalence class of MCCGs has a distinguished member. We present a constraint
based algorithm for learning an MCCG that a given probability distribution is
faithful to.
Finally, we present a graphical criterion for reading dependencies from a
MCCG of a probability distribution that satisfies the graphoid properties, weak
transitivity and composition. We prove that the criterion is sound and complete
in a certain sense.
|
1303.0695 | Non-Asymptotic Output Statistics of Random Binning and Its Applications | cs.IT math.IT | In this paper we develop a finite blocklength version of the Output
Statistics of Random Binning (OSRB) framework. The framework is shown to be
optimal in the point-to-point case. New second order regions for broadcast
channel and wiretap channel with strong secrecy criterion are derived.
|
1303.0696 | A Technique for Deriving One-Shot Achievability Results in Network
Information Theory | cs.IT math.IT | This paper proposes a novel technique to prove a one-shot version of
achievability results in network information theory. The technique is not based
on covering and packing lemmas. In this technique, we use a stochastic encoder
and decoder with a particular structure for coding that resembles both the ML
and the joint-typicality coders. Although stochastic encoders and decoders do
not usually enhance the capacity region, their use simplifies the analysis. The
Jensen inequality lies at the heart of error analysis, which enables us to deal
with the expectation of many terms coming from stochastic encoders and decoders
at once. The technique is illustrated via several examples: point-to-point
channel coding, Gelfand-Pinsker, Broadcast channel (Marton), Berger-Tung,
Heegard-Berger/Kaspi, Multiple description coding and Joint source-channel
coding over a MAC. Most of our one-shot results are new. The asymptotic forms
of these expressions are the same as those of the classical results. Our
one-shot bounds, in conjunction with the multi-dimensional Berry-Esseen CLT,
imply new results in the finite blocklength regime. In particular, applying
the one-shot result for the memoryless broadcast channel in the asymptotic
case, we obtain the entire region of Marton's inner bound without any need for
time-sharing.
|
1303.0699 | Low-complexity dominance-based Sphere Decoder for MIMO Systems | cs.IT math.IT | The sphere decoder (SD) is an attractive low-complexity alternative to
maximum likelihood (ML) detection in a variety of communication systems. It is
also employed in multiple-input multiple-output (MIMO) systems where the
computational complexity of the optimum detector grows exponentially with the
number of transmit antennas. We propose an enhanced version of the SD based on
an additional cost function derived from conditions on worst case interference,
that we call dominance conditions. The proposed detector, the king sphere
decoder (KSD), has a computational complexity no larger than that of the
sphere decoder, and numerical simulations show that the complexity reduction
is usually quite significant.
|
1303.0707 | On the Achievable Error Region of Physical Layer Authentication
Techniques over Rayleigh Fading Channels | cs.IT cs.CR math.IT | For a physical layer message authentication procedure based on the comparison
of channel estimates obtained from the received messages, we focus on an outer
bound on the type I/II error probability region. Channel estimates are modelled
as multivariate Gaussian vectors, and we assume that the attacker has only some
side information on the channel estimate, which he does not know directly. We
derive the attacking strategy that provides the tightest bound on the error
region, given the statistics of the side information. This turns out to be a
zero mean, circularly symmetric Gaussian density whose correlation matrices may
be obtained by solving a constrained optimization problem. We propose an
iterative algorithm for its solution: Starting from the closed form solution of
a relaxed problem, we obtain, by projection, an initial feasible solution;
then, by an iterative procedure, we look for the fixed point solution of the
problem. Numerical results show that for cases of interest the iterative
approach converges, and perturbation analysis shows that the found solution is
a local minimum.
|
1303.0718 | Spectral Theory for Networks with Attractive and Repulsive Interactions | math.SP cond-mat.dis-nn cs.SI | There is a wealth of applied problems that can be posed as a dynamical system
defined on a network with both attractive and repulsive interactions. Some
examples include: understanding the synchronization properties of nonlinear
oscillators; the behavior of groups, or cliques, in social networks; the study
of optimal convergence for consensus algorithms; and many others.
Frequently the problems involve computing the index of a matrix, i.e. the
number of positive and negative eigenvalues, and the dimension of the kernel.
In this paper we consider one of the most common examples, where the matrix
takes the form of a signed graph Laplacian. We show that there are
topological constraints on the index of the Laplacian matrix related to the
dimension of a certain homology group. In certain situations, when the homology
group is trivial, the index of the operator is rigid and is determined only by
the topology of the network and is independent of the strengths of the
interactions. In general these constraints give upper and lower bounds on the
number of positive and negative eigenvalues, with the dimension of the homology
group counting the number of eigenvalue crossings. The homology group also
gives a natural decomposition of the dynamics into "fixed" degrees of freedom,
whose index does not depend on the edge-weights, and an orthogonal set of
"free" degrees of freedom, whose index changes as the edge weights change. We
also present some numerical studies of this problem for large random matrices.
|
1303.0727 | Estimating a sharp convergence bound for randomized ensembles | math.PR cs.SI math.ST stat.ML stat.TH | When randomized ensembles such as bagging or random forests are used for
binary classification, the prediction error of the ensemble tends to decrease
and stabilize as the number of classifiers increases. However, the precise
relationship between prediction error and ensemble size is unknown in practice.
In the standard case when classifiers are aggregated by majority vote, the
present work offers a way to quantify this convergence in terms of "algorithmic
variance," i.e. the variance of prediction error due only to the randomized
training algorithm. Specifically, we study a theoretical upper bound on this
variance, and show that it is sharp --- in the sense that it is attained by a
specific family of randomized classifiers. Next, we address the problem of
estimating the unknown value of the bound, which leads to a unique twist on the
classical problem of non-parametric density estimation. In particular, we
develop an estimator for the bound and show that its MSE matches optimal
non-parametric rates under certain conditions. (Concurrent with this work, some
closely related results have also been considered in Cannings and Samworth
(2017) and Lopes (2019).)
|
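The "algorithmic variance" of a majority-vote ensemble, i.e. the variance of its error due only to the randomized training, can be explored in a toy model where each of t classifiers is independently correct with probability q on every test point. This is a deliberate simplification of the paper's setting, used only to show the variance shrinking as the ensemble grows.

```python
import random

def ensemble_error(q, t, n_points, rng):
    """Error rate of a majority vote over t randomized classifiers, each
    independently correct with probability q on every test point."""
    wrong = 0
    for _ in range(n_points):
        votes = sum(rng.random() < q for _ in range(t))  # correct votes
        wrong += votes <= t // 2
    return wrong / n_points

def algorithmic_variance(q, t, runs=200, n_points=100, seed=0):
    """Variance of the ensemble error over independent randomized trainings."""
    rng = random.Random(seed)
    errs = [ensemble_error(q, t, n_points, rng) for _ in range(runs)]
    mean = sum(errs) / runs
    return sum((e - mean) ** 2 for e in errs) / runs

v_small = algorithmic_variance(q=0.7, t=1)
v_large = algorithmic_variance(q=0.7, t=101)
print(v_small > v_large)  # variance shrinks as the ensemble grows
```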
1303.0742 | Multivariate Temporal Dictionary Learning for EEG | cs.LG q-bio.NC stat.ML | This article addresses the issue of representing electroencephalographic
(EEG) signals in an efficient way. While classical approaches use a fixed Gabor
dictionary to analyze EEG signals, this article proposes a data-driven method
to obtain an adapted dictionary. To reach an efficient dictionary learning,
appropriate spatial and temporal modeling is required. Inter-channels links are
taken into account in the spatial multivariate model, and shift-invariance is
used for the temporal model. Multivariate learned kernels are informative (a
few atoms code plentiful energy) and interpretable (the atoms can have a
physiological meaning). Using real EEG data, the proposed method is shown to
outperform the classical multichannel matching pursuit used with a Gabor
dictionary, as measured by the representative power of the learned dictionary
and its spatial flexibility. Moreover, dictionary learning can capture
interpretable patterns: this ability is illustrated on real data, learning a
P300 evoked potential.
|
1303.0766 | Bibliometrics for Internet Media: Applying the h-Index to YouTube | cs.DL cs.SI physics.soc-ph | The h-index can be a useful metric for evaluating a person's output of
Internet media. Here we advocate and demonstrate adaptation of the h-index and
the g-index to the top video content creators on YouTube. The h-index for
Internet video media is based on videos and their view counts. The index h is
defined as the number of videos with >= h*10^5 views. The index g is defined as
the number of videos with >= g*10^5 views on average. When compared to a video
creator's total view count, the h-index and g-index better capture both
productivity and impact in a single metric.
|
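The video h-index defined in the abstract is directly computable from a channel's view counts. The channel below is hypothetical; the g-index variant would instead require the top g videos to average at least g*10^5 views.

```python
def video_h_index(view_counts, unit=10**5):
    """h-index for Internet video: the largest h such that at least h
    videos have >= h * unit views (unit = 10^5 views, per the abstract)."""
    counts = sorted(view_counts, reverse=True)
    h = 0
    while h < len(counts) and counts[h] >= (h + 1) * unit:
        h += 1
    return h

# hypothetical channel: view counts for its uploaded videos
views = [1_200_000, 900_000, 450_000, 310_000, 120_000, 40_000]
print(video_h_index(views))  # 3: three videos have >= 3 * 10^5 views
```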
1303.0775 | Hybrid Maximum Likelihood Modulation Classification Using Multiple
Radios | cs.IT math.IT stat.ML | The performance of a modulation classifier is highly sensitive to channel
signal-to-noise ratio (SNR). In this paper, we focus on amplitude-phase
modulations and propose a modulation classification framework based on
centralized data fusion using multiple radios and the hybrid maximum likelihood
(ML) approach. In order to alleviate the computational complexity associated
with ML estimation, we adopt the Expectation Maximization (EM) algorithm. Due
to SNR diversity, the proposed multi-radio framework provides robustness to
channel SNR. Numerical results show the superiority of the proposed approach
with respect to single radio approaches as well as to modulation classifiers
using moments based estimators.
|
1303.0777 | Integrating hidden information which is observed and the observer
information regularities | nlin.AO cs.IT math.IT | A Bayesian integral functional measure of entropy-uncertainty (EF) on
trajectories of a Markov multi-dimensional diffusion process is cut off by
interactive impulses (controls). Each cutoff minimax of EF superimposes and
entangles conjugated fractions in microprocess, enclosing the captured entropy
fractions as source of an information unit. The impulse step-up action launches
the unit formation and step-down action finishes it and brings energy from the
interactive jump. This finite jump transfers the entangled entropy from
uncertain Yes-logic to the certain-information No-logic information unit whose
measuring at end of the cut kills final entropy-uncertainty and limits unit.
The Yes-No logic holds Bit Participator creating elementary information
observer without physical pre-law. Cooperating two units in doublet and an
opposite directional information unit in triplet forms minimal stable
structure. Information path functional (IPF) integrates multiple hidden
information contributions along the cutting process correlations in information
units of cooperating doublets-triplets, bound by free information, and enfolds
the sequence of enclosing triplet structures in the information network (IN)
that sequentially decreases the entropy and maximizes information. The IN bound
triplets release free information rising information forces enable attracting
new information unit and ordering it. While IPF collects the information units,
the IN performs logical computing using doublet-triplet code. The IN different
levels unite logic of quantum micro- and macro- information processes,
composing quantum and/or classical computation.
|
1303.0783 | Epidemic threshold in directed networks | physics.soc-ph cs.SI | Epidemics have so far been mostly studied in undirected networks. However,
many real-world networks, such as the social network Twitter and the WWW
networks, upon which information, emotion or malware spreads, are shown to be
directed networks, composed of both unidirectional links and bidirectional
links. We define the directionality as the percentage of unidirectional links.
The epidemic threshold for the susceptible-infected-susceptible (SIS) epidemic
has been proved to be 1/lambda_{1} in directed networks by the N-intertwined
Mean-field Approximation, where lambda_{1}, also called the spectral radius, is
the largest eigenvalue of the adjacency matrix. Here, we propose two algorithms
to generate directed networks with a given degree distribution, where the
directionality can be controlled. The effect of directionality on the spectral
radius lambda_{1}, principal eigenvector x_{1}, spectral gap
(lambda_{1}-|lambda_{2}|) and algebraic connectivity |mu_{N-1}| is studied.
Important findings are that the spectral radius lambda_{1} decreases with the
directionality, and the spectral gap and the algebraic connectivity increase
with the directionality. The extent of the decrease of the spectral radius
depends on both the degree distribution and the degree-degree correlation
rho_{D}. Hence, the epidemic threshold of directed networks is larger than that
of undirected networks, and a random walk converges to its steady-state faster
in directed networks than in undirected networks with the same degree
distribution.
|
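The threshold 1/lambda_1 quoted in the abstract depends only on the spectral radius of the adjacency matrix, which can be computed by power iteration. This toy example compares a fully bidirectional triangle (directionality 0) with a directed 3-cycle on the same nodes (directionality 1): the spectral radius drops from 2 to 1, so the epidemic threshold rises.

```python
def spectral_radius(A, iters=500):
    """Largest eigenvalue of a nonnegative adjacency matrix, by power
    iteration from the all-ones vector."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)
        x = [v / lam for v in y]
    return lam

# undirected triangle: every link bidirectional (directionality 0)
tri = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
# same nodes as a directed 3-cycle: all links unidirectional (directionality 1)
cyc = [[0, 1, 0], [0, 0, 1], [1, 0, 0]]
lam_u, lam_d = spectral_radius(tri), spectral_radius(cyc)
print(round(lam_u, 3), round(lam_d, 3))  # 2.0 1.0
```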
1303.0787 | Restricted Manipulation in Iterative Voting: Convergence and Condorcet
Efficiency | cs.AI cs.GT | In collective decision making, where a voting rule is used to take a
collective decision among a group of agents, manipulation by one or more agents
is usually considered negative behavior to be avoided, or at least to be made
computationally difficult for the agents to perform. However, there are
scenarios in which a restricted form of manipulation can instead be beneficial.
In this paper we consider the iterative version of several voting rules, where
at each step one agent is allowed to manipulate by modifying his ballot
according to a set of restricted manipulation moves which are computationally
easy and require little information to be performed. We prove convergence of
iterative voting rules when restricted manipulation is allowed, and we present
experiments showing that most iterative voting rules have a higher Condorcet
efficiency than their non-iterative version.
|
1303.0789 | How to Be Both Rich and Happy: Combining Quantitative and Qualitative
Strategic Reasoning about Multi-Player Games (Extended Abstract) | cs.LO cs.MA | We propose a logical framework combining a game-theoretic study of abilities
of agents to achieve quantitative objectives in multi-player games by
optimizing payoffs or preferences on outcomes with a logical analysis of the
abilities of players for achieving qualitative objectives of players, i.e.,
reaching or maintaining game states with desired properties. We enrich
concurrent game models with payoffs for the normal form games associated with
the states of the model and propose a quantitative extension of the logic ATL*
enabling the combination of quantitative and qualitative reasoning.
|
1303.0791 | Strategic Analysis of Trust Models for User-Centric Networks | cs.GT cs.MA | We present a strategic analysis of a trust model that has recently been
proposed for promoting cooperative behaviour in user-centric networks. The
mechanism for cooperation is based on a combination of reputation and virtual
currency schemes in which service providers reward paying customers and punish
non-paying ones by adjusting their reputation, and hence the price they pay for
services. We model and analyse this system using PRISM-games, a tool that
performs automated verification and strategy synthesis for stochastic
multi-player games using the probabilistic alternating-time temporal logic with
rewards (rPATL). We construct optimal strategies for both service users and
providers, which expose potential risks of the cooperation mechanism and which
we use to devise improvements that counteract these risks.
|
1303.0792 | Concurrent Game Structures with Roles | cs.LO cs.MA | In the following paper we present a new semantics for the well-known
strategic logic ATL. It is based on adding roles to concurrent game structures,
that is at every state, each agent belongs to exactly one role, and the role
specifies what actions are available to him at that state. We show advantages
of the new semantics, provide motivating examples based on sensor networks, and
analyze model checking complexity.
|
1303.0793 | Reasoning about Strategies under Partial Observability and Fairness
Constraints | cs.LO cs.MA | A number of extensions exist for Alternating-time Temporal Logic; some of
these mix strategies and partial observability but, to the best of our
knowledge, no work provides a unified framework for strategies, partial
observability and fairness constraints. In this paper we propose ATLK^F_po, a
logic mixing strategies under partial observability and epistemic properties of
agents in a system with fairness constraints on states, and we provide a model
checking algorithm for it.
|
1303.0794 | Reducing Validity in Epistemic ATL to Validity in Epistemic CTL | cs.LO cs.AI cs.MA | We propose a validity preserving translation from a subset of epistemic
Alternating-time Temporal Logic (ATL) to epistemic Computation Tree Logic
(CTL). The considered subset of epistemic ATL is known to have the finite model
property and decidable model-checking. This entails the decidability of
validity but the implied algorithm is unfeasible. Reducing the validity problem
to that in a corresponding system of CTL makes the techniques for automated
deduction for that logic available for the handling of the apparently more
complex system of ATL.
|
1303.0808 | Sequential decoding of a general classical-quantum channel | quant-ph cs.IT math.IT | Since a quantum measurement generally disturbs the state of a quantum system,
one might think that it should not be possible for a sender and receiver to
communicate reliably when the receiver performs a large number of sequential
measurements to determine the message of the sender. We show here that this
intuition is not true, by demonstrating that a sequential decoding strategy
works well even in the most general "one-shot" regime, where we are given a
single instance of a channel and wish to determine the maximal number of bits
that can be communicated up to a small failure probability. This result follows
by generalizing a non-commutative union bound to apply for a sequence of
general measurements. We also demonstrate two ways in which a receiver can
recover a state close to the original state after it has been decoded by a
sequence of measurements that each succeed with high probability. The second of
these methods will be useful in realizing an efficient decoder for fully
quantum polar codes, should a method ever be found to realize an efficient
decoder for classical-quantum polar codes.
|
1303.0817 | On Cooperation in Multi-Terminal Computation and Rate Distortion | cs.IT math.IT | A receiver wants to compute a function of two correlated sources separately
observed by two transmitters. One of the transmitters may send a possibly
private message to the other transmitter in a cooperation phase before both
transmitters communicate to the receiver. For this network configuration this
paper investigates both a function computation setup, wherein the receiver
wants to compute a given function of the sources exactly, and a rate distortion
setup, wherein the receiver wants to compute a given function within some
distortion.
For the function computation setup, a general inner bound to the rate region
is established and shown to be tight in a number of cases: partially invertible
functions, full cooperation between transmitters, one-round point-to-point
communication, two-round point-to-point communication, and the cascade setup
where the transmitters and the receiver are aligned. In particular it is shown
that the ratio of the total number of transmitted bits without cooperation and
the total number of transmitted bits with cooperation can be arbitrarily large.
Furthermore, one bit of cooperation suffices to arbitrarily reduce the amount
of information both transmitters need to convey to the receiver.
For the rate distortion version, an inner bound to the rate region is
exhibited which always includes the convex hull of Kaspi-Berger's related
inner bounds, sometimes strictly. The strict inclusion is shown via two
examples.
|
1303.0818 | Riemannian metrics for neural networks I: feedforward networks | cs.NE cs.IT cs.LG math.DG math.IT | We describe four algorithms for neural network training, each adapted to
different scalability constraints. These algorithms are mathematically
principled and invariant under a number of transformations in data and network
representation, so that performance is independent of these choices. These algorithms
are obtained from the setting of differential geometry, and are based on either
the natural gradient using the Fisher information matrix, or on Hessian
methods, scaled down in a specific way to allow for scalability while keeping
some of their key mathematical properties.
|
1303.0861 | Structural and Cognitive Bottlenecks to Information Access in Social
Networks | cs.SI cs.CY physics.soc-ph | Information in networks is non-uniformly distributed, enabling individuals in
certain network positions to get preferential access to information. Social
scientists have developed influential theories about the role of network
structure in information access. These theories were validated through numerous
studies, which examined how individuals leverage their social networks for
competitive advantage, such as a new job or higher compensation. It is not
clear how these theories generalize to online networks, which differ from
real-world social networks in important respects, including asymmetry of social
links. We address this problem by analyzing how users of the social news
aggregator Digg adopt stories recommended by friends, i.e., users they follow.
We measure the impact that different factors, such as network position and
activity rate, have on access to novel information, which in Digg's case means
the set of distinct news stories. We show that a user can improve his
information access by linking to active users, though this becomes less
effective as the number of friends, or their activity, grows due to structural
network constraints. These constraints arise because users in structurally
diverse positions within the follower graph have interests that differ
topically from those of their friends. Moreover, though in most cases a user's
friends are exposed to almost all the information
available in the network, after they make their recommendations, the user sees
only a small fraction of the available information. Our study suggests that
cognitive and structural bottlenecks limit access to novel information in
online social networks.
|
1303.0866 | Adaptive Partitioning and its Applicability to a Highly Scalable and
Available Geo-Spatial Indexing Solution | cs.DB | Satellite Tracking of People (STOP) tracks thousands of GPS-enabled devices
24 hours a day and 365 days a year. With locations captured for each device
every minute, STOP servers receive tens of millions of points each day. In
addition to cataloging these points in real-time, STOP must also respond to
questions from customers such as, "What devices of mine were at this location
two months ago?" They often then broaden their question to one such as, "Which
of my devices have ever been at this location?" The processing requirements
necessary to answer these questions while continuing to process inbound data in
real-time are non-trivial.
To meet this demand, STOP developed Adaptive Partitioning to provide a
cost-effective and highly available hardware platform for the geographical and
time-spatial indexing capabilities necessary for responding to customer data
requests while continuing to catalog inbound data in real-time.
|
1303.0868 | LabelRank: A Stabilized Label Propagation Algorithm for Community
Detection in Networks | cs.SI cs.DS physics.soc-ph | An important challenge in big data analysis nowadays is the detection of
cohesive groups in large-scale networks, including social networks, genetic
networks, communication networks and so on. In this paper, we propose
LabelRank, an efficient algorithm for detecting communities through label
propagation. A set of operators is introduced to control and stabilize the
propagation dynamics. These operators resolve the randomness issue in
traditional label propagation algorithms (LPA), stabilizing the discovered
communities across repeated runs on the same network. Tests on real-world
networks demonstrate that LabelRank
significantly improves the quality of detected communities compared to LPA, as
well as other popular algorithms.
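As a hedged illustration of the label-propagation idea (the exact LabelRank operators are defined in the paper; the inflation and cutoff operators, parameters, and toy graph below are illustrative stand-ins), a minimal synchronous sketch keeps a label distribution per node:

```python
# Minimal synchronous label-propagation sketch in the spirit of LabelRank.
# Each node keeps a probability distribution over labels; at every step the
# neighbours' distributions are summed (propagation), raised to a power and
# renormalised (inflation), and negligible entries are dropped (cutoff).
# These operators are illustrative stand-ins, not the paper's exact ones.

def label_rank(adj, steps=20, inflation=2.0, cutoff=0.05):
    # adj: dict node -> list of neighbour nodes (undirected graph)
    labels = {v: {v: 1.0} for v in adj}            # start: own label only
    for _ in range(steps):
        new = {}
        for v, nbrs in adj.items():
            acc = {}
            for u in list(nbrs) + [v]:             # include a self-loop
                for lab, p in labels[u].items():
                    acc[lab] = acc.get(lab, 0.0) + p
            acc = {lab: p ** inflation for lab, p in acc.items()}  # sharpen
            z = sum(acc.values())
            acc = {lab: p / z for lab, p in acc.items() if p / z >= cutoff}
            z = sum(acc.values())
            new[v] = {lab: p / z for lab, p in acc.items()}
        labels = new
    # community of a node = its most probable label
    return {v: max(d, key=d.get) for v, d in labels.items()}

# two triangles joined by a single edge -> two communities
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
     3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
comm = label_rank(g)
```

The deterministic update makes repeated runs on the same graph produce the same communities, which is the stability property the stabilizing operators aim for.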
|
1303.0875 | LT^2C^2: A language of thought with Turing-computable Kolmogorov
complexity | q-bio.NC cs.AI | In this paper, we present a theoretical effort to connect the theory of
program size to psychology by implementing a concrete language of thought with
Turing-computable Kolmogorov complexity (LT^2C^2) satisfying the following
requirements: 1) to be simple enough so that the complexity of any given finite
binary sequence can be computed, 2) to be based on tangible operations of human
reasoning (printing, repeating,...), 3) to be sufficiently powerful to generate
all possible sequences but not so powerful as to identify regularities which
would be invisible to humans. We first formalize LT^2C^2, giving its syntax and
semantics and defining an adequate notion of program size. Our setting leads to
a Kolmogorov complexity function relative to LT^2C^2 which is computable in
polynomial time, and it also induces a prediction algorithm in the spirit of
Solomonoff's inductive inference theory. We then prove the efficacy of this
language by investigating regularities in strings produced by participants
attempting to generate random strings. Participants had a profound
understanding of randomness and hence avoided typical misconceptions such as
exaggerating the number of alternations. We reasoned that remaining
regularities would express the algorithmic nature of human thoughts, revealed
in the form of specific patterns. Kolmogorov complexity relative to LT^2C^2
passed three expected tests examined here: 1) human sequences were less complex
than control PRNG sequences, 2) human sequences were not stationary, showing
decreasing values of complexity resulting from fatigue, 3) each individual
showed traces of algorithmic stability since fitting of partial sequences was
more effective at predicting subsequent sequences than average fits. This work
extends previous efforts to apply notions of Kolmogorov complexity theory and
algorithmic information theory to psychology, by explicitly ...
|
1303.0890 | Set-Membership Conjugate Gradient Constrained Adaptive Filtering
Algorithm for Beamforming | cs.IT math.IT | We introduce a new linearly constrained minimum variance (LCMV) beamformer
that combines the set-membership (SM) technique with the conjugate gradient
(CG) method, and develop a low-complexity adaptive filtering algorithm for
beamforming. The proposed algorithm utilizes a CG-based vector and a variable
forgetting factor to perform the data-selective updates that are controlled by
a time-varying bound related to the parameters. For the update, the CG-based
vector is calculated iteratively (one iteration per update) to obtain the
filter parameters and to avoid the matrix inversion. The resulting iterations
construct a space of feasible solutions that satisfy the constraints of the
LCMV optimization problem. The proposed algorithm reduces the computational
complexity significantly and shows enhanced convergence and tracking
performance over existing algorithms.
|
1303.0926 | Injectivity w.r.t. Distribution of Elements in the Compressed Sequences
Derived from Primitive Sequences over $Z/p^eZ$ | cs.IT math.IT | Let $p\geq3$ be a prime and $e\geq2$ an integer. Let $\sigma(x)$ be a
primitive polynomial of degree $n$ over $Z/p^eZ$ and $G'(\sigma(x),p^e)$ the
set of primitive linear recurring sequences generated by $\sigma(x)$. A
compressing map $\varphi$ on $Z/p^eZ$ naturally induces a map $\hat{\varphi}$
on $G'(\sigma(x),p^e)$. For a subset $D$ of the image of
$\varphi$, $\hat{\varphi}$ is said to be injective w.r.t. $D$-uniformity if
the distribution of elements of $D$ in the compressed sequence carries all the
information of the original primitive sequence. In this correspondence, for at
least $1-2(p-1)/(p^n-1)$ of primitive polynomials of degree $n$, a clear
criterion on $\varphi$ is obtained to decide whether $\hat{\varphi}$ is
injective w.r.t. $D$-uniformity, and the majority of maps on $Z/p^eZ$ induce
injective maps on $G'(\sigma(x),p^e)$. Furthermore, a sufficient condition on
$\varphi$ is given to ensure injectivity of $\hat{\varphi}$ w.r.t.
$D$-uniformity. It follows from the sufficient condition that if $\sigma(x)$ is
strongly primitive and the compressing map $\varphi(x)=f(x_{e-1})$, where
$f(x_{e-1})$ is a permutation polynomial over $\mathbb{F}_{p}$, then
$\hat{\varphi}$ is injective w.r.t. $D$-uniformity for $\emptyset\neq
D\subset\mathbb{F}_{p}$. Moreover, we give three specific families of
compressing maps which induce injective maps on $G'(\sigma(x),p^e)$.
|
1303.0930 | An Authentication Scheme for Subspace Codes over Network Based on Linear
Codes | cs.CR cs.IT math.IT | Network coding provides the advantage of maximizing the usage of network
resources, and has great application prospects in future network
communications. However, the properties of network coding also make the
pollution attack more serious. In this paper, we give an unconditionally
secure authentication scheme for network coding based on a linear code $C$.
Safavi-Naini and Wang gave an authentication code for multi-receivers and
multiple messages. We notice that the scheme of Safavi-Naini and Wang is
essentially constructed with Reed-Solomon codes, and we modify their
construction slightly to make it suitable for authenticating subspace codes
over a linear network. We also generalize the construction with linear codes.
The generalization to linear codes has advantages similar to those of
generalizing Shamir's secret sharing scheme to linear secret sharing schemes
based on linear
codes. One advantage of this generalization is that for a fixed message space,
our scheme allows arbitrarily many receivers to check the integrity of their
own messages, while the scheme with Reed-Solomon codes has a constraint on the
number of verifying receivers. Another advantage is that we introduce access
structure in the generalized scheme. Massey characterized the access structure
of linear secret sharing scheme by minimal codewords in the dual code whose
first component is 1. We slightly modify the definition of minimal codewords.
Let $C$ be a $[V,k]$ linear code. For any coordinate $i\in \{1,2,\cdots,V\}$, a
codeword $\vec{c}$ in $C$ is called minimal with respect to $i$ if the codeword
$\vec{c}$ has component 1 at the $i$-th coordinate and there is no other
codeword whose $i$-th component is 1 with support strictly contained in that of
$\vec{c}$. Then the security of receiver $R_i$ in our authentication scheme is
characterized by the minimal codewords with respect to $i$ in the dual code
$C^\bot$.
|
1303.0934 | GURLS: a Least Squares Library for Supervised Learning | cs.LG cs.AI cs.MS | We present GURLS, a least squares, modular, easy-to-extend software library
for efficient supervised learning. GURLS is targeted to machine learning
practitioners, as well as non-specialists. It offers a number of
state-of-the-art training strategies for medium and large-scale learning, and
routines for
efficient model selection. The library is particularly well suited for
multi-output problems (multi-category/multi-label). GURLS is currently
available in two independent implementations: Matlab and C++. It takes
advantage of the favorable properties of the regularized least squares
algorithm to exploit advanced tools in linear algebra. Routines to handle
computations with
very large matrices by means of memory-mapped storage and distributed task
execution are available. The package is distributed under the BSD licence and
is available for download at https://github.com/CBCL/GURLS.
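Regularized least squares, which GURLS is built around, has a closed-form solution $w = (X^T X + \lambda I)^{-1} X^T y$; as a minimal illustration (not GURLS code or its API; the function name and data are invented), the one-dimensional case reduces to a scalar formula:

```python
# One-dimensional regularized least squares: (X^T X + lam)^(-1) X^T y
# reduces to a scalar. Illustrative sketch of the estimator GURLS is
# built around; this is not GURLS's actual interface.

def ridge_1d(xs, ys, lam):
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    return sxy / (sxx + lam)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]          # y = 2x exactly
w0 = ridge_1d(xs, ys, 0.0)         # unregularized: recovers slope 2
w1 = ridge_1d(xs, ys, 3.0)         # regularization shrinks the estimate
```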
|
1303.0943 | A New Approach of Deriving Bounds between Entropy and Error from Joint
Distribution: Case Study for Binary Classifications | cs.IT math.IT | The existing upper and lower bounds between entropy and error are mostly
derived by means of inequalities, without reference to joint distributions. In
fact, from either a theoretical or an application viewpoint, there is a need
for a complete interpretation of the bounds in relation to joint
distributions. For this reason, in this work we propose a new approach for
deriving the bounds between entropy and error from a joint distribution. The
specific case study is given on binary classifications, which can justify the
need of the proposed approach. Two basic types of classification errors are
investigated, namely, the Bayesian and non-Bayesian errors. For both errors, we
derive the closed-form expressions of upper bound and lower bound in relation
to joint distributions. The solutions show that Fano's lower bound is an exact
bound for any type of error in a relation diagram of "Error Probability vs.
Conditional Entropy". A new upper bound for the Bayesian error is derived with
respect to the minimum prior probability, which is generally tighter than
Kovalevskij's upper bound.
|
1303.0964 | GBM Volumetry using the 3D Slicer Medical Image Computing Platform | cs.CV | Volumetric change in glioblastoma multiforme (GBM) over time is a critical
factor in treatment decisions. Typically, the tumor volume is computed on a
slice-by-slice basis using MRI scans obtained at regular intervals. (3D)Slicer
- a free platform for biomedical research - provides an alternative to this
manual slice-by-slice segmentation process, which is significantly faster and
requires less user interaction. In this study, 4 physicians segmented GBMs in
10 patients, once using the competitive region-growing based GrowCut
segmentation module of Slicer, and once purely by drawing boundaries completely
manually on a slice-by-slice basis. Furthermore, we provide a variability
analysis for three physicians for 12 GBMs. The time required for GrowCut
segmentation was on average 61% of the time required for a purely manual
segmentation. A comparison of Slicer-based segmentation with manual
slice-by-slice segmentation resulted in a Dice Similarity Coefficient of 88.43
+/- 5.23% and a Hausdorff Distance of 2.32 +/- 5.23 mm.
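The Dice Similarity Coefficient reported above is DSC = 2|A∩B|/(|A|+|B|) over the two sets of segmented voxels; a minimal sketch with voxels as coordinate tuples (toy 2-D data for illustration, not Slicer code):

```python
# Dice Similarity Coefficient between two segmentations represented as
# sets of voxel coordinates: DSC = 2|A ∩ B| / (|A| + |B|).

def dice(a, b):
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))

manual  = {(0, 0), (0, 1), (1, 0), (1, 1)}   # hypothetical manual mask
growcut = {(0, 1), (1, 0), (1, 1), (2, 1)}   # hypothetical GrowCut mask
score = dice(manual, growcut)                # 2*3 / (4+4) = 0.75
```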
|
1303.0970 | A heuristic optimization method for mitigating the impact of a virus
attack | cs.SI physics.soc-ph q-bio.PE | Taking precautions before or during the start of a virus outbreak can heavily
reduce the number of infected individuals. The question of which individuals
should be immunized in order to mitigate the impact of the virus on the rest of
the population has received considerable attention in the literature. The
dynamics of a virus spreading through a population are often represented as
information spread over a complex network. The strategies commonly proposed to
determine
which nodes are to be selected for immunization often involve only one
centrality measure at a time, while often the topology of the network seems to
suggest that a single metric is insufficient to capture the influence of a node
entirely.
In this work we present a generic method based on a genetic algorithm (GA)
which does not rely explicitly on any centrality measures during its search but
only exploits this type of information to narrow the search space. The fitness
of an individual is defined as the estimated expected number of infections of a
virus following SIR dynamics. The proposed method is evaluated on two contact
networks: the Goodreau's Faux Mesa high school and the US air transportation
network. The GA method manages to outperform the most common strategies based
on a single metric for the air transportation network and its performance is
comparable with the best performing strategy for the high school network.
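The GA idea can be sketched compactly: candidate solutions are sets of k nodes to immunize, and fitness is a Monte-Carlo estimate of the expected number of SIR infections with those nodes removed. The graph, operators, and parameters below are illustrative assumptions, not the paper's exact configuration:

```python
import random

# Hedged sketch of the GA approach: evolve sets of k nodes to immunize,
# scoring each set by the estimated expected SIR outbreak size on the
# graph with those nodes removed (lower is better).

def sir_outbreak(adj, immune, beta, rng):
    # one SIR run from a random non-immune seed; returns number infected
    seed = rng.choice([v for v in adj if v not in immune])
    infected, recovered = {seed}, set()
    while infected:
        nxt = set()
        for v in infected:
            for u in adj[v]:
                if u not in immune and u not in infected \
                        and u not in recovered and rng.random() < beta:
                    nxt.add(u)
        recovered |= infected
        infected = nxt
    return len(recovered)

def fitness(adj, immune, beta, rng, trials=30):
    return sum(sir_outbreak(adj, immune, beta, rng)
               for _ in range(trials)) / trials

def ga_immunize(adj, k, beta=0.8, pop=20, gens=15, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    popn = [frozenset(rng.sample(nodes, k)) for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(popn, key=lambda s: fitness(adj, s, beta, rng))
        popn = scored[:pop // 2]                     # elitist selection
        while len(popn) < pop:
            a, b = rng.sample(popn[:pop // 2], 2)
            child = set(rng.sample(sorted(a | b), k))  # crossover
            if rng.random() < 0.3:                     # mutation
                child.discard(rng.choice(sorted(child)))
                child.add(rng.choice(nodes))
            popn.append(frozenset(child))
    return min(popn, key=lambda s: fitness(adj, s, beta, rng))

# star graph: immunizing the hub (node 0) confines any outbreak to its seed
star = {0: list(range(1, 9))}
for v in range(1, 9):
    star[v] = [0]
best = ga_immunize(star, k=1)
```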
|
1303.1026 | Non-overlapping codes | cs.DM cs.IT math.CO math.IT | We say that a $q$-ary length $n$ code is \emph{non-overlapping} if the set of
non-trivial prefixes of codewords and the set of non-trivial suffixes of
codewords are disjoint. These codes were first studied by Levenshtein in 1964,
motivated by applications in synchronisation. More recently these codes were
independently invented (under the name \emph{cross-bifix-free} codes) by
Baji\'c and Stojanovi\'c.
We provide a simple construction for a class of non-overlapping codes which
has optimal cardinality whenever $n$ divides $q$. Moreover, for all parameters
$n$ and $q$ we show that a code from this class is close to optimal, in the
sense that it has cardinality within a constant factor of an upper bound due to
Levenshtein from 1970. Previous constructions have cardinality within a
constant factor of the upper bound only when $q$ is fixed.
Chee, Kiah, Purkayastha and Wang showed that a $q$-ary length $n$
non-overlapping code contains at most $q^n/(2n-1)$ codewords; this bound is
weaker than the Levenshtein bound. Their proof appealed to the application in
synchronisation: we provide a direct combinatorial argument to establish the
bound of Chee \emph{et al}.
We also consider codes of short length, finding the leading term of the
maximal cardinality of a non-overlapping code when $n$ is fixed and
$q\rightarrow \infty$. The largest cardinality of non-overlapping codes of
lengths $3$ or less is determined exactly.
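The defining property can be verified directly for small parameters. A brute-force sketch (the greedy search here is only a naive lower-bound heuristic for illustration, not the paper's construction):

```python
from itertools import product

# A code is non-overlapping (cross-bifix-free) if no non-trivial prefix of
# any codeword equals a non-trivial suffix of a (possibly identical)
# codeword. Brute-force checker plus a greedy search for tiny q and n.

def non_overlapping(code):
    prefixes = {w[:i] for w in code for i in range(1, len(w))}
    suffixes = {w[-i:] for w in code for i in range(1, len(w))}
    return prefixes.isdisjoint(suffixes)

def max_code_greedy(q, n):
    # scan all q-ary length-n words, keeping those that preserve the
    # property; this yields a maximal (not necessarily maximum) code
    code = []
    for w in product(range(q), repeat=n):
        if non_overlapping(code + [w]):
            code.append(w)
    return code

c = max_code_greedy(2, 3)   # binary, length 3
```

For q = 2, n = 3 the first-symbol/last-symbol constraint already forces every codeword to start with one fixed symbol and end with the other, and the greedy search confirms the maximum size is 1.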
|
1303.1038 | Anytime Reliable LDPC Convolutional Codes for Networked Control over
Wireless Channel | cs.IT cs.SY math.IT | This paper deals with the problem of stabilizing an unstable system through
networked control over the wireless medium. In such a situation a remote sensor
communicates the measurements to the system controller through a noisy channel.
In particular, in the AWGN scenario, we show that protograph-based LDPC
convolutional codes achieve anytime reliability and we also derive a lower
bound to the signal-to-noise ratio required to stabilize the system. Moreover,
on the Rayleigh-fading channel, we show by simulations that resorting to
multiple sensors makes it possible to achieve a diversity gain.
|
1303.1051 | A Genetic algorithm to solve the container storage space allocation
problem | cs.NE | This paper presents a genetic algorithm (GA) to solve the container storage
problem in the port. This problem is studied with different container types
such as regular, open side, open top, tank, empty and refrigerated containers.
The objective of this problem is to determine an optimal container
arrangement that respects customers' delivery deadlines, reduces the rehandling
operations of containers and minimizes the stop time of the container ship. In
this paper, an adaptation of the genetic algorithm to the container storage
problem is detailed and some experimental results are presented and discussed.
The proposed approach was compared to a Last In First Out (LIFO) algorithm
applied to the same problem and recorded good results.
|
1303.1090 | Embedded Online Optimization for Model Predictive Control at Megahertz
Rates | cs.SY math.OC | Faster, cheaper, and more power efficient optimization solvers than those
currently offered by general-purpose solutions are required for extending the
use of model predictive control (MPC) to resource-constrained embedded
platforms. We propose several custom computational architectures for different
first-order optimization methods that can handle linear-quadratic MPC problems
with input, input-rate, and soft state constraints. We provide analysis
ensuring the reliable operation of the resulting controller under reduced
precision fixed-point arithmetic. Implementation of the proposed architectures
in FPGAs shows that satisfactory control performance at a sample rate beyond 1
MHz is achievable even on low-end devices, opening up new possibilities for the
application of MPC on embedded systems.
|
1303.1093 | On Large Deviation Property of Recurrence Times | cs.IT math.IT | We extend the study by Ornstein and Weiss on the asymptotic behavior of the
normalized version of recurrence times and establish the large deviation
property for a certain class of mixing processes. Further, an estimator for
entropy based on recurrence times is proposed for which large deviation
behavior is proved for stationary and ergodic sources satisfying similar mixing
conditions.
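The Ornstein-Weiss result underlying this estimator states that log(R_n)/n converges to the entropy rate, where R_n is the first recurrence time of the initial n-block. A minimal sketch on an i.i.d. binary source (block length, sample size, and tolerances are illustrative choices):

```python
import math, random

# Recurrence-time entropy estimator: by Ornstein-Weiss, log2(R_n)/n -> H
# for stationary ergodic sources, where R_n is the first time the initial
# n-block recurs in the sequence.

def recurrence_time(seq, n):
    # smallest t >= 1 with seq[t:t+n] == seq[0:n]
    block = seq[:n]
    t = 1
    while seq[t:t + n] != block:
        t += 1
        if t + n > len(seq):
            raise ValueError("block does not recur in the sample")
    return t

def entropy_estimate(seq, n):
    return math.log2(recurrence_time(seq, n)) / n

rng = random.Random(7)
seq = [rng.randint(0, 1) for _ in range(200000)]
h = entropy_estimate(seq, 12)    # should be near 1 bit per symbol
```

On a deterministic period-2 sequence the block recurs after 2 steps, so the estimate log2(2)/n is small, as expected for a zero-entropy source.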
|
1303.1095 | A New Achievable Scheme for Interference Relay Channels | cs.IT math.IT | We establish an achievable rate region for discrete memoryless interference
relay channels that consist of two source-destination pairs and one or more
relays. We develop an achievable scheme combining the Han-Kobayashi and noisy
network coding schemes. We apply our achievable scheme to two cases. First, we
characterize the capacity region of a class of discrete memoryless interference
relay channels. This class naturally generalizes the injective deterministic
discrete memoryless interference channel by El Gamal and Costa and the
deterministic discrete memoryless relay channel with orthogonal receiver
components by Kim. Moreover, for the Gaussian interference relay channel with
orthogonal receiver components, we show that our scheme achieves a better sum
rate than that of noisy network coding.
|
1303.1098 | On Match Lengths and the Asymptotic Behavior of Sliding Window
Lempel-Ziv Algorithm for Zero Entropy Sequences | cs.IT math.IT | The Sliding Window Lempel-Ziv (SWLZ) algorithm has been studied from various
perspectives in information theory literature. In this paper, we provide a
general law which defines the asymptotics of match length for stationary and
ergodic zero entropy processes. Moreover, we use this law to choose the match
length $L_o$ in the almost sure optimality proof of the Fixed Shift Variant of
Lempel-Ziv (FSLZ) and SWLZ algorithms given in the literature. First, through an
example of stationary and ergodic processes generated by irrational rotation we
establish that for a window size of $n_w$ a compression ratio given by
$O(\frac{\log n_w}{{n_w}^a})$ where $a$ is arbitrarily close to 1 and $0 < a <
1$, is obtained under the application of FSLZ and SWLZ algorithms. Further, we
give a general expression for the compression ratio for a class of stationary
and totally ergodic processes with zero entropy.
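The match length central to the SWLZ analysis is the longest prefix of the upcoming data that already occurs somewhere in the window of previously seen symbols. A direct quadratic-time sketch on toy data (practical implementations use suffix structures; the sequences here are invented):

```python
# Match length for a sliding-window Lempel-Ziv step: the longest l such
# that lookahead[:l] occurs as a contiguous substring of the window.

def match_length(window, lookahead):
    best = 0
    w = "".join(map(str, window))
    s = "".join(map(str, lookahead))
    while best < len(s) and s[:best + 1] in w:
        best += 1
    return best

win = [0, 1, 0, 1, 1, 0, 1, 0]
la  = [1, 0, 1, 0, 0]
L = match_length(win, la)   # "1010" occurs in the window, "10100" does not
```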
|
1303.1144 | Recursive Sparse Recovery in Large but Structured Noise - Part 2 | cs.IT math.IT | We study the problem of recursively recovering a time sequence of sparse
vectors, St, from measurements Mt := St + Lt that are corrupted by structured
noise Lt which is dense and can have large magnitude. The structure that we
require is that Lt should lie in a low dimensional subspace that is either
fixed or changes "slowly enough"; and the eigenvalues of its covariance matrix
are "clustered". We do not assume any model on the sequence of sparse vectors.
Their support sets and their nonzero element values may be either independent
or correlated over time (usually in many applications they are correlated). The
only thing required is that there be some support change every so often. We
introduce a novel solution approach called Recursive Projected Compressive
Sensing with cluster-PCA (ReProCS-cPCA) that addresses some of the limitations
of earlier work. Under mild assumptions, we show that, with high probability,
ReProCS-cPCA can exactly recover the support set of St at all times; and the
reconstruction errors of both St and Lt are upper bounded by a time-invariant
and small value.
|
1303.1152 | An Equivalence between the Lasso and Support Vector Machines | cs.LG stat.ML | We investigate the relation of two fundamental tools in machine learning and
signal processing, that is the support vector machine (SVM) for classification,
and the Lasso technique used in regression. We show that the resulting
optimization problems are equivalent, in the following sense. Given any
instance of an $\ell_2$-loss soft-margin (or hard-margin) SVM, we construct a
Lasso instance having the same optimal solutions, and vice versa.
As a consequence, many existing optimization algorithms for both SVMs and
Lasso can also be applied to the respective other problem instances. Also, the
equivalence allows for many known theoretical insights for SVM and Lasso to be
translated between the two settings. One such implication gives a simple
kernelized version of the Lasso, analogous to the kernels used in the SVM
setting. Another consequence is that the sparsity of a Lasso solution is equal
to the number of support vectors for the corresponding SVM instance, and that
one can use screening rules to prune the set of support vectors. Furthermore,
we can relate sublinear time algorithms for the two problems, and give a new
such algorithm variant for the Lasso. We also study the regularization paths
for both methods.
|
1303.1201 | Multi-Pair Amplify-and-Forward Relaying with Very Large Antenna Arrays | cs.IT math.IT | We consider a multi-pair relay channel where multiple sources simultaneously
communicate with destinations using a relay. Each source or destination has
only a single antenna, while the relay is equipped with a very large antenna
array. We investigate the power efficiency of this system when maximum ratio
combining/maximal ratio transmission (MRC/MRT) or zero-forcing (ZF) processing
is used at the relay. Using a very large array, the transmit power of each
source or relay (or both) can be made inversely proportional to the number of
relay antennas while maintaining a given quality-of-service. At the same time,
the achievable sum rate can be increased by a factor of the number of
source-destination pairs. We show that when the number of antennas grows to
infinity, the asymptotic achievable rates of MRC/MRT and ZF are the same if we
scale the power at the sources. Depending on the large scale fading effect,
MRC/MRT can outperform ZF or vice versa if we scale the power at the relay.
|
1303.1208 | Classification with Asymmetric Label Noise: Consistency and Maximal
Denoising | stat.ML cs.LG | In many real-world classification problems, the labels of training examples
are randomly corrupted. Most previous theoretical work on classification with
label noise assumes that the two classes are separable, that the label noise is
independent of the true class label, or that the noise proportions for each
class are known. In this work, we give conditions that are necessary and
sufficient for the true class-conditional distributions to be identifiable.
These conditions are weaker than those analyzed previously, and allow for the
classes to be nonseparable and the noise levels to be asymmetric and unknown.
The conditions essentially state that a majority of the observed labels are
correct and that the true class-conditional distributions are "mutually
irreducible," a concept we introduce that limits the similarity of the two
distributions. For any label noise problem, there is a unique pair of true
class-conditional distributions satisfying the proposed conditions, and we
argue that this pair corresponds in a certain sense to maximal denoising of the
observed distributions.
Our results are facilitated by a connection to "mixture proportion
estimation," which is the problem of estimating the maximal proportion of one
distribution that is present in another. We establish a novel rate of
convergence result for mixture proportion estimation, and apply this to obtain
consistency of a discrimination rule based on surrogate loss minimization.
Experimental results on benchmark data and a nuclear particle classification
problem demonstrate the efficacy of our approach.
|
1303.1209 | Sample-Optimal Average-Case Sparse Fourier Transform in Two Dimensions | cs.DS cs.IT math.IT | We present the first sample-optimal sublinear time algorithms for the sparse
Discrete Fourier Transform over a two-dimensional sqrt{n} x sqrt{n} grid. Our
algorithms are analyzed for average-case signals. For signals whose spectrum
is exactly sparse, our algorithms use O(k) samples and run in O(k log k) time,
where k is the expected sparsity of the signal. For signals whose spectrum is
approximately sparse, our algorithm uses O(k log n) samples and runs in O(k
log^2 n) time; the latter algorithm works for k=Theta(sqrt{n}). The number of
samples used by our algorithms matches the known lower bounds for the
respective signal models.
By a known reduction, our algorithms give similar results for the
one-dimensional sparse Discrete Fourier Transform when n is a power of a small
composite number (e.g., n = 6^t).
|
1303.1217 | Impulsive Noise Mitigation in Powerline Communications Using Sparse
Bayesian Learning | stat.ML cs.IT math.IT | Additive asynchronous and cyclostationary impulsive noise limits
communication performance in OFDM powerline communication (PLC) systems.
Conventional OFDM receivers assume additive white Gaussian noise and hence
experience degradation in communication performance in impulsive noise.
Alternate designs assume a parametric statistical model of impulsive noise and
use the model parameters in mitigating impulsive noise. These receivers require
overhead in training and parameter estimation, and degrade due to model and
parameter mismatch, especially in highly dynamic environments. In this paper,
we model impulsive noise as a sparse vector in the time domain without any
other assumptions, and apply sparse Bayesian learning methods for estimation
and mitigation without training. We propose three iterative algorithms with
different complexity vs. performance trade-offs: (1) we utilize the noise
projection onto null and pilot tones to estimate and subtract the noise
impulses; (2) we add the information in the data tones to perform joint noise
estimation and OFDM detection; (3) we embed our algorithm into a decision
feedback structure to further enhance the performance of coded systems. When
compared to conventional OFDM PLC receivers, the proposed receivers achieve SNR
gains of up to 9 dB in coded and 10 dB in uncoded systems in the presence of
impulsive noise.
|
1303.1220 | Reduced-Rank DOA Estimation based on Joint Iterative Subspace
Optimization and Grid Search | cs.IT math.IT | In this paper, we propose a novel reduced-rank algorithm for direction of
arrival (DOA) estimation based on the minimum variance (MV) power spectral
evaluation. It is suitable for DOA estimation with large arrays and can be
applied to arbitrary array geometries. The proposed DOA estimation algorithm is
formulated as a joint optimization of a subspace projection matrix and an
auxiliary reduced-rank parameter vector with respect to the MV and grid search.
A constrained least squares method is employed to solve this joint optimization
problem for the output power over the grid. The proposed algorithm is
described for direction-finding problems with a large number of users, with or
without exact knowledge of the number of sources, and does not require the singular value
decomposition (SVD). The spatial smoothing (SS) technique is also employed in
the proposed algorithm for dealing with the correlated sources problem.
Simulations are conducted with comparisons against existing algorithms to show
the improved
performance of the proposed algorithm in different scenarios.
|
1303.1232 | Japanese-Spanish Thesaurus Construction Using English as a Pivot | cs.CL cs.AI | We present the results of research with the goal of automatically creating a
multilingual thesaurus based on the freely available resources of Wikipedia and
WordNet. Our goal is to increase resources for natural language processing
tasks such as machine translation targeting the Japanese-Spanish language pair.
Given the scarcity of resources, we use existing English resources as a pivot
for creating a trilingual Japanese-Spanish-English thesaurus. Our approach
consists of extracting the translation tuples from Wikipedia, disambiguating
them by mapping them to WordNet word senses. We present results comparing two
methods of disambiguation, the first using VSM on Wikipedia article texts and
WordNet definitions, and the second using categorical information extracted
from Wikipedia. We find that mixing the two methods produces favorable results.
Using the proposed method, we have constructed a multilingual
Spanish-Japanese-English thesaurus consisting of 25,375 entries. The same
method can be applied to any pair of languages that are linked to English in
Wikipedia.
|
1303.1243 | A Generalized Hybrid Real-Coded Quantum Evolutionary Algorithm Based on
Particle Swarm Theory with Arithmetic Crossover | cs.NE | This paper proposes a generalized Hybrid Real-coded Quantum Evolutionary
Algorithm (HRCQEA) for optimizing complex functions as well as combinatorial
optimization. The main idea of HRCQEA is to devise a new technique for mutation
and crossover operators. Using the evolutionary equation of PSO a
Single-Multiple gene Mutation (SMM) is designed and the concept of Arithmetic
Crossover (AC) is used in the new Crossover operator. In HRCQEA, each triploid
chromosome represents a particle and the position of the particle is updated
using SMM and Quantum Rotation Gate (QRG), which can make the balance between
exploration and exploitation. Crossover is employed to expand the search space,
Hill Climbing Selection (HCS) and elitism help to accelerate the convergence
speed. Simulation results on Knapsack Problem and five benchmark complex
functions with high dimension show that HRCQEA performs better in terms of
ability to discover the global optimum and convergence speed.
|
1303.1264 | Discovery of factors in matrices with grades | cs.LG cs.NA | We present an approach to decomposition and factor analysis of matrices with
ordinal data. The matrix entries are grades to which objects represented by
rows satisfy attributes represented by columns, e.g. grades to which an image
is red, a product has a given feature, or a person performs well in a test. We
assume that the grades form a bounded scale equipped with certain aggregation
operators and conform to the structure of a complete residuated lattice. We
present a greedy approximation algorithm for the problem of decomposing such a
matrix into a product of two matrices with grades, under the restriction that
the number of factors be small. Our algorithm is based on a geometric insight
provided by a theorem identifying particular rectangular-shaped submatrices as
optimal factors for the decompositions. These factors correspond to formal
concepts of the input data and allow an easy interpretation of the
decomposition. We present illustrative examples and experimental evaluation.
|
1303.1271 | Convex and Scalable Weakly Labeled SVMs | cs.LG | In this paper, we study the problem of learning from weakly labeled data,
where labels of the training examples are incomplete. This includes, for
example, (i) semi-supervised learning where labels are partially known; (ii)
multi-instance learning where labels are implicitly known; and (iii) clustering
where labels are completely unknown. Unlike supervised learning, learning with
weak labels involves a difficult Mixed-Integer Programming (MIP) problem.
Therefore, it can suffer from poor scalability and may also get stuck in a
local minimum. In this paper, we focus on SVMs and propose the WellSVM via a novel
label generation strategy. This leads to a convex relaxation of the original
MIP, which is at least as tight as existing convex Semi-Definite Programming
(SDP) relaxations. Moreover, the WellSVM can be solved via a sequence of SVM
subproblems that are much more scalable than previous convex SDP relaxations.
Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised
learning; (ii) multi-instance learning for locating regions of interest in
content-based information retrieval; and (iii) clustering, clearly demonstrate
improved performance, and WellSVM is also readily applicable on large data
sets.
|
1303.1280 | Large-Margin Metric Learning for Partitioning Problems | cs.LG stat.ML | In this paper, we consider unsupervised partitioning problems, such as
clustering, image segmentation, video segmentation and other change-point
detection problems. We focus on partitioning problems based explicitly or
implicitly on the minimization of Euclidean distortions, which include
mean-based change-point detection, K-means, spectral clustering and normalized
cuts. Our main goal is to learn a Mahalanobis metric for these unsupervised
problems, leading to feature weighting and/or selection. This is done in a
supervised way by assuming the availability of several potentially partially
labelled datasets that share the same metric. We cast the metric learning
problem as a large-margin structured prediction problem, with proper definition
of regularizers and losses, leading to a convex optimization problem which can
be solved efficiently with iterative techniques. We provide experiments where
we show how learning the metric may significantly improve the partitioning
performance in synthetic examples, bioinformatics, video segmentation and image
segmentation problems.
|
1303.1285 | Bandlimited Signal Reconstruction From the Distribution of Unknown
Sampling Locations | cs.IT math.IT math.ST stat.TH | We study the reconstruction of bandlimited fields from samples taken at
unknown but statistically distributed sampling locations. The setup is
motivated by distributed sampling where precise knowledge of sensor locations
can be difficult.
Periodic one-dimensional bandlimited fields are considered for sampling.
Perfect samples of the field at independent and identically distributed
locations are obtained. The statistical realization of sampling locations is
not known. First, it is shown that a bandlimited field cannot be uniquely
determined with samples taken at statistically distributed but unknown
locations, even if the number of samples is infinite. Next, it is assumed that
the order of sample locations is known. In this case, using insights from
order-statistics, an estimate for the field with useful asymptotic properties
is designed. Distortion (mean-squared error) bounds and a central limit theorem
are established for this estimate.
|
1303.1292 | Stabilizing switching signals for switched linear systems | cs.SY math.OC | This article deals with stability of continuous-time switched linear systems
under constrained switching. Given a family of linear systems, possibly
containing unstable dynamics, we characterize a new class of switching signals
under which the switched linear system generated by the family is globally
asymptotically stable. Our characterization of such
stabilizing switching signals involves the asymptotic frequency of switching,
the asymptotic fraction of activation of the constituent systems, and the
asymptotic densities of admissible transitions among them. Our techniques
employ multiple Lyapunov-like functions, and extend preceding results both in
scope and applicability.
|
1303.1312 | A Fast Iterative Bayesian Inference Algorithm for Sparse Channel
Estimation | stat.ML cs.IT math.IT | In this paper, we present a Bayesian channel estimation algorithm for
multicarrier receivers based on pilot symbol observations. The inherent sparse
nature of wireless multipath channels is exploited by modeling the prior
distribution of multipath components' gains with a hierarchical representation
of the Bessel K probability density function; a highly efficient, fast
iterative Bayesian inference method is then applied to the proposed model. The
resulting estimator outperforms other state-of-the-art Bayesian and
non-Bayesian estimators, either by yielding lower mean squared estimation error
or by attaining the same accuracy with improved convergence rate, as shown in
our numerical evaluation.
|
1303.1354 | Adaptive Spatial Aloha, Fairness and Stochastic Geometry | cs.NI cs.IT math.IT | This work aims at combining adaptive protocol design, utility maximization
and stochastic geometry. We focus on a spatial adaptation of Aloha within the
framework of ad hoc networks. We consider quasi-static networks in which
mobiles learn the local topology and incorporate this information to adapt
their medium access probability (MAP) selection to their local environment. We
consider the cases where nodes cooperate in a distributed way to maximize the
global throughput or to achieve either proportional fair or max-min fair medium
access. In the proportional fair case, we show that nodes can compute their
optimal MAPs as solutions to certain fixed point equations. In the maximum
throughput case, the optimal MAPs are obtained through a Gibbs Sampling based
algorithm. In the max-min case, these are obtained as the solution of a convex
optimization problem. The main performance analysis result of the paper is that
this type of distributed adaptation can be analyzed using stochastic geometry
in the proportional fair case. In this case, we show that, when the nodes form
a homogeneous Poisson point process in the Euclidean plane, the distribution of
the optimal MAP can be obtained from that of a certain shot noise process
w.r.t. the node Poisson point process and that the mean utility can also be
derived from this distribution. We discuss the difficulties to be faced for
analyzing the performance of the other cases (maximum throughput and max-min
fairness). Numerical results illustrate our findings and quantify the gains
brought by spatial adaptation in such networks.
|
1303.1369 | Coevolution and correlated multiplexity in multiplex networks | physics.soc-ph cond-mat.dis-nn cond-mat.stat-mech cs.SI | Distinct channels of interaction in a complex networked system define network
layers, which co-exist and co-operate for the system's function. Towards
realistic modeling and understanding such multiplex systems, we introduce and
study a class of growing multiplex network models in which different network
layers coevolve, and examine how the entangled growth of coevolving layers can
shape the overall network structure. We show analytically and numerically that
the coevolution can induce strong degree correlations across layers, as well as
modulate degree distributions. We further show that such a coevolution-induced
correlated multiplexity can alter the system's response to dynamical processes,
exemplified by the suppressed susceptibility to a threshold cascade process.
|
1303.1384 | Causality in concurrent systems | cs.DC cs.AI | The term concurrent systems refers to systems, whether software, hardware or
even biological, that are characterized by sets of independent actions that
can be executed in any order or simultaneously. Computer scientists resort to a
causal terminology to describe and analyse the relations between the actions in
these systems. However, a thorough discussion about the meaning of causality in
such a context has not been developed yet. This paper aims to fill the gap.
First, the paper analyses the notion of causation in concurrent systems and
attempts to build bridges with the existing philosophical literature,
highlighting similarities and divergences between them. Second, the paper
analyses the use of counterfactual reasoning in ex-post analysis in concurrent
systems (i.e. execution trace analysis).
|
1303.1414 | Dynamical influence processes on networks: General theory and
applications to social contagion | physics.soc-ph cs.SI | We study binary state dynamics on a network where each node acts in response
to the average state of its neighborhood. Allowing varying amounts of
stochasticity in both the network and node responses, we find different
outcomes in random and deterministic versions of the model. In the limit of a
large, dense network, however, we show that these dynamics coincide. We
construct a general mean field theory for random networks and show this
predicts that the dynamics on the network are a smoothed version of the average
response function dynamics. Thus, the behavior of the system can range from
steady state to chaotic depending on the response functions, network
connectivity, and update synchronicity. As a specific example, we model the
competing tendencies of imitation and non-conformity by incorporating an
off-threshold into standard threshold models of social contagion. In this way
we attempt to capture important aspects of fashions and societal trends. We
compare our theory to extensive simulations of this "limited imitation
contagion" model on Poisson random graphs, finding agreement between the
mean-field theory and stochastic simulations.
|
1303.1420 | Verifying a platform for digital imaging: a multi-tool strategy | cs.SE cs.CV | Fiji is a Java platform widely used by biologists and other experimental
scientists to process digital images. In particular, in our research, carried
out together with a team of biologists, we use Fiji in some pre-processing steps
before undertaking a homological digital processing of images. In a previous
work, we have formalised the correctness of the programs which use homological
techniques to analyse digital images. However, the verification of Fiji's
pre-processing step was missing. In this paper, we present a multi-tool approach
filling this gap, based on the combination of Why/Krakatoa, Coq and ACL2.
|
1303.1441 | A Hybrid Approach to Extract Keyphrases from Medical Documents | cs.IR cs.CL | Keyphrases are the phrases, consisting of one or more words, representing the
important concepts in the articles. Keyphrases are useful for a variety of
tasks such as text summarization, automatic indexing,
clustering/classification, text mining etc. This paper presents a hybrid
approach to keyphrase extraction from medical documents. The keyphrase
extraction approach presented in this paper is an amalgamation of two methods:
the first one assigns weights to candidate keyphrases based on an effective
combination of features such as position, term frequency, and inverse document
frequency; the second one assigns weights to candidate keyphrases using some
knowledge about their similarities to the structure and characteristics of
keyphrases available in the memory (stored list of keyphrases). An efficient
candidate keyphrase identification method as the first component of the
proposed keyphrase extraction system has also been introduced in this paper.
The experimental results show that the proposed hybrid approach performs better
than some state-of-the-art keyphrase extraction approaches.
|
1303.1454 | Causality in Bayesian Belief Networks | cs.AI | We address the problem of causal interpretation of the graphical structure of
Bayesian belief networks (BBNs). We review the concept of causality explicated
in the domain of structural equations models and show that it is applicable to
BBNs. In this view, which we call mechanism-based, causality is defined within
models and causal asymmetries arise when mechanisms are placed in the context
of a system. We lay the link between structural equations models and BBNs
models and formulate the conditions under which the latter can be given causal
interpretation.
|
1303.1455 | From Conditional Oughts to Qualitative Decision Theory | cs.AI | The primary theme of this investigation is a decision theoretic account of
conditional ought statements (e.g., "You ought to do A, if C") that rectifies
glaring deficiencies in classical deontic logic. The resulting account forms a
sound basis for qualitative decision theory, thus providing a framework for
qualitative planning under uncertainty. In particular, we show that adding
causal relationships (in the form of a single graph) as part of an epistemic
state is sufficient to facilitate the analysis of action sequences, their
consequences, their interaction with observations, their expected utilities
and, hence, the synthesis of plans and strategies under uncertainty.
|
1303.1456 | A Probabilistic Algorithm for Calculating Structure: Borrowing from
Simulated Annealing | cs.AI | We have developed a general Bayesian algorithm for determining the
coordinates of points in a three-dimensional space. The algorithm takes as
input a set of probabilistic constraints on the coordinates of the points, and
an a priori distribution for each point location. The output is a
maximum-likelihood estimate of the location of each point. We use the extended,
iterated Kalman filter, and add a search heuristic for optimizing its solution
under nonlinear conditions. This heuristic is based on the same principle as
the simulated annealing heuristic for other optimization problems. Our method
uses any probabilistic constraints that can be expressed as a function of the
point coordinates (for example, distance, angles, dihedral angles, and
planarity). It assumes that all constraints have Gaussian noise. In this paper,
we describe the algorithm and show its performance on a set of synthetic data
to illustrate its convergence properties, and its applicability to domains such
as molecular structure determination.
|
1303.1457 | A Study of Scaling Issues in Bayesian Belief Networks for Ship
Classification | cs.AI | The problems associated with scaling involve active and challenging research
topics in the area of artificial intelligence. The purpose is to solve real
world problems by means of AI technologies, in cases where the complexity of
representation of the real world problem is potentially combinatorial. In this
paper, we present a novel approach to cope with the scaling issues in Bayesian
belief networks for ship classification. The proposed approach divides the
conceptual model of a complex ship classification problem into a set of small
modules that work together to solve the classification problem while preserving
the functionality of the original model. The possible ways of explaining sensor
returns (e.g., the evidence) for some features, such as portholes along the
length of a ship, are sometimes combinatorial. Thus, using an exhaustive
approach, which entails the enumeration of all possible explanations, is
impractical for larger problems. We present a network structure (referred to as
Sequential Decomposition, SD) in which each observation is associated with a
set of legitimate outcomes which are consistent with the explanation of each
observed piece of evidence. The results show that the SD approach allows one to
represent feature-observation relations in a manageable way and achieve the
same explanatory power as an exhaustive approach.
|
1303.1458 | Tradeoffs in Constructing and Evaluating Temporal Influence Diagrams | cs.AI | This paper addresses the tradeoffs which need to be considered in reasoning
using probabilistic network representations, such as Influence Diagrams (IDs).
In particular, we examine the tradeoffs entailed in using Temporal Influence
Diagrams (TIDs) which adequately capture the temporal evolution of a dynamic
system without prohibitive data and computational requirements. Three
approaches for TID construction which make different tradeoffs are examined:
(1) tailoring the network at each time interval to the data available (rather
than just copying the original Bayes network for all time intervals); (2)
modeling the evolution of a parsimonious subset of variables (rather than all
variables); and (3) model selection approaches, which seek to maximize some
measure of the predictive accuracy of the model without introducing too many
parameters, which might cause "overfitting" of the model. Methods of evaluating
the accuracy/efficiency of the tradeoffs are proposed.
|
1303.1459 | End-User Construction of Influence Diagrams for Bayesian Statistics | cs.AI | Influence diagrams are ideal knowledge representations for Bayesian
statistical models. However, these diagrams are difficult for end users to
interpret and to manipulate. We present a user-based architecture that enables
end users to create and to manipulate the knowledge representation. We use the
problem of physicians' interpretation of two-arm parallel randomized clinical
trials (TAPRCT) to illustrate the architecture and its use. There are three
primary data structures. Elements of statistical models are encoded as
subgraphs of a restricted class of influence diagram. The interpretations of
those elements are mapped into users' language in a domain-specific, user-based
semantic interface, called a patient-flow diagram, in the TAPRCT problem.
Permitted transformations of the statistical model that maintain the semantic
relationships of the model are encoded in a metadata-state diagram, called the
cohort-state diagram, in the TAPRCT problem. The algorithm that runs the system
uses modular actions called construction steps. This framework has been
implemented in a system called THOMAS, that allows physicians to interpret the
data reported from a TAPRCT.
|
1303.1460 | On Considering Uncertainty and Alternatives in Low-Level Vision | cs.AI cs.CV | In this paper we address the uncertainty issues involved in the low-level
vision task of image segmentation. Researchers in computer vision have worked
extensively on this problem, in which the goal is to partition (or segment) an
image into regions that are homogeneous or uniform in some sense. This
segmentation is often utilized by some higher level process, such as an object
recognition system. We show that by considering uncertainty in a Bayesian
formalism, we can use statistical image models to build an approximate
representation of a probability distribution over a space of alternative
segmentations. We give detailed descriptions of the various levels of
uncertainty associated with this problem, discuss the interaction of prior and
posterior distributions, and provide the operations for constructing this
representation.
|
1303.1461 | Forecasting Sleep Apnea with Dynamic Network Models | cs.AI | Dynamic network models (DNMs) are belief networks for temporal reasoning. The
DNM methodology combines techniques from time series analysis and probabilistic
reasoning to provide (1) a knowledge representation that integrates
noncontemporaneous and contemporaneous dependencies and (2) methods for
iteratively refining these dependencies in response to the effects of exogenous
influences. We use belief-network inference algorithms to perform forecasting,
control, and discrete event simulation on DNMs. The belief network formulation
allows us to move beyond the traditional assumptions of linearity in the
relationships among time-dependent variables and of normality in their
probability distributions. We demonstrate the DNM methodology on an important
forecasting problem in medicine. We conclude with a discussion of how the
methodology addresses several limitations found in traditional time series
analyses.
|
1303.1462 | Normative Engineering Risk Management Systems | cs.AI | This paper describes a normative system design that incorporates diagnosis,
dynamic evolution, decision making, and information gathering. A single
influence diagram demonstrates the design's coherence, yet each activity is
more effectively modeled and evaluated separately. Application to offshore oil
platforms illustrates the design. For this application, the normative system is
embedded in a real-time expert system.
|
1303.1463 | Diagnosis of Multiple Faults: A Sensitivity Analysis | cs.AI | We compare the diagnostic accuracy of three diagnostic inference models: the
simple Bayes model, the multimembership Bayes model, which is isomorphic to the
parallel combination function in the certainty-factor model, and a model that
incorporates the noisy OR-gate interaction. The comparison is done on 20
clinicopathological conference (CPC) cases from the American Journal of
Medicine: challenging cases describing actual patients, often with multiple
disorders. We find that the distributions produced by the noisy OR model agree
most closely with the gold-standard diagnoses, although substantial differences
exist between the distributions and the diagnoses. In addition, we find that
the multimembership Bayes model tends to significantly overestimate the
posterior probabilities of diseases, whereas the simple Bayes model tends to
significantly underestimate the posterior probabilities. Our results suggest
that additional work to refine the noisy OR model for internal medicine will be
worthwhile.
|
1303.1464 | Additive Belief-Network Models | cs.AI | The inherent intractability of probabilistic inference has hindered the
application of belief networks to large domains. Noisy OR-gates [30] and
probabilistic similarity networks [18, 17] escape the complexity of inference
by restricting model expressiveness. Recent work in the application of
belief-network models to time-series analysis and forecasting [9, 10] has given
rise to the additive belief network model (ABNM). We (1) discuss the nature and
implications of the approximations made by an additive decomposition of a
belief network, (2) show greater efficiency in the induction of additive models
when available data are scarce, (3) generalize probabilistic inference
algorithms to exploit the additive decomposition of ABNMs, (4) show greater
efficiency of inference, and (5) compare results on inference with a simple
additive belief network.
|
1303.1465 | Parameter Adjustment in Bayes Networks. The generalized noisy OR-gate | cs.AI | Spiegelhalter and Lauritzen [15] studied sequential learning in Bayesian
networks and proposed three models for the representation of conditional
probabilities. A fourth model, shown here, assumes that the parameter
distribution is given by a product of Gaussian functions and updates them from
the λ and π messages of evidence propagation. We also generalize the noisy
OR-gate for multivalued variables, develop an algorithm to compute probabilities
in time proportional to the number of parents (even in networks with loops) and
apply the learning model to this gate.
|
1303.1466 | A fuzzy relation-based extension of Reggia's relational model for
diagnosis handling uncertain and incomplete information | cs.AI | Relational models for diagnosis are based on a direct description of the
association between disorders and manifestations. This type of model has been
specially used and developed by Reggia and his co-workers in the late eighties
as a basic starting point for approaching diagnosis problems. The paper
proposes a new relational model which includes Reggia's model as a particular
case and which allows for a more expressive representation of the observations
and of the manifestations associated with disorders. The model distinguishes,
i) between manifestations which are certainly absent and those which are not
(yet) observed, and ii) between manifestations which cannot be caused by a
given disorder and manifestations for which we do not know if they can or
cannot be caused by this disorder. This new model, which can handle uncertainty
in a non-probabilistic way, is based on possibility theory and so-called
twofold fuzzy sets, previously introduced by the authors.
|
1303.1467 | Dialectic Reasoning with Inconsistent Information | cs.AI | From an inconsistent database non-trivial arguments may be constructed both
for a proposition, and for the contrary of that proposition. Therefore,
inconsistency in a logical database causes uncertainty about which conclusions
to accept. This kind of uncertainty is called logical uncertainty. We define a
concept of "acceptability", which induces a means for differentiating
arguments. The more acceptable an argument, the more confident we are in it. A
specific interest is to use the acceptability classes to assign linguistic
qualifiers to propositions, such that the qualifier assigned to a propositions
reflects its logical uncertainty. A more general interest is to understand how
classes of acceptability can be defined for arguments constructed from an
inconsistent database, and how this notion of acceptability can be devised to
reflect different criteria. Whilst concentrating on the aspects of assigning
linguistic qualifiers to propositions, we also indicate the more general
significance of the notion of acceptability.
|
1303.1468 | Causal Independence for Knowledge Acquisition and Inference | cs.AI | I introduce a temporal belief-network representation of causal independence
that a knowledge engineer can use to elicit probabilistic models. Like the
current, atemporal belief-network representation of causal independence, the
new representation makes knowledge acquisition tractable. Unlike the atemporal
representation, however, the temporal representation can simplify inference,
and does not require the use of unobservable variables. The representation is
less general than is the atemporal representation, but appears to be useful for
many practical applications.
|
1303.1469 | Utility-Based Abstraction and Categorization | cs.AI | We take a utility-based approach to categorization. We construct
generalizations about events and actions by considering losses associated with
failing to distinguish among detailed distinctions in a decision model. The
utility-based methods transform detailed states of the world into more abstract
categories comprised of disjunctions of the states. We show how we can cluster
distinctions into groups of distinctions at progressively higher levels of
abstraction, and describe rules for decision making with the abstractions. The
techniques introduce a utility-based perspective on the nature of concepts, and
provide a means of simplifying decision models used in automated reasoning
systems. We demonstrate the techniques by describing the capabilities and
output of TUBA, a program for utility-based abstraction.
|
1303.1470 | Sensitivity Analysis for Probability Assessments in Bayesian Networks | cs.AI | When eliciting probability models from experts, knowledge engineers may
compare the results of the model with expert judgment on test scenarios, then
adjust model parameters to bring the behavior of the model more in line with
the expert's intuition. This paper presents a methodology for analytic
computation of sensitivity values to measure the impact of small changes in a
network parameter on a target probability value or distribution. These values
can be used to guide knowledge elicitation. They can also be used in a gradient
descent algorithm to estimate parameter values that maximize a measure of
goodness-of-fit to both local and holistic probability assessments.
|
1303.1471 | Causal Modeling | cs.AI | Causal Models are like Dependency Graphs and Belief Nets in that they provide
a structure and a set of assumptions from which a joint distribution can, in
principle, be computed. Unlike Dependency Graphs, Causal Models are models of
hierarchical and/or parallel processes, rather than models of distributions
(partially) known to a model builder through some sort of gestalt. As such,
Causal Models are more modular, easier to build, more intuitive, and easier to
understand than Dependency Graph Models. Causal Models are formally defined and
Dependency Graph Models are shown to be a special case of them. Algorithms
supporting inference are presented. Parsimonious methods for eliciting
dependent probabilities are presented.
|
1303.1472 | Some Complexity Considerations in the Combination of Belief Networks | cs.AI | One topic that is likely to attract an increasing amount of attention within
the Knowledge-base systems research community is the coordination of
information provided by multiple experts. We envision a situation in which
several experts independently encode information as belief networks. A
potential user must then coordinate the conclusions and recommendations of
these networks to derive some sort of consensus. One approach to such a
consensus is the fusion of the contributed networks into a single, consensus
model prior to the consideration of any case-specific data (specific
observations, test results). This approach requires two types of combination
procedures, one for probabilities, and one for graphs. Since the combination of
probabilities is relatively well understood, the key barriers to this approach
lie in the realm of graph theory. This paper provides formal definitions of
some of the operations necessary to effect the necessary graphical
combinations, and provides complexity analyses of these procedures. The paper's
key result is that most of these operations are NP-hard, and its primary
message is that the derivation of "good" consensus networks must be done
heuristically.
|
1303.1473 | Deriving a Minimal I-map of a Belief Network Relative to a Target
Ordering of its Nodes | cs.AI | This paper identifies and solves a new optimization problem: Given a belief
network (BN) and a target ordering on its variables, how can we efficiently
derive its minimal I-map whose arcs are consistent with the target ordering? We
present three solutions to this problem, all of which lead to directed acyclic
graphs based on the original BN's recursive basis relative to the specified
ordering (such a DAG is sometimes termed the boundary DAG drawn from the given
BN relative to the said ordering [5]). Along the way, we also uncover an
important general principle about arc reversals: when reordering a BN according
to some target ordering, (while attempting to minimize the number of arcs
generated), the sequence of arc reversals should follow the topological
ordering induced by the original belief network's arcs to as great an extent as
possible. These results promise to have a significant impact on the derivation
of consensus models, as well as on other algorithms that require the
reconfiguration and/or combination of BN's.
|
1303.1474 | Probabilistic Conceptual Network: A Belief Representation Scheme for
Utility-Based Categorization | cs.AI | Probabilistic conceptual network is a knowledge representation scheme
designed for reasoning about concepts and categorical abstractions in
utility-based categorization. The scheme combines the formalisms of abstraction
and inheritance hierarchies from artificial intelligence, and probabilistic
networks from decision analysis. It provides a common framework for
representing conceptual knowledge, hierarchical knowledge, and uncertainty. It
facilitates dynamic construction of categorization decision models at varying
levels of abstraction. The scheme is applied to an automated machining problem
for reasoning about the state of the machine at varying levels of abstraction
in support of actions for maintaining competitiveness of the plant.
|
1303.1475 | Reasoning about the Value of Decision-Model Refinement: Methods and
Application | cs.AI | We investigate the value of extending the completeness of a decision model
along different dimensions of refinement. Specifically, we analyze the expected
value of quantitative, conceptual, and structural refinement of decision
models. We illustrate the key dimensions of refinement with examples. The
analyses of value of model refinement can be used to focus the attention of an
analyst or an automated reasoning system on extensions of a decision model
associated with the greatest expected value.
|
1303.1476 | Mixtures of Gaussians and Minimum Relative Entropy Techniques for
Modeling Continuous Uncertainties | cs.AI | Problems of probabilistic inference and decision making under uncertainty
commonly involve continuous random variables. Often these are discretized to a
few points, to simplify assessments and computations. An alternative
approximation is to fit analytically tractable continuous probability
distributions. This approach has potential simplicity and accuracy advantages,
especially if variables can be transformed first. This paper shows how a
minimum relative entropy criterion can drive both transformation and fitting,
illustrating with a power and logarithm family of transformations and mixtures
of Gaussian (normal) distributions, which allow use of efficient influence
diagram methods. The fitting procedure in this case is the well-known EM
algorithm. The selection of the number of components in a fitted mixture
distribution is automated with an objective that trades off accuracy and
computational cost.
|
1303.1477 | Valuation Networks and Conditional Independence | cs.AI | Valuation networks have been proposed as graphical representations of
valuation-based systems (VBSs). The VBS framework is able to capture many
uncertainty calculi including probability theory, Dempster-Shafer's
belief-function theory, Spohn's epistemic belief theory, and Zadeh's
possibility theory. In this paper, we show how valuation networks encode
conditional independence relations. For the probabilistic case, the class of
probability models encoded by valuation networks includes undirected graph
models, directed acyclic graph models, directed balloon graph models, and
recursive causal graph models.
|
1303.1478 | Relevant Explanations: Allowing Disjunctive Assignments | cs.AI | Relevance-based explanation is a scheme in which partial assignments to
Bayesian belief network variables are explanations (abductive conclusions). We
allow variables to remain unassigned in explanations as long as they are
irrelevant to the explanation, where irrelevance is defined in terms of
statistical independence. When multiple-valued variables exist in the system,
especially when subsets of values correspond to natural types of events, the
over-specification problem, alleviated by independence-based explanation,
resurfaces. As a solution to that, as well as for addressing the question of
explanation specificity, it is desirable to collapse such a subset of values
into a single value on the fly. The equivalent method, which is adopted here,
is to generalize the notion of assignments to allow disjunctive assignments. We
proceed to define generalized independence based explanations as maximum
posterior probability independence based generalized assignments (GIB-MAPs).
GIB assignments are shown to have certain properties that ease the design of
algorithms for computing GIB-MAPs. One such algorithm is discussed here, as
well as suggestions for how other algorithms may be adapted to compute
GIB-MAPs. GIB-MAP explanations still suffer from instability, a problem which
may be addressed using "approximate" conditional independence as a condition
for irrelevance.
|
1303.1479 | A Generalization of the Noisy-Or Model | cs.AI | The Noisy-Or model is convenient for describing a class of uncertain
relationships in Bayesian networks [Pearl 1988]. Pearl describes the Noisy-Or
model for Boolean variables. Here we generalize the model to n-ary input and
output variables and to arbitrary functions other than the Boolean OR function.
This generalization is a useful modeling aid for construction of Bayesian
networks. We illustrate with some examples including digital circuit diagnosis
and network reliability analysis.
|
1303.1480 | Using First-Order Probability Logic for the Construction of Bayesian
Networks | cs.AI | We present a mechanism for constructing graphical models, specifically
Bayesian networks, from a knowledge base of general probabilistic information.
The unique feature of our approach is that it uses a powerful first-order
probabilistic logic for expressing the general knowledge base. This logic
allows for the representation of a wide range of logical and probabilistic
information. The model construction procedure we propose uses notions from
direct inference to identify pieces of local statistical information from the
knowledge base that are most appropriate to the particular event we want to
reason about. These pieces are composed to generate a joint probability
distribution specified as a Bayesian network. Although there are fundamental
difficulties in dealing with fully general knowledge, our procedure is
practical for quite rich knowledge bases and it supports the construction of a
far wider range of networks than allowed for by current template technology.
|
1303.1481 | Representing and Reasoning With Probabilistic Knowledge: A Bayesian
Approach | cs.AI | PAGODA (Probabilistic Autonomous Goal-Directed Agent) is a model for
autonomous learning in probabilistic domains [desJardins, 1992] that
incorporates innovative techniques for using the agent's existing knowledge to
guide and constrain the learning process and for representing, reasoning with,
and learning probabilistic knowledge. This paper describes the probabilistic
representation and inference mechanism used in PAGODA. PAGODA forms theories
about the effects of its actions and the world state on the environment over
time. These theories are represented as conditional probability distributions.
A restriction is imposed on the structure of the theories that allows the
inference mechanism to find a unique predicted distribution for any action and
world state description. These restricted theories are called uniquely
predictive theories. The inference mechanism, Probability Combination using
Independence (PCI), uses minimal independence assumptions to combine the
probabilities in a theory to make probabilistic predictions.
|
1303.1482 | Graph-Grammar Assistance for Automated Generation of Influence Diagrams | cs.AI | One of the most difficult aspects of modeling complex dilemmas in
decision-analytic terms is composing a diagram of relevance relations from a
set of domain concepts. Decision models in domains such as medicine, however,
exhibit certain prototypical patterns that can guide the modeling process.
Medical concepts can be classified according to semantic types that have
characteristic positions and typical roles in an influence-diagram model. We
have developed a graph-grammar production system that uses such inherent
interrelationships among medical terms to facilitate the modeling of medical
decisions.
|
1303.1483 | Using Causal Information and Local Measures to Learn Bayesian Networks | cs.AI | In previous work we developed a method of learning Bayesian Network models
from raw data. This method relies on the well known minimal description length
(MDL) principle. The MDL principle is particularly well suited to this task as
it allows us to tradeoff, in a principled way, the accuracy of the learned
network against its practical usefulness. In this paper we present some new
results that have arisen from our work. In particular, we present a new local
way of computing the description length. This allows us to make significant
improvements in our search algorithm. In addition, we modify our algorithm so
that it can take into account partial domain information that might be provided
by a domain expert. The local computation of description length also opens the
door for local refinement of an existent network. The feasibility of our
approach is demonstrated by experiments involving networks of a practical size.
|
1303.1484 | Minimal Assumption Distribution Propagation in Belief Networks | cs.AI | As belief networks are used to model increasingly complex situations, the
need to automatically construct them from large databases will become
paramount. This paper concentrates on solving a part of the belief network
induction problem: that of learning the quantitative structure (the conditional
probabilities), given the qualitative structure. In particular, a theory is
presented that shows how to propagate inference distributions in a belief
network, with the only assumption being that the given qualitative structure is
correct. Most inference algorithms must make at least this assumption. The
theory is based on four network transformations that are sufficient for any
inference in a belief network. Furthermore, the claim is made that contrary to
popular belief, error will not necessarily grow as the inference chain grows.
Instead, for QBN belief nets induced from large enough samples, the error is
more likely to decrease as the size of the inference chain increases.
|