id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1302.4916 | Stacking from Tags: Clustering Bookmarks around a Theme | cs.IR | Very recently, users of the social bookmarking service Delicious gained the
ability to stack web pages in addition to tagging them. Stacking enables users
to group web pages around specific themes with the aim of recommending them to others.
However, users still stack a small subset of what they tag, and thus many web
pages remain unstacked. This paper presents early research towards
automatically clustering web pages from tags to find stacks and extend
recommendations.
|
1302.4922 | Structure Discovery in Nonparametric Regression through Compositional
Kernel Search | stat.ML cs.LG stat.ME | Despite its importance, choosing the structural form of the kernel in
nonparametric regression remains a black art. We define a space of kernel
structures which are built compositionally by adding and multiplying a small
number of base kernels. We present a method for searching over this space of
structures which mirrors the scientific discovery process. The learned
structures can often decompose functions into interpretable components and
enable long-range extrapolation on time-series datasets. Our structure search
method outperforms many widely used kernels and kernel combination methods on a
variety of prediction tasks.
|
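As an illustration of the abstract above, here is a minimal sketch of building kernel structures compositionally by adding and multiplying base kernels. This is not the paper's implementation; the function names and the choice of base kernels (squared-exponential and linear) are illustrative assumptions.

```python
import numpy as np

# Base kernels as functions k(x, y) on scalars (illustrative, not the paper's code).
def rbf(lengthscale=1.0):
    # Squared-exponential kernel: smooth local variation.
    return lambda x, y: np.exp(-0.5 * ((x - y) / lengthscale) ** 2)

def linear():
    # Linear kernel: captures a linear trend.
    return lambda x, y: x * y

# Composition operators: the search space is closed under + and *.
def add(k1, k2):
    return lambda x, y: k1(x, y) + k2(x, y)

def mul(k1, k2):
    return lambda x, y: k1(x, y) * k2(x, y)

# e.g. a "linear trend plus smooth deviation" structure: LIN + SE
k = add(linear(), rbf(2.0))
print(k(1.0, 1.0))  # 1*1 + exp(0) = 2.0
```

A search procedure in this spirit would repeatedly expand a current structure by one `add` or `mul` step and keep the candidate that best fits the data.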
1302.4928 | Graphical Models for Preference and Utility | cs.AI | Probabilistic independence can dramatically simplify the task of eliciting,
representing, and computing with probabilities in large domains. A key
technique in achieving these benefits is the idea of graphical modeling. We
survey existing notions of independence for utility functions in a
multi-attribute space, and suggest that these can be used to achieve similar
advantages. Our new results concern conditional additive independence, which we
show always has a perfect representation as separation in an undirected graph
(a Markov network). Conditional additive independencies entail a particular
functional for the utility function that is analogous to a product
decomposition of a probability function, and confers analogous benefits. This
functional form has been utilized in the Bayesian network and influence diagram
literature, but generally without an explanation in terms of independence. The
functional form yields a decomposition of the utility function that can greatly
speed up expected utility calculations, particularly when the utility graph has
a similar topology to the probabilistic network being used.
|
1302.4929 | Counterfactuals and Policy Analysis in Structural Models | cs.AI | Evaluation of counterfactual queries (e.g., "If A were true, would C have
been true?") is important to fault diagnosis, planning, determination of
liability, and policy analysis. We present a method of evaluating
counterfactuals when the underlying causal model is represented by structural
models - a nonlinear generalization of the simultaneous equations models
commonly used in econometrics and social sciences. This new method provides a
coherent means for evaluating policies involving the control of variables
which, prior to enacting the policy, were influenced by other variables in the
system.
|
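The structural-model evaluation of counterfactuals described above can be illustrated on a toy example. This is purely an illustration, not the paper's method in full: it assumes a single hypothetical structural equation y = 2x + u and follows the usual abduction/action/prediction pattern.

```python
# Toy structural model: y = 2*x + u, with u an unobserved exogenous term.
# Given an observation (x_obs, y_obs), we evaluate "what would y have been
# had x been x_new?" (illustrative sketch, not the paper's algorithm).

def counterfactual_y(x_obs, y_obs, x_new):
    u = y_obs - 2 * x_obs      # abduction: recover the exogenous term from the observation
    return 2 * x_new + u       # action + prediction under the intervention x := x_new

print(counterfactual_y(1, 3, 2))  # u = 1, so y would have been 5
```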
1302.4930 | Belief Functions and Default Reasoning | cs.AI | We present a new approach to dealing with default information based on the
theory of belief functions. Our semantic structures, inspired by Adams'
epsilon-semantics, are epsilon-belief assignments, where values committed to
focal elements are either close to 0 or close to 1. We define two systems based
on these structures, and relate them to other non-monotonic systems presented
in the literature. We show that our second system correctly addresses the
well-known problems of specificity, irrelevance, blocking of inheritance,
ambiguity, and redundancy.
|
1302.4931 | An Algebraic Semantics for Possibilistic Logic | cs.AI cs.LO | The first contribution of this paper is the presentation of a Pavelka-like
formulation of possibilistic logic in which the language is naturally enriched
by two connectives which represent negation (¬) and a new type of conjunction
(⊗). The space of truth values for this logic is the lattice of
possibility functions, that, from an algebraic point of view, forms a quantal.
A second contribution comes from the understanding of the new conjunction as
the combination of tokens of information coming from different sources, which
makes our language "dynamic". A Gentzen calculus is presented, which is proved
sound and complete with respect to the given semantics. The problem of truth
functionality is discussed in this context.
|
1302.4932 | Automating Computer Bottleneck Detection with Belief Nets | cs.AI | We describe an application of belief networks to the diagnosis of bottlenecks
in computer systems. The technique relies on a high-level functional model of
the interaction between application workloads, the Windows NT operating system,
and system hardware. Given a workload description, the model predicts the
values of observable system counters available from the Windows NT performance
monitoring tool. Uncertainty in workloads, predictions, and counter values is
characterized with Gaussian distributions. During diagnostic inference, we use
observed performance monitor values to find the most probable assignment to the
workload parameters. In this paper we provide some background on automated
bottleneck detection, describe the structure of the system model, and discuss
empirical procedures for model calibration and verification. Part of the
calibration process includes generating a dataset to estimate a multivariate
Gaussian error model. Initial results in diagnosing bottlenecks are presented.
|
1302.4933 | Chain Graphs for Learning | cs.AI | Chain graphs combine directed and undirected graphs and their underlying
mathematics combines properties of the two. This paper gives a simplified
definition of chain graphs based on a hierarchical combination of Bayesian
(directed) and Markov (undirected) networks. Examples of a chain graph are
multivariate feed-forward networks, clustering with conditional interaction
between variables, and forms of Bayes classifiers. Chain graphs are then
extended using the notation of plates so that samples and data analysis
problems can be represented in a graphical model as well. Implications for
learning are discussed in the conclusion.
|
1302.4934 | Error Estimation in Approximate Bayesian Belief Network Inference | cs.AI | We can perform inference in Bayesian belief networks by enumerating
instantiations with high probability, thus approximating the marginals. In this
paper, we present a method for determining the fraction of instantiations that
has to be considered such that the absolute error in the marginals does not
exceed a predefined value. The method is based on extreme value theory.
Essentially, the proposed method uses the reversed generalized Pareto
distribution to model probabilities of instantiations below a given threshold.
Based on this distribution, we can estimate the maximal absolute error that is
incurred when instantiations with probability smaller than u are disregarded.
|
1302.4935 | Generating the Structure of a Fuzzy Rule under Uncertainty | cs.AI | The aim of this paper is to present a method for identifying the structure of
a rule in a fuzzy model. For this purpose, an ATMS shall be used (Zurita 1994).
An algorithm obtaining the identification of the structure will be suggested
(Castro 1995). The minimal structure of the rule (with respect to the number of
variables that must appear in the rule) will be found by this algorithm.
Furthermore, the identification parameters shall be obtained simultaneously.
The proposed method shall be applied for classification to an example. The
Iris Plant Database shall be learnt for all three kinds of plants.
|
1302.4936 | Practical Model-Based Diagnosis with Qualitative Possibilistic
Uncertainty | cs.AI | An approach to fault isolation that exploits vastly incomplete models is
presented. It relies on separate descriptions of each component behavior,
together with the links between them, which enables focusing of the reasoning
to the relevant part of the system. As normal observations do not need
explanation, the behavior of the components is limited to anomaly propagation.
Diagnostic solutions are disorders (fault modes or abnormal signatures) that
are consistent with the observations, as well as abductive explanations. An
ordinal representation of uncertainty based on possibility theory provides a
simple exception-tolerant description of the component behaviors. We can, for
instance, distinguish between effects that are more or less certainly present
(or absent) in general and effects that are more or less certainly present (or
absent) when a given anomaly is present. A realistic example illustrates the benefits
of this approach.
|
1302.4937 | Decision Flexibility | cs.AI | The development of new methods and representations for temporal
decision-making requires a principled basis for characterizing and measuring
the flexibility of decision strategies in the face of uncertainty. Our goal in
this paper is to provide a framework - not a theory - for observing how
decision policies behave in the face of informational perturbations, to gain
clues as to how they might behave in the face of unanticipated, possibly
unarticulated uncertainties. To this end, we find it beneficial to distinguish
between two types of uncertainty: "Small World" and "Large World" uncertainty.
The first type can be resolved by posing an unambiguous question to a
"clairvoyant," and is anchored on some well-defined aspect of a decision frame.
The second type is more troublesome, yet it is often of greater interest when
we address the issue of flexibility; this type of uncertainty can be resolved
only by consulting a "psychic." We next observe that one approach to
flexibility used in the economics literature is already implicitly accounted
for in the Maximum Expected Utility (MEU) principle from decision theory.
Though simple, the observation establishes the context for a more illuminating
notion of flexibility, what we term flexibility with respect to information
revelation. We show how to perform flexibility analysis of a static (i.e.,
single period) decision problem using a simple example, and we observe that the
most flexible alternative thus identified is not necessarily the MEU
alternative. We extend our analysis for a dynamic (i.e., multi-period) model,
and we demonstrate how to calculate the value of flexibility for decision
strategies that allow downstream revision of an upstream commitment decision.
|
1302.4938 | A Transformational Characterization of Equivalent Bayesian Network
Structures | cs.AI | We present a simple characterization of equivalent Bayesian network
structures based on local transformations. The significance of the
characterization is twofold. First, we are able to easily prove several new
invariant properties of theoretical interest for equivalent structures. Second,
we use the characterization to derive an efficient algorithm that identifies
all of the compelled edges in a structure. Compelled edge identification is of
particular importance for learning Bayesian network structures from data
because these edges indicate causal relationships when certain assumptions
hold.
|
1302.4939 | Conditioning Methods for Exact and Approximate Inference in Causal
Networks | cs.AI | We present two algorithms for exact and approximate inference in causal
networks. The first algorithm, dynamic conditioning, is a refinement of cutset
conditioning that has linear complexity on some networks for which cutset
conditioning is exponential. The second algorithm, B-conditioning, is an
algorithm for approximate inference that allows one to trade off the quality of
approximations with the computation time. We also present some experimental
results illustrating the properties of the proposed algorithms.
|
1302.4940 | Independence Concepts for Convex Sets of Probabilities | cs.AI | In this paper we study different concepts of independence for convex sets of
probabilities. There will be two basic ideas for independence. The first is
irrelevance. Two variables are independent when a change on the knowledge about
one variable does not affect the other. The second one is factorization. Two
variables are independent when the joint convex set of probabilities can be
decomposed into the product of marginal convex sets. In the case of the Theory of
Probability, these two starting points give rise to the same definition. In the
case of convex sets of probabilities, the resulting concepts will be strongly
related, but they will not be equivalent. As an application of the concept of
independence, we shall consider the problem of building a global convex set
from marginal convex sets of probabilities.
|
1302.4941 | Clustering Without (Thinking About) Triangulation | cs.AI | The undirected technique for evaluating belief networks [Jensen et al.,
1990, Lauritzen and Spiegelhalter, 1988] requires clustering the nodes in the
network into a junction tree. In the traditional view, the junction tree is
constructed from the cliques of the moralized and triangulated belief network:
triangulation is taken to be the primitive concept, the goal towards which any
clustering algorithm (e.g. node elimination) is directed. In this paper, we
present an alternative conception of clustering, in which clusters and the
junction tree property play the role of primitives: given a graph (not a tree)
of clusters which obey (a modified version of) the junction tree property, we
transform this graph until we have obtained a tree. There are several
advantages to this approach: it is much clearer and easier to understand, which
is important for humans who are constructing belief networks; it admits a wider
range of heuristics which may enable more efficient or superior clustering
algorithms; and it serves as the natural basis for an incremental clustering
scheme, which we describe.
|
1302.4942 | Implementation of Continuous Bayesian Networks Using Sums of Weighted
Gaussians | cs.AI | Bayesian networks provide a method of representing conditional independence
between random variables and computing the probability distributions associated
with these random variables. In this paper, we extend Bayesian network
structures to compute probability density functions for continuous random
variables. We make this extension by approximating prior and conditional
densities using sums of weighted Gaussian distributions and then finding the
propagation rules for updating the densities in terms of these weights. We
present a simple example that illustrates the Bayesian network for continuous
variables; this example shows the effect of the network structure and
approximation errors on the computation of densities for variables in the
network.
|
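The sum-of-weighted-Gaussians approximation described above can be sketched as follows. This is an illustrative fragment, not the paper's propagation rules: the function names are assumptions, and it only shows how a density and its mean are recovered from mixture weights.

```python
import numpy as np

# A density approximated as a weighted sum of Gaussians (illustrative sketch).
def mixture_pdf(x, weights, means, stds):
    x = np.asarray(x, dtype=float)
    total = np.zeros_like(x)
    for w, m, s in zip(weights, means, stds):
        # Each component is a Gaussian bump scaled by its weight.
        total += w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return total

def mixture_mean(weights, means):
    # The mean of the mixture is the weighted sum of component means.
    return sum(w * m for w, m in zip(weights, means))

weights, means, stds = [0.3, 0.7], [-1.0, 2.0], [0.5, 1.0]
m = mixture_mean(weights, means)  # 0.3*(-1) + 0.7*2 = 1.1
```

In a scheme like the one the abstract describes, belief propagation would update the weights rather than the densities themselves.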
1302.4943 | Elicitation of Probabilities for Belief Networks: Combining Qualitative
and Quantitative Information | cs.AI | Although the usefulness of belief networks for reasoning under uncertainty is
widely accepted, obtaining the numerical probabilities that they require is
still perceived as a major obstacle. Often not enough statistical data is available to
allow for reliable probability estimation. Available information may not be
directly amenable for encoding in the network. Finally, domain experts may be
reluctant to provide numerical probabilities. In this paper, we propose a
method for elicitation of probabilities from a domain expert that is
non-invasive and accommodates whatever probabilistic information the expert is
willing to state. We express all available information, whether qualitative or
quantitative in nature, in a canonical form consisting of (in)equalities
expressing constraints on the hyperspace of possible joint probability
distributions. We then use this canonical form to derive second-order
probability distributions over the desired probabilities.
|
1302.4944 | Numerical Representations of Acceptance | cs.AI | Accepting a proposition means that our confidence in this proposition is
strictly greater than the confidence in its negation. This paper investigates
the subclass of uncertainty measures, expressing confidence, that capture the
idea of acceptance, which we call acceptance functions. Due to the monotonicity
property of confidence measures, the acceptance of a proposition entails the
acceptance of any of its logical consequences. In agreement with the idea that
a belief set (in the sense of Gardenfors) must be closed under logical
consequence, it is also required that the separate acceptance of two
propositions entail the acceptance of their conjunction. Necessity (and
possibility) measures agree with this view of acceptance while probability and
belief functions generally do not. General properties of acceptance functions
are established. The motivation behind this work is the investigation of a
setting for belief revision more general than the one proposed by Alchourron,
Gardenfors and Makinson, in connection with the notion of conditioning.
|
1302.4945 | Fraud/Uncollectible Debt Detection Using a Bayesian Network Based
Learning System: A Rare Binary Outcome with Mixed Data Structures | cs.AI | The fraud/uncollectible debt problem in the telecommunications industry
presents two technical challenges: the detection and the treatment of the
account given the detection. In this paper, we focus on the first problem of
detection using Bayesian network models, and we briefly discuss the application
of a normative expert system for the treatment at the end. We apply Bayesian
network models to the problem of fraud/uncollectible debt detection for
telecommunication services. In addition to being quite successful at predicting
rare event outcomes, the approach is able to handle a mixture of categorical and
continuous data. We present a performance comparison using linear and
non-linear discriminant analysis, classification and regression trees, and
Bayesian network models.
|
1302.4946 | A Constraint Satisfaction Approach to Decision under Uncertainty | cs.AI | The Constraint Satisfaction Problem (CSP) framework offers a simple and sound
basis for representing and solving simple decision problems, without
uncertainty. This paper is devoted to an extension of the CSP framework
enabling us to deal with some decision problems under uncertainty. This
extension relies on a differentiation between the agent-controllable decision
variables and the uncontrollable parameters whose values depend on the
occurrence of uncertain events. The uncertainty on the values of the parameters
is assumed to be given under the form of a probability distribution. Two
algorithms are given, for computing respectively decisions solving the problem
with a maximal probability, and conditional decisions mapping the largest
possible amount of possible cases to actual decisions.
|
1302.4947 | Plausibility Measures: A User's Guide | cs.AI | We examine a new approach to modeling uncertainty based on plausibility
measures, where a plausibility measure just associates with an event its
plausibility, an element of some partially ordered set. This approach is easily
seen to generalize other approaches to modeling uncertainty, such as
probability measures, belief functions, and possibility measures. The lack of
structure in a plausibility measure makes it easy for us to add structure on an
"as needed" basis, letting us examine what is required to ensure that a
plausibility measure has certain properties of interest. This gives us insight
into the essential features of the properties in question, while allowing us to
prove general results that apply to many approaches to reasoning about
uncertainty. Plausibility measures have already proved useful in analyzing
default reasoning. In this paper, we examine their "algebraic properties,"
analogues to the use of + and * in probability theory. An understanding of such
properties will be essential if plausibility measures are to be used in
practice as a representation tool.
|
1302.4948 | Testing Identifiability of Causal Effects | cs.AI | This paper concerns the probabilistic evaluation of the effects of actions in
the presence of unmeasured variables. We show that the identification of the
causal effect between a singleton variable X and a set of variables Y can be
accomplished systematically, in time polynomial in the number of variables in
the graph. When the causal effect is identifiable, a closed-form expression can
be obtained for the probability that the action will achieve a specified goal,
or a set of goals.
|
1302.4949 | A Characterization of the Dirichlet Distribution with Application to
Learning Bayesian Networks | cs.AI cs.LG | We provide a new characterization of the Dirichlet distribution. This
characterization implies that under assumptions made by several previous
authors for learning belief networks, a Dirichlet prior on the parameters is
inevitable.
|
1302.4950 | Fast Belief Update Using Order-of-Magnitude Probabilities | cs.AI | We present an algorithm, called Predict, for updating beliefs in causal
networks quantified with order-of-magnitude probabilities. The algorithm takes
advantage of both the structure and the quantification of the network and
has polynomial asymptotic complexity. Predict exhibits a conservative
behavior in that it is always sound but not always complete. We provide
sufficient conditions for completeness and present algorithms for testing these
conditions and for computing a complete set of plausible values. We propose
Predict as an efficient method to estimate probabilistic values and illustrate
its use in conjunction with two known algorithms for probabilistic inference.
Finally, we describe an application of Predict to plan evaluation, present
experimental results, and discuss issues regarding its use with conditional
logics of belief, and in the characterization of irrelevance.
|
1302.4951 | Transforming Prioritized Defaults and Specificity into Parallel Defaults | cs.AI | We show how to transform any set of prioritized propositional defaults into
an equivalent set of parallel (i.e., unprioritized) defaults, in
circumscription. We give an algorithm to implement the transform. We show how
to use the transform algorithm as a generator of a whole family of inferencing
algorithms for circumscription. The method is to employ the transform algorithm
as a front end to any inferencing algorithm, e.g., one of the previously
available, that handles the parallel (empty) case of prioritization. Our
algorithms provide not just coverage of a new expressive class, but also
alternatives to previous algorithms for implementing the previously covered
class ("layered") of prioritization. In particular, we give a new
query-answering algorithm for prioritized circumscription which is sound and
complete for the full expressive class of unrestricted finite prioritization
partial orders, for propositional defaults (or minimized predicates). By
contrast, previous algorithms required that the prioritization partial order be
layered, i.e., structured similar to the system of rank in the military. Our
algorithm enables, for the first time, the implementation of the most useful
class of prioritization: non-layered prioritization partial orders. Default
inheritance, for example, typically requires non-layered prioritization to
represent specificity adequately. Our algorithm enables not only the
implementation of default inheritance (and specificity) within prioritized
circumscription, but also the extension and combination of default inheritance
with other kinds of prioritized default reasoning, e.g.: with stratified logic
programs with negation-as-failure. Such logic programs are previously known to
be representable equivalently as layered-priority predicate circumscriptions.
Worst-case, the transform increases the number of defaults exponentially. We
discuss how inferencing is practically implementable nevertheless in two kinds
of situations: general expressiveness but small numbers of defaults, or
expressive special cases with larger numbers of defaults. One such expressive
special case is non-"top-heaviness" of the prioritization partial order. In
addition to its direct implementation, the transform can also be exploited
analytically to generate special case algorithms, e.g., a tractable transform
for a class within default inheritance (detailed in another, forthcoming
paper). We discuss other aspects of the significance of the fundamental result.
One can view the transform as reducing n degrees of partially ordered belief
confidence to just 2 degrees of confidence: for-sure and (unprioritized)
default. Ordinary, parallel default reasoning, e.g., in parallel
circumscription or Poole's Theorist, can be viewed in these terms as reducing 2
degrees of confidence to just 1 degree of confidence: that of the non-monotonic
theory's conclusions. The expressive reduction's computational complexity
suggests that prioritization is valuable for its expressive conciseness, just
as defaults are for theirs. For Reiter's Default Logic and Poole's Theorist,
the transform implies how to extend those formalisms so as to equip them with a
concept of prioritization that is exactly equivalent to that in
circumscription. This provides an interesting alternative to Brewka's approach
to equipping them with prioritization-type precedence.
|
1302.4952 | Efficient Decision-Theoretic Planning: Techniques and Empirical Analysis | cs.AI | This paper discusses techniques for performing efficient decision-theoretic
planning. We give an overview of the DRIPS decision-theoretic refinement
planning system, which uses abstraction to efficiently identify optimal plans.
We present techniques for automatically generating search control information,
which can significantly improve the planner's performance. We evaluate the
efficiency of DRIPS both with and without the search control rules on a complex
medical planning problem and compare its performance to that of a
branch-and-bound decision tree algorithm.
|
1302.4953 | Fuzzy Logic and Probability | cs.AI | In this paper we deal with a new approach to probabilistic reasoning in a
logical framework. Nearly all logics of probability that have been
proposed in the literature are based on classical two-valued logic. After
making clear the differences between fuzzy logic and probability theory, here
we propose a fuzzy logic of probability for which completeness results (in
a probabilistic sense) are provided. The main idea behind this approach is that
probability values of crisp propositions can be understood as truth-values of
some suitable fuzzy propositions associated to the crisp ones. Moreover,
suggestions and examples of how to extend the formalism to cope with
conditional probabilities and with other uncertainty formalisms are also
provided.
|
1302.4954 | Probabilistic Temporal Reasoning with Endogenous Change | cs.AI | This paper presents a probabilistic model for reasoning about the state of a
system as it changes over time, both due to exogenous and endogenous
influences. Our target domain is a class of medical prediction problems that
are neither so urgent as to preclude careful diagnosis nor progressing so slowly
as to allow arbitrary testing and treatment options. In these domains there is
typically enough time to gather information about the patient's state and
consider alternative diagnoses and treatments, but the temporal interaction
between the timing of tests, treatments, and the course of the disease must
also be considered. Our approach is to elicit a qualitative structural model of
the patient from a human expert---the model identifies important attributes,
the way in which exogenous changes affect attribute values, and the way in
which the patient's condition changes endogenously. We then elicit
probabilistic information to capture the expert's uncertainty about the effects
of tests and treatments and the nature and timing of endogenous state changes.
This paper describes the model in the context of a problem in treating vehicle
accident trauma, and suggests a method for solving the model based on the
technique of sequential imputation. A complementary goal of this work is to
understand and synthesize a disparate collection of research efforts all using
the name "probabilistic temporal reasoning." This paper analyzes related work
and points out essential differences between our proposed model and other
approaches in the literature.
|
1302.4955 | Toward a Characterization of Uncertainty Measure for the Dempster-Shafer
Theory | cs.AI | This is a working paper summarizing results of an ongoing research project
whose aim is to uniquely characterize the uncertainty measure for the
Dempster-Shafer Theory. A set of intuitive axiomatic requirements is presented,
some of their implications are shown, and the proof is given of the minimality
of recently proposed measure AU among all measures satisfying the proposed
requirements.
|
1302.4956 | A Definition and Graphical Representation for Causality | cs.AI | We present a precise definition of cause and effect in terms of a fundamental
notion called unresponsiveness. Our definition is based on Savage's (1954)
formulation of decision theory and departs from the traditional view of
causation in that our causal assertions are made relative to a set of
decisions. An important consequence of this departure is that we can reason
about cause locally, not requiring a causal explanation for every dependency.
Such local reasoning can be beneficial because it may not be necessary to
determine whether a particular dependency is causal to make a decision. Also in
this paper, we examine the graphical encoding of causal relationships. We show
that influence diagrams in canonical form are an accurate and efficient
representation of causal relationships. In addition, we establish a
correspondence between canonical form and Pearl's causal theory.
|
1302.4957 | Learning Bayesian Networks: A Unification for Discrete and Gaussian
Domains | cs.AI | We examine Bayesian methods for learning Bayesian networks from a combination
of prior knowledge and statistical data. In particular, we unify the approaches
we presented at last year's conference for discrete and Gaussian domains. We
derive a general Bayesian scoring metric, appropriate for both domains. We then
use this metric in combination with well-known statistical facts about the
Dirichlet and normal--Wishart distributions to derive our metrics for discrete
and Gaussian domains.
|
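The Dirichlet-based Bayesian scoring metrics discussed above rest on the closed-form Dirichlet marginal likelihood. The following is a hedged sketch for a single discrete variable with counts n_k and Dirichlet hyperparameters α_k; it is a standard textbook formula, not the paper's full metric, and the function name is an assumption.

```python
import math

# log P(data) under a Dirichlet(alpha) prior on a discrete distribution:
#   log Γ(Σα) − log Γ(Σα + N) + Σ_k [log Γ(α_k + n_k) − log Γ(α_k)]
# (illustrative sketch; a full network score multiplies such terms over
# variables and parent configurations)

def log_marginal_likelihood(counts, alphas):
    a0 = sum(alphas)
    n = sum(counts)
    score = math.lgamma(a0) - math.lgamma(a0 + n)
    for nk, ak in zip(counts, alphas):
        score += math.lgamma(ak + nk) - math.lgamma(ak)
    return score

# One binary observation under a uniform Dirichlet(1, 1) prior has
# marginal likelihood 1/2, i.e. log score -log(2).
s = log_marginal_likelihood([1, 0], [1.0, 1.0])
```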
1302.4958 | A Bayesian Approach to Learning Causal Networks | cs.AI | Whereas acausal Bayesian networks represent probabilistic independence,
causal Bayesian networks represent causal relationships. In this paper, we
examine Bayesian methods for learning both types of networks. Bayesian methods
for learning acausal networks are fairly well developed. These methods often
employ assumptions to facilitate the construction of priors, including the
assumptions of parameter independence, parameter modularity, and likelihood
equivalence. We show that although these assumptions also can be appropriate
for learning causal networks, we need additional assumptions in order to learn
causal networks. We introduce two sufficient assumptions, called mechanism
independence and component independence. We show that these new
assumptions, when combined with parameter independence, parameter modularity,
and likelihood equivalence, allow us to apply methods for learning acausal
networks to learn causal networks.
|
1302.4959 | Display of Information for Time-Critical Decision Making | cs.AI | We describe methods for managing the complexity of information displayed to
people responsible for making high-stakes, time-critical decisions. The
techniques provide tools for real-time control of the configuration and
quantity of information displayed to a user, and a methodology for designing
flexible human-computer interfaces for monitoring applications. After defining
a prototypical set of display decision problems, we introduce the expected
value of revealed information (EVRI) and the related measure of expected value
of displayed information (EVDI). We describe how these measures can be used to
enhance computer displays used for monitoring complex systems. We motivate the
presentation by discussing our efforts to employ decision-theoretic control of
displays for a time-critical monitoring application at the NASA Mission Control
Center in Houston.
|
1302.4960 | Reasoning, Metareasoning, and Mathematical Truth: Studies of Theorem
Proving under Limited Resources | cs.AI | In earlier work, we introduced flexible inference and decision-theoretic
metareasoning to address the intractability of normative inference. Here,
rather than pursuing the task of computing beliefs and actions with decision
models composed of distinctions about uncertain events, we examine methods for
inferring beliefs about mathematical truth before an automated theorem prover
completes a proof. We employ a Bayesian analysis to update belief in truth,
given theorem-proving progress, and show how decision-theoretic methods can be
used to determine the value of continuing to deliberate versus taking immediate
action in time-critical situations.
|
1302.4961 | Improved Sampling for Diagnostic Reasoning in Bayesian Networks | cs.AI | Bayesian networks offer great potential for use in automating large scale
diagnostic reasoning tasks. Gibbs sampling is the main technique used to
perform diagnostic reasoning in large richly interconnected Bayesian networks.
Unfortunately Gibbs sampling can take an excessive time to generate a
representative sample. In this paper we describe and test a number of heuristic
strategies for improving sampling in noisy-or Bayesian networks. The strategies
include Monte Carlo Markov chain sampling techniques other than Gibbs sampling.
Emphasis is put on strategies that can be implemented in distributed systems.
|
1302.4962 | Cautious Propagation in Bayesian Networks | cs.AI | Consider the situation where some evidence e has been entered into a Bayesian
network. When performing conflict analysis, sensitivity analysis, or when
answering questions like "What if the finding on X had been y instead of x?"
you need probabilities P (e'| h), where e' is a subset of e, and h is a
configuration of a (possibly empty) set of variables. Cautious propagation is a
modification of HUGIN propagation into a Shafer-Shenoy-like architecture. It is
less efficient than HUGIN propagation; however, it provides easy access to P
(e'| h) for a great deal of relevant subsets e'.
|
1302.4963 | Information/Relevance Influence Diagrams | cs.AI | In this paper we extend the influence diagram (ID) representation for
decisions under uncertainty. In the standard ID, arrows into a decision node
are only informational; they do not represent constraints on what the decision
maker can do. We can represent such constraints only indirectly, using arrows
to the children of the decision and sometimes adding more variables to the
influence diagram, thus making the ID more complicated. Users of influence
diagrams often want to represent constraints by arrows into decision nodes. We
represent constraints on decisions by allowing relevance arrows into decision
nodes. We call the resulting representation information/relevance influence
diagrams (IRIDs). Information/relevance influence diagrams allow for direct
representation and specification of constrained decisions. We use a combination
of stochastic dynamic programming and Gibbs sampling to solve IRIDs. This
method is especially useful when exact methods for solving IDs fail.
|
1302.4964 | Estimating Continuous Distributions in Bayesian Classifiers | cs.LG cs.AI stat.ML | When modeling a probability distribution with a Bayesian network, we are
faced with the problem of how to handle continuous variables. Most previous
work has either solved the problem by discretizing, or assumed that the data
are generated by a single Gaussian. In this paper we abandon the normality
assumption and instead use statistical methods for nonparametric density
estimation. For a naive Bayesian classifier, we present experimental results on
a variety of natural and artificial domains, comparing two methods of density
estimation: assuming normality and modeling each conditional distribution with
a single Gaussian; and using nonparametric kernel density estimation. We
observe large reductions in error on several natural and artificial data sets,
which suggests that kernel estimation is a useful tool for learning Bayesian
models.
|
1302.4965 | Stochastic Simulation Algorithms for Dynamic Probabilistic Networks | cs.AI | Stochastic simulation algorithms such as likelihood weighting often give
fast, accurate approximations to posterior probabilities in probabilistic
networks, and are the methods of choice for very large networks. Unfortunately,
the special characteristics of dynamic probabilistic networks (DPNs), which are
used to represent stochastic temporal processes, mean that standard simulation
algorithms perform very poorly. In essence, the simulation trials diverge
further and further from reality as the process is observed over time. In this
paper, we present simulation algorithms that use the evidence observed at each
time step to push the set of trials back towards reality. The first algorithm,
"evidence reversal" (ER) restructures each time slice of the DPN so that the
evidence nodes for the slice become ancestors of the state variables. The
second algorithm, called "survival of the fittest" sampling (SOF),
"repopulates" the set of trials at each time step using a stochastic
reproduction rate weighted by the likelihood of the evidence according to each
trial. We compare the performance of each algorithm with likelihood weighting
on the original network, and also investigate the benefits of combining the ER
and SOF methods. The ER/SOF combination appears to maintain bounded error
independent of the number of time steps in the simulation.
|
1302.4966 | Probabilistic Exploration in Planning while Learning | cs.AI | Sequential decision tasks with incomplete information are characterized by
the exploration problem; namely the trade-off between further exploration for
learning more about the environment and immediate exploitation of the accrued
information for decision-making. Within artificial intelligence, there has been
an increasing interest in studying planning-while-learning algorithms for these
decision tasks. In this paper we focus on the exploration problem in
reinforcement learning and Q-learning in particular. The existing exploration
strategies for Q-learning are of a heuristic nature and they exhibit limited
scalability in tasks with large (or infinite) state and action spaces.
Efficient experimentation is needed for resolving uncertainties when possible
plans are compared (i.e. exploration). The experimentation should be sufficient
for selecting with statistical significance a locally optimal plan (i.e.
exploitation). For this purpose, we develop a probabilistic hill-climbing
algorithm that uses a statistical selection procedure to decide how much
exploration is needed for selecting a plan which is, with arbitrarily high
probability, arbitrarily close to a locally optimal one. Due to its generality
the algorithm can be employed for the exploration strategy of robust
Q-learning. An experiment on a relatively complex control task shows that the
proposed exploration strategy performs better than a typical exploration
strategy.
|
1302.4967 | On the Detection of Conflicts in Diagnostic Bayesian Networks Using
Abstraction | cs.AI | An important issue in the use of expert systems is the so-called brittleness
problem. Expert systems model only a limited part of the world. While the
explicit management of uncertainty in expert systems mitigates the brittleness
problem, it is still possible for a system to be used, unwittingly, in ways
that the system is not prepared to address. Such a situation may be detected by
the method of straw models, first presented by Jensen et al. [1990] and later
generalized and justified by Laskey [1991]. We describe an algorithm, which we
have implemented, that takes as input an annotated diagnostic Bayesian network
(the base model) and constructs, without assistance, a bipartite network to be
used as a straw model. We show that in some cases this straw model is better
than the independent straw model of Jensen et al., the only other straw model
for which a construction algorithm has been designed and implemented.
|
1302.4968 | HUGS: Combining Exact Inference and Gibbs Sampling in Junction Trees | cs.AI | Dawid, Kjaerulff and Lauritzen (1994) provided a preliminary description of a
hybrid between Monte-Carlo sampling methods and exact local computations in
junction trees. Utilizing the strengths of both methods, such hybrid inference
methods have the potential to expand the class of problems which can be
solved under bounded resources as well as solving problems which otherwise
resist exact solutions. The paper provides a detailed description of a
particular instance of such a hybrid scheme; namely, combination of exact
inference and Gibbs sampling in discrete Bayesian networks. We argue that this
combination calls for an extension of the usual message passing scheme of
ordinary junction trees.
|
1302.4969 | Sensitivities: An Alternative to Conditional Probabilities for Bayesian
Belief Networks | cs.AI | We show an alternative way of representing a Bayesian belief network by
sensitivities and probability distributions. This representation is equivalent
to the traditional representation by conditional probabilities, but makes
dependencies between nodes apparent and intuitively easy to understand. We also
propose a QR matrix representation for the sensitivities and/or conditional
probabilities which is more efficient, in both memory requirements and
computational speed, than the traditional representation for computer-based
implementations of probabilistic inference. We use sensitivities to show that
for a certain class of binary networks, the computation time for approximate
probabilistic inference with any positive upper bound on the error of the
result is independent of the size of the network. Finally, as an alternative to
traditional algorithms that use conditional probabilities, we describe an exact
algorithm for probabilistic inference that uses the QR-representation for
sensitivities and updates probability distributions of nodes in a network
according to messages from the neighbors.
|
1302.4970 | Is There a Role for Qualitative Risk Assessment? | cs.AI | Classically, risk is characterized by a point value probability indicating
the likelihood of occurrence of an adverse effect. However, there are domains
where the attainability of objective numerical risk characterizations is
increasingly being questioned. This paper reviews the arguments in favour of
extending classical techniques of risk assessment to incorporate meaningful
qualitative and weak quantitative risk characterizations. A technique in which
linguistic uncertainty terms are defined in terms of patterns of argument is
then proposed. The technique is demonstrated using a prototype computer-based
system for predicting the carcinogenic risk due to novel chemical compounds.
|
1302.4971 | On the Complexity of Solving Markov Decision Problems | cs.AI | Markov decision problems (MDPs) provide the foundations for a number of
problems of interest to AI researchers studying automated planning and
reinforcement learning. In this paper, we summarize results regarding the
complexity of solving MDPs and the running time of MDP solution algorithms. We
argue that, although MDPs can be solved efficiently in theory, more study is
needed to reveal practical algorithms for solving large problems quickly. To
encourage future research, we sketch some alternative methods of analysis that
rely on the structure of MDPs.
|
1302.4972 | Causal Inference and Causal Explanation with Background Knowledge | cs.AI | This paper presents correct algorithms for answering the following two
questions; (i) Does there exist a causal explanation consistent with a set of
background knowledge which explains all of the observed independence facts in a
sample? (ii) Given that there is such a causal explanation what are the causal
relationships common to every such causal explanation?
|
1302.4973 | Strong Completeness and Faithfulness in Bayesian Networks | cs.AI | A completeness result for d-separation applied to discrete Bayesian networks
is presented and it is shown that in a strong measure-theoretic sense almost
all discrete distributions for a given network structure are faithful; i.e. the
independence facts true of the distribution are all and only those entailed by
the network structure.
|
1302.4974 | A Theoretical Framework for Context-Sensitive Temporal Probability Model
Construction with Application to Plan Projection | cs.AI | We define a context-sensitive temporal probability logic for representing
classes of discrete-time temporal Bayesian networks. Context constraints allow
inference to be focused on only the relevant portions of the probabilistic
knowledge. We provide a declarative semantics for our language. We present a
Bayesian network construction algorithm whose generated networks give sound and
complete answers to queries. We use related concepts in logic programming to
justify our approach. We have implemented a Bayesian network construction
algorithm for a subset of the theory and demonstrate its application to the
problem of evaluating the effectiveness of treatments for acute cardiac
conditions.
|
1302.4975 | Refining Reasoning in Qualitative Probabilistic Networks | cs.AI | In recent years there has been a spate of papers describing systems for
probabilistic reasoning which do not use numerical probabilities. In some
cases the simple set of values used by these systems make it impossible to
predict how a probability will change or which hypothesis is most likely given
certain evidence. This paper concentrates on such situations, and suggests a
number of ways in which they may be resolved by refining the representation.
|
1302.4976 | On the Testability of Causal Models with Latent and Instrumental
Variables | cs.AI | Certain causal models involving unmeasured variables induce no independence
constraints among the observed variables but imply, nevertheless, inequality
constraints on the observed distribution. This paper derives a general formula
for such instrumental variables, that is, exogenous variables that directly
affect some variables but not all. With the help of this formula, it is
possible to test whether a model involving instrumental variables may account
for the data, or, conversely, whether a given variable can be deemed
instrumental.
|
1302.4977 | Probabilistic Evaluation of Sequential Plans from Causal Models with
Hidden Variables | cs.AI | The paper concerns the probabilistic evaluation of plans in the presence of
unmeasured variables, each plan consisting of several concurrent or sequential
actions. We establish a graphical criterion for recognizing when the effects of
a given plan can be predicted from passive observations on measured variables
only. When the criterion is satisfied, a closed-form expression is provided for
the probability that the plan will achieve a specified goal.
|
1302.4978 | Exploiting the Rule Structure for Decision Making within the Independent
Choice Logic | cs.AI | This paper introduces the independent choice logic, and in particular the
"single agent with nature" instance of the independent choice logic, namely
ICLdt. This is a logical framework for decision making under uncertainty that extends
both logic programming and stochastic models such as influence diagrams. This
paper shows how the representation of a decision problem within the independent
choice logic can be exploited to cut down the combinatorics of dynamic
programming. One of the main problems with influence diagram evaluation
techniques is the need to optimise a decision for all values of the 'parents'
of a decision variable. In this paper we show how the rule based nature of the
ICLdt can be exploited so that we only make distinctions in the values of the
information available for a decision that will make a difference to utility.
|
1302.4979 | Abstraction in Belief Networks: The Role of Intermediate States in
Diagnostic Reasoning | cs.AI | Bayesian belief networks are bing increasingly used as a knowledge
representation for diagnostic reasoning. One simple method for conducting
diagnostic reasoning is to represent system faults and observations only. In
this paper, we investigate how having intermediate nodes (nodes other than
fault and observation nodes) affects the diagnostic performance of a Bayesian belief
network. We conducted a series of experiments on a set of real belief networks
for medical diagnosis in liver and bile disease. We compared the effects on
diagnostic performance of a two-level network consisting just of disease and
finding nodes with that of a network which models intermediate
pathophysiological disease states as well. We provide some theoretical evidence
for differences observed between the abstracted two-level network and the full
network.
|
1302.4980 | Accounting for Context in Plan Recognition, with Application to Traffic
Monitoring | cs.AI | Typical approaches to plan recognition start from a representation of an
agent's possible plans, and reason evidentially from observations of the
agent's actions to assess the plausibility of the various candidates. A more
expansive view of the task (consistent with some prior work) accounts for the
context in which the plan was generated, the mental state and planning process
of the agent, and consequences of the agent's actions in the world. We present
a general Bayesian framework encompassing this view, and focus on how context
can be exploited in plan recognition. We demonstrate the approach on a problem
in traffic monitoring, where the objective is to induce the plan of the driver
from observation of vehicle movements. Starting from a model of how the driver
generates plans, we show how the highway context can appropriately influence
the recognizer's interpretation of observed driver behavior.
|
1302.4981 | A New Pruning Method for Solving Decision Trees and Game Trees | cs.AI | The main goal of this paper is to describe a new pruning method for solving
decision trees and game trees. The pruning method for decision trees suggests a
slight variant of decision trees that we call scenario trees. In scenario
trees, we do not need a conditional probability for each edge emanating from a
chance node. Instead, we require a joint probability for each path from the
root node to a leaf node. We compare the pruning method to the traditional
rollback method for decision trees and game trees. For problems that require
Bayesian revision of probabilities, a scenario tree representation with the
pruning method is more efficient than a decision tree representation with the
rollback method. For game trees, the pruning method is more efficient than the
rollback method.
|
1302.4982 | Directed Cyclic Graphical Representations of Feedback Models | cs.AI | The use of directed acyclic graphs (DAGs) to represent conditional
independence relations among random variables has proved fruitful in a variety
of ways. Recursive structural equation models are one kind of DAG model.
However, non-recursive structural equation models of the kinds used to model
economic processes are naturally represented by directed cyclic graphs with
independent errors. For such linear systems, a characterization of conditional
independence constraints is obtained, and it is
shown that the result generalizes in a natural way to systems in which the
error variables or noises are statistically dependent. For non-linear systems
with independent errors a sufficient condition for conditional independence of
variables in associated distributions is obtained.
|
1302.4983 | Causal Inference in the Presence of Latent Variables and Selection Bias | cs.AI | We show that there is a general, informative and reliable procedure for
discovering causal relations when, for all the investigator knows, both latent
variables and selection bias may be at work. Given information about
conditional independence and dependence relations between measured variables,
even when latent variables and selection bias may be present, there are
sufficient conditions for reliably concluding that there is a causal path from
one variable to another, and sufficient conditions for reliably concluding when
no such causal path exists.
|
1302.4984 | Modeling Failure Priors and Persistence in Model-Based Diagnosis | cs.AI | Probabilistic model-based diagnosis computes the posterior probabilities of
failure of components from the prior probabilities of component failure and
observations of system behavior. One problem with this method is that such
priors are almost never directly available. One of the reasons is that the
prior probability estimates include an implicit notion of a time interval over
which they are specified -- for example, if the probability of failure of a
component is 0.05, is this over the period of a day or is this over a week? A
second problem facing probabilistic model-based diagnosis is the modeling of
persistence. Say we have an observation about a system at time t_1 and then
another observation at a later time t_2. To compute posterior probabilities
that take into account both the observations, we need some model of how the
state of the system changes from time t_1 to t_2. In this paper, we address
these problems using techniques from Reliability theory. We show how to compute
the failure prior of a component from an empirical measure of its reliability
-- the Mean Time Between Failure (MTBF). We also develop a scheme to model
persistence when handling multiple time tagged observations.
|
1302.4985 | A Polynomial Algorithm for Computing the Optimal Repair Strategy in a
System with Independent Component Failures | cs.AI | The goal of diagnosis is to compute good repair strategies in response to
anomalous system behavior. In a decision theoretic framework, a good repair
strategy has low expected cost. In a general formulation of the problem, the
computation of the optimal (lowest expected cost) repair strategy for a system
with multiple faults is intractable. In this paper, we consider an interesting
and natural restriction on the behavior of the system being diagnosed: (a) the
system exhibits faulty behavior if and only if one or more components is
malfunctioning. (b) The failures of the system components are independent.
Given this restriction on system behavior, we develop a polynomial time
algorithm for computing the optimal repair strategy. We then go on to introduce
a system hierarchy and the notion of inspecting (testing) components before
repair. We develop a linear time algorithm for computing an optimal repair
strategy for the hierarchical system which includes both repair and inspection.
|
1302.4986 | Exploiting System Hierarchy to Compute Repair Plans in Probabilistic
Model-based Diagnosis | cs.AI | The goal of model-based diagnosis is to isolate causes of anomalous system
behavior and recommend inexpensive repair actions in response. In general,
precomputing optimal repair policies is intractable. To date, investigators
addressing this problem have explored approximations that either impose
restrictions on the system model (such as a single fault assumption) or compute
an immediate best action with limited lookahead. In this paper, we develop a
formulation of repair in model-based diagnosis and a repair algorithm that
computes optimal sequences of actions. This optimal approach is costly but can
be applied to precompute an optimal repair strategy for compact systems. We
show how we can exploit a hierarchical system specification to make this
approach tractable for large systems. When introducing hierarchy, we also
consider the tradeoff between simply replacing a component and decomposing it
to repair its subcomponents. The hierarchical repair algorithm is suitable for
off-line precomputation of an optimal repair strategy. A modification of the
algorithm takes advantage of an iterative deepening scheme to trade off
inference time and the quality of the computed strategy.
|
1302.4987 | Path Planning under Time-Dependent Uncertainty | cs.AI | Standard algorithms for finding the shortest path in a graph require that the
cost of a path be additive in edge costs, and typically assume that costs are
deterministic. We consider the problem of uncertain edge costs, with potential
probabilistic dependencies among the costs. Although these dependencies violate
the standard dynamic-programming decomposition, we identify a weaker stochastic
consistency condition that justifies a generalized dynamic-programming approach
based on stochastic dominance. We present a revised path-planning algorithm and
prove that it produces optimal paths under time-dependent uncertain costs. We
test the algorithm by applying it to a model of stochastic bus networks, and
present empirical performance results comparing it to some alternatives.
Finally, we consider extensions of these concepts to a more general class of
problems of heuristic search under uncertainty.
|
1302.4988 | Defaults and Infinitesimals: Defeasible Inference by Nonarchimedean
Entropy-Maximization | cs.AI | We develop a new semantics for defeasible inference based on extended
probability measures allowed to take infinitesimal values, on the
interpretation of defaults as generalized conditional probability constraints
and on a preferred-model implementation of entropy maximization.
|
1302.4989 | An Order of Magnitude Calculus | cs.AI | This paper develops a simple calculus for order of magnitude reasoning. A
semantics is given with soundness and completeness results. Order of magnitude
probability functions are easily defined and turn out to be equivalent to kappa
functions, which are slight generalizations of Spohn's Natural Conditional
Functions. The calculus also gives rise to an order of magnitude decision
theory, which can be used to justify an amended version of Pearl's decision
theory for kappa functions, although the latter is weaker and less expressive.
|
1302.4990 | A Method for Implementing a Probabilistic Model as a Relational Database | cs.AI | This paper discusses a method for implementing a probabilistic inference
system based on an extended relational data model. This model provides a
unified approach for a variety of applications such as dynamic programming,
solving sparse linear equations, and constraint propagation. In this framework,
the probability model is represented as a generalized relational database.
Subsequent probabilistic requests can be processed as standard relational
queries. Conventional database management systems can be easily adopted for
implementing such an approximate reasoning system.
|
1302.4991 | Optimization of Inter-Subnet Belief Updating in Multiply Sectioned
Bayesian Networks | cs.AI | Recent developments show that Multiply Sectioned Bayesian Networks (MSBNs)
can be used for diagnosis of natural systems as well as for model-based
diagnosis of artificial systems. They can be applied to single-agent oriented
reasoning systems as well as multi-agent distributed probabilistic reasoning
systems. Belief propagation between a pair of subnets plays a central role in
maintenance of global consistency in a MSBN. This paper studies the operation
UpdateBelief, presented originally with MSBNs, for inter-subnet propagation. We
analyze how the operation achieves its intended functionality, which provides
hints as to how its efficiency can be improved. We then define two new
versions of UpdateBelief that reduce the computation time for inter-subnet
propagation. One of them is optimal in the sense that the minimum amount of
computation for coordinating multi-linkage belief propagation is required. The
optimization problem is solved through the solution of a graph-theoretic
problem: the minimum weight open tour in a tree.
|
1302.4992 | Generating Explanations for Evidential Reasoning | cs.AI | In this paper, we present two methods to provide explanations for reasoning
with belief functions in the valuation-based systems. One approach, inspired by
Strat's method, is based on sensitivity analysis, but its computation is
simpler and thus easier to implement than Strat's. The other is to examine the
impact of evidence on the conclusion based on the measure of the information
content in the evidence. We show the property of additivity for the pieces of
evidence that are conditionally independent within the context of the
valuation-based systems. We will give an example to show how these approaches
are applied in an evidential network.
|
1302.4993 | Inference with Causal Independence in the CPSC Network | cs.AI | This paper reports experiments with the causal independence inference
algorithm proposed by Zhang and Poole (1994b) on the CPSC network created by
Pradhan et al. (1994). It is found that the algorithm is able to answer 420 of
the 422 possible zero-observation queries, 94 of 100 randomly generated
five-observation queries, 87 of 100 randomly generated ten-observation queries,
and 69 of 100 randomly generated twenty-observation queries.
|
1302.5002 | Asymptotic Data Rates of Receive-Diversity Systems with MMSE Estimation
and Spatially Correlated Interferers | cs.IT math.IT | An asymptotic technique is presented to characterize the bits/symbol
achievable on a representative wireless link in a spatially distributed network
with active interferers at correlated positions, N receive diversity branches,
and linear Minimum-Mean-Square-Error (MMSE) receivers. This framework is then
applied to systems including analogs to Matern type I and type II networks
which are useful to model systems with Medium-Access Control (MAC), cellular
uplinks with orthogonal transmissions and frequency reuse, and Boolean cluster
networks. It is found that for our network models, with moderately large N, the
correlation between interferer positions does not significantly influence the
bits/symbol, resulting in simple approximations for the data rates achievable in
such networks which are known to be difficult to analyze and for which few
analytical results are available.
|
1302.5010 | Matching Pursuit LASSO Part II: Applications and Sparse Recovery over
Batch Signals | cs.CV cs.LG stat.ML | In Part I \cite{TanPMLPart1}, a Matching Pursuit LASSO
({MPL}) algorithm has been presented for solving large-scale sparse recovery
(SR) problems. In this paper, we present a subspace search to further improve
the performance of MPL, and then continue to address another major challenge of
SR -- batch SR with many signals, a consideration which is absent from most
previous $\ell_1$-norm methods. As a result, a batch-mode {MPL} is developed to
vastly speed up sparse recovery of many signals simultaneously. Comprehensive
numerical experiments on compressive sensing and face recognition tasks
demonstrate the superior performance of MPL and BMPL over other methods
considered in this paper, in terms of sparse recovery ability and efficiency.
In particular, BMPL is up to 400 times faster than existing $\ell_1$-norm
methods considered to be state-of-the-art.
|
1302.5021 | Linear Coding Schemes for the Distributed Computation of Subspaces | cs.IT math.IT | Let $X_1, ..., X_m$ be a set of $m$ statistically dependent sources over the
common alphabet $\mathbb{F}_q$, that are linearly independent when considered
as functions over the sample space. We consider a distributed function
computation setting in which the receiver is interested in the lossless
computation of the elements of an $s$-dimensional subspace $W$ spanned by the
elements of the row vector $[X_1, \ldots, X_m]\Gamma$ in which the $(m \times
s)$ matrix $\Gamma$ has rank $s$. A sequence of three increasingly refined
approaches is presented, all based on linear encoders.
The first approach uses a common matrix to encode all the sources and a
Korner-Marton like receiver to directly compute $W$. The second improves upon
the first by showing that it is often more efficient to compute a carefully
chosen superspace $U$ of $W$. The superspace is identified by showing that the
joint distribution of the $\{X_i\}$ induces a unique decomposition of the set
of all linear combinations of the $\{X_i\}$, into a chain of subspaces
identified by a normalized measure of entropy. This subspace chain also
suggests a third approach, one that employs nested codes. For any joint
distribution of the $\{X_i\}$ and any $W$, the sum-rate of the nested code
approach is no larger than that under the Slepian-Wolf (SW) approach. Under the
SW approach, $W$ is computed by first recovering each of the $\{X_i\}$. For a
large class of joint distributions and subspaces $W$, the nested code approach
is shown to improve upon SW. Additionally, a class of source distributions and
subspaces are identified, for which the nested-code approach is sum-rate
optimal.
|
1302.5039 | Cognitive Interference Alignment for OFDM Two-tiered Networks | cs.IT math.IT | In this contribution, we introduce an interference alignment scheme that
allows the coexistence of an orthogonal frequency division multiplexing (OFDM)
macro-cell and a cognitive small-cell, deployed in a two-tiered structure and
transmitting over the same bandwidth. We derive the optimal linear strategy for
the single antenna secondary base station, maximizing the spectral efficiency
of the opportunistic link, accounting for both signal sub-space structure and
power loading strategy. Our analytical and numerical findings prove that the
precoder structure proposed is optimal for the considered scenario in the face
of Rayleigh and exponential decaying channels.
|
1302.5056 | Pooling-Invariant Image Feature Learning | cs.CV cs.LG | Unsupervised dictionary learning has been a key component in state-of-the-art
computer vision recognition architectures. While highly effective methods exist
for patch-based dictionary learning, these methods may learn redundant features
after the pooling stage in a given early vision architecture. In this paper, we
offer a novel dictionary learning scheme to efficiently take into account the
invariance of learned features after the spatial pooling stage. The algorithm
is built on simple clustering, and thus enjoys efficiency and scalability. We
discuss the underlying mechanism that justifies the use of clustering
algorithms, and empirically show that the algorithm finds better dictionaries
than patch-based methods with the same dictionary size.
|
1302.5082 | Proceedings of the Third International Workshop on Domain-Specific
Languages and Models for Robotic Systems (DSLRob 2012) | cs.RO | Proceedings of the Third International Workshop on Domain-Specific Languages
and Models for Robotic Systems (DSLRob'12), held at the 2012 International
Conference on Simulation, Modeling, and Programming for Autonomous Robots
(SIMPAR 2012), November 2012 in Tsukuba, Japan.
The main topics of the workshop were Domain-Specific Languages (DSLs) and
Model-driven Architecture (MDA) for robotics. A domain-specific language (DSL)
is a programming language dedicated to a particular problem domain that offers
specific notations and abstractions that increase programmer productivity
within that domain. Model-driven architecture (MDA) offers a high-level way
for domain users to specify the functionality of their system at the right
level of abstraction. DSLs and models have historically been used for
programming complex systems. However, recently they have garnered interest as a
separate field of study. Robotic systems blend hardware and software in a
holistic way that intrinsically raises many crosscutting concerns (concurrency,
uncertainty, time constraints, ...), for which reason, traditional
general-purpose languages often lead to a poor fit between the language
features and the implementation requirements. DSLs and models offer a powerful,
systematic way to overcome this problem, enabling the programmer to quickly and
precisely implement novel software solutions to complex problems within the
robotics domain.
|
1302.5085 | Model-driven engineering approach to design and implementation of robot
control system | cs.RO cs.SE | In this paper we apply a model-driven engineering approach to designing
domain-specific solutions for robot control system development. We present a
case study of the complete process, including identification of the domain
meta-model, graphical notation definition and source code generation for
subsumption architecture -- a well-known example of robot control architecture.
Our goal is to show that both the definition of the robot-control architecture
and its supporting tools fit well into the typical workflow of model-driven
engineering development.
|
1302.5125 | High-Dimensional Probability Estimation with Deep Density Models | stat.ML cs.LG | One of the fundamental problems in machine learning is the estimation of a
probability distribution from data. Many techniques have been proposed to study
the structure of data, most often building around the assumption that
observations lie on a lower-dimensional manifold of high probability. It has
been more difficult, however, to exploit this insight to build explicit,
tractable density models for high-dimensional data. In this paper, we introduce
the deep density model (DDM), a new approach to density estimation. We exploit
insights from deep learning to construct a bijective map to a representation
space, under which the transformation of the distribution of the data is
approximately factorized and has identical and known marginal densities. The
simplicity of the latent distribution under the model allows us to feasibly
explore it, and the invertibility of the map to characterize contraction of
measure across it. This enables us to compute normalized densities for
out-of-sample data. This combination of tractability and flexibility allows us
to tackle a variety of probabilistic tasks on high-dimensional datasets,
including: rapid computation of normalized densities at test-time without
evaluating a partition function; generation of samples without MCMC; and
characterization of the joint entropy of the data.
|
1302.5130 | Quantum-inspired Huffman Coding | cs.IT cs.ET math.IT | Huffman Compression, also known as Huffman Coding, is one of many compression
techniques in use today. Two important features of Huffman coding are
instantaneousness, meaning that codes can be interpreted as soon as they are
received, and variable length, meaning that a more frequent symbol has a
shorter code than a less frequent one. Traditional Huffman coding has two procedures:
constructing a tree in O(n^2) and then traversing it in O(n). Quantum computing
is a promising approach of computation that is based on equations from Quantum
Mechanics. Instantaneousness and variable length features are difficult to
generalize to the quantum case. The quantum coding field is pioneered by
Schumacher works on block coding scheme. To encode N signals sequentially, it
requires O(N^3) computational steps. The encoding and decoding processes are far
from instantaneous. Moreover, the lengths of all the codewords are the same. A
Huffman-coding-inspired scheme for the storage of quantum information takes
O(N(log N)^a) computational steps for a sequential implementation on
non-parallel machines.
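For reference, the classical (non-quantum) Huffman procedure the abstract builds on can be sketched as follows; this sketch uses a min-heap, which builds the tree in O(n log n) rather than the quoted O(n^2) (a standard refinement, not from the paper):

```python
import heapq

def huffman_codes(freqs):
    """Classical Huffman coding: repeatedly merge the two least
    frequent nodes; each merge prefixes '0' to one subtree's codes and
    '1' to the other's. The result is a prefix-free (instantaneous)
    code in which more frequent symbols get shorter codewords."""
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)  # tie-breaker so tuple comparison never reaches the dicts
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]
```

The prefix-free property is what makes the code instantaneous: a codeword can be decoded the moment its last bit arrives.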
|
1302.5145 | Prediction and Clustering in Signed Networks: A Local to Global
Perspective | cs.SI cs.LG | The study of social networks is a burgeoning research area. However, most
existing work deals with networks that simply encode whether relationships
exist or not. In contrast, relationships in signed networks can be positive
("like", "trust") or negative ("dislike", "distrust"). The theory of social
balance shows that signed networks tend to conform to some local patterns that,
in turn, induce certain global characteristics. In this paper, we exploit both
local as well as global aspects of social balance theory for two fundamental
problems in the analysis of signed networks: sign prediction and clustering.
Motivated by local patterns of social balance, we first propose two families of
sign prediction methods: measures of social imbalance (MOIs), and supervised
learning using high order cycles (HOCs). These methods predict signs of edges
based on triangles and \ell-cycles for relatively small values of \ell.
Interestingly, by examining measures of social imbalance, we show that the
classic Katz measure, which is used widely in unsigned link prediction,
actually has a balance theoretic interpretation when applied to signed
networks. Furthermore, motivated by the global structure of balanced networks,
we propose an effective low rank modeling approach for both sign prediction and
clustering. For the low rank modeling approach, we provide theoretical
performance guarantees via convex relaxations, scale it up to large problem
sizes using a matrix factorization based algorithm, and provide extensive
experimental validation including comparisons with local approaches. Our
experimental results indicate that, by adopting a more global viewpoint of
balance structure, we get significant performance and computational gains in
prediction and clustering tasks on signed networks. Our work therefore
highlights the usefulness of the global aspect of balance theory for the
analysis of signed networks.
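As a minimal rendering of the triangle-based local approach (our own sketch, not the paper's exact MOI definition), an edge sign can be predicted by voting over common-neighbor triangles so as to keep as many triangles balanced as possible:

```python
def predict_sign(signs, u, v):
    """Predict the sign of edge (u, v) as the choice that leaves the
    most triangles through common neighbors balanced: a triangle is
    balanced when the product of its three edge signs is +1.
    `signs` maps undirected edges (a, b) to +1 or -1."""
    nodes = set()
    for (a, b) in signs:
        nodes.update({a, b})
    score = 0
    for w in nodes - {u, v}:
        s_uw = signs.get((u, w)) or signs.get((w, u))
        s_vw = signs.get((v, w)) or signs.get((w, v))
        if s_uw and s_vw:
            score += s_uw * s_vw  # +1 vote if same sign, -1 if opposite
    return 1 if score >= 0 else -1
```

Two friends of a common friend, or two enemies of a common enemy, both vote for a positive edge; mixed pairs vote negative.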
|
1302.5150 | Measuring Agglomeration of Agglomerated Particles Pictures | cs.CE math-ph math.AT math.MP math.NA math.PR | In this article, we introduce a novel geometrical index $\delta_{agg}$, which
is associated with the Euler number and is obtained by an image processing
procedure for a given digital picture of aggregated particles such that
$\delta_{agg}$ exhibits the degree of the agglomerations of the particles. In
the previous work (Matsutani, Shimosako, Wang, Appl.Math.Modeling {\bf{37}}
(2013), 4007-4022), we proposed an algorithm to construct a picture of
agglomerated particles as a Monte-Carlo simulation whose agglomeration degree
is controlled by $\gamma_{agg} \in (0,1)$. By applying the image processing
procedure to the pictures of the agglomeration particles constructed following
the algorithm, we show that $\delta_{agg}$ statistically reproduces the
agglomeration parameter $\gamma_{agg}$.
|
1302.5153 | Constructing Polar Codes Using Iterative Bit-Channel Upgrading | cs.IT math.IT | The definition of polar codes given by Arikan is explicit, but the
construction complexity is an issue. This is due to the exponential growth in
the size of the output alphabet of the bit-channels as the codeword length
increases. Tal and Vardy recently presented a method for constructing polar
codes which controls this growth. They approximated each bit-channel with a
better channel and a worse channel while reducing the alphabet size. They
constructed a polar code based on the worse channel and used the better channel
to measure the distance from the optimal channel. This paper considers the
knowledge gained from the perspective of the better channel. A method is
presented using iterative upgrading of the bit-channels which successively
results in a channel closer to the original one. It is shown that this approach
can be used to obtain a channel arbitrarily close to the original channel, and
therefore to the optimal construction of a polar code.
|
1302.5166 | Rate-Compatible Short-Length Protograph LDPC Codes | cs.IT math.IT | This paper produces a rate-compatible protograph LDPC code at 1k information
blocklength with superior performance in both waterfall and error floor
regions. The design of such codes has proved difficult in the past because the
constraints imposed by structured design (protographs), rate-compatibility, as
well as small block length, are not easily satisfied together. For example, as
the block length decreases, the predominance of decoding threshold as the main
parameter in coding design is reduced, thus complicating the search for good
codes. Our rate-compatible protograph codes have rates ranging from 1/3 to 4/5
and show no error floor down to $10^{-6}$ FER.
|
1302.5168 | q-ary Compressive Sensing | cs.IT math.IT math.ST stat.TH | We introduce q-ary compressive sensing, an extension of 1-bit compressive
sensing. We propose a novel sensing mechanism and a corresponding recovery
procedure. The recovery properties of the proposed approach are analyzed both
theoretically and empirically. Results in 1-bit compressive sensing are
recovered as a special case. Our theoretical results suggest a tradeoff between
the quantization parameter q, and the number of measurements m in the control
of the error of the resulting recovery algorithm, as well its robustness to
noise.
|
1302.5181 | Basic Classes of Grammars with Prohibition | cs.FL cs.CL | A practical tool for natural language modeling and development of
human-machine interaction is developed in the context of formal grammars and
languages. A new type of formal grammars, called grammars with prohibition, is
introduced. Grammars with prohibition provide more powerful tools for natural
language generation and better describe processes of language learning than the
conventional formal grammars. Here we study relations between languages
generated by different grammars with prohibition based on conventional types of
formal grammars, such as context-free or context-sensitive grammars. In
addition, we compare languages generated by different grammars with prohibition and
languages generated by conventional formal grammars. In particular, it is
demonstrated that they have essentially higher computational power and
expressive possibilities in comparison with the conventional formal grammars.
Thus, while conventional formal grammars are recursive and subrecursive
algorithms, many classes of grammars with prohibition are superrecursive
algorithms. Results presented in this work are aimed at the development of
human-machine interaction, modeling natural languages, empowerment of
programming languages, computer simulation, better software systems, and theory
of recursion.
|
1302.5186 | Unsupervised edge map scoring: a statistical complexity approach | cs.CV stat.AP | We propose a new Statistical Complexity Measure (SCM) to qualify edge maps
without Ground Truth (GT) knowledge. The measure is the product of two indices,
an \emph{Equilibrium} index $\mathcal{E}$ obtained by projecting the edge map
into a family of edge patterns, and an \emph{Entropy} index $\mathcal{H}$,
defined as a function of the Kolmogorov Smirnov (KS) statistic.
This new measure can be used for performance characterization which includes:
(i)~the specific evaluation of an algorithm (intra-technique process) in order
to identify its best parameters, and (ii)~the comparison of different
algorithms (inter-technique process) in order to classify them according to
their quality.
Results made over images of the South Florida and Berkeley databases show
that our approach significantly improves over Pratt's Figure of Merit (PFoM)
which is the objective reference-based edge map evaluation standard, as it
takes into account more features in its evaluation.
|
1302.5189 | Object Detection in Real Images | cs.CV | Object detection and recognition are important problems in computer vision.
Since these problems are meta-heuristic, despite a lot of research, practically
usable, intelligent, real-time, and dynamic object detection/recognition
methods are still unavailable. We propose a new object detection/recognition
method, which improves over the existing methods in every stage of the object
detection/recognition process. In addition to the usual features, we propose to
use geometric shapes, like linear cues, ellipses and quadrangles, as additional
features. The full potential of geometric cues is exploited by using them to
extract other features in a robust, computationally efficient, and less
meta-heuristic manner. We also propose a new hierarchical codebook, which
provides good generalization and discriminative properties. The codebook
enables fast multi-path inference mechanisms based on propagation of
conditional likelihoods, that make it robust to occlusion and noise. It has the
capability of dynamic learning. We also propose a new learning method that has
generative and discriminative learning capabilities, does not need large and
fully supervised training dataset, and is capable of online learning. The
preliminary work of detecting geometric shapes in real images has been
completed. This preliminary work is the focus of this report. Future path for
realizing the proposed object detection/recognition method is also discussed in
brief.
|
1302.5205 | The exponential family in abstract information theory | cs.IT math-ph math.IT math.MP | We introduce generalized notions of a divergence function and a Fisher
information matrix. We propose to generalize the notion of an exponential
family of models by reformulating it in terms of the Fisher information matrix.
Our methods are those of information geometry. The context is general enough to
include applications from outside statistics.
|
1302.5215 | Development Of Ontology-Based Intelligent System For Software Testing | cs.AI cs.SE | Software testing is a prime factor in software industry. Besides knowing the
importance of testing, only limited time is allocated for teaching it. It will
be more efficient if testing is taught simultaneously with programming
foundations. This integrated learning of testing techniques and programming
allows programmers to perform better, and this leads to improved performance
across the industry. In this paper, an ontology-based technique is introduced:
it first defines the various testing processes in a hierarchy along with the
relationships among them, so that the captured knowledge can be shared and
reused; second, metadata is created by natural language processing; and
finally, the application uses the ontologies to support test management,
acting as a knowledge base for multiple environments with the integrated
teaching of programming foundations and testing concepts. Keywords:
Meta Data, Ontology, Software Testing, Integration, Programming Foundations.
|
1302.5226 | Dobrushin ergodicity coefficient for Markov operators on cones, and
beyond | math.OA cs.MA math.DG | The analysis of classical consensus algorithms relies on contraction
properties of adjoints of Markov operators, with respect to Hilbert's
projective metric or to a related family of seminorms (Hopf's oscillation or
Hilbert's seminorm). We generalize these properties to abstract consensus
operators over normal cones, which include the unital completely positive maps
(Kraus operators) arising in quantum information theory. In particular, we show
that the contraction rate of such operators, with respect to the Hopf
oscillation seminorm, is given by an analogue of Dobrushin's ergodicity
coefficient. We derive from this result a characterization of the contraction
rate of a non-linear flow, with respect to Hopf's oscillation seminorm and to
Hilbert's projective metric.
|
1302.5235 | Predicting the Temporal Dynamics of Information Diffusion in Social
Networks | cs.SI physics.soc-ph | Online social networks play a major role in the spread of information at very
large scale and it becomes essential to provide means to analyse this
phenomenon. In this paper we address the issue of predicting the temporal
dynamics of the information diffusion process. We develop a graph-based
approach built on the assumption that the macroscopic dynamics of the spreading
process are explained by the topology of the network and the interactions that
occur through it, between pairs of users, on the basis of properties at the
microscopic level. We introduce a generic model, called T-BaSIC, and describe
how to estimate its parameters from users behaviours using machine learning
techniques. Contrary to classical approaches where the parameters are fixed in
advance, T-BaSIC's parameters are functions of time, which allows the model to
better approximate and adapt to the diffusion phenomenon observed in online
social networks. Our proposal has been validated on real Twitter datasets.
Experiments show that our approach is able to capture the particular patterns
of diffusion depending on the studied sub-networks of users and topics. The
results corroborate the "two-step" theory (1955) that states that information
flows from media to a few "opinion leaders" who then transfer it to the mass
population via social networks and show that it applies in the online context.
This work also highlights interesting recommendations for future
investigations.
|
1302.5280 | Opportunistic Interference Alignment for MIMO Interfering
Multiple-Access Channels | cs.IT math.IT | We consider the $K$-cell multiple-input multiple-output (MIMO) interfering
multiple-access channel (IMAC) with time-invariant channel coefficients, where
each cell consists of a base station (BS) with $M$ antennas and $N$ users
having $L$ antennas each. In this paper, we propose two opportunistic
interference alignment (OIA) techniques utilizing multiple transmit antennas at
each user: antenna selection-based OIA and singular value decomposition
(SVD)-based OIA. Their performance is analyzed in terms of \textit{user scaling
law} required to achieve $KS$ degrees-of-freedom (DoF), where $S(\le M)$
denotes the number of simultaneously transmitting users per cell. We assume
that each selected user transmits a single data stream at each time-slot. It is
shown that the antenna selection-based OIA does not fundamentally change the
user scaling condition if $L$ is fixed, compared with the single-input
multiple-output (SIMO) IMAC case, which is given by $\text{SNR}^{(K-1)S}$,
where SNR denotes the signal-to-noise ratio. In addition, we show that the
SVD-based OIA can greatly reduce the user scaling condition to
$\text{SNR}^{(K-1)S-L+1}$ through optimizing a weight vector at each user.
Simulation results validate the derived scaling laws of the proposed OIA
techniques. The sum-rate performance of the proposed OIA techniques is compared
with the conventional techniques in MIMO IMAC channels and it is shown that the
proposed OIA techniques outperform the conventional techniques.
|
1302.5281 | Fundamental bound on the reliability of quantum information transmission | quant-ph cs.IT math.IT | Information theory tells us that if the rate of sending information across a
noisy channel were above the capacity of that channel, then the transmission
would necessarily be unreliable. For classical information sent over classical
or quantum channels, one could, under certain conditions, make a stronger
statement that the reliability of the transmission shall decay exponentially to
zero with the number of channel uses and the proof of this statement typically
relies on a certain fundamental bound on the reliability of the transmission.
Such a statement or the bound has never been given for sending quantum
information. We give this bound and then use it to give the first example where
the reliability of sending quantum information at rates above the capacity
decays exponentially to zero. We also show that our framework can be used for
proving generalized bounds on the reliability.
|
1302.5302 | Dynamic Memory Allocation Policies for Postings in Real-Time Twitter
Search | cs.IR cs.DB | We explore a real-time Twitter search application where tweets are arriving
at a rate of several thousands per second. Real-time search demands that they
be indexed and searchable immediately, which leads to a number of
implementation challenges. In this paper, we focus on one aspect: dynamic
postings allocation policies for index structures that are completely held in
main memory. The core issue can be characterized as a "Goldilocks Problem".
Because memory remains today a scarce resource, an allocation policy that is too
aggressive leads to inefficient utilization, while a policy that is too
conservative is slow and leads to fragmented postings lists. We present a
dynamic postings allocation policy that allocates memory in increasingly-larger
"slices" from a small number of large, fixed pools of memory. Through
analytical models and experiments, we explore different settings that balance
time (query evaluation speed) and space (memory utilization).
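A minimal sketch of such a slice policy (class name and slice sizes are hypothetical, not from the paper): a term's postings start in the smallest pool and, on overflow, receive an increasingly larger slice from the next fixed pool.

```python
class SliceAllocator:
    """Sketch of slice-based postings allocation: each pool hands out
    fixed-size slices, with each pool's slice twice as large as the
    previous pool's. A short postings list wastes little memory; a
    long one quickly graduates to large slices, limiting fragmentation."""
    SLICE_SIZES = [8, 16, 32, 64]  # postings per slice, per pool (illustrative)

    def __init__(self):
        self.slices = {}  # term -> list of [size, used-postings] slices

    def add_posting(self, term, posting):
        chain = self.slices.setdefault(term, [[self.SLICE_SIZES[0], []]])
        size, used = chain[-1]
        if len(used) == size:  # current slice full: take a larger one
            level = min(len(chain), len(self.SLICE_SIZES) - 1)
            chain.append([self.SLICE_SIZES[level], []])
            used = chain[-1][1]
        used.append(posting)
```

After 30 postings for one term, the chain holds an 8-slice and a 16-slice, both full, plus a partially used 32-slice.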
|
1302.5348 | Graph-based Generalization Bounds for Learning Binary Relations | cs.LG | We investigate the generalizability of learned binary relations: functions
that map pairs of instances to a logical indicator. This problem has
application in numerous areas of machine learning, such as ranking, entity
resolution and link prediction. Our learning framework incorporates an example
labeler that, given a sequence $X$ of $n$ instances and a desired training size
$m$, subsamples $m$ pairs from $X \times X$ without replacement. The challenge
in analyzing this learning scenario is that pairwise combinations of random
variables are inherently dependent, which prevents us from using traditional
learning-theoretic arguments. We present a unified, graph-based analysis, which
allows us to analyze this dependence using well-known graph identities. We are
then able to bound the generalization error of learned binary relations using
Rademacher complexity and algorithmic stability. The rate of uniform
convergence is partially determined by the labeler's subsampling process. We
thus examine how various assumptions about subsampling affect generalization;
under a natural random subsampling process, our bounds guarantee
$\tilde{O}(1/\sqrt{n})$ uniform convergence.
|
1302.5371 | Non-Linear Distributed Average Consensus using Bounded Transmissions | cs.DC cs.IT math.IT | A distributed average consensus algorithm in which every sensor transmits
with bounded peak power is proposed. In the presence of communication noise, it
is shown that the nodes reach consensus asymptotically to a finite random
variable whose expectation is the desired sample average of the initial
observations with a variance that depends on the step size of the algorithm and
the variance of the communication noise. The asymptotic performance is
characterized by deriving the asymptotic covariance matrix using results from
stochastic approximation theory. It is shown that using bounded transmissions
results in slower convergence compared to the linear consensus algorithm based
on the Laplacian heuristic. Simulations corroborate our analytical findings.
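A noiseless toy rendering of the bounded-transmission idea (the paper's stochastic-approximation step-size schedule and communication noise are omitted; `tanh` is our stand-in for a bounded transmission function):

```python
import math

def bounded_consensus(x, neighbors, steps=2000, a=0.2):
    """Illustrative (noiseless) consensus with bounded transmissions:
    each node broadcasts tanh(x_i), so its peak transmit amplitude
    never exceeds 1, and updates using the received bounded signals.
    On an undirected graph the state average is preserved, so states
    converge toward the sample average of the initial observations."""
    x = list(x)
    for _ in range(steps):
        tx = [math.tanh(v) for v in x]  # bounded transmissions
        x = [x[i] + a * sum(tx[j] - tx[i] for j in neighbors[i])
             for i in range(len(x))]
    return x
```

On a 3-node path graph with initial states [0, 1, 2], all states approach the initial average 1.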
|
1302.5374 | A Weight-coded Evolutionary Algorithm for the Multidimensional Knapsack
Problem | cs.NE math.OC | A revised weight-coded evolutionary algorithm (RWCEA) is proposed for solving
multidimensional knapsack problems. This RWCEA uses a new decoding method and
incorporates a heuristic method in initialization. Computational results show
that the RWCEA performs better than a weight-coded evolutionary algorithm
proposed by Raidl (1999), and on some existing benchmarks it can yield better
results than those reported in the OR-library.
|
1302.5376 | Spatial CSIT Allocation Policies for Network MIMO Channels | cs.IT math.IT | In this work, we study the problem of the optimal dissemination of channel
state information (CSI) among K spatially distributed transmitters (TXs)
jointly cooperating to serve K receivers (RXs). One of the particularities of
this work lies in the fact that the CSI is distributed in the sense that each
TX obtains its own estimate of the global multi-user MIMO channel with no
further exchange of information being allowed between the TXs. Although this is
well suited to model the cooperation between non-colocated TXs, e.g., in
cellular Coordinated Multipoint (CoMP) schemes, this type of setting has
received little attention so far in the information theoretic society. We study
in this work what are the CSI requirements at every TX, as a function of the
network geometry, to ensure that the maximal number of degrees-of-freedom (DoF)
is achieved, i.e., the same DoF as obtained under perfect CSI at all TXs. We
advocate the use of the generalized DoF to take into account the geometry of
the network in the analysis. Consistent with the intuition, the derived DoF
maximizing CSI allocation policy suggests that TX cooperation should be limited
to a specific finite neighborhood around each TX. This is in sharp contrast
with the conventional (uniform) CSI dissemination policy which induces CSI
requirements that grow unbounded with the network size. The proposed CSI
allocation policy suggests an alternative to clustering which overcomes
fundamental limitations such as (i) edge interference and (ii) unbounded
increase of the CSIT requirements with the cluster size. Finally, we show how
finite neighborhood CSIT exchange translates into finite neighborhood message
exchange so that finally global interference management is possible with only
local cooperation.
|
1302.5383 | Stochastic Ordering of Fading Channels Through the Shannon Transform | cs.IT math.IT | A new stochastic order between two fading distributions is introduced. A
fading channel dominates another in the ergodic capacity ordering sense, if the
Shannon transform of the first is greater than that of the second at all values
of average signal to noise ratio. It is shown that some parametric fading
models such as the Nakagami-m, Rician, and Hoyt are distributions that are
monotonic in their line of sight parameters with respect to the ergodic
capacity order. Some operations under which the ergodic capacity order is
preserved are also discussed. Through these properties of the ergodic capacity
order, it is possible to compare under two different fading scenarios, the
ergodic capacity of a composite system involving multiple fading links with
coding/decoding capabilities only at the transmitter/receiver. Such comparisons
can be made even in cases when a closed form expression for the ergodic
capacity of the composite system is not analytically tractable. Applications to
multiple access channels, and extensions to multiple-input multiple-output
(MIMO) systems are also discussed.
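For illustration (our own numerical sketch, not from the paper), the ergodic capacity order can be checked empirically by comparing Shannon transforms E[log(1 + SNR * X)] on a grid of average SNRs; by Jensen's inequality a non-fading unit-power link dominates a unit-mean Rayleigh-fading (exponential-power) link:

```python
import math, random

def shannon_transform(samples, snr):
    """Empirical Shannon transform E[log(1 + snr * X)] of a fading
    power distribution, from Monte-Carlo samples of the power X."""
    return sum(math.log1p(snr * x) for x in samples) / len(samples)

def capacity_dominates(samples_a, samples_b, snrs):
    """Numerical check of the ergodic capacity order: channel A
    dominates channel B if its Shannon transform is at least as large
    at every average SNR tested (a sanity check, not a proof)."""
    return all(shannon_transform(samples_a, s) >= shannon_transform(samples_b, s)
               for s in snrs)
```

The deterministic channel is represented by the single sample [1.0], for which the transform is exactly log(1 + SNR).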
|
1302.5384 | Facilitating Machine to Machine (M2M) Communication using GSM Network | cs.IT cs.NI math.IT | In this paper a method to facilitate M2M communication using existing GSM
networks is proposed - as M2M devices primarily use SMS as their data bearer,
the focus is on increasing the number of devices that can use the associated
GSM signaling channels at a time. This is achieved by defining a new class of
low mobility, static M2M devices which use a modified physical layer control
frame structure. The proposal is expected to aid a quick, reliable and
cost-effective deployment of M2M devices in the existing GSM networks.
|
1302.5417 | An Ontology Construction Approach for the Domain Of Poultry Science
Using Protege | cs.AI | The information retrieval systems that are present nowadays are mainly based
on full text matching of keywords or topic based classification. This matching
of keywords often returns a large amount of irrelevant information that does
not meet the user's query requirements. In order to solve this problem and
to enhance the search using semantic environment, a technique named ontology is
implemented for the field of poultry in this paper. Ontology is an emerging
technique in the current field of research in semantic environment. This paper
constructs ontology using the tool named Protege version 4.0 and this also
generates Resource Description Framework schema and XML scripts for using
poultry ontology in web.
|
1302.5449 | Nonparametric Basis Pursuit via Sparse Kernel-based Learning | cs.LG cs.CV cs.IT math.IT stat.ML | Signal processing tasks as fundamental as sampling, reconstruction, minimum
mean-square error interpolation and prediction can be viewed under the prism of
reproducing kernel Hilbert spaces. Endowing this vantage point with
contemporary advances in sparsity-aware modeling and processing, promotes the
nonparametric basis pursuit advocated in this paper as the overarching
framework for the confluence of kernel-based learning (KBL) approaches
leveraging sparse linear regression, nuclear-norm regularization, and
dictionary learning. The novel sparse KBL toolbox goes beyond translating
sparse parametric approaches to their nonparametric counterparts, to
incorporate new possibilities such as multi-kernel selection and matrix
smoothing. The impact of sparse KBL to signal processing applications is
illustrated through test cases from cognitive radio sensing, microarray data
imputation, and network traffic prediction.
|