id | title | categories | abstract |
|---|---|---|---|
1302.1334 | Principles of modal and vector theory of formal intelligence systems | cs.AI | The paper considers the class of information systems capable of solving
heuristic problems on the basis of a formal theory, termed the modal and
vector theory of formal intelligent systems (FIS). The paper justifies the
construction of the FIS resolution algorithm, defines the main features of
these systems, and proves the theorems that underlie the theory. The principle
of representation diversity in FIS construction is formulated. The paper also
covers the main principles of constructing and operating a formal intelligent
system on the basis of the modal and vector theory: the modular architecture
of the FIS presentation subsystem and the data-processing algorithms at every
step of the stage of creating presentations. In addition, the paper suggests a
structure for the neural elements, i.e., the zone detectors and processors,
that form the basis of FIS construction.
|
1302.1335 | Ontology Guided Information Extraction from Unstructured Text | cs.IR | In this paper, we describe an approach to populate an existing ontology with
instance information present in the natural language text provided as input. An
ontology is defined as an explicit conceptualization of a shared domain. This
approach starts with a list of relevant domain ontologies created by human
experts, and techniques for identifying the most appropriate ontology to be
extended with information from a given text. Then we demonstrate heuristics to
extract information from the unstructured text and to add it as structured
information to the selected ontology. This identification of the relevant
ontology is critical, as it is used in identifying relevant information in the
text. We extract information in the form of semantic triples from the text,
guided by the concepts in the ontology. We then convert the extracted
information about the semantic class instances into the Resource Description
Framework (RDF) and append it to the existing domain ontology. This enables us
to perform more precise semantic queries over the semantic triple store thus
created. We have achieved 95% accuracy of information extraction in our
implementation.
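As a rough illustration of the final step (appending extracted triples to the
domain ontology as RDF), here is a minimal sketch using rdflib; the namespace,
file names, and triples are illustrative assumptions, not taken from the paper:

```python
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/art-ontology#")   # hypothetical namespace

g = Graph()
g.parse("domain_ontology.rdf")                       # the selected domain ontology

# Two extracted semantic triples (instance, relation, value), assumed for the demo
g.add((EX["MonaLisa"], EX["paintedBy"], EX["daVinci"]))
g.add((EX["MonaLisa"], EX["yearCreated"], Literal(1503)))

# Append the new instance information back to an ontology file
g.serialize("domain_ontology_extended.rdf", format="xml")
```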
|
1302.1349 | Optimal Power and Rate Allocation in the Degraded Gaussian Relay Channel
with Energy Harvesting Nodes | cs.IT cs.ET math.IT | Energy harvesting (EH) is a novel technique for prolonging the lifetime of
wireless networks, such as wireless sensor networks or ad-hoc networks, by
providing an unlimited source of energy for their nodes. In this sense, it has
recently emerged as a promising technique for green communications. On the
other hand, cooperative communication with the help of relay nodes improves
the performance of wireless networks by increasing system throughput and
reliability as well as range and energy efficiency. In order to investigate
cooperation among EH nodes, in this
paper, we consider the problem of optimal power and rate allocation in the
degraded full-duplex Gaussian relay channel in which source and relay can
harvest energy from their environments. We consider the general stochastic
energy arrivals at the source and the relay with known EH times and amounts at
the transmitters before the start of transmission. This problem takes a
min-max optimization form that, together with its constraints, is not easy to
solve. We propose a method based on a theorem of Terkelsen [1] to transform it
into a solvable convex optimization problem. We also consider some
special cases for the harvesting profile of the source and the relay nodes and
find their solutions efficiently.
|
1302.1351 | Adaptive Sparse Channel Estimation for Time-Variant MIMO-OFDM Systems | cs.IT math.IT | Accurate channel state information (CSI) is required for coherent detection
in time-variant multiple-input multiple-output (MIMO) communication systems
using orthogonal frequency division multiplexing (OFDM) modulation. One
low-complexity and stable adaptive channel estimation (ACE) approach is
normalized least mean square (NLMS)-based ACE. However, it cannot exploit the
inherent sparsity of the MIMO channel, which is characterized by a few
dominant channel taps. In this paper, we propose two adaptive sparse channel
estimation (ASCE) methods that take advantage of this sparse structure in
time-variant MIMO-OFDM systems. Unlike the traditional NLMS-based method, the
two proposed methods introduce sparse penalties into the cost function of the
NLMS algorithm. Computer simulations confirm clear performance advantages of
the proposed ASCE methods over traditional ACE.
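To make the sparse-penalty idea concrete, here is a minimal sketch of one such
update, a zero-attracting NLMS step; the step sizes, penalty weight, and toy
channel are assumptions for illustration, not the paper's exact algorithms:

```python
import numpy as np

def za_nlms_step(h_est, x, d, mu=0.5, rho=5e-4, eps=1e-8):
    """One zero-attracting NLMS update: the standard NLMS correction plus an
    l1 (zero-attraction) term that pulls small taps toward zero."""
    e = d - h_est @ x                            # instantaneous error
    h_est = h_est + mu * e * x / (x @ x + eps)   # normalized LMS correction
    h_est = h_est - rho * np.sign(h_est)         # sparse (l1) penalty gradient
    return h_est, e

# Toy run: identify a sparse 64-tap channel from noisy training data
rng = np.random.default_rng(0)
h = np.zeros(64)
h[[3, 17, 40]] = [0.9, -0.5, 0.3]                # a few dominant taps
h_est = np.zeros(64)
for _ in range(5000):
    x = rng.standard_normal(64)
    d = h @ x + 0.01 * rng.standard_normal()
    h_est, _ = za_nlms_step(h_est, x, d)
print(np.flatnonzero(np.abs(h_est) > 0.1))       # ideally recovers [3 17 40]
```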
|
1302.1353 | Adaptive Sparse Channel Estimation for Time-Variant MISO Communication
Systems | cs.IT math.IT | Channel estimation is one of the key technical issues in time-variant
multiple-input single-output (MISO) communication systems. To estimate the
MISO channel, the least mean square (LMS) algorithm is applied to adaptive
channel estimation (ACE). Since the MISO channel is often described by a
sparse channel model, this sparsity can be exploited, and estimation
performance improved, by adaptive sparse channel estimation (ASCE) methods
using sparse LMS algorithms. However, conventional ASCE methods have two main
drawbacks: 1) they are sensitive to the random scaling of the training signal,
and 2) they are unstable in the low signal-to-noise ratio (SNR) regime. To
overcome these two harmful factors, in this paper we propose a novel ASCE
method using the normalized LMS (NLMS) algorithm (ASCE-NLMS). In addition, we
propose an improved ASCE method using the normalized least mean fourth (NLMF)
algorithm (ASCE-NLMF). Both proposed methods exploit channel sparsity
effectively, and their stability is confirmed by mathematical derivation.
Computer simulation results show that the proposed sparse channel estimation
methods achieve better estimation performance than conventional methods.
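A corresponding sketch of a sparse NLMF-style update is below; the
normalization term is one common choice from the NLMF literature and is an
assumption here, since the abstract does not spell it out:

```python
import numpy as np

def za_nlmf_step(h_est, x, d, mu=0.2, rho=5e-4, eps=1e-8):
    """One sparse (zero-attracting) NLMF update. The normalization
    ||x||^2 (||x||^2 + e^2) is one common choice in the NLMF literature,
    assumed here rather than taken from the paper."""
    e = d - h_est @ x
    norm = (x @ x) * ((x @ x) + e * e) + eps
    h_est = h_est + mu * (e ** 3) * x / norm   # fourth-moment error correction
    return h_est - rho * np.sign(h_est), e     # l1 zero-attraction term
```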
|
1302.1358 | Sparse Channel Estimation for MIMO-OFDM Amplify-and-Forward Two-Way
Relay Networks | cs.IT math.IT | An accurate channel impulse response (CIR) is required for coherent
detection, and it can also help improve communication quality of service in
next-generation wireless communication systems. One such advanced system is
the multi-input multi-output orthogonal frequency-division multiplexing
(MIMO-OFDM) amplify-and-forward two-way relay network (AF-TWRN). Linear
channel estimation methods, e.g., least squares (LS), have been proposed to
estimate the CIR. However, these methods do not take advantage of channel
sparsity and thus incur a performance loss. In this paper, we propose a sparse
channel estimation method that exploits the sparse structure of the CIR at
each end user. The sparse channel estimation problem is formulated as
compressed sensing (CS) using sparse decomposition theory, and the estimation
is implemented with the LASSO algorithm. Computer simulation results confirm
the superiority of the proposed method over LS-based channel estimation.
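The LASSO step can be sketched with a plain iterative soft-thresholding (ISTA)
solver; the pilot matrix, sparsity level, and regularization weight below are
illustrative assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_ista(A, y, lam=0.02, n_iter=300):
    """Estimate a sparse CIR h from y = A h + noise by minimizing
    0.5 * ||y - A h||^2 + lam * ||h||_1 with iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # inverse Lipschitz constant
    h = np.zeros(A.shape[1])
    for _ in range(n_iter):
        h = soft_threshold(h + step * A.T @ (y - A @ h), step * lam)
    return h

# Toy use: a 3-sparse channel of length 128 observed through 64 pilot rows
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / 8.0
h_true = np.zeros(128)
h_true[[5, 50, 90]] = [1.0, -0.7, 0.4]
y = A @ h_true + 0.01 * rng.standard_normal(64)
h_hat = lasso_ista(A, y)
```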
|
1302.1369 | Key User Extraction Based on Telecommunication Data (aka. Key Users in
Social Network. How to find them?) | cs.SI physics.soc-ph | The number of systems that collect vast amounts of data about users has
grown rapidly over the last few years. Many of these systems contain data not
only about people's characteristics but also about their relationships with
other system users. From this kind of data it is possible to extract a social
network that reflects the connections between the system's users. Moreover,
the analysis of such a social network makes it possible to investigate
different characteristics of its members and their linkages. One type of such
analysis is key user extraction. Key users are those who have the biggest
impact on other network members and a strong influence on network evolution.
The knowledge obtained about these users makes it possible to investigate and
predict changes within the network, so it is very important for the people or
companies that profit from the network, such as a telecommunication company. A
second important requirement is the ability to extract these users as quickly
as possible, i.e., to develop an algorithm that is time-efficient in large
social networks whose numbers of nodes and edges reach a few million. In this
master thesis, a method of key user extraction called social position is
analyzed. Moreover, the social position measure is compared with other methods
used to assess the centrality of a node. Furthermore, three algorithms for
computing social position are introduced, along with a comparison of their
processing times against other centrality methods.
|
1302.1380 | Towards the Rapid Development of a Natural Language Understanding Module | cs.CL | When developing a conversational agent, there is often an urgent need to have
a prototype available in order to test the application with real users. A
Wizard of Oz is a possibility, but sometimes the agent should be simply
deployed in the environment where it will be used. Here, the agent should be
able to capture as many interactions as possible and to understand how people
react to failure. In this paper, we focus on the rapid development of a natural
language understanding module by non-experts. Our approach follows the learning
paradigm and sees the process of understanding natural language as a
classification problem. We test our module with a conversational agent that
answers questions in the art domain. Moreover, we show how our approach can be
used by a natural language interface to a cinema database.
|
1302.1396 | Finite Horizon Adaptive Optimal Distributed Power Allocation for
Enhanced Cognitive Radio Network in the Presence of Channel Uncertainties | cs.ET cs.IT math.IT | In this paper, a novel enhanced cognitive radio network using power control
is considered, in which secondary users (SUs) may use the wireless resources
of the primary users (PUs) when the PUs are inactive, and may also coexist
with the PUs while they are active, by managing the interference caused from
SUs to PUs. A novel finite horizon adaptive optimal distributed power
allocation scheme is therefore proposed that incorporates the effect of
channel uncertainties, and it is analyzed in two cases. In Case 1, the
proposed scheme forces the signal-to-interference ratio (SIR) of the SUs to
converge to a higher target value, increasing network throughput while the PUs
are not communicating within the finite horizon. Once the PUs become active,
as in Case 2, the proposed scheme not only forces the SIR of the PUs to
converge to a higher target SIR, but also forces the SIR of the SUs to
converge to a lower value, regulating their interference to the PUs during the
finite time period. To mitigate the attenuation of SIR due to channel
uncertainties, the proposed scheme allows the SIRs of both PUs and SUs to
converge to their desired targets while minimizing energy consumption within
the finite horizon. Simulation results illustrate that the proposed scheme
converges faster and consumes less energy than other schemes by adapting
optimally to the channel variations.
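For intuition about SIR targeting by power control, here is the classical
distributed update in which each user scales its power by the ratio of target
to measured SIR; this is only a baseline sketch with made-up gains, not the
paper's finite-horizon adaptive optimal scheme:

```python
import numpy as np

# Classical distributed SIR-balancing iteration: each user scales its power
# by target-SIR / measured-SIR. Gains, noise, and target are made up.
G = np.array([[1.00, 0.05, 0.10],   # G[i, j]: gain from transmitter j to receiver i
              [0.10, 1.00, 0.05],
              [0.05, 0.10, 1.00]])
noise, target = 0.01, 5.0
p = np.ones(3)                      # initial transmit powers
for _ in range(50):
    interference = G @ p - np.diag(G) * p + noise
    sir = np.diag(G) * p / interference
    p = (target / sir) * p          # converges when the target is feasible
sir = np.diag(G) * p / (G @ p - np.diag(G) * p + noise)
print(np.round(sir, 2))             # each user's SIR approaches the target
```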
|
1302.1400 | A new greedy randomized adaptive search procedure for multiobjective RNA
structural alignment | cs.DS cs.CE | RNA secondary structure prediction is one of the main issues in
bioinformatics. It seeks to elucidate structurally conserved regions within a
set of RNA sequences. Unfortunately, finding an accurate conserved structure
is a very hard task. In the present study, the prediction problem is treated
as a multiobjective optimization process in which the structural conservation
and the sensitivity of the multiple alignment are jointly optimized. The
proposed method, called GRASPMORSA, is based on an aggregate function and the
GRASP procedure. The initial solutions are obtained using a random progressive
local/global algorithm and are then refined by iterative realignment.
Experiments on a large range of data have shown the efficacy and effectiveness
of the proposed method and its capacity to reach good-quality solutions.
|
1302.1419 | Blind One-Bit Compressive Sampling | cs.IT math.IT math.NA | The problem of 1-bit compressive sampling is addressed in this paper. We
introduce an optimization model for reconstruction of sparse signals from 1-bit
measurements. The model targets a solution that has the least l0-norm among all
signals satisfying consistency constraints stemming from the 1-bit
measurements. An algorithm for solving the model is developed. Convergence
analysis of the algorithm is presented. Our approach is to obtain a sequence of
optimization problems by successively approximating the l0-norm and to solve
resulting problems by exploiting the proximity operator. We examine the
performance of our proposed algorithm and compare it with binary iterative
hard thresholding (BIHT) [10], a state-of-the-art algorithm for 1-bit
compressive sampling reconstruction. Unlike BIHT, our model and algorithm do
not require prior knowledge of the sparsity of the signal. This makes our
proposed work a promising practical approach for signal acquisition.
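For reference, the BIHT baseline cited as [10] can be sketched as follows; the
step size and the final normalization (1-bit measurements fix only the
direction of the signal) follow the commonly published form of the algorithm,
and all problem sizes are illustrative:

```python
import numpy as np

def biht(A, y, K, n_iter=100, tau=None):
    """Binary iterative hard thresholding: recover a unit-norm K-sparse x
    from 1-bit measurements y = sign(A x)."""
    m, n = A.shape
    tau = 1.0 / m if tau is None else tau
    x = np.zeros(n)
    for _ in range(n_iter):
        a = x + tau * A.T @ (y - np.sign(A @ x))  # correct sign violations
        a[np.argsort(np.abs(a))[:-K]] = 0.0       # keep the K largest taps
        x = a / (np.linalg.norm(a) + 1e-12)       # scale is lost in 1-bit data
    return x

# Toy use: 512 one-bit measurements of a unit-norm 5-sparse signal
rng = np.random.default_rng(0)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
x_true /= np.linalg.norm(x_true)
A = rng.standard_normal((512, 128))
x_hat = biht(A, np.sign(A @ x_true), K=5)
```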
|
1302.1422 | S\'emantique des d\'eterminants dans un cadre richement typ\'e | cs.CL | The variation of word meaning according to context leads us to enrich the
type system of our syntactic and semantic analyser of French, which is based
on categorial grammars and Montague semantics (or lambda-DRT). The main
advantage of a deep semantic analysis is to represent meaning by logical
formulae that can easily be used, e.g., for inferences. Determiners and
quantifiers play a fundamental role in the construction of those formulae, but
in our rich type system the usual semantic terms do not work. We propose a
solution inspired by the tau and epsilon operators of Hilbert, kinds of
generic elements and choice functions. This approach unifies the treatment of
the different determiners and quantifiers as well as the dynamic binding of
pronouns. Above all, this fully computational view fits in well with the
wide-coverage parser Grail, from both a theoretical and a practical viewpoint.
|
1302.1459 | A Buffer-aided Successive Opportunistic Relay Selection Scheme with
Power Adaptation and Inter-Relay Interference Cancellation for Cooperative
Diversity Systems | cs.IT math.IT | In this paper we consider a simple cooperative network consisting of a
source, a destination and a cluster of decode-and-forward half-duplex relays.
At each time-slot, the source and (possibly) one of the relays transmit a
packet to another relay and the destination, respectively, resulting in
inter-relay interference (IRI). In this work, with the aid of buffers at the
relays, we mitigate the detrimental effect of IRI through interference
cancellation. More specifically, we propose the min-power scheme that minimizes
the total energy expenditure per time slot under an IRI cancellation scheme.
Apart from minimizing the energy expenditure, the min-power selection scheme
also provides better throughput and a lower outage probability than existing
works in the literature. It is the first time that interference cancellation is
combined with buffer-aided relays and power adaptation to mitigate the IRI and
minimize the energy expenditure. The new relay selection policy is analyzed in
terms of outage probability and diversity, by modeling the evolution of the
relay buffers as a Markov Chain (MC). We construct the state transition matrix
of the MC, and hence obtain the steady state with which we can characterize the
outage probability. The proposed scheme outperforms relevant state-of-the-art
relay selection schemes in terms of throughput, diversity and energy
efficiency, as demonstrated via examples.
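The steady-state computation used to characterize the outage probability can
be sketched generically: given the buffer-state transition matrix, the
stationary distribution is the eigenvector of its transpose for eigenvalue 1
(the matrix below is a made-up toy chain, not from the paper):

```python
import numpy as np

def steady_state(P):
    """Stationary distribution pi (pi P = pi) of a row-stochastic transition
    matrix P, via the eigenvector of P^T for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return pi / pi.sum()

# Toy 3-state buffer-occupancy chain (illustrative numbers only)
P = np.array([[0.6, 0.4, 0.0],
              [0.3, 0.4, 0.3],
              [0.0, 0.5, 0.5]])
print(steady_state(P))   # weights for averaging per-state outage events
```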
|
1302.1461 | Stopping Criteria for Iterative Decoding based on Mutual Information | cs.IT math.IT | In this paper we investigate stopping criteria for iterative decoding from a
mutual information perspective. We introduce new iteration stopping rules based
on an approximation of the mutual information between encoded bits and decoder
soft output. The first type of stopping rule sets a threshold value directly on
the approximated mutual information for terminating decoding. The threshold can
be adjusted according to the expected bit error rate. The second one adopts a
strategy similar to that of the well known cross-entropy stopping rule by
applying a fixed threshold on the ratio of a simple metric obtained after each
iteration over that of the first iteration. Compared with several well known
stopping rules, the new methods achieve higher efficiency.
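A minimal sketch of such a rule is given below; the LLR-based estimator used
here is one common approximation of the mutual information, and the threshold
is illustrative, since the paper's exact metric is not reproduced in the
abstract:

```python
import numpy as np

def mutual_info_estimate(llrs):
    """Approximate mutual information between coded bits and decoder soft
    output from the LLR magnitudes. This is one common estimator,
    I ~ 1 - E[log2(1 + exp(-|L|))]; the paper's metric may differ."""
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-np.abs(llrs))))

def should_stop(llrs, threshold=0.99):
    """Terminate decoding once the estimated MI exceeds a threshold, which
    can be adjusted according to the expected bit error rate."""
    return mutual_info_estimate(llrs) >= threshold
```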
|
1302.1484 | Analytical and Numerical Characterizations of Shannon Ordering for
Discrete Memoryless Channels | cs.IT math.IT | This paper studies several problems concerning channel inclusion, which is a
partial ordering between discrete memoryless channels (DMCs) proposed by
Shannon. Specifically, majorization-based conditions are derived for channel
inclusion between certain DMCs. Furthermore, under general conditions, channel
equivalence defined through Shannon ordering is shown to be the same as
permutation of input and output symbols. The determination of channel inclusion
is considered as a convex optimization problem, and the sparsity of the weights
related to the representation of the worse DMC in terms of the better one is
revealed when channel inclusion holds between two DMCs. For the exploitation of
this sparsity, an effective iterative algorithm is established based on
modifying the orthogonal matching pursuit algorithm.
|
1302.1489 | Multi-rate Sub-Nyquist Spectrum Sensing in Cognitive Radios | cs.IT math.IT | Wideband spectrum sensing is becoming increasingly important to cognitive
radio (CR) systems for exploiting spectral opportunities. This paper introduces
a novel multi-rate sub-Nyquist spectrum sensing (MS3) system that implements
cooperative wideband spectrum sensing in a CR network. MS3 can detect the
wideband spectrum using partial measurements without reconstructing the full
frequency spectrum. Sub-Nyquist sampling rates are adopted in sampling channels
for wrapping the frequency spectrum onto itself. This significantly reduces
sensing requirements of CR. The effects of sub-Nyquist sampling are considered,
and the performance of multi-channel sub-Nyquist sampling is analyzed. To
improve its detection performance, sub-Nyquist sampling rates are chosen to be
different such that the numbers of samples are consecutive prime numbers.
Furthermore, when the received signals at CRs are faded or shadowed, the
performance of MS3 is analytically evaluated. Numerical results show that the
proposed system can significantly enhance the wideband spectrum sensing
performance while requiring low computational and implementation complexities.
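The wrap-around effect that MS3 exploits can be illustrated schematically:
under sub-Nyquist sampling an occupied frequency aliases to its residue modulo
the channel's (coprime) sample count, so the residue pattern across channels
identifies the frequency without full spectrum reconstruction. All numbers
below are made up:

```python
# Schematic demo: a tone at Nyquist-grid bin f aliases to bin (f mod N) in a
# channel that keeps N equally spaced samples. With pairwise-coprime N across
# channels (consecutive primes), the residues jointly pinpoint f.
f_tone = 730                       # occupied frequency bin (made up)
for n_samples in (61, 67, 71):     # consecutive primes -> pairwise coprime
    print(f"channel N={n_samples}: tone {f_tone} wraps to bin {f_tone % n_samples}")
```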
|
1302.1510 | Multi-Dimensional Spatially-Coupled Codes | cs.IT math.IT | Spatially-coupled (SC) codes are constructed by coupling many regular
low-density parity-check codes in a chain. The decoding of SC codes stops when
it faces burst erasures, and this problem cannot be overcome by increasing the
coupling number. In this paper, we introduce multi-dimensional (MD) SC codes.
Numerical results show that 2D-SC codes are more robust to burst erasures than
1D-SC codes. Furthermore, we consider designing MD-SC codes with a smaller
rate loss.
|
1302.1511 | Spatially-Coupled Precoded Rateless Codes | cs.IT math.IT | Raptor codes are rateless codes that achieve capacity on the binary erasure
channel. However, the maximum degree of the optimal output degree distribution
is unbounded, which leads to a computational complexity problem at both
encoders and decoders. Aref and Urbanke investigated the potential universal
capacity-achieving property of spatially-coupled (SC) low-density generator
matrix (LDGM) codes; however, the decoding error probability of SC-LDGM codes
is bounded away from 0. In this paper, we investigate SC-LDGM codes
concatenated with SC low-density parity-check codes. The proposed codes can be
regarded as SC Hsu-Anastasopoulos rateless codes. We derive a lower bound on
the asymptotic overhead from a stability analysis of successful decoding by
density evolution. Numerical calculation reveals that the lower bound is
tight. We observe that, with a sufficiently large number of information bits,
the asymptotic overhead and the decoding error rate approach 0 with bounded
maximum degree.
|
1302.1512 | Efficient Termination of Spatially-Coupled Codes | cs.IT math.IT | Spatially-coupled low-density parity-check codes attract much attention due
to their capacity-achieving performance and a memory-efficient sliding-window
decoding algorithm. On the other hand, the encoder needs to solve large linear
equations to terminate the encoding process. In this paper, we propose
modified spatially-coupled codes. The modified $(d_l,d_r,L)$ codes have a
smaller rate loss, i.e., a higher coding rate, have the same threshold as the
original $(d_l,d_r,L)$ codes, and can be terminated efficiently by using an
accumulator.
|
1302.1515 | A Polynomial Time Algorithm for Lossy Population Recovery | cs.DS cs.LG | We give a polynomial time algorithm for the lossy population recovery
problem. In this problem, the goal is to approximately learn an unknown
distribution on binary strings of length $n$ from lossy samples: for some
parameter $\mu$ each coordinate of the sample is preserved with probability
$\mu$ and otherwise is replaced by a `?'. The running time and number of
samples needed for our algorithm is polynomial in $n$ and $1/\varepsilon$ for
each fixed $\mu>0$. This improves on the algorithm of Wigderson and
Yehudayoff, which runs in quasi-polynomial time for any $\mu > 0$, and the
polynomial time algorithm of Dvir et al., which was shown to work for $\mu
\gtrapprox 0.30$ by Batman et al. In fact, our algorithm also works in the more general framework
of Batman et al. in which there is no a priori bound on the size of the support
of the distribution. The algorithm we analyze is implicit in previous work; our
main contribution is to analyze the algorithm by showing (via linear
programming duality and connections to complex analysis) that a certain matrix
associated with the problem has a robust local inverse even though its
condition number is exponentially small. A corollary of our result is the first
polynomial time algorithm for learning DNFs in the restriction access model of
Dvir et al.
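The lossy sampling channel itself is easy to state in code; here is a minimal
sketch of drawing one sample from an unknown string, with mu as the retention
probability, exactly as described in the abstract:

```python
import random

def lossy_sample(x, mu):
    """One lossy sample of the binary string x: each coordinate is preserved
    with probability mu and otherwise replaced by '?'."""
    return "".join(c if random.random() < mu else "?" for c in x)

random.seed(0)
print(lossy_sample("0110101101", mu=0.3))   # most coordinates erased to '?'
```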
|
1302.1519 | Update Rules for Parameter Estimation in Bayesian Networks | cs.LG stat.ML | This paper re-examines the problem of parameter estimation in Bayesian
networks with missing values and hidden variables from the perspective of
recent work in on-line learning [Kivinen & Warmuth, 1994]. We provide a unified
framework for parameter estimation that encompasses both on-line learning,
where the model is continuously adapted to new data cases as they arrive, and
the more traditional batch learning, where a pre-accumulated set of samples is
used in a one-time model selection process. In the batch case, our framework
encompasses both the gradient projection algorithm and the EM algorithm for
Bayesian networks. The framework also leads to new on-line and batch parameter
update schemes, including a parameterized version of EM. We provide both
empirical and theoretical results indicating that parameterized EM allows
faster convergence to the maximum likelihood parameters than does standard EM.
|
1302.1520 | Bayes Networks for Sonar Sensor Fusion | cs.AI | Wide-angle sonar mapping of the environment by a mobile robot is nontrivial due
to several sources of uncertainty: dropouts due to "specular" reflections,
obstacle location uncertainty due to the wide beam, and distance measurement
error. Earlier papers address the latter problems, but dropouts remain a
problem in many environments. We present an approach that lifts the
overoptimistic independence assumption used in earlier work, and use Bayes nets
to represent the dependencies between objects of the model. Objects of the
model consist of readings, and of regions in which "quasi location invariance"
of the (possible) obstacles exists, with respect to the readings. Simulation
supports the method's feasibility. The model is readily extensible to allow for
prior distributions, as well as other types of sensing operations.
|
1302.1521 | Exploiting Uncertain and Temporal Information in Correlation | cs.AI | A modelling language is described which is suitable for the correlation of
information when the underlying functional model of the system is incomplete or
uncertain and the temporal dependencies are imprecise. An efficient and
incremental implementation is outlined which depends on cost functions
satisfying certain criteria. Possibilistic logic and probability theory (as it
is used in the applications targeted) satisfy these criteria.
|
1302.1522 | Correlated Action Effects in Decision Theoretic Regression | cs.AI | Much recent research in decision theoretic planning has adopted Markov
decision processes (MDPs) as the model of choice, and has attempted to make
their solution more tractable by exploiting problem structure. One particular
algorithm, structured policy construction achieves this by means of a decision
theoretic analog of goal regression using action descriptions based on Bayesian
networks with tree-structured conditional probability tables. The algorithm as
presented is not able to deal with actions with correlated effects. We describe
a new decision theoretic regression operator that corrects this weakness. While
conceptually straightforward, this extension requires a somewhat more
complicated technical approach.
|
1302.1523 | Corporate Evidential Decision Making in Performance Prediction Domains | cs.AI | Performance prediction or forecasting sporting outcomes involves a great deal
of insight into the particular area one is dealing with, and a considerable
amount of intuition about the factors that bear on such outcomes and
performances. The mathematical Theory of Evidence offers representation
formalisms which grant experts a high degree of freedom when expressing their
subjective beliefs in the context of decision-making situations like
performance prediction. Furthermore, this reasoning framework incorporates a
powerful mechanism to systematically pool the decisions made by individual
subject matter experts. The idea behind such a combination of knowledge is to
improve the competence (quality) of the overall decision-making process. This
paper reports on a performance prediction experiment carried out during the
European Football Championship in 1996. Relying on the knowledge of four
predictors, Evidence Theory was used to forecast the final scores of all 31
matches. The results of this empirical study are very encouraging.
|
1302.1524 | Algorithms for Learning Decomposable Models and Chordal Graphs | cs.AI | Decomposable dependency models and their graphical counterparts, i.e.,
chordal graphs, possess a number of interesting and useful properties. On the
basis of two characterizations of decomposable models in terms of independence
relationships, we develop an exact algorithm for recovering the chordal
graphical representation of any given decomposable model. We also propose an
algorithm for learning chordal approximations of dependency models isomorphic
to general undirected graphs.
|
1302.1525 | Incremental Pruning: A Simple, Fast, Exact Method for Partially
Observable Markov Decision Processes | cs.AI | Most exact algorithms for general partially observable Markov decision
processes (POMDPs) use a form of dynamic programming in which a
piecewise-linear and convex representation of one value function is transformed
into another. We examine variations of the "incremental pruning" method for
solving this problem and compare them to earlier algorithms from theoretical
and empirical perspectives. We find that incremental pruning is presently the
most efficient exact method for solving POMDPs.
|
1302.1526 | Defining Explanation in Probabilistic Systems | cs.AI | As probabilistic systems gain popularity and are coming into wider use, the
need for a mechanism that explains the system's findings and recommendations
becomes more critical. The system will also need a mechanism for ordering
competing explanations. We examine two representative approaches to explanation
in the literature - one due to G\"ardenfors and one due to Pearl - and show
that both suffer from significant problems. We propose an approach to defining
a notion of "better explanation" that combines some of the features of both
together with more recent work by Pearl and others on causality.
|
1302.1527 | Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks | cs.AI | We present an algorithm for arc reversal in Bayesian networks with
tree-structured conditional probability tables, and consider some of its
advantages, especially for the simulation of dynamic probabilistic networks. In
particular, the method allows one to produce CPTs for nodes involved in the
reversal that exploit regularities in the conditional distributions. We argue
that this approach alleviates some of the overhead associated with arc
reversal, plays an important role in evidence integration and can be used to
restrict sampling of variables in DPNs. We also provide an algorithm that
detects the dynamic irrelevance of state variables in forward simulation. This
algorithm exploits the structured CPTs in a reversed network to determine, in a
time-independent fashion, the conditions under which a variable does or does
not need to be sampled.
|
1302.1528 | A Bayesian Approach to Learning Bayesian Networks with Local Structure | cs.LG cs.AI stat.ML | Recently several researchers have investigated techniques for using data to
learn Bayesian networks containing compact representations for the conditional
probability distributions (CPDs) stored at each node. The majority of this work
has concentrated on using decision-tree representations for the CPDs. In
addition, researchers typically apply non-Bayesian (or asymptotically Bayesian)
scoring functions such as MDL to evaluate the goodness-of-fit of networks to
the data. In this paper we investigate a Bayesian approach to learning Bayesian
networks that contain the more general decision-graph representations of the
CPDs. First, we describe how to evaluate the posterior probability (that is,
the Bayesian score) of such a network, given a database of observed cases. Second,
we describe various search spaces that can be used, in conjunction with a
scoring function and a search procedure, to identify one or more high-scoring
networks. Finally, we present an experimental evaluation of the search spaces,
using a greedy algorithm and a Bayesian scoring function.
|
1302.1529 | Exploring Parallelism in Learning Belief Networks | cs.AI cs.LG | It has been shown that a class of probabilistic domain models cannot be
learned correctly by several existing algorithms which employ a single-link
look ahead search. When a multi-link look ahead search is used, the
computational complexity of the learning algorithm increases. We study how to
use parallelism to tackle the increased complexity in learning such models and
to speed up learning in large domains. An algorithm is proposed to decompose
the learning task for parallel processing. A further task decomposition is used
to balance load among processors and to increase the speed-up and efficiency.
For learning from very large datasets, we present a regrouping of the
available processors such that slow data access through files can be replaced
by fast memory access. Our implementation on a parallel computer demonstrates
the effectiveness of the algorithm.
|
1302.1530 | Efficient Induction of Finite State Automata | cs.AI cs.FL | This paper introduces a new algorithm for the induction of complex finite
state automata from samples of behavior. The algorithm is based on information
theoretic principles. The algorithm reduces the search space by many orders of
magnitude over what was previously thought possible. We compare the algorithm
with some existing induction techniques for finite state automata and show that
the algorithm is much superior in both run time and quality of inductions.
|
1302.1531 | Robustness Analysis of Bayesian Networks with Local Convex Sets of
Distributions | cs.AI | Robust Bayesian inference is the calculation of posterior probability bounds
given perturbations in a probabilistic model. This paper focuses on
perturbations that can be expressed locally in Bayesian networks through convex
sets of distributions. Two approaches for combination of local models are
considered. The first approach takes the largest set of joint distributions
that is compatible with the local sets of distributions; we show how to reduce
this type of robust inference to a linear programming problem. The second
approach takes the convex hull of joint distributions generated from the local
sets of distributions; we demonstrate how to apply interior-point optimization
methods to generate posterior bounds and how to generate approximations that
are guaranteed to converge to correct posterior bounds. We also discuss
calculation of bounds for expected utilities and variances, and global
perturbation models.
|
1302.1532 | A Standard Approach for Optimizing Belief Network Inference using Query
DAGs | cs.AI | This paper proposes a novel, algorithm-independent approach to optimizing
belief network inference. Rather than designing optimizations on an
algorithm-by-algorithm basis, we argue that one should use an unoptimized algorithm to
generate a Q-DAG, a compiled graphical representation of the belief network,
and then optimize the Q-DAG and its evaluator instead. We present a set of
Q-DAG optimizations that supplant optimizations designed for traditional
inference algorithms, including zero compression, network pruning and caching.
We show that our Q-DAG optimizations require time linear in the Q-DAG size, and
significantly simplify the process of designing algorithms for optimizing
belief network inference.
|
1302.1533 | Model Reduction Techniques for Computing Approximately Optimal Solutions
for Markov Decision Processes | cs.AI | We present a method for solving implicit (factored) Markov decision processes
(MDPs) with very large state spaces. We introduce a property of state space
partitions which we call epsilon-homogeneity. Intuitively, an
epsilon-homogeneous partition groups together states that behave approximately
the same under all or some subset of policies. Borrowing from recent work on
model minimization in computer-aided software verification, we present an
algorithm that takes a factored representation of an MDP and a parameter
0 <= epsilon <= 1, and computes a factored epsilon-homogeneous partition of the state space. This
partition defines a family of related MDPs - those MDPs with state space equal
to the blocks of the partition, and transition probabilities "approximately"
like those of any (original MDP) state in the source block. To formally study
such families of MDPs, we introduce the new notion of a "bounded parameter MDP"
(BMDP), which is a family of (traditional) MDPs defined by specifying upper and
lower bounds on the transition probabilities and rewards. We describe
algorithms that operate on BMDPs to find policies that are approximately
optimal with respect to the original MDP. In combination, our method for
reducing a large implicit MDP to a possibly much smaller BMDP using an
epsilon-homogeneous partition, and our methods for selecting actions in BMDPs
constitute a new approach for analyzing large implicit MDPs. Among its
advantages, this new approach provides insight into existing algorithms to
solving implicit MDPs, provides useful connections to work in automata theory
and model minimization, and suggests methods, which involve varying epsilon, to
trade time and space (specifically in terms of the size of the corresponding
state space) for solution quality.
|
1302.1534 | A Scheme for Approximating Probabilistic Inference | cs.AI | This paper describes a class of probabilistic approximation algorithms based
on bucket elimination which offer adjustable levels of accuracy and efficiency.
We analyze the approximation for several tasks: finding the most probable
explanation, belief updating and finding the maximum a posteriori hypothesis.
We identify regions of completeness and provide preliminary empirical
evaluation on randomly generated networks.
|
1302.1535 | Myopic Value of Information in Influence Diagrams | cs.AI | We present a method for calculation of myopic value of information in
influence diagrams (Howard & Matheson, 1981) based on the strong junction tree
framework (Jensen, Jensen & Dittmer, 1994). The difference in instantiation
order in the influence diagrams is reflected in the corresponding junction
trees by the order in which the chance nodes are marginalized. This order of
marginalization can be changed by table expansion and in effect the same
junction tree with expanded tables may be used for calculating the expected
utility for scenarios with different instantiation order. We also compare our
method to the classic method of modeling different instantiation orders in the
same influence diagram.
|
1302.1536 | Limitations of Skeptical Default Reasoning | cs.AI | Poole has shown that nonmonotonic logics do not handle the lottery paradox
correctly. In this paper we will show that Pollock's theory of defeasible
reasoning fails for the same reason: defeasible reasoning is incompatible with
the skeptical notion of derivability.
|
1302.1537 | Decision-making Under Ordinal Preferences and Comparative Uncertainty | cs.AI | This paper investigates the problem of finding a preference relation on a set
of acts from the knowledge of an ordering on events (subsets of states of the
world) describing the decision-maker's (DM's) uncertainty and an ordering of
the consequences of acts describing the DM's preferences. However, contrary to
classical approaches to decision theory, we try to do it without resorting to
any numerical representation of utility nor uncertainty, and without even using
any qualitative scale on which both uncertainty and preference could be mapped.
It is shown that although many axioms of Savage theory can be preserved and
despite the intuitive appeal of the method for constructing a preference over
acts, the approach is inconsistent with a probabilistic representation of
uncertainty, but leads to the kind of uncertainty theory encountered in
non-monotonic reasoning (especially preferential and rational inference),
closely related to possibility theory. Moreover, the method turns out either
to be hardly decisive or to lead to very risky decisions, although its basic
principles look sound. This paper raises the question of the very
possibility of purely symbolic approaches to Savage-like decision-making under
uncertainty and obtains preliminary negative results.
|
1302.1538 | Sequential Update of Bayesian Network Structure | cs.AI cs.LG | There is an obvious need for improving the performance and accuracy of a
Bayesian network as new data is observed. Because of errors in model
construction and changes in the dynamics of the domains, we cannot afford to
ignore the information in new data. While sequential update of parameters for a
fixed structure can be accomplished using standard techniques, sequential
update of network structure is still an open problem. In this paper, we
investigate the sequential update of Bayesian networks where both parameters and
structure are expected to change. We introduce a new approach that allows for
the flexible manipulation of the tradeoff between the quality of the learned
networks and the amount of information that is maintained about past
observations. We formally describe our approach including the necessary
modifications to the scoring functions for learning Bayesian networks, evaluate
its effectiveness through an empirical study, and extend it to the case of
missing data.
|
1302.1539 | Image Segmentation in Video Sequences: A Probabilistic Approach | cs.CV cs.AI | "Background subtraction" is an old technique for finding moving objects in a
video sequence, for example, cars driving on a freeway. The idea is that
subtracting the current image from a time-averaged background image will leave
only non-stationary objects. It is, however, a crude approximation to the task
of classifying each pixel of the current image; it fails with slow-moving
objects and does not distinguish shadows from moving objects. The basic idea of
this paper is that we can classify each pixel using a model of how that pixel
looks when it is part of different classes. We learn a mixture-of-Gaussians
classification model for each pixel using an unsupervised technique: an
efficient, incremental version of EM. Unlike the standard image-averaging
approach, this automatically updates the mixture component for each class
according to likelihood of membership; hence slow-moving objects are handled
perfectly. Our approach also identifies and eliminates shadows much more
effectively than other techniques such as thresholding. Application of this
method as part of the Roadwatch traffic surveillance project is expected to
result in significant improvements in vehicle identification and tracking.
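A minimal per-pixel sketch of the incremental mixture-of-Gaussians idea is
given below; the learning-rate update is a common online-EM-style
approximation and all constants are assumptions, not the paper's exact
procedure:

```python
import numpy as np

class PixelMixture:
    """Online mixture-of-Gaussians model for a single pixel's intensity
    (a sketch of the idea, not the paper's exact incremental EM)."""
    def __init__(self, k=3, lr=0.02, var0=225.0):
        self.w = np.full(k, 1.0 / k)            # mixture weights
        self.mu = np.linspace(0.0, 255.0, k)    # component means
        self.var = np.full(k, var0)             # component variances
        self.lr = lr

    def update(self, x):
        # E-step: responsibility of each component for the new intensity
        p = self.w * np.exp(-0.5 * (x - self.mu) ** 2 / self.var) / np.sqrt(self.var)
        r = p / (p.sum() + 1e-12)
        # Incremental M-step: nudge each component toward the observation
        self.w += self.lr * (r - self.w)
        self.mu += self.lr * r * (x - self.mu) / (self.w + 1e-3)
        self.var += self.lr * r * ((x - self.mu) ** 2 - self.var) / (self.w + 1e-3)
        self.var = np.maximum(self.var, 1.0)    # keep variances positive
        return int(np.argmax(r))                # most likely class for this pixel

pix = PixelMixture()
for intensity in (120, 122, 119, 240, 121):     # 240 = passing object/shadow
    label = pix.update(intensity)
```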
|
1302.1540 | The Complexity of Plan Existence and Evaluation in Probabilistic Domains | cs.AI | We examine the computational complexity of testing and finding small plans in
probabilistic planning domains with succinct representations. We find that many
problems of interest are complete for a variety of complexity classes: NP,
co-NP, PP, NP^PP, co-NP^PP, and PSPACE. Of these, the probabilistic classes PP
and NP^PP are likely to be of special interest in the field of uncertainty in
artificial intelligence and are deserving of additional study. These results
suggest a fruitful direction of future algorithmic development.
|
1302.1541 | Algorithm Portfolio Design: Theory vs. Practice | cs.AI | Stochastic algorithms are among the best for solving computationally hard
search and reasoning problems. The runtime of such procedures is characterized
by a random variable. Different algorithms give rise to different probability
distributions. One can take advantage of such differences by combining several
algorithms into a portfolio, and running them in parallel or interleaving them
on a single processor. We provide a detailed evaluation of the portfolio
approach on distributions of hard combinatorial search problems. We show under
what conditions the portfolio approach can have a dramatic computational
advantage over the best traditional methods.
|
1302.1542 | Learning Bayesian Nets that Perform Well | cs.AI cs.LG | A Bayesian net (BN) is more than a succinct way to encode a probability
distribution; it also corresponds to a function used to answer queries. A BN
can therefore be evaluated by the accuracy of the answers it returns. Many
algorithms for learning BNs, however, attempt to optimize another criterion
(usually likelihood, possibly augmented with a regularizing term), which is
independent of the distribution of queries that are posed. This paper takes the
"performance criteria" seriously, and considers the challenge of computing the
BN whose performance - read "accuracy over the distribution of queries" - is
optimal. We show that many aspects of this learning task are more difficult
than the corresponding subtasks in the standard model.
|
1302.1543 | Probability Update: Conditioning vs. Cross-Entropy | cs.AI | Conditioning is the generally agreed-upon method for updating probability
distributions when one learns that an event is certainly true. But it has been
argued that we need other rules, in particular the rule of cross-entropy
minimization, to handle updates that involve uncertain information. In this
paper we re-examine such a case: van Fraassen's Judy Benjamin problem, which in
essence asks how one might update given the value of a conditional probability.
We argue that -- contrary to the suggestions in the literature -- it is
possible to use simple conditionalization in this case, and thereby obtain
answers that agree fully with intuition. This contrasts with proposals such as
cross-entropy, which are easier to apply but can give unsatisfactory answers.
Based on the lessons from this example, we speculate on some general
philosophical issues concerning probability update.
|
1302.1544 | Problem-Focused Incremental Elicitation of Multi-Attribute Utility
Models | cs.AI cs.GT | Decision theory has become widely accepted in the AI community as a useful
framework for planning and decision making. Applying the framework typically
requires elicitation of some form of probability and utility information. While
much work in AI has focused on providing representations and tools for
elicitation of probabilities, relatively little work has addressed the
elicitation of utility models. This imbalance is not particularly justified
considering that probability models are relatively stable across problem
instances, while utility models may be different for each instance. Spending
large amounts of time on elicitation can be undesirable for interactive systems
used in low-stakes decision making and in time-critical decision making. In
this paper we investigate the issues of reasoning with incomplete utility
models. We identify patterns of problem instances where plans can be proved to
be suboptimal if the (unknown) utility function satisfies certain conditions.
We present an approach to planning and decision making that performs the
utility elicitation incrementally and in a way that is informed by the domain
model.
|
1302.1545 | Models and Selection Criteria for Regression and Classification | cs.LG stat.ML | When performing regression or classification, we are interested in the
conditional probability distribution for an outcome or class variable Y given a
set of explanatory or input variables X. We consider Bayesian models for this
task. In particular, we examine a special class of models, which we call
Bayesian regression/classification (BRC) models, that can be factored into
independent conditional (y|x) and input (x) models. These models are
convenient, because the conditional model (the portion of the full model that
we care about) can be analyzed by itself. We examine the practice of
transforming arbitrary Bayesian models to BRC models, and argue that this
practice is often inappropriate because it ignores prior knowledge that may be
important for learning. In addition, we examine Bayesian methods for learning
models from data. We discuss two criteria for Bayesian model selection that are
appropriate for regression/classification: one described by Spiegelhalter et
al. (1993), and another by Buntine (1993). We contrast these two criteria using
the prequential framework of Dawid (1984), and give sufficient conditions under
which the criteria agree.
|
1302.1546 | Inference with Idempotent Valuations | cs.AI | Valuation-based systems satisfying an idempotency property are studied. A
partial order is defined between the valuations giving them a lattice
structure. Then, two different strategies are introduced to represent
valuations: as infimum of the most informative valuations or as supremum of the
least informative ones. It is studied how to carry out computations with both
representations in an efficient way. The particular cases of finite sets and
convex polytopes are considered.
|
1302.1547 | Perception, Attention, and Resources: A Decision-Theoretic Approach to
Graphics Rendering | cs.AI cs.GR | We describe work to control graphics rendering under limited computational
resources by taking a decision-theoretic perspective on perceptual costs and
computational savings of approximations. The work extends earlier work on the
control of rendering by introducing methods and models for computing the
expected cost associated with degradations of scene components. The expected
cost is computed by considering the perceptual cost of degradations and a
probability distribution over the attentional focus of viewers. We review the
critical literature describing findings on visual search and attention, discuss
the implications of the findings, and introduce models of expected perceptual
cost. Finally, we discuss policies that harness information about the expected
cost of scene components.
|
1302.1548 | Time-Critical Reasoning: Representations and Application | cs.AI | We review the problem of time-critical action and discuss a reformulation
that shifts knowledge acquisition from the assessment of complex temporal
probabilistic dependencies to the direct assessment of time-dependent utilities
over key outcomes of interest. We dwell on a class of decision problems
characterized by the centrality of diagnosing and reacting in a timely manner
to pathological processes. We motivate key ideas in the context of trauma-care
triage and transportation decisions.
|
1302.1549 | Learning Belief Networks in Domains with Recursively Embedded Pseudo
Independent Submodels | cs.AI cs.LG | A pseudo independent (PI) model is a probabilistic domain model (PDM) where
proper subsets of a set of collectively dependent variables display marginal
independence. PI models cannot be learned correctly by many algorithms that
rely on a single link search. Earlier work on learning PI models has suggested
a straightforward multi-link search algorithm. However, when a domain contains
recursively embedded PI submodels, it may escape the detection of such an
algorithm. In this paper, we propose an improved algorithm that ensures the
learning of all embedded PI submodels whose sizes are upper bounded by a
predetermined parameter. We show that this improved learning capability only
increases the complexity slightly beyond that of the previous algorithm. The
performance of the new algorithm is demonstrated through experiment.
|
1302.1550 | Relational Bayesian Networks | cs.AI | A new method is developed to represent probabilistic relations on multiple
random events. Where previously knowledge bases containing probabilistic rules
were used for this purpose, here a probability distribution over the relations
is directly represented by a Bayesian network. By using a powerful way of
specifying conditional probability distributions in these networks, the
resulting formalism is more expressive than the previous ones. Particularly, it
provides for constraints on equalities of events, and it allows the definition
of complex, nested combination functions.
|
1302.1551 | Composition of Probability Measures on Finite Spaces | cs.AI | Decomposable models and Bayesian networks can be defined as sequences of
oligo-dimensional probability measures connected with operators of composition.
The preliminary results suggest that the probabilistic models allowing for
effective computational procedures are represented by sequences possessing a
special property; we shall call them perfect sequences. The paper lays down the
elementary foundation necessary for further study of iterative application of
operators of composition. We aim to develop a technique that describes several
graph models in a unifying way. We are convinced that practically all
theoretical results and procedures connected with decomposable models and
Bayesian networks can be translated into the terminology introduced in this
paper. For example, the complexity of computational procedures in these models
depends closely on the possibility of changing the ordering of the
oligo-dimensional measures defining the model. Therefore, in this paper, much
attention is paid to the possibility of changing the ordering of the operators
of composition.
|
1302.1552 | An Information-Theoretic Analysis of Hard and Soft Assignment Methods
for Clustering | cs.LG stat.ML | Assignment methods are at the heart of many algorithms for unsupervised
learning and clustering - in particular, the well-known K-means and
Expectation-Maximization (EM) algorithms. In this work, we study several
different methods of assignment, including the "hard" assignments used by
K-means and the "soft" assignments used by EM. While it is known that K-means
minimizes the distortion on the data and EM maximizes the likelihood, little is
known about the systematic differences of behavior between the two algorithms.
Here we shed light on these differences via an information-theoretic analysis.
The cornerstone of our results is a simple decomposition of the expected
distortion, showing that K-means (and its extension for inferring general
parametric densities from unlabeled sample data) must implicitly manage a
trade-off between how similar the data assigned to each cluster are, and how
the data are balanced among the clusters. How well the data are balanced is
measured by the entropy of the partition defined by the hard assignments. In
addition to letting us predict and verify systematic differences between
K-means and EM on specific examples, the decomposition allows us to give a
rather general argument showing that K-means will consistently find densities
with less "overlap" than EM. We also study a third natural assignment method
that we call posterior assignment, that is close in spirit to the soft
assignments of EM, but leads to a surprisingly different algorithm.
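The contrast between the two assignment styles, and the partition-entropy
"balance" term from the decomposition, can be sketched in a few lines
(synthetic data, fixed centers, and unit-variance components are assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 2)) + rng.choice([-2.0, 2.0], size=(200, 1))
centers = np.array([[-2.0, 0.0], [2.0, 0.0]])    # fixed for the illustration

d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
hard = np.argmin(d2, axis=1)                     # K-means: nearest center wins
soft = np.exp(-0.5 * d2)
soft /= soft.sum(axis=1, keepdims=True)          # EM: posterior responsibilities

# Entropy of the hard partition -- the "balance" term in the decomposition
freq = np.bincount(hard, minlength=2) / len(X)
print(-(freq * np.log2(freq + 1e-12)).sum())     # near 1 bit when balanced
```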
|
1302.1553 | Nested Junction Trees | cs.AI | The efficiency of inference in both the Hugin and, most notably, the
Shafer-Shenoy architectures can be improved by exploiting the independence
relations induced by the incoming messages of a clique. That is, the message to
be sent from a clique can be computed via a factorization of the clique
potential in the form of a junction tree. In this paper we show that by
exploiting such nested junction trees in the computation of messages both space
and time costs of the conventional propagation methods may be reduced. The
paper presents a structured way of exploiting the nested junction trees
technique to achieve such reductions. The usefulness of the method is
emphasized through a thorough empirical evaluation involving ten large
real-world Bayesian networks and the Hugin inference algorithm.
|
1302.1554 | Object-Oriented Bayesian Networks | cs.AI | Bayesian networks provide a modeling language and associated inference
algorithm for stochastic domains. They have been successfully applied in a
variety of medium-scale applications. However, when faced with a large complex
domain, the task of modeling using Bayesian networks begins to resemble the
task of programming using logical circuits. In this paper, we describe an
object-oriented Bayesian network (OOBN) language, which allows complex domains
to be described in terms of inter-related objects. We use a Bayesian network
fragment to describe the probabilistic relations between the attributes of an
object. These attributes can themselves be objects, providing a natural
framework for encoding part-of hierarchies. Classes are used to provide a
reusable probabilistic model which can be applied to multiple similar objects.
Classes also support inheritance of model fragments from a class to a subclass,
allowing the common aspects of related classes to be defined only once. Our
language has clear declarative semantics: an OOBN can be interpreted as a
stochastic functional program, so that it uniquely specifies a probabilistic
model. We provide an inference algorithm for OOBNs, and show that much of the
structural information encoded by an OOBN--particularly the encapsulation of
variables within an object and the reuse of model fragments in different
contexts--can also be used to speed up the inference process.
|
1302.1555 | Nonuniform Dynamic Discretization in Hybrid Networks | cs.AI | We consider probabilistic inference in general hybrid networks, which include
continuous and discrete variables in an arbitrary topology. We reexamine the
question of variable discretization in a hybrid network aiming at minimizing
the information loss induced by the discretization. We show that a nonuniform
partition across all variables as opposed to uniform partition of each variable
separately reduces the size of the data structures needed to represent a
continuous function. We also provide a simple but efficient procedure for
nonuniform partition. To represent a nonuniform discretization in the computer
memory, we introduce a new data structure, which we call a Binary Split
Partition (BSP) tree. We show that BSP trees can be an exponential factor
smaller than the data structures in the standard uniform discretization in
multiple dimensions and show how the BSP trees can be used in the standard join
tree algorithm. We show that the accuracy of the inference process can be
significantly improved by adjusting discretization with evidence. We construct
an iterative anytime algorithm that gradually improves the quality of the
discretization and the accuracy of the answer on a query. We provide empirical
evidence that the algorithm converges.
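
A minimal sketch of the kind of binary split structure described above; the
class name, fields, and split policy are hypothetical, and the paper's BSP
trees may differ in detail:

    class BSPNode:
        # Binary Split Partition tree over a hyper-rectangle. A leaf stores a
        # constant function value; an internal node splits one dimension at a
        # threshold, so resolution is spent only where the function varies.
        def __init__(self, value=None, dim=None, threshold=None, low=None, high=None):
            self.value = value          # set on leaves
            self.dim = dim              # split dimension (internal nodes)
            self.threshold = threshold  # split point (internal nodes)
            self.low, self.high = low, high

        def query(self, point):
            if self.value is not None:
                return self.value
            child = self.low if point[self.dim] <= self.threshold else self.high
            return child.query(point)

    # f(x, y) that is flat for x <= 0.5 and varies with y only for x > 0.5:
    tree = BSPNode(dim=0, threshold=0.5,
                   low=BSPNode(value=0.0),
                   high=BSPNode(dim=1, threshold=0.5,
                                low=BSPNode(value=1.0),
                                high=BSPNode(value=2.0)))
    print(tree.query((0.3, 0.9)))  # 0.0
    print(tree.query((0.8, 0.9)))  # 2.0
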
|
1302.1556 | Probabilistic Acceptance | cs.AI | The idea of fully accepting statements when the evidence has rendered them
probable enough faces a number of difficulties. We leave the interpretation of
probability largely open, but attempt to suggest a contextual approach to full
belief. We show that the difficulties of probabilistic acceptance are not as
severe as they are sometimes painted, and that though there are oddities
associated with probabilistic acceptance they are in some instances less
awkward than the difficulties associated with other nonmonotonic formalisms. We
show that the structure at which we arrive provides a natural home for
statistical inference.
|
1302.1557 | Network Fragments: Representing Knowledge for Constructing Probabilistic
Models | cs.AI | In most current applications of belief networks, domain knowledge is
represented by a single belief network that applies to all problem instances in
the domain. In more complex domains, problem-specific models must be
constructed from a knowledge base encoding probabilistic relationships in the
domain. Most work in knowledge-based model construction takes the rule as the
basic unit of knowledge. We present a knowledge representation framework that
permits the knowledge base designer to specify knowledge in larger semantically
meaningful units which we call network fragments. Our framework provides for
representation of asymmetric independence and canonical intercausal
interaction. We discuss the combination of network fragments to form
problem-specific models to reason about particular problem instances. The
framework is illustrated using examples from the domain of military situation
awareness.
|
1302.1558 | Computational Advantages of Relevance Reasoning in Bayesian Belief
Networks | cs.AI | This paper introduces a computational framework for reasoning in Bayesian
belief networks that derives significant advantages from focused inference and
relevance reasoning. This framework is based on d-separation and other simple
and computationally efficient techniques for pruning irrelevant parts of a
network. Our main contribution is a technique that we call relevance-based
decomposition. Relevance-based decomposition approaches belief updating in
large networks by focusing on their parts and decomposing them into partially
overlapping subnetworks. This makes reasoning in some intractable networks
possible and, in addition, often results in significant speedup, as the total
time taken to update all subnetworks is in practice often considerably less
than the time taken to update the network as a whole. We report results of
empirical tests that demonstrate the practical significance of our approach.
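
The paper's relevance-based decomposition is more involved than what fits
here, but one standard relevance technique it builds on (pruning nodes that
are not ancestors of the query or evidence, so contribute nothing to belief
updating) can be sketched as follows; the dict-of-parents graph encoding is an
assumption:

    def ancestors(parents, nodes):
        # All ancestors of `nodes` (inclusive) in a DAG given as {child: [parents]}.
        seen, stack = set(), list(nodes)
        while stack:
            n = stack.pop()
            if n not in seen:
                seen.add(n)
                stack.extend(parents.get(n, []))
        return seen

    def prune_irrelevant(parents, query, evidence):
        # Keep only nodes relevant to P(query | evidence): ancestors of either set.
        keep = ancestors(parents, set(query) | set(evidence))
        return {n: [p for p in ps if p in keep] for n, ps in parents.items() if n in keep}

    # A -> B -> C and B -> D; querying C given A makes D irrelevant:
    net = {"A": [], "B": ["A"], "C": ["B"], "D": ["B"]}
    print(prune_irrelevant(net, query={"C"}, evidence={"A"}))  # D is removed
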
|
1302.1559 | Incremental Map Generation by Low Cost Robots Based on
Possibility/Necessity Grids | cs.RO cs.AI | In this paper we present some results obtained with a troupe of low-cost
robots designed to cooperatively explore and acquire the map of unknown
structured orthogonal environments. In order to improve the coverage of the
explored zone, the robots show different behaviours and cooperate by
transferring to each other the perceived environment when they meet. The
returning robots deliver their partial maps to a host computer, and the host
incrementally generates the map of the environment by means of a
possibility/necessity grid.
|
1302.1560 | A Target Classification Decision Aid | cs.AI | A submarine's sonar team is responsible for detecting, localising and
classifying targets using information provided by the platform's sensor suite.
The information used to make these assessments is typically uncertain and/or
incomplete and is likely to require a measure of confidence in its reliability.
Moreover, improvements in sensor and communication technology are resulting in
increased amounts of on-platform and off-platform information available for
evaluation. This proliferation of imprecise information increases the risk of
overwhelming the operator. To assist the task of localisation and
classification a concept demonstration decision aid (Horizon), based on
evidential reasoning, has been developed. Horizon is an information fusion
software package for representing and fusing imprecise information about the
state of the world, expressed across suitable frames of reference. The Horizon
software is currently at prototype stage.
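
Horizon's internals are not described in the abstract, but evidential
reasoning of this kind typically rests on Dempster's rule of combination; a
minimal sketch over a small frame of discernment (the frame and the mass
values are made up for illustration):

    from itertools import product

    def combine(m1, m2):
        # Dempster's rule: combine two mass functions keyed by frozenset focal elements.
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to the empty set
        if conflict >= 1.0:
            raise ValueError("totally conflicting evidence")
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    # Two sensors reporting on a target class in {sub, surf, bio}:
    frame = frozenset({"sub", "surf", "bio"})
    sensor1 = {frozenset({"sub"}): 0.6, frame: 0.4}          # 0.4 uncommitted
    sensor2 = {frozenset({"sub", "surf"}): 0.7, frame: 0.3}
    print(combine(sensor1, sensor2))
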
|
1302.1561 | Structure and Parameter Learning for Causal Independence and Causal
Interaction Models | cs.AI cs.LG | This paper discusses causal independence models and a generalization of these
models called causal interaction models. Causal interaction models are models
that have independent mechanisms where a mechanism can have several causes. In
addition to introducing several particular types of causal interaction models,
we show how we can apply the Bayesian approach to learning causal interaction
models, obtaining approximate posterior distributions for the models as well
as MAP and ML estimates for the parameters. We illustrate the approach with a
simulation study of learning model posteriors.
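
The best-known causal independence model is the noisy-OR, in which each active
cause has an independent chance of producing the effect; the interaction
models in the paper generalize this idea. A minimal sketch:

    def noisy_or(cause_states, link_probs, leak=0.0):
        # P(effect = 1) under noisy-OR causal independence.
        # cause_states: 0/1 state of each cause.
        # link_probs:   probability that cause i alone produces the effect.
        # leak:         probability the effect occurs with no active cause.
        p_none = 1.0 - leak
        for on, p in zip(cause_states, link_probs):
            if on:
                p_none *= 1.0 - p        # each active cause fails independently
        return 1.0 - p_none

    # Two active causes with link probabilities 0.8 and 0.5:
    print(noisy_or([1, 1], [0.8, 0.5]))  # 1 - 0.2 * 0.5 = 0.9
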
|
1302.1562 | Support and Plausibility Degrees in Generalized Functional Models | cs.AI | By discussing several examples, the theory of generalized functional models
is shown to be very natural for modeling some situations of reasoning under
uncertainty. A generalized functional model is a pair (f, P) where f is a
function describing the interactions between a parameter variable, an
observation variable and a random source, and P is a probability distribution
for the random source. Unlike traditional functional models, generalized
functional models do not require that there is only one value of the parameter
variable that is compatible with an observation and a realization of the random
source. As a consequence, the results of the analysis of a generalized
functional model are not expressed in terms of probability distributions but
rather by support and plausibility functions. The analysis of a generalized
functional model is very logical and is inspired by ideas already put forward
by R.A. Fisher in his theory of fiducial probability.
|
1302.1563 | The Cognitive Processing of Causal Knowledge | cs.AI | There is a brief description of the probabilistic causal graph model for
representing, reasoning with, and learning causal structure using Bayesian
networks. It is then argued that this model is closely related to how humans
reason with and learn causal structure. It is shown that studies in psychology
on discounting (reasoning concerning how the presence of one cause of an effect
makes another cause less probable) support the hypothesis that humans reach the
same judgments as algorithms for doing inference in Bayesian networks. Next, it
is shown how studies by Piaget indicate that humans learn causal structure by
observing the same independencies and dependencies as those used by certain
algorithms for learning the structure of a Bayesian network. Based on this
indication, a subjective definition of causality is forwarded. Finally, methods
for further testing the accuracy of these claims are discussed.
|
1302.1564 | Representing Aggregate Belief through the Competitive Equilibrium of a
Securities Market | cs.AI cs.GT q-fin.GN | We consider the problem of belief aggregation: given a group of individual
agents with probabilistic beliefs over a set of uncertain events, formulate a
sensible consensus or aggregate probability distribution over these events.
Researchers have proposed many aggregation methods, although on the question of
which is best the general consensus is that there is no consensus. We develop a
market-based approach to this problem, where agents bet on uncertain events by
buying or selling securities contingent on their outcomes. Each agent acts in
the market so as to maximize expected utility at given securities prices,
limited in its activity only by its own risk aversion. The equilibrium prices
of goods in this market represent aggregate beliefs. For agents with constant
risk aversion, we demonstrate that the aggregate probability exhibits several
desirable properties, and is related to independently motivated techniques. We
argue that the market-based approach provides a plausible mechanism for belief
aggregation in multiagent systems, as it directly addresses self-motivated
agent incentives for participation and for truthfulness, and can provide a
decision-theoretic foundation for the "expert weights" often employed in
centralized pooling techniques.
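
For agents with constant absolute risk aversion, the equilibrium price of an
Arrow security is known to take the form of a risk-tolerance-weighted
geometric mean of the individual beliefs, normalized over events; the sketch
below assumes that form and made-up inputs, and is not the paper's derivation:

    import numpy as np

    def aggregate_beliefs(beliefs, risk_tolerances):
        # Competitive-equilibrium aggregate for CARA agents (sketch).
        # beliefs:         (n_agents, n_events) individual probabilities.
        # risk_tolerances: per-agent risk tolerance; more tolerant agents
        #                  bet more aggressively and move prices more.
        w = np.asarray(risk_tolerances, dtype=float)
        w = w / w.sum()
        log_agg = w @ np.log(np.asarray(beliefs, dtype=float))
        agg = np.exp(log_agg)             # weighted geometric mean per event
        return agg / agg.sum()            # normalize across events

    beliefs = [[0.9, 0.1],
               [0.5, 0.5]]
    print(aggregate_beliefs(beliefs, [1.0, 1.0]))  # lies between the two opinions
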
|
1302.1565 | Learning Bayesian Networks from Incomplete Databases | cs.AI cs.LG | Bayesian approaches to learn the graphical structure of Bayesian Belief
Networks (BBNs) from databases share the assumption that the database is
complete, that is, no entry is reported as unknown. Attempts to relax this
assumption involve the use of expensive iterative methods to discriminate among
different structures. This paper introduces a deterministic method to learn the
graphical structure of a BBN from a possibly incomplete database. Experimental
evaluations show a significant robustness of this method and a remarkable
independence of its execution time from the number of missing data.
|
1302.1567 | Cost-Sharing in Bayesian Knowledge Bases | cs.AI | Bayesian knowledge bases (BKBs) are a generalization of Bayes networks and
weighted proof graphs (WAODAGs) that allows cycles in the causal graph.
Reasoning in BKBs requires finding the most probable inferences consistent with
the evidence. The cost-sharing heuristic for finding least-cost explanations in
WAODAGs was presented and shown to be effective by Charniak and Husain.
However, the cycles in BKBs would make the definition of cost-sharing cyclic as
well, if applied directly to BKBs. By treating the defining equations of
cost-sharing as a system of equations, one can properly define an admissible
cost-sharing heuristic for BKBs. Empirical evaluation shows that cost-sharing
improves performance significantly when applied to BKBs.
|
1302.1568 | Conditional Utility, Utility Independence, and Utility Networks | cs.GT cs.AI | We introduce a new interpretation of two related notions - conditional
utility and utility independence. Unlike the traditional interpretation, the
new interpretation renders the notions the direct analogues of their
probabilistic counterparts. To capture these notions formally, we appeal to the
notion of utility distribution, introduced in a previous paper. We show that
utility distributions, which have a structure that is identical to that of
probability distributions, can be viewed as a special case of additive
multiattribute utility functions, and show how this special case permits us to
capture the novel senses of conditional utility and utility independence.
Finally, we present the notion of utility networks, which do for utilities what
Bayesian networks do for probabilities. Specifically, utility networks exploit
the new interpretation of conditional utility and utility independence to
compactly represent a utility distribution.
|
1302.1569 | Sequential Thresholds: Context Sensitive Default Extensions | cs.AI | Default logic encounters some conceptual difficulties in representing common
sense reasoning tasks. We argue that we should not try to formulate modular
default rules that are presumed to work in all or most circumstances. We need
to take into account the importance of the context which is continuously
evolving during the reasoning process. Sequential thresholding is a
quantitative counterpart of default logic which makes explicit the role context
plays in the construction of a non-monotonic extension. We present a semantic
characterization of generic non-monotonic reasoning, as well as the
instantiations pertaining to default logic and sequential thresholding. This
provides a link between the two mechanisms as well as a way to integrate the
two that can be beneficial to both.
|
1302.1570 | On Stable Multi-Agent Behavior in Face of Uncertainty | cs.AI | A stable joint plan should guarantee the achievement of a designer's goal in
a multi-agent environment, while ensuring that deviations from the prescribed
plan would be detected. We present a computational framework where stable joint
plans can be studied, as well as several basic results about the
representation, verification and synthesis of stable joint plans.
|
1302.1571 | Score and Information for Recursive Exponential Models with Incomplete
Data | stat.ME cs.AI | Recursive graphical models usually underlie the statistical modelling
concerning probabilistic expert systems based on Bayesian networks. This paper
defines a version of these models, denoted as recursive exponential models,
which have evolved by the desire to impose sophisticated domain knowledge onto
local fragments of a model. Besides the structural knowledge, as specified by a
given model, the statistical modelling may also include expert opinion about
the values of parameters in the model. It is shown how to translate imprecise
expert knowledge into approximately conjugate prior distributions. Based on
possibly incomplete data, the score and the observed information are derived
for these models. This accounts for both the traditional score and observed
information, derived as derivatives of the log-likelihood, and the posterior
score and observed information, derived as derivatives of the log-posterior
distribution. Throughout the paper the specialization into recursive graphical
models is accounted for by a simple example.
|
1302.1572 | Lexical Access for Speech Understanding using Minimum Message Length
Encoding | cs.CL | The Lexical Access Problem consists of determining the intended sequence of
words corresponding to an input sequence of phonemes (basic speech sounds) that
come from a low-level phoneme recognizer. In this paper we present an
information-theoretic approach based on the Minimum Message Length Criterion
for solving the Lexical Access Problem. We model sentences using phoneme
realizations seen in training, and word and part-of-speech information obtained
from text corpora. We show results on multiple-speaker, continuous, read speech
and discuss a heuristic using equivalence classes of similar sounding words
which speeds up the recognition process without significant deterioration in
recognition accuracy.
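
The Minimum Message Length criterion chooses the hypothesis minimizing the
two-part message length -log2 P(H) - log2 P(D|H). A toy sketch of ranking
candidate word sequences for one phoneme string; the probabilities below are
made-up stand-ins for the paper's phoneme and language models:

    import math

    def message_length_bits(prior, likelihood):
        # Two-part MML cost in bits: encode the hypothesis, then the data given it.
        return -math.log2(prior) - math.log2(likelihood)

    candidates = {
        "recognize speech":   (1e-6, 1e-3),   # (language-model prior, acoustic likelihood)
        "wreck a nice beach": (1e-9, 5e-3),
    }
    best = min(candidates, key=lambda w: message_length_bits(*candidates[w]))
    for w, (p, l) in candidates.items():
        print(w, round(message_length_bits(p, l), 1), "bits")
    print("MML choice:", best)
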
|
1302.1573 | Region-Based Approximations for Planning in Stochastic Domains | cs.AI | This paper is concerned with planning in stochastic domains by means of
partially observable Markov decision processes (POMDPs). POMDPs are difficult
to solve. This paper identifies a subclass of POMDPs called region observable
POMDPs, which are easier to solve and can be used to approximate general POMDPs
to arbitrary accuracy.
|
1302.1574 | Independence of Causal Influence and Clique Tree Propagation | cs.AI | This paper explores the role of independence of causal influence (ICI) in
Bayesian network inference. ICI allows one to factorize a conditional
probability table into smaller pieces. We describe a method for exploiting the
factorization in clique tree propagation (CTP) - the state-of-the-art exact
inference algorithm for Bayesian networks. We also present empirical results
showing that the resulting algorithm is significantly more efficient than the
combination of CTP and previous techniques for exploiting ICI.
|
1302.1575 | Fast Value Iteration for Goal-Directed Markov Decision Processes | cs.AI | Planning problems where effects of actions are non-deterministic can be
modeled as Markov decision processes. Planning problems are usually
goal-directed. This paper proposes several techniques for exploiting the
goal-directedness to accelerate value iteration, a standard algorithm for
solving Markov decision processes. Empirical studies have shown that the
techniques can bring about significant speedups.
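
For reference, plain value iteration on a goal-directed (stochastic shortest
path) MDP looks like the sketch below; the paper's accelerations change how
states are swept, which is not shown, and all names here are hypothetical:

    def value_iteration(states, actions, trans, cost, goal, eps=1e-6):
        # trans[s][a] -> list of (next_state, prob); V[s] = expected cost to goal.
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                if s == goal:
                    continue
                q = [cost(s, a) + sum(p * V[t] for t, p in trans[s][a])
                     for a in actions(s)]
                new_v = min(q)
                delta = max(delta, abs(new_v - V[s]))
                V[s] = new_v
            if delta < eps:
                return V

    # Two-state chain: from s0, action "go" reaches the goal w.p. 0.9, else stays.
    states = ["s0", "goal"]
    trans = {"s0": {"go": [("goal", 0.9), ("s0", 0.1)]}}
    V = value_iteration(states, lambda s: ["go"], trans, lambda s, a: 1.0, "goal")
    print(V["s0"])  # expected number of steps: 1/0.9 ~ 1.111
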
|
1302.1592 | Required Base Station Density in Coordinated Multi-Point Uplink with
Rate Constraints | cs.IT math.IT | In this paper we obtain the required spatial density of base stations (BSs)
in a coordinated multi-point uplink cellular network to meet a chosen quality
of service metric. Our model assumes cooperation amongst two BSs and the
required density is obtained under shadowing and Rayleigh fading for different
LTE-A path loss models. The proposed approach guarantees that the worst-case
achievable rate in the entire coverage region is above a target rate with
chosen probability. Two models for the position of the BSs are considered: a
hexagonal grid and a Poisson point process (PPP) modified to set a minimum cell
size. First, for each cooperation region, the location with the minimum rate
coverage probability - the worst-case point - is determined. Next, accurate
closed-form approximations are obtained for the worst-case rate coverage
probability. The approximations presented are useful for the quick assessment
of network performance and can be utilized in parametric studies for network
design. Here, they are applied to obtain the required density of BSs to achieve
a target rate coverage probability. As an added benefit, the formulation here
quantifies the penalty in moving from a regular BS deployment (the grid model)
to a random BS deployment (the PPP model).
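
As a quick illustration of the quantity being approximated, the rate coverage
probability at a fixed worst-case point can be estimated by Monte Carlo. The
sketch below is a simplification of the paper's model: two cooperating BSs,
Rayleigh fading (exponential power gains), per-link SNRs added under an
assumed joint-reception combining, and shadowing omitted:

    import numpy as np

    rng = np.random.default_rng(1)

    def rate_coverage(d1, d2, alpha, snr0, target_rate, n=200_000):
        # P(achievable rate > target) at a fixed point with two-BS joint reception.
        g1 = rng.exponential(size=n)          # Rayleigh fading power gains
        g2 = rng.exponential(size=n)
        snr = snr0 * (g1 * d1 ** -alpha + g2 * d2 ** -alpha)
        rate = np.log2(1.0 + snr)
        return (rate > target_rate).mean()

    # Worst-case point equidistant from both cooperating BSs (distances in km):
    print(rate_coverage(d1=0.5, d2=0.5, alpha=3.5, snr0=10.0, target_rate=1.0))
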
|
1302.1596 | Tag-based Semantic Website Recommendation for Turkish Language | cs.IR | With the dramatic increase in the number of websites on the internet, tagging
has become popular for finding related, personal and important documents. Among
growing internet markets, Turkey, where most people use the Turkish language on
the internet, is found to be growing exponentially. In this paper, a tag-based
website recommendation method is
presented, where similarity measures are combined with semantic relationships
of tags. In order to evaluate the system, an experiment with 25 people from
Turkey is undertaken; participants are first asked to provide websites and
tags in Turkish and then they are asked to evaluate recommended websites.
|
1302.1601 | On the Capacity Region for Index Coding | cs.IT math.IT | A new inner bound on the capacity region of a general index coding problem is
established. Unlike most existing bounds that are based on graph theoretic or
algebraic tools, the bound is built on a random coding scheme and optimal
decoding, and has a simple polymatroidal single-letter expression. The utility
of the inner bound is demonstrated by examples that include the capacity region
for all index coding problems with up to five messages (there are 9846
nonisomorphic ones).
|
1302.1610 | Adaptive low rank and sparse decomposition of video using compressive
sensing | cs.IT cs.CV math.IT | We address the problem of reconstructing and analyzing surveillance videos
using compressive sensing. We develop a new method that performs video
reconstruction by low rank and sparse decomposition adaptively. Background
subtraction becomes part of the reconstruction. In our method, a background
model is used in which the background is learned adaptively as the compressive
measurements are processed. The adaptive method has low latency, and is more
robust than previous methods. We will present experimental results to
demonstrate the advantages of the proposed method.
|
1302.1611 | Bounded regret in stochastic multi-armed bandits | math.ST cs.LG stat.ML stat.TH | We study the stochastic multi-armed bandit problem when one knows the value
$\mu^{(\star)}$ of an optimal arm, as well as a positive lower bound on the
smallest positive gap $\Delta$. We propose a new randomized policy that attains
a regret {\em uniformly bounded over time} in this setting. We also prove
several lower bounds, which show in particular that bounded regret is not
possible if one only knows $\Delta$, and bounded regret of order $1/\Delta$ is
not possible if one only knows $\mu^{(\star)}$.
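
The paper's randomized policy is not reproduced here, but the role of the
known value $\mu^{(\star)}$ can be illustrated with a simpler deterministic
rule: keep sampling an arm only while its upper confidence bound still reaches
$\mu^{(\star)}$. This sketch is an illustration only, not the paper's
algorithm, and assumes Bernoulli rewards:

    import math, random

    def run(arm_means, mu_star, horizon, seed=0):
        # Toy elimination policy using the known optimal mean mu_star.
        rng = random.Random(seed)
        counts = [0] * len(arm_means)
        sums = [0.0] * len(arm_means)
        active = list(range(len(arm_means)))
        for t in range(horizon):
            i = active[t % len(active)]            # round-robin over survivors
            sums[i] += 1.0 if rng.random() < arm_means[i] else 0.0
            counts[i] += 1
            radius = math.sqrt(2.0 * math.log(max(t, 2)) / counts[i])
            if sums[i] / counts[i] + radius < mu_star and len(active) > 1:
                active.remove(i)                   # this arm cannot be optimal
        return active

    print(run([0.9, 0.5], mu_star=0.9, horizon=5000))  # usually [0]
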
|
1302.1612 | Arabic text summarization based on latent semantic analysis to enhance
arabic documents clustering | cs.IR cs.CL | Arabic Documents Clustering is an important task for obtaining good results
with traditional Information Retrieval (IR) systems, especially with the
rapid growth of the number of online documents available in the Arabic
language. Documents clustering aims to automatically group similar documents
in one cluster using different similarity/distance measures. This task is
often affected by document length: useful information in the documents is
often accompanied by a large amount of noise, and it is therefore necessary to
eliminate this noise while keeping useful information to boost the performance
of documents clustering. In this paper, we propose to evaluate the impact of
text summarization using the Latent Semantic Analysis model on Arabic
documents clustering in order to solve the problems cited above, using five
similarity/distance measures: Euclidean Distance, Cosine Similarity, Jaccard
Coefficient, Pearson Correlation Coefficient and Averaged Kullback-Leibler
Divergence, both without and with stemming. Our experimental results indicate
that our proposed approach effectively solves the problems of noisy
information and document length, and thus significantly improves the
clustering performance.
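
One common way to implement LSA-based scoring is to build a term-sentence
matrix, take a truncated SVD, and keep the sentences with the largest weight
in the leading latent topics. The sketch below is a generic, language-agnostic
version; the paper's exact scoring and its Arabic-specific preprocessing
(stemming, stopword removal) are not shown:

    import numpy as np

    def lsa_summary(sentences, k=2, n_keep=2):
        # Rank sentences by their salience in the top-k latent topics.
        vocab = sorted({w for s in sentences for w in s.split()})
        index = {w: i for i, w in enumerate(vocab)}
        A = np.zeros((len(vocab), len(sentences)))   # term-sentence counts
        for j, s in enumerate(sentences):
            for w in s.split():
                A[index[w], j] += 1.0
        _, sigma, vt = np.linalg.svd(A, full_matrices=False)
        k = min(k, len(sigma))
        # salience: norm of each sentence's coordinates in the top-k topic space
        scores = np.sqrt(((sigma[:k, None] * vt[:k]) ** 2).sum(axis=0))
        best = np.argsort(scores)[::-1][:n_keep]
        return [sentences[j] for j in sorted(best)]

    docs = ["the cat sat on the mat",
            "dogs and cats are pets",
            "stock markets fell sharply today"]
    print(lsa_summary(docs, k=2, n_keep=2))
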
|
1302.1626 | On the Classification of Extremal Doubly Even Self-Dual Codes with
2-Transitive Automorphism Group | math.CO cs.IT math.GR math.IT | In this note, we complete the classification of extremal doubly even
self-dual codes with 2-transitive automorphism groups.
|
1302.1638 | Discovery of Maximal Frequent Item Sets using Subset Creation | cs.DB | Data mining is the practice of searching large amounts of data to discover
data patterns. Data mining uses mathematical algorithms to group data and to
predict future events. Association rules are a research area in the field of
knowledge discovery. Many data mining researchers have improved the quality of
association rules for business development by incorporating influential
factors like utility and the number of items sold into the mining of
association data patterns. In this paper, we propose an efficient algorithm to
find maximal frequent itemsets first. Most association rule algorithms find
minimal frequent itemsets first and then derive the maximal frequent itemsets
from them; these methods consume more time to find maximal frequent itemsets.
To overcome this problem, we propose a new approach that finds maximal
frequent itemsets directly using the concept of subsets. The proposed method
is found to be efficient in finding maximal frequent itemsets.
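
The abstract describes the algorithm only at a high level, so the sketch
below is a naive top-down rendering of the same idea: test the largest
candidate subsets first, and keep a frequent itemset only if no frequent
superset has already been found. It is exponential in the number of items and
meant only as an illustration:

    from itertools import combinations

    def maximal_frequent_itemsets(transactions, min_support):
        # Naive top-down search: largest candidate itemsets first.
        transactions = [frozenset(t) for t in transactions]
        items = sorted(set().union(*transactions))
        maximal = []
        for size in range(len(items), 0, -1):
            for cand in combinations(items, size):
                cand = frozenset(cand)
                if any(cand <= m for m in maximal):    # covered by a maximal set
                    continue
                support = sum(cand <= t for t in transactions)
                if support >= min_support:
                    maximal.append(cand)
        return maximal

    db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
    print(maximal_frequent_itemsets(db, min_support=2))
    # [frozenset({'a','b'}), frozenset({'a','c'}), frozenset({'b','c'})]
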
|
1302.1649 | Eye-GUIDE (Eye-Gaze User Interface Design) Messaging for
Physically-Impaired People | cs.HC cs.CV | Eye-GUIDE is an assistive communication tool designed for paralyzed or
physically impaired people who are unable to move parts of their bodies,
especially people whose communication is limited to eye movements. The
prototype consists of a camera and a computer. The camera captures images that
are then sent to the computer, which interprets the data. Thus, Eye-GUIDE
focuses on camera-based gaze tracking. The proponent designed the prototype to
perform simple tasks and to provide a graphical user interface so that a
paralyzed or physically impaired person can easily use it.
|
1302.1669 | Possible and Necessary Winner Problem in Social Polls | cs.GT cs.AI cs.DS cs.SI | Social networks are increasingly being used to conduct polls. We introduce a
simple model of such social polling. We suppose agents vote sequentially, but
the order in which agents choose to vote is not necessarily fixed. We also
suppose that an agent's vote is influenced by the votes of their friends who
have already voted. Despite its simplicity, this model provides useful insights
into a number of areas including social polling, sequential voting, and
manipulation. We prove that the number of candidates and the network structure
affect the computational complexity of computing which candidate necessarily or
possibly can win in such a social poll. For social networks with bounded
treewidth and a bounded number of candidates, we provide polynomial algorithms
for both problems. In other cases, we prove that computing which candidates
necessarily or possibly win are computationally intractable.
|
1302.1690 | A Fast Learning Algorithm for Image Segmentation with Max-Pooling
Convolutional Networks | cs.CV | We present a fast algorithm for training MaxPooling Convolutional Networks to
segment images. This type of network yields record-breaking performance in a
variety of tasks, but is normally trained on a computationally expensive
patch-by-patch basis. Our new method processes each training image in a single
pass, which is vastly more efficient.
We validate the approach in different scenarios and report a 1500-fold
speed-up. In an application to automated steel defect detection and
segmentation, we obtain excellent performance with short training times.
|
1302.1700 | Fast Image Scanning with Deep Max-Pooling Convolutional Neural Networks | cs.CV cs.AI | Deep Neural Networks now excel at image classification, detection and
segmentation. When used to scan images by means of a sliding window, however,
their high computational complexity can bring even the most powerful hardware
to its knees. We show how dynamic programming can speed up the process by orders
of magnitude, even when max-pooling layers are present.
|
1302.1726 | Uncovering the Wider Structure of Extreme Right Communities Spanning
Popular Online Networks | cs.SI physics.soc-ph | Recent years have seen increased interest in the online presence of extreme
right groups. Although originally composed of dedicated websites, the online
extreme right milieu now spans multiple networks, including popular social
media platforms such as Twitter, Facebook and YouTube. Ideally therefore, any
contemporary analysis of online extreme right activity requires the
consideration of multiple data sources, rather than being restricted to a
single platform. We investigate the potential for Twitter to act as a gateway
to communities within the wider online network of the extreme right, given its
facility for the dissemination of content. A strategy for representing
heterogeneous network data with a single homogeneous network for the purpose of
community detection is presented, where these inherently dynamic communities
are tracked over time. We use this strategy to discover and analyze persistent
English and German language extreme right communities.
|
1302.1727 | Terrorist Network: Towards An Analysis | cs.SI physics.soc-ph | The terrorist network is a paradigm for understanding terrorism. Terrorism
involves many people, some of whom act as perpetrators, yet it is very
difficult to know who they are because of a lack of information. Network
structure is used to reveal aspects of terrorist organizations that lie beyond
the reach of the social sciences.
|
1302.1733 | Feature Selection for Microarray Gene Expression Data using Simulated
Annealing guided by the Multivariate Joint Entropy | q-bio.QM cs.CE cs.LG stat.ML | In this work a new way to calculate the multivariate joint entropy is
presented. This measure is the basis for a fast information-theoretic based
evaluation of gene relevance in a Microarray Gene Expression data context. Its
low complexity is based on the reuse of previous computations to calculate
current feature relevance. The mu-TAFS algorithm --named as such to
differentiate it from previous TAFS algorithms-- implements a simulated
annealing technique specially designed for feature subset selection. The
algorithm is applied to the maximization of gene subset relevance in several
public-domain microarray data sets. The experimental results show notably
high classification performance and small subsets formed by biologically
meaningful genes.
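
The paper's incremental reuse of previous computations is not reproduced
here, but the quantity itself (the joint entropy of a set of discretized gene
columns) can be computed directly as a baseline; a sketch assuming a discrete
samples-by-genes matrix:

    import numpy as np

    def joint_entropy(X):
        # Joint Shannon entropy (bits) of the columns of a discrete matrix X.
        # Rows are samples; each distinct row is one joint outcome.
        _, counts = np.unique(X, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    X = np.array([[0, 1],
                  [0, 1],
                  [1, 0],
                  [1, 1]])
    print(joint_entropy(X))          # 1.5 bits
    print(joint_entropy(X[:, :1]))   # 1.0 bit: adding a gene can only add entropy
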
|
1302.1772 | An ANN-based Method for Detecting Vocal Fold Pathology | cs.LG cs.CV cs.SD | There are different algorithms for vocal fold pathology diagnosis. These
algorithms usually have three stages which are Feature Extraction, Feature
Reduction and Classification. While the third stage implies a choice of a
variety of machine learning methods, the first and second stages play a
critical role in performance and accuracy of the classification system. In this
paper we present initial study of feature extraction and feature reduction in
the task of vocal fold pathology diagnosis. A new type of feature vector, based
on wavelet packet decomposition and Mel-Frequency-Cepstral-Coefficients
(MFCCs), is proposed. Also Principal Component Analysis (PCA) is used for
feature reduction. An Artificial Neural Network is used as a classifier for
evaluating the performance of our proposed method.
|
1302.1777 | Wideband Spectrum Sensing for Cognitive Radio Networks: A Survey | cs.IT math.IT | Cognitive radio has emerged as one of the most promising candidate solutions
to improve spectrum utilization in next generation cellular networks. A crucial
requirement for future cognitive radio networks is wideband spectrum sensing:
secondary users reliably detect spectral opportunities across a wide frequency
range. In this article, various wideband spectrum sensing algorithms are
presented, together with a discussion of the pros and cons of each algorithm
and the challenging issues. Special attention is paid to the use of sub-Nyquist
techniques, including compressive sensing and multi-channel sub-Nyquist
sampling techniques.
|
1302.1789 | Lensless Compressive Sensing Imaging | cs.CV cs.IT math.IT | In this paper, we propose a lensless compressive sensing imaging
architecture. The architecture consists of two components, an aperture assembly
and a sensor. No lens is used. The aperture assembly consists of a two
dimensional array of aperture elements. The transmittance of each aperture
element is independently controllable. The sensor is a single detection
element, such as a single photo-conductive cell. Each aperture element together
with the sensor defines a cone of a bundle of rays, and the cones of the
aperture assembly define the pixels of an image. Each pixel value of an image
is the integration of the bundle of rays in a cone. The sensor is used for
taking compressive measurements. Each measurement is the integration of rays in
the cones modulated by the transmittance of the aperture elements. A
compressive sensing matrix is implemented by adjusting the transmittance of the
individual aperture elements according to the values of the sensing matrix. The
proposed architecture is simple and reliable because no lens is used.
Furthermore, the sharpness of an image from our device is only limited by the
resolution of the aperture assembly, but not affected by blurring due to
defocus. The architecture can be used for capturing images of visible lights,
and other spectra such as infrared, or millimeter waves. Such devices may be
used in surveillance applications for detecting anomalies or extracting
features such as speed of moving objects. Multiple sensors may be used with a
single aperture assembly to capture multi-view images simultaneously. A
prototype was built by using a LCD panel and a photoelectric sensor for
capturing images of the visible spectrum.
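
The measurement model described above is linear: each reading integrates the
scene through one transmittance pattern, so the device implements y = Phi x
for a programmable sensing matrix Phi. A small simulation sketch; the random
0/1 patterns and sizes are assumptions, and sparse recovery of the scene from
y is not shown:

    import numpy as np

    rng = np.random.default_rng(0)

    n_pixels = 16 * 16        # resolution set by the aperture assembly
    n_meas = 80               # compressive: fewer measurements than pixels

    scene = np.zeros(n_pixels)
    scene[50:55] = 1.0        # a small bright object

    # Each row of Phi is one transmittance pattern of the aperture elements;
    # the single sensor integrates the rays passing the open elements.
    Phi = rng.integers(0, 2, size=(n_meas, n_pixels)).astype(float)
    y = Phi @ scene           # one scalar reading per pattern

    print(y.shape)            # (80,) compressive measurements of a 256-pixel image
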
|
1302.1836 | The Capacity Region of the Wireless Ergodic Fading Interference Channel
with Partial CSIT to Within One Bit | cs.IT math.IT | Fundamental capacity limits are studied for the two-user wireless ergodic
fading IC with partial Channel State Information at the Transmitters (CSIT)
where each transmitter is equipped with an arbitrary deterministic function of
the channel state (this model yields full control over how much state
information is available). One of the main challenges in the analysis of fading
networks, specifically multi-receiver networks including fading ICs, is to
obtain efficient capacity outer bounds. In this paper, a novel capacity outer
bound is established for the two-user ergodic fading IC. For this purpose, by a
subtle combination of broadcast channel techniques (i.e., manipulating mutual
information functions composed of vector random variables by Csiszar-Korner
identity) and genie-aided techniques, first a single-letter outer bound
characterized by mutual information functions including some auxiliary random
variables is derived. Then, by novel arguments the derived bound is optimized
over its auxiliaries only using the entropy power inequality. Besides being
well-described, our outer bound is efficient from several aspects.
Specifically, it is optimal for the fading IC with uniformly strong
interference. Also, it is sum-rate optimal for the channel with uniformly mixed
interference. More importantly, it is proved that when each transmitter has
access to any amount of CSIT that includes the interference to noise ratio of
its non-corresponding receiver, the outer bound differs by no more than one bit
from the achievable rate region given by the Han-Kobayashi scheme. This result
is viewed as a natural generalization of the Etkin-Tse-Wang (ETW) "to within
one bit" capacity result for the static channel to the wireless ergodic fading
case.
|
1302.1837 | On the Capacity Region of the Two-User Interference Channel | cs.IT math.IT | One of the key open problems in network information theory is to obtain the
capacity region for the two-user Interference Channel (IC). In this paper, new
results are derived for this channel. As a first result, a noisy interference
regime is given for the general IC where the sum-rate capacity is achieved by
treating interference as noise at the receivers. To obtain this result, a
single-letter outer bound in terms of some auxiliary random variables is first
established for the sum-rate capacity of the general IC and then those
conditions under which this outer bound is reduced to the achievable sum-rate
given by the simple treating interference as noise strategy are specified. The
main benefit of this approach is that it is applicable for any two-user IC
(potentially non-Gaussian). For the special case of Gaussian channel, our
result is reduced to the noisy interference regime that was previously
obtained. Next, some results are given on the Han-Kobayashi (HK) achievable
rate region. The evaluation of this rate region is in general difficult. In
this paper, a simple characterization of the HK rate region is derived for some
special cases, specifically, for a novel very weak interference regime. As a
remarkable characteristic, it is shown that for this very weak interference
regime, the achievable sum-rate due to the HK region is identical to the one
given by the simple treating interference as noise strategy.
|
1302.1842 | Adaptive Compressive Spectrum Sensing for Wideband Cognitive Radios | cs.IT math.IT | This letter presents an adaptive spectrum sensing algorithm that detects
wideband spectrum using sub-Nyquist sampling rates. By taking advantage of
compressed sensing (CS), the proposed algorithm reconstructs the wideband
spectrum from compressed samples. Furthermore, an l2 norm validation approach
is proposed that enables cognitive radios (CRs) to automatically terminate the
signal acquisition once the current spectral recovery is satisfactory, leading
to enhanced CR throughput. Numerical results show that the proposed algorithm
can not only shorten the spectrum sensing interval, but also improve the
throughput of wideband CRs.
|
1302.1845 | Linked-Cluster Technique for Finding the Distance of a Quantum LDPC Code | quant-ph cs.IT math.IT | We present a linked-cluster technique for calculating the distance of a
quantum LDPC code. It offers an advantage over existing deterministic
techniques for codes with small relative distances (which includes all known
families of quantum LDPC codes), and over the probabilistic technique for codes
with sufficiently high rates.
|
1302.1847 | Wideband Spectrum Sensing with Sub-Nyquist Sampling in Cognitive Radios | cs.IT math.IT | Multi-rate asynchronous sub-Nyquist sampling (MASS) is proposed for wideband
spectrum sensing. Corresponding spectral recovery conditions are derived and
the probability of successful recovery is given. Compared to previous
approaches, MASS offers lower sampling rate, and is an attractive approach for
cognitive radio networks.
|
1302.1857 | Relaying Technologies for Smart Grid Communications | cs.IT cs.NI math.IT | Wireless technologies can support a broad range of smart grid applications
including advanced metering infrastructure (AMI) and demand response (DR).
However, there are many formidable challenges when wireless technologies are
applied to the smart grid, e.g., the tradeoffs between wireless coverage and
capacity, the high reliability requirement for communication, and limited
spectral resources. Relaying has emerged as one of the most promising candidate
solutions for addressing these issues. In this article, an introduction to
various relaying strategies is presented, together with a discussion of how to
improve spectral efficiency and coverage in relay-based information and
communications technology (ICT) infrastructure for smart grid applications.
Special attention is paid to the use of unidirectional relaying, collaborative
beamforming, and bidirectional relaying strategies.
|