id stringlengths 9 16 | title stringlengths 4 278 | categories stringlengths 5 104 | abstract stringlengths 6 4.09k |
|---|---|---|---|
1301.6256 | Tight is better: Performance Improvement of the Compressive Classifier
Using Equi-Norm Tight Frames | cs.IT math.IT math.ST stat.TH | Detecting or classifying already known sparse signals contaminated by
Gaussian noise from compressive measurements is different from reconstructing
sparse signals, as its objective is to minimize the error probability which
describes the performance of the detectors or classifiers. This paper is
concerned with improving the performance of a commonly used Compressive
Classifier. We prove that when the arbitrary sensing matrices used to obtain
the compressive measurements are transformed into Equi-Norm Tight Frames,
i.e. matrices that are row-orthogonal, the Compressive Classifier achieves
better performance. Although it has been proved elsewhere that, among all
Equi-Norm Tight Frames, the Equiangular Tight Frames (ETFs) give the best
worst-case performance, the existence and construction of ETFs in some
dimensions is still an open problem. As the construction of an Equi-Norm Tight
Frame from an arbitrary matrix is easy and practical compared with that of an
ETF, the result of
this paper can also provide a practical method to design an improved sensing
matrix for Compressive Classification. We can conclude that: Tight is Better!
|
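The construction the abstract calls "easy and practical" — turning an arbitrary sensing matrix into a row-orthogonal (equi-norm tight) one — can be sketched with a single SVD step. This is our own illustrative sketch, not the paper's code; the function name and normalization are assumptions:

```python
import numpy as np

def to_tight_frame(A):
    """Transform an arbitrary full-row-rank m x n sensing matrix (m < n)
    into a row-orthogonal matrix with the same row space, by dropping the
    singular values in the thin SVD A = U S V^T and returning U V^T."""
    U, _, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 10))   # an arbitrary 4 x 10 sensing matrix
T = to_tight_frame(A)              # rows of T are orthonormal: T @ T.T = I
```

Because T T^T is the identity, the rows are orthogonal and have equal norm, matching the row-orthogonality property the abstract uses to define an Equi-Norm Tight Frame.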
1301.6262 | Developing Parallel Dependency Graph In Improving Game Balancing | cs.AI | The dependency graph is a data architecture that models all the dependencies
between the different types of assets in the game. It depicts the
dependency-based relationships between the assets of a game. For example, a
player must construct an arsenal before he can build weapons. It is vital that
the dependency graph of a game is designed logically to ensure a logical
sequence of game play. However, a mere logical dependency graph is not
sufficient in sustaining the players' enduring interests in a game, which
brings the problem of game balancing into the picture. The issue of game
balancing arises when players feel they have no chance of winning the game
against AI opponents who are more skillful in game play. At the current state
of research, the architecture of the dependency graph is monolithic for the
players. The sequence of asset possession is always foreseeable because there
is only a single dependency graph. Game balancing is impossible when the assets
of AI players overwhelmingly outnumber those of human players. This paper
proposes a parallel architecture of dependency graph for the AI players and
human players. Instead of having a single dependency graph, a parallel
architecture is proposed in which the dependency graph of the AI player is
adjustable relative to that of the human player, using a support dependency as
a game balancing mechanism. This paper shows that the parallel dependency graph
helps to
improve game balancing.
|
1301.6265 | Neural Networks Built from Unreliable Components | cs.NE cs.IT math.IT | Recent advances in associative memory design through structured pattern sets
and graph-based inference algorithms have allowed the reliable learning and
retrieval of an exponential number of patterns. Both these and classical
associative memories, however, have assumed internally noiseless computational
nodes. This paper considers the setting when internal computations are also
noisy. Even if all components are noisy, the final error probability in recall
can often be made exceedingly small, as we characterize; a threshold phenomenon
emerges. We also show how to optimize the inference algorithm's parameters when
the statistical properties of the internal noise are known.
|
1301.6272 | State-Dependent Z Channel | cs.IT math.IT | In this paper we study the Z channel with side information non-causally
available at the encoders. We use Marton encoding along with Gelfand-Pinsker
random binning and Chong-Motani-Garg-El Gamal (CMGE) joint decoding to
find an achievable rate region. We will see that our achievable rate region
gives the achievable rate of the multiple access channel with side information
and also degraded broadcast channel with side information. We will also derive
an inner bound and an outer bound on the capacity region of the state-dependent
degraded discrete memoryless Z channel, and will also observe that our outer
bound meets the inner bound for the rates corresponding to the second
transmitter. Moreover, assuming the high signal-to-noise-ratio and strong
interference regime, and using lattice strategies, we derive an achievable
rate region for the Gaussian degraded Z channel with additive interference
non-causally available at both encoders. Our method is based on a lattice
transmission scheme, joint decoding at the first decoder, and successive
decoding at the second decoder. Using such a coding scheme, we remove the effect
of the interference completely.
|
1301.6277 | LA-LDA: A Limited Attention Topic Model for Social Recommendation | cs.SI cs.IR cs.LG | Social media users have finite attention which limits the number of incoming
messages from friends they can process. Moreover, they pay more attention to
the opinions and recommendations of some friends than to those of others. In
this paper,
we propose LA-LDA, a latent topic model which incorporates limited,
non-uniformly divided attention in the diffusion process by which opinions and
information spread on the social network. We show that our proposed model is
able to learn more accurate user models from users' social network and item
adoption behavior than models which do not take limited attention into account.
We analyze voting on news items on the social news aggregator Digg and show
that our proposed model is better able to predict held-out votes than
alternative models. Our study demonstrates that psycho-socially motivated
models have better ability to describe and predict observed behavior than
models which only consider topics.
|
1301.6291 | Nested Lattice Codes for Gaussian Two-Way Relay Channels | cs.IT math.IT | In this paper, we consider a Gaussian two-way relay channel (GTRC), where two
sources exchange messages with each other through a relay. We assume that there
is no direct link between sources, and all nodes operate in full-duplex mode.
By utilizing nested lattice codes for the uplink (i.e., MAC phase), and
structured binning for the downlink (i.e., broadcast phase), we propose two
achievable schemes. Scheme 1 is based on the compute-and-forward scheme of [1],
while scheme 2 utilizes two different lattices for the source nodes based on a
three-stage lattice partition chain. We show that scheme 2 can achieve the
capacity region at high signal-to-noise ratio (SNR). Regardless of the channel
parameters, the achievable rate of scheme 2 is within 0.2654 bit of the
cut-set outer bound for user 1. For user 2, the proposed scheme achieves within
0.167 bit of the outer bound if the channel coefficient is larger than one, and
within 0.2658 bit of the outer bound if the channel coefficient is
smaller than one. Moreover, the sum rate of the proposed scheme is within 0.334
bits from the sum capacity. These gaps for GTRC are the best gap-to-capacity
results to date.
|
1301.6295 | Fixed Points of Generalized Approximate Message Passing with Arbitrary
Matrices | cs.IT math.IT | The estimation of a random vector with independent components passed through
a linear transform followed by a componentwise (possibly nonlinear) output map
arises in a range of applications. Approximate message passing (AMP) methods,
based on Gaussian approximations of loopy belief propagation, have recently
attracted considerable attention for such problems. For large random
transforms, these methods exhibit fast convergence and admit precise analytic
characterizations with testable conditions for optimality, even for certain
non-convex problem instances. However, the behavior of AMP under general
transforms is not fully understood. In this paper, we consider the generalized
AMP (GAMP) algorithm and relate the method to more common optimization
techniques. This analysis enables a precise characterization of the GAMP
algorithm fixed-points that applies to arbitrary transforms. In particular, we
show that the fixed points of the so-called max-sum GAMP algorithm for MAP
estimation are critical points of a constrained maximization of the posterior
density. The fixed-points of the sum-product GAMP algorithm for estimation of
the posterior marginals can be interpreted as critical points of a certain free
energy.
|
1301.6301 | Deterministic Constructions for Large Girth Protograph LDPC Codes | cs.IT math.IT | The bit-error threshold of the standard ensemble of Low Density Parity Check
(LDPC) codes is known to be close to capacity, if there is a non-zero fraction
of degree-two bit nodes. However, the degree-two bit nodes preclude the
possibility of a block-error threshold. Interestingly, LDPC codes constructed
using protographs allow the possibility of having both degree-two bit nodes and
a block-error threshold. In this paper, we analyze density evolution for
protograph LDPC codes over the binary erasure channel and show that their
bit-error probability decreases double exponentially with the number of
iterations when the erasure probability is below the bit-error threshold and
long chains of degree-two variable nodes are avoided in the protograph. We
present deterministic constructions of such protograph LDPC codes with girth
logarithmic in blocklength, resulting in an exponential fall in bit-error
probability below the threshold. We provide optimized protographs, whose
block-error thresholds are better than that of the standard ensemble with
minimum bit-node degree three. These protograph LDPC codes are theoretically of
great interest, and have applications, for instance, in coding with strong
secrecy over wiretap channels.
|
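Density evolution over the BEC, which the abstract analyzes for protographs, is easy to illustrate for the simpler (dv, dc)-regular ensemble; the recursion below is the textbook regular-ensemble version, not the protograph analysis of the paper:

```python
def de_bec(eps, dv, dc, iters=200):
    """Track the per-edge erasure probability x under density evolution for
    a (dv, dc)-regular LDPC ensemble on a BEC with erasure probability eps:
        x <- eps * (1 - (1 - x)**(dc - 1))**(dv - 1)
    Below the ensemble's threshold x -> 0; above it, x converges to a
    positive fixed point."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x
```

For the (3, 6)-regular ensemble the threshold is roughly 0.429, so an erasure probability of 0.30 drives the residual erasure rate to zero while 0.45 does not.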
1301.6302 | Simultaneous Information and Energy Transfer: A Two-User MISO
Interference Channel Case | cs.IT math.IT | This paper considers the sum rate maximization problem of a two-user
multiple-input single-output interference channel with receivers that can
scavenge energy from the radio signals transmitted by the transmitters. We
first study the optimal transmission strategy for an ideal scenario where the
two receivers can simultaneously decode the information signal and harvest
energy. Then, considering the limitations of the current circuit technology, we
propose two practical schemes based on TDMA, where, at each time slot, the
receiver either operates in the energy harvesting mode or in the information
detection mode. Optimal transmission strategies for the two practical schemes
are respectively investigated. Simulation results show that the three schemes
exhibit an interesting tradeoff between achievable sum rate and energy harvesting
requirement, and do not dominate each other in terms of maximum achievable sum
rate.
|
1301.6312 | Rooting out the Rumor Culprit from Suspects | cs.SI cs.IT math.IT | Suppose a rumor originating from a single source among a set of suspects
spreads in a network; how can we root out the rumor source? With a priori
knowledge of suspect nodes and an observation of infected nodes, we construct a
maximum a posteriori (MAP) estimator to identify the rumor source using the
susceptible-infected (SI) model. The a priori suspect set and its associated
connectivity bring new ingredients to the problem, and thus we propose to use
the local rumor center, a generalized concept based on rumor centrality, to
identify the source among the suspects. For regular tree-type networks of node
degree {\delta}, we characterize Pc(n), the correct detection probability of
the estimator upon observing n infected nodes, in both the finite and
asymptotic regimes. First, when every infected node is a suspect, Pc(n)
asymptotically grows from 0.25 to 0.307 as {\delta} grows from 3 to infinity, a
result first established in Shah and Zaman (2011, 2012) via a different
approach; and it monotonically decreases with n and increases with {\delta}.
Second, when the suspects form a connected subgraph of the network, Pc(n)
asymptotically significantly exceeds the a priori probability if {\delta}>2,
and reliable detection is achieved as {\delta} becomes large; furthermore, it
monotonically decreases with n and increases with {\delta}. Third, when there
are only two suspects, Pc(n) is asymptotically at least 0.75 if {\delta}>2; and
it increases with the distance between the two suspects. Fourth, when there are
multiple suspects, among all possible connection patterns, the one in which they
form a connected subgraph of the network yields the smallest detection
probability. Our analysis leverages ideas from Polya's urn model in probability
theory and sheds light on the behavior of the rumor spreading process not only in
the asymptotic regime but also for the general finite-n regime.
|
1301.6314 | Equitability Analysis of the Maximal Information Coefficient, with
Comparisons | cs.LG q-bio.QM stat.ML | A measure of dependence is said to be equitable if it gives similar scores to
equally noisy relationships of different types. Equitability is important in
data exploration when the goal is to identify a relatively small set of
strongest associations within a dataset as opposed to finding as many non-zero
associations as possible, which often are too many to sift through. Thus an
equitable statistic, such as the maximal information coefficient (MIC), can be
useful for analyzing high-dimensional data sets. Here, we explore both
equitability and the properties of MIC, and discuss several aspects of the
theory and practice of MIC. We begin by presenting an intuition behind the
equitability of MIC through the exploration of the maximization and
normalization steps in its definition. We then examine the speed and optimality
of the approximation algorithm used to compute MIC, and suggest some directions
for improving both. Finally, we demonstrate in a range of noise models and
sample sizes that MIC is more equitable than natural alternatives, such as
mutual information estimation and distance correlation.
|
1301.6315 | Multiple-Antenna Interference Channel with Receive Antenna Joint
Processing and Real Interference Alignment | cs.IT math.IT | We consider a constant $K$-user Gaussian interference channel with $M$
antennas at each transmitter and $N$ antennas at each receiver, denoted as a
$(K,M,N)$ channel. Relying on a result on simultaneous Diophantine
approximation, a real interference alignment scheme with joint receive antenna
processing is developed. The scheme is used to provide new proofs for two
previously known results, namely 1) the total degrees of freedom (DoF) of a
$(K, N, N)$ channel is $NK/2$; and 2) the total DoF of a $(K, M, N)$ channel is
at least $KMN/(M+N)$. We also derive the DoF region of the $(K,N,N)$ channel,
and an inner bound on the DoF region of the $(K,M,N)$ channel.
|
1301.6316 | Hierarchical Data Representation Model - Multi-layer NMF | cs.LG | In this paper, we propose a data representation model that demonstrates
hierarchical feature learning using nsNMF. We extend the unit algorithm into
several layers. Experiments with document and image data successfully
discovered feature hierarchies. We also show that the proposed method results
in much better classification and reconstruction performance, especially for a
small number of features.
|
1301.6318 | Quasi-Equiangular Frame (QEF) : A New Flexible Configuration of Frame | cs.IT math.AG math.IT | Frame theory is a powerful tool in the domain of signal processing and
communication. Among its numerous configurations, the ones which have drawn
much attention recently are Equiangular Tight Frame (ETF) and Grassmannian
Frame. Both of these frames have some kind of optimality in coherence, and thus
bring robustness or optimal performance in applications such as digital
fingerprinting, erasure channels, and Compressive Sensing. However, overly
strict constraints on the existence and construction of ETFs and Grassmannian
Frames have become the main obstacle to their widespread use. In this paper, we
propose a new configuration of frame, the Quasi-Equiangular Frame, as a
compromise: a more convenient and flexible approximation of the ETF and the
Grassmannian Frame. We give a formal definition of the Quasi-Equiangular Frame
and analyze its relationship with the ETF and the Grassmannian Frame.
Furthermore, given the popularity of ETFs and Grassmannian Frames in
Compressive Sensing, we use random matrix techniques to obtain an asymptotic
concentration estimate of the Restricted Isometry Constant (RIC) of the
Quasi-Equiangular Frame with respect to its key parameter.
|
1301.6324 | An improvement to k-nearest neighbor classifier | cs.CV cs.LG stat.ML | The k-nearest neighbor classifier (k-NNC) is simple to use and requires little
design time beyond choosing k, hence it is suitable for dynamically varying
data-sets. There exist some fundamental improvements over the basic k-NNC, such
as the weighted k-nearest neighbors classifier (where weights are assigned to
nearest neighbors based on linear interpolation) and the use of an artificially
generated training set called a bootstrapped training set. These improvements
are orthogonal to space reduction and classification time reduction techniques,
hence can be coupled with any of them. This paper proposes another improvement
to the basic k-NNC
in which the weights of the nearest neighbors are assigned based on a Gaussian
distribution (instead of the linear interpolation used in weighted k-NNC), and
which is also
independent of any space reduction and classification time reduction technique.
We formally show that our proposed method is closely related to non-parametric
density estimation using a Gaussian kernel. We experimentally demonstrate using
various standard data-sets that the proposed method is better than the existing
ones in most cases.
|
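A minimal sketch of the Gaussian-weighted voting idea described above; the function name, the toy data, and the bandwidth parameter `sigma` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def gaussian_knn_predict(X_train, y_train, x, k=3, sigma=1.0):
    """Predict the class of x from its k nearest neighbors, weighting each
    neighbor's vote by a Gaussian of its distance rather than by the linear
    interpolation used in the classical weighted k-NNC."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:k]
    weights = np.exp(-dists[nearest] ** 2 / (2.0 * sigma ** 2))
    votes = {}
    for label, w in zip(y_train[nearest], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)  # class with the largest weighted vote

# two well-separated clusters as toy training data
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
y = np.array([0, 0, 1, 1])
```

Because the Gaussian weight decays rapidly with distance, a far-away third neighbor contributes almost nothing to the vote, which is the intended effect of replacing linear interpolation.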
1301.6328 | Explicit Constructions of Quasi-Uniform Codes from Groups | math.GR cs.IT math.IT | We address the question of explicitly constructing quasi-uniform codes from
groups. We determine the size of the codebook, the alphabet and the minimum
distance as a function of the corresponding group, both for abelian and some
nonabelian groups. Potential applications include the design of almost affine
codes and non-linear network codes.
|
1301.6331 | Optimal Locally Repairable Codes via Rank-Metric Codes | cs.IT math.IT | This paper presents a new explicit construction for locally repairable codes
(LRCs) for distributed storage systems which possess all-symbols locality and
maximal possible minimum distance, or equivalently, can tolerate the maximal
number of node failures. This construction, based on maximum rank distance
(MRD) Gabidulin codes, provides new optimal vector and scalar LRCs. In
addition, the paper also discusses mechanisms by which codes obtained using
this construction can be used to construct LRCs with efficient repair of failed
nodes by combining LRCs with regenerating codes.
|
1301.6339 | Lov\'asz's Theta Function, R\'enyi's Divergence and the Sphere-Packing
Bound | cs.IT math.IT quant-ph | Lov\'asz's bound to the capacity of a graph and the sphere-packing bound
to the probability of error in channel coding are given a unified presentation
as information radii of the Csisz\'ar type using the R{\'e}nyi divergence in
the classical-quantum setting. This brings together two results in coding
theory that are usually considered as being of a very different nature, one
being a "combinatorial" result and the other being "probabilistic". In the
context of quantum information theory, this difference disappears.
|
1301.6340 | An "Umbrella" Bound of the Lov\'asz-Gallager Type | cs.IT math.IT | We propose a novel approach for bounding the probability of error of discrete
memoryless channels with a zero-error capacity based on a combination of
Lov\'asz' and Gallager's ideas. The obtained bounds are expressed in terms of a
function $\vartheta(\rho)$, introduced here, that varies from the cut-off rate
of the channel to the Lov\'asz theta function as $\rho$ varies from 1 to
$\infty$ and which is intimately related to Gallager's expurgated coefficient.
The obtained bound to the reliability function, though loose in its present
form, is finite for all rates larger than the Lov\'asz theta function.
|
1301.6345 | On AVCs with Quadratic Constraints | cs.IT math.IT | In this work we study an Arbitrarily Varying Channel (AVC) with quadratic
power constraints on the transmitter and a so-called "oblivious" jammer (along
with additional AWGN) under a maximum probability of error criterion, and no
private randomness between the transmitter and the receiver. This is in
contrast to similar AVC models under the average probability of error criterion
considered in [1], and models wherein common randomness is allowed [2] -- these
distinctions are important in some communication scenarios outlined below.
We consider the regime where the jammer's power constraint is smaller than
the transmitter's power constraint (in the other regime it is known no positive
rate is possible). For this regime we show the existence of stochastic codes
(with no common randomness between the transmitter and receiver) that enable
reliable communication at the same rate as when the jammer is replaced with
AWGN with the same power constraint. This matches known information-theoretic
outer bounds. In addition to being a stronger result than that in [1] (enabling
recovery of the results therein), our proof techniques are also somewhat more
direct, and hence may be of independent interest.
|
1301.6348 | Capacity Optimization through Sensing Threshold Adaptation for Cognitive
Radio Networks | cs.IT math.IT math.OC | In this paper we propose capacity optimization over the sensing threshold for
sensing-based cognitive radio networks. The objective function of the proposed
optimization is to maximize the capacity at the secondary user subject to the
constraints on the transmit power and the sensing threshold in order to protect
the primary user. The defined optimization problem is a convex optimization
over the transmit power and the sensing threshold where the concavity on
sensing threshold is proved. The problem is solved using the Lagrange dual
decomposition method in conjunction with a subgradient iterative algorithm, and
the numerical results show that the proposed optimization can lead to
significant capacity gains for the secondary user as long as the primary
user can afford it.
|
1301.6356 | Brute force searching, the typical set and Guesswork | cs.IT cs.CR math.IT | Consider the situation where a word is chosen probabilistically from a finite
list. If an attacker knows the list and can inquire about each word in turn,
then selecting the word via the uniform distribution maximizes the attacker's
difficulty, its Guesswork, in identifying the chosen word. It is tempting to
use this property in cryptanalysis of computationally secure ciphers by
assuming coded words are drawn from a source's typical set and so, for all
intents and purposes, uniformly distributed within it. It is this equipartition
ansatz that we investigate here, by applying recent results on Guesswork for
i.i.d. sources. In particular, we demonstrate that the expected Guesswork
for a source conditioned to create words in the typical set grows, with word
length, at a lower exponential rate than that of the uniform approximation,
suggesting use of the approximation is ill-advised.
|
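The opening claim — that the uniform distribution maximizes an attacker's Guesswork over a finite list — can be illustrated with a direct computation of the expected number of guesses. The function below is our own illustrative sketch, not from the paper:

```python
def expected_guesswork(p):
    """Expected number of queries for an attacker who guesses words in
    decreasing order of probability: E[G] = sum_i i * p_(i), where p_(i)
    is the i-th largest probability in the list."""
    p_sorted = sorted(p, reverse=True)
    return sum((i + 1) * pi for i, pi in enumerate(p_sorted))

# uniform over 4 words: E[G] = (1 + 2 + 3 + 4) / 4 = 2.5, the maximum
# a skewed source is strictly easier to guess
```

Any deviation from uniformity lets the attacker front-load the likely words, lowering the expectation below the uniform value.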
1301.6359 | Subjective Reality and Strong Artificial Intelligence | cs.AI | The main prospective aim of modern research related to Artificial
Intelligence is the creation of technical systems that implement the idea of
Strong Intelligence. In our view, the path to the development of such systems
runs through research in the field of perception. Here we formulate a model of
the perception of the external world which may be used to describe the
perceptual activity of intelligent beings. We
consider a number of issues related to the development of the set of patterns
which will be used by the intelligent system when interacting with the environment.
The key idea of the presented perception model is the idea of subjective
reality. The principle of the relativity of perceived world is formulated. It
is shown that this principle is the immediate consequence of the idea of
subjective reality. In this paper we show how the methodology of subjective
reality may be used for the creation of different types of Strong AI systems.
|
1301.6362 | Subspace Codes for Random Networks Based on Pl\"{u}cker Coordinates and
Schubert Cells | cs.IT math.IT | The Pl\"{u}cker coordinate description of subspaces has been recently
discussed in the context of constant dimension subspace codes for random
networks, as well as the Schubert cell description of certain code parameters.
In this paper this classical tool is used to reformulate some standard
constructions of constant dimension codes so as to give a unified framework. A
general method of constructing non-constant dimension subspace codes with
respect to a given minimum subspace distance or minimum injection distance
among subspaces is presented. These codes may be described as the union of
constant dimension subspace codes restricted to selected Schubert cells. The
selection of these Schubert cells is based on the subset distance of tuples
corresponding to the Pl\"{u}cker coordinate matrices associated with the
subspaces contained in the respective Schubert cells. In this context, it is
shown that a recent construction of non-constant dimension Ferrers-diagram
rank-metric subspace codes (Khaleghi and Kschischang) is subsumed in the
present framework.
|
1301.6363 | Towards An Exact Combinatorial Algorithm for LP Decoding of Turbo Codes | cs.IT math.IT | We present a novel algorithm that solves the turbo code LP decoding problem
in a finite number of steps by Euclidean distance minimizations, which in turn
rely on repeated shortest path computations in the trellis graph representing
the turbo code. Previous attempts to exploit the combinatorial graph structure
only led to algorithms which are either of heuristic nature or do not guarantee
finite convergence. A numerical study shows that our algorithm clearly beats
generic commercial LP solvers in running time, by up to a factor of 100, for
medium-sized codes, especially at high SNR values.
|
1301.6386 | A Two Level Feedback System Design to Regulation Service Provision | cs.SY | Demand side management has gained increasing importance as the penetration of
renewable energy grows. Based on a Markov jump process modelling of a group of
thermostatic loads, this paper proposes a two level feedback system design
between the independent system operator (ISO) and the regulation service
provider such that two objectives are achieved: (1) the ISO can optimally
dispatch regulation signals to multiple providers in real time in order to
reduce the requirement for expensive spinning reserves, and (2) each regulation
provider can control its thermostatic loads to respond to the ISO signal. It is
also shown that the amount of regulation service that can be provided is
implicitly restricted by a few fundamental parameters of the provider itself,
such as the allowable set point choice and its thermal constant. An interesting
finding is that the regulation provider's ability to provide a large amount of
long term accumulated regulation and short term signal tracking restrict each
other. Simulation results are presented to verify and illustrate the
performance of the proposed framework.
|
1301.6388 | Polarization of the Renyi Information Dimension with Applications to
Compressed Sensing | cs.IT math.IT | In this paper, we show that the Hadamard matrix acts as an extractor over the
reals of the Renyi information dimension (RID), in an analogous way to how it
acts as an extractor of the discrete entropy over finite fields. More
precisely, we prove that the RID of an i.i.d. sequence of mixture random
variables polarizes to the extremal values of 0 and 1 (corresponding to
discrete and continuous distributions) when transformed by a Hadamard matrix.
Further, we prove that the polarization pattern of the RID admits a closed form
expression and follows exactly the Binary Erasure Channel (BEC) polarization
pattern in the discrete setting. We also extend the results from the single- to
the multi-terminal setting, obtaining a Slepian-Wolf counterpart of the RID
polarization. We discuss applications of the RID polarization to Compressed
Sensing of i.i.d. sources. In particular, we use the RID polarization to
construct a family of deterministic $\pm 1$-valued sensing matrices for
Compressed Sensing. We run numerical simulations to compare the performance of
the resulting matrices with that of random Gaussian and random Hadamard
matrices. The results indicate that the proposed matrices afford competitive
performance while being explicitly constructed.
|
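The deterministic ±1 sensing matrices discussed above are built from Hadamard matrices; a minimal Sylvester-type construction is sketched below. The row selection driven by the RID polarization pattern is specific to the paper and omitted here:

```python
import numpy as np

def sylvester_hadamard(n):
    """Return the 2**n x 2**n Sylvester Hadamard matrix with +-1 entries,
    built by the recursion H_{2m} = [[H_m, H_m], [H_m, -H_m]]."""
    H = np.array([[1]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]])
    return H

H = sylvester_hadamard(3)  # 8 x 8; rows mutually orthogonal: H @ H.T = 8 * I
```

A sensing matrix in the spirit of the paper would keep only a subset of these rows; which rows to keep is what the polarization analysis determines.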
1301.6393 | Precoded Integer-Forcing Universally Achieves the MIMO Capacity to
Within a Constant Gap | cs.IT math.IT | An open-loop single-user multiple-input multiple-output communication scheme
is considered where a transmitter, equipped with multiple antennas, encodes the
data into independent streams all taken from the same linear code. The coded
streams are then linearly precoded using the encoding matrix of a perfect
linear dispersion space-time code. At the receiver side, integer-forcing
equalization is applied, followed by standard single-stream decoding. It is
shown that this communication architecture achieves the capacity of any
Gaussian multiple-input multiple-output channel up to a gap that depends only
on the number of transmit antennas.
|
1301.6397 | Scalar Quantize-and-Forward for Symmetric Half-duplex Two-Way Relay
Channels | cs.IT math.IT | Scalar Quantize & Forward (QF) schemes are studied for the Two-Way Relay
Channel. Different QF approaches are compared in terms of rates as well as
relay and decoder complexity. A coding scheme not requiring Slepian-Wolf coding
at the relay is proposed and properties of the corresponding sum-rate
optimization problem are presented. A numerical scheme similar to the
Blahut-Arimoto algorithm is derived that guides optimized quantizer design. The
results are supported by simulations.
|
1301.6398 | Variable-Length Channel Quantizers for Maximum Diversity and Array Gains | cs.IT math.IT | We consider a $t \times 1$ multiple-antenna fading channel with quantized
channel state information at the transmitter (CSIT). Our goal is to maximize
the diversity and array gains that are associated with the symbol error rate
(SER) performance of the system. It is well-known that for both beamforming and
precoding strategies, finite-rate fixed-length quantizers (FLQs) cannot achieve
the full-CSIT diversity and array gains. In this work, for any function
$f(P)\in\omega(1)$, we construct variable-length quantizers (VLQs) that can
achieve these full-CSIT gains with rates $1+(f(P) \log P)/P$ and $1+f(P)/P^t$
for the beamforming and precoding strategies, respectively, where $P$ is the
power constraint of the transmitter. We also show that these rates are the best
possible up to $o(1)$ multipliers in their $P$-dependent terms. In particular,
although the full-CSIT SER is not achievable at any (even infinite) feedback
rate, the full-CSIT diversity and array gains can be achieved with a feedback
rate of 1 bit per channel state asymptotically.
|
1301.6400 | Achieving Fully Proportional Representation is Easy in Practice | cs.MA cs.GT | We provide experimental evaluation of a number of known and new algorithms
for approximate computation of Monroe's and Chamberlin-Courant's rules. Our
experiments, conducted both on real-life preference-aggregation data and on
synthetic data, show that even very simple and fast algorithms can in many
cases find near-perfect solutions. Our results confirm and complement the very
recent theoretical analysis of Skowron et al., who have shown good lower bounds
on the quality of (some of) the algorithms that we study.
|
1301.6406 | Joint Power Adjustment and Interference Mitigation Techniques for
Cooperative Spread Spectrum Systems | cs.IT math.IT | This paper presents joint power allocation and interference mitigation
techniques for the downlink of spread spectrum systems which employ multiple
relays and the amplify and forward cooperation strategy. We propose a joint
constrained optimization framework that considers the allocation of power
levels across the relays subject to an individual power constraint and the
design of linear receivers for interference suppression. We derive constrained
minimum mean-squared error (MMSE) expressions for the parameter vectors that
determine the optimal power levels across the relays and the linear receivers.
In order to solve the proposed optimization problem efficiently, we develop
joint adaptive power allocation and interference suppression algorithms that
can be implemented in a distributed fashion. The proposed stochastic gradient
(SG) and recursive least squares (RLS) algorithms mitigate the interference by
adjusting the power levels across the relays and estimating the parameters of
the linear receiver. SG and RLS channel estimation algorithms are also derived
to determine the coefficients of the channels across the base station, the
relays and the destination terminal. The results of simulations show that the
proposed techniques obtain significant gains in performance and capacity over
non-cooperative systems and cooperative schemes with equal power allocation.
|
1301.6408 | A Universal Probability Assignment for Prediction of Individual
Sequences | cs.IT math.IT | Is it a good idea to use the frequency of events in the past, as a guide to
their frequency in the future (as we all do anyway)? In this paper the question
is attacked from the perspective of universal prediction of individual
sequences. It is shown that there is a universal sequential probability
assignment such that, for a large class of loss functions (optimization goals),
the predictor minimizing the expected loss under this probability is a good
universal predictor. The proposed probability assignment is based on randomly
dithering the empirical frequencies of states in the past, and it is easy to
show that randomization is essential. This yields a very simple universal
prediction scheme which is similar to Follow-the-Perturbed-Leader (FPL) and
works for a large class of loss functions, as well as a partial justification
for using probabilistic assumptions.
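A minimal, hypothetical sketch of the dithering idea described above (the uniform dither and the function name are illustrative stand-ins; the paper's exact perturbation distribution is not specified in the abstract):

```python
import random

def dithered_assignment(history, alphabet, rng):
    """Perturb ("dither") the empirical state frequencies with random noise
    before normalizing, FPL-style. Illustrative sketch only; the actual
    perturbation used in the paper may differ."""
    counts = {a: history.count(a) + rng.random() for a in alphabet}
    total = sum(counts.values())
    return {a: c / total for a, c in counts.items()}

rng = random.Random(0)
# A valid probability assignment that tracks, but never exactly equals,
# the empirical frequencies 3/7 and 4/7 of 'a' and 'b'.
probs = dithered_assignment("ababbab", "ab", rng)
print(probs)
```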
|
1301.6410 | Linear Programming Decoding of Spatially Coupled Codes | cs.IT math.IT | For a given family of spatially coupled codes, we prove that the LP threshold
on the BSC of the graph cover ensemble is the same as the LP threshold on the
BSC of the derived spatially coupled ensemble. This result is in contrast with
the fact that the BP threshold of the derived spatially coupled ensemble is
believed to be larger than the BP threshold of the graph cover ensemble as
noted by the work of Kudekar et al. (2011, 2012). To prove this, we establish
some properties related to the dual witness for LP decoding which was
introduced by Feldman et al. (2007) and simplified by Daskalakis et al. (2008).
More precisely, we prove that the existence of a dual witness which was
previously known to be sufficient for LP decoding success is also necessary and
is equivalent to the existence of certain acyclic hyperflows. We also derive a
sublinear (in the block length) upper bound on the weight of any edge in such
hyperflows, both for regular LDPC codes and for spatially coupled codes, and we
prove that the bound is asymptotically tight for regular LDPC codes. Moreover,
we show how to trade crossover probability for "LP excess" on all the variable
nodes, for any binary linear code.
|
1301.6412 | Random Access and Source-Channel Coding Error Exponents for Multiple
Access Channels | cs.IT math.IT | A new universal coding/decoding scheme for random access with collision
detection is given in the case of two senders. The result is used to give an
achievable joint source-channel coding error exponent for multiple access
channels in the case of independent sources. This exponent is improved in a
modified model that admits error-free zero-rate communication between the senders.
|
1301.6422 | On Connectivity Thresholds in the Intersection of Random Key Graphs on
Random Geometric Graphs | cs.IT math.CO math.IT math.PR | In a random key graph (RKG) of $n$ nodes each node is randomly assigned a key
ring of $K_n$ cryptographic keys from a pool of $P_n$ keys. Two nodes can
communicate directly if they have at least one common key in their key rings.
We assume that the $n$ nodes are distributed uniformly in $[0,1]^2.$ In
addition to the common key requirement, we require two nodes to also be within
$r_n$ of each other to be able to have a direct edge. Thus we have a random
graph in which the RKG is superposed on the familiar random geometric graph
(RGG). For such a random graph, we obtain tight bounds on the relation between
$K_n,$ $P_n$ and $r_n$ for the graph to be asymptotically almost surely
connected.
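An illustrative simulation of the model described above, not from the paper (parameter values and helper names are hypothetical; the paper's threshold results are not used here): sample the intersection of a random key graph with a random geometric graph and test connectivity.

```python
import random

class DSU:
    """Minimal union-find for connectivity testing."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def rkg_rgg_connected(n, K, P, r, seed=0):
    """Each of n nodes gets a key ring of K keys from a pool of P and a
    uniform position in [0,1]^2; an edge needs a shared key AND distance <= r.
    Returns True iff the resulting graph is connected."""
    rng = random.Random(seed)
    keys = [set(rng.sample(range(P), K)) for _ in range(n)]
    pos = [(rng.random(), rng.random()) for _ in range(n)]
    dsu = DSU(n)
    for i in range(n):
        for j in range(i + 1, n):
            dx, dy = pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]
            if dx * dx + dy * dy <= r * r and keys[i] & keys[j]:
                dsu.union(i, j)
    return len({dsu.find(i) for i in range(n)}) == 1

# Dense parameters tend to connect; a tiny key ring from a huge pool with a
# small radius almost surely does not.
print(rkg_rgg_connected(50, 5, 20, 0.6), rkg_rgg_connected(50, 1, 10**6, 0.05))
```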
|
1301.6426 | Joint Design of Channel and Network Coding for Star Networks | cs.IT math.IT | Channel coding alone is not sufficient to reliably transmit a message of
finite length $K$ from a source to one or more destinations as in, e.g., file
transfer. To ensure that no data is lost, it must be combined with rateless
erasure correcting schemes on a higher layer, such as a time-division multiple
access (TDMA) system paired with automatic repeat request (ARQ) or random
linear network coding (RLNC). We consider binary channel coding on a binary
symmetric channel (BSC) and q-ary RLNC for erasure correction in a star
network, where Y sources send messages to each other with the help of a central
relay. In this scenario RLNC has been shown to have a throughput advantage over
TDMA schemes as K and q tend to infinity. In this paper we focus on finite
block lengths and compare the expected throughputs of RLNC and TDMA. For a
total message length of K bits, which can be subdivided into blocks of smaller
size prior to channel coding, we obtain the channel coding rate and the number
of blocks that maximize the expected throughput of both RLNC and TDMA, and we
find that TDMA is more throughput-efficient for small message lengths K and
small q.
|
1301.6427 | Fundamental Inequalities and Identities Involving Mutual and Directed
Informations in Closed-Loop Systems | cs.IT math.IT | We present several novel identities and inequalities relating the mutual
information and the directed information in systems with feedback. The internal
blocks within such systems are restricted only to be causal mappings, but are
allowed to be non-linear, stochastic and time varying. Moreover, the involved
signals can be arbitrarily distributed. We bound the directed information
between signals inside the feedback loop by the mutual information between
signals inside and outside the feedback loop. This fundamental result has an
interesting interpretation as a law of conservation of information flow.
Building upon it, we derive several novel identities and inequalities, which
allow us to prove some existing information inequalities under less restrictive
assumptions. Finally, we establish new relationships between nested directed
informations inside a feedback loop. This yields a new and general
data-processing inequality for systems with feedback.
|
1301.6431 | Automatic Verification of Parameterised Interleaved Multi-Agent Systems | cs.MA cs.LO | A key problem in verification of multi-agent systems by model checking
concerns the fact that the state-space of the system grows exponentially with
the number of agents present. This makes practical model checking unfeasible
whenever the system contains more than a few agents. In this paper we put
forward a technique to establish a cutoff result, thereby showing that systems
with an arbitrary number of agents can be verified by model checking a single
system containing a number of agents equal to the cutoff.
While this problem is undecidable in general, we here define a class of
parameterised interpreted systems and a parameterised temporal-epistemic logic
for which the result can be shown. We exemplify the theoretical results on a
robotic example and present an implementation of the technique on top of mcmas,
an open-source model checker for multi-agent systems.
|
1301.6433 | Delay Minimization in Varying-Bandwidth Direct Multicast with Side
Information | cs.IT math.CO math.IT | We study the delay minimization in a direct multicast communication scheme
where a base station wishes to transmit a set of original packets to a group of
clients. Each of the clients already has in its cache a subset of the original
packets, and requests all the remaining packets. The base station
communicates directly with the clients by broadcasting information to them.
Assume that bandwidths vary between the station and different clients. We
propose a method to minimize the total delay required for the base station to
satisfy requests from all clients.
|
1301.6449 | To Obtain or not to Obtain CSI in the Presence of Hybrid Adversary | cs.IT cs.CR math.IT | We consider the wiretap channel model under the presence of a hybrid, half
duplex adversary that is capable of either jamming or eavesdropping at a given
time. We analyze the achievable rates under a variety of scenarios involving
different methods for obtaining transmitter CSI. Each method provides a
different grade of information, not only to the transmitter on the main
channel, but also to the adversary on all channels. Our analysis shows that
main CSI is more valuable for the adversary than the jamming CSI in both
delay-limited and ergodic scenarios. Interestingly, in certain cases under the
ergodic scenario, having no CSI can lead to higher achievable secrecy rates
than having CSI.
|
1301.6453 | Structured Lattice Codes for 2 \times 2 \times 2 MIMO Interference
Channel | cs.IT math.IT | We consider the 2\times 2\times 2 multiple-input multiple-output interference
channel where two source-destination pairs wish to communicate with the aid of
two intermediate relays. In this paper, we propose a novel lattice strategy
called Aligned Precoded Compute-and-Forward (PCoF). This scheme consists of two
phases: 1) Using the CoF framework based on signal alignment we transform the
Gaussian network into a deterministic finite field network. 2) Using linear
precoding (over finite field) we eliminate the end-to-end interference in the
finite field domain. Further, we exploit the algebraic structure of lattices to
enhance the performance at finite SNR, going beyond a degrees-of-freedom result
(which is also achievable by other means). We also show that Aligned PCoF
outperforms time sharing in a range of reasonably moderate SNR, with increasing
gain as SNR increases.
|
1301.6456 | A Singleton Bound for Lattice Schemes | cs.IT math.IT | In this paper, we derive a Singleton bound for lattice schemes and obtain
Singleton bounds known for binary codes and subspace codes as special cases. It
is shown that the modular structure affects the strength of the Singleton
bound. We also obtain a new upper bound on the code size for non-constant
dimension codes. The plots of this bound along with plots of the code sizes of
known non-constant dimension codes in the literature reveal that our bound is
tight for certain parameters of the code.
|
1301.6465 | Extendable MDL | cs.IT math.IT math.ST stat.TH | In this paper we show that the combination of the minimum description length
principle and an exchangeability condition leads directly to the use of
Jeffreys prior. This approach works in most cases even when Jeffreys prior
cannot be normalized. Kraft's inequality links codes and distributions, but a
closer look at this inequality demonstrates that this link only makes sense
when sequences are considered as prefixes of potentially longer sequences. For
technical reasons, only results for exponential families are stated. Results on
when Jeffreys prior can be normalized after conditioning on an initializing
string are given. An exotic case where no initial string allows Jeffreys prior
to be normalized is presented, and some ways of handling such exotic cases are
discussed.
|
1301.6467 | Non-Asymptotic and Second-Order Achievability Bounds for Coding With
Side-Information | cs.IT math.IT | We present novel non-asymptotic or finite blocklength achievability bounds
for three side-information problems in network information theory. These
include (i) the Wyner-Ahlswede-Korner (WAK) problem of almost-lossless source
coding with rate-limited side-information, (ii) the Wyner-Ziv (WZ) problem of
lossy source coding with side-information at the decoder and (iii) the
Gel'fand-Pinsker (GP) problem of channel coding with noncausal state
information available at the encoder. The bounds are proved using ideas from
channel simulation and channel resolvability. Our bounds for all three problems
improve on all previous non-asymptotic bounds on the error probability of the
WAK, WZ and GP problems--in particular those derived by Verdu. Using our novel
non-asymptotic bounds, we recover the general formulas for the optimal rates of
these side-information problems. Finally, we also present achievable
second-order coding rates by applying the multidimensional Berry-Esseen theorem
to our new non-asymptotic bounds. Numerical results show that the second-order
coding rates obtained using our non-asymptotic achievability bounds are
superior to those obtained using existing finite blocklength bounds.
|
1301.6471 | Generalizing the Sampling Property of the Q-function for Error Rate
Analysis of Cooperative Communication in Fading Channels | cs.IT math.IT | This paper extends some approximation methods that are used to identify
closed form Bit Error Rate (BER) expressions which are frequently utilized in
investigation and comparison of performance for wireless communication systems
in the literature. By using this group of approximation methods, some
expectation integrals, which are complicated to analyze and have high
computational complexity to evaluate through Monte Carlo simulations, are
computed. For these integrals, by using the sampling property of the integrand
functions of one or more arguments, reliable BER expressions revealing the
diversity and coding gains are derived. Although the methods we present are
valid for a larger class of integration problems, in this work we show the step
by step derivation of the BER expressions for a canonical cooperative
communication scenario in addition to a network coded system starting from
basic building blocks. The derived expressions agree with the simulation
results for a very wide range of signal-to-noise ratio (SNR) values.
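As a hypothetical numeric illustration of the kind of expectation integral the abstract refers to (this is the standard textbook BPSK-over-Rayleigh example, not the paper's derivation; function names are invented), one can check the closed-form average BER against a Monte Carlo estimate:

```python
import math
import random

def Q(x):
    # Gaussian tail function Q(x) = 0.5 * erfc(x / sqrt(2))
    return 0.5 * math.erfc(x / math.sqrt(2))

def bpsk_rayleigh_ber_mc(avg_snr, n=200000, seed=1):
    # Monte Carlo average of Q(sqrt(2*gamma)), gamma ~ Exp with mean avg_snr
    rng = random.Random(seed)
    return sum(Q(math.sqrt(2 * rng.expovariate(1 / avg_snr)))
               for _ in range(n)) / n

def bpsk_rayleigh_ber_exact(avg_snr):
    # Standard closed form for average BPSK BER in flat Rayleigh fading
    return 0.5 * (1 - math.sqrt(avg_snr / (1 + avg_snr)))

g = 10.0  # average SNR (linear scale)
print(bpsk_rayleigh_ber_mc(g), bpsk_rayleigh_ber_exact(g))
```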
|
1301.6473 | On the precoder design of a wireless energy harvesting node in linear
vector Gaussian channels with arbitrary input distribution | cs.IT math.IT | A Wireless Energy Harvesting Node (WEHN) operating in linear vector Gaussian
channels with arbitrarily distributed input symbols is considered in this
paper. The precoding strategy that maximizes the mutual information along N
independent channel accesses is studied under non-causal knowledge of the
channel state and harvested energy (commonly known as offline approach). It is
shown that, at each channel use, the left singular vectors of the precoder are
equal to the eigenvectors of the Gram channel matrix. Additionally, an
expression that relates the optimal singular values of the precoder with the
energy harvesting profile through the Minimum Mean-Square Error (MMSE) matrix
is obtained. Then, the specific situation in which the right singular vectors
of the precoder are set to the identity matrix is considered. In this scenario,
the optimal offline power allocation, named Mercury Water-Flowing, is derived
and an intuitive graphical representation is presented. Two optimal offline
algorithms to compute the Mercury Water-Flowing solution are proposed and an
exhaustive study of their computational complexity is performed. Moreover, an
online algorithm is designed, which only uses causal knowledge of the harvested
energy and channel state. Finally, the achieved mutual information is evaluated
through simulation.
|
1301.6479 | Ontology-based Data Access: A Study through Disjunctive Datalog, CSP,
and MMSNP | cs.DB cs.AI | Ontology-based data access is concerned with querying incomplete data sources
in the presence of domain-specific knowledge provided by an ontology. A central
notion in this setting is that of an ontology-mediated query, which is a
database query coupled with an ontology. In this paper, we study several
classes of ontology-mediated queries, where the database queries are given as
some form of conjunctive query and the ontologies are formulated in description
logics or other relevant fragments of first-order logic, such as the guarded
fragment and the unary-negation fragment. The contributions of the paper are
three-fold. First, we characterize the expressive power of ontology-mediated
queries in terms of fragments of disjunctive datalog. Second, we establish
intimate connections between ontology-mediated queries and constraint
satisfaction problems (CSPs) and their logical generalization, MMSNP formulas.
Third, we exploit these connections to obtain new results regarding (i)
first-order rewritability and datalog-rewritability of ontology-mediated
queries, (ii) P/NP dichotomies for ontology-mediated queries, and (iii) the
query containment problem for ontology-mediated queries.
|
1301.6484 | Perspectives on Balanced Sequences | cs.IT math.IT | We examine and compare several different classes of "balanced" block codes
over q-ary alphabets, namely symbol-balanced (SB) codes, charge-balanced (CB)
codes, and polarity-balanced (PB) codes. Known results on the maximum size and
asymptotic minimal redundancy of SB and CB codes are reviewed. We then
determine the maximum size and asymptotic minimal redundancy of PB codes and of
codes which are both CB and PB. We also propose efficient Knuth-like encoders
and decoders for all these types of balanced codes.
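A minimal binary sketch of the classic Knuth balancing step on which such encoders are built (binary illustration only; the q-ary SB/CB/PB encoders in the paper are more general, and the linear search below is the naive variant):

```python
def knuth_balance(bits):
    """Flip a prefix of length k so the word has equal numbers of zeros and
    ones. Such a k always exists for even-length words: as k grows by 1,
    the weight changes by exactly 1, so it must pass through n/2."""
    n = len(bits)
    assert n % 2 == 0, "word length must be even"
    for k in range(n + 1):
        candidate = [1 - b for b in bits[:k]] + list(bits[k:])
        if sum(candidate) == n // 2:
            # k itself is encoded and sent alongside the balanced word
            return candidate, k
    raise AssertionError("unreachable: a balancing prefix always exists")

print(knuth_balance([1, 1, 1, 1]))  # ([0, 0, 1, 1], 2)
```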
|
1301.6491 | SINR-based k-coverage probability in cellular networks with arbitrary
shadowing | cs.NI cs.IT math.IT math.PR | We give numerically tractable, explicit integral expressions for the
distribution of the signal-to-interference-and-noise-ratio (SINR) experienced
by a typical user in the down-link channel from the k-th strongest base
stations of a cellular network modelled by a Poisson point process on the plane.
Our signal propagation-loss model comprises a power-law path-loss function
with arbitrarily distributed shadowing, independent across all base stations,
with and without Rayleigh fading. Our results are valid in the whole domain of
SINR, in particular for SINR<1, where one observes multiple coverage. In this
latter aspect our paper complements previous studies reported in [Dhillon et
al. JSAC 2012].
|
1301.6512 | Secrecy in the 2-User Symmetric Deterministic Interference Channel with
Transmitter Cooperation | cs.IT math.IT | This work presents novel achievable schemes for the 2-user symmetric linear
deterministic interference channel with limited-rate transmitter cooperation
and perfect secrecy constraints at the receivers. The proposed achievable
scheme consists of a combination of interference cancelation, relaying of the
other user's data bits, time sharing, and transmission of random bits,
depending on the rate of the cooperative link and the relative strengths of the
signal and the interference. The results show, for example, that the proposed
scheme achieves the same rate as the capacity without the secrecy constraints,
in the initial part of the weak interference regime. Also, sharing random bits
through the cooperative link can achieve a higher secrecy rate compared to
sharing data bits, in the very high interference regime. The results highlight
the importance of limited transmitter cooperation in facilitating secure
communications over 2-user interference channels.
|
1301.6520 | Variational Equalities of Directed Information and Applications | cs.IT math.IT | In this paper we introduce two variational equalities of directed
information, which are analogous to those of mutual information employed in the
Blahut-Arimoto Algorithm (BAA). Subsequently, we introduce nonanticipative Rate
Distortion Function (RDF) ${R}^{na}_{0,n}(D)$ defined via directed information
introduced in [1], and we establish its equivalence to Gorbunov-Pinsker's
nonanticipatory $\epsilon$-entropy $R^{\varepsilon}_{0,n}(D)$. By invoking
certain results we first establish existence of the infimizing reproduction
distribution for ${R}^{na}_{0,n}(D)$, and then we give its implicit form for
the stationary case. Finally, we utilize one of the variational equalities and
the closed form expression of the optimal reproduction distribution to provide
an algorithm for the computation of ${R}^{na}_{0,n}(D)$.
|
1301.6522 | Optimal Nonstationary Reproduction Distribution for Nonanticipative RDF
on Abstract Alphabets | cs.IT cs.SY math.IT | In this paper we introduce a definition for nonanticipative Rate Distortion
Function (RDF) on abstract alphabets, and we invoke weak convergence of
probability measures to show several of its properties, such as existence of
the optimal reproduction conditional distribution, compactness of the fidelity
set, lower semicontinuity of the RDF functional, etc. Further, we derive the
closed form expression of the optimal nonstationary reproduction distribution.
This expression is computed recursively backward in time. Throughout the paper
we point out an operational meaning of the nonanticipative RDF by recalling the
coding theorem derived in \cite{tatikonda2000}, and we state relations to
Gorbunov-Pinsker's nonanticipatory $\epsilon-$entropy \cite{gorbunov-pinsker}.
|
1301.6529 | Generalised Multi-sequence Shift-Register Synthesis using Module
Minimisation | cs.IT math.IT | We show how to solve a generalised version of the Multi-sequence Linear
Feedback Shift-Register (MLFSR) problem using minimisation of free modules over
$\mathbb F[x]$. We show how two existing algorithms for minimising such modules
run particularly fast on these instances. Furthermore, we show how one of them
can be made even faster for our use. With our modeling of the problem,
classical algebraic results tremendously simplify arguing about the algorithms.
For the non-generalised MLFSR, these algorithms are as fast as what is
currently known. We then use our generalised MLFSR to give a new fast decoding
algorithm for Reed-Solomon codes.
|
1301.6574 | Self-Organizing Map and social networks: Unfolding online social
popularity | cs.SI physics.soc-ph | The present study uses the Kohonen self-organizing map (SOM) to represent the
popularity patterns of Myspace music artists from their attributes on the
platform and their position in the social network. The method is applied to
cluster the profiles (the nodes of the social network) and the best friendship
links (the edges). It shows that the SOM is an efficient tool to interpret the
complex links between the audience and the influence of the musicians. It
finally provides a robust classifier of the online social network behaviors.
|
1301.6587 | Information Theoretic Cut-set Bounds on the Capacity of Poisson Wireless
Networks | cs.IT cs.NI math.IT | This paper presents a stochastic geometry model for the investigation of
fundamental information theoretic limitations in wireless networks. We derive a
new unified multi-parameter cut-set bound on the capacity of networks of
arbitrary Poisson node density, size, power and bandwidth, under fast fading in
a rich scattering environment. In other words, we upper-bound the optimal
performance, under any scheme, in terms of the total communication rate that can
be achieved between a subset of network nodes (defined by the cut) and all the
remaining nodes. Additionally, we identify four different operating regimes,
depending on the magnitude of the long-range and short-range signal to noise
ratios. Thus, we confirm previously known scaling laws (e.g., in bandwidth
and/or power limited wireless networks), and we extend them with specific
bounds. Finally, we use our results to provide specific numerical examples.
|
1301.6588 | Tradeoffs for reliable quantum information storage in surface codes and
color codes | quant-ph cs.IT math.IT | The family of hyperbolic surface codes is one of the rare families of quantum
LDPC codes with non-zero rate and unbounded minimum distance. First, we
introduce a family of hyperbolic color codes. This produces a new family of
quantum LDPC codes with non-zero rate and with minimum distance logarithmic in
the blocklength. Second, we study the tradeoff between the length n, the number
of encoded qubits k and the distance d of surface codes and color codes. We
prove that kd^2 is upper bounded by C(log k)^2n, where C is a constant that
depends only on the row weight of the parity-check matrix. Our results prove
that the best asymptotic minimum distance of LDPC surface codes and color codes
with non-zero rate is logarithmic in the length.
|
1301.6589 | Energy-Efficient Communication in the Presence of Synchronization Errors | cs.IT math.IT | Communication systems are traditionally designed to have tight
transmitter-receiver synchronization. This requirement has negligible overhead
in the high-SNR regime. However, in many applications, such as wireless sensor
networks, communication needs to happen primarily in the energy-efficient
regime of low SNR, where requiring tight synchronization can be highly
suboptimal.
In this paper, we model the noisy channel with synchronization errors as an
insertion/deletion/substitution channel. For this channel, we propose a new
communication scheme that requires only loose transmitter-receiver
synchronization. We show that the proposed scheme is asymptotically optimal for
the Gaussian channel with synchronization errors in terms of energy efficiency
as measured by the rate per unit energy. In the process, we also establish that
the lack of synchronization causes negligible loss in energy efficiency. We
further show that, for a general discrete memoryless channel with
synchronization errors and a general input cost function admitting a zero-cost
symbol, the rate per unit cost achieved by the proposed scheme is within a
factor two of the information-theoretic optimum.
|
1301.6591 | PDF articles metadata harvester | cs.DL cs.IR | Scientific journals are very important in recording the findings of
researchers around the world. The most common medium for disseminating
scientific journals is PDF. One scheme for finding scientific journals over the
internet is via metadata, which stores summary information about an article.
Embedding metadata into the PDF of a scientific article guarantees consistent
metadata readability. Harvesting metadata from scientific journals is a very
interesting field at the moment. This paper discusses scientific journal
metadata harvesters involving XMP.
|
1301.6599 | An Upper Bound on the Capacity of non-Binary Deletion Channels | cs.IT math.IT | We derive an upper bound on the capacity of non-binary deletion channels.
Although binary deletion channels have received significant attention over the
years, and many upper and lower bounds on their capacity have been derived,
such studies for the non-binary case are largely missing. The state of the art
is the following: as a trivial upper bound, the capacity of an erasure channel
with the same input alphabet as the deletion channel can be used, and as a lower
bound the results by Diggavi and Grossglauser are available. In this paper, we
derive the first non-trivial non-binary deletion channel capacity upper bound
and reduce the gap with the existing achievable rates. To derive the results we
first prove an inequality between the capacity of a 2K-ary deletion channel
with deletion probability $d$, denoted by $C_{2K}(d)$, and the capacity of the
binary deletion channel with the same deletion probability, $C_2(d)$, that is,
$C_{2K}(d)\leq C_2(d)+(1-d)\log(K)$. Then by employing some existing upper
bounds on the capacity of the binary deletion channel, we obtain upper bounds
on the capacity of the 2K-ary deletion channel. We illustrate via examples the
use of the new bounds and discuss their asymptotic behavior as $d \rightarrow
0$.
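A quick numeric sketch of the bound stated above. As a hypothetical stand-in for the tighter binary bounds the paper employs, the trivial erasure bound $C_2(d)\leq 1-d$ is used here; with that stand-in the new bound coincides with the 2K-ary erasure bound, and any tighter $C_2(d)$ bound strictly improves on it:

```python
import math

def deletion_upper_bound_2K(d, K, binary_bound):
    """C_{2K}(d) <= C_2(d) + (1-d)*log2(K), for any valid upper bound
    binary_bound on the binary deletion channel capacity C_2(d)."""
    return binary_bound(d) + (1 - d) * math.log2(K)

# Trivial binary erasure bound (illustrative stand-in, not the paper's choice)
erasure_c2 = lambda d: 1 - d

d, K = 0.1, 2  # a 4-ary deletion channel
new_bound = deletion_upper_bound_2K(d, K, erasure_c2)
erasure_bound_4ary = (1 - d) * math.log2(2 * K)  # trivial 4-ary erasure bound
print(new_bound, erasure_bound_4ary)  # equal here: 1.8 and 1.8
```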
|
1301.6600 | Weighted Sum Rate Maximization for Downlink OFDMA with Subcarrier-pair
based Opportunistic DF Relaying | cs.SY | This paper addresses a weighted sum rate (WSR) maximization problem for
downlink OFDMA aided by a decode-and-forward (DF) relay under a total power
constraint. A novel subcarrier-pair based opportunistic DF relaying protocol is
proposed. Specifically, user message bits are transmitted in two time slots. A
subcarrier in the first slot can be paired with a subcarrier in the second slot
for the DF relay-aided transmission to a user. In particular, the source and
the relay can transmit simultaneously to implement beamforming at the
subcarrier in the second slot. Each unpaired subcarrier in either the first or
second slot is used for the source's direct transmission to a user. A benchmark
protocol, same as the proposed one except that the transmit beamforming is not
used for the relay-aided transmission, is also considered. For each protocol, a
polynomial-complexity algorithm is developed to find at least an approximately
optimum resource allocation (RA), by using continuous relaxation, the dual
method, and Hungarian algorithm. Instrumental to the algorithm design is an
elegant definition of optimization variables, motivated by the idea of
regarding the unpaired subcarriers as virtual subcarrier pairs in the direct
transmission mode. The effectiveness of the RA algorithm and the impact of
relay position and total power on the protocols' performance are illustrated by
numerical experiments. The proposed protocol always leads to a maximum WSR
equal to or greater than that for the benchmark one, and the performance gain
of using the proposed one is significant especially when the relay is in close
proximity to the source and the total power is low. Theoretical analysis is
presented to interpret these observations.
|
1301.6626 | Discriminative Feature Selection for Uncertain Graph Classification | cs.LG cs.DB stat.ML | Mining discriminative features for graph data has attracted much attention in
recent years due to its important role in constructing graph classifiers,
generating graph indices, etc. Most measurements of the interestingness of
discriminative subgraph features are defined on certain graphs, where the
structure of the graph objects is certain, and the binary edges within each graph
represent the "presence" of linkages among the nodes. In many real-world
applications, however, the linkage structure of the graphs is inherently
uncertain. Therefore, existing measurements of interestingness based upon
certain graphs are unable to capture the structural uncertainty in these
applications effectively. In this paper, we study the problem of discriminative
subgraph feature selection from uncertain graphs. This problem is challenging
and different from conventional subgraph mining problems because both the
structure of the graph objects and the discrimination score of each subgraph
feature are uncertain. To address these challenges, we propose a novel
discriminative subgraph feature selection method, DUG, which can find
discriminative subgraph features in uncertain graphs based upon different
statistical measures including expectation, median, mode and phi-probability.
We first compute the probability distribution of the discrimination scores for
each subgraph feature based on dynamic programming. Then a branch-and-bound
algorithm is proposed to search for discriminative subgraphs efficiently.
Extensive experiments on various neuroimaging applications (i.e., Alzheimer's
Disease, ADHD and HIV) have been performed to analyze the gain in performance
by taking into account structural uncertainties in identifying discriminative
subgraph features for graph classification.
|
1301.6630 | Political Disaffection: a case study on the Italian Twitter community | cs.SI cs.LG physics.soc-ph | In our work we analyse political disaffection, or "the subjective feeling
of powerlessness, cynicism, and lack of confidence in the political process,
politicians, and democratic institutions, but with no questioning of the
political regime" by exploiting Twitter data through machine learning
techniques. In order to validate the quality of the time-series generated by
the Twitter data, we highlight the relations of these data with political
disaffection as measured by means of public opinion surveys. Moreover, we show
that important political news in Italian newspapers is often correlated with
the highest peaks of the produced time-series.
|
1301.6643 | On the Performance of Low Density Parity Check Codes for Gaussian
Interference Channels | cs.IT math.IT | In this paper, the two-user Gaussian interference channel (GIC) is revisited with
the objective of developing implementable (explicit) channel codes.
Specifically, low density parity check (LDPC) codes are adopted for use over
these channels, and their benefits are studied. Different scenarios on the
level of interference are considered. In particular, for strong interference
channel examples with binary phase shift keying (BPSK), it is demonstrated that
rates better than those offered by single user codes with time sharing are
achievable. Promising results are also observed with quadrature phase-shift
keying (QPSK). Under general interference, a Han-Kobayashi-based coding scheme is
employed, splitting the information into public and private parts and utilizing
appropriate iterative decoders at the receivers. Using QPSK modulation at the
two transmitters, it is shown that rate points higher than those achievable by
time sharing are obtained.
|
1301.6646 | Image registration with sparse approximations in parametric dictionaries | cs.CV | We examine in this paper the problem of image registration from the new
perspective where images are given by sparse approximations in parametric
dictionaries of geometric functions. We propose a registration algorithm that
looks for an estimate of the global transformation between sparse images by
examining the set of relative geometrical transformations between the
respective features. We propose a theoretical analysis of our registration
algorithm and we derive performance guarantees based on two novel important
properties of redundant dictionaries, namely the robust linear independence and
the transformation inconsistency. We propose several illustrations and insights
about the importance of these dictionary properties and show that common
properties such as coherence or restricted isometry property fail to provide
sufficient information in registration problems. We finally show with
illustrative experiments on simple visual objects and handwritten digits images
that our algorithm outperforms baseline competitor methods in terms of
transformation-invariant distance computation and classification.
|
1301.6648 | Generalized Bregman Divergence and Gradient of Mutual Information for
Vector Poisson Channels | cs.IT math.IT stat.ML | We investigate connections between information-theoretic and
estimation-theoretic quantities in vector Poisson channel models. In
particular, we generalize the gradient of mutual information with respect to
key system parameters from the scalar to the vector Poisson channel model. We
also propose, as another contribution, a generalization of the classical
Bregman divergence that offers a means to encapsulate under a unifying
framework the gradient of mutual information results for scalar and vector
Poisson and Gaussian channel models. The so-called generalized Bregman
divergence is also shown to exhibit various properties akin to the properties
of the classical version. The vector Poisson channel model is drawing
considerable attention in view of its application in various domains: as an
example, the availability of the gradient of mutual information can be used in
conjunction with gradient descent methods to effect compressive-sensing
projection designs in emerging X-ray and document classification applications.
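For reference (our restatement, not text from the abstract): the classical Bregman divergence generated by a strictly convex, differentiable function \varphi is

```latex
D_\varphi(x, y) \;=\; \varphi(x) - \varphi(y) - \langle \nabla \varphi(y),\, x - y \rangle ,
```

and the choice \varphi(x) = \sum_i x_i \log x_i (negative entropy) recovers the generalized Kullback-Leibler divergence \sum_i \bigl( x_i \log(x_i / y_i) - x_i + y_i \bigr), the divergence naturally matched to Poisson models; the generalized Bregman divergence proposed in the paper extends this classical construction.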
|
1301.6658 | Minimum Relative Entropy for Quantum Estimation: Feasibility and General
Solution | quant-ph cs.IT math.IT | We propose a general framework for solving quantum state estimation problems
using the minimum relative entropy criterion. A convex optimization approach
allows us to decide the feasibility of the problem given the data and, whenever
necessary, to relax the constraints in order to allow for a physically
admissible solution. Building on these results, the variational analysis can be
completed ensuring existence and uniqueness of the optimum. The latter can then
be computed by efficient standard algorithms for convex optimization,
without resorting to approximate methods or restrictive assumptions on its
rank.
|
1301.6659 | Clustering-Based Matrix Factorization | cs.LG | Recommender systems are emerging technologies that nowadays can be found in
many applications such as Amazon, Netflix, and so on. These systems help users
to find relevant information, recommendations, and their preferred items.
Even a slight improvement in the accuracy of these recommenders can greatly
affect the quality of recommendations. Matrix Factorization is a popular method
in recommendation systems, showing promising results in accuracy and complexity.
In this paper we propose an extension of matrix factorization which adds general
neighborhood information to the recommendation model. Users and items are
clustered into different categories to see how these categories share
preferences. We then employ these shared interests of categories in a fusion by
Biased Matrix Factorization to achieve more accurate recommendations. This is a
complement for the current neighborhood aware matrix factorization models which
rely on using direct neighborhood information of users and items. The proposed
model is tested on two well-known recommendation system datasets: Movielens100k
and Netflix. Our experiment shows applying the general latent features of
categories into factorized recommender models improves the accuracy of
recommendations. The current neighborhood-aware models need a great number of
neighbors to achieve good accuracy. To the best of our knowledge, the proposed
model is better than, or comparable with, the current neighborhood-aware models
when they consider fewer neighbors.
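To make the baseline concrete, here is a minimal biased matrix factorization trained by SGD in plain Python. It is a sketch of the standard model being extended, not of the clustering fusion itself; the function names, hyperparameters, and toy ratings are our own illustrative choices.

```python
import random

def biased_mf(ratings, n_users, n_items, k=2, lr=0.01, reg=0.02,
              epochs=500, seed=0):
    """Plain biased matrix factorization trained by SGD.

    ratings: list of (user, item, rating) triples.  The shared
    cluster-level factors of the proposed model are omitted; this is
    only the standard baseline.
    """
    rng = random.Random(seed)
    mu = sum(r for _, _, r in ratings) / len(ratings)     # global mean
    bu, bi = [0.0] * n_users, [0.0] * n_items             # user/item biases
    P = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
    Q = [[rng.gauss(0, 0.1) for _ in range(k)] for _ in range(n_items)]
    for _ in range(epochs):
        for u, i, r in ratings:
            pred = mu + bu[u] + bi[i] + sum(P[u][f] * Q[i][f] for f in range(k))
            e = r - pred
            bu[u] += lr * (e - reg * bu[u])
            bi[i] += lr * (e - reg * bi[i])
            for f in range(k):
                pu, qi = P[u][f], Q[i][f]
                P[u][f] += lr * (e * qi - reg * pu)
                Q[i][f] += lr * (e * pu - reg * qi)
    return mu, bu, bi, P, Q

def predict(model, u, i):
    mu, bu, bi, P, Q = model
    return mu + bu[u] + bi[i] + sum(p * q for p, q in zip(P[u], Q[i]))
```

The clustering extension described above would add shared cluster-level factor vectors to the prediction rule.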
|
1301.6662 | On Time-optimal Trajectories for a Car-like Robot with One Trailer | math.OC cs.RO | In addition to the theoretical value of challenging optimal control problems,
recent progress in autonomous vehicles mandates further research in optimal
motion planning for wheeled vehicles. Since current numerical optimal control
techniques suffer from either the curse of dimensionality, e.g. the
Hamilton-Jacobi-Bellman equation, or the curse of complexity, e.g.
pseudospectral optimal control and max-plus methods, analytical
characterization of geodesics for wheeled vehicles becomes important not only
from a theoretical point of view but also from a practical one. Such an
analytical characterization provides a fast motion planning algorithm that can
be used in robust feedback loops. In this work, we use the Pontryagin Maximum
Principle to characterize extremal trajectories, i.e. candidate geodesics, for
a car-like robot with one trailer. We use time as the distance function. In
spite of partial progress, this problem has remained open in the past two
decades. Besides straight motion and turn with maximum allowed curvature, we
identify planar elastica as the third piece of motion that occurs along our
extremals. We give a detailed characterization of such curves, a special case
of which, called \emph{merging curve}, connects maximum curvature turns to
straight line segments. The structure of extremals in our case is revealed
through analytical integration of the system and adjoint equations.
|
1301.6675 | A Temporal Bayesian Network for Diagnosis and Prediction | cs.AI | Diagnosis and prediction in some domains, like medical and industrial
diagnosis, require a representation that combines uncertainty management and
temporal reasoning. Based on the fact that in many cases there are few state
changes in the temporal range of interest, we propose a novel representation
called Temporal Nodes Bayesian Networks (TNBN). In a TNBN each node represents
an event or state change of a variable, and an arc corresponds to a
causal-temporal relationship. The temporal intervals can differ in number and
size for each temporal node, so this allows multiple granularity. Our approach
is contrasted with a dynamic Bayesian network for a simple medical example. An
empirical evaluation is presented for a more complex problem, a subsystem of a
fossil power plant, in which this approach is used for fault diagnosis and
prediction with good results.
|
1301.6676 | Inferring Parameters and Structure of Latent Variable Models by
Variational Bayes | cs.LG stat.ML | Current methods for learning graphical models with latent variables and a
fixed structure estimate optimal values for the model parameters. Whereas this
approach usually produces overfitting and suboptimal generalization
performance, carrying out the Bayesian program of computing the full posterior
distributions over the parameters remains a difficult problem. Moreover,
learning the structure of models with latent variables, for which the Bayesian
approach is crucial, is yet a harder problem. In this paper I present the
Variational Bayes framework, which provides a solution to these problems. This
approach approximates full posterior distributions over model parameters and
structures, as well as latent variables, in an analytical manner without
resorting to sampling methods. Unlike in the Laplace approximation, these
posteriors are generally non-Gaussian and no Hessian needs to be computed. The
resulting algorithm generalizes the standard Expectation Maximization
algorithm, and its convergence is guaranteed. I demonstrate that this algorithm
can be applied to a large class of models in several domains, including
unsupervised clustering and blind source separation.
|
1301.6677 | Relative Loss Bounds for On-line Density Estimation with the Exponential
Family of Distributions | cs.LG stat.ML | We consider on-line density estimation with a parameterized density from the
exponential family. The on-line algorithm receives one example at a time and
maintains a parameter that is essentially an average of the past examples.
After receiving an example the algorithm incurs a loss which is the negative
log-likelihood of the example w.r.t. the past parameter of the algorithm. An
off-line algorithm can choose the best parameter based on all the examples. We
prove bounds on the additional total loss of the on-line algorithm over the
total loss of the off-line algorithm. These relative loss bounds hold for an
arbitrary sequence of examples. The goal is to design algorithms with the best
possible relative loss bounds. We use a certain divergence to derive and
analyze the algorithms. This divergence is a relative entropy between two
exponential distributions.
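A minimal instance of this setting (our own sketch, with a unit-variance Gaussian as the exponential-family member) shows the running-average on-line algorithm and its relative loss against the off-line comparator:

```python
import math

def online_gaussian_regret(xs):
    """On-line density estimation with a unit-variance Gaussian.

    The on-line parameter is the running mean of past examples (initial
    guess 0); the loss on each example is its negative log-likelihood
    under the parameter held *before* seeing it.  The off-line comparator
    uses the single best mean in hindsight (the overall sample mean).
    Returns the relative loss: on-line total minus off-line total.
    """
    mu, n, online_loss = 0.0, 0, 0.0
    for x in xs:
        online_loss += 0.5 * (x - mu) ** 2 + 0.5 * math.log(2 * math.pi)
        n += 1
        mu += (x - mu) / n        # incorporate x only after paying the loss
    best = sum(xs) / len(xs)
    offline_loss = sum(0.5 * (x - best) ** 2 + 0.5 * math.log(2 * math.pi)
                       for x in xs)
    return online_loss - offline_loss
```

For the sequence 1, 2, 3, 4 the additional loss works out to 1.625; relative loss bounds of the kind proved in the paper control such quantities for arbitrary sequences.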
|
1301.6678 | An Application of Uncertain Reasoning to Requirements Engineering | cs.SE cs.AI | This paper examines the use of Bayesian Networks to tackle one of the tougher
problems in requirements engineering, translating user requirements into system
requirements. The approach taken is to model domain knowledge as Bayesian
Network fragments that are glued together to form a complete view of the domain
specific system requirements. User requirements are introduced as evidence and
the propagation of belief is used to determine the appropriate system
requirements as indicated by user requirements. This concept has been
demonstrated in the development of a system specification and the results are
presented here.
|
1301.6679 | Possibilistic logic bases and possibilistic graphs | cs.AI | Possibilistic logic bases and possibilistic graphs are two different
frameworks of interest for representing knowledge. The former stratifies the
pieces of knowledge (expressed by logical formulas) according to their level of
certainty, while the latter exhibits relationships between variables. The two
types of representations are semantically equivalent when they lead to the same
possibility distribution (which rank-orders the possible interpretations). A
possibility distribution can be decomposed using a chain rule which may be
based on two different kinds of conditioning which exist in possibility theory
(one based on product in a numerical setting, one based on minimum operation in
a qualitative setting). These two types of conditioning induce two kinds of
possibilistic graphs. In both cases, a translation of these graphs into
possibilistic bases is provided. The converse translation from a possibilistic
knowledge base into a min-based graph is also described.
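For the reader's convenience, the two chain rules referred to above (standard possibility theory, restated here rather than quoted from the abstract) decompose a joint possibility distribution over variables x_1, ..., x_n with parent sets pa(x_i) as

```latex
\pi(x_1,\dots,x_n) \;=\; \min_{i}\, \pi\bigl(x_i \mid pa(x_i)\bigr)
\quad \text{(min-based, qualitative setting)},
\qquad
\pi(x_1,\dots,x_n) \;=\; \prod_{i} \pi\bigl(x_i \mid pa(x_i)\bigr)
\quad \text{(product-based, numerical setting)}.
```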
|
1301.6680 | Artificial Decision Making Under Uncertainty in Intelligent Buildings | cs.AI | Our hypothesis is that by equipping certain agents in a multi-agent system
controlling an intelligent building with automated decision support, two
important factors will be increased. The first is energy saving in the
building. The second is customer value---how the people in the building
experience the effects of the actions of the agents. We give evidence for the
truth of this hypothesis through experimental findings related to tools for
artificial decision making. A number of assumptions related to agent control,
through monitoring and delegation of tasks to other kinds of agents, of rooms
at a test site are relaxed. Each assumption controls at least one uncertainty
that complicates considerably the procedures for selecting actions part of each
such agent. We show that in realistic decision situations, room-controlling
agents can make bounded rational decisions even under dynamic real-time
constraints. This result can be, and has been, generalized to other domains
with even harsher time constraints.
|
1301.6681 | Reasoning With Conditional Ceteris Paribus Preference Statements | cs.AI | In many domains it is desirable to assess the preferences of users in a
qualitative rather than quantitative way. Such representations of qualitative
preference orderings form an important component of automated decision tools.
We propose a graphical representation of preferences that reflects conditional
dependence and independence of preference statements under a ceteris paribus
(all else being equal) interpretation. Such a representation is often compact
and arguably natural. We describe several search algorithms for dominance
testing based on this representation; these algorithms are quite effective,
especially in specific network topologies, such as chain- and tree-structured
networks, as well as polytrees.
|
1301.6682 | Continuous Value Function Approximation for Sequential Bidding Policies | cs.AI cs.GT | Market-based mechanisms such as auctions are being studied as an appropriate
means for resource allocation in distributed and multiagent decision problems.
When agents value resources in combination rather than in isolation, they must
often deliberate about appropriate bidding strategies for a sequence of
auctions offering resources of interest. We briefly describe a discrete dynamic
programming model for constructing appropriate bidding policies for resources
exhibiting both complementarities and substitutability. We then introduce a
continuous approximation of this model, assuming that money (or the numeraire
good) is infinitely divisible. Though this has the potential to reduce the
computational cost of computing policies, value functions in the transformed
problem do not have a convenient closed form representation. We develop {em
grid-based} approximation for such value functions, representing value
functions using piecewise linear approximations. We show that these methods can
offer significant computational savings with relatively small cost in solution
quality.
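The grid-based representation can be sketched in a few lines of Python: store the value function at a finite set of money levels and interpolate linearly in between. The grid, values, and function name below are illustrative assumptions, not taken from the paper.

```python
import bisect

def pwl_value(grid, values, m):
    """Piecewise-linear value function over the numeraire good m.

    grid: sorted breakpoints; values: value at each breakpoint.  Between
    breakpoints the value is linearly interpolated; outside the grid it
    is clamped to the end values.
    """
    if m <= grid[0]:
        return values[0]
    if m >= grid[-1]:
        return values[-1]
    j = bisect.bisect_right(grid, m)      # first breakpoint above m
    t = (m - grid[j - 1]) / (grid[j] - grid[j - 1])
    return (1 - t) * values[j - 1] + t * values[j]
```

Backward induction over auctions would then update the values stored at the breakpoints, keeping evaluation cost constant per query.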
|
1301.6683 | Discovering the Hidden Structure of Complex Dynamic Systems | cs.AI cs.LG | Dynamic Bayesian networks provide a compact and natural representation for
complex dynamic systems. However, in many cases, there is no expert available
from whom a model can be elicited. Learning provides an alternative approach
for constructing models of dynamic systems. In this paper, we address some of
the crucial computational aspects of learning the structure of dynamic systems,
particularly those where some relevant variables are partially observed or even
entirely unknown. Our approach is based on the Structural Expectation
Maximization (SEM) algorithm. The main computational cost of the SEM algorithm
is the gathering of expected sufficient statistics. We propose a novel
approximation scheme that allows these sufficient statistics to be computed
efficiently. We also investigate the fundamental problem of discovering the
existence of hidden variables without exhaustive and expensive search. Our
approach is based on the observation that, in dynamic systems, ignoring a
hidden variable typically results in a violation of the Markov property. Thus,
our algorithm searches for such violations in the data, and introduces hidden
variables to explain them. We provide empirical results showing that the
algorithm is able to learn the dynamics of complex systems in a computationally
tractable way.
|
1301.6684 | Comparing Bayesian Network Classifiers | cs.LG cs.AI stat.ML | In this paper, we empirically evaluate algorithms for learning four types of
Bayesian network (BN) classifiers - Naive-Bayes, tree augmented Naive-Bayes, BN
augmented Naive-Bayes and general BNs, where the latter two are learned using
two variants of a conditional-independence (CI) based BN-learning algorithm.
Experimental results show the obtained classifiers, learned using the CI based
algorithms, are competitive with (or superior to) the best known classifiers,
based on both Bayesian networks and other formalisms; and that the
computational time for learning and using these classifiers is relatively
small. Moreover, these results also suggest a way to learn yet more effective
classifiers; we demonstrate empirically that this new algorithm does work as
expected. Collectively, these results argue that BN classifiers deserve more
attention in machine learning and data mining communities.
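As a concrete reference point, the simplest of the four classifiers, Naive-Bayes, can be sketched in plain Python as follows (Laplace smoothing and the toy data in the test are our own illustrative choices, not from the paper):

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(examples):
    """Train a discrete Naive-Bayes classifier.
    examples: list of (feature_tuple, label)."""
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)   # (class, position) -> value counts
    feat_values = defaultdict(set)       # position -> observed values
    for feats, label in examples:
        for j, v in enumerate(feats):
            feat_counts[(label, j)][v] += 1
            feat_values[j].add(v)
    return class_counts, feat_counts, feat_values, len(examples)

def predict_nb(model, feats):
    class_counts, feat_counts, feat_values, n = model
    best, best_score = None, float("-inf")
    for c, cc in class_counts.items():
        score = math.log(cc / n)                    # log prior
        for j, v in enumerate(feats):
            num = feat_counts[(c, j)][v] + 1        # Laplace smoothing
            den = cc + len(feat_values[j])
            score += math.log(num / den)
        if score > best_score:
            best, best_score = c, score
    return best
```

The tree- and BN-augmented variants compared in the paper relax the independence assumption this model makes between features given the class.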
|
1301.6685 | Fast Learning from Sparse Data | cs.LG stat.ML | We describe two techniques that significantly improve the running time of
several standard machine-learning algorithms when data is sparse. The first
technique is an algorithm that efficiently extracts one-way and two-way
counts -- either real or expected -- from discrete data. Extracting such counts is
a fundamental step in learning algorithms for constructing a variety of models
including decision trees, decision graphs, Bayesian networks, and naive-Bayes
clustering models. The second technique is an algorithm that efficiently
performs the E-step of the EM algorithm (i.e. inference) when applied to a
naive-Bayes clustering model. Using real-world data sets, we demonstrate a
dramatic decrease in running time for algorithms that incorporate these
techniques.
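The first technique can be illustrated as follows: when each record stores only the values that differ from a per-variable default, one-way and two-way counts can be accumulated over the explicit entries alone, and the cells involving defaults recovered by subtraction. This is our sketch of the idea (assuming records never store default values explicitly), not the paper's implementation:

```python
from collections import Counter
from itertools import combinations

def sparse_counts(records, variables, defaults):
    """One-way and two-way counts from sparse records.

    Each record is a dict holding only the variables whose value differs
    from that variable's default (defaults are never stored explicitly).
    Cells involving defaults are recovered by subtraction instead of
    scanning every record for every variable.
    """
    n = len(records)
    one = {v: Counter() for v in variables}
    two = {p: Counter() for p in combinations(variables, 2)}
    explicit = Counter()                 # explicit entries per variable
    for rec in records:
        for v, val in rec.items():
            one[v][val] += 1
            explicit[v] += 1
        for a, b in combinations(variables, 2):
            if a in rec or b in rec:     # at least one non-default value
                two[(a, b)][(rec.get(a, defaults[a]),
                             rec.get(b, defaults[b]))] += 1
    for v in variables:                  # default cells by subtraction
        one[v][defaults[v]] += n - explicit[v]
    for p in two:
        two[p][(defaults[p[0]], defaults[p[1]])] += n - sum(two[p].values())
    return one, two
```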
|
1301.6686 | Causal Discovery from a Mixture of Experimental and Observational Data | cs.AI | This paper describes a Bayesian method for combining an arbitrary mixture of
observational and experimental data in order to learn causal Bayesian networks.
Observational data are passively observed. Experimental data, such as that
produced by randomized controlled trials, result from the experimenter
manipulating one or more variables (typically randomly) and observing the
states of other variables. The paper presents a Bayesian method for learning
the causal structure and parameters of the underlying causal process that is
generating the data, given that (1) the data contains a mixture of
observational and experimental case records, and (2) the causal process is
modeled as a causal Bayesian network. This learning method was applied using as
input various mixtures of experimental and observational data that were
generated from the ALARM causal Bayesian network. In these experiments, the
absolute and relative quantities of experimental and observational data were
varied systematically. For each of these training datasets, the learning method
was applied to predict the causal structure and to estimate the causal
parameters that exist among randomly selected pairs of nodes in ALARM that are
not confounded. The paper reports how these structure predictions and parameter
estimates compare with the true causal structures and parameters as given by
the ALARM network.
|
1301.6687 | Loglinear models for first-order probabilistic reasoning | cs.AI | Recent work on loglinear models in probabilistic constraint logic programming
is applied to first-order probabilistic reasoning. Probabilities are defined
directly on the proofs of atomic formulae, and by marginalisation on the atomic
formulae themselves. We use Stochastic Logic Programs (SLPs) composed of
labelled and unlabelled definite clauses to define the proof probabilities. We
have a conservative extension of first-order reasoning, so that, for example,
there is a one-one mapping between logical and random variables. We show how,
in this framework, Inductive Logic Programming (ILP) can be used to induce the
features of a loglinear model from data. We also compare the presented
framework with other approaches to first-order probabilistic reasoning.
|
1301.6688 | Learning Polytrees | cs.AI cs.LG | We consider the task of learning the maximum-likelihood polytree from data.
Our first result is a performance guarantee establishing that the optimal
branching (or Chow-Liu tree), which can be computed very easily, constitutes a
good approximation to the best polytree. We then show that it is not possible
to do very much better, since the learning problem is NP-hard even to
approximately solve within some constant factor.
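The optimal branching mentioned above is a maximum-weight spanning tree over pairwise empirical mutual information. A minimal sketch in plain Python (Kruskal's algorithm with union-find; the dict-of-columns data layout is an illustrative assumption):

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log(n * c / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def chow_liu_tree(data):
    """Maximum-weight spanning tree under pairwise mutual information.
    data: dict variable name -> list of observations (equal lengths)."""
    weights = {(a, b): mutual_information(data[a], data[b])
               for a, b in combinations(sorted(data), 2)}
    parent = {v: v for v in data}
    def find(v):                         # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for a, b in sorted(weights, key=weights.get, reverse=True):
        ra, rb = find(a), find(b)
        if ra != rb:                     # Kruskal: keep edges joining
            parent[ra] = rb              # two distinct components
            tree.append((a, b))
    return tree
```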
|
1301.6689 | A Hybrid Anytime Algorithm for the Construction of Causal Models From
Sparse Data | cs.AI | We present a hybrid constraint-based/Bayesian algorithm for learning causal
networks in the presence of sparse data. The algorithm searches the space of
equivalence classes of models (essential graphs) using a heuristic based on
conventional constraint-based techniques. Each essential graph is then
converted into a directed acyclic graph and scored using a Bayesian scoring
metric. Two variants of the algorithm are developed and tested using data from
randomly generated networks of sizes from 15 to 45 nodes with data sizes
ranging from 250 to 2000 records. Both variations are compared to, and found to
consistently outperform two variations of greedy search with restarts.
|
1301.6690 | Model-Based Bayesian Exploration | cs.AI cs.LG | Reinforcement learning systems are often concerned with balancing exploration
of untested actions against exploitation of actions that are known to be good.
The benefit of exploration can be estimated using the classical notion of Value
of Information - the expected improvement in future decision quality arising
from the information acquired by exploration. Estimating this quantity requires
an assessment of the agent's uncertainty about its current value estimates for
states. In this paper we investigate ways of representing and reasoning about
this uncertainty in algorithms where the system attempts to learn a model of
its environment. We explicitly represent uncertainty about the parameters of
the model and build probability distributions over Q-values based on these.
These distributions are used to compute a myopic approximation to the value of
information for each action and hence to select the action that best balances
exploration and exploitation.
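One simple way to realize such a scheme is to keep a Beta posterior per action over a Bernoulli Q-value and estimate a myopic value of information by sampling. The following is our hedged sketch (note the simplification that every action's VOI is measured against the best current mean estimate), not the paper's exact procedure:

```python
import random

def select_action(posteriors, n_samples=1000, seed=0):
    """Myopic value-of-information action selection (a sketch).

    posteriors: one (alpha, beta) Beta-parameter pair per action,
    encoding uncertainty about that action's Bernoulli Q-value.  The
    VOI of an action is estimated by sampling: the expected amount by
    which its true value exceeds the best current mean estimate.
    """
    rng = random.Random(seed)
    means = [a / (a + b) for a, b in posteriors]
    best = max(means)
    def voi(i):
        a, b = posteriors[i]
        return sum(max(0.0, rng.betavariate(a, b) - best)
                   for _ in range(n_samples)) / n_samples
    scores = [means[i] + voi(i) for i in range(len(posteriors))]
    return max(range(len(posteriors)), key=lambda i: scores[i])
```

With a tight posterior Beta(50, 50) and a wide posterior Beta(2, 2) of equal mean, the wide-posterior action wins: its exploration value is higher.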
|
1301.6691 | Hybrid Probabilistic Programs: Algorithms and Complexity | cs.AI | Hybrid Probabilistic Programs (HPPs) are logic programs that allow the
programmer to explicitly encode his knowledge of the dependencies between
events being described in the program. In this paper, we classify HPPs into
three classes called HPP_1, HPP_2 and HPP_r, r >= 3. For these classes, we provide
three types of results for HPPs. First, we develop algorithms to compute the
set of all ground consequences of an HPP. Then we provide algorithms and
complexity results for the problems of entailment ("Given an HPP P and a query
Q as input, is Q a logical consequence of P?") and consistency ("Given an HPP P
as input, is P consistent?"). Our results provide a fine characterization of
when polynomial algorithms exist for the above problems, and when these
problems become intractable.
|
1301.6692 | Assessing the value of a candidate. Comparing belief function and
possibility theories | cs.AI | The problem of assessing the value of a candidate is viewed here as a
multiple combination problem. On the one hand a candidate can be evaluated
according to different criteria, and on the other hand several experts are
supposed to assess the value of candidates according to each criterion.
Criteria are not equally important, experts are not equally competent or
reliable. Moreover levels of satisfaction of criteria, or levels of confidence
are only assumed to take their values in qualitative scales which are just
linearly ordered. The problem is discussed within two frameworks, the
transferable belief model and the qualitative possibility theory. They
respectively offer a quantitative and a qualitative setting for handling the
problem, providing thus a way to compare the nature of the underlying
assumptions.
|
1301.6694 | Qualitative Models for Decision Under Uncertainty without the
Commensurability Assumption | cs.AI | This paper investigates a purely qualitative version of Savage's theory for
decision making under uncertainty. Until now, most representation theorems for
preference over acts rely on a numerical representation of utility and
uncertainty where utility and uncertainty are commensurate. Disrupting the
tradition, we relax this assumption and introduce a purely ordinal axiom
requiring that the Decision Maker's (DM) preference between two acts only depends
on the relative position of their consequences for each state. Within this
qualitative framework, we determine the only possible form of the decision rule
and investigate some instances compatible with the transitivity of the strict
preference. Finally we propose a mild relaxation of our ordinality axiom,
leaving room for a new family of qualitative decision rules compatible with
transitivity.
|
1301.6695 | Data Analysis with Bayesian Networks: A Bootstrap Approach | cs.LG cs.AI stat.ML | In recent years there has been significant progress in algorithms and methods
for inducing Bayesian networks from data. However, in complex data analysis
problems, we need to go beyond being satisfied with inducing networks with high
scores. We need to provide confidence measures on features of these networks:
Is the existence of an edge between two nodes warranted? Is the Markov blanket
of a given node robust? Can we say something about the ordering of the
variables? We should be able to address these questions, even when the amount
of data is not enough to induce a high scoring network. In this paper we
propose Efron's Bootstrap as a computationally efficient approach for answering
these questions. In addition, we propose to use these confidence measures to
induce better structures from the data, and to detect the presence of latent
variables.
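The core of the approach is easy to state in code: resample the data with replacement and measure how often a feature of interest is induced. Below, a generic bootstrap confidence estimator in plain Python, with sign-of-covariance as a toy stand-in for a structural feature such as "an edge between two nodes is warranted" (the feature and data are illustrative assumptions):

```python
import random

def bootstrap_confidence(data, feature, B=200, seed=0):
    """Non-parametric bootstrap confidence in a feature of the data:
    the fraction of B resamples (drawn with replacement) in which the
    feature predicate holds."""
    rng = random.Random(seed)
    n = len(data)
    hits = sum(1 for _ in range(B)
               if feature([data[rng.randrange(n)] for _ in range(n)]))
    return hits / B

def positive_correlation(sample):
    """Toy structural feature: sample covariance of the two columns > 0."""
    n = len(sample)
    mx = sum(x for x, _ in sample) / n
    my = sum(y for _, y in sample) / n
    return sum((x - mx) * (y - my) for x, y in sample) > 0
```

In the paper's setting, the feature predicate would re-run structure learning on each resample and test, e.g., for the presence of a particular edge.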
|
1301.6696 | Learning Bayesian Network Structure from Massive Datasets: The "Sparse
Candidate" Algorithm | cs.LG cs.AI stat.ML | Learning Bayesian networks is often cast as an optimization problem, where
the computational task is to find a structure that maximizes a statistically
motivated score. By and large, existing learning tools address this
optimization problem using standard heuristic search techniques. Since the
search space is extremely large, such search procedures can spend most of the
time examining candidates that are extremely unreasonable. This problem becomes
critical when we deal with data sets that are large either in the number of
instances, or the number of attributes. In this paper, we introduce an
algorithm that achieves faster learning by restricting the search space. This
iterative algorithm restricts the parents of each variable to belong to a small
subset of candidates. We then search for a network that satisfies these
constraints. The learned network is then used for selecting better candidates
for the next iteration. We evaluate this algorithm both on synthetic and
real-life data. Our results show that it is significantly faster than
alternative search procedures without loss of quality in the learned
structures.
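The candidate-restriction step can be sketched as follows: score every pair of variables by empirical mutual information and keep, for each variable, the k highest-scoring others as candidate parents. This is a simplified sketch of the first iteration only; the algorithm described above re-selects candidates using the learned network:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information between two discrete samples."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log(n * c / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def candidate_parents(data, k):
    """Restrict each variable's candidate parents to the k other
    variables with highest pairwise mutual information.
    data: dict variable name -> list of observations (equal lengths)."""
    names = sorted(data)
    return {v: sorted((u for u in names if u != v),
                      key=lambda u: mutual_information(data[u], data[v]),
                      reverse=True)[:k]
            for v in names}
```

Structure search then only considers parent sets drawn from these small candidate lists, which is what shrinks the search space.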
|
1301.6697 | Parameter Priors for Directed Acyclic Graphical Models and the
Characterization of Several Probability Distributions | cs.LG stat.ML | We show that the only parameter prior for complete Gaussian DAG models that
satisfies global parameter independence, complete model equivalence, and some
weak regularity assumptions, is the normal-Wishart distribution. Our analysis
is based on the following new characterization of the Wishart distribution: let
W be an n x n, n >= 3, positive-definite symmetric matrix of random variables
and f(W) be a pdf of W. Then, f(W) is a Wishart distribution if and only if
W_{11}-W_{12}W_{22}^{-1}W_{12}' is independent of {W_{12}, W_{22}} for every
block partitioning W_{11}, W_{12}, W_{12}', W_{22} of W. Similar
characterizations of the normal and normal-Wishart distributions are provided
as well. We also show how to construct a prior for every DAG model over X from
the prior of a single regression model.
|
1301.6698 | Quantifier Elimination for Statistical Problems | cs.AI cs.LO | Recent improvements on Tarski's procedure for quantifier elimination in the
first order theory of real numbers makes it feasible to solve small instances
of the following problems completely automatically: 1. Listing all equality and
inequality constraints implied by a graphical model with hidden variables. 2.
Comparing graphical models with hidden variables (i.e., model equivalence,
inclusion, and overlap). 3. Answering questions about the identification of a
model or portion of a model, and about bounds on quantities derived from a
model. 4. Determining whether a given set of independence assertions holds. We discuss
the foundation of quantifier elimination and demonstrate its application to
these problems.
|
1301.6699 | On Transformations between Probability and Spohnian Disbelief Functions | cs.AI | In this paper, we analyze the relationship between probability and Spohn's
theory for representation of uncertain beliefs. Using the intuitive idea that
the more probable a proposition is, the more believable it is, we study
transformations from probability to Spohnian disbelief and vice-versa. The
transformations described in this paper are different from those described in
the literature. In particular, the former satisfies the principles of ordinal
congruence while the latter does not. Such transformations between probability
and Spohn's calculi can contribute to (1) a clarification of the semantics of
nonprobabilistic degree of uncertain belief, and (2) a construction of a
decision theory for such calculi. In practice, the transformations will allow a
meaningful combination of more than one calculus in different stages of using
an expert system such as knowledge acquisition, inference, and interpretation
of results.
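As a concrete illustration, an order-of-magnitude mapping between probability and Spohnian disbelief degrees can be sketched as below. This is a generic textbook-style transformation, not the specific ordinally congruent transformation the paper proposes; the function names and the parameter `eps` are illustrative.

```python
import math

def prob_to_disbelief(p, eps=0.1):
    """Map a probability to an integer Spohnian disbelief degree.

    Illustrative order-of-magnitude mapping kappa = floor(log_eps p):
    more probable propositions receive lower disbelief, matching the
    intuition quoted in the abstract. Not the paper's transformation.
    """
    if p <= 0:
        return math.inf  # impossible propositions are maximally disbelieved
    return math.floor(math.log(p) / math.log(eps))

def disbelief_to_prob(kappa, eps=0.1):
    """Crude inverse: take eps**kappa as a representative probability."""
    return eps ** kappa
```

Note that the round trip is lossy (many probabilities collapse onto one disbelief degree), which is one reason principled transformations between the two calculi are non-trivial.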
|
1301.6700 | A New Model of Plan Recognition | cs.AI | We present a new abductive, probabilistic theory of plan recognition. This
model differs from previous plan recognition theories in being centered around
a model of plan execution: most previous methods have been based on plans as
formal objects or on rules describing the recognition process. We show that our
new model accounts for phenomena omitted from most previous plan recognition
theories: notably the cumulative effect of a sequence of observations of
partially-ordered, interleaved plans and the effect of context on plan
adoption. The model also supports inferences about the evolution of plan
execution in situations where another agent intervenes in plan execution. This
facility provides support for using plan recognition to build systems that will
intelligently assist a user.
|
1301.6701 | Multi-objects association in perception of dynamical situation | cs.AI cs.CV | In current perception systems applied to the rebuilding of the environment
for intelligent vehicles, the part reserved to object association for the
tracking is increasingly significant. This allows firstly to follow the objects
temporal evolution and secondly to increase the reliability of environment
perception. We propose in this communication the development of a multi-objects
association algorithm with ambiguity removal entering into the design of such a
dynamic perception system for intelligent vehicles. This algorithm uses the
belief theory and data modelling with fuzzy mathematics in order to be able to
handle inaccurate as well as uncertain information due to imperfect sensors.
These theories also allow the fusion of numerical as well as symbolic data. We
develop in this article the problem of matching between known and perceived
objects. This makes it possible to update a dynamic environment map for a
vehicle. The belief theory will enable us to quantify the belief in the
association of each perceived object with each known object. Conflicts can
appear in the case of object appearance or disappearance, or in the case of a
confused situation or bad perception. These conflicts are removed or solved
using an assignment algorithm, giving a solution called the " best " and so
ensuring the tracking of some objects present in our environment.
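The belief-theory machinery the abstract leans on can be illustrated with Dempster's rule of combination, which fuses two mass functions and exposes the conflict mass that drives the appearance/disappearance handling described above. This is a generic sketch of mass-function fusion, not the authors' association algorithm:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset -> mass) with
    Dempster's rule, normalising away the mass that falls on the
    empty set (the conflict between the two sources)."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to incompatible hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}
```

For example, fusing one sensor that mostly believes a perceived object is `o1` with another that weakly believes it is `o2` yields a normalised mass function whose residual conflict (here 0.18 before normalisation) is exactly the quantity an assignment algorithm can use to flag ambiguous associations.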
|
1301.6702 | A Hybrid Approach to Reasoning with Partially Elicited Preference Models | cs.AI | Classical Decision Theory provides a normative framework for representing and
reasoning about complex preferences. Straightforward application of this theory
to automate decision making is difficult due to high elicitation cost. In
response to this problem, researchers have recently developed a number of
qualitative, logic-oriented approaches for representing and reasoning about
preferences. While effectively addressing some expressiveness issues, these
logics have not proven powerful enough for building practical automated
decision making systems. In this paper we present a hybrid approach to
preference elicitation and decision making that is grounded in classical
multi-attribute utility theory, but can make effective use of the expressive
power of qualitative approaches. Specifically, assuming a partially specified
multilinear utility function, we show how comparative statements about classes
of decision alternatives can be used to further constrain the utility function
and thus identify suboptimal alternatives. This work demonstrates that
quantitative and qualitative approaches can be synergistically integrated to
provide effective and flexible decision support.
|
1301.6703 | Faithful Approximations of Belief Functions | cs.AI | A conceptual foundation for approximation of belief functions is proposed and
investigated. It is based on the requirements of consistency and closeness. An
optimal approximation is studied. Unfortunately, the computation of the optimal
approximation turns out to be intractable. Hence, various heuristic methods are
proposed and experimentally evaluated both in terms of their accuracy and in
terms of the speed of computation. These methods are compared to the earlier
proposed approximations of belief functions.
|
1301.6704 | SPUDD: Stochastic Planning using Decision Diagrams | cs.AI | Markov decision processes (MDPs) are becoming increasingly popular as models
of decision theoretic planning. While traditional dynamic programming methods
perform well for problems with small state spaces, structured methods are
needed for large problems. We propose and examine a value iteration algorithm
for MDPs that uses algebraic decision diagrams (ADDs) to represent value
functions and policies. An MDP is represented using Bayesian networks and ADDs
and dynamic programming is applied directly to these ADDs. We demonstrate our
method on large MDPs (up to 63 million states) and show that significant gains
can be had when compared to tree-structured representations (with up to a
thirty-fold reduction in the number of nodes required to represent optimal
value functions).
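The Bellman backup at the core of SPUDD can be sketched in tabular form; the ADD representation changes how the value function is stored (sharing identical sub-values), not the update itself. A minimal tabular sketch with illustrative names, not the authors' code:

```python
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-6):
    """Tabular value iteration for a small MDP.

    P[s][a] is a dict {s_next: prob}; R[s][a] is the immediate reward.
    SPUDD performs this same backup but represents V, P and R as
    algebraic decision diagrams over state variables, which is what
    lets it scale to the 63-million-state problems in the abstract.
    """
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(
                R[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                for a in actions
            )
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new
```

On a toy two-state chain where only state `s1` pays reward 1, this converges to V(s1) = 1/(1-gamma) = 10 and V(s0) = gamma * 10 = 9.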
|
1301.6705 | Probabilistic Latent Semantic Analysis | cs.LG cs.IR stat.ML | Probabilistic Latent Semantic Analysis is a novel statistical technique for
the analysis of two-mode and co-occurrence data, which has applications in
information retrieval and filtering, natural language processing, machine
learning from text, and in related areas. Compared to standard Latent Semantic
Analysis which stems from linear algebra and performs a Singular Value
Decomposition of co-occurrence tables, the proposed method is based on a
mixture decomposition derived from a latent class model. This results in a more
principled approach which has a solid foundation in statistics. In order to
avoid overfitting, we propose a widely applicable generalization of maximum
likelihood model fitting by tempered EM. Our approach yields substantial and
consistent improvements over Latent Semantic Analysis in a number of
experiments.
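A minimal sketch of the tempered-EM fit described above, using the asymmetric parameterization P(z|d)P(w|z). The tempering exponent `beta` flattens the E-step posteriors (beta = 1 recovers plain EM); the paper additionally anneals beta on held-out data, which this didactic version omits:

```python
import numpy as np

def plsa_tem(counts, n_topics, beta=0.8, n_iter=50, seed=0):
    """Fit pLSA to a document-word count matrix by tempered EM.

    Illustrative sketch: random init, fixed beta, fixed iteration
    count. counts has shape (n_docs, n_words).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics))      # P(z|d)
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))     # P(w|z)
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: tempered posterior q(z|d,w), shape (d, w, z)
        joint = (p_z_d[:, None, :] * p_w_z.T[None, :, :]) ** beta
        q = joint / (joint.sum(axis=2, keepdims=True) + 1e-12)
        # M-step: re-estimate both factors from expected counts
        nq = counts[:, :, None] * q             # n(d,w) * q(z|d,w)
        p_w_z = nq.sum(axis=0).T                # shape (z, w)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = nq.sum(axis=1)                  # shape (d, z)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

Unlike the SVD of standard LSA, both returned factors are proper conditional distributions, which is the "more principled" statistical footing the abstract refers to.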
|
1301.6706 | Estimating the Value of Computation in Flexible Information Refinement | cs.AI | We outline a method to estimate the value of computation for a flexible
algorithm using empirical data. To determine a reasonable trade-off between
cost and value, we build an empirical model of the value obtained through
computation, and apply this model to estimate the value of computation for
quite different problems. In particular, we investigate this trade-off for the
problem of constructing policies for decision problems represented as influence
diagrams. We show how two features of our anytime algorithm provide reasonable
estimates of the value of computation in this domain.
|