| id | title | categories | abstract |
|---|---|---|---|
0810.2434
|
Faster and better: a machine learning approach to corner detection
|
cs.CV cs.LG
|
The repeatability and efficiency of a corner detector determines how likely
it is to be useful in a real-world application. The repeatability is important
because the same scene viewed from different positions should yield features
which correspond to the same real-world 3D locations [Schmid et al 2000]. The
efficiency is important because this determines whether the detector combined
with further processing can operate at frame rate.
Three advances are described in this paper. First, we present a new heuristic
for feature detection, and using machine learning we derive a feature detector
from this which can fully process live PAL video using less than 5% of the
available processing time. By comparison, most other detectors cannot even
operate at frame rate (Harris detector 115%, SIFT 195%). Second, we generalize
the detector, allowing it to be optimized for repeatability, with little loss
of efficiency. Third, we carry out a rigorous comparison of corner detectors
based on the above repeatability criterion applied to 3D scenes. We show that
despite being principally constructed for speed, on these stringent tests, our
heuristic detector significantly outperforms existing feature detectors.
Finally, the comparison demonstrates that using machine learning produces
significant improvements in repeatability, yielding a detector that is both
very fast and very high quality.
|
0810.2513
|
The Impact of Mobility on Gossip Algorithms
|
cs.NI cs.DC cs.IT math.IT
|
The influence of node mobility on the convergence time of averaging gossip
algorithms in networks is studied. It is shown that a small number of fully
mobile nodes can yield a significant decrease in convergence time. A method is
developed for deriving lower bounds on the convergence time by merging nodes
according to their mobility pattern. This method is used to show that if the
agents have one-dimensional mobility in the same direction, the convergence time
is improved by at most a constant. Upper bounds are obtained on the convergence
time using techniques from the theory of Markov chains and show that simple
models of mobility can dramatically accelerate gossip as long as the mobility
paths significantly overlap. Simulations verify that different mobility
patterns can have significantly different effects on the convergence of
distributed algorithms.
|
0810.2529
|
On the Throughput Maximization in Decentralized Wireless Networks
|
cs.IT math.IT
|
A distributed single-hop wireless network with $K$ links is considered, where
the links are partitioned into a fixed number ($M$) of clusters each operating
in a subchannel with bandwidth $\frac{W}{M}$. The subchannels are assumed to be
orthogonal to each other. A general shadow-fading model, described by
parameters $(\alpha,\varpi)$, is considered where $\alpha$ denotes the
probability of shadowing and $\varpi$ ($\varpi \leq 1$) represents the average
cross-link gains. The main goal of this paper is to find the maximum network
throughput in the asymptotic regime of $K \to \infty$, which is achieved by: i)
proposing a distributed and non-iterative power allocation strategy, where the
objective of each user is to maximize its best estimate (based on its local
information, i.e., direct channel gain) of the average network throughput, and
ii) choosing the optimum value for $M$. In the first part of the paper, the
network throughput is defined as the \textit{average sum-rate} of the network,
which is shown to scale as $\Theta (\log K)$. Moreover, it is proved that in
the strong interference scenario, the optimum power allocation strategy for
each user is a threshold-based on-off scheme. In the second part, the network
throughput is defined as the \textit{guaranteed sum-rate}, when the outage
probability approaches zero. In this scenario, it is demonstrated that the
on-off power allocation scheme maximizes the throughput, which scales as
$\frac{W}{\alpha \varpi} \log K$. Moreover, the optimum spectrum sharing for
maximizing the average sum-rate and the guaranteed sum-rate is achieved at $M=1$.
|
0810.2598
|
New avenue to the Parton Distribution Functions: Self-Organizing Maps
|
hep-ph cs.CE
|
Neural network algorithms have been recently applied to construct Parton
Distribution Function (PDF) parametrizations which provide an alternative to
standard global fitting procedures. We propose a technique based on an
interactive neural network algorithm using Self-Organizing Maps (SOMs). SOMs
are a class of clustering algorithms based on competitive learning among
spatially-ordered neurons. Our SOMs are trained on selections of stochastically
generated PDF samples. The selection criterion for every optimization iteration
is based on the features of the clustered PDFs. Our main goal is to provide a
fitting procedure that, at variance with the standard neural network
approaches, allows for an increased control of the systematic bias by enabling
user interaction in the various stages of the process.
|
0810.2653
|
On combinations of local theory extensions
|
cs.LO cs.AI
|
In this paper we study possibilities of efficient reasoning in combinations
of theories over possibly non-disjoint signatures. We first present a class of
theory extensions (called local extensions) in which hierarchical reasoning is
possible, and give several examples from computer science and mathematics in
which such extensions occur in a natural way. We then identify situations in
which combinations of local extensions of a theory are again local extensions
of that theory. We thus obtain criteria both for recognizing wider classes of
local theory extensions, and for modular reasoning in combinations of theories
over non-disjoint signatures.
|
0810.2665
|
Path Planner for Objects, Robots and Mannequins by Multi-Agents Systems
or Motion Captures
|
cs.RO
|
In order to optimise the cost and time of designing new products while
improving their quality, concurrent engineering is based on the digital model
of these products. However, in order to dispense with physical models
definitively and without loss of information, new tools must be available. In
particular, a tool making it possible to check simply and quickly the
maintainability of complex mechanical assemblies using the numerical model is
necessary. For the past decade, the MCM team of IRCCyN has worked on the
creation of tools for the generation and analysis of trajectories of virtual
mannequins. The simulation of human tasks can be carried out either by
robot-like simulation or by simulation based on motion capture. This paper
presents some results on both methods. The first method is based on a
multi-agent system and on digital mock-up technology, and provides an
efficient path planner for a manikin or a robot for access and visibility
tasks, taking into account ergonomic constraints and joint limits. The human
operator is integrated in the optimisation process to contribute to a global
perception of the environment. This operator cooperates, in real time, with
several automatic local elementary agents. In the second method, we worked
with the CEA and EADS/CCR to solve the constraints related to the motion of a
virtual human in its environment on the basis of data from a motion capture
system. An approach using virtual guides was developed to allow the user to
follow precise trajectories in the absence of force feedback.
|
0810.2666
|
A Vision-based Computed Torque Control for Parallel Kinematic Machines
|
cs.RO
|
In this paper, a novel approach for parallel kinematic machine control
relying on a fast exteroceptive measure is implemented and validated on the
Orthoglide robot. This approach begins by rewriting the robot models as a
function of the end-effector pose only. It is shown that such an operation
reduces the model complexity. Then, this approach uses a classical Cartesian
space computed torque control with a fast exteroceptive measure, reducing the
control scheme's complexity. Simulation results are given to show the expected
performance improvements and experiments prove the practical feasibility of the
approach.
|
0810.2746
|
Finite-SNR Diversity-Multiplexing Tradeoff and Optimum Power Allocation
in Bidirectional Cooperative Networks
|
cs.IT math.IT
|
This paper focuses on analog network coding (ANC) and time division
broadcasting (TDBC) which are two major protocols used in bidirectional
cooperative networks. Lower bounds of the outage probabilities of those two
protocols are derived first. Those lower bounds are extremely tight in the
whole signal-to-noise ratio (SNR) range irrespective of the values of channel
variances. Based on those lower bounds, finite-SNR diversity-multiplexing
tradeoffs of the ANC and TDBC protocols are obtained. Secondly, we investigate
how to efficiently use channel state information (CSI) in those two protocols.
Specifically, an optimum power allocation scheme is proposed for the ANC
protocol. It simultaneously minimizes the outage probability and maximizes the
total mutual information of this protocol. For the TDBC protocol, an optimum
method to combine the received signals at the relay terminal is developed under
an equal power allocation assumption. This method minimizes the outage
probability and maximizes the total mutual information of the TDBC protocol at
the same time.
|
0810.2764
|
A Simple Linear Ranking Algorithm Using Query Dependent Intercept
Variables
|
cs.IR cs.LG
|
The LETOR website contains three information retrieval datasets used as a
benchmark for testing machine learning ideas for ranking. Algorithms
participating in the challenge are required to assign score values to search
results for a collection of queries, and are measured using standard IR ranking
measures (NDCG, precision, MAP) that depend only on the relative score-induced
order of the results. Similarly to many of the ideas proposed in the
participating algorithms, we train a linear classifier. In contrast with other
participating algorithms, we define an additional free variable (intercept, or
benchmark) for each query. This allows expressing the fact that results for
different queries are incomparable for the purpose of determining relevance.
The cost of this idea is the addition of relatively few nuisance parameters.
Our approach is simple, and we used a standard logistic regression library to
test it. The results beat the reported participating algorithms. Hence, it
seems promising to combine our approach with other more complex ideas.
|
0810.2781
|
Linear Time Encoding of LDPC Codes
|
cs.IT math.IT
|
In this paper, we propose a linear complexity encoding method for arbitrary
LDPC codes. We start from a simple graph-based encoding method
``label-and-decide.'' We prove that the ``label-and-decide'' method is
applicable to Tanner graphs with a hierarchical structure--pseudo-trees--and
that the resulting encoding complexity is linear in the code block length.
Next, we define a second type of Tanner graphs--the encoding stopping set. The
encoding stopping set is encoded in linear complexity by a revised
label-and-decide algorithm--the ``label-decide-recompute.'' Finally, we prove
that any Tanner graph can be partitioned into encoding stopping sets and
pseudo-trees. By encoding each encoding stopping set or pseudo-tree
sequentially, we develop a linear complexity encoding method for general LDPC
codes where the encoding complexity is proved to be less than $4 \cdot M \cdot
(\overline{k} - 1)$, where $M$ is the number of independent rows in the parity
check matrix and $\overline{k}$ represents the mean row weight of the parity
check matrix.
|
0810.2861
|
A comparison of the notions of optimality in soft constraints and
graphical games
|
cs.AI cs.GT
|
The notion of optimality naturally arises in many areas of applied
mathematics and computer science concerned with decision making. Here we
consider this notion in the context of two formalisms used for different
purposes and in different research areas: graphical games and soft constraints.
We relate the notion of optimality used in the area of soft constraint
satisfaction problems (SCSPs) to that used in graphical games, showing that for
a large class of SCSPs that includes weighted constraints every optimal
solution corresponds to a Nash equilibrium that is also a Pareto efficient
joint strategy.
|
0810.2924
|
BER and Outage Probability Approximations for LMMSE Detectors on
Correlated MIMO Channels
|
cs.IT math.IT
|
This paper is devoted to the study of the performance of the Linear Minimum
Mean-Square Error receiver for (receive) correlated Multiple-Input
Multiple-Output systems. By the random matrix theory, it is well-known that the
Signal-to-Noise Ratio (SNR) at the output of this receiver behaves
asymptotically like a Gaussian random variable as the number of receive and
transmit antennas converge to +$\infty$ at the same rate. However, since this
approximation is inaccurate for estimating some performance metrics
such as the Bit Error Rate and the outage probability, especially for small
system dimensions, Li et al. convincingly proposed to assume that the SNR
follows a generalized Gamma distribution whose parameters are tuned by
computing the first three asymptotic moments of the SNR. In this article, this
technique is generalized to (receive) correlated channels, and closed-form
expressions for the first three asymptotic moments of the SNR are provided. To
obtain these results, a random matrix theory technique adapted to matrices with
Gaussian elements is used. This technique is believed to be simple, efficient,
and of broad interest in wireless communications. Simulations are provided, and
show that the proposed technique yields in general a good accuracy, even for
small system dimensions.
|
0810.2953
|
On Power Control and Frequency Reuse in the Two User Cognitive Channel
|
cs.IT math.IT
|
This paper considers the generalized cognitive radio channel where the
secondary user is allowed to reuse the frequency during both the idle and
active periods of the primary user, as long as the primary rate remains the
same. In this setting, the optimal power allocation policy with single-input
single-output (SISO) primary and secondary channels is explored. Interestingly,
the offered gain resulting from the frequency reuse during the active periods
of the spectrum is shown to disappear in both the low and high signal-to-noise
ratio (SNR) regimes. We then argue that this drawback in the high SNR region
can be avoided by equipping both the primary and secondary transmitters with
multiple antennas. Finally, the scenario consisting of SISO primary and
multi-input multi-output (MIMO) secondary channels is investigated. Here, a
simple Zero-Forcing approach is shown to significantly outperform the
celebrated Decoding-Forwarding-Dirty Paper Coding strategy (especially in the
high SNR regime).
|
0810.3076
|
Combining Semantic Wikis and Controlled Natural Language
|
cs.HC cs.AI
|
We demonstrate AceWiki, a semantic wiki using the controlled natural
language Attempto Controlled English (ACE). The goal is to enable easy creation
and modification of ontologies through the web. Texts in ACE can automatically
be translated into first-order logic and other languages, for example OWL.
Previous evaluation showed that ordinary people are able to use AceWiki without
being instructed.
|
0810.3125
|
On the Vocabulary of Grammar-Based Codes and the Logical Consistency of
Texts
|
cs.IT cs.CL math.IT
|
The article presents a new interpretation for Zipf-Mandelbrot's law in
natural language which rests on two areas of information theory. Firstly, we
construct a new class of grammar-based codes and, secondly, we investigate
properties of strongly nonergodic stationary processes. The motivation for the
joint discussion is to prove a proposition with a simple informal statement: If
a text of length $n$ describes $n^\beta$ independent facts in a repetitive way
then the text contains at least $n^\beta/\log n$ different words, under
suitable conditions on $n$. In the formal statement, two modeling postulates
are adopted. Firstly, the words are understood as nonterminal symbols of the
shortest grammar-based encoding of the text. Secondly, the text is assumed to
be emitted by a finite-energy strongly nonergodic source whereas the facts are
binary IID variables predictable in a shift-invariant way.
|
0810.3136
|
On the Complexity of Core, Kernel, and Bargaining Set
|
cs.GT cs.AI cs.CC
|
Coalitional games are mathematical models suited to analyze scenarios where
players can collaborate by forming coalitions in order to obtain higher worths
than by acting in isolation. A fundamental problem for coalitional games is to
single out the most desirable outcomes in terms of appropriate notions of worth
distributions, which are usually called solution concepts. Motivated by the
fact that decisions taken by realistic players cannot involve unbounded
resources, recent computer science literature reconsidered the definition of
such concepts by advocating the relevance of assessing the amount of resources
needed for their computation in terms of their computational complexity. By
following this avenue of research, the paper provides a complete picture of the
complexity issues arising with three prominent solution concepts for
coalitional games with transferable utility, namely, the core, the kernel, and
the bargaining set, whenever the game worth-function is represented in some
reasonable compact form (otherwise, if the worths of all coalitions are
explicitly listed, the input sizes are so large that complexity problems
are---artificially---trivial). The starting investigation point is the setting
of graph games, about which various open questions were stated in the
literature. The paper gives an answer to these questions, and in addition
provides new insights on the setting, by characterizing the computational
complexity of the three concepts in some relevant generalizations and
specializations.
|
0810.3226
|
Optimal Transmission Strategy and Explicit Capacity Region for Broadcast
Z Channels
|
cs.IT math.IT
|
This paper provides an explicit expression for the capacity region of the
two-user broadcast Z channel and proves that the optimal boundary can be
achieved by independent encoding of each user. Specifically, the information
messages corresponding to each user are encoded independently and the OR of
these two encoded streams is transmitted. Nonlinear turbo codes that provide a
controlled distribution of ones and zeros are used to demonstrate a
low-complexity scheme that operates close to the optimal boundary.
|
0810.3227
|
Dynamic Approaches to In-Network Aggregation
|
cs.DC cs.DB cs.DS
|
Collaboration between small-scale wireless devices hinges on their ability to
infer properties shared across multiple nearby nodes. Wireless-enabled mobile
devices in particular create a highly dynamic environment not conducive to
distributed reasoning about such global properties. This paper addresses a
specific instance of this problem: distributed aggregation. We present
extensions to existing unstructured aggregation protocols that enable
estimation of count, sum, and average aggregates in highly dynamic
environments. With the modified protocols, devices with only limited
connectivity can maintain estimates of the aggregate, despite
\textit{unexpected} peer departures and arrivals. Our analysis of these
aggregate maintenance extensions demonstrates their effectiveness in
unstructured environments despite high levels of node mobility.
|
0810.3283
|
Quantum robot: structure, algorithms and applications
|
cs.RO cs.AI quant-ph
|
This paper has been withdrawn.
|
0810.3294
|
A static theory of promises
|
cs.MA cs.SE
|
We discuss the concept of promises within a framework that can be applied
to either humans or technology. We compare promises to the more established
notion of obligations and find promises to be both simpler and more effective
at reducing uncertainty in behavioural outcomes.
|
0810.3356
|
The Fundamental Problem with the Building Block Hypothesis
|
cs.NE
|
Skepticism of the building block hypothesis (BBH) has previously been
expressed on account of the weak theoretical foundations of this hypothesis and
the anomalies in the empirical record of the simple genetic algorithm. In this
paper we home in on a more fundamental cause for skepticism--the extraordinary
strength of some of the assumptions that undergird the BBH. Specifically, we
focus on assumptions made about the distribution of fitness over the genome
set, and argue that these assumptions are unacceptably strong. As most of these
assumptions have been embraced by the designers of so-called "competent"
genetic algorithms, our critique is relevant to an appraisal of such algorithms
as well.
|
0810.3357
|
Two Remarkable Computational Competencies of the Simple Genetic
Algorithm
|
cs.NE
|
Since the inception of genetic algorithmics the identification of
computational efficiencies of the simple genetic algorithm (SGA) has been an
important goal. In this paper we distinguish between a computational competency
of the SGA--an efficient, but narrow computational ability--and a computational
proficiency of the SGA--a computational ability that is both efficient and
broad. To date, attempts to deduce a computational proficiency of the SGA
have been unsuccessful. It may, however, be possible to inductively infer a
computational proficiency of the SGA from a set of related computational
competencies that have been deduced. With this in mind we deduce two
computational competencies of the SGA. These competencies, when considered
together, point toward a remarkable computational proficiency of the SGA. This
proficiency is pertinent to a general problem that is closely related to a
well-known statistical problem at the cutting edge of computational genetics.
|
0810.3416
|
Text as Statistical Mechanics Object
|
cs.CL physics.soc-ph
|
In this article we present a model of human written text based on a
statistical mechanics approach, deriving the potential energy for different
parts of the text using a large text corpus. We have checked the results numerically and found
that the specific heat parameter effectively separates the closed class words
from the specific terms used in the text.
|
0810.3418
|
Detecting the Most Unusual Part of a Digital Image
|
cs.CV cs.GR
|
The purpose of this paper is to introduce an algorithm that can detect the
most unusual part of a digital image. The most unusual part of a given shape is
defined as a part of the image that has the maximal distance to all
non-intersecting shapes of the same form.
The method can be used to scan image databases with no clear model of the
interesting part or large image databases, as for example medical databases.
|
0810.3422
|
Coding Theorems for Repeat Multiple Accumulate Codes
|
cs.IT math.IT
|
In this paper the ensemble of codes formed by a serial concatenation of a
repetition code with multiple accumulators connected through random
interleavers is considered. Based on finite length weight enumerators for these
codes, asymptotic expressions for the minimum distance and an arbitrary number
of accumulators larger than one are derived using the uniform interleaver
approach. In accordance with earlier results in the literature, it is first
shown that the minimum distance of repeat-accumulate codes can grow, at best,
sublinearly with block length. Then, for repeat-accumulate-accumulate codes and
rates of 1/3 or less, it is proved that these codes exhibit asymptotically
linear distance growth with block length, where the gap to the
Gilbert-Varshamov bound can be made vanishingly small by increasing the number
of accumulators beyond two. In order to address larger rates, random puncturing
of a low-rate mother code is introduced. It is shown that in this case the
resulting ensemble of repeat-accumulate-accumulate codes asymptotically
achieves linear distance growth close to the Gilbert-Varshamov bound. This
holds even for very high rate codes.
|
0810.3442
|
Language structure in the n-object naming game
|
cs.CL cs.MA physics.soc-ph
|
We examine a naming game with two agents trying to establish a common
vocabulary for n objects. Such efforts lead to the emergence of a language that
allows for efficient communication and exhibits some degree of homonymy and
synonymy. Although homonymy reduces the communication efficiency, it seems to
be a dynamical trap that persists for a long, and perhaps indefinite, time. On
the other hand, synonymy does not reduce the efficiency of communication, but
appears to be only a transient feature of the language. Thus, in our model the
role of synonymy decreases and in the long-time limit it becomes negligible. A
similar rarity of synonymy is observed in present-day natural languages. The role
of noise, which distorts the communicated words, is also examined. Although, in
general, the noise reduces the communication efficiency, it also regroups the
words so that they are more evenly distributed within the available "verbal"
space.
|
0810.3451
|
The many faces of optimism - Extended version
|
cs.AI cs.CC cs.LG
|
The exploration-exploitation dilemma has been an intriguing and unsolved
problem within the framework of reinforcement learning. "Optimism in the face
of uncertainty" and model building play central roles in advanced exploration
methods. Here, we integrate several concepts and obtain a fast and simple
algorithm. We show that the proposed algorithm finds a near-optimal policy in
polynomial time, and give experimental evidence that it is robust and efficient
compared to its ascendants.
|
0810.3474
|
Social Learning Methods in Board Games
|
cs.AI cs.MA
|
This paper discusses the effects of social learning in training of game
playing agents. The training of agents in a social context instead of a
self-play environment is investigated. Agents that use the reinforcement
learning algorithms are trained in social settings. This mimics the way in
which players of board games such as Scrabble and chess mentor each other in
their clubs. A Round Robin tournament and a modified Swiss tournament setting
are used for the training. The agents trained in social settings are compared
to self-play agents, and the results indicate that more robust agents emerge
from the social training setting. Games with larger state spaces can benefit
from such settings, as a diverse set of agents will have multiple strategies,
increasing the chances of obtaining more experienced players at the end of
training. The socially trained agents exhibit better play than self-play
agents. The modified Swiss playing style spawns a larger number of
better-playing agents as the population size increases.
|
0810.3484
|
A Study of NK Landscapes' Basins and Local Optima Networks
|
cs.NE
|
We propose a network characterization of combinatorial fitness landscapes by
adapting the notion of inherent networks proposed for energy surfaces (Doye,
2002). We use the well-known family of $NK$ landscapes as an example. In our
case the inherent network is the graph where the vertices are all the local
maxima and edges mean basin adjacency between two maxima. We exhaustively
extract such networks on representative small NK landscape instances, and show
that they are 'small-worlds'. However, the maxima graphs are not random, since
their clustering coefficients are much larger than those of corresponding
random graphs. Furthermore, the degree distributions are close to exponential
instead of Poissonian. We also describe the nature of the basins of attraction
and their relationship with the local maxima network.
|
0810.3492
|
The Connectivity of NK Landscapes' Basins: A Network Analysis
|
cs.NE
|
We propose a network characterization of combinatorial fitness landscapes by
adapting the notion of inherent networks proposed for energy surfaces. We use
the well-known family of NK landscapes as an example. In our case the inherent
network is the graph where the vertices represent the local maxima in the
landscape, and the edges account for the transition probabilities between their
corresponding basins of attraction. We exhaustively extracted such networks on
representative small NK landscape instances, and performed a statistical
characterization of their properties. We found that most of these network
properties can be related to the search difficulty on the underlying NK
landscapes with varying values of K.
|
0810.3525
|
The use of entropy to measure structural diversity
|
cs.LG cs.AI q-bio.QM
|
In this paper entropy-based methods are compared and used to measure the
structural diversity of an ensemble of 21 classifiers. Such measures are mostly
applied in ecology, where species counts are used as a measure of diversity.
The measures used were the Shannon entropy and the Simpson and Berger-Parker
diversity indices. As the diversity indices increased, so did the accuracy of
the ensemble. An ensemble dominated by classifiers with the same structure
produced poor accuracy. The uncertainty rule from information theory was also
used to further define diversity. Genetic algorithms were used to find the optimal
ensemble by using the diversity indices as the cost function. The method of
voting was used to aggregate the decisions.
|
0810.3564
|
The Poisson Channel at Low Input Powers
|
cs.IT math.IT
|
The asymptotic capacity at low input powers of an average-power limited or an
average- and peak-power limited discrete-time Poisson channel is considered.
For a Poisson channel whose dark current is zero or decays to zero linearly
with its average input power $E$, capacity scales like $E\log\frac{1}{E}$ for
small $E$. For a Poisson channel whose dark current is a nonzero constant,
capacity scales, to within a constant, like $E\log\log\frac{1}{E}$ for small
$E$.
|
0810.3579
|
Hierarchical Bag of Paths for Kernel Based Shape Classification
|
cs.CV
|
Graph kernel methods are based on an implicit embedding of graphs within a
vector space of large dimension. This implicit embedding makes it possible to
apply to graphs methods that were until recently reserved solely for numerical data.
Within the shape classification framework, graphs are often produced by a
skeletonization step which is sensitive to noise. We propose in this paper to
integrate robustness to structural noise by using a kernel based on a bag
of paths, where each path is associated with a hierarchy encoding successive
simplifications of the path. Several experiments demonstrate the robustness and the
flexibility of our approach compared to alternative shape classification
methods.
|
0810.3605
|
A Minimum Relative Entropy Principle for Learning and Acting
|
cs.AI cs.LG
|
This paper proposes a method to construct an adaptive agent that is universal
with respect to a given class of experts, where each expert is an agent that
has been designed specifically for a particular environment. This adaptive
control problem is formalized as the problem of minimizing the relative entropy
of the adaptive agent from the expert that is most suitable for the unknown
environment. If the agent is a passive observer, then the optimal solution is
the well-known Bayesian predictor. However, if the agent is active, then its
past actions need to be treated as causal interventions on the I/O stream
rather than normal probability conditions. Here it is shown that the solution
to this new variational problem is given by a stochastic controller called the
Bayesian control rule, which implements adaptive behavior as a mixture of
experts. Furthermore, it is shown that under mild assumptions, the Bayesian
control rule converges to the control law of the most suitable expert.
|
0810.3631
|
Approximating the Gaussian Multiple Description Rate Region Under
Symmetric Distortion Constraints
|
cs.IT math.IT
|
We consider multiple description coding for the Gaussian source with K
descriptions under the symmetric mean squared error distortion constraints, and
provide an approximate characterization of the rate region. We show that the
rate region can be sandwiched between two polytopes, between which the gap can
be upper bounded by constants dependent on the number of descriptions, but
independent of the exact distortion constraints. Underlying this result is an
exact characterization of the lossless multi-level diversity source coding
problem: a lossless counterpart of the MD problem. This connection provides a
polytopic template for the inner and outer bounds to the rate region. In order
to establish the outer bound, we generalize Ozarow's technique to introduce a
strategic expansion of the original probability space by more than one random
variable. For the symmetric rate case with any number of descriptions, we show
that the gap between the upper bound and the lower bound for the individual
description rate is no larger than 0.92 bit. The results developed in this work
also suggest that the "separation" approach of combining successive refinement
quantization and lossless multi-level diversity coding is a competitive one,
since it is only a constant away from the optimum. The results are further
extended to general sources under the mean squared error distortion measure,
where a similar but looser bound on the gap holds.
|
0810.3729
|
Optimal codes in deletion and insertion metric
|
cs.IT cs.DM math.CO math.IT
|
We improve the upper bound of Levenshtein for the cardinality of a code of
length 4 capable of correcting single deletions over an alphabet of even size.
We also illustrate that the new upper bound is sharp. Furthermore we will
construct an optimal perfect code capable of correcting single deletions for
the same parameters.
|
0810.3787
|
Automorphisms of doubly-even self-dual binary codes
|
math.NT cs.IT math.IT
|
The automorphism group of a binary doubly-even self-dual code is always
contained in the alternating group. On the other hand, given a permutation
group $G$ of degree $n$ there exists a doubly-even self-dual $G$-invariant code
if and only if $n$ is a multiple of 8, every simple self-dual $\F_2G$-module
occurs with even multiplicity in $\F_2^n$, and $G$ is contained in the
alternating group.
|
0810.3827
|
Comments on the Boundary of the Capacity Region of Multiaccess Fading
Channels
|
cs.IT math.IT
|
A modification is proposed for the formula known from the literature that
characterizes the boundary of the capacity region of Gaussian multiaccess
fading channels. The modified version takes into account potentially negative
arguments of the cumulative distribution function that would affect the accuracy of
the numerical capacity results.
|
0810.3828
|
Quantum reinforcement learning
|
quant-ph cs.AI cs.LG
|
The key to machine learning, especially learning in unknown probabilistic
environments, lies in new representations and computation mechanisms.
In this paper, a novel quantum reinforcement learning (QRL) method is proposed
by combining quantum theory and reinforcement learning (RL). Inspired by the
state superposition principle and quantum parallelism, a framework of value
updating algorithm is introduced. The state (action) in traditional RL is
identified as the eigen state (eigen action) in QRL. The state (action) set can
be represented with a quantum superposition state and the eigen state (eigen
action) can be obtained by randomly observing the simulated quantum state
according to the collapse postulate of quantum measurement. The probability of
an eigen action is determined by its probability amplitude, which is updated
in parallel according to the rewards. Related characteristics of QRL such as
convergence, optimality and the balance between exploration and exploitation
are also analyzed, showing that this approach achieves a good tradeoff between
exploration and exploitation via the probability amplitudes and can speed up
learning through quantum parallelism. To evaluate the performance and
practicability of QRL, several simulated experiments are presented; the
results demonstrate the effectiveness and superiority of the QRL algorithm on
some complex problems. The present work is also an effective exploration of
the application of quantum computation to artificial intelligence.
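As an illustrative sketch (not taken from the paper), the measurement step QRL relies on, selecting an eigen action by "collapsing" a superposition so that action i occurs with probability equal to its squared amplitude, could look like this:

```python
import numpy as np

def collapse_measurement(amplitudes, rng=None):
    """Sample an eigen action from a superposition state.

    Per the collapse postulate, action i is observed with probability
    equal to the squared modulus of its amplitude; amplitudes are
    normalized first. (Illustrative sketch, not the paper's code.)
    """
    rng = np.random.default_rng() if rng is None else rng
    amp = np.asarray(amplitudes, dtype=complex)
    probs = np.abs(amp) ** 2
    probs /= probs.sum()
    return rng.choice(len(amp), p=probs)

# Equal amplitudes over 4 actions -> (approximately) uniform selection.
counts = np.bincount(
    [collapse_measurement([0.5, 0.5, 0.5, 0.5]) for _ in range(10000)],
    minlength=4)
```

Updating the amplitudes according to rewards then biases this sampling toward good actions while retaining exploration.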
|
0810.3851
|
Astronomical imaging: The theory of everything
|
astro-ph cs.CV physics.data-an
|
We are developing automated systems to provide homogeneous calibration
meta-data for heterogeneous imaging data, using the pixel content of the image
alone where necessary. Standardized and complete calibration meta-data permit
generative modeling: A good model of the sky through wavelength and time--that
is, a model of the positions, motions, spectra, and variability of all stellar
sources, plus an intensity map of all cosmological sources--could synthesize or
generate any astronomical image ever taken at any time with any equipment in
any configuration. We argue that the best-fit or highest likelihood model of
the data is also the best possible astronomical catalog constructed from those
data. A generative model or catalog of this form is the best possible platform
for automated discovery, because it is capable of identifying informative
failures of the model in new data at the pixel level, or as statistical
anomalies in the joint distribution of residuals from many images. It is also,
in some sense, an astronomer's "theory of everything".
|
0810.3865
|
Relationship between Diversity and Performance of Multiple Classifiers
for Decision Support
|
cs.AI
|
The paper presents an investigation and implementation of the relationship
between the diversity of multiple classifiers and their classification
accuracy. The study is important for building classifiers that are strong and
generalize well. The parameters of the neural networks within the committee
were varied to induce diversity; hence structural diversity is the focus of
this study. The hidden nodes and the activation function are the parameters
that were varied. Diversity measures adopted from ecology, such as the
Shannon and Simpson indices, were used to quantify diversity. A genetic
algorithm is used to find the optimal ensemble, with accuracy as the cost
function. The results show that there is a relationship between structural
diversity and accuracy: the classification accuracy of an ensemble increases
as the diversity increases, with an observed gain of 3%-6% in classification
accuracy.
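The ecological diversity measures mentioned above have standard definitions; a minimal sketch of how the Shannon entropy and the Gini-Simpson index would quantify the structural diversity of a committee (the label values below are hypothetical hidden-node settings, not data from the paper):

```python
import math
from collections import Counter

def shannon_diversity(labels):
    """Shannon index H = -sum_i p_i ln p_i over category proportions."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def simpson_diversity(labels):
    """Gini-Simpson index 1 - sum_i p_i^2: the probability that two
    randomly picked members fall in different categories."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

# A committee whose members use 4 distinct hidden-node counts is more
# structurally diverse than one using only 2 (hypothetical settings).
hi = ["5", "10", "15", "20"]
lo = ["5", "5", "10", "10"]
```

Both indices increase with the number and evenness of distinct parameter settings, which is what "structural diversity" measures here.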
|
0810.3891
|
Control Theoretic Formulation of Capacity of Dynamic Electro Magnetic
Channels
|
cs.IT math.IT
|
In this paper nonhomogeneous deterministic and stochastic Maxwell equations
are used to rigorously formulate the capacity of electromagnetic channels such
as wave guides (cavities, coaxial cables etc). Both distributed, but localized,
and Dirichlet boundary data are considered as the potential input sources. We
prove the existence of a source measure, satisfying certain second order
constraints (equivalent to power constraints), at which the channel capacity is
attained. Further, necessary and sufficient conditions for optimality are
presented.
|
0810.3900
|
On the Capacity and Diversity-Multiplexing Tradeoff of the Two-Way Relay
Channel
|
cs.IT math.IT
|
This paper considers a multiple input multiple output (MIMO) two-way relay
channel, where two nodes want to exchange data with each other using multiple
relays. An iterative algorithm is proposed to achieve the optimal achievable
rate region, when each relay employs an amplify and forward (AF) strategy.
At every step, the iterative algorithm solves a power minimization problem
subject to minimum signal-to-interference-and-noise ratio constraints; the
problem is non-convex, but the Karush-Kuhn-Tucker conditions are sufficient
for optimality. The optimal AF strategy assumes global channel state
information (CSI) at each relay. To simplify the CSI requirements, a simple
amplify and forward strategy, called dual channel matching, is also proposed,
that requires only local channel state information, and whose achievable rate
region is close to that of the optimal AF strategy. In the asymptotic regime of
a large number of relays, we show that the achievable rate region of the dual
channel matching and an upper bound differ by only a constant term and
establish the capacity scaling law of the two-way relay channel. Relay
strategies achieving optimal diversity-multiplexing tradeoff are also
considered with a single relay node. A compress and forward strategy is shown
to be optimal for achieving the diversity-multiplexing tradeoff for the
full-duplex case, in general, and for the half-duplex case in some cases.
|
0810.3990
|
To which extent is the "neural code" a metric?
|
physics.bio-ph cs.NE physics.data-an q-bio.NC
|
We propose a review of the different choices for structuring spike trains
using deterministic metrics. Temporal constraints observed in biological or
computational spike trains are first taken into account. The relation to
existing neural codes (rate coding, rank coding, phase coding, ...) is then
discussed. To which extent the "neural code" contained in spike trains is
related to a metric appears to be a key point; a generalization of the
Victor-Purpura metric family is proposed for temporally constrained causal
spike trains.
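The Victor-Purpura metric family referred to above has a standard dynamic-programming form; a minimal sketch of the base metric (the temporally constrained generalization proposed here is not reproduced):

```python
import numpy as np

def victor_purpura(t1, t2, q):
    """Victor-Purpura spike-train distance (standard form).

    Edit distance between two spike trains: inserting or deleting a
    spike costs 1, moving a spike by dt costs q*|dt|. The parameter q
    sets the temporal precision: q=0 reduces to the difference in
    spike counts (pure rate code), large q to a strict timing code.
    """
    n, m = len(t1), len(t2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1)  # delete every spike of t1
    D[0, :] = np.arange(m + 1)  # insert every spike of t2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = min(D[i - 1, j] + 1.0,
                          D[i, j - 1] + 1.0,
                          D[i - 1, j - 1] + q * abs(t1[i - 1] - t2[j - 1]))
    return D[n, m]
```

Sweeping q interpolates between rate coding and timing-based codes, which is exactly the axis along which the review compares neural codes.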
|
0810.3992
|
Introducing numerical bounds to improve event-based neural network
simulation
|
nlin.AO cs.NE nlin.CD q-bio.NC
|
Although the spike-trains in neural networks are mainly constrained by the
neural dynamics itself, global temporal constraints (refractoriness, time
precision, propagation delays, ..) are also to be taken into account. These
constraints are revisited in this paper in order to use them in event-based
simulation paradigms.
We first review these constraints, and discuss their consequences at the
simulation level, showing how event-based simulation of time-constrained
networks can be simplified in this context: the underlying data-structures are
strongly simplified, while event-based and clock-based mechanisms can be easily
mixed. These ideas are applied to punctual conductance-based generalized
integrate-and-fire neural networks simulation, while spike-response model
simulations are also revisited within this framework.
As an outcome, a fast, minimal alternative that complements existing
event-based simulation methods, and that makes it possible to simulate
interesting neuron models, is implemented and evaluated.
|
0810.4059
|
Network Coding-based Protection Strategies Against a Single Link Failure
in Optical Networks
|
cs.IT cs.NI math.IT
|
In this paper we develop network protection strategies against a single link
failure in optical networks, motivated by the fact that $70\%$ of all
available links in an optical network suffer from a single link failure. In
the proposed protection strategies, denoted NPS-I and NPS-II, we deploy
network coding and reduced capacity on the working paths to provide a backup
protection path that carries encoded data from all sources. In addition, we
discuss implementation aspects and show how to deploy the proposed strategies
in the case of an optical network with $n$ disjoint working paths.
|
0810.4112
|
Sums of residues on algebraic surfaces and application to coding theory
|
math.AG cs.IT math.IT
|
In this paper, we study residues of differential 2-forms on a smooth
algebraic surface over an arbitrary field and give several statements about
sums of residues. Afterwards, using these results we construct
algebraic-geometric codes which are an extension to surfaces of the well-known
differential codes on curves. We also study some properties of these codes and
extend to them some known properties for codes on curves.
|
0810.4171
|
Capacity of Steganographic Channels
|
cs.CR cs.IT math.IT
|
This work investigates a central problem in steganography, that is: How much
data can safely be hidden without being detected? To answer this question, a
formal definition of steganographic capacity is presented. Once this has been
defined, a general formula for the capacity is developed. The formula is
applicable to a very broad spectrum of channels due to the use of an
information-spectrum approach. This approach allows for the analysis of
arbitrary steganalyzers as well as non-stationary, non-ergodic encoder and
attack channels.
After the general formula is presented, various simplifications are applied
to gain insight into example hiding and detection methodologies. Finally, the
context and applications of the work are summarized in a general discussion.
|
0810.4182
|
Bucketing Coding and Information Theory for the Statistical High
Dimensional Nearest Neighbor Problem
|
cs.IT math.IT
|
Consider the problem of finding high dimensional approximate nearest
neighbors, where the data is generated by some known probabilistic model. We
will investigate a large natural class of algorithms which we call bucketing
codes. We will define bucketing information, prove that it bounds the
performance of all bucketing codes, and that the bucketing information bound
can be asymptotically attained by randomly constructed bucketing codes.
For example, suppose we have n Bernoulli(1/2) very long (length d-->infinity)
sequences of bits. Let n-2m sequences be completely independent, while the
remaining 2m sequences are composed of m independent pairs. The interdependence
within each pair is that their bits agree with probability 1/2<p<=1. It is well
known how to find most pairs with high probability by performing order of
n^{\log_{2}2/p} comparisons. We will see that order of n^{1/p+\epsilon}
comparisons suffice, for any \epsilon>0. Moreover, if one sequence out of each
pair belongs to a known set of n^{(2p-1)^{2}-\epsilon} sequences, then
pairing can be done using order n comparisons!
|
0810.4188
|
A Heterogeneous High Dimensional Approximate Nearest Neighbor Algorithm
|
cs.IT math.IT
|
We consider the problem of finding high dimensional approximate nearest
neighbors. Suppose there are d independent rare features, each having its own
independent statistics. A point x will have x_{i}=0 denote the absence of
feature i, and x_{i}=1 its existence. Sparsity means that usually x_{i}=0.
Distance between points is a variant of the Hamming distance. Dimensional
reduction converts the sparse heterogeneous problem into a lower dimensional
full homogeneous problem. However, we will see that the converted problem can be
much harder to solve than the original problem. Instead we suggest a direct
approach. It consists of T tries. In try t we rearrange the coordinates in
decreasing order of (1-r_{t,i})\frac{p_{i,11}}{p_{i,01}+p_{i,10}}
\ln\frac{1}{p_{i,1*}} where 0<r_{t,i}<1 are uniform pseudo-random numbers, and
the p's are the coordinate's statistical parameters. The points are
lexicographically ordered, and each is compared to its neighbors in that order.
We analyze a generalization of this algorithm, show that it is optimal in
some class of algorithms, and estimate the necessary number of tries for
success. It is governed by an information-like function, which we call
bucketing forest information. Any doubts whether it is "information" are
dispelled by another paper, where unrestricted bucketing information is
defined.
|
0810.4341
|
Entropy of Hidden Markov Processes via Cycle Expansion
|
cs.IT cond-mat.other math.IT physics.data-an
|
Hidden Markov Processes (HMPs) are among the basic tools of modern
probabilistic modeling. The characterization of their entropy remains,
however, an open problem. Here the entropy of an HMP is calculated via the cycle expansion
of the zeta-function, a method adopted from the theory of dynamical systems.
For a class of HMP this method produces exact results both for the entropy and
the moment-generating function. The latter allows one to estimate, via the Chernoff
bound, the probabilities of large deviations for the HMP. More generally, the
method offers a representation of the moment-generating function and of the
entropy via convergent series.
|
0810.4366
|
Resource Allocation and Relay Selection for Collaborative Communications
|
cs.IT math.IT
|
We investigate the relay selection problem for a decode-and-forward
collaborative network. Users are able to collaborate: they decode each
other's messages, re-encode them, and forward them along with their own. We
study the performance obtained from collaboration in terms of 1) increasing
the achievable rate, 2) saving transmit energy and 3) reducing the resource
requirement (resource meaning time-bandwidth). To ensure fairness, we fix the
transmit-energy-to-rate ratio among all users. We allocate resources
optimally for the collaborative protocol (CP), and compare the result with
the non-collaborative protocol (NCP), where users transmit their messages
directly.
The collaboration gain is a function of the channel gains and available
energies, and allows us 1) to decide whether or not to collaborate, 2) to
select one relay among the possible relay users, and 3) to determine the gain
and loss involved in a possible collaboration. A considerable gain can be obtained if the direct
source-destination channel gain is significantly smaller than those of
alternative involved links. We demonstrate that a rate and energy improvement
of up to $(1+\sqrt[\eta]{\frac{k}{k+1}})^\eta$ can be obtained, where $\eta$ is
the environment path loss exponent and $k$ is the ratio of the rates of
involved users. The gain is maximum for low
transmit-energy-to-received-noise-ratio (TERN) and in a high TERN environment
the NCP is preferred.
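A quick numeric check of the improvement bound stated above (an illustrative sketch, not code from the paper):

```python
def collaboration_gain(k, eta):
    """Maximum rate/energy improvement stated in the abstract:
    (1 + (k/(k+1))**(1/eta))**eta, where k is the ratio of the rates
    of the involved users and eta is the path-loss exponent."""
    return (1.0 + (k / (k + 1.0)) ** (1.0 / eta)) ** eta

# Equal-rate users (k=1) in a free-space-like environment (eta=2):
g = collaboration_gain(1, 2)  # (1 + sqrt(1/2))^2, roughly 2.91
```

The gain grows with k (the partner's relative rate) and approaches 2^eta as k becomes large, since k/(k+1) tends to 1.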
|
0810.4401
|
Efficient Exact Inference in Planar Ising Models
|
cs.LG cs.CV stat.ML
|
We give polynomial-time algorithms for the exact computation of lowest-energy
(ground) states, worst margin violators, log partition functions, and marginal
edge probabilities in certain binary undirected graphical models. Our approach
provides an interesting alternative to the well-known graph cut paradigm in
that it does not impose any submodularity constraints; instead we require
planarity to establish a correspondence with perfect matchings (dimer
coverings) in an expanded dual graph. We implement a unified framework while
delegating complex but well-understood subproblems (planar embedding,
maximum-weight perfect matching) to established algorithms for which efficient
implementations are freely available. Unlike graph cut methods, we can perform
penalized maximum-likelihood as well as maximum-margin parameter estimation in
the associated conditional random fields (CRFs), and employ marginal posterior
probabilities as well as maximum a posteriori (MAP) states for prediction.
Maximum-margin CRF parameter estimation on image denoising and segmentation
problems shows our approach to be efficient and effective. A C++ implementation
is available from http://nic.schraudolph.org/isinf/
|
0810.4404
|
Non binary LDPC codes over the binary erasure channel: density evolution
analysis
|
cs.IT math.IT
|
In this paper we present a thorough analysis of non binary LDPC codes over
the binary erasure channel. First, the decoding of non binary LDPC codes is
investigated. The proposed algorithm performs on-the-fly decoding, i.e. it
starts decoding as soon as the first symbols are received, which generalizes
the erasure decoding of binary LDPC codes. Next, we evaluate the asymptotic
performance of ensembles of non binary LDPC codes by using the density
evolution method. Density evolution equations are derived by taking into
consideration both the irregularity of the bipartite graph and the probability
distribution of the graph edge labels. Finally, the infinite-length
performance of some ensembles of non binary LDPC codes for different edge
label distributions is shown.
|
0810.4426
|
Camera distortion self-calibration using the plumb-line constraint and
minimal Hough entropy
|
cs.CV
|
In this paper we present a simple and robust method for self-correction of
camera distortion using single images of scenes which contain straight lines.
Since the most common distortion can be modelled as radial distortion, we
illustrate the method using the Harris radial distortion model, but the method
is applicable to any distortion model. The method is based on transforming the
edgels of the distorted image to a 1-D angular Hough space, and optimizing the
distortion correction parameters which minimize the entropy of the
corresponding normalized histogram. Properly corrected imagery will have fewer
curved lines, and therefore less spread in Hough space. Since the method does
not rely on any image structure beyond the existence of edgels sharing some
common orientations and does not use edge fitting, it is applicable to a wide
variety of image types. For instance, it can be applied equally well to images
of texture with weak but dominant orientations, or images with strong vanishing
points. Finally, the method is evaluated on both synthetic and real data,
revealing that it is particularly robust to noise.
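The objective at the heart of the method, the entropy of the normalized 1-D angular Hough histogram, can be sketched as follows (synthetic orientation data for illustration; the paper's full Harris-model correction pipeline is not reproduced):

```python
import numpy as np

def hough_entropy(angles, nbins=180):
    """Entropy of the normalized 1-D angular Hough histogram.
    Properly corrected imagery concentrates edgel orientations into
    fewer bins, hence lower entropy."""
    hist, _ = np.histogram(np.mod(angles, np.pi), bins=nbins,
                           range=(0.0, np.pi))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
# Edgels from two straight dominant directions (corrected image) versus
# the same edgels smeared by residual curvature (distorted image).
straight = np.concatenate([np.full(500, 0.3), np.full(500, 1.8)])
curved = straight + rng.normal(0.0, 0.3, size=1000)
```

Self-calibration then amounts to searching the distortion parameters (e.g. of the Harris radial model) that minimize this entropy after undistorting the edgels.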
|
0810.4442
|
Message passing resource allocation for the uplink of multicarrier
systems
|
cs.IT math.IT
|
We propose a novel distributed resource allocation scheme for the uplink of
a cellular multi-carrier system based on the message passing (MP) algorithm. In
the proposed approach each transmitter iteratively sends and receives
information messages to/from the base station with the goal of achieving an
optimal resource allocation strategy. The exchanged messages are the solution
of small distributed allocation problems. To reduce the computational load, the
MP problems at the terminals follow a dynamic programming formulation. The
advantage of the proposed scheme is that it distributes the computational
effort among all the transmitters in the cell and it does not require the
presence of a central controller that takes all the decisions. Numerical
results show that the proposed approach is an excellent solution to the
resource allocation problem for cellular multi-carrier systems.
|
0810.4460
|
Logics for XML
|
cs.PL cs.DB cs.LO
|
This thesis describes the theoretical and practical foundations of a system
for the static analysis of XML processing languages. The system relies on a
fixpoint temporal logic with converse, derived from the mu-calculus, where
models are finite trees. This calculus is expressive enough to capture regular
tree types along with multi-directional navigation in trees, while having a
single exponential time complexity. Specifically, the decidability of the
logic is proved in time 2^O(n), where n is the size of the input formula.
Major XML concepts are linearly translated into the logic: XPath navigation
and node selection semantics, and regular tree languages (which include DTDs
and XML Schemas). Based on these embeddings, several problems of major
importance in XML applications are reduced to satisfiability of the logic.
These problems include XPath containment, emptiness, equivalence, overlap,
coverage, in the presence or absence of regular tree type constraints, and the
static type-checking of an annotated query.
The focus is then given to a sound and complete algorithm for deciding the
logic, along with a detailed complexity analysis, and crucial implementation
techniques for building an effective solver. Practical experiments using a full
implementation of the system are presented. The system appears to be efficient
in practice for several realistic scenarios.
The main application of this work is a new class of static analyzers for
programming languages using both XPath expressions and XML type annotations
(input and output). Such analyzers make it possible to ensure at compile time
valuable properties such as type safety and optimizations, for safer and more
efficient XML processing.
|
0810.4611
|
Learning Isometric Separation Maps
|
cs.LG
|
Maximum Variance Unfolding (MVU) and its variants have been very successful
in embedding data-manifolds in lower dimensional spaces, often revealing the
true intrinsic dimension. In this paper we show how to also incorporate
supervised class information into an MVU-like method without breaking its
convexity. We call this method the Isometric Separation Map and we show that
the resulting kernel matrix can be used as a binary/multiclass Support Vector
Machine-like method in a semi-supervised (transductive) framework. We also show
that the method always finds a kernel matrix that linearly separates the
training data exactly without projecting them into infinite-dimensional
spaces. In traditional SVMs we choose a kernel and hope that the data become
linearly separable in the kernel space. In this paper we show how the
hyperplane can be chosen ad hoc and the kernel trained so that the data are
always linearly separable. Comparisons with Large Margin SVMs show comparable
performance.
|
0810.4616
|
Assembling Actor-based Mind-Maps from Text Stream
|
cs.CL cs.DL
|
For human beings, the processing of text streams of unknown size generally
leads to problems: noise must be filtered out, information must be tested for
relevance or redundancy, and linguistic phenomena like ambiguity or the
resolution of pronouns must be handled. Simulating this by using an
artificial mind-map is a challenge, which opens the gate to a wide field of
applications like automatic text summarization or punctual retrieval. In this
work we present a framework that is a first step towards an automatic
intellect. It aims at assembling a mind-map based on incoming text streams
and on a subject-verb-object strategy, having the verb as an interconnection
between the adjacent nouns. The mind-map's performance is enriched by a
pronoun resolution engine based on the work of D. Klein and C. D. Manning.
|
0810.4617
|
Graph-based classification of multiple observation sets
|
cs.CV
|
We consider the problem of classification of an object given multiple
observations that possibly include different transformations. The possible
transformations of the object generally span a low-dimensional manifold in the
original signal space. We propose to take advantage of this manifold structure
for the effective classification of the object represented by the observation
set. In particular, we design a low complexity solution that is able to exploit
the properties of the data manifolds with a graph-based algorithm. Hence, we
formulate the computation of the unknown label matrix as a smoothing process on
the manifold under the constraint that all observations represent an object of
one single class. This results in a discrete optimization problem, which can
be solved by an efficient and low-complexity algorithm. We demonstrate the
performance of the proposed graph-based algorithm in the classification of sets
of multiple images. Moreover, we show its high potential in video-based face
recognition, where it outperforms state-of-the-art solutions that fall short of
exploiting the manifold structure of the face image data sets.
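A minimal sketch of graph-based label smoothing with the single-class pooling constraint described above (a generic Zhou-style label propagation, not necessarily the authors' exact formulation; all names and parameters are illustrative):

```python
import numpy as np

def classify_observation_set(X_labeled, y_labeled, X_set, n_classes,
                             sigma=1.0, alpha=0.9, iters=50):
    """Smooth class scores over a similarity graph built from labeled
    examples plus one multi-observation query set, then force all
    query observations to share a single predicted class by pooling
    their scores. (Illustrative sketch.)"""
    X = np.vstack([X_labeled, X_set])
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))      # Gaussian similarity graph
    np.fill_diagonal(W, 0.0)
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))          # symmetric normalization
    Y = np.zeros((len(X), n_classes))
    Y[np.arange(len(y_labeled)), y_labeled] = 1.0
    F = Y.copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y  # smoothing iteration
    # Single-class constraint: pool scores over the whole query set.
    return int(F[len(X_labeled):].sum(0).argmax())
```

Pooling the smoothed scores is one simple way to impose the "all observations belong to one class" constraint; the paper formulates this as a discrete optimization instead.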
|
0810.4657
|
Cooperative Strategies for the Half-Duplex Gaussian Parallel Relay
Channel: Simultaneous Relaying versus Successive Relaying
|
cs.IT math.IT
|
This study investigates the problem of communication for a network composed
of two half-duplex parallel relays with additive white Gaussian noise. Two
protocols, i.e., \emph{Simultaneous} and \emph{Successive} relaying, associated
with two possible relay orderings are proposed. The simultaneous relaying
protocol is based on \emph{Dynamic Decode and Forward (DDF)} scheme. For the
successive relaying protocol: (i) a \emph{Non-Cooperative} scheme based on the
\emph{Dirty Paper Coding (DPC)}, and (ii) a \emph{Cooperative} scheme based on
the \emph{Block Markov Encoding (BME)} are considered. Furthermore, the
composite scheme of employing BME at one relay and DPC at another always
achieves a better rate when compared to the \emph{Cooperative} scheme. A
\emph{"Simultaneous-Successive Relaying based on Dirty paper coding scheme"
(SSRD)} is also proposed. The optimum ordering of the relays and hence the
capacity of the half-duplex Gaussian parallel relay channel in the low and high
signal-to-noise ratio (SNR) scenarios are derived. In the low SNR scenario, it
is revealed that under certain conditions for the channel coefficients, the
ratio of the achievable rate of the simultaneous relaying based on DDF to the
cut-set bound tends to 1. On the other hand, as SNR goes to infinity, it is
proved that successive relaying, based on the DPC, asymptotically achieves the
capacity of the network.
|
0810.4658
|
Indexability of Restless Bandit Problems and Optimality of Whittle's
Index for Dynamic Multichannel Access
|
cs.IT math.IT
|
We consider a class of restless multi-armed bandit problems (RMBP) that
arises in dynamic multichannel access, user/server scheduling, and optimal
activation in multi-agent systems. For this class of RMBP, we establish the
indexability and obtain Whittle's index in closed-form for both discounted and
average reward criteria. These results lead to a direct implementation of
Whittle's index policy with remarkably low complexity. When the channels'
underlying Markov chains are stochastically identical, we show that Whittle's
index policy is optimal
under certain conditions. Furthermore, it has a semi-universal structure that
obviates the need to know the Markov transition probabilities. The optimality
and the semi-universal structure result from the equivalency between Whittle's
index policy and the myopic policy established in this work. For non-identical
channels, we develop efficient algorithms for computing a performance upper
bound given by Lagrangian relaxation. The tightness of the upper bound and the
near-optimal performance of Whittle's index policy are illustrated with
simulation examples.
|
0810.4668
|
On Granular Knowledge Structures
|
cs.AI cs.DL
|
Knowledge plays a central role in human and artificial intelligence. One of
the key characteristics of knowledge is its structured organization. Knowledge
can be and should be presented in multiple levels and multiple views to meet
people's needs in different levels of granularities and from different
perspectives. In this paper, we stand on the view point of granular computing
and provide our understanding on multi-level and multi-view of knowledge
through granular knowledge structures (GKS). Representation of granular
knowledge structures, operations for building granular knowledge structures and
how to use them are investigated. As an illustration, we provide some
examples based on the results of an analysis of proceedings papers. The
results show that granular knowledge structures can help users gain a better
understanding of the knowledge source from set-theoretical, logical and
visual points of view. One may consider using them to meet specific needs or
to solve certain kinds of problems.
|
0810.4727
|
Robust Estimation of Mean Values
|
math.ST cs.SY math.PR stat.CO stat.TH
|
In this paper, we develop a computational approach for estimating the mean
value of a quantity in the presence of uncertainty. We demonstrate that, under
some mild assumptions, the upper and lower bounds of the mean value are
efficiently computable via a sample reuse technique, whose computational
complexity is shown to possess a Poisson distribution.
|
0810.4741
|
On the Capacity and Generalized Degrees of Freedom of the X Channel
|
cs.IT math.IT
|
We explore the capacity and generalized degrees of freedom of the two-user
Gaussian X channel, i.e., a generalization of the two-user interference
channel where there is an independent message from each transmitter to each
receiver.
There are three main results in this paper. First, we characterize the sum
capacity of the deterministic X channel model under a symmetric setting.
Second, we characterize the generalized degrees of freedom of the Gaussian X
channel under a similar symmetric model. Third, we extend the noisy
interference capacity characterization previously obtained for the interference
channel to the X channel. Specifically, we show that the X channel associated
with a noisy (very weak) interference channel has the same sum capacity as
that noisy interference channel.
|
0810.4809
|
XQuery Join Graph Isolation
|
cs.DB
|
A purely relational account of the true XQuery semantics can turn any
relational database system into an XQuery processor. Compiling nested
expressions of the fully compositional XQuery language, however, yields odd
algebraic plan shapes featuring scattered distributions of join operators that
currently overwhelm commercial SQL query optimizers.
This work rewrites such plans before submission to the relational database
back-end. Once cast into the shape of join graphs, we have found off-the-shelf
relational query optimizers--the B-tree indexing subsystem and join tree
planner, in particular--to cope and even be autonomously capable of
"reinventing" advanced processing strategies that have originally been devised
specifically for the XQuery domain, e.g., XPath step reordering, axis reversal,
and path stitching. Performance assessments provide evidence that relational
query engines are among the most versatile and efficient XQuery processors
readily available today.
|
0810.4884
|
The adaptability of physiological systems optimizes performance: new
directions in augmentation
|
cs.HC cs.NE
|
This paper contributes to the human-machine interface community in two ways:
as a critique of the closed-loop AC (augmented cognition) approach, and as a
way to introduce concepts from complex systems and systems physiology into the
field. Of particular relevance is a comparison of the inverted-U (or Gaussian)
model of optimal performance with the multidimensional fitness landscape
model. Hypothetical examples are given from human physiology and from learning
and memory. In particular, a four-step model is introduced that is proposed as
a better means of characterizing multivariate systems during behavioral
processes with complex dynamics such as learning. Finally, the alternative
approach presented herein is argued to be a preferable design choice for
human-machine systems. It is within this context that future directions are
discussed.
|
0810.4916
|
Sequential adaptive compressed sampling via Huffman codes
|
cs.IT math.IT
|
There are two main approaches in compressed sensing: the geometric approach
and the combinatorial approach. In this paper we introduce an information
theoretic approach and use results from the theory of Huffman codes to
construct a sequence of binary sampling vectors to determine a sparse signal.
Unlike other approaches, our approach is adaptive in the sense that each
sampling vector depends on the previous sample. The number of measurements we
need for a k-sparse vector in n-dimensional space is no more than O(k log n)
and the reconstruction is O(k).
|
0810.4952
|
Computational modelling of evolution: ecosystems and language
|
q-bio.PE cs.CL physics.soc-ph
|
Recently, computational modelling has become a very important research tool
that enables us to study problems that for decades evaded scientific analysis.
Evolutionary systems are certainly examples of such problems: they are composed
of many units that might reproduce, diffuse, mutate, die, or, in some cases,
communicate. These processes might be of some adaptive value; they influence
each other and occur on various time scales. That is why such systems are so
difficult to study. In this paper we briefly review some computational
approaches, as well as our contributions, to the evolution of ecosystems and
language. We start from the Lotka-Volterra equations and the modelling of
simple two-species prey-predator systems. Such systems are a canonical example
for studying oscillatory behaviour in competitive populations. Then we
describe various approaches to studying the long-term evolution of
multi-species ecosystems. We emphasize the need to use models that take into
account both ecological and evolutionary processes. Finally, we address the
problem of the emergence and development of language. It is becoming more and
more evident that any theory of language origin and development must be
consistent with Darwinian principles of evolution. Consequently, a number of
techniques developed for modelling the evolution of complex ecosystems are
being applied to the problem of language. We briefly review some of these
approaches.
|
0810.4993
|
New completely regular q-ary codes based on Kronecker products
|
cs.IT cs.DM math.CO math.IT
|
For any integer $\rho \geq 1$ and for any prime power q, an explicit
construction of an infinite family of completely regular (and completely
transitive) q-ary codes with d=3 and with covering radius $\rho$ is given. The
intersection array is also computed. Under the same conditions, an explicit
construction of an infinite family of q-ary uniformly packed codes (in the
wide sense) with covering radius $\rho$, which are not completely regular, is
also given. In both constructions, the Kronecker product is the basic tool.
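As a concrete illustration of the basic tool, a generic Kronecker product of two matrices can be sketched as follows; this is a standard textbook operation shown over the integers for simplicity, not the paper's specific code construction, and the function name is chosen for illustration:

```python
def kron(A, B):
    """Kronecker product of matrices given as lists of lists.
    With B of size p x q: (A kron B)[i*p + k][j*q + l] = A[i][j] * B[k][l].
    For q-ary code constructions the entries would be reduced mod q."""
    return [
        [a * b for a in row_a for b in row_b]
        for row_a in A for row_b in B
    ]
```

For example, `kron([[1, 2]], [[0, 1]])` yields `[[0, 1, 0, 2]]`: each entry of the first matrix scales a full copy of the second.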
|
0810.5057
|
Combining Advanced Visualization and Automatized Reasoning for
Webometrics: A Test Study
|
cs.IR cs.DL
|
This paper presents a first attempt at a precise and automatic identification
of the linking behaviour in a scientific domain through an analysis of the web
communication of the related academic institutions. The proposed approach is
based on the paradigm of multiple viewpoint data analysis (MVDA), which can be
fruitfully exploited to highlight relationships between data items, such as
websites, carrying several kinds of description. It uses the MultiSOM
clustering and mapping method. The domain chosen for this study is Computer
Science in Germany. The analysis is conducted on a set of 438 websites of this
domain, using thematic, geographic, and linking information together. It
highlights interesting results concerning both global and local linking
behaviour.
|
0810.5064
|
A New Algorithm for Building Alphabetic Minimax Trees
|
cs.IT cs.DS math.IT
|
We show how to build an alphabetic minimax tree for a sequence $W = w_1,
\ldots, w_n$ of real weights in $O(n d \log \log n)$ time, where $d$ is the
number of distinct integers $\lceil w_i \rceil$. We apply this algorithm to
building an alphabetic prefix code given a sample.
|
0810.5090
|
Power-Bandwidth Tradeoff in Multiuser Relay Channels with Opportunistic
Scheduling
|
cs.IT math.IT
|
The goal of this paper is to understand the key merits of multihop relaying
techniques jointly in terms of their energy efficiency and spectral efficiency
advantages in the presence of multiuser diversity gains from opportunistic
(i.e., channel-aware) scheduling and identify the regimes and conditions in
which relay-assisted multiuser communication provides a clear advantage over
direct multiuser communication. For this purpose, we use Shannon-theoretic
tools to analyze the tradeoff between energy efficiency and spectral efficiency
(known as the power-bandwidth tradeoff) over a fading multiuser relay channel
with $K$ users in the asymptotic regime of large (but finite) number of users
(i.e., dense network). Benefiting from the extreme-value theoretic results of
\cite{Oyman_isit07}, we characterize the power-bandwidth tradeoff and the
associated energy and spectral efficiency measures of the bandwidth-limited
high signal-to-noise ratio (SNR) and power-limited low SNR regimes, and utilize
them in investigating the large system behavior of the multiuser relay channel
as a function of the number of users and physical channel SNRs. Our analysis
results in very accurate closed-form formulas in the large (but finite) $K$
regime that quantify energy and spectral efficiency performance, and provides
insights on the impact of multihop relaying and multiuser diversity techniques
on the power-bandwidth tradeoff.
|
0810.5098
|
Reliability Bounds for Delay-Constrained Multi-hop Networks
|
cs.IT math.IT
|
We consider a linear multi-hop network composed of multi-state discrete-time
memoryless channels over each hop, with orthogonal time-sharing across hops
under a half-duplex relaying protocol. We analyze the probability of error and
associated reliability function \cite{Gallager68} over the multi-hop network;
with emphasis on random coding and sphere packing bounds, under the assumption
of point-to-point coding over each hop. In particular, we define the system
reliability function for the multi-hop network and derive lower and upper
bounds on this function to specify the reliability-optimal operating conditions
of the network under an end-to-end constraint on the total number of channel
uses. Moreover, we apply the reliability analysis to bound the expected
end-to-end latency of multi-hop communication under the support of an automatic
repeat request (ARQ) protocol. Considering an additive white Gaussian noise
(AWGN) channel model over each hop, we evaluate and compare these bounds to
draw insights on the role of multi-hopping toward enhancing the end-to-end
rate-reliability-delay tradeoff.
|
0810.5148
|
Scheduling Kalman Filters in Continuous Time
|
math.OC cs.IT math.IT
|
A set of N independent Gaussian linear time invariant systems is observed by
M sensors whose task is to provide the best possible steady-state causal
minimum mean square estimate of the state of the systems, in addition to
minimizing a steady-state measurement cost. The sensors can switch between
systems instantaneously, and there are additional resource constraints, for
example on the number of sensors which can observe a given system
simultaneously. We first derive a tractable relaxation of the problem, which
provides a bound on the achievable performance. This bound can be computed by
solving a convex program involving linear matrix inequalities. Exploiting the
additional structure of the sites evolving independently, we can decompose this
program into coupled smaller dimensional problems. In the scalar case with
identical sensors, we give an analytical expression of an index policy proposed
in a more general context by Whittle. In the general case, we develop open-loop
periodic switching policies whose performance matches the bound arbitrarily
closely.
|
0810.5203
|
Monotonic Convergence in an Information-Theoretic Law of Small Numbers
|
cs.IT math.IT math.PR
|
An "entropy increasing to the maximum" result analogous to the entropic
central limit theorem (Barron 1986; Artstein et al. 2004) is obtained in the
discrete setting. This involves the thinning operation and a Poisson limit.
Monotonic convergence in relative entropy is established for general discrete
distributions, while monotonic increase of Shannon entropy is proved for the
special class of ultra-log-concave distributions. Overall we extend the
parallel between the information-theoretic central limit theorem and law of
small numbers explored by Kontoyiannis et al. (2005) and Harremo\"es et al.\
(2007, 2008). Ingredients in the proofs include convexity, majorization, and
stochastic orders.
|
0810.5308
|
Typical Performance of Irregular Low-Density Generator-Matrix Codes for
Lossy Compression
|
cond-mat.dis-nn cs.IT math.IT
|
We evaluate the typical performance of irregular low-density generator-matrix
(LDGM) codes, which are defined by sparse matrices with an arbitrary irregular
bit-degree distribution and an arbitrary check-degree distribution, for lossy
compression. We apply the replica method under the one-step replica symmetry
breaking (1RSB) ansatz to this problem.
|
0810.5325
|
3D Face Recognition with Sparse Spherical Representations
|
cs.CV
|
This paper addresses the problem of 3D face recognition using simultaneous
sparse approximations on the sphere. The 3D face point clouds are first aligned
with a novel and fully automated registration process. They are then
represented as signals on the 2D sphere in order to preserve depth and geometry
information. Next, we implement a dimensionality reduction process with
simultaneous sparse approximations and subspace projection. This permits each
3D face to be represented by only a few spherical functions that capture the
salient facial characteristics, and hence preserves the
discriminant facial information. We eventually perform recognition by effective
matching in the reduced space, where Linear Discriminant Analysis can be
further activated for improved recognition performance. The 3D face recognition
algorithm is evaluated on the FRGC v.1.0 data set, where it is shown to
outperform classical state-of-the-art solutions that work with depth images.
|
0810.5399
|
An axiomatic characterization of a two-parameter extended relative
entropy
|
cond-mat.stat-mech cs.IT math.IT
|
The uniqueness theorem for a two-parameter extended relative entropy is
proven. This result extends our previous one, the uniqueness theorem for a
one-parameter extended relative entropy, to a two-parameter case. In addition,
the properties of a two-parameter extended relative entropy are studied.
|
0810.5407
|
Quasi-metrics, Similarities and Searches: aspects of geometry of protein
datasets
|
cs.IR math.GN q-bio.QM
|
A quasi-metric is a distance function which satisfies the triangle inequality
but is not symmetric: it can be thought of as an asymmetric metric. The central
result of this thesis, developed in Chapter 3, is that a natural correspondence
exists between similarity measures between biological (nucleotide or protein)
sequences and quasi-metrics.
Chapter 2 presents basic concepts of the theory of quasi-metric spaces and
introduces new examples of them: the universal countable rational
quasi-metric space and its bicompletion, the universal bicomplete separable
quasi-metric space. Chapter 4 is dedicated to development of a notion of the
quasi-metric space with Borel probability measure, or pq-space. The main result
of this chapter indicates that `a high dimensional quasi-metric space is close
to being a metric space'.
Chapter 5 investigates the geometric aspects of the theory of database
similarity search in the context of quasi-metrics. The results about
$pq$-spaces are used to produce novel theoretical bounds on performance of
indexing schemes.
Finally, the thesis presents some biological applications. Chapter 6
introduces FSIndex, an indexing scheme that significantly accelerates
similarity searches of short protein fragment datasets. Chapter 7 presents the
prototype of the system for discovery of short functional protein motifs called
PFMFind, which relies on FSIndex for similarity searches.
|
0810.5428
|
Relating Web pages to enable information-gathering tasks
|
cs.IR cs.DS
|
We argue that relationships between Web pages are functions of the user's
intent. We identify a class of Web tasks - information-gathering - that can be
facilitated by a search engine that provides links to pages which are related
to the page the user is currently viewing. We define three kinds of intentional
relationships that correspond to whether the user is a) seeking sources of
information, b) reading pages which provide information, or c) surfing through
pages as part of an extended information-gathering process. We show that these
three relationships can be productively mined using a combination of textual
and link information and provide three scoring mechanisms that correspond to
them: {\em SeekRel}, {\em FactRel} and {\em SurfRel}. These scoring mechanisms
incorporate both textual and link information. We build a set of capacitated
subnetworks - each corresponding to a particular keyword - that mirror the
interconnection structure of the World Wide Web. The scores are computed via
flows on these subnetworks. The capacities of the links are derived
from the {\em hub} and {\em authority} values of the nodes they connect,
following the work of Kleinberg (1998) on assigning authority to pages in
hyperlinked environments. We evaluated our scoring mechanism by running
experiments on four data sets taken from the Web. We present user evaluations
of the relevance of the top results returned by our scoring mechanisms and
compare those to the top results returned by Google's Similar Pages feature,
and the {\em Companion} algorithm proposed by Dean and Henzinger (1999).
|
0810.5484
|
A Novel Clustering Algorithm Based on a Modified Model of Random Walk
|
cs.LG cs.AI cs.MA
|
We introduce a modified model of random walk, and then develop two novel
clustering algorithms based on it. In the algorithms, each data point in a
dataset is considered as a particle which can move at random in space according
to the preset rules in the modified model. Further, this data point may be also
viewed as a local control subsystem, in which the controller adjusts its
transition probability vector in terms of the feedbacks of all data points, and
then its transition direction is identified by an event-generating function.
Finally, the positions of all data points are updated. As they move in space,
data points gradually aggregate and separating gaps emerge among them
automatically. As a consequence, data points that belong to the same class
come to occupy the same position, whereas those that belong to different
classes move away from one another. Moreover, the experimental results have
demonstrated
that data points in the test datasets are clustered reasonably and efficiently,
and the comparison with other algorithms also provides an indication of the
effectiveness of the proposed algorithms.
|
0810.5535
|
A Combinatorial-Probabilistic Diagnostic Entropy and Information
|
cs.IT math.IT
|
A new combinatorial-probabilistic diagnostic entropy has been introduced. It
describes the pair-wise sum of probabilities of system conditions that have to
be distinguished during the diagnosing process. The proposed measure describes
the uncertainty of the system conditions, and at the same time complexity of
the diagnosis problem. Treating the assumed combinatorial-diagnostic entropy as
a primary notion, the information delivered by the symptoms has been defined.
The relationships have been derived to facilitate explicit, quantitative
assessment of the information of a single symptom as well as that of a symptoms
set. It has been proved that the combinatorial-probabilistic information shows
the property of additivity. The presented measures are focused on the
diagnosis problem, but they can easily be applied to other disciplines such as
decision
theory and classification.
|
0810.5551
|
A Theory of Truncated Inverse Sampling
|
math.ST cs.LG math.PR stat.ME stat.TH
|
In this paper, we have established a new framework of truncated inverse
sampling for estimating mean values of non-negative random variables such as
binomial, Poisson, hyper-geometrical, and bounded variables. We have derived
explicit formulas and computational methods for designing sampling schemes to
ensure prescribed levels of precision and confidence for point estimators.
Moreover, we have developed interval estimation methods.
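A bare-bones illustration of inverse sampling with truncation for a Bernoulli mean is given below; the paper's actual schemes, estimators, and precision/confidence guarantees are considerably more refined, and all names here are illustrative:

```python
def truncated_inverse_sampling(draw, r, n_max):
    """Sample Bernoulli observations via draw() until r successes are
    seen or n_max draws are spent. If r successes were reached, return
    the classical unbiased inverse-binomial estimate (r-1)/(N-1);
    otherwise fall back to the plain frequency estimate."""
    successes, n = 0, 0
    while successes < r and n < n_max:
        successes += draw()
        n += 1
    if successes == r:
        return (r - 1) / (n - 1)
    return successes / n
```

The truncation bound n_max caps the sample size, which an untruncated inverse-sampling scheme cannot guarantee when the mean is small.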
|
0810.5573
|
A branch-and-bound feature selection algorithm for U-shaped cost
functions
|
cs.CV cs.DS cs.LG
|
This paper presents the formulation of a combinatorial optimization problem
with the following characteristics: (i) the search space is the power set of a
finite set structured as a Boolean lattice; (ii) the cost function forms a
U-shaped curve when applied to any lattice chain. This formulation applies to
feature selection in the context of pattern recognition. The known approaches
for this problem are branch-and-bound algorithms and heuristics that explore
the search space only partially. Branch-and-bound algorithms are equivalent to
a full search, while heuristics are not. This paper presents a
branch-and-bound algorithm that differs from the known ones by exploiting the
lattice structure and the U-shaped chain curves of the search space. The main
contribution of this paper is the architecture of this algorithm, which is
based on the representation and exploration of the search space using new
lattice properties proven here. Several experiments with well-known public
data indicate the superiority of the proposed method over SFFS, a popular
heuristic that gives good results in very short computational time. In all
experiments, the proposed method obtained better or equal results in similar
or even smaller computational time.
|
0810.5578
|
Anonymizing Graphs
|
cs.DB cs.DS
|
Motivated by recently discovered privacy attacks on social networks, we study
the problem of anonymizing the underlying graph of interactions in a social
network. We call a graph (k,l)-anonymous if for every node in the graph there
exist at least k other nodes that share at least l of its neighbors. We
consider two combinatorial problems arising from this notion of anonymity in
graphs. More specifically, given an input graph we ask for the minimum number
of edges to be added so that the graph becomes (k,l)-anonymous. We define two
variants of this minimization problem and study their properties. We show that
for certain values of k and l the problems are polynomial-time solvable, while
for others they become NP-hard. Approximation algorithms for the latter cases
are also given.
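The (k,l)-anonymity condition itself is straightforward to check; a minimal sketch of that check (not of the NP-hard edge-addition optimization), with the graph given as an adjacency dict of neighbor sets, might look like:

```python
def is_kl_anonymous(adj, k, l):
    """A graph is (k,l)-anonymous if every node has at least k other
    nodes that share at least l of its neighbors."""
    for v, nbrs_v in adj.items():
        similar = sum(
            1 for u, nbrs_u in adj.items()
            if u != v and len(nbrs_v & nbrs_u) >= l
        )
        if similar < k:
            return False
    return True
```

On a triangle, for instance, every node has two other nodes sharing exactly one of its neighbors, so the graph is (2,1)-anonymous but not (1,2)-anonymous.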
|
0810.5582
|
Anonymizing Unstructured Data
|
cs.DB cs.DS
|
In this paper we consider the problem of anonymizing datasets in which each
individual is associated with a set of items that constitute private
information about the individual. Illustrative datasets include market-basket
datasets and search engine query logs. We formalize the notion of k-anonymity
for set-valued data as a variant of the k-anonymity model for traditional
relational datasets. We define an optimization problem that arises from this
definition of anonymity and provide O(k log k)- and O(1)-approximation algorithms
for the same. We demonstrate applicability of our algorithms to the America
Online query log dataset.
|
0810.5631
|
Temporal Difference Updating without a Learning Rate
|
cs.LG cs.AI
|
We derive an equation for temporal difference learning from statistical
principles. Specifically, we start with the variational principle and then
bootstrap to produce an updating rule for discounted state value estimates. The
resulting equation is similar to the standard equation for temporal difference
learning with eligibility traces, so-called TD(lambda); however, it lacks the
parameter alpha that specifies the learning rate. In the place of this free
parameter there is now an equation for the learning rate that is specific to
each state transition. We experimentally test this new learning rule against
TD(lambda) and find that it offers superior performance in various settings.
Finally, we make some preliminary investigations into how to extend our new
temporal difference algorithm to reinforcement learning. To do this we combine
our update equation with both Watkins' Q(lambda) and Sarsa(lambda) and find
that it again offers superior performance without a learning rate parameter.
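For context, a minimal sketch of the standard tabular TD(lambda) baseline with accumulating eligibility traces and the fixed learning rate alpha that the proposed rule eliminates (this is the textbook update, not the paper's new rule; names are illustrative):

```python
def td_lambda_episode(values, states, rewards, alpha=0.1, gamma=0.9, lam=0.8):
    """Run one episode of tabular TD(lambda). `states` is the visited
    state sequence (length T+1), `rewards` the T rewards, and `values`
    maps state -> current value estimate (updated in place)."""
    traces = {s: 0.0 for s in values}
    for t, r in enumerate(rewards):
        s, s_next = states[t], states[t + 1]
        delta = r + gamma * values[s_next] - values[s]  # TD error
        traces[s] += 1.0  # accumulating eligibility trace
        for u in values:
            values[u] += alpha * delta * traces[u]
            traces[u] *= gamma * lam  # decay all traces
    return values
```

The eligibility traces spread each TD error backwards over recently visited states, with alpha scaling every update uniformly; the paper replaces this single global alpha with a per-transition learning rate.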
|
0810.5633
|
Reconstructing Extended Perfect Binary One-Error-Correcting Codes from
Their Minimum Distance Graphs
|
cs.IT math.CO math.IT
|
The minimum distance graph of a code has the codewords as vertices and edges
exactly when the Hamming distance between two codewords equals the minimum
distance of the code. A constructive proof for reconstructibility of an
extended perfect binary one-error-correcting code from its minimum distance
graph is presented. Consequently, inequivalent such codes have nonisomorphic
minimum distance graphs. Moreover, it is shown that the automorphism group of a
minimum distance graph is isomorphic to that of the corresponding code.
|
0810.5636
|
On the Possibility of Learning in Reactive Environments with Arbitrary
Dependence
|
cs.LG cs.AI cs.IT math.IT
|
We address the problem of reinforcement learning in which observations may
exhibit an arbitrary form of stochastic dependence on past observations and
actions, i.e. environments more general than (PO)MDPs. The task for an agent is
to attain the best possible asymptotic reward where the true generating
environment is unknown but belongs to a known countable family of environments.
We find some sufficient conditions on the class of environments under which an
agent exists which attains the best asymptotic reward for any environment in
the class. We analyze how tight these conditions are and how they relate to
different probabilistic assumptions known in reinforcement learning and related
fields, such as Markov Decision Processes and mixing conditions.
|
0810.5663
|
Effective Complexity and its Relation to Logical Depth
|
cs.IT math.IT
|
Effective complexity measures the information content of the regularities of
an object. It has been introduced by M. Gell-Mann and S. Lloyd to avoid some of
the disadvantages of Kolmogorov complexity, also known as algorithmic
information content. In this paper, we give a precise formal definition of
effective complexity and rigorous proofs of its basic properties. In
particular, we show that incompressible binary strings are effectively simple,
and we prove the existence of strings that have effective complexity close to
their lengths. Furthermore, we show that effective complexity is related to
Bennett's logical depth: If the effective complexity of a string $x$ exceeds a
certain explicit threshold then that string must have astronomically large
depth; otherwise, the depth can be arbitrarily small.
|
0810.5717
|
On the Conditional Independence Implication Problem: A Lattice-Theoretic
Approach
|
cs.AI cs.DM
|
A lattice-theoretic framework is introduced that permits the study of the
conditional independence (CI) implication problem relative to the class of
discrete probability measures. Semi-lattices are associated with CI statements
and a finite, sound and complete inference system relative to semi-lattice
inclusions is presented. This system is shown to be (1) sound and complete for
saturated CI statements, (2) complete for general CI statements, and (3) sound
and complete for stable CI statements. These results yield a criterion that can
be used to falsify instances of the implication problem and several heuristics
are derived that approximate this "lattice-exclusion" criterion in polynomial
time. Finally, we provide experimental results that relate our work to results
obtained from other existing inference algorithms.
|
0810.5725
|
A triangle-based logic for affine-invariant querying of spatial and
spatio-temporal data
|
cs.LO cs.DB
|
In spatial databases, incompatibilities often arise due to different choices
of origin or unit of measurement (e.g., centimeters versus inches). By
representing and querying the data in an affine-invariant manner, we can avoid
these incompatibilities.
In practice, spatial (resp., spatio-temporal) data is often represented as a
finite union of triangles (resp., moving triangles). As two arbitrary triangles
are equal up to a unique affinity of the plane, they seem perfect candidates as
basic units for an affine-invariant query language.
We propose a so-called "triangle logic", a query language that is
affine-generic and has triangles as basic elements. We show that this language
has the same expressive power as the affine-generic fragment of first-order
logic over the reals on triangle databases. We illustrate that the proposed
language is simple and intuitive. It can also serve as a first step towards a
"moving-triangle logic" for spatio-temporal data.
|
0810.5770
|
From Multi-Keyholes to Measure of Correlation and Power Imbalance in
MIMO Channels: Outage Capacity Analysis
|
cs.IT math.IT
|
An information-theoretic analysis of a multi-keyhole channel, which includes
a number of statistically independent keyholes with possibly different
correlation matrices, is given. When the number of keyholes or/and the number
of Tx/Rx antennas is large, there is an equivalent Rayleigh-fading channel such
that the outage capacities of both channels are asymptotically equal. In the
case of a large number of antennas and for a broad class of fading
distributions, the instantaneous capacity is shown to be asymptotically
Gaussian in distribution, and compact, closed-form expressions for the mean and
variance are given. Motivated by the asymptotic analysis, a simple,
full-ordering scalar measure of spatial correlation and power imbalance in MIMO
channels is introduced, which quantifies the negative impact of these two
factors on the outage capacity in a simple and well-tractable way. It does not
require the eigenvalue decomposition, and has the full-ordering property. The
size-asymptotic results are used to prove Telatar's conjecture for
semi-correlated multi-keyhole and Rayleigh channels. Since the keyhole channel
model approximates the relay channel in the amplify-and-forward mode well in
certain scenarios, these results also apply to the latter.
|
0811.0048
|
Conjectural Equilibrium in Water-filling Games
|
cs.GT cs.MA
|
This paper considers a non-cooperative game in which competing users sharing
a frequency-selective interference channel selfishly optimize their power
allocation in order to improve their achievable rates. Previously, it was shown
that a user having the knowledge of its opponents' channel state information
can make foresighted decisions and substantially improve its performance
compared with the case in which it deploys the conventional iterative
water-filling algorithm, which does not exploit such knowledge. This paper
discusses how a foresighted user can acquire this knowledge by modeling its
experienced interference as a function of its own power allocation. To
characterize the outcome of the multi-user interaction, the conjectural
equilibrium is introduced, and the existence of this equilibrium for the
investigated water-filling game is proved. Interestingly, both the Nash
equilibrium and the Stackelberg equilibrium are shown to be special cases of
the generalization of conjectural equilibrium. We develop practical algorithms
to form accurate beliefs and search for desirable power allocation strategies.
Numerical simulations indicate that a foresighted user without any a priori
knowledge of its competitors' private information can effectively learn the
required information, and induce the entire system to an operating point that
improves both its own achievable rate as well as the rates of the other
participants in the water-filling game.
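For context, the conventional single-user water-filling step underlying such iterative algorithms can be sketched as follows (bisection on the water level; names and the tolerance scheme are illustrative, and this is the textbook building block rather than the paper's conjecture-based scheme):

```python
def water_filling(noise, total_power, iters=100):
    """Allocate power p_i = max(0, mu - n_i) across subchannels with
    effective noise levels n_i, choosing the water level mu by
    bisection so that sum(p_i) equals the power budget."""
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - n) for n in noise]
```

With noise levels [1, 2] and a budget of 3, the water level settles at 3 and the allocation is [2, 1]: the cleaner subchannel receives more power. In the iterative multi-user game, each user repeatedly applies this step against the interference-plus-noise it currently experiences.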
|
0811.0113
|
A Bayesian Framework for Opinion Updates
|
physics.soc-ph cs.MA nlin.AO
|
Opinion Dynamics lacks a theoretical basis. In this article, I propose to use
a decision-theoretic framework, based on the updating of subjective
probabilities, as that basis. We will see that this provides a basic tool for a better
understanding of the interaction between the agents in Opinion Dynamics
problems and for creating new models. I will review the few existing
applications of Bayesian update rules to both discrete and continuous opinion
problems and show that several traditional models can be obtained as special
cases of, or approximations to, these Bayesian models. The empirical basis and
useful properties of the framework will be discussed, and examples will be
given of how the framework can be used to describe different problems.
|
0811.0123
|
A computational model of affects
|
cs.AI cs.MA
|
This article provides a simple logical structure, in which affective concepts
(i.e. concepts related to emotions and feelings) can be defined. The set of
affects defined is similar to the set of emotions covered in the OCC model
(Ortony A., Collins A., and Clore G. L.: The Cognitive Structure of Emotions.
Cambridge University Press, 1988), but the model presented in this article is
fully computationally defined.
|
0811.0131
|
Balancing Exploration and Exploitation by an Elitist Ant System with
Exponential Pheromone Deposition Rule
|
cs.AI
|
The paper presents an exponential pheromone deposition rule to modify the
basic ant system algorithm, which employs a constant deposition rule. A
stability analysis using differential equations is carried out to find the
parameter values that make the ant system dynamics stable under both kinds of
deposition rule. A roadmap of connected cities, where the shortest route
between two given cities is to be discovered, is chosen as the problem
environment. Simulations performed with both forms of deposition using the
Elitist Ant System model reveal that the exponential deposition approach
outperforms the classical one by a large margin. Exhaustive experiments are
also carried out to find the optimum settings of the controlling parameters
for the exponential deposition approach and to establish an empirical
relationship between the major controlling parameters of the algorithm and
some features of the problem environment.
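The difference between the two deposition rules can be sketched in a single pheromone-update step. The paper's exact exponential form is not reproduced here; the decay rate `lam`, the quality constant `q`, and all function names are illustrative assumptions.

```python
import math

def deposit_constant(q, path_len):
    # classical rule: deposit inversely proportional to tour length
    return q / path_len

def deposit_exponential(q, path_len, lam=0.2):
    # hypothetical exponential rule: deposit decays exponentially with
    # tour length, rewarding short tours much more sharply
    return q * math.exp(-lam * path_len)

def update_pheromone(tau, tours, rho=0.5, rule=deposit_exponential):
    """One evaporation-plus-deposition step on edge pheromone levels.
    tau maps edge -> level; tours is a list of (edge_list, length)."""
    for e in tau:
        tau[e] *= (1 - rho)                     # evaporation
    for edges, length in tours:
        d = rule(1.0, length)
        for e in edges:
            tau[e] = tau.get(e, 0.0) + d        # deposition
    return tau
```

Because the exponential rule penalizes long tours more steeply than the constant rule, pheromone concentrates on short routes faster, which is the mechanism the paper credits for the improved performance.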
|
0811.0134
|
A Novel Parser Design Algorithm Based on Artificial Ants
|
cs.AI
|
This article presents a unique design for a parser using the Ant Colony
Optimization algorithm. The paper implements the intuitive thought process of
the human mind through the activities of artificial ants. The scheme presented
here uses a bottom-up approach, and the parsing program can directly use
ambiguous or redundant grammars. We allocate a node corresponding to each
production rule in the given grammar. Each node is connected to all other
nodes (representing the other production rules), thereby establishing a
completely connected graph susceptible to the movement of artificial ants.
Each ant tries to modify the current sentential form using the production rule
at its node and updates its position until the sentential form reduces to the
start symbol S. Successful ants deposit pheromone on the links they have
traversed. Eventually, the optimum path is identified by the links carrying
the maximum pheromone concentration. The design is simple, versatile, robust
and effective, and obviates the calculation of parsing sets and
precedence-relation tables. Further advantages of our scheme lie in i)
ascertaining whether a given string belongs to the language represented by the
grammar, and ii) finding the shortest possible path from the given string to
the start symbol S when multiple routes exist.
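A toy version of the ant-driven bottom-up reduction can be sketched as follows. The three-rule grammar, the omission of pheromone bias in rule choice, and all names are illustrative assumptions, not the paper's actual implementation.

```python
import random

# hypothetical toy grammar; rules are stored as (RHS, LHS) because the
# ants apply them in reverse, bottom-up: S -> AB, A -> a, B -> b
RULES = [("AB", "S"), ("a", "A"), ("b", "B")]

def reduce_once(form, rule):
    """Apply one reduction: replace the first occurrence of the rule's
    RHS in the sentential form with its LHS, or return None if absent."""
    rhs, lhs = rule
    i = form.find(rhs)
    if i < 0:
        return None
    return form[:i] + lhs + form[i + len(rhs):]

def ant_parse(string, start="S", max_steps=50, seed=0):
    """One artificial ant walks among rule nodes, applying reductions
    until the sentential form becomes the start symbol. Returns the
    trail of rules applied, or None if the string cannot be reduced."""
    rng = random.Random(seed)
    form, trail = string, []
    for _ in range(max_steps):
        if form == start:
            return trail
        candidates = [r for r in RULES if reduce_once(form, r) is not None]
        if not candidates:
            return None               # stuck: string not in the language
        rule = rng.choice(candidates)  # pheromone bias omitted for brevity
        form = reduce_once(form, rule)
        trail.append(rule)
    return None
```

A successful trail doubles as a membership certificate; in the full scheme, many ants run in parallel and pheromone on the traversed links biases later ants toward the shortest reduction sequence.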
|
0811.0136
|
Extension of Max-Min Ant System with Exponential Pheromone Deposition
Rule
|
cs.AI
|
The paper presents an exponential pheromone deposition approach to improve
the performance of the classical Ant System algorithm, which employs a uniform
deposition rule. A simplified analysis using differential equations is carried
out to study the stability of the basic ant system dynamics under both
exponential and constant deposition rules. A roadmap of connected cities,
where the shortest path between two specified cities is to be found, is taken
as a platform to compare the Max-Min Ant System model (an improved and popular
variant of the Ant System algorithm) under exponential and constant deposition
rules. Extensive simulations are performed to find the best parameter settings
for the non-uniform deposition approach, and experiments with these settings
reveal that it outstrips the traditional approach by a large margin in terms
of both solution quality and convergence time.
|