id | title | categories | abstract
|---|---|---|---|
1105.5176
|
The merit factor of binary arrays derived from the quadratic character
|
cs.IT math.IT
|
We calculate the asymptotic merit factor, under all cyclic rotations of rows
and columns, of two families of binary two-dimensional arrays derived from the
quadratic character. The arrays in these families have size p x q, where p and
q are not necessarily distinct odd primes, and can be considered as
two-dimensional generalisations of a Legendre sequence. The asymptotic values
of the merit factor of the two families are generally different, although the
maximum asymptotic merit factor, taken over all cyclic rotations of rows and
columns, equals 36/13 for both families. These are the first non-trivial
theoretical results for the asymptotic merit factor of families of truly
two-dimensional binary arrays.
|
1105.5178
|
The peak sidelobe level of random binary sequences
|
math.CO cs.IT math.IT
|
Let $A_n=(a_0,a_1,\dots,a_{n-1})$ be drawn uniformly at random from
$\{-1,+1\}^n$ and define \[
M(A_n)=\max_{0<u<n}\,\Bigg|\sum_{j=0}^{n-u-1}a_ja_{j+u}\Bigg|\quad\text{for
$n>1$}. \] It is proved that $M(A_n)/\sqrt{n\log n}$ converges in probability
to $\sqrt{2}$. This settles a problem first studied by Moon and Moser in the
1960s and proves in the affirmative a recent conjecture due to Alon, Litsyn,
and Shpunt. It is also shown that the expectation of $M(A_n)/\sqrt{n\log n}$
tends to $\sqrt{2}$.
|
1105.5180
|
The L_4 norm of Littlewood polynomials derived from the Jacobi symbol
|
math.NT cs.IT math.IT
|
Littlewood raised the question of how slowly the L_4 norm ||f||_4 of a
Littlewood polynomial f (having all coefficients in {-1,+1}) of degree n-1 can
grow with n. We consider such polynomials for odd square-free n, where \phi(n)
coefficients are determined by the Jacobi symbol, but the remaining
coefficients can be freely chosen. When n is prime, these polynomials have the
smallest known asymptotic value of the normalised L_4 norm ||f||_4/||f||_2
among all Littlewood polynomials, namely (7/6)^{1/4}. When n is not prime, our
results show that the normalised L_4 norm varies considerably according to the
free choices of the coefficients and can even grow without bound. However, by
suitably choosing these coefficients, the limit of the normalised L_4 norm can
be made as small as the best known value (7/6)^{1/4}.
|
1105.5196
|
Large-Scale Music Annotation and Retrieval: Learning to Rank in Joint
Semantic Spaces
|
cs.LG
|
Music prediction tasks range from predicting tags given a song or clip of
audio, to predicting the name of the artist, to predicting related songs given
a song, clip, artist name or tag. That is, we are interested in every semantic
relationship between the different musical concepts in our database. In
realistically sized databases, the number of songs is measured in the hundreds
of thousands or more, and the number of artists in the tens of thousands or
more, providing a considerable challenge to standard machine learning
techniques. In this work, we propose a method that scales to such datasets
and attempts to capture the semantic similarities between the database items
by modeling audio, artist names, and tags in a single low-dimensional semantic
space. This choice of space is learnt by optimizing the set of prediction tasks
of interest jointly using multi-task learning. Our method both outperforms
baseline methods and, in comparison to them, is faster and consumes less
memory. We then demonstrate how our method learns an interpretable model, where
the semantic space captures well the similarities of interest.
|
1105.5215
|
Compressive Identification of Linear Operators
|
cs.IT math.IT
|
We consider the problem of identifying a linear deterministic operator from
an input-output measurement. For the large class of continuous (and hence
bounded) operators, under additional mild restrictions, we show that stable
identifiability is possible if the total support area of the operator's
spreading function satisfies D <= 1/2. This result holds for arbitrary
(possibly fragmented) support regions of the spreading function, does not
impose limitations on the total extent of the support region, and, most
importantly, does not require the support region of the spreading function to
be known prior to identification. Furthermore, we prove that if one requires
identifiability of only almost all operators, stable identifiability is
possible if D <= 1. This result is surprising as it says that there is no
penalty for not knowing the support region of the spreading function prior to
identification.
|
1105.5235
|
The rocket problem in general relativity
|
gr-qc cs.SY math.OC
|
We derive the covariant optimality conditions for rocket trajectories in
general relativity, with and without a bound on the magnitude of the proper
acceleration. The resulting theory is then applied to solve two specific
problems: the minimum fuel consumption transfer between two galaxies in a FLRW
model, and between two stable circular orbits in the Schwarzschild spacetime.
|
1105.5294
|
A long-time limit of world subway networks
|
physics.soc-ph cs.SI
|
We study the temporal evolution of the structure of the world's largest
subway networks in an exploratory manner. We show that, remarkably, all these
networks converge to a shape which shares similar generic features despite
their geographic and economic differences. This limiting shape is made of a
core with branches radiating from it. For most of these networks, the average
degree of a node (station) within the core has a value of order 2.5 and the
proportion of k=2 nodes in the core is larger than 60%. The number of branches
scales roughly as the square root of the number of stations, the current
proportion of branches represents about half of the total number of stations,
and the average diameter of branches is about twice the average radial
extension of the core. Spatial measures such as the number of stations at a
given distance to the barycenter display a first regime which grows as r^2
followed by another regime with different exponents, and eventually saturates.
These results -- difficult to interpret in the framework of fractal geometry --
are confirmed and naturally explained by the geometric picture of a core and
its branches: the first regime corresponds to a uniform core, while the
second regime is controlled by the interstation spacing on branches. The
apparent convergence towards a unique network shape in the temporal limit
suggests the existence of dominant, universal mechanisms governing the
evolution of these structures.
|
1105.5306
|
On the Generalized Degrees of Freedom of the K-user Symmetric MIMO
Gaussian Interference Channel
|
cs.IT math.IT
|
The K-user symmetric multiple input multiple output (MIMO) Gaussian
interference channel (IC) where each transmitter has M antennas and each
receiver has N antennas is studied from a generalized degrees of freedom (GDOF)
perspective. An inner bound on the GDOF is derived using a combination of
techniques such as treating interference as noise, zero forcing (ZF) at the
receivers, interference alignment (IA), and extending the Han-Kobayashi (HK)
scheme to K users, as a function of the number of antennas and the log (INR) /
log (SNR) level. Three outer bounds are derived, under different assumptions of
cooperation and providing side information to receivers. The novelty in the
derivation lies in the careful selection of side information, which results in
the cancellation of the negative differential entropy terms containing signal
components, leading to a tractable outer bound. The overall outer bound is
obtained by taking the minimum of the three outer bounds. The derived bounds
are simplified for the MIMO Gaussian symmetric IC to obtain outer bounds on
the GDOF. Several interesting conclusions are
drawn from the derived bounds. For example, when K > N/M + 1, a combination of
the HK and IA schemes performs the best among the schemes considered. When N/M
< K <= N/M + 1, the HK-scheme outperforms other schemes and is shown to be GDOF
optimal. In addition, when the SNR and INR are at the same level, ZF-receiving
and the HK-scheme have the same GDOF performance. It is also shown that many of
the existing results on the GDOF of the Gaussian IC can be obtained as special
cases of the bounds, e.g., by setting K=2 or the number of antennas at each
user to 1.
|
1105.5307
|
Efficient Learning of Sparse Invariant Representations
|
cs.CV cs.NE
|
We propose a simple and efficient algorithm for learning sparse invariant
representations from unlabeled data with fast inference. When trained on short
movie sequences, the learned features are selective to a range of orientations
and spatial frequencies, but robust to a wide range of positions, similar to
complex cells in the primary visual cortex. We present a hierarchical version
of the algorithm and give guarantees of fast convergence under certain
conditions.
|
1105.5332
|
Multidimensional Scaling in the Poincare Disk
|
stat.ML cs.SI
|
Multidimensional scaling (MDS) is a class of projective algorithms
traditionally used in Euclidean space to produce two- or three-dimensional
visualizations of datasets of multidimensional points or point distances. More
recently however, several authors have pointed out that for certain datasets,
hyperbolic target space may provide a better fit than Euclidean space.
In this paper we develop PD-MDS, a metric MDS algorithm designed specifically
for the Poincare disk (PD) model of the hyperbolic plane. Emphasizing the
importance of proceeding from first principles in spite of the availability of
various black box optimizers, our construction is based on an elementary
hyperbolic line search and reveals numerous particulars that need to be
carefully addressed when implementing this as well as more sophisticated
iterative optimization methods in a hyperbolic space model.
|
1105.5344
|
Partitioning Breaks Communities
|
physics.soc-ph cs.SI
|
Considering a clique as a conservative definition of community structure, we
examine how graph partitioning algorithms interact with cliques. Many popular
community-finding algorithms partition the entire graph into non-overlapping
communities. We show that on a wide range of empirical networks, from different
domains, significant numbers of cliques are split across the separate
partitions produced by these algorithms. We then examine the largest connected
component of the subgraph formed by retaining only edges in cliques, and apply
partitioning strategies that explicitly minimise the number of cliques split.
We further examine several modern overlapping community finding algorithms, in
terms of the interaction between cliques and the communities they find, and in
terms of the global overlap of the sets of communities they find. We conclude
that, due to the connectedness of many networks, any community finding
algorithm that produces partitions must fail to find at least some significant
structures. Moreover, contrary to traditional intuition, in some empirical
networks, strong ties and cliques frequently do cross community boundaries;
much community structure is fundamentally overlapping and unpartitionable in
nature.
|
1105.5370
|
Quantum Communication Complexity of Quantum Authentication Protocols
|
cs.IT math.IT quant-ph
|
In order to perform Quantum Cryptography procedures it is often essential to
ensure that the parties to the communication are authentic. This task is
accomplished by quantum authentication protocols, which are distributed
algorithms based on the intrinsic properties of Quantum Mechanics. The choice
of an authentication protocol must consider that quantum states are very
delicate and that the channel is subject to eavesdropping. However, despite
the various existing definitions of quantum authentication protocols in the
literature, little is known about them from this perspective, and this lack of
knowledge may hinder comparisons and well-informed choices. In an attempt to
overcome this limitation, in the present work we show an approach to
evaluating quantum authentication protocols based on the determination of
their quantum communication complexity. In our investigation, no similar
methods for analyzing quantum authentication protocols were found in the
literature. Our approach has several advantages worth highlighting: it
characterizes a systematic procedure for evaluating quantum authentication
protocols; its evaluation is intuitive, based only on the protocol execution;
the resulting measure is a concise notation of what resources a quantum
authentication protocol demands and how many communications are performed; it
allows comparisons between protocols; it makes it possible to analyze the
communication effort when eavesdropping occurs; and, lastly, it can likely be
applied to almost any quantum authentication protocol. To illustrate the
proposed approach, we also present results from its application to ten
existing quantum authentication protocols (data origin authentication and
identity authentication). These evaluations increase the knowledge about the
existing protocols, presenting their advantages, limitations and contrasts.
|
1105.5379
|
Parallel Coordinate Descent for L1-Regularized Loss Minimization
|
cs.LG cs.IT math.IT
|
We propose Shotgun, a parallel coordinate descent algorithm for minimizing
L1-regularized losses. Though coordinate descent seems inherently sequential,
we prove convergence bounds for Shotgun which predict linear speedups, up to a
problem-dependent limit. We present a comprehensive empirical study of Shotgun
for Lasso and sparse logistic regression. Our theoretical predictions on the
potential for parallelism closely match behavior on real data. Shotgun
outperforms other published solvers on a range of large problems, proving to be
one of the most scalable algorithms for L1.
|
1105.5419
|
Strong Secrecy from Channel Resolvability
|
cs.IT math.IT
|
We analyze physical-layer security based on the premise that the coding
mechanism for secrecy over noisy channels is tied to the notion of channel
resolvability. Instead of considering capacity-based constructions, which
associate to each message a sub-code that operates just below the capacity of
the eavesdropper's channel, we consider channel-resolvability-based
constructions, which associate to each message a sub-code that operates just
above the resolvability of the eavesdropper's channel. Building upon the work
of Csiszar and Hayashi, we provide further evidence that channel resolvability
is a powerful and versatile coding mechanism for secrecy by developing results
that hold for strong secrecy metrics and arbitrary channels.
Specifically, we show that at least for symmetric wiretap channels, random
capacity-based constructions fail to achieve the strong secrecy capacity while
channel-resolvability-based constructions achieve it. We then leverage channel
resolvability to establish the secrecy-capacity region of arbitrary broadcast
channels with confidential messages and a cost constraint for strong secrecy
metrics. Finally, we specialize our results to study the secrecy capacity of
wireless channels with perfect channel state information, mixed channels and
compound channels with receiver Channel State Information (CSI), as well as the
secret-key capacity of source models for secret-key agreement. By tying secrecy
to channel resolvability, we obtain achievable rates for strong secrecy metrics
with simple proofs.
|
1105.5427
|
Combining Lagrangian Decomposition and Excessive Gap Smoothing Technique
for Solving Large-Scale Separable Convex Optimization Problems
|
math.OC cs.SY
|
A new algorithm for solving large-scale convex optimization problems with a
separable objective function is proposed. The basic idea is to combine three
techniques: Lagrangian dual decomposition, excessive gap and smoothing. The
main advantage of this algorithm is that it dynamically updates the smoothness
parameters which leads to numerically robust performance. The convergence of
the algorithm is proved under weak conditions imposed on the original problem.
The rate of convergence is $O(\frac{1}{k})$, where $k$ is the iteration
counter. In the second part of the paper, the algorithm is coupled with a dual
scheme to construct a switching variant of the dual decomposition. We discuss
implementation issues and make a theoretical comparison. Numerical examples
confirm the theoretical results.
|
1105.5432
|
Extensions to the Theory of Widely Linear Complex Kalman Filtering
|
cs.SY cs.IT math.IT math.OC
|
For an improper complex signal x, its complementary covariance E[xx^T] is not
zero and thus it carries useful statistical information about x. Widely linear
processing exploits Hermitian and complementary covariance to improve
performance. In this paper we extend the existing theory of widely linear
complex Kalman filters (WLCKF) and unscented WLCKFs [1]. We propose a WLCKF
which can deal with more general dynamical models of complex-valued states and
measurements than the WLCKFs in [1]. The proposed WLCKF is equivalent to the
corresponding dual-channel real KF. Our analytical and numerical results
show the performance improvement of a WLCKF over a complex Kalman filter (CKF)
that does not exploit complementary covariance. We also develop an unscented
WLCKF which uses modified complex sigma points. The modified complex sigma
points preserve complete first and second moments of complex signals, while the
sigma points in [1] only carry the mean and Hermitian covariance, but not
complementary covariance of complex signals.
|
1105.5438
|
The capacity region of classes of product broadcast channels
|
cs.IT math.IT
|
We establish a new outer bound for the capacity region of product broadcast
channels. This outer bound matches Marton's inner bound for a variety of
classes of product broadcast channels whose capacity regions were previously
unknown. These classes include product of reversely semi-deterministic and
product of reversely more-capable channels. A significant consequence of this
new outer bound is that it establishes, via an example, that the previously
best known outer-bound is strictly suboptimal for the general broadcast
channel. Our example comprises a product broadcast channel with two
semi-deterministic components in reverse orientation.
|
1105.5440
|
The Ariadne's Clew Algorithm
|
cs.AI
|
We present a new approach to path planning, called the "Ariadne's clew
algorithm". It is designed to find paths in high-dimensional continuous spaces
and applies to robots with many degrees of freedom in static, as well as
dynamic environments - ones where obstacles may move. The Ariadne's clew
algorithm comprises two sub-algorithms, called Search and Explore, applied in
an interleaved manner. Explore builds a representation of the accessible space
while Search looks for the target. Both are posed as optimization problems. We
describe a real implementation of the algorithm to plan paths for a six degrees
of freedom arm in a dynamic environment where another six degrees of freedom
arm is used as a moving obstacle. Experimental results show that a path is
found in about one second without any pre-processing.
|
1105.5441
|
Computational Aspects of Reordering Plans
|
cs.AI
|
This article studies the problem of modifying the action ordering of a plan
in order to optimise the plan according to various criteria. One of these
criteria is to make a plan less constrained and the other is to minimize its
parallel execution time. Three candidate definitions are proposed for the first
of these criteria, constituting a sequence of increasing optimality guarantees.
Two of these are based on deordering plans, which means that ordering relations
may only be removed, not added, while the third one uses reordering, where
arbitrary modifications to the ordering are allowed. It is shown that only the
weakest one of the three criteria is tractable to achieve, the other two being
NP-hard and even difficult to approximate. Similarly, optimising the parallel
execution time of a plan is studied both for deordering and reordering of
plans. In the general case, both of these computations are NP-hard. However, it
is shown that optimal deorderings can be computed in polynomial time for a
class of planning languages based on the notions of producers, consumers and
threats, which includes most of the commonly used planning languages. Computing
optimal reorderings can potentially lead to even faster parallel executions,
but this problem remains NP-hard and difficult to approximate even under quite
severe restrictions.
|
1105.5442
|
The Divide-and-Conquer Subgoal-Ordering Algorithm for Speeding up Logic
Inference
|
cs.AI
|
It is common to view programs as a combination of logic and control: the
logic part defines what the program must do, the control part -- how to do it.
The Logic Programming paradigm was developed with the intention of separating
the logic from the control. Recently, extensive research has been conducted on
automatic generation of control for logic programs. Only a few of these works
considered the issue of automatic generation of control for improving the
efficiency of logic programs. In this paper we present a novel algorithm for
automatic finding of lowest-cost subgoal orderings. The algorithm works using
the divide-and-conquer strategy. The given set of subgoals is partitioned into
smaller sets, based on co-occurrence of free variables. The subsets are ordered
recursively and merged, yielding a provably optimal order. We experimentally
demonstrate the utility of the algorithm by testing it in several domains, and
discuss the possibilities of its cooperation with other existing methods.
|
1105.5443
|
The Gn,m Phase Transition is Not Hard for the Hamiltonian Cycle Problem
|
cs.AI
|
Using an improved backtrack algorithm with sophisticated pruning techniques,
we revise previous observations correlating a high frequency of hard to solve
Hamiltonian Cycle instances with the Gn,m phase transition between
Hamiltonicity and non-Hamiltonicity. Instead, all tested graphs of 100 to 1500
vertices are easily solved. When we artificially restrict the degree sequence
with a bounded maximum degree, although there is some increase in difficulty,
the frequency of hard graphs is still low. When we consider more regular graphs
based on a generalization of knight's tours, we observe frequent instances of
really hard graphs, but on these the average degree is bounded by a constant.
We design a set of graphs with a feature our algorithm is unable to detect and
so are very hard for our algorithm, but in these we can vary the average degree
from O(1) to O(n). We have so far found no class of graphs correlated with the
Gn,m phase transition which asymptotically produces a high frequency of hard
instances.
|
1105.5444
|
Semantic Similarity in a Taxonomy: An Information-Based Measure and its
Application to Problems of Ambiguity in Natural Language
|
cs.AI
|
This article presents a measure of semantic similarity in an IS-A taxonomy
based on the notion of shared information content. Experimental evaluation
against a benchmark set of human similarity judgments demonstrates that the
measure performs better than the traditional edge-counting approach. The
article presents algorithms that take advantage of taxonomic similarity in
resolving syntactic and semantic ambiguity, along with experimental results
demonstrating their effectiveness.
|
1105.5446
|
A Temporal Description Logic for Reasoning about Actions and Plans
|
cs.AI
|
A class of interval-based temporal languages for uniformly representing and
reasoning about actions and plans is presented. Actions are represented by
describing what is true while the action itself is occurring, and plans are
constructed by temporally relating actions and world states. The temporal
languages are members of the family of Description Logics, which are
characterized by high expressivity combined with good computational properties.
The subsumption problem for a class of temporal Description Logics is
investigated and sound and complete decision procedures are given. The basic
language TL-F is considered first: it is the composition of a temporal logic TL
-- able to express interval temporal networks -- together with the non-temporal
logic F -- a Feature Description Logic. It is proven that subsumption in this
language is an NP-complete problem. Then it is shown how to reason with the
more expressive languages TLU-FU and TL-ALCF. The former adds disjunction both
at the temporal and non-temporal sides of the language, the latter extends the
non-temporal side with set-valued features (i.e., roles) and a propositionally
complete language.
|
1105.5447
|
Adaptive Parallel Iterative Deepening Search
|
cs.AI
|
Many of the artificial intelligence techniques developed to date rely on
heuristic search through large spaces. Unfortunately, the size of these spaces
and the corresponding computational effort reduce the applicability of
otherwise novel and effective algorithms. A number of parallel and distributed
approaches to search have considerably improved the performance of the search
process. Our goal is to develop an architecture that automatically selects
parallel search strategies for optimal performance on a variety of search
problems. In this paper we describe one such architecture realized in the
Eureka system, which combines the benefits of many different approaches to
parallel heuristic search. Through empirical and theoretical analyses we
observe that features of the problem space directly affect the choice of
optimal parallel search strategy. We then employ machine learning techniques to
select the optimal parallel search strategy for a given problem space. When a
new search task is input to the system, Eureka uses features describing the
search space and the chosen architecture to automatically select the
appropriate search strategy. Eureka has been tested on a MIMD parallel
processor, a distributed network of workstations, and a single workstation
using multithreading. Results generated from fifteen puzzle problems, robot arm
motion problems, artificial search spaces, and planning problems indicate that
Eureka outperforms any of the tested strategies used exclusively for all
problem instances and is able to greatly reduce the search time for these
applications.
|
1105.5448
|
Order of Magnitude Comparisons of Distance
|
cs.AI
|
Order of magnitude reasoning - reasoning by rough comparisons of the sizes of
quantities - is often called 'back of the envelope calculation', with the
implication that the calculations are quick though approximate. This paper
exhibits an interesting class of constraint sets in which order of magnitude
reasoning is demonstrably fast. Specifically, we present a polynomial-time
algorithm that can solve a set of constraints of the form 'Points a and b are
much closer together than points c and d.' We prove that this algorithm can be
applied if `much closer together' is interpreted either as referring to an
infinite difference in scale or as referring to a finite difference in scale,
as long as the difference in scale is greater than the number of variables in
the constraint set. We also prove that the first-order theory over such
constraints is decidable.
|
1105.5449
|
AntNet: Distributed Stigmergetic Control for Communications Networks
|
cs.AI
|
This paper introduces AntNet, a novel approach to the adaptive learning of
routing tables in communications networks. AntNet is a distributed,
mobile-agent-based Monte Carlo system inspired by recent work on the ant
colony metaphor for solving optimization problems. AntNet's agents concurrently
explore the network and exchange collected information. The communication among
the agents is indirect and asynchronous, mediated by the network itself. This
form of communication is typical of social insects and is called stigmergy. We
compare our algorithm with six state-of-the-art routing algorithms coming from
the telecommunications and machine learning fields. The algorithms' performance
is evaluated over a set of realistic testbeds. We run many experiments over
real and artificial IP datagram networks with an increasing number of nodes and
under several paradigmatic spatial and temporal traffic distributions. Results
are very encouraging. AntNet showed superior performance under all the
experimental conditions with respect to its competitors. We analyze the main
characteristics of the algorithm and try to explain the reasons for its
superiority.
|
1105.5450
|
A Counter Example to Theorems of Cox and Fine
|
cs.AI
|
Cox's well-known theorem justifying the use of probability is shown not to
hold in finite domains. The counterexample also suggests that Cox's assumptions
are insufficient to prove the result even in infinite domains. The same
counterexample is used to disprove a result of Fine on comparative conditional
probability.
|
1105.5451
|
The Automatic Inference of State Invariants in TIM
|
cs.AI
|
As planning is applied to larger and richer domains the effort involved in
constructing domain descriptions increases and becomes a significant burden on
the human application designer. If general planners are to be applied
successfully to large and complex domains it is necessary to provide the domain
designer with some assistance in building correctly encoded domains. One way of
doing this is to provide domain-independent techniques for extracting, from a
domain description, knowledge that is implicit in that description and that can
assist domain designers in debugging domain descriptions. This knowledge can
also be exploited to improve the performance of planners: several researchers
have explored the potential of state invariants in speeding up the performance
of domain-independent planners. In this paper we describe a process by which
state invariants can be extracted from the automatically inferred type
structure of a domain. These techniques are being developed for exploitation by
STAN, a Graphplan based planner that employs state analysis techniques to
enhance its performance.
|
1105.5452
|
Unifying Class-Based Representation Formalisms
|
cs.AI
|
The notion of class is ubiquitous in computer science and is central in many
formalisms for the representation of structured knowledge used both in
knowledge representation and in databases. In this paper we study the basic
issues underlying such representation formalisms and single out both their
common characteristics and their distinguishing features. Such investigation
leads us to propose a unifying framework in which we are able to capture the
fundamental aspects of several representation languages used in different
contexts. The proposed formalism is expressed in the style of description
logics, which have been introduced in knowledge representation as a means to
provide a semantically well-founded basis for the structural aspects of
knowledge representation systems. The description logic considered in this
paper is a subset of first order logic with nice computational characteristics.
It is quite expressive and features a novel combination of constructs that has
not been studied before. The distinguishing constructs are number restrictions,
which generalize existence and functional dependencies, inverse roles, which
allow one to refer to the inverse of a relationship, and possibly cyclic
assertions, which are necessary for capturing real world domains. We are able
to show that it is precisely such combination of constructs that makes our
logic powerful enough to model the essential set of features for defining class
structures that are common to frame systems, object-oriented database
languages, and semantic data models. As a consequence of the established
correspondences, several significant extensions of each of the above formalisms
become available. The high expressiveness of the logic we propose and the need
for capturing the reasoning in different contexts forces us to distinguish
between unrestricted and finite model reasoning. A notable feature of our
proposal is that reasoning in both cases is decidable. We argue that, by virtue
of the high expressive power and of the associated reasoning capabilities on
both unrestricted and finite models, our logic provides a common core for
class-based representation formalisms.
|
1105.5453
|
Complexity of Prioritized Default Logics
|
cs.AI
|
In default reasoning, usually not all possible ways of resolving conflicts
between default rules are acceptable. Criteria expressing acceptable ways of
resolving the conflicts may be hardwired in the inference mechanism, for
example specificity in inheritance reasoning can be handled this way, or they
may be given abstractly as an ordering on the default rules. In this article we
investigate formalizations of the latter approach in Reiter's default logic.
Our goal is to analyze and compare the computational properties of three such
formalizations in terms of their computational complexity: the prioritized
default logics of Baader and Hollunder, and Brewka, and a prioritized default
logic that is based on lexicographic comparison. The analysis locates the
propositional variants of these logics on the second and third levels of the
polynomial hierarchy, and identifies the boundary between tractable and
intractable inference for restricted classes of prioritized default theories.
|
1105.5454
|
Squeaky Wheel Optimization
|
cs.AI
|
We describe a general approach to optimization which we term `Squeaky Wheel'
Optimization (SWO). In SWO, a greedy algorithm is used to construct a solution
which is then analyzed to find the trouble spots, i.e., those elements that,
if improved, are likely to improve the objective function score. The results of
the analysis are used to generate new priorities that determine the order in
which the greedy algorithm constructs the next solution. This
Construct/Analyze/Prioritize cycle continues until some limit is reached, or an
acceptable solution is found. SWO can be viewed as operating on two search
spaces: solutions and prioritizations. Successive solutions are only indirectly
related, via the re-prioritization that results from analyzing the prior
solution. Similarly, successive prioritizations are generated by constructing
and analyzing solutions. This `coupled search' has some interesting properties,
which we discuss. We report encouraging experimental results on two domains,
scheduling problems that arise in fiber-optic cable manufacturing, and graph
coloring problems. The fact that these domains are very different supports our
claim that SWO is a general technique for optimization.
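As a hypothetical illustration (not the paper's implementation), the Construct/Analyze/Prioritize cycle can be sketched on a toy graph-coloring instance; the function names, the priority-update rule, and the conflict measure below are our own simplifications:

```python
import random

def construct(order, adj, n_colors):
    """Construct: greedily color nodes in priority order; then Analyze by
    recording each node's conflict count as its 'trouble' score."""
    colors = {}
    for v in order:
        used = {colors[u] for u in adj[v] if u in colors}
        free = [c for c in range(n_colors) if c not in used]
        # if no color is free, take the least-used one among the neighbors
        colors[v] = free[0] if free else min(
            range(n_colors),
            key=lambda c: sum(colors.get(u) == c for u in adj[v]))
    trouble = {v: sum(colors[u] == colors[v] for u in adj[v]) for v in adj}
    return colors, trouble

def swo(adj, n_colors, iters=50, seed=0):
    """Loop the Construct/Analyze/Prioritize cycle, boosting the priority of
    trouble spots so the greedy step handles them earlier next time."""
    rng = random.Random(seed)
    prio = {v: rng.random() for v in adj}      # initial random priorities
    best = None
    for _ in range(iters):
        order = sorted(prio, key=prio.get, reverse=True)
        colors, trouble = construct(order, adj, n_colors)
        conflicts = sum(trouble.values()) // 2  # each bad edge counted twice
        if best is None or conflicts < best[0]:
            best = (conflicts, colors)
        for v, t in trouble.items():            # Prioritize
            prio[v] += t
    return best
```

On a triangle with three colors the greedy step alone already finds a conflict-free coloring; on harder instances the re-prioritization is what drives progress.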
|
1105.5455
|
Variational Cumulant Expansions for Intractable Distributions
|
cs.AI
|
Intractable distributions present a common difficulty in inference within the
probabilistic knowledge representation framework and variational methods have
recently been popular in providing an approximate solution. In this article, we
describe a perturbational approach in the form of a cumulant expansion which,
to lowest order, recovers the standard Kullback-Leibler variational bound.
Higher-order terms describe corrections on the variational approach without
incurring much further computational cost. The relationship to other
perturbational approaches such as TAP is also elucidated. We demonstrate the
method on a particular class of undirected graphical models, Boltzmann
machines, for which our simulation results confirm improved accuracy and
enhanced stability during learning.
|
1105.5457
|
Efficient Implementation of the Plan Graph in STAN
|
cs.AI
|
STAN is a Graphplan-based planner, so-called because it uses a variety of
STate ANalysis techniques to enhance its performance. STAN competed in the
AIPS-98 planning competition where it compared well with the other competitors
in terms of speed, finding solutions fastest to many of the problems posed.
Although the domain analysis techniques STAN exploits are an important factor
in its overall performance, we believe that the speed at which STAN solved the
competition problems is largely due to the implementation of its plan graph.
The implementation is based on two insights: that many of the graph
construction operations can be implemented as bit-level logical operations on
bit vectors, and that the graph should not be explicitly constructed beyond the
fix point. This paper describes the implementation of STAN's plan graph and
provides experimental results which demonstrate the circumstances under which
advantages can be obtained from using this implementation.
|
1105.5458
|
Cooperation between Top-Down and Bottom-Up Theorem Provers
|
cs.AI
|
Top-down and bottom-up theorem proving approaches each have specific
advantages and disadvantages. Bottom-up provers profit from strong redundancy
control but suffer from the lack of goal-orientation, whereas top-down provers
are goal-oriented but often have weak calculi when their proof lengths are
considered. In order to integrate both approaches, we try to achieve
cooperation between a top-down and a bottom-up prover in two different ways:
The first technique aims at supporting a bottom-up prover with a top-down
prover: the top-down prover generates subgoal clauses, which are then
processed by a
bottom-up prover. The second technique deals with the use of bottom-up
generated lemmas in a top-down prover. We apply our concept to the areas of
model elimination and superposition. We discuss the ability of our techniques
to shorten proofs as well as to reorder the search space in an appropriate
manner. Furthermore, in order to identify subgoal clauses and lemmas which are
actually relevant for the proof task, we develop methods for a relevancy-based
filtering. Experiments with the provers SETHEO and SPASS performed in the
problem library TPTP reveal the high potential of our cooperation approaches.
|
1105.5459
|
Solving Highly Constrained Search Problems with Quantum Computers
|
cs.AI
|
A previously developed quantum search algorithm for solving 1-SAT problems in
a single step is generalized to apply to a range of highly constrained k-SAT
problems. We identify a bound on the number of clauses in satisfiability
problems for which the generalized algorithm can find a solution in a constant
number of steps as the number of variables increases. This performance
contrasts with the linear growth in the number of steps required by the best
classical algorithms, and the exponential number required by classical and
quantum methods that ignore the problem structure. In some cases, the algorithm
can also guarantee that insoluble problems in fact have no solutions, unlike
previously proposed quantum search algorithms.
|
1105.5460
|
Decision-Theoretic Planning: Structural Assumptions and Computational
Leverage
|
cs.AI
|
Planning under uncertainty is a central problem in the study of automated
sequential decision making, and has been addressed by researchers in many
different fields, including AI planning, decision analysis, operations
research, control theory and economics. While the assumptions and perspectives
adopted in these areas often differ in substantial ways, many planning problems
of interest to researchers in these fields can be modeled as Markov decision
processes (MDPs) and analyzed using the techniques of decision theory. This
paper presents an overview and synthesis of MDP-related methods, showing how
they provide a unifying framework for modeling many classes of planning
problems studied in AI. It also describes structural properties of MDPs that,
when exhibited by particular classes of problems, can be exploited in the
construction of optimal or approximately optimal policies or plans. Planning
problems commonly possess structure in the reward and value functions used to
describe performance criteria, in the functions used to describe state
transitions and observations, and in the relationships among features used to
describe states, actions, rewards, and observations. Specialized
representations, and algorithms employing these representations, can achieve
computational leverage by exploiting these various forms of structure. Certain
AI techniques -- in particular those based on the use of structured,
intensional representations -- can be viewed in this way. This paper surveys
several types of representations for both classical and decision-theoretic
planning problems, and planning algorithms that exploit these representations
in a number of different ways to ease the computational burden of constructing
policies or plans. It focuses primarily on abstraction, aggregation and
decomposition techniques based on AI-style representations.
|
1105.5461
|
Probabilistic Deduction with Conditional Constraints over Basic Events
|
cs.AI
|
We study the problem of probabilistic deduction with conditional constraints
over basic events. We show that globally complete probabilistic deduction with
conditional constraints over basic events is NP-hard. We then concentrate on
the special case of probabilistic deduction in conditional constraint trees. We
elaborate very efficient techniques for globally complete probabilistic
deduction. In detail, for conditional constraint trees with point
probabilities, we present a local approach to globally complete probabilistic
deduction, which runs in linear time in the size of the conditional constraint
trees. For conditional constraint trees with interval probabilities, we show
that globally complete probabilistic deduction can be done in a global approach
by solving nonlinear programs. We show how these nonlinear programs can be
transformed into equivalent linear programs, which are solvable in polynomial
time in the size of the conditional constraint trees.
|
1105.5462
|
Variational Probabilistic Inference and the QMR-DT Network
|
cs.AI
|
We describe a variational approximation method for efficient inference in
large-scale probabilistic models. Variational methods are deterministic
procedures that provide approximations to marginal and conditional
probabilities of interest. They provide alternatives to approximate inference
methods based on stochastic sampling or search. We describe a variational
approach to the problem of diagnostic inference in the `Quick Medical
Reference' (QMR) network. The QMR network is a large-scale probabilistic
graphical model built on statistical and expert knowledge. Exact probabilistic
inference is infeasible in this model for all but a small set of cases. We
evaluate our variational inference algorithm on a large set of diagnostic test
cases, comparing the algorithm to a state-of-the-art stochastic sampling
method.
|
1105.5463
|
Extensible Knowledge Representation: the Case of Description Reasoners
|
cs.AI
|
This paper offers an approach to extensible knowledge representation and
reasoning for a family of formalisms known as Description Logics. The approach
is based on the notion of adding new concept constructors, and includes a
heuristic methodology for specifying the desired extensions, as well as a
modularized software architecture that supports implementing extensions. The
architecture detailed here falls in the normalize-compare paradigm, and
supports both intensional reasoning (subsumption) involving concepts, and
extensional reasoning involving individuals after incremental updates to the
knowledge base. The resulting approach can be used to extend the reasoner with
specialized notions that are motivated by specific problems or application
areas, such as reasoning about dates, plans, etc. In addition, it provides an
opportunity to implement constructors that are not currently yet sufficiently
well understood theoretically, but are needed in practice. Also, for
constructors that are provably hard to reason with (e.g., ones whose presence
would lead to undecidability), it allows the implementation of incomplete
reasoners where the incompleteness is tailored to be acceptable for the
application at hand.
|
1105.5464
|
Learning to Order Things
|
cs.LG cs.AI
|
There are many applications in which it is desirable to order rather than
classify instances. Here we consider the problem of learning how to order
instances given feedback in the form of preference judgments, i.e., statements
to the effect that one instance should be ranked ahead of another. We outline a
two-stage approach in which one first learns by conventional means a binary
preference function indicating whether it is advisable to rank one instance
before another. Here we consider an on-line algorithm for learning preference
functions that is based on Freund and Schapire's 'Hedge' algorithm. In the
second stage, new instances are ordered so as to maximize agreement with the
learned preference function. We show that the problem of finding the ordering
that agrees best with a learned preference function is NP-complete.
Nevertheless, we describe simple greedy algorithms that are guaranteed to find
a good approximation. Finally, we show how metasearch can be formulated as an
ordering problem, and present experimental results on learning a combination of
'search experts', each of which is a domain-specific query expansion strategy
for a web search engine.
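The second stage described above can be illustrated with a simple greedy heuristic in the spirit of the abstract's "simple greedy algorithms"; the function names and the toy preference function are ours, not the paper's:

```python
def greedy_order(items, pref):
    """Greedy ordering sketch: repeatedly emit the remaining item u whose net
    outgoing preference weight, sum over remaining v of pref(u,v) - pref(v,u),
    is largest. pref(u, v) is a (hypothetical) learned preference in [0, 1]
    expressing how strongly u should be ranked ahead of v."""
    remaining = list(items)
    order = []
    while remaining:
        best = max(remaining,
                   key=lambda u: sum(pref(u, v) - pref(v, u)
                                     for v in remaining if v != u))
        order.append(best)
        remaining.remove(best)
    return order
```

With a perfectly consistent preference function the greedy pass simply recovers the underlying total order.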
|
1105.5465
|
Constructing Conditional Plans by a Theorem-Prover
|
cs.AI
|
The research on conditional planning rejects the assumptions that there is no
uncertainty or incompleteness of knowledge with respect to the state and
changes of the system the plans operate on. Without these assumptions the
sequences of operations that achieve the goals depend on the initial state and
the outcomes of nondeterministic changes in the system. This setting raises the
questions of how to represent the plans and how to perform plan search. The
answers are quite different from those in the simpler classical framework. In
this paper, we approach conditional planning from a new viewpoint that is
motivated by the use of satisfiability algorithms in classical planning.
Translating conditional planning to formulae in the propositional logic is not
feasible because of inherent computational limitations. Instead, we translate
conditional planning to quantified Boolean formulae. We discuss three
formalizations of conditional planning as quantified Boolean formulae, and
present experimental results obtained with a theorem-prover.
|
1105.5466
|
Issues in Stacked Generalization
|
cs.AI
|
Stacked generalization is a general method of using a high-level model to
combine lower-level models to achieve greater predictive accuracy. In this
paper we address two crucial issues which have been considered to be a `black
art' in classification tasks ever since the introduction of stacked
generalization in 1992 by Wolpert: the type of generalizer that is suitable to
derive the higher-level model, and the kind of attributes that should be used
as its input. We find that best results are obtained when the higher-level
model combines the confidence (and not just the predictions) of the lower-level
ones. We demonstrate the effectiveness of stacked generalization for combining
three different types of learning algorithms for classification tasks. We also
compare the performance of stacked generalization with majority vote and
published results of arcing and bagging.
|
1105.5476
|
Feedback-Topology Designs for Interference Alignment in MIMO
Interference Channels
|
cs.IT math.IT
|
Interference alignment (IA) is a joint-transmission technique that achieves
the capacity of the interference channel for high signal-to-noise ratios
(SNRs). Most prior work on IA is based on the impractical assumption that
perfect and global channel-state information (CSI) is available at all
transmitters. To implement IA, each receiver has to feed back CSI to all
interferers, resulting in overwhelming feedback overhead. In particular, the
sum feedback rate of each receiver scales quadratically with the number of
users even if the quantized CSI is fed back. To substantially suppress feedback
overhead, this paper focuses on designing efficient arrangements of feedback
links, called feedback topologies, under the IA constraint. For the
multiple-input-multiple-output (MIMO) K-user interference channel, we propose
the feedback topology that supports sequential CSI exchange (feedback and
feedforward) between transmitters and receivers so as to achieve IA
progressively. This feedback topology is shown to reduce the network feedback
overhead from a cubic function of K to a linear one. To reduce the delay in the
sequential CSI exchange, an alternative feedback topology is designed for
supporting two-hop feedback via a control station, which also achieves the
linear feedback scaling with K. Next, given the proposed feedback topologies,
the feedback-bit allocation algorithm is designed for allocating feedback bits
by each receiver to different feedback links so as to regulate the residual
interference caused by the finite-rate feedback. Simulation results demonstrate
that the proposed bit allocation leads to significant throughput gains
especially in strong interference environments.
|
1105.5488
|
Coarse-Grained Topology Estimation via Graph Sampling
|
cs.SI physics.data-an physics.soc-ph
|
Many online networks are measured and studied via sampling techniques, which
typically collect a relatively small fraction of nodes and their associated
edges. Past work in this area has primarily focused on obtaining a
representative sample of nodes and on efficient estimation of local graph
properties (such as node degree distribution or any node attribute) based on
that sample. However, less is known about estimating the global topology of the
underlying graph.
In this paper, we show how to efficiently estimate the coarse-grained
topology of a graph from a probability sample of nodes. In particular, we
consider that nodes are partitioned into categories (e.g., countries or
work/study places in OSNs), which naturally defines a weighted category graph.
We are interested in estimating (i) the size of categories and (ii) the
probability that nodes from two different categories are connected. For each of
the above, we develop a family of estimators for design-based inference under
uniform or non-uniform sampling, employing either of two measurement
strategies: induced subgraph sampling, which relies only on information about
the sampled nodes; and star sampling, which also exploits category information
about the neighbors of sampled nodes. We prove consistency of these estimators
and evaluate their efficiency via simulation on fully known graphs. We also
apply our methodology to a sample of Facebook users to obtain a number of
category graphs, such as the college friendship graph and the country
friendship graph; we share and visualize the resulting data at
www.geosocialmap.com.
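As a minimal sketch of the two estimation targets described above, under the simplifying assumptions of a uniform node sample and induced subgraph sampling (the names and details are ours, not the paper's):

```python
from collections import Counter
from itertools import combinations

def estimate_category_graph(sampled_nodes, category, edges, total_nodes):
    """(i) Category sizes: scale sample counts by N/n (uniform sampling).
    (ii) Cross-category connection probability: the fraction of sampled node
    pairs in each category pair that are connected, read off the induced
    subgraph on the sample."""
    n = len(sampled_nodes)
    counts = Counter(category[v] for v in sampled_nodes)
    sizes = {c: total_nodes * k / n for c, k in counts.items()}
    pair_totals, pair_edges = Counter(), Counter()
    for u, v in combinations(sampled_nodes, 2):
        key = frozenset((category[u], category[v]))  # unordered pair
        pair_totals[key] += 1
        if (u, v) in edges or (v, u) in edges:
            pair_edges[key] += 1
    probs = {tuple(sorted(k)): pair_edges[k] / pair_totals[k]
             for k in pair_totals}
    return sizes, probs
```

When the "sample" is the whole node set, both estimators reduce to the exact category sizes and edge densities, which makes the design easy to sanity-check.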
|
1105.5516
|
Ontology Alignment at the Instance and Schema Level
|
cs.AI
|
We present PARIS, an approach for the automatic alignment of ontologies.
PARIS aligns not only instances, but also relations and classes. Alignments at
the instance-level cross-fertilize with alignments at the schema-level.
Thereby, our system provides a truly holistic solution to the problem of
ontology alignment. The heart of the approach is probabilistic. This allows
PARIS to run without any parameter tuning. We demonstrate the efficiency of the
algorithm and its precision through extensive experiments. In particular, we
obtain a precision of around 90% in experiments with two of the world's largest
ontologies.
|
1105.5540
|
Finite First Hitting Time versus Stochastic Convergence in Particle
Swarm Optimisation
|
cs.NE
|
We reconsider stochastic convergence analyses of particle swarm optimisation,
and point out that previously obtained parameter conditions are not always
sufficient to guarantee mean square convergence to a local optimum. We show
that stagnation can in fact occur for non-trivial configurations in non-optimal
parts of the search space, even for simple functions like SPHERE. The
convergence properties of the basic PSO may in these situations be detrimental
to the goal of optimisation, to discover a sufficiently good solution within
reasonable time. To characterise optimisation ability of algorithms, we suggest
the expected first hitting time (FHT), i.e., the time until a search point in
the vicinity of the optimum is visited. It is shown that a basic PSO may have
infinite expected FHT, while an algorithm introduced here, the Noisy PSO, has
finite expected FHT on some functions.
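The first hitting time can be illustrated empirically with a basic PSO on SPHERE; this is a hedged sketch with our own parameter choices and stopping rule, not the paper's analysis:

```python
import random

def pso_fht(dim=2, swarm=10, w=0.7, c1=1.5, c2=1.5, eps=1e-3,
            max_iters=10000, seed=1):
    """Empirical first hitting time of a basic PSO on SPHERE, f(x) = sum x_i^2:
    the iteration at which the global best first falls inside an eps-ball (in
    function value) around the optimum, or None if the budget runs out."""
    rng = random.Random(seed)
    f = lambda x: sum(xi * xi for xi in x)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                 # personal best positions
    g = min(P, key=f)[:]                  # global best position
    for t in range(1, max_iters + 1):
        for i in range(swarm):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (P[i][d] - X[i][d])
                           + c2 * r2 * (g[d] - X[i][d]))
                X[i][d] += V[i][d]
            if f(X[i]) < f(P[i]):
                P[i] = X[i][:]
                if f(P[i]) < f(g):
                    g = P[i][:]
        if f(g) < eps:
            return t                      # hit the eps-ball
    return None                           # stagnated within the budget
```

Averaging the return value over many seeds (treating None as a censored run) gives an empirical estimate of the expected FHT; the point of the abstract is that for some parameterisations this expectation can be infinite.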
|
1105.5542
|
Monte Carlo Algorithms for the Partition Function and Information Rates
of Two-Dimensional Channels
|
cs.IT math.IT stat.AP stat.CO
|
The paper proposes Monte Carlo algorithms for the computation of the
information rate of two-dimensional source/channel models. The focus of the
paper is on binary-input channels with constraints on the allowed input
configurations. The problem of numerically computing the information rate, and
even the noiseless capacity, of such channels has so far remained largely
unsolved. Both problems can be reduced to computing a Monte Carlo estimate of a
partition function. The proposed algorithms use tree-based Gibbs sampling and
multilayer (multitemperature) importance sampling. The viability of the
proposed algorithms is demonstrated by simulation results.
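As a much simpler stand-in for the tree-based Gibbs and multilayer importance sampling proposed in the paper, the following sketch estimates the partition function of a tiny Ising grid with plain uniform-proposal importance sampling (all names and details are ours), which is enough to show the reduction of the problem to a Monte Carlo estimate of Z:

```python
import itertools
import math
import random

def ising_energy(s, n):
    """Nearest-neighbour energy of an n x n spin grid (free boundaries)."""
    e = 0
    for i in range(n):
        for j in range(n):
            if i + 1 < n:
                e -= s[i][j] * s[i + 1][j]
            if j + 1 < n:
                e -= s[i][j] * s[i][j + 1]
    return e

def exact_Z(n, beta):
    """Brute force over all 2^(n*n) configurations (only viable for tiny n)."""
    Z = 0.0
    for bits in itertools.product((-1, 1), repeat=n * n):
        s = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        Z += math.exp(-beta * ising_energy(s, n))
    return Z

def mc_Z(n, beta, samples=20000, seed=0):
    """Uniform-proposal importance sampling:
    Z = 2^(n*n) * E_uniform[exp(-beta * E)]."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(samples):
        s = [[rng.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
        acc += math.exp(-beta * ising_energy(s, n))
    return (2 ** (n * n)) * acc / samples
```

The uniform proposal degrades quickly as the grid grows or beta increases, which is precisely why the paper resorts to Gibbs sampling and multitemperature importance sampling.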
|
1105.5545
|
Competing activation mechanisms in epidemics on networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
In contrast to previous common wisdom that epidemic activity in heterogeneous
networks is dominated by the hubs with the largest number of connections,
recent research has pointed out the role that the innermost, dense core of the
network plays in sustaining epidemic processes. Here we show that the mechanism
responsible for spreading depends on the nature of the process. Epidemics with a
transient state are boosted by the innermost core. Contrarily, epidemics
allowing a steady state present a dual scenario, where either the hub
independently sustains activity and propagates it to the rest of the system,
or, alternatively, the innermost network core collectively turns into the
active state, maintaining it globally. In uncorrelated networks the former
mechanism dominates if the degree distribution decays with an exponent larger
than 5/2, and the latter otherwise. Topological correlations, rife in real
networks, may perturb this picture, mixing the role of both mechanisms.
|
1105.5557
|
Decoding q-ary lattices in the Lee metric
|
cs.IT math.CO math.IT
|
q-ary lattices can be obtained from q-ary codes using the so-called
Construction A. We investigate these lattices in the Lee metric and show how
their decoding process can be related to the associated codes. For prime q we
derive a Lee sphere decoding algorithm for q-ary lattices, present a brief
discussion on its complexity and some comparisons with the classic sphere
decoding.
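The Lee metric and the Construction A membership test can be written down directly from their definitions (function names are ours):

```python
def lee_weight(x, q):
    """Lee weight of a symbol in Z_q: its distance to 0 around the q-cycle."""
    x %= q
    return min(x, q - x)

def lee_distance(u, v, q):
    """Lee distance between two vectors over Z_q: sum of symbol-wise
    Lee weights of the difference."""
    return sum(lee_weight(a - b, q) for a, b in zip(u, v))

def in_construction_a_lattice(x, code, q):
    """Construction A: an integer vector lies in the lattice obtained from a
    q-ary code iff its residue mod q is a codeword."""
    return tuple(xi % q for xi in x) in code
```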
|
1105.5575
|
Comprehensive online Atomic Database Management System (DBMS) with
Highly Qualified Computing Capabilities
|
physics.atom-ph astro-ph.CO astro-ph.IM cs.DB
|
The intensive need for atomic data is expanding continuously in a wide variety
of applications (e.g. fusion energy, astrophysics, laser-produced plasma
research, and plasma processing). This paper will introduce our ongoing
research work to build a comprehensive, complete, up-to-date, user-friendly and
online atomic Database Management System (DBMS), called AIMS, built using
SQLite (http://www.sqlite.org/about.html)(8). Programming language tools and
techniques will not be covered here. The system allows the generation of
various atomic data based on professional online atomic calculators. The
ongoing work is a step towards making detailed atomic models accessible to a
wide community of laboratory and astrophysical plasma diagnostics. AIMS is a
professional worldwide tool for supporting several educational purposes and can
be considered as a complementary database to the IAEA atomic databases.
Moreover, it will be an exceptional strategy for incorporating the output data
of several atomic codes into external spectral models.
|
1105.5592
|
Kernel Belief Propagation
|
cs.LG
|
We propose a nonparametric generalization of belief propagation, Kernel
Belief Propagation (KBP), for pairwise Markov random fields. Messages are
represented as functions in a reproducing kernel Hilbert space (RKHS), and
message updates are simple linear operations in the RKHS. KBP makes none of the
assumptions commonly required in classical BP algorithms: the variables need
not arise from a finite domain or a Gaussian distribution, nor must their
relations take any particular parametric form. Rather, the relations between
variables are represented implicitly, and are learned nonparametrically from
training data. KBP has the advantage that it may be used on any domain where
kernels are defined (Rd, strings, groups), even where explicit parametric
models are not known, or closed form expressions for the BP updates do not
exist. The computational cost of message updates in KBP is polynomial in the
training data size. We also propose a constant time approximate message update
procedure by representing messages using a small number of basis functions. In
experiments, we apply KBP to image denoising, depth prediction from still
images, and protein configuration prediction: KBP is faster than competing
classical and nonparametric approaches (by orders of magnitude, in some cases),
while providing significantly more accurate results.
|
1105.5594
|
A risk profile for information fusion algorithms
|
cs.IT cond-mat.stat-mech math.IT
|
E.T. Jaynes, originator of the maximum entropy interpretation of statistical
mechanics, emphasized that there is an inevitable trade-off between the
conflicting requirements of robustness and accuracy for any inferencing
algorithm. This is because robustness requires discarding of information in
order to reduce the sensitivity to outliers. The principle of nonlinear
statistical coupling, which is an interpretation of the Tsallis entropy
generalization, can be used to quantify this trade-off. The coupled-surprisal,
-ln_k (p)=-(p^k-1)/k, is a generalization of Shannon surprisal or the
logarithmic scoring rule, given a forecast p of a true event by an inferencing
algorithm. The coupling parameter k=1-q, where q is the Tsallis entropy index,
is the degree of nonlinear coupling between statistical states. Positive
(negative) values of nonlinear coupling decrease (increase) the surprisal
information metric and thereby bias the risk in favor of decisive (robust)
algorithms relative to the Shannon surprisal (k=0). We show that translating
the average coupled-surprisal to an effective probability is equivalent to
using the generalized mean of the true event probabilities as a scoring rule.
The metric is used to assess the robustness, accuracy, and decisiveness of a
fusion algorithm. We use a two-parameter fusion algorithm to combine input
probabilities from N sources. The generalized mean parameter 'alpha' varies the
degree of smoothing, and raising to a power N^beta, with beta between 0 and 1,
provides a model of correlation.
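The coupled-surprisal and its connection to the generalized mean follow directly from the formula given above (the implementation details are ours):

```python
import math

def coupled_surprisal(p, k):
    """-ln_k(p) = -(p**k - 1)/k; recovers the Shannon surprisal -ln(p)
    in the limit k -> 0."""
    if k == 0:
        return -math.log(p)
    return -(p ** k - 1.0) / k

def effective_probability(ps, k):
    """Translating the average coupled-surprisal back through the coupled
    exponential yields the generalized (power) mean of the true-event
    probabilities, i.e. the scoring rule mentioned in the abstract."""
    n = len(ps)
    if k == 0:
        # k -> 0 limit: geometric mean
        return math.exp(sum(math.log(p) for p in ps) / n)
    return (sum(p ** k for p in ps) / n) ** (1.0 / k)
```

Positive k compresses large surprisals (decisive bias), negative k inflates them (robust bias), matching the trade-off described above.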
|
1105.5639
|
Asynchronous Communication: Capacity Bounds and Suboptimality of
Training
|
cs.IT math.IT
|
Several aspects of the problem of asynchronous point-to-point communication
without feedback are developed when the source is highly intermittent. In the
system model of interest, the codeword is transmitted at a random time within a
prescribed window whose length corresponds to the level of asynchronism between
the transmitter and the receiver. The decoder operates sequentially and
communication rate is defined as the ratio between the message size and the
elapsed time between when transmission commences and when the decoder makes a
decision.
For such systems, general upper and lower bounds on capacity as a function of
the level of asynchronism are established, and are shown to coincide in some
nontrivial cases. From these bounds, several properties of this asynchronous
capacity are derived. In addition, the performance of training-based schemes is
investigated. It is shown that such schemes, which implement synchronization
and information transmission on separate degrees of freedom in the encoding,
cannot achieve the asynchronous capacity in general, and that the penalty is
particularly significant in the high-rate regime.
|
1105.5640
|
Quantized Feedback Control Software Synthesis from System Level Formal
Specifications for Buck DC/DC Converters
|
cs.SY math.OC
|
Many Embedded Systems are indeed Software Based Control Systems (SBCSs), that
is, control systems whose controller consists of control software running on a
microcontroller device. This motivates investigation on Formal Model Based
Design approaches for automatic synthesis of SBCS control software. In previous
works we presented an algorithm, along with a tool QKS implementing it, that
from a formal model (as a Discrete Time Linear Hybrid System, DTLHS) of the
controlled system (plant), implementation specifications (that is, number of
bits in the Analog-to-Digital (AD) conversion) and System Level Formal
Specifications (that is, safety and liveness requirements for the closed loop
system) returns correct-by-construction control software that has a Worst Case
Execution Time (WCET) linear in the number of AD bits and meets the given
specifications. In this technical report we present full experimental results
on using it to synthesize control software for two versions of buck DC-DC
converters (single-input and multi-input), a widely used mixed-mode analog
circuit.
|
1105.5651
|
Towards a Queueing-Based Framework for In-Network Function Computation
|
cs.NI cs.IT math.IT
|
We seek to develop network algorithms for function computation in sensor
networks. Specifically, we want dynamic joint aggregation, routing, and
scheduling algorithms that have analytically provable performance benefits due
to in-network computation as compared to simple data forwarding. To this end,
we define a class of functions, the Fully-Multiplexible functions, which
includes several functions such as parity, MAX, and k-th order statistics. For
such functions we exactly characterize the maximum achievable refresh rate of
the network in terms of an underlying graph primitive, the min-mincut. In
acyclic wireline networks, we show that the maximum refresh rate is achievable
by a simple algorithm that is dynamic, distributed, and only dependent on local
information. In the case of wireless networks, we provide a MaxWeight-like
algorithm with dynamic flow splitting, which is shown to be throughput-optimal.
|
1105.5667
|
Complexity of and Algorithms for Borda Manipulation
|
cs.AI
|
We prove that it is NP-hard for a coalition of two manipulators to compute
how to manipulate the Borda voting rule. This resolves one of the last open
problems in the computational complexity of manipulating common voting rules.
Because of this NP-hardness, we treat computing a manipulation as an
approximation problem where we try to minimize the number of manipulators.
Based on ideas from bin packing and multiprocessor scheduling, we propose two
new approximation methods to compute manipulations of the Borda rule.
Experiments show that these methods significantly outperform the previous best
known approximation method. We are able to find optimal manipulations
in almost all the randomly generated elections tested. Our results suggest
that, whilst computing a manipulation of the Borda rule by a coalition is
NP-hard, computational complexity may provide only a weak barrier against
manipulation in practice.
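For concreteness, the Borda rule itself, the object being manipulated above, is straightforward to implement; this sketch uses our own naming and breaks score ties arbitrarily:

```python
def borda_scores(profile):
    """Borda rule: with m candidates, each voter awards m-1 points to their
    top-ranked candidate, m-2 to the next, ..., and 0 to the bottom one.
    `profile` is a list of rankings, each a list of candidates, best first."""
    m = len(profile[0])
    scores = {}
    for ranking in profile:
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (m - 1 - pos)
    return scores

def borda_winner(profile):
    scores = borda_scores(profile)
    return max(scores, key=scores.get)   # ties broken arbitrarily
```

A coalitional manipulation then amounts to choosing the manipulators' rankings so that a preferred candidate ends up with the maximum score; computing such rankings for two manipulators is what the abstract proves NP-hard.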
|
1105.5675
|
Scale-Invariant Local Descriptor for Event Recognition in 1D Sensor
Signals
|
cs.MM cs.CV
|
In this paper, we introduce a shape-based, time-scale invariant feature
descriptor for 1-D sensor signals. The time-scale invariance of the feature
allows us to use feature from one training event to describe events of the same
semantic class that may take place over varying time scales, such as walking
slowly and walking fast, and therefore requires a smaller training set. The descriptor
takes advantage of the invariant location detection in the scale space theory
and employs a high level shape encoding scheme to capture invariant local
features of events. Based on this descriptor, a scale-invariant classifier with
"R" metric (SIC-R) is designed to recognize multi-scale events of human
activities. The R metric combines the number of keypoint matches in scale
space with the Dynamic Time Warping score. SIC-R is tested on various types of
1-D sensor data from passive infrared, accelerometer, and seismic sensors,
achieving more than 90% classification accuracy.
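As an illustration of the DTW component of the R metric, here is a minimal pure-Python sketch of the classic Dynamic Time Warping score; the sequences and the absolute-difference local cost are illustrative assumptions, not the paper's exact configuration:

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW score with
    |x - y| as the local cost between aligned samples."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cumulative cost aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]
```

Identical sequences score 0, and a time-stretched copy of a signal also aligns at zero cost, which is the kind of time-scale tolerance the descriptor relies on.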
|
1105.5676
|
Transmission Control of Two-User Slotted ALOHA Over Gilbert-Elliott
Channel: Stability and Delay Analysis
|
cs.IT math.IT
|
In this paper, we consider the problem of calculating the stability region
and average delay of two-user slotted ALOHA over a Gilbert-Elliott channel,
where users have channel state information and adapt their transmission
probabilities according to the channel state. Each channel has two states,
namely, the 'good' and 'bad' states. In the 'bad' state, the channel is assumed
to be in a deep fade and the transmission fails with probability one, while in
the 'good' state, there is some positive success probability. We calculate the
stability region with and without multipacket reception (MPR) capability as well as
the average delay without MPR. Our results show that the stability region of
the controlled S-ALOHA is always a superset of the stability region of
uncontrolled S-ALOHA. Moreover, if the channel tends to be in the 'bad' state
for a large proportion of the time, then the stability region is a convex polyhedron
strictly containing the TDMA stability region and the optimal transmission
strategy is to transmit with probability one whenever the nodes have packets
and it is shown that this strategy is delay optimal. On the other hand, if the
channel tends to be in the 'good' state more often, then the stability region
is bounded by a convex curve and is a strict subset of the TDMA stability region.
We also show that enhancing the physical layer by allowing MPR capability can
significantly enhance the performance while simplifying the MAC layer design by
removing the need for scheduling under some conditions. Furthermore, it is
shown that transmission control not only allows handling higher stable arrival
rates but also leads to lower delay for the same arrival rate compared with
ordinary S-ALOHA.
|
1105.5721
|
A Philosophical Treatise of Universal Induction
|
cs.LG cs.IT math.IT
|
Understanding inductive reasoning is a problem that has engaged mankind for
thousands of years. This problem is relevant to a wide range of fields and is
integral to the philosophy of science. It has been tackled by many great minds
ranging from philosophers to scientists to mathematicians, and more recently
computer scientists. In this article we argue the case for Solomonoff
Induction, a formal inductive framework which combines algorithmic information
theory with the Bayesian framework. Although it achieves excellent theoretical
results and is based on solid philosophical foundations, the requisite
technical knowledge necessary for understanding this framework has caused it to
remain largely unknown and unappreciated in the wider scientific community. The
main contribution of this article is to convey Solomonoff induction and its
related concepts in a generally accessible form with the aim of bridging this
current technical gap. In the process we examine the major historical
contributions that have led to the formulation of Solomonoff Induction as well
as criticisms of Solomonoff and induction in general. In particular we examine
how Solomonoff induction addresses many issues that have plagued other
inductive systems, such as the black ravens paradox and the confirmation
problem, and compare this approach with other recent approaches.
|
1105.5736
|
Network Codes with Overlapping Chunks over Line Networks: A Case for
Linear-Time Codes
|
cs.IT math.IT
|
In this paper, the problem of designing network codes that are both
communicationally and computationally efficient over packet line networks with
worst-case schedules is considered. In this context, random linear network
codes (dense codes) are asymptotically capacity-achieving, but require highly
complex coding operations. To reduce the coding complexity, Maymounkov et al.
proposed chunked codes (CC). Chunked codes operate by splitting the message
into non-overlapping chunks and sending a randomly chosen chunk at each
transmission time using a dense code. The complexity, which is linear in the chunk
size, is thus reduced compared to dense codes. In this paper, the existing
analysis of CC is revised, and tighter bounds on the performance of CC are
derived. As a result, we prove that (i) CC with sufficiently large chunks are
asymptotically capacity-achieving, but with a slower speed of convergence
compared to dense codes; and (ii) CC with relatively smaller chunks approach
the capacity with an arbitrarily small but non-zero constant gap. To improve
the speed of convergence of CC, while maintaining their advantage in reducing
the computational complexity, we propose and analyze a new CC scheme with
overlapping chunks, referred to as overlapped chunked codes (OCC). We prove
that for smaller chunks, which are advantageous due to lower computational
complexity, OCC with larger overlaps provide a better tradeoff between the
speed of convergence and the message or packet error rate. This implies that
for smaller chunks, and with the same computational complexity, OCC outperform
CC in terms of the speed of approaching the capacity for sufficiently small
target error rate. In fact, we design linear-time OCC with very small chunks
(constant in the message size) that are both computationally and
communicationally efficient, and that outperform linear-time CC.
|
1105.5755
|
On Real Time Coding with Limited Lookahead
|
cs.IT math.IT
|
A real time coding system with lookahead consists of a memoryless source, a
memoryless channel, an encoder, which encodes the source symbols sequentially
with knowledge of future source symbols up to a fixed finite lookahead, d, with
or without feedback of the past channel output symbols, and a decoder, which
sequentially reconstructs the source symbols using the channel output. The
objective is to minimize the expected per-symbol distortion. For a fixed finite
lookahead d>=1 we invoke the theory of controlled Markov chains to obtain an
average cost optimality equation (ACOE), the solution of which, denoted by
D(d), is the minimum expected per-symbol distortion. With increasing d, D(d)
bridges the gap between causal encoding, d=0, where symbol by symbol
encoding-decoding is optimal and the infinite lookahead case, d=\infty, where
Shannon Theoretic arguments show that separation is optimal. We extend the
analysis to a system with finite state decoders, with or without noise-free
feedback. For a Bernoulli source and binary symmetric channel, under Hamming
loss, we compute the optimal distortion for various source and channel
parameters, and thus obtain computable bounds on D(d). We also identify regions
of source and channel parameters where symbol by symbol encoding-decoding is
suboptimal. Finally, we demonstrate the wide applicability of our approach by
applying it in additional coding scenarios, such as the case where the
sequential decoder can take cost constrained actions affecting the quality or
availability of side information about the source.
|
1105.5762
|
On Log-concavity of the Generalized Marcum Q Function
|
math.ST cs.IT math.CA math.IT stat.TH
|
It is shown that, if nu >= 1/2, then the generalized Marcum Q function Q_nu(a,
b) is log-concave in b on [0, infinity). This proves a conjecture of Sun, Baricz and
Zhou (2010). We also point out relevant results in the statistics literature.
|
1105.5766
|
On 2-step, corank 2 nilpotent sub-Riemannian metrics
|
math.OC cs.SY
|
In this paper we study the nilpotent 2-step, corank 2 sub-Riemannian metrics
that are nilpotent approximations of general sub-Riemannian metrics. We exhibit
optimal syntheses for these problems. It turns out that in general the cut time
is not equal to the first conjugate time but has a simple explicit expression.
As a byproduct of this study we get some smoothness properties of the spherical
Hausdorff measure in the case of a generic 6-dimensional, 2-step corank 2
sub-Riemannian metric.
|
1105.5782
|
Grassmannian Predictive Coding for Limited Feedback in Multiple Antenna
Wireless Systems
|
cs.IT math.IT
|
Limited feedback is a paradigm for the feedback of channel state information
in wireless systems. In multiple antenna wireless systems, limited feedback
usually entails quantizing a source that lives on the Grassmann manifold. Most
work on limited feedback beamforming considered single-shot quantization. In
wireless systems, however, the channel is temporally correlated, which can be
used to reduce feedback requirements. Unfortunately, conventional predictive
quantization does not incorporate the non-Euclidean structure of the Grassmann
manifold. In this paper, we propose a Grassmannian predictive coding algorithm
where the differential geometric structure of the Grassmann manifold is used to
formulate a predictive vector quantization encoder and decoder. We analyze the
quantization error and derive bounds on the distortion attained by the proposed
algorithm. We apply the algorithm to a multiuser multiple-input multiple-output
wireless system and show that it improves the achievable sum rate as the
temporal correlation of the channel increases.
|
1105.5789
|
Clustering and Classification in Text Collections Using Graph Modularity
|
cs.IR cs.DL
|
A new fast algorithm for clustering and classification of large collections
of text documents is introduced. The new algorithm employs the bipartite graph
that realizes the word-document matrix of the collection. Namely, the
modularity of the bipartite graph is used as the optimization functional.
Experiments performed with the new algorithm on a number of text collections
have shown competitive quality of the clustering (classification) and
record-breaking speed.
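As a sketch of the optimization functional, the following pure-Python snippet evaluates standard Newman modularity on a toy word-document bipartite graph; the tiny two-topic collection and the partitions are invented for illustration, and the paper's algorithm optimizes this functional rather than evaluating it for a fixed partition:

```python
def modularity(edges, community):
    """Newman modularity, computed community-by-community as
    Q = sum_c (e_c / m - (d_c / (2m))**2), where e_c is the number of
    intra-community edges and d_c the total degree of community c."""
    m = len(edges)
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    q = 0.0
    for c in set(community.values()):
        internal = sum(1 for u, v in edges
                       if community[u] == c and community[v] == c)
        d_c = sum(d for node, d in deg.items() if community[node] == c)
        q += internal / m - (d_c / (2 * m)) ** 2
    return q

# Toy word-document bipartite graph with two clean topics.
edges = [("doc1", "apple"), ("doc1", "pear"), ("doc2", "cpu"), ("doc2", "ram")]
community = {"doc1": 0, "apple": 0, "pear": 0, "doc2": 1, "cpu": 1, "ram": 1}
```

The clean topic partition scores Q = 0.5 here, while a partition that mixes the two topics scores lower, which is what a modularity-maximizing clustering exploits.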
|
1105.5802
|
Sequences of Inequalities among Differences of Gini Means and Divergence
Measures
|
cs.IT math.IT
|
In 1938, Gini studied a mean having two parameters. Later, many authors
studied properties of this mean. In particular, it contains the famous means as
harmonic, geometric, arithmetic, etc. Here we considered a sequence of
inequalities arising due to particular values of each parameter of Gini's mean.
This sequence generates many nonnegative differences. Not all of them are
convex. We have studied here convexity of these differences and again
established new sequences of inequalities of these differences. Considering in
terms of probability distributions these differences, we have made connections
with some of well known divergence measures.
|
1105.5839
|
Intra-City Urban Network and Traffic Flow Analysis from GPS Mobility
Trace
|
physics.soc-ph cs.SI
|
We analyse two large-scale intra-city urban networks and traffic flows
therein measured by GPS traces of taxis in San Francisco and Shanghai. Our
results coincide with previous findings that, based purely on topological
means, it is often insufficient to characterise traffic flow. Traditional
shortest-path betweenness analysis, where shortest paths are calculated between
each pair of nodes, carries an unrealistic implicit assumption that each node
or junction in the urban network generates and attracts an equal amount of
traffic. We also argue that weighting edges based only on Euclidean distance is
inadequate, as primary roads are commonly favoured over secondary roads due to
the perceived and actual travel time required. We show that betweenness traffic
analysis can be improved by a simple extended framework which incorporates both
the notions of node weights and fastest-path betweenness. We demonstrate that
the framework is superior to traditional methods based solely on simple
topological perspectives.
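The distinction between distance-weighted and time-weighted (fastest-path) routing can be sketched with a standard Dijkstra search; the four-node network with one primary and one secondary route is a hypothetical example, not data from the paper:

```python
import heapq

def dijkstra_path(adj, src, dst):
    """adj: {node: [(neighbor, weight), ...]}; returns a min-weight
    path src -> dst as a list of nodes."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Two routes between A and B: a primary road P (longer, but faster)
# and a secondary road S (shorter in Euclidean distance, but slower).
dist_adj = {"A": [("P", 2.0), ("S", 1.5)], "P": [("A", 2.0), ("B", 2.0)],
            "S": [("A", 1.5), ("B", 1.5)], "B": [("P", 2.0), ("S", 1.5)]}
time_adj = {"A": [("P", 1.0), ("S", 2.0)], "P": [("A", 1.0), ("B", 1.0)],
            "S": [("A", 2.0), ("B", 2.0)], "B": [("P", 1.0), ("S", 2.0)]}
```

Under Euclidean-distance weights the secondary road carries the A-B traffic, while under travel-time weights the primary road does, which is exactly why fastest-path betweenness redistributes estimated load onto primary roads.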
|
1105.5849
|
Diffusion in Networks With Overlapping Community Structure
|
physics.soc-ph cs.SI
|
In this work we study diffusion in networks with community structure. We
first replicate and extend work on networks with non-overlapping community
structure. We then study diffusion on network models that have overlapping
community structure. We study contagions in the standard SIR model, and complex
contagions thought to be better models of some social diffusion processes.
Finally, we investigate diffusion on empirical networks with known overlapping
community structure, by analysing the structure of such networks, and by
simulating contagion on them. We find that simple and complex contagions can
spread fast in networks with overlapping community structure. We also find that
short paths exist through overlapping community structure on empirical
networks.
|
1105.5853
|
Orthogonal Matching Pursuit: A Brownian Motion Analysis
|
cs.IT math.IT
|
A well-known analysis of Tropp and Gilbert shows that orthogonal matching
pursuit (OMP) can recover a k-sparse n-dimensional real vector from 4 k log(n)
noise-free linear measurements obtained through a random Gaussian measurement
matrix with a probability that approaches one as n approaches infinity. This
work strengthens this result by showing that a lower number of measurements, 2
k log(n - k), is in fact sufficient for asymptotic recovery. More generally,
when the sparsity level satisfies kmin <= k <= kmax but is unknown, 2 kmax
log(n - kmin) measurements are sufficient. Furthermore, this number of
measurements is also sufficient for detection of the sparsity pattern (support)
of the vector with measurement errors provided the signal-to-noise ratio (SNR)
scales to infinity. The scaling 2 k log(n - k) exactly matches the number of
measurements required by the more complex lasso method for signal recovery with
a similar SNR scaling.
|
1105.5861
|
Optimality of binary power-control in a single cell via majorization
|
cs.IT math.IT
|
This paper considers the optimum single cell power-control maximizing the
aggregate (uplink) communication rate of the cell when there are peak power
constraints at mobile users, and a low-complexity data decoder (without
successive decoding) at the base station. It is shown, via the theory of
majorization, that the optimum power allocation is binary, which means links
are either "on" or "off". By exploiting further structure of the optimum binary
power allocation, a simple polynomial-time algorithm for finding the optimum
transmission power allocation is proposed, together with a reduced complexity
near-optimal heuristic algorithm. Sufficient conditions under which
channel-state aware time-division-multiple-access (TDMA) maximizes the
aggregate communication rate are established. Finally, a numerical study is
performed to compare and contrast the performance achieved by the optimum
binary power-control policy with other sub-optimum policies and the throughput
capacity achievable via successive decoding. It is observed that two dominant
modes of communication arise, wideband or TDMA, and that successive decoding
achieves better sum-rates only under near-perfect interference cancellation
efficiency.
|
1105.5881
|
On the random access performance of Cell Broadband Engine with graph
analysis application
|
cs.CE cs.PF
|
The Cell Broadband Engine (Cell/BE) processor has a unique memory access
architecture besides its powerful computing engines. Many computation-intensive
applications have been ported to Cell/BE successfully, but memory-intensive
applications are rarely investigated except for several micro-benchmarks. Since
Cell/BE has a powerful software-visible DMA engine, this paper studies whether
Cell/BE is suitable for applications with a large amount of random memory
accesses. Two benchmarks, GUPS and SSCA#2, are used; the latter is a rather
complex one that is representative of real-world graph analysis applications.
We find that both benchmarks perform well on the Cell/BE-based IBM QS20/22.
Compared with two conventional multi-processor systems with the same
core/thread count, GUPS is about 40-80% faster and SSCA#2 about 17-30% faster.
The dynamic load balancing and software pipelining used to optimize SSCA#2 are
introduced. Based on the experiments, the potential of Cell/BE for random
access is analyzed in detail, as well as the limitations of its memory
controller, atomic engine, and TLB management. Our research shows that although
more programming effort is needed, Cell/BE has potential for irregular memory
access applications.
|
1105.5887
|
Efficient sampling of high-dimensional Gaussian fields: the
non-stationary / non-sparse case
|
stat.CO cs.LG stat.AP
|
This paper is devoted to the problem of sampling Gaussian fields in high
dimension. Solutions exist for two specific structures of the inverse covariance:
sparse and circulant. The proposed approach is valid in a more general case,
especially as it arises in inverse problems. It relies on a
perturbation-optimization principle: adequate stochastic perturbation of a
criterion and optimization of the perturbed criterion. It is shown that the
criterion minimizer is a sample of the target density. The motivation in
inverse problems is related to general (non-convolutive) linear observation
models and their resolution in a Bayesian framework implemented through
sampling algorithms when existing samplers are not feasible. It finds a direct
application in myopic and/or unsupervised inversion as well as in some
non-Gaussian inversion. An illustration focused on hyperparameter estimation
for super-resolution problems assesses the effectiveness of the proposed
approach.
|
1105.5895
|
Percolation and Connectivity on the Signal to Interference Ratio Graph
|
cs.IT math.IT
|
A wireless communication network is considered where any two nodes are
connected if the signal-to-interference ratio (SIR) between them is greater
than a threshold. Assuming that the nodes of the wireless network are
distributed as a Poisson point process (PPP), percolation (unbounded connected
cluster) on the resulting SIR graph is studied as a function of the density of
the PPP. For both the path-loss as well as path-loss plus fading model of
signal propagation, it is shown that for a small enough threshold, there exists
a closed interval of densities for which percolation happens with non-zero
probability. Conversely, for the path-loss model of signal propagation, it is
shown that for a large enough threshold, there exists a closed interval of
densities for which the probability of percolation is zero. Restricting all
nodes to lie in a unit square, connectivity properties of the SIR graph are
also studied. Assigning separate frequency bands or time-slots proportional to
the logarithm of the number of nodes to different nodes for
transmission/reception is sufficient to guarantee connectivity in the SIR
graph.
|
1105.5900
|
Ethane: A Heterogeneous Parallel Search Algorithm for Heterogeneous
Platforms
|
cs.NE cs.DC
|
In this paper we present Ethane, a parallel search algorithm specifically
designed for execution on heterogeneous hardware environments. With Ethane
we propose an algorithm inspired by the structure of the chemical compound of
the same name, implementing a heterogeneous island model based on the structure
of its chemical bonds. We also propose a schema for describing a family of
parallel heterogeneous metaheuristics inspired by the structure of hydrocarbons
in Nature, HydroCM (HydroCarbon inspired Metaheuristics), establishing a
resemblance between atoms and computers, and between chemical bonds and
communication links. Our goal is to gracefully match computers of different
power to algorithms of different behavior (GA and SA in this study), all of them
collaborating to solve the same problem. The analysis will show that Ethane,
though simple, can solve search problems in a faster and more robust way than
well-known panmictic and distributed algorithms very popular in the literature.
|
1105.5903
|
Probabilistic Analysis of the Network Reliability Problem on a Random
Graph Ensemble
|
cs.IT cs.DM math.IT
|
In the field of computer science, the network reliability problem for
evaluating the network failure probability has been extensively investigated.
For a given undirected graph $G$, the network failure probability is the
probability that edge failures (i.e., edge erasures) make $G$ unconnected. Edge
failures are assumed to occur independently with the same probability. The main
contributions of the present paper are the upper and lower bounds on the
expected network failure probability. We herein assume a simple random graph
ensemble that is closely related to the Erd\H{o}s-R\'{e}nyi random graph
ensemble. These upper and lower bounds exhibit the typical behavior of the
network failure probability. The proof is based on the fact that the cut-set
space of $G$ is a linear space over $\Bbb F_2$ spanned by the incident matrix
of $G$. The present study shows a close relationship between the ensemble
analysis of the network failure probability and the ensemble analysis of the
error detection probability of LDGM codes with column weight 2.
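For intuition, the network failure probability can be computed exactly for small graphs by brute force over all erasure patterns; this sketch, with an illustrative triangle graph, is not the paper's ensemble analysis but the quantity it bounds:

```python
from itertools import combinations

def failure_probability(n_nodes, edges, eps):
    """Exact probability that independent edge erasures (each with
    probability eps) disconnect the graph, by enumerating all
    2^|E| erasure patterns."""
    def connected(kept):
        adj = {v: set() for v in range(n_nodes)}
        for u, v in kept:
            adj[u].add(v)
            adj[v].add(u)
        stack, seen = [0], {0}
        while stack:                      # DFS from node 0
            u = stack.pop()
            for w in adj[u] - seen:
                seen.add(w)
                stack.append(w)
        return len(seen) == n_nodes

    E = len(edges)
    total = 0.0
    for k in range(E + 1):
        for erased in combinations(range(E), k):
            kept = [e for i, e in enumerate(edges) if i not in erased]
            if not connected(kept):
                total += eps ** k * (1 - eps) ** (E - k)
    return total

# Triangle graph: any two surviving edges keep it connected.
triangle = [(0, 1), (1, 2), (0, 2)]
```

For the triangle, failure requires at least two erasures, so the failure probability is 3\*eps^2\*(1-eps) + eps^3; at eps = 0.5 this equals 0.5. The exponential cost of this enumeration is precisely what motivates the ensemble bounds in the paper.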
|
1105.5912
|
Need to categorize: A comparative look at the categories of the
Universal Decimal Classification system (UDC) and Wikipedia
|
cs.DL cs.IR physics.soc-ph
|
This study analyzes the differences between the category structure of the
Universal Decimal Classification (UDC) system (which is one of the widely used
library classification systems in Europe) and Wikipedia. In particular, we
compare the emerging structure of category-links to the structure of classes in
the UDC. With this comparison we would like to scrutinize the question of how
knowledge maps of the same domain differ when they are created socially
(i.e., Wikipedia) as opposed to formally (UDC) using
classification theory. As a case study, we focus on the category of "Arts".
|
1105.5939
|
Airborne TDMA for High Throughput and Fast Weather Conditions
Notification
|
cs.CE
|
As air traffic grows significantly, aircraft accidents increase. Many
aviation accidents could be prevented if the precise aircraft positions and
weather conditions on the aircraft's route were known. Existing studies propose
determining the precise aircraft positions via a VHF channel with an air-to-air
radio relay system that is based on mobile ad-hoc networks. However, due to the
long propagation delay, the existing TDMA MAC schemes underutilize the
networks. The existing TDMA MAC sends data and receives ACK in one time slot,
which requires two guard times in one time slot. Since aeronautical
communication spans significant distances, the guard time occupies a
significantly large portion of the slot. To solve this problem, we propose a
mechanism that piggybacks the ACK. Our proposed MAC has one guard time in one time
slot, which enables the transmission of more data. Using this additional data,
we can send weather conditions that pertain to the aircraft's current position.
Our analysis shows that this proposed MAC performs better than the existing
MAC, since it offers better throughput and network utilization. In addition,
our weather condition notification model achieves a much lower transmission
delay than an HF (high frequency) voice communication.
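The throughput argument reduces to simple slot accounting; the following sketch uses illustrative numbers (a 5 ms slot and a 1.5 ms guard time are assumptions, not the paper's parameters):

```python
def slot_efficiency(slot_ms, guard_ms, n_guards):
    """Fraction of a TDMA slot left for payload after guard times.
    Durations are illustrative; real aeronautical parameters differ."""
    payload = slot_ms - n_guards * guard_ms
    if payload <= 0:
        raise ValueError("guard times exceed the slot length")
    return payload / slot_ms

# The guard time must cover the worst-case propagation delay; at a
# 300 km range the one-way delay alone is 300e3 / 3e8 = 1 ms.
```

With two guard times per slot the payload fraction here is 0.4, while piggybacking the ACK so that only one guard time is needed raises it to 0.7, which is the source of the throughput gain claimed above.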
|
1105.5941
|
Predicting the Structure of Alloys using Genetic Algorithms
|
cond-mat.mtrl-sci cs.NE physics.comp-ph
|
We discuss a novel genetic algorithm that can be used to find global minima
on the potential energy surface of disordered ceramics and alloys using a
real-space symmetry-adapted crossover. Due to the high number of symmetrically
equivalent solutions of many alloys, conventional genetic algorithms using
reasonable population sizes are unable to locate the global minima for even the
smallest systems. We demonstrate the superior performance of symmetry-adapted
crossover by comparison with a conventional GA for
finding the global minima of two binary Ising-type alloys that either order or
phase separate at low temperature. Comparison of different representations and
crossover operations shows that the use of real-space crossover outperforms
crossover operators working on binary representations by several orders of
magnitude.
|
1105.5951
|
Performance of Short-Commit in Extreme Database Environment
|
cs.DB
|
Atomic commit protocols are used where data integrity is more important than
data availability. Two-phase commit (2PC) is the standard commit protocol for
commercial database management systems. To reduce certain drawbacks of the 2PC
protocol, several variants of it have been proposed. The
Short-Commit protocol was developed with the objective of achieving a low
transaction commitment cost together with non-blocking capability. In this paper we
briefly explain the execution pattern of the Short-Commit protocol. Experimental
analysis and results are presented to support the claim that Short-Commit can
work efficiently in an extreme database environment.
|
1105.5975
|
Multiple Access Channel with States Known Noncausally at One Encoder and
Only Strictly Causally at the Other Encoder
|
cs.IT math.IT
|
We consider a two-user state-dependent multiaccess channel in which the
states of the channel are known non-causally to one of the encoders and only
strictly causally to the other encoder. Both encoders transmit a common message
and, in addition, the encoder that knows the states non-causally transmits an
individual message. We study the capacity region of this communication model.
In the discrete memoryless case, we establish inner and outer bounds on the
capacity region. Although the encoder that sends both messages knows the states
fully, we show that the strictly causal knowledge of these states at the other
encoder can be beneficial for this encoder, and in general enlarges the
capacity region. Furthermore, we find an explicit characterization of the
capacity in the case in which the two encoders transmit only the common
message. In the Gaussian case, we characterize the capacity region for the
model with individual message as well. Our converse proof in this case shows
that, for this model, strictly causal knowledge of the state at one of the
encoders does not increase capacity if the other is informed non-causally, a
result which sheds more light on the utility of conveying a compressed version
of the state to the decoder in recent results by Lapidoth and Steinberg on a
multiaccess model with only strictly causal state at both encoders and
independent messages.
|
1105.5981
|
Modulation for MIMO Networks with Several Users
|
cs.IT math.IT
|
In a recent work, a capacity-achieving scheme for the common-message two-user
MIMO broadcast channel, based on single-stream coding and decoding, was
described. This was obtained via a novel joint unitary triangularization which
is applied to the corresponding channel matrices. In this work, the
triangularization is generalized, to any (finite) number of matrices, allowing
multi-user applications. To that end, multiple channel uses are jointly
treated, in a manner reminiscent of space-time coding. As opposed to the
two-user case, in the general case there does not always exist a perfect
(capacity-achieving) solution. However, a nearly optimal scheme (with vanishing
loss in the limit of large blocks) always exists. Common-message broadcasting
is but one example of communication networks with MIMO links which can be
solved using an approach coined "Network Modulation"; the extension beyond two
links carries over to these problems.
|
1105.5986
|
A Modeling Framework for Gossip-based Information Spread
|
cs.DC cs.DM cs.IT cs.PF math.IT
|
We present an analytical framework for gossip protocols based on the pairwise
information exchange between interacting nodes. This framework allows for
studying the impact of protocol parameters on the performance of the protocol.
Previously, gossip-based information dissemination protocols have been analyzed
under the assumption of perfect, lossless communication channels. We extend our
framework for the analysis of networks with lossy channels. We show how the
presence of message loss, coupled with specific topology configurations, impacts
the expected behavior of the protocol. We validate the obtained models against
simulations for two protocols.
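A minimal simulation of pairwise push gossip with lossy channels, in the spirit of the framework; the complete-graph topology and all parameters are illustrative assumptions:

```python
import random

def push_gossip(n, rounds, loss, seed=0):
    """Simulate push gossip on a complete graph of n nodes: each round,
    every informed node pushes the rumor to one uniformly random other
    node; each push is independently lost with probability `loss`.
    Returns the number of informed nodes after the given rounds."""
    rng = random.Random(seed)
    informed = {0}
    for _ in range(rounds):
        new = set()
        for u in informed:
            v = rng.randrange(n)
            while v == u:                 # pick a partner other than u
                v = rng.randrange(n)
            if rng.random() >= loss:      # push succeeds
                new.add(v)
        informed |= new
    return len(informed)
```

With lossless channels the rumor reaches everyone within O(log n) rounds with high probability; raising `loss` slows or stalls the spread, which is the effect the extended framework models analytically.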
|
1105.6001
|
A Call to Arms: Revisiting Database Design
|
cs.DB
|
Good database design is crucial to obtain a sound, consistent database, and -
in turn - good database design methodologies are the best way to achieve the
right design. These methodologies are taught to most Computer Science
undergraduates, as part of any Introduction to Database class. They can be
considered part of the "canon", and indeed, the overall approach to database
design has been unchanged for years. Moreover, none of the major database
research assessments identify database design as a strategic research
direction.
Should we conclude that database design is a solved problem?
Our thesis is that database design remains a critical unsolved problem.
Hence, it should be the subject of more research. Our starting point is the
observation that traditional database design is not used in practice - and if
it were used it would result in designs that are not well adapted to current
environments. In short, database design has failed to keep up with the times.
In this paper, we put forth arguments to support our viewpoint, analyze the
root causes of this situation and suggest some avenues of research.
|
1105.6009
|
Noncoherent SIMO Pre-Log via Resolution of Singularities
|
cs.IT math.AG math.IT
|
We establish a lower bound on the noncoherent capacity pre-log of a
temporally correlated Rayleigh block-fading single-input multiple-output (SIMO)
channel. Our result holds for arbitrary rank Q of the channel correlation
matrix, arbitrary block-length L > Q, and arbitrary number of receive antennas
R, and includes the result in Morgenshtern et al. (2010) as a special case. It
is well known that the capacity pre-log for this channel in the single-input
single-output (SISO) case is given by 1-Q/L, where Q/L is the penalty incurred
by channel uncertainty. Our result reveals that this penalty can be reduced to
1/L by adding only one receive antenna, provided that L \geq 2Q - 1 and the
channel correlation matrix satisfies mild technical conditions. The main
technical tool used to prove our result is Hironaka's celebrated theorem on
resolution of singularities in algebraic geometry.
|
1105.6010
|
Synchronous Control of Reconfiguration in Fractal Component-based
Systems -- a Case Study
|
cs.SE cs.SY
|
In the context of component-based embedded systems, the management of dynamic
reconfiguration in adaptive systems is an increasingly important feature. The
Fractal component-based framework, and its industrial instantiation MIND,
provide support for control operations in the lifecycle of components.
Nevertheless, the use of complex and integrated architectures makes these
reconfiguration operations difficult for programmers to handle. To address this
issue, we propose to use synchronous languages,
which are a complete approach to the design of reactive systems, based on
behavior models in the form of transition systems. Furthermore, the design of
closed-loop reactive managers of reconfigurations can benefit from formal tools
like Discrete Controller Synthesis. In this paper we describe an approach to
concretely integrate synchronous reconfiguration managers in Fractal
component-based systems. We describe how to model the state space of the
control problem, and how to specify the control objectives. We describe the
implementation of the resulting manager with the Fractal/Cecilia programming
environment, taking advantage of the Comete distributed middleware. We
illustrate and validate it with the case study of the Comanche HTTP server on a
multi-core execution platform.
|
1105.6014
|
Neural Networks for Emotion Classification
|
cs.CV
|
It is argued that for the computer to be able to interact with humans, it
needs to have the communication skills of humans. One of these skills is the
ability to understand the emotional state of the person. This thesis describes
a neural network-based approach for emotion classification. We learn a
classifier that can recognize six basic emotions with an average accuracy of
77% over the Cohn-Kanade database. The novelty of this work is that instead of
empirically selecting the parameters of the neural network (the learning
rate, activation function parameter, momentum, the number of nodes per
layer, etc.), we developed a strategy that automatically selects a
comparatively better combination of these parameters. We also introduce
another way to perform back-propagation: instead of using the partial
derivatives of the error function, we use an optimization algorithm, namely
Powell's direction set method, to minimize the error function. We were also
interested in constructing an authentic emotion database, an important task
because no such database is currently available. Finally, we perform several experiments
and show that our neural network approach can be successfully used for emotion
recognition.
|
1105.6033
|
A New Outer-Bound via Interference Localization and the Degrees of
Freedom Regions of MIMO Interference Networks with no CSIT
|
cs.IT math.IT
|
The two-user multi-input, multi-output (MIMO) interference and cognitive
radio channels are studied under the assumption of no channel state information
at the transmitter (CSIT) from the degrees of freedom (DoF) region perspective.
With $M_i$ and $N_i$ denoting the number of antennas at transmitter $i$ and
receiver $i$ respectively, the DoF regions of the MIMO interference channel
were recently characterized by Huang et al., Zhu and Guo, and by the authors of
this paper for all values of numbers of antennas except when $\min(M_1,N_1) >
N_2 > M_2$ (or $\min(M_2,N_2) > N_1 > M_1$). This latter case was solved more
recently by Zhu and Guo who provided a tight outer-bound. Here, a simpler and
more widely applicable proof of that outer-bound is given based on the idea of
interference localization. Using it, the DoF region is also established for the
class of MIMO cognitive radio channels when $\min(M_1+M_2,N_1) > N_2 > M_2$
(with the second transmitter cognitive) -- the only class for which the inner
and outer bounds previously obtained by the authors were not tight -- thereby
completing the DoF region characterization of the general 2-user MIMO cognitive
radio channel as well.
|
1105.6041
|
The Perceptron with Dynamic Margin
|
cs.LG
|
The classical perceptron rule provides a varying upper bound on the maximum
margin, namely the length of the current weight vector divided by the total
number of updates up to that time. Requiring that the perceptron updates its
internal state whenever the normalized margin of a pattern is found not to
exceed a certain fraction of this dynamic upper bound we construct a new
approximate maximum margin classifier called the perceptron with dynamic margin
(PDM). We demonstrate that PDM converges in a finite number of steps and derive
an upper bound on their number. We also experimentally compare PDM with other
perceptron-like algorithms and support vector machines on hard-margin tasks
involving linear kernels, which are equivalent to 2-norm soft margin.
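The update rule described in this abstract can be sketched as follows. This is
a minimal illustration of the stated rule (update whenever the normalized
margin does not exceed a fraction of the dynamic bound ||w|| / updates); the
toy data and the fraction value are illustrative assumptions, not the paper's
experimental setup.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def pdm_train(X, y, frac=0.5, max_epochs=100):
    """Perceptron with dynamic margin (sketch): update whenever the
    normalized margin of a pattern does not exceed `frac` times the
    dynamic upper bound ||w|| / (number of updates so far)."""
    w = [0.0] * len(X[0])
    updates = 0
    for _ in range(max_epochs):
        changed = False
        for xi, yi in zip(X, y):
            norm = math.sqrt(dot(w, w))
            if norm == 0.0:
                margin, bound = -1.0, 0.0  # force the first update
            else:
                margin = yi * dot(w, xi) / norm
                bound = norm / updates  # dynamic bound on the max margin
            if margin <= frac * bound:
                w = [wj + yi * xj for wj, xj in zip(w, xi)]
                updates += 1
                changed = True
        if not changed:
            break
    return w

# Toy linearly separable data (illustrative)
X = [[2.0, 1.0], [1.0, 2.0], [-2.0, -1.0], [-1.0, -2.0]]
y = [1, 1, -1, -1]
w = pdm_train(X, y)
print(w)
```

On this toy set the learned weight vector separates all four patterns.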
|
1105.6060
|
Alignment of Microtubule Imagery
|
cs.CV
|
This paper presents preliminary work aimed at simulating and visualizing the
growth process of a tiny structure inside the cell---the microtubule. The
difficulty of recording the process lies in the fact that the
tissue-preparation method for electron microscopes is highly destructive to
live cells. Our approach is to take pictures of microtubules at
different time slots and then appropriately combine these images into a
coherent video. Experimental results are given on real data.
|
1105.6061
|
Distributed Detection/Isolation Procedures for Quickest Event Detection
in Large Extent Wireless Sensor Networks
|
stat.AP cs.IT cs.NI math.IT
|
We study a problem of distributed detection of a stationary point event in a
large extent wireless sensor network ($\wsn$), where the event influences the
observations of the sensors only in the vicinity of where it occurs. An event
occurs at an unknown time and at a random location in the coverage region (or
region of interest ($\ROI$)) of the $\wsn$. We consider a general sensing model
in which the effect of the event at a sensor node depends on the distance
between the event and the sensor node; in particular, in the Boolean sensing
model, all sensors in a disk of a given radius around the event are equally
affected. Following the prior work reported in
\cite{nikiforov95change_isolation},
\cite{nikiforov03lower-bound-for-det-isolation},
\cite{tartakovsky08multi-decision}, {\em the problem is formulated as that of
detecting the event and locating it to a subregion of the $\ROI$ as early as
possible under the constraints that the average run length to false alarm
($\tfa$) is bounded below by $\gamma$, and the probability of false isolation
($\pfi$) is bounded above by $\alpha$}, where $\gamma$ and $\alpha$ are target
performance requirements. In this setting, we propose distributed procedures
for event detection and isolation (namely $\mx$, $\all$, and $\hall$), based on
the local fusion of $\CUSUM$s at the sensors. For these procedures, we obtain
bounds on the maximum mean detection/isolation delay ($\add$), and on $\tfa$
and $\pfi$, and thus provide an upper bound on $\add$ as
$\min\{\gamma,1/\alpha\} \to \infty$. For the Boolean sensing model, we show
that an asymptotic upper bound on the maximum mean detection/isolation delay of
our distributed procedure scales with $\gamma$ and $\alpha$ in the same way as
the asymptotically optimal centralised procedure
\cite{nikiforov03lower-bound-for-det-isolation}.
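The local decision statistic underlying the proposed procedures is a CUSUM
run at each sensor. The sketch below illustrates such a local CUSUM for a
Gaussian mean-shift model; the pre/post-change means and the threshold are
illustrative assumptions, and the paper's fusion rules operate on such local
statistics rather than on this exact model.

```python
def cusum(samples, mu0=0.0, mu1=1.0, sigma=1.0):
    """Running CUSUM statistic W_k = max(0, W_{k-1} + LLR(x_k))."""
    w, path = 0.0, []
    for x in samples:
        # log-likelihood ratio of N(mu1, sigma^2) vs N(mu0, sigma^2)
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        w = max(0.0, w + llr)
        path.append(w)
    return path

def alarm_time(samples, threshold, **kw):
    """First time the CUSUM crosses the threshold (None if never)."""
    for k, w in enumerate(cusum(samples, **kw)):
        if w >= threshold:
            return k
    return None

# Pre-change samples near 0, post-change samples near the shifted mean 1
obs = [0.1, -0.2, 0.0, 1.1, 0.9, 1.2, 1.0]
print(alarm_time(obs, threshold=2.0))
```

The statistic stays near zero before the change and drifts upward after it,
which is what the local fusion rules (MAX, ALL, HALL) exploit.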
|
1105.6084
|
RASID: A Robust WLAN Device-free Passive Motion Detection System
|
cs.NI cs.CV
|
WLAN device-free passive (DfP) indoor localization is an emerging technology
enabling the localization of entities that neither carry any devices nor
participate actively in the localization process, using the already installed
wireless infrastructure. This technology is useful for a variety of
applications such as intrusion detection, smart homes and border protection. We
present the design, implementation and evaluation of RASID, a DfP system for
human motion detection. RASID combines different modules for statistical
anomaly detection while adapting to changes in the environment to provide
accurate, robust, and low-overhead detection of human activities using standard
WiFi hardware. Evaluation of the system in two different testbeds shows that it
can achieve an accurate detection capability in both environments with an
F-measure of at least 0.93. In addition, the high accuracy and low overhead
performance are robust to changes in the environment as compared to the current
state of the art DfP detection systems. We also relay the lessons learned
during building our system and discuss future research directions.
|
1105.6115
|
On the Capacity of Multiplicative Finite-Field Matrix Channels
|
cs.IT math.IT
|
This paper deals with the multiplicative finite-field matrix channel, a
discrete memoryless channel whose input and output are matrices (over a finite
field) related by a multiplicative transfer matrix. The model considered here
assumes that all transfer matrices with the same rank are equiprobable, so that
the channel is completely characterized by the rank distribution of the
transfer matrix. This model is seen to be more flexible than previously
proposed ones in describing random linear network coding systems subject to
link erasures, while still being sufficiently simple to allow tractability. The
model is also conservative in the sense that its capacity provides a lower
bound on the capacity of any channel with the same rank distribution. A main
contribution is to express the channel capacity as the solution of a convex
optimization problem which can be easily solved by numerical computation. For
the special case of constant-rank input, a closed-form expression for the
capacity is obtained. The behavior of the channel for asymptotically large
field size or packet length is studied, and it is shown that constant-rank
input suffices in this case. Finally, it is proved that the well-known approach
of treating inputs and outputs as subspaces is information-lossless even in
this more general model.
|
1105.6118
|
Mapping Relational Operations onto Hypergraph Model
|
cs.DB cs.PL
|
The relational model is the most commonly used data model for storing large
datasets, perhaps due to the simplicity of the tabular format which had
revolutionized database management systems. However, many real world objects
are recursive and associative in nature which makes storage in the relational
model difficult. The hypergraph model is a generalization of a graph model,
where each hypernode can be made up of other nodes or graphs and each hyperedge
can be made up of one or more edges. It may address the recursive and
associative limitations of the relational model. However, the hypergraph model
is non-tabular and thus loses the simplicity of the relational model. In this study,
we consider the means to convert a relational model into a hypergraph model in
two layers. At the bottom layer, each relational tuple can be considered as a
star graph in which the primary key node is surrounded by non-primary-key
attribute nodes. At the top layer, each tuple is a hypernode, and a relation is
a set of hypernodes. We present a reference implementation of the relational
operators (project, rename, select, inner join, natural join, left join, right
join, outer join and Cartesian join) on a hypergraph model. Using a simple
example, we demonstrate that a relation and relational operators can be
implemented on this hypergraph model.
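The two-layer mapping described above can be sketched in a few lines: each
tuple becomes a star graph centred on its primary-key node, and relational
operators act on the set of such hypernodes. The data structures and function
names here are illustrative choices, not the paper's reference implementation.

```python
def make_relation(pk, rows):
    """Each tuple becomes a star graph: the primary-key node (dict key)
    surrounded by its non-primary-key attribute nodes (dict value)."""
    return {"pk": pk,
            "nodes": {row[pk]: {k: v for k, v in row.items() if k != pk}
                      for row in rows}}

def select(rel, pred):
    """Relational selection: keep hypernodes whose star satisfies pred."""
    return {"pk": rel["pk"],
            "nodes": {k: v for k, v in rel["nodes"].items()
                      if pred({rel["pk"]: k, **v})}}

def project(rel, attrs):
    """Relational projection onto `attrs` (primary key kept as centre)."""
    return {"pk": rel["pk"],
            "nodes": {k: {a: v[a] for a in attrs}
                      for k, v in rel["nodes"].items()}}

emp = make_relation("id", [
    {"id": 1, "name": "Ann", "dept": "CS"},
    {"id": 2, "name": "Bob", "dept": "EE"},
])
cs = select(emp, lambda t: t["dept"] == "CS")
print(project(cs, ["name"]))
```

Joins would follow the same pattern, matching hypernodes across two relations
on shared attribute nodes.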
|
1105.6120
|
Distributed Spectrum Sensing with Sequential Ordered Transmissions to a
Cognitive Fusion Center
|
cs.IT math.IT
|
Cooperative spectrum sensing is a robust strategy that enhances the detection
probability of primary licensed users. However, a large number of detectors
reporting to a fusion center for a final decision causes significant delay and
also presumes the availability of unreasonable communication resources at the
disposal of a network searching for spectral opportunities. In this work, we
employ the idea of sequential detection to obtain a quick, yet reliable,
decision regarding primary activity. Local detectors take measurements, and
only a few of them transmit the log likelihood ratios (LLR) to a fusion center
in descending order of LLR magnitude. The fusion center runs a sequential test
with a maximum imposed on the number of sensors that can report their LLR
measurements. We calculate the detection thresholds using two methods. The
first achieves the same probability of error as the optimal block detector. In
the second, an objective function is constructed and decision thresholds are
obtained via backward induction to optimize this function. The objective
function is related directly to the primary and secondary throughputs with
inbuilt privilege for primary operation. Simulation results demonstrate the
enhanced performance of the approaches proposed in this paper. We also
investigate the case of fading channels between the local sensors and the
fusion center, and the situation in which the sensing cost is negligible.
|
1105.6124
|
Reasoning on Interval and Point-based Disjunctive Metric Constraints in
Temporal Contexts
|
cs.AI
|
We introduce a temporal model for reasoning on disjunctive metric constraints
on intervals and time points in temporal contexts. This temporal model is
composed of a labeled temporal algebra and its reasoning algorithms. The
labeled temporal algebra defines labeled disjunctive metric point-based
constraints, where each disjunct in each input disjunctive constraint is
uniquely associated with a label. Reasoning algorithms manage labeled
constraints, associated label lists, and sets of mutually inconsistent
disjuncts. These algorithms guarantee consistency and obtain a minimal network.
Additionally, constraints can be organized in a hierarchy of alternative
temporal contexts. Therefore, we can reason on context-dependent disjunctive
metric constraints on intervals and points. Moreover, the model is able to
represent non-binary constraints, such that logical dependencies on disjuncts
in constraints can be handled. The computational cost of reasoning algorithms
is exponential in accordance with the underlying problem complexity, although
some improvements are proposed.
|
1105.6148
|
Overcoming Misleads In Logic Programs by Redefining Negation
|
cs.AI
|
Negation as failure and incomplete information in logic programs have been
studied by many researchers. In order to explain HOW a negated conclusion was
reached, we introduce and prove a different way of negating facts to
overcome misleads in logic programs. Negating facts can be achieved by asking
the user for constants that do not appear elsewhere in the knowledge base.
|
1105.6150
|
A Strictly Improved Achievable Region for Multiple Descriptions Using
Combinatorial Message Sharing
|
cs.IT math.IT
|
We recently proposed a new coding scheme for the L-channel multiple
descriptions (MD) problem for general sources and distortion measures involving
`Combinatorial Message Sharing' (CMS) [7] leading to a new achievable
rate-distortion region. Our objective in this paper is to establish that this
coding scheme strictly subsumes the most popular region for this problem due to
Venkataramani, Kramer and Goyal (VKG) [3]. In particular, we show that for a
binary symmetric source under Hamming distortion measure, the CMS scheme
provides a strictly larger region for all L>2. The principle of the CMS coding
scheme is to include a common message in every subset of the descriptions,
unlike the VKG scheme which sends a single common message in all the
descriptions. In essence, we show that allowing for a common codeword in every
subset of descriptions provides better freedom in coordinating the messages
which can be exploited constructively to achieve points outside the VKG region.
|
1105.6162
|
A statistical learning algorithm for word segmentation
|
cs.CL
|
In natural speech, the speaker does not pause between words, yet a human
listener somehow perceives this continuous stream of phonemes as a series of
distinct words. The detection of boundaries between spoken words is an instance
of a general capability of the human neocortex to remember and to recognize
recurring sequences. This paper describes a computer algorithm that is designed
to solve the problem of locating word boundaries in blocks of English text from
which the spaces have been removed. This problem avoids the complexities of
speech processing but requires similar capabilities for detecting recurring
sequences. The algorithm relies entirely on statistical relationships between
letters in the input stream to infer the locations of word boundaries. A
Viterbi trellis is used to simultaneously evaluate a set of hypothetical
segmentations of a block of adjacent words. This technique improves accuracy
but incurs a small latency between the arrival of letters in the input stream
and the sending of words to the output stream. The source code for a C++
version of this algorithm is presented in an appendix.
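The dynamic-programming search over hypothetical segmentations can be
sketched as follows. This is a Viterbi-style illustration in Python using a
fixed word-probability table, not the paper's C++ algorithm, which learns
its statistics online from the letter stream.

```python
import math

def segment(text, word_probs, max_len=10):
    """Most probable segmentation of spaceless `text` into known words."""
    n = len(text)
    best = [float("-inf")] * (n + 1)  # best log-probability ending at i
    best[0] = 0.0
    back = [0] * (n + 1)              # backpointer to previous boundary
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            w = text[j:i]
            if w in word_probs and best[j] + math.log(word_probs[w]) > best[i]:
                best[i] = best[j] + math.log(word_probs[w])
                back[i] = j
    if best[n] == float("-inf"):
        return None  # no segmentation using only known words
    words, i = [], n
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))

# Illustrative probability table, including a misleading entry ("theca")
probs = {"the": 0.3, "cat": 0.2, "sat": 0.2, "on": 0.2, "mat": 0.1,
         "theca": 0.01, "tsat": 0.01}
print(segment("thecatsatonthemat", probs))
```

The trellis correctly rejects the greedy but low-probability split starting
with "theca" in favour of "the cat sat on the mat".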
|
1105.6163
|
Assisted Common Information: Further Results
|
cs.IT cs.CR math.IT
|
We presented assisted common information as a generalization of
G\'acs-K\"orner (GK) common information at ISIT 2010. The motivation for our
formulation was to improve upperbounds on the efficiency of protocols for
secure two-party sampling (which is a form of secure multi-party computation).
Our upperbound was based on a monotonicity property of a rate-region (called
the assisted residual information region) associated with the assisted common
information formulation. In this note we present further results. We explore
the connection of assisted common information with the Gray-Wyner system. We
show that the assisted residual information region and the Gray-Wyner region
are connected by a simple relationship: the assisted residual information
region is the increasing hull of the Gray-Wyner region under an affine map.
Several known relationships between GK common information and Gray-Wyner system
fall out as consequences of this. Quantities which arise in other source coding
contexts acquire new interpretations. In previous work we showed that assisted
common information can be used to derive upperbounds on the rate at which a
pair of parties can {\em securely sample} correlated random variables, given
correlated random variables from another distribution. Here we present an
example where the bound derived using assisted common information is much
better than previously known bounds, and in fact is tight. This example
considers correlated random variables defined in terms of standard variants of
oblivious transfer, and is interesting on its own as it answers a natural
question about these cryptographic primitives.
|
1105.6164
|
How to Construct Polar Codes
|
cs.IT math.IT
|
A method for efficiently constructing polar codes is presented and analyzed.
Although polar codes are explicitly defined, straightforward construction is
intractable since the resulting polar bit-channels have an output alphabet that
grows exponentially with the code length. Thus the core problem that needs to be
solved is that of faithfully approximating a bit-channel with an intractably
large alphabet by another channel having a manageable alphabet size. We devise
two approximation methods which "sandwich" the original bit-channel between a
degraded and an upgraded version thereof. Both approximations can be
efficiently computed, and turn out to be extremely close in practice. We also
provide theoretical analysis of our construction algorithms, proving that for
any fixed $\epsilon > 0$ and all sufficiently large code lengths $n$, polar
codes whose rate is within $\epsilon$ of channel capacity can be constructed in
time and space that are both linear in $n$.
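For the binary erasure channel (BEC) the bit-channel parameters admit an
exact closed-form recursion, so construction needs no approximation. The
sketch below shows this special case; the degrading/upgrading quantization
analyzed in the paper is what handles general channels, where no such exact
recursion exists.

```python
def bec_bit_channels(n_log2, eps):
    """Erasure probabilities of the 2**n_log2 polar bit-channels of BEC(eps).
    Each polarization step maps z to the pair (2z - z^2, z^2)."""
    z = [eps]
    for _ in range(n_log2):
        z = [v for p in z for v in (2 * p - p * p, p * p)]
    return z

def construct(n_log2, eps, k):
    """Indices of the k most reliable bit-channels (the information set)."""
    z = bec_bit_channels(n_log2, eps)
    return sorted(sorted(range(len(z)), key=lambda i: z[i])[:k])

# Rate-1/2 polar code of length 8 over BEC(0.5)
print(construct(3, 0.5, 4))
```

The remaining indices are frozen; for general channels the same selection is
made using the degraded/upgraded approximations of each bit-channel.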
|