| id | title | categories | abstract |
|---|---|---|---|
| 1107.0026 | IDL-Expressions: A Formalism for Representing and Parsing Finite Languages in Natural Language Processing | cs.AI | We propose a formalism for representation of finite languages, referred to as the class of IDL-expressions, which combines concepts that were only considered in isolation in existing formalisms. The suggested applications are in natural language processing, more specifically in surface natural language generation and in machine translation, where a sentence is obtained by first generating a large set of candidate sentences, represented in a compact way, and then by filtering such a set through a parser. We study several formal properties of IDL-expressions and compare this new formalism with more standard ones. We also present a novel parsing algorithm for IDL-expressions and prove a non-trivial upper bound on its time complexity. |
| 1107.0027 | Effective Dimensions of Hierarchical Latent Class Models | cs.AI | Hierarchical latent class (HLC) models are tree-structured Bayesian networks where leaf nodes are observed while internal nodes are latent. There are no theoretically well justified model selection criteria for HLC models in particular and Bayesian networks with latent nodes in general. Nonetheless, empirical studies suggest that the BIC score is a reasonable criterion to use in practice for learning HLC models. Empirical studies also suggest that sometimes model selection can be improved if standard model dimension is replaced with effective model dimension in the penalty term of the BIC score. Effective dimensions are difficult to compute. In this paper, we prove a theorem that relates the effective dimension of an HLC model to the effective dimensions of a number of latent class models. The theorem makes it computationally feasible to compute the effective dimensions of large HLC models. The theorem can also be used to compute the effective dimensions of general tree models. |
| 1107.0029 | A Personalized System for Conversational Recommendations | cs.IR cs.AI | Searching for and making decisions about information is becoming increasingly difficult as the amount of information and number of choices increase. Recommendation systems help users find items of interest of a particular type, such as movies or restaurants, but are still somewhat awkward to use. Our solution is to take advantage of the complementary strengths of personalized recommendation systems and dialogue systems, creating personalized aides. We present a system -- the Adaptive Place Advisor -- that treats item selection as an interactive, conversational process, with the program inquiring about item attributes and the user responding. Individual, long-term user preferences are unobtrusively obtained in the course of normal recommendation dialogues and used to direct future conversations with the same user. We present a novel user model that influences both item search and the questions asked during a conversation. We demonstrate the effectiveness of our system in significantly reducing the time and number of interactions required to find a satisfactory item, as compared to a control group of users interacting with a non-adaptive version of the system. |
| 1107.0030 | Coherent Integration of Databases by Abductive Logic Programming | cs.AI | We introduce an abductive method for a coherent integration of independent data-sources. The idea is to compute a list of data-facts that should be inserted into the amalgamated database or retracted from it in order to restore its consistency. This method is implemented by an abductive solver, called Asystem, that applies SLDNFA-resolution on a meta-theory that relates different, possibly contradicting, input databases. We also give a pure model-theoretic analysis of the possible ways to 'recover' consistent data from an inconsistent database in terms of those models of the database that exhibit as little inconsistent information as reasonably possible. This allows us to characterize the 'recovered databases' in terms of the 'preferred' (i.e., most consistent) models of the theory. The outcome is an abductive-based application that is sound and complete with respect to a corresponding model-based, preferential semantics, and -- to the best of our knowledge -- is more expressive (thus more general) than any other implementation of coherent integration of databases. |
| 1107.0031 | Grounded Semantic Composition for Visual Scenes | cs.AI | We present a visually-grounded language understanding model based on a study of how people verbally describe objects in scenes. The emphasis of the model is on the combination of individual word meanings to produce meanings for complex referring expressions. The model has been implemented, and it is able to understand a broad range of spatial referring expressions. We describe our implementation of word level visually-grounded semantics and their embedding in a compositional parsing framework. The implemented system selects the correct referents in response to natural language expressions for a large percentage of test cases. In an analysis of the system's successes and failures we reveal how visual context influences the semantics of utterances and propose future extensions to the model that take such context into account. |
| 1107.0033 | Existence of Multiagent Equilibria with Limited Agents | cs.MA cs.GT | Multiagent learning is a necessary yet challenging problem as multiagent systems become more prevalent and environments become more dynamic. Much of the groundbreaking work in this area draws on notable results from game theory, in particular, the concept of Nash equilibria. Learners that directly learn an equilibrium obviously rely on its existence. Learners that instead seek to play optimally with respect to the other players also depend upon equilibria, since equilibria are fixed points for learning. From another perspective, agents with limitations are real and common. These may be undesired physical limitations as well as self-imposed rational limitations, such as abstraction and approximation techniques, used to make learning tractable. This article explores the interactions of these two important concepts: equilibria and limitations in learning. We introduce the question of whether equilibria continue to exist when agents have limitations. We look at the general effects limitations can have on agent behavior, and define a natural extension of equilibria that accounts for these limitations. Using this formalization, we make three major contributions: (i) a counterexample for the general existence of equilibria with limitations, (ii) sufficient conditions on limitations that preserve their existence, and (iii) three general classes of games and limitations that satisfy these conditions. We then present empirical results from a specific multiagent learning algorithm applied to a specific instance of limited agents. These results demonstrate that learning with limitations is feasible when the conditions outlined by our theoretical analysis hold. |
| 1107.0034 | Price Prediction in a Trading Agent Competition | cs.AI | The 2002 Trading Agent Competition (TAC) presented a challenging market game in the domain of travel shopping. One of the pivotal issues in this domain is uncertainty about hotel prices, which have a significant influence on the relative cost of alternative trip schedules. Thus, virtually all participants employ some method for predicting hotel prices. We survey approaches employed in the tournament, finding that agents apply an interesting diversity of techniques, taking into account differing sources of evidence bearing on prices. Based on data provided by entrants on their agents' actual predictions in the TAC-02 finals and semifinals, we analyze the relative efficacy of these approaches. The results show that taking into account game-specific information about flight prices is a major distinguishing factor. Machine learning methods effectively induce the relationship between flight and hotel prices from game data, and a purely analytical approach based on competitive equilibrium analysis achieves equal accuracy with no historical data. Employing a new measure of prediction quality, we relate absolute accuracy to bottom-line performance in the game. |
| 1107.0035 | Compositional Model Repositories via Dynamic Constraint Satisfaction with Order-of-Magnitude Preferences | cs.AI | The predominant knowledge-based approach to automated model construction, compositional modelling, employs a set of models of particular functional components. Its inference mechanism takes a scenario describing the constituent interacting components of a system and translates it into a useful mathematical model. This paper presents a novel compositional modelling approach aimed at building model repositories. It furthers the field in two respects. Firstly, it expands the application domain of compositional modelling to systems that cannot be easily described in terms of interacting functional components, such as ecological systems. Secondly, it enables the incorporation of user preferences into the model selection process. These features are achieved by casting the compositional modelling problem as an activity-based dynamic preference constraint satisfaction problem, where the dynamic constraints describe the restrictions imposed over the composition of partial models and the preferences correspond to those of the user of the automated modeller. In addition, the preference levels are represented through the use of symbolic values that differ in orders of magnitude. |
| 1107.0036 | Can We Learn to Beat the Best Stock | cs.AI q-fin.TR | A novel algorithm for actively trading stocks is presented. While traditional expert advice and "universal" algorithms (as well as standard technical trading heuristics) attempt to predict winners or trends, our approach relies on predictable statistical relations between all pairs of stocks in the market. Our empirical results on historical markets provide strong evidence that this type of technical trading can "beat the market" and, moreover, can beat the best stock in the market. In doing so we utilize a new idea for smoothing critical parameters in the context of expert learning. |
| 1107.0037 | Competitive Coevolution through Evolutionary Complexification | cs.AI | Two major goals in machine learning are the discovery and improvement of solutions to complex problems. In this paper, we argue that complexification, i.e., the incremental elaboration of solutions through adding new structure, achieves both these goals. We demonstrate the power of complexification through the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves increasingly complex neural network architectures. NEAT is applied to an open-ended coevolutionary robot duel domain where robot controllers compete head to head. Because the robot duel domain supports a wide range of strategies, and because coevolution benefits from an escalating arms race, it serves as a suitable testbed for studying complexification. When compared to the evolution of networks with fixed structure, complexifying evolution discovers significantly more sophisticated strategies. The results suggest that in order to discover and improve complex solutions, evolution, and search in general, should be allowed to complexify as well as optimize. |
| 1107.0038 | Dual Modelling of Permutation and Injection Problems | cs.AI | When writing a constraint program, we have to choose which variables should be the decision variables, and how to represent the constraints on these variables. In many cases, there is considerable choice for the decision variables. Consider, for example, permutation problems in which we have as many values as variables, and each variable takes a unique value. In such problems, we can choose between a primal and a dual viewpoint. In the dual viewpoint, each dual variable represents one of the primal values, whilst each dual value represents one of the primal variables. Alternatively, by means of channelling constraints to link the primal and dual variables, we can have a combined model with both sets of variables. In this paper, we perform an extensive theoretical and empirical study of such primal, dual and combined models for two classes of problems: permutation problems and injection problems. Our results show that it can often be advantageous to use multiple viewpoints, and to have constraints which channel between them to maintain consistency. They also illustrate a general methodology for comparing different constraint models. |
| 1107.0040 | Generalizing Boolean Satisfiability I: Background and Survey of Existing Work | cs.AI | This is the first of three planned papers describing ZAP, a satisfiability engine that substantially generalizes existing tools while retaining the performance characteristics of modern high-performance solvers. The fundamental idea underlying ZAP is that many problems passed to such engines contain rich internal structure that is obscured by the Boolean representation used; our goal is to define a representation in which this structure is apparent and can easily be exploited to improve computational performance. This paper is a survey of the work underlying ZAP, and discusses previous attempts to improve the performance of the Davis-Putnam-Logemann-Loveland algorithm by exploiting the structure of the problem being solved. We examine existing ideas including extensions of the Boolean language to allow cardinality constraints, pseudo-Boolean representations, symmetry, and a limited form of quantification. While this paper is intended as a survey, our research results are contained in the two subsequent articles, with the theoretical structure of ZAP described in the second paper in this series, and ZAP's implementation described in the third. |
| 1107.0041 | PHA*: Finding the Shortest Path with A* in An Unknown Physical Environment | cs.AI | We address the problem of finding the shortest path between two points in an unknown real physical environment, where a traveling agent must move around in the environment to explore unknown territory. We introduce the Physical-A* algorithm (PHA*) for solving this problem. PHA* expands all the mandatory nodes that A* would expand and returns the shortest path between the two points. However, due to the physical nature of the problem, the complexity of the algorithm is measured by the traveling effort of the moving agent and not by the number of generated nodes, as in standard A*. PHA* is presented as a two-level algorithm, such that its high level, A*, chooses the next node to be expanded and its low level directs the agent to that node in order to explore it. We present a number of variations for both the high-level and low-level procedures and evaluate their performance theoretically and experimentally. We show that the travel cost of our best variation is fairly close to the optimal travel cost, assuming that the mandatory nodes of A* are known in advance. We then generalize our algorithm to the multi-agent case, where a number of cooperative agents are designed to solve the problem. Specifically, we provide an experimental implementation for such a system. It should be noted that the problem addressed here is not a navigation problem, but rather a problem of finding the shortest path between two points for future usage. |
| 1107.0042 | Restricted Value Iteration: Theory and Algorithms | cs.AI | Value iteration is a popular algorithm for finding near-optimal policies for POMDPs. It is inefficient due to the need to account for the entire belief space, which necessitates the solution of large numbers of linear programs. In this paper, we study value iteration restricted to belief subsets. We show that, together with properly chosen belief subsets, restricted value iteration yields near-optimal policies, and we give a condition for determining whether a given belief subset would bring about savings in space and time. We also apply restricted value iteration to two interesting classes of POMDPs, namely informative POMDPs and near-discernible POMDPs. |
| 1107.0043 | A Maximal Tractable Class of Soft Constraints | cs.AI | Many researchers in artificial intelligence are beginning to explore the use of soft constraints to express a set of (possibly conflicting) problem requirements. A soft constraint is a function defined on a collection of variables which associates some measure of desirability with each possible combination of values for those variables. However, the crucial question of the computational complexity of finding the optimal solution to a collection of soft constraints has so far received very little attention. In this paper we identify a class of soft binary constraints for which the problem of finding the optimal solution is tractable. In other words, we show that for any given set of such constraints, there exists a polynomial time algorithm to determine the assignment having the best overall combined measure of desirability. This tractable class includes many commonly-occurring soft constraints, such as 'as near as possible' or 'as soon as possible after', as well as crisp constraints such as 'greater than'. Finally, we show that this tractable class is maximal, in the sense that adding any other form of soft binary constraint which is not in the class gives rise to a class of problems which is NP-hard. |
| 1107.0044 | Towards Understanding and Harnessing the Potential of Clause Learning | cs.AI | Efficient implementations of DPLL with the addition of clause learning are the fastest complete Boolean satisfiability solvers and can handle many significant real-world problems, such as verification, planning and design. Despite its importance, little is known of the ultimate strengths and limitations of the technique. This paper presents the first precise characterization of clause learning as a proof system (CL), and begins the task of understanding its power by relating it to the well-studied resolution proof system. In particular, we show that with a new learning scheme, CL can provide exponentially shorter proofs than many proper refinements of general resolution (RES) satisfying a natural property. These include regular and Davis-Putnam resolution, which are already known to be much stronger than ordinary DPLL. We also show that a slight variant of CL with unlimited restarts is as powerful as RES itself. Translating these analytical results to practice, however, presents a challenge because of the nondeterministic nature of clause learning algorithms. We propose a novel way of exploiting the underlying problem structure, in the form of a high level problem description such as a graph or PDDL specification, to guide clause learning algorithms toward faster solutions. We show that this leads to exponential speed-ups on grid and randomized pebbling problems, as well as substantial improvements on certain ordering formulas. |
| 1107.0045 | Graduality in Argumentation | cs.AI | Argumentation is based on the exchange and valuation of interacting arguments, followed by the selection of the most acceptable of them (for example, in order to take a decision, to make a choice). Starting from the framework proposed by Dung in 1995, our purpose is to introduce 'graduality' in the selection of the best arguments, i.e., to be able to partition the set of the arguments in more than the two usual subsets of 'selected' and 'non-selected' arguments in order to represent different levels of selection. Our basic idea is that an argument is all the more acceptable if it can be preferred to its attackers. First, we discuss general principles underlying a 'gradual' valuation of arguments based on their interactions. Following these principles, we define several valuation models for an abstract argumentation system. Then, we introduce 'graduality' in the concept of acceptability of arguments. We propose new acceptability classes and a refinement of existing classes taking advantage of an available 'gradual' valuation. |
| 1107.0046 | Explicit Learning Curves for Transduction and Application to Clustering and Compression Algorithms | cs.AI | Inductive learning is based on inferring a general rule from a finite data set and using it to label new data. In transduction one attempts to solve the problem of using a labeled training set to label a set of unlabeled points, which are given to the learner prior to learning. Although transduction seems at the outset to be an easier task than induction, there have not been many provably useful algorithms for transduction. Moreover, the precise relation between induction and transduction has not yet been determined. The main theoretical developments related to transduction were presented by Vapnik more than twenty years ago. One of Vapnik's basic results is a rather tight error bound for transductive classification based on an exact computation of the hypergeometric tail. While tight, this bound is given implicitly via a computational routine. Our first contribution is a somewhat looser but explicit characterization of a slightly extended PAC-Bayesian version of Vapnik's transductive bound. This characterization is obtained using concentration inequalities for the tail of sums of random variables obtained by sampling without replacement. We then derive error bounds for compression schemes such as (transductive) support vector machines and for transduction algorithms based on clustering. The main observation used for deriving these new error bounds and algorithms is that the unlabeled test points, which in the transductive setting are known in advance, can be used in order to construct useful data-dependent prior distributions over the hypothesis space. |
| 1107.0047 | Decentralized Control of Cooperative Systems: Categorization and Complexity Analysis | cs.AI | Decentralized control of cooperative systems captures the operation of a group of decision makers that share a single global objective. The difficulty in optimally solving such problems arises when the agents lack full observability of the global state of the system when they operate. The general problem has been shown to be NEXP-complete. In this paper, we identify classes of decentralized control problems whose complexity ranges between NEXP and P. In particular, we study problems characterized by independent transitions, independent observations, and goal-oriented objective functions. Two algorithms are shown to optimally solve useful classes of goal-oriented decentralized processes in polynomial time. This paper also studies information sharing among the decision-makers, which can improve their performance. We distinguish between three ways in which agents can exchange information: indirect communication, direct communication and sharing state features that are not controlled by the agents. Our analysis shows that for every class of problems we consider, introducing direct or indirect communication does not change the worst-case complexity. The results provide a better understanding of the complexity of decentralized control problems that arise in practice and facilitate the development of planning algorithms for these problems. |
| 1107.0048 | Reinforcement Learning for Agents with Many Sensors and Actuators Acting in Categorizable Environments | cs.AI | In this paper, we confront the problem of applying reinforcement learning to agents that perceive the environment through many sensors and that can perform parallel actions using many actuators, as is the case in complex autonomous robots. We argue that reinforcement learning can only be successfully applied to this case if strong assumptions are made on the characteristics of the environment in which the learning is performed, so that the relevant sensor readings and motor commands can be readily identified. The introduction of such assumptions leads to strongly-biased learning systems that can eventually lose the generality of traditional reinforcement-learning algorithms. In this line, we observe that, in realistic situations, the reward received by the robot depends only on a reduced subset of all the executed actions and that only a reduced subset of the sensor inputs (possibly different in each situation and for each action) are relevant to predict the reward. We formalize this property in the so-called 'categorizability assumption' and we present an algorithm that takes advantage of the categorizability of the environment, allowing a decrease in the learning time with respect to existing reinforcement-learning algorithms. Results of the application of the algorithm to a couple of simulated realistic-robotic problems (landmark-based navigation and six-legged robot gait generation) are reported to validate our approach and to compare it to existing flat and generalization-based reinforcement-learning approaches. |
| 1107.0050 | Additive Pattern Database Heuristics | cs.AI | We explore a method for computing admissible heuristic evaluation functions for search problems. It utilizes pattern databases, which are precomputed tables of the exact cost of solving various subproblems of an existing problem. Unlike standard pattern database heuristics, however, we partition our problems into disjoint subproblems, so that the costs of solving the different subproblems can be added together without overestimating the cost of solving the original problem. Previously, we showed how to statically partition the sliding-tile puzzles into disjoint groups of tiles to compute an admissible heuristic, using the same partition for each state and problem instance. Here we extend the method and show that it applies to other domains as well. We also present another method for additive heuristics which we call dynamically partitioned pattern databases. Here we partition the problem into disjoint subproblems for each state of the search dynamically. We discuss the pros and cons of each of these methods and apply both methods to three different problem domains: the sliding-tile puzzles, the 4-peg Towers of Hanoi problem, and finding an optimal vertex cover of a graph. We find that in some problem domains, static partitioning is most effective, while in others dynamic partitioning is a better choice. In each of these problem domains, either statically partitioned or dynamically partitioned pattern database heuristics are the best known heuristics for the problem. |
| 1107.0051 | On Prediction Using Variable Order Markov Models | cs.AI | This paper is concerned with algorithms for prediction of discrete sequences over a finite alphabet, using variable order Markov models. The class of such algorithms is large and in principle includes any lossless compression algorithm. We focus on six prominent prediction algorithms, including Context Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic Suffix Trees (PSTs). We discuss the properties of these algorithms and compare their performance using real-life sequences from three domains: proteins, English text and music pieces. The comparison is made with respect to prediction quality as measured by the average log-loss. We also compare classification algorithms based on these predictors with respect to a number of large protein classification tasks. Our results indicate that a "decomposed" CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in sequence prediction tasks. Somewhat surprisingly, a different algorithm, which is a modification of the Lempel-Ziv compression algorithm, significantly outperforms all algorithms on the protein classification problems. |
| 1107.0052 | Ordered Landmarks in Planning | cs.AI | Many known planning tasks have inherent constraints concerning the best order in which to achieve the goals. A number of research efforts have been made to detect such constraints and to use them for guiding search, in the hope of speeding up the planning process. We go beyond the previous approaches by considering ordering constraints not only over the (top-level) goals, but also over the sub-goals that will necessarily arise during planning. Landmarks are facts that must be true at some point in every valid solution plan. We extend Koehler and Hoffmann's definition of reasonable orders between top-level goals to the more general case of landmarks. We show how landmarks can be found, how their reasonable orders can be approximated, and how this information can be used to decompose a given planning task into several smaller sub-tasks. Our methodology is completely domain- and planner-independent. The implementation demonstrates that the approach can yield significant runtime performance improvements when used as a control loop around state-of-the-art sub-optimal planning systems, as exemplified by FF and LPG. |
| 1107.0053 | Finding Approximate POMDP Solutions Through Belief Compression | cs.AI | Standard value function approaches to finding policies for Partially Observable Markov Decision Processes (POMDPs) are generally considered to be intractable for large models. The intractability of these algorithms is to a large extent a consequence of computing an exact, optimal policy over the entire belief space. However, in real-world POMDP problems, computing the optimal policy for the full belief space is often unnecessary for good control, even for problems with complicated policy classes. The beliefs experienced by the controller often lie near a structured, low-dimensional subspace embedded in the high-dimensional belief space. Finding a good approximation to the optimal value function for only this subspace can be much easier than computing the full value function. We introduce a new method for solving large-scale POMDPs by reducing the dimensionality of the belief space. We use Exponential family Principal Components Analysis (Collins, Dasgupta and Schapire, 2002) to represent sparse, high-dimensional belief spaces using small sets of learned features of the belief state. We then plan only in terms of the low-dimensional belief features. By planning in this low-dimensional space, we can find policies for POMDP models that are orders of magnitude larger than models that can be handled by conventional techniques. We demonstrate the use of this algorithm on a synthetic problem and on mobile robot navigation tasks. |
| 1107.0054 | A Comprehensive Trainable Error Model for Sung Music Queries | cs.AI | We propose a model for errors in sung queries, a variant of the hidden Markov model (HMM). This is a solution to the problem of identifying the degree of similarity between a (typically error-laden) sung query and a potential target in a database of musical works, an important problem in the field of music information retrieval. Similarity metrics are a critical component of query-by-humming (QBH) applications, which search audio and multimedia databases for strong matches to oral queries. Our model comprehensively expresses the types of error or variation between target and query: cumulative and non-cumulative local errors, transposition, tempo and tempo changes, insertions, deletions and modulation. The model is not only expressive, but automatically trainable, or able to learn and generalize from query examples. We present results of simulations, designed to assess the discriminatory potential of the model, and tests with real sung queries, to demonstrate relevance to real-world applications. |
1107.0055
|
Phase Transitions and Backbones of the Asymmetric Traveling Salesman
Problem
|
cs.AI
|
In recent years, there has been much interest in phase transitions of
combinatorial problems. Phase transitions have been successfully used to
analyze combinatorial optimization problems, characterize their typical-case
features and locate the hardest problem instances. In this paper, we study
phase transitions of the asymmetric Traveling Salesman Problem (ATSP), an
NP-hard combinatorial optimization problem that has many real-world
applications. Using random instances of up to 1,500 cities in which intercity
distances are uniformly distributed, we empirically show that many properties
of the problem, including the optimal tour cost and backbone size, experience
sharp transitions as the precision of intercity distances increases across a
critical value. Our experimental results on the costs of the ATSP tours and of
the assignment problem agree with the theoretical result that the asymptotic
cost of the assignment problem is pi^2/6 as the number of cities goes to
infinity. In
addition, we show that the average computational cost of the well-known
branch-and-bound subtour elimination algorithm for the problem also exhibits a
thrashing behavior, transitioning from easy to difficult as the distance
precision increases. These results answer positively an open question regarding
the existence of phase transitions in the ATSP, and provide guidance on how
difficult ATSP problem instances should be generated.
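The pi^2/6 asymptotic for the random assignment problem (a standard lower bound for the ATSP) can be checked by brute force on small instances; the sketch below (instance size, trial count, and uniform costs are illustrative choices, not the paper's experimental setup) averages optimal assignment costs over random cost matrices:

```python
import itertools
import random

def optimal_assignment_cost(cost):
    """Brute-force minimum-cost assignment (feasible only for small n)."""
    n = len(cost)
    return min(sum(cost[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

random.seed(0)
n, trials = 7, 40
avg = sum(optimal_assignment_cost(
              [[random.random() for _ in range(n)] for _ in range(n)])
          for _ in range(trials)) / trials
# For uniform costs the asymptotic optimal assignment cost is pi^2/6 ~ 1.64;
# even at n = 7 the empirical average is already in that vicinity.
```

For realistic city counts one would replace the factorial-time brute force with the Hungarian algorithm; the point here is only to make the asymptotic claim concrete.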
|
1107.0062
|
Optimal Multi-Robot Path Planning with Temporal Logic Constraints
|
cs.RO
|
In this paper we present a method for automatically planning optimal paths
for a group of robots that satisfy a common high level mission specification.
Each robot's motion in the environment is modeled as a weighted transition
system. The mission is given as a Linear Temporal Logic formula. In addition,
an optimizing proposition must repeatedly be satisfied. The goal is to minimize
the maximum time between satisfying instances of the optimizing proposition.
Our method is guaranteed to compute an optimal set of robot paths. We utilize a
timed automaton representation in order to capture the relative position of the
robots in the environment. We then obtain a bisimulation of this timed
automaton as a finite transition system that captures the joint behavior of the
robots and apply our earlier algorithm for the single robot case to optimize
the group motion. We present a simulation of a persistent monitoring task in a
road network environment.
|
1107.0078
|
Optimization of UAV Heading for the Ground-to-Air Uplink
|
cs.IT math.IT
|
In this paper we consider a collection of single-antenna ground nodes
communicating with a multi-antenna unmanned aerial vehicle (UAV) over a
multiple-access ground-to-air wireless communications link. The UAV uses
beamforming to mitigate the inter-user interference and achieve spatial
division multiple access (SDMA). First, we consider a simple scenario with two
static ground nodes and analytically investigate the effect of the UAV heading
on the system sum rate. We then study a more general setting with multiple
mobile ground-based terminals, and develop an algorithm for dynamically
adjusting the UAV heading in order to maximize a lower bound on the ergodic sum
rate of the uplink channel, using a Kalman filter to track the positions of the
mobile ground nodes. Fairness among the users can be guaranteed through
weighting the bound for each user's ergodic rate with a factor inversely
proportional to their average data rate. For the common scenario where a high
$K$-factor channel exists between the ground nodes and UAV, we use an
asymptotic analysis to find simplified versions of the algorithm for low and
high SNR. We present simulation results that demonstrate the benefits of
adapting the UAV heading in order to optimize the uplink communications
performance. The simulation results also show that the simplified algorithms
achieve near-optimal performance.
|
1107.0082
|
A case of combination of evidence in the Dempster-Shafer theory
inconsistent with evaluation of probabilities
|
math.PR cs.AI
|
The Dempster-Shafer theory of evidence accumulation is one of the main tools
for combining data obtained from multiple sources. In this paper a special case
of combination of two bodies of evidence with non-zero conflict coefficient is
considered. It is shown that application of the Dempster-Shafer rule of
combination in this case leads to an evaluation of masses of the combined
bodies that is different from the evaluation of the corresponding probabilities
obtained by application of the law of total probability. This finding supports
the view that probabilistic interpretation of results of the Dempster-Shafer
analysis in the general case is not appropriate.
|
1107.0089
|
Towards a Reliable Framework of Uncertainty-Based Group Decision Support
System
|
cs.SY cs.AI
|
This study proposes a framework of Uncertainty-based Group Decision Support
System (UGDSS). It provides a platform for multiple criteria decision analysis
in six aspects including (1) decision environment, (2) decision problem, (3)
decision group, (4) decision conflict, (5) decision schemes and (6) group
negotiation. Based on multiple artificial intelligent technologies, this
framework provides reliable support for the comprehensive manipulation of
applications and advanced decision approaches through the design of an
integrated multi-agents architecture.
|
1107.0098
|
A Probabilistic Attack on NP-complete Problems
|
cs.CC cs.AI cs.DM cs.DS
|
Using the probability theory-based approach, this paper reveals the
equivalence of an arbitrary NP-complete problem to a problem of checking
whether a level set of a specifically constructed harmonic cost function (with
all diagonal entries of its Hessian matrix equal to zero) intersects with a
unit hypercube in many-dimensional Euclidean space. This connection suggests
the possibility that methods of continuous mathematics can provide crucial
insights into the most intriguing open questions in modern complexity theory.
|
1107.0124
|
A Gel'fand-type spectral radius formula and stability of linear
constrained switching systems
|
math.OC cs.SY math.DS math.RA
|
Using ergodic theory, in this paper we present a Gel'fand-type spectral
radius formula which states that the joint spectral radius is equal to the
generalized spectral radius for a matrix multiplicative semigroup $\bS^+$
restricted to a subset that need not carry the algebraic structure of $\bS^+$.
This generalizes the Berger-Wang formula. Using it as a tool, we study the
absolute exponential stability of a linear switched system driven by a compact
subshift of the one-sided Markov shift associated to $\bS$.
|
1107.0132
|
Pointwise Stabilization of Discrete-time Stationary Matrix-valued
Markovian Chains
|
math.PR cs.SY math.DS math.OC
|
We study the pointwise stabilizability of a discrete-time, time-homogeneous,
and stationary Markovian jump linear system. By using measure theory, ergodic
theory and a splitting theorem of state space we show in a relatively simple
way that if the system is essentially product-bounded, then it is pointwise
convergent if and only if it is pointwise exponentially convergent.
|
1107.0134
|
The Influence of Global Constraints on Similarity Measures for
Time-Series Databases
|
cs.AI
|
A time series consists of a series of values or events obtained over repeated
measurements in time. Analysis of time series represents an important tool in
many application areas, such as stock market analysis, process and quality
control, observation of natural phenomena, medical treatments, etc. A vital
component in many types of time-series analysis is the choice of an appropriate
distance/similarity measure. Numerous measures have been proposed to date, with
the most successful ones based on dynamic programming. These measures, however,
have quadratic time complexity, so global constraints are often employed to
limit the search space in the matrix during the dynamic programming procedure,
in order to speed up computation. Furthermore, it has been reported that such constrained
measures can also achieve better accuracy. In this paper, we investigate two
representative time-series distance/similarity measures based on dynamic
programming, Dynamic Time Warping (DTW) and Longest Common Subsequence (LCS),
and the effects of global constraints on them. Through extensive experiments on
a large number of time-series data sets, we demonstrate how global constraints
can significantly reduce the computation time of DTW and LCS. We also show
that, if the constraint parameter is tight enough (less than 10-15% of
time-series length), the constrained measure becomes significantly different
from its unconstrained counterpart, in the sense of producing qualitatively
different 1-nearest neighbor graphs. This observation explains the potential
for accuracy gains when using constrained measures, highlighting the need for
careful tuning of constraint parameters in order to achieve a good trade-off
between speed and accuracy.
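As a concrete illustration of such a constrained measure (a minimal sketch, not the paper's experimental code), DTW with a Sakoe-Chiba band of relative width w restricts the warping path to a diagonal band of the dynamic-programming matrix:

```python
def dtw(a, b, w=None):
    """DTW distance between equal-length series; w is the relative width of
    a Sakoe-Chiba band (None = unconstrained)."""
    n = len(a)
    band = n if w is None else max(1, int(w * n))
    INF = float("inf")
    D = [[INF] * (n + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        # Only cells within the band around the diagonal are computed.
        for j in range(max(1, i - band), min(n, i + band) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][n]

x = [0, 1, 2, 3, 2, 1, 0, 0, 0, 0]
y = [0, 0, 0, 1, 2, 3, 2, 1, 0, 0]   # same peak, delayed by two steps

unconstrained = dtw(x, y)        # the band-free measure absorbs the shift
tight = dtw(x, y, w=0.1)         # a 10% band forbids the needed warping
```

With the 10% band the two series are no longer judged identical (tight > 0 while the unconstrained distance is 0), a small instance of the qualitative difference between constrained and unconstrained measures discussed above.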
|
1107.0161
|
Quadratic order conditions for bang-singular extremals
|
math.OC cs.SY
|
This paper deals with optimal control problems for systems affine in the
control variable. We consider nonnegativity constraints on the control, and
finitely many equality and inequality constraints on the final state. First, we
obtain second order necessary optimality conditions. Secondly, we derive a
second order sufficient condition for the scalar control case.
|
1107.0169
|
Unstructured Human Activity Detection from RGBD Images
|
cs.RO cs.CV
|
Being able to detect and recognize human activities is essential for several
applications, including personal assistive robotics. In this paper, we perform
detection and recognition of unstructured human activity in unstructured
environments. We use a RGBD sensor (Microsoft Kinect) as the input sensor, and
compute a set of features based on human pose and motion, as well as based on
image and pointcloud information. Our algorithm is based on a hierarchical
maximum entropy Markov model (MEMM), which considers a person's activity as
composed of a set of sub-activities. We infer the two-layered graph structure
using a dynamic programming approach. We test our algorithm on detecting and
recognizing twelve different activities performed by four people in different
environments, such as a kitchen, a living room, an office, etc., and achieve
good performance even when the person has not been seen before in the training set.
|
1107.0192
|
Multiple Space Debris Collecting Mission - Debris selection and
Trajectory optimization
|
cs.SY math.OC
|
A possible means to stabilize the LEO debris population is to remove each year
5 heavy debris objects, such as spent satellites or launcher stages, from that space
region. This paper investigates the DeltaV requirement for such a Space Debris
Collecting mission. The optimization problem is intrinsically hard since it
mixes combinatorial optimization to select the debris among a list of
candidates and functional optimization to define the orbital maneuvers. The
solving methodology proceeds in two steps: first, a generic transfer strategy
with impulsive maneuvers is defined so that the problem becomes of finite
dimension; second, the problem is linearized around an initial reference
solution. A Branch and Bound algorithm is then applied to optimize
simultaneously the debris selection and the orbital maneuvers, yielding a new
reference solution. The process is iterated until the solution stabilizes on
the optimal path. The trajectory controls and dates are finally re-optimized in
order to refine the solution. The method is applicable whatever the numbers of
debris (candidate and to deorbit) and whatever the mission duration. It is
exemplified on an application case consisting of selecting 5 SSO debris among a
list of 11.
|
1107.0193
|
On the origin of ambiguity in efficient communication
|
cs.CL
|
This article studies the emergence of ambiguity in communication through the
concept of logical irreversibility and within the framework of Shannon's
information theory. This leads us to a precise and general expression of the
intuition behind Zipf's vocabulary balance in terms of a symmetry equation
between the complexities of the coding and the decoding processes that imposes
an unavoidable amount of logical uncertainty in natural communication.
Accordingly, the emergence of irreversible computations is required if the
complexities of the coding and the decoding processes are balanced in a
symmetric scenario, which means that the emergence of ambiguous codes is a
necessary condition for natural communication to succeed.
|
1107.0194
|
Law of Connectivity in Machine Learning
|
cs.AI
|
We present in this paper our law that there is always a connection present
between two entities, with a self-connection being present at least in each
node. An entity is an object, physical or imaginary, that is connected by a
path (or connection) and which is important for achieving the desired result of
the scenario. In machine learning, we state that for any scenario, a subject
entity is always, directly or indirectly, connected and affected by single or
multiple independent / dependent entities, and their impact on the subject
entity is dependent on various factors falling into the categories such as the
existenc
|
1107.0237
|
Team Decision Problems with Classical and Quantum Signals
|
quant-ph cs.IT econ.TH math-ph math.IT math.MP
|
We study team decision problems where communication is not possible, but
coordination among team members can be realized via signals in a shared
environment. We consider a variety of decision problems that differ in what
team members know about one another's actions and knowledge. For each type of
decision problem, we investigate how different assumptions on the available
signals affect team performance. Specifically, we consider the cases of
perfectly correlated, i.i.d., and exchangeable classical signals, as well as
the case of quantum signals. We find that, whereas in perfect-recall trees
(Kuhn [1950], [1953]) no type of signal improves performance, in
imperfect-recall trees quantum signals may bring an improvement. Isbell [1957]
proved that in non-Kuhn trees, classical i.i.d. signals may improve
performance. We show that further improvement may be possible by use of
classical exchangeable or quantum signals. We include an example of the effect
of quantum signals in the context of high-frequency trading.
|
1107.0268
|
Simple Algorithm Portfolio for SAT
|
cs.AI
|
The importance of algorithm portfolio techniques for SAT has long been noted,
and a number of very successful systems have been devised, including the most
successful one --- SATzilla. However, all these systems are quite complex (to
understand, reimplement, or modify). In this paper we propose a new algorithm
portfolio for SAT that is extremely simple, yet at the same time so efficient
that it outperforms SATzilla. For a new SAT instance to be solved, our
portfolio finds its k-nearest neighbors from the training set and invokes a
solver that performs the best at those instances. The main distinguishing
feature of our algorithm portfolio is the locality of the selection procedure
--- the selection of a SAT solver is based only on a few instances similar to the
input one.
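The selection procedure described above can be sketched as follows (the feature vectors, runtimes, and solver names are invented for illustration; the actual portfolio's features and distance measure may differ):

```python
import math
from collections import Counter

def select_solver(instance_feats, training, k=3):
    """Return the solver with the lowest total runtime over the k training
    instances nearest to the given feature vector (Euclidean distance)."""
    nearest = sorted(training,
                     key=lambda t: math.dist(instance_feats, t["feats"]))[:k]
    totals = Counter()
    for t in nearest:
        for solver, runtime in t["runtimes"].items():
            totals[solver] += runtime
    return min(totals, key=totals.get)

# Tiny made-up training set: two regions of feature space, each favoring
# a different solver.
training = [
    {"feats": (0.1, 0.9), "runtimes": {"A": 1.0, "B": 9.0}},
    {"feats": (0.2, 0.8), "runtimes": {"A": 2.0, "B": 8.0}},
    {"feats": (0.9, 0.1), "runtimes": {"A": 9.0, "B": 1.0}},
    {"feats": (0.8, 0.2), "runtimes": {"A": 8.0, "B": 2.0}},
]

choice = select_solver((0.15, 0.85), training, k=2)
```

The locality of the selection is visible directly: only the k nearest training instances influence which solver is invoked.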
|
1107.0300
|
The Compute-and-Forward Protocol: Implementation and Practical Aspects
|
cs.IT math.IT
|
In a recent work, Nazer and Gastpar proposed the Compute-and-Forward strategy
as a physical-layer network coding scheme. They described a code structure
based on nested lattices whose algebraic structure makes the scheme reliable
and efficient. In this work, we consider the implementation of their scheme for
real Gaussian channels and one dimensional lattices. We relate the maximization
of the transmission rate to the lattice shortest vector problem. We explicit,
in this case, the maximum likelihood criterion and show that it can be
implemented by using an Inhomogeneous Diophantine Approximation algorithm.
|
1107.0390
|
On Linear Index Coding for Random Graphs
|
cs.IT math.IT
|
A sender wishes to broadcast an n character word x in F^n (for a field F) to
n receivers R_1,...,R_n. Every receiver has some side information on x
consisting of a subset of the characters of x. The side information of the
receivers is represented by a graph G on n vertices in which {i,j} is an edge
if R_i knows x_j. In the index coding problem the goal is to encode x using a
minimum number of characters in F in a way that enables every R_i to retrieve
the ith character x_i using the encoded message and the side information. An
index code is linear if the encoding is linear, and in this case the minimum
possible length is known to be equal to a graph parameter called minrank
(Bar-Yossef et al., FOCS'06). Several bounds on the minimum length of an index
code for side information graphs G were shown in the study of index coding.
However, the minimum length of an index code for the random graph G(n,p) is far
from being understood. In this paper we initiate the study of the typical
minimum length of a linear index code for G(n,p) over a field F. First, we
prove that for every constant size field F and a constant p, the minimum length
of a linear index code for G(n,p) over F is almost surely Omega(\sqrt{n}).
Second, we introduce and study the following two restricted models of index
coding: 1. A locally decodable index code is an index code in which the
receivers are allowed to query at most q characters from the encoded message.
2. A low density index code is a linear index code in which every character of
the word x affects at most q characters in the encoded message. Equivalently,
it is a linear code whose generator matrix has at most q nonzero entries in
each row.
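As a toy illustration of the index coding setup (an invented example, not taken from this abstract): when the side-information graph is the complete graph, a single parity character suffices over GF(2), since each receiver can XOR out everything it already knows:

```python
import random
from functools import reduce
from operator import xor

def encode(x):
    """Side-information graph = complete graph: one parity bit suffices."""
    return reduce(xor, x)

def decode(parity, known_bits):
    """A receiver XORs out all the bits it already knows to get its own."""
    return reduce(xor, known_bits, parity)

random.seed(1)
n = 8
x = [random.randint(0, 1) for _ in range(n)]
parity = encode(x)
# Receiver i knows every x_j with j != i, and recovers x_i from the
# single broadcast character.
recovered = [decode(parity, [x[j] for j in range(n) if j != i])
             for i in range(n)]
```

This is the extreme case where the minrank is 1; sparser side-information graphs such as G(n,p) force longer codes, which is what the bounds above quantify.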
|
1107.0399
|
Vision-Based Navigation I: A navigation filter for fusing
DTM/correspondence updates
|
cs.CV cs.AI
|
An algorithm for pose and motion estimation using corresponding features in
images and a digital terrain map is proposed. Using a Digital Terrain (or
Digital Elevation) Map (DTM/DEM) as a global reference enables recovering the
absolute position and orientation of the camera. In order to do this, the DTM
is used to formulate a constraint between corresponding features in two
consecutive frames. The utilization of this data is shown to improve the
robustness and accuracy of the inertial navigation algorithm. An extended
Kalman filter is used to combine the results of the inertial navigation
algorithm and the proposed vision-based navigation algorithm. The feasibility
of these algorithms is established through numerical simulations.
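As a minimal illustration of the filtering step (a scalar Kalman filter on invented data, not the paper's full extended Kalman filter fusing inertial and vision-based estimates):

```python
import random

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Scalar Kalman filter: track a (nearly) constant state from noisy
    measurements. q = process noise, r = measurement noise variance."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                     # predict (random-walk state model)
        k = p / (p + r)            # Kalman gain
        x += k * (z - x)           # correct with the measurement
        p *= 1 - k
        estimates.append(x)
    return estimates

random.seed(42)
truth = 3.0
zs = [truth + random.gauss(0, 0.5) for _ in range(200)]
est = kalman_1d(zs)
# est[-1] ends up much closer to `truth` than a typical raw measurement.
```

The real filter carries a full pose/velocity state and linearizes the nonlinear measurement model, but the predict/gain/correct structure is the same.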
|
1107.0416
|
Beamforming on the MISO interference channel with multi-user decoding
capability
|
cs.IT math.IT
|
This paper considers the multiple-input-single-output interference channel
(MISO-IC) with interference decoding capability (IDC), so that the interference
signal can be decoded and subtracted from the received signal. On the MISO-IC
with single user decoding, transmit beamforming vectors are classically
designed to reach a compromise between mitigating the generated interference
(zero forcing of the interference) or maximizing the energy at the desired
user. The particularly intriguing problem arising in the multi-antenna IC with
IDC is that transmitters may now have the incentive to amplify the interference
generated at the non-intended receivers, in the hope that the receivers have a better
chance of decoding the interference and removing it. This notion completely
changes the previous paradigm of balancing between maximizing the desired
energy and reducing the generated interference, thus opening up a new dimension
for the beamforming design strategy.
Our contributions proceed by proving that the optimal rank of the transmit
precoders, optimal in the sense of Pareto optimality and therefore sum rate
optimality, is rank one. Then, we investigate suitable transmit beamforming
strategies for different decoding structures and characterize the Pareto
boundary. As an application of this characterization, we obtain a candidate set
of the maximum sum rate point, which at least contains the set of sum rate
optimal beamforming vectors. We derive the Maximum-Ratio-Transmission (MRT)
optimality conditions. Inspired by the MRT optimality conditions, we propose a
simple algorithm that achieves the maximum sum rate in certain scenarios and
is suboptimal in others.
|
1107.0420
|
Stable Restoration and Separation of Approximately Sparse Signals
|
cs.IT math.IT
|
This paper develops new theory and algorithms to recover signals that are
approximately sparse in some general dictionary (i.e., a basis, frame, or
over-/incomplete matrix) but corrupted by a combination of interference having
a sparse representation in a second general dictionary and measurement noise.
The algorithms and analytical recovery conditions consider varying degrees of
signal and interference support-set knowledge. Particular applications covered
by the proposed framework include the restoration of signals impaired by
impulse noise, narrowband interference, or saturation/clipping, as well as
image in-painting, super-resolution, and signal separation. Two application
examples for audio and image restoration demonstrate the efficacy of the
approach.
|
1107.0429
|
Small world yields the most effective information spreading
|
physics.soc-ph cs.SI physics.data-an
|
Spreading dynamics of information and diseases are usually analyzed by using
a unified framework and analogous models. In this paper, we propose a model to
emphasize the essential difference between information spreading and epidemic
spreading, where the memory effects, the social reinforcement and the
non-redundancy of contacts are taken into account. Under certain conditions,
the information spreads faster and broader in regular networks than in random
networks, which to some extent supports the recent experimental observation of
spreading in online society [D. Centola, Science {\bf 329}, 1194 (2010)]. At
the same time, simulation results indicate that random networks tend to be
favorable for effective spreading when the network size increases. This
challenges the validity of the above-mentioned experiment for large-scale
systems. More significantly, we show that the spreading effectiveness can be
sharply enhanced by introducing a little randomness into the regular structure,
namely the small-world networks yield the most effective information spreading.
Our work provides insights to the understanding of the role of local clustering
in information spreading.
|
1107.0434
|
Abstraction Super-structuring Normal Forms: Towards a Theory of
Structural Induction
|
cs.AI cs.FL cs.LG
|
Induction is the process by which we obtain predictive laws or theories or
models of the world. We consider the structural aspect of induction. We answer
the question as to whether we can find a finite and minmalistic set of
operations on structural elements in terms of which any theory can be
expressed. We identify abstraction (grouping similar entities) and
super-structuring (combining topologically, e.g., spatio-temporally, close
entities) as the essential structural operations in the induction process. We
show that only two more structural operations, namely, reverse abstraction and
reverse super-structuring (the duals of abstraction and super-structuring
respectively) suffice in order to exploit the full power of Turing-equivalent
generative grammars in induction. We explore the implications of this theorem
with respect to the nature of hidden variables, radical positivism and the
2-century old claim of David Hume about the principles of connexion among
ideas.
|
1107.0478
|
Polar Codes with Mixed-Kernels
|
cs.IT math.IT
|
A generalization of the polar coding scheme called mixed-kernels is
introduced. This generalization exploits several homogeneous kernels over
alphabets of different sizes. An asymptotic analysis of the proposed scheme
shows that its polarization properties are strongly related to the ones of the
constituent kernels. Simulations of finite-length instances of the scheme
indicate their advantages both in error correction performance and complexity
compared to the known polar coding structures.
|
1107.0539
|
Corporate competition: A self-organized network
|
physics.soc-ph cs.SI nlin.AO
|
A substantial number of studies have extended the work on universal
properties in physical systems to complex networks in social, biological, and
technological systems. In this paper, we present a complex networks perspective
on interfirm organizational networks by mapping, analyzing and modeling the
spatial structure of a large interfirm competition network across a variety of
sectors and industries within the United States. We propose two micro-dynamic
models that are able to reproduce empirically observed characteristics of
competition networks as a natural outcome of a minimal set of general
mechanisms governing the formation of competition networks. Both models, which
utilize different approaches yet apply common principles to network formation,
give comparable results. There is an asymmetry between companies that are
considered competitors, and companies that consider others as their
competitors. All companies only consider a small number of other companies as
competitors; however, there are a few companies that are considered as
competitors by many others. Geographically, the density of corporate
headquarters strongly correlates with local population density, and the
probability two firms are competitors declines with geographic distance. We
construct these properties by growing a corporate network with competitive
links using random incorporations modulated by population density and
geographic distance. Our new analysis, methodology and empirical results are
relevant to various phenomena of social and market behavior, and have
implications to research fields such as economic geography, economic sociology,
and regional economic development.
|
1107.0550
|
3D Terrestrial lidar data classification of complex natural scenes using
a multi-scale dimensionality criterion: applications in geomorphology
|
cs.CV physics.geo-ph
|
3D point clouds of natural environments relevant to problems in geomorphology
often require classification of the data into elementary relevant classes. A
typical example is the separation of riparian vegetation from ground in fluvial
environments, the distinction between fresh surfaces and rockfall in cliff
environments, or more generally the classification of surfaces according to
their morphology. Natural surfaces are heterogeneous and their distinctive
properties are seldom defined at a unique scale, prompting the use of
multi-scale criteria to achieve a high degree of classification success. We
have thus defined a multi-scale measure of the point cloud dimensionality
around each point, which characterizes the local 3D organization. We can thus
monitor how the local cloud geometry behaves across scales. We present the
technique and illustrate its efficiency in separating riparian vegetation from
ground and classifying a mountain stream as vegetation, rock, gravel or water
surface. In these two cases, separating the vegetation from ground or other
classes achieves an accuracy larger than 98%. Comparison with a single-scale
approach shows the superiority of the multi-scale analysis in enhancing class
separability and spatial resolution. The technique is robust to missing data,
shadow zones and changes in point density within the scene. The classification
is fast and accurate and can account for some degree of intra-class
morphological variability such as different vegetation types. A probabilistic
confidence in the classification result is given at each point, allowing the
user to remove the points for which the classification is uncertain. The
process can be either fully automated or fully customized by the user,
including a graphical definition of the classifiers. Although developed for
fully 3D data, the method can be readily applied to 2.5D airborne lidar data.
|
1107.0622
|
Modelling and Control of Blowing-Venting Operations in Manned Submarines
|
math.OC cs.SY
|
Motivated by the study of the potential use of blowing and venting operations
of ballast tanks in manned submarines as a complementary or alternative control
system for manoeuvring, we first propose a mathematical model for these
operations. Then we consider the coupling of blowing and venting with the
Feldman, variable mass, coefficient based hydrodynamic model for the equations
of motion. The final complete model is composed of a system of twenty-four
nonlinear ordinary differential equations. In a second part, we carry out a
rigorous mathematical analysis of the model: existence of a solution is proved.
As one of the possible applications of this model in naval engineering
problems, we consider the problem of roll control in an emergency rising
manoeuvre by using only blowing and venting. To this end, we formulate a
suitable constrained, nonlinear, optimal control problem where controls are
linked to the variable aperture of blowing and venting valves of each of the
tanks. Existence of a solution for this problem is also proved. Finally, we
address the numerical resolution of the control problem by using a descent
algorithm. Numerical experiments seem to indicate that, indeed, an appropriate
use of blowing and venting operations may help in the control of this emergency
manoeuvre.
|
1107.0624
|
Infinitely many constrained inequalities for the von Neumann entropy
|
quant-ph cs.IT math.IT
|
We exhibit infinitely many new, constrained inequalities for the von Neumann
entropy, and show that they are independent of each other and the known
inequalities obeyed by the von Neumann entropy (basically strong
subadditivity). The new inequalities were proved originally by Makarychev et
al. [Commun. Inf. Syst., 2(2):147-166, 2002] for the Shannon entropy, using
properties of probability distributions. Our approach extends the proof of the
inequalities to the quantum domain, and establishes their independence in both
the quantum and the classical cases.
|
1107.0639
|
Bounds on the capacity of OFDM underspread frequency selective fading
channels
|
cs.IT math.IT
|
The analysis of the channel capacity in the absence of prior channel
knowledge (noncoherent channel) has gained increasing interest in recent years,
but it is still unknown for the general case. In this paper we derive bounds on
the capacity of the noncoherent, underspread complex Gaussian, orthogonal
frequency division multiplexing (OFDM), wide sense stationary channel with
uncorrelated scattering (WSSUS), under a peak power constraint or a constraint
on the second and fourth moments of the transmitted signal. These bounds are
characterized only by the system signal-to-noise ratio (SNR) and by a newly
defined quantity termed effective coherence time. Analysis of the effective
coherence time reveals that it can be interpreted as the length of a block in
the block fading model in which a system with the same SNR will achieve the
same capacity as in the analyzed channel. Unlike commonly used coherence time
definitions, the effective coherence time is shown to depend on the SNR and to
be a nonincreasing function of it. We show that for low SNR the capacity is
proportional to the effective coherence time, while for higher SNR the coherent
channel capacity can be achieved provided that the effective coherence time is
large enough.
|
1107.0674
|
"Memory foam" approach to unsupervised learning
|
nlin.AO cs.LG
|
We propose an alternative approach to construct an artificial learning
system, which naturally learns in an unsupervised manner. Its mathematical
prototype is a dynamical system, which automatically shapes its vector field in
response to the input signal. The vector field converges to a gradient of a
multi-dimensional probability density distribution of the input process, taken
with negative sign. The most probable patterns are represented by the stable
fixed points, whose basins of attraction are formed automatically. The
performance of this system is illustrated with musical signals.
|
1107.0681
|
Does Quantum Interference exist in Twitter?
|
cs.SI cs.IT math.IT physics.soc-ph
|
It becomes more difficult to explain social information transfer phenomena
using classic models based merely on Shannon Information Theory (SIT) and
Classic Probability Theory (CPT), because the transfer process in the social
world is rich in semantics and highly contextualized. This paper uses Twitter
data to explore whether the traditional models can interpret information
transfer in social networks, and whether quantum-like phenomena can be spotted
in social networks. Our main contributions are: (1) SIT and CPT fail to
interpret the information transfer occurring in Twitter; (2) quantum
interference exists in Twitter; and (3) a mathematical model is proposed to
elucidate the spotted quantum phenomena.

|
1107.0789
|
Distributed Matrix Completion and Robust Factorization
|
cs.LG cs.DS cs.NA math.NA stat.ML
|
If learning methods are to scale to the massive sizes of modern datasets, it
is essential for the field of machine learning to embrace parallel and
distributed computing. Inspired by the recent development of matrix
factorization methods with rich theory but poor computational complexity and by
the relative ease of mapping matrices onto distributed architectures, we
introduce a scalable divide-and-conquer framework for noisy matrix
factorization. We present a thorough theoretical analysis of this framework in
which we characterize the statistical errors introduced by the "divide" step
and control their magnitude in the "conquer" step, so that the overall
algorithm enjoys high-probability estimation guarantees comparable to those of
its base algorithm. We also present experiments in collaborative filtering and
video background modeling that demonstrate the near-linear to superlinear
speed-ups attainable with this approach.
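A minimal numpy sketch of the divide-and-conquer idea (a simplified variant, with truncated SVD standing in as the base algorithm; the matrix sizes, rank, and recombination rule are our invented illustration): factor column blocks independently in the "divide" step, then recombine in the "conquer" step by projecting every block onto a column space estimated from one block.

```python
import numpy as np

rng = np.random.default_rng(1)
# Noisy low-rank matrix (rank 3); sizes chosen arbitrarily for illustration.
M = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 80))
M += 0.01 * rng.normal(size=M.shape)

def base_factor(A, r=3):
    """Base algorithm: rank-r truncated SVD of a submatrix."""
    u, s, vt = np.linalg.svd(A, full_matrices=False)
    return (u[:, :r] * s[:r]) @ vt[:r]

# Divide: factor each column block independently (parallelizable).
blocks = np.array_split(np.arange(M.shape[1]), 4)
parts = [base_factor(M[:, b]) for b in blocks]

# Conquer: project all blocks onto the column space estimated from block 0.
Q = np.linalg.svd(parts[0], full_matrices=False)[0][:, :3]
M_hat = np.zeros_like(M)
for b, P in zip(blocks, parts):
    M_hat[:, b] = Q @ (Q.T @ P)

print(np.linalg.norm(M_hat - M) / np.linalg.norm(M))  # small relative error
```
The statistical error of the final estimate is driven by the subproblem factorizations, which is the effect the paper's analysis controls.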
|
1107.0803
|
Motion Planning via Manifold Samples
|
cs.CG cs.RO
|
We present a general and modular algorithmic framework for path planning of
robots. Our framework combines geometric methods for exact and complete
analysis of low-dimensional configuration spaces, together with practical,
considerably simpler sampling-based approaches that are appropriate for higher
dimensions. In order to facilitate the transfer of advanced geometric
algorithms into practical use, we suggest taking samples that are entire
low-dimensional manifolds of the configuration space that capture the
connectivity of the configuration space much better than isolated point
samples. Geometric algorithms for analysis of low-dimensional manifolds then
provide powerful primitive operations. The modular design of the framework
enables independent optimization of each modular component. Indeed, we have
developed, implemented and optimized a primitive operation for complete and
exact combinatorial analysis of a certain set of manifolds, using arrangements
of curves of rational functions and concepts of generic programming. This in
turn enabled us to implement our framework for the concrete case of a polygonal
robot translating and rotating amidst polygonal obstacles. We demonstrate that
the integration of several carefully engineered components leads to significant
speedup over the popular PRM sampling-based algorithm, which represents the
more simplistic approach that is prevalent in practice. We foresee possible
extensions of our framework to solving high-dimensional problems beyond motion
planning.
|
1107.0845
|
Automatic Road Lighting System (ARLS) Model Based on Image Processing of
Moving Object
|
cs.CV
|
Using a toy vehicle (hereafter called the vehicle) as a moving object, an
automatic road lighting system (ARLS) model is constructed. A digital video
camera at 25 fps is used to capture the vehicle's motion as it moves along the
test segment of the road. Captured images are then processed to calculate the
vehicle speed. This speed information, together with the vehicle position, is
then used to control the lighting system along the path the vehicle passes.
The length of the road test segment is 1 m, the video camera is positioned
about 1.1 m above the test segment, and the toy vehicle's dimensions are 13 cm
\times 9.3 cm. In this model, the maximum speed that ARLS can handle is about
1.32 m/s, and the highest performance, about 91%, is obtained at a speed of 0.93 m/s.
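The speed computation behind such a system can be sketched as follows (the pixel scale is an assumed value, not from the paper): with a 25 fps camera, the vehicle speed follows from the centroid displacement between consecutive frames.

```python
FPS = 25.0                       # frame rate used in the paper
METERS_PER_PIXEL = 1.0 / 640.0   # assumed: the 1 m test segment spans 640 px

def speed_mps(x_prev_px, x_curr_px, frames_elapsed=1):
    """Vehicle speed (m/s) from centroid positions in two frames."""
    dx_m = abs(x_curr_px - x_prev_px) * METERS_PER_PIXEL
    return dx_m * FPS / frames_elapsed

# A centroid that moves 24 pixels between consecutive frames:
print(speed_mps(100, 124))  # 24/640 m over 1/25 s = 0.9375 m/s
```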
|
1107.0878
|
Learning to play public good games
|
physics.soc-ph cs.SI q-bio.PE
|
We extend recent analyses of stochastic effects in game dynamical learning to
cases of multi-player games, and to games defined on networked structures. By
means of an expansion in the noise strength we consider the weak-noise limit,
and present an analytical computation of spectral properties of fluctuations in
multi-player public good games. This extends existing work on two-player games.
In particular we show that coherent cycles may emerge driven by noise in the
adaptation dynamics. These phenomena are not too dissimilar from cyclic
strategy switching observed in experiments of behavioural game theory.
|
1107.0922
|
GraphLab: A Distributed Framework for Machine Learning in the Cloud
|
cs.LG
|
Machine Learning (ML) techniques are indispensable in a wide range of fields.
Unfortunately, the exponential increase of dataset sizes is rapidly extending
the runtime of sequential algorithms and threatening to slow future progress in
ML. With the promise of affordable large-scale parallel computing, Cloud
systems offer a viable platform to resolve the computational challenges in ML.
However, designing and implementing efficient, provably correct distributed ML
algorithms is often prohibitively challenging. To enable ML researchers to
easily and efficiently use parallel systems, we introduced the GraphLab
abstraction which is designed to represent the computational patterns in ML
algorithms while permitting efficient parallel and distributed implementations.
In this paper we provide a formal description of the GraphLab parallel
abstraction and present an efficient distributed implementation. We conduct a
comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms
using real large-scale data and a 64 node EC2 cluster of 512 processors. We
find that GraphLab achieves orders of magnitude performance gains over Hadoop
while performing comparably or superior to hand-tuned MPI implementations.
|
1107.0927
|
Application of Predictive Model Selection to Coupled Models
|
stat.AP cs.IT math.IT physics.data-an
|
A predictive Bayesian model selection approach is presented to discriminate
coupled models used to predict an unobserved quantity of interest (QoI). The
need for accurate predictions arises in a variety of critical applications such
as climate, aerospace and defense. A model problem is introduced to study the
prediction yielded by the coupling of two physics/sub-components. For each
single physics domain, a set of model classes and a set of sensor observations
are available. A goal-oriented algorithm using a predictive approach to
Bayesian model selection is then used to select the combination of single
physics models that best predict the QoI. It is shown that the best coupled
model for prediction is the one that provides the most robust predictive
distribution for the QoI.
|
1107.0989
|
Geometry of Complex Networks and Topological Centrality
|
cs.DM cs.SI physics.soc-ph
|
We explore the geometry of complex networks in terms of an n-dimensional
Euclidean embedding represented by the Moore-Penrose pseudo-inverse of the
graph Laplacian $(\bb L^+)$. The squared distance of a node $i$ to the origin
in this n-dimensional space $(l^+_{ii})$, yields a topological centrality index
$(\mathcal{C}^{*}(i) = 1/l^+_{ii})$ for node $i$. In turn, the sum of
reciprocals of individual node structural centralities,
$\sum_{i}1/\mathcal{C}^*(i) = \sum_{i} l^+_{ii}$, i.e. the trace of $\bb L^+$,
yields the well-known Kirchhoff index $(\mathcal{K})$, an overall structural
descriptor for the network. In addition to this geometric interpretation, we
provide alternative interpretations of the proposed indices to reveal their
true topological characteristics: first, in terms of forced detour overheads
and frequency of recurrences in random walks that has an interesting analogy to
voltage distributions in the equivalent electrical network; and then as the
average connectedness of $i$ in all the bi-partitions of the graph. These
interpretations respectively help establish the topological centrality
$(\mathcal{C}^{*}(i))$ of node $i$ as a measure of its overall position as well
as its overall connectedness in the network; thus reflecting the robustness of
node $i$ to random multiple edge failures. Through empirical evaluations using
synthetic and real world networks, we demonstrate how the topological
centrality is better able to distinguish nodes in terms of their structural
roles in the network and, along with Kirchhoff index, is appropriately
sensitive to perturbations/rewirings in the network.
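These quantities are straightforward to compute for a small graph. A minimal sketch (the 4-node star is our invented example), using the Moore-Penrose pseudo-inverse of the Laplacian, with the Kirchhoff index taken as the trace of $L^+$ following the abstract's convention:

```python
import numpy as np

# Adjacency matrix of a 4-node star; node 0 is the hub.
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
L_plus = np.linalg.pinv(L)          # Moore-Penrose pseudo-inverse

centrality = 1.0 / np.diag(L_plus)  # C*(i) = 1 / l+_ii
kirchhoff = np.trace(L_plus)        # K = sum_i l+_ii

print(centrality)  # the hub (node 0) has the highest topological centrality
print(kirchhoff)
```
As expected, the hub, being both central and well connected, gets the largest $\mathcal{C}^{*}$, while the leaves share a lower common value.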
|
1107.0998
|
An Information Theoretic Representation of Agent Dynamics as Set
Intersections
|
cs.IT cs.AI math.IT
|
We represent agents as sets of strings. Each string encodes a potential
interaction with another agent or environment. We represent the total set of
dynamics between two agents as the intersection of their respective sets of
strings, and we prove complexity properties of player interactions using Algorithmic
Information Theory. We show how the proposed construction is compatible with
Universal Artificial Intelligence, in that the AIXI model can be seen as
universal with respect to interaction.
|
1107.1011
|
Hamilton-Jacobi Equations and Two-Person Zero-Sum Differential Games
with Unbounded Controls
|
math.OC cs.SY
|
A two-person zero-sum differential game with unbounded controls is
considered. Under proper coercivity conditions, the upper and lower value
functions are characterized as the unique viscosity solutions to the
corresponding upper and lower Hamilton--Jacobi--Isaacs equations, respectively.
Consequently, when the Isaacs' condition is satisfied, the upper and lower
value functions coincide, leading to the existence of the value function. Due
to the unboundedness of the controls, the corresponding upper and lower
Hamiltonians grow superlinearly in the gradient of the upper and lower value
functions, respectively. A uniqueness theorem for viscosity solutions to
Hamilton--Jacobi equations involving such Hamiltonians is proved,
without relying on the convexity/concavity of the Hamiltonian. Also, it is
shown that the assumed coercivity conditions guaranteeing the finiteness of the
upper and lower value functions are sharp in some sense.
|
1107.1020
|
A Novel Multicriteria Group Decision Making Approach With Intuitionistic
Fuzzy SIR Method
|
cs.AI
|
The superiority and inferiority ranking (SIR) method is a generalization of the
well-known PROMETHEE method, which can deal more efficiently with
multi-criterion decision making (MCDM) problems. Intuitionistic fuzzy sets
(IFSs), an important extension of fuzzy sets (FSs), include both membership
functions and non-membership functions and can be used to describe uncertain
information more precisely. In the real world, decision situations usually
arise in uncertain environments and involve multiple individuals who have their
own points of view on handling decision problems. In order to solve the
uncertain group MCDM problem, we propose a novel intuitionistic fuzzy SIR
method in this paper. This approach uses intuitionistic fuzzy aggregation
operators and SIR ranking methods to handle uncertain information; integrate
individual opinions into group opinions; make decisions on multiple criteria;
and finally structure a specific decision map. The proposed approach is
illustrated in a simulation of a group decision making problem related to
supply chain management.
|
1107.1058
|
Online Vehicle Detection For Estimating Traffic Status
|
cs.CV
|
We propose a traffic congestion estimation system based on an unsupervised
online learning algorithm. The system relies on neither background extraction
nor motion detection. It extracts local features inside detection regions of
variable size, which are drawn on lanes in advance. The extracted features are
then clustered into two classes using K-means and Gaussian Mixture Models (GMM).
A Bayes classifier detects vehicles according to cluster information that is
kept updated by an online EM algorithm while the system runs. Experimental
results show that our system can be adapted to various traffic scenes for
estimating traffic status.
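A minimal sketch of the clustering-plus-Bayes-classification step (numpy only; the 1-D "local feature" and its two modes are our invented stand-ins for the paper's features, and batch EM replaces the paper's online EM): fit a two-component Gaussian mixture, then classify a new feature by which component wins under Bayes' rule.

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented 1-D local features: low values ~ empty road, high values ~ vehicle.
feats = np.concatenate([rng.normal(0.2, 0.05, 300),
                        rng.normal(0.8, 0.05, 100)])

# Fit a two-component 1-D Gaussian mixture with EM.
mu = np.array([feats.min(), feats.max()])   # spread-out initialization
var = np.array([0.01, 0.01])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each feature
    p = pi * np.exp(-0.5 * (feats[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture parameters
    n_k = r.sum(axis=0)
    mu = (r * feats[:, None]).sum(axis=0) / n_k
    var = (r * (feats[:, None] - mu) ** 2).sum(axis=0) / n_k
    pi = n_k / len(feats)

def is_vehicle(x):
    """Bayes rule: does the higher-mean ('vehicle') component win at x?"""
    p = pi * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return bool(p.argmax() == mu.argmax())

print(is_vehicle(0.75), is_vehicle(0.25))
```
In an online variant, the E- and M-steps would be applied incrementally to each new feature, which is how the cluster information stays current while the system runs.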
|
1107.1066
|
Families of twisted tensor product codes
|
math.CO cs.IT math.IT
|
Using geometric properties of the variety $\cV_{r,t}$, the image under the
Grassmannian map of a Desarguesian $(t-1)$-spread of $\PG(rt-1,q)$, we
introduce error correcting codes related to the twisted tensor product
construction, producing several families of constacyclic codes. We exactly
determine the parameters of these codes and characterise the words of minimum
weight.
|
1107.1081
|
Spatial Features for Multi-Font/Multi-Size Kannada Numerals and Vowels
Recognition
|
cs.CV
|
This paper presents multi-font/multi-size Kannada numeral and vowel
recognition based on spatial features. Directional spatial features, viz.
stroke density, stroke length, and the number of strokes in an image, are
employed as potential features to characterize the printed Kannada numerals
and vowels. Based on these features, 1100 numerals and 1400 vowels are
classified with multi-class Support Vector Machines (SVM). The proposed system
achieves recognition accuracies of 98.45% and 90.64% for numerals and vowels,
respectively.
|
1107.1104
|
SERIMI - Resource Description Similarity, RDF Instance Matching and
Interlinking
|
cs.DB
|
The interlinking of datasets published in the Linked Data Cloud is a
challenging problem and a key factor for the success of the Semantic Web.
Manual rule-based methods are the most effective solution for the problem, but
they require skilled human data publishers to go through a laborious,
error-prone and time-consuming process of manually describing rules that map
instances between two datasets. Thus, an automatic approach to solving this
problem is more than welcome. In this paper, we propose a novel interlinking
method, SERIMI, for solving this problem automatically. SERIMI matches
instances between a source and a target dataset, without prior knowledge of
the data, domain or schema of these datasets. Experiments conducted with
benchmark collections demonstrate that our approach considerably outperforms
state-of-the-art automatic approaches for solving the interlinking problem on
the Linked Data Cloud.
|
1107.1119
|
Integrating Generic Sensor Fusion Algorithms with Sound State
Representations through Encapsulation of Manifolds
|
cs.RO cs.CV cs.MS
|
Common estimation algorithms, such as least squares estimation or the Kalman
filter, operate on a state in a state space S that is represented as a
real-valued vector. However, for many quantities, most notably orientations in
3D, S is not a vector space, but a so-called manifold, i.e. it behaves like a
vector space locally but has a more complex global topological structure. For
integrating these quantities, several ad-hoc approaches have been proposed.
Here, we present a principled solution to this problem where the structure of
the manifold S is encapsulated by two operators, state displacement [+]:S x R^n
--> S and its inverse [-]: S x S --> R^n. These operators provide a local
vector-space view \delta --> x [+] \delta around a given state x. Generic
estimation algorithms can then work on the manifold S mainly by replacing +/-
with [+]/[-] where appropriate. We analyze these operators axiomatically, and
demonstrate their use in least-squares estimation and the Unscented Kalman
Filter. Moreover, we exploit the idea of encapsulation from a software
engineering perspective in the Manifold Toolkit, where the [+]/[-] operators
mediate between a "flat-vector" view for the generic algorithm and a
"named-members" view for the problem specific functions.
|
1107.1128
|
AISMOTIF-An Artificial Immune System for DNA Motif Discovery
|
cs.CE
|
Discovery of transcription factor binding sites is a much-explored and
still-evolving area of research in functional genomics. Many computational
tools have been developed for finding motifs, each with its own advantages and
disadvantages. Most of these algorithms need prior knowledge about the data to
construct background models, and no single technique can be considered best
for finding regulatory motifs. This paper proposes an artificial immune system
based algorithm for finding transcription factor binding sites, or motifs,
together with two new weighted scores for motif evaluation. The algorithm is
enumerative, but sufficient pruning of the pattern search space has been
incorporated using immune system concepts. The performance of AISMOTIF has
been evaluated by comparing it with eight state-of-the-art composite motif
discovery algorithms; AISMOTIF predicts known motifs as well as new motifs
from the benchmark dataset without any prior knowledge about the data.
|
1107.1149
|
The dimension of ergodic random sequences
|
cs.IT math.IT
|
Let \mu be a computable ergodic shift-invariant measure over the Cantor
space. Providing a constructive proof of Shannon-McMillan-Breiman theorem,
V'yugin proved that if a sequence x is Martin-L\"of random w.r.t. \mu then the
strong effective dimension Dim(x) of x equals the entropy of \mu. Whether its
effective dimension dim(x) also equals the entropy was left as an open
question. In this paper we settle this problem, providing a positive answer. A
key step in the proof consists in extending recent results on Birkhoff's
ergodic theorem for Martin-L\"of random sequences.
|
1107.1155
|
Limits of modularity maximization in community detection
|
physics.soc-ph cs.SI
|
Modularity maximization is the most popular technique for the detection of
community structure in graphs. The resolution limit of the method is supposedly
solvable with the introduction of modified versions of the measure, with
tunable resolution parameters. We show that multiresolution modularity suffers
from two opposite coexisting problems: the tendency to merge small subgraphs,
which dominates when the resolution is low; the tendency to split large
subgraphs, which dominates when the resolution is high. In benchmark networks
with heterogeneous distributions of cluster sizes, the simultaneous elimination
of both biases is not possible, and multiresolution modularity is incapable of
recovering the planted community structure for any value of the resolution
parameter, even when it is pronounced and easily detectable by other methods.
This holds for other multiresolution techniques and it is likely to be a
general problem of methods based on global optimization.
|
1107.1163
|
Conditional Gradient Algorithms for Rank-One Matrix Approximations with
a Sparsity Constraint
|
math.OC cs.SY
|
The sparsity constrained rank-one matrix approximation problem is a difficult
mathematical optimization problem which arises in a wide array of useful
applications in engineering, machine learning and statistics, and the design of
algorithms for this problem has attracted intensive research activities. We
introduce an algorithmic framework, called ConGradU, that unifies a variety of
seemingly different algorithms that have been derived from disparate
approaches, and allows for deriving new schemes. Building on the old and
well-known conditional gradient algorithm, ConGradU is a simplified version
with unit step size and yields a generic algorithm which either is given by an
analytic formula or requires a very low computational complexity. Mathematical
properties are systematically developed and numerical experiments are given.
|
1107.1222
|
On the information-theoretic structure of distributed measurements
|
cs.IT cs.DC cs.NE math.CT math.IT nlin.CG
|
The internal structure of a measuring device, which depends on what its
components are and how they are organized, determines how it categorizes its
inputs. This paper presents a geometric approach to studying the internal
structure of measurements performed by distributed systems such as
probabilistic cellular automata. It constructs the quale, a family of sections
of a suitably defined presheaf, whose elements correspond to the measurements
performed by all subsystems of a distributed system. Using the quale we
quantify (i) the information generated by a measurement; (ii) the extent to
which a measurement is context-dependent; and (iii) whether a measurement is
decomposable into independent submeasurements, which turns out to be equivalent
to context-dependence. Finally, we show that only indecomposable measurements
are more informative than the sum of their submeasurements.
|
1107.1229
|
Characteristic Characteristics
|
stat.AP cs.IR physics.data-an
|
While five-factor models of personality are widespread, there is still not
universal agreement on this as a structural framework. Part of the reason for
the lingering debate is its dependence on factor analysis. In particular,
derivation or refutation of the model via other statistical means is a
worthwhile project. In this paper we use the methodology of spectral clustering
to articulate the structure in the dataset of responses of 20,993 subjects on a
300-item version of the IPIP NEO personality questionnaire, and we compare
our results to those obtained from a factor analytic solution. We found support
for five- and six-cluster solutions. The five-cluster solution was similar to a
conventional five-factor solution, but the six-cluster and six-factor solutions
differed significantly, and only the six-cluster solution was readily
interpretable: it gave a model similar to the HEXACO model. We suggest that
spectral clustering provides a robust alternative view of personality data.
|
1107.1257
|
Evidence-Based Filters for Signal Detection: Application to Evoked Brain
Responses
|
physics.comp-ph cs.CV physics.med-ph
|
Template-based signal detection most often relies on computing a correlation,
or a dot product, between an incoming data stream and a signal template. Such a
correlation results in an ongoing estimate of the magnitude of the signal in
the data stream. However, it does not directly indicate the presence or absence
of the signal. The problem is really one of model-testing, and the relevant
quantity is the Bayesian evidence (marginal likelihood) of the signal model.
Given a signal template and an ongoing data stream, we have developed an
evidence-based filter that computes the Bayesian evidence that a signal is
present in the data. We demonstrate this algorithm by applying it to
brain-machine interface (BMI) data obtained by recording human brain electrical
activity, or electroencephalography (EEG). A very popular and effective
paradigm in EEG-based BMI is based on the detection of the P300 evoked brain
response which is generated in response to particular sensory stimuli. The goal
is to detect the presence of a P300 signal in ongoing EEG activity as
accurately and as fast as possible. Our algorithm uses a subject-specific P300
template to compute the Bayesian evidence that a sliding window of EEG data
contains the signal. The efficacy of this algorithm is demonstrated by
comparing receiver operating characteristic (ROC) curves of the evidence-based
filter to the usual correlation method. Our results show a significant
improvement in single-trial P300 detection. The evidence-based filter promises
to improve the accuracy and speed of the detection of evoked brain responses in
BMI applications, as well as the detection of template signals in more general
signal processing applications.
|
1107.1270
|
High-Dimensional Gaussian Graphical Model Selection: Walk Summability
and Local Separation Criterion
|
cs.LG math.ST stat.TH
|
We consider the problem of high-dimensional Gaussian graphical model
selection. We identify a set of graphs for which an efficient estimation
algorithm exists, and this algorithm is based on thresholding of empirical
conditional covariances. Under a set of transparent conditions, we establish
structural consistency (or sparsistency) for the proposed algorithm, when the
number of samples n=omega(J_{min}^{-2} log p), where p is the number of
variables and J_{min} is the minimum (absolute) edge potential of the graphical
model. The sufficient conditions for sparsistency are based on the notion of
walk-summability of the model and the presence of sparse local vertex
separators in the underlying graph. We also derive novel non-asymptotic
necessary conditions on the number of samples required for sparsistency.
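A simplified illustration of estimation by thresholding (the paper conditions on small local separator sets; for brevity we threshold full partial covariances via the empirical precision matrix, on an invented 3-variable chain):

```python
import numpy as np

rng = np.random.default_rng(2)
# Invented ground truth: a 3-variable chain 0 - 1 - 2 (no edge between 0 and 2).
J = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.3],
              [0.0, 0.3, 1.0]])          # sparse precision matrix
X = rng.multivariate_normal(np.zeros(3), np.linalg.inv(J), size=5000)

# Empirical conditional covariances (here: full partial covariances),
# thresholded to recover the edge set.
emp_prec = np.linalg.inv(np.cov(X.T))
est_edges = (np.abs(emp_prec) > 0.15).astype(int)
np.fill_diagonal(est_edges, 0)
print(est_edges)  # recovers the chain: edges (0,1) and (1,2) only
```
The threshold plays the role of the minimum edge potential J_min in the sample-complexity bound: weaker true edges require more samples before they separate from the noise floor.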
|
1107.1276
|
Experiment-driven Characterization of Full-Duplex Wireless Systems
|
cs.IT math.IT
|
We present an experiment-based characterization of passive suppression and
active self-interference cancellation mechanisms in full-duplex wireless
communication systems. In particular, we consider passive suppression due to
antenna separation at the same node, and active cancellation in analog and/or
digital domain. First, we show that the average amount of cancellation
increases for active cancellation techniques as the received self-interference
power increases. Our characterization of the average cancellation as a function
of the self-interference power allows us to show that for a constant
signal-to-interference ratio at the receiver antenna (before any active
cancellation is applied), the rate of a full-duplex link increases as the
self-interference power increases. Second, we show that applying digital
cancellation after analog cancellation can sometimes increase the
self-interference, and thus digital cancellation is more effective when applied
selectively based on measured suppression values. Third, we complete our study
of the impact of self-interference cancellation mechanisms by characterizing
the probability distribution of the self-interference channel before and after
cancellation.
|
1107.1283
|
Spectral Methods for Learning Multivariate Latent Tree Structure
|
cs.LG stat.ML
|
This work considers the problem of learning the structure of multivariate
linear tree models, which include a variety of directed tree graphical models
with continuous, discrete, and mixed latent variables such as linear-Gaussian
models, hidden Markov models, Gaussian mixture models, and Markov evolutionary
trees. The setting is one where we only have samples from certain observed
variables in the tree, and our goal is to estimate the tree structure (i.e.,
the graph of how the underlying hidden variables are connected to each other
and to the observed variables). We propose the Spectral Recursive Grouping
algorithm, an efficient and simple bottom-up procedure for recovering the tree
structure from independent samples of the observed variables. Our finite sample
size bounds for exact recovery of the tree structure reveal certain natural
dependencies on underlying statistical and structural properties of the
underlying joint distribution. Furthermore, our sample complexity guarantees
have no explicit dependence on the dimensionality of the observed variables,
making the algorithm applicable to many high-dimensional settings. At the heart
of our algorithm is a spectral quartet test for determining the relative
topology of a quartet of variables from second-order statistics.
|
1107.1322
|
Text Classification: A Sequential Reading Approach
|
cs.AI cs.IR cs.LG
|
We propose to model the text classification process as a sequential decision
process. In this process, an agent learns to classify documents into topics
while reading the document sentences sequentially, and learns to stop as soon
as enough information has been read to make a decision. The proposed algorithm
is based on modeling Text Classification as a Markov Decision Process and
learns by using Reinforcement Learning. Experiments on four different classical
mono-label corpora show that the proposed approach performs comparably to
classical SVM approaches for large training sets, and better for small training
sets. In addition, the model automatically adapts its reading process to the
quantity of training information provided.
|
1107.1345
|
Distances and Riemannian metrics for multivariate spectral densities
|
math.OC cs.SY math.ST stat.TH
|
We first introduce a class of divergence measures between power spectral
density matrices. These are derived by comparing the suitability of different
models in the context of optimal prediction. Distances between "infinitesimally
close" power spectra are quadratic, and hence, they induce a
differential-geometric structure. We study the corresponding Riemannian metrics
and, for a particular case, provide explicit formulae for the corresponding
geodesics and geodesic distances. The close connection between the geometry of
power spectra and the geometry of the Fisher-Rao metric is noted.
|
1107.1347
|
Sequential, successive, and simultaneous decoders for
entanglement-assisted classical communication
|
quant-ph cs.IT math.IT
|
Bennett et al. showed that allowing shared entanglement between a sender and
receiver before communication begins dramatically simplifies the theory of
quantum channels, and these results suggest that it would be worthwhile to
study other scenarios for entanglement-assisted classical communication. In
this vein, the present paper makes several contributions to the theory of
entanglement-assisted classical communication. First, we rephrase the
Giovannetti-Lloyd-Maccone sequential decoding argument as a more general
"packing lemma" and show that it gives an alternate way of achieving the
entanglement-assisted classical capacity. Next, we show that a similar
sequential decoder can achieve the Hsieh-Devetak-Winter region for
entanglement-assisted classical communication over a multiple access channel.
Third, we prove the existence of a quantum simultaneous decoder for
entanglement-assisted classical communication over a multiple access channel
with two senders. This result implies a solution of the quantum simultaneous
decoding conjecture for unassisted classical communication over quantum
multiple access channels with two senders, but the three-sender case still
remains open (Sen recently and independently solved this unassisted two-sender
case with a different technique). We then leverage this result to recover the
known regions for unassisted and assisted quantum communication over a quantum
multiple access channel, though our proof exploits a coherent quantum
simultaneous decoder. Finally, we determine an achievable rate region for
communication over an entanglement-assisted bosonic multiple access channel and
compare it with the Yen-Shapiro outer bound for unassisted communication over
the same channel.
|
1107.1358
|
On the Furthest Hyperplane Problem and Maximal Margin Clustering
|
cs.CC cs.DS cs.LG
|
This paper introduces the Furthest Hyperplane Problem (FHP), which is an
unsupervised counterpart of Support Vector Machines. Given a set of n points in
R^d, the objective is to produce the hyperplane (passing through the origin)
which maximizes the separation margin, that is, the minimal distance between
the hyperplane and any input point. To the best of our knowledge, this is the
first paper achieving provable results regarding FHP. We provide both lower and
upper bounds to this NP-hard problem. First, we give a simple randomized
algorithm whose running time is n^O(1/{\theta}^2) where {\theta} is the optimal
separation margin. We show that its exponential dependency on 1/{\theta}^2 is
tight, up to sub-polynomial factors, assuming SAT cannot be solved in
sub-exponential time. Next, we give an efficient approximation algorithm. For
any {\alpha} \in [0, 1], the algorithm produces a hyperplane whose distance
from at least 1 - 5{\alpha} fraction of the points is at least {\alpha} times
the optimal separation margin. Finally, we show that FHP does not admit a PTAS
by presenting a gap preserving reduction from a particular version of the PCP
theorem.
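
The objective the abstract describes — the minimal distance between a hyperplane through the origin and any input point — can be sketched in a few lines of numpy. This is an illustration of the quantity being maximized, not the paper's algorithm; the function name and example points are made up.

```python
import numpy as np

def margin(w, X):
    """Separation margin of the hyperplane through the origin with
    normal w: the minimal distance from any row of X to the plane."""
    w = np.asarray(w, dtype=float)
    return np.min(np.abs(X @ w)) / np.linalg.norm(w)

# Two points in R^2; the hyperplane x + y = 0 (normal (1, 1)) stays
# clear of both of them.
X = np.array([[1.0, 1.0], [2.0, 0.0]])
print(margin([1.0, 1.0], X))  # ≈ 1.4142 (= sqrt(2))
```

FHP asks for the normal `w` maximizing this value over all directions, which is what makes the problem hard: unlike in SVMs, no labels constrain which side each point must fall on.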
|
1107.1382
|
Operations-Based Planning for Placement and Sizing of Energy Storage in
a Grid With a High Penetration of Renewables
|
math.OC cs.SY physics.soc-ph
|
As the penetration level of transmission-scale time-intermittent renewable
generation resources increases, control of flexible resources will become
important to mitigating the fluctuations due to these new renewable resources.
Flexible resources may include new or existing synchronous generators as well
as new energy storage devices. The addition of energy storage, if needed,
should be done optimally to minimize the integration cost of renewable
resources; however, optimal placement and sizing of energy storage is a
difficult optimization problem. The fidelity of such results may be
questionable because optimal planning procedures typically do not consider the
effect of the time dynamics of operations and controls. Here, we use an optimal
energy storage control algorithm to develop a heuristic procedure for energy
storage placement and sizing. We generate many instances of intermittent
generation time profiles and allow the control algorithm access to unlimited
amounts of storage, both energy and power, at all nodes. Based on the activity
of the storage at each node, we restrict the number of storage nodes in a staged
procedure seeking the minimum number of storage nodes and total network storage
that can still mitigate the effects of renewable fluctuations on network
constraints. The quality of the heuristic is explored by comparing our results
to seemingly "intuitive" placements of storage.
|
1107.1383
|
Algorithms for Synthesizing Priorities in Component-based Systems
|
cs.LO cs.SY
|
We present algorithms to synthesize component-based systems that are safe and
deadlock-free using priorities, which define stateless-precedence between
enabled actions. Our core method combines the concept of fault-localization
(using safety-game) and fault-repair (using SAT for conflict resolution). For
complex systems, we propose three complementary methods as preprocessing steps
for priority synthesis, namely (a) data abstraction to reduce component
complexities, (b) alphabet abstraction and #-deadlock to ignore components, and
(c) automated assumption learning for compositional priority synthesis.
|
1107.1409
|
Fluctuations of spiked random matrix models and failure diagnosis in
sensor networks
|
cs.IT math.IT
|
In this article, the joint fluctuations of the extreme eigenvalues and
eigenvectors of a large dimensional sample covariance matrix are analyzed when
the associated population covariance matrix is a finite-rank perturbation of
the identity matrix, corresponding to the so-called spiked model in random
matrix theory. The asymptotic fluctuations, as the matrix size grows large, are
shown to be intimately linked with matrices from the Gaussian unitary ensemble
(GUE). When the spiked population eigenvalues have unit multiplicity, the
fluctuations follow a central limit theorem. This result is used to develop an
original framework for the detection and diagnosis of local failures in large
sensor networks, for known or unknown failure magnitude.
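
The spiked model the abstract analyzes can be illustrated with a small simulation: when a population eigenvalue is perturbed well above the others, the top sample eigenvalue separates from the Marchenko-Pastur bulk, which is what makes failure detection possible. A minimal numpy sketch, with dimensions, spike size, and the BBP-type threshold chosen for illustration (none of these numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, spike = 2000, 200, 5.0  # samples, dimension, spiked eigenvalue

# Population covariance: identity plus a rank-one perturbation.
v = np.zeros(p)
v[0] = 1.0
C = np.eye(p) + (spike - 1.0) * np.outer(v, v)

# Draw n samples with covariance C and form the sample covariance.
X = rng.standard_normal((n, p)) @ np.linalg.cholesky(C).T
sample_cov = X.T @ X / n
top = np.linalg.eigvalsh(sample_cov)[-1]

# For spike > 1 + sqrt(p/n), the top sample eigenvalue escapes the
# Marchenko-Pastur bulk, whose right edge is (1 + sqrt(p/n))^2.
bulk_edge = (1 + np.sqrt(p / n)) ** 2
print(top > bulk_edge)
```

Detecting a local failure then amounts to testing whether an extreme eigenvalue (and the localization of its eigenvector) deviates from what the unperturbed model predicts, with the CLT-type fluctuation results supplying the test statistics.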
|
1107.1445
|
Bayesian experimental design for the active nitridation of graphite by
atomic nitrogen
|
physics.data-an cs.IT math.IT stat.AP
|
The problem of optimal data collection to efficiently learn the model
parameters of a graphite nitridation experiment is studied in the context of
Bayesian analysis using both synthetic and real experimental data. The paper
emphasizes that the optimal design can be obtained as a result of an
information theoretic sensitivity analysis. Thus, the preferred design is where
the statistical dependence between the model parameters and observables is the
highest possible. In this paper, the statistical dependence between random
variables is quantified by mutual information and estimated using a k-nearest
neighbor based approximation. It is shown that, by monitoring the inference
process via measures such as entropy or Kullback-Leibler divergence, one can
determine when to stop the data collection process. The methodology is applied
to select the most informative designs on both a simulated data set and on an
experimental data set, previously published in the literature. It is also shown
that the sequential Bayesian analysis used in the experimental design can also
be useful in detecting conflicting information between measurements and model
predictions.
|
1107.1456
|
Answering Non-Monotonic Queries in Relational Data Exchange
|
cs.DB cs.LO
|
Relational data exchange is the problem of translating relational data from a
source schema into a target schema, according to a specification of the
relationship between the source data and the target data. One of the basic
issues is how to answer queries that are posed against target data. While
consensus has been reached on the definitive semantics for monotonic queries,
this issue turned out to be considerably more difficult for non-monotonic
queries. Several semantics for non-monotonic queries have been proposed in the
past few years. This article proposes a new semantics for non-monotonic
queries, called the GCWA*-semantics. It is inspired by semantics from the area
of deductive databases. We show that the GCWA*-semantics coincides with the
standard open world semantics on monotonic queries, and we further explore the
(data) complexity of evaluating non-monotonic queries under the
GCWA*-semantics. In particular, we introduce a class of schema mappings for
which universal queries can be evaluated under the GCWA*-semantics in
polynomial time (data complexity) on the core of the universal solutions.
|
1107.1467
|
Geometry of Injection Regions of Power Networks
|
math.OC cs.IT cs.SY math.IT
|
We investigate the constraints on power flow in networks and its implications
to the optimal power flow problem. The constraints are described by the
injection region of a network; this is the set of all vectors of power
injections, one at each bus, that can be achieved while satisfying the network
and operation constraints. If there are no operation constraints, we show the
injection region of a network is the set of all injections satisfying the
conservation of energy. If the network has a tree topology, e.g., a
distribution network, we show that under voltage magnitude, line loss
constraints, line flow constraints and certain bus real and reactive power
constraints, the injection region and its convex hull have the same
Pareto-front. The Pareto-front is of interest since it contains the optimal
solutions to the minimization of increasing functions over the injection
region. For non-tree networks, we obtain a weaker result by characterizing the
convex hull of the voltage-constrained injection region for lossless cycles and
certain combinations of cycles and trees.
|
1107.1470
|
Vision-Based Navigation II: Error Analysis for a Navigation Algorithm
based on Optical-Flow and a Digital Terrain Map
|
cs.CV cs.AI
|
The paper deals with the error analysis of a navigation algorithm that uses
as input a sequence of images acquired by a moving camera and a Digital Terrain
Map (DTM) of the region being imaged by the camera during the motion. The main
sources of error are more or less straightforward to identify: camera
resolution, structure of the observed terrain and DTM accuracy, field of view
and camera trajectory. After characterizing and modeling these error sources in
the framework of the CDTM algorithm, a closed form expression for their effect
on the pose and motion errors of the camera can be found. The analytic
expression provides a priori measurements for the accuracy in terms of the
parameters mentioned above.
|
1107.1525
|
Accelerating Lossless Data Compression with GPUs
|
cs.IT cs.GR cs.PF math.IT
|
Huffman compression is a statistical, lossless, data compression algorithm
that compresses data by assigning variable length codes to symbols, with the
more frequently appearing symbols given shorter codes than the less frequent
ones. This work is a modification of the Huffman algorithm which permits
uncompressed data to be decomposed into independently compressible and
decompressible blocks,
allowing for concurrent compression and decompression on multiple processors.
We create implementations of this modified algorithm on a current NVIDIA GPU
using the CUDA API as well as on a current Intel chip and the performance
results are compared, showing favorable GPU performance for nearly all tests.
Lastly, we discuss the necessity for high performance data compression in
today's supercomputing ecosystem.
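
The core idea — split the input into blocks that are compressed and decompressed independently, so the work parallelizes across processors — can be sketched in a few lines. This is not the paper's CUDA implementation; zlib stands in for a Huffman coder and the block size is an arbitrary illustrative choice.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096  # illustrative block size

def compress_blocks(data: bytes, block: int = BLOCK):
    # Each block is compressed on its own, so no block depends on the
    # state of any other and all of them can be processed concurrently.
    chunks = [data[i:i + block] for i in range(0, len(data), block)]
    with ThreadPoolExecutor() as ex:
        return list(ex.map(zlib.compress, chunks))

def decompress_blocks(blocks):
    # Decompression is likewise embarrassingly parallel over blocks.
    with ThreadPoolExecutor() as ex:
        return b"".join(ex.map(zlib.decompress, blocks))

data = b"abracadabra" * 1000
assert decompress_blocks(compress_blocks(data)) == data
```

The trade-off the paper's modification navigates is the same one visible here: per-block independence buys concurrency at the cost of slightly worse compression than a single stream over the whole input.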
|
1107.1529
|
Decoding of Matrix-Product Codes
|
cs.IT math.IT
|
We propose a decoding algorithm for the $(u\mid u+v)$-construction that
decodes up to half of the minimum distance of the linear code. We extend this
algorithm for a class of matrix-product codes in two different ways. In some
cases, one can decode beyond the error correction capability of the code.
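
For readers unfamiliar with the $(u\mid u+v)$-construction the abstract starts from: a codeword is formed by concatenating a word u from one binary code with the sum u+v, where v comes from a second code. A minimal sketch over GF(2) (the function name is made up; this shows only the encoding map, not the decoding algorithm proposed here):

```python
import numpy as np

def u_u_plus_v(u, v):
    """Codeword of the (u|u+v)-construction over GF(2):
    concatenate u with the mod-2 sum u + v."""
    u = np.asarray(u) % 2
    v = np.asarray(v) % 2
    return np.concatenate([u, (u + v) % 2])

print(u_u_plus_v([1, 0, 1], [0, 1, 1]))  # [1 0 1 1 1 0]
```

If u ranges over a code with minimum distance d1 and v over one with minimum distance d2, the resulting code has minimum distance min(2*d1, d2), which is the quantity "half of the minimum distance" in the abstract refers to.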
|
1107.1535
|
Multilevel Polarization of Polar Codes Over Arbitrary Discrete
Memoryless Channels
|
cs.IT math.IT
|
It is shown that polar codes achieve the symmetric capacity of discrete
memoryless channels with arbitrary input alphabet sizes. It is shown that in
general, channel polarization happens in several, rather than only two levels
so that the synthesized channels are either useless, perfect or "partially
perfect". Any subset of the channel input alphabet which is closed under
addition, induces a coset partition of the alphabet through its shifts. For any
such partition of the input alphabet, there exists a corresponding partially
perfect channel whose outputs uniquely determine the coset to which the channel
input belongs. By a slight modification of the encoding and decoding rules, it
is shown that perfect transmission of certain information symbols over
partially perfect channels is possible. Our result is general regarding both
the cardinality and the algebraic structure of the channel input alphabet; i.e.,
we show that for any channel input alphabet size and any Abelian group
structure on the alphabet, polar codes are optimal. It is also shown through an
example that polar codes when considered as group/coset codes, do not achieve
the capacity achievable using coset codes over arbitrary channels.
|
1107.1544
|
Cooperative Jamming for Secure Communications in MIMO Relay Networks
|
cs.IT math.IT
|
Secure communications can be impeded by eavesdroppers in conventional relay
systems. This paper proposes cooperative jamming strategies for two-hop relay
networks where the eavesdropper can wiretap the relay channels in both hops. In
these approaches, the normally inactive nodes in the relay network can be used
as cooperative jamming sources to confuse the eavesdropper. Linear precoding
schemes are investigated for two scenarios where single or multiple data
streams are transmitted via a decode-and-forward (DF) relay, under the
assumption that global channel state information (CSI) is available. For the
case of single data stream transmission, we derive closed-form jamming
beamformers and the corresponding optimal power allocation. Generalized
singular value decomposition (GSVD)-based secure relaying schemes are proposed
for the transmission of multiple data streams. The optimal power allocation is
found for the GSVD relaying scheme via geometric programming. Based on this
result, a GSVD-based cooperative jamming scheme is proposed that shows
significant improvement in terms of secrecy rate compared to the approach
without jamming. Furthermore, the case involving an eavesdropper with unknown
CSI is also investigated in this paper. Simulation results show that the
secrecy rate is dramatically increased when inactive nodes in the relay network
participate in cooperative jamming.
|
1107.1561
|
Analysis and Improvement of Low Rank Representation for Subspace
segmentation
|
cs.CV
|
We analyze and improve low rank representation (LRR), the state-of-the-art
algorithm for subspace segmentation of data. We prove that for the noiseless
case, the optimization model of LRR has a unique solution, which is the shape
interaction matrix (SIM) of the data matrix. So in essence LRR is equivalent to
factorization methods. We also prove that the minimum value of the optimization
model of LRR is equal to the rank of the data matrix. For the noisy case, we
show that LRR can be approximated by a factorization method combined with noise
removal via column-sparse robust PCA. We further propose an improved version of
LRR, called Robust Shape Interaction (RSI), which uses the corrected data as
the dictionary instead of the noisy data. RSI is more robust than LRR when the
corruption in data is heavy. Experiments on both synthetic and real data
testify to the improved robustness of RSI.
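
The shape interaction matrix (SIM) that the abstract identifies as the noiseless LRR solution is V_r V_r^T, built from the top right singular vectors of the data matrix. A minimal numpy sketch (function name and rank tolerance are illustrative):

```python
import numpy as np

def shape_interaction_matrix(X, rank=None):
    """SIM of data matrix X (columns are samples): V_r V_r^T, where
    V_r holds the right singular vectors for the nonzero singular
    values of X (or the top `rank` of them if given)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = rank if rank is not None else int(np.sum(s > 1e-10))
    V = Vt[:r].T
    return V @ V.T

# Noiseless data drawn from a 1-dimensional subspace of R^3:
X = np.outer([1.0, 2.0, 3.0], [1.0, -1.0, 2.0])
Z = shape_interaction_matrix(X)
print(np.trace(Z))  # the trace of V_r V_r^T equals rank(X), here 1.0
```

This also illustrates the abstract's second claim: the nuclear norm (here, the trace of the projector Z) attained at the optimum equals the rank of the data matrix.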
|
1107.1563
|
Designing Nonlinear Turbo Codes with a Target Ones Density
|
cs.IT math.IT
|
Certain binary asymmetric channels, such as Z-channels in which one of the
two crossover probabilities is zero, demand optimal ones densities different
from 50%. Some broadcast channels, such as broadcast binary symmetric channels
(BBSC) where each component channel is a binary symmetric channel, also require
a non-uniform input distribution due to the superposition coding scheme, which
is known to achieve the boundary of capacity region. This paper presents a
systematic technique for designing nonlinear turbo codes that are able to
support ones densities different from 50%. To demonstrate the effectiveness of
our design technique, we design and simulate nonlinear turbo codes for the
Z-channel and the BBSC. The best nonlinear turbo code is less than 0.02 bits
from capacity.
|
1107.1564
|
Polyceptron: A Polyhedral Learning Algorithm
|
cs.LG cs.NE
|
In this paper we propose a new algorithm for learning polyhedral classifiers,
which we call Polyceptron. It is a Perceptron-like algorithm which updates the
parameters only when the current classifier misclassifies any training data. We
give both batch and online versions of the Polyceptron algorithm. Finally, we
give experimental results to show the effectiveness of our approach.
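
The mistake-driven update rule the abstract describes is the classical Perceptron's: touch the weights only when the current classifier gets a training point wrong. A sketch of that base ingredient (not of Polyceptron's polyhedral extension, which the paper defines; data and epoch count are made up):

```python
import numpy as np

def perceptron(X, y, epochs=20):
    """Plain perceptron: update only on misclassified points — the
    same trigger Polyceptron uses, per the abstract."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):
            if t * (x @ w) <= 0:   # misclassified (or on the boundary)
                w += t * x         # mistake-driven update
    return w

# A linearly separable toy set with labels in {-1, +1}.
X = np.array([[1.0, 2.0], [2.0, 1.0], [-1.0, -2.0], [-2.0, -1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron(X, y)
print(all(np.sign(X @ w) == y))  # True: all points classified correctly
```

Polyceptron generalizes this to a polyhedral decision region, i.e., an intersection of several such half-spaces, applying the update to the hyperplane(s) responsible for each mistake.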
|
1107.1580
|
Controller Synthesis for Robust Invariance of Polynomial Dynamical
Systems using Linear Programming
|
math.OC cs.SY
|
In this paper, we consider a control synthesis problem for a class of
polynomial dynamical systems subject to bounded disturbances and with input
constraints. More precisely, we aim at synthesizing at the same time a
controller and an invariant set for the controlled system under all admissible
disturbances. We propose a computational method to solve this problem. Given a
candidate polyhedral invariant, we show that controller synthesis can be
formulated as an optimization problem involving polynomial cost functions over
bounded polytopes for which effective linear programming relaxations can be
obtained. Then, we propose an iterative approach to compute the controller and
the polyhedral invariant at once. Each iteration of the approach mainly
consists in solving two linear programs (one for the controller and one for the
invariant) and is thus computationally tractable. Finally, we show with several
examples the usefulness of our method in applications.
|