| id | title | categories | abstract |
|---|---|---|---|
1106.5249
|
The strength of strong ties in scientific collaboration networks
|
physics.soc-ph cs.SI physics.data-an
|
Network topology and its relationship to tie strengths may hinder or enhance
the spreading of information in social networks. We study the correlations
between tie strengths and topology in networks of scientific collaboration, and
show that these are very different from ordinary social networks. For the
latter, it has earlier been shown that strong ties are associated with dense
network neighborhoods, while weaker ties act as bridges between these. Because
of this, weak links act as bottlenecks for the diffusion of information. We
show that on the contrary, in co-authorship networks dense local neighborhoods
mainly consist of weak links, whereas strong links are more important for
overall connectivity. The important role of strong links is further highlighted
in simulations of information spreading, where their topological position is
seen to dramatically speed up spreading dynamics. Thus, in contrast to ordinary
social networks, weight-topology correlations enhance the flow of information
across scientific collaboration networks.
|
1106.5253
|
User Arrival in MIMO Interference Alignment Networks
|
cs.IT math.IT
|
In this paper we analyze a constant multiple-input multiple-output
interference channel where a set of active users are cooperating through
interference alignment while a set of secondary users desire access to the
channel. We derive the minimum number of secondary transmit antennas required
so that a secondary user can use the channel without affecting the sum rate of
the active users, under a zero-forcing equalization assumption. When the
secondary users have enough antennas, we derive several secondary user
precoders that approximately maximize the secondary users' sum rate without
changing the sum rate of the active users. When the secondary users do not have
enough antennas, we perform numerical optimization to find secondary user
precoders that cause minimum degradation to the sum rate of the active users.
Through simulations, we confirm that i) with enough antennas at the secondary
users, gains equivalent to the case of all users cooperating through
interference alignment are obtainable, and ii) when the secondary users do not
have enough antennas, large rate losses at the active users can be avoided.
|
1106.5256
|
Structure and Complexity in Planning with Unary Operators
|
cs.AI
|
Unary operator domains -- i.e., domains in which operators have a single
effect -- arise naturally in many control problems. In its most general form,
the problem of STRIPS planning in unary operator domains is known to be as hard
as the general STRIPS planning problem -- both are PSPACE-complete. However,
unary operator domains induce a natural structure, called the domain's causal
graph. This graph relates the preconditions and effects of each domain
operator. Causal graphs were exploited by Williams and Nayak in order to
analyze plan generation for one of the controllers in NASA's Deep-Space One
spacecraft. There, they utilized the fact that when this graph is acyclic, a
serialization ordering over any subgoal can be obtained quickly. In this paper
we conduct a comprehensive study of the relationship between the structure of a
domain's causal graph and the complexity of planning in this domain. On the
positive side, we show that a non-trivial polynomial time plan generation
algorithm exists for domains whose causal graph induces a polytree with a
constant bound on its node indegree. On the negative side, we show that even
plan existence is hard when the graph is a directed-path singly connected DAG.
More generally, we show that the number of paths in the causal graph is closely
related to the complexity of planning in the associated domain. Finally we
relate our results to the question of complexity of planning with serializable
subgoals.
|
1106.5257
|
Answer Set Planning Under Action Costs
|
cs.AI
|
Recently, planning based on answer set programming has been proposed as an
approach towards realizing declarative planning systems. In this paper, we
present the language Kc, which extends the declarative planning language K by
action costs. Kc provides the notions of admissible and optimal plans: plans
whose overall action cost is within a given limit or, respectively, minimal
over all plans (i.e., cheapest plans). As we demonstrate, this novel language allows
for expressing some nontrivial planning tasks in a declarative way.
Furthermore, it can be utilized for representing planning problems under other
optimality criteria, such as computing ``shortest'' plans (with the least
number of steps), and refinement combinations of cheapest and fastest plans. We
study complexity aspects of the language Kc and provide a transformation to
logic programs, such that planning problems are solved via answer set
programming. Furthermore, we report experimental results on selected problems.
Our experience is encouraging: answer set planning may be a valuable
approach for building expressive planning systems in which intricate planning problems
can be naturally specified and solved.
|
1106.5258
|
Learning to Coordinate Efficiently: A Model-based Approach
|
cs.AI
|
In common-interest stochastic games all players receive an identical payoff.
Players participating in such games must learn to coordinate with each other in
order to receive the highest-possible value. A number of reinforcement learning
algorithms have been proposed for this problem, and some have been shown to
converge to good solutions in the limit. In this paper we show that using very
simple model-based algorithms, much better (i.e., polynomial) convergence rates
can be attained. Moreover, our model-based algorithms are guaranteed to
converge to the optimal value, unlike many of the existing algorithms.
|
1106.5260
|
SAPA: A Multi-objective Metric Temporal Planner
|
cs.AI
|
SAPA is a domain-independent heuristic forward chaining planner that can
handle durative actions, metric resource constraints, and deadline goals. It is
designed to be capable of handling the multi-objective nature of metric
temporal planning. Our technical contributions include (i) planning-graph based
methods for deriving heuristics that are sensitive to both cost and makespan,
(ii) techniques for adjusting the heuristic estimates to take action
interactions and metric resource limitations into account and (iii) a linear
time greedy post-processing technique to improve execution flexibility of the
solution plans. An implementation of SAPA using many of the techniques
presented in this paper was one of the best domain independent planners for
domains with metric and temporal constraints in the third International
Planning Competition, held at AIPS-02. We describe the technical details of
extracting the heuristics and present an empirical evaluation of the current
implementation of SAPA.
|
1106.5261
|
A New General Method to Generate Random Modal Formulae for Testing
Decision Procedures
|
cs.AI
|
The recent emergence of heavily-optimized modal decision procedures has
highlighted the key role of empirical testing in this domain. Unfortunately,
the introduction of extensive empirical tests for modal logics is recent, and
so far none of the proposed test generators is very satisfactory. To cope with
this fact, we present a new random generation method that provides benefits
over previous methods for generating empirical tests. It fixes and substantially
generalizes one of the best-known methods, the random CNF_[]m test, allowing
for generating a much wider variety of problems, covering in principle the
whole input space. Our new method produces much more suitable test sets for the
current generation of modal decision procedures. We analyze the features of the
new method by means of an extensive collection of empirical tests.
|
1106.5262
|
AltAltp: Online Parallelization of Plans with Heuristic State Search
|
cs.AI
|
Despite their near dominance, heuristic state search planners still lag
behind disjunctive planners in the generation of parallel plans in classical
planning. The reason is that directly searching for parallel solutions in state
space planners would require the planners to branch on all possible subsets of
parallel actions, thus increasing the branching factor exponentially. We
present a variant of our heuristic state search planner AltAlt, called AltAltp,
which generates parallel plans by using greedy online parallelization of
partial plans. The greedy approach is significantly informed by the use of
novel distance heuristics that AltAltp derives from a graphplan-style planning
graph for the problem. While this approach is not guaranteed to provide optimal
parallel plans, empirical results show that AltAltp is capable of generating
good quality parallel plans at a fraction of the cost incurred by the
disjunctive planners.
|
1106.5263
|
New Polynomial Classes for Logic-Based Abduction
|
cs.AI
|
We address the problem of propositional logic-based abduction, i.e., the
problem of searching for a best explanation for a given propositional
observation according to a given propositional knowledge base. We give a
general algorithm, based on the notion of projection; then we study
restrictions over the representations of the knowledge base and of the query,
and find new polynomial classes of abduction problems.
|
1106.5264
|
Acquiring Correct Knowledge for Natural Language Generation
|
cs.CL
|
Natural language generation (NLG) systems are computer software systems that
produce texts in English and other human languages, often from non-linguistic
input data. NLG systems, like most AI systems, need substantial amounts of
knowledge. However, our experience in two NLG projects suggests that it is
difficult to acquire correct knowledge for NLG systems; indeed, every knowledge
acquisition (KA) technique we tried had significant problems. In general terms,
these problems were due to the complexity, novelty, and poorly understood
nature of the tasks our systems attempted, and were worsened by the fact that
people write so differently. This meant in particular that corpus-based KA
approaches suffered because it was impossible to assemble a sizable corpus of
high-quality consistent manually written texts in our domains; and structured
expert-oriented KA techniques suffered because experts disagreed and because we
could not get enough information about special and unusual cases to build
robust systems. We believe that such problems are likely to affect many other
NLG systems as well. In the long term, we hope that new KA techniques may
emerge to help NLG system builders. In the shorter term, we believe that
understanding how individual KA techniques can fail, and using a mixture of
different KA techniques with different strengths and weaknesses, can help
developers acquire NLG knowledge that is mostly correct.
|
1106.5265
|
Planning Through Stochastic Local Search and Temporal Action Graphs in
LPG
|
cs.AI
|
We present some techniques for planning in domains specified with the recent
standard language PDDL2.1, supporting 'durative actions' and numerical
quantities. These techniques are implemented in LPG, a domain-independent
planner that took part in the 3rd International Planning Competition (IPC). LPG
is an incremental, anytime system producing multi-criteria quality plans. The
core of the system is based on a stochastic local search method and on a
graph-based representation called 'Temporal Action Graphs' (TA-graphs). This
paper focuses on temporal planning, introducing TA-graphs and proposing some
techniques to guide the search in LPG using this representation. The
experimental results of the 3rd IPC, as well as further results presented in
this paper, show that our techniques can be very effective. Often LPG
outperforms all other fully-automated planners of the 3rd IPC in terms of speed
to derive a solution, or quality of the solutions that can be produced.
|
1106.5266
|
TALplanner in IPC-2002: Extensions and Control Rules
|
cs.AI
|
TALplanner is a forward-chaining planner that relies on domain knowledge in
the shape of temporal logic formulas in order to prune irrelevant parts of the
search space. TALplanner recently participated in the third International
Planning Competition, which had a clear emphasis on increasing the complexity
of the problem domains being used as benchmark tests and the expressivity
required to represent these domains in a planning system. Like many other
planners, TALplanner had support for some but not all aspects of this increase
in expressivity, and a number of changes to the planner were required. After a
short introduction to TALplanner, this article describes some of the changes
that were made before and during the competition. We also describe the process
of introducing suitable domain knowledge for several of the competition
domains.
|
1106.5267
|
Potential-Based Shaping and Q-Value Initialization are Equivalent
|
cs.LG
|
Shaping has proven to be a powerful but precarious means of improving
reinforcement learning performance. Ng, Harada, and Russell (1999) proposed the
potential-based shaping algorithm for adding shaping rewards in a way that
guarantees the learner will learn optimal behavior. In this note, we prove
certain similarities between this shaping algorithm and the initialization step
required for several reinforcement learning algorithms. More specifically, we
prove that a reinforcement learner with initial Q-values based on the shaping
algorithm's potential function makes the same updates throughout learning as a
learner receiving potential-based shaping rewards. We further prove that under
a broad category of policies, the behavior of these two learners is
indistinguishable. The comparison provides intuition on the theoretical
properties of the shaping algorithm as well as a suggestion for a simpler
method for capturing the algorithm's benefit. In addition, the equivalence
raises previously unaddressed issues concerning the efficiency of learning with
potential-based shaping.
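The proved equivalence is easy to check numerically. The sketch below runs Q-learning twice on a toy chain MDP (the MDP, the potential function Phi, and the learning parameters are illustrative assumptions, not taken from the paper): one learner receives the potential-based shaping reward F(s,s') = gamma*Phi(s') - Phi(s), the other starts from Q-values initialized to Phi, and both see the same experience stream.

```python
import random

# Toy deterministic chain MDP with states 0..4 and actions {0: left, 1: right};
# reaching state 4 yields reward 1 and resets the episode. The potential PHI
# is an arbitrary illustrative choice: the equivalence holds for any Phi.
GAMMA, ALPHA = 0.9, 0.5
PHI = [0.0, 0.1, 0.3, 0.6, 1.0]

def step(s, a):
    s2 = min(s + 1, 4) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == 4 else 0.0)

random.seed(0)
q_shape = {(s, a): 0.0 for s in range(5) for a in range(2)}    # shaping rewards
q_init = {(s, a): PHI[s] for s in range(5) for a in range(2)}  # potential init

s = 0
for _ in range(500):
    a = random.randrange(2)          # same experience stream for both learners
    s2, r = step(s, a)
    f = GAMMA * PHI[s2] - PHI[s]     # potential-based shaping reward
    q_shape[s, a] += ALPHA * (r + f + GAMMA * max(q_shape[s2, b] for b in range(2))
                              - q_shape[s, a])
    q_init[s, a] += ALPHA * (r + GAMMA * max(q_init[s2, b] for b in range(2))
                             - q_init[s, a])
    s = 0 if s2 == 4 else s2

# Invariant from the note: Q_init(s, a) = Q_shaped(s, a) + Phi(s) at all times.
assert all(abs(q_init[s, a] - (q_shape[s, a] + PHI[s])) < 1e-9
           for s in range(5) for a in range(2))
print("equivalence holds")
```

The invariant is preserved by induction: both learners compute the same TD error at every step, since the Phi terms in the initialized learner's target cancel against the shaping reward.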
|
1106.5268
|
Temporal Decision Trees: Model-based Diagnosis of Dynamic Systems
On-Board
|
cs.AI
|
The automatic generation of decision trees based on off-line reasoning on
models of a domain is a reasonable compromise between the advantages of using a
model-based approach in technical domains and the constraints imposed by
embedded applications. In this paper we extend the approach to deal with
temporal information. We introduce a notion of temporal decision tree, which is
designed to make use of relevant information as soon as it is acquired, and we
present an algorithm for compiling such trees from a model-based reasoning
system.
|
1106.5269
|
Optimal Schedules for Parallelizing Anytime Algorithms: The Case of
Shared Resources
|
cs.AI
|
The performance of anytime algorithms can be improved by simultaneously
solving several instances of algorithm-problem pairs. These pairs may include
different instances of a problem (such as starting from a different initial
state), different algorithms (if several alternatives exist), or several runs
of the same algorithm (for non-deterministic algorithms). In this paper we
present a methodology for designing an optimal scheduling policy based on the
statistical characteristics of the algorithms involved. We formally analyze the
case where the processes share resources (a single-processor model), and
provide an algorithm for optimal scheduling. We analyze, theoretically and
empirically, the behavior of our scheduling algorithm for various distribution
types. Finally, we present empirical results of applying our scheduling
algorithm to the Latin Square problem.
|
1106.5270
|
Decision-Theoretic Bidding Based on Learned Density Models in
Simultaneous, Interacting Auctions
|
cs.AI
|
Auctions are becoming an increasingly popular method for transacting
business, especially over the Internet. This article presents a general
approach to building autonomous bidding agents to bid in multiple simultaneous
auctions for interacting goods. A core component of our approach learns a model
of the empirical price dynamics based on past data and uses the model to
analytically calculate, to the greatest extent possible, optimal bids. We
introduce a new and general boosting-based algorithm for conditional density
estimation problems of this kind, i.e., supervised learning problems in which
the goal is to estimate the entire conditional distribution of the real-valued
label. This approach is fully implemented as ATTac-2001, a top-scoring agent in
the second Trading Agent Competition (TAC-01). We present experiments
demonstrating the effectiveness of our boosting-based price predictor relative
to several reasonable alternatives.
|
1106.5271
|
The Metric-FF Planning System: Translating "Ignoring Delete Lists" to
Numeric State Variables
|
cs.AI
|
Planning with numeric state variables has been a challenge for many years,
and was a part of the 3rd International Planning Competition (IPC-3). Currently
one of the most popular and successful algorithmic techniques in STRIPS
planning is to guide search by a heuristic function, where the heuristic is
based on relaxing the planning task by ignoring the delete lists of the
available actions. We present a natural extension of ``ignoring delete lists''
to numeric state variables, preserving the relevant theoretical properties of
the STRIPS relaxation under the condition that the numeric task at hand is
``monotonic''. We then identify a subset of the numeric IPC-3 competition
language, ``linear tasks'', where monotonicity can be achieved by
pre-processing. Based on that, we extend the algorithms used in the heuristic
planning system FF to linear tasks. The resulting system Metric-FF is,
according to the IPC-3 results which we discuss, one of the two currently most
efficient numeric planners.
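The core relaxation can be illustrated in a few lines for the propositional (STRIPS) case: apply actions while ignoring their delete lists and count reachability layers until the goal appears. The mini logistics task and fact names below are invented for this sketch; Metric-FF's actual heuristic extracts a relaxed plan and extends the idea to numeric state variables, which this does not attempt.

```python
def relaxed_layers(init, goal, actions):
    """Delete relaxation: apply actions while ignoring their delete lists,
    returning the number of parallel layers until the goal appears
    (a crude estimate in the spirit of relaxation-based heuristics)."""
    facts = set(init)
    layers = 0
    while not goal <= facts:
        new = set()
        for pre, add, _delete in actions:   # delete list is never applied
            if pre <= facts:
                new |= add - facts
        if not new:
            return None                     # unreachable even when relaxed
        facts |= new
        layers += 1
    return layers

# Illustrative one-truck logistics task, actions as (pre, add, delete) sets.
actions = [
    ({"truck-at-A"}, {"truck-at-B"}, {"truck-at-A"}),             # drive A -> B
    ({"truck-at-B"}, {"truck-at-A"}, {"truck-at-B"}),             # drive B -> A
    ({"truck-at-A", "pkg-at-A"}, {"pkg-in-truck"}, {"pkg-at-A"}), # load
    ({"truck-at-B", "pkg-in-truck"}, {"pkg-at-B"}, {"pkg-in-truck"}),  # unload
]
print(relaxed_layers({"truck-at-A", "pkg-at-A"}, {"pkg-at-B"}, actions))  # → 2
```

The real task needs three sequential actions (load, drive, unload), but the relaxation reports two layers because loading and driving become compatible once deletes are ignored; this optimism is exactly what makes delete-relaxation heuristics admissible-flavored and cheap.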
|
1106.5294
|
Set systems: order types, continuous nondeterministic deformations, and
quasi-orders
|
cs.LO cs.GT cs.LG
|
By reformulating a learning process of a set system L as a game between
Teacher and Learner, we define the order type of L to be the order type of the
game tree, if the tree is well-founded. The order type of L (denoted dim L) has
the following features: (1) any well-quasi-order (wqo for short) can be
represented by the set system L of the upper-closed sets of the wqo, such that
the maximal order type of the wqo is equal to dim L; (2) dim L is an upper
bound on the mind-change complexity of L. dim L is defined iff L has finite
elasticity (fe
for short), where, according to computational learning theory, if an indexed
family of recursive languages has fe then it is learnable by an algorithm from
positive data. Regarding set systems as subspaces of Cantor spaces, we prove
that fe of set systems is preserved by any continuous function which is
monotone with respect to set inclusion. Using this, we prove that finite
elasticity is preserved by various (nondeterministic) language operators
(Kleene closure, shuffle closure, union, product, intersection, ...). The
monotone continuous functions represent nondeterministic computations. If a
monotone continuous function has a computation tree with each node followed by
at most n immediate successors and the order type of a set system L is
{\alpha}, then the direct image of L is a set system of order type at most the
n-adic diagonal Ramsey number of {\alpha}. Furthermore, we provide an
order-type-preserving contravariant embedding from the category of quasi-orders
and finitely branching simulations between them, into the complete category of
subspaces of Cantor spaces and monotone continuous functions having Girard's
linearity between them. Keywords: finite elasticity, shuffle-closure
|
1106.5301
|
Optimizing and controlling functions of complex networks by manipulating
rich-club connections
|
physics.soc-ph cs.SI
|
Traditionally, there has been no evidence suggesting strong ties between the
rich-club property and the function of complex networks. In this study, we find
that whether or not a very small portion of rich nodes are connected to each
other can strongly affect the frequency of occurrence of basic building blocks
(motifs) within a heterogeneous network, and therefore its function.
Conversely, whether a homogeneous network has a rich-club
property or not generally has no significant effect on its structure and
function. These findings open the possibility to optimize and control the
function of complex networks by manipulating rich-club connections.
Furthermore, based on the subgraph ratio profile, we develop a more rigorous
approach to judge whether a network has a rich-club or not. The new method does
not calculate how many links there are among rich nodes but depends on how the
links among rich nodes can affect the overall structure as well as function of
a given network. These results can also help us to understand the evolution of
dynamical networks and design new models for characterizing real-world
networks.
|
1106.5308
|
Distributed Classification of E-mail Messages (Clasificarea distribuita a mesajelor de e-mail)
|
cs.HC cs.CL
|
A basic component of Internet applications is electronic mail and its various
implications. This paper proposes a mechanism for automatically classifying
emails and creating the dynamic groups to which these messages belong. The
proposed mechanisms are based on natural language processing techniques and are
designed to facilitate human-machine interaction in this direction.
|
1106.5312
|
Manipulation of Nanson's and Baldwin's Rules
|
cs.AI
|
Nanson's and Baldwin's voting rules select a winner by successively
eliminating candidates with low Borda scores. We show that these rules have a
number of desirable computational properties. In particular, with unweighted
votes, it is NP-hard to manipulate either rule with one manipulator, whilst
with weighted votes, it is NP-hard to manipulate either rule with a small
number of candidates and a coalition of manipulators. As only a couple of other
voting rules are known to be NP-hard to manipulate with a single manipulator,
Nanson's and Baldwin's rules appear to be particularly resistant to
manipulation from a theoretical perspective. We also propose a number of
approximation methods for manipulating these two rules. Experiments demonstrate
that both rules are often difficult to manipulate in practice. These results
suggest that elimination style voting rules deserve further study.
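For concreteness, the two elimination procedures can be sketched as follows. The three-candidate, five-voter profile is made up for illustration, and the variant of Nanson's rule shown (eliminate every candidate with at-most-average Borda score) is one common convention.

```python
def borda(profile, cands):
    # Borda scores restricted to the remaining candidates: among k remaining,
    # a voter's top choice gets k-1 points, the next k-2, and so on.
    scores = {c: 0 for c in cands}
    for ranking in profile:
        rest = [c for c in ranking if c in cands]
        for i, c in enumerate(rest):
            scores[c] += len(rest) - 1 - i
    return scores

def eliminate(profile, losers_of):
    cands = set(profile[0])
    while len(cands) > 1:
        losers = losers_of(borda(profile, cands), cands)
        if losers == cands:      # everyone tied: stop
            break
        cands -= losers
    return cands

def nanson(profile):             # drop all candidates at or below average
    return eliminate(profile, lambda s, c: {x for x in c
                                            if s[x] <= sum(s.values()) / len(c)})

def baldwin(profile):            # drop the lowest-scoring candidate(s)
    return eliminate(profile, lambda s, c: {x for x in c
                                            if s[x] == min(s.values())})

# Illustrative 5-voter profile over candidates a, b, c (rankings best-first).
profile = [("a", "b", "c"), ("a", "b", "c"), ("b", "c", "a"),
           ("b", "c", "a"), ("c", "a", "b")]
print(sorted(nanson(profile)), sorted(baldwin(profile)))  # → ['b'] ['a']
```

On this profile the Borda scores are a:5, b:6, c:4, so Nanson eliminates a and c at once (both at or below the average of 5), while Baldwin eliminates only c and then a beats b head-to-head: the two rules can pick different winners.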
|
1106.5316
|
Online Cake Cutting (published version)
|
cs.AI cs.GT cs.MA
|
We propose an online form of the cake cutting problem. This models situations
where agents arrive and depart during the process of dividing a resource. We
show that well known fair division procedures like cut-and-choose and the
Dubins-Spanier moving knife procedure can be adapted to apply to such online
problems. We propose some fairness properties that online cake cutting
procedures can possess like online forms of proportionality and envy-freeness.
We also consider the impact of collusion between agents. Finally, we study
theoretically and empirically the competitive ratio of these online cake
cutting procedures. Based on its resistance to collusion, and its good
performance in practice, our results favour the online version of the
cut-and-choose procedure over the online version of the moving knife procedure.
|
1106.5321
|
Uncovering Social Network Sybils in the Wild
|
cs.SI physics.soc-ph
|
Sybil accounts are fake identities created to unfairly increase the power or
resources of a single malicious user. Researchers have long known about the
existence of Sybil accounts in online communities such as file-sharing systems,
but have not been able to perform large scale measurements to detect them or
measure their activities. In this paper, we describe our efforts to detect,
characterize and understand Sybil account activity in the Renren online social
network (OSN). We use ground truth provided by Renren Inc. to build measurement
based Sybil account detectors, and deploy them on Renren to detect over 100,000
Sybil accounts. We study these Sybil accounts, as well as an additional 560,000
Sybil accounts caught by Renren, and analyze their link creation behavior. Most
interestingly, we find that contrary to prior conjecture, Sybil accounts in
OSNs do not form tight-knit communities. Instead, they integrate into the
social graph just like normal users. Using link creation timestamps, we verify
that the large majority of links between Sybil accounts are created
accidentally, unbeknownst to the attacker. Overall, only a very small portion
of Sybil accounts are connected to other Sybils with social links. Our study
shows that existing Sybil defenses are unlikely to succeed in today's OSNs, and
we must design new techniques to effectively detect and defend against Sybil
attacks.
|
1106.5341
|
Pose Estimation from a Single Depth Image for Arbitrary Kinematic
Skeletons
|
cs.CV cs.AI cs.LG
|
We present a method for estimating pose information from a single depth image
given an arbitrary kinematic structure without prior training. For an arbitrary
skeleton and depth image, an evolutionary algorithm is used to find the optimal
kinematic configuration to explain the observed image. Results show that our
approach can correctly estimate poses of 39 and 78 degree-of-freedom models
from a single depth image, even in cases of significant self-occlusion.
|
1106.5346
|
Reconstruction and Estimation of Scattering Functions of Overspread
Radar Targets
|
cs.IT math.IT
|
In many radar scenarios, the radar target or the medium is assumed to possess
randomly varying parts. The properties of a target are described by a random
process known as the spreading function. Its second order statistics under the
WSSUS assumption are given by the scattering function. Recent developments in
the operator identification theory suggest a channel sounding procedure that
allows one to determine the spreading function given complete statistical knowledge
of the operator echo. We show that in a continuous model it is indeed
theoretically possible to identify a scattering function of an overspread
target given full statistics of a received echo from a single sounding by a
custom weighted delta train. Our results apply whenever the scattering function
is supported on a set of area less than one. Absent such complete statistics,
we construct and analyze an estimator that can be used as a replacement of the
averaged periodogram estimator in case of poor geometry of the support set of
the scattering function.
|
1106.5349
|
Discrete calculus of variations for quadratic lagrangians
|
math.OC cs.SY
|
We develop in this paper a new framework for discrete calculus of variations
when the actions have densities involving an arbitrary discretization operator.
We deduce the discrete Euler-Lagrange equations for piecewise continuous
critical points of sampled actions. Then we characterize the discretization
operators such that, for every quadratic lagrangian, the discrete Euler-Lagrange
equations converge to the classical ones.
|
1106.5350
|
Discrete Calculus of Variations for Quadratic Lagrangians. Convergence
Issues
|
math.OC cs.SY
|
We study in this paper the continuous and discrete Euler-Lagrange equations
arising from a quadratic lagrangian. Those equations may be thought of as
numerical schemes and may be solved through a matrix based framework. When the
lagrangian is time-independent, we can solve both continuous and discrete
Euler-Lagrange equations under convenient oscillatory and non-resonance
properties. The convergence of the solutions is also investigated. In the
simplest case of the harmonic oscillator, unconditional convergence does not
hold; we give results and experiments in this direction.
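For the harmonic oscillator, with lagrangian L(x, x') = (x'^2 - w^2 x^2)/2, the continuous Euler-Lagrange equation is x'' + w^2 x = 0, and a central-difference discretization of the action (one illustrative choice of discretization operator, not necessarily the paper's) yields the recurrence below. The scheme is only oscillatory when w*h < 2, which is consistent with convergence being conditional rather than unconditional.

```python
import math

# Discrete Euler-Lagrange recurrence for x'' + w^2 x = 0 (central differences):
#   (x[k+1] - 2*x[k] + x[k-1]) / h**2 + w**2 * x[k] = 0
w, h, T = 1.0, 0.001, 1.0
n = int(T / h)
x = [1.0, math.cos(w * h)]   # seed with samples of the exact solution cos(w t)
for k in range(1, n):
    x.append((2.0 - (w * h) ** 2) * x[k] - x[k - 1])

err = abs(x[n] - math.cos(w * T))
print(err)  # small for this step size; the scheme degrades as w*h approaches 2
```

With exact samples as initial data, the discrete solution is a cosine with a slightly perturbed frequency, so the endpoint error shrinks like h^2 as the mesh is refined.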
|
1106.5351
|
Quadratic choreographies
|
math.OC cs.SY
|
This paper addresses the classical and discrete Euler-Lagrange equations for
systems of $n$ particles interacting quadratically in $\mathbb{R}^d$. By
highlighting the role played by the center of mass of the particles, we solve
the previous systems via the classical quadratic eigenvalue problem (QEP) and
its discrete transcendental generalization. The roots of classical and discrete
QEP being given, we state some conditional convergence results. Next, we focus
especially on periodic and choreographic solutions and we provide some
numerical experiments which confirm the convergence.
|
1106.5364
|
Macro and Micro Diversity Behaviors of Practical Dynamic Decode and
Forward Relaying schemes
|
cs.IT math.IT
|
In this paper, we propose a practical implementation of the Dynamic Decode
and Forward (DDF) protocol based on rateless codes and HARQ. We define the
macro diversity order of a transmission from several intermittent sources to a
single destination. Considering the finite symbol alphabets used by the different
sources, upper bounds on the achievable macro diversity order are derived. We
analyse the diversity behavior of several relaying schemes for the DDF
protocol, and we propose the Patching technique to increase both the macro and
the micro diversity orders. The coverage gain for the open-loop transmission
case and the spectral efficiency gain for the closed loop transmission case are
illustrated by simulation results.
|
1106.5367
|
Partial Interference Alignment for K-user MIMO Interference Channels
|
cs.IT math.IT
|
In this paper, we consider a Partial Interference Alignment and Interference
Detection (PIAID) design for $K$-user quasi-static MIMO interference channels
with discrete constellation inputs. Each transmitter has M antennas and
transmits L independent data streams to the desired receiver with N receive
antennas. We focus on the case where not all K-1 interfering transmitters can
be aligned at every receiver. As a result, there will be residual interference
at each receiver that cannot be aligned. Each receiver detects and cancels the
residual interference based on the constellation map. However, there is a
window of unfavorable interference profile at the receiver for Interference
Detection (ID). In this paper, we propose a low complexity Partial Interference
Alignment scheme in which we dynamically select the user set for IA so as to
create a favorable interference profile for ID at each receiver. We first
derive the average symbol error rate (SER) by taking into account the
non-Gaussian residual interference due to the discrete constellation. Using graph
theory, we then devise a low complexity user set selection algorithm for the
PIAID scheme, which minimizes the asymptotically tight bound on the average
end-to-end SER performance. Moreover, we substantially simplify interference
detection at the receiver using Semi-Definite Relaxation (SDR) techniques. It
is shown that the SER performance of the proposed PIAID scheme has significant
gain compared with various conventional baseline solutions.
|
1106.5387
|
Subspace Properties of Network Coding and their Applications
|
cs.IT cs.NI math.IT
|
Systems that employ network coding for content distribution convey to the
receivers linear combinations of the source packets. If we assume randomized
network coding, during this process the network nodes collect random subspaces
of the space spanned by the source packets. We establish several fundamental
properties of the random subspaces induced in such a system, and show that
these subspaces implicitly carry topological information about the network and
its state that can be passively collected and inferred. We leverage this
information towards a number of applications that are interesting in their own
right, such as topology inference, bottleneck discovery in peer-to-peer systems
and locating Byzantine attackers. We thus argue that randomized network
coding, apart from its better-known properties for improving information
delivery rates, can additionally facilitate network management and control.
|
1106.5413
|
Accelerated Linearized Bregman Method
|
math.OC cs.IT math.IT
|
In this paper, we propose and analyze an accelerated linearized Bregman (ALB)
method for solving the basis pursuit and related sparse optimization problems.
This accelerated algorithm is based on the fact that the linearized Bregman
(LB) algorithm is equivalent to a gradient descent method applied to a certain
dual formulation. We show that the LB method requires $O(1/\epsilon)$
iterations to obtain an $\epsilon$-optimal solution and the ALB algorithm
reduces this iteration complexity to $O(1/\sqrt{\epsilon})$ while requiring
almost the same computational effort on each iteration. Numerical results on
compressed sensing and matrix completion problems are presented that
demonstrate that the ALB method can be significantly faster than the LB method.
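The dual-gradient view admits a compact sketch: the LB iteration is plain gradient ascent on the smooth dual of $\min_x \mu\|x\|_1 + \tfrac{1}{2}\|x\|_2^2$ s.t. $Ax=b$, and ALB adds Nesterov-style momentum to it. A minimal sketch under that interpretation; the step size, $\mu$, and iteration count below are our illustrative choices, not the paper's:

```python
import numpy as np

def shrink(v, mu):
    # soft-thresholding: the proximal operator of mu * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def accelerated_lb(A, b, mu=5.0, iters=5000):
    # ALB sketch: Nesterov-style momentum on gradient ascent for the smooth
    # dual of  min mu*||x||_1 + 0.5*||x||_2^2  s.t. Ax = b.
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step obeying the dual Lipschitz bound
    y = np.zeros(A.shape[0])              # extrapolated dual point
    y_old = y.copy()
    a = 1.0
    for _ in range(iters):
        x = shrink(A.T @ y, mu)           # primal point recovered from the dual
        y_new = y + t * (b - A @ x)       # dual gradient (ascent) step
        a_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * a * a))
        y = y_new + ((a - 1.0) / a_new) * (y_new - y_old)  # momentum extrapolation
        y_old, a = y_new, a_new
    return shrink(A.T @ y_old, mu)
```

Dropping the extrapolation step recovers plain LB, which makes the momentum term the only difference between the $O(1/\epsilon)$ and $O(1/\sqrt{\epsilon})$ complexities.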
|
1106.5427
|
Theory and Algorithms for Partial Order Based Reduction in Planning
|
cs.AI
|
Search is a major technique for planning. It amounts to exploring a state
space of planning domains typically modeled as a directed graph. However,
prohibitively large sizes of the search space make search expensive. Developing
better heuristic functions has been the main technique for improving search
efficiency. Nevertheless, recent studies have shown that improving heuristics
alone has certain fundamental limits on improving search efficiency. Recently,
a new direction of research called partial order based reduction (POR) has been
proposed as an alternative to improving heuristics. POR has shown promise in
speeding up searches.
POR has been extensively studied in model checking research and is a key
enabling technique for scalability of model checking systems. Although the POR
theory has been extensively studied in model checking, it has never been
developed systematically for planning before. In addition, the conditions for
POR in the model checking theory are abstract and not directly applicable in
planning. Previous works on POR algorithms for planning did not establish the
connection between these algorithms and existing theory in model checking.
In this paper, we develop a theory for POR in planning. The new theory we
develop connects the stubborn set theory in model checking and POR methods in
planning. We show that previous POR algorithms in planning can be explained by
the new theory. Based on the new theory, we propose a new, stronger POR
algorithm. Experimental results on various planning domains show further search
cost reduction using the new algorithm.
|
1106.5433
|
Kolmogorov complexity and cryptography
|
cs.CR cs.IT math.IT
|
This paper contains some results of An.A.Muchnik (1958-2007) reported in his
talks at the Kolmogorov seminar (Moscow State Lomonosov University, Math.
Department, Logic and Algorithms theory division, March 11, 2003 and April 8,
2003) but not published at that time. These results were stated (without
proofs) in the joint talk of Andrej Muchnik and Alexei Semenov at Dagstuhl
Seminar 03181, 27.04.2003-03.05.2003. This text was prepared by Alexey Chernov
and Alexander Shen in 2008-2009. We consider (in the framework of algorithmic
information theory) questions of the following type: construct a message that
contains different amounts of information for recipients that have (or do not
have) certain a priori information. Assume, for example, that the recipient
knows some string $a$, and we want to send her some information that allows her
to reconstruct some string $b$ (using $a$). On the other hand, this information
alone should not allow the eavesdropper (who does not know $a$) to reconstruct
$b$. It is indeed possible (if the strings $a$ and $b$ are not too simple).
Then we consider more complicated versions of this question. What if the
eavesdropper knows some string $c$? How long should our message be? We provide
some conditions that guarantee the existence of a polynomial-size message; we
show then that without these conditions this is not always possible.
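A classical special case (the one-time pad phrased in Kolmogorov-complexity terms; an illustration we add here, not the construction from the talks) shows such messages exist: take $a$ incompressible with $|a| = |b| = n$ and send $m = a \oplus b$ (bitwise XOR). Then, up to logarithmic precision,

```latex
% One-time pad as an information-hiding message (illustrative special case):
% a incompressible, |a| = |b| = n, transmitted message m = a \oplus b.
\begin{align*}
  C(b \mid a, m) &= O(1)
    && \text{(the recipient computes } b = a \oplus m\text{),} \\
  C(b \mid m) &\ge C(b) - O(\log n)
    && \text{(when $a$ is random relative to $b$, $m$ alone reveals little).}
\end{align*}
```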
|
1106.5448
|
Dominating Manipulations in Voting with Partial Information
|
cs.AI cs.CC cs.GT cs.MA
|
We consider manipulation problems when the manipulator only has partial
information about the votes of the nonmanipulators. Such partial information is
described by an information set, which is the set of profiles of the
nonmanipulators that are indistinguishable to the manipulator. Given such an
information set, a dominating manipulation is a non-truthful vote that the
manipulator can cast which makes the winner at least as preferable (and
sometimes more preferable) as the winner when the manipulator votes truthfully.
When the manipulator has full information, computing whether or not there
exists a dominating manipulation is in P for many common voting rules (by known
results). We show that when the manipulator has no information, there is no
dominating manipulation for many common voting rules. When the manipulator's
information is represented by partial orders and only a small portion of the
preferences are unknown, computing a dominating manipulation is NP-hard for
many common voting rules. Our results thus shed light on whether we can
prevent strategic behavior by limiting information about the votes of other
voters.
|
1106.5460
|
Automated segmentation of the pulmonary arteries in low-dose CT by
vessel tracking
|
cs.CV
|
We present a fully automated method for top-down segmentation of the
pulmonary arterial tree in low-dose thoracic CT images. The main basal
pulmonary arteries are identified near the lung hilum by searching for
candidate vessels adjacent to known airways, identified by our previously
reported airway segmentation method. Model cylinders are iteratively fit to the
vessels to track them into the lungs. Vessel bifurcations are detected by
measuring the rate of change of vessel radii, and child vessels are segmented
by initiating new trackers at bifurcation points. Validation is accomplished
using our novel sparse surface (SS) evaluation metric. The SS metric was
designed to quantify the magnitude of the segmentation error per vessel while
significantly decreasing the manual marking burden for the human user. A total
of 210 arteries and 205 veins were manually marked across seven test cases.
134/210 arteries were correctly segmented, with a specificity for arteries of
90%, and average segmentation error of 0.15 mm. This fully-automated
segmentation is a promising method for improving lung nodule detection in
low-dose CT screening scans, by separating vessels from surrounding
iso-intensity objects.
|
1106.5524
|
Robust network community detection using balanced propagation
|
physics.soc-ph cs.SI physics.data-an
|
Label propagation has proven to be an extremely fast method for detecting
communities in large complex networks. Furthermore, due to its simplicity, it
is also currently one of the most commonly adopted algorithms in the
literature. Despite various subsequent advances, an important issue of the
algorithm has not yet been properly addressed. Random (node) update orders
within the algorithm severely hamper its robustness, and consequently also the
stability of the identified community structure. We note that an update order
can be seen as increasing propagation preferences from certain nodes, and
propose a balanced propagation that counteracts the introduced randomness
by utilizing node balancers. We have evaluated the proposed approach on
synthetic networks with planted partitions, and on several real-world networks
with community structure. The results confirm that balanced propagation is
significantly more robust than label propagation, while even improving the
performance of community detection. Thus, balanced propagation retains the high
scalability and algorithmic simplicity of label propagation, but improves on
its stability and performance.
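For reference, plain label propagation, whose shuffled update order is the instability discussed above, fits in a few lines. A generic sketch (ties are broken deterministically here for reproducibility; the original algorithm breaks them randomly, and balanced propagation would replace the shuffled order with preference-weighted updates):

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=200, seed=0):
    """Plain label propagation on an undirected graph.

    adj: dict mapping node -> iterable of neighbours. The node update order
    is reshuffled every sweep -- exactly the randomness the abstract points
    to as the source of instability.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}          # each node starts as its own community
    order = list(adj)
    for _ in range(max_iter):
        rng.shuffle(order)                # random update order
        changed = False
        for v in order:
            counts = Counter(labels[u] for u in adj[v])
            if not counts:
                continue                  # isolated node keeps its label
            top = max(counts.values())
            new = min(l for l, c in counts.items() if c == top)  # deterministic tie-break
            if new != labels[v]:
                labels[v] = new
                changed = True
        if not changed:                   # converged: every label is a local majority
            break
    return labels
```

On two 5-cliques joined by a single edge, the algorithm settles on a single label inside each clique, since the bridge neighbour is always outvoted by the four intra-clique neighbours.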
|
1106.5536
|
Spreading paths in partially observed social networks
|
physics.soc-ph cs.SI
|
Understanding how and how far information, behaviors, or pathogens spread in
social networks is an important problem, with implications both for
predicting the size of epidemics and for planning effective
interventions. There are, however, two main challenges for inferring spreading
paths in real-world networks. One is the practical difficulty of observing a
dynamic process on a network, and the other is the typical constraint of only
partially observing a network. Using a static, structurally realistic social
network as a platform for simulations, we juxtapose three distinct paths: (1)
the stochastic path taken by a simulated spreading process from source to
target; (2) the topologically shortest path in the fully observed network, and
hence the single most likely stochastic path, between the two nodes; and (3)
the topologically shortest path in a partially observed network. In a sampled
network, how closely does the partially observed shortest path (3) emulate the
unobserved spreading path (1)? Although partial observation inflates the length
of the shortest path, the stochastic nature of the spreading process also
frequently derails the dynamic path from the shortest path. We find that the
partially observed shortest path does not necessarily give an inflated estimate
of the length of the process path; in fact, partial observation may,
counterintuitively, make the path seem shorter than it actually is.
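The comparison between paths (2) and (3) rests on a simple monotonicity fact: removing unobserved edges can only lengthen, or disconnect, shortest paths, never shorten them. A minimal sketch on a toy random graph (all sizes and probabilities below are illustrative choices, not from the study):

```python
import random
from collections import deque

def bfs_dist(adj, s, t):
    # shortest-path length from s to t by breadth-first search; None if unreachable
    dist = {s: 0}
    q = deque([s])
    while q:
        v = q.popleft()
        if v == t:
            return dist[v]
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return None

def observe(edges, keep, rng):
    # partial observation: each edge is seen independently with probability `keep`
    return [e for e in edges if rng.random() < keep]

def to_adj(n, edges):
    # build an undirected adjacency list over nodes 0..n-1
    adj = {v: [] for v in range(n)}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    return adj
```

Comparing `bfs_dist` on the full and the sampled adjacency for the same node pair exhibits the inflation effect; how far the stochastic spreading path then departs from either is the question the abstract studies.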
|
1106.5551
|
Labeling 3D scenes for Personal Assistant Robots
|
cs.RO
|
Inexpensive RGB-D cameras that give an RGB image together with depth data
have become widely available. We use this data to build 3D point clouds of a
full scene. In this paper, we address the task of labeling objects in this 3D
point cloud of a complete indoor scene such as an office. We propose a
graphical model that captures various features and contextual relations,
including the local visual appearance and shape cues, object co-occurrence
relationships and geometric relationships. With a large number of object
classes and relations, the model's parsimony becomes important and we address
that by using multiple types of edge potentials. The model admits efficient
approximate inference, and we train it using a maximum-margin learning
approach. In our experiments over a total of 52 3D scenes of homes and offices
(composed from about 550 views, having 2495 segments labeled with 27 object
classes), we get a performance of 84.06% in labeling 17 object classes for
offices, and 73.38% in labeling 17 object classes for home scenes. Finally, we
applied these algorithms successfully on a mobile robot for the task of finding
an object in a large cluttered room.
|
1106.5562
|
Relative clock demonstrates the endogenous heterogeneity of human
dynamics
|
physics.soc-ph cs.SI
|
The heavy-tailed inter-event time distributions are widely observed in many
human-activated systems, which may result from both endogenous mechanisms like
the highest-priority-first protocol and exogenous factors like the varying
global activity versus time. Distinguishing the effects of different
mechanisms on temporal statistics is thus of theoretical significance. In this
Letter, we propose a new timing method using a relative clock, where the time
length between two consecutive events of an individual is counted as the number
of other individuals' events occurring during this interval. We propose a model
in which agents act either at a constant rate or with a power-law inter-event
time distribution, and the global activity either stays unchanged or varies
periodically versus time. Our analysis shows that the heavy tails caused by the
heterogeneity of global activity can be eliminated by setting the relative
clock, yet the heterogeneity due to real individual behaviors still exists. We
perform extensive experiments on four large-scale systems, the search engine by
AOL, a social bookmarking system--Delicious, a short-message communication
network, and a microblogging system--Twitter. Strong heterogeneity and clear
seasonality of global activity are observed, but the heavy tails cannot be
eliminated by using the relative clock. Our results suggest the existence of
endogenous heterogeneity of human dynamics.
|
1106.5568
|
Opportunistic Content Search of Smartphone Photos
|
cs.IR cs.DB
|
Photos taken by smartphone users can accidentally contain content that is
timely and valuable to others, often in real-time. We report the system design
and evaluation of a distributed search system, Theia, for crowd-sourced
real-time content search of smartphone photos. Because smartphones are
resource-constrained, Theia incorporates two key innovations to control search
cost and improve search efficiency. Incremental Search expands search scope
incrementally and exploits user feedback. Partitioned Search leverages the
cloud to reduce the energy consumption of search in smartphones. Through user
studies, measurement studies, and field studies, we show that Theia reduces the
cost per relevant photo by an average of 59%. It reduces the energy consumption
of search by up to 55% and 81% compared to alternative strategies of executing
entirely locally or entirely in the cloud. Search results from smartphones are
obtained in seconds. Our experiments also suggest approaches to further improve
these results.
|
1106.5569
|
Augmented Reality Implementation Methods in Mainstream Applications
|
cs.CV
|
Augmented reality has become a useful tool in many areas, from space
exploration to military applications. Although the underlying theoretical
principles have been well known for almost a decade, augmented reality is
almost exclusively used in high-budget solutions with special hardware.
However, in the last few years we have seen the rising popularity of many
projects focused on deploying augmented reality on various mobile devices. Our
article is aimed at developers who are considering development of an augmented
reality application for the mainstream market. Such developers will be forced
to keep the application price, and therefore also the development price, at a
reasonable level. Using an existing image processing software library can bring
a significant cut in development costs. The theoretical part of the article
presents an overview of the structure of an augmented reality application.
Further, we describe an approach for selecting an appropriate library, along
with a review of existing software libraries in this area. The last part of
the article outlines our implementation of key parts of an augmented reality
application using the OpenCV library.
|
1106.5571
|
Mobile Augmented Reality Applications
|
cs.CV
|
Augmented reality has undergone considerable improvement in past years. Many
special techniques and hardware devices have been developed, but the crucial
breakthrough came with the spread of intelligent mobile phones, which enabled
the mass spread of augmented reality applications. However, mobile devices
have limited hardware capabilities, which narrows down the methods usable for
scene analysis. In this article we propose an augmented reality application
that uses cloud computing to enable more complex computational methods such as
neural networks. Our goal is to create an affordable augmented reality
application that will help car designers by 'virtualizing' car modifications.
|
1106.5594
|
The Swiss Board Directors Network in 2009
|
cs.SI physics.soc-ph
|
We study the networks formed by the directors of the most important Swiss
boards and the boards themselves for the year 2009. The networks are obtained
by projection from the original bipartite graph. We highlight a number of
important statistical features of those networks such as degree distribution,
weight distribution, and several centrality measures as well as their
interrelationships. While similar statistics were already known for other board
systems, and are comparable here, we have extended the study with a careful
investigation of director and board centrality, a k-core analysis, and a
simulation of the speed of information propagation and its relationships with
the topological aspects of the network such as clustering and link weight and
betweenness. The overall picture that emerges is one in which the topological
structure of the Swiss board and director networks has evolved in such a way
that special actors and links between actors play a fundamental role in the
flow of information among distant parts of the network. This is shown in
particular by the centrality measures and by the simulation of a simple
epidemic process on the directors network.
|
1106.5601
|
Class-based Rough Approximation with Dominance Principle
|
cs.CC cs.AI
|
Dominance-based Rough Set Approach (DRSA), as the extension of Pawlak's Rough
Set theory, is effective and fundamentally important in Multiple Criteria
Decision Analysis (MCDA). In previous DRSA models, the definitions of the
upper and lower approximations preserve class unions rather than singleton
classes. In this paper, we propose a new Class-based Rough
Approximation with respect to a series of previous DRSA models, including
Classical DRSA model, VC-DRSA model and VP-DRSA model. In addition, the new
class-based reducts are investigated.
|
1106.5615
|
Achievable Outage Rate Regions for the MISO Interference Channel
|
cs.IT math.IT
|
We consider the slow-fading two-user multiple-input single-output (MISO)
interference channel. We want to understand which rate points can be achieved,
allowing a non-zero outage probability. We do so by defining four different
outage rate regions. The definitions differ in whether the rates are declared
in outage jointly or individually and whether the transmitters have
instantaneous or statistical channel state information (CSI). The focus is on
the instantaneous CSI case with individual outage, where we propose a
stochastic mapping from the rate point and the channel realization to the
beamforming vectors. A major contribution is that we prove that the stochastic
component of this mapping is independent of the actual channel realization.
|
1106.5626
|
A distributed control strategy for reactive power compensation in smart
microgrids
|
math.OC cs.SY
|
We consider the problem of optimal reactive power compensation for the
minimization of power distribution losses in a smart microgrid. We first
propose an approximate model for the power distribution network, which allows
us to cast the problem into the class of convex quadratic, linearly
constrained, optimization problems. We then consider the specific problem of
commanding the microgenerators connected to the microgrid, in order to achieve
the optimal injection of reactive power. For this task, we design a randomized,
gossip-like optimization algorithm. We show how a distributed approach is
possible, where microgenerators need to have only a partial knowledge of the
problem parameters and of the state, and can perform only local measurements.
For the proposed algorithm, we provide conditions for convergence together with
an analytic characterization of the convergence speed. The analysis shows that,
in radial networks, the best performance can be achieved when we command
cooperation among units that are neighbors in the electric topology. Numerical
simulations are included to validate the proposed model and to confirm the
analytic results about the performance of the proposed algorithm.
|
1106.5648
|
Joint LDPC and Physical-layer Network Coding for Asynchronous
Bi-directional Relaying
|
cs.IT math.IT
|
In practical asynchronous bi-directional relaying, symbols transmitted by two
sources cannot arrive at the relay with perfect frame and symbol alignments and
the asynchronous multiple-access channel (MAC) should be seriously considered.
Recently, Lu et al. proposed a Tanner-graph representation of the
symbol-asynchronous MAC with rectangular-pulse shaping and further developed
the message-passing algorithm for optimal decoding of the symbol-asynchronous
physical-layer network coding. In this paper, we present a general channel
model for the asynchronous MAC with arbitrary pulse-shaping. Then, the Bahl,
Cocke, Jelinek, and Raviv (BCJR) algorithm is developed for optimal decoding of
the asynchronous MAC channel. For Low-Density Parity-Check (LDPC)-coded BPSK
signalling over the symbol-asynchronous MAC, we present a formal log-domain
generalized sum-product-algorithm (Log-G-SPA) for efficient decoding.
Furthermore, we propose to use cyclic codes to combat frame-asynchronism; the
relative delay inherent in this approach can be resolved by employing the
simple cyclic-redundancy-check (CRC)
coding technique. Simulation results demonstrate the effectiveness of the
proposed approach.
|
1106.5675
|
Writing on the Facade of RWTH ICT Cubes: Cost Constrained Geometric
Huffman Coding
|
cs.IT math.IT
|
In this work, a coding technique called cost constrained Geometric Huffman
coding (ccGhc) is developed. ccGhc minimizes the Kullback-Leibler distance
between a dyadic probability mass function (pmf) and a target pmf subject to an
affine inequality constraint. An analytical proof is given that when ccGhc is
applied to blocks of symbols, the optimum is asymptotically achieved when the
blocklength goes to infinity. The derivation of ccGhc is motivated by the
problem of encoding a text to a sequence of slats subject to architectural
design criteria. For the considered architectural problem, for a blocklength of
3, the codes found by ccGhc match the design criteria. For communications
channels with average cost constraints, ccGhc can be used to efficiently find
prefix-free modulation codes that are provably capacity achieving.
|
1106.5683
|
Distributed Interference Alignment with Low Overhead
|
cs.IT math.IT
|
Based on closed-form interference alignment (IA) solutions, a low overhead
distributed interference alignment (LOIA) scheme is proposed in this paper for
the $K$-user SISO interference channel, and extension to multiple antenna
scenario is also considered. Compared with the iterative interference alignment
(IIA) algorithm proposed by Gomadam et al., the overhead is greatly reduced.
Simulation results show that the IIA algorithm is strictly suboptimal compared
with our LOIA algorithm in the overhead-limited scenario.
|
1106.5714
|
Non-parametric change-point detection using string matching algorithms
|
math.PR cs.IT math.IT stat.ME
|
Given the output of a data source taking values in a finite alphabet, we wish
to detect change-points, that is times when the statistical properties of the
source change. Motivated by ideas of match lengths in information theory, we
introduce a novel non-parametric estimator which we call CRECHE (CRossings
Enumeration CHange Estimator). We present simulation evidence that this
estimator performs well, both for simulated sources and for real data formed by
concatenating text sources. For example, we show that we can accurately detect
the point at which a source changes from a Markov chain to an IID source with
the same stationary distribution. Our estimator requires no assumptions about
the form of the source distribution, and avoids the need to estimate its
probabilities. Further, we establish consistency of the CRECHE estimator under
a related toy model, by establishing a fluid limit and using martingale
arguments.
|
1106.5730
|
HOGWILD!: A Lock-Free Approach to Parallelizing Stochastic Gradient
Descent
|
math.OC cs.LG
|
Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve
state-of-the-art performance on a variety of machine learning tasks. Several
researchers have recently proposed schemes to parallelize SGD, but all require
performance-destroying memory locking and synchronization. This work aims to
show using novel theoretical analysis, algorithms, and implementation that SGD
can be implemented without any locking. We present an update scheme called
HOGWILD! which allows processors access to shared memory with the possibility
of overwriting each other's work. We show that when the associated optimization
problem is sparse, meaning most gradient updates only modify small parts of the
decision variable, then HOGWILD! achieves a nearly optimal rate of convergence.
We demonstrate experimentally that HOGWILD! outperforms alternative schemes
that use locking by an order of magnitude.
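The update scheme is simple enough to sketch: worker threads run SGD on a shared weight vector with no locking at all, and sparsity (each example touches only a few coordinates) keeps write conflicts rare. A toy sketch using Python threads on synthetic sparse least squares; all sizes and rates are illustrative, not the paper's setup:

```python
import numpy as np
from threading import Thread

def make_sparse_data(n=400, d=100, nnz=3, seed=0):
    # synthetic sparse regression: each example activates only `nnz` features
    rng = np.random.default_rng(seed)
    idx = np.array([rng.choice(d, size=nnz, replace=False) for _ in range(n)])
    val = rng.normal(size=(n, nnz))
    w_true = rng.normal(size=d)
    y = np.array([val[i] @ w_true[idx[i]] for i in range(n)])
    return idx, val, y, d

def hogwild(idx, val, y, d, lr=0.05, epochs=30, n_threads=4):
    w = np.zeros(d)                       # shared parameters: NO lock protects w

    def worker(seed):
        rng = np.random.default_rng(seed)
        for _ in range(epochs):
            for i in rng.permutation(len(y)):
                j, x = idx[i], val[i]
                err = w[j] @ x - y[i]     # read possibly-stale coordinates
                w[j] -= lr * err * x      # lock-free sparse write

    threads = [Thread(target=worker, args=(t,)) for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return w

def mse(w, idx, val, y):
    pred = np.array([val[i] @ w[idx[i]] for i in range(len(y))])
    return float(np.mean((pred - y) ** 2))
```

Occasionally two threads overwrite each other's update of a shared coordinate; when the problem is sparse such collisions are rare enough that convergence is essentially unaffected, which is the paper's central claim.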
|
1106.5737
|
Fingerprint: DWT, SVD Based Enhancement and Significant Contrast for
Ridges and Valleys Using Fuzzy Measures
|
cs.CV
|
The performance of a fingerprint recognition system is more accurate when the
fingerprint images are enhanced. In this paper we propose a novel fingerprint
image contrast enhancement technique based on the discrete wavelet transform
(DWT) and singular value decomposition (SVD). This technique is compared with
conventional image equalization techniques such as standard global histogram
equalization and local histogram equalization. An automatic histogram
threshold approach based on a fuzziness measure is presented. Then, using an
index of fuzziness, a similarity process is started to find the threshold
point. A significant contrast between ridges and valleys allows features to be
extracted from the best, medium and poor quality finger images, maximizing the
recognition rate using fuzzy measures. The experimental results show the
superiority of the proposed method in maximizing recognition performance.
|
1106.5742
|
Wireless Network Coding with Local Network Views: Coded Layer Scheduling
|
cs.IT math.IT
|
One of the fundamental challenges in the design of distributed wireless
networks is the large dynamic range of network state. Since continuous tracking
of global network state at all nodes is practically impossible, nodes can only
acquire limited local views of the whole network to design their transmission
strategies. In this paper, we study multi-layer wireless networks and assume
that each node has only a limited knowledge, namely 1-local view, where each
S-D pair has enough information to perform optimally when other pairs do not
interfere, along with connectivity information for the rest of the network. We
investigate the information-theoretic limits of communication with such limited
knowledge at the nodes. We develop a novel transmission strategy, namely Coded
Layer Scheduling, that solely relies on 1-local view at the nodes and
incorporates three different techniques: (1) per layer interference avoidance,
(2) repetition coding to allow overhearing of the interference, and (3) network
coding to allow interference neutralization. We show that our proposed scheme
can provide a significant throughput gain compared with the conventional
interference avoidance strategies. Furthermore, we show that our strategy
maximizes the achievable normalized sum-rate for some classes of networks,
hence, characterizing the normalized sum-capacity of those networks with
1-local view.
|
1106.5793
|
A Replica Inference Approach to Unsupervised Multi-Scale Image
Segmentation
|
cond-mat.stat-mech cs.CV physics.soc-ph
|
We apply a replica inference based Potts model method to unsupervised image
segmentation on multiple scales. This approach was inspired by the statistical
mechanics problem of "community detection" and its phase diagram. Specifically,
the problem is cast as identifying tightly bound clusters ("communities" or
"solutes") against a background or "solvent". Within our multiresolution
approach, we compute information theory based correlations among multiple
solutions ("replicas") of the same graph over a range of resolutions.
Significant multiresolution structures are identified by replica correlations
as manifest in information theory overlaps. With the aid of these correlations
as well as thermodynamic measures, the phase diagram of the corresponding Potts
model is analyzed both at zero and finite temperatures. Optimal parameters
corresponding to a sensible unsupervised segmentation correspond to the "easy
phase" of the Potts model. Our algorithm is fast and shown to be at least as
accurate as the best algorithms to date and to be especially suited to the
detection of camouflaged images.
|
1106.5815
|
Patchy Solution of a Francis-Byrnes-Isidori Partial Differential
Equation
|
math.OC cs.SY math.DS
|
The solution to the nonlinear output regulation problem requires one to solve
a first order PDE, known as the Francis-Byrnes-Isidori (FBI) equations. In this
paper we propose a method to compute approximate solutions to the FBI equations
when the zero dynamics of the plant are hyperbolic and the exosystem is
two-dimensional. With our method we are able to produce approximations that
converge uniformly to the true solution. Our method relies on the periodic
nature of two-dimensional analytic center manifolds.
|
1106.5818
|
Characterizing the process of reaching consensus for social systems
|
physics.soc-ph cs.SI
|
A novel way of characterizing the process of reaching consensus for a social
system is given. The foundation of the characterization is based on the theorem
which states that the sufficient and necessary condition for a system to reach
the state of consensus is the occurrence of communicators, defined as the units
of the system that can directly communicate with all the others simultaneously.
A model is proposed to illustrate the characterization explicitly. The
existence of communicators provides an efficient way of unifying two systems
such that a state of consensus is guaranteed after the merger.
|
1106.5825
|
Fundamentals of Inter-cell Overhead Signaling in Heterogeneous Cellular
Networks
|
cs.IT math.IT
|
Heterogeneous base stations (e.g. picocells, microcells, femtocells and
distributed antennas) will become increasingly essential for cellular network
capacity and coverage. Up until now, little basic research has been done on the
fundamentals of managing so much infrastructure -- much of it unplanned --
together with the carefully planned macro-cellular network. Inter-cell
coordination is in principle an effective way of ensuring different
infrastructure components behave in a way that increases, rather than
decreases, the key quality of service (QoS) metrics. The success of such
coordination depends heavily on how the overhead is shared, and the rate and
delay of the overhead sharing. We develop a novel framework to quantify
overhead signaling for inter-cell coordination, which is usually ignored in
traditional 1-tier networks, and assumes even more importance in multi-tier
heterogeneous cellular networks (HCNs). We derive the overhead quality contour
for general K-tier HCNs -- the achievable set of overhead packet rate, size,
delay and outage probability -- in closed-form expressions or computable
integrals under general assumptions on overhead arrivals and different overhead
signaling methods (backhaul and/or wireless). The overhead quality contour is
further simplified for two widely used models of overhead arrivals: Poisson and
deterministic arrival process. This framework can be used in the design and
evaluation of any inter-cell coordination scheme. It also provides design
insights on backhaul and wireless overhead channels to handle specific overhead
signaling requirements.
|
1106.5826
|
A Dirty Model for Multiple Sparse Regression
|
cs.LG math.ST stat.ML stat.TH
|
Sparse linear regression -- finding an unknown vector from linear
measurements -- is now known to be possible with fewer samples than variables,
via methods like the LASSO. We consider the multiple sparse linear regression
problem, where several related vectors -- with partially shared support sets --
have to be recovered. A natural question in this setting is whether one can use
the sharing to further decrease the overall number of samples required. A line
of recent research has studied the use of \ell_1/\ell_q norm
block-regularizations with q>1 for such problems; however, these can actually
perform worse in sample complexity than solving each problem separately and
ignoring the sharing, depending on the level of sharing.
We present a new method for multiple sparse linear regression that can
leverage support and parameter overlap when it exists, but not pay a penalty
when it does not. A very simple idea: we decompose the parameters into two
components and regularize these differently. We show, both theoretically and
empirically, that our method strictly and noticeably outperforms both the
\ell_1 and \ell_1/\ell_q methods, over the entire range of possible overlaps (except at
boundary cases, where we match the best method). We also provide theoretical
guarantees that the method performs well under high-dimensional scaling.
|
1106.5829
|
Active Classification: Theory and Application to Underwater Inspection
|
cs.RO cs.AI cs.CV
|
We discuss the problem in which an autonomous vehicle must classify an object
based on multiple views. We focus on the active classification setting, where
the vehicle controls which views to select to best perform the classification.
The problem is formulated as an extension to Bayesian active learning, and we
show connections to recent theoretical guarantees in this area. We formally
analyze the benefit of acting adaptively as new information becomes available.
The analysis leads to a probabilistic algorithm for determining the best views
to observe based on information theoretic costs. We validate our approach in
two ways, both related to underwater inspection: 3D polyhedra recognition in
synthetic depth maps and ship hull inspection with imaging sonar. These tasks
encompass both the planning and recognition aspects of the active
classification problem. The results demonstrate that actively planning for
informative views can reduce the number of necessary views by up to 80% when
compared to passive methods.
|
1106.5841
|
Capacity Bounds of Finite Dimensional CDMA Systems with Fading/Near-Far
Effects and Power Control
|
cs.IT math.IT
|
This paper deals with fading and/or near-far effects with or without power
control on the evaluation of the sum capacity of finite dimensional Code
Division Multiple Access (CDMA) systems for binary and finite nonbinary inputs
and signature matrices. Important results of this paper are that the knowledge
of the received power variations due to input power differences, fading and/or
near-far effects can significantly improve the sum capacity. Also, traditional
power control cannot improve the sum capacity; for the asymptotic case, any
type of power control on the near-far effects is equivalent to the case without
any power control. Moreover, for the asymptotic case, we have developed a
method that determines bounds for the fading/near-far sum capacity with
imperfect power estimation from the actual sum capacity of a CDMA system with
perfect power estimation. To show the power and utility of the results, a
number of sum capacity bounds for special cases are numerically evaluated.
|
1106.5890
|
A Comparison of Lex Bounds for Multiset Variables in Constraint
Programming
|
cs.AI
|
Set and multiset variables in constraint programming have typically been
represented using subset bounds. However, this is a weak representation that
neglects potentially useful information about a set such as its cardinality.
For set variables, the length-lex (LL) representation successfully provides
information about the length (cardinality) and position in the lexicographic
ordering. For multiset variables, where elements can be repeated, we consider
richer representations that take into account additional information. We study
eight different representations in which we maintain bounds according to one of
the eight different orderings: length-(co)lex (LL/LC), variety-(co)lex (VL/VC),
length-variety-(co)lex (LVL/LVC), and variety-length-(co)lex (VLL/VLC)
orderings. These representations integrate together information about the
cardinality, variety (number of distinct elements in the multiset), and
position in some total ordering. Theoretical and empirical comparisons of
expressiveness and compactness of the eight representations suggest that
length-variety-(co)lex (LVL/LVC) and variety-length-(co)lex (VLL/VLC) usually
give tighter bounds after constraint propagation. We implement the eight
representations and evaluate them against the subset bounds representation with
cardinality and variety reasoning. Results demonstrate that they offer
significantly better pruning and runtime.
|
1106.5917
|
Implementing Human-like Intuition Mechanism in Artificial Intelligence
|
cs.AI cs.NE
|
Human intuition has been simulated by several research projects using
artificial intelligence techniques. Most of these algorithms or models lack the
ability to handle complications or diversions. Moreover, they also do not
explain the factors influencing intuition and the accuracy of the results from
this process. In this paper, we present a simple series based model for
implementation of human-like intuition using the principles of connectivity and
unknown entities. By using Poker hand datasets and Car evaluation datasets, we
compare the performance of some well-known models with our intuition model. The
aim of the experiment was to predict the maximum number of accurate answers
using intuition based models. We found that the presence of unknown entities,
diversion from the current problem scenario, and identifying weaknesses without
the normal logic based execution greatly affect the reliability of the
answers. Generally, the intuition based models cannot be a substitute for the
logic based mechanisms in handling such problems. The intuition can only act as
a support for an ongoing logic based model that processes all the steps in a
sequential manner. However, when time and computational cost are very strict
constraints, this intuition based model becomes extremely important and useful,
because it can give a reasonably good performance. Factors affecting intuition
are analyzed and interpreted through our model.
|
1106.5928
|
Image denoising assessment using anisotropic stack filtering
|
cs.CV
|
In this paper we propose a measure of anisotropy as a quality parameter to
estimate the amount of noise in noisy images. The anisotropy of an image can be
determined through a directional measure, using an appropriate statistical
distribution of the information contained in the image. This new measure is
achieved through a stack filtering paradigm. First, we define a local
directional entropy, based on the distribution of 0's and 1's in the
neighborhood of every pixel location at each stack level. Then the variation
of this directional entropy is used to define an anisotropic measure.
The empirical results have shown that this measure can be regarded as an
excellent image noise indicator, which is particularly relevant for quality
assessment of denoising algorithms. The method has been evaluated with
artificial and real-world degraded images.
|
1106.5930
|
An Algorithm for Classification of Binary Self-Dual Codes
|
math.CO cs.DS cs.IT math.IT
|
An efficient algorithm for classification of binary self-dual codes is
presented. As an application, a complete classification of the self-dual codes
of length 38 is given.
|
1106.5936
|
Singly-even self-dual codes with minimal shadow
|
math.CO cs.IT math.IT
|
In this note we investigate extremal singly-even self-dual codes with minimal
shadow. For particular parameters we prove non-existence of such codes. By a
result of Rains \cite{Rains-asymptotic}, the length of extremal singly-even
self-dual codes is bounded. We give explicit bounds in case the shadow is
minimal.
|
1106.5960
|
On the classification of binary self-dual [44,22,8] codes with an
automorphism of order 3 or 7
|
math.CO cs.IT math.IT
|
All binary self-dual [44,22,8] codes with an automorphism of order 3 or 7 are
classified. In this way we complete the classification of extremal self-dual
codes of length 44 having an automorphism of odd prime order.
|
1106.5973
|
Entropy of Telugu
|
cs.CL
|
This paper presents an investigation of the entropy of the Telugu script.
Since this script is syllabic, and not alphabetic, the computation of entropy
is somewhat complicated.
|
1106.5979
|
Probabilistic Voronoi Diagrams for Probabilistic Moving Nearest Neighbor
Queries
|
cs.DB
|
A large spectrum of applications such as location based services and
environmental monitoring demand efficient query processing on uncertain
databases. In this paper, we propose the probabilistic Voronoi diagram (PVD)
for processing moving nearest neighbor queries on uncertain data, namely the
probabilistic moving nearest neighbor (PMNN) queries. A PMNN query finds the
most probable nearest neighbor of a moving query point continuously. To process
PMNN queries efficiently, we provide two techniques: a pre-computation approach
and an incremental approach. In the pre-computation approach, we develop an
algorithm to efficiently evaluate PMNN queries based on the pre-computed PVD
for the entire data set. In the incremental approach, we propose an incremental
probabilistic safe region based technique that does not require pre-computing
the whole PVD to answer the PMNN query. In this incremental approach, we
exploit the knowledge for a known region to compute the lower bound of the
probability of an object being the nearest neighbor. Experimental results show
that our approaches significantly outperform a sampling based approach by
orders of magnitude in terms of I/O, query processing time, and communication
overheads.
|
1106.5992
|
On the Dynamics of Human Proximity for Data Diffusion in Ad-Hoc Networks
|
cs.NI cs.HC cs.SI physics.soc-ph
|
We report on a data-driven investigation aimed at understanding the dynamics
of message spreading in a real-world dynamical network of human proximity. We
use data collected by means of a proximity-sensing network of wearable sensors
that we deployed at three different social gatherings, simultaneously involving
several hundred individuals. We simulate a message spreading process over the
recorded proximity network, focusing on both the topological and the temporal
properties. We show that by using an appropriate technique to deal with the
temporal heterogeneity of proximity events, a universal statistical pattern
emerges for the delivery times of messages, robust across all the data sets.
Our results are useful to set constraints for generic processes of data
dissemination, as well as to validate established models of human mobility and
proximity that are frequently used to simulate realistic behaviors.
|
1106.5995
|
From Cognitive Binary Logic to Cognitive Intelligent Agents
|
cs.AI cs.LO math.LO
|
The relation between self-awareness and intelligence is an open problem these
days. Despite the fact that self-awareness is usually related to Emotional
Intelligence, this is not the case here. The problem described in this paper is
how to model an agent which knows (Cognitive) Binary Logic and which is also
able to pass (without any mistake) a certain family of Turing Tests designed to
verify its knowledge and its discourse about the modal states of truth
corresponding to well-formed formulae within the language of Propositional
Binary Logic.
|
1106.5998
|
The 3rd International Planning Competition: Results and Analysis
|
cs.AI
|
This paper reports the outcome of the third in the series of biennial
international planning competitions, held in association with the International
Conference on AI Planning and Scheduling (AIPS) in 2002. In addition to
describing the domains, the planners and the objectives of the competition, the
paper includes analysis of the results. The results are analysed from several
perspectives, in order to address the questions of comparative performance
between planners, comparative difficulty of domains, the degree of agreement
between planners about the relative difficulty of individual problem instances
and the question of how well planners scale relative to one another over
increasingly difficult problems. The paper addresses these questions through
statistical analysis of the raw results of the competition, in order to
determine which results can be considered to be adequately supported by the
data. The paper concludes with a discussion of some challenges for the future
of the competition series.
|
1106.6022
|
Use of Markov Chains to Design an Agent Bidding Strategy for Continuous
Double Auctions
|
cs.AI
|
As computational agents are developed for increasingly complicated e-commerce
applications, the complexity of the decisions they face demands advances in
artificial intelligence techniques. For example, an agent representing a seller
in an auction should try to maximize the seller's profit by reasoning about a
variety of possibly uncertain pieces of information, such as the maximum prices
various buyers might be willing to pay, the possible prices being offered by
competing sellers, the rules by which the auction operates, the dynamic arrival
and matching of offers to buy and sell, and so on. A naive application of
multiagent reasoning techniques would require the seller's agent to explicitly
model all of the other agents through an extended time horizon, rendering the
problem intractable for many realistically-sized problems. We have instead
devised a new strategy that an agent can use to determine its bid price based
on a more tractable Markov chain model of the auction process. We have
experimentally identified the conditions under which our new strategy works
well, as well as how well it works in comparison to the optimal performance the
agent could have achieved had it known the future. Our results show that our
new strategy in general performs well, outperforming other tractable heuristic
strategies in a majority of experiments, and is particularly effective in a
'seller's market', where many buy offers are available.
|
1106.6024
|
The Rate of Convergence of AdaBoost
|
math.OC cs.AI stat.ML
|
The AdaBoost algorithm was designed to combine many "weak" hypotheses that
perform slightly better than random guessing into a "strong" hypothesis that
has very low error. We study the rate at which AdaBoost iteratively converges
to the minimum of the "exponential loss." Unlike previous work, our proofs do
not require a weak-learning assumption, nor do they require that minimizers of
the exponential loss are finite. Our first result shows that at iteration $t$,
the exponential loss of AdaBoost's computed parameter vector will be at most
$\epsilon$ more than that of any parameter vector of $\ell_1$-norm bounded by
$B$ in a number of rounds that is at most a polynomial in $B$ and $1/\epsilon$.
We also provide lower bounds showing that a polynomial dependence on these
parameters is necessary. Our second result is that within $C/\epsilon$
iterations, AdaBoost achieves a value of the exponential loss that is at most
$\epsilon$ more than the best possible value, where $C$ depends on the dataset.
We show that this dependence of the rate on $\epsilon$ is optimal up to
constant factors, i.e., at least $\Omega(1/\epsilon)$ rounds are necessary to
achieve within $\epsilon$ of the optimal exponential loss.
|
1106.6104
|
Deterministic Sequencing of Exploration and Exploitation for Multi-Armed
Bandit Problems
|
math.OC cs.LG cs.SY math.PR math.ST stat.TH
|
In the Multi-Armed Bandit (MAB) problem, there is a given set of arms with
unknown reward models. At each time, a player selects one arm to play, aiming
to maximize the total expected reward over a horizon of length T. An approach
based on a Deterministic Sequencing of Exploration and Exploitation (DSEE) is
developed for constructing sequential arm selection policies. It is shown that
for all light-tailed reward distributions, DSEE achieves the optimal
logarithmic order of the regret, where regret is defined as the total expected
reward loss against the ideal case with known reward models. For heavy-tailed
reward distributions, DSEE achieves O(T^1/p) regret when the moments of the
reward distributions exist up to the pth order for 1<p<=2 and O(T^1/(1+p/2))
for p>2. With the knowledge of an upper bound on a finite moment of the
heavy-tailed reward distributions, DSEE offers the optimal logarithmic regret
order. The proposed DSEE approach complements existing work on MAB by providing
corresponding results for general reward distributions. Furthermore, with a
clearly defined tunable parameter-the cardinality of the exploration sequence,
the DSEE approach is easily extendable to variations of MAB, including MAB with
various objectives, decentralized MAB with multiple players and incomplete
reward observations under collisions, MAB with unknown Markov dynamics, and
combinatorial MAB with dependent arms that often arise in network optimization
problems such as the shortest path, minimum spanning tree, and dominating
set problems under unknown random weights.
|
1106.6173
|
Power and Subcarrier Allocation for Physical-Layer Security in
OFDMA-based Broadband Wireless Networks
|
cs.IT math.IT
|
Providing physical-layer security for mobile users in future broadband
wireless networks is of both theoretical and practical importance. In this
paper, we formulate an analytical framework for resource allocation in a
downlink OFDMA-based broadband network with coexistence of secure users (SU)
and normal users (NU). The SU's require secure data transmission at the
physical layer while the NU's are served with conventional best-effort data
traffic. The problem is formulated as joint power and subcarrier allocation
with the objective of maximizing average aggregate information rate of all NU's
while maintaining an average secrecy rate for each individual SU under a total
transmit power constraint for the base station. We solve this problem in an
asymptotically optimal manner using dual decomposition. Our analysis shows that
an SU becomes a candidate competing for a subcarrier only if its channel gain
on this subcarrier is the largest among all and exceeds the second largest by a
certain threshold. Furthermore, while the power allocation for NU's follows the
conventional water-filling principle, the power allocation for SU's depends on
both its own channel gain and the largest channel gain among others. We also
develop a suboptimal algorithm to reduce the computational cost. Numerical
studies are conducted to evaluate the performance of the proposed algorithms in
terms of the achievable pair of information rate for NU and secrecy rate for SU
at different power consumptions.
|
1106.6174
|
Pairwise Check Decoding for LDPC Coded Two-Way Relay Block Fading
Channels
|
cs.IT math.IT
|
Partial decoding has the potential to achieve a larger capacity region than
full decoding in two-way relay (TWR) channels. Existing partial decoding
realizations are however designed for Gaussian channels and with a static
physical layer network coding (PLNC). In this paper, we propose a new solution
for joint network coding and channel decoding at the relay, called pairwise
check decoding (PCD), for low-density parity-check (LDPC) coded TWR system over
block fading channels. The main idea is to form a check relationship table
(check-relation-tab) for the superimposed LDPC coded packet pair in the
multiple access (MA) phase in conjunction with an adaptive PLNC mapping in the
broadcast (BC) phase. Using PCD, we then present a partial decoding method,
two-stage closest-neighbor clustering with PCD (TS-CNC-PCD), with the aim of
minimizing the worst pairwise error probability. Moreover, we propose the
minimum correlation optimization (MCO) for selecting the better
check-relation-tabs. Simulation results confirm that the proposed TS-CNC-PCD
offers a sizable gain over the conventional XOR with belief propagation (BP) in
fading channels.
|
1106.6185
|
Effects of Compensation, Connectivity and Tau in a Computational Model
of Alzheimer's Disease
|
cs.NE q-bio.NC
|
This work updates an existing, simplistic computational model of Alzheimer's
Disease (AD) to investigate the behaviour of synaptic compensatory mechanisms
in neural networks with small-world connectivity, and varying methods of
calculating compensation. It additionally introduces a method for simulating
tau neurofibrillary pathology, resulting in a more dramatic damage profile.
Small-world connectivity is shown to have contrasting effects on capacity,
retrieval time, and robustness to damage, whilst the use of more
easily-obtained remote memories rather than recent memories for synaptic
compensation is found to lead to rapid network damage.
|
1106.6186
|
IBSEAD: - A Self-Evolving Self-Obsessed Learning Algorithm for Machine
Learning
|
cs.LG
|
We present IBSEAD (distributed autonomous entity systems based Interaction),
a learning algorithm that allows a computer to self-evolve in a self-obsessed
manner. This algorithm lets the computer view its internal and external
environment as a series of independent entities, which interact with each
other, with and/or without the knowledge of the computer's brain. When a
learning algorithm interacts, it does so by detecting and
understanding the entities in the human algorithm. However, the problem with
this approach is that the algorithm does not consider the interaction of the
third party or unknown entities, which may be interacting with each other.
These unknown entities, in their interaction with the non-computer entities,
have an effect on the environment that influences the information and the
behaviour of the computer's brain. Such details and the ability to process the
dynamic and
unsettling nature of these interactions are absent in the current learning
algorithm such as the decision tree learning algorithm. IBSEAD is able to
evaluate and consider such algorithms and thus give us a better accuracy in
simulation of the highly evolved nature of the human brain. Processes such as
dreams, imagination and novelty, that exist in humans are not fully simulated
by the existing learning algorithms. Also, Hidden Markov models (HMM) are
useful in finding "hidden" entities, which may be known or unknown. However,
this model fails to consider the case of unknown entities which may be unclear
or unknown. IBSEAD is better because it considers three types of entities:
known, unknown and invisible. We present our case with a comparison of existing
algorithms in known environments and cases, and present the results of
experiments using dry runs of simulations of the existing machine learning
algorithms versus IBSEAD.
|
1106.6206
|
A generalisation of the Gilbert-Varshamov bound and its asymptotic
evaluation
|
cs.IT math.IT
|
The Gilbert-Varshamov (GV) lower bound on the maximum cardinality of a q-ary
code of length n with minimum Hamming distance at least d can be obtained by
application of Turan's theorem to the graph with vertex set {0,1,..,q-1}^n in
which two vertices are joined if and only if their Hamming distance is at least
d. We generalize the GV bound by applying Turan's theorem to the graph with
vertex set C^n, where C is a q-ary code of length m and two vertices are joined
if and only if their Hamming distance is at least d. We asymptotically evaluate
the resulting bound for n -> \infty and d = \delta mn for fixed \delta > 0, and
derive conditions on the distance distribution of C that are necessary and
sufficient for the asymptotic generalized bound to beat the asymptotic GV
bound. By invoking the Delsarte inequalities, we conclude that no improvement
on the asymptotic GV bound is obtained. By using a sharpening of Turan's
theorem due to Caro and Wei, we improve on our bound. It is undecided if there
exists a code C for which the improved bound can beat the asymptotic GV bound.
|
1106.6215
|
Towards two-dimensional search engines
|
cs.IR cond-mat.stat-mech
|
We study the statistical properties of various directed networks using
ranking of their nodes based on the dominant vectors of the Google matrix known
as PageRank and CheiRank. On average PageRank orders nodes proportionally to a
number of ingoing links, while CheiRank orders nodes proportionally to a number
of outgoing links. In this way the ranking of nodes becomes two-dimensional,
which paves the way for the development of a new type of two-dimensional
search engine. Statistical properties of information flow on the
PageRank-CheiRank plane are
analyzed for networks of British, French and Italian Universities, Wikipedia,
Linux Kernel, gene regulation and other networks. Special emphasis is placed
on the networks of British universities, using the large database publicly
available in the UK. Methods of spam link control are also analyzed.
|
1106.6223
|
Why 'GSA: A Gravitational Search Algorithm' Is Not Genuinely Based on
the Law of Gravity
|
cs.NE
|
Why 'GSA: A Gravitational Search Algorithm' Is Not Genuinely Based on the Law
of Gravity
|
1106.6224
|
Structured Compressed Sensing: From Theory to Applications
|
cs.IT math.IT
|
Compressed sensing (CS) is an emerging field that has attracted considerable
research interest over the past few years. Previous review articles in CS limit
their scope to standard discrete-to-discrete measurement architectures using
matrices of randomized nature and signal models based on standard sparsity. In
recent years, CS has worked its way into several new application areas. This,
in turn, necessitates a fresh look on many of the basics of CS. The random
matrix measurement operator must be replaced by more structured sensing
architectures that correspond to the characteristics of feasible acquisition
hardware. The standard sparsity prior has to be extended to include a much
richer class of signals and to encode broader data models, including
continuous-time signals. In our overview, the theme is exploiting signal and
measurement structure in compressive sensing. The prime focus is bridging
theory and practice; that is, to pinpoint the potential of structured CS
strategies to emerge from the math to the hardware. Our summary highlights new
directions as well as relations to more traditional CS, with the hope of
serving both as a review to practitioners wanting to join this emerging field,
and as a reference for researchers that attempts to put some of the existing
ideas in perspective of practical applications.
|
1106.6242
|
Visual Secret Sharing Scheme using Grayscale Images
|
cs.CR cs.CV
|
Pixel expansion and the quality of the reconstructed secret image has been a
major issue of visual secret sharing (VSS) schemes. A number of probabilistic
VSS schemes with minimum pixel expansion have been proposed for black and white
(binary) secret images. This paper presents a probabilistic (2, 3)-VSS scheme
for gray scale images. Its pixel expansion is larger in size, but the quality
of the reconstructed image is perfect. The construction of the shadow images
(transparent shares) is based on the binary OR operation.
|
1106.6251
|
Kernels for Vector-Valued Functions: a Review
|
stat.ML cs.AI math.ST stat.TH
|
Kernel methods are among the most popular techniques in machine learning.
From a frequentist/discriminative perspective they play a central role in
regularization theory as they provide a natural choice for the hypotheses space
and the regularization functional through the notion of reproducing kernel
Hilbert spaces. From a Bayesian/generative perspective they are the key in the
context of Gaussian processes, where the kernel function is also known as the
covariance function. Traditionally, kernel methods have been used in
supervised learning problems with scalar outputs, and indeed there has been a
considerable amount of work devoted to designing and learning kernels. More
recently there
has been an increasing interest in methods that deal with multiple outputs,
motivated partly by frameworks like multitask learning. In this paper, we
review different methods to design or learn valid kernel functions for multiple
outputs, paying particular attention to the connection between probabilistic
and functional methods.
|
1106.6258
|
A Note on Improved Loss Bounds for Multiple Kernel Learning
|
cs.LG
|
In this paper, we correct an upper bound, presented in~\cite{hs-11}, on the
generalisation error of classifiers learned through multiple kernel learning.
The bound in~\cite{hs-11} uses Rademacher complexity and has an \emph{additive}
dependence on the logarithm of the number of kernels and the margin achieved by
the classifier. However, there are some errors in parts of the proof which are
corrected in this paper. Unfortunately, the final result turns out to be a risk
bound which has a \emph{multiplicative} dependence on the logarithm of the
number of kernels and the margin achieved by the classifier.
|
1106.6271
|
Low-Complexity Adaptive Channel Estimation over Multipath Rayleigh
Fading Non-Stationary Channels Under CFO
|
cs.IT math.IT
|
In this paper, we propose novel low-complexity adaptive channel estimation
techniques for mobile wireless channels in the presence of Rayleigh fading,
carrier frequency offsets (CFO) and random channel variations. We show that
selective partial update of the estimated channel tap-weight vector offers a
better trade-off between performance and computational complexity, compared
to a full update of the estimated channel tap-weight vector. We evaluate the
mean-square weight error of the proposed methods and demonstrate their
usefulness via simulation studies.
|
1106.6323
|
The Diversity Multiplexing Tradeoff of the MIMO Half-Duplex Relay
Channel
|
cs.IT math.IT
|
The fundamental diversity-multiplexing tradeoff of the three-node,
multi-input, multi-output (MIMO), quasi-static, Rayleigh faded, half-duplex
relay channel is characterized for an arbitrary number of antennas at each node
and in which opportunistic scheduling (or dynamic operation) of the relay is
allowed, i.e., the relay can switch between receive and transmit modes at a
channel dependent time. In this most general case, the diversity-multiplexing
tradeoff is characterized as a solution to a simple, two-variable optimization
problem. This problem is then solved in closed form for special classes of
channels defined by certain restrictions on the numbers of antennas at the
three nodes. The key mathematical tool developed here that enables the explicit
characterization of the diversity-multiplexing tradeoff is the joint eigenvalue
distribution of three mutually correlated random Wishart matrices. Previously,
without actually characterizing the diversity-multiplexing tradeoff, the
optimality in this tradeoff metric of the dynamic compress-and-forward (DCF)
protocol based on the classical compress-and-forward scheme of Cover and El
Gamal was shown by Yuksel and Erkip. However, this scheme requires global
channel state information (CSI) at the relay. In this work, the so-called
quantize-map and forward (QMF) coding scheme due to Avestimehr {\em et al.}
is adopted as the achievability scheme, with the added benefit that it
achieves optimal tradeoff with only the knowledge of the (channel dependent)
switching time at the relay node. Moreover, in special classes of the MIMO
half-duplex relay channel, the optimal tradeoff is shown to be attainable even
without this knowledge. Such a result was previously known only for the
half-duplex relay channel with a single antenna at each node, also via the QMF
scheme.
|
1106.6328
|
On the Asymptotic Validity of the Decoupling Assumption for Analyzing
802.11 MAC Protocol
|
cs.NI cs.IT cs.PF math.IT
|
Performance evaluation of the 802.11 MAC protocol is classically based on the
decoupling assumption, which hypothesizes that the backoff processes at
different nodes are independent. This decoupling assumption results from mean
field convergence and is generally true in transient regime in the asymptotic
sense (when the number of wireless nodes tends to infinity), but, contrary to
widespread belief, may not necessarily hold in stationary regime. The issue is
often related to the existence and uniqueness of a solution to a fixed point
equation; however, it was also recently shown that this condition is not
sufficient; in contrast, a sufficient condition is a global stability property
of the associated ordinary differential equation. In this paper, we give a
simple condition that establishes the asymptotic validity of the decoupling
assumption for the homogeneous case. We also discuss the heterogeneous and the
differentiated service cases and formulate a new ordinary differential
equation. We show that the uniqueness of a solution to the associated fixed
point equation is not sufficient; we exhibit one case where the fixed point
equation has a unique solution but the decoupling assumption is not valid in
the asymptotic sense in the stationary regime.
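The fixed-point equation at issue can be made concrete with a Bianchi-style single-node model (the formulas and parameter values below are a simplified illustration, not the paper's exact model): the per-node attempt probability tau depends on the collision probability gamma, and coupling n symmetric nodes gives gamma = 1 - (1 - tau(gamma))^(n-1).

```python
def attempt_prob(gamma, cw0=16, max_stage=6):
    """Per-node attempt probability for collision probability gamma under
    binary exponential backoff with max_stage doublings (a simplified
    renewal-style formula; cw0 and max_stage are illustrative)."""
    attempts = sum(gamma ** k for k in range(max_stage + 1))
    backoff = sum(gamma ** k * (2 ** k * cw0) / 2.0 for k in range(max_stage + 1))
    return attempts / backoff

def collision_prob(n, damping=0.2, iters=2000):
    """Damped iteration of the coupling equation
    gamma = 1 - (1 - tau(gamma))^(n-1) for n symmetric nodes."""
    gamma = 0.3
    for _ in range(iters):
        target = 1.0 - (1.0 - attempt_prob(gamma)) ** (n - 1)
        gamma = (1.0 - damping) * gamma + damping * target
    return gamma

g10, g30 = collision_prob(10), collision_prob(30)  # more nodes, more collisions
```

Note the abstract's caveat: even a unique solution to this equation does not by itself validate the decoupling assumption in the stationary regime; global stability of the associated ODE is what matters.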
|
1106.6335
|
Bases for Riemann-Roch spaces of one point divisors on an optimal tower
of function fields
|
math.NT cs.IT math.AG math.IT
|
For applications in algebraic geometric codes, an explicit description of
bases of Riemann-Roch spaces of divisors on function fields over finite fields
is needed. We give an algorithm to compute such bases for one point divisors,
and Weierstrass semigroups over an optimal tower of function fields. We also
explicitly compute Weierstrass semigroups up to level eight.
|
1106.6341
|
Vision-Based Navigation III: Pose and Motion from Omnidirectional
Optical Flow and a Digital Terrain Map
|
cs.CV cs.AI
|
An algorithm for pose and motion estimation using corresponding features in
omnidirectional images and a digital terrain map is proposed. In a previous
paper, such an algorithm was considered for a regular camera. Using a Digital
Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables
recovering the absolute position and orientation of the camera. In order to do
this, the DTM is used to formulate a constraint between corresponding features
in two consecutive frames. In this paper, these constraints are extended to
handle non-central projection, as is the case with many omnidirectional
systems. The utilization of omnidirectional data is shown to improve the
robustness and accuracy of the navigation algorithm. The feasibility of this
algorithm is established through lab experimentation with two kinds of
omnidirectional acquisition systems: the first is a polydioptric camera, and
the second is a catadioptric camera.
|
1107.0015
|
Automaton based detection of affected cells in three dimensional
biological system
|
cs.CE
|
The aim of this research review is to propose the logic and search mechanism
for the development of an artificially intelligent automaton (AIA) that can
find affected cells in a 3-dimensional biological system. Research on the
possible application of such automatons to detect and control cancer cells in
the human body has attracted great attention: MRI and PET scans find affected
regions at the tissue level, whereas the proposed framework can find affected
regions at the cellular level. The AIA may be designed to ensure optimum
utilization as they record and might control the presence of affected cells in
a human body. The proposed models and techniques can be generalized and used in
any application where cells are injured or affected by some disease or
accident. The best method to introduce AIA into the body without surgery or
injection is to insert small pill-like automata carrying the material (e.g.,
drugs or leukocytes) needed to correct the infection. In this process, the AIA
can be compared to nano pills to deliver or support therapy. NanoHive
simulation software was used to validate the framework of this paper. The
existing nanomedicine models such as obstacle avoidance algorithm based models
(Hla K H S et al 2008) and the framework in this model were tested in different
simulation based experiments. The existing models such as obstacle avoidance
based models failed in complex environmental conditions (such as changing
environmental conditions, presence of semi-solid particles, etc) while the
model in this paper executed its framework successfully. As systems biology
matures and pharmacogenomics reaches its peak, this field of automatons
deserves a far deeper understanding. The results also indicate the importance
of artificial intelligence and other computational capabilities in the proposed
model for the successful detection of affected cells.
|
1107.0018
|
A New Technique for Combining Multiple Classifiers using The
Dempster-Shafer Theory of Evidence
|
cs.AI
|
This paper presents a new classifier combination technique based on the
Dempster-Shafer theory of evidence. The Dempster-Shafer theory of evidence is a
powerful method for combining measures of evidence from different classifiers.
However, since each of the available methods that estimates the evidence of
classifiers has its own limitations, we propose here a new implementation which
adapts to training data so that the overall mean square error is minimized. The
proposed technique is shown to outperform most available classifier combination
methods when tested on three different classification problems.
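Dempster's rule of combination, the core operation referred to above, can be sketched as follows (the class names and mass values are illustrative; the paper's contribution is how the input masses are estimated from training data, which is not shown here):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping
    frozenset hypotheses to masses) with Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass falling on the empty set
    k = 1.0 - conflict           # normalize by the non-conflicting mass
    return {h: w / k for h, w in combined.items()}

# Two classifiers' evidence over classes {A, B} (illustrative masses)
A, B = frozenset({"A"}), frozenset({"B"})
m1 = {A: 0.6, A | B: 0.4}
m2 = {A: 0.5, B: 0.3, A | B: 0.2}
m = dempster_combine(m1, m2)  # combined belief concentrates on A
```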
|
1107.0019
|
Searching for Bayesian Network Structures in the Space of Restricted
Acyclic Partially Directed Graphs
|
cs.AI
|
Although many algorithms have been designed to construct Bayesian network
structures using different approaches and principles, they all employ only two
methods: those based on independence criteria, and those based on a scoring
function and a search procedure (although some methods combine the two). Within
the score+search paradigm, the dominant approach uses local search methods in
the space of directed acyclic graphs (DAGs), where the usual choices for
defining the elementary modifications (local changes) that can be applied are
arc addition, arc deletion, and arc reversal. In this paper, we propose a new
local search method that uses a different search space, and which takes account
of the concept of equivalence between network structures: restricted acyclic
partially directed graphs (RPDAGs). In this way, the number of different
configurations of the search space is reduced, thus improving efficiency.
Moreover, although the final result must necessarily be a local optimum given
the nature of the search method, the topology of the new search space, which
avoids making early decisions about the directions of the arcs, may help to
find better local optima than those obtained by searching in the DAG space.
Detailed results of the evaluation of the proposed search method on several
test problems, including the well-known Alarm Monitoring System, are also
presented.
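The DAG-space local search that this abstract takes as its baseline (arc addition, deletion, and reversal under a score) can be sketched as follows; the toy score stands in for a real scoring function such as BDe or BIC, and an RPDAG search would instead move between equivalence classes:

```python
def is_acyclic(arcs, nodes):
    """Kahn's algorithm: True iff the arc set forms a DAG."""
    indeg = {n: 0 for n in nodes}
    for _, v in arcs:
        indeg[v] += 1
    ready = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while ready:
        n = ready.pop()
        seen += 1
        for u, v in arcs:
            if u == n:
                indeg[v] -= 1
                if indeg[v] == 0:
                    ready.append(v)
    return seen == len(nodes)

def neighbors(dag, nodes):
    """All DAGs one local change away: addition, deletion, reversal."""
    for u in nodes:
        for v in nodes:
            if u == v:
                continue
            if (u, v) in dag:
                yield dag - {(u, v)}               # arc deletion
                rev = (dag - {(u, v)}) | {(v, u)}  # arc reversal
                if is_acyclic(rev, nodes):
                    yield rev
            elif is_acyclic(dag | {(u, v)}, nodes):
                yield dag | {(u, v)}               # arc addition

def hill_climb(nodes, score, start=frozenset()):
    """Greedy local search over DAGs (arc sets as frozensets)."""
    current = start
    while True:
        best = max(neighbors(current, nodes), key=score, default=current)
        if score(best) <= score(current):
            return current
        current = best

nodes = ["A", "B", "C"]
target = {("A", "B"), ("B", "C")}
score = lambda g: len(g & target) - 0.1 * len(g)  # toy score, not BDe/BIC
best = hill_climb(nodes, score)
```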
|
1107.0020
|
Learning to Order BDD Variables in Verification
|
cs.AI
|
The size and complexity of software and hardware systems have significantly
increased in recent years. As a result, it is harder to guarantee their
correct behavior. One of the most successful methods for automated verification
of finite-state systems is model checking. Most of the current model-checking
systems use binary decision diagrams (BDDs) for the representation of the
tested model and in the verification process of its properties. Generally, BDDs
allow a canonical compact representation of a boolean function (given an order
of its variables). The more compact the BDD is, the better performance one gets
from the verifier. However, finding an optimal order for a BDD is an
NP-complete problem. Therefore, several heuristic methods based on expert
knowledge have been developed for variable ordering. We propose an alternative
approach in which the variable ordering algorithm gains 'ordering experience'
from training models and uses the learned knowledge for finding good orders.
Our methodology is based on offline learning of pair precedence classifiers
from training models, that is, learning which variable pair permutation is more
likely to lead to a good order. For each training model, a number of training
sequences are evaluated. Every training model variable pair permutation is then
tagged based on its performance on the evaluated orders. The tagged
permutations are then passed through a feature extractor and are given as
examples to a classifier creation algorithm. Given a model for which an order
is requested, the ordering algorithm consults each precedence classifier and
constructs a pair precedence table which is used to create the order. Our
algorithm was integrated with SMV, which is one of the most widely used
verification systems. Preliminary empirical evaluation of our methodology,
using real benchmark models, shows performance that is better than random
ordering and is competitive with existing algorithms that use expert knowledge.
We believe that in sub-domains of models (alu, caches, etc.) our system will
prove even more valuable. This is because it features the ability to learn
sub-domain knowledge, something that no other ordering algorithm does.
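One simple way to turn a learned pair-precedence table into a variable order, in the spirit of the description above, is to rank variables by their total precedence "wins" (a sketch; the paper consults per-pair classifiers to build the table, and the confidences here are made up):

```python
def order_from_precedence(variables, prec):
    """Build a BDD variable order from a pair-precedence table.
    prec[(u, v)] is the learned confidence that u should precede v;
    each variable is ranked by the total confidence it accumulates."""
    score = {v: 0.0 for v in variables}
    for (u, v), p in prec.items():
        score[u] += p          # u's share of this pairwise vote
        score[v] += 1.0 - p    # v gets the complement
    return sorted(variables, key=lambda v: -score[v])

# Hypothetical classifier outputs for three variables
prec = {("a", "b"): 0.9, ("a", "c"): 0.8, ("b", "c"): 0.7}
order = order_from_precedence(["a", "b", "c"], prec)
```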
|
1107.0021
|
Decentralized Supply Chain Formation: A Market Protocol and Competitive
Equilibrium Analysis
|
cs.AI
|
Supply chain formation is the process of determining the structure and terms
of exchange relationships to enable a multilevel, multiagent production
activity. We present a simple model of supply chains, highlighting two
characteristic features: hierarchical subtask decomposition, and resource
contention. To decentralize the formation process, we introduce a market price
system over the resources produced along the chain. In a competitive
equilibrium for this system, agents choose locally optimal allocations with
respect to prices, and outcomes are optimal overall. To determine prices, we
define a market protocol based on distributed, progressive auctions, and
myopic, non-strategic agent bidding policies. In the presence of resource
contention, this protocol produces better solutions than the greedy protocols
common in the artificial intelligence and multiagent systems literature. The
protocol often converges to high-value supply chains, and when competitive
equilibria exist, typically to approximate competitive equilibria. However,
complementarities in agent production technologies can cause the protocol to
wastefully allocate inputs to agents that do not produce their outputs. A
subsequent decommitment phase recovers a significant fraction of the lost
surplus.
|
1107.0022
|
K-Implementation
|
cs.GT cs.AI
|
This paper discusses an interested party who wishes to influence the behavior
of agents in a game (multi-agent interaction), which is not under his control.
The interested party cannot design a new game, cannot enforce agents' behavior,
cannot enforce payments by the agents, and cannot prohibit strategies available
to the agents. However, he can influence the outcome of the game by committing
to non-negative monetary transfers for the different strategy profiles that may
be selected by the agents. The interested party assumes that agents are
rational in the commonly agreed sense that they do not use dominated
strategies. Hence, a certain subset of outcomes is implemented in a given game
if by adding non-negative payments, rational players will necessarily produce
an outcome in this subset. Obviously, by making sufficiently big payments one
can implement any desirable outcome. The question is what is the cost of
implementation? In this paper we introduce the notion of k-implementation of a
desired set of strategy profiles, where k stands for the amount of payment that
needs to be actually made in order to implement desirable outcomes. A major
point in k-implementation is that monetary offers need not necessarily
materialize when following desired behaviors. We define and study
k-implementation in the contexts of games with complete and incomplete
information. In the latter case we mainly focus on the VCG games. Our setting
is later extended to deal with mixed strategies using correlation devices.
Together, the paper introduces and studies the implementation of desirable
outcomes by a reliable party who cannot modify game rules (i.e. provide
protocols), complementing previous work in mechanism design, while making it
more applicable to many realistic CS settings.
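The central accounting idea (promised payments on off-path profiles cost nothing) can be illustrated for two-player games in pure strategies. This is a simplified sketch using weak dominance and ignoring tie-breaking, not the paper's general algorithm; the payoff numbers are the standard prisoner's dilemma:

```python
def k_for_dominance(u1, u2, target):
    """Payments sufficient to make each player's target strategy weakly
    dominant, returning the amount k that actually materializes at the
    target profile. u1[s1][s2] and u2[s1][s2] are payoff tables."""
    t1, t2 = target
    k = 0.0
    # Player 1: in every column s2, top t1 up to the best deviation.
    for s2 in u1[t1]:
        need = max(u1[s1][s2] for s1 in u1) - u1[t1][s2]
        if need > 0 and s2 == t2:
            k += need  # only promises on the played profile are paid
    # Player 2: in every row s1, top t2 up to the best deviation.
    for s1 in u2:
        need = max(u2[s1][s2] for s2 in u2[s1]) - u2[s1][t2]
        if need > 0 and s1 == t1:
            k += need
    return k

# Prisoner's dilemma: implementing mutual cooperation ("C", "C")
u1 = {"C": {"C": 3, "D": 0}, "D": {"C": 4, "D": 1}}
u2 = {"C": {"C": 3, "D": 4}, "D": {"C": 0, "D": 1}}
k = k_for_dominance(u1, u2, ("C", "C"))
```

Here the off-path promises (topping up C against a defecting opponent) never materialize; only the on-path top-ups at (C, C) count toward k.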
|
1107.0023
|
CP-nets: A Tool for Representing and Reasoning with Conditional Ceteris
Paribus Preference Statements
|
cs.AI
|
Information about user preferences plays a key role in automated decision
making. In many domains it is desirable to assess such preferences in a
qualitative rather than quantitative way. In this paper, we propose a
qualitative graphical representation of preferences that reflects conditional
dependence and independence of preference statements under a ceteris paribus
(all else being equal) interpretation. Such a representation is often compact
and arguably quite natural in many circumstances. We provide a formal semantics
for this model, and describe how the structure of the network can be exploited
in several inference tasks, such as determining whether one outcome dominates
(is preferred to) another, ordering a set of outcomes according to the preference
relation, and constructing the best outcome subject to available evidence.
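For an acyclic CP-net with no evidence, the best-outcome query mentioned above reduces to a single forward sweep: assign each variable its most preferred value given its parents' already-chosen values. A minimal sketch, with an illustrative two-variable network:

```python
def best_outcome(cpn, topo_order):
    """Forward sweep through an acyclic CP-net. cpn maps each variable
    to (parents, cpt), where cpt maps a tuple of parent values to that
    variable's values ordered best-first."""
    outcome = {}
    for var in topo_order:
        parents, cpt = cpn[var]
        context = tuple(outcome[p] for p in parents)
        outcome[var] = cpt[context][0]  # most preferred given parents
    return outcome

# Illustrative network: the jacket choice conditions the pants preference
cpn = {
    "jacket": ((), {(): ["black", "white"]}),
    "pants": (("jacket",), {("black",): ["white", "black"],
                            ("white",): ["black", "white"]}),
}
out = best_outcome(cpn, ["jacket", "pants"])
```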
|
1107.0024
|
Complexity Results and Approximation Strategies for MAP Explanations
|
cs.AI
|
MAP is the problem of finding a most probable instantiation of a set of
variables given evidence. MAP has always been perceived to be significantly
harder than the related problems of computing the probability of a variable
instantiation Pr, or the problem of computing the most probable explanation
(MPE). This paper investigates the complexity of MAP in Bayesian networks.
Specifically, we show that MAP is complete for NP^PP and provide further
negative complexity results for algorithms based on variable elimination. We
also show that MAP remains hard even when MPE and Pr become easy. For example,
we show that MAP is NP-complete when the networks are restricted to polytrees,
and even then cannot be effectively approximated. Given the difficulty of
computing MAP exactly, and the difficulty of approximating MAP while providing
useful guarantees on the resulting approximation, we investigate best effort
approximations. We introduce a generic MAP approximation framework. We provide
two instantiations of the framework; one for networks which are amenable to
exact inference Pr, and one for networks for which even exact inference is too
hard. This allows MAP approximation on networks that are too complex to even
exactly solve the easier problems, Pr and MPE. Experimental results indicate
that using these approximation algorithms provides much better solutions than
standard techniques, and provides accurate MAP estimates in many cases.
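The gap between MPE and MAP can be seen on a two-variable network small enough to solve by enumeration (the probabilities are illustrative): the MAP assignment to a subset of variables need not agree with the projection of the MPE onto that subset.

```python
from itertools import product

# Tiny Bayesian network A -> B with binary variables
pA = {0: 0.6, 1: 0.4}
pB_given_A = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.9, 1: 0.1}}

def joint(a, b):
    return pA[a] * pB_given_A[a][b]

def brute_force_map(map_vars):
    """MAP by enumeration: maximize over map_vars, sum out the rest."""
    others = [v for v in ("A", "B") if v not in map_vars]
    best, best_p = None, -1.0
    for m in product((0, 1), repeat=len(map_vars)):
        fixed = dict(zip(map_vars, m))
        p = 0.0
        for o in product((0, 1), repeat=len(others)):
            full = {**fixed, **dict(zip(others, o))}
            p += joint(full["A"], full["B"])
        if p > best_p:
            best, best_p = fixed, p
    return best, best_p

mpe, _ = brute_force_map(["A", "B"])  # maximize over everything: A=1, B=0
map_a, _ = brute_force_map(["A"])     # sum out B first: A=0 wins
```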
|
1107.0025
|
Taming Numbers and Durations in the Model Checking Integrated Planning
System
|
cs.AI
|
The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant of the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear time algorithm to compute the parallel plan bypasses
known NP hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate and parameterized optimization.
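The linear-time parallelization step described above can be sketched as a single pass over the sequential plan, starting each action as soon as its precedence constraints allow (action names and durations are illustrative, and the precedence relation is assumed given):

```python
def schedule(plan, duration, precedes):
    """Critical-path scheduling in one pass over the sequential plan:
    each action starts as soon as every action it depends on has
    finished. precedes[a] must only contain actions earlier in plan."""
    start = {}
    for a in plan:
        start[a] = max((start[p] + duration[p] for p in precedes[a]),
                       default=0.0)
    return start

plan = ["load_a", "load_b", "drive"]  # sequential plan order
duration = {"load_a": 2.0, "load_b": 3.0, "drive": 5.0}
precedes = {"load_a": set(), "load_b": set(), "drive": {"load_a", "load_b"}}
start = schedule(plan, duration, precedes)
makespan = max(start[a] + duration[a] for a in plan)
```

The two loads are unordered with respect to each other, so they run in parallel and the makespan drops from the sequential 10.0 to 8.0.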
|