| id | title | categories | abstract |
|---|---|---|---|
1109.2156
|
Approximate Policy Iteration with a Policy Language Bias: Solving
Relational Markov Decision Processes
|
cs.AI
|
We study an approach to policy selection for large relational Markov Decision
Processes (MDPs). We consider a variant of approximate policy iteration (API)
that replaces the usual value-function learning step with a learning step in
policy space. This is advantageous in domains where good policies are easier to
represent and learn than the corresponding value functions, which is often the
case for the relational MDPs we are interested in. In order to apply API to
such problems, we introduce a relational policy language and corresponding
learner. In addition, we introduce a new bootstrapping routine for goal-based
planning domains, based on random walks. Such bootstrapping is necessary for
many large relational MDPs, where reward is extremely sparse, as API is
ineffective in such domains when initialized with an uninformed policy. Our
experiments show that the resulting system is able to find good policies for a
number of classical planning domains and their stochastic variants by solving
them as extremely large relational MDPs. The experiments also point to some
limitations of our approach, suggesting future work.
|
1109.2215
|
Finding missing edges and communities in incomplete networks
|
cs.SI physics.data-an physics.soc-ph
|
Many algorithms have been proposed for predicting missing edges in networks,
but they do not usually take account of which edges are missing. We focus on
networks which have missing edges of the form that is likely to occur in real
networks, and compare algorithms that find these missing edges. We also
investigate the effect of this kind of missing data on community detection
algorithms.
|
1109.2227
|
A radial version of the Central Limit Theorem
|
cs.IT cs.CV math.IT math.PR
|
In this note, we give a probabilistic interpretation of the Central Limit
Theorem used for approximating isotropic Gaussians in [1].
|
1109.2229
|
A Learning Theory Approach to Non-Interactive Database Privacy
|
cs.DS cs.CR cs.LG
|
In this paper we demonstrate that, ignoring computational constraints, it is
possible to privately release synthetic databases that are useful for large
classes of queries -- much larger in size than the database itself.
Specifically, we give a mechanism that privately releases synthetic data for a
class of queries over a discrete domain with error that grows as a function of
the size of the smallest net approximately representing the answers to that
class of queries. We show that this in particular implies a mechanism for
counting queries that gives error guarantees that grow only with the
VC-dimension of the class of queries, which itself grows only logarithmically
with the size of the query class.
We also show that it is not possible to privately release even simple classes
of queries (such as intervals and their generalizations) over continuous
domains. Despite this, we give a privacy-preserving polynomial time algorithm
that releases information useful for all halfspace queries, given a slight
relaxation of the utility guarantee. This algorithm does not release synthetic
data, but instead another data structure capable of representing an answer for
each query. We also give an efficient algorithm for releasing synthetic data
for the class of interval queries and axis-aligned rectangles of constant
dimension.
Finally, inspired by learning theory, we introduce a new notion of data
privacy, which we call distributional privacy, and show that it is strictly
stronger than the prevailing privacy notion, differential privacy.
|
1109.2237
|
The World is Either Algorithmic or Mostly Random
|
cs.IT math.IT physics.data-an physics.pop-ph
|
I will propose the notion that the universe is digital, not as a claim about
what the universe is made of but rather about the way it unfolds. Central to
the argument will be the concepts of symmetry breaking and algorithmic
probability, which will be used as tools to compare the way patterns are
distributed in our world to the way patterns are distributed in a simulated
digital one. These concepts will provide a framework for a discussion of the
informational nature of reality. I will argue that if the universe were analog,
then the world would likely be random, making it largely incomprehensible. The
digital model has, however, an inherent beauty in its imposition of an upper
limit and in the convergence in computational power to a maximal level of
sophistication. Even if deterministic, the fact that it is digital does not mean
that the world is trivial or predictable, but rather that it is built up from
operations that are very simple at the lowest scale yet, at a higher scale, look
complex and even random, though only in appearance.
|
1109.2271
|
Feature-Based Matrix Factorization
|
cs.AI cs.IR
|
Recommender systems have become increasingly popular and are widely used in
many applications. The growing amount of available information, in both
quantity and variety, poses a major challenge for recommender systems: how to
leverage this rich information for better performance. Most traditional
approaches design a specific model for each scenario, which demands great
effort in developing and modifying models. In this technical report, we
describe our implementation of feature-based matrix factorization. This model
abstracts many variants of matrix factorization models, and new types of
information can be utilized simply by defining new features, without modifying
any lines of code. Using the toolkit, we built the best single model reported
on track 1 of KDDCup'11.
|
1109.2275
|
On Phase Transition of Compressed Sensing in the Complex Domain
|
cs.IT math.IT
|
The phase transition is a performance measure of the sparsity-undersampling
tradeoff in compressed sensing (CS). This letter reports our first observation
and evaluation of an empirical phase transition of the $\ell_1$ minimization
approach to the complex valued CS (CVCS), which is positioned well above the
known phase transition of the real valued CS in the phase plane. This result
can be considered as an extension of the existing phase transition theory of
the block-sparse CS (BSCS) based on the universality argument, since the CVCS
problem does not meet the condition required by the phase transition theory of
BSCS but its observed phase transition coincides with that of BSCS. Our result
is obtained by applying the recently developed ONE-L1 algorithms to the
empirical evaluation of the phase transition of CVCS.
|
1109.2288
|
Heterogeneity for Increasing Performance and Reliability of
Self-Reconfigurable Multi-Robot Organisms
|
cs.RO cs.SY
|
Homogeneity and heterogeneity represent a well-known trade-off in the design
of modular robot systems. This work addresses the heterogeneity concept, its
rationales, design choices and performance evaluation. We introduce challenges
for self-reconfigurable systems, show advances of mechatronic and software
design of heterogeneous platforms and discuss experiments, which intend to
demonstrate usability and performance of this system.
|
1109.2296
|
Bandits with an Edge
|
cs.LG
|
We consider a bandit problem over a graph where the rewards are not directly
observed. Instead, the decision maker can compare two nodes and receive
(stochastic) information pertaining to the difference in their value. The graph
structure describes the set of possible comparisons. Consequently, comparing
between two nodes that are relatively far requires estimating the difference
between every pair of nodes on the path between them. We analyze this problem
from the perspective of sample complexity: How many queries are needed to find
an approximately optimal node with probability more than $1-\delta$ in the PAC
setup? We show that the topology of the graph plays a crucial role in defining the
sample complexity: graphs with a low diameter have a much better sample
complexity.
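The comparison-based setting above can be sketched in a few lines. This is a hypothetical illustration, not the paper's algorithm: node values are hidden, only noisy differences between adjacent nodes are observable, and a node's value relative to a reference is estimated by summing averaged edge differences along a path, so estimation error grows with path length (and hence with graph diameter).

```python
import random

def compare(values, u, v, noise=0.5):
    """Noisy observation of values[u] - values[v] for adjacent nodes."""
    return values[u] - values[v] + random.gauss(0, noise)

def estimate_best(values, path, samples=200):
    """Estimate each node's value relative to path[0] by summing averaged
    edge-difference estimates along the path; return the best node."""
    rel = {path[0]: 0.0}
    for u, v in zip(path, path[1:]):
        diff = sum(compare(values, v, u) for _ in range(samples)) / samples
        rel[v] = rel[u] + diff  # errors accumulate with path length
    return max(rel, key=rel.get)

random.seed(0)
vals = [0.1, 0.4, 0.9, 0.3]           # true node values, hidden from the learner
best = estimate_best(vals, [0, 1, 2, 3])
print(best)                            # node 2 has the largest value
```

The number of queries grows with the length of the path separating the candidate nodes, which is the intuition behind the diameter-dependent sample complexity discussed above.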
|
1109.2304
|
Efficient Minimization of Higher Order Submodular Functions using
Monotonic Boolean Functions
|
cs.DS cs.CV cs.DM
|
Submodular function minimization is a key problem in a wide variety of
applications in machine learning, economics, game theory, computer vision, and
many others. The general solver has a complexity of $O(n^3 \log^2 n \cdot E + n^4 \log^{O(1)} n)$ where $E$ is the time required to evaluate the function and
$n$ is the number of variables \cite{Lee2015}. On the other hand, many computer
vision and machine learning problems are defined over special subclasses of
submodular functions that can be written as the sum of many submodular cost
functions defined over cliques containing few variables. In such functions, the
pseudo-Boolean (or polynomial) representation \cite{BorosH02} of these
subclasses are of degree (or order, or clique size) $k$ where $k \ll n$. In
this work, we develop efficient algorithms for the minimization of this useful
subclass of submodular functions. To do this, we define novel mappings that transform submodular functions of order $k$ into quadratic ones. The underlying
idea is to use auxiliary variables to model the higher order terms and the
transformation is found using a carefully constructed linear program. In
particular, we model the auxiliary variables as monotonic Boolean functions,
allowing us to obtain a compact transformation using as few auxiliary variables
as possible.
|
1109.2313
|
Convergence Analysis of Saddle Point Problems in Time Varying Wireless
Systems - Control Theoretical Approach
|
cs.IT math.IT
|
Saddle point problems arise from many wireless applications, and primal-dual
iterative algorithms are widely applied to find the saddle points. In the
existing literature, the convergence results of such algorithms are established
assuming the problem specific parameters remain unchanged during the
iterations. However, this assumption is unrealistic in time varying wireless
systems, as explicit message passing is usually involved in the iterations and
the channel state information (CSI) may change in a time scale comparable to
the algorithm update period. This paper investigates the convergence behavior
and the tracking error of primal-dual iterative algorithms under time varying
CSI. The convergence results are established by studying the stability of an
equivalent virtual dynamic system derived in the paper, and the Lyapunov theory
is applied for the stability analysis. We show that the average tracking error
is proportional to the time variation rate of the CSI. Based on these analyses,
we also derive an adaptive primal-dual algorithm by introducing a compensation
term to reduce the tracking error under the time varying CSI.
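A toy illustration of the tracking behaviour described above (not the paper's system model): for the Lagrangian L(x, y) = 0.5*x**2 + y*(x - c_t), the saddle point is x* = c_t, y* = -c_t. When the parameter c_t drifts between iterations, the primal-dual iterates lag behind it by a steady-state error proportional to the drift rate. All constants here are illustrative.

```python
def primal_dual_step(x, y, c, step=0.2):
    """One primal-dual iteration on L(x, y) = 0.5*x**2 + y*(x - c)."""
    gx = x + y           # dL/dx
    gy = x - c           # dL/dy
    return x - step * gx, y + step * gy   # descent in x, ascent in y

x, y = 0.0, 0.0
errors = []
for t in range(200):
    c = 1.0 + 0.001 * t              # slowly drifting parameter (CSI analogue)
    x, y = primal_dual_step(x, y, c)
    errors.append(abs(x - c))        # tracking error of the primal iterate
print(errors[-1])                    # small steady-state error, set by the drift
```

Setting the drift to zero makes the error decay to zero, which mirrors the static-parameter convergence results the paper contrasts against.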
|
1109.2317
|
An Overview of Codes Tailor-made for Better Repairability in Networked
Distributed Storage Systems
|
cs.DC cs.IT math.IT
|
The continuously increasing amount of digital data generated by today's
society asks for better storage solutions. This survey looks at a new
generation of coding techniques designed specifically for the needs of
distributed networked storage systems, trying to reach the best compromise
among storage space efficiency, fault tolerance, and maintenance overheads.
Four families of codes tailor-made for distributed settings, namely - pyramid,
hierarchical, regenerating and self-repairing codes - are presented at a high
level, emphasizing the main ideas behind each of these codes, and discussing
their pros and cons, before concluding with a quantitative comparison among
them. This survey deliberately excludes technical details of the codes and does not attempt an exhaustive summary of the numerous related works. Instead, it
provides an overview of the major code families in a manner easily accessible
to a broad audience, by presenting the big picture of advances in coding
techniques for distributed storage solutions.
|
1109.2321
|
Visualizing Domain Ontology using Enhanced Anaphora Resolution Algorithm
|
cs.IR
|
The number of World Wide Web pages explodes every day, and since most
information processing systems are found to be inefficient, the potential of
Internet applications is often underutilized. The web can be utilized
efficiently when similar web pages are rigorously and exhaustively organized
and clustered based on domain knowledge (semantic-based clustering). An
ontology, which is a formal representation of domain knowledge, aids such
efficient utilization. The performance of almost all semantic-based clustering
techniques depends on the constructed ontology describing the domain knowledge.
The proposed methodology provides enhanced pronominal anaphora resolution, one
of the key aspects of semantic analysis in Natural Language Processing, for
obtaining cross references within a web page, which yields better ontology
construction. Experiments on several data sets show that the proposed method is
more efficient than earlier traditional algorithms.
|
1109.2346
|
Linking Search Space Structure, Run-Time Dynamics, and Problem
Difficulty: A Step Toward Demystifying Tabu Search
|
cs.AI
|
Tabu search is one of the most effective heuristics for locating high-quality
solutions to a diverse array of NP-hard combinatorial optimization problems.
Despite the widespread success of tabu search, researchers have a poor
understanding of many key theoretical aspects of this algorithm, including
models of the high-level run-time dynamics and identification of those search
space features that influence problem difficulty. We consider these questions
in the context of the job-shop scheduling problem (JSP), a domain where tabu
search algorithms have been shown to be remarkably effective. Previously, we
demonstrated that the mean distance between random local optima and the nearest
optimal solution is highly correlated with problem difficulty for a well-known
tabu search algorithm for the JSP introduced by Taillard. In this paper, we
discuss various shortcomings of this measure and develop a new model of problem
difficulty that corrects these deficiencies. We show that Taillard's algorithm
can be modeled with high fidelity as a simple variant of a straightforward
random walk. The random walk model accounts for nearly all of the variability
in the cost required to locate both optimal and sub-optimal solutions to random
JSPs, and provides an explanation for differences in the difficulty of random
versus structured JSPs. Finally, we discuss and empirically substantiate two
novel predictions regarding tabu search algorithm behavior. First, the method
for constructing the initial solution is highly unlikely to impact the
performance of tabu search. Second, tabu tenure should be selected to be as
small as possible while simultaneously avoiding search stagnation; values
larger than necessary lead to significant degradations in performance.
|
1109.2347
|
Breaking Instance-Independent Symmetries In Exact Graph Coloring
|
cs.AI
|
Code optimization and high level synthesis can be posed as constraint
satisfaction and optimization problems, such as graph coloring used in register
allocation. Graph coloring is also used to model more traditional CSPs relevant
to AI, such as planning, time-tabling and scheduling. Provably optimal
solutions may be desirable for commercial and defense applications.
Additionally, for applications such as register allocation and code
optimization, naturally-occurring instances of graph coloring are often small
and can be solved optimally. A recent wave of improvements in algorithms for
Boolean satisfiability (SAT) and 0-1 Integer Linear Programming (ILP) suggests
generic problem-reduction methods, rather than problem-specific heuristics,
because (1) heuristics may be upset by new constraints, (2) heuristics tend to
ignore structure, and (3) many relevant problems are provably inapproximable.
Problem reductions often lead to highly symmetric SAT instances, and
symmetries are known to slow down SAT solvers. In this work, we compare several
avenues for symmetry breaking, in particular when certain kinds of symmetry are
present in all generated instances. Our focus on reducing CSPs to SAT allows us
to leverage recent dramatic improvement in SAT solvers and automatically
benefit from future progress. We can use a variety of black-box SAT solvers
without modifying their source code because our symmetry-breaking techniques
are static, i.e., we detect symmetries and add symmetry breaking predicates
(SBPs) during pre-processing.
An important result of our work is that among the types of
instance-independent SBPs we studied and their combinations, the simplest and
least complete constructions are the most effective. Our experiments also
clearly indicate that instance-independent symmetries should mostly be
processed together with instance-specific symmetries rather than at the
specification level, contrary to what has been suggested in the literature.
|
1109.2355
|
Decision-Theoretic Planning with non-Markovian Rewards
|
cs.AI
|
A decision process in which rewards depend on history rather than merely on
the current state is called a decision process with non-Markovian rewards
(NMRDP). In decision-theoretic planning, where many desirable behaviours are
more naturally expressed as properties of execution sequences rather than as
properties of states, NMRDPs form a more natural model than the commonly
adopted fully Markovian decision process (MDP) model. While the more tractable
solution methods developed for MDPs do not directly apply in the presence of
non-Markovian rewards, a number of solution methods for NMRDPs have been
proposed in the literature. These all exploit a compact specification of the
non-Markovian reward function in temporal logic, to automatically translate the
NMRDP into an equivalent MDP which is solved using efficient MDP solution
methods. This paper presents NMRDPP (Non-Markovian Reward Decision Process
Planner), a software platform for the development and experimentation of
methods for decision-theoretic planning with non-Markovian rewards. The current
version of NMRDPP implements, under a single interface, a family of methods
based on existing as well as new approaches which we describe in detail. These
include dynamic programming, heuristic search, and structured methods. Using
NMRDPP, we compare the methods and identify certain problem features that
affect their performance. NMRDPP's treatment of non-Markovian rewards is
inspired by the treatment of domain-specific search control knowledge in the
TLPlan planner, which it incorporates as a special case. In the First
International Probabilistic Planning Competition, NMRDPP was able to compete
and perform well in both the domain-independent and hand-coded tracks, using
search control knowledge in the latter.
|
1109.2363
|
Sensor Management: Past, Present, and Future
|
stat.AP cs.RO cs.SY math.OC
|
Sensor systems typically operate under resource constraints that prevent the
simultaneous use of all resources all of the time. Sensor management becomes
relevant when the sensing system has the capability of actively managing these
resources; i.e., changing its operating configuration during deployment in
reaction to previous measurements. Examples of systems in which sensor
management is currently used or is likely to be used in the near future include
autonomous robots, surveillance and reconnaissance networks, and waveform-agile
radars. This paper provides an overview of the theory, algorithms, and
applications of sensor management as it has developed over the past decades and
as it stands today.
|
1109.2388
|
MIS-Boost: Multiple Instance Selection Boosting
|
cs.LG cs.CV
|
In this paper, we present a new multiple instance learning (MIL) method,
called MIS-Boost, which learns discriminative instance prototypes by explicit
instance selection in a boosting framework. Unlike previous instance selection
based MIL methods, we do not restrict the prototypes to a discrete set of
training instances but allow them to take arbitrary values in the instance
feature space. We also do not restrict the total number of prototypes and the
number of selected-instances per bag; these quantities are completely
data-driven. We show that MIS-Boost outperforms state-of-the-art MIL methods on
a number of benchmark datasets. We also apply MIS-Boost to large-scale image
classification, where we show that the automatically selected prototypes map to
visually meaningful image regions.
|
1109.2389
|
A Probabilistic Framework for Discriminative Dictionary Learning
|
cs.CV cs.LG
|
In this paper, we address the problem of discriminative dictionary learning
(DDL), where sparse linear representation and classification are combined in a
probabilistic framework. As such, a single discriminative dictionary and linear
binary classifiers are learned jointly. By encoding sparse representation and
discriminative classification models in a MAP setting, we propose a general
optimization framework that allows for a data-driven tradeoff between faithful
representation and accurate classification. As opposed to previous work, our
learning methodology is capable of incorporating a diverse family of
classification cost functions (including those used in popular boosting
methods), while avoiding the need for involved optimization techniques. We show
that DDL can be solved by a sequence of updates that make use of well-known and
well-studied sparse coding and dictionary learning algorithms from the
literature. To validate our DDL framework, we apply it to digit classification
and face recognition and test it on standard benchmarks.
|
1109.2397
|
Structured sparsity through convex optimization
|
cs.LG stat.ML
|
Sparse estimation methods are aimed at using or obtaining parsimonious
representations of data or models. While naturally cast as a combinatorial
optimization problem, variable or feature selection admits a convex relaxation
through the regularization by the $\ell_1$-norm. In this paper, we consider
situations where we are not only interested in sparsity, but where some
structural prior knowledge is available as well. We show that the $\ell_1$-norm
can then be extended to structured norms built on either disjoint or
overlapping groups of variables, leading to a flexible framework that can deal
with various structures. We present applications to unsupervised learning, for
structured sparse principal component analysis and hierarchical dictionary
learning, and to supervised learning in the context of non-linear variable
selection.
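A minimal sketch of the structured extension described above, for the simplest case of disjoint groups: the group-lasso penalty sums the Euclidean norms of groups of variables, and its proximal operator shrinks each group's norm, zeroing out whole groups at once. The group layout and parameter names are illustrative, not taken from the paper.

```python
import math

def group_norm(w, groups):
    """Sum of Euclidean norms over disjoint groups: encourages entire
    groups of variables to be set to zero together."""
    return sum(math.sqrt(sum(w[i] ** 2 for i in g)) for g in groups)

def prox_group(w, groups, t):
    """Proximal operator of t * group_norm: shrink each group's norm by t,
    zeroing the whole group when its norm falls below t."""
    out = list(w)
    for g in groups:
        norm = math.sqrt(sum(w[i] ** 2 for i in g))
        scale = max(0.0, 1 - t / norm) if norm > 0 else 0.0
        for i in g:
            out[i] = scale * w[i]
    return out

w = [3.0, 4.0, 0.1, 0.1]
groups = [[0, 1], [2, 3]]
gn = group_norm(w, groups)
pw = prox_group(w, groups, 1.0)
print(gn)   # 5.0 + sqrt(0.02)
print(pw)   # first group shrunk, second group zeroed entirely
```

Overlapping groups, as mentioned in the abstract, require a more involved proximal computation; this disjoint case is only the entry point.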
|
1109.2415
|
Convergence Rates of Inexact Proximal-Gradient Methods for Convex
Optimization
|
cs.LG math.OC
|
We consider the problem of optimizing the sum of a smooth convex function and
a non-smooth convex function using proximal-gradient methods, where an error is
present in the calculation of the gradient of the smooth term or in the
proximity operator with respect to the non-smooth term. We show that both the
basic proximal-gradient method and the accelerated proximal-gradient method
achieve the same convergence rate as in the error-free case, provided that the
errors decrease at appropriate rates. Using these rates, we perform as well as
or better than a carefully chosen fixed error level on a set of structured
sparsity problems.
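The basic proximal-gradient iteration analyzed above can be sketched for the lasso, f(x) = 0.5*||Ax - b||^2 + lam*||x||_1, where a constant perturbation `err` stands in for the inexact gradient the paper considers. The problem data is illustrative; this is a plain ISTA sketch, not the paper's accelerated variant.

```python
def soft_threshold(x, t):
    """Proximity operator of t*||.||_1, applied elementwise."""
    return [max(abs(v) - t, 0.0) * (1 if v > 0 else -1) for v in x]

def ista(A, b, lam, step, iters, err=0.0):
    """Proximal-gradient (ISTA) iterations; `err` perturbs each gradient
    coordinate to mimic an inexact gradient oracle."""
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(len(x))) - b[i]
             for i in range(len(b))]
        grad = [sum(A[i][j] * r[i] for i in range(len(b))) + err
                for j in range(len(x))]
        x = soft_threshold([x[j] - step * grad[j] for j in range(len(x))],
                           step * lam)
    return x

A = [[1.0, 0.0], [0.0, 1.0]]   # identity design: lasso solution is
b = [1.0, 0.2]                 # the soft-thresholded b
x = ista(A, b, lam=0.5, step=0.5, iters=100)
print(x)                       # close to [0.5, 0.0]
```

Rerunning with a nonzero `err` that shrinks over the iterations illustrates the paper's point: vanishing errors preserve convergence to the same solution.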
|
1109.2417
|
Internet and political communication - Macedonian case
|
cs.SI
|
We analyse how the Internet influences the process of political communication,
marketing, and the management of public relations; which online communication
methods are used by political parties; and the satisfaction of a party's voters
and other interest groups with the means of communication and the services
provided to them. We also examine whether social networks can affect political
and economic change in the state, and the political power of a single party.
|
1109.2418
|
Facebook and political communication -- Macedonian case
|
cs.SI
|
We analyse how the Internet influences the process of political communication,
marketing, and the management of public relations; which online communication
methods are used by political parties; and the satisfaction of a party's voters
and other interest groups with the means of communication and the services
provided to them. We also examine whether social networks can affect political
and economic change in the state, and the political power of a single party.
|
1109.2425
|
Query processing in distributed, taxonomy-based information sources
|
cs.DC cs.DB
|
We address the problem of answering queries over a distributed information
system, storing objects indexed by terms organized in a taxonomy. The taxonomy
consists of subsumption relationships between negation-free DNF formulas on
terms and negation-free conjunctions of terms. In the first part of the paper,
we consider the centralized case, deriving a hypergraph-based algorithm that is
efficient in data complexity. In the second part of the paper, we consider the
distributed case, presenting alternative ways of implementing the centralized
algorithm. These alternatives derive from two basic criteria: direct vs. query
re-writing evaluation, and centralized vs. distributed data or taxonomy
allocation. Combinations of these criteria cover a wide spectrum of
architectures, ranging from client-server to peer-to-peer. We evaluate the
performance of the various architectures by simulation on a network with
O(10^4) nodes, and derive final results. An extensive review of the relevant
literature is finally included.
|
1109.2427
|
Maximal frequent itemset generation using segmentation approach
|
cs.DB
|
Finding frequent itemsets in a data source is a fundamental operation behind
Association Rule Mining. Generally, many algorithms use either the bottom-up or
top-down approaches for finding these frequent itemsets. When the length of
frequent itemsets to be found is large, the traditional algorithms find all the
frequent itemsets from 1-length to n-length, which is a difficult process. This
problem can be solved by mining only the Maximal Frequent Itemsets (MFS).
Maximal Frequent Itemsets are frequent itemsets which have no proper frequent
superset. Thus, the generation of only maximal frequent itemsets reduces the
number of itemsets and also the time needed to generate all frequent itemsets,
as each maximal itemset of length $m$ implies the presence of $2^m - 2$
frequent itemsets. Furthermore, mining only maximal frequent itemsets is
sufficient in many data mining applications like minimal key discovery and
theory extraction. In this paper, we suggest a novel method for finding the
maximal frequent itemset from huge data sources using the concept of
segmentation of data source and prioritization of segments. Empirical
evaluation shows that this method outperforms various other known methods.
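The MFS notion defined above can be made concrete with a tiny brute-force sketch: an itemset is frequent if its support meets `min_sup`, and maximal if no proper frequent superset exists. The transactions are illustrative, and the paper's segmentation and prioritization strategy is not reproduced here.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_sup):
    """All itemsets whose support (number of containing transactions)
    is at least min_sup; brute force over all candidate sizes."""
    items = sorted({i for t in transactions for i in t})
    freq = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            sup = sum(1 for t in transactions if set(cand) <= t)
            if sup >= min_sup:
                freq.append(frozenset(cand))
    return freq

def maximal_frequent(transactions, min_sup):
    """Frequent itemsets with no proper frequent superset."""
    freq = frequent_itemsets(transactions, min_sup)
    return [s for s in freq if not any(s < t for t in freq)]

tx = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
mfs = sorted(sorted(s) for s in maximal_frequent(tx, 2))
print(mfs)  # each pair occurs twice, {a,b,c} only once, so the pairs are maximal
```

Note how the three maximal pairs compactly imply all six frequent itemsets (three singletons plus the pairs themselves), which is the compression argument the abstract makes.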
|
1109.2449
|
Multi-Hypothesis CRF-Segmentation of Neural Tissue in Anisotropic EM
Volumes
|
cs.CV
|
We present an approach for the joint segmentation and grouping of similar
components in anisotropic 3D image data and use it to segment neural tissue in
serial-section electron microscopy (EM) images.
We first construct a nested set of neuron segmentation hypotheses for each
slice. A conditional random field (CRF) then allows us to evaluate both the
compatibility of a specific segmentation and a specific inter-slice assignment
of neuron candidates with the underlying observations. The model is solved
optimally for an entire image stack simultaneously using integer linear
programming (ILP), which yields the maximum a posteriori solution in amortized
linear time in the number of slices.
We evaluate the performance of our approach on an annotated sample of the
Drosophila larva neuropil and show that the consideration of different
segmentation hypotheses in each slice leads to a significant improvement in the
segmentation and assignment accuracy.
|
1109.2475
|
Statistical Physics for Humanities: A Tutorial
|
physics.pop-ph cond-mat.stat-mech cs.SI physics.ed-ph physics.soc-ph
|
The image of physics is connected with simple "mechanical" deterministic
events: that an apple always falls down, that force equals mass times
acceleration. Indeed, applications of such concepts to social or historical
problems go back two centuries (population growth and stabilisation, by Malthus
and by Verhulst) and use "differential equations", as recently reviewed by
Vitanov and Ausloos [2011]. However, since even today's computers cannot follow
the motion of all air molecules within one cubic centimeter, the probabilistic
approach has become fashionable since Ludwig Boltzmann invented Statistical
Physics in the 19th century. Computer simulations in Statistical Physics deal
with single particles, a method called agent-based modelling in fields which
adopted it later. Particularly simple are binary models where each particle has
only two choices, called spin up and spin down by physicists, bit zero and bit
one by computer scientists, and voters for the Republicans or for the Democrats
in American politics (where one human is simulated as one particle).
Neighbouring particles may influence each other, and the Ising model of 1925 is
the best-studied example of such models. This text will explain to the reader
how to program the Ising model on a square lattice (in Fortran language);
starting from there the readers can build their own computer programs. Some
applications of Statistical Physics outside the natural sciences will be
listed.
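The square-lattice Ising program the tutorial describes in Fortran can be sketched in Python along the same lines: each site holds a spin of +1 or -1, and single-spin Metropolis flips are accepted with probability min(1, exp(-dE/T)). Lattice size and temperature below are illustrative.

```python
import math, random

def metropolis_sweep(spins, L, T):
    """One Monte Carlo sweep: L*L attempted single-spin Metropolis flips."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        # sum of the four neighbouring spins, with periodic boundaries
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb       # energy change if spin (i, j) flips
        if dE <= 0 or random.random() < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

random.seed(1)
L, T = 16, 1.0                          # well below the critical temperature ~2.27
spins = [[1] * L for _ in range(L)]     # start fully magnetized
for _ in range(100):
    metropolis_sweep(spins, L, T)
m = abs(sum(sum(row) for row in spins)) / (L * L)
print(m)                                # magnetization stays near 1 at low T
```

Raising T above ~2.27 (the critical temperature of the 2D Ising model) makes the magnetization collapse toward zero, which is the symmetry-breaking phenomenon such simulations are used to demonstrate.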
|
1109.2499
|
The Evolution of the Cuban HIV/AIDS Network
|
cs.SI physics.soc-ph
|
An individual detected as HIV positive in Cuba is asked to provide a list of
his/her sexual contacts for the previous 2 years. This allows one to gather
detailed information on the spread of the HIV epidemic. Here we study the
evolution of the sexual contact graph of detected individuals and also the
directed graph of HIV infections. The study covers the Cuban HIV epidemic
between the years 1986 and 2004 inclusive and is motivated by an earlier study
on the static properties of the network at the end of 2004. We use a variety of
advanced graph algorithms to paint a picture of the growth of the epidemic,
including an examination of diameters, geodesic distances, community structure
and centrality, among other characteristics. The analysis contrasts the HIV
network with other real networks, and graphs generated using the configuration
model. We find that the early epidemic starts in the heterosexual population
and then grows mainly through MSM (Men having Sex with Men) contact. The
epidemic exhibits a giant component which is shown to have degenerate chains of
vertices and after 1989, diameters are larger than that expected by the
equivalent configuration model graphs. In 1997 there is a significant increase
in the detection rate from 73 to 256 detections/year covering mainly MSMs which
results in a rapid increase of distances and diameters in the giant component.
|
1109.2543
|
Optimal Index Assignment for Multiple Description Scalar Quantization
|
cs.IT math.IT
|
We provide a method for designing an optimal index assignment for scalar
K-description coding. The method stems from a construction of translated scalar
lattices, which provides a performance advantage by exploiting a so-called
staggered gain. Interestingly, generation of the optimal index assignment is
based on a lattice in K-1 dimensional space. The use of the K-1 dimensional
lattice facilitates analytic insight into the performance and eliminates the
need for a greedy optimization of the index assignment. It is shown that
the optimal index assignment is not unique. This is illustrated for the
two-description case, where a periodic index assignment is selected from
possible optimal assignments and described in detail. The new index assignment
is applied to design of a K-description quantizer, which is found to outperform
a reference K-description quantizer at high rates. The performance advantage
due to the staggered gain increases with increasing redundancy among the
descriptions.
|
1109.2567
|
Quantization of Prior Probabilities for Collaborative Distributed
Hypothesis Testing
|
cs.IT math.IT
|
This paper studies the quantization of prior probabilities, drawn from an
ensemble, for distributed detection and data fusion. Design and performance
equivalences between a team of N agents tied by a fixed fusion rule and a more
powerful single agent are obtained. Effects of identical quantization and
diverse quantization are compared. Consideration of perceived common risk
enables agents using diverse quantizers to collaborate in hypothesis testing,
and it is proven that the minimum mean Bayes risk error is achieved by diverse
quantization. The comparison shows that optimal diverse quantization with K
cells per quantizer performs as well as optimal identical quantization with
N(K-1)+1 cells per quantizer. Similar results are obtained for maximum Bayes
risk error as the distortion criterion.
|
1109.2577
|
The Organization of Strong Links in Complex Networks
|
physics.soc-ph cs.SI q-bio.NC
|
A small-world topology characterizes many complex systems including the
structural and functional organization of brain networks. The topology allows
simultaneously for local and global efficiency in the interaction of the system
constituents. However, it ignores the gradations of interactions commonly
quantified by the link weight, w. Here, we identify an integrative weight
organization for brain, gene, social, and language networks, in which strong
links preferentially occur between nodes with overlapping neighbourhoods and
the small-world properties are robust to removal of a large fraction of the
weakest links. We also determine local learning rules that dynamically
establish such weight organization in response to past activity and capacity
demands, while preserving efficient local and global communication.
|
1109.2583
|
Optimal Backpressure Scheduling in Wireless Networks using Mutual
Information Accumulation
|
cs.IT cs.NI math.IT
|
In this paper we develop scheduling policies that maximize the stability
region of a wireless network under the assumption that mutual information
accumulation is implemented at the physical layer. When the link quality
between nodes is not sufficiently high that a packet can be decoded within a
single slot, the system can accumulate information across multiple slots,
eventually decoding the packet. The result is an expanded stability region. The
accumulation process over weak links is temporally coupled and therefore does
not satisfy the independent and identically distributed (i.i.d.) assumption that
underlies many previous analyses in this area. The problem setting thus also
poses new analytic challenges. We propose two dynamic scheduling
algorithms to cope with the non-i.i.d. nature of the decoding. The first
performs scheduling every $T$ slots, and approaches the boundary of the
stability region as $T$ gets large, but at the cost of increased average delay.
The second introduces virtual queues for each link and constructs a virtual
system wherein two virtual nodes are introduced for each link. The constructed
virtual system is shown to have the same stability region as the original
system. Through controlling the virtual queues in the constructed system, we
avoid the non-i.i.d. analysis difficulty and attain the full stability region.
We derive performance bounds for both algorithms and compare them through
simulation results.
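The scheduling algorithms above build on the classical differential-backlog (max-weight) backpressure rule. The sketch below shows only that baseline rule; the toy queues and topology are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of the classical differential-backlog (max-weight) rule that
# backpressure scheduling builds on; the queue values and links below are
# illustrative assumptions only.

def backpressure_schedule(queues, links):
    """Activate the link with the largest positive backlog differential.

    queues: dict mapping node -> queue backlog (packets)
    links:  list of directed links (src, dst) that could be activated
    Returns the chosen link, or None if no differential is positive.
    """
    best_link, best_weight = None, 0
    for u, v in links:
        weight = queues[u] - queues[v]  # differential backlog of the link
        if weight > best_weight:
            best_link, best_weight = (u, v), weight
    return best_link

queues = {"a": 5, "b": 2, "c": 0}
links = [("a", "b"), ("b", "c"), ("a", "c")]
print(backpressure_schedule(queues, links))  # -> ('a', 'c')
```

Under mutual information accumulation, the service a weak link offers in a slot depends on earlier slots; it is exactly this temporal coupling that breaks the per-slot rule above and motivates the paper's $T$-slot and virtual-queue variants.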
|
1109.2591
|
Polar codes for classical-quantum channels
|
quant-ph cs.IT math.IT
|
Holevo, Schumacher, and Westmoreland's coding theorem guarantees the
existence of codes that are capacity-achieving for the task of sending
classical data over a channel with classical inputs and quantum outputs.
Although they demonstrated the existence of such codes, their proof does not
provide an explicit construction of codes for this task. The aim of the present
paper is to fill this gap by constructing near-explicit "polar" codes that are
capacity-achieving. The codes exploit the channel polarization phenomenon
observed by Arikan for the case of classical channels. Channel polarization is
an effect in which one can synthesize a set of channels, by "channel combining"
and "channel splitting," in which a fraction of the synthesized channels are
perfect for data transmission while the other fraction are completely useless
for data transmission, with the good fraction equal to the capacity of the
channel. The channel polarization effect then leads to a simple scheme for data
transmission: send the information bits through the perfect channels and
"frozen" bits through the useless ones. The main technical contributions of the
present paper are threefold. First, we leverage several known results from the
quantum information literature to demonstrate that the channel polarization
effect occurs for channels with classical inputs and quantum outputs. We then
construct linear polar codes based on this effect, and the encoding complexity
is O(N log N), where N is the blocklength of the code. We also demonstrate that
a quantum successive cancellation decoder works well, in the sense that the
word error rate decays exponentially with the blocklength of the code. For this
last result, we exploit Sen's recent "non-commutative union bound" that holds
for a sequence of projectors applied to a quantum state.
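Channel polarization is easiest to see for the classical binary erasure channel (BEC), the setting of Arikan's original construction; the paper extends the same combining/splitting pattern to classical-quantum channels. A minimal numerical sketch, assuming a BEC with erasure probability eps:

```python
# Channel polarization on a classical binary erasure channel (BEC): one
# combining/splitting step turns a pair of BEC(eps) channels into a worse
# channel BEC(2*eps - eps^2) and a better channel BEC(eps^2).

def polarize(eps, levels):
    """Erasure probabilities of the 2**levels synthesized channels."""
    channels = [eps]
    for _ in range(levels):
        channels = [e for c in channels for e in (2 * c - c * c, c * c)]
    return channels

chans = polarize(0.5, 10)  # 1024 synthesized channels
good = sum(1 for e in chans if e < 1e-3)
bad = sum(1 for e in chans if e > 1 - 1e-3)
# As the number of levels grows, the fraction of near-perfect channels
# approaches the capacity 1 - eps = 0.5: information bits go on these,
# frozen bits on the useless ones.
print(good, bad, len(chans))
```

Note that each step preserves the average capacity: the two synthesized erasure probabilities sum to 2*eps, so the mean over all synthesized channels stays at eps while the individual channels drift toward 0 or 1.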
|
1109.2657
|
From Contracts in Structured English to CL Specifications
|
cs.CL cs.FL cs.LO
|
In this paper we present a framework to analyze conflicts in contracts
written in structured English. A contract that has been manually rewritten in
structured English is automatically translated into a formal language using the
Grammatical Framework (GF). In particular, we use the contract language CL as the
target formal language for this translation. In our framework, CL specifications
can then be input into the tool CLAN to detect the presence of conflicts
(whether there are contradictory obligations, permissions, and prohibitions). We
also use GF to obtain a version in (restricted) English of CL formulae. We discuss
the implementation of such a framework.
|
1109.2676
|
Dynamic Decentralized Algorithms for Cognitive Radio Relay Networks
|
cs.NI cs.IT math.IT
|
We propose a distributed spectrum access algorithm for cognitive radio relay
networks with multiple primary users (PU) and multiple secondary users (SU).
The key idea behind the proposed algorithm is that the PUs negotiate with the
SUs on both the amount of monetary compensation, and the amount of time the SUs
are either (i) allowed spectrum access, or (ii) cooperatively relaying the PU's
data, such that both the PUs' and the SUs' minimum rate requirements are
satisfied. The proposed algorithm is shown to be flexible in prioritizing
either the primary or the secondary users. We prove that the proposed algorithm
will result in the best possible stable matching and is weak Pareto optimal.
Numerical analysis also reveals that the distributed algorithm can achieve a
performance comparable to an optimal centralized solution, but with
significantly less overhead and complexity.
|
1109.2684
|
YouTube and political communication -- Macedonian case
|
cs.SI
|
We analyze how the Internet influences the process of political
communication, marketing, and the management of public relations; which
online communication methods are used by political parties; and we assess
satisfaction with the means of communication and the services the parties
provide to their voters and other interest groups, as well as whether social
networks can affect the political and economic changes in the state and the
political power of a single party.
|
1109.2697
|
Selection of Model in Developing Information Security Criteria for Smart
Grid Security System
|
cs.CR cs.SY
|
At present, the "Smart Grid" has emerged as one of the most advanced energy
supply chains. This paper looks into the security system of the smart grid via
the smart planet system. The scope is focused on the information security
criteria that impact consumer trust and satisfaction. The importance of
information security criteria is perceived as the main aspect impacting
customer trust throughout the entire smart grid system. This paper also focuses
on the selection of a model for developing information security criteria for a
smart grid.
|
1109.2720
|
Capacity Pre-Log of SIMO Correlated Block-Fading Channels
|
cs.IT math.IT
|
We establish an upper bound on the noncoherent capacity pre-log of temporally
correlated block-fading single-input multiple-output (SIMO) channels. The upper
bound matches the lower bound recently reported in Riegler et al. (2011), and,
hence, yields a complete characterization of the SIMO noncoherent capacity
pre-log, provided that the channel covariance matrix satisfies a mild technical
condition. This result allows one to determine the optimal number of receive
antennas to be used to maximize the capacity pre-log for a given block-length
and a given rank of the channel covariance matrix.
|
1109.2752
|
On Validating Boolean Optimizers
|
cs.AI
|
Boolean optimization finds a wide range of application domains, which has
motivated a number of different organizations of Boolean optimizers since the
mid-1990s. Some of the most successful approaches are based on iterative calls to
an NP oracle, using either linear search, binary search or the identification
of unsatisfiable sub-formulas. The increasing use of Boolean optimizers in
practical settings raises the question of confidence in computed results. For
example, the issue of confidence is paramount in safety critical settings. One
way of increasing the confidence of the results computed by Boolean optimizers
is to develop techniques for validating the results. Recent work studied the
validation of Boolean optimizers based on branch-and-bound search. This paper
complements existing work, and develops methods for validating Boolean
optimizers that are based on iterative calls to an NP oracle. This entails
implementing solutions for validating both satisfiable and unsatisfiable
answers from the NP oracle. The work described in this paper can be applied to
a wide range of Boolean optimizers that find application in Pseudo-Boolean
Optimization and in Maximum Satisfiability. Preliminary experimental results
indicate that the impact of the proposed method in overall performance is
negligible.
|
1109.2766
|
Secure Broadcasting With Side-Information
|
cs.IT math.IT
|
In this paper, we derive information-theoretic performance limits for secure
and reliable communications over the general two-user discrete memoryless
broadcast channel with side-information at the transmitter. The sender wishes
to broadcast two independent messages to two receivers, under the constraint
that each message should be kept confidential from the unintended receiver.
Furthermore, the encoder has side-information - for example, fading in the
wireless medium, interference caused by neighboring nodes in the network, etc.
- provided to it in a noncausal manner, i.e., before the process of
transmission. We derive an inner bound on the capacity region of this channel,
by employing an extension of Marton's coding technique used for the classical
two-user broadcast channel, in conjunction with a stochastic encoder to satisfy
confidentiality constraints. Based on previously known results, we discuss a
procedure to present a schematic of the achievable rate region. The
rate-penalties for dealing with side-information and confidentiality
constraints make the achievable region for this channel strictly smaller than
the rate regions of those channels where one or both of these constraints are
relaxed.
|
1109.2777
|
Connectivity Structure of Systems
|
math.OC cs.SY
|
In this paper, we consider to what degree the structure of a linear system is
determined by the system's input/output behavior. The structure of a linear
system is a directed graph where the vertices represent the variables in the
system and an edge (x,y) exists if x directly influences y. In a number of
studies, researchers have attempted to identify such structures using
input/output data. Thus, our main aim is to consider to what degree the results
of such studies are valid. We begin by showing that in many cases, applying a
linear transformation to a system will change the system's graph. Furthermore,
we show that even the graph's components and their interactions are not
determined by input/output behavior. From these results, we conclude that
without further assumptions, very few aspects, if any, of a system's structure
are determined by its input/output relation. We consider a number of such
assumptions. First, we show that for a number of parameterizations, we can
characterize when two systems have the same structure. Second, in many
applications, we can use domain knowledge to exclude certain interactions. In
these cases, we can assume that a certain variable x does not influence another
variable y. We show that these assumptions are not sufficient to identify a
system's parameters using input/output data. We conclude that identifying a
system's structure from input/output data may not be possible given only
assumptions of the form "x does not influence y".
|
1109.2782
|
Two Classes of Broadcast Channels With Side-Information: Capacity Outer
Bounds
|
cs.IT math.IT
|
In this paper, we derive outer bounds on the capacity region of two classes
of the general two-user discrete memoryless broadcast channels with
side-information at the transmitter. The first class comprises the classical
broadcast channel where a sender transmits two independent messages to two
receivers. A constraint that each message must be kept confidential from the
unintended receiver constitutes the second class. For both classes, the
conditional distribution characterizing the channel depends on a state process
and the encoder has side-information provided to it in a noncausal manner. For
the first class of channels, an outer bound is derived employing techniques
used to prove the converse theorem for the Gel'fand-Pinsker's channel with
random parameters; the bounds are tight for individual rate constraints, but
can be improved upon for the sum rate. The technique for deriving outer bounds
for the second class of channels hinges on the confidentiality requirements; we
also derive a genie-aided outer bound, where a hypothetical genie gives the
unintended message to a receiver which treats it as side-information during
equivocation computation. For both classes of channels, Csisz\'{a}r's sum
identity plays a central role in establishing the capacity outer bounds.
|
1109.2788
|
Developing a supervised training algorithm for limited precision
feed-forward spiking neural networks
|
cs.NE
|
Spiking neural networks have been referred to as the third generation of
artificial neural networks, where the information is coded as the timing of the
spikes. There are a number of different spiking neuron models available and
they are categorized based on their level of abstraction. In addition, there
are two known learning methods, unsupervised and supervised learning. This
thesis focuses on supervised learning where a new algorithm is proposed, based
on genetic algorithms. The proposed algorithm is able to train both synaptic
weights and delays, and also allows each neuron to emit multiple spikes, thus
taking full advantage of the spatial-temporal coding power of the spiking
neurons. In addition, limited synaptic precision is applied; only six bits are
used to describe and train a synapse, three bits for the weights and three bits
for the delays. Two limited precision schemes are investigated. The proposed
algorithm is tested on the XOR classification problem, where it produces better
results with even smaller network architectures than those previously proposed.
Furthermore, the algorithm is benchmarked on the Fisher iris classification
problem where it produces higher classification accuracies compared to
SpikeProp, QuickProp and Rprop. Finally, a hardware implementation on a
microcontroller is done for the XOR problem as a proof of concept. Keywords:
Spiking neural networks, supervised learning, limited synaptic precision,
genetic algorithms, hardware implementation.
|
1109.2793
|
Finding missing edges in networks based on their community structure
|
cs.IR cs.SI physics.data-an physics.soc-ph
|
Many edge prediction methods have been proposed, based on various local or
global properties of the structure of an incomplete network. Community
structure is another significant feature of networks: Vertices in a community
are more densely connected than average. It is often true that vertices in the
same community have "similar" properties, which suggests that missing edges are
more likely to be found within communities than elsewhere. We use this insight
to propose a strategy for edge prediction that combines existing edge
prediction methods with community detection. We show that this method gives
better prediction accuracy than existing edge prediction methods alone.
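As a rough illustration of the idea (not the paper's exact method), any baseline similarity score can be boosted for candidate pairs that fall inside the same detected community; the graph, community partition, and boost factor below are assumptions for illustration:

```python
# Sketch: combine a baseline edge predictor (common neighbours) with a
# community-membership bonus. Graph, communities, and boost are illustrative.
from itertools import combinations

def predict_edges(adj, community, boost=2.0):
    """Score each absent edge; higher scores = more likely missing edges."""
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v in adj[u]:
            continue  # edge already present
        score = len(adj[u] & adj[v])           # baseline: common neighbours
        if community[u] == community[v]:
            score *= boost                     # intra-community bonus
        if score:
            scores[(u, v)] = score
    return sorted(scores, key=scores.get, reverse=True)

# Triangle a-b-c with the a-b edge "missing", plus a pendant path c-d-e.
adj = {"a": {"c"}, "b": {"c"}, "c": {"a", "b", "d"},
       "d": {"c", "e"}, "e": {"d"}}
community = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1}
print(predict_edges(adj, community))
# -> [('a', 'b'), ('a', 'd'), ('b', 'd'), ('c', 'e')]
```

The intra-community pair (a, b) is ranked first, ahead of the cross-community candidates with the same number of common neighbours.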
|
1109.2806
|
Using the DiaSpec design language and compiler to develop robotics
systems
|
cs.RO cs.SE
|
A Sense/Compute/Control (SCC) application is one that interacts with the
physical environment. Such applications are pervasive in domains such as
building automation, assisted living, and autonomic computing. Developing an
SCC application is complex because: (1) the implementation must address both
the interaction with the environment and the application logic; (2) any
evolution in the environment must be reflected in the implementation of the
application; (3) correctness is essential, as effects on the physical
environment can have irreversible consequences. The SCC architectural pattern
and the DiaSpec domain-specific design language propose a framework to guide
the design of such applications. From a design description in DiaSpec, the
DiaSpec compiler is capable of generating a programming framework that guides
the developer in implementing the design and that provides runtime support. In
this paper, we report on an experiment using DiaSpec (both the design language
and compiler) to develop a standard robotics application. We discuss the
benefits and problems of using DiaSpec in a robotics setting and present some
changes that would make DiaSpec a better framework in this setting.
|
1109.2816
|
Designing MPC controllers by reverse-engineering existing LTI
controllers
|
math.OC cs.SY
|
This technical report presents a method for designing a constrained
output-feedback model predictive controller (MPC) that behaves in the same way
as an existing baseline stabilising linear time invariant output-feedback
controller when constraints are inactive. The baseline controller is cast into
an observer-compensator form and an inverse-optimal cost function is used as
the basis of the MPC controller. The available degrees of design freedom are
explored, and some guidelines provided for the selection of an appropriate
observer-compensator realisation that will best allow exploitation of the
constraint-handling and redundancy management capabilities of MPC.
Consideration is given to output setpoint tracking, and the method is
demonstrated with three different multivariable plants of varying complexity.
|
1109.2843
|
A Novel Relay-Aided Transmission Scheme in Cognitive Radio Networks
|
cs.NI cs.IT math.IT
|
In underlay cognitive radio networks, unlicensed secondary users are allowed
to share the spectrum with licensed primary users when the interference induced
on the primary transmission is limited. In this paper, we propose a new
cooperative transmission scheme for cognitive radio networks where a relay node
is able to help both the primary and secondary transmissions. We derive exact
closed-form and upper bound expressions of the conditional primary and
secondary outage probabilities over Rayleigh fading channels. Furthermore, we
propose a simple power allocation algorithm. Finally, using numerical
evaluation and simulation results we show the potential of our cooperative
transmission scheme in improving the secondary outage probability without
harming the primary one.
|
1109.2891
|
On the nonexistence of $[\binom{2m}{m-1}, 2m, \binom{2m-1}{m-1}]$, $m$
odd, complex orthogonal design
|
cs.IT math.IT
|
Complex orthogonal designs (CODs) are used to construct space-time block
codes. COD $\mathcal{O}_z$ with parameter $[p, n, k]$ is a $p\times n$ matrix,
where nonzero entries are filled by $\pm z_i$ or $\pm z^*_i$, $i = 1, 2,...,
k$, such that $\mathcal{O}^H_z \mathcal{O}_z =
(|z_1|^2+|z_2|^2+...+|z_k|^2)I_{n \times n}$. Adams et al. in "The final case
of the decoding delay problem for maximum rate complex orthogonal designs,"
IEEE Trans. Inf. Theory, vol. 56, no. 1, pp. 103-122, Jan. 2010, first proved
the nonexistence of $[\binom{2m}{m-1}, 2m, \binom{2m-1}{m-1}]$, $m$ odd, COD.
Combining with the previous result that decoding delay should be an integer
multiple of $\binom{2m}{m-1}$, they solved the final case $n \equiv 2 \pmod 4$
of the decoding delay problem for maximum rate complex orthogonal designs.
In this paper, we give another proof of the nonexistence of COD with
parameter $[\binom{2m}{m-1}, 2m, \binom{2m-1}{m-1}]$, $m$ odd. Our new proof is
based on the uniqueness of $[\binom{2m}{m-1}, 2m-1, \binom{2m-1}{m-1}]$ under
equivalence operation, where an explicit-form representation is proposed to
help the proof. Then, by proving that it is impossible to add an extra orthogonal
column on COD $[\binom{2m}{m-1}, 2m-1, \binom{2m-1}{m-1}]$ when $m$ is odd, we
complete the proof of the nonexistence of COD $[\binom{2m}{m-1}, 2m,
\binom{2m-1}{m-1}]$.
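For concreteness, the defining property $\mathcal{O}^H_z \mathcal{O}_z = (\sum_i |z_i|^2) I$ can be checked numerically on the simplest COD, the $[2, 2, 2]$ Alamouti design; the test values of $z_1, z_2$ below are arbitrary:

```python
# Numerical check of the COD property O^H O = (sum |z_i|^2) I for the 2x2
# Alamouti design ([p, n, k] = [2, 2, 2]); z1, z2 are arbitrary test values.

def conj_transpose(M):
    return [[M[r][c].conjugate() for r in range(len(M))]
            for c in range(len(M[0]))]

def matmul(A, B):
    return [[sum(A[r][i] * B[i][c] for i in range(len(B)))
             for c in range(len(B[0]))] for r in range(len(A))]

z1, z2 = 1 + 2j, 3 - 1j
O = [[z1, z2],
     [-z2.conjugate(), z1.conjugate()]]       # Alamouti COD

G = matmul(conj_transpose(O), O)              # should equal (|z1|^2+|z2|^2) I
scale = abs(z1) ** 2 + abs(z2) ** 2
for r in range(2):
    for c in range(2):
        expected = scale if r == c else 0
        assert abs(G[r][c] - expected) < 1e-9
print("O^H O = (|z1|^2 + |z2|^2) I verified, scale =", scale)
```

The nonexistence result says that no such orthogonality-preserving column can be appended to the $[\binom{2m}{m-1}, 2m-1, \binom{2m-1}{m-1}]$ design when $m$ is odd.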
|
1109.2944
|
Real Interference Alignment and Degrees of Freedom Region of Wireless X
Networks
|
cs.IT math.IT
|
We consider a single-hop wireless X network with $K$ transmitters and $J$
receivers, all with a single antenna. Each transmitter conveys an independent
message to each receiver. The channel is assumed to have constant coefficients.
We develop an interference alignment scheme for this setup and derive several
achievable degrees-of-freedom (DoF) regions. We show that in some cases, the
derived region meets a previous outer bound and is hence the DoF region. For our
achievability schemes, we divide each message into streams and use real
interference alignment on the streams. Several previous results on the DoF
region and total DoF for various special cases can be recovered from our
result.
|
1109.2950
|
The Physics of Communicability in Complex Networks
|
physics.soc-ph cond-mat.stat-mech cs.SI math-ph math.MP
|
A fundamental problem in the study of complex networks is to provide
quantitative measures of correlation and information flow between different
parts of a system. To this end, several notions of communicability have been
introduced and applied to a wide variety of real-world networks in recent
years. Several such communicability functions are reviewed in this paper. It is
emphasized that communication and correlation in networks can take place
through many more routes than the shortest paths, a fact that may not have been
sufficiently appreciated in previously proposed correlation measures. In
contrast to these, the communicability measures reviewed in this paper are
defined by taking into account all possible routes between two nodes, assigning
smaller weights to longer ones. This point of view naturally leads to the
definition of communicability in terms of matrix functions, such as the
exponential, resolvent, and hyperbolic functions, in which the matrix argument
is either the adjacency matrix or the graph Laplacian associated with the
network. Considerable insight on communicability can be gained by modeling a
network as a system of oscillators and deriving physical interpretations, both
classical and quantum-mechanical, of various communicability functions.
Applications of communicability measures to the analysis of complex systems are
illustrated on a variety of biological, physical and social networks. The last
part of the paper is devoted to a review of the notion of locality in complex
networks and to computational aspects that by exploiting sparsity can greatly
reduce the computational efforts for the calculation of communicability
functions for large networks.
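As a minimal numerical sketch (assuming the exponential communicability $G_{pq} = (e^A)_{pq}$ and a 3-node path graph as the example), the matrix exponential can be approximated by its Taylor series, which makes the walk-counting interpretation explicit: walks of length k contribute with weight 1/k!.

```python
# Communicability as a matrix function: G(p, q) = (e^A)_pq sums walks of every
# length k between p and q with weight 1/k!. Truncated Taylor series on the
# adjacency matrix; the 3-node path graph is just an example.
from math import factorial

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def communicability(adj, terms=30):
    """Truncated series e^A = sum_k A^k / k! applied to the adjacency matrix."""
    n = len(adj)
    G = [[float(i == j) for j in range(n)] for i in range(n)]  # k = 0 term
    P = [row[:] for row in G]                                  # P holds A^k
    for k in range(1, terms):
        P = matmul(P, adj)
        for i in range(n):
            for j in range(n):
                G[i][j] += P[i][j] / factorial(k)
    return G

path = [[0, 1, 0],
        [1, 0, 1],
        [0, 1, 0]]   # path graph 1-2-3
G = communicability(path)
# The end nodes have no direct edge, yet G[0][2] > 0: they communicate
# through all walks via node 2 (analytically, cosh(sqrt(2))/2 - 1/2).
print(round(G[0][2], 4))
```

This is exactly the "all routes, shorter ones weighted more" viewpoint of the review; for large networks one would use the sparsity-exploiting methods it discusses rather than a dense series.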
|
1109.2954
|
A New Framework for Network Disruption
|
cs.SI math.CO math.OC physics.soc-ph
|
Traditional network disruption approaches focus on disconnecting or
lengthening paths in the network. We present a new framework for network
disruption that attempts to reroute flow through critical vertices via vertex
deletion, under the assumption that this will render those vertices vulnerable
to future attacks. We define the load on a critical vertex to be the number of
paths in the network that must flow through the vertex. We present
graph-theoretic and computational techniques to maximize this load, first by
removing a single vertex from the network, and second by removing a subset
of vertices.
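One concrete reading of this load (an assumption for illustration; the paper's formal definition may differ) counts, for a critical vertex v, the source-target pairs whose shortest routes must pass through v, and shows that deleting another vertex reroutes flow and raises that count:

```python
# Sketch: load(v) = number of source-target pairs whose shortest route is
# forced through v (distance grows, or the pair disconnects, when v is
# removed). Deleting another vertex can reroute flow and raise this load.
from collections import deque

def bfs_dist(adj, src, removed=frozenset()):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist and w not in removed:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def load(adj, v, removed=frozenset()):
    """Count pairs (s, t) whose shortest s-t route must pass through v."""
    removed = set(removed)
    nodes = [n for n in adj if n != v and n not in removed]
    count = 0
    for s in nodes:
        d_with = bfs_dist(adj, s, removed)
        d_without = bfs_dist(adj, s, removed | {v})
        for t in nodes:
            if t <= s or t not in d_with:
                continue
            if t not in d_without or d_without[t] > d_with[t]:
                count += 1  # without v, t is farther or unreachable
    return count

# Two routes between a and d: the short one through v, or the detour x-y.
adj = {"a": {"v", "x"}, "v": {"a", "d"}, "d": {"v", "y"},
       "x": {"a", "y"}, "y": {"x", "d"}}
print(load(adj, "v"))                    # -> 1 (baseline load on v)
print(load(adj, "v", removed={"x"}))     # -> 2 (deleting x forces flow via v)
```

Maximizing this quantity over candidate deletions is the single-vertex version of the optimization described above.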
|
1109.2957
|
Downlink Performance and Capacity of Distributed Antenna Systems
|
cs.IT math.IT
|
This paper investigates the performance of the downlink channel in
distributed antenna systems. We first establish the ergodic capacity of
distributed antennas, under different channel side information (CSI)
assumptions. We consider a generalized distributed antenna system with $N$
distributed ports, each of which is equipped with an array of $L$ transmit
antennas and constrained by a fixed transmit power. For this system we
calculate the downlink capacity to a single antenna receiver, under different
assumptions about the availability of the channel states at the transmitter.
Having established this information theoretic analysis of the ergodic capacity
of distributed antenna systems, this paper also investigates the effect of
antenna placement on the performance of such systems. In particular, we
investigate the optimal placement of the transmit antennas in distributed
antenna systems. We present a fairly general framework for this optimization
with no constraint on the location of the antennas. Based on stochastic
approximation theory, we adopt a formulation that is suitable for node
placement optimization in various wireless network scenarios. We show that
optimal placement of antennas inside the coverage region can significantly
improve the power efficiency of wireless networks.
|
1109.2963
|
Unveiling the Relationship Between Structure and Dynamics in Complex
Networks
|
physics.data-an cs.SI nlin.CD physics.soc-ph stat.ME
|
Over the last years, a great deal of attention has been focused on complex
networked systems, characterized by intricate structure and dynamics. The
latter has often been represented in terms of overall statistics (e.g., averages
and standard deviations) of the time signals. While such approaches have led to
many insights, they have failed to take into account that signals at different
parts of the system can undergo distinct evolutions, which cannot be properly
represented in terms of average values. A novel framework for identifying the
principal aspects of the dynamics and how it is influenced by the network
structure is proposed in this work. The potential of this approach is
illustrated with respect to three important models (Integrate-and-Fire, SIS and
Kuramoto), allowing the identification of highly structured dynamics, in the
sense that different groups of nodes not only presented specific dynamics but
also felt the structure of the network in different ways.
|
1109.2964
|
Performance of Multi-Antenna MMSE Receivers in Non-homogeneous Poisson
Networks
|
cs.IT math.IT
|
A technique to compute the Cumulative Distribution Function (CDF) of the
Signal-to-Interference-plus-Noise-Ratio (SINR) for a wireless link with a
multi-antenna, Linear, Minimum-Mean-Square-Error (MMSE) receiver in the
presence of interferers distributed according to a non-homogeneous Poisson point
process on the plane, and independent Rayleigh fading between antennas is
presented. This technique is used to compute the CDF of the SINR for several
different models of intensity functions, in particular, power-law intensity
functions, circular-symmetric Gaussian intensity functions and intensity
functions described by a polynomial in a bounded domain. Additionally it is
shown that if the number of receiver antennas is scaled linearly with the
intensity function, the SINR converges in probability to a limit determined by
the "shape" of the underlying intensity function. This work generalizes known
results for homogeneous Poisson networks to non-homogeneous Poisson networks.
|
1109.2984
|
A Statistically Modelling Method for Performance Limits in Sensor
Localization
|
cs.SY math.OC
|
In this paper, we study performance limits of sensor localization from a
novel perspective. Specifically, we consider the Cramer-Rao Lower Bound (CRLB)
in single-hop sensor localization using measurements from received signal
strength (RSS), time of arrival (TOA) and bearing, respectively, but
differently from the existing work, we statistically analyze the trace of the
associated CRLB matrix (i.e. as a scalar metric for performance limits of
sensor localization) by assuming anchor locations are random. By the Central
Limit Theorems for $U$-statistics, we show that as the number of the anchors
increases, this scalar metric is asymptotically normal in the RSS/bearing case,
and converges to a random variable which is an affine transformation of a
chi-square random variable of degree 2 in the TOA case. Moreover, we provide
formulas quantitatively describing the relationship among the mean and standard
deviation of the scalar metric, the number of the anchors, the parameters of
communication channels, the noise statistics in measurements and the spatial
distribution of the anchors. These formulas, though asymptotic in the number of
the anchors, in many cases turn out to be remarkably accurate in predicting
performance limits, even if the number is small. Simulations are carried out to
confirm our results.
|
1109.2993
|
A Delay-Constrained General Achievable Rate and Certain Capacity Results
for UWB Relay Channel
|
cs.IT math.IT
|
In this paper, we derive UWB versions of (i) the general best achievable rate
for the relay channel with the decode-and-forward strategy and (ii) the max-flow
min-cut upper bound, such that the UWB relay channel can be studied considering
the obtained lower and upper bounds. Then, we show that by appropriately choosing
the noise correlation coefficients, our new upper bound coincides with the
lower bound in special cases of degraded and reversely degraded UWB relay
channels. Finally, some numerical results are illustrated.
|
1109.3041
|
Asymptotic analysis of the stochastic block model for modular networks
and its algorithmic applications
|
cond-mat.stat-mech cond-mat.dis-nn cs.SI physics.soc-ph
|
In this paper we extend our previous work on the stochastic block model, a
commonly used generative model for social and biological networks, and the
problem of inferring functional groups or communities from the topology of the
network. We use the cavity method of statistical physics to obtain an
asymptotically exact analysis of the phase diagram. We describe in detail
properties of the detectability/undetectability phase transition and the
easy/hard phase transition for the community detection problem. Our analysis
translates naturally into a belief propagation algorithm for inferring the
group memberships of the nodes in an optimal way, i.e., that maximizes the
overlap with the underlying group memberships, and learning the underlying
parameters of the block model. Finally, we apply the algorithm to two examples
of real-world networks and discuss its performance.
|
1109.3069
|
Directional Variance Adjustment: improving covariance estimates for
high-dimensional portfolio optimization
|
q-fin.PM cs.CE q-fin.ST
|
Robust and reliable covariance estimates play a decisive role in financial
and many other applications. An important class of estimators is based on
Factor models. Here, we show by extensive Monte Carlo simulations that
covariance matrices derived from the statistical Factor Analysis model exhibit
a systematic error, which is similar to the well-known systematic error of the
spectrum of the sample covariance matrix. Moreover, we introduce the
Directional Variance Adjustment (DVA) algorithm, which diminishes the
systematic error. In a thorough empirical study of the US, European, and Hong
Kong markets we show that our proposed method leads to improved portfolio
allocation.
|
1109.3070
|
Sufficient conditions for the genericity of feedback stabilisability of
switching systems via Lie-algebraic solvability
|
cs.SY math.OC
|
This paper addresses the stabilisation of discrete-time switching linear
systems (DTSSs) with control inputs under arbitrary switching, based on the
existence of a common quadratic Lyapunov function (CQLF). The authors have
begun a line of work dealing with control design based on the Lie-algebraic
solvability property. The present paper expands on earlier work by deriving
sufficient conditions under which the closed-loop system can be caused to
satisfy the Lie-algebraic solvability property generically, i.e. for almost
every set of system parameters, furthermore admitting straightforward and
efficient numerical implementation.
|
1109.3071
|
Oscillations of simple networks
|
physics.soc-ph cond-mat.dis-nn cs.SI nlin.CD
|
To describe the flow of a miscible quantity on a network, we introduce the
graph wave equation where the standard continuous Laplacian is replaced by the
graph Laplacian. This is a natural description of an array of inductances and
capacities, of fluid flow in a network of ducts and of a system of masses and
springs. The structure of the graph strongly influences the dynamics, which is
naturally described using the basis of the eigenvectors. In particular, we show
that if two outer nodes are connected to a common third node with the same
coupling, then this coupling is an eigenvalue of the Laplacian. Assuming the
graph is forced and damped at specific nodes, we derive the amplitude
equations. These are analyzed for two simple non-trivial networks: a tree and a
graph with a cycle. Forcing the network at a resonant frequency reveals that
damping can be ineffective if applied to the wrong node, leading to a
disastrous resonance and destruction of the network. These results could be
useful for complex physical networks and engineering networks like power grids.
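The eigenvalue property stated above is easy to verify numerically. The following sketch (illustrative only, not from the paper) builds the weighted graph Laplacian for two outer nodes tied to a common third node with equal coupling c, and the coupling indeed appears among the eigenvalues:

```python
import numpy as np

# Two outer nodes (0 and 1) attached to a common node 2 with equal coupling c.
c = 2.5
W = np.array([[0.0, 0.0, c],
              [0.0, 0.0, c],
              [c,   c,   0.0]])
L = np.diag(W.sum(axis=1)) - W   # weighted graph Laplacian

eigvals = np.linalg.eigvalsh(L)  # here: 0, c, 3c
# The eigenvector (1, -1, 0), antisymmetric in the two outer nodes,
# has eigenvalue exactly c.
v = np.array([1.0, -1.0, 0.0])
```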
|
1109.3094
|
On the use of reference points for the biobjective Inventory Routing
Problem
|
cs.AI
|
The article presents a study on the biobjective inventory routing problem.
Contrary to most previous research, the problem is treated as a true
multi-objective optimization problem, with the goal of identifying
Pareto-optimal solutions. Due to the hardness of the problem at hand, a
reference point based optimization approach is presented and implemented into
an optimization and decision support system, which allows for the computation
of a true subset of the optimal outcomes. Experimental investigations involving
local search metaheuristics are conducted on benchmark data, and numerical
results are reported and analyzed.
|
1109.3095
|
Convolutional Network Coding Based on Matrix Power Series Representation
|
cs.IT math.IT
|
In this paper, convolutional network coding is formulated by means of matrix
power series representation of the local encoding kernel (LEK) matrices and
global encoding kernel (GEK) matrices to establish its theoretical fundamentals
for practical implementations. From the encoding perspective, the GEKs of a
convolutional network code (CNC) are shown to be uniquely determined by its LEK
matrix $K(z)$ if $K_0$, the constant coefficient matrix of $K(z)$, is
nilpotent. This will simplify the CNC design because a nilpotent $K_0$ suffices
to guarantee a unique set of GEKs. Besides, the relation between coding
topology and $K(z)$ is also discussed. From the decoding perspective, the main
theme is to justify that the first $L+1$ terms of the GEK matrix $F(z)$ at a
sink $r$ suffice to check whether the code is decodable at $r$ with delay $L$
and to start decoding if so. The concomitant decoding scheme avoids dealing
with $F(z)$ as a whole, which may contain infinitely many terms, and hence
reduces the complexity of the decodability check. It potentially makes CNCs
applicable to wireless networks.
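The nilpotency condition on $K_0$ can be checked directly by matrix powering. Note the paper works over finite fields, while this illustrative sketch uses real matrices; by Cayley-Hamilton, powering a d x d matrix up to the d-th power suffices:

```python
import numpy as np

def is_nilpotent(K0, tol=1e-10):
    """Check K0**d == 0 for d = dim(K0); by the Cayley-Hamilton theorem
    this suffices for nilpotency of a d x d matrix."""
    d = K0.shape[0]
    return bool(np.allclose(np.linalg.matrix_power(K0, d), 0.0, atol=tol))

# Strictly upper-triangular matrices are the canonical nilpotent example.
K0 = np.array([[0.0, 1.0, 3.0],
               [0.0, 0.0, 2.0],
               [0.0, 0.0, 0.0]])
```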
|
1109.3102
|
Approximation of L\"owdin Orthogonalization to a Spectrally Efficient
Orthogonal Overlapping PPM Design for UWB Impulse Radio
|
cs.IT math.IT
|
In this paper we consider the design of spectrally efficient time-limited
pulses for ultrawideband (UWB) systems using an overlapping pulse position
modulation scheme. For this we investigate an orthogonalization method, which
was developed in 1950 by Per-Olov L\"owdin. Our objective is to obtain a set of
N orthogonal (L\"owdin) pulses, which remain time-limited and spectrally
efficient for UWB systems, from a set of N equidistant translates of a
time-limited, spectrally optimal UWB pulse. We derive an approximate
L\"owdin orthogonalization (ALO) by using circulant approximations for the Gram
matrix to obtain a practical filter implementation. We show that the centered
ALO and L\"owdin pulses converge pointwise to the same Nyquist pulse as N tends
to infinity. The set of translates of the Nyquist pulse forms an orthonormal
basis for the shift-invariant space generated by the initial spectrally optimal
pulse. The ALO transform provides a closed-form approximation of the L\"owdin
transform, which can be implemented in an analog fashion without the need of
analog to digital conversions. Furthermore, we investigate the interplay
between the optimization and the orthogonalization procedure by using methods
from the theory of shift-invariant spaces. Finally we develop a connection
between our results and wavelet and frame theory.
|
1109.3119
|
Persistent Data Layout and Infrastructure for Efficient Selective
Retrieval of Event Data in ATLAS
|
physics.data-an cs.CE cs.DB hep-ex
|
The ATLAS detector at CERN has completed its first full year of recording
collisions at 7 TeV, resulting in billions of events and petabytes of data. At
these scales, physicists must have the capability to read only the data of
interest to their analyses, with the importance of efficient selective access
increasing as data taking continues. ATLAS has developed a sophisticated
event-level metadata infrastructure and supporting I/O framework allowing event
selections by explicit specification, by back navigation, and by selection
queries to a TAG database via an integrated web interface. These systems and
their performance have been reported on elsewhere. The ultimate success of such
a system, however, depends significantly upon the efficiency of selective event
retrieval. Supporting such retrieval can be challenging, as ATLAS stores its
event data in column-wise orientation using ROOT trees for a number of reasons,
including compression considerations, histogramming use cases, and more. For
2011 data, ATLAS will utilize new capabilities in ROOT to tune the persistent
storage layout of event data, and to significantly speed up selective event
reading. The new persistent layout strategy and its implications for I/O
performance are described in this paper.
|
1109.3125
|
The mathematical law of evolutionary information dynamics and an
observer's evolution regularities
|
cs.IT math.IT math.OC nlin.AO
|
An interactive stochastics, evaluated by an entropy functional (EF) of a
random field and an informational process path functional (IPF), allows us to
model evolutionary information processes and to reveal regularities of
evolution dynamics. The conventional Shannon information measure evaluates a
sequence of the process' static events for each information state and does not
reveal hidden dynamic connections between these events. The paper formulates
the mathematical forms of the information regularities, based on a minimax
variation principle (VP) for IPF, applied to the evolution's both random
microprocesses and dynamic macroprocesses. The paper shows that the VP single
form of the mathematical law leads to the following evolutionary regularities:
(i) creation of order from stochastics through the evolutionary macrodynamics,
described by a gradient of dynamic potential, the evolutionary speed, and the
evolutionary conditions of fitness and diversity; (ii) the evolutionary
hierarchy with growing information values and potential adaptation; (iii) the
adaptive self-controls and a self-organization with a mechanism of copying to a
genetic code. This law and the regularities determine unified functional informational
mechanisms of evolution dynamics. By introducing both objective and subjective
information observers, we consider the observers' information acquisition,
interactive cognitive evolution dynamics, and neurodynamics, based on the
EF-IPF approach. An evolution improvement consists of the subjective observer's
ability to attract and encode information whose value progressively increases.
The specific properties of a common information structure of evolution
processes are identifiable for each particular object-organism by collecting a
behavioral data from these organisms.
|
1109.3126
|
A Non-Iterative Solution to the Four-Point Three-Views Pose Problem in
Case of Collinear Cameras
|
cs.CV
|
We give a non-iterative solution to a particular case of the four-point
three-views pose problem when three camera centers are collinear. Using the
well-known Cayley representation of orthogonal matrices, we derive from the
epipolar constraints a system of three polynomial equations in three variables.
The eliminant of that system is a multiple of a 36th degree univariate
polynomial. The true (unique) solution to the problem can be expressed in terms
of one of the real roots of that polynomial. Experiments on synthetic data
confirm that our method is robust even in the case of planar configurations.
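The Cayley representation mentioned above parametrizes rotations by skew-symmetric matrices; a minimal numerical sketch of the transform (not the authors' solver) is:

```python
import numpy as np

def cayley(A):
    """Cayley transform: for skew-symmetric A, R = (I - A)^(-1) (I + A)
    is a rotation (orthogonal with determinant +1)."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - A, I + A)

A = np.array([[ 0.0, -0.3,  0.5],
              [ 0.3,  0.0, -0.2],
              [-0.5,  0.2,  0.0]])
R = cayley(A)
```

The transform covers all rotations that do not have -1 as an eigenvalue, which is why it yields polynomial (rather than trigonometric) epipolar constraints.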
|
1109.3138
|
Folksodriven Structure Network
|
cs.IR
|
Nowadays folksonomy is used as a system derived from user-generated
electronic tags or keywords that annotate and describe online content. But it
is not a classification system like an ontology. To consider it a
classification system, it would be necessary for all users to share a
representation of contexts. This paper proposes the use of folksonomies and
network theory to devise a new concept: a "Folksodriven Structure Network" to
represent folksonomies. We propose and analyze the network structure of
Folksodriven tags, intended as folksonomy tag suggestions for the user, on a
dataset built from chosen websites. We observe that the Folksodriven Network
has relatively low path lengths when checked against classic network measures
such as the clustering coefficient. Experimental results show that it can
facilitate serendipitous discovery of content among users. Neat examples and
clear formulas show how a "Folksodriven Structure Network" can be used to
tackle ontology-mapping challenges.
|
1109.3145
|
Sample-Based Planning with Volumes in Configuration Space
|
cs.RO
|
A simple sample-based planning method is presented which approximates
connected regions of free space with volumes in Configuration space instead of
points. The algorithm produces very sparse trees compared to point-based
planning approaches, yet it maintains probabilistic completeness guarantees.
The planner is shown to improve performance on a variety of planning problems,
by focusing sampling on more challenging regions of a planning problem,
including collision boundary areas such as narrow passages.
|
1109.3151
|
Regulation, Volatility and Efficiency in Continuous-Time Markets
|
cs.SY math.OC
|
We analyze the efficiency of markets with friction, particularly power
markets. We model the market as a dynamic system with $(d_t;\,t\geq 0)$ the
demand process and $(s_t;\,t\geq 0)$ the supply process. Using stochastic
differential equations to model the dynamics with friction, we investigate the
efficiency of the market under an integrated expected undiscounted cost
function solving the optimal control problem. Then, we extend the setup to a
game theoretic model where multiple suppliers and consumers interact
continuously by setting prices in a dynamic market with friction. We
investigate the equilibrium, and analyze the efficiency of the market under an
integrated expected social cost function. We provide an intriguing
efficiency-volatility no-free-lunch trade-off theorem.
|
1109.3160
|
Inference and Characterization of Multi-Attribute Networks with
Application to Computational Biology
|
stat.AP cs.SI physics.soc-ph q-bio.MN
|
Our work is motivated by and illustrated with application of association
networks in computational biology, specifically in the context of gene/protein
regulatory networks. Association networks represent systems of interacting
elements, where a link between two different elements indicates a sufficient
level of similarity between element attributes. While in reality relational
ties between elements can be expected to be based on similarity across multiple
attributes, the vast majority of work to date on association networks involves
ties defined with respect to only a single attribute. We propose an approach
for the inference of multi-attribute association networks from measurements on
continuous attribute variables, using canonical correlation and a
hypothesis-testing strategy. Within this context, we then study the impact of
partial information on multi-attribute network inference and characterization,
when only a subset of attributes is available. We consider in detail the case
of two attributes, wherein we examine through a combination of analytical and
numerical techniques the implications of the choice and number of node
attributes on the ability to detect network links and, more generally, to
estimate higher-level network summary statistics, such as node degree,
clustering coefficients, and measures of centrality. Illustration and
applications throughout the paper are developed using gene and protein
expression measurements on human cancer cell lines from the NCI-60 database.
|
1109.3195
|
Efficient Quantum Polar Coding
|
quant-ph cs.IT math.IT
|
Polar coding, introduced in 2008 by Arikan, is the first (very) efficiently
encodable and decodable coding scheme whose information transmission rate
provably achieves the Shannon bound for classical discrete memoryless channels
in the asymptotic limit of large block sizes. Here we study the use of polar
codes for the transmission of quantum information. Focusing on the case of
qubit Pauli channels and qubit erasure channels, we use classical polar codes
to construct a coding scheme which, using some pre-shared entanglement,
asymptotically achieves a net transmission rate equal to the coherent
information using efficient encoding and decoding operations and code
construction. Furthermore, for channels with sufficiently low noise level, we
demonstrate that the rate of pre-shared entanglement required is zero.
|
1109.3227
|
Multiple Beamforming with Perfect Coding
|
cs.IT math.IT
|
Perfect Space-Time Block Codes (PSTBCs) achieve full diversity, full rate,
nonvanishing constant minimum determinant, uniform average transmitted energy
per antenna, and good shaping. However, the high decoding complexity is a
critical issue for practice. When the Channel State Information (CSI) is
available at both the transmitter and the receiver, Singular Value
Decomposition (SVD) is commonly applied for a Multiple-Input Multiple-Output
(MIMO) system to enhance the throughput or the performance. In this paper, two
novel techniques, Perfect Coded Multiple Beamforming (PCMB) and Bit-Interleaved
Coded Multiple Beamforming with Perfect Coding (BICMB-PC), are proposed,
employing both PSTBCs and SVD with and without channel coding, respectively.
With CSI at the transmitter (CSIT), the decoding complexity of PCMB is
substantially reduced compared to a MIMO system employing PSTBC, providing a
new prospect of CSIT. In particular, because of the special property of the
generation matrices, PCMB provides much lower decoding complexity than the
state-of-the-art SVD-based uncoded technique in dimensions 2 and 4. Similarly,
the decoding complexity of BICMB-PC is much lower than the state-of-the-art
SVD-based coded technique in these two dimensions, and the complexity gain is
greater than the uncoded case. Moreover, these aforementioned complexity
reductions are achieved with only negligible or modest loss in performance.
|
1109.3240
|
Active Learning for Node Classification in Assortative and
Disassortative Networks
|
cs.IT cs.LG cs.SI math.IT physics.soc-ph stat.ML
|
In many real-world networks, nodes have class labels, attributes, or
variables that affect the network's topology. If the topology of the network is
known but the labels of the nodes are hidden, we would like to select a small
subset of nodes such that, if we knew their labels, we could accurately predict
the labels of all the other nodes. We develop an active learning algorithm for
this problem which uses information-theoretic techniques to choose which nodes
to explore. We test our algorithm on networks from three different domains: a
social network, a network of English words that appear adjacently in a novel,
and a marine food web. Our algorithm makes no initial assumptions about how the
groups connect, and performs well even when faced with quite general types of
network structure. In particular, we do not assume that nodes of the same class
are more likely to be connected to each other---only that they connect to the
rest of the network in similar ways.
|
1109.3248
|
Reconstruction of sequential data with density models
|
cs.LG stat.ML
|
We introduce the problem of reconstructing a sequence of multidimensional
real vectors where some of the data are missing. This problem contains
regression and mapping inversion as particular cases where the pattern of
missing data is independent of the sequence index. The problem is hard because
it involves possibly multivalued mappings at each vector in the sequence, where
the missing variables can take more than one value given the present variables;
and the set of missing variables can vary from one vector to the next. To solve
this problem, we propose an algorithm based on two redundancy assumptions:
vector redundancy (the data live in a low-dimensional manifold), so that the
present variables constrain the missing ones; and sequence redundancy (e.g.
continuity), so that consecutive vectors constrain each other. We capture the
low-dimensional nature of the data in a probabilistic way with a joint density
model, here the generative topographic mapping, which results in a Gaussian
mixture. Candidate reconstructions at each vector are obtained as all the modes
of the conditional distribution of missing variables given present variables.
The reconstructed sequence is obtained by minimising a global constraint, here
the sequence length, by dynamic programming. We present experimental results
for a toy problem and for inverse kinematics of a robot arm.
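The final dynamic-programming step, choosing one candidate mode per time step so that the total sequence length is minimal, can be sketched as follows (a simplified stand-in for the paper's method; the function name and data layout are assumptions):

```python
import numpy as np

def shortest_sequence(candidates):
    """Pick one candidate reconstruction per time step so that the total
    Euclidean length of the resulting sequence is minimal (Viterbi-style DP).

    candidates[t] is an array of shape (n_t, d): the modes of the
    conditional density of the missing variables at step t.
    Returns the list of chosen mode indices, one per step.
    """
    T = len(candidates)
    cost = np.zeros(len(candidates[0]))   # best cost ending at each mode
    back = []                             # backpointers per transition
    for t in range(1, T):
        # Pairwise distances between modes at step t-1 and modes at step t.
        d = np.linalg.norm(candidates[t - 1][:, None, :] -
                           candidates[t][None, :, :], axis=2)
        total = cost[:, None] + d
        back.append(total.argmin(axis=0))
        cost = total.min(axis=0)
    # Trace the optimal path backwards from the cheapest final mode.
    path = [int(cost.argmin())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]
```

With candidate modes [0, 5] -> [1, 6] -> [2], the shortest path stays on the low branch, discarding the spurious 5 -> 6 reconstruction.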
|
1109.3272
|
On the Performance of Cooperative Spectrum Sensing under Quantization
|
cs.IT math.IT
|
In cognitive radio, the cooperative spectrum sensing (CSS) plays a key role
in determining the performance of secondary networks. However, no feasible
approaches have been available to analytically calculate the performance of CSS
with regard to multi-level quantization. In this paper, we not only show how
quantization impacts the cooperative false alarm probability and the
cooperative detection probability, but also formulate them in two closed-form
expressions. These expressions make the calculation of the cooperative false
alarm and detection probabilities efficiently tractable, and provide a feasible
approach for optimizing sensing performance. Additionally, to facilitate this
calculation, we derive a Normal approximation for evaluating the sensing
performance conveniently. Furthermore, two optimization methods are proposed to
achieve high sensing performance under quantization.
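The paper's closed-form expressions are not reproduced here, but the general idea of approximating a cooperative detection probability by a Normal law can be sketched for a simple k-out-of-n fusion rule (the fusion rule and function names are assumptions, not the paper's model):

```python
import math

def coop_prob_exact(n, p, k):
    """P(at least k of n independent sensors vote 'detect'): exact binomial tail."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def coop_prob_normal(n, p, k):
    """De Moivre-Laplace Normal approximation with continuity correction."""
    mu = n * p
    sigma = math.sqrt(n * p * (1 - p))
    z = (k - 0.5 - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))   # Gaussian tail Q(z)
```

For moderate n the approximation is already close to the exact tail, which is what makes such approximations convenient for performance evaluation.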
|
1109.3311
|
Escort entropies and divergences and related canonical distribution
|
math-ph cond-mat.stat-mech cs.IT math.IT math.MP
|
We discuss two families of two-parameter entropies and divergences, derived
from the standard R\'enyi and Tsallis entropies and divergences. These
divergences and entropies are found as divergences or entropies of escort
distributions. Exploiting the nonnegativity of the divergences, we derive the
expression of the canonical distribution associated with the new entropies and
an observable given as an escort-mean value. We show that this canonical
distribution extends, and smoothly connects, the results obtained in
nonextensive thermodynamics for the standard and generalized mean value
constraints.
|
1109.3313
|
Neighborhood Selection in Variable Neighborhood Search
|
cs.AI
|
Variable neighborhood search (VNS) is a metaheuristic for solving
optimization problems based on a simple principle: systematic changes of
neighborhoods within the search, both in the descent to local minima and in the
escape from the valleys which contain them. Designing these neighborhoods and
applying them in a meaningful fashion is not an easy task. Moreover, an
appropriate order in which they are applied must be determined. In this paper
we investigate this issue: given an optimization problem that is to be solved
by applying the VNS scheme, how many and which types of neighborhoods should be
investigated, and what are appropriate criteria for selecting among them? More
specifically, does it pay to "look ahead" (see, e.g., the context of VNS and
GRASP) when attempting to switch from one neighborhood to another?
|
1109.3317
|
Design of an Optical Character Recognition System for Camera-based
Handheld Devices
|
cs.CV
|
This paper presents a complete Optical Character Recognition (OCR) system for
camera captured image/graphics embedded textual documents for handheld devices.
At first, text regions are extracted and skew corrected. Then, these regions
are binarized and segmented into lines and characters. Characters are passed
into the recognition module. Experimenting with a set of 100 business card
images, captured by cell phone camera, we have achieved a maximum recognition
accuracy of 92.74%. Compared to Tesseract, a powerful open-source
desktop-based OCR engine, the achieved recognition accuracy is competitive.
Moreover, the developed technique is computationally efficient and consumes
little memory, making it applicable to handheld devices.
|
1109.3318
|
Distributed User Profiling via Spectral Methods
|
cs.LG
|
User profiling is a useful primitive for constructing personalised services,
such as content recommendation. In the present paper we investigate the
feasibility of user profiling in a distributed setting, with no central
authority and only local information exchanges between users. We compute a
profile vector for each user (i.e., a low-dimensional vector that characterises
her taste) via spectral transformation of observed user-produced ratings for
items. Our two main contributions follow: i) We consider a low-rank
probabilistic model of user taste. More specifically, we consider that users
and items are partitioned into a constant number of classes, such that users and
items within the same class are statistically identical. We prove that without
prior knowledge of the compositions of the classes, based solely on few random
observed ratings (namely $O(N\log N)$ such ratings for $N$ users), we can
predict user preference with high probability for unrated items by running a
local vote among users with similar profile vectors. In addition, we provide
empirical evaluations characterising the way in which spectral profiling
performance depends on the dimension of the profile space. Such evaluations are
performed on a data set of real user ratings provided by Netflix. ii) We
develop distributed algorithms which provably achieve an embedding of users
into a low-dimensional space, based on spectral transformation. These involve
simple message passing among users, and provably converge to the desired
embedding. Our method essentially relies on a novel combination of gossiping
and the algorithm proposed by Oja and Karhunen.
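A centralized stand-in for the spectral profiling step can be sketched as below; the paper's actual contribution is a distributed gossip variant based on Oja and Karhunen's algorithm, which is not reproduced here, and the toy data and function names are illustrative:

```python
import numpy as np

def profile_vectors(R, k):
    """Embed users into a k-dimensional profile space via the top-k
    singular vectors of the rating matrix R (users x items; zeros
    stand in for unrated entries)."""
    U, s, _ = np.linalg.svd(R, full_matrices=False)
    return U[:, :k] * s[:k]          # one k-dim profile row per user

# Toy ratings: users 0-1 and users 2-3 form two distinct taste classes.
R = np.array([[5., 4., 0., 0.],
              [4., 5., 0., 0.],
              [0., 0., 3., 2.],
              [0., 1., 2., 3.]])
P = profile_vectors(R, k=2)

def cos(u, v):
    """Cosine similarity between two profile vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
```

Users of the same class end up with nearly parallel profile vectors, which is what makes the local vote among similar profiles work.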
|
1109.3320
|
Combining Convex-Concave Decompositions and Linearization Approaches for
solving BMIs, with application to Static Output Feedback
|
math.OC cs.SY
|
A novel optimization method is proposed to minimize a convex function subject
to bilinear matrix inequality (BMI) constraints. The key idea is to decompose
the bilinear mapping as a difference between two positive semidefinite convex
mappings. At each iteration of the algorithm the concave part is linearized,
leading to a convex subproblem. Applications to various output feedback
controller synthesis problems are presented. In these applications the
subproblem in each iteration step can be turned into a convex optimization
problem with linear matrix inequality (LMI) constraints. The performance of the
algorithm has been benchmarked on data from the COMPleib library.
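The linearize-the-concave-part iteration is an instance of the convex-concave procedure; a scalar toy sketch follows (a crude grid search replaces the LMI solver used in the paper, and all names are illustrative):

```python
import numpy as np

def cccp_minimize(g, h, grad_h, x0, steps=60):
    """Convex-concave procedure for f(x) = g(x) - h(x), with g and h convex.

    At each iteration the concave part -h is linearized at the current
    iterate, and the convex surrogate g(x) - h(x_k) - h'(x_k)(x - x_k)
    is minimized (here by grid search, to stay dependency-free).
    """
    x = x0
    grid = np.linspace(-3.0, 3.0, 6001)
    for _ in range(steps):
        surrogate = g(grid) - (h(x) + grad_h(x) * (grid - x))
        x = float(grid[surrogate.argmin()])
    return x

# Toy DC program: f(x) = x**4 - x**2 has minima at +/- 1/sqrt(2) ~ 0.7071.
x_star = cccp_minimize(lambda x: x**4, lambda x: x**2, lambda x: 2 * x, x0=0.5)
```

Each surrogate upper-bounds f and touches it at the current iterate, so the objective decreases monotonically toward a local minimum.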
|
1109.3385
|
Source coding with escort distributions and Renyi entropy bounds
|
math-ph cond-mat.stat-mech cs.IT math.IT math.MP
|
We discuss the interest of escort distributions and R\'enyi entropy in the
context of source coding. We first recall a source coding theorem by Campbell
relating a generalized measure of length to the R\'enyi-Tsallis entropy. We
show that the associated optimal codes can be obtained using considerations on
escort-distributions. We propose a new family of measures of length involving
escort-distributions and we show that these generalized lengths are also
bounded below by the R\'enyi entropy. Furthermore, we obtain that the standard
Shannon code lengths are optimal for the new generalized length measures,
whatever the entropic index. Finally, we show that there exists in this setting
an interplay between standard and escort distributions.
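The escort distribution and R\'enyi entropy used throughout can be computed directly (illustrative code, not from the paper):

```python
import numpy as np

def escort(p, q):
    """Escort distribution of order q: P_i = p_i**q / sum_j p_j**q."""
    pq = np.asarray(p, dtype=float) ** q
    return pq / pq.sum()

def renyi_entropy(p, alpha):
    """Renyi entropy of order alpha != 1, in bits; tends to the Shannon
    entropy as alpha -> 1."""
    p = np.asarray(p, dtype=float)
    return np.log2((p ** alpha).sum()) / (1.0 - alpha)

# Dyadic source: Shannon entropy is exactly 1.75 bits.
p = np.array([0.5, 0.25, 0.125, 0.125])
```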
|
1109.3428
|
One, None and One Hundred Thousand Profiles: Re-imagining the
Pirandellian Identity Dilemma in the Era of Online Social Networks
|
cs.SI cs.CY
|
Uno, Nessuno, Centomila ("One, No One and One Hundred Thousand") is a classic
novel by Italian playwright Luigi Pirandello. Published in 1925, it recounts
the tragedy of Vitangelo Moscarda, a man who struggles to reclaim a coherent
and unitary identity for himself in the face of an inherently social and
multi-faceted world. What would Moscarda identity tragedy look like today? In
this article we transplant Moscarda's identity play from its offline setting to
the contemporary arena of social media and online social networks. With
reference to established theories on identity construction, performance, and
self-presentation, we re-imagine how Moscarda would go about defending the
integrity of his selfhood in the face of the discountenancing influences of the
online world.
|
1109.3437
|
Learning Topic Models by Belief Propagation
|
cs.LG
|
Latent Dirichlet allocation (LDA) is an important hierarchical Bayesian model
for probabilistic topic modeling, which attracts worldwide interests and
touches on many important applications in text mining, computer vision and
computational biology. This paper represents LDA as a factor graph within the
Markov random field (MRF) framework, which enables the classic loopy belief
propagation (BP) algorithm for approximate inference and parameter estimation.
Although two commonly used approximate inference methods, variational Bayes
(VB) and collapsed Gibbs sampling (GS), have achieved great success in learning
LDA, the proposed BP is competitive in both speed and accuracy, as validated by
encouraging experimental results on four large-scale document data sets.
sets. Furthermore, the BP algorithm has the potential to become a generic
learning scheme for variants of LDA-based topic models. To this end, we show
how to learn two typical variants of LDA-based topic models, namely
author-topic models (ATM) and relational topic models (RTM), using BP based on
the factor graph representation.
|
1109.3475
|
Diameter Perfect Lee Codes
|
cs.IT math.CO math.IT
|
Lee codes have been intensively studied for more than 40 years. Interest in
these codes has been triggered by the Golomb-Welch conjecture on the existence
of the perfect error-correcting Lee codes. In this paper we deal with the
existence and enumeration of diameter perfect Lee codes. As main results we
determine all $q$ for which there exists a linear diameter-4 perfect Lee code
of word length $n$ over $Z_{q},$ and prove that for each $n\geq 3$ there are
uncountably many diameter-4 perfect Lee codes of word length $n$ over $Z.$ This
is in strict contrast with perfect error-correcting Lee codes of word length
$n$ over $Z,$ as there is a unique such code for $n=3,$ and it is
conjectured that this is always the case when $2n+1$ is a prime. We produce
diameter perfect Lee codes by an algebraic construction that is based on a
group homomorphism. This will allow us to design an efficient algorithm for
their decoding. We hope that this construction will turn out to be useful far
beyond the scope of this paper.
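For reference, the Lee metric underlying these codes is the coordinatewise wrap-around distance on the cycle $Z_q$; a minimal sketch:

```python
def lee_distance(x, y, q):
    """Lee distance between two words over Z_q."""
    total = 0
    for a, b in zip(x, y):
        d = (a - b) % q
        total += min(d, q - d)   # |a - b| measured along the q-cycle
    return total

# Over Z_5, symbols 0 and 4 are adjacent on the cycle: their Lee distance is 1.
```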
|
1109.3488
|
Using MOEAs To Outperform Stock Benchmarks In The Presence of Typical
Investment Constraints
|
q-fin.PM cs.CE cs.NE stat.AP stat.CO
|
Portfolio managers are typically constrained by turnover limits, minimum and
maximum stock positions, cardinality, a target market capitalization and
sometimes the need to hew to a style (such as growth or value). In addition,
portfolio managers often use multifactor stock models to choose stocks based
upon their respective fundamental data.
We use multiobjective evolutionary algorithms (MOEAs) to satisfy the above
real-world constraints. The portfolios generated consistently outperform
typical performance benchmarks and have statistically significant asset
selection.
|
1109.3510
|
Diversity Analysis of Bit-Interleaved Coded Multiple Beamforming with
Orthogonal Frequency Division Multiplexing
|
cs.IT math.IT
|
For broadband wireless communication systems, Orthogonal Frequency Division
Multiplexing (OFDM) has been combined with Multi-Input Multi-Output (MIMO)
techniques. Bit-Interleaved Coded Multiple Beamforming (BICMB) can achieve both
spatial diversity and spatial multiplexing for flat fading MIMO channels. For
frequency selective fading MIMO channels, BICMB with OFDM (BICMB-OFDM) can be
applied to achieve both spatial diversity and multipath diversity, making it an
important technique. However, analyzing the diversity of BICMB-OFDM is a
challenging problem. In this paper, the diversity analysis of BICMB-OFDM is
carried out. First, the maximum achievable diversity is derived and a full
diversity condition RcSL <= 1 is proved, where Rc, S, and L are the code rate,
the number of parallel streams transmitted at each subcarrier, and the number of
channel taps, respectively. Then, the performance degradation due to the
correlation among subcarriers is investigated. Finally, the subcarrier grouping
technique is employed to combat the performance degradation and provide
multi-user compatibility.
|
1109.3524
|
cuIBM -- A GPU-accelerated Immersed Boundary Method
|
cs.CE
|
A projection-based immersed boundary method is dominated by sparse linear
algebra routines. Using the open-source Cusp library, we observe a speedup
(with respect to a single CPU core) which reflects the constraints of a
bandwidth-dominated problem on the GPU. Nevertheless, GPUs offer the capacity
to solve large problems on commodity hardware. This work includes validation
and a convergence study of the GPU-accelerated IBM, and various optimizations.
|
1109.3532
|
A Characterization of the Combined Effects of Overlap and Imbalance on
the SVM Classifier
|
cs.AI
|
In this paper we demonstrate that two common problems in Machine
Learning---imbalanced and overlapping data distributions---do not have
independent effects on the performance of SVM classifiers. This result is
notable since it shows that a model of either of these factors must account for
the presence of the other. Our study of the relationship between these problems
has led to the discovery of a previously unreported form of "covert"
overfitting which is resilient to commonly used empirical regularization
techniques. We demonstrate the existence of this covert phenomenon through
several methods based around the parametric regularization of trained SVMs. Our
findings in this area suggest a possible approach to quantifying overlap in
real world data sets.
|
1109.3547
|
Awareness and Movement vs. the Spread of Epidemics - Analyzing a Dynamic
Model for Urban Social/Technological Networks
|
cs.SI physics.soc-ph
|
We consider the spread of epidemics in technological and social networks. How
do people react? Do awareness and cautious behavior help? We analyze these
questions and present a dynamic model to describe the movement of individuals
and/or their mobile devices in a certain (idealistic) urban environment.
Furthermore, our model incorporates the fact that different locations can
accommodate a different number of people (possibly with their mobile devices),
who may pass the infection to each other. We obtain two main results. First, we
prove that, with respect to our model, at least a small part of the system will remain
uninfected even if no countermeasures are taken. The second result shows that
with certain counteractions in use, which only influence the individuals'
behavior, a prevalent epidemic can be avoided. The results explain possible
courses of a disease, and point out why cost-efficient countermeasures may
reduce the number of total infections from a high percentage of the population
to a negligible fraction.
|
1109.3555
|
Using In-Memory Encrypted Databases on the Cloud
|
cs.CR cs.DB cs.DC
|
Storing data in the cloud poses a number of privacy issues. A way to handle
them is supporting data replication and distribution on the cloud via a local,
centrally synchronized storage. In this paper we propose to use an in-memory
RDBMS with row-level data encryption for granting and revoking access rights to
distributed data. This type of solution is rarely adopted in conventional
RDBMSs because it requires several complex steps. We focus on the
implementation and benchmarking of a test system, which shows that our simple
yet effective solution overcomes most of the problems.
|
1109.3556
|
On the reachability and observability of path and cycle graphs
|
math.OC cs.SY
|
In this paper we investigate the reachability and observability properties of
a network system, running a Laplacian based average consensus algorithm, when
the communication graph is a path or a cycle. More specifically, we provide
necessary and sufficient conditions, based on simple algebraic rules from
number theory, to characterize all and only the nodes from which the network
system is reachable (respectively observable). Interesting immediate
corollaries of our results are: (i) a path graph is reachable (observable) from
any single node if and only if the number of nodes of the graph is a power of
two, $n=2^i$, $i\in \mathbb{N}$, and (ii) a cycle is reachable (observable) from
any pair of nodes if and only if $n$ is a prime number. For any set of control
(observation) nodes, we provide a closed-form expression for the unreachable
(unobservable) eigenvalues and for the eigenvectors of the unreachable
(unobservable) subsystem.
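The two corollaries above reduce to simple number-theoretic tests. As an illustrative sketch (the function names are mine, not from the paper), the conditions can be checked directly:

```python
def path_reachable_from_any_single_node(n: int) -> bool:
    # Corollary (i): a path graph on n nodes is reachable (observable)
    # from any single node iff n is a power of two.
    return n > 0 and (n & (n - 1)) == 0

def cycle_reachable_from_any_pair(n: int) -> bool:
    # Corollary (ii): a cycle on n nodes is reachable (observable)
    # from any pair of nodes iff n is prime.
    return n > 1 and all(n % d != 0 for d in range(2, int(n**0.5) + 1))
```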
|
1109.3563
|
Verification, Validation and Testing of Kinetic Mechanisms of Hydrogen
Combustion in Fluid Dynamic Computations
|
cs.CE physics.flu-dyn
|
A one-step, a two-step, an abridged, a skeletal and four detailed kinetic
schemes of hydrogen oxidation have been tested. A new skeletal kinetic scheme
of hydrogen oxidation has been developed. The CFD calculations were carried out
using ANSYS CFX software. Ignition delay times and speeds of flames were
derived from the computational results. The computational data obtained using
ANSYS CFX and CHEMKIN, and experimental data were compared. The precision,
reliability, and range of validity of the kinetic schemes in CFD simulations
were estimated. The impact of kinetic scheme on the results of computations was
discussed. The relationships between grid spacing, time step, accuracy, and
computational cost were analyzed.
|
1109.3569
|
Numerical approximation of Nash equilibria for a class of
non-cooperative differential games
|
math.NA cs.SY math.AP math.OC
|
In this paper we propose a numerical method to obtain an approximation of
Nash equilibria for multi-player non-cooperative games with a special
structure. We consider the infinite horizon problem in a case which leads to a
system of Hamilton-Jacobi equations. The numerical method is based on the
Dynamic Programming Principle for every equation and on a global fixed point
iteration. We present the numerical solutions of some two-player games in one
and two dimensions. The paper has an experimental nature, but some features and
properties of the approximation scheme are discussed.
|
1109.3577
|
A patchy Dynamic Programming scheme for a class of
Hamilton-Jacobi-Bellman equations
|
math.NA cs.SY math.OC
|
In this paper we present a new algorithm for the solution of
Hamilton-Jacobi-Bellman equations related to optimal control problems. The key
idea is to divide the domain of computation into subdomains which are shaped by
the optimal dynamics of the underlying control problem. This can result in a
rather complex geometrical subdivision, but it has the advantage that every
subdomain is invariant with respect to the optimal dynamics, and then the
solution can be computed independently in each subdomain. The features of this
dynamics-dependent domain decomposition can be exploited to speed up the
computation and for an efficient parallelization, since the classical
transmission conditions at the boundaries of the subdomains can be avoided. For
their properties, the subdomains are patches in the sense introduced by Ancona
and Bressan [ESAIM Control Optim. Calc. Var., 4 (1999), pp. 445-471]. Several
examples in two and three dimensions illustrate the properties of the new
method.
|
1109.3617
|
IR-based Communication and Perception in Microrobotic Swarms
|
cs.RO
|
In this work we consider development of IR-based communication and perception
mechanisms for real microrobotic systems. It is demonstrated that a specific
combination of hardware and software elements provides capabilities for
navigation, object recognition, and directional and unidirectional communication.
We discuss open issues and their resolution based on the experiments in the
swarm of microrobots "Jasmine".
|
1109.3627
|
Roulette-wheel selection via stochastic acceptance
|
cs.NE cond-mat.stat-mech cs.CC physics.comp-ph
|
Roulette-wheel selection is a frequently used method in genetic and
evolutionary algorithms or in modeling of complex networks. Existing routines
select one of N individuals using search algorithms of O(N) or O(log(N))
complexity. We present a simple roulette-wheel selection algorithm, which
typically has O(1) complexity and is based on stochastic acceptance instead of
searching. We also discuss a hybrid version, which might be suitable for highly
heterogeneous weight distributions, found, for example, in some models of
complex networks. With minor modifications, the algorithm might also be used
for sampling with fitness cut-off at a certain value or for sampling without
replacement.
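The stochastic-acceptance idea described above is short enough to sketch directly. A minimal illustration (interface and names are my own, not from the paper): draw a candidate uniformly at random and accept it with probability w_i / w_max, repeating until acceptance, which gives expected O(1) cost when weights are not too heterogeneous.

```python
import random

def roulette_select(weights, w_max=None, rng=random):
    """Return index i with probability proportional to weights[i],
    using stochastic acceptance instead of searching."""
    n = len(weights)
    if w_max is None:
        w_max = max(weights)  # upper bound on the weights
    while True:
        i = rng.randrange(n)                    # uniform candidate
        if rng.random() < weights[i] / w_max:   # accept w.p. w_i / w_max
            return i
```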
|
1109.3637
|
Connectivity-Enforcing Hough Transform for the Robust Extraction of Line
Segments
|
cs.CV
|
Global voting schemes based on the Hough transform (HT) have been widely used
to robustly detect lines in images. However, since the votes do not take line
connectivity into account, these methods do not deal well with cluttered
images. In opposition, the so-called local methods enforce connectivity but
lack robustness to deal with challenging situations that occur in many
realistic scenarios, e.g., when line segments cross or when long segments are
corrupted. In this paper, we address the critical limitations of the HT as a
line segment extractor by incorporating connectivity in the voting process.
This is done by only accounting for the contributions of edge points lying in
increasingly larger neighborhoods and whose position and directional content
agree with potential line segments. As a result, our method, which we call
STRAIGHT (Segment exTRAction by connectivity-enforcInG HT), extracts the
longest connected segments in each location of the image, thus also integrating
into the HT voting process the usually separate step of individual segment
extraction. The usage of the Hough space mapping and a corresponding
hierarchical implementation make our approach computationally feasible. We
present experiments that illustrate, with synthetic and real images, how
STRAIGHT succeeds in extracting complete segments in several situations where
current methods fail.
|
1109.3639
|
Local Correction of Juntas
|
cs.CC cs.IT math.IT
|
A Boolean function f over n variables is said to be q-locally correctable if,
given a black-box access to a function g which is "close" to an isomorphism
f_sigma of f, we can compute f_sigma(x) for any x in Z_2^n with good
probability using q queries to g.
We observe that any k-junta, that is, any function which depends only on k of
its input variables, is O(2^k)-locally correctable. Moreover, we show that
there are examples where this is essentially best possible, and locally
correcting some k-juntas requires a number of queries which is exponential in
k. These examples, however, are far from being typical, and indeed we prove
that for almost every k-junta, O(k log k) queries suffice.
|
1109.3649
|
Compressive Sensing of Analog Signals Using Discrete Prolate Spheroidal
Sequences
|
cs.IT math.IT
|
Compressive sensing (CS) has recently emerged as a framework for efficiently
capturing signals that are sparse or compressible in an appropriate basis.
While often motivated as an alternative to Nyquist-rate sampling, there remains
a gap between the discrete, finite-dimensional CS framework and the problem of
acquiring a continuous-time signal. In this paper, we attempt to bridge this
gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSS's), a
collection of functions that trace back to the seminal work by Slepian, Landau,
and Pollak on the effects of time-limiting and bandlimiting operations. DPSS's
form a highly efficient basis for sampled bandlimited functions; by modulating
and merging DPSS bases, we obtain a dictionary that offers high-quality sparse
approximations for most sampled multiband signals. This multiband modulated
DPSS dictionary can be readily incorporated into the CS framework. We provide
theoretical guarantees and practical insight into the use of this dictionary
for recovery of sampled multiband signals from compressive measurements.
|
1109.3650
|
Bi-Objective Community Detection (BOCD) in Networks using Genetic
Algorithm
|
cs.SI cs.AI cs.NE physics.soc-ph
|
A lot of research effort has been put into community detection from all
corners of academic interest such as physics, mathematics and computer science.
In this paper I propose a Bi-Objective Genetic Algorithm for community
detection which maximizes modularity and community score. The results
obtained on both benchmark and real-life data sets are then compared with
other algorithms using the modularity and NMI performance metrics. The
results show that the BOCD algorithm successfully detects community
structure in both real-life and synthetic datasets, and improves upon the
performance of previous techniques.
|