| id | title | categories | abstract |
|---|---|---|---|
1207.4180
|
A Hierarchical Graphical Model for Record Linkage
|
cs.LG cs.IR stat.ML
|
The task of matching co-referent records is known, among other names, as record
linkage. For large record-linkage problems, there is often little or no labeled
data available, but the unlabeled data shows a reasonably clear structure. For
such problems, unsupervised or semi-supervised methods are preferable to
supervised methods. In this paper, we describe a hierarchical graphical model
framework for the linkage problem in an unsupervised setting. In addition to
proposing new methods, we also cast existing unsupervised probabilistic
record-linkage methods in this framework. Some of the techniques we propose to
minimize overfitting in the above model are of interest in the general
graphical model setting. We describe a method for incorporating monotonicity
constraints in a graphical model. We also outline a bootstrapping approach of
using "single-field" classifiers to noisily label latent variables in a
hierarchical model. Experimental results show that our proposed unsupervised
methods perform quite competitively even with fully supervised record-linkage
methods.
|
1207.4252
|
The Wideband Slope of Interference Channels: The Small Bandwidth Case
|
cs.IT math.IT
|
This paper studies the low-SNR regime performance of a scalar complex K -user
interference channel with Gaussian noise. The finite bandwidth case is
considered, where the low-SNR regime is approached by letting the input power
go to zero while the bandwidth is small and fixed. We show that for all
\delta > 0 there exists a set of non-zero measure (probability) in which the
wideband slope per user satisfies Slope < 2/K + \delta. This is quite contrary to the large
bandwidth case [ShenAHM11IT], where a slope of 1 per user is achievable with
probability 1. We also develop an interference alignment scheme for the finite
bandwidth case that shows some gain.
|
1207.4254
|
MIMO Interference Alignment in Random Access Networks
|
cs.IT math.IT
|
In this paper, we analyze a multiple-input multiple-output (MIMO)
interference channel where nodes are randomly distributed on a plane as a
spatial Poisson cluster point process. Each cluster uses interference alignment
(IA) to suppress intra-cluster interference but unlike most work on IA, we do
not neglect inter-cluster interference. We also connect the accuracy of channel
state information to the distance between the nodes, i.e. the quality of CSI
degrades with increasing distance. Accounting for the training and feedback
overhead, we derive the transmission capacity of this MIMO IA ad hoc network
and then compare it to open-loop (interference-blind) spatial multiplexing.
Finally, we present exemplary system setups where spatial multiplexing
outperforms IA due to the imperfect channel state information or the
non-aligned inter-cluster interference.
|
1207.4255
|
On the Statistical Efficiency of $\ell_{1,p}$ Multi-Task Learning of
Gaussian Graphical Models
|
cs.LG stat.ML
|
In this paper, we present $\ell_{1,p}$ multi-task structure learning for
Gaussian graphical models. We analyze the sufficient number of samples for the
correct recovery of the support union and edge signs. We also analyze the
necessary number of samples for any conceivable method by providing
information-theoretic lower bounds. We compare the statistical efficiency of
multi-task learning versus that of single-task learning. For experiments, we
use a block coordinate descent method that is provably convergent and generates
a sequence of positive definite solutions. We provide experimental validation
on synthetic data as well as on two publicly available real-world data sets,
including functional magnetic resonance imaging and gene expression data.
|
1207.4259
|
Content Based Multimedia Information Retrieval to Support Digital
Libraries
|
cs.IR cs.CV
|
Content-based multimedia information retrieval is an interesting research
area since it allows retrieval based on the inherent characteristics of
multimedia objects: for example, retrieval based on visual characteristics
such as the colour, shape or texture of objects in images, or on spatial
relationships among objects in the media (images or video clips). This paper
reviews some work done in image and video retrieval and then proposes an
integrated model that can handle images and video clips uniformly. Using this
model, retrieval on images or video clips can be done within the same
framework.
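As a small illustration of retrieval by inherent visual characteristics, the sketch below compares images by colour content using a generic colour-histogram feature with histogram intersection. This is not code from the paper; the function names and bin count are illustrative assumptions.

```python
def colour_histogram(pixels, bins=4):
    """Quantise RGB pixels (values 0-255) into a normalised bins^3 histogram."""
    h = [0] * bins ** 3
    for r, g, b in pixels:
        h[(r * bins // 256) * bins * bins
          + (g * bins // 256) * bins
          + (b * bins // 256)] += 1
    total = sum(h)
    return [c / total for c in h]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 for identical histograms."""
    return sum(min(a, b) for a, b in zip(h1, h2))
```

Spatial relationships among objects would need richer features, but the same query-by-similarity framework applies.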
|
1207.4262
|
Differentially Private Iterative Synchronous Consensus
|
cs.CR cs.DC cs.SY
|
The iterative consensus problem requires a set of processes or agents with
different initial values, to interact and update their states to eventually
converge to a common value. Protocols solving iterative consensus serve as
building blocks in a variety of systems where distributed coordination is
required for load balancing, data aggregation, sensor fusion, filtering, clock
synchronization and platooning of autonomous vehicles. In this paper, we
introduce the private iterative consensus problem where agents are required to
converge while protecting the privacy of their initial values from honest but
curious adversaries. In many applications, protecting the initial states
suffices to protect all subsequent states of the individual participants.
First, we adapt the notion of differential privacy to this setting of
iterative computation. Next, we present a server-based and a completely
distributed randomized mechanism for solving private iterative consensus with
adversaries who can observe the messages as well as the internal states of the
server and a subset of the clients. Finally, we establish the tradeoff between
privacy and the accuracy of the proposed randomized mechanism.
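To make the setting concrete, here is a minimal input-perturbation sketch, not the paper's mechanism: each agent publishes only a Laplace-noised value and all agents average toward the shared mean. The noise scale and update rule are illustrative assumptions.

```python
import random

def private_consensus(values, rounds=60, eps=1.0, seed=0):
    """Iterative averaging where each agent shares only a noised value.
    Disagreement between agents contracts geometrically, so they still
    converge to a common value; the Laplace noise (scale 1/eps) perturbs
    *which* value they reach, trading accuracy for privacy of the inputs."""
    rng = random.Random(seed)
    x = list(values)
    n = len(x)
    for _ in range(rounds):
        # Laplace(1/eps) noise as a difference of two exponentials
        shared = [v + rng.expovariate(eps) - rng.expovariate(eps) for v in x]
        avg = sum(shared) / n
        x = [v + 0.5 * (avg - v) for v in x]
    return x

x = private_consensus([0.0, 4.0, 8.0])
```

The tradeoff the abstract establishes is visible here: more noise gives stronger privacy but moves the consensus value further from the true average.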
|
1207.4266
|
Multiscale Network Generation
|
cs.DM cond-mat.stat-mech cs.SI math.CO physics.soc-ph
|
Networks are widely used in science and technology to represent relationships
between entities, such as social or ecological links between organisms,
enzymatic interactions in metabolic systems, or computer infrastructure.
Statistical analyses of networks can provide critical insights into the
structure, function, dynamics, and evolution of those systems. However, the
structures of real-world networks are often not known completely, and they may
exhibit considerable variation so that no single network is sufficiently
representative of a system. In such situations, researchers may turn to proxy
data from related systems, sophisticated methods for network inference, or
synthetic networks. Here, we introduce a flexible method for synthesizing
realistic ensembles of networks starting from a known network, through a series
of mappings that coarsen and later refine the network structure by randomized
editing. The method, MUSKETEER, preserves structural properties with minimal
bias, including unknown or unspecified features, while introducing realistic
variability at multiple scales. Using examples from several domains, we show
that MUSKETEER produces the intended stochasticity while achieving greater
fidelity across a suite of network properties than do other commonly used
network generation algorithms.
|
1207.4291
|
ConnectiCity, augmented perception of the city
|
cs.CY cs.SI physics.soc-ph
|
As we move through cities in our daily lives, we are in a constant state of
transformation of the spaces around us. The form and essence of urban space
directly affects people's behavior, describing in their perception what is
possible or impossible, allowed or prohibited, suggested or advised against. We
are now able to fill and stratify space/time with digital information layers,
completely wrapping cities in a membrane of information and of opportunities
for interaction and communication. Mobile devices, smartphones, wearables,
digital tags, near field communication devices, location based services and
mixed/augmented reality have gone much further in this direction, turning the
world into an essentially read/write, ubiquitous publishing surface. The usage
of mobile devices and ubiquitous technologies alters the understanding of
place. In this process, the definition of (urban) landscape powerfully shifts
from a definition which is purely administrative (e.g.: the borders of the
flower bed in the middle of a roundabout) to one that is multiplied according
to all individuals who experience that location; as a lossless sum of their
perceptions; as a stratification of interpretations and activities which forms
our cognition of space and time. In our research we investigated the
possibilities to use the scenario which sees urban spaces progressively filling
with multiple layers of real-time, ubiquitous, digital information to
conceptualize, design and implement a series of usage scenarios. It is
possible to create multiple layers of narratives which traverse the city and
which can be read in different ways, according to strategies and
methodologies that highlight how cities express points of view on the
environment, culture, economy, transport, energy and politics.
|
1207.4293
|
Analysis of Neighbourhoods in Multi-layered Dynamic Social Networks
|
cs.SI physics.soc-ph
|
Social networks existing among employees, customers or users of various IT
systems have become one of the research areas of growing importance. A social
network consists of nodes - social entities and edges linking pairs of nodes.
In regular, one-layered social networks, two nodes - i.e. people are connected
with a single edge whereas in the multi-layered social networks, there may be
many links of different types for a pair of nodes. Nowadays data about people
and their interactions, which exists in all social media, provides information
about many different types of relationships within one network. Analysing this
data one can obtain knowledge not only about the structure and characteristics
of the network but also gain an understanding of the semantics of human relations.
Are they direct or not? Do people tend to sustain single or multiple relations
with a given person? What type of communication is the most important for
them? Answers to these and more questions enable us to draw conclusions about
the semantics of human interactions. Unfortunately, most of the methods used for
social network analysis (SNA) may be applied only to one-layered social
networks. Thus, some new structural measures for multi-layered social networks
are proposed in the paper, in particular: cross-layer clustering coefficient,
cross-layer degree centrality and various versions of multi-layered degree
centralities. Authors also investigated the dynamics of multi-layered
neighbourhood for five different layers within the social network. The
evaluation of the presented concepts on the real-world dataset is presented.
The measures proposed in the paper may be directly used in various methods
for collective classification, in which nodes are assigned labels according to
their structural input features.
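As a sketch of one of the proposed measure families, the snippet below computes a cross-layer degree centrality as the mean per-layer degree. The paper defines several variants; this plain averaging form is an assumption for illustration.

```python
def cross_layer_degree(edges_by_layer, node):
    """Mean degree of `node` across all layers of a multi-layered network.
    `edges_by_layer` maps a layer name to a list of undirected (u, v) edges."""
    degrees = []
    for edges in edges_by_layer.values():
        degrees.append(sum(1 for u, v in edges if node in (u, v)))
    return sum(degrees) / len(degrees) if degrees else 0.0

# two layers of relations over the same set of people
layers = {"email": [(1, 2), (1, 3)], "phone": [(1, 2)]}
```

A node active in only one layer scores lower than one sustaining multiple relation types, which is exactly the kind of question the abstract raises.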
|
1207.4297
|
GED: the method for group evolution discovery in social networks
|
cs.SI physics.soc-ph
|
The continuous interest in the social network area contributes to the fast
development of this field. The new possibilities of obtaining and storing data
facilitate deeper analysis of the entire network, extracted social groups and
single individuals as well. One of the most interesting research topics is the
dynamics of social groups, i.e., the analysis of group evolution over time.
Having appropriate knowledge and methods for dynamic analysis, one may attempt
to predict the future of the group, and then manage it properly in order to
achieve or change this predicted future according to specific needs. Such
ability would be a powerful tool in the hands of human resource managers,
personnel recruitment, marketing, etc.
The social group evolution consists of individual events and seven types of
such changes have been identified in the paper: continuing, shrinking, growing,
splitting, merging, dissolving and forming. To enable the analysis of group
evolution, a change indicator, the inclusion measure, is proposed. It has been used
in a new method for exploring the evolution of social groups, called Group
Evolution Discovery (GED). The experimental results of its use together with
the comparison to two well-known algorithms in terms of accuracy, execution
time, flexibility and ease of implementation are also described in the paper.
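A minimal sketch of the kind of inclusion measure GED builds on. The published measure also weights members by their importance within the group; this plain overlap version is a simplification for illustration.

```python
def inclusion(g1, g2):
    """Fraction of g1's members that also belong to g2, in [0, 1]."""
    g1, g2 = set(g1), set(g2)
    return len(g1 & g2) / len(g1) if g1 else 0.0

# Comparing inclusion(g1, g2) with inclusion(g2, g1) against thresholds is
# what lets GED label events such as continuing, growing or shrinking.
```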
|
1207.4307
|
Frame Interpretation and Validation in an Open Domain Dialogue System
|
cs.CL cs.RO
|
Our goal in this paper is to establish a means for a dialogue platform to be
able to cope with open domains considering the possible interaction between the
embodied agent and humans. To this end we present an algorithm capable of
processing natural language utterances and validating them against the knowledge
structures of an intelligent agent's mind. Our algorithm leverages dialogue
techniques in order to solve ambiguities and acquire knowledge about unknown
entities.
|
1207.4308
|
Assessment of SAR Image Filtering using Adaptive Stack Filters
|
cs.CV
|
Stack filters are a special case of non-linear filters. They have a good
performance for filtering images with different types of noise while preserving
edges and details. A stack filter decomposes an input image into several binary
images according to a set of thresholds. Each binary image is then filtered by
a Boolean function, which characterizes the filter. Adaptive stack filters can
be designed to be optimal; they are computed from a pair of images consisting
of an ideal noiseless image and its noisy version. In this work we study the
performance of adaptive stack filters when they are applied to Synthetic
Aperture Radar (SAR) images. This is done by evaluating the quality of the
filtered images through the use of suitable image quality indexes and by
measuring the classification accuracy of the resulting images.
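The threshold-decompose / Boolean-filter / re-stack pipeline can be sketched as follows for a 1-D integer signal, using the majority (median) positive Boolean function. The Boolean function here is an illustrative choice; the adaptive filters studied in the paper learn it from a noiseless/noisy image pair.

```python
def stack_filter(signal, window=3):
    """Decompose a non-negative integer signal into binary slices at each
    threshold, filter every slice with the majority Boolean function, and
    sum the filtered slices back. With the majority function this
    reproduces a running median: impulses are removed, edges preserved."""
    n, half = len(signal), window // 2
    out = [0] * n
    for t in range(1, max(signal) + 1):
        binary = [1 if x >= t else 0 for x in signal]      # threshold slice
        for i in range(n):
            w = binary[max(0, i - half): i + half + 1]
            out[i] += 1 if 2 * sum(w) > len(w) else 0      # majority vote
    return out
```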
|
1207.4318
|
Empirical review of standard benchmark functions using evolutionary
global optimization
|
cs.NE
|
We have employed a recent implementation of genetic algorithms to study a
range of standard benchmark functions for global optimization. It turns out
that some of them are not very useful as challenging test functions, since they
neither allow for a discrimination between different variants of genetic
operators nor exhibit a dimensionality scaling resembling that of real-world
problems, for example that of global structure optimization of atomic and
molecular clusters. The latter properties seem to be simulated better by two
other types of benchmark functions. One type is designed to be deceptive,
exemplified here by Lunacek's function. The other type offers additional
advantages of markedly increased complexity and of broad tunability in search
space characteristics. For the latter type, we use an implementation based on
randomly distributed Gaussians. We advocate the use of the latter types of test
functions for algorithm development and benchmarking.
|
1207.4328
|
Quantum-like Tests for Contextual Querying
|
cs.IR quant-ph
|
Tests are essential in Information Retrieval (IR) in order to evaluate the
effectiveness of a query. Tests intended to exhibit the sense of words in
context were undertaken and linked with Quantum Mechanics (QM). Poll tests
were conducted on heterogeneous media such as music and on polysemy in foreign
languages. Interference effects are shown in the results. The Bell inequality
was used, leading to a significant spread in the results of the poll tests but
without violating the classical limit. Then an automatic pertinence measure
tool for texts was developed using the HAL algorithm with an orthonormal
vector decomposition model. In this case the spread in the values can lead to
the violation of the Bell inequality even beyond the Cirel'son bound.
|
1207.4343
|
Construction and analysis of polar and concatenated polar codes:
practical approach
|
cs.IT math.IT
|
We consider two problems related to polar codes. The first is the
construction of polar codes and the analysis of their performance without the
Monte-Carlo method. The formulas proposed are the same as those in
[Mori-Tanaka], yet we believe that our approach is original and has clear
advantages. The resulting computational procedure is presented in the form of
a fast algorithm which can be easily implemented on a computer. Second, we
present an original method for the
construction of concatenated codes based on polar codes. We give an algorithm
for construction of such codes and present numerical experiments showing
significant performance improvement with respect to original polar codes
proposed by Ar\i kan. We use the term \emph{concatenated code} not in its
classical sense (e.g. [Forney]). However we believe that our usage is quite
appropriate for the exploited construction. Further, we solve the optimization
problem of choosing codes minimizing the block error of the whole concatenated
code under the constraint of its fixed rate.
|
1207.4371
|
Computing n-Gram Statistics in MapReduce
|
cs.IR cs.DB cs.DC
|
Statistics about n-grams (i.e., sequences of contiguous words or other tokens
in text documents or other string data) are an important building block in
information retrieval and natural language processing. In this work, we study
how n-gram statistics, optionally restricted by a maximum n-gram length and
minimum collection frequency, can be computed efficiently harnessing MapReduce
for distributed data processing. We describe different algorithms, ranging from
an extension of word counting, via methods based on the Apriori principle, to a
novel method Suffix-\sigma that relies on sorting and aggregating suffixes. We
examine possible extensions of our method to support the notions of
maximality/closedness and to perform aggregations beyond occurrence counting.
Assuming Hadoop as a concrete MapReduce implementation, we provide insights on
an efficient implementation of the methods. Extensive experiments on The New
York Times Annotated Corpus and ClueWeb09 expose the relative benefits and
trade-offs of the methods.
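The word-count-style baseline that the paper extends can be sketched as plain map and reduce functions. This is a local simulation under illustrative names; on Hadoop these would be the mapper and reducer of one job.

```python
from collections import Counter
from itertools import chain

def ngram_mapper(doc, max_len=3):
    """Emit (n-gram, 1) for every n-gram up to max_len in one document."""
    tokens = doc.split()
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n]), 1

def ngram_reducer(pairs, min_freq=2):
    """Sum counts per n-gram and keep those meeting the frequency cutoff."""
    counts = Counter()
    for ngram, c in pairs:
        counts[ngram] += c
    return {g: c for g, c in counts.items() if c >= min_freq}

docs = ["a b a b", "a b"]
stats = ngram_reducer(chain.from_iterable(ngram_mapper(d) for d in docs))
```

The Apriori-based and Suffix-\sigma methods in the paper avoid emitting every n-gram as this baseline does, pruning by the frequency cutoff or aggregating sorted suffixes instead.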
|
1207.4393
|
Joint Access Point Selection and Power Allocation for Uplink Wireless
Networks
|
cs.IT math.IT
|
We consider the distributed uplink resource allocation problem in a
multi-carrier wireless network with multiple access points (APs). Each mobile
user can optimize its own transmission rate by selecting a suitable AP and by
controlling its transmit power. Our objective is to devise suitable algorithms
by which mobile users can jointly perform these tasks in a distributed manner.
Our approach relies on a game theoretic formulation of the joint power control
and AP selection problem. In the proposed game, each user is a player with an
associated strategy containing a discrete variable (the AP selection decision)
and a continuous vector (the power allocation among multiple channels). We
provide characterizations of the Nash Equilibrium of the proposed game, and
present a set of novel algorithms that allow the users to efficiently optimize
their rates. Finally, we study the properties of the proposed algorithms as
well as their performance via extensive simulations.
|
1207.4404
|
Better Mixing via Deep Representations
|
cs.LG
|
It has previously been hypothesized, and supported with some experimental
evidence, that deeper representations, when well trained, tend to do a better
job at disentangling the underlying factors of variation. We study the
following related conjecture: better representations, in the sense of better
disentangling, can be exploited to produce faster-mixing Markov chains.
Consequently, mixing would be more efficient at higher levels of
representation. To better understand why and how this is happening, we propose
a secondary conjecture: the higher-level samples fill more uniformly the space
they occupy and the high-density manifolds tend to unfold when represented at
higher levels. The paper discusses these hypotheses and tests them
experimentally through visualization and measurements of mixing and
interpolating between samples.
|
1207.4417
|
Penalty Constraints and Kernelization of M-Estimation Based Fuzzy
C-Means
|
cs.CV stat.CO
|
A framework of M-estimation based fuzzy C-means clustering (MFCM) algorithm
is proposed with iterative reweighted least squares (IRLS) algorithm, and
penalty constraint and kernelization extensions of MFCM algorithms are also
developed. By introducing penalty information into the objective functions of
MFCM algorithms, the spatially constrained fuzzy C-means (SFCM) is extended to
penalty-constrained MFCM algorithms (abbr. pMFCM). By substituting the
Euclidean distance with a kernel method, the MFCM and pMFCM algorithms are
extended to kernelized MFCM (abbr. KMFCM) and kernelized pMFCM (abbr. pKMFCM)
algorithms. The performance of the MFCM, pMFCM, KMFCM and pKMFCM algorithms is
evaluated on three tasks: pattern recognition on 10 standard data sets from
the UCI Machine Learning databases, noise-image segmentation on a synthetic
image and a magnetic resonance brain image (MRI), and image segmentation of
standard images from the Berkeley Segmentation Dataset and Benchmark.
results demonstrate the effectiveness of our proposed algorithms in pattern
recognition and image segmentation.
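For reference, the plain fuzzy C-means iteration that MFCM generalises; MFCM replaces the squared Euclidean loss with an M-estimator fitted by IRLS. This baseline uses numpy and illustrative defaults, not the paper's code.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Alternate membership and center updates for the standard FCM
    objective sum_ik u_ik^m ||x_i - v_k||^2."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]          # weighted means
        dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = dist ** (-2.0 / (m - 1))                          # membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

# two well-separated 1-D clusters
X = np.array([[0.0], [0.2], [10.0], [10.2]])
centers, _ = fuzzy_c_means(X)
```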
|
1207.4420
|
On the Nuclear Norm heuristic for a Hankel matrix Recovery Problem
|
cs.SY math.OC
|
This note addresses the question of whether and why the nuclear norm heuristic
can recover an impulse response generated by a stable single-real-pole system,
when elements of the upper triangle of the associated Hankel matrix are given.
Since the setting is deterministic, theories based on stochastic assumptions
for low-rank matrix recovery do not apply here. A 'certificate' which
guarantees the completion is constructed by exploring the structural
information of the hidden matrix. Experimental results and discussions
regarding the nuclear norm heuristic applied to a more general setting are also
given.
|
1207.4421
|
Stochastic optimization and sparse statistical recovery: An optimal
algorithm for high dimensions
|
stat.ML cs.LG math.OC
|
We develop and analyze stochastic optimization algorithms for problems in
which the expected loss is strongly convex, and the optimum is (approximately)
sparse. Previous approaches are able to exploit only one of these two
structures, yielding an $O(d/T)$ convergence rate for strongly convex
objectives in $d$ dimensions, and an $O(\sqrt{(s \log d)/T})$ convergence rate
when the optimum is $s$-sparse. Our algorithm is based on successively solving
a series of $\ell_1$-regularized optimization problems using Nesterov's dual
averaging algorithm. We establish that the error of our solution after $T$
iterations is at most $O((s \log d)/T)$, with natural extensions to approximate
sparsity. Our results apply to locally Lipschitz losses including the logistic,
exponential, hinge and least-squares losses. By recourse to statistical minimax
results, we show that our convergence rates are optimal up to multiplicative
constant factors. The effectiveness of our approach is also confirmed in
numerical simulations, in which we compare to several baselines on a
least-squares regression problem.
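A minimal sketch of the inner solver the paper builds on: dual averaging with $\ell_1$ regularisation (simple RDA). The step sizes and parameters are illustrative assumptions; the paper's algorithm runs such a solver in successive stages.

```python
import numpy as np

def rda_l1(grad, dim, T, lam=0.05, gamma=1.0):
    """l1-regularised dual averaging: keep a running average of gradients
    and take a closed-form soft-thresholded step against it each iteration."""
    w = np.zeros(dim)
    g_bar = np.zeros(dim)
    for t in range(1, T + 1):
        g_bar += (grad(w) - g_bar) / t                      # average gradient
        shrunk = np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
        w = -np.sqrt(t) / gamma * shrunk                    # soft-thresholded step
    return w

# quadratic loss 0.5 * ||w - target||^2 with a sparse target
target = np.array([1.0, 0.0])
w = rda_l1(lambda w: w - target, dim=2, T=3000)
```

Note how the zero coordinate of the target stays exactly zero throughout: the soft threshold never activates it, which is the sparsity-exploiting behavior the rates above quantify.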
|
1207.4432
|
Towards Understanding Triangle Construction Problems
|
cs.AI
|
Straightedge and compass construction problems are one of the oldest and most
challenging problems in elementary mathematics. The central challenge, for a
human or for a computer program, in solving construction problems is a huge
search space. In this paper we analyze one family of triangle construction
problems, aiming at detecting a small core of the underlying geometry
knowledge. The analysis leads to a small set of needed definitions, lemmas and
primitive construction steps, and consequently, to a simple algorithm for
automated solving of problems from this family. The same approach can be
applied to other families of construction problems.
|
1207.4442
|
Complex-network analysis of combinatorial spaces: The NK landscape case
|
cond-mat.stat-mech cs.NE nlin.AO
|
We propose a network characterization of combinatorial fitness landscapes by
adapting the notion of inherent networks proposed for energy surfaces. We use
the well-known family of NK landscapes as an example. In our case the inherent
network is the graph whose vertices represent the local maxima in the
landscape, and the edges account for the transition probabilities between their
corresponding basins of attraction. We exhaustively extracted such networks on
representative NK landscape instances, and performed a statistical
characterization of their properties. We found that most of these network
properties are related to the search difficulty on the underlying NK landscapes
with varying values of K.
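An NK fitness function of the kind analysed can be sketched as follows, with circular neighbourhoods and lazily sampled uniform contribution tables (an illustrative construction, not the authors' code).

```python
import random

def make_nk(n, k, seed=0):
    """Return an NK fitness function: locus i contributes a random value
    determined by its own bit and its k right-hand (circular) neighbours;
    larger k means more epistatic interaction and a more rugged landscape."""
    rng = random.Random(seed)
    tables = [{} for _ in range(n)]
    def fitness(bits):
        total = 0.0
        for i in range(n):
            key = tuple(bits[(i + j) % n] for j in range(k + 1))
            if key not in tables[i]:
                tables[i][key] = rng.random()   # sample contribution lazily
            total += tables[i][key]
        return total / n
    return fitness

f = make_nk(n=6, k=2)
```

The vertices of the inherent network are the local maxima of such a function; the edges weight transitions between their basins of attraction.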
|
1207.4445
|
Communities of Minima in Local Optima Networks of Combinatorial Spaces
|
cs.NE cs.AI
|
In this work we present a new methodology to study the structure of the
configuration spaces of hard combinatorial problems. It consists in building
the network that has as nodes the locally optimal configurations and as edges
the weighted oriented transitions between their basins of attraction. We apply
the approach to the detection of communities in the optima networks produced by
two different classes of instances of a hard combinatorial optimization
problem: the quadratic assignment problem (QAP). We provide evidence indicating
that the two problem instance classes give rise to very different configuration
spaces. For the so-called real-like class, the networks possess a clear modular
structure, while the optima networks belonging to the class of random uniform
instances are less well partitionable into clusters. This is convincingly
supported by using several statistical tests. Finally, we shortly discuss the
consequences of the findings for heuristically searching the corresponding
problem spaces.
|
1207.4448
|
DAMS: Distributed Adaptive Metaheuristic Selection
|
cs.NE cs.AI
|
We present a distributed generic algorithm called DAMS dedicated to adaptive
optimization in distributed environments. Given a set of metaheuristics, the
goal of DAMS is to coordinate their local execution on distributed nodes in
order to optimize the global performance of the distributed system. DAMS is
based on a three-layer architecture allowing nodes to decide in a distributed
manner what local information to communicate, and what metaheuristic to apply while the
optimization process is in progress. The adaptive features of DAMS are first
addressed in a very general setting. A specific DAMS called SBM is then
described and analyzed from both a parallel and an adaptive point of view. SBM
is a simple, yet efficient, adaptive distributed algorithm using an
exploitation component allowing nodes to select the metaheuristic with the best
locally observed performance, and an exploration component allowing nodes to
detect the metaheuristic with the actual best performance. The efficiency of
SBM-DAMS is demonstrated through experiments and comparisons with other
adaptive strategies (sequential and distributed).
|
1207.4450
|
NILS: a Neutrality-based Iterated Local Search and its application to
Flowshop Scheduling
|
cs.NE cs.AI
|
This paper presents a new methodology that exploits specific characteristics
from the fitness landscape. In particular, we are interested in the property of
neutrality, that deals with the fact that the same fitness value is assigned to
numerous solutions from the search space. Many combinatorial optimization
problems share this property, that is generally very inhibiting for local
search algorithms. A neutrality-based iterated local search, that allows
neutral walks to move on the plateaus, is proposed and experimented on a
permutation flowshop scheduling problem with the aim of minimizing the
makespan. Our experiments show that the proposed approach is able to find
improving solutions compared with a classical iterated local search. Moreover,
the tradeoff between the exploitation of neutrality and the exploration of new
parts of the search space is deeply analyzed.
|
1207.4451
|
Set-based Multiobjective Fitness Landscapes: A Preliminary Study
|
cs.NE cs.AI
|
Fitness landscape analysis aims to understand the geometry of a given
optimization problem in order to design more efficient search algorithms.
However, there is very little knowledge on the landscapes of multiobjective
problems. In this work, following a recent proposal by Zitzler et al. (2010),
we consider multiobjective optimization as a set problem. Then, we give a
general definition of set-based multiobjective fitness landscapes. An
experimental set-based fitness landscape analysis is conducted on the
multiobjective NK-landscapes with objective correlation. The aim is to adapt
and to enhance the comprehensive design of set-based multiobjective search
approaches, motivated by an a priori analysis of the corresponding set problem
properties.
|
1207.4452
|
Pareto Local Optima of Multiobjective NK-Landscapes with Correlated
Objectives
|
cs.NE cs.AI
|
In this paper, we conduct a fitness landscape analysis for multiobjective
combinatorial optimization, based on the local optima of multiobjective
NK-landscapes with objective correlation. In single-objective optimization, it
has become clear that local optima have a strong impact on the performance of
metaheuristics. Here, we propose an extension to the multiobjective case, based
on the Pareto dominance. We study the co-influence of the problem dimension,
the degree of non-linearity, the number of objectives and the correlation
degree between objective functions on the number of Pareto local optima.
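The Pareto-dominance test underlying the definition of a Pareto local optimum can be written directly; a maximisation convention and one-bit-flip neighbourhood are assumed here for illustration.

```python
def dominates(a, b):
    """a Pareto-dominates b: no worse in every objective, better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def is_pareto_local_optimum(x, neighbours, objectives):
    """x is a Pareto local optimum if no neighbour dominates it."""
    fx = tuple(f(x) for f in objectives)
    return not any(dominates(tuple(f(y) for f in objectives), fx)
                   for y in neighbours(x))
```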
|
1207.4455
|
First-improvement vs. Best-improvement Local Optima Networks of NK
Landscapes
|
cs.NE cs.AI
|
This paper extends a recently proposed model for combinatorial landscapes:
Local Optima Networks (LON), to incorporate a first-improvement (greedy-ascent)
hill-climbing algorithm, instead of a best-improvement (steepest-ascent) one,
for the definition and extraction of the basins of attraction of the landscape
optima. A statistical analysis comparing best and first improvement network
models for a set of NK landscapes, is presented and discussed. Our results
suggest structural differences between the two models with respect to both the
network connectivity, and the nature of the basins of attraction. The impact of
these differences in the behavior of search heuristics based on first and best
improvement local search is thoroughly discussed.
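The two hill-climbers being compared can be sketched on bit strings; these are generic implementations under a one-bit-flip neighbourhood, not the exact extraction code.

```python
def best_improvement(fitness, x):
    """Steepest ascent: scan all one-bit flips, move to the best improving one."""
    while True:
        nbrs = [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]
        best = max(nbrs, key=fitness)
        if fitness(best) <= fitness(x):
            return x                      # local optimum reached
        x = best

def first_improvement(fitness, x):
    """Greedy ascent: accept the first improving one-bit flip found."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if fitness(y) > fitness(x):
                x, improved = y, True
                break
    return x
```

On a given landscape the two climbers assign configurations to different local optima, which is what changes the basins, and hence the extracted LON.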
|
1207.4462
|
A Quantum Copy-Protection Scheme with Authentication
|
quant-ph cs.IT math.IT
|
We propose a quantum copy-protection system which protects classical
information in the form of non-orthogonal quantum states. The decryption of the
stored information is not possible in the classical representation and the
decryption mechanism of data qubits is realized by secret unitary rotations. We
define an authentication method for the proposed copy-protection scheme and
analyse the success probabilities of the authentication process. A possible
experimental realization of the scheme is also presented.
|
1207.4463
|
Protein Function Prediction Based on Kernel Logistic Regression with
2-order Graphic Neighbor Information
|
q-bio.QM cs.LG q-bio.MN
|
To enhance the accuracy of protein-protein interaction function prediction, a
2-order graphic neighbor information feature extraction method based on
undirected simple graphs is proposed in this paper, which extends the 1-order
graphic neighbor feature extraction method. The chi-square test statistical
method is also involved in feature combination. To demonstrate the
effectiveness of our 2-order graphic neighbor feature, four logistic regression
models (logistic regression (abbrev. LR), diffusion kernel logistic regression
(abbrev. DKLR), polynomial kernel logistic regression (abbrev. PKLR), and
radial basis function (RBF) based kernel logistic regression (abbrev. RBF KLR))
are investigated on the two feature sets. The experimental results of protein
function prediction on the Yeast Proteome Database (YPD), using the
protein-protein interaction data of the Munich Information Center for Protein
Sequences (MIPS), show that 2-order graphic neighbor information of proteins
can significantly improve the average overall percentage of protein function
prediction, especially with RBF KLR. Moreover, with a new 5-top chi-square
feature combination method, RBF KLR can achieve a 99.05% average overall
percentage on the 2-order neighbor feature combination set.
|
1207.4464
|
An Improvement in Quantum Fourier Transform
|
quant-ph cs.IT math.IT
|
Singular Value Decomposition (SVD) is one of the most useful techniques for
analyzing data in linear algebra. SVD decomposes a rectangular real or complex
matrix into two orthogonal matrices and one diagonal matrix. In this work we
introduce a new approach to improve the precision of the standard Quantum
Fourier Transform. The presented Quantum-SVD algorithm is based on the singular
value decomposition mechanism. While the complexity of the proposed scheme is
the same as that of the standard Quantum Fourier Transform, the precision of
the Quantum-SVD approach is several orders of magnitude higher. The Quantum-SVD
approach also exploits the benefits of quantum searching.
|
1207.4467
|
Information Geometric Security Analysis of Differential Phase Shift
Quantum Key Distribution Protocol
|
quant-ph cs.IT math.IT
|
This paper analyzes the information-theoretical security of the Differential
Phase Shift (DPS) Quantum Key Distribution (QKD) protocol, using efficient
computational information geometric algorithms. The DPS QKD protocol was
introduced for practical reasons, since the earlier QKD schemes were too
complicated to implement in practice. The DPS QKD protocol can be an integrated
part of current network security applications; hence its practical
implementation is much easier with current optical devices and optical
networks. The proposed algorithm could be a very valuable tool to answer the
still open questions related to the security bounds of the DPS QKD protocol.
|
1207.4474
|
On Model Based Synthesis of Embedded Control Software
|
cs.SE cs.SY
|
Many Embedded Systems are indeed Software Based Control Systems (SBCSs), that
is control systems whose controller consists of control software running on a
microcontroller device. This motivates investigation on Formal Model Based
Design approaches for control software. Given the formal model of a plant as a
Discrete Time Linear Hybrid System and the implementation specifications (that
is, number of bits in the Analog-to-Digital (AD) conversion)
correct-by-construction control software can be automatically generated from
System Level Formal Specifications of the closed loop system (that is, safety
and liveness requirements), by computing a suitable finite abstraction of the
plant.
With respect to given implementation specifications, the automatically
generated code implements a time optimal control strategy (in terms of set-up
time), has a Worst Case Execution Time linear in the number of AD bits $b$, but
unfortunately, its size grows exponentially with respect to $b$. In many
embedded systems, there are severe restrictions on the computational resources
(such as memory or computational power) available to microcontroller devices.
This paper addresses model based synthesis of control software by trading
system level non-functional requirements (such as optimal set-up time, ripple)
with software non-functional requirements (its footprint). Our experimental
results show the effectiveness of our approach: for the inverted pendulum
benchmark, by using a quantization schema with 12 bits, the size of the small
controller is less than 6% of the size of the time optimal one.
|
1207.4491
|
Algorithmic Superactivation of Asymptotic Quantum Capacity of
Zero-Capacity Quantum Channels
|
quant-ph cs.IT math.IT
|
The superactivation of zero-capacity quantum channels makes it possible to
use two zero-capacity quantum channels with a positive joint capacity for their
output. Currently, we have no theoretical background to describe all possible
combinations of superactive zero-capacity channels; hence, there may be many
other possible combinations. In practice, to discover such superactive
zero-capacity channel-pairs, we must analyze an extremely large set of possible
quantum states, channel models, and channel probabilities. There is still no
extremely efficient algorithmic tool for this purpose. This paper presents an
efficient algorithmic method for finding such combinations. Our method can be
a very valuable tool for improving the results of fault-tolerant quantum
computation and possible communication techniques over very noisy quantum
channels.
|
1207.4498
|
Distributed Inter-Cell Interference Mitigation Via Joint Scheduling and
Power Control Under Noise Rise Constraints
|
cs.NI cs.IT math.IT
|
Consider the problem of joint uplink scheduling and power allocation. Being
inherent to almost any wireless system, this resource allocation problem has
received extensive attention. Yet, most common techniques either adopt
classical power control, in which mobile stations are received with the same
Signal-to-Interference-plus-Noise Ratio, or use centralized schemes, in which
base stations coordinate their allocations.
In this work, we suggest a novel scheduling approach in which each base
station, besides allocating the time and frequency according to given
constraints, also manages its uplink power budget such that the aggregate
interference, "Noise Rise", caused by its subscribers at the neighboring cells
is bounded. Our suggested scheme is distributed, requiring neither coordination
nor message exchange.
We rigorously define the allocation problem under noise rise constraints,
give the optimal solution and derive an efficient iterative algorithm to
achieve it. We then discuss a relaxed problem, where the noise rise is
constrained separately for each sub-channel or resource unit. While
sub-optimal, this view renders the scheduling and power allocation problems
separate, yielding an even simpler and more efficient solution, while the
essence of the scheme is kept. Via extensive simulations, we show that the
suggested approach increases overall performance dramatically, with the same
level of fairness and power consumption.
|
1207.4502
|
Pilot Quantum Error Correction for Global-Scale Quantum Communications
|
quant-ph cs.IT math.IT
|
Real global-scale quantum communications and quantum key distribution systems
cannot be implemented by the current fiber and free-space links. These links
have high attenuation, low polarization-preserving capability or extreme
sensitivity to the environment. A potential solution to the problem is the
space-earth quantum channels. These channels have no absorption since the
signal states are propagated in empty space; however, a small fraction of these
channels lies in the atmosphere, which causes a slight depolarizing effect.
Furthermore, the relative motion of the ground station and the satellite causes
a rotation in the polarization of the quantum states. In the current approaches
to compensate for these types of polarization errors, high computational costs
and extra physical apparatuses are required. Here we introduce a novel approach
which breaks with the traditional views of currently developed quantum-error
correction schemes. The proposed quantum error-correction technique can be
applied to fix the polarization errors which are critical in space-earth
quantum communication systems. Moreover, the channel coding scheme provides
capacity-achieving communication over slightly depolarizing space-earth
channels.
|
1207.4525
|
SiGMa: Simple Greedy Matching for Aligning Large Knowledge Bases
|
cs.AI cs.DB cs.IR
|
The Internet has enabled the creation of a growing number of large-scale
knowledge bases in a variety of domains containing complementary information.
Tools for automatically aligning these knowledge bases would make it possible
to unify many sources of structured knowledge and answer complex queries.
However, the efficient alignment of large-scale knowledge bases still poses a
considerable challenge. Here, we present Simple Greedy Matching (SiGMa), a
simple algorithm for aligning knowledge bases with millions of entities and
facts. SiGMa is an iterative propagation algorithm which leverages both the
structural information from the relationship graph as well as flexible
similarity measures between entity properties in a greedy local search, thus
making it scalable. Despite its greedy nature, our experiments indicate that
SiGMa can efficiently match some of the world's largest knowledge bases with
high precision. We provide additional experiments on benchmark datasets which
demonstrate that SiGMa can outperform state-of-the-art approaches both in
accuracy and efficiency.
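A minimal sketch of the greedy propagation idea (toy graph structures and a hypothetical similarity function, not SiGMa's actual scoring):

```python
import heapq

def greedy_match(graph1, graph2, sim, seeds):
    """Greedily align two graphs: repeatedly commit the highest-scoring
    candidate pair, then propose the neighbors of each newly matched pair
    as new candidates (structural propagation)."""
    matched = {}   # node in graph1 -> node in graph2
    used = set()   # already-matched nodes of graph2
    heap = [(-sim(a, b), a, b) for a, b in seeds]
    heapq.heapify(heap)
    while heap:
        _, a, b = heapq.heappop(heap)
        if a in matched or b in used:
            continue  # one side already committed; skip stale candidate
        matched[a] = b
        used.add(b)
        # propagation step: neighbors of a matched pair become candidates
        for na in graph1.get(a, []):
            for nb in graph2.get(b, []):
                if na not in matched and nb not in used:
                    heapq.heappush(heap, (-sim(na, nb), na, nb))
    return matched
```

In the full algorithm the score combines property similarity with the structural agreement of already-matched neighbors; the sketch keeps only the greedy local-search skeleton.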
|
1207.4526
|
Iterative Design of L_p Digital Filters
|
cs.IT math.IT
|
The design of digital filters is a fundamental process in the context of
digital signal processing. The purpose of this paper is to study the use of
$\ell_p$ norms (for $2 < p < \infty$) as design criteria for digital filters,
and to introduce a set of algorithms for the design of Finite (FIR) and
Infinite (IIR) Impulse Response digital filters based on the Iterative
Reweighted Least Squares (IRLS) algorithm. The proposed algorithms rely on the
idea of breaking the $\ell_p$ filter design problem into a sequence of
approximations rather than solving the original $\ell_p$ problem directly. It
is shown that one can efficiently design filters that arbitrarily approximate
a desired $\ell_p$ solution (for $2 < p < \infty$), including the commonly
used $\ell_\infty$ (or minimax) design problem. A method to design filters
with different norms in different bands is presented (allowing the user better
control of the signal and noise behavior per band). Among the main
contributions of this work is a method for the design of \emph{magnitude}
$\ell_p$ IIR filters. Experimental
results show that the algorithms in this work are robust and efficient,
improving over traditional off-the-shelf optimization tools. The group of
proposed algorithms form a flexible collection that offers robustness and
efficiency for a wide variety of digital filter design applications.
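The core IRLS iteration can be sketched as follows (a simplified version with the classic partial-update damping for $p > 2$, not the paper's full FIR/IIR machinery; in a filter design `A` would hold frequency-response samples and `d` the desired response):

```python
import numpy as np

def irls_lp(A, d, p=4, iters=100, eps=1e-8):
    """Approximate argmin_h ||A h - d||_p (2 < p < inf) by Iteratively
    Reweighted Least Squares: each step solves a weighted L2 problem with
    weights |error|^(p-2), using a partial update to keep p > 2 stable."""
    h = np.linalg.lstsq(A, d, rcond=None)[0]   # start from the L2 solution
    lam = 1.0 / (p - 1)                        # damping factor
    for _ in range(iters):
        e = A @ h - d
        w = np.abs(e) ** (p - 2) + eps         # IRLS reweighting
        W = A * w[:, None]                     # rows of A scaled by weights
        h_wls = np.linalg.solve(A.T @ W, A.T @ (w * d))
        h = lam * h_wls + (1 - lam) * h        # partial (damped) update
    return h
```

Without the damped update the fixed-point iteration can oscillate for large $p$; with it, the iterates drift smoothly from the $\ell_2$ solution toward the $\ell_p$ one.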
|
1207.4530
|
Time-Space Constrained Codes for Phase-Change Memories
|
cs.IT math.IT
|
Phase-change memory (PCM) is a promising non-volatile solid-state memory
technology. A PCM cell stores data by using its amorphous and crystalline
states. The cell changes between these two states using high temperature.
However, since the cells are sensitive to high temperature, it is important,
when programming cells, to balance the heat both in time and space.
In this paper, we study the time-space constraint for PCM, which was
originally proposed by Jiang et al. A code is called an
\emph{$(\alpha,\beta,p)$-constrained code} if for any $\alpha$ consecutive
rewrites and for any segment of $\beta$ contiguous cells, the total rewrite
cost of the $\beta$ cells over those $\alpha$ rewrites is at most $p$. Here,
the cells are binary and the rewrite cost is defined to be the Hamming distance
between the current and next memory states. First, we show a general upper
bound on the achievable rate of these codes which extends the results of Jiang
et al. Then, we generalize their construction for $(\alpha\geq 1,
\beta=1,p=1)$-constrained codes and show another construction for $(\alpha = 1,
\beta\geq 1,p\geq1)$-constrained codes. Finally, we show that these two
constructions can be used to construct codes for all values of $\alpha$,
$\beta$, and $p$.
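The constraint itself is easy to state in code; this checker (our illustration, not from the paper) verifies a sequence of binary memory states against given $\alpha$, $\beta$, and $p$:

```python
def is_constrained(rewrites, alpha, beta, p):
    """Check the (alpha, beta, p) time-space constraint: over every window
    of alpha consecutive rewrites and every segment of beta contiguous
    cells, the total rewrite cost (Hamming distance between successive
    states, restricted to the segment) is at most p."""
    # cost[t][i] = 1 if cell i flips at rewrite t
    cost = [[a ^ b for a, b in zip(s, t)]
            for s, t in zip(rewrites, rewrites[1:])]
    n = len(rewrites[0])
    for t in range(len(cost) - alpha + 1):
        for i in range(n - beta + 1):
            total = sum(cost[t + dt][i + di]
                        for dt in range(alpha) for di in range(beta))
            if total > p:
                return False
    return True
```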
|
1207.4552
|
Delay-Robustness of Linear Predictor Feedback Without Restriction on
Delay Rate
|
math.OC cs.SY
|
Robustness is established for the predictor feedback for linear
time-invariant systems with respect to possibly time-varying perturbations of
the input delay, with a constant nominal delay. Prior results have addressed
qualitatively constant delay perturbations (robustness of stability in L2 norm
of actuator state) and delay perturbations with restricted rate of change
(robustness of stability in H1 norm of actuator state). The present work
provides simple formulae that allow direct and accurate computation of the
least upper bound of the magnitude of the delay perturbation for which
exponential stability in supremum norm on the actuator state is preserved.
While prior work has employed Lyapunov-Krasovskii functionals constructed via
backstepping, the present work employs a particular form of small-gain
analysis. Two cases are considered: the case of measurable (possibly
discontinuous) perturbations and the case of constant perturbations.
|
1207.4553
|
The Impacts of Subsidy Policies on Vaccination Decisions in Contact
Networks
|
physics.soc-ph cs.SI physics.med-ph
|
Often, vaccination programs are carried out on the basis of self-interest
rather than being mandatory. Owing to perceptions about the risks associated
with vaccines and the `herd immunity' effect, voluntary vaccination may yield
suboptimal coverage for the population as a whole. In this case, subsidy
policies may be offered by the government to promote vaccination coverage. But
not all
subsidy policies are effective in controlling the transmission of infectious
diseases. We address the question of which subsidy policy is best, and how to
appropriately distribute the limited subsidies to maximize vaccine coverage. To
answer these questions, we establish a model based on evolutionary game theory,
where individuals try to maximize their personal payoffs when considering the
voluntary vaccination mechanism. Our model shows that voluntary vaccination
alone is insufficient to control an epidemic. Hence, two subsidy policies are
systematically studied: (1) in the free subsidy policy the total amount of
subsidies is distributed to some individuals and all the donees may vaccinate
at no cost, and (2) in the part-offset subsidy policy each vaccinated person is
offset by a certain proportion of the vaccination cost. Simulations suggest
that, since the part-offset subsidy policy can encourage more individuals to be
vaccinated, the performance of this policy is significantly better than that of
the free subsidy policy.
|
1207.4567
|
Efficient Core Maintenance in Large Dynamic Graphs
|
cs.DS cs.DB cs.SI physics.soc-ph
|
The $k$-core decomposition in a graph is a fundamental problem for social
network analysis. The problem of $k$-core decomposition is to calculate the
core number for every node in a graph. Previous studies mainly focus on
$k$-core decomposition in a static graph. There exists a linear time algorithm
for $k$-core decomposition in a static graph. However, in many real-world
applications such as online social networks and the Internet, the graph
typically evolves over time. In such applications, a key issue is to maintain
the core numbers of nodes as the graph changes over time. A simple
implementation is to rerun the linear-time algorithm to recompute the core
number of every node after the graph is updated. Such a simple implementation
is expensive when the graph is very large. In this paper, we propose a new
efficient algorithm to maintain the core number of every node in a dynamic
graph. Our main result is that only certain nodes need to update their core
numbers when the graph is changed by inserting or deleting an edge. We devise
an
efficient algorithm to identify and recompute the core number of such nodes.
The complexity of our algorithm is independent of the graph size. In addition,
to further accelerate the algorithm, we develop two pruning strategies by
exploiting the lower and upper bounds of the core number. Finally, we conduct
extensive experiments over both real-world and synthetic datasets, and the
results demonstrate the efficiency of the proposed algorithm.
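For reference, the static decomposition that the incremental approach avoids rerunning can be sketched as a peeling procedure (a simple quadratic-time version of our own; the linear-time variant uses bucketed degrees):

```python
def core_numbers(adj):
    """Core number of every node, by repeatedly peeling a minimum-degree
    node; a node's core number is the running maximum of degrees seen at
    removal time."""
    deg = {v: len(ns) for v, ns in adj.items()}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=lambda u: deg[u])
        k = max(k, deg[v])            # core numbers are non-decreasing
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                deg[w] -= 1           # peel v: its neighbors lose a degree
    return core
```

Rerunning this after every edge update is what becomes expensive on large graphs; the paper's algorithm instead updates only the affected nodes.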
|
1207.4570
|
Presentation an Approach for Optimization of Semantic Web Language Based
on the Document Structure
|
cs.DB
|
Pattern trees are based on integrated rules, which are equivalent to a
combination of points connected to each other in a hierarchical structure,
called an Enquiry Hierarchy (EH). The main operation in pattern enquiry
seeking is to locate the steps that match the given EH in the dataset. A
number of algorithms have been proposed for EH matching, but most of them seek
all of the enquiry steps to access all EHs in the dataset. A few algorithms
seek only the steps that satisfy the end points of the EH. All of the above
algorithms try to find a way to test steps directly and to locate the answer
of the enquiry directly via these points. In this paper, we describe a novel
algorithm that locates the answer of an enquiry without blindly accessing real
points of the dataset. In this algorithm, the enquiry is first executed on the
enquiry schema, which yields a plan. Using this plan, it becomes clear how to
seek end steps and how to reach the enquiry dataset before seeking the dataset
steps. Therefore, no dataset step is sought blindly.
|
1207.4587
|
Causal relay networks
|
cs.IT math.IT
|
In this paper, we study causal discrete-memoryless relay networks (DMRNs).
The network consists of multiple nodes, each of which can be a source, relay,
and/or destination. In the network, there are two types of relays, i.e., relays
with one sample delay (strictly causal) and relays without delay (causal) whose
transmit signal depends not only on the past received symbols but also on the
current received symbol. For this network, we derive two new cut-set bounds,
one for when the causal relays have their own messages and the other for when they do not.
Using examples of a causal vector Gaussian two-way relay channel and a causal
vector Gaussian relay channel, we show that the new cut-set bounds can be
achieved by a simple amplify-and-forward type relaying. Our result for the
causal relay channel strengthens the previously known capacity result for the
same channel by El Gamal, Hassanpour, and Mammen.
|
1207.4589
|
Minimum-Length Scheduling with Finite Queues: Solution Characterization
and Algorithmic Framework
|
cs.IT cs.NI math.IT
|
We consider a set of transmitter-receiver pairs, or links, that share a
common channel and address the problem of emptying backlogged queues at the
transmitters in minimum time. The problem amounts to determining activation
subsets of links and their time durations to form a minimum-length schedule.
The problem of scheduling has been studied under various formulations before.
In this paper, we present fundamental insights and solution characterizations
that include: (i) showing that the complexity of the problem remains high for
any continuous and increasing rate function, (ii) formulating and proving
sufficient and necessary optimality conditions of two base scheduling
strategies that correspond to emptying the queues using "one-at-a-time" or
"all-at-once" strategies, (iii) presenting and proving the tractability of the
special case in which the transmission rates are functions only of the
cardinality of the link activation sets. These results are independent of
physical-layer system specifications and are valid for any form of rate
function. We then develop an algorithmic framework. The framework encompasses
exact as well as sub-optimal, but fast, scheduling algorithms, all under a
unified design principle. Through computational experiments we finally
investigate the performance of several specific algorithms.
|
1207.4592
|
Differentially Private Kalman Filtering
|
math.OC cs.CR cs.SY
|
This paper studies the H2 (Kalman) filtering problem in the situation where a
signal estimate must be constructed based on inputs from individual
participants, whose data must remain private. This problem arises in emerging
applications such as smart grids or intelligent transportation systems, where
users continuously send data to third-party aggregators performing global
monitoring or control tasks, and require guarantees that this data cannot be
used to infer additional personal information. To provide strong formal privacy
guarantees against adversaries with arbitrary side information, we rely on the
notion of differential privacy introduced relatively recently in the database
literature. This notion is extended to dynamic systems with many participants
contributing independent input signals, and mechanisms are then proposed to
solve the H2 filtering problem with a differential privacy constraint. A method
for mitigating the impact of the privacy-inducing mechanism on the estimation
performance is described, which relies on controlling the H-infinity norm of the
filter. Finally, we discuss an application to a privacy-preserving traffic
monitoring system.
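As background (this is the standard Gaussian mechanism that such designs build on, not the paper's filter-based scheme), output perturbation adds noise calibrated to the released signal's sensitivity:

```python
import numpy as np

def gaussian_mechanism(signal, sensitivity, epsilon, delta, seed=0):
    """Release a signal with (epsilon, delta)-differential privacy by
    adding Gaussian noise scaled to the signal's L2 sensitivity. The paper
    instead shapes where such noise enters the Kalman filter to reduce its
    impact on estimation performance."""
    rng = np.random.default_rng(seed)
    # Standard Gaussian-mechanism calibration (valid for epsilon <= 1)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    noise = rng.normal(0.0, sigma, np.shape(signal))
    return np.asarray(signal, dtype=float) + noise
```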
|
1207.4597
|
Local stability of Belief Propagation algorithm with multiple fixed
points
|
stat.ML cs.LG
|
A number of problems in statistical physics and computer science can be
expressed as the computation of marginal probabilities over a Markov random
field. Belief propagation, an iterative message-passing algorithm, computes
exactly such marginals when the underlying graph is a tree. But it has gained
its popularity as an efficient way to approximate them in the more general
case, even though it can exhibit multiple fixed points and is not guaranteed
to converge. In this paper, we express a new sufficient condition for the
local stability of a belief propagation fixed point in terms of the graph
structure and the belief values at the fixed point. This gives credence to the
usual
understanding that Belief Propagation performs better on sparse graphs.
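On a tree the fixed point is unique and exact; a minimal sum-product sketch on a chain MRF (our illustration) makes the message recursion concrete:

```python
import numpy as np

def bp_chain_marginals(unaries, pairwise):
    """Exact marginals on a chain MRF via sum-product message passing
    (the tree case, where belief propagation is exact).
    unaries: list of node potential vectors; pairwise: shared edge matrix
    with pairwise[x_i, x_{i+1}]."""
    n = len(unaries)
    fwd = [np.ones_like(unaries[0])]            # message into node i from the left
    for i in range(1, n):
        m = pairwise.T @ (unaries[i - 1] * fwd[i - 1])
        fwd.append(m / m.sum())                 # normalize for numerical stability
    bwd = [np.ones_like(unaries[0]) for _ in range(n)]
    for i in range(n - 2, -1, -1):              # message into node i from the right
        m = pairwise @ (unaries[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    marginals = []
    for i in range(n):
        b = unaries[i] * fwd[i] * bwd[i]        # belief = product of incoming messages
        marginals.append(b / b.sum())
    return marginals
```

On loopy graphs the same message updates are iterated to a fixed point, which is where the stability question studied in the paper arises.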
|
1207.4598
|
Quick HyperVolume
|
cs.DS cs.DM cs.NE
|
We present a new algorithm to calculate exact hypervolumes. Given a set of
$d$-dimensional points, it computes the hypervolume of the dominated space.
Determining this value is an important subroutine of Multiobjective
Evolutionary Algorithms (MOEAs). We analyze the "Quick Hypervolume" (QHV)
algorithm theoretically and experimentally. The theoretical results are a
significant contribution to the current state of the art. Moreover the
experimental performance is also very competitive, compared with existing exact
hypervolume algorithms.
A full description of the algorithm is currently under submission to IEEE
Transactions on Evolutionary Computation.
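While QHV handles general $d$, the two-dimensional case already illustrates what is being computed; a simple sweep (our illustration, not the QHV algorithm) gives the exact value:

```python
def hypervolume_2d(points, ref):
    """Exact hypervolume of the region dominated by a 2-D point set under
    maximization: the area of the union of boxes spanned by each point and
    the reference point, computed by a sweep over x in decreasing order."""
    pts = sorted(set(points), key=lambda p: (-p[0], -p[1]))
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:                    # non-dominated in this sweep
            hv += (x - ref[0]) * (y - prev_y)   # add the new strip
            prev_y = y
    return hv
```

QHV generalizes the exact computation to higher dimensions with a divide-and-conquer strategy; dominated points contribute nothing, as the sweep makes explicit.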
|
1207.4625
|
Appropriate Nouns with Obligatory Modifiers
|
cs.CL
|
The notion of appropriate sequence as introduced by Z. Harris provides a
powerful syntactic way of analysing the detailed meaning of various sentences,
including ambiguous ones. In an adjectival sentence like 'The leather was
yellow', the introduction of an appropriate noun, here 'colour', specifies
which quality the adjective describes. In some other adjectival sentences with
an appropriate noun, that noun plays the same part as 'colour' and seems to be
relevant to the description of the adjective. These appropriate nouns can
usually be used in elementary sentences like 'The leather had some colour', but
in many cases they have a more or less obligatory modifier. For example, you
can hardly mention that an object has a colour without qualifying that colour
at all. About 300 French nouns are appropriate in at least one adjectival
sentence and have an obligatory modifier. They enter in a number of sentence
structures related by several syntactic transformations. The appropriateness of
the noun and the fact that the modifier is obligatory are reflected in these
transformations. The description of these syntactic phenomena provides a basis
for a classification of these nouns. It also concerns the lexical properties of
thousands of predicative adjectives, and in particular the relations between
the sentence without the noun, 'The leather was yellow', and the adjectival
sentence with the noun, 'The colour of the leather was yellow'.
|
1207.4626
|
The Road to VEGAS: Guiding the Search over Neutral Networks
|
cs.NE
|
VEGAS (Varying Evolvability-Guided Adaptive Search) is a new methodology
proposed to deal with the neutrality property of some optimization problems.
Its main feature is to consider the whole neutral network rather than an
arbitrary solution. Moreover, VEGAS is designed to escape from plateaus based
on the evolvability of solutions and a multi-armed bandit. Experiments are
conducted on NK-landscapes with neutrality. Results show the importance of
considering the whole neutral network and of guiding the search cleverly. The
impact of the level of neutrality and of the exploration-exploitation
trade-off is analyzed in depth.
|
1207.4628
|
On the Effect of Connectedness for Biobjective Multiple and Long Path
Problems
|
cs.NE cs.AI
|
Recently, the property of connectedness has been claimed to provide a strong
motivation for the design of local search techniques for multiobjective
combinatorial optimization (MOCO). Indeed, when connectedness holds, a basic
Pareto local search, initialized with at least one non-dominated solution,
allows one to identify the efficient set exhaustively. However, this quickly
becomes infeasible in practice, as the number of efficient solutions typically
grows exponentially with the instance size. As a consequence, we generally
have to deal with a limited-size approximation, where a good sample set has to
be found. In this paper, we propose the biobjective multiple and long path
problems to show experimentally that, on the first problems, even if the
efficient set is connected, a local search may be outperformed by a simple
evolutionary algorithm in the sampling of the efficient set. Conversely, on
the second problems, a local search algorithm may successfully approximate a
disconnected efficient set. We then argue that connectedness is not the only
property to study for the design of local search heuristics for MOCO. This work
opens new discussions on a proper definition of the multiobjective fitness
landscape.
|
1207.4629
|
On the Neutrality of Flowshop Scheduling Fitness Landscapes
|
cs.NE cs.AI
|
Solving complex problems efficiently using metaheuristics, and in particular
local search, requires incorporating knowledge about the problem to be solved.
In this paper, the permutation flowshop problem is studied. It is well known
that in such problems, several solutions may have the same fitness value. As
this neutrality property is an important one, it should be taken into account
in the design of optimization methods. In the context of the permutation
flowshop, a deep landscape analysis focused on the neutrality property is
conducted, and proposals on how to use this neutrality to guide the search
efficiently are given.
|
1207.4631
|
Analyzing the Effect of Objective Correlation on the Efficient Set of
MNK-Landscapes
|
cs.NE cs.AI
|
In multiobjective combinatorial optimization, there exist two main classes
of metaheuristics, based either on multiple aggregations or on a dominance
relation. As in the single-objective case, the structure of the search space
can explain the difficulty of multiobjective metaheuristics and guide the
design of such methods. In this work we analyze the properties of
multiobjective combinatorial search spaces. In particular, we focus on the
features related to the efficient set, and we pay particular attention to the
correlation between objectives. Few benchmarks take such objective correlation
into account. Here, we define a general method to design multiobjective
problems with correlation. As an example, we extend the well-known
multiobjective NK-landscapes. By measuring different properties of the search
space, we show the importance of considering objective correlation in the
design of metaheuristics.
|
1207.4632
|
Clustering of Local Optima in Combinatorial Fitness Landscapes
|
cs.NE cs.AI
|
Using the recently proposed model of combinatorial landscapes: local optima
networks, we study the distribution of local optima in two classes of instances
of the quadratic assignment problem. Our results indicate that the two problem
instance classes give rise to very different configuration spaces. For the
so-called real-like class, the optima networks possess a clear modular
structure, while the networks belonging to the class of random uniform
instances are less well partitionable into clusters. We briefly discuss the
consequences of the findings for heuristically searching the corresponding
problem spaces.
|
1207.4656
|
Aspiration-induced reconnection in spatial public goods game
|
physics.soc-ph cs.SI
|
In this Letter, we introduce an aspiration-induced reconnection mechanism
into the spatial public goods game. A player will reconnect to a randomly
chosen player if its payoff acquired from the group centered on the neighbor
does not exceed the aspiration level. We find that an intermediate aspiration
level can best promote cooperation. This optimal phenomenon can be explained
by a negative feedback effect: a moderate level of reconnection, induced by
the intermediate aspiration level, can reverse the downfall of cooperators and
then facilitate the fast spreading of cooperation. In contrast, the
insufficient and excessive reconnection induced by low and high aspiration
levels, respectively, are not conducive to such an effect. Moreover,
we find that the intermediate aspiration level can lead to the heterogeneous
distribution of degree, which will be beneficial to the evolution of
cooperation.
|
1207.4661
|
A variant of list plus CRC concatenated polar code
|
cs.IT math.IT
|
A new family of codes based on polar codes, soft concatenation, and list+CRC
decoding is proposed. Numerical experiments show performance competitive with
industry standards and with the Tal-Vardy approach.
|
1207.4676
|
Proceedings of the 29th International Conference on Machine Learning
(ICML-12)
|
cs.LG stat.ML
|
This is an index to the papers that appear in the Proceedings of the 29th
International Conference on Machine Learning (ICML-12). The conference was held
in Edinburgh, Scotland, June 27th - July 3rd, 2012.
|
1207.4680
|
Reduced Complexity Super-Trellis Decoding for Convolutionally Encoded
Transmission Over ISI-Channels
|
cs.IT math.IT
|
In this paper we propose a matched encoding (ME) scheme for convolutionally
encoded transmission over intersymbol interference (usually called ISI)
channels. A novel trellis description enables equalization and decoding to be
performed jointly, i.e., it enables efficient super-trellis decoding. By means of
this matched non-linear trellis description we can significantly reduce the
number of states needed for the receiver-side Viterbi algorithm to perform
maximum-likelihood sequence estimation. Further complexity reduction is
achieved using the concept of reduced-state sequence estimation.
|
1207.4701
|
An Adaptive Online Ad Auction Scoring Algorithm for Revenue Maximization
|
cs.GT cs.IT math.IT
|
Sponsored search becomes an easy platform to match potential consumers'
intent with merchants' advertising. Advertisers express their willingness to
pay for each keyword in terms of bids to the search engine. When a user's query
matches the keyword, the search engine evaluates the bids and allocates slots
to the advertisers, which are displayed alongside the unpaid algorithmic search
results. The advertiser only pays the search engine when its ad is clicked by
the user and the price-per-click is determined by the bids of other competing
advertisers.
|
1207.4707
|
Correction to "A Note on Gallager's Capacity Theorem for Waveform
Channels"
|
cs.IT math.IT
|
We correct an alleged contradiction to Gallager's capacity theorem for
waveform channels as presented in a poster at the 2012 IEEE International
Symposium on Information Theory.
|
1207.4708
|
The Arcade Learning Environment: An Evaluation Platform for General
Agents
|
cs.AI
|
In this article we introduce the Arcade Learning Environment (ALE): both a
challenge problem and a platform and methodology for evaluating the development
of general, domain-independent AI technology. ALE provides an interface to
hundreds of Atari 2600 game environments, each one different, interesting, and
designed to be a challenge for human players. ALE presents significant research
challenges for reinforcement learning, model learning, model-based planning,
imitation learning, transfer learning, and intrinsic motivation. Most
importantly, it provides a rigorous testbed for evaluating and comparing
approaches to these problems. We illustrate the promise of ALE by developing
and benchmarking domain-independent agents designed using well-established AI
techniques for both reinforcement learning and planning. In doing so, we also
propose an evaluation methodology made possible by ALE, reporting empirical
results on over 55 different games. All of the software, including the
benchmark agents, is publicly available.
|
1207.4711
|
Efficient Feedback-Based Scheduling Policies for Chunked Network Codes
over Networks with Loss and Delay
|
cs.IT cs.NI math.IT
|
The problem of designing efficient feedback-based scheduling policies for
chunked codes (CC) over packet networks with delay and loss is considered. For
networks with feedback, two scheduling policies, referred to as random push
(RP) and local-rarest-first (LRF), already exist. We propose a new scheduling
policy, referred to as minimum-distance-first (MDF), based on the expected
number of innovative successful packet transmissions at each node of the
network prior to the "next" transmission time, given the feedback information
from the downstream node(s) about the received packets. Unlike the existing
policies, the MDF policy incorporates loss and delay models of the link in the
selection process of the chunk to be transmitted. Our simulations show that MDF
significantly reduces the expected time required for all the chunks (or
equivalently, all the message packets) to be decodable compared to the existing
scheduling policies for line networks with feedback. The improvements are
particularly profound (up to about 46% for the tested cases) for smaller chunks
and larger networks which are of more practical interest. The improvement in
the performance of the proposed scheduling policy comes at the cost of more
computations, and a slight increase in the amount of feedback. We also propose
a low-complexity version of MDF with a rather small loss in performance,
referred to as minimum-current-metric-first (MCMF). The MCMF policy is based on
the expected number of innovative packet transmissions prior to the "current"
transmission time, as opposed to the next transmission time used in MDF. Our
simulations (over line networks) demonstrate that MCMF is always superior to RP
and LRF policies, and the superiority becomes more pronounced for smaller
chunks and larger networks.
|
1207.4746
|
Heterogeneous length of stay of hosts' movements and spatial epidemic
spread
|
physics.soc-ph cs.SI
|
Infectious disease outbreaks are often characterized by a spatial component
induced by hosts' distribution, mobility, and interactions. Spatial models that
incorporate hosts' movements are being used to describe these processes, to
investigate the conditions for propagation, and to predict the spatial spread.
Several assumptions are being considered to model hosts' movements, ranging
from permanent movements to daily commuting, where the time spent at
destination is either infinite or assumes a homogeneous fixed value,
respectively. Prompted by empirical evidence, here we introduce a general
metapopulation approach to model the disease dynamics in a spatially structured
population where the mobility process is characterized by a heterogeneous
length of stay. We show that large fluctuations of the length of stay, as
observed in reality, can have a significant impact on the threshold conditions
for the global epidemic invasion, thus altering model predictions based on
simple assumptions, and displaying important public health implications.
|
1207.4747
|
Block-Coordinate Frank-Wolfe Optimization for Structural SVMs
|
cs.LG math.OC stat.ML
|
We propose a randomized block-coordinate variant of the classic Frank-Wolfe
algorithm for convex optimization with block-separable constraints. Despite its
lower iteration cost, we show that it achieves a similar convergence rate in
duality gap as the full Frank-Wolfe algorithm. We also show that, when applied
to the dual structural support vector machine (SVM) objective, this yields an
online algorithm that has the same low iteration complexity as primal
stochastic subgradient methods. However, unlike stochastic subgradient methods,
the block-coordinate Frank-Wolfe algorithm allows us to compute the optimal
step-size and yields a computable duality gap guarantee. Our experiments
indicate that this simple algorithm outperforms competing structural SVM
solvers.
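The randomized block-coordinate step described in this abstract can be illustrated on a toy problem. The sketch below (my own illustration, not the paper's structural SVM objective) minimizes a quadratic over a product of probability simplices: each iteration picks a random block, calls the linear minimization oracle on that block alone (a simplex vertex), and takes the standard 2n/(k+2n) block-coordinate Frank-Wolfe step; the matrix, blocks, and iteration count are all assumed for the example.

```python
import numpy as np

def block_coordinate_frank_wolfe(A, b, block_sizes, iters=2000, seed=0):
    """Minimize ||Ax - b||^2 over a product of probability simplices,
    one simplex per block (a toy stand-in for block-separable constraints)."""
    rng = np.random.default_rng(seed)
    n_blocks = len(block_sizes)
    offsets = np.concatenate(([0], np.cumsum(block_sizes)))
    # Feasible start: the uniform point in each simplex.
    x = np.concatenate([np.full(m, 1.0 / m) for m in block_sizes])
    for k in range(iters):
        i = rng.integers(n_blocks)                # pick a random block
        lo, hi = offsets[i], offsets[i + 1]
        grad = 2 * A.T @ (A @ x - b)              # gradient (cheap for this toy f)
        # Linear minimization oracle over block i's simplex: a vertex.
        s_block = np.zeros(hi - lo)
        s_block[np.argmin(grad[lo:hi])] = 1.0
        gamma = 2 * n_blocks / (k + 2 * n_blocks) # standard BCFW step size
        x[lo:hi] = (1 - gamma) * x[lo:hi] + gamma * s_block
    return x
```

Only one block moves per iteration, which is why the per-iteration cost is so much lower than a full Frank-Wolfe step while the duality-gap convergence rate is comparable.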
|
1207.4748
|
Hierarchical Clustering using Randomly Selected Similarities
|
stat.ML cs.IT cs.LG math.IT
|
The problem of hierarchical clustering items from pairwise similarities is
found across various scientific disciplines, from biology to networking. Often,
applications of clustering techniques are limited by the cost of obtaining
similarities between pairs of items. While prior work has shown how to
reconstruct a clustering from a significantly reduced set of pairwise
similarities via adaptive measurements, these techniques are applicable only
when the choice of similarities is available to the user. In this paper, we
examine reconstructing a hierarchical clustering from similarity observations
made at random. We derive precise bounds which show that a significant fraction of
the hierarchical clustering can be recovered using fewer than all the pairwise
similarities. We find that the correct hierarchical clustering down to a
constant fraction of the total number of items (i.e., clusters sized O(N)) can
be found using only O(N log N) randomly selected pairwise similarities in
expectation.
|
1207.4763
|
Buffer-Aided Relaying with Adaptive Link Selection - Fixed and Mixed
Rate Transmission
|
cs.IT math.IT
|
We consider a simple network consisting of a source, a half-duplex DF relay
with a buffer, and a destination. We assume that the direct source-destination
link is not available and all links undergo fading. We propose two new
buffer-aided relaying schemes. In the first scheme, neither the source nor the
relay has CSIT, and consequently, both nodes are forced to transmit with fixed
rates. In contrast, in the second scheme, the source does not have CSIT and
transmits with fixed rate but the relay has CSIT and adapts its transmission
rate accordingly. In the absence of delay constraints, for both fixed rate and
mixed rate transmission, we derive the throughput-optimal buffer-aided relaying
protocols which select either the source or the relay for transmission based on
the instantaneous SNRs of the source-relay and the relay-destination links. In
addition, for the delay constrained case, we develop buffer-aided relaying
protocols that achieve a predefined average delay. Compared to conventional
relaying protocols, which select the transmitting node according to a
predefined schedule independent of the link instantaneous SNRs, the proposed
buffer-aided protocols with adaptive link selection achieve large performance
gains. In particular, for fixed rate transmission, we show that the proposed
protocol achieves a diversity gain of two as long as an average delay of more
than three time slots can be afforded. Furthermore, for mixed rate transmission
with an average delay of $E\{T\}$ time slots, a multiplexing gain of
$r=1-1/(2E\{T\})$ is achieved. Hence, for mixed rate transmission, for
sufficiently large average delays, buffer-aided half-duplex relaying with and
without adaptive link selection does not suffer from a multiplexing gain loss
compared to full-duplex relaying.
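The gain from adaptive link selection can be seen in a toy Monte Carlo experiment. The sketch below is not the paper's throughput-optimal protocol: it simply compares a naive adaptive rule (transmit on whichever link is not in outage, preferring to drain the relay buffer) against a conventional alternating schedule, with Rayleigh fading and the SNR threshold and mean SNR chosen purely for illustration.

```python
import numpy as np

def throughput(adaptive, slots=200000, snr_th=1.0, mean_snr=2.0, seed=0):
    """Fixed-rate two-hop toy simulation: count packets delivered to the
    destination per slot.  Rayleigh fading gives exponential instantaneous
    SNRs on both hops; a hop succeeds when its SNR exceeds snr_th."""
    rng = np.random.default_rng(seed)
    g_sr = rng.exponential(mean_snr, slots)  # source-relay instantaneous SNR
    g_rd = rng.exponential(mean_snr, slots)  # relay-destination SNR
    buffer, delivered = 0, 0
    for t in range(slots):
        sr_ok, rd_ok = g_sr[t] >= snr_th, g_rd[t] >= snr_th
        if adaptive:
            if rd_ok and buffer > 0:         # drain the buffer when R-D is good
                buffer -= 1
                delivered += 1
            elif sr_ok:                      # otherwise fill it when S-R is good
                buffer += 1
        else:                                # conventional: alternate S->R, R->D
            if t % 2 == 0:
                if sr_ok:
                    buffer += 1
            elif buffer > 0 and rd_ok:
                buffer -= 1
                delivered += 1
    return delivered / slots
```

Even this crude selection rule beats the fixed schedule, because a slot is wasted under alternation whenever the scheduled link happens to be in outage while the other link is good.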
|
1207.4766
|
Computer control of gene expression: Robust setpoint tracking of protein
mean and variance using integral feedback
|
math.OC cs.SY q-bio.MN q-bio.QM
|
Protein mean and variance levels in a simple stochastic gene expression
circuit are controlled using proportional integral feedback. It is shown that
the protein mean level can be globally and robustly tracked to any desired
value using a simple PI controller that satisfies explicit sufficient
conditions. Controlling both the mean and variance on the other hand requires
the use of an additional control input, chosen here as the mRNA degradation
rate. Local robust tracking of mean and variance is proved to be achievable
using multivariable PI control, provided that the reference point satisfies
necessary conditions imposed by the system. Even more importantly, it is shown
that there exist PI controllers that locally, robustly and simultaneously
stabilize all the equilibrium points inside the admissible region. Simulation
examples illustrate the results.
|
1207.4800
|
Finite Alphabet Iterative Decoders, Part I: Decoding Beyond Belief
Propagation on BSC
|
cs.IT math.IT
|
We introduce a new paradigm for finite precision iterative decoding on
low-density parity-check codes over the Binary Symmetric channel. The messages
take values from a finite alphabet, and unlike traditional quantized decoders
which are quantized versions of the Belief propagation (BP) decoder, the
proposed finite alphabet iterative decoders (FAIDs) do not propagate quantized
probabilities or log-likelihoods and the variable node update functions do not
mimic the BP decoder. Rather, the update functions are maps designed using the
knowledge of potentially harmful subgraphs that could be present in a given
code, thereby rendering these decoders capable of outperforming the BP in the
error floor region. On certain column-weight-three codes of practical interest,
we show that there exist 3-bit precision FAIDs that surpass the BP decoder in
the error floor. Hence, FAIDs are able to achieve a superior performance at
much lower complexity. We also provide a methodology for the selection of FAIDs
that is not code-specific, but gives a set of candidate FAIDs containing
potentially good decoders in the error floor region for any column-weight-three
code. We validate the code generality of our methodology by providing
particularly good three-bit precision FAIDs for a variety of codes with
different rates and lengths.
|
1207.4807
|
Finite Alphabet Iterative Decoders, Part II: Improved Guaranteed Error
Correction of LDPC Codes via Iterative Decoder Diversity
|
cs.IT math.IT
|
Recently, we introduced a new class of finite alphabet iterative decoders
(FAIDs) for low-density parity-check (LDPC) codes. These decoders are capable
of surpassing belief propagation in the error floor region on the Binary
Symmetric channel with much lower complexity. In this paper, we introduce a
novel scheme to further increase the guaranteed error correction capability
from what is achievable by a FAID on column-weight-three LDPC codes. The
proposed scheme uses a plurality of FAIDs which collectively correct more error
patterns than a single FAID on a given code. The collection of FAIDs utilized
by the scheme is judiciously chosen to ensure that individual decoders have
different decoding dynamics and correct different error patterns. Consequently,
they can collectively correct a diverse set of error patterns, which is
referred to as decoder diversity. We provide a systematic method to generate
the set of FAIDs for decoder diversity on a given code based on the knowledge
of the most harmful trapping sets present in the code. Using the well-known
column-weight-three $(155,64)$ Tanner code with $d_{min}$ = 20 as an example,
we describe the method in detail and show that the guaranteed error correction
capability can be significantly increased with decoder diversity.
|
1207.4813
|
Exploring the rationality of some syntactic merging operators (extended
version)
|
cs.AI
|
Most merging operators are defined by semantic methods which have very high
computational complexity. In order to obtain operators with lower computational
complexity, some merging operators defined in a syntactical way have been
proposed. In this work we define some syntactical merging operators and explore
their rationality properties. To do so, we constrain the belief bases
to be sets of formulas very close to logic programs, and the underlying logic is
defined through the forward chaining rule (Modus Ponens). We propose two types of
operators: arbitration operators, when the inputs are only two bases, and fusion
operators with integrity constraints. We introduce a set of postulates inspired
by the LS postulates proposed by Liberatore and Schaerf, and analyze the
first class of operators through them. We also introduce a set of
postulates inspired by the KP postulates proposed by Konieczny and Pino P\'erez,
and analyze the second class of operators through them.
|
1207.4814
|
Automorphism Groups of Graphical Models and Lifted Variational Inference
|
cs.AI cs.LG math.CO stat.CO stat.ML
|
Using the theory of group action, we first introduce the concept of the
automorphism group of an exponential family or a graphical model, thus
formalizing the general notion of symmetry of a probabilistic model. This
automorphism group provides a precise mathematical framework for lifted
inference in the general exponential family. Its group action partitions the
set of random variables and feature functions into equivalence classes (called
orbits) having identical marginals and expectations. The inference problem is
then effectively reduced to computing marginals or expectations for each
class, thus avoiding the need to deal with each individual variable or feature.
We demonstrate the usefulness of this general framework by lifting two classes
of variational approximation for MAP inference: local LP relaxation and local
LP relaxation with cycle constraints; the latter yields the first lifted
inference scheme that operates on a bound tighter than local constraints.
Initial experimental results demonstrate that lifted MAP inference with cycle
constraints achieves state-of-the-art performance, obtaining much better
objective function values than the local approximation while remaining
relatively efficient.
|
1207.4821
|
The Architecture of an Autonomic, Resource-Aware, Workstation-Based
Distributed Database System
|
cs.DB cs.DC
|
Distributed software systems that are designed to run over workstation
machines within organisations are termed workstation-based. Workstation-based
systems are characterised by dynamically changing sets of machines that are
used primarily for other, user-centric tasks. They must be able to adapt to and
utilize spare capacity when and where it is available, and ensure that the
non-availability of an individual machine does not affect the availability of
the system. This thesis focuses on the requirements and design of a
workstation-based database system, which is motivated by an analysis of
existing database architectures that are typically run over static, specially
provisioned sets of machines. A typical clustered database system -- one that
is run over a number of specially provisioned machines -- executes queries
interactively, returning a synchronous response to applications, with its data
made durable and resilient to the failure of machines. There are no existing
workstation-based databases. Furthermore, other workstation-based systems do
not attempt to achieve the requirements of interactivity and durability,
because they are typically used to execute asynchronous batch processing jobs
that tolerate data loss -- results can be re-computed. These systems use
external servers to store the final results of computations rather than
workstation machines. This thesis describes the design and implementation of a
workstation-based database system and investigates its viability by evaluating
its performance against existing clustered database systems and testing its
availability during machine failures.
|
1207.4825
|
A new algorithm for extracting a small representative subgraph from a
very large graph
|
cs.DS cs.SI physics.soc-ph
|
Many real-world networks are prohibitively large for data retrieval, storage
and analysis of all of its nodes and links. Understanding the structure and
dynamics of these networks entails creating a smaller representative sample of
the full graph while preserving its relevant topological properties. In this
report, we show that graph sampling algorithms currently proposed in the
literature are not able to preserve network properties even with sample sizes
containing as many as 20% of the nodes from the original graph. We present a
new sampling algorithm, called Tiny Sample Extractor, with a new goal of a
sample size smaller than 5% of the original graph while preserving two key
properties of a network: the degree distribution and the clustering
coefficient. Our approach is based on a new empirical method of estimating
measurement biases in crawling algorithms and compensating for them
accordingly. We present a detailed comparison of best known graph sampling
algorithms, focusing in particular on how the properties of the sample
subgraphs converge to those of the original graph as they grow. These results
show that our sampling algorithm extracts a smaller subgraph than other
algorithms while also achieving a closer convergence to the degree
distribution, measured by the degree exponent, of the original graph. The
subgraph generated by the Tiny Sample Extractor, however, is not necessarily
representative of the full graph with regard to other properties such as
assortativity. This indicates that the problem of extracting a truly
representative small subgraph from a large graph remains unsolved.
|
1207.4831
|
Robust Energy Management for Microgrids With High-Penetration Renewables
|
math.OC cs.SY
|
Due to its reduced communication overhead and robustness to failures,
distributed energy management is of paramount importance in smart grids,
especially in microgrids, which feature distributed generation (DG) and
distributed storage (DS). Distributed economic dispatch for a microgrid with
high renewable energy penetration and demand-side management operating in
grid-connected mode is considered in this paper. To address the intrinsically
stochastic availability of renewable energy sources (RES), a novel power
scheduling approach is introduced. The approach involves the actual renewable
energy as well as the energy traded with the main grid, so that the
supply-demand balance is maintained. The optimal scheduling strategy minimizes
the microgrid net cost, which includes DG and DS costs, utility of dispatchable
loads, and worst-case transaction cost stemming from the uncertainty in RES.
Leveraging the dual decomposition, the optimization problem formulated is
solved in a distributed fashion by the local controllers of DG, DS, and
dispatchable loads. Numerical results are reported to corroborate the
effectiveness of the novel approach.
|
1207.4860
|
Inference of Extreme Synchrony with an Entropy Measure on a Bipartite
Network
|
physics.data-an cs.CE physics.soc-ph q-fin.RM
|
This article proposes a method to quantify the structure of a bipartite graph
using a network entropy per link. The network entropy of a bipartite graph with
random links is calculated both numerically and theoretically. As an
application of the proposed method to the analysis of collective behavior, the
activity with which participants quote and trade in the foreign exchange market
is quantified. The network entropy per link is found to correspond to the
macroeconomic situation. A finite mixture of Gumbel distributions is used to
fit the empirical distribution for the minimum values of network entropy per
link in each week. The mixture of Gumbel distributions with parameter estimates
by segmentation procedure is verified by the Kolmogorov--Smirnov test. The
finite mixture of Gumbel distributions that extrapolate the empirical
probability of extreme events has explanatory power at a statistically
significant level.
|
1207.4883
|
Bounds of restricted isometry constants in extreme asymptotics: formulae
for Gaussian matrices
|
math.NA cs.IT math.IT
|
Restricted Isometry Constants (RICs) provide a measure of how far from an
isometry a matrix can be when acting on sparse vectors. This, and related
quantities, provide a mechanism by which standard eigen-analysis can be applied
to topics relying on sparsity. RIC bounds have been presented for a variety of
random matrices and ranges of matrix dimensions and sparsity. We provide
explicit formulae for RIC bounds of n by N Gaussian matrices with sparsity k in
three settings: a) n/N fixed and k/n approaching zero, b) k/n fixed and n/N
approaching zero, and c) n/N approaching zero with k/n decaying inverse
logarithmically in N/n; in these three settings the RICs a) decay to zero, b)
become unbounded (or approach inherent bounds), and c) approach a non-zero
constant. Implications of these results for RIC based analysis of compressed
sensing algorithms are presented.
|
1207.4914
|
Opinions, Conflicts and Consensus: Modeling Social Dynamics in a
Collaborative Environment
|
physics.soc-ph cs.CY cs.SI nlin.AO
|
Information-communication technology promotes collaborative environments like
Wikipedia where, however, controversiality and conflicts can appear. To
describe the rise, persistence, and resolution of such conflicts we devise an
extended opinion dynamics model where agents with different opinions perform a
single task to make a consensual product. As a function of the convergence
parameter describing the influence of the product on the agents, the model
shows spontaneous symmetry breaking of the final consensus opinion represented
by the medium. In the case when agents are replaced with new ones at a certain
rate, a transition from mainly consensus to a perpetual conflict occurs, which
is in qualitative agreement with the scenarios observed in Wikipedia.
|
1207.4931
|
Motion Planning Of an Autonomous Mobile Robot Using Artificial Neural
Network
|
cs.RO cs.AI cs.LG cs.NE
|
The paper presents the electronic design and motion planning of a robot that
makes decisions about its straight motion and precise turns using an Artificial
Neural Network (ANN). The ANN enables the robot to learn so that it can move
autonomously. The computed weights are implemented in a microcontroller.
Testing shows excellent performance.
|
1207.4933
|
Multi-parameter models of innovation diffusion on complex networks
|
nlin.AO cs.MA cs.SI physics.soc-ph
|
A model, applicable to a range of innovation diffusion applications with a
strong peer to peer component, is developed and studied, along with methods for
its investigation and analysis. A particular application is to individual
households deciding whether to install an energy efficiency measure in their
home. The model represents these individuals as nodes on a network, each with a
variable representing their current state of adoption of the innovation. The
motivation to adopt is composed of three terms, representing personal
preference, an average of each individual's network neighbours' states and a
system average, which is a measure of the current social trend. The adoption
state of a node changes if a weighted linear combination of these factors
exceeds some threshold. Numerical simulations have been carried out, computing
the average uptake after a sufficient number of time-steps over many
realisations at a range of model parameter values, on various network
topologies, including random (Erdos-Renyi), small world (Watts-Strogatz) and
(Newman's) highly clustered, community-based networks. An analytical and
probabilistic approach has been developed to account for the observed
behaviour, which explains the results of the numerical calculations.
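The three-term adoption rule in this abstract is easy to prototype. The sketch below is an illustrative toy version on an Erdos-Renyi graph: the weights on personal preference, the neighbour average, and the system average, the threshold, and the early-adopter rule are all assumptions of mine, not the paper's parameter values.

```python
import numpy as np

def simulate_adoption(n=200, p=0.05, w=(0.4, 0.4, 0.2), theta=0.3,
                      steps=50, seed=0):
    """Toy three-term threshold model: node i adopts once a weighted
    combination of its personal preference, the mean state of its network
    neighbours, and the system-wide mean state exceeds theta.
    w = (personal, neighbours, global) weights are illustrative choices."""
    rng = np.random.default_rng(seed)
    adj = rng.random((n, n)) < p
    adj = np.triu(adj, 1)
    adj = adj | adj.T                          # Erdos-Renyi random graph
    pref = rng.random(n)                       # personal preference term
    state = (pref > 0.9).astype(float)         # a few early adopters
    deg = np.maximum(adj.sum(1), 1)            # avoid division by zero
    for _ in range(steps):
        neigh_mean = (adj @ state) / deg       # neighbour-average term
        drive = w[0] * pref + w[1] * neigh_mean + w[2] * state.mean()
        # Adoption is treated as absorbing: the state never reverts.
        state = np.maximum(state, (drive > theta).astype(float))
    return state.mean()                        # final uptake fraction
```

Averaging this uptake over many seeds and sweeping `w` or `theta` reproduces the kind of parameter study the abstract describes; swapping in Watts-Strogatz or community-based graphs only changes how `adj` is built.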
|
1207.4940
|
Ontology for Cellular Communication
|
cs.SE cs.AI
|
The lack of interoperability between mobile cellular access networks has long
been a challenging obstacle, which telecommunication engineering is trying to
overcome. In second generation networks for example, this problem lies in the
fact that there are multiple standards. Each of these standards can operate in
the same frequency range. However, each utilizes a different Radio Technology
and Modulation Scheme, which are characteristics of the standard. Therefore,
the lack of interoperability in 2G occurs because of the lack of
standardization. Interoperability within 3G networks is limited to a few
operating modes using different Radio Transmission Technologies that are not
inter-operable. Thus, interoperability remains an issue for 3G. 4G technology,
even though successful in its various trials, cannot guarantee
interoperability. This holds within each network generation; between
heterogeneous network generations the situation is even worse. Our
approach is first to analyze the structure, inputs, and outputs of three
different cellular technologies, performing a domain analysis (of this subset
of technologies) and producing a feature model of the domain. Finally, we
sought to build an ontology capable of providing a common view of the domain,
providing an effective representation of relations between representations of
corresponding concepts in different cellular technologies.
|
1207.4941
|
Clustering function: a measure of social influence
|
stat.AP cs.SI math.CO math.PR physics.soc-ph
|
A commonly used characteristic of statistical dependence of adjacency
relations in real networks, the clustering coefficient, evaluates chances that
two neighbours of a given vertex are adjacent. An extension is obtained by
considering conditional probabilities that two randomly chosen vertices are
adjacent given that they have r common neighbours. We denote such probabilities
cl(r) and call r-> cl(r) the clustering function.
We compare clustering functions of several networks having non-negligible
clustering coefficient. They show similar patterns and surprising regularity.
We establish the first-order asymptotics (as the number of vertices tends to
infinity) of the clustering function of related random intersection graph
models admitting a nonvanishing clustering coefficient and an asymptotic degree
distribution with a finite second moment.
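The empirical clustering function defined above is straightforward to compute from an adjacency matrix, since (A^2)[i,j] counts the common neighbours of i and j. The following is a minimal sketch of that computation (my own illustration of the definition, not the authors' code):

```python
import numpy as np

def clustering_function(adj):
    """Empirical clustering function cl(r): the fraction of vertex pairs
    with exactly r common neighbours that are themselves adjacent."""
    adj = np.asarray(adj, dtype=int)
    common = adj @ adj                  # (A^2)[i,j] = number of common neighbours
    n = adj.shape[0]
    iu = np.triu_indices(n, 1)          # visit each unordered pair once
    r_vals, edges = common[iu], adj[iu]
    cl = {}
    for r in np.unique(r_vals):
        mask = r_vals == r
        cl[int(r)] = edges[mask].mean() # P(adjacent | r common neighbours)
    return cl
```

On a triangle every pair is adjacent and shares one common neighbour, so cl(1) = 1; on a three-vertex path the only pair with a common neighbour is the non-adjacent end pair, so cl(1) = 0. The usual clustering coefficient corresponds to conditioning on "at least one common neighbour" rather than on each r separately.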
|
1207.4958
|
Minimally Infrequent Itemset Mining using Pattern-Growth Paradigm and
Residual Trees
|
cs.DB
|
Itemset mining has been an active area of research due to its successful
application in various data mining scenarios including finding association
rules. Though most of the past work has been on finding frequent itemsets,
infrequent itemset mining has demonstrated its utility in web mining,
bioinformatics and other fields. In this paper, we propose a new algorithm
based on the pattern-growth paradigm to find minimally infrequent itemsets. A
minimally infrequent itemset has no subset which is also infrequent. We also
introduce the novel concept of residual trees. We further utilize the residual
trees to mine multiple level minimum support itemsets where different
thresholds are used for finding frequent itemsets for different lengths of the
itemset. Finally, we analyze the behavior of our algorithm with respect to
different parameters and show through experiments that it outperforms the
competing ones.
|
1207.4973
|
Variance Based Algorithm for Grouped-Subcarrier Allocation in OFDMA
Wireless Systems
|
cs.IT math.IT
|
In this paper, a reduced complexity algorithm is proposed for
grouped-subcarriers and power allocation in the downlink of OFDMA packet access
wireless systems. The available subcarriers for data communication are grouped
into partitions (groups) where each group is defined as a subchannel. The
scheduler located at the base station allocates subchannels to users based on
the variance of subchannel gains. The proposed algorithm for group allocation
is a two-step algorithm that allocates groups to users based on the descending
order of their variances to resolve the conflicting selection problem, followed
by a step of fairness proportionality enhancement. To reduce the feedback
burden and the complexity of the power allocation algorithm, each user feeds
back the CSI on each group if the variance of gains of subcarriers inside it is
less than a predefined threshold. To show the performance of the proposed
scheme, a selection of simulation results is presented.
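The variance-ordered first step of the allocation can be sketched as follows. This is a hypothetical reading of the abstract: gains are arranged as a (users x groups x subcarriers) array, (user, group) pairs are visited in descending order of the variance of the group's subcarrier gains, and the first claim on each group wins, which resolves conflicting selections; the fairness-proportionality step and the feedback thresholding are omitted.

```python
import numpy as np

def variance_based_allocation(gains):
    """First-step sketch: assign each subchannel group to one user by
    descending variance of its subcarrier gains.  `gains` has shape
    (n_users, n_groups, n_subcarriers); returns the owner of each group."""
    n_users, n_groups, _ = gains.shape
    var = gains.var(axis=2)                       # variance per (user, group)
    # All (user, group) pairs, sorted by variance, largest first.
    order = np.dstack(np.unravel_index(np.argsort(-var, axis=None),
                                       var.shape))[0]
    owner = -np.ones(n_groups, dtype=int)
    for u, g in order:
        if owner[g] == -1:
            owner[g] = u                          # highest-variance claim wins
    return owner
```

Every group is eventually visited, so the returned array is a complete assignment; a fairness pass would then reassign groups from over-served to under-served users.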
|
1207.4984
|
Control and Synthesis of Non-Interferent Timed Systems
|
cs.LO cs.FL cs.SY
|
In this paper, we focus on the synthesis of secure timed systems which are
modelled as timed automata. The security property that the system must satisfy
is a non-interference property. Intuitively, non-interference ensures the
absence of any causal dependency from a high-level domain to a lower-level
domain. Various notions of non-interference have been defined in the
literature, and in this paper we focus on Strong Non-deterministic
Non-Interference (SNNI) and two (bi)simulation based variants thereof (CSNNI
and BSNNI). We consider timed non-interference properties for timed systems
specified by timed automata and we study the two following problems: (1) check
whether it is possible to find a sub-system so that it is non-interferent; if
yes (2) compute a (largest) sub-system which is non-interferent.
|
1207.4992
|
Fast nonparametric classification based on data depth
|
stat.ML cs.LG
|
A new procedure, called DDa-procedure, is developed to solve the problem of
classifying d-dimensional objects into q >= 2 classes. The procedure is
completely nonparametric; it uses q-dimensional depth plots and a very
efficient algorithm for discrimination analysis in the depth space [0,1]^q.
Specifically, the depth is the zonoid depth, and the algorithm is the
alpha-procedure. In the case of more than two classes, several binary
classifications are performed and a majority rule is applied. Special
treatments are discussed for 'outsiders', that is, data having zero depth
vector. The DDa-classifier is applied to simulated as well as real data, and
the results are compared with those of similar procedures that have been
recently proposed. In most cases the new procedure has comparable error rates,
but is much faster than other classification approaches, including the SVM.
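The multi-class reduction mentioned in the abstract — several binary classifications combined by a majority rule — can be sketched generically. In the sketch below, `binary_fit` is a placeholder for any two-class learner (the depth-based alpha-procedure would plug in here); the nearest-centroid learner used for demonstration is my own stand-in, not the DDa-procedure, and class labels are assumed to be small non-negative integers.

```python
import numpy as np
from itertools import combinations

def majority_vote_multiclass(train_X, train_y, test_X, binary_fit):
    """One-vs-one scheme for q > 2 classes: train a binary rule for every
    pair of classes, then classify each test point by majority vote."""
    classes = np.unique(train_y)
    voters = []
    for a, b in combinations(classes, 2):
        mask = (train_y == a) | (train_y == b)
        voters.append(binary_fit(train_X[mask], train_y[mask], a, b))
    votes = np.stack([predict(test_X) for predict in voters])  # (pairs, n_test)
    return np.array([np.bincount(col.astype(int)).argmax() for col in votes.T])

def centroid_fit(X, y, a, b):
    """Stand-in binary learner: nearest class centroid (not the alpha-procedure)."""
    ca, cb = X[y == a].mean(0), X[y == b].mean(0)
    return lambda Z: np.where(((Z - ca) ** 2).sum(1) <= ((Z - cb) ** 2).sum(1),
                              a, b)
```

For q classes this trains q(q-1)/2 binary rules; ties in the vote are broken toward the smaller label by `argmax`.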
|
1207.5007
|
Multisegmentation through wavelets: Comparing the efficacy of Daubechies
vs Coiflets
|
cs.CV
|
In this paper, we carry out a comparative study of the efficacy of wavelets
belonging to the Daubechies and Coiflet families in achieving image
segmentation through a fast statistical algorithm. The fact that wavelets of
the Daubechies family optimally capture polynomial trends while those of the
Coiflet family satisfy a mini-max condition makes this comparison interesting.
In the context of the present algorithm, the performance of Coiflet wavelets is
found to be better than that of Daubechies wavelets.
|
1207.5010
|
The GDOF of 3-user MIMO Gaussian interference channel
|
cs.IT math.IT
|
The paper establishes the optimal generalized degrees of freedom (GDOF) of
3-user $M \times N$ multiple-input multiple-output (MIMO) Gaussian interference
channel (GIC) in which each transmitter has $M$ antennas and each receiver has
$N$ antennas. A constraint of $2M \leq N$ is imposed so that random coding with
message-splitting achieves the optimal GDOF. Unlike the symmetric case, the two
cross channels to unintended receivers from each transmitter can have different
strengths, and hence the well-known Han-Kobayashi common-private message
splitting does not achieve the optimal GDOF. Instead, splitting each user's
message into three parts is shown to achieve the optimal GDOF. The capacity of
the corresponding deterministic model is first established, which provides a
systematic way of determining side information for the converse. Although this
deterministic model is philosophically similar to the one considered by Gou and
Jafar, additional constraints are imposed so that the capacity description of
the deterministic model contains only the terms essential for establishing the
GDOF of the Gaussian case. Based on this, the optimal GDOF of the Gaussian case
is established with an $\mathcal{O}(1)$ capacity approximation. The behavior of
the GDOF is interestingly different from that of the corresponding symmetric
case. Regarding the converse, several multiuser outer bounds suitable for the
asymmetric case are derived by a non-trivial generalization of the symmetric
case.
|
1207.5027
|
Power-Laws and the Conservation of Information in discrete token
systems: Part 1 General Theory
|
cs.IT math-ph math.IT math.MP q-bio.GN
|
The Conservation of Energy plays a pivotal part in the development of the
physical sciences. With the growth of computation and the study of other
discrete token based systems such as the genome, it is useful to ask if there
are conservation principles which apply to such systems and what kind of
functional behaviour they imply for such systems.
Here I propose that the Conservation of Hartley-Shannon Information plays the
same over-arching role in discrete token based systems as the Conservation of
Energy does in physical systems. I will go on to prove that this implies
power-law behaviour in component sizes in software systems no matter what they
do or how they were built, and also implies the constancy of average gene
length in biological systems as reported for example by Lin Xu et al
(10.1093/molbev/msk019).
These propositions are supported by very large amounts of experimental data
extending the first presentation of these ideas in Hatton (2011, IFIP / SIAM /
NIST Working Conference on Uncertainty Quantification in Scientific Computing,
Boulder, August 2011).
|
1207.5040
|
Capacity Theorems for the Cognitive Radio Channel with Confidential
Messages
|
cs.IT math.IT
|
As a brain-inspired wireless communication scheme, cognitive radio is a novel
approach to promote the efficient use of the scarce radio spectrum by allowing
some users called cognitive users to access the under-utilized spectrum
licensed out to the primary users. Besides highly reliable communication and
efficient utilization of the radio spectrum, the security of information
transmission against eavesdropping is critical in the cognitive radios for many
potential applications. In this paper, this problem is investigated from an
information theoretic viewpoint. Capacity limits are explored for the Cognitive
Radio Channel (CRC) with confidential messages. As an idealized information
theoretic model for the cognitive radio, this channel includes two transmitters
which send independent messages to their corresponding receivers such that one
transmitter, i.e., the cognitive transmitter, has access non-causally to the
message of the other transmitter, i.e., the primary transmitter. The message
designated to each receiver is required to be kept confidential with respect to
the other receiver. The secrecy level for each message is evaluated using the
equivocation rate. Novel inner and outer bounds for the capacity-equivocation
region are established. It is shown that these bounds coincide for some special
cases. Specifically, the capacity-equivocation region is derived for a class of
less-noisy CRCs and also a class of semi-deterministic CRCs. For the case where
only the message of the cognitive transmitter is required to be kept
confidential, the capacity-equivocation region is also established for the
Gaussian CRC with weak interference.
|
1207.5055
|
Construction of zero autocorrelation stochastic waveforms
|
cs.IT math.IT
|
Stochastic waveforms are constructed whose expected autocorrelation can be
made arbitrarily small outside the origin. These waveforms are unimodular and
complex-valued. Waveforms with such spike-like autocorrelation are desirable in
waveform design and are particularly useful in areas of radar and
communications. Both discrete and continuous waveforms with low expected
autocorrelation are constructed. Further, in the discrete case, frames for the
d-dimensional complex space are constructed from these waveforms, and their
frame properties are studied.
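The low expected autocorrelation of random unimodular waveforms can be illustrated with a minimal sketch (this is an illustrative construction with i.i.d. uniform phases, not the paper's specific one; all function names are mine):

```python
import cmath
import math
import random

def unimodular_waveform(n, seed=0):
    # Unimodular stochastic waveform: sample k is e^{2*pi*i*theta_k}
    # with theta_k i.i.d. uniform on [0, 1), so every sample has modulus 1.
    rng = random.Random(seed)
    return [cmath.exp(2j * math.pi * rng.random()) for _ in range(n)]

def autocorrelation(x, k):
    # Aperiodic autocorrelation at lag k, normalized by the waveform length.
    n = len(x)
    return sum(x[j] * x[j + k].conjugate() for j in range(n - k)) / n

x = unimodular_waveform(4096)
peak = abs(autocorrelation(x, 0))  # exactly 1: each term has modulus 1
side = max(abs(autocorrelation(x, k)) for k in range(1, 20))  # ~1/sqrt(n)
```

For n samples, the off-origin lags are sums of roughly n unit-modulus terms with independent random phases, so their magnitude concentrates around 1/sqrt(n) of the peak, which is the "spike-like" shape the abstract describes.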
|
1207.5063
|
Secrecy Sum-Rates for Multi-User MIMO Regularized Channel Inversion
Precoding
|
cs.IT math.IT
|
In this paper, we propose a linear precoder for the downlink of a multi-user
MIMO system in which the users can potentially act as eavesdroppers. The
proposed precoder is based on regularized channel inversion (RCI) with a
regularization parameter $\alpha$ and power allocation vector chosen in such a
way that the achievable secrecy sum-rate is maximized. We consider the
worst-case scenario for the multi-user MIMO system, where the transmitter
assumes users cooperate to eavesdrop on other users. We derive the achievable
secrecy sum-rate and obtain the closed-form expression for the optimal
regularization parameter $\alpha_{\mathrm{LS}}$ of the precoder using
large-system analysis. We show that the RCI precoder with
$\alpha_{\mathrm{LS}}$ outperforms several other linear precoding schemes, and
it achieves a secrecy sum-rate that has the same scaling factor as the sum-rate
achieved by the optimum RCI precoder without secrecy requirements. We propose a
power allocation algorithm to maximize the secrecy sum-rate for fixed $\alpha$.
We then extend our algorithm to maximize the secrecy sum-rate by jointly
optimizing $\alpha$ and the power allocation vector. The jointly optimized
precoder outperforms RCI with $\alpha_{\mathrm{LS}}$ and equal power allocation
by up to 20 percent at practical values of the signal-to-noise ratio and for 4
users and 4 transmit antennas.
|
1207.5064
|
A Novel Metric Approach Evaluation For The Spatial Enhancement Of
Pan-Sharpened Images
|
cs.CV
|
Various methods can be used to produce high-resolution multispectral images
from a high-resolution panchromatic image (PAN) and low-resolution
multispectral images (MS), mostly at the pixel level. The quality of image
fusion is an essential determinant of its value for many applications. Spatial
and spectral quality are the two important indexes used to evaluate the quality
of any fused image. However, the jury is still out on the benefits of a fused
image compared with its original images. In addition, there is a lack of
measures for assessing the objective quality of the spatial resolution of
fusion methods, so an objective assessment of the spatial resolution of fused
images is required. Therefore, this paper describes a new approach, the High
Pass Division Index (HPDI), proposed to estimate the spatial resolution
improvement by calculating the spatial frequency of the edge regions of the
image. It also compares various analytical techniques for evaluating spatial
quality and estimating the colour distortion added by image fusion, including
MG, SG, FCC, SD, En, SNR, CC and NRMSE. In addition, this paper concentrates on
comparing various image fusion techniques based on pixel- and feature-level
fusion.
|
1207.5072
|
Distributed Supervisory Control of Discrete-Event Systems with
Communication Delay
|
cs.SY cs.MA
|
This paper identifies a property of delay-robustness in distributed
supervisory control of discrete-event systems (DES) with communication delays.
In previous work a distributed supervisory control problem has been
investigated on the assumption that inter-agent communications take place with
negligible delay. From an applications viewpoint it is desirable to relax this
constraint and identify communicating distributed controllers which are
delay-robust, namely logically equivalent to their delay-free counterparts. For
this we introduce inter-agent channels modeled as 2-state automata, compute the
overall system behavior, and present an effective computational test for
delay-robustness. The test typically shows that the given delay-free
distributed control is delay-robust with respect to some communicated events
but not others, thus distinguishing events which are not delay-critical from
those that are. The approach is illustrated by a workcell
model with three communicating agents.
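The 2-state channel automaton mentioned above can be sketched as follows (a toy Python illustration; the method names send/deliver are hypothetical, and the paper formulates channels as DES automata composed with the agents):

```python
class Channel:
    """Toy 2-state channel automaton for one communicated event sigma:
    the sender's transition fills the channel, and delivery to the
    receiving agent empties it. Between send and deliver, the event is
    'in flight', which is how communication delay enters the model."""

    def __init__(self):
        self.full = False  # state 0: empty, state 1: holding sigma

    def send(self):
        # Sender's occurrence of sigma: enabled only in the empty state.
        if self.full:
            return False
        self.full = True
        return True

    def deliver(self):
        # Delayed delivery of sigma to the receiver: enabled only when full.
        if not self.full:
            return False
        self.full = False
        return True
```

Composing one such automaton per communicated event with the plant and the distributed supervisors yields the overall delayed behavior, which can then be compared against the delay-free closed loop.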
|
1207.5091
|
Learning Probabilistic Systems from Tree Samples
|
cs.LO cs.LG
|
We consider the problem of learning a non-deterministic probabilistic system
consistent with a given finite set of positive and negative tree samples.
Consistency is defined with respect to strong simulation conformance. We
propose learning algorithms that use traditional and a new "stochastic"
state-space partitioning, the latter resulting in the minimum number of states.
We then use them to solve the problem of "active learning", which uses a
knowledgeable teacher to generate samples as counterexamples to simulation
equivalence queries. We show that the problem is undecidable in general, but
that it becomes decidable under a suitable condition on the teacher which comes
naturally from the way samples are generated from failed simulation checks. The
latter problem is shown to be undecidable if we impose an additional condition
on the learner to always conjecture a "minimum state" hypothesis. We therefore
propose a semi-algorithm using stochastic partitions. Finally, we apply the
proposed (semi-) algorithms to infer intermediate assumptions in an automated
assume-guarantee verification framework for probabilistic systems.
|
1207.5113
|
Piecewise Linear Patch Reconstruction for Segmentation and Description
of Non-smooth Image Structures
|
cs.CV
|
In this paper, we propose a unified energy minimization model for the
segmentation of non-smooth image structures. The energy of piecewise linear
patch reconstruction is considered as an objective measure of the quality of
the segmentation of non-smooth structures. The segmentation is achieved by
minimizing the single energy without any separate process of feature
extraction. We also prove that the error of segmentation is bounded by the
proposed energy functional, meaning that minimizing the proposed energy leads
to reducing the error of segmentation. As a by-product, our method produces a
dictionary of optimized orthonormal descriptors for each segmented region. The
unique feature of our method is that it achieves the simultaneous segmentation
and description for non-smooth image structures under the same optimization
framework. The experiments validate our theoretical claims and show the clearly
superior performance of our methods over other related methods for segmentation
of various image textures. We show that our model can be coupled with the
piecewise smooth model to handle both smooth and non-smooth structures, and we
demonstrate that the proposed model is capable of coping with multiple
different regions through the one-against-all strategy.
|
1207.5119
|
Feedback stabilization of dynamical systems with switched delays
|
math.OC cs.SY
|
We analyze a classification of two main families of controllers that are of
interest when the feedback loop is subject to switching propagation delays due
to routing via a wireless multi-hop communication network. We show that we can
cast this problem as a subclass of classical switching systems, which is a
non-trivial generalization of classical LTI systems with time-varying delays.
We consider both cases where delay-dependent and delay-independent controllers
used, and show that both can be modeled as switching systems with unconstrained
switchings. We provide NP-hardness results for the stability verification
problem, and propose a general methodology for approximate stability analysis
with arbitrary precision. We finally give evidence that non-trivial design
problems arise for which new algorithmic methods are needed.
|
1207.5123
|
Lifted polytope methods for stability analysis of switching systems
|
math.OC cs.SY
|
We describe new methods for deciding the stability of switching systems. The
methods build on two ideas that previously appeared in the literature: the
polytope norm iterative construction and the lifting procedure. Moreover, the
combination of these two ideas allows us to introduce a pruning algorithm which
can significantly reduce the computational burden. We prove several appealing
theoretical properties of our methods, such as a computational finiteness
result which extends a known result for unlifted sets of matrices, and provide
numerical examples of their good behaviour.
|
1207.5136
|
Causal Inference on Time Series using Structural Equation Models
|
stat.ML cs.LG stat.ME
|
Causal inference uses observations to infer the causal structure of the data
generating system. We study a class of functional models that we call Time
Series Models with Independent Noise (TiMINo). These models require independent
residual time series, whereas traditional methods like Granger causality
exploit the variance of residuals. There are two main contributions: (1)
Theoretical: By restricting the model class (e.g. to additive noise) we can
provide a more general identifiability result than existing ones. This result
incorporates lagged and instantaneous effects that can be nonlinear and do not
need to be faithful, and non-instantaneous feedbacks between the time series.
(2) Practical: If there are no feedback loops between time series, we propose
an algorithm based on non-linear independence tests of time series. When the
data are causally insufficient, or the data generating process does not satisfy
the model assumptions, this algorithm may still give partial results, but
mostly avoids incorrect answers. An extension to (non-instantaneous) feedbacks
is possible, but not discussed. It outperforms existing methods on artificial
and real data. Code can be provided upon request.
|
1207.5152
|
Stator flux optimization on direct torque control with fuzzy logic
|
cs.AI
|
The Direct Torque Control (DTC) is well known as an effective control
technique for high-performance drives in a wide variety of industrial
applications; the conventional DTC technique uses two constant reference
values: torque and stator flux. In this paper, a fuzzy-logic-based stator flux
optimization technique for DTC drives is proposed. The proposed optimizer
self-regulates the stator flux reference according to the induction motor's
load condition, without needing any motor parameters. Simulation studies have
been carried out with Matlab/Simulink to compare the behavior of the proposed
system under varying load conditions. The results show that the performance of
the proposed DTC technique is improved; in particular, torque ripple is
greatly reduced at low-load conditions with respect to the conventional DTC.
|