| id | title | categories | abstract |
|---|---|---|---|
1107.4470
|
Symmetry Breaking in Neuroevolution: A Technical Report
|
cs.NE
|
Artificial Neural Networks (ANN) exhibit important symmetry properties,
which can influence the performance of Monte Carlo methods in Neuroevolution.
The problem of these symmetries is also known as the competing conventions
problem or simply as the permutation problem. In the literature, symmetries are
mainly addressed in Genetic Algorithm based approaches. However, investigations
in this direction based on other Evolutionary Algorithms (EA) are rare or
missing. Furthermore, there are different and contradictory reports on the
efficacy of symmetry breaking. By using a novel viewpoint, we offer a possible
explanation for this issue. As a result, we show that a strategy which is
invariant to the global optimum can only be successful on certain problems,
whereas it must fail to improve the global convergence on others. We introduce
the \emph{Minimum Global Optimum Proximity} principle as a generalized and
adaptive strategy to symmetry breaking, which depends on the location of the
global optimum. We apply the proposed principle to Differential Evolution (DE)
and Covariance Matrix Adaptation Evolution Strategies (CMA-ES), which are two
popular and conceptually different global optimization methods. Using a wide
range of feedforward ANN problems, we experimentally illustrate significant
improvements in the global search efficiency by the proposed symmetry breaking
technique.
|
1107.4491
|
Enhancing topology adaptation in information-sharing social networks
|
physics.soc-ph cs.SI
|
The advent of the Internet and the World Wide Web has led to unprecedented
growth of the information available. People usually cope with this information
overload by following a limited number of sources which best fit their
interests. It has thus become important to address issues like who gets
followed and how to allow people to discover new and better information
sources. In this paper we conduct an empirical analysis of different on-line
social networking sites, and draw inspiration from the results to present
different source selection strategies
in an adaptive model for social recommendation. We show that local search rules
which enhance the typical topological features of real social communities give
rise to network configurations that are globally optimal. These rules create
networks which are effective in information diffusion and resemble structures
resulting from real social systems.
|
1107.4496
|
Cartesian stiffness matrix of manipulators with passive joints:
analytical approach
|
cs.RO
|
The paper focuses on stiffness matrix computation for manipulators with
passive joints. It proposes both explicit analytical expressions and an
efficient recursive procedure that are applicable in the general case and allow
the desired matrix to be obtained in either analytical or numerical form. Advantages
of the developed technique and its ability to produce both singular and
non-singular stiffness matrices are illustrated by application examples that
deal with stiffness modeling of two Stewart-Gough platforms.
|
1107.4498
|
Singular surfaces and cusps in symmetric planar 3-RPR manipulators
|
cs.RO
|
We study in this paper a class of 3-RPR manipulators for which the direct
kinematic problem (DKP) is split into a cubic problem followed by a quadratic
one. These manipulators are geometrically characterized by the fact that the
moving triangle is the image of the base triangle by an indirect isometry. We
introduce a specific coordinate system adapted to this geometric feature and
which is also well adapted to the splitting of the DKP. This allows us to
easily obtain precise descriptions of the singularities and of the cusp edges.
The latter second-order singularities are important for nonsingular assembly
mode changing. We show how to sort assembly modes and use this sorting for
motion planning in the joint space.
|
1107.4500
|
Short Huffman Codes Producing 1s Half of the Time
|
cs.IT math.IT
|
The design of the channel part of a digital communication system (e.g., error
correction, modulation) is heavily based on the assumption that the data to be
transmitted forms a fair bit stream. However, simple source encoders such as
short Huffman codes generate bit streams that poorly match this assumption. As
a result, the channel input distribution does not match the original design
criteria. In this work, a simple method called half Huffman coding (halfHc) is
developed. halfHc transforms a Huffman code into a source code whose output is
more similar to a fair bit stream. This is achieved by permuting the codewords
such that the frequency of 1s at the output is close to 0.5. The permutations
are such that the optimality in terms of achieved compression ratio is
preserved. halfHc is applied in a practical example, and the resulting overall
system performs better than when conventional Huffman coding is used.
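The key observation above (swapping codewords of equal length preserves the compression ratio while changing the 1s frequency) can be sketched in a few lines. This is our own toy, not the paper's halfHc construction: the helper names `huffman`, `ones_fraction`, and `half_huffman` are ours, and a brute-force search over each length class stands in for the actual permutation rule.

```python
import heapq
from itertools import count, permutations

def huffman(probs):
    """Standard Huffman code for {symbol: probability}; returns {symbol: bitstring}."""
    tie = count()  # tie-breaker so the heap never has to compare dicts
    heap = [(p, next(tie), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tie), merged))
    return heap[0][2]

def ones_fraction(code, probs):
    """Expected fraction of 1s at the encoder output."""
    ones = sum(p * code[s].count("1") for s, p in probs.items())
    bits = sum(p * len(code[s]) for s, p in probs.items())
    return ones / bits

def half_huffman(code, probs):
    """Permute codewords within each length class -- this keeps every per-symbol
    codeword length, hence the compression ratio -- so that the expected
    fraction of 1s moves as close to 0.5 as possible (brute force per class)."""
    best = dict(code)
    by_len = {}
    for s in best:
        by_len.setdefault(len(best[s]), []).append(s)
    for syms in by_len.values():
        words = [best[s] for s in syms]
        choice = min(
            permutations(words),
            key=lambda perm: abs(
                ones_fraction({**best, **dict(zip(syms, perm))}, probs) - 0.5
            ),
        )
        best.update(dict(zip(syms, choice)))
    return best
```

On a toy source such as `{"a": 0.5, "b": 0.2, "c": 0.2, "d": 0.1}` the permuted code reaches a 1s frequency of exactly 0.5 while keeping the same codeword set.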
|
1107.4502
|
MeLinDa: an interlinking framework for the web of data
|
cs.AI
|
The web of data consists of data published on the web in such a way that they
can be interpreted and connected together. It is thus critical to establish
links between these data, both for the web of data and for the semantic web
that it helps to feed. We consider here the various techniques developed
for that purpose and analyze their commonalities and differences. We propose a
general framework and show how the diverse techniques fit into it.
From this framework we consider the relation between data interlinking and
ontology matching. Although the two can be considered similar at a certain
level (both relate formal entities), they serve different purposes, yet would
mutually benefit from collaborating. We thus present a scheme under which it
is possible for data linking tools to take advantage of ontology alignments.
|
1107.4524
|
An Analysis of Anonymity in the Bitcoin System
|
physics.soc-ph cs.SI
|
Anonymity in Bitcoin, a peer-to-peer electronic currency system, is a
complicated issue. Within the system, users are identified by public-keys only.
An attacker wishing to de-anonymize its users will attempt to construct the
one-to-many mapping between users and public-keys and associate information
external to the system with the users. Bitcoin tries to prevent this attack by
storing the mapping of a user to his or her public-keys on that user's node
only and by allowing each user to generate as many public-keys as required. In
this chapter we consider the topological structure of two networks derived from
Bitcoin's public transaction history. We show that the two networks have a
non-trivial topological structure, provide complementary views of the Bitcoin
system and have implications for anonymity. We combine these structures with
external information and techniques such as context discovery and flow analysis
to investigate an alleged theft of Bitcoins, which, at the time of the theft,
had a market value of approximately half a million U.S. dollars.
|
1107.4530
|
Remarks on generalized toric codes
|
cs.IT math.AG math.IT
|
This note presents some new information on how the minimum distance of the
generalized toric code corresponding to a fixed set of integer lattice points S
in R^2 varies with the base field. The main results show that in some cases,
over sufficiently large fields, the minimum distance of the code corresponding
to a set S will be the same as that of the code corresponding to the convex
hull of S. In an example, we will also discuss a [49,12,28] generalized toric
code over GF(8), better than any previously known code according to M. Grassl's
online tables, as of July 2011.
|
1107.4540
|
Non-adaptive probabilistic group testing with noisy measurements:
Near-optimal bounds with efficient algorithms
|
cs.IT math.IT
|
We consider the problem of detecting a small subset of defective items from a
large set via non-adaptive "random pooling" group tests. We consider both the
case when the measurements are noiseless, and the case when the measurements
are noisy (the outcome of each group test may be independently faulty with
probability q). Order-optimal results for these scenarios are known in the
literature. We give information-theoretic lower bounds on the query complexity
of these problems, and provide corresponding computationally efficient
algorithms that match the lower bounds up to a constant factor. To the best of
our knowledge this work is the first to explicitly estimate such a constant
that characterizes the gap between the upper and lower bounds for these
problems.
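For the noiseless setting, the simplest computationally efficient decoder of the kind discussed above is COMP: any item appearing in a negative pool is certainly non-defective. The sketch below, with our own function names and a Bernoulli(1/d) pooling design as an assumption, illustrates the idea; it is not the authors' algorithm.

```python
import random

def comp_decode(pools, outcomes, n):
    """COMP decoder: any item that appears in a negative (all-healthy) pool
    cannot be defective; every item never ruled out is declared defective."""
    candidates = set(range(n))
    for pool, positive in zip(pools, outcomes):
        if not positive:
            candidates -= pool
    return candidates

def simulate(n=200, d=5, tests=300, seed=1):
    """Noiseless random-pooling simulation: each item joins each pool
    independently with probability 1/d, a standard near-optimal design."""
    rng = random.Random(seed)
    defectives = set(rng.sample(range(n), d))
    pools = [{i for i in range(n) if rng.random() < 1.0 / d} for _ in range(tests)]
    outcomes = [bool(pool & defectives) for pool in pools]
    return defectives, comp_decode(pools, outcomes, n)
```

COMP never misses a true defective; with enough pools the surviving candidate set coincides with the defective set with high probability.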
|
1107.4553
|
Solving Linear Constraints in Elementary Abelian p-Groups of Symmetries
|
cs.AI
|
Symmetries occur naturally in CSP or SAT problems and are not very difficult
to discover, but using them to prune the search space tends to be very
challenging. Indeed, this usually requires finding specific elements in a group
of symmetries that can be huge, and the problem of their very existence is
NP-hard. We formulate such an existence problem as a constraint problem on one
variable (the symmetry to be used) ranging over a group, and try to find
restrictions that may be solved in polynomial time. By considering a simple
form of constraints (restricted by a cardinality k) and the class of groups
that have the structure of Fp-vector spaces, we propose a partial algorithm
based on linear algebra. This polynomial algorithm always applies when k=p=2,
but may fail otherwise, as we prove the problem to be NP-hard for all other
values of k and p. Experiments show that this approach, though restricted,
should allow for efficient use of at least some groups of symmetries. We
conclude with a few directions to be explored for efficiently solving this
problem in the general case.
|
1107.4557
|
Finding Deceptive Opinion Spam by Any Stretch of the Imagination
|
cs.CL cs.CY
|
Consumers increasingly rate, review and research products online.
Consequently, websites containing consumer reviews are becoming targets of
opinion spam. While recent work has focused primarily on manually identifiable
instances of opinion spam, in this work we study deceptive opinion
spam---fictitious opinions that have been deliberately written to sound
authentic. Integrating work from psychology and computational linguistics, we
develop and compare three approaches to detecting deceptive opinion spam, and
ultimately develop a classifier that is nearly 90% accurate on our
gold-standard opinion spam dataset. Based on feature analysis of our learned
models, we additionally make several theoretical contributions, including
revealing a relationship between deceptive opinions and imaginative writing.
|
1107.4570
|
Consistent Query Answering via ASP from Different Perspectives: Theory
and Practice
|
cs.DB cs.AI
|
A data integration system provides transparent access to different data
sources by suitably combining their data, and providing the user with a unified
view of them, called global schema. However, source data are generally not
under the control of the data integration process, thus integrated data may
violate global integrity constraints even in the presence of locally consistent
data sources. In this scenario, it is nevertheless interesting to retrieve as
much consistent information as possible. The process of answering user queries
under global constraint violations is called consistent query answering (CQA).
Several notions of CQA have been proposed, e.g., depending on whether
integrated information is assumed to be sound, complete, exact, or a variant of
these. This paper provides a contribution in this setting: it unifies solutions
coming from different perspectives under a common ASP-based core, and provides
query-driven optimizations designed for isolating and eliminating
inefficiencies of the general approach for computing consistent answers.
Moreover, the paper introduces some new theoretical results enriching existing
knowledge on decidability and complexity of the considered problems. The
effectiveness of the approach is evidenced by experimental results.
To appear in Theory and Practice of Logic Programming (TPLP).
|
1107.4573
|
Analogy perception applied to seven tests of word comprehension
|
cs.AI cs.CL cs.LG
|
It has been argued that analogy is the core of cognition. In AI research,
algorithms for analogy are often limited by the need for hand-coded high-level
representations as input. An alternative approach is to use high-level
perception, in which high-level representations are automatically generated
from raw data. Analogy perception is the process of recognizing analogies using
high-level perception. We present PairClass, an algorithm for analogy
perception that recognizes lexical proportional analogies using representations
that are automatically generated from a large corpus of raw textual data. A
proportional analogy is an analogy of the form A:B::C:D, meaning "A is to B as
C is to D". A lexical proportional analogy is a proportional analogy with
words, such as carpenter:wood::mason:stone. PairClass represents the semantic
relations between two words using a high-dimensional feature vector, in which
the elements are based on frequencies of patterns in the corpus. PairClass
recognizes analogies by applying standard supervised machine learning
techniques to the feature vectors. We show how seven different tests of word
comprehension can be framed as problems of analogy perception and we then apply
PairClass to the seven resulting sets of analogy perception problems. We
achieve competitive results on all seven tests. This is the first time a
uniform approach has handled such a range of tests of word comprehension.
|
1107.4581
|
Hybrid Noncoherent Network Coding
|
cs.IT math.IT
|
We describe a novel extension of subspace codes for noncoherent networks,
suitable for use when the network is viewed as a communication system that
introduces both dimension and symbol errors. We show that when symbol erasures
occur in a significantly large number of the different basis vectors
transmitted through the network, and when the min-cut of the network is much
smaller than the length of the transmitted codewords, the new family of codes
outperforms its subspace code counterparts.
For the proposed coding scheme, termed hybrid network coding, we derive two
upper bounds on the size of the codes. These bounds represent a variation of
the Singleton and of the sphere-packing bound. We show that a simple
concatenated scheme that represents a combination of subspace codes and
Reed-Solomon codes is asymptotically optimal with respect to the Singleton
bound. Finally, we describe two efficient decoding algorithms for concatenated
subspace codes that in certain cases have smaller complexity than subspace
decoders.
|
1107.4600
|
On the Capacity of the Interference Channel with a Cognitive Relay
|
cs.IT math.IT
|
The InterFerence Channel with a Cognitive Relay (IFC-CR) consists of the
classical interference channel with two independent source-destination pairs
whose communication is aided by an additional node, referred to as the
cognitive relay, that has a priori knowledge of both sources' messages. This a
priori message knowledge is termed cognition and idealizes the relay learning
the messages of the two sources from their transmissions over a wireless
channel. This paper presents new inner and outer bounds for the capacity region
of the general memoryless IFC-CR that are shown to be tight for a certain class
of channels. The new outer bound follows from arguments originally devised for
broadcast channels among which Sato's observation that the capacity region of
channels with non-cooperative receivers only depends on the channel output
conditional marginal distributions. The new inner bound is shown to include all
previously proposed coding schemes and it is thus the largest known achievable
rate region to date. The new inner and outer bounds coincide for a subset of
channels satisfying a strong interference condition. For these channels there is
no loss in optimality if both destinations decode both messages. This result
parallels analogous results for the classical IFC and for the cognitive IFC and
is the first known capacity result for the general IFC-CR. Numerical
evaluations of the proposed inner and outer bounds are presented for the
Gaussian noise case.
|
1107.4606
|
The Divergence of Reinforcement Learning Algorithms with Value-Iteration
and Function Approximation
|
cs.LG
|
This paper gives specific divergence examples of value-iteration for several
major Reinforcement Learning and Adaptive Dynamic Programming algorithms, when
using a function approximator for the value function. These divergence examples
differ from previous divergence examples in the literature, in that they are
applicable for a greedy policy, i.e. in a "value iteration" scenario. Perhaps
surprisingly, with a greedy policy, it is also possible to get divergence for
the algorithms TD(1) and Sarsa(1). In addition to these divergences, we also
achieve divergence for the Adaptive Dynamic Programming algorithms HDP, DHP and
GDHP.
|
1107.4613
|
Percolation in the Secrecy Graph
|
math.PR cs.IT math.IT
|
The secrecy graph is a random geometric graph which is intended to model the
connectivity of wireless networks under secrecy constraints. Directed edges in
the graph are present whenever a node can talk to another node securely in the
presence of eavesdroppers, which, in the model, is determined solely by the
locations of the nodes and eavesdroppers. In the case of infinite networks, a
critical parameter is the maximum density of eavesdroppers that can be
accommodated while still guaranteeing an infinite component in the network,
i.e., the percolation threshold. We focus on the case where the locations of
the nodes and eavesdroppers are given by Poisson point processes, and present
bounds for different types of percolation, including in-, out- and undirected
percolation.
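The connection rule described above (an edge exists only when the intended receiver is closer than every eavesdropper) is easy to state in code. The sketch below uses fixed point sets for clarity, whereas the paper's setting draws both from Poisson point processes; the function name is ours.

```python
def secrecy_graph(nodes, eves):
    """Directed secrecy graph: an edge i -> j exists iff node j is strictly
    closer to i than i's nearest eavesdropper, so i can reach j securely."""
    edges = set()
    for i, (xi, yi) in enumerate(nodes):
        nearest_eve_sq = min((xi - ex) ** 2 + (yi - ey) ** 2 for ex, ey in eves)
        for j, (xj, yj) in enumerate(nodes):
            if i != j and (xi - xj) ** 2 + (yi - yj) ** 2 < nearest_eve_sq:
                edges.add((i, j))
    return edges
```

Note the asymmetry this rule creates: with nodes at (0, 0), (1, 0), (5, 0) and an eavesdropper at (2, 0), node 0 can talk to node 1 but not conversely, which is why in-, out- and undirected percolation must be treated separately.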
|
1107.4617
|
Constant-time filtering using shiftable kernels
|
cs.CV cs.DS
|
It was recently demonstrated in [5] that the non-linear bilateral filter [14]
can be efficiently implemented using a constant-time or O(1) algorithm. At the
heart of this algorithm was the idea of approximating the Gaussian range kernel
of the bilateral filter using trigonometric functions. In this letter, we
explain how the idea in [5] can be extended to a few other linear and non-linear
filters [14, 17, 2]. While some of these filters have received a lot of
attention in recent years, they are known to be computationally intensive. To
extend the idea in [5], we identify a central property of trigonometric
functions, called shiftability, that allows us to exploit the redundancy
inherent in the filtering operations. In particular, using shiftable kernels,
we show how certain complex filtering operations can be reduced simply to
computing the moving sum of a stack of images. Each image in the stack is
obtained through an elementary pointwise transform of the input image. This has
a two-fold advantage. First, we can use fast recursive algorithms for computing
the moving sum [15, 6], and, secondly, we can use parallel computation to
further speed up the computation. We also show how shiftable kernels can be
used to approximate the (non-shiftable) Gaussian kernel that is ubiquitously
used in image filtering.
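The shiftability property rests on the identity cos(γ(x − y)) = cos(γx)cos(γy) + sin(γx)sin(γy): an edge-aware sum with a cosine range kernel factors into pointwise-transformed signals whose windowed sums are ordinary moving sums, each computable in O(1) per sample. The 1D sketch below uses a single cosine term and a box spatial kernel as a simplifying assumption (the papers build a raised-cosine series to approximate the Gaussian range kernel); all names are ours.

```python
import math

def moving_sum(x, w):
    """O(1)-per-sample moving sum over a centered window of width 2w+1,
    computed from a prefix-sum table."""
    n = len(x)
    pre = [0.0]
    for v in x:
        pre.append(pre[-1] + v)
    return [pre[min(n, i + w + 1)] - pre[max(0, i - w)] for i in range(n)]

def cosine_range_filter(f, w, gamma):
    """Edge-aware filter with range kernel cos(gamma*(f_i - f_j)) and a box
    spatial kernel, evaluated via shiftability: the kernel splits into
    products of pointwise cos/sin signals, so only moving sums are needed."""
    c = [math.cos(gamma * v) for v in f]
    s = [math.sin(gamma * v) for v in f]
    cf = moving_sum([ci * fi for ci, fi in zip(c, f)], w)
    sf = moving_sum([si * fi for si, fi in zip(s, f)], w)
    cs = moving_sum(c, w)
    ss = moving_sum(s, w)
    return [(ci * a + si * b) / (ci * p + si * q)
            for ci, si, a, b, p, q in zip(c, s, cf, sf, cs, ss)]
```

Because the identity is exact, this agrees with the direct O(n·w) evaluation of the same filter to machine precision, while its cost is independent of the window width.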
|
1107.4619
|
On the Hilbert transform of wavelets
|
math.FA cs.CV
|
A wavelet is a localized function having a prescribed number of vanishing
moments. In this correspondence, we provide precise arguments as to why the
Hilbert transform of a wavelet is again a wavelet. In particular, we provide
sharp estimates of the localization, vanishing moments, and smoothness of the
transformed wavelet. We work in the general setting of non-compactly supported
wavelets. Our main result is that, in the presence of some minimal smoothness
and decay, the Hilbert transform of a wavelet is again as smooth and
oscillating as the original wavelet, whereas its localization is controlled by
the number of vanishing moments of the original wavelet. We motivate our
results using concrete examples.
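Why the Hilbert transform preserves smoothness and vanishing moments is already visible on the Fourier side; the following standard identity (textbook material, not taken from the paper) summarizes it:

```latex
\widehat{\mathcal{H}\psi}(\omega) = -i\,\operatorname{sign}(\omega)\,\hat{\psi}(\omega),
\qquad
\bigl|\widehat{\mathcal{H}\psi}(\omega)\bigr| = \bigl|\hat{\psi}(\omega)\bigr|.
```

Since the modulus of the spectrum is unchanged, the decay of $\hat{\psi}$ (smoothness) and the order of its zero at $\omega = 0$ (vanishing moments) carry over to $\mathcal{H}\psi$; only the phase jump of $\operatorname{sign}(\omega)$ at the origin can degrade spatial localization, which is why the decay of $\mathcal{H}\psi$ is governed by the number of vanishing moments of $\psi$.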
|
1107.4623
|
A Unifying Analysis of Projected Gradient Descent for
$\ell_p$-constrained Least Squares
|
math.NA cs.IT math.IT math.OC stat.ML
|
In this paper we study the performance of the Projected Gradient Descent (PGD)
algorithm for $\ell_{p}$-constrained least squares problems that arise in the
framework of Compressed Sensing. Relying on the Restricted Isometry Property,
we provide convergence guarantees for this algorithm for the entire range of
$0\leq p\leq1$, that include and generalize the existing results for the
Iterative Hard Thresholding algorithm and provide a new accuracy guarantee for
the Iterative Soft Thresholding algorithm as special cases. Our results suggest
that in this group of algorithms, as $p$ increases from zero to one, conditions
required to guarantee accuracy become stricter and robustness to noise
deteriorates.
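At the p = 0 end of this family, PGD reduces to Iterative Hard Thresholding: a gradient step on the least-squares objective followed by projection onto the set of s-sparse vectors. A minimal sketch, with our own function name and a unit step size as an assumption (not the paper's general $\ell_p$ projection):

```python
import numpy as np

def iht(A, y, s, iters=200):
    """Projected gradient descent for l0-constrained least squares
    (Iterative Hard Thresholding, the p = 0 member of the family):
    a unit gradient step followed by projection onto {x : ||x||_0 <= s}."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + A.T @ (y - A @ x)           # gradient step on ||y - Ax||^2 / 2
        x = np.zeros_like(g)
        keep = np.argsort(np.abs(g))[-s:]   # projection: s largest magnitudes survive
        x[keep] = g[keep]
    return x
```

On a well-conditioned instance (here a random matrix with orthonormal rows, for which the restricted isometry heuristics of the paper are comfortably satisfied), the iteration recovers a sparse signal from underdetermined measurements.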
|
1107.4637
|
Efficient variational inference in large-scale Bayesian compressed
sensing
|
cs.CV cs.IT math.IT stat.ML
|
We study linear models under heavy-tailed priors from a probabilistic
viewpoint. Instead of computing a single sparse most probable (MAP) solution as
in standard deterministic approaches, the focus in the Bayesian compressed
sensing framework shifts towards capturing the full posterior distribution on
the latent variables, which allows quantifying the estimation uncertainty and
learning model parameters using maximum likelihood. The exact posterior
distribution under the sparse linear model is intractable and we concentrate on
variational Bayesian techniques to approximate it. Repeatedly computing
Gaussian variances turns out to be a key requisite and constitutes the main
computational bottleneck in applying variational techniques in large-scale
problems. We leverage the recently proposed Perturb-and-MAP algorithm for
drawing exact samples from Gaussian Markov random fields (GMRF). The main
technical contribution of our paper is to show that estimating Gaussian
variances using a relatively small number of such efficiently drawn random
samples is much more effective than alternative general-purpose variance
estimation techniques. By reducing the problem of variance estimation to
standard optimization primitives, the resulting variational algorithms are
fully scalable and parallelizable, allowing Bayesian computations in extremely
large-scale problems with the same memory and time complexity requirements as
conventional point estimation techniques. We illustrate these ideas with
experiments in image deblurring.
|
1107.4649
|
Mandelbrot Law of Evolving Networks
|
physics.data-an cs.SI physics.soc-ph
|
Degree distributions of many real networks are known to follow the Mandelbrot
law, which can be considered as an extension of the power law and is determined
by not only the power-law exponent, but also the shifting coefficient. Although
the shifting coefficient strongly affects the shape of the distribution, it has
received less attention in the literature; in fact, the mainstream analytical
methods based on backward or forward differences lead to considerable
deviations in its value. In this Letter, we show that the degree distribution of a growing
network with linear preferential attachment approximately follows the
Mandelbrot law. We propose an analytical method based on a recursive formula
that can obtain a more accurate expression of the shifting coefficient.
Simulations demonstrate the advantages of our method. This work provides a
possible mechanism leading to the Mandelbrot law of evolving networks, and
refines the mainstream analytical methods for the shifting coefficient.
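The Mandelbrot law is the shifted power law P(k) ∝ (k + c)^(-γ), with shifting coefficient c alongside the exponent γ. The toy fitter below (our own sketch, not the Letter's recursive method) grid-searches c and does an ordinary least-squares line fit in log-log coordinates for each candidate:

```python
import math

def fit_mandelbrot(ks, pk, c_grid):
    """Fit P(k) = C * (k + c)^(-gamma): grid-search the shifting coefficient c,
    and for each candidate do an ordinary least-squares line fit of log P
    against log(k + c); return the (gamma, c) pair with the smallest residual."""
    best = None
    for c in c_grid:
        xs = [math.log(k + c) for k in ks]
        ys = [math.log(p) for p in pk]
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
            / sum((x - mx) ** 2 for x in xs)
        sse = sum((y - my - slope * (x - mx)) ** 2 for x, y in zip(xs, ys))
        if best is None or sse < best[0]:
            best = (sse, -slope, c)
    return best[1], best[2]
```

On exact Mandelbrot data the fit recovers both parameters, which is precisely what a pure power-law fit (c forced to 0) cannot do.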
|
1107.4651
|
Higher Order Programming to Mine Knowledge for a Modern Medical Expert
System
|
cs.LO cs.AI
|
Knowledge mining is the process of deriving new and useful knowledge from
vast volumes of data and background knowledge. Modern healthcare organizations
regularly generate huge amounts of electronic data stored in databases.
These data are a valuable resource for mining useful knowledge that helps
medical practitioners make appropriate and accurate decisions on the diagnosis
and treatment of diseases. In this paper, we propose the design of a novel
medical expert system based on a logic-programming framework. The proposed
system includes a knowledge-mining component as a repertoire of tools for
discovering useful knowledge. The implementation of classification and
association mining tools based on higher-order and meta-level programming
schemes in Prolog is presented to demonstrate the power of a logic-based
language. Such a language also provides a pattern-matching facility, which is
an essential function for the development of knowledge-intensive tasks. Besides
the major goal of medical decision support, the knowledge discovered by our
logic-based knowledge-mining component can also be deployed as background
knowledge to pre-treat data from other sources as well as to guard the data
repositories against constraint
violation. A framework for knowledge deployment is also presented.
|
1107.4652
|
On the Achievability of Interference Alignment for Three-Cell Constant
Cellular Interfering Networks
|
cs.IT math.IT
|
For a three-cell constant cellular interfering network, a new property of
alignment is identified: an interference alignment (IA) solution obtained in
a user-cooperation scenario can also be applied in a non-cooperation
environment. Using this property, an algorithm is proposed that jointly
designs the transmit and receive beamforming matrices. Analysis and numerical
results show that more degrees of freedom (DoF) can be achieved compared with
conventional schemes in most cases.
|
1107.4667
|
Correlation Estimation from Compressed Images
|
cs.CV
|
This paper addresses the problem of correlation estimation in sets of
compressed images. We consider a framework where images are represented under
the form of linear measurements due to low complexity sensing or security
requirements. We assume that the images are correlated through the displacement
of visual objects due to motion or viewpoint change and the correlation is
effectively represented by optical flow or motion field models. The correlation
is estimated in the compressed domain by jointly processing the linear
measurements. We first show that the correlated images can be efficiently
related using a linear operator. Using this linear relationship we then
describe the dependencies between images in the compressed domain. We further
cast a regularized optimization problem where the correlation is estimated in
order to satisfy both data consistency and motion smoothness objectives with a
Graph Cut algorithm. We analyze in detail the correlation estimation
performance and quantify the penalty due to image compression. Extensive
experiments in stereo and video imaging applications show that our novel
solution stays competitive with methods that implement complex image
reconstruction steps prior to correlation estimation. We finally use the
estimated correlation in a novel joint image reconstruction scheme that is
based on an optimization problem with sparsity priors on the reconstructed
images. Additional experiments show that our correlation estimation algorithm
leads to an effective reconstruction of pairs of images in distributed image
coding schemes that outperform independent reconstruction algorithms by 2 to 4
dB.
|
1107.4687
|
Fence - An Efficient Parser with Ambiguity Support for Model-Driven
Language Specification
|
cs.CL
|
Model-based language specification has applications in the implementation of
language processors, the design of domain-specific languages, model-driven
software development, data integration, text mining, natural language
processing, and corpus-based induction of models. Model-based language
specification decouples language design from language processing and, unlike
traditional grammar-driven approaches, which constrain language designers to
specific kinds of grammars, it needs general parser generators able to deal
with ambiguities. In this paper, we propose Fence, an efficient bottom-up
parsing algorithm with lexical and syntactic ambiguity support that enables the
use of model-based language specification in practice.
|
1107.4705
|
A unified graphical approach to random coding for multi-terminal
networks
|
cs.IT math.IT
|
A unified approach to the derivation of rate regions for single-hop
memoryless networks is presented. A general transmission scheme for any
memoryless, single-hop, k-user channel, with or without common information, is
defined in two steps. The first step is user virtualization: each user is
divided into multiple virtual sub-users according to a chosen rate-splitting
strategy which preserves the rates of the original messages. This results in an
enhanced channel with a possibly larger number of users for which more coding
possibilities are available. Moreover, user virtualization provides a simple
mechanism to encode common messages to any subset of users. Following user
virtualization, the message of each user in the enhanced model is coded using a
chosen combination of coded time-sharing, superposition coding and joint
binning. A graph is used to represent the chosen coding strategies: nodes in
the graph represent codewords while edges represent coding operations. This
graph is used to construct a graphical Markov model which illustrates the
statistical dependency among codewords that can be introduced by the
superposition coding or joint binning. Using this statistical representation of
the overall codebook distribution, the error probability of the code is shown
to vanish via a unified analysis. The rate bounds that define the achievable
rate region are obtained by linking the error analysis to the properties of the
graphical Markov model. This proposed framework makes it possible to
numerically obtain an achievable rate region by specifying a user
virtualization strategy and describing a set of coding operations. The largest
achievable rate region can be obtained by considering all the possible
rate-splitting strategies and taking the union over all the possible ways to
superimpose or bin codewords.
|
1107.4709
|
Applications of Derandomization Theory in Coding
|
cs.DM cs.CC cs.IT math.IT
|
Randomized techniques play a fundamental role in theoretical computer science
and discrete mathematics, in particular for the design of efficient algorithms
and construction of combinatorial objects. The basic goal in derandomization
theory is to eliminate or reduce the need for randomness in such randomized
constructions. In this thesis, we explore some applications of the fundamental
notions in derandomization theory to problems outside the core of theoretical
computer science, and in particular, certain problems related to coding theory.
First, we consider the wiretap channel problem, which involves a communication
system in which an intruder can eavesdrop on a limited portion of the
transmissions, and construct efficient and information-theoretically optimal
communication protocols for this model. Then we consider the combinatorial
group testing problem. In this classical problem, one aims to determine a set
of defective items within a large population by asking a number of queries,
where each query reveals whether a defective item is present within a specified
group of items. We use randomness condensers to explicitly construct optimal,
or nearly optimal, group testing schemes for a setting where the query outcomes
can be highly unreliable, as well as the threshold model, where a query returns
positive if the number of defectives passes a certain threshold. Finally, we
design ensembles of error-correcting codes that achieve the
information-theoretic capacity of a large class of communication channels, and
then use the obtained ensembles for construction of explicit capacity achieving
codes.
[This is a shortened version of the actual abstract in the thesis.]
|
1107.4723
|
A Semantic Relatedness Measure Based on Combined Encyclopedic,
Ontological and Collocational Knowledge
|
cs.CL
|
We describe a new semantic relatedness measure combining the Wikipedia-based
Explicit Semantic Analysis measure, the WordNet path measure and the mixed
collocation index. Our measure achieves the currently highest results on the
WS-353 test: a Spearman rho coefficient of 0.79 (vs. 0.75 in (Gabrilovich and
Markovitch, 2007)) when applying the measure directly, and a value of 0.87 (vs.
0.78 in (Agirre et al., 2009)) when using the prediction of a polynomial SVM
classifier trained on our measure.
In the appendix we discuss the adaptation of ESA to 2011 Wikipedia data, as
well as various unsuccessful attempts to enhance ESA by filtering at word,
sentence, and section level.
|
1107.4730
|
Empirical analysis of collective human behavior for extraordinary events
in blogosphere
|
physics.soc-ph cs.SI physics.data-an
|
To uncover underlying mechanism of collective human dynamics, we survey more
than 1.8 billion blog entries and observe the statistical properties of word
appearances. We focus on words that show dynamic growth and decay with a
tendency to diverge on a certain day. After careful pretreatment and fitting,
we found that power laws generally approximate the functional forms of growth
and decay, with exponent values between -0.1 and -2.5. We also observe news
words whose frequency increases suddenly and then decays following power
laws. In order to explain these dynamics, we propose a simple model of posting
blogs involving a keyword, and its validity is checked directly from the data.
The model suggests that bloggers are not only responding to the latest number
of blogs but also suffering deadline pressure from the divergence day. Our
empirical results can be used for predicting the number of blogs in advance and
for estimating the period to return to the normal fluctuation level.
|
1107.4734
|
Design of Arabic Diacritical Marks
|
cs.CL
|
Diacritical marks play a crucial role in meeting the criteria of usability of
typographic text, such as homogeneity, clarity and legibility. Changing the
diacritic of a letter in a word can completely change its semantics. The
situation is very complicated with multilingual text. Indeed, the problem of
design becomes more difficult by the presence of diacritics that come from
various scripts; they are used for different purposes, and are controlled by
various typographic rules. It is quite challenging to adapt rules from one
script to another. This paper aims to study the placement and sizing of
diacritical marks in Arabic script, with a comparison to the Latin case.
The Arabic script is cursive and runs from right-to-left; its criteria and
rules are quite distinct from those of the Latin script. We first compare the
difficulty of processing diacritics in the two scripts, then study the limits
of Latin resolution strategies when applied to Arabic, and finally propose an
approach to the problem of positioning and resizing
diacritics. This strategy includes creating an Arabic font, designed in
OpenType format, along with suitable justification in TEX.
|
1107.4747
|
The PITA System: Tabling and Answer Subsumption for Reasoning under
Uncertainty
|
cs.AI cs.LO cs.PL
|
Many real world domains require the representation of a measure of
uncertainty. The most common such representation is probability, and the
combination of probability with logic programs has given rise to the field of
Probabilistic Logic Programming (PLP), leading to languages such as the
Independent Choice Logic, Logic Programs with Annotated Disjunctions (LPADs),
Problog, PRISM and others. These languages share a similar distribution
semantics, and methods have been devised to translate programs between these
languages. The complexity of computing the probability of queries to these
general PLP programs is very high due to the need to combine the probabilities
of explanations that may not be exclusive. As one alternative, the PRISM system
reduces the complexity of query answering by restricting the form of programs
it can evaluate. As an entirely different alternative, Possibilistic Logic
Programs adopt a simpler metric of uncertainty than probability. Each of these
approaches -- general PLP, restricted PLP, and Possibilistic Logic Programming
-- can be useful in different domains depending on the form of uncertainty to
be represented, on the form of programs needed to model problems, and on the
scale of the problems to be solved. In this paper, we show how the PITA system,
which originally supported the general PLP language of LPADs, can also
efficiently support restricted PLP and Possibilistic Logic Programs. PITA
relies on tabling with answer subsumption and consists of a transformation
along with an API for library functions that interface with answer subsumption.
|
1107.4763
|
Diffeomorphic Metric Mapping of High Angular Resolution Diffusion
Imaging based on Riemannian Structure of Orientation Distribution Functions
|
cs.CV
|
In this paper, we propose a novel large deformation diffeomorphic
registration algorithm to align high angular resolution diffusion images
(HARDI) characterized by orientation distribution functions (ODFs). Our
proposed algorithm seeks an optimal diffeomorphism of large deformation between
two ODF fields in a spatial volume domain and at the same time, locally
reorients an ODF in a manner such that it remains consistent with the
surrounding anatomical structure. To this end, we first review the Riemannian
manifold of ODFs. We then define the reorientation of an ODF when an affine
transformation is applied and subsequently, define the diffeomorphic group
action to be applied on the ODF based on this reorientation. We incorporate the
Riemannian metric of ODFs for quantifying the similarity of two HARDI images
into a variational problem defined under the large deformation diffeomorphic
metric mapping (LDDMM) framework. We finally derive the gradient of the cost
function in both Riemannian spaces of diffeomorphisms and the ODFs, and present
its numerical implementation. Both synthetic and real brain HARDI data are used
to illustrate the performance of our registration algorithm.
|
1107.4796
|
Use Pronunciation by Analogy for text to speech system in Persian
language
|
cs.CL
|
Interest in text-to-speech synthesis has increased around the world.
Text-to-speech systems have been developed for many popular languages such as
English, Spanish and French, and much research and development has been devoted
to those languages. Persian, on the other hand, has received little attention
compared to other languages of similar importance, and research on Persian is
still in its infancy. The Persian language possesses many difficulties and
exceptions that increase the complexity of text-to-speech systems; for example,
short vowels are absent in written text, and homograph words exist. In this
paper we propose a new method for Persian text-to-phonetic conversion based on
pronunciation by analogy in words, semantic relations and grammatical rules for
finding the proper phonetics. Keywords: PbA, text to speech, Persian language, FPbA
|
1107.4797
|
Multiple Access Demodulation in the Lifted Signal Graph with Spatial
Coupling
|
cs.IT math.IT
|
Demodulation in a random multiple access channel is considered where the
signals are chosen uniformly randomly with unit energy, a model applicable to
several modern transmission systems. It is shown that by lifting (replicating)
the graph of this system and randomizing the graph connections, a simple
iterative cancellation demodulator can be constructed which achieves the same
performance as an optimal symbol-by-symbol detector of the original system. The
iterative detector has a complexity that is linear in the number of users,
while the direct optimal approach is known to be NP-hard. However, the maximal
system load of this lifted graph is limited to \alpha<2.07, even for
signal-to-noise ratios going to infinity - the system is interference limited.
We then show that by introducing spatial coupling between subsequent lifted
graphs, and anchoring the initial graphs, this limitation can be avoided and
arbitrary system loads are achievable. Our results apply to several
well-documented system proposals, such as IDMA, partitioned spreading, and
certain forms of MIMO communications.
|
1107.4822
|
Optimal Selective Feedback Policies for Opportunistic Beamforming
|
cs.IT math.IT
|
This paper studies the structure of downlink sum-rate maximizing selective
decentralized feedback policies for opportunistic beamforming under finite
feedback constraints on the average number of mobile users feeding back.
Firstly, it is shown that any sum-rate maximizing selective decentralized
feedback policy must be a threshold feedback policy. This result holds for all
fading channel models with continuous distribution functions. Secondly, the
resulting optimum threshold selection problem is analyzed in detail. This is a
non-convex optimization problem over finite dimensional Euclidean spaces. By
utilizing the theory of majorization, an underlying Schur-concave structure in
the sum-rate function is identified, and the sufficient conditions for the
optimality of homogeneous threshold feedback policies are obtained. Applications
of these results are illustrated for well known fading channel models such as
Rayleigh, Nakagami and Rician fading channels, along with various engineering
and design insights. Rather surprisingly, it is shown that using the same
threshold value at all mobile users is not always a rate-wise optimal feedback
strategy, even for a network with identical mobile users experiencing
statistically the same channel conditions. For the Rayleigh fading channel
model, on the other hand, homogeneous threshold feedback policies are proven to
be rate-wise optimal if multiple orthonormal data carrying beams are used to
communicate with multiple mobile users simultaneously.
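To make the threshold-policy idea concrete, here is a toy Monte Carlo sketch of selective feedback under Rayleigh fading (a single-beam model with hypothetical parameters, not the paper's multi-beam analysis): each user feeds back only when its SNR exceeds a common threshold T, and the scheduler serves the strongest user that reported.

```python
import math
import random

def simulate_threshold_feedback(n_users=50, threshold=2.0, trials=5000, seed=1):
    """Toy model: i.i.d. unit-mean exponential SNRs (Rayleigh fading).

    Returns (average rate in bits/s/Hz, fraction of users feeding back).
    """
    rng = random.Random(seed)
    rate_sum, feedback_count = 0.0, 0
    for _ in range(trials):
        snrs = [rng.expovariate(1.0) for _ in range(n_users)]
        reported = [g for g in snrs if g > threshold]  # selective feedback
        feedback_count += len(reported)
        if reported:  # serve the strongest user that fed back
            rate_sum += math.log2(1.0 + max(reported))
    return rate_sum / trials, feedback_count / (trials * n_users)
```

With threshold T, each user feeds back with probability exp(-T) in this model, so the average feedback load drops to n*exp(-T) while the scheduled (maximum) SNR rarely falls below T, illustrating why threshold policies can preserve most of the sum-rate.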
|
1107.4838
|
Payoff-based Inhomogeneous Partially Irrational Play for Potential Game
Theoretic Cooperative Control of Multi-agent Systems
|
cs.SY math.OC
|
This paper addresses a class of strategic games called potential games and
develops a novel learning algorithm, Payoff-based Inhomogeneous Partially
Irrational Play (PIPIP). The present algorithm is based on Distributed
Inhomogeneous Synchronous Learning (DISL) presented in an existing work but,
unlike DISL, PIPIP allows agents to make irrational decisions with a specified
probability, i.e. agents can choose an action with a low utility from the past
actions stored in the memory. Due to the irrational decisions, we can prove
convergence in probability of collective actions to potential function
maximizers. Finally, we demonstrate the effectiveness of the present algorithm
through experiments on a sensor coverage problem. The demonstration reveals
that the present learning algorithm successfully leads agents to a
neighborhood of the potential function maximizers even in the presence of
undesirable Nash equilibria. An experiment with a moving density function also
shows that PIPIP adapts to environmental changes.
|
1107.4865
|
Actual Causation in CP-logic
|
cs.AI
|
Given a causal model of some domain and a particular story that has taken
place in this domain, the problem of actual causation is deciding which of the
possible causes for some effect actually caused it. One of the most influential
approaches to this problem has been developed by Halpern and Pearl in the
context of structural models. In this paper, I argue that this is actually not
the best setting for studying this problem. As an alternative, I offer the
probabilistic logic programming language of CP-logic. Unlike structural models,
CP-logic incorporates the deviant/default distinction that is generally
considered an important aspect of actual causation, and it has an explicitly
dynamic semantics, which helps to formalize the stories that serve as input to
an actual causation problem.
|
1107.4900
|
Threshold Improvement of Low-Density Lattice Codes via Spatial Coupling
|
cs.IT math.IT
|
Spatially-coupled low-density lattice codes (LDLC) are constructed using
protographs. Using Monte Carlo density evolution with single-Gaussian
messages, we observe that the threshold of the spatially-coupled LDLC is within
0.22 dB of capacity of the unconstrained power channel. This is in contrast
with a 0.5 dB noise threshold for the conventional LDLC lattice construction.
|
1107.4918
|
Fluid Flow Complexity in Fracture Networks: Analysis with Graph Theory
and LBM
|
cs.CE
|
In this research, synthetic fracture networks embedded in rock masses are
studied. To analyze the fluid flow complexity in fracture networks with
respect to the variation of connectivity patterns, two different approaches are
employed, namely the Lattice Boltzmann method and graph theory. The Lattice
Boltzmann method is used to show the sensitivity of the permeability and fluid
velocity distribution to synthetic fracture networks' connectivity patterns.
Furthermore, the fracture networks are mapped into graphs, and the
characteristics of these graphs are compared to those of the original spatial
fracture networks. Among the different characteristics of networks, we focus
on the modularity of networks and the distribution of sub-graphs. We map the flow regimes
into the proper regions of the network's modularity space. Also, for each type
of flow regime, the corresponding motif shapes are scaled. The power law
distributions of fracture length implemented in the spatial fracture networks
yielded the same node degree distribution in the transformed networks. Two general spatial
networks are considered: random networks and networks with "hubness" properties
mimicking a spatial damage zone (both with power law distribution of fracture
length). In the first case, the fractures are embedded in uniformly distributed
fracture sets; the second case covers spatial fracture zones. We show
numerically that the abnormal change (transition) in permeability is controlled
by the hub growth rate. Also, comparing LBM results with the characteristic
mean length of transformed networks' links shows a reverse relationship between
the aforementioned parameters. In addition, the abnormalities in advection
through nodes are presented.
|
1107.4924
|
Discovering Attractive Products based on Influence Sets
|
cs.DB
|
Skyline queries have been widely used as a practical tool for multi-criteria
decision analysis and for applications involving preference queries. For
example, in a typical online retail application, skyline queries can help
customers select the most interesting products among a pool of available ones.
Recently, reverse skyline queries have been proposed, highlighting the
manufacturer's perspective, i.e. how to determine the expected buyers of a
given product. In this work we develop novel algorithms for two important
classes of queries involving customer preferences. We first propose a novel
algorithm, termed as RSA, for answering reverse skyline queries. We then
introduce a new type of query, namely the k-Most Attractive Candidates (k-MAC)
query. In this type of query, given a set of existing product specifications
P, a set of customer preferences C and a set of new candidate products Q, the
k-MAC query returns the set of k candidate products from Q that jointly
maximizes the total number of expected buyers, measured as the cardinality of
the union of individual reverse skyline sets (i.e., influence sets). Applying
existing approaches to solve this problem would require calculating the reverse
skyline set for each candidate, which is prohibitively expensive for large data
sets. We, thus, propose a batched algorithm for this problem and compare its
performance against a branch-and-bound variant that we devise. Both of these
algorithms use in their core variants of our RSA algorithm. Our experimental
study using both synthetic and real data sets demonstrates that our proposed
algorithms outperform existing, or naive solutions to our studied classes of
queries.
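Since the k-MAC objective is the cardinality of a union of influence sets, it is an instance of maximum coverage. A naive greedy sketch over precomputed influence sets (exactly the expensive precomputation the paper's RSA-based algorithms are designed to avoid) looks like:

```python
def greedy_k_mac(influence, k):
    """Greedy max-coverage over precomputed influence sets.

    influence: dict mapping candidate -> set of expected buyers
               (the candidate's reverse skyline / influence set).
    Returns the chosen candidates and the covered-buyer count.
    """
    chosen, covered = [], set()
    remaining = set(influence)
    for _ in range(min(k, len(remaining))):
        # Pick the candidate adding the most uncovered buyers.
        best = max(remaining, key=lambda c: len(influence[c] - covered))
        if not influence[best] - covered:
            break  # no candidate adds new buyers
        chosen.append(best)
        covered |= influence[best]
        remaining.remove(best)
    return chosen, len(covered)
```

Because the union-cardinality objective is monotone submodular, this greedy rule carries the usual (1 - 1/e) approximation guarantee for the coverage it achieves.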
|
1107.4935
|
Public Announcement Logic in Geometric Frameworks
|
cs.LO cs.GT cs.MA
|
In this paper we introduce public announcement logic in different geometric
frameworks. First, we consider topological models, and then extend our
discussion to a more expressive model, namely, subset space models.
Furthermore, we prove the completeness of public announcement logic in those
frameworks. Moreover, we apply our results to different issues: announcement
stabilization, backward induction and persistence.
|
1107.4937
|
Instantiation Schemes for Nested Theories
|
cs.AI
|
This paper investigates under which conditions instantiation-based proof
procedures can be combined in a nested way, in order to mechanically construct
new instantiation procedures for richer theories. Interesting applications in
the field of verification are emphasized, particularly for handling extensions
of the theory of arrays.
|
1107.4958
|
Efficient and Accurate Gaussian Image Filtering Using Running Sums
|
cs.CV
|
This paper presents a simple and efficient method to convolve an image with a
Gaussian kernel. The computation is performed in a constant number of
operations per pixel using running sums along the image rows and columns. We
investigate the error function used for kernel approximation and its relation
to the properties of the input signal. Based on natural image statistics we
propose a quadratic form kernel error function so that the output image l2
error is minimized. We apply the proposed approach to approximate the Gaussian
kernel by a linear combination of constant functions. This results in a very
efficient Gaussian filtering method. Our experiments show that the proposed
technique is faster than state of the art methods while preserving a similar
accuracy.
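The running-sums idea can be illustrated with the classic repeated-box approximation (a sketch only; the paper's method instead fits an l2-optimal linear combination of constant functions to the kernel): each box average costs a constant number of operations per sample via a prefix sum, and a few passes approach a Gaussian by the central limit theorem.

```python
import math
import numpy as np

def box_filter_1d(x, r):
    # O(1)-per-sample box average using a running (prefix) sum.
    c = np.concatenate(([0.0], np.cumsum(x, dtype=float)))
    n = len(x)
    i = np.arange(n)
    lo = np.clip(i - r, 0, n)          # window start (clipped at edges)
    hi = np.clip(i + r + 1, 0, n)      # window end, exclusive
    return (c[hi] - c[lo]) / (hi - lo)

def approx_gaussian_1d(x, sigma, passes=3):
    # m repeated box filters of width w have variance m*(w^2 - 1)/12,
    # so choose w to match sigma^2 (central-limit approximation).
    w = math.sqrt(12.0 * sigma * sigma / passes + 1.0)
    r = max(1, int(round((w - 1.0) / 2.0)))
    y = np.asarray(x, dtype=float)
    for _ in range(passes):
        y = box_filter_1d(y, r)
    return y
```

For images, the 1D filter is applied along rows and then columns, exploiting the separability of the Gaussian kernel.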
|
1107.4965
|
Polar codes for q-ary channels, q=2^r
|
cs.IT math.IT
|
We study polarization for nonbinary channels with input alphabet of size
q=2^r, r=2,3,... Using Arikan's polarizing kernel H_2, we prove that the virtual
channels that arise in the process of polarization converge to q-ary channels
with capacity 1,2,...,r bits, and that the total transmission rate approaches
the symmetric capacity of the channel. This leads to an explicit transmission
scheme for q-ary channels. The error probability of decoding using successive
cancellation behaves as exp(-N^\alpha), where N is the code length and \alpha
is any constant less than 0.5.
|
1107.4966
|
Lifted Graphical Models: A Survey
|
cs.AI cs.LG
|
This article presents a survey of work on lifted graphical models. We review
a general form for a lifted graphical model, a par-factor graph, and show how a
number of existing statistical relational representations map to this
formalism. We discuss inference algorithms, including lifted inference
algorithms, that efficiently compute the answers to probabilistic queries. We
also review work in learning lifted graphical models from data. It is our
belief that the need for statistical relational models (whether it goes by that
name or another) will grow in the coming decades, as we are inundated with data
which is a mix of structured and unstructured, with entities and relations
extracted in a noisy manner from text, and with the need to reason effectively
with this data. We hope that this synthesis of ideas from many different
research groups will provide an accessible starting point for new researchers
in this expanding field.
|
1107.4967
|
Normative design using inductive learning
|
cs.LO cs.AI cs.LG
|
In this paper we propose a use-case-driven iterative design methodology for
normative frameworks, also called virtual institutions, which are used to
govern open systems. Our computational model represents the normative framework
as a logic program under answer set semantics (ASP). By means of an inductive
logic programming approach, implemented using ASP, it is possible to synthesise
new rules and revise the existing ones. The learning mechanism is guided by the
designer who describes the desired properties of the framework through use
cases, comprising (i) event traces that capture possible scenarios, and (ii) a
state that describes the desired outcome. The learning process then proposes
additional rules, or changes to current rules, to satisfy the constraints
expressed in the use cases. Thus, the contribution of this paper is a process
for the elaboration and revision of a normative framework by means of a
semi-automatic and iterative process driven from specifications of
(un)desirable behaviour. The process integrates a novel and general methodology
for theory revision based on ASP.
|
1107.4969
|
An end-to-end machine learning system for harmonic analysis of music
|
cs.SD cs.AI cs.MM
|
We present a new system for simultaneous estimation of keys, chords, and bass
notes from music audio. It makes use of a novel chromagram representation of
audio that takes perception of loudness into account. Furthermore, it is fully
based on machine learning (instead of expert knowledge), such that it is
potentially applicable to a wider range of genres as long as training data is
available. As compared to other models, the proposed system is fast and memory
efficient, while achieving state-of-the-art performance.
|
1107.4985
|
Variational Gaussian Process Dynamical Systems
|
stat.ML cs.AI cs.CV math.PR
|
High dimensional time series are endemic in applications of machine learning
such as robotics (sensor data), computational biology (gene expression data),
vision (video sequences) and graphics (motion capture data). Practical
nonlinear probabilistic approaches to this data are required. In this paper we
introduce the variational Gaussian process dynamical system. Our work builds on
recent variational approximations for Gaussian process latent variable models
to allow for nonlinear dimensionality reduction simultaneously with learning a
dynamical prior in the latent space. The approach also allows for the
appropriate dimensionality of the latent space to be automatically determined.
We demonstrate the model on a human motion capture data set and a series of
high resolution video sequences.
|
1107.5000
|
An iterative feature selection method for GRNs inference by exploring
topological properties
|
cs.CV cs.AI cs.IT math.IT q-bio.MN
|
An important problem in bioinformatics is the inference of gene regulatory
networks (GRN) from temporal expression profiles. In general, the main
limitations faced by GRN inference methods are the small number of samples with
huge dimensionality and the noisy nature of the expression measurements. In
face of these limitations, alternatives are needed to get better accuracy on
the GRNs inference problem. This work addresses this problem by presenting an
alternative feature selection method that applies prior knowledge on its search
strategy, called SFFS-BA. The proposed search strategy is based on the
Sequential Floating Forward Selection (SFFS) algorithm, with the inclusion of a
scale-free (Barab\'asi-Albert) topology information in order to guide the
search process to improve inference. The proposed algorithm explores the
scale-free property by pruning the search space and using a power law as a
weight for reducing it. In this way, the search space traversed by the SFFS-BA
method combines a breadth-first search when the number of combinations is small
(<k> <= 2) with a depth-first search when the number of combinations becomes
explosive (<k> >= 3), being guided by the scale-free prior information.
Experimental results show that SFFS-BA provides better inference similarity
than SFS and SFFS while keeping the robustness of the SFS and SFFS methods,
thus presenting very good results.
|
1107.5108
|
Cooperative Estimation of 3D Target Motion via Networked Visual Motion
Observer
|
cs.SY math.OC
|
This paper investigates cooperative estimation of 3D target object motion for
visual sensor networks. In particular, we consider the situation where multiple
smart vision cameras see a group of target objects. The objective here is to
meet two requirements simultaneously: averaging for static objects and tracking
of moving target objects. For this purpose, we present a cooperative estimation
mechanism called networked visual motion observer. We then derive an upper
bound of the ultimate error between the actual average and the estimates
produced by the present networked estimation mechanism. Moreover, we also
analyze the tracking performance of the estimates with respect to moving target objects.
Finally the effectiveness of the networked visual motion observer is
demonstrated through simulation.
|
1107.5114
|
Fast and Scalable Analysis of Massive Social Graphs
|
cs.SI physics.soc-ph
|
Graph analysis is a critical component of applications such as online social
networks, protein interactions in biological networks, and Internet traffic
analysis. The arrival of massive graphs with hundreds of millions of nodes,
e.g. social graphs, presents a unique challenge to graph analysis applications.
Most of these applications rely on computing distances between node pairs,
which for large graphs can take minutes to compute using traditional algorithms
such as breadth-first search (BFS). In this paper, we study ways to enable
scalable graph processing on today's massive graphs. We explore the design
space of graph coordinate systems, a new approach that accurately approximates
node distances in constant time by embedding graphs into coordinate spaces. We
show that a hyperbolic embedding produces relatively low distortion error, and
propose Rigel, a hyperbolic graph coordinate system that lends itself to
efficient parallelization across a compute cluster. Rigel produces
significantly more accurate results than prior systems, and is naturally
parallelizable across compute clusters, allowing it to provide accurate results
for graphs up to 43 million nodes. Finally, we show that Rigel's functionality
can be easily extended to locate (near-) shortest paths between node pairs.
After a one-time preprocessing cost, Rigel answers node-distance queries in
tens of microseconds, and also produces shortest path results up to 18 times
faster than prior shortest-path systems with similar levels of accuracy.
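The constant-time query step of a graph coordinate system amounts to evaluating a closed-form hyperbolic distance on precomputed node coordinates. A sketch using the Poincare-ball metric (Rigel's actual embedding model and fitted curvature may differ):

```python
import math

def poincare_distance(u, v):
    # Hyperbolic distance between two points strictly inside the unit
    # ball (Poincare model). In a graph coordinate system, this value
    # approximates the shortest-path distance between the two embedded
    # nodes, so each query is O(dimension) rather than a full BFS.
    nu = sum(x * x for x in u)
    nv = sum(x * x for x in v)
    duv = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.acosh(1.0 + 2.0 * duv / ((1.0 - nu) * (1.0 - nv)))
```

The heavy lifting happens once, at embedding time, when coordinates are fitted so that these closed-form distances match sampled BFS distances; afterwards every node-pair query is a single formula evaluation.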
|
1107.5123
|
Achievable Secrecy Sum-Rate in a Fading MAC-WT with Power Control and
without CSI of Eavesdropper
|
cs.IT math.IT
|
We consider a two user fading Multiple Access Channel with a wire-tapper
(MAC-WT) where the transmitter has the channel state information (CSI) to the
intended receiver but not to the eavesdropper (eve). We provide an achievable
secrecy sum-rate with optimal power control. We next provide a secrecy sum-rate
with optimal power control and cooperative jamming (CJ). We then study an
achievable secrecy sum-rate obtained by employing an ON/OFF power control
scheme, which is more easily computable. We also employ CJ on top of this power
control scheme.
Results show that CJ boosts the secrecy sum-rate significantly even if we do
not know the CSI of the eve's channel. At high SNR, the secrecy sum-rate (with
CJ) without CSI of the eve exceeds the secrecy sum-rate (without CJ) with full
CSI of the eve.
|
1107.5186
|
Fast multi-scale edge-detection in medical ultrasound signals
|
cs.CV physics.med-ph
|
In this article we suggest a fast multi-scale edge-detection scheme for
medical ultrasound signals. The edge-detector is based on well-known properties
of the continuous wavelet transform. To achieve both good localization of
edges and detect only significant edges, we study the maxima-lines of the
wavelet transform. One can obtain the maxima-lines between two scales by
computing the wavelet transform at several intermediate scales. To reduce
computational effort and time, we suggest a time-scale filtering procedure
which uses only a few scales to connect modulus maxima across the time-scale
plane. The
design of this procedure is based on a study of maxima-lines corresponding to
edges typical for medical ultrasound signals. This study allows us to construct
an algorithm for medical ultrasound signals which meets the demand for speed,
but not at the expense of reliability. The edge-detection algorithm has been
applied to a large class of medical ultrasound signals, including tumour,
liver and artery images. Our results show that the proposed algorithm
effectively detects major features in such signals, including edges with low
contrast.
|
1107.5187
|
Solvability of the $H^\infty$ algebraic Riccati equation in Banach
algebras
|
math.OC cs.SY math.AP math.FA math.RA
|
Let $R$ be a commutative complex unital semisimple Banach algebra with the
involution $\cdot ^\star$. Sufficient conditions are given for the existence of
a stabilizing solution to the $H^\infty$ Riccati equation when the matricial
data has entries from $R$. Applications to spatially distributed systems are
discussed.
|
1107.5203
|
Sparse approximation property and stable recovery of sparse signals from
noisy measurements
|
cs.IT math.IT
|
In this paper, we introduce a sparse approximation property of order $s$ for
a measurement matrix ${\bf A}$: $$\|{\bf x}_s\|_2\le D \|{\bf A}{\bf x}\|_2+
\beta \frac{\sigma_s({\bf x})}{\sqrt{s}} \quad {\rm for\ all} \ {\bf x},$$
where ${\bf x}_s$ is the best $s$-sparse approximation of the vector ${\bf x}$
in $\ell^2$, $\sigma_s({\bf x})$ is the $s$-sparse approximation error of the
vector ${\bf x}$ in $\ell^1$, and $D$ and $\beta$ are positive constants. The
sparse approximation property for a measurement matrix can be thought of as a
weaker version of its restricted isometry property and a stronger version of
its null space property. In this paper, we show that the sparse approximation
property is an appropriate condition on a measurement matrix to consider stable
recovery of any compressible signal from its noisy measurements. In particular,
we show that any compressible signal can be stably recovered from its noisy
measurements via solving an $\ell^1$-minimization problem if the measurement
matrix has the sparse approximation property with $\beta\in (0,1)$, and
conversely the measurement matrix has the sparse approximation property with
$\beta\in (0,\infty)$ if any compressible signal can be stably recovered from
its noisy measurements via solving an $\ell^1$-minimization problem.
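For concreteness, the stable recovery guarantee takes the standard compressed sensing form (a sketch; the constants below depend only on $D$ and $\beta$, and their exact values are derived in the paper): if ${\bf b}={\bf A}{\bf x}+{\bf e}$ with $\|{\bf e}\|_2\le \epsilon$ and $\hat{\bf x}$ minimizes $\|{\bf z}\|_1$ subject to $\|{\bf A}{\bf z}-{\bf b}\|_2\le \epsilon$, then $$\|\hat{\bf x}-{\bf x}\|_2\le C_1 \epsilon + C_2 \frac{\sigma_s({\bf x})}{\sqrt{s}}.$$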
|
1107.5236
|
Submodular Optimization for Efficient Semi-supervised Support Vector
Machines
|
cs.LG cs.AI
|
In this work we present a quadratic programming approximation of the
Semi-Supervised Support Vector Machine (S3VM) problem, namely approximate
QP-S3VM, that can be efficiently solved using off-the-shelf optimization
packages. We prove that this approximate formulation establishes a relation
between the low density separation and the graph-based models of
semi-supervised learning (SSL) which is important to develop a unifying
framework for semi-supervised learning methods. Furthermore, we propose the
novel idea of representing SSL problems as submodular set functions and use
efficient submodular optimization algorithms to solve them. Using this new idea
we develop a representation of the approximate QP-S3VM as a maximization of a
submodular set function which makes it possible to optimize using efficient
greedy algorithms. We demonstrate that the proposed methods are accurate and
provide significant improvement in time complexity over the state of the art in
the literature.
|
1107.5241
|
Flooding Time in Opportunistic Networks under Power Law and Exponential
Inter-Contact Times
|
cs.SI
|
Performance bounds for opportunistic networks have been derived in a number
of recent papers for several key quantities, such as the expected delivery time
of a unicast message, or the flooding time (a measure of how fast information
spreads). However, to the best of our knowledge, none of the existing results
is derived under a mobility model which is able to reproduce the power
law+exponential tail dichotomy of the pairwise node inter-contact time
distribution which has been observed in traces of several real opportunistic
networks.
The contributions of this paper are two-fold: first, we present a simple
pairwise contact model -- called the Home-MEG model -- for opportunistic
networks based on the observation made in previous work that pairs of nodes in
the network tend to meet in very few, selected locations (home locations); this
contact model is shown to be able to faithfully reproduce the power
law+exponential tail dichotomy of inter-contact time. Second, we use the
Home-MEG model to analyze flooding time in opportunistic networks, presenting
asymptotic bounds on flooding time that assume different initial conditions for
the existence of opportunistic links.
Finally, our bounds provide some analytical evidence that the speed of
information spreading in opportunistic networks can be much faster than that
predicted by simple geometric mobility models.
|
1107.5242
|
ALPprolog --- A New Logic Programming Method for Dynamic Domains
|
cs.LO cs.AI
|
Logic programming is a powerful paradigm for programming autonomous agents in
dynamic domains, as witnessed by languages such as Golog and Flux. In this work
we present ALPprolog, an expressive, yet efficient, logic programming language
for the online control of agents that have to reason about incomplete
information and sensing actions.
|
1107.5266
|
Identifying Overlapping and Hierarchical Thematic Structures in Networks
of Scholarly Papers: A Comparison of Three Approaches
|
physics.soc-ph cs.DL cs.SI
|
We implemented three recently proposed approaches to the identification of
overlapping and hierarchical substructures in graphs and applied the
corresponding algorithms to a network of 492 information-science papers coupled
via their cited sources. The thematic substructures obtained and overlaps
produced by the three hierarchical cluster algorithms were compared to a
content-based categorisation, which we based on the interpretation of titles
and keywords. We defined sets of papers dealing with three topics located on
different levels of aggregation: h-index, webometrics, and bibliometrics. We
identified these topics with branches in the dendrograms produced by the three
cluster algorithms and compared the overlapping topics they detected with one
another and with the three pre-defined paper sets. We discuss the advantages
and drawbacks of applying the three approaches to paper networks in research
fields.
|
1107.5279
|
Information-theoretically Secure Regenerating Codes for Distributed
Storage
|
cs.IT cs.DC cs.NI math.IT
|
Regenerating codes are a class of codes for distributed storage networks that
provide reliability and availability of data, and also perform efficient node
repair. Another important aspect of a distributed storage network is its
security. In this paper, we consider a threat model where an eavesdropper may
gain access to the data stored in a subset of the storage nodes, and possibly
also, to the data downloaded during repair of some nodes. We provide explicit
constructions of regenerating codes that achieve information-theoretic secrecy
capacity in this setting.
|
1107.5348
|
Decision Making for Rapid Information Acquisition in the Reconnaissance
of Random Fields
|
cs.SY math.OC
|
Research into several aspects of robot-enabled reconnaissance of random
fields is reported. The work has two major components: the underlying theory of
information acquisition in the exploration of unknown fields and the results of
experiments on how humans use sensor-equipped robots to perform a simulated
reconnaissance exercise.
The theoretical framework reported herein extends work on robotic exploration
that has been reported by ourselves and others. Several new figures of merit
for evaluating exploration strategies are proposed and compared. Using concepts
from differential topology and information theory, we develop the theoretical
foundation of search strategies aimed at rapid discovery of topological
features (locations of critical points and critical level sets) of a priori
unknown differentiable random fields. The theory enables study of efficient
reconnaissance strategies in which the tradeoff between speed and accuracy can
be understood. The proposed approach to rapid discovery of topological features
has led in a natural way to the creation of parsimonious reconnaissance
routines that do not rely on any prior knowledge of the environment. The design
of topology-guided search protocols uses a mathematical framework that
quantifies the relationship between what is discovered and what remains to be
discovered. The quantification rests on an information theory inspired model
whose properties allow us to treat search as a problem in optimal information
acquisition. A central theme in this approach is that "conservative" and
"aggressive" search strategies can be precisely defined, and search decisions
regarding "exploration" vs. "exploitation" choices are informed by the rate at
which the information metric is changing.
|
1107.5349
|
Multi Layer Analysis
|
cs.CV cs.DS cs.LG q-bio.QM
|
This thesis presents a new methodology to analyze one-dimensional signals
through a new approach called Multi Layer Analysis, for short MLA. It also
provides some new insights on the relationship between one-dimensional signals
processed by MLA and tree kernels, test of randomness and signal processing
techniques. The MLA approach has a wide range of application to the fields of
pattern discovery and matching, computational biology and many other areas of
computer science and signal processing. This thesis also includes some
applications of this approach to real problems in biology and seismology.
|
1107.5354
|
Replicator Dynamics of Co-Evolving Networks
|
cs.GT cs.SI q-bio.PE
|
We propose a simple model of network co-evolution in a game-dynamical system
of interacting agents that play repeated games with their neighbors, and adapt
their behaviors and network links based on the outcome of those games. The
adaptation is achieved through a simple reinforcement learning scheme. We show
that the collective evolution of such a system can be described by
appropriately defined replicator dynamics equations. In particular, we suggest
an appropriate factorization of the agents' strategies that results in a
coupled system of equations characterizing the evolution of both strategies and
network structure, and illustrate the framework on two simple examples.
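The replicator dynamics the abstract refers to can be sketched for a plain two-strategy game, without the co-evolving network links of the paper's model. The hawk-dove payoffs and initial condition below are illustrative choices; the trajectory converges to the mixed equilibrium x_hawk = V/C = 0.5.

```python
def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator equation x_i' = x_i((Ax)_i - x.A.x)."""
    fitness = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    avg = sum(x[i] * fitness[i] for i in range(len(x)))     # population mean fitness
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(len(x))]

# Hawk-Dove payoffs with V = 2, C = 4: mixed equilibrium at x_hawk = V/C = 0.5.
A = [[-1.0, 2.0],
     [0.0, 1.0]]
x = [0.9, 0.1]                       # start hawk-heavy
for _ in range(5000):
    x = replicator_step(x, A)        # converges toward [0.5, 0.5]
```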
|
1107.5355
|
A Practical Approach to Polar Codes
|
cs.IT math.IT
|
In this paper, we study polar codes from a practical point of view. In
particular, we study concatenated polar codes and rate-compatible polar codes.
First, we propose a concatenation scheme including polar codes and Low-Density
Parity-Check (LDPC) codes. We will show that our proposed scheme outperforms
conventional concatenation schemes formed by LDPC and Reed-Solomon (RS) codes.
We then study two rate-compatible coding schemes using polar codes. We will see
that polar codes can be designed as universally capacity achieving
rate-compatible codes over a set of physically degraded channels. We also study
the effect of puncturing on polar codes to design rate-compatible codes.
|
1107.5387
|
Controlling wheelchairs by body motions: A learning framework for the
adaptive remapping of space
|
cs.RO cs.AI cs.NE
|
Learning to operate a vehicle is generally accomplished by forming a new
cognitive map between the body motions and extrapersonal space. Here, we
consider the challenge of remapping movement-to-space representations in
survivors of spinal cord injury, for the control of powered wheelchairs. Our
goal is to facilitate this remapping by developing interfaces between residual
body motions and navigational commands that exploit the degrees of freedom that
disabled individuals are most capable of coordinating. We present a new framework
for allowing spinal cord injured persons to control powered wheelchairs through
signals derived from their residual mobility. The main novelty of this approach
lies in substituting the more common joystick controllers of powered
wheelchairs with a sensor shirt. This allows the whole upper body of the user
to operate as an adaptive joystick. Considerations about learning and risks
have led us to develop a safe testing environment in 3D Virtual Reality. A
Personal Augmented Reality Immersive System (PARIS) allows us to analyse
learning skills and provide users with an adequate training to control a
simulated wheelchair through the signals generated by body motions in a safe
environment. We provide a description of the basic theory, of the development
phases and of the operation of the complete system. We also present preliminary
results illustrating the processing of the data and supporting the
feasibility of this approach.
|
1107.5448
|
Importance Sampling for Multiscale Diffusions
|
math.PR cs.SY math.OC
|
We construct importance sampling schemes for stochastic differential
equations with small noise and fast oscillating coefficients. Standard Monte
Carlo methods perform poorly for these problems in the small noise limit. With
multiscale processes there are additional complications, and indeed the
straightforward adaptation of methods for standard small noise diffusions will
not produce efficient schemes. Using the subsolution approach we construct
schemes and identify conditions under which the schemes will be asymptotically
optimal. Examples and simulation results are provided.
|
1107.5462
|
HyFlex: A Benchmark Framework for Cross-domain Heuristic Search
|
cs.AI
|
Automating the design of heuristic search methods is an active research field
within computer science, artificial intelligence and operational research. In
order to make these methods more generally applicable, it is important to
eliminate or reduce the role of the human expert in the process of designing an
effective methodology to solve a given computational search problem.
Researchers developing such methodologies are often constrained in the number
of problem domains on which to test their adaptive, self-configuring
algorithms, a limitation that can be explained by the inherent difficulty of
implementing the corresponding domain-specific software components.
This paper presents HyFlex, a software framework for the development of
cross-domain search methodologies. The framework features a common software
interface for dealing with different combinatorial optimisation problems, and
provides the algorithm components that are problem specific. In this way, the
algorithm designer does not require detailed knowledge of the problem domains,
and can thus concentrate his/her efforts on designing adaptive general-purpose
heuristic search algorithms. Four hard combinatorial problems are fully
implemented (maximum satisfiability, one dimensional bin packing, permutation
flow shop and personnel scheduling), each containing a varied set of instance
data (including real-world industrial applications) and an extensive set of
problem specific heuristics and search operators. The framework forms the basis
for the first International Cross-domain Heuristic Search Challenge (CHeSC),
and it is currently in use by the international research community. In summary,
HyFlex represents a valuable new benchmark of heuristic search generality,
with which adaptive cross-domain algorithms are being easily developed and
reliably compared.
|
1107.5469
|
A small world of citations? The influence of collaboration networks on
citation practices
|
physics.soc-ph cs.DL cs.SI
|
This paper examines the proximity of authors to those they cite using degrees
of separation in a co-author network, essentially using collaboration networks
to expand on the notion of self-citations. While the proportion of direct
self-citations (including co-authors of both citing and cited papers) is
relatively constant in time and across specialties in the natural sciences (10%
of citations) and the social sciences (20%), the same cannot be said for
citations to authors who are members of the co-author network. Differences
between fields and trends over time lie not only in the degree of co-authorship
which defines the large-scale topology of the collaboration network, but also
in the referencing practices within a given discipline, computed by defining a
propensity to cite at a given distance within the collaboration network.
Overall, there is little tendency to cite those nearby in the collaboration
network, excluding direct self-citations. By analyzing these social references,
we characterize the social capital of local collaboration networks in terms of
the knowledge production within scientific fields. These results have
implications for the long-standing debate over biases common to most types of
citation analysis, and for understanding citation practices across scientific
disciplines over the past 50 years. In addition, our findings have important
practical implications for the availability of 'arm's length' expert reviewers
of grant applications and manuscripts.
|
1107.5474
|
Selecting Attributes for Sport Forecasting using Formal Concept Analysis
|
cs.AI
|
In order to address complex systems, applying pattern recognition to their
evolution can play a key role in understanding their dynamics. Global patterns
are required to detect emergent concepts and trends, some of them of a
qualitative nature. Formal Concept Analysis (FCA) is a theory whose goal is to
discover and extract knowledge from qualitative data. It provides tools for
reasoning with implication bases (and association rules). Implications and
association rules are useful for reasoning about previously selected
attributes, providing a formal foundation for logical reasoning. In this paper
we analyse how to apply FCA reasoning to increase confidence in sports
betting by detecting temporal regularities in the data, and we apply it to
build a knowledge-based system for confidence reasoning.
|
1107.5520
|
Axioms for Rational Reinforcement Learning
|
cs.LG
|
We provide a formal, simple and intuitive theory of rational decision making
including sequential decisions that affect the environment. The theory has a
geometric flavor, which makes the arguments easy to visualize and understand.
Our theory is for complete decision makers, which means that they have a
complete set of preferences. Our main result shows that a complete rational
decision maker implicitly has a probabilistic model of the environment. We
give a countable version of this result that sheds light on the issue of
countable vs. finite additivity by showing how it depends on the geometry of
the space over which we have preferences. This is achieved by fruitfully
connecting rationality with the Hahn-Banach Theorem. The theory presented
here can be
viewed as a formalization and extension of the betting odds approach to
probability of Ramsey and De Finetti.
|
1107.5523
|
An Algebraic Approach for Decoding Spread Codes
|
cs.IT math.IT
|
In this paper we study spread codes: a family of constant-dimension codes for
random linear network coding. In other words, the codewords are full-rank
matrices of size (k x n) with entries in a finite field F_q. Spread codes are a
family of optimal codes with maximal minimum distance. We give a
minimum-distance decoding algorithm which requires O((n-k)k^3) operations over
an extension field F_{q^k}. Our algorithm is more efficient than the previous
ones in the literature, when the dimension k of the codewords is small with
respect to n. The decoding algorithm takes advantage of the algebraic structure
of the code, and it uses original results on minors of a matrix and on the
factorization of polynomials over finite fields.
|
1107.5528
|
Time Consistent Discounting
|
cs.AI cs.SY math.OC
|
A possibly immortal agent tries to maximise its summed discounted rewards
over time, where discounting is used to avoid infinite utilities and encourage
the agent to value current rewards more than future ones. Some commonly used
discount functions lead to time-inconsistent behavior where the agent changes
its plan over time. These inconsistencies can lead to very poor behavior. We
generalise the usual discounted utility model to one where the discount
function changes with the age of the agent. We then give a simple
characterisation of time-(in)consistent discount functions and show the
existence of a rational policy for an agent that knows its discount function is
time-inconsistent.
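One natural way to operationalise time-(in)consistency is to test whether the discount weights an agent uses at different ages are proportional to the weights its younger self assigned to the same calendar times. The sketch below applies this proportionality check — an assumption about the paper's exact characterisation, not a quotation of it — to geometric discounting (consistent) and to a hyperbolic discount function reused unchanged at every age (inconsistent).

```python
def is_time_consistent(d, horizon=10, ages=5, tol=1e-9):
    """Check whether d_t(k) is proportional to d_0(t + k) for every age t.

    d(t, k) is the weight an agent of age t puts on a reward k steps ahead.
    This is one natural formalisation; the paper's exact condition may differ.
    """
    for t in range(1, ages):
        # the ratio d_t(k) / d_0(t + k) must not depend on k
        ratios = [d(t, k) / d(0, t + k) for k in range(1, horizon)]
        if any(abs(r - ratios[0]) > tol for r in ratios):
            return False
    return True

geometric = lambda t, k: 0.9 ** k           # same geometric weights at every age
hyperbolic = lambda t, k: 1.0 / (1.0 + k)   # fixed hyperbolic weights at every age
```

For the geometric agent the ratio is $0.9^{-t}$, independent of $k$; for the fixed hyperbolic agent it is $(1+t+k)/(1+k)$, which depends on $k$, so the plan changes with age.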
|
1107.5531
|
Universal Prediction of Selected Bits
|
cs.LG cs.IT math.IT
|
Many learning tasks can be viewed as sequence prediction problems. For
example, online classification can be converted to sequence prediction with the
sequence being pairs of input/target data and where the goal is to correctly
predict the target data given input data and previous input/target pairs.
Solomonoff induction is known to solve the general sequence prediction problem,
but only if the entire sequence is sampled from a computable distribution. In
the case of classification and discriminative learning though, only the targets
need be structured (given the inputs). We show that the normalised version of
Solomonoff induction can still be used in this case, and more generally that it
can detect any recursive sub-pattern (regularity) within an otherwise
completely unstructured sequence. It is also shown that the unnormalised
version can fail to predict very simple recursive sub-patterns.
|
1107.5537
|
Asymptotically Optimal Agents
|
cs.AI cs.LG
|
Artificial general intelligence aims to create agents capable of learning to
solve arbitrary interesting problems. We define two versions of asymptotic
optimality and prove that no agent can satisfy the strong version while in some
cases, depending on discounting, there does exist a non-computable weak
asymptotically optimal agent.
|
1107.5541
|
Closed Form Secrecy Capacity of MIMO Wiretap Channels with Two Transmit
Antennas
|
cs.IT math.IT
|
A Gaussian multiple-input multiple-output (MIMO) wiretap channel model is
considered. The input is a two-antenna transmitter, while the outputs are the
legitimate receiver and an eavesdropper, both equipped with multiple antennas.
All channels are assumed to be known. The problem of obtaining the optimal
input covariance matrix that achieves secrecy capacity subject to a power
constraint is addressed, and a closed-form expression for the secrecy capacity
is obtained.
|
1107.5543
|
Coevolution of Network Structure and Content
|
cs.SI physics.soc-ph
|
As individuals communicate, their exchanges form a dynamic network. We
demonstrate, using time series analysis of communication in three online
settings, that network structure alone can be highly revealing of the diversity
and novelty of the information being communicated. Our approach uses both
standard and novel network metrics to characterize how unexpected a network
configuration is, and to capture a network's ability to conduct information. We
find that networks with a higher conductance in link structure exhibit higher
information entropy, while unexpected network configurations can be tied to
information novelty. We use a simulation model to explain the observed
correspondence between the evolution of a network's structure and the
information it carries.
|
1107.5605
|
Singular Perturbation Approximations for a Class of Linear Quantum
Systems
|
cs.SY math.OC quant-ph
|
This paper considers the use of singular perturbation approximations for a
class of linear quantum systems arising in the area of linear quantum optics.
The paper presents results on the physical realizability properties of the
approximate system arising from singular perturbation model reduction.
|
1107.5607
|
Low Frequency Approximation for a class of Linear Quantum Systems using
Cascade Cavity Realization
|
cs.SY math.OC quant-ph
|
This paper presents a method for approximating a class of complex transfer
function matrices corresponding to physically realizable complex linear quantum
systems. The class of linear quantum systems under consideration includes
interconnections of passive optical components such as cavities,
beam-splitters, phase-shifters and interferometers. This approximation method
builds on a previous result for cascade realization and gives good
approximations at low frequencies.
|
1107.5615
|
Lagrange Stabilization of Pendulum-like Systems: A Pseudo H-infinity
Control Approach
|
cs.SY math.OC
|
This paper studies the Lagrange stabilization of a class of nonlinear systems
whose linear part has a singular system matrix and which have multiple periodic
(in state) nonlinearities. Both state and output feedback Lagrange
stabilization problems are considered. The paper develops a pseudo H-infinity
control theory to solve these stabilization problems. In a similar fashion to
the Strict Bounded Real Lemma in classic H-infinity control theory, a Pseudo
Strict Bounded Real Lemma is established for systems with a single unstable
pole. Sufficient conditions for the synthesis of state feedback and output
feedback controllers are given to ensure that the closed-loop system is pseudo
strict bounded real. The pseudo H-infinity control approach is applied to solve
state feedback and output feedback Lagrange stabilization problems for
nonlinear systems with multiple nonlinearities. An example is given to
illustrate the proposed method.
|
1107.5620
|
A bounded confidence approach to understanding user participation in
peer production systems
|
physics.soc-ph cs.CY cs.SI
|
Commons-based peer production seems to rest upon a paradox: although users
produce all the content, participation is commonly voluntary and largely
incentivized by the achievement of the project's goals. This means that users
have to coordinate their actions and goals in order to keep themselves from
leaving.
small groups of highly committed, like-minded individuals, little is known
about large-scale, heterogeneous projects, such as Wikipedia.
In this contribution we present a model of peer production in a large online
community. The model features a dynamic population of bounded confidence users,
and an endogenous process of user departure. Using global sensitivity analysis,
we identify the most important parameters affecting the lifespan of user
participation. We find that the model presents two distinct regimes, and that
the shift between them is governed by the bounded confidence parameter. For low
values of this parameter, users depart almost immediately. For high values,
however, the model produces a bimodal distribution of user lifespan. These
results suggest that user participation in online communities could be
explained in terms of group consensus, and provide a novel connection between
models of opinion dynamics and commons-based peer production.
|
1107.5637
|
Quantization of Binary-Input Discrete Memoryless Channels
|
cs.IT math.IT
|
The quantization of the output of a binary-input discrete memoryless channel
to a smaller number of levels is considered. An algorithm which finds an
optimal quantizer, in the sense of maximizing mutual information between the
channel input and the quantizer output is given. This result holds for
arbitrary channels, in contrast to previous results for restricted channels or
a restricted number of quantizer outputs. In the worst case, the algorithm's
complexity is cubic, $O(M^3)$, in the number of channel outputs $M$. Optimality is
proved using the theorem of Burshtein, Della Pietra, Kanevsky, and N\'adas for
mappings which minimize average impurity for classification and regression
trees.
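For small alphabets the same optimum can be found by brute force over contiguous partitions of the outputs (sorted by likelihood ratio), which is how the sketch below proceeds; the paper's own algorithm reaches it more efficiently. The toy 4-output channel is an illustrative choice, not taken from the paper.

```python
from itertools import combinations
from math import log2

def mutual_info(p_z_given_x, px=(0.5, 0.5)):
    """I(X;Z) for a binary input X with output conditionals p_z_given_x[x][z]."""
    K = len(p_z_given_x[0])
    pz = [sum(px[x] * p_z_given_x[x][z] for x in range(2)) for z in range(K)]
    return sum(px[x] * p_z_given_x[x][z] * log2(p_z_given_x[x][z] / pz[z])
               for x in range(2) for z in range(K)
               if p_z_given_x[x][z] > 0)

def best_contiguous_quantizer(p_y_given_x, K):
    """Brute force over contiguous partitions of the M outputs into K groups."""
    M = len(p_y_given_x[0])
    best_I, best_bounds = -1.0, None
    for cuts in combinations(range(1, M), K - 1):
        bounds = [0, *cuts, M]
        q = [[sum(row[bounds[g]:bounds[g + 1]]) for g in range(K)]
             for row in p_y_given_x]
        I = mutual_info(q)
        if I > best_I:
            best_I, best_bounds = I, bounds
    return best_I, best_bounds

# Toy binary-input channel with M = 4 outputs, already sorted by likelihood ratio.
P = [[0.4, 0.3, 0.2, 0.1],
     [0.1, 0.2, 0.3, 0.4]]
I_best, bounds = best_contiguous_quantizer(P, K=2)
```

For this symmetric channel the optimal 2-level quantizer merges {y0, y1} and {y2, y3}, which reduces the channel to a BSC with crossover probability 0.3.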
|
1107.5638
|
Model Based Synthesis of Control Software from System Level Formal
Specifications
|
cs.SE cs.SY
|
Many Embedded Systems are indeed Software Based Control Systems, that is
control systems whose controller consists of control software running on a
microcontroller device. This motivates investigation on Formal Model Based
Design approaches for automatic synthesis of embedded systems control software.
We present an algorithm, along with a tool QKS implementing it, that from a
formal model (as a Discrete Time Linear Hybrid System) of the controlled system
(plant), implementation specifications (that is, number of bits in the
Analog-to-Digital, AD, conversion) and System Level Formal Specifications (that
is, safety and liveness requirements for the closed loop system) returns
correct-by-construction control software that has a Worst Case Execution Time
(WCET) linear in the number of AD bits and meets the given specifications.
We show feasibility of our approach by presenting experimental results on
using it to synthesize control software for a buck DC-DC converter, a widely
used mixed-mode analog circuit, and for the inverted pendulum.
|
1107.5645
|
Minimization of Storage Cost in Distributed Storage Systems with Repair
Consideration
|
cs.IT cs.DC math.IT
|
In a distributed storage system, the storage costs of different storage
nodes, in general, can be different. How to store a file in a given set of
storage nodes so as to minimize the total storage cost is investigated. By
analyzing the min-cut constraints of the information flow graph, the feasible
region of the storage capacities of the nodes can be determined. The storage
cost minimization can then be reduced to a linear programming problem, which
can be readily solved. Moreover, the tradeoff between storage cost and
repair-bandwidth is established.
|
1107.5646
|
Temporal motifs in time-dependent networks
|
physics.data-an cs.SI physics.soc-ph
|
Temporal networks are commonly used to represent systems where connections
between elements are active only for restricted periods of time, such as
networks of telecommunication, neural signal processing, biochemical reactions
and human social interactions. We introduce the framework of temporal motifs to
study the mesoscale topological-temporal structure of temporal networks in
which the events of nodes do not overlap in time. Temporal motifs are classes
of similar event sequences, where the similarity refers not only to topology
but also to the temporal order of the events. We provide a mapping from event
sequences to colored directed graphs that enables an efficient algorithm for
identifying temporal motifs. We discuss some aspects of temporal motifs,
including causality and null models, and present basic statistics of temporal
motifs in a large mobile call network.
|
1107.5654
|
Interest-Based vs. Social Person-Recommenders in Social Networking
Platforms
|
cs.SI cs.CY physics.soc-ph
|
Social network based approaches to person recommendations are compared to
interest based approaches with the help of an empirical study on a large German
social networking platform. We assess and compare the performance of different
basic variants of the two approaches by precision / recall based performance
with respect to reproducing known friendship relations and by an empirical
questionnaire based study. In accordance with expectations, the results show
that interest based person recommenders are able to produce more novel
recommendations while performing less well with respect to friendship
reproduction. With respect to the users' assessment of recommendation
quality, all approaches perform comparably well, while combined
social-interest-based variants are slightly ahead in performance. The overall
results qualify these combined approaches as a good compromise.
|
1107.5661
|
On the Impact of Random Index-Partitioning on Index Compression
|
cs.IR
|
The performance of processing search queries depends heavily on the stored
index size. Accordingly, considerable research efforts have been devoted to the
development of efficient compression techniques for inverted indexes. Roughly,
index compression relies on two factors: the ordering of the indexed documents,
which strives to position similar documents in proximity, and the encoding of
the inverted lists that result from the ordered stream of documents. Large
commercial search engines index tens of billions of pages of the ever-growing
Web. The sheer size of their indexes dictates the distribution of documents
among thousands of servers in a scheme called local index-partitioning, such
that each server indexes only several million pages. Due to engineering and
runtime performance considerations, random distribution of documents to servers
is common. However, random index-partitioning among many servers adversely
impacts the resulting index sizes, as it decreases the effectiveness of
document ordering schemes. We study the impact of random index-partitioning on
document ordering schemes. We show that index-partitioning decreases the
aggregated size of the inverted lists logarithmically with the number of
servers, when documents within each server are randomly reordered. On the other
hand, the aggregated partitioned index size increases logarithmically with the
number of servers, when state-of-the-art document ordering schemes, such as
lexical URL sorting and clustering with TSP, are applied. Finally, we justify
the common practice of randomly distributing documents to servers, as we
qualitatively show that despite its ill-effects on the ensuing compression, it
decreases key factors in distributed query evaluation time by an order of
magnitude as compared with partitioning techniques that compress better.
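The compression effect described above can be illustrated with a toy experiment (not the paper's methodology): gap-encode one sorted posting list with a simple variable-byte code, then randomly partition it across servers and observe that the aggregate size grows because each server's gaps widen. All parameters below (corpus size, list length, server count) are illustrative.

```python
import random

def vbyte_len(gap):
    """Bytes used by a simple variable-byte code (7 payload bits per byte)."""
    n = 1
    while gap >= 128:
        gap //= 128
        n += 1
    return n

def index_size(doc_ids):
    """Total bytes needed to gap-encode one sorted posting list."""
    prev, total = 0, 0
    for d in sorted(doc_ids):
        total += vbyte_len(d - prev)
        prev = d
    return total

random.seed(1)
postings = sorted(random.sample(range(1_000_000), 20_000))
single = index_size(postings)        # one server: mean gap ~50, mostly 1-byte codes

servers = 16                         # random local index-partitioning
shards = [[] for _ in range(servers)]
for d in postings:
    shards[random.randrange(servers)].append(d)
partitioned = sum(index_size(s) for s in shards)   # mean gap ~800, mostly 2 bytes
```

With 16 servers the per-server lists are 16x sparser in docID space, so most gaps cross the one-byte boundary and the aggregated index grows substantially.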
|
1107.5671
|
Automatic Network Reconstruction using ASP
|
cs.LG
|
Building biological models by inferring functional dependencies from
experimental data is an important issue in Molecular Biology. To relieve the
biologist from this traditionally manual process, various approaches have been
proposed to increase the degree of automation. However, available approaches
often yield a single model only, rely on specific assumptions, and/or use
dedicated, heuristic algorithms that are intolerant to changing circumstances
or requirements in view of the rapid progress made in Biotechnology. Our aim
is to provide a declarative solution to the problem by appeal to Answer Set
Programming (ASP), overcoming these difficulties. We build upon an existing
approach to Automatic Network Reconstruction proposed by some of the authors.
This approach has firm mathematical foundations and is well suited for ASP
due to its combinatorial flavor, providing a characterization of all models
explaining a set of experiments. The usage of ASP has several benefits over
the existing heuristic algorithms. First, it is declarative and thus
transparent for biological experts. Second, it is elaboration tolerant and
thus allows for easy exploration and incorporation of biological constraints.
Third, it allows for exploring the entire space of possible models. Finally,
our approach offers excellent performance, matching existing, special-purpose
systems.
|
1107.5676
|
Structural Analysis of Laplacian Spectral Properties of Large-Scale
Networks
|
math.OC cs.CE cs.DM cs.SI cs.SY physics.data-an physics.soc-ph
|
Using methods from algebraic graph theory and convex optimization, we study
the relationship between local structural features of a network and spectral
properties of its Laplacian matrix. In particular, we derive expressions for
the so-called spectral moments of the Laplacian matrix of a network in terms of
a collection of local structural measurements. Furthermore, we propose a series
of semidefinite programs to compute bounds on the spectral radius and the
spectral gap of the Laplacian matrix from a truncated sequence of Laplacian
spectral moments. Our analysis shows that the Laplacian spectral moments and
spectral radius are strongly constrained by local structural features of the
network. On the other hand, we illustrate how local structural features are
usually not enough to estimate the Laplacian spectral gap.
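The spectral moments in question can be computed directly as normalized traces of powers of the Laplacian. The sketch below checks the first two moments against their local-structure expressions for a simple graph ($m_1$ equals the average degree; $m_2 = (\sum_i d_i^2 + \sum_i d_i)/n$) on a small path graph; the paper's semidefinite-programming bounds are not reproduced here.

```python
def laplacian(edges, n):
    """Combinatorial Laplacian L = D - A as a nested list."""
    L = [[0] * n for _ in range(n)]
    for i, j in edges:
        L[i][i] += 1; L[j][j] += 1
        L[i][j] -= 1; L[j][i] -= 1
    return L

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def spectral_moment(L, k):
    """k-th spectral moment m_k = trace(L^k) / n."""
    P = L
    for _ in range(k - 1):
        P = matmul(P, L)
    return sum(P[i][i] for i in range(len(L))) / len(L)

# Path graph on 3 nodes: degrees (1, 2, 1), Laplacian eigenvalues {0, 1, 3}.
L = laplacian([(0, 1), (1, 2)], 3)
m1 = spectral_moment(L, 1)   # average degree: 4/3
m2 = spectral_moment(L, 2)   # (sum d_i^2 + sum d_i) / n = (6 + 4) / 3
```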
|
1107.5708
|
Perfect Codes for Uniform Chains Poset Metrics
|
cs.IT math.CO math.IT
|
The class of poset metrics is very large and contains some interesting
families of metrics. A family of metrics, based on posets which are formed from
disjoint chains which have the same size, is examined. A necessary and
sufficient condition, for the existence of perfect single-error-correcting
codes for such poset metrics, is proved.
|
1107.5728
|
The network of global corporate control
|
q-fin.GN cs.SI physics.soc-ph
|
The structure of the control network of transnational corporations affects
global market competition and financial stability. So far, only small national
samples were studied and there was no appropriate methodology to assess control
globally. We present the first investigation of the architecture of the
international ownership network, along with the computation of the control held
by each global player. We find that transnational corporations form a giant
bow-tie structure and that a large portion of control flows to a small
tightly-knit core of financial institutions. This core can be seen as an
economic "super-entity" that raises new important issues both for researchers
and policy makers.
|
1107.5730
|
On the Role of Diversity in Sparsity Estimation
|
cs.IT math.IT
|
A major challenge in sparsity pattern estimation is that small modes are
difficult to detect in the presence of noise. This problem is alleviated if one
can observe samples from multiple realizations of the nonzero values for the
same sparsity pattern. We will refer to this as "diversity". Diversity comes at
a price, however, since each new realization adds new unknown nonzero values,
thus increasing uncertainty. In this paper, upper and lower bounds on joint
sparsity pattern estimation are derived. These bounds, which improve upon
existing results even in the absence of diversity, illustrate key tradeoffs
between the number of measurements, the accuracy of estimation, and the
diversity. It is shown, for instance, that diversity introduces a tradeoff
between the uncertainty in the noise and the uncertainty in the nonzero values.
Moreover, it is shown that the optimal amount of diversity significantly
improves the behavior of the estimation problem for both optimal and
computationally efficient estimators.
|
1107.5742
|
Complex Optimization in Answer Set Programming
|
cs.LO cs.AI
|
Preference handling and optimization are indispensable means for addressing
non-trivial applications in Answer Set Programming (ASP). However, their
implementation becomes difficult whenever they bring about a significant
increase in computational complexity. As a consequence, existing ASP systems do
not offer complex optimization capacities, supporting, for instance,
inclusion-based minimization or Pareto efficiency. Rather, such complex
criteria are typically addressed by resorting to dedicated modeling techniques,
like saturation. Unlike the ease of common ASP modeling, however, these
techniques are rather involved and hardly usable by ASP laymen. We address this
problem by developing a general implementation technique by means of
meta-programming, thus reusing existing ASP systems to capture various forms of
qualitative preferences among answer sets. In this way, complex preferences and
optimization capacities become readily available for ASP applications.
|
1107.5743
|
NEMO: Extraction and normalization of organization names from PubMed
affiliation strings
|
cs.CL
|
We propose NEMO, a system for extracting organization names in the
affiliation and normalizing them to a canonical organization name. Our parsing
process involves multi-layered rule matching with multiple dictionaries. The
system achieves more than 98% f-score in extracting organization names. Our
normalization process involves clustering based on local sequence alignment
metrics and local learning based on finding connected components. A
high precision was also observed in normalization. NEMO is the missing link in
associating each biomedical paper and its authors to an organization name in
its canonical form and the Geopolitical location of the organization. This
research could potentially help in analyzing large social networks of
organizations for landscaping a particular topic, improving performance of
author disambiguation, adding weak links in the co-author network of authors,
augmenting NLM's MARS system for correcting errors in OCR output of affiliation
field, and automatically indexing the PubMed citations with the normalized
organization name and country. Our system is provided as a graphical user
interface available for download along with this paper.
|
1107.5744
|
BioSimplify: an open source sentence simplification engine to improve
recall in automatic biomedical information extraction
|
cs.CL
|
BioSimplify is an open source tool written in Java that introduces and
facilitates the use of a novel model for sentence simplification tuned for
automatic discourse analysis and information extraction (as opposed to sentence
simplification for improving human readability). The model is based on a
"shot-gun" approach that produces many different (simpler) versions of the
original sentence by combining variants of its constituent elements. This tool
is optimized for processing biomedical scientific literature such as the
abstracts indexed in PubMed. We tested our tool's impact on the task of
protein-protein interaction (PPI) extraction: it improved the f-score of the
PPI tool by around 7%, with an improvement in recall of around 20%. The
BioSimplify tool and test corpus
can be downloaded from https://biosimplify.sourceforge.net.
|
1107.5752
|
An Effective Approach to Biomedical Information Extraction with Limited
Training Data
|
cs.CL
|
Overall, the two main contributions of this work include the application of
sentence simplification to association extraction as described above, and the
use of distributional semantics for concept extraction. The proposed work on
concept extraction amalgamates for the first time two diverse research areas:
distributional semantics and information extraction. This approach retains all
the advantages offered by other semi-supervised machine learning systems, and,
unlike other proposed semi-supervised approaches, it can be used on top of
different basic frameworks and algorithms.
http://gradworks.umi.com/34/49/3449837.html
|
1107.5766
|
Information, Utility & Bounded Rationality
|
cs.AI
|
Perfectly rational decision-makers maximize expected utility, but crucially
ignore the resource costs incurred when determining optimal actions. Here we
employ an axiomatic framework for bounded rational decision-making based on a
thermodynamic interpretation of resource costs as information costs. This leads
to a variational "free utility" principle akin to thermodynamical free energy
that trades off utility and information costs. We show that bounded optimal
control solutions can be derived from this variational principle, which leads
in general to stochastic policies. Furthermore, we show that risk-sensitive and
robust (minimax) control schemes fall out naturally from this framework if the
environment is considered as a bounded rational and perfectly rational
opponent, respectively. When resource costs are ignored, the maximum expected
utility principle is recovered.
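The interpolation between bounded and perfect rationality can be made concrete in a discrete toy setting. A minimal sketch (the function names, the discrete action set, and the parameter beta as inverse information cost are illustrative assumptions, not the paper's notation): maximizing E_p[U] - (1/beta) * KL(p || p0) over policies p yields a Boltzmann-weighted prior, which approaches the prior for small beta and the expected-utility maximizer for large beta.

```python
import numpy as np

def bounded_rational_policy(utilities, prior, beta):
    """Optimal policy for the free-utility objective
    E_p[U] - (1/beta) * KL(p || prior): a Boltzmann-weighted prior."""
    w = prior * np.exp(beta * utilities)
    return w / w.sum()

U = np.array([1.0, 0.5, 0.0])   # utilities of three actions
p0 = np.ones(3) / 3             # uniform prior policy

# Small beta (high information cost): policy stays close to the prior.
soft = bounded_rational_policy(U, p0, beta=0.1)
# Large beta (resource costs ignored): mass concentrates on the
# max-utility action, recovering maximum expected utility.
hard = bounded_rational_policy(U, p0, beta=100.0)
assert hard.argmax() == U.argmax()
```

The stochasticity of the optimal policy at finite beta is exactly the "stochastic policies" the abstract refers to.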
|
1107.5774
|
Carleman Estimate for Stochastic Parabolic Equations and Inverse
Stochastic Parabolic Problems
|
math.OC cs.SY
|
In this paper, we establish a global Carleman estimate for stochastic
parabolic equations. Based on this estimate, we solve two inverse problems for
stochastic parabolic equations. One is concerned with a determination problem
of the history of a stochastic heat process through the observation at the
final time $T$, for which we obtain a conditional stability estimate. The other
is an inverse source problem with observation on the lateral boundary. We
derive the uniqueness of the source.
|
1107.5782
|
Codes as fractals and noncommutative spaces
|
cs.IT math.IT
|
We consider the CSS algorithm relating self-orthogonal classical linear codes
to q-ary quantum stabilizer codes and we show that to such a pair of a
classical and a quantum code one can associate geometric spaces constructed
using methods from noncommutative geometry, arising from rational
noncommutative tori and finite abelian group actions on Cuntz algebras and
fractals associated to the classical codes.
|
1107.5806
|
On Computing a Function of Correlated Sources
|
cs.IT math.IT
|
A receiver wants to compute a function f of two correlated sources X and Y
and side information Z. What is the minimum number of bits that needs to be
communicated by each transmitter?
In this paper, we derive inner and outer bounds to the rate region of this
problem which coincide in the cases where f is partially invertible and where
the sources are independent given the side information.
These rate regions point to an important difference with the single source
case. Whereas for the latter it is sufficient to consider independent sets of
some suitable characteristic graph, for multiple sources such a restriction is
suboptimal and multisets are necessary.
|