| id | title | categories | abstract |
|---|---|---|---|
1107.2900
|
Network Congestion Control with Markovian Multipath Routing
|
cs.NI cs.SY math.OC
|
In this paper we consider an integrated model for TCP/IP protocols with
multipath routing. The model combines a Network Utility Maximization for rate
control based on end-to-end queuing delays, with a Markovian Traffic
Equilibrium for routing based on total expected delays. We prove the existence
of a unique equilibrium state which is characterized as the solution of an
unconstrained strictly convex program. A distributed algorithm for solving this
optimization problem is proposed, with a brief discussion of how it can be
implemented by adapting the current Internet protocols.
|
1107.2972
|
An MCMC Approach to Universal Lossy Compression of Analog Sources
|
cs.IT math.IT
|
Motivated by the Markov chain Monte Carlo (MCMC) approach to the compression
of discrete sources developed by Jalali and Weissman, we propose a lossy
compression algorithm for analog sources that relies on a finite reproduction
alphabet, which grows with the input length. The algorithm achieves, in an
appropriate asymptotic sense, the optimum Shannon theoretic tradeoff between
rate and distortion, universally for stationary ergodic continuous amplitude
sources. We further propose an MCMC-based algorithm that resorts to a reduced
reproduction alphabet when such reduction does not prevent achieving the
Shannon limit. The latter algorithm is advantageous due to its reduced
complexity and improved rates of convergence when employed on sources with a
finite and small optimum reproduction alphabet.
|
1107.2973
|
Quantum Master Equation and Filter for Systems Driven by Fields in a
Single Photon State
|
quant-ph cs.SY math.OC
|
The aim of this paper is to determine quantum master and filter equations for
systems coupled to continuous-mode single photon fields. The system and field
are described using a quantum stochastic unitary model, where the
continuous-mode single photon state for the field is determined by a wavepacket
pulse shape. The master equation is derived from this model and is given in
terms of a system of coupled equations. The output field carries information
about the system from the scattered photon, and is continuously monitored. The
quantum filter is determined with the aid of an embedding of the system into a
larger system, and is given by a system of coupled stochastic differential
equations. An example is provided to illustrate the main results.
|
1107.2974
|
Quantum Filtering for Systems Driven by Fields in Single Photon States
and Superposition of Coherent States using Non-Markovian Embeddings
|
quant-ph cs.SY math.OC
|
The purpose of this paper is to determine quantum master and filter equations
for systems coupled to fields in certain non-classical continuous-mode states.
Specifically, we consider two types of field states (i) single photon states,
and (ii) superpositions of coherent states. The system and field are described
using a quantum stochastic unitary model. Master equations are derived from
this model and are given in terms of systems of coupled equations. The output
field carries information about the system, and is continuously monitored. The
quantum filters are determined with the aid of an embedding of the system into
a larger non-Markovian system, and are given by a system of coupled stochastic
differential equations.
|
1107.2976
|
Quantum Filtering (Quantum Trajectories) for Systems Driven by Fields in
Single Photon States and Superposition of Coherent States
|
quant-ph cs.SY math.OC
|
We derive the stochastic master equations, that is to say, quantum filters,
and master equations for an arbitrary quantum system probed by a
continuous-mode bosonic input field in two types of non-classical states.
Specifically, we consider the cases where the state of the input field is a
superposition or combination of: (1) a continuous-mode single photon wave
packet and vacuum, and (2) any number of continuous-mode coherent states.
|
1107.2984
|
An Introductory Review of Information Theory in the Context of
Computational Neuroscience
|
cs.IT math.IT
|
This paper introduces several fundamental concepts in information theory from
the perspective of their origins in engineering. Understanding such concepts is
important in neuroscience for two reasons. First, simply applying formulae from
information theory without understanding the assumptions behind their
definitions can lead to erroneous results and conclusions. Second, this
century will see a convergence of information theory and neuroscience;
information theory will expand its foundations to incorporate more
comprehensively biological processes thereby helping reveal how neuronal
networks achieve their remarkable information processing abilities.
|
1107.2997
|
An Ontology-driven Framework for Supporting Complex Decision Process
|
cs.AI
|
The study proposes a framework, the ONTOlogy-based Group Decision Support
System (ONTOGDSS), for decision processes that exhibit a complex structure of
the decision problem and decision group. It is capable of reducing the
complexity of the problem structure and group relations. The system allows
decision makers to participate in group decision-making through the web
environment, via the ontology relation. It facilitates the management of the
decision process as a whole, from criteria generation, alternative evaluation,
and opinion interaction to decision aggregation. The ontology structure
embedded in ONTOGDSS provides important formal description features that
facilitate decision analysis and verification. It examines the software
architecture, the selection methods, the decision path, etc. Finally, the
ontology application of this system is illustrated with a specific real case to
demonstrate its potential for decision-making development.
|
1107.3033
|
The Expected Order of Saturated RNA Secondary Structures
|
math.CO cs.IT math.IT
|
We show that the expected order of saturated RNA secondary structures of size
$n$ is $\log_4 n\,(1+O(\frac{\log_2 n}{n}))$, when we select the saturated secondary
structure uniformly at random. Furthermore, the order of saturated secondary
structures is sharply concentrated around its mean. As a consequence saturated
structures and structures in the traditional model behave the same with respect
to the expected order. Thus we may conclude that the traditional model has
already drawn the right picture and conclusions inferred from it with respect
to the order (the overall shape) of a structure remain valid even if enforcing
saturation (at least in expectation).
|
1107.3059
|
From Small-World Networks to Comparison-Based Search
|
cs.LG cs.DS cs.IT cs.SI math.IT stat.ML
|
The problem of content search through comparisons has recently received
considerable attention. In short, a user searching for a target object
navigates through a database in the following manner: the user is asked to
select the object most similar to her target from a small list of objects. A
new object list is then presented to the user based on her earlier selection.
This process is repeated until the target is included in the list presented, at
which point the search terminates. This problem is known to be strongly related
to the small-world network design problem.
However, contrary to prior work, which focuses on cases where objects in the
database are equally popular, we consider here the case where the demand for
objects may be heterogeneous. We show that, under heterogeneous demand, the
small-world network design problem is NP-hard. Given the above negative result,
we propose a novel mechanism for small-world design and provide an upper bound
on its performance under heterogeneous demand. The above mechanism has a
natural equivalent in the context of content search through comparisons, and we
establish both an upper bound and a lower bound for the performance of this
mechanism. These bounds are intuitively appealing, as they depend on the
entropy of the demand as well as its doubling constant, a quantity capturing
the topology of the set of target objects. They also illustrate interesting
connections between comparison-based search to classic results from information
theory. Finally, we propose an adaptive learning algorithm for content search
that meets the performance guarantees achieved by the above mechanisms.
|
1107.3087
|
Non-equilibrium Information Envelopes and the
Capacity-Delay-Error-Tradeoff of Source Coding
|
cs.PF cs.IT math.IT
|
This paper develops an envelope-based approach to establish a link between
information and queueing theory. Unlike classical, equilibrium information
theory, information envelopes focus on the dynamics of sources and coders,
using functions of time that bound the number of bits generated. In the limit
the information envelopes converge to the average behavior and recover the
entropy of a source and the average codeword length of a coder, respectively. In
contrast, on short time scales and for sources with memory it is shown that
large deviations from known equilibrium results occur with non-negligible
probability. These can cause significant network delays. Compared to well-known
traffic models from queueing theory, information envelopes consider the
functioning of information sources and coders, avoiding a priori assumptions,
such as exponential traffic, or empirical, trace-based traffic models. Using
results from the stochastic network calculus, the envelopes yield a
characterization of the operating points of source coders by the triplet of
capacity, delay, and error. In the limit, assuming an optimal coder the
required capacity approaches the entropy with arbitrarily small probability of
error if infinitely large delays are permitted. We derive a corresponding
characterization of channels and prove that the model has the desirable
property of additivity, which allows analyzing coders and channels separately.
|
1107.3090
|
On the Computational Complexity of Stochastic Controller Optimization in
POMDPs
|
cs.CC cs.LG cs.SY math.OC
|
We show that the problem of finding an optimal stochastic 'blind' controller
in a Markov decision process is an NP-hard problem. The corresponding decision
problem is NP-hard, in PSPACE, and SQRT-SUM-hard, hence placing it in NP would
imply breakthroughs in long-standing open problems in computer science. Our
result establishes that the more general problem of stochastic controller
optimization in POMDPs is also NP-hard. Nonetheless, we outline a special case
that is convex and admits efficient global solutions.
|
1107.3099
|
Algorithm for Optimal Mode Scheduling in Switched Systems
|
cs.SY math.OC
|
This paper considers the problem of computing the schedule of modes in a
switched dynamical system, that minimizes a cost functional defined on the
trajectory of the system's continuous state variable. A recent approach to such
optimal control problems consists of algorithms that alternate between
computing the optimal switching times between modes in a given sequence, and
updating the mode-sequence by inserting into it a finite number of new modes.
These algorithms have an inherent inefficiency due to their sparse update of
the mode-sequences, while spending most of the computing time on optimizing
with respect to the switching times for a given mode-sequence. This paper
proposes an algorithm that operates directly in the schedule space without
resorting to the timing optimization problem. It is based on the Armijo step
size along certain Gateaux derivatives of the performance functional, thereby
avoiding some of the computational difficulties associated with discrete
scheduling parameters. Its convergence to local minima as well as its rate of
convergence are proved, and a simulation example on a nonlinear system exhibits
quite a fast convergence.
|
1107.3119
|
Experimenting with Transitive Verbs in a DisCoCat
|
cs.CL math.CT
|
Formal and distributional semantic models offer complementary benefits in
modeling meaning. The categorical compositional distributional (DisCoCat) model
of meaning of Coecke et al. (arXiv:1003.4394v1 [cs.CL]) combines aspects of
both to provide a general framework in which meanings of words, obtained
distributionally, are composed using methods from the logical setting to form
sentence meaning. Concrete consequences of this general abstract setting and
applications to empirical data are under active study (Grefenstette et al.,
arXiv:1101.0309; Grefenstette and Sadrzadeh, arXiv:1106.4058v1 [cs.CL]). In
this paper, we extend this study by examining transitive verbs, represented as
matrices in a DisCoCat. We discuss three ways of constructing such matrices,
and evaluate each method in a disambiguation task developed by Grefenstette and
Sadrzadeh (arXiv:1106.4058v1 [cs.CL]).
|
1107.3133
|
Robust Kernel Density Estimation
|
stat.ML cs.LG stat.ME
|
We propose a method for nonparametric density estimation that exhibits
robustness to contamination of the training sample. This method achieves
robustness by combining a traditional kernel density estimator (KDE) with ideas
from classical $M$-estimation. We interpret the KDE based on a radial, positive
semi-definite kernel as a sample mean in the associated reproducing kernel
Hilbert space. Since the sample mean is sensitive to outliers, we estimate it
robustly via $M$-estimation, yielding a robust kernel density estimator (RKDE).
An RKDE can be computed efficiently via a kernelized iteratively re-weighted
least squares (IRWLS) algorithm. Necessary and sufficient conditions are given
for kernelized IRWLS to converge to the global minimizer of the $M$-estimator
objective function. The robustness of the RKDE is demonstrated with a
representer theorem, the influence function, and experimental results for
density estimation and anomaly detection.
|
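The kernelized IRWLS procedure described in the abstract above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes a Gaussian kernel and a Huber loss, and the bandwidth `sigma`, threshold `c`, and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # pairwise Gaussian kernel matrix between the row-vectors of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def rkde_weights(X, sigma=1.0, c=1.0, iters=50):
    # Kernelized IRWLS: robustly estimate the RKHS sample mean
    # mu = sum_j w_j Phi(x_j) under a Huber loss (psi(d)/d = min(1, c/d)).
    n = len(X)
    K = gaussian_kernel(X, X, sigma)
    w = np.full(n, 1.0 / n)  # start from the ordinary KDE (uniform weights)
    for _ in range(iters):
        # squared RKHS distance of each Phi(x_i) to the current mean
        d2 = np.diag(K) - 2.0 * K @ w + w @ K @ w
        d = np.sqrt(np.maximum(d2, 1e-12))
        psi_over_d = np.minimum(1.0, c / d)
        w = psi_over_d / psi_over_d.sum()  # normalized reweighting step
    return w

def rkde(X, x_eval, sigma=1.0):
    # robust density estimate: weighted sum of normalized Gaussian bumps
    w = rkde_weights(X, sigma)
    Z = (2.0 * np.pi * sigma**2) ** (X.shape[1] / 2.0)
    return gaussian_kernel(x_eval, X, sigma) @ w / Z
```

The ordinary KDE corresponds to uniform weights; the IRWLS loop down-weights points whose feature-space image lies far from the robust mean, which is how contamination of the training sample is suppressed.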
1107.3166
|
Stable, scalable, decentralized P2P file sharing with non-altruistic
peers
|
cs.NI cs.DC cs.SI cs.SY
|
P2P systems provide a scalable solution for distributing large files in a
network. The file is split into many chunks, and peers contact other peers to
collect missing chunks to eventually complete the entire file. The so-called
`rare chunk' phenomenon, where a single chunk becomes rare and prevents peers
from completing the file, is a threat to the stability of such systems.
Practical systems such as BitTorrent overcome this issue by requiring a global
search for the rare chunk, which necessitates a centralized mechanism. We
demonstrate a new system based on an approximate rare-chunk rule, allowing for
completely distributed file sharing while retaining scalability and stability.
We assume non-altruistic peers and the seed is required to make only a minimal
contribution.
|
1107.3172
|
Use of Hamiltonian Cycles in Cryptography
|
cs.IT math.IT
|
This paper has been withdrawn by the authors.
|
1107.3174
|
On the infeasibility of entanglement generation in Gaussian quantum
systems via classical control
|
cs.SY math.OC quant-ph
|
This paper uses a system theoretic approach to show that classical linear
time invariant controllers cannot generate steady state entanglement in a
bipartite Gaussian quantum system which is initialized in a Gaussian state. The
paper also shows that the use of classical linear controllers cannot generate
entanglement in a finite time from a bipartite system initialized in a
separable Gaussian state. The approach reveals connections between system
theoretic concepts and the well known physical principle that local operations
and classical communications cannot generate entangled states starting from
separable states.
|
1107.3177
|
Convergence of Weighted Min-Sum Decoding Via Dynamic Programming on
Trees
|
cs.IT math.IT
|
Applying the max-product (and belief-propagation) algorithms to loopy graphs
is now quite popular for best assignment problems. This is largely due to their
low computational complexity and impressive performance in practice. Still,
there is no general understanding of the conditions required for convergence
and/or the optimality of converged solutions. This paper presents an analysis
of both attenuated max-product (AMP) decoding and weighted min-sum (WMS)
decoding for LDPC codes which guarantees convergence to a fixed point when a
weight parameter, $\beta$, is sufficiently small. It also shows that, if the
fixed point satisfies some consistency conditions, then it must be both the
linear-programming (LP) and maximum-likelihood (ML) solution.
For $(d_v,d_c)$-regular LDPC codes, the weight must satisfy $\beta(d_v-1) \leq 1$,
whereas the results proposed by Frey and Koetter require instead that
$\beta(d_v-1)(d_c-1) < 1$. A counterexample which shows a fixed point might not
be the ML solution if $\beta(d_v-1) > 1$ is also given. Finally, connections are
explored with recent work by Arora et al. on the threshold of LP decoding.
|
1107.3194
|
Fingerprint recognition using standardized fingerprint model
|
cs.CV
|
Fingerprint recognition is one of the most popular and accurate biometric
technologies. Nowadays, it is used in many real applications. However,
recognizing fingerprints in poor-quality images is still a very complex
problem. In recent years, many algorithms and models have been proposed to
improve the accuracy of recognition systems. This paper discusses the
standardized fingerprint model, which is used to synthesize the template of
fingerprints. In this model, after a pre-processing step, we find the
transformation between templates, adjust parameters, synthesize the
fingerprint, and reduce noise. Then, we match the final fingerprint against
others in the FVC2004 fingerprint database (DB4) to show the capability of the
model.
|
1107.3195
|
Facial Expression Classification Based on Multi Artificial Neural
Network and Two Dimensional Principal Component Analysis
|
cs.CV
|
Facial expression classification is a kind of image classification, and it has
received much attention in recent years. There are many approaches to these
problems that aim to increase classification efficiency. One well-known
suggestion proceeds as follows: first, project the image into different spaces;
second, classify the image into a responsive class in each of these spaces; and
finally, combine the above classification results into the final result. The
advantage of this approach is that it reflects the full, multiform nature of
the classified image. In this paper, we use 2D-PCA and its variants to project
the pattern or image into different spaces with different grouping strategies.
We then develop a model that combines many neural networks applied in the last
step. This model evaluates the reliability of each space and gives the final
classification conclusion. Our model links many neural networks together, so we
call it a Multi Artificial Neural Network (MANN). We apply our proposed model
to 6 basic facial expressions on the JAFFE database, consisting of 213 images
posed by 10 Japanese female models.
|
1107.3199
|
Performance Guarantee under Longest-Queue-First Schedule in Wireless
Networks
|
cs.IT math.IT
|
Efficient link scheduling in a wireless network is challenging. Typical
optimal algorithms require solving an NP-hard sub-problem. To meet the
challenge, one stream of research focuses on finding simpler sub-optimal
algorithms that have low complexity but high efficiency in practice. In this
paper, we study the performance guarantee of one such scheduling algorithm, the
Longest-Queue-First (LQF) algorithm. It is known that the LQF algorithm
achieves the full capacity region, $\Lambda$, when the interference graph
satisfies the so-called local pooling condition. For a general graph $G$, LQF
achieves (i.e., stabilizes) a part of the capacity region, $\sigma^*(G)
\Lambda$, where $\sigma^*(G)$ is the overall local pooling factor of the
interference graph $G$ and $\sigma^*(G) \leq 1$. It was later shown that
LQF achieves a larger rate region, $\Sigma^*(G) \Lambda$, where $\Sigma^*(G)$
is a diagonal matrix. The contribution of this paper is to describe three new
achievable rate regions, which are larger than the previously-known regions. In
particular, the new regions include all the extreme points of the capacity
region and are not convex in general. We also discover a counter-intuitive
phenomenon in which increasing the arrival rate may sometimes help to stabilize
the network. This phenomenon can be well explained using the theory developed
in the paper.
|
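The LQF rule discussed above is simple to state as a greedy procedure on the interference graph. The sketch below is our own illustration of that rule (serve the longest queue, silence its conflicting links, repeat), not the paper's analytical machinery; the dict-based graph encoding is an assumption.

```python
def lqf_schedule(queues, interference):
    # Greedy Longest-Queue-First: repeatedly activate the non-conflicting
    # link with the longest queue. queues: dict link -> queue length;
    # interference: dict link -> set of links that cannot be active with it.
    remaining = set(queues)
    schedule = set()
    while remaining:
        link = max(remaining, key=lambda l: (queues[l], l))  # deterministic ties
        if queues[link] == 0:
            break  # nothing left worth scheduling
        schedule.add(link)
        remaining.discard(link)
        remaining -= interference[link]  # conflicting links stay silent this slot
    return schedule
```

Each call returns one slot's independent set of links; a full scheduler would re-run this every slot as queues evolve.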
1107.3225
|
An Agent-based Strategy for Deploying Analysis Models into Specification
and Design for Distributed APS Systems
|
cs.MA
|
Despite the extensive use of the agent technology in the Supply Chain
Management field, its integration with Advanced Planning and Scheduling (APS)
tools still represents a promising field with several open research questions.
Specifically, the literature falls short in providing an integrated framework
to analyze, specify, design and implement simulation experiments covering the
whole simulation cycle. Thus, this paper proposes an agent-based strategy to
convert the 'analysis' models into 'specification' and 'design' models
combining two existing methodologies proposed in the literature. The first one
is a recent and unique approach dedicated to the 'analysis' of agent-based APS
systems. The second one is a well-established methodological framework to
'specify' and 'design' agent-based supply chain systems. The proposed
conversion strategy is original and is the first one allowing simulation
analysts to integrate the whole simulation development process in the domain of
distributed APS.
|
1107.3231
|
Triangles to Capture Social Cohesion
|
cs.SI physics.soc-ph
|
Although community detection has drawn tremendous amount of attention across
the sciences in the past decades, no formal consensus has been reached on the
very nature of what qualifies a community as such. In this article we take an
orthogonal approach by introducing a novel point of view to the problem of
overlapping communities. Instead of quantifying the quality of a set of
communities, we choose to focus on the intrinsic community-ness of one given
set of nodes. To do so, we propose a general metric on graphs, the cohesion,
based on counting triangles and inspired by well established sociological
considerations. The model has been validated through a large-scale online
experiment called Fellows in which users were able to compute their social
groups on Facebook and rate the quality of the obtained groups. By observing
those ratings in relation to the cohesion, we find that the cohesion is a
strong indicator of users' subjective perception of the community-ness of a set
of people.
|
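The cohesion metric above is based on counting triangles inside a node set versus those crossing its boundary. The score below is a simplified proxy in that spirit; it is explicitly not the authors' exact formula, and the adjacency-dict encoding is our own assumption.

```python
from itertools import combinations

def triangle_cohesion(adj, S):
    # Ratio of triangles fully inside S to triangles touching its boundary
    # (two endpoints in S, third outside). adj: dict node -> set of neighbours.
    # A hypothetical triangle-based score, not the paper's cohesion formula.
    S = set(S)
    inside = sum(
        1
        for u, v, w in combinations(sorted(S), 3)
        if v in adj[u] and w in adj[u] and w in adj[v]
    )
    straddling = sum(
        len((adj[u] & adj[v]) - S)  # outside common neighbours close a triangle
        for u, v in combinations(sorted(S), 2)
        if v in adj[u]
    )
    total = inside + straddling
    return inside / total if total else 0.0
```

A tightly-knit group with few outward triangles scores near 1; a set whose members share most triangles with outsiders scores near 0.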
1107.3246
|
Unique continuation and approximate controllability for a degenerate
parabolic equation
|
math.AP cs.SY math.OC
|
This paper studies unique continuation for weakly degenerate parabolic
equations in one space dimension. A new Carleman estimate of local type is
obtained to deduce that all solutions that vanish on the degeneracy set,
together with their conormal derivative, are identically equal to zero. An
approximate controllability result for weakly degenerate parabolic equations
under Dirichlet boundary condition is deduced.
|
1107.3253
|
Spatially-Coupled Codes and Threshold Saturation on
Intersymbol-Interference Channels
|
cs.IT math.IT
|
Recently, it has been observed that terminated low-density-parity-check
(LDPC) convolutional codes (or spatially-coupled codes) appear to approach
capacity universally across the class of binary memoryless channels. This is
facilitated by the "threshold saturation" effect whereby the belief-propagation
(BP) threshold of the spatially-coupled ensemble is boosted to the maximum
a-posteriori (MAP) threshold of the underlying constituent ensemble.
In this paper, we consider the universality of spatially-coupled codes over
intersymbol-interference (ISI) channels under joint iterative decoding. More
specifically, we empirically show that threshold saturation also occurs for the
considered problem. This can be observed by first identifying the EXIT curve
for erasure noise and the GEXIT curve for general noise that naturally obey the
general area theorem. From these curves, the corresponding MAP and the BP
thresholds are then numerically obtained. With the fact that regular LDPC codes
can achieve the symmetric information rate (SIR) under MAP decoding,
spatially-coupled codes with joint iterative decoding can universally approach
the SIR of ISI channels. For the dicode erasure channel, Kudekar and Kasai
recently reported very similar results based on EXIT-like curves.
|
1107.3258
|
On Learning Discrete Graphical Models Using Greedy Methods
|
cs.LG math.ST stat.ML stat.TH
|
In this paper, we address the problem of learning the structure of a pairwise
graphical model from samples in a high-dimensional setting. Our first main
result studies the sparsistency, or consistency in sparsity pattern recovery,
properties of a forward-backward greedy algorithm as applied to general
statistical models. As a special case, we then apply this algorithm to learn
the structure of a discrete graphical model via neighborhood estimation. As a
corollary of our general result, we derive sufficient conditions on the number
of samples $n$, the maximum node degree $d$, and the problem size $p$, as well as
other conditions on the model parameters, so that the algorithm recovers all
the edges with high probability. Our result guarantees graph selection for
samples scaling as $n = \Omega(d^2 \log p)$, in contrast to existing
convex-optimization-based algorithms that require a sample complexity of
$\Omega(d^3 \log p)$. Further, the greedy algorithm only requires a restricted
strong convexity condition which is typically milder than irrepresentability
assumptions. We corroborate these results using numerical simulations at the
end.
|
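The forward-backward greedy scheme described above can be illustrated on a plain least-squares model (the paper applies it to general statistical models and to neighborhood estimation in discrete graphical models). The thresholds `eps` and `nu` below are illustrative assumptions, not the paper's conditions.

```python
import numpy as np

def forward_backward_greedy(X, y, eps=1e-3, nu=0.5, max_steps=100):
    # Forward-backward greedy support selection on a least-squares loss.
    n, p = X.shape

    def loss(S):
        # half mean-squared error of the best fit restricted to support S
        if not S:
            return 0.5 * np.mean(y ** 2)
        Xs = X[:, sorted(S)]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        return 0.5 * np.mean((y - Xs @ beta) ** 2)

    support = set()
    cur = loss(support)
    for _ in range(max_steps):
        # forward step: add the coordinate giving the largest loss reduction
        gains = {j: cur - loss(support | {j}) for j in range(p) if j not in support}
        if not gains:
            break
        j_best, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < eps:
            break
        support.add(j_best)
        cur = loss(support)
        # backward steps: drop coordinates whose removal costs little
        while len(support) > 1:
            incs = {j: loss(support - {j}) - cur for j in support}
            j_worst, inc = min(incs.items(), key=lambda kv: kv[1])
            if inc >= nu * gain:
                break
            support.discard(j_worst)
            cur = loss(support)
    return sorted(support)
```

The backward pass is what distinguishes this from pure forward selection: a coordinate added early can be pruned later once better explanatory coordinates enter the support.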
1107.3263
|
Naming Game on Adaptive Weighted Networks
|
cond-mat.stat-mech cs.CL physics.soc-ph
|
We examine a naming game on an adaptive weighted network. A weight of
connection for a given pair of agents depends on their communication success
rate and determines the probability with which the agents communicate. In some
cases, depending on the parameters of the model, the preference toward
successfully communicating agents is basically negligible and the model behaves
similarly to the naming game on a complete graph. In particular, it quickly
reaches a single-language state, albeit some details of the dynamics are
different from the complete-graph version. In some other cases, the preference
toward successfully communicating agents becomes much more relevant and the
model gets trapped in a multi-language regime. In this case gradual coarsening
and extinction of languages lead to the emergence of a dominant language,
albeit with some other languages still being present. A comparison of
distribution of languages in our model and in the human population is
discussed.
|
1107.3268
|
Complex Orthogonal Designs with Forbidden $2 \times 2$ Submatrices
|
cs.IT cs.DM math.IT
|
Complex orthogonal designs (CODs) are used to construct space-time block
codes. A COD $\mathcal{O}_z$ with parameter $[p, n, k]$ is a $p \times n$ matrix
whose nonzero entries are $\pm z_i$ or $\pm z^*_i$, $i = 1, 2, \ldots, k$, such
that $\mathcal{O}^H_z \mathcal{O}_z = (|z_1|^2+|z_2|^2+\cdots+|z_k|^2)I_{n \times n}$.
We call $\mathcal{O}_z$ a first-type COD if and only if $\mathcal{O}_z$ contains
no $2 \times 2$ submatrix of the form
$\begin{pmatrix} \pm z_j & 0 \\ 0 & \pm z^*_j \end{pmatrix}$ or
$\begin{pmatrix} \pm z^*_j & 0 \\ 0 & \pm z_j \end{pmatrix}$. It is already known
that all CODs with maximal rate, i.e., maximal $k/p$, are of the first type.
In this paper, we determine all achievable parameters $[p, n, k]$ of first
type COD, as well as all their possible structures. The existence of parameters
is proved by explicit-form constructions. New CODs with parameters
$[p,n,k]=[\binom{n}{w-1}+\binom{n}{w+1}, n, \binom{n}{w}], $ for $0 \le w \le
n$, are constructed, which demonstrate the possibility of sacrificing code rate
to reduce decoding delay. It is worth mentioning that all maximal-rate,
minimal-delay CODs are contained in our constructions, and their uniqueness
under equivalence operations is proved.
|
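The first-type condition above is a concrete combinatorial check over all 2×2 submatrices. A direct implementation of the definition, under a symbolic entry encoding of our own choosing, might look like:

```python
from itertools import combinations

def is_first_type(matrix):
    # Entries are None (zero) or a tuple (sign, j, conj) standing for
    # sign * z_j (conj=False) or sign * z_j^* (conj=True).
    # First type: no 2x2 submatrix equals [±z_j, 0; 0, ±z_j^*] or its
    # conjugate twin [±z_j^*, 0; 0, ±z_j].
    p, n = len(matrix), len(matrix[0])
    for r1, r2 in combinations(range(p), 2):
        for c1, c2 in combinations(range(n), 2):
            a, b = matrix[r1][c1], matrix[r1][c2]
            c, d = matrix[r2][c1], matrix[r2][c2]
            # forbidden: same variable on the diagonal, conjugated on exactly
            # one diagonal entry, with zeros on the opposite diagonal
            if a and d and b is None and c is None and a[1] == d[1] and a[2] != d[2]:
                return False
            if b and c and a is None and d is None and b[1] == c[1] and b[2] != c[2]:
                return False
    return True
```

For example, the Alamouti $2 \times 2$ code has no zero entries at all, so it trivially passes the check.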
1107.3275
|
Hate networks revisited: time and user interface dependence study of
user emotions in political forum
|
physics.soc-ph cs.SI
|
The paper presents an analysis of time evolution within an Internet political
forum, characterized by large political differences and high levels of
emotions. The study compares samples of discussions gathered at three periods
separated by important events. We focus on statistical aspects related to
emotional content of communication and changes brought by technologies that
increase or decrease the direct one-to-one discussions. We discuss implications
of user interface aspects on promoting communication across a political divide.
|
1107.3298
|
From decision to action : intentionality, a guide for the specification
of intelligent agents' behaviour
|
cs.AI cs.MA
|
This article introduces a reflection on behavioural specification for
interactive and participative agent-based simulation in virtual reality. Within
this context, it is necessary to reach a high level of expressiveness in order
to support interactions between the designer and the behavioural model during
in-line prototyping. This requires considering the need for semantics very
early in the design process. The Intentional agent model is presented here as a
possible answer. It relies on a mixed imperative and declarative approach which
focuses on the link between decision and action. The design of a tool able to
simulate virtual environments involving agents based on this model is discussed.
|
1107.3302
|
A Temporal Neuro-Fuzzy Monitoring System for Manufacturing Systems
|
cs.AI
|
Fault diagnosis and failure prognosis are essential techniques in improving
the safety of many manufacturing systems. Therefore, on-line fault detection
and isolation is one of the most important tasks in safety-critical and
intelligent control systems. Computational intelligence techniques are being
investigated as extension of the traditional fault diagnosis methods. This
paper discusses the Temporal Neuro-Fuzzy Systems (TNFS) fault diagnosis within
an application study of a manufacturing system. The key issues of finding a
suitable structure for detecting and isolating ten realistic actuator faults
are described. Within this framework, interactive data-processing simulation
software named NEFDIAG (NEuro-Fuzzy DIAGnosis), version 1.0, is developed. This
software is devoted primarily to the creation, training, and testing of a
neuro-fuzzy system for classifying industrial process failures. NEFDIAG can be
represented as a special type of fuzzy perceptron, with three layers used to
classify patterns and failures. The system selected is the clinker workshop of
the SCIMAT cement factory in Algeria.
|
1107.3313
|
Communication Systems for Grid Integration of Renewable Energy Resources
|
cs.IT math.IT
|
There is growing interest in renewable energy around the world. Since most
renewable sources are intermittent in nature, it is a challenging task to
integrate renewable energy resources into the power grid infrastructure. In
this grid integration, communication systems are crucial technologies, which
enable the accommodation of distributed renewable energy generation and play an
extremely important role in monitoring, operating, and protecting both
renewable energy generators and power systems. In this paper, we review some
communication technologies available for grid integration of renewable energy
resources. Then, we present the communication systems used in a real renewable
energy project, Bear Mountain Wind Farm (BMW) in British Columbia, Canada. In
addition, we present the communication systems used in Photovoltaic Power
Systems (PPS). Finally, we outline some research challenges and possible
solutions about the communication systems for grid integration of renewable
energy resources.
|
1107.3326
|
Real-time retrieval for case-based reasoning in interactive
multiagent-based simulations
|
cs.AI cs.IR cs.MA
|
The aim of this paper is to present the principles and results about
case-based reasoning adapted to real-time interactive simulations, more
precisely concerning retrieval mechanisms. The article begins by introducing
the constraints involved in interactive multiagent-based simulations. The
second section presents a framework stemming from case-based reasoning by
autonomous agents. Each agent uses a case base of local situations and, from
this base, it can choose an action in order to interact with other autonomous
agents or users' avatars. We illustrate this framework with an example
dedicated to the study of dynamic situations in football. We then go on to
address the difficulties of conducting such simulations in real time and
propose a model for cases and for the case base. Using generic agents and an adequate
case base structure associated with a dedicated recall algorithm, we improve
retrieval performance under time pressure compared to classic CBR techniques.
We present some results relating to the performance of this solution. The
article concludes by outlining future development of our project.
|
1107.3342
|
Computing Strong Game-Theoretic Strategies in Jotto
|
cs.GT cs.AI cs.MA
|
We develop a new approach that computes approximate equilibrium strategies in
Jotto, a popular word game. Jotto is an extremely large two-player game of
imperfect information; its game tree has many orders of magnitude more states
than games previously studied, including no-limit Texas hold 'em. To address
the fact that the game is so large, we propose a novel strategy representation
called oracular form, in which we do not explicitly represent a strategy, but
rather appeal to an oracle that quickly outputs a sample move from the
strategy's distribution. Our overall approach is based on an extension of the
fictitious play algorithm to this oracular setting. We demonstrate the
superiority of our computed strategies over the strategies computed by a
benchmark algorithm, both in terms of head-to-head and worst-case performance.
|
1107.3348
|
Arithmetic and Frequency Filtering Methods of Pixel-Based Image Fusion
Techniques
|
cs.CV
|
In remote sensing, image fusion is a useful tool for fusing high spatial
resolution panchromatic images (PAN) with lower spatial resolution
multispectral images (MS) to create a high spatial resolution multispectral
fused image (F) while preserving the spectral information in the multispectral
image (MS). Many PAN-sharpening, or pixel-based image fusion, techniques have
been developed to enhance the spatial resolution and the spectral property
preservation of the MS. This paper undertakes a study of image fusion using
two types of pixel-based image fusion techniques: Arithmetic Combination and
Frequency Filtering methods. The first type includes the Brovey
Transform (BT), Color Normalized Transformation (CN), and Multiplicative Method
(MLT). The second type includes the High-Pass Filter Additive Method (HPFA),
High-Frequency-Addition Method (HFA), High-Frequency Modulation Method (HFM), and
the wavelet-transform-based fusion method (WT). This paper also
concentrates on analytical techniques for evaluating the quality of image
fusion (F), using various measures including Standard Deviation (SD),
Entropy (En), Correlation Coefficient (CC), Signal-to-Noise Ratio (SNR),
Normalized Root Mean Square Error (NRMSE), and Deviation Index (DI) to
estimate the quality and degree of information improvement of a fused image
quantitatively.
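As an illustration of these evaluation measures, here is a minimal NumPy sketch of three of them (CC, SNR, and NRMSE). The exact normalizations vary across the fusion literature, so the formulations below are common textbook variants rather than the paper's own definitions, and the test images are synthetic.

```python
import numpy as np

def correlation_coefficient(ms, fused):
    """Pearson correlation between the original MS band and the fused band."""
    a = ms.astype(float).ravel() - ms.mean()
    b = fused.astype(float).ravel() - fused.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def snr(ms, fused):
    """Signal-to-noise ratio: signal power over power of the difference."""
    diff = fused.astype(float) - ms.astype(float)
    return float(np.sum(ms.astype(float) ** 2) / np.sum(diff ** 2))

def nrmse(ms, fused):
    """Root-mean-square error normalized by the dynamic range (8-bit here)."""
    diff = fused.astype(float) - ms.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)) / 255.0)

# Synthetic example: a fused band that is the MS band plus mild distortion.
rng = np.random.default_rng(0)
ms = rng.integers(0, 256, size=(64, 64))
fused = ms + rng.normal(0, 2, size=ms.shape)
print(round(correlation_coefficient(ms, fused), 3))
```

A fused image that preserves the MS spectral content well scores a CC near 1, a large SNR, and an NRMSE near 0, which is the direction of "better" for each of the listed measures.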
|
1107.3350
|
Compressive Mechanism: Utilizing Sparse Representation in Differential
Privacy
|
cs.DS cs.CR cs.DB
|
Differential privacy provides the first theoretical foundation with provable
privacy guarantee against adversaries with arbitrary prior knowledge. The main
idea to achieve differential privacy is to inject random noise into statistical
query results. Besides correctness, the most important goal in the design of a
differentially private mechanism is to reduce the effect of random noise,
ensuring that the noisy results can still be useful.
This paper proposes the \emph{compressive mechanism}, a novel solution based
on the state-of-the-art compression technique called \emph{compressive
sensing}. Compressive sensing is a well-suited theoretical tool for compact
synopsis construction using random projections. In this paper, we show that
the amount of noise is significantly reduced from $O(\sqrt{n})$ to
$O(\log(n))$ when the noise insertion procedure is carried out on the
synopsis samples instead of the
original database. As an extension, we also apply the proposed compressive
mechanism to solve the problem of continual release of statistical results.
Extensive experiments using real datasets justify our accuracy claims.
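For readers unfamiliar with the noise-injection idea described above, the following is a minimal sketch of the classical Laplace mechanism that the compressive mechanism improves upon; the example database and query are hypothetical, and the compressive-sensing step itself is not shown.

```python
import numpy as np

def laplace_count(db, predicate, epsilon, rng):
    """Differentially private counting query: a count has sensitivity 1,
    so adding Laplace noise of scale 1/epsilon yields epsilon-DP."""
    true_count = sum(1 for row in db if predicate(row))
    return true_count + rng.laplace(0.0, 1.0 / epsilon)

rng = np.random.default_rng(7)
ages = [17, 23, 30, 41, 55, 62, 78]          # hypothetical database
noisy = laplace_count(ages, lambda a: a >= 40, epsilon=0.5, rng=rng)
```

The compressive mechanism keeps this noise-injection principle but applies it to a small random-projection synopsis of the database rather than to each released statistic, which is the source of the noise reduction claimed above.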
|
1107.3360
|
Object Oriented Information Computing over WWW
|
cs.IR
|
Traditional search engines on the World Wide Web (WWW) focus essentially on
relevance ranking at the page level, but this leads to missing much of the
structured information about real-world objects embedded in static Web pages
and online Web databases. Page-level information retrieval (IR) can therefore
yield highly inaccurate relevance ranking in answering object-oriented
queries. Object-Oriented Information Computing (OOIC), on the other hand, is
promising: it greatly reduces the complexity of the system while improving
reusability and manageability. The most distinguishing requirement of today's
complex heterogeneous systems is the need for the computing system to
instantly adapt to vigorously changing conditions. OOIC reflects the dynamic
characteristics of applications by instantiating objects dynamically. In this
paper, the major challenges of OOIC as well as its rudiments are recapped.
The review includes insight into the PopRank model and a comparative analysis
of conventional PageRank-based IR with OOIC.
|
1107.3372
|
Snake-in-the-Box Codes for Rank Modulation
|
cs.IT math.IT
|
Motivated by the rank-modulation scheme with applications to flash memory, we
consider Gray codes capable of detecting a single error, also known as
snake-in-the-box codes. We study two error metrics: Kendall's $\tau$-metric,
which applies to charge-constrained errors, and the $\ell_\infty$-metric, which
is useful in the case of limited magnitude errors. In both cases we construct
snake-in-the-box codes with rate asymptotically tending to 1. We also provide
efficient successor-calculation functions, as well as ranking and unranking
functions. Finally, we also study bounds on the parameters of such codes.
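To make the two error metrics concrete, here is a small Python sketch (not from the paper) of Kendall's $\tau$-distance and the $\ell_\infty$-distance between permutations, treating permutations in one-line notation as integer vectors; these are the distances under which the constructed codes must keep codewords separated.

```python
from itertools import combinations

def kendall_tau(p, q):
    """Kendall's tau distance: the number of pairwise order inversions,
    i.e. the minimum number of adjacent transpositions turning p into q."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]  # p re-expressed in q's ranking
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

def l_inf(p, q):
    """l-infinity distance: the largest displacement of any single element,
    suited to limited-magnitude errors."""
    return max(abs(a - b) for a, b in zip(p, q))
```

For example, `kendall_tau([1, 2, 3], [1, 3, 2])` is 1 (one adjacent swap), while `kendall_tau([1, 2, 3], [3, 2, 1])` is 3 (a full reversal).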
|
1107.3383
|
Evolutionary Quantum Logic Synthesis of Boolean Reversible Logic
Circuits Embedded in Ternary Quantum Space using Heuristics
|
quant-ph cs.AI
|
It has been experimentally proven that realizing universal quantum gates
using higher-radices logic is practically and technologically possible. We
developed a Parallel Genetic Algorithm that synthesizes Boolean reversible
circuits realized with a variety of quantum gates on qudits with various
radices. In order to allow synthesizing circuits of medium sizes in the higher
radix quantum space we performed the experiments using a GPU accelerated
Genetic Algorithm. Using the accelerated GA we compare heuristic improvements
to the mutation process based on cost minimization, on the adaptive cost of the
primitives and improvements due to Baldwinian vs. Lamarckian GA. We also
describe various fitness function formulations that allowed for various
realizations of well known universal Boolean reversible or
quantum-probabilistic circuits.
|
1107.3407
|
Discovering Knowledge using a Constraint-based Language
|
cs.LG
|
Discovering pattern sets or global patterns is an attractive topic in the
pattern mining community, aimed at providing useful information. By combining
local patterns satisfying a joint meaning, this approach produces higher-level
patterns, more useful to the data analyst than the usual local patterns, while
reducing the number of patterns. In parallel, recent works investigating the
relationships between data mining and constraint programming (CP) show that
the CP paradigm is a nice framework to model and mine such patterns in a
declarative and generic way. We present a constraint-based language which
enables us to define queries addressing pattern sets and global patterns. The
usefulness of such a declarative approach is highlighted by several examples
coming from clustering based on associations. This language has been
implemented in the CP framework.
|
1107.3438
|
Duals of Affine Grassmann Codes and their Relatives
|
cs.IT math.IT
|
Affine Grassmann codes are a variant of generalized Reed-Muller codes and are
closely related to Grassmann codes. These codes were introduced in a recent
work [2]. Here we consider, more generally, affine Grassmann codes of a given
level. We explicitly determine the dual of an affine Grassmann code of any
level and compute its minimum distance. Further, we improve upon the results of
[2] concerning the automorphism group of affine Grassmann codes. Finally, we
prove that affine Grassmann codes and their duals have the property that they
are linear codes generated by their minimum-weight codewords. This provides a
clean analogue of a corresponding result for generalized Reed-Muller codes.
|
1107.3474
|
A Generalized Poor-Verdu Error Bound for Multihypothesis Testing and the
Channel Reliability Function
|
cs.IT math.IT
|
A lower bound on the minimum error probability for multihypothesis testing is
established. The bound, which is expressed in terms of the cumulative
distribution function of the tilted posterior hypothesis distribution given the
observation with tilting parameter theta larger than or equal to 1, generalizes
an earlier bound due to Poor and Verdu (1995). A sufficient condition is
established under which the new bound (minus a multiplicative factor) provides
the exact error probability in the limit of theta going to infinity. Examples
illustrating the new bound are also provided.
The application of this generalized Poor-Verdu bound to the channel
reliability function is next carried out, resulting in two information-spectrum
upper bounds. It is observed that, for a class of channels including the
finite-input memoryless Gaussian channel, one of the bounds is tight and gives
a multi-letter asymptotic expression for the reliability function, although
its determination or calculation in single-letter form remains a challenging
open problem. Numerical examples regarding the other bound are finally
presented.
|
1107.3498
|
What can we learn from slow self-avoiding adaptive walks by an infinite
radius search algorithm?
|
cs.NE cs.SI
|
Slow self-avoiding adaptive walks by an infinite radius search algorithm
(Limax) are analyzed both in themselves and as the network they form. The study is
conducted on several NK problems and two HIFF problems. We find that
examination of such "slacker" walks and networks can indicate relative search
difficulty within a family of problems, help identify potential local optima,
and detect presence of structure in fitness landscapes. Hierarchical walks are
used to differentiate rugged landscapes which are hierarchical (e.g. HIFF) from
those which are anarchic (e.g. NK). The notion of node viscidity as a measure
of local-optimum potential is introduced and found quite successful, although
more work needs to be done to improve its accuracy on problems with larger K.
|
1107.3499
|
Applying Advanced Spaceborne Thermal Emission and Reflection Radiometer
(ASTER) spectral indices for geological mapping and mineral identification on
the Tibetan Plateau
|
physics.geo-ph cs.CV
|
The Tibetan Plateau holds clues to understanding the dynamics and mechanisms
associated with continental growth. Part of the region is characterized by
zones of ophiolitic melange believed to represent the remnants of ancient
oceanic crust and underlying upper mantle emplaced during oceanic closures.
However, due to the remoteness of the region and the inhospitable terrain many
areas have not received detailed investigation. The increased spatial and
spectral resolution of satellite sensors has made it possible to map the
mineralogy and lithology in greater detail than in the past. Recent work by
Yoshiki Ninomiya
of the Geological Survey of Japan has pioneered the use of several spectral
indices for the mapping of quartzose, carbonate, and silicate rocks using
Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) thermal
infrared (TIR) data. In this study, ASTER TIR indices have been applied to a
region in western-central Tibet for the purposes of assessing their
effectiveness for differentiating ophiolites and other lithologies. The results
agree well with existing geological maps and other published data. The study
area was chosen due to its diverse range of rock types, including an ophiolitic
melange, associated with the Bangong-Nujiang suture (BNS) that crops out on the
northern shores of Lagkor Tso and Dong Tso ("Tso" is Tibetan for lake). The
techniques highlighted in this paper could be applied to other geographical
regions where similar geological questions need to be resolved. The results of
this study aim to show the utility of ASTER TIR imagery for geological mapping
in semi-arid and sparsely vegetated areas on the Tibetan Plateau.
|
1107.3522
|
What Trends in Chinese Social Media
|
cs.CY cs.SI physics.soc-ph
|
There has been a tremendous rise in the growth of online social networks all
over the world in recent times. While some networks like Twitter and Facebook
have been well documented, the popular Chinese microblogging social network
Sina Weibo has not been studied. In this work, we examine the key topics that
trend on Sina Weibo and contrast them with our observations on Twitter. We find
that there is a vast difference in the content shared in China, when compared
to a global social network such as Twitter. In China, the trends are created
almost entirely due to retweets of media content such as jokes, images and
videos, whereas on Twitter, the trends tend to have more to do with current
global events and news stories.
|
1107.3534
|
Exploiting Channel Diversity in Secret Key Generation from Multipath
Fading Randomness
|
cs.CR cs.IT math.IT
|
We design and analyze a method to extract secret keys from the randomness
inherent to wireless channels. We study a channel model for the multipath
wireless channel and exploit the channel diversity in generating secret key
bits. We compare key extraction methods based both on entire channel state
information (CSI) and on a single channel parameter such as the received
signal strength indicator (RSSI). Due to the reduction in degrees of freedom
when
going from CSI to RSSI, the rate of key extraction based on CSI is far higher
than that based on RSSI. This suggests that exploiting channel diversity and
making CSI information available to higher layers would greatly benefit the
secret key generation. We propose a key generation system based on low-density
parity-check (LDPC) codes and describe the design and performance of two
systems: one based on binary LDPC codes and the other (useful at higher
signal-to-noise ratios) based on four-ary LDPC codes.
|
1107.3600
|
Unsupervised K-Nearest Neighbor Regression
|
stat.ML cs.LG
|
In many scientific disciplines structures in high-dimensional data have to be
found, e.g., in stellar spectra, in genome data, or in face recognition tasks.
In this work we present a novel approach to non-linear dimensionality
reduction. It is based on fitting K-nearest neighbor regression to the
unsupervised regression framework for learning of low-dimensional manifolds.
Similar to related approaches that are mostly based on kernel methods,
unsupervised K-nearest neighbor (UNN) regression optimizes latent variables
w.r.t. the data space reconstruction error employing the K-nearest neighbor
heuristic. The problem of optimizing latent neighborhoods is difficult to
solve, but the UNN formulation allows the design of efficient strategies that
iteratively embed latent points into fixed neighborhood topologies. UNN is
well suited to the sorting of high-dimensional data. The iterative variants are
analyzed experimentally.
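The iterative embedding idea can be sketched as follows; this is a toy re-implementation, not the authors' code, and the symmetric latent window is a simplification of the K-nearest-neighbor rule. Each pattern is greedily inserted into the position of a growing latent sequence that minimizes the data-space reconstruction error, where every point is reconstructed as the mean of its latent neighbors.

```python
import numpy as np

def reconstruction_error(Y, order, K):
    """Data-space error when each point is reconstructed from the mean of
    its latent neighbors (up to K on each side of the sequence `order`);
    a symmetric-window simplification of the K-nearest-neighbor rule."""
    err, n = 0.0, len(order)
    for i, idx in enumerate(order):
        nbrs = [order[j] for j in range(max(0, i - K), min(n, i + K + 1)) if j != i]
        err += float(np.sum((Y[idx] - Y[nbrs].mean(axis=0)) ** 2))
    return err

def unn_embed(Y, K=1):
    """Greedy UNN sketch: insert each pattern at the latent position that
    currently yields the lowest reconstruction error."""
    order = [0]
    for idx in range(1, Y.shape[0]):
        best_pos, best_err = 0, float("inf")
        for pos in range(len(order) + 1):
            cand = order[:pos] + [idx] + order[pos:]
            e = reconstruction_error(Y, cand, K)
            if e < best_err:
                best_pos, best_err = pos, e
        order.insert(best_pos, idx)
    return order

Y = np.array([[0.0], [3.0], [1.0], [2.0]])  # a shuffled 1-D manifold
order = unn_embed(Y, K=1)
```

On this tiny example the recovered latent sequence sorts the points along the underlying one-dimensional manifold, which is exactly the "sorting of high-dimensional data" behavior described above.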
|
1107.3602
|
Heterogeneous Cellular Networks with Flexible Cell Association: A
Comprehensive Downlink SINR Analysis
|
cs.IT math.IT
|
In this paper we develop a tractable framework for SINR analysis in downlink
heterogeneous cellular networks (HCNs) with flexible cell association policies.
The HCN is modeled as a multi-tier cellular network where each tier's base
stations (BSs) are randomly located and have a particular transmit power, path
loss exponent, spatial density, and bias towards admitting mobile users. For
example, as compared to macrocells, picocells would usually have lower transmit
power, higher path loss exponent (lower antennas), higher spatial density (many
picocells per macrocell), and a positive bias so that macrocell users are
actively encouraged to use the more lightly loaded picocells. In the present
paper we implicitly assume all base stations have full queues; future work
should relax this. For this model, we derive the outage probability of a
typical user in the whole network or a certain tier, which is equivalently the
downlink SINR cumulative distribution function. The results are accurate for
all SINRs, and their expressions admit quite simple closed-forms in some
plausible special cases. We also derive the \emph{average ergodic rate} of the
typical user, and the \emph{minimum average user throughput} -- the smallest
value among the average user throughputs supported by one cell in each tier. We
observe that neither the number of BSs nor the number of tiers changes the
outage probability or the average ergodic rate in an interference-limited,
fully-loaded HCN with unbiased cell association (no biasing), and we observe
how biasing alters the various metrics.
|
1107.3606
|
Optimizing Index Deployment Order for Evolving OLAP (Extended Version)
|
cs.DB
|
Query workloads and database schemas in OLAP applications are becoming
increasingly complex. Moreover, the queries and the schemas have to continually
\textit{evolve} to address business requirements. During such repetitive
transitions, the \textit{order} of index deployment has to be considered while
designing the physical schemas such as indexes and MVs.
An effective index deployment ordering can produce (1) a prompt query runtime
improvement and (2) a reduced total deployment time. Both of these are
essential qualities of design tools for quickly evolving databases, but
optimizing the problem is challenging because of complex index interactions and
a factorial number of possible solutions.
We formulate the problem in a mathematical model and study several techniques
for solving the index ordering problem. We demonstrate that Constraint
Programming (CP) is a more flexible and efficient platform to solve the problem
than other methods such as mixed integer programming and A* search. In addition
to exact search techniques, we also study local search algorithms to find
near-optimal solutions very quickly.
Our empirical analysis on the TPC-H dataset shows that our pruning techniques
can reduce the size of the search space by tens of orders of magnitude. Using
the TPC-DS dataset, we verify that our local search algorithm is a highly
scalable and stable method for quickly finding a near-optimal solution.
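To make the ordering objective concrete, here is a toy cost model of our own (not the paper's formulation): deploying index i takes build_times[i] and permanently lowers the query-cost rate by benefit[i], so the time-integrated query cost depends on the deployment order, and a brute-force search finds the best one. The CP and local-search methods above address the same kind of objective at realistic scales, where brute force over the factorial search space is infeasible.

```python
from itertools import permutations

def schedule_cost(order, build_times, benefit, base_rate):
    """Total query cost accumulated while indexes are deployed in `order`:
    during each build, queries run at the current (not yet reduced) rate."""
    rate, cost = base_rate, 0.0
    for i in order:
        cost += rate * build_times[i]
        rate -= benefit[i]
    return cost

def best_order(build_times, benefit, base_rate):
    """Brute-force over all n! deployment orders (toy sizes only)."""
    n = len(build_times)
    return min(permutations(range(n)),
               key=lambda o: schedule_cost(o, build_times, benefit, base_rate))

build_times, benefit = [3, 1, 2], [3, 2, 2]
order = best_order(build_times, benefit, base_rate=10)
```

In this linear model the cheap, high-benefit index is deployed first, mirroring the intuition that a good ordering delivers a prompt runtime improvement.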
|
1107.3614
|
New construction of APN quadratic
|
cs.IT math.IT
|
The purpose of this paper is to give a detailed account of an article by
Carlet. Along the way I recall some interesting results in the theory of
finite fields, give (new) proofs of some known results, and then generalize
the construction of a family of APN functions. A reference precedes each
result; in the absence of a reference, the proof is due to the author.
Keywords: Boolean, bent, APN
|
1107.3636
|
GPS Signal Acquisition via Compressive Multichannel Sampling
|
cs.IT math.IT
|
In this paper, we propose an efficient acquisition scheme for GPS receivers.
It is shown that GPS signals can be effectively sampled and detected using a
bank of randomized correlators with far fewer chip-matched filters than those
used in existing GPS signal acquisition algorithms. The latter use correlations
with all possible shifted replicas of the satellite-specific C/A code and an
exhaustive search for peaking signals over the delay-Doppler space. Our scheme
is based on the recently proposed analog compressed sensing framework, and
consists of a multichannel sampling structure with far fewer correlators.
The compressive multichannel sampler outputs are linear measurements of a
vector whose support tends to be sparse; by detecting its support one can
identify the strongest satellite signals in the field of view and pinpoint the
correct code-phase and Doppler shifts for finer resolution during tracking. The
analysis in this paper demonstrates that GPS signals can be detected and
acquired via the proposed structure at a lower cost in terms of number of
correlations that need to be computed in the coarse acquisition phase, which in
current GPS technology scales like the product of the number of all possible
delays and Doppler shifts. In contrast, the required number of correlators in
our compressive multichannel scheme scales as the number of satellites in the
field of view of the device times the logarithm of the number of delay-Doppler bins
explored, as is typical for compressed sensing methods.
|
1107.3663
|
Towards Open-Text Semantic Parsing via Multi-Task Learning of Structured
Embeddings
|
cs.AI
|
Open-text (or open-domain) semantic parsers are designed to interpret any
statement in natural language by inferring a corresponding meaning
representation (MR). Unfortunately, large scale systems cannot be easily
machine-learned due to lack of directly supervised data. We propose here a
method that learns to assign MRs to a wide range of text (using a dictionary of
more than 70,000 words, which are mapped to more than 40,000 entities) thanks
to a training scheme that combines learning from WordNet and ConceptNet with
learning from raw text. The model learns structured embeddings of words,
entities and MRs via a multi-task training process operating on these diverse
sources of data that integrates all the learnt knowledge into a single system.
This work ends up combining methods for knowledge acquisition, semantic
parsing, and word-sense disambiguation. Experiments on various tasks indicate
that our approach is indeed successful and can form a basis for future more
sophisticated systems.
|
1107.3674
|
Autonomous Traffic Control System Using Agent Based Technology
|
cs.MA
|
The way real-time projects are analyzed, designed, and built has changed due
to the rapid growth of the Internet, mobile technologies, and intelligent
applications. Many of these applications are built from intelligent, tiny,
distributed components called agents. An agent takes input from numerous
real-time sources and gives back a real-time response. This paper discusses
how such agents can be implemented in vehicle traffic management, especially
in large cities, and identifies the various challenges that arise when there
is rapid growth of population and vehicles. Our proposal gives a solution
based on autonomous, agent-based technology. These autonomous, intelligent
agents have the capability to observe, act, and learn from their past
experience. The system uses the knowledge flow of preceding signals or data
to identify the incoming flow of forthcoming signals. Our architecture
involves video analysis and exploration using an intelligent learning
algorithm to estimate and identify the flow of traffic.
|
1107.3680
|
3-Phase Recognition Approach to Pseudo 3D Building Generation from 2D
Floor Plan
|
cs.GR cs.CV
|
Nowadays three-dimensional (3D) architectural visualisation has become a
powerful tool in the conceptualisation, design, and presentation of
architectural products in the construction industry, providing realistic
interaction and walkthroughs of engineering products. The traditional way of
implementing 3D models involves the use of specialised 3D authoring tools by
skilled 3D designers working from blueprints of the model, which is a slow
and laborious process. The aim of this paper is to automate this process by
simply analyzing the blueprint document and generating the 3D scene
automatically. For this purpose we have devised a 3-phase recognition
approach to pseudo-3D building generation from a 2D floor plan and developed
software accordingly. Our 3-phased 3D building system has been implemented
using C, C++, and the OpenCV library [24] for the Image Processing module;
the Save module generates an XML file storing the attributes of the processed
floor plan objects; and the Irrlicht [14] game engine was used to implement
the Interactive 3D module. Though still in its infancy, our proposed system
gave commendable results. We tested our system on 6 floor plans with
complexities ranging from low to high, and the results seem very promising,
with an average processing time of around 3 s and 3D generation in 4 s. In
addition, the system provides an interactive walk-through and allows users to
modify components.
|
1107.3707
|
Statistical Laws Governing Fluctuations in Word Use from Word Birth to
Word Death
|
physics.soc-ph cs.CL cs.IR nlin.AO physics.pop-ph
|
We analyze the dynamic properties of 10^7 words recorded in English, Spanish
and Hebrew over the period 1800--2008 in order to gain insight into the
coevolution of language and culture. We report language independent patterns
useful as benchmarks for theoretical models of language evolution. A
significantly decreasing (increasing) trend in the birth (death) rate of words
indicates a recent shift in the selection laws governing word use. For new
words, we observe a peak in the growth-rate fluctuations around 40 years after
introduction, consistent with the typical entry time into standard dictionaries
and the human generational timescale. Pronounced changes in the dynamics of
language during periods of war show that word correlations, occurring across
time and between words, are largely influenced by coevolutionary social,
technological, and political factors. We quantify cultural memory by analyzing
the long-term correlations in the use of individual words using detrended
fluctuation analysis.
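The last step, detrended fluctuation analysis (DFA), can be sketched in a few lines of NumPy; this is a generic textbook implementation with synthetic input, not the authors' pipeline. Uncorrelated noise yields a scaling exponent of about 0.5, while long-term correlated series yield larger exponents.

```python
import numpy as np

def dfa(series, window_sizes):
    """Detrended fluctuation analysis: the scaling exponent alpha from a
    log-log fit of the fluctuation F(s) against the window size s."""
    profile = np.cumsum(series - np.mean(series))  # integrated profile
    fs = []
    for s in window_sizes:
        n_win = len(profile) // s
        f2 = []
        for w in range(n_win):
            seg = profile[w * s:(w + 1) * s]
            x = np.arange(s)
            coef = np.polyfit(x, seg, 1)           # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, x)) ** 2))
        fs.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(window_sizes), np.log(fs), 1)[0]

rng = np.random.default_rng(1)
white = rng.normal(size=4000)                      # synthetic test series
alpha = dfa(white, [8, 16, 32, 64, 128])
```

Applied to a word's annual usage series, an exponent well above 0.5 indicates the long-term "cultural memory" correlations described above.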
|
1107.3715
|
Mathematical Programming Decoding of Binary Linear Codes: Theory and
Algorithms
|
cs.IT math.IT
|
Mathematical programming is a branch of applied mathematics and has recently
been used to derive new decoding approaches, challenging established but often
heuristic algorithms based on iterative message passing. Concepts from
mathematical programming used in the context of decoding include linear,
integer, and nonlinear programming, network flows, notions of duality as well
as matroid and polyhedral theory. This survey article reviews and categorizes
decoding methods based on mathematical programming approaches for binary linear
codes over binary-input memoryless symmetric channels.
|
1107.3746
|
A Computational Complexity-Theoretic Elaboration of Weak Truth-Table
Reducibility
|
math.LO cs.CC cs.IT math.IT
|
The notion of weak truth-table reducibility plays an important role in
recursion theory. In this paper, we introduce an elaboration of this notion,
where a computable bound on the use function is explicitly specified. This
elaboration enables us to deal with the notion of asymptotic behavior in a
manner similar to that of computational complexity theory, while staying
within computability theory. We apply the elaboration to sets which appear in
the statistical
mechanical interpretation of algorithmic information theory. We demonstrate the
power of the elaboration by revealing a critical phenomenon, i.e., a phase
transition, in the statistical mechanical interpretation, which cannot be
captured by the original notion of weak truth-table reducibility.
|
1107.3765
|
Using Variational Inference and MapReduce to Scale Topic Modeling
|
cs.AI cs.DC
|
Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for
exploring document collections. Because of the increasing prevalence of large
datasets, there is a need to improve the scalability of inference of LDA. In
this paper, we propose a technique called \emph{MapReduce LDA} (Mr. LDA) to
accommodate very large corpus collections in the MapReduce framework. In
contrast to other techniques to scale inference for LDA, which use Gibbs
sampling, we use variational inference. Our solution efficiently distributes
computation and is relatively simple to implement. More importantly, this
variational implementation, unlike highly tuned and specialized
implementations, is easily extensible. We demonstrate two extensions of the
model possible with this scalable framework: informed priors to guide topic
discovery and modeling topics from a multilingual corpus.
|
1107.3784
|
Applying Data Privacy Techniques on Tabular Data in Uganda
|
cs.CR cs.DB
|
The growth of Information Technology (IT) in Africa has led to an increase in
the utilization of communication networks for data transactions across the
continent. A growing number of entities in the private sector, academia, and
government have deployed the Internet as a medium for data transactions,
routinely posting statistical and non-statistical data online and thereby
making many in Africa increasingly dependent on the Internet for data
transactions. In Uganda, exponential growth in data transactions has
presented a new challenge: what is the most efficient way to implement data
privacy? This article discusses the data privacy challenges faced by Uganda
and the implementation of data privacy techniques for published tabular data.
We make the case for data privacy, survey concepts of data privacy, and
discuss implementations that could be employed to provide data privacy in
Uganda.
|
1107.3792
|
Influence and Dynamic Behavior in Random Boolean Networks
|
cond-mat.dis-nn cs.AI cs.DM nlin.AO
|
We present a rigorous mathematical framework for analyzing dynamics of a
broad class of Boolean network models. We use this framework to provide the
first formal proof of many of the standard critical transition results in
Boolean network analysis, and offer analogous characterizations for novel
classes of random Boolean networks. We precisely connect the short-run dynamic
behavior of a Boolean network to the average influence of the transfer
functions. We show that some of the assumptions traditionally made in the more
common mean-field analysis of Boolean networks do not hold in general.
For example, we offer some evidence that imbalance, or expected internal
inhomogeneity, of transfer functions is a crucial feature that tends to drive
quiescent behavior far more strongly than previously observed.
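The quantity driving these results, the average (total) influence of a transfer function, can be computed directly for small arity. The sketch below uses the standard definition rather than anything taken from the paper: the expected number of inputs whose flip changes the output, under a uniformly random input. In standard annealed mean-field analyses, criticality occurs when this quantity equals 1.

```python
from itertools import product

def total_influence(f, k):
    """Expected number of input bits whose flip changes f's output,
    under a uniformly random k-bit input (a.k.a. average sensitivity)."""
    flips = 0
    for bits in product((0, 1), repeat=k):
        y = f(bits)
        for i in range(k):
            flipped = bits[:i] + (1 - bits[i],) + bits[i + 1:]
            if f(flipped) != y:
                flips += 1
    return flips / 2 ** k

xor2 = lambda b: b[0] ^ b[1]   # every flip matters: influence 2
and2 = lambda b: b[0] & b[1]   # each input matters half the time: influence 1
```

XOR (influence 2) sits deep in the chaotic regime, AND (influence 1) sits at the critical boundary, and a constant function (influence 0) is quiescent, illustrating the transition the framework formalizes.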
|
1107.3823
|
Weakly Supervised Learning of Foreground-Background Segmentation using
Masked RBMs
|
cs.LG cs.CV
|
We propose an extension of the Restricted Boltzmann Machine (RBM) that allows
the joint shape and appearance of foreground objects in cluttered images to be
modeled independently of the background. We present a learning scheme that
learns this representation directly from cluttered images with only very weak
supervision. The model generates plausible samples and performs
foreground-background segmentation. We demonstrate that representing foreground
objects independently of the background can be beneficial in recognition tasks.
|
1107.3862
|
Achieving "Massive MIMO" Spectral Efficiency with a Not-so-Large Number
of Antennas
|
cs.IT math.IT
|
The main focus and contribution of this paper is a novel network-MIMO TDD
architecture that achieves spectral efficiencies comparable with "Massive
MIMO", with one order of magnitude fewer antennas per active user per cell. The
proposed architecture is based on a family of network-MIMO schemes defined by
small clusters of cooperating base stations, zero-forcing multiuser MIMO
precoding with suitable inter-cluster interference constraints, uplink pilot
signals reuse across cells, and frequency reuse. The key idea consists of
partitioning the user population into geographically determined "bins", such
that all users in the same bin are statistically equivalent, and using the
optimal network-MIMO architecture in the family for each bin. A scheduler takes
care of serving the different bins on the time-frequency slots, in order to
maximize a desired network utility function that captures some desired notion
of fairness. This results in a mixed-mode network-MIMO architecture, where
different schemes, each of which is optimized for the served user bin, are
multiplexed in time-frequency. In order to carry out the performance analysis
and the optimization of the proposed architecture in a clean and
computationally efficient way, we consider the large-system regime where the
number of users, the number of antennas, and the channel coherence block length
go to infinity with fixed ratios. The performance predicted by the large-system
asymptotic analysis matches very well the finite-dimensional simulations.
Overall, the system spectral efficiency obtained by the proposed architecture
is similar to that achieved by "Massive MIMO", with a 10-fold reduction in the
number of antennas at the base stations (roughly, from 500 to 50 antennas).
|
1107.3894
|
Online Anomaly Detection Systems Using Incremental Commute Time
|
cs.AI
|
Commute Time Distance (CTD) is a random walk based metric on graphs. CTD has
found widespread applications in many domains including personalized search,
collaborative filtering and making search engines robust against manipulation.
Our interest is inspired by the use of CTD as a metric for anomaly detection.
It has been shown that CTD can be used to simultaneously identify both global
and local anomalies. Here we propose an accurate and efficient approximation
for computing the CTD in an incremental fashion in order to facilitate
real-time applications. An online anomaly detection algorithm is designed where
the CTD of each new arriving data point to any point in the current graph can
be estimated in constant time ensuring a real-time response. Moreover, the
proposed approach can also be applied in many other applications that utilize
commute time distance.
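As a point of reference for the approximation, the exact commute time on a small graph can be computed from the graph Laplacian: C(i, j) equals 2m times the effective resistance between i and j. Below is a minimal stdlib-only sketch illustrating the metric itself, not the paper's incremental estimator:

```python
def commute_time(edges, n, i, j):
    """Exact commute time C(i, j) = 2m * R_eff(i, j) on an unweighted graph.

    Solves the Laplacian system grounded at node j with plain Gaussian
    elimination (stdlib only); fine for small graphs.
    """
    L = [[0.0] * n for _ in range(n)]
    for u, v in edges:
        L[u][u] += 1.0
        L[v][v] += 1.0
        L[u][v] -= 1.0
        L[v][u] -= 1.0
    idx = [k for k in range(n) if k != j]       # ground node j
    A = [[L[r][c] for c in idx] for r in idx]
    b = [1.0 if r == i else 0.0 for r in idx]   # right-hand side e_i
    m = len(A)
    for col in range(m):                        # forward elimination
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    x = [0.0] * m
    for r in range(m - 1, -1, -1):              # back substitution
        x[r] = (b[r] - sum(A[r][c] * x[c] for c in range(r + 1, m))) / A[r][r]
    r_eff = x[idx.index(i)]                     # effective resistance i <-> j
    return 2 * len(edges) * r_eff
```

On the path 0-1-2, the effective resistance between the endpoints is 2 and the graph has 2 edges, so C(0, 2) = 4 * 2 = 8; the incremental method in the abstract approximates this quantity in constant time per query.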
|
1107.3942
|
Identification of clusters of investors from their real trading activity
in a financial market
|
q-fin.TR cs.SI physics.soc-ph
|
We use statistically validated networks, a recently introduced method to
validate links in a bipartite system, to identify clusters of investors trading
in a financial market. Specifically, we investigate a special database allowing
us to track the trading activity of individual investors in the Nokia stock. We
find that many statistically detected clusters of investors show a very high
degree of synchronization in the time when they decide to trade and in the
trading action taken. We investigate the composition of these clusters and we
find that several of them show an over-expression of specific categories of
investors.
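Link validation in statistically validated networks typically rests on a hypergeometric test of co-occurrence against a random-overlap null, with a multiple-hypothesis correction. A stdlib-only sketch of that building block (function names and defaults are ours, not from the paper):

```python
from math import comb

def hypergeom_pvalue(n_common, n_a, n_b, n_total):
    """P(X >= n_common) when n_b draws are taken without replacement from
    n_total items of which n_a are 'marked'.  By the Vandermonde identity
    the full k-sum normalizes to comb(n_total, n_b)."""
    upper = min(n_a, n_b)
    num = sum(comb(n_a, k) * comb(n_total - n_a, n_b - k)
              for k in range(n_common, upper + 1))
    return num / comb(n_total, n_b)

def validated_link(n_common, n_a, n_b, n_total, alpha=0.01, n_tests=1):
    # Bonferroni-corrected significance threshold across all tested pairs
    return hypergeom_pvalue(n_common, n_a, n_b, n_total) < alpha / n_tests
```

Two investors active on n_a and n_b days out of n_total, with n_common days in common, are linked only if that overlap is unlikely under random co-occurrence.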
|
1107.3944
|
Optimal control with stochastic PDE constraints and uncertain controls
|
math.OC cs.NA cs.SY
|
The optimal control of problems that are constrained by partial differential
equations with uncertainties and with uncertain controls is addressed. The
Lagrangian that defines the problem is postulated in terms of stochastic
functions, with the control function possibly decomposed into an unknown
deterministic component and a known zero-mean stochastic component. The extra
freedom provided by the stochastic dimension in defining cost functionals is
explored, demonstrating the scope for controlling statistical aspects of the
system response. One-shot stochastic finite element methods are used to find
approximate solutions to control problems. It is shown that applying the
stochastic collocation finite element to the formulated problem leads to a
coupling between stochastic collocation points when a deterministic optimal
control is considered or when moments are included in the cost functional,
thereby obviating the primary advantage of the collocation method over the
stochastic Galerkin method for the considered problem. The application of the
presented methods is demonstrated through a number of numerical examples. The
presented framework is sufficiently general to also consider a class of inverse
problems, and numerical examples of this type are also presented.
|
1107.3979
|
Continuous-time quantized consensus: convergence of Krasowskii solutions
|
math.OC cs.SY
|
This note studies a network of agents having continuous-time dynamics with
quantized interactions and time-varying directed topology. Due to the
discontinuity of the dynamics, solutions of the resulting ODE system are
understood in the sense of Krasovskii. A limit connectivity graph is defined,
which encodes persistent interactions between nodes: if such graph has a
globally reachable node, Krasovskii solutions reach consensus (up to the
quantizer precision) after a finite time. Under the additional assumption of a
time-invariant topology, the convergence time is upper bounded by a quantity
which depends on the network size and the quantizer precision. It is observed
that the convergence time can be very large for solutions which stay on a
discontinuity surface.
|
1107.3995
|
Prescient Precoding in Heterogeneous DSA Networks with Both Underlay and
Interweave MIMO Cognitive Radios
|
cs.IT math.IT
|
This work examines a novel heterogeneous dynamic spectrum access network
where the primary users (PUs) coexist with both underlay and interweave
cognitive radios (ICRs); all terminals being potentially equipped with multiple
antennas. Underlay cognitive transmitters (UCTs) are allowed to transmit
concurrently with PUs subject to interference constraints, while the ICRs
employ spectrum sensing and are permitted to access the shared spectrum only
when both PUs and UCTs are absent. We investigate the design of MIMO precoding
algorithms for the UCT that increase the detection probability at the ICRs,
while simultaneously meeting a desired Quality-of-Service target to the
underlay cognitive receivers (UCRs) and constraining interference leaked to
PUs. The objective of such a proactive approach, referred to as prescient
precoding, is to minimize the probability of interference from ICRs to the UCRs
and primary receivers due to imperfect spectrum sensing. We begin with downlink
prescient precoding algorithms for multiple single-antenna UCRs and
multi-antenna PUs/ICRs. We then present prescient block-diagonalization
algorithms for the MIMO underlay downlink where spatial multiplexing is
performed for a plurality of multi-antenna UCRs. Numerical experiments
demonstrate that prescient precoding by UCTs provides a pronounced performance
gain compared to conventional underlay precoding strategies.
|
1107.4009
|
Social features of online networks: the strength of intermediary ties in
online social media
|
physics.soc-ph cs.SI
|
An increasing fraction of today's social interactions occurs using online social
media as communication channels. Recent worldwide events, such as social
movements in Spain or revolts in the Middle East, highlight their capacity to
boost people coordination. Online networks display in general a rich internal
structure where users can choose among different types and intensity of
interactions. Despite this, there are still open questions regarding the
social value of online interactions. For example, the existence of users with
millions of online friends casts doubt on the relevance of these relations. In
this work, we focus on Twitter, one of the most popular online social networks,
and find that the network formed by the basic type of connections is organized
in groups. The activity of the users conforms to the landscape determined by
such groups. Furthermore, Twitter's distinction between different types of
interactions allows us to establish a parallelism between online and offline
social networks: personal interactions are more likely to occur on internal
links to the groups (the weakness of strong ties), events transmitting new
information go preferentially through links connecting different groups (the
strength of weak ties) or even more through links connecting to users belonging
to several groups that act as brokers (the strength of intermediary ties).
|
1107.4021
|
Achieving a vanishing SNR-gap to exact lattice decoding at a
subexponential complexity
|
cs.IT cs.CC math.IT
|
The work identifies the first lattice decoding solution that achieves, in the
general outage-limited MIMO setting and in the high-rate and high-SNR limit,
both a vanishing gap to the error performance of the (DMT optimal) exact
solution of preprocessed lattice decoding, and a computational
complexity that is subexponential in the number of codeword bits. The proposed
solution employs lattice reduction (LR)-aided regularized (lattice) sphere
decoding and proper timeout policies. These performance and complexity
guarantees hold for most MIMO scenarios, all reasonable fading statistics, all
channel dimensions and all full-rate lattice codes.
In sharp contrast to the above manageable complexity, the complexity of other
standard preprocessed lattice decoding solutions is shown here to be extremely
high. Specifically, the work is the first to quantify the complexity of these
lattice (sphere) decoding solutions and to prove the surprising result that the
complexity required to achieve a certain rate-reliability performance, is
exponential in the lattice dimensionality and in the number of codeword bits,
and it in fact matches, in common scenarios, the complexity of ML-based
solutions. Through this sharp contrast, the work was able to, for the first
time, rigorously quantify the pivotal role of lattice reduction as a special
complexity reducing ingredient.
Finally, the work analytically refines transceiver DMT analysis, which
generally fails to address potentially massive gaps between theory and
practice. Instead the adopted vanishing gap condition guarantees that the
decoder's error curve is arbitrarily close, given a sufficiently high SNR, to
the optimal error curve of exact solutions. This is a much stronger condition
than DMT optimality, which only guarantees an error gap that is subpolynomial in
SNR, and can thus be unbounded and generally unacceptable in practical
settings.
|
1107.4035
|
Towards Completely Lifted Search-based Probabilistic Inference
|
cs.AI
|
The promise of lifted probabilistic inference is to carry out probabilistic
inference in a relational probabilistic model without needing to reason about
each individual separately (grounding out the representation) by treating the
undistinguished individuals as a block. Current exact methods still need to
ground out in some cases, typically because the representation of the
intermediate results is not closed under the lifted operations. We set out to
answer the question as to whether there is some fundamental reason why lifted
algorithms would need to ground out undifferentiated individuals. We have two
main results: (1) We completely characterize the cases where grounding is
polynomial in a population size, and show how we can do lifted inference in
time polynomial in the logarithm of the population size for these cases. (2)
For the case of no-argument and single-argument parametrized random variables
where the grounding is not polynomial in a population size, we present lifted
inference which is polynomial in the population size whereas grounding is
exponential. Neither of these cases requires reasoning separately about the
individuals that are not explicitly mentioned.
|
1107.4042
|
Optimal Adaptive Learning in Uncontrolled Restless Bandit Problems
|
math.OC cs.LG
|
In this paper we consider the problem of learning the optimal policy for
uncontrolled restless bandit problems. In an uncontrolled restless bandit
problem, there is a finite set of arms, each of which when pulled yields a
positive reward. There is a player who sequentially selects one of the arms at
each time step. The goal of the player is to maximize its undiscounted reward
over a time horizon T. The reward process of each arm is a finite state Markov
chain, whose transition probabilities are unknown by the player. State
transitions of each arm are independent of the player's selections. We
propose a learning algorithm with logarithmic regret uniformly over time with
respect to the optimal finite horizon policy. Our results extend the optimal
adaptive learning of MDPs to POMDPs.
|
1107.4057
|
The Harmonic Theory; A mathematical framework to build intelligent
contextual and adaptive computing, cognition and sensory system
|
cs.AI cs.IT math.IT
|
Harmonic theory provides a mathematical framework to describe the structure,
behavior, evolution and emergence of harmonic systems. A harmonic system is
context aware, contains elements that manifest characteristics either
collaboratively or independently according to the system's expression, and can
interact with its environment. This theory provides a fresh way to analyze
emergence and collaboration of "ad-hoc" and complex systems.
|
1107.4067
|
Finding Non-overlapping Clusters for Generalized Inference Over
Graphical Models
|
stat.ML cs.IT math.IT
|
Graphical models use graphs to compactly capture stochastic dependencies
amongst a collection of random variables. Inference over graphical models
corresponds to finding marginal probability distributions given joint
probability distributions. In general, this is computationally intractable,
which has led to a quest for finding efficient approximate inference
algorithms. We propose a framework for generalized inference over graphical
models that can be used as a wrapper for improving the estimates of approximate
inference algorithms. Instead of applying an inference algorithm to the
original graph, we apply the inference algorithm to a block-graph, defined as a
graph in which the nodes are non-overlapping clusters of nodes from the
original graph. This results in marginal estimates of a cluster of nodes, which
we further marginalize to get the marginal estimates of each node. Our proposed
block-graph construction algorithm is simple, efficient, and motivated by the
observation that approximate inference is more accurate on graphs with longer
cycles. We present extensive numerical simulations that illustrate our
block-graph framework with a variety of inference algorithms (e.g., those in
the libDAI software package). These simulations show the improvements provided
by our framework.
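The contract of the block-graph construction, grouping nodes into non-overlapping clusters, can be sketched with a simple greedy BFS partition (an illustrative stand-in; the authors' algorithm is additionally guided by cycle lengths):

```python
from collections import deque

def block_graph_clusters(adj, max_size):
    """Partition the nodes of a graph into non-overlapping clusters of at
    most max_size nodes each, each cluster grown by breadth-first search.
    adj maps a node to its neighbor list."""
    seen, clusters = set(), []
    for start in sorted(adj):
        if start in seen:
            continue
        seen.add(start)
        cluster, frontier = [start], deque([start])
        while frontier and len(cluster) < max_size:
            u = frontier.popleft()
            for v in adj[u]:
                if v not in seen and len(cluster) < max_size:
                    seen.add(v)
                    cluster.append(v)
                    frontier.append(v)
        clusters.append(cluster)
    return clusters
```

Each cluster then becomes a single node of the block-graph on which the inference algorithm is run, and per-cluster marginals are marginalized down to per-node estimates.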
|
1107.4080
|
On the Universality of Online Mirror Descent
|
cs.LG
|
We show that for a general class of convex online learning problems, Mirror
Descent can always achieve a (nearly) optimal regret guarantee.
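As a concrete instance of Mirror Descent, the entropic mirror map on the probability simplex yields the exponentiated-gradient update; a minimal sketch (the step size and gradient sequence are placeholders):

```python
import math

def omd_entropic(loss_grads, eta):
    """Online Mirror Descent on the simplex with the entropic mirror map:
    multiplicative update w_i <- w_i * exp(-eta * g_i), then renormalize."""
    n = len(loss_grads[0])
    w = [1.0 / n] * n                     # uniform initial point
    for g in loss_grads:
        w = [wi * math.exp(-eta * gi) for wi, gi in zip(w, g)]
        z = sum(w)
        w = [wi / z for wi in w]
    return w
```

With bounded gradients and eta on the order of sqrt(log n / T), this instance attains the familiar O(sqrt(T log n)) regret; the abstract's claim is that a suitable mirror map makes such (near-)optimal guarantees available for a general class of convex online problems.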
|
1107.4104
|
A novel canonical dual computational approach for prion AGAAAAGA amyloid
fibril molecular modeling
|
q-bio.BM cs.CE math-ph math.MP math.OC
|
Many experimental studies have shown that the prion AGAAAAGA palindrome
hydrophobic region (113-120) has amyloid fibril forming properties and plays an
important role in prion diseases. However, due to the unstable, noncrystalline
and insoluble nature of the amyloid fibril, to date structural information on
AGAAAAGA region (113-120) has been very limited. This region falls just within
the N-terminal unstructured region PrP (1-123) of prion proteins. Traditional
X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy
experimental methods cannot be used to obtain its structural information.
Against this background, this paper introduces a novel approach based on the canonical dual
theory to address the 3D atomic-resolution structure of prion AGAAAAGA amyloid
fibrils. The canonical dual computational approach introduced in this paper is
applied to the molecular modeling of prion AGAAAAGA amyloid fibrils, and the
optimal atomic-resolution structures of prion AGAAAAGA amyloid fibrils
presented here may be useful in the search for treatments for prion diseases in
the field of medicinal chemistry. Overall, this paper presents an important
method, provides useful information for the treatment of prion diseases, and
should be of interest to the general readership of Theoretical Biology.
|
1107.4118
|
Evaluating Data Assimilation Algorithms
|
physics.data-an cs.SY math.OC math.PR physics.ao-ph
|
Data assimilation leads naturally to a Bayesian formulation in which the
posterior probability distribution of the system state, given the observations,
plays a central conceptual role. The aim of this paper is to use this Bayesian
posterior probability distribution as a gold standard against which to evaluate
various commonly used data assimilation algorithms.
A key aspect of geophysical data assimilation is the high dimensionality and
low predictability of the computational model. With this in mind, yet with the
goal of allowing an explicit and accurate computation of the posterior
distribution, we study the 2D Navier-Stokes equations in a periodic geometry.
We compute the posterior probability distribution by state-of-the-art
statistical sampling techniques. The commonly used algorithms that we evaluate
against this accurate gold standard, as quantified by comparing the relative
error in reproducing its moments, are 4DVAR and a variety of sequential
filtering approximations based on 3DVAR and on extended and ensemble Kalman
filters.
The primary conclusions are that: (i) with appropriate parameter choices,
approximate filters can perform well in reproducing the mean of the desired
probability distribution; (ii) however they typically perform poorly when
attempting to reproduce the covariance; (iii) this poor performance is
compounded by the need to modify the covariance, in order to induce stability.
Thus, whilst filters can be a useful tool in predicting mean behavior, they
should be viewed with caution as predictors of uncertainty. These conclusions
are intrinsic to the algorithms and will not change if the model complexity is
increased, for example by employing a smaller viscosity, or by using a detailed
NWP model.
|
1107.4127
|
Spectra of sparse regular graphs with loops
|
cond-mat.stat-mech cond-mat.dis-nn cs.SI math-ph math.MP physics.soc-ph
|
We derive exact equations that determine the spectra of undirected and
directed sparsely connected regular graphs containing loops of arbitrary
length. The implications of our results for the structural and dynamical
properties of networks are discussed by showing how loops influence the size of
the spectral gap and the propensity for synchronization. Analytical formulas
for the spectrum are obtained for specific loop lengths.
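For the simplest loop, the n-cycle viewed as a 2-regular graph, the adjacency spectrum has the classical closed form lambda_k = 2 cos(2*pi*k/n), so the effect of loop length on the spectral gap can be checked directly:

```python
import math

def cycle_spectrum(n):
    """Adjacency eigenvalues of the n-cycle: lambda_k = 2*cos(2*pi*k/n)."""
    return sorted(2.0 * math.cos(2.0 * math.pi * k / n) for k in range(n))

def spectral_gap(n):
    """Gap between the largest and second-largest adjacency eigenvalues."""
    spec = cycle_spectrum(n)
    return spec[-1] - spec[-2]
```

Longer loops shrink the gap, which is consistent with the abstract's point that loop structure influences the spectral gap and hence the propensity for synchronization.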
|
1107.4132
|
Null-Control and Measurable Sets
|
math.OC cs.SY
|
We prove the interior and boundary null-controllability of some parabolic
evolutions with controls acting over measurable sets.
|
1107.4142
|
Asymptotics of the Invariant Measure in Mean Field Models with Jumps
|
math.PR cs.IT cs.SY math.IT math.OC
|
We consider the asymptotics of the invariant measure for the process of the
empirical spatial distribution of $N$ coupled Markov chains in the limit of a
large number of chains. Each chain reflects the stochastic evolution of one
particle. The chains are coupled through the dependence of the transition rates
on this spatial distribution of particles in the various states. Our model is a
caricature for medium access interactions in wireless local area networks. It
is also applicable to the study of spread of epidemics in a network. The
limiting process satisfies a deterministic ordinary differential equation
called the McKean-Vlasov equation. When this differential equation has a unique
globally asymptotically stable equilibrium, the spatial distribution
asymptotically concentrates on this equilibrium. More generally, its limit
points are supported on a subset of the $\omega$-limit sets of the
McKean-Vlasov equation. Using a control-theoretic approach, we examine the
question of large deviations of the invariant measure from this limit.
|
1107.4148
|
The Sender-Excited Secret Key Agreement Model: Capacity, Reliability and
Secrecy Exponents
|
cs.IT cs.CR math.IT
|
We consider the secret key generation problem when sources are randomly
excited by the sender and there is a noiseless public discussion channel. Our
setting is thus similar to recent works on channels with action-dependent
states where the channel state may be influenced by some of the parties
involved. We derive single-letter expressions for the secret key capacity
through a type of source emulation analysis. We also derive lower bounds on the
achievable reliability and secrecy exponents, i.e., the exponential rates of
decay of the probability of decoding error and of the information leakage.
These exponents allow us to determine a set of strongly-achievable secret key
rates. For degraded eavesdroppers the maximum strongly-achievable rate equals
the secret key capacity; our exponents can also be specialized to previously
known results.
In deriving our strong achievability results we introduce a coding scheme
that combines wiretap coding (to excite the channel) and key extraction (to
distill keys from residual randomness). The secret key capacity is naturally
seen to be a combination of both source- and channel-type randomness. Through
examples we illustrate a fundamental interplay between the portion of the
secret key rate due to each type of randomness. We also illustrate inherent
tradeoffs between the achievable reliability and secrecy exponents. Our new
scheme also naturally accommodates rate limits on the public discussion. We
show that under rate constraints we are able to achieve larger rates than those
that can be attained through a pure source emulation strategy.
|
1107.4153
|
Performance and Convergence of Multi-user Online Learning
|
cs.MA cs.LG
|
We study the problem of allocating multiple users to a set of wireless
channels in a decentralized manner when the channel qualities are
time-varying and unknown to the users, and accessing the same channel by
multiple users leads to reduced quality due to interference. In such a setting
the users need to learn not only the inherent channel qualities but also
the best allocation of users to channels so as to maximize the social
welfare. Assuming that the users adopt a certain online learning algorithm, we
investigate under what conditions the socially optimal allocation is
achievable. In particular we examine the effect of different levels of
knowledge the users may have and the amount of communication and cooperation.
The general conclusion is that when the cooperation of users decreases and the
uncertainty about channel payoffs increases it becomes harder to achieve the
socially optimal allocation.
|
1107.4157
|
Linear Differential Equations with Fuzzy Boundary Values
|
cs.NA cs.CE math.DS math.NA
|
In this study, we consider a linear differential equation with fuzzy boundary
values. We express the solution of the problem in terms of a fuzzy set of crisp
real functions. Each real function from the solution set satisfies the
differential equation, and its boundary values belong to intervals determined by the
corresponding fuzzy numbers. The least possibility among possibilities of
boundary values in corresponding fuzzy sets is defined as the possibility of
the real function in the fuzzy solution. In order to find the fuzzy solution we
propose a method based on the properties of linear transformations. We show
that, if the corresponding crisp problem has a unique solution, then the fuzzy
problem has a unique solution too. We also prove that if the boundary values are
triangular fuzzy numbers, then the value of the solution at any time is also a
triangular fuzzy number. We find that the fuzzy solution determined by our
method is the same as the one that is obtained from solution of crisp problem
by the application of the extension principle. We present two examples
describing the proposed method.
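The closure property used by the method, that linear operations preserve triangular fuzzy numbers, can be illustrated with a small (left, peak, right) arithmetic sketch (our own illustration, not the authors' code):

```python
class Tri:
    """Triangular fuzzy number stored as (left, peak, right)."""
    def __init__(self, left, peak, right):
        assert left <= peak <= right
        self.left, self.peak, self.right = left, peak, right

    def scale(self, c):
        # Multiplying by a negative scalar reverses the support interval.
        if c >= 0:
            return Tri(c * self.left, c * self.peak, c * self.right)
        return Tri(c * self.right, c * self.peak, c * self.left)

    def add(self, other):
        return Tri(self.left + other.left,
                   self.peak + other.peak,
                   self.right + other.right)
```

For instance, for y' = -y with a triangular fuzzy initial value A, the solution value at time t is the triangular number exp(-t) * A, matching the abstract's claim that the value of the solution at any time remains a triangular fuzzy number.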
|
1107.4161
|
Local Optima Networks of the Quadratic Assignment Problem
|
cs.AI
|
Using a recently proposed model for combinatorial landscapes, Local Optima
Networks (LON), we conduct a thorough analysis of two types of instances of the
Quadratic Assignment Problem (QAP). This network model is a reduction of the
landscape in which the nodes correspond to the local optima, and the edges
account for the notion of adjacency between their basins of attraction. The
model was inspired by the notion of 'inherent network' of potential energy
surfaces proposed in physical chemistry. The local optima networks extracted
from the so-called uniform and real-like QAP instances show features clearly
distinguishing these two types of instances. Apart from a clear confirmation
that the search difficulty increases with the problem dimension, the analysis
provides new confirming evidence explaining why the real-like instances are
easier to solve exactly using heuristic search, while the uniform instances are
easier to solve approximately. Although the local optima network model is still
under development, we argue that it provides a novel view of combinatorial
landscapes, opening up the possibilities for new analytical tools and
understanding of problem difficulty in combinatorial optimization.
|
1107.4162
|
Local Optima Networks of NK Landscapes with Neutrality
|
cs.AI
|
In previous work we have introduced a network-based model that abstracts many
details of the underlying landscape and compresses the landscape information
into a weighted, oriented graph which we call the local optima network. The
vertices of this graph are the local optima of the given fitness landscape,
while the arcs are transition probabilities between local optima basins. Here
we extend this formalism to neutral fitness landscapes, which are common in
difficult combinatorial search spaces. By using two known neutral variants of
the NK family (i.e. NKp and NKq) in which the amount of neutrality can be tuned
by a parameter, we show that our new definitions of the optima networks and the
associated basins are consistent with the previous definitions for the
non-neutral case. Moreover, our empirical study and statistical analysis show
that the features of neutral landscapes interpolate smoothly between landscapes
with maximum neutrality and non-neutral ones. We found some previously unknown
structural differences between the two studied families of neutral landscapes. But
overall, the network features studied confirmed that neutrality, in landscapes
with percolating neutral networks, may enhance heuristic search. Our current
methodology requires the exhaustive enumeration of the underlying search space.
Therefore, sampling techniques should be developed before this analysis can
have practical implications. We argue, however, that the proposed model offers
a new perspective into the problem difficulty of combinatorial optimization
problems and may inspire the design of more effective search heuristics.
|
1107.4163
|
Centric selection: a way to tune the exploration/exploitation trade-off
|
cs.AI
|
In this paper, we study the exploration / exploitation trade-off in cellular
genetic algorithms. We define a new selection scheme, the centric selection,
which is tunable and allows controlling the selective pressure with a single
parameter. The equilibrium model is used to study the influence of the centric
selection on the selective pressure and a new model which takes into account
problem dependent statistics and selective pressure in order to deal with the
exploration / exploitation trade-off is proposed: the punctuated equilibria
model. Results on the quadratic assignment problem and on NK-landscapes reveal
an optimal exploration / exploitation trade-off on both
classes of problems. The punctuated equilibria model is used to explain these
results.
|
1107.4164
|
NK landscapes difficulty and Negative Slope Coefficient: How Sampling
Influences the Results
|
cs.AI
|
The Negative Slope Coefficient is an indicator of problem hardness that was
introduced in 2004 and has returned promising results on a large set of
problems. It is based on the concept of fitness cloud and works by partitioning
the cloud into a number of bins representing as many different regions of the
fitness landscape. The measure is calculated by joining the bins centroids by
segments and summing all their negative slopes. In this paper, for the first
time, we point out a potential problem of the Negative Slope Coefficient: we
study its value for different instances of the well known NK-landscapes and we
show how this indicator is dramatically influenced by the minimum number of
points contained in a bin. Subsequently, we formally justify this behavior of
the Negative Slope Coefficient and discuss the pros and cons of this measure.
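The measure as described can be sketched in a few lines: sort the fitness cloud, split it into equal-size bins, join the bin centroids by segments, and sum the negative slopes (bin-partitioning details vary in the literature; this is one plausible reading):

```python
def negative_slope_coefficient(cloud, n_bins):
    """cloud: (parent_fitness, offspring_fitness) pairs.  Partition by
    parent fitness into n_bins equal-size bins, join the bin centroids by
    segments, and sum the negative slopes."""
    pts = sorted(cloud)
    size = len(pts) // n_bins
    centroids = []
    for b in range(n_bins):
        chunk = pts[b * size:(b + 1) * size] if b < n_bins - 1 else pts[b * size:]
        centroids.append((sum(p for p, _ in chunk) / len(chunk),
                          sum(o for _, o in chunk) / len(chunk)))
    nsc = 0.0
    for (x1, y1), (x2, y2) in zip(centroids, centroids[1:]):
        slope = (y2 - y1) / (x2 - x1)
        if slope < 0:
            nsc += slope
    return nsc
```

The sensitivity the abstract highlights is visible here: with too few points per bin the centroids become noisy, and spurious negative slopes can dominate the sum.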
|
1107.4196
|
The Bethe Permanent of a Non-Negative Matrix
|
cs.IT cs.CC math-ph math.CO math.IT math.MP
|
It has recently been observed that the permanent of a non-negative square
matrix, i.e., of a square matrix containing only non-negative real entries, can
very well be approximated by solving a certain Bethe free energy function
minimization problem with the help of the sum-product algorithm. We call the
resulting approximation of the permanent the Bethe permanent. In this paper we
give reasons why this approach to approximating the permanent works well.
Namely, we show that the Bethe free energy function is convex and that the
sum-product algorithm finds its minimum efficiently. We then discuss the fact
that the permanent is lower bounded by the Bethe permanent, and we comment on
potential upper bounds on the permanent based on the Bethe permanent. We also
present a combinatorial characterization of the Bethe permanent in terms of
permanents of so-called lifted versions of the matrix under consideration.
Moreover, we comment on possibilities to modify the Bethe permanent so that it
approximates the permanent even better, and we conclude the paper with some
observations and conjectures about permanent-based pseudo-codewords and
permanent-based kernels.
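For small matrices, the exact permanent that the Bethe permanent lower-bounds can be computed with Ryser's inclusion-exclusion formula; a stdlib sketch for reference:

```python
from itertools import combinations

def permanent(M):
    """Exact permanent of a square matrix via Ryser's formula:
    perm(A) = sum over nonempty column subsets S of
    (-1)^(n-|S|) * prod_i sum_{j in S} a_ij."""
    n = len(M)
    total = 0
    for r in range(1, n + 1):
        sign = (-1) ** (n - r)
        for cols in combinations(range(n), r):
            prod = 1
            for row in M:
                prod *= sum(row[c] for c in cols)
            total += sign * prod
    return total
```

This exact computation costs O(2^n * n^2); the appeal of the Bethe permanent is precisely that the sum-product minimization of the Bethe free energy gives a good approximation at far lower cost.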
|
1107.4199
|
An Analytical Model for the Intercell Interference Power in the Downlink
of Wireless Cellular Networks
|
cs.IT cs.NI math.IT
|
In this paper, we propose a methodology for estimating the statistics of the
intercell interference power in the downlink of a multicellular network. We
first establish an analytical expression for the probability law of the
interference power when only Rayleigh multipath fading is considered. Next,
focusing on a propagation environment where small-scale Rayleigh fading as well
as large-scale effects, including attenuation with distance and lognormal
shadowing, are taken into consideration, we elaborate a semi-analytical method
to build up the histogram of the interference power distribution. From the
results obtained for this combined small- and large-scale fading context, we
then develop a statistical model for the interference power distribution. The
interest of this model lies in the fact that it can be applied to a large range
of values of the shadowing parameter. The proposed methods can also be easily
extended to other types of networks.
|
1107.4212
|
On the Undecidability of Fuzzy Description Logics with GCIs with
Lukasiewicz t-norm
|
cs.LO cs.AI
|
Recently there have been some unexpected results concerning Fuzzy Description
Logics (FDLs) with General Concept Inclusions (GCIs). They show that, unlike
the classical case, the DL ALC with GCIs does not have the finite model
property under Lukasiewicz Logic or Product Logic and, specifically, knowledge
base satisfiability is an undecidable problem for Product Logic. We complete
here the analysis by showing that knowledge base satisfiability is also an
undecidable problem for Lukasiewicz Logic.
|
1107.4218
|
The settlement of Madagascar: what dialects and languages can tell
|
cs.CL q-bio.PE
|
The dialects of Madagascar belong to the Greater Barito East group of the
Austronesian family, and it is widely accepted that the island was colonized by
Indonesian sailors after a maritime trek which probably took place around 650
CE. The language most closely related to the Malagasy dialects is Maanyan, but
Malay is also strongly related, especially with regard to navigation terms.
Since the Maanyan Dayaks live along the Barito river in Kalimantan (Borneo)
and do not possess the skills necessary for long maritime navigation, they
were probably brought as subordinates by Malay sailors.
In a recent paper we compared 23 different Malagasy dialects in order to
determine the time and the landing area of the first colonization. In this
research we use new data and new methods to confirm that the landing took place
on the south-east coast of the island. Furthermore, we are able to state here
that multiple settlements are unlikely and that, therefore, colonization
consisted of a single founding event.
To reach our goal we determine the internal kinship relations among all 23
Malagasy dialects, as well as the kinship degrees of the 23 dialects with
respect to Malay and Maanyan. The method used is an automated version of the
lexicostatistical approach. The data concerning Madagascar were collected by
the author at the beginning of 2010 and consist of Swadesh lists of 200 items
for 23 dialects covering all areas of the island. The lists for Maanyan and
Malay were obtained from published datasets supplemented by the author's interviews.
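As a toy illustration of an automated lexicostatistical comparison of Swadesh lists (the actual method uses more sophisticated cognacy judgments; the word lists here are made up):

```python
def lexicostatistic_distance(list_a, list_b):
    """Lexicostatistics sketch: compare two aligned Swadesh lists and
    return the fraction of non-matching items as a distance; real
    methods judge cognacy, e.g. via normalized edit distance."""
    shared = sum(1 for a, b in zip(list_a, list_b) if a == b)
    return 1 - shared / len(list_a)
```

Pairwise distances of this kind, computed over all dialect pairs, are what kinship trees and separation-time estimates are built from.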
|
1107.4222
|
Interference minimization in physical model of wireless networks
|
cs.DS cs.IT math.IT
|
The interference minimization problem in wireless sensor and ad-hoc networks is
considered: assign a transmission power to each node of a network such that the
network is connected and, at the same time, the maximum accumulated signal
strength at any network node is minimized. Previous work on interference
minimization in wireless networks mainly considers the disk graph model of the
network. For the disk graph model, two approximation algorithms with
$O(\sqrt{n})$ and $O((opt\ln{n})^{2})$ upper bounds on the maximum interference
are known, where $n$ is the number of nodes and $opt$ is the minimal
interference of a given network. In the current work we consider a more general
interference model, the physical interference model, where a sender node's
signal strength at a given node is a function of the sender/receiver node pair
and the sender node's transmission power. For this model we give a
polynomial-time approximation algorithm which finds a connected network with at
most $O((opt\ln{n})^{2}/\beta)$ interference, where $\beta \geq 1$ is the
minimum signal strength necessary at a receiver node for successfully receiving
a message.
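A minimal sketch of the physical-model interference measure described above, assuming the pairwise gains are supplied as a matrix (the paper's gain function and the connectivity constraint are more general):

```python
import numpy as np

def max_interference(power, gain):
    """Physical-model sketch: gain[i][j] maps sender i's transmit power
    to its signal strength at node j; the interference at node j sums
    the strengths of all other senders, and the objective is the
    maximum of this sum over all nodes."""
    power = np.asarray(power, float)
    gain = np.asarray(gain, float)
    received = power[:, None] * gain                 # received[i, j]
    at_node = received.sum(axis=0) - np.diag(received)  # exclude own signal
    return at_node.max()

# two nodes, unit powers and symmetric unit gains: each accumulates 1.0
print(max_interference([1.0, 1.0], [[1.0, 1.0], [1.0, 1.0]]))  # 1.0
```

The optimization problem is then to choose `power` so that the induced network is connected while this maximum stays small.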
|
1107.4246
|
A computability challenge: asymptotic bounds and isolated
error-correcting codes
|
cs.IT math.IT math.NA
|
Consider the set of all error-correcting block codes over a fixed alphabet
with $q$ letters. It determines a recursively enumerable set of points in the
unit square with coordinates $(R,\delta)$:= {\it (relative transmission rate,
relative minimal distance).} Limit points of this set form a closed subset,
defined by $R\le \alpha_q(\delta)$, where $\alpha_q(\delta)$ is a continuous
decreasing function called {\it asymptotic bound.} Its existence was proved by
the author in 1981, but all attempts to find an explicit formula for it so far
failed.
In this note I consider the question whether this function is computable in
the sense of constructive mathematics, and discuss some arguments suggesting
that the answer might be negative.
|
1107.4255
|
A New Stability Result for the Feedback Interconnection of Negative
Imaginary Systems with a Pole at the Origin
|
math.OC cs.SY
|
This paper is concerned with stability conditions for the positive feedback
interconnection of negative imaginary systems. A generalization of the negative
imaginary lemma is derived, which remains true even if the transfer function
has poles on the imaginary axis, including the origin. A sufficient condition
for the internal stability of a feedback interconnection of negative imaginary
(NI) systems including a pole at the origin is given, and an illustrative
example is presented to support the result.
|
1107.4264
|
Accelerating Radio Astronomy Cross-Correlation with Graphics Processing
Units
|
astro-ph.IM cs.CE
|
We present a highly parallel implementation of the cross-correlation of
time-series data using graphics processing units (GPUs), which is scalable to
hundreds of independent inputs and suitable for the processing of signals from
"Large-N" arrays of many radio antennas. The computational part of the
algorithm, the X-engine, is implemented efficiently on Nvidia's Fermi
architecture, sustaining up to 79% of the peak single-precision floating-point
throughput. We compare performance obtained for hardware- and software-managed
caches, observing significantly better performance for the latter. The high
performance reported involves use of a multi-level data tiling strategy in
memory and use of a pipelined algorithm with simultaneous computation and
transfer of data from host to device memory. The speed of code development,
flexibility, and low cost of the GPU implementations compared to ASIC and FPGA
implementations have the potential to greatly shorten the cycle of correlator
development and deployment, for cases where some power consumption penalty can
be tolerated.
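Stripped of the GPU tiling and pipelining, the core X-engine operation is a cross-multiply-and-accumulate over antenna pairs; a minimal NumPy sketch of that mathematical core (not the Fermi implementation):

```python
import numpy as np

def x_engine(voltages):
    """X-engine sketch: given complex voltage samples of shape
    (n_antennas, n_samples), accumulate the correlation (visibility)
    matrix V[i, j] = sum_t v_i(t) * conj(v_j(t)) over all pairs."""
    v = np.asarray(voltages)
    return v @ v.conj().T
```

A real correlator computes only the upper triangle and integrates in tiles; this dense form just shows the operation being accelerated.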
|
1107.4303
|
Interactive ontology debugging: two query strategies for efficient fault
localization
|
cs.AI
|
Effective debugging of ontologies is an important prerequisite for their
broad application, especially in areas that rely on everyday users to create
and maintain knowledge bases, such as the Semantic Web. In such systems
ontologies capture formalized vocabularies of terms shared by their users.
However, in many cases users have different local views of the domain, i.e. of
the context in which a given term is used. Inappropriate usage of terms,
together with the natural complications of formulating and understanding
logical descriptions, may result in faulty ontologies. Recent ontology debugging
approaches use diagnosis methods to identify causes of the faults. In most
debugging scenarios these methods return many alternative diagnoses, thus
placing the burden of fault localization on the user. This paper demonstrates
how the target diagnosis can be identified by performing a sequence of
observations, that is, by querying an oracle about entailments of the target
ontology. To identify the best query we propose two query selection strategies:
a simple "split-in-half" strategy and an entropy-based strategy. The latter
allows knowledge about typical user errors to be exploited to minimize the
number of queries. Our evaluation showed that the entropy-based method
significantly reduces the number of required queries compared to the
"split-in-half" approach. We experimented with different probability
distributions of user errors and different qualities of the a-priori
probabilities. Our measurements demonstrated the superiority of entropy-based
query selection even in cases where all fault probabilities are equal, i.e.
where no information about typical user errors is available.
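A minimal sketch of the two query-scoring ideas, assuming diagnoses are opaque objects and a query is a predicate over them (a hypothetical interface, not the paper's implementation):

```python
import math

def split_in_half_score(query, diagnoses):
    """Split-in-half sketch: prefer queries whose positive answer is
    consistent with roughly half of the remaining diagnoses (lower
    score is better)."""
    k = sum(1 for d in diagnoses if query(d))
    return abs(k - len(diagnoses) / 2)

def entropy_score(query, diagnoses, prob):
    """Entropy-based sketch: expected information gained from the
    query's answer, weighted by prior fault probabilities prob[d]
    (higher score is better)."""
    p_yes = sum(prob[d] for d in diagnoses if query(d))
    if p_yes in (0.0, 1.0):
        return 0.0  # answer already determined, no information gained
    return -p_yes * math.log2(p_yes) - (1 - p_yes) * math.log2(1 - p_yes)
```

With uniform priors both strategies agree; skewed priors from typical user errors are what let the entropy-based strategy ask fewer questions.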
|
1107.4346
|
Effective Capacity of Two-Hop Wireless Communication Systems
|
cs.IT math.IT
|
A two-hop wireless communication link in which a source sends data to a
destination with the aid of an intermediate relay node is studied. It is
assumed that there is no direct link between the source and the destination,
and the relay forwards the information to the destination by employing the
decode-and-forward scheme. Both the source and intermediate relay nodes are
assumed to operate under statistical quality of service (QoS) constraints
imposed as limitations on the buffer overflow probabilities. The maximum
constant arrival rates that can be supported by this two-hop link in the
presence of QoS constraints are characterized by determining the effective
capacity of such links as a function of the QoS parameters and signal-to-noise
ratios at the source and relay, and the fading distributions of the links. The
analysis is performed for both full-duplex and half-duplex relaying. Through
this study, the impact upon the throughput of having buffer constraints at the
source and intermediate relay nodes is identified. The interactions between the
buffer constraints in different nodes and how they affect the performance are
studied. The optimal time-sharing parameter in half-duplex relaying is
determined, and performance with half-duplex relaying is investigated.
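For background, the effective capacity of a single fading link under QoS exponent $\theta$ is commonly defined as follows (a standard definition from the effective-capacity literature, not a formula quoted from this paper):

```latex
C_E(\theta) \;=\; -\lim_{t\to\infty}\frac{1}{\theta\, t}\,
\log \mathbb{E}\!\left[e^{-\theta\, S(t)}\right],
\qquad S(t)=\sum_{i=1}^{t} R_i,
```

where $S(t)$ is the service accumulated over $t$ frames and $R_i$ is the instantaneous service rate in frame $i$; a larger $\theta$ corresponds to a stricter buffer-overflow constraint. The paper characterizes the analogous quantity for the two-hop link, with QoS exponents at both the source and the relay.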
|
1107.4396
|
The IHS Transformations Based Image Fusion
|
cs.CV
|
The IHS sharpening technique is one of the most commonly used techniques for
image sharpening. Different transformations have been developed to transfer a
color image from the RGB space to the IHS space. The literature shows that
various scientists have proposed alternative IHS transformations; many papers
report good results while others report poor ones, often without stating which
IHS transformation formula was used. In addition, many papers give different
formulas for the transformation matrix of the IHS transformation. This leads to
confusion: what is the exact formula of the IHS transformation? Therefore, the
main purpose of this work is to explore different IHS transformation techniques
and to evaluate them experimentally for IHS-based image fusion. The image
fusion performance was evaluated, in this study, using various methods to
quantitatively estimate the quality and degree of information improvement
of a fused image.
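For concreteness, one commonly cited linear RGB-to-IHS variant, used here only to illustrate the substitution step (it is one of the several competing formulas the abstract refers to; other papers use different matrices):

```python
import numpy as np

# One commonly cited linear RGB -> (I, v1, v2) matrix; hue H and
# saturation S are then read off from (v1, v2) in polar form.
M = np.array([[1/3, 1/3, 1/3],
              [-np.sqrt(2)/6, -np.sqrt(2)/6, 2*np.sqrt(2)/6],
              [1/np.sqrt(2), -1/np.sqrt(2), 0.0]])

def rgb_to_ihs(rgb):
    i, v1, v2 = M @ np.asarray(rgb, float)
    return i, np.arctan2(v2, v1), np.hypot(v1, v2)   # I, H, S

def ihs_fusion(rgb, pan):
    """IHS sharpening sketch: substitute the panchromatic band for the
    intensity component, then invert the linear part of the transform."""
    _, v1, v2 = M @ np.asarray(rgb, float)
    return np.linalg.solve(M, np.array([pan, v1, v2]))
```

Replacing the intensity of a gray pixel with a brighter pan value simply brightens it while leaving hue and saturation untouched, which is the intended behavior of IHS substitution.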
|
1107.4407
|
Determining Key Model Parameters of Rapidly Intensifying Hurricane
Guillermo(1997) using the Ensemble Kalman Filter
|
physics.geo-ph cs.SY math.OC
|
In this work we determine key model parameters for rapidly intensifying
Hurricane Guillermo (1997) using the Ensemble Kalman Filter (EnKF). The
approach is to utilize the EnKF as a tool to estimate only the parameter values
of the model for a particular data set. The assimilation is performed using
dual-Doppler radar observations obtained during the period of rapid
intensification of Hurricane Guillermo. A unique aspect of Guillermo was that
during the period of radar observations strong convective bursts, attributable
to wind shear, formed primarily within the eastern semicircle of the eyewall.
To reproduce this observed structure within a hurricane model, background wind
shear of some magnitude must be specified, and turbulence and surface
parameters must be set appropriately so that the impact of the shear on the
simulated hurricane vortex can be realized. To identify the complex nonlinear
interactions induced by changes in these parameters, an ensemble of model
simulations has been conducted in which individual members were formulated by
sampling the parameters within a certain range via a Latin hypercube approach.
The ensemble and the data, derived latent heat and horizontal winds from the
dual-Doppler radar observations, are utilized in the EnKF to obtain varying
estimates of the model parameters. The parameters are estimated at each time
instance, and a final parameter value is obtained by computing the average over
time. Individual simulations were conducted using the estimates, with the
simulation using latent heat parameter estimates producing the lowest overall
model forecast error.
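The Latin hypercube sampling step used to build the ensemble can be sketched as follows (illustrative parameter names and ranges; the actual parameters and bounds are those of the hurricane model):

```python
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n_samples, bounds):
    """Latin hypercube sketch: in each parameter dimension, draw exactly
    one sample per equal-probability stratum, with strata shuffled
    independently across dimensions."""
    bounds = np.asarray(bounds, float)               # shape (n_params, 2)
    n_params = len(bounds)
    # one stratum index per sample per dimension, shuffled per dimension
    strata = rng.permuted(np.tile(np.arange(n_samples), (n_params, 1)),
                          axis=1).T                  # (n_samples, n_params)
    u = (strata + rng.random((n_samples, n_params))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# e.g. a 20-member ensemble over (wind shear, a surface exchange coefficient)
ensemble = latin_hypercube(20, [(0.0, 10.0), (1e-3, 3e-3)])
```

Each row is one ensemble member's parameter vector; stratification guarantees the whole range of every parameter is covered even with few members.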
|
1107.4414
|
Frequency based Classification of Activities using Accelerometer Data
|
cs.NE
|
This work presents the classification of user activities such as Rest, Walk
and Run on the basis of the frequency components present in the acceleration
data in a wireless sensor network environment. Since the frequencies of these
activities differ slightly from person to person, this approach gives a more
accurate result. The algorithm uses just one parameter, i.e. the frequency of
the body acceleration data of the three axes, to classify the activities in a
set of data. The algorithm includes a normalization step, and hence there is
no need to set a different magnitude threshold for each test person. The
classification is automatic and done on a block-by-block basis.
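A minimal sketch of a frequency-based classifier of this kind, with illustrative (made-up) frequency bands rather than the paper's thresholds:

```python
import numpy as np

def dominant_frequency(accel, fs):
    """Frequency-feature sketch: normalize the acceleration signal (so
    no per-person magnitude threshold is needed), then return the
    dominant non-DC frequency in Hz for one block of samples."""
    a = np.asarray(accel, float)
    a = (a - a.mean()) / (a.std() + 1e-12)       # normalization step
    spectrum = np.abs(np.fft.rfft(a))
    freqs = np.fft.rfftfreq(len(a), d=1/fs)
    return freqs[1 + np.argmax(spectrum[1:])]    # skip the DC bin

def classify(freq_hz, walk_band=(1.0, 2.5), run_band=(2.5, 5.0)):
    """Illustrative band thresholds only; the actual cut-offs would be
    tuned to the activities being monitored."""
    if freq_hz < walk_band[0]:
        return "Rest"
    return "Walk" if freq_hz < run_band[0] else "Run"
```

Applied block by block to each axis, this reproduces the automatic, per-block classification the abstract describes.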
|
1107.4429
|
High Accuracy Human Activity Monitoring using Neural network
|
cs.NE
|
This paper presents the design of a neural network for the classification
of human activity. A triaxial accelerometer sensor, housed in a chest-worn
sensor unit, has been used for capturing the acceleration of the associated
movements. Acceleration data for all three axes were collected at a base
station PC via a CC2420 2.4 GHz ISM-band radio (ZigBee wireless compliant),
then processed and classified using MATLAB. A neural network approach to
classification was used with an eye on theoretical and empirical facts. The
work gives a detailed description of the design steps for the classification
of human body acceleration data. A 4-layer back-propagation neural network,
trained with the Levenberg-Marquardt algorithm, showed the best performance
among the neural network training algorithms considered.
|