| id | title | categories | abstract |
|---|---|---|---|
1102.4205
|
An algebra for signal processing
|
cs.NA cs.IT math.IT
|
Our paper presents an attempt to axiomatise signal processing. Our long-term
goal is to formulate signal processing algorithms for an ideal world of exact
computation and prove properties about them, then interpret these ideal
formulations and apply them without change to real world discrete data. We give
models of the axioms that are based on Gaussian functions, that allow for exact
computations and automated tests of signal algorithm properties.
|
1102.4225
|
Model-checking ATL under Imperfect Information and Perfect Recall
Semantics is Undecidable
|
cs.LO cs.MA
|
We propose a formal proof of the undecidability of the model checking problem
for alternating-time temporal logic under imperfect information and perfect
recall semantics. This problem was announced to be undecidable according to a
personal communication on multi-player games with imperfect information, but no
formal proof was ever published. Our proof is based on a direct reduction from
the non-halting problem for Turing machines.
|
1102.4240
|
Sparse neural networks with large learning diversity
|
cs.LG cs.DS
|
Coded recurrent neural networks with three levels of sparsity are introduced.
The first level is related to the size of messages, much smaller than the
number of available neurons. The second one is provided by a particular coding
rule, acting as a local constraint in the neural activity. The third one is a
characteristic of the low final connection density of the network after the
learning phase. Though the proposed network is very simple since it is based on
binary neurons and binary connections, it is able to learn a large number of
messages and recall them, even in the presence of strong erasures. The performance
of the network is assessed as a classifier and as an associative memory.
|
1102.4258
|
SHREC 2011: robust feature detection and description benchmark
|
cs.CV
|
Feature-based approaches have recently become very popular in computer vision
and image analysis applications, and are becoming a promising direction in
shape retrieval. SHREC'11 robust feature detection and description benchmark
simulates the feature detection and description stages of feature-based shape
retrieval algorithms. The benchmark tests the performance of shape feature
detectors and descriptors under a wide variety of transformations. The
benchmark allows evaluating how algorithms cope with certain classes of
transformations and what transformation strengths they can handle. The
present paper is a report of the SHREC'11 robust feature detection and
description benchmark results.
|
1102.4272
|
Bounds on the Achievable Rate for the Fading Relay Channel with Finite
Input Constellations
|
cs.IT math.IT
|
We consider the wireless Rayleigh fading relay channel with finite complex
input constellations. Assuming global knowledge of the channel state
information and perfect synchronization, upper and lower bounds on the
achievable rate, for the full-duplex relay, as well as the more practical
half-duplex relay (in which the relay cannot transmit and receive
simultaneously), are studied. Assuming the power constraint at the source node
and the relay node to be equal, the gain in rate offered by the use of relay
over the direct transmission (without the relay) is investigated. It is shown
that for the case of finite complex input constellations, the relay gain
attains the maximum at a particular SNR and at higher SNRs the relay gain tends
to become zero. Since practical schemes always use finite complex input
constellations, the above result means that the relay offers maximum advantage
over the direct transmission when we operate at a particular SNR and offers no
advantage at very high SNRs. This is contrary to the results already known for
the relay channel with Gaussian input alphabet.
|
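The peak-then-vanish behaviour of the relay gain is stated for constellation-constrained achievable rates. As a hedged illustration of the underlying quantity (the direct-link rate, not the paper's relay bounds), the sketch below estimates the BPSK-constrained rate of an AWGN channel by Monte Carlo; the constellation, noise levels, and sample count are illustrative assumptions:

```python
import numpy as np

def bpsk_awgn_rate(sigma, n=200_000, seed=0):
    """Monte Carlo estimate of the achievable rate I(X;Y), in bits per
    channel use, for equiprobable BPSK X in {-1,+1} over Y = X + N,
    N ~ Normal(0, sigma^2)."""
    rng = np.random.default_rng(seed)
    x = rng.choice([-1.0, 1.0], size=n)
    y = x + sigma * rng.normal(size=n)
    # for equiprobable BPSK, H(X|Y=y) = log2(1 + exp(-2*y*x/sigma^2))
    llr = 2.0 * y * x / sigma**2
    h_cond = np.logaddexp(0.0, -llr) / np.log(2.0)   # overflow-safe log2(1+e^-llr)
    return 1.0 - h_cond.mean()
```

Unlike the Gaussian-input capacity, this rate saturates at 1 bit/use at high SNR, which is the mechanism behind the vanishing relay gain described above.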
1102.4293
|
Protein Models Comparator: Scalable Bioinformatics Computing on the
Google App Engine Platform
|
cs.CE cs.DC q-bio.BM
|
The comparison of computer generated protein structural models is an
important element of protein structure prediction. It has many uses including
model quality evaluation, selection of the final models from a large set of
candidates or optimisation of parameters of energy functions used in
template-free modelling and refinement. Although many protein comparison
methods are available online on numerous web servers, they are not well suited
for large-scale model comparison: (1) they operate with methods designed to
compare actual proteins, not models of the same protein; (2) the majority of
them offer only a single pairwise structural comparison and are unable to scale
up to the required order of thousands of comparisons. To bridge the gap between
the protein and model structure comparison we have developed the Protein Models
Comparator (pm-cmp). To be able to deliver the scalability on demand and handle
large comparison experiments the pm-cmp was implemented "in the cloud".
Protein Models Comparator is a scalable web application for a fast
distributed comparison of protein models with RMSD, GDT TS, TM-score and
Q-score measures. It runs on the Google App Engine (GAE) cloud platform and is
a showcase of how the emerging PaaS (Platform as a Service) technology could be
used to simplify the development of scalable bioinformatics services. The
functionality of pm-cmp is accessible through an API, which allows full
automation of experiment submission and results retrieval. Protein Models
Comparator is free software released under the GNU Affero General Public
License and is available with its source code at: http://www.infobiotics.org/pm-cmp
This article presents a new web application addressing the need for a
large-scale model-specific protein structure comparison and provides an insight
into the GAE (Google App Engine) platform and its usefulness in scientific
computing.
|
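pm-cmp compares models with RMSD, GDT_TS, TM-score, and Q-score. As a hedged sketch of just the first of these (not the pm-cmp implementation), the Kabsch algorithm yields the RMSD after optimal rigid superposition of two conformations; the coordinates below are illustrative:

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two conformations (n x 3 coordinate arrays) after
    optimal rigid superposition via the Kabsch algorithm."""
    P = P - P.mean(axis=0)                 # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                            # 3x3 covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T)) # avoid improper rotation
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T                     # optimal rotation
    P_rot = P @ R.T
    return np.sqrt(((P_rot - Q) ** 2).sum() / len(P))
```

For two models of the same protein the residue correspondence is fixed, so no alignment search is needed before superposition, which is what makes batch model comparison tractable.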
1102.4360
|
Dynamic Homotopy and Landscape Dynamical Set Topology in Quantum Control
|
quant-ph cs.SY math.OC
|
We examine the topology of the subset of controls taking a given initial
state to a given final state in quantum control, where "state" may mean a pure
state |\psi>, an ensemble density matrix \rho, or a unitary propagator U(0,T).
The analysis consists in showing that the endpoint map acting on control space
is a Hurewicz fibration for a large class of affine control systems with vector
controls. Exploiting the resulting fibration sequence and the long exact
sequence of basepoint-preserving homotopy classes of maps, we show that the
indicated subset of controls is homotopy equivalent to the loopspace of the
state manifold. This not only allows us to understand the connectedness of
"dynamical sets" realized as preimages of subsets of the state space through
this endpoint map, but also provides a wealth of additional topological
information about such subsets of control space.
|
1102.4374
|
Link Prediction by De-anonymization: How We Won the Kaggle Social
Network Challenge
|
cs.CR cs.LG
|
This paper describes the winning entry to the IJCNN 2011 Social Network
Challenge run by Kaggle.com. The goal of the contest was to promote research on
real-world link prediction, and the dataset was a graph obtained by crawling
the popular Flickr social photo sharing website, with user identities scrubbed.
By de-anonymizing much of the competition test set using our own Flickr crawl,
we were able to effectively game the competition. Our attack represents a new
application of de-anonymization to gaming machine learning contests, suggesting
changes in how future competitions should be run.
We introduce a new simulated annealing-based weighted graph matching
algorithm for the seeding step of de-anonymization. We also show how to combine
de-anonymization with link prediction---the latter is required to achieve good
performance on the portion of the test set not de-anonymized---for example by
training the predictor on the de-anonymized portion of the test set, and
combining probabilistic predictions from de-anonymization and link prediction.
|
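The seeding step above relies on simulated annealing for weighted graph matching. The toy sketch below is not the authors' algorithm: it anneals over permutations of a small unweighted graph, scoring a mapping by the number of edge disagreements; the instance size, cooling schedule, and iteration count are illustrative assumptions:

```python
import math
import random

def anneal_match(A, B, iters=20_000, t0=2.0, seed=0):
    """Toy simulated annealing for graph matching: search for a permutation
    p (mapping nodes of A to nodes of B) minimizing the number of edge
    disagreements.  A, B are 0/1 adjacency matrices as lists of lists."""
    rng = random.Random(seed)
    n = len(A)
    p = list(range(n))

    def cost(p):
        return sum(A[i][j] != B[p[i]][p[j]]
                   for i in range(n) for j in range(i + 1, n))

    cur = cost(p)
    best, best_cost = p[:], cur
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-3           # linear cooling schedule
        i, j = rng.randrange(n), rng.randrange(n)
        p[i], p[j] = p[j], p[i]                   # propose a transposition
        new = cost(p)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                             # accept (Metropolis rule)
            if cur < best_cost:
                best, best_cost = p[:], cur
        else:
            p[i], p[j] = p[j], p[i]               # reject: undo the swap

    return best, best_cost

# toy instance: a 6-node path graph and a relabelled copy of it
n = 6
A = [[0] * n for _ in range(n)]
for i in range(n - 1):
    A[i][i + 1] = A[i + 1][i] = 1
q = [2, 0, 5, 1, 3, 4]
B = [[0] * n for _ in range(n)]
for i in range(n):
    for j in range(n):
        B[q[i]][q[j]] = A[i][j]

perm, disagreements = anneal_match(A, B)
```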
1102.4411
|
The AWGN Red Alert Problem
|
cs.IT math.IT
|
Consider the following unequal error protection scenario. One special
message, dubbed the "red alert" message, is required to have an extremely small
probability of missed detection. The remainder of the messages must keep their
average probability of error and probability of false alarm below a certain
threshold. The goal then is to design a codebook that maximizes the error
exponent of the red alert message while ensuring that the average probability
of error and probability of false alarm go to zero as the blocklength goes to
infinity. This red alert exponent has previously been characterized for
discrete memoryless channels. This paper completely characterizes the optimal
red alert exponent for additive white Gaussian noise channels with block power
constraints.
|
1102.4429
|
A Trajectory UML Profile for Modeling Trajectory Data: A Mobile Hospital
Use Case
|
cs.DB
|
A large amount of data resulting from the activities of moving objects is
collected thanks to localization-based services and associated automated
processes. Trajectory data can be used for both transactional and analysis
purposes in various domains (health care, commerce, environment, etc.). For
this reason, modeling trajectory data at the conceptual level is an important
step toward a global vision and successful implementations. However, current
modeling tools fail to fulfill the specific requirements of moving-object
activities. In this paper, we propose a new profile based on UML in order to
enhance the conceptual modeling of trajectory data related to mobile objects
with new stereotypes and icons. As an illustration, we present a mobile
hospital use case.
|
1102.4442
|
Internal Regret with Partial Monitoring. Calibration-Based Optimal
Algorithms
|
cs.LG cs.GT math.OC
|
We provide consistent random algorithms for sequential decision under partial
monitoring, i.e. when the decision maker does not observe the outcomes but
receives instead random feedback signals. Those algorithms have no internal
regret in the sense that, on the set of stages where the decision maker chose
his action according to a given law, the average payoff could not have been
improved on average by using any other fixed law.
They are based on a generalization of calibration, no longer defined in terms
of a Voronoi diagram but in terms of a Laguerre diagram (a more general
concept). This allows us to bound, for the first time in this general
framework, the expected average internal -- as well as the usual external --
regret at stage $n$ by $O(n^{-1/3})$, which is known to be optimal.
|
1102.4498
|
Digraph description of k-interchange technique for optimization over
permutations and adaptive algorithm system
|
cs.DS cs.AI math.OC
|
The paper offers a general overview of the use of element-exchange techniques
for optimization over permutations. A multi-level description of problems is
proposed, which is fundamental to understanding the nature and complexity of
optimization problems over permutations (e.g., ordering, scheduling, the
traveling salesman problem). The description is based on permutation
neighborhoods of several kinds (e.g., by improvement of an objective function).
The proposed operational digraph and its variants can be considered as a way to
understand convexity and polynomial solvability for combinatorial optimization
problems over permutations. Issues of problem analysis and the design of
hierarchical heuristics are discussed. The discussion leads to a multi-level
adaptive algorithm system which analyzes an individual problem and
selects/designs a solving strategy (trajectory).
|
1102.4527
|
Data Separation by Sparse Representations
|
math.NA cs.IT math.IT
|
Recently, sparsity has become a key concept in various areas of applied
mathematics, computer science, and electrical engineering. One application of
this novel methodology is the separation of data, which is composed of two (or
more) morphologically distinct constituents. The key idea is to carefully
select representation systems each providing sparse approximations of one of
the components. Then the sparsest coefficient vector representing the data
within the composed - and therefore highly redundant - representation system is
computed by $\ell_1$ minimization or thresholding. This automatically enforces
separation. This paper shall serve as an introduction to and a survey of
this exciting area of research, as well as a reference for the state of the art
of this research field. It will appear as a chapter in a book on "Compressed
Sensing: Theory and Applications" edited by Yonina Eldar and Gitta Kutyniok.
|
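The separation principle described above can be sketched in a few lines: a signal made of spikes plus a smooth cosine, a dictionary formed by concatenating the identity with an orthonormal DCT basis, and $\ell_1$ minimization performed by iterative soft thresholding (ISTA). The signal, regularization weight, and iteration count are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows are atoms)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C

n = 64
C = dct_matrix(n)                      # C @ C.T == I
spikes = np.zeros(n)
spikes[[10, 30, 50]] = 5.0             # morphological component 1: spikes
smooth = 5.0 * C[4]                    # component 2: one slow cosine atom
y = spikes + smooth

Phi = np.hstack([np.eye(n), C.T])      # union of two orthonormal bases
lam, step = 0.05, 0.5                  # Phi @ Phi.T = 2*I, so step 1/2 is safe
x = np.zeros(2 * n)
for _ in range(2000):                  # ISTA: gradient step + soft threshold
    g = x + step * Phi.T @ (y - Phi @ x)
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

spike_part, dct_part = x[:n], x[n:]    # the two separated coefficient blocks
```

The highly redundant dictionary admits many representations of `y`; the $\ell_1$ penalty picks the sparse one, which automatically assigns each component to the basis that represents it sparsely.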
1102.4528
|
Modelling the Dynamics of the Work-Employment System by Predator-Prey
Interactions
|
cs.CE nlin.AO
|
The broad application range of predator-prey modelling enabled us to apply it
to represent the dynamics of the work-employment system. For the adopted
period, we conclude that this dynamics is chaotic at the beginning of the time
series and tends toward less perturbed states as time goes by, due to public
policies and hidden intrinsic system features. The basic Lotka-Volterra
approach was revised and adapted to the reality of the study. The final aim is
to provide managers with generalized theoretical elements that allow a more
accurate understanding of the behavior of the work-employment system.
|
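The basic Lotka-Volterra system the abstract builds on can be integrated directly; the sketch below uses a fixed-step RK4 scheme with illustrative parameters (the paper's adaptation to work-employment data is not reproduced):

```python
import numpy as np

def lotka_volterra(x0, y0, alpha, beta, delta, gamma, dt=0.001, steps=20_000):
    """Integrate dx/dt = alpha*x - beta*x*y, dy/dt = delta*x*y - gamma*y
    with a fixed-step RK4 scheme; returns the trajectory as a (steps+1, 2)
    array of (prey, predator) values."""
    def f(s):
        x, y = s
        return np.array([alpha * x - beta * x * y,
                         delta * x * y - gamma * y])
    s = np.array([x0, y0], dtype=float)
    traj = [s.copy()]
    for _ in range(steps):
        k1 = f(s)
        k2 = f(s + dt / 2 * k1)
        k3 = f(s + dt / 2 * k2)
        k4 = f(s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(s.copy())
    return np.array(traj)

traj = lotka_volterra(10.0, 5.0, alpha=1.1, beta=0.4, delta=0.1, gamma=0.4)
```

A useful sanity check: the quantity V = delta*x - gamma*ln(x) + beta*y - alpha*ln(y) is conserved along exact orbits, so its numerical drift measures integration error.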
1102.4563
|
Proceedings of the first international workshop on domain-specific
languages for robotic systems (DSLRob 2010)
|
cs.RO cs.PL
|
The First International Workshop on Domain-Specific Languages and models for
ROBotic systems (DSLRob'10) was held at the 2010 IEEE/RSJ International
Conference on Intelligent Robots and Systems (IROS'10), October 2010 in Taipei,
Taiwan.
The main topics of the workshop were domain-specific languages and models. A
domain-specific language (DSL) is a programming language dedicated to a
particular problem domain that offers specific notations and abstractions that
increase programmer productivity within that domain. Models offer a high-level
way for domain users to specify the functionality of their system at the right
level of abstraction. DSLs and models have historically been used for
programming complex systems. However, recently they have garnered interest as a
separate field of study. Robotic systems blend hardware and software in a
holistic way that intrinsically raises many crosscutting concerns (concurrency,
uncertainty, time constraints, ...); as a result, traditional
general-purpose languages often lead to a poor fit between the language
features and the implementation requirements. DSLs and models offer a powerful,
systematic way to overcome this problem, enabling the programmer to quickly and
precisely implement novel software solutions to complex problems within the
robotics domain.
|
1102.4570
|
Coveting thy neighbors' fitness as a means to resolve social dilemmas
|
q-bio.PE cs.SI physics.soc-ph
|
In spatial evolutionary games the fitness of each individual is traditionally
determined by the payoffs it obtains upon playing the game with its neighbors.
Since defection yields the highest individual benefits, the outlook for
cooperators is gloomy. While network reciprocity promotes collaborative
efforts, chances of averting the impending social decline are slim if the
temptation to defect is strong. It is therefore of interest to identify viable
mechanisms that provide additional support for the evolution of cooperation.
Inspired by the fact that the environment may be just as important as
inheritance for individual development, we introduce a simple switch that
allows a player to either keep its original payoff or use the average payoff of
all its neighbors. Depending on which payoff is higher, the influence of either
option can be tuned by means of a single parameter. We show that, in general,
taking into account the environment promotes cooperation. Yet coveting the
fitness of one's neighbors too strongly is not optimal. In fact, cooperation
thrives best only if the influence of payoffs obtained in the traditional way
is equal to that of the average payoff of the neighborhood. We present results
for the prisoner's dilemma and the snowdrift game, for different levels of
uncertainty governing the strategy adoption process, and for different
neighborhood sizes. Our approach outlines a viable route to increased levels of
cooperative behavior in structured populations, but one that requires a
thoughtful implementation.
|
1102.4573
|
The Algebra of Two Dimensional Patterns
|
cs.IT math.IT
|
The article presents an algebra to represent two-dimensional patterns using
reciprocals of polynomials. Such a representation will be useful in neural
network training, and it provides a method of training patterns that is much
more efficient than a pixel-wise representation.
|
1102.4580
|
Gaussian bosonic synergy: quantum communication via realistic channels
of zero quantum capacity
|
quant-ph cs.IT math.IT
|
As with classical information, error-correcting codes enable reliable
transmission of quantum information through noisy or lossy channels. In
contrast to the classical theory, imperfect quantum channels exhibit a strong
kind of synergy: there exist pairs of discrete memoryless quantum channels,
each of zero quantum capacity, which acquire positive quantum capacity when
used together. Here we show that this "superactivation" phenomenon also occurs
in the more realistic setting of optical channels with attenuation and Gaussian
noise. This paves the way for its experimental realization and application in
real-world communications systems.
|
1102.4599
|
Towards Unbiased BFS Sampling
|
cs.SI cs.NI stat.ME
|
Breadth First Search (BFS) is a widely used approach for sampling large
unknown Internet topologies. Its main advantage over random walks and other
exploration techniques is that a BFS sample is a plausible graph on its own,
and therefore we can study its topological characteristics. However, it has
been empirically observed that incomplete BFS is biased toward high-degree
nodes, which may strongly affect the measurements. In this paper, we first
analytically quantify the degree bias of BFS sampling. In particular, we
calculate the node degree distribution expected to be observed by BFS as a
function of the fraction f of covered nodes, in a random graph RG(pk) with an
arbitrary degree distribution pk. We also show that, for RG(pk), all commonly
used graph traversal techniques (BFS, DFS, Forest Fire, Snowball Sampling, RDS)
suffer from exactly the same bias. Next, based on our theoretical analysis, we
propose a practical BFS-bias correction procedure. It takes as input a
collected BFS sample together with its fraction f. Even though RG(pk) does not
capture many graph properties common in real-life graphs (such as
assortativity), our RG(pk)-based correction technique performs well on a broad
range of Internet topologies and on two large BFS samples of Facebook and Orkut
networks. Finally, we consider and evaluate a family of alternative correction
procedures, and demonstrate that, although they are unbiased for an arbitrary
topology, their large variance makes them far less effective than the
RG(pk)-based technique.
|
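The degree bias of incomplete BFS is easy to reproduce on a synthetic graph. The hedged sketch below samples a Chung-Lu random graph with a few heavy nodes and compares the mean degree of an f = 0.2 BFS sample with the population mean; it illustrates the bias only, not the paper's RG(pk) correction procedure, and all sizes and weights are illustrative:

```python
import random
from collections import deque

def chung_lu(weights, seed=0):
    """Chung-Lu random graph: edge (i, j) with prob min(1, w_i*w_j/sum(w))."""
    rng = random.Random(seed)
    n, W = len(weights), sum(weights)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < min(1.0, weights[i] * weights[j] / W):
                adj[i].append(j)
                adj[j].append(i)
    return adj

def bfs_sample(adj, start, f):
    """Collect the first f*n nodes discovered by BFS from `start`."""
    target = int(f * len(adj))
    seen, order, q = {start}, [start], deque([start])
    while q and len(order) < target:
        u = q.popleft()
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                order.append(v)
                q.append(v)
                if len(order) >= target:
                    break
    return order

weights = [20.0] * 50 + [2.0] * 950    # a few hubs, many low-degree nodes
adj = chung_lu(weights)
deg = [len(a) for a in adj]
sample = bfs_sample(adj, start=0, f=0.2)
mean_all = sum(deg) / len(deg)
mean_bfs = sum(deg[u] for u in sample) / len(sample)
```

BFS discovers the hubs early because almost every node is adjacent to one, so the partial sample's mean degree substantially exceeds the true mean, which is exactly the bias the paper quantifies and corrects.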
1102.4612
|
Spatially-Coupled MacKay-Neal Codes and Hsu-Anastasopoulos Codes
|
cs.IT math.IT
|
Kudekar et al. recently proved that for transmission over the binary erasure
channel (BEC), spatial coupling of LDPC codes increases the BP threshold of the
coupled ensemble to the MAP threshold of the underlying LDPC codes. One major
drawback of the capacity-achieving spatially-coupled LDPC codes is that one
needs to increase the column and row weight of parity-check matrices of the
underlying LDPC codes.
It is proved, that Hsu-Anastasopoulos (HA) codes and MacKay-Neal (MN) codes
achieve the capacity of memoryless binary-input symmetric-output channels under
MAP decoding with bounded column and row weight of the parity-check matrices.
The HA codes and the MN codes are dual codes each other.
The aim of this paper is to present an empirical evidence that
spatially-coupled MN (resp. HA) codes with bounded column and row weight
achieve the capacity of the BEC. To this end, we introduce a spatial coupling
scheme of MN (resp. HA) codes. By density evolution analysis, we will show that
the resulting spatially-coupled MN (resp. HA) codes have the BP threshold close
to the Shannon limit.
|
1102.4639
|
Non-Conservative Diffusion and its Application to Social Network
Analysis
|
cs.SI physics.data-an physics.soc-ph
|
The random walk is fundamental to modeling dynamic processes on networks.
Metrics based on the random walk have been used in many applications from image
processing to Web page ranking. However, how appropriate are random walks to
modeling and analyzing social networks? We argue that unlike a random walk,
which conserves the quantity diffusing on a network, many interesting social
phenomena, such as the spread of information or disease on a social network,
are fundamentally non-conservative. When an individual infects her neighbor
with a virus, the total amount of infection increases. We classify diffusion
processes as conservative and non-conservative and show how these differences
impact the choice of metrics used for network analysis, as well as our
understanding of network structure and behavior. We show that Alpha-Centrality,
which mathematically describes non-conservative diffusion, leads to new
insights into the behavior of spreading processes on networks. We give a
scalable approximate algorithm for computing the Alpha-Centrality in a massive
graph. We validate our approach on real-world online social networks of Digg.
We show that a non-conservative metric, such as Alpha-Centrality, produces
better agreement with an empirical measure of influence than conservative metrics,
such as PageRank. We hope that our investigation will inspire further
exploration into the realms of conservative and non-conservative metrics in
social network analysis.
|
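Alpha-Centrality, as defined by Bonacich, solves x = e + alpha * A^T x for an exogenous contribution vector e and alpha below the reciprocal spectral radius of A. The following is a direct dense-solve sketch on a toy digraph (the paper's scalable approximation for massive graphs is not reproduced; the graph and alpha are illustrative):

```python
import numpy as np

def alpha_centrality(A, alpha, e=None):
    """Alpha-Centrality x = e + alpha * A^T @ x, i.e.
    x = (I - alpha * A^T)^{-1} e.  A[i, j] = 1 for an edge i -> j;
    requires alpha < 1/spectral_radius(A) for convergence."""
    n = A.shape[0]
    if e is None:
        e = np.ones(n)                     # uniform exogenous status
    return np.linalg.solve(np.eye(n) - alpha * A.T, e)

# toy directed graph: 0 -> 2, 1 -> 2, 2 -> 3
A = np.zeros((4, 4))
A[0, 2] = A[1, 2] = A[2, 3] = 1.0
x = alpha_centrality(A, alpha=0.3)
```

Because the e term keeps injecting status at every node, total "centrality mass" grows along paths rather than being conserved, which is the non-conservative behaviour contrasted with PageRank above.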
1102.4646
|
Superposition Noisy Network Coding
|
cs.IT math.IT
|
We present a superposition coding scheme for communication over a network,
which combines partial decode-and-forward and noisy network coding. This hybrid
scheme is termed superposition noisy network coding. The scheme is designed
and analyzed for the single relay channel, the single-source multicast network,
and the multiple-source multicast network. The achievable rate region is
determined for each case. The special cases of the Gaussian single relay
channel and the two-way relay channel are analyzed for superposition noisy
network coding. The achievable rate of the proposed scheme is higher than that
of the existing noisy network coding and compress-and-forward schemes.
|
1102.4652
|
Optimal Quantization for Compressive Sensing under Message Passing
Reconstruction
|
cs.IT math.IT
|
We consider the optimal quantization of compressive sensing measurements
following the work on generalization of relaxed belief propagation (BP) for
arbitrary measurement channels. Relaxed BP is an iterative reconstruction
scheme inspired by message passing algorithms on bipartite graphs. Its
asymptotic error performance can be accurately predicted and tracked through
the state evolution formalism. We utilize these results to design mean-square
optimal scalar quantizers for relaxed BP signal reconstruction and empirically
demonstrate the superior error performance of the resulting quantizers.
|
1102.4711
|
Turbo Codes Based on Time-Variant Memory-1 Convolutional Codes over Fq
|
cs.IT math.IT
|
Two classes of turbo codes over high-order finite fields are introduced. The
codes are derived from a particular protograph sub-ensemble of the (dv=2,dc=3)
low-density parity-check code ensemble. A first construction is derived as a
parallel concatenation of two non-binary, time-variant accumulators. The second
construction is based on the serial concatenation of a non-binary, time-variant
differentiator and of a non-binary, time-variant accumulator, and provides a
highly-structured flexible encoding scheme for (dv=2,dc=4) ensemble codes. A
cycle graph representation is provided. The proposed codes can be decoded
efficiently either as low-density parity-check codes (via belief propagation
decoding over the code's bipartite graph) or as turbo codes (via the
forward-backward algorithm applied to the component codes' trellis). The
forward-backward algorithm for symbol maximum a posteriori decoding of the
component codes is illustrated and simplified by means of the fast Fourier
transform. The proposed codes provide remarkable gains (~1 dB) over binary
low-density parity-check and turbo codes in the moderate-to-short block regime.
|
1102.4712
|
Effective protocols for low-distance file synchronization
|
cs.IT cs.CC math.IT
|
Suppose that we have two similar files stored on different computers. We need
to send the file from the first computer to the second one trying to minimize
the number of bits transmitted. This article presents a survey of results known
for this communication complexity problem in the case when files are "similar"
in the sense of Hamming distance. We mainly systematize earlier results
obtained by various authors in the 1990s and 2000s and discuss their connection
with coding theory, hashing algorithms, and other domains of computer science.
In particular cases, we propose some improvements of previous constructions.
|
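One classical construction from this literature can be made concrete: if the two files differ in at most one bit per 7-bit block, the sender can transmit only the 3-bit syndrome of each block under the [7,4] Hamming code, and the receiver localizes and flips the differing bit. The sketch below is a minimal instance of this syndrome approach, not a specific protocol from the survey:

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code: column j is the binary
# representation of j+1, so a weight-1 difference's syndrome spells out
# the position of the differing bit.
H = np.array([[(j + 1) >> b & 1 for j in range(7)] for b in range(3)])

def syndrome(block):
    """3-bit syndrome of a 7-bit block (this is all the sender transmits)."""
    return tuple(H @ block % 2)

def sync_block(remote_syndrome, local_block):
    """Correct the local 7-bit block toward the remote one, assuming the
    two blocks differ in at most one bit position."""
    s = (np.array(remote_syndrome) + H @ local_block) % 2  # H @ (x XOR y)
    if s.any():
        pos = int(s[2]) * 4 + int(s[1]) * 2 + int(s[0]) - 1  # decode position
        local_block = local_block.copy()
        local_block[pos] ^= 1
    return local_block
```

Here 3 bits per block replace 7, and the same idea generalizes: a code correcting t errors yields a protocol for blocks within Hamming distance t, which is the coding-theory connection the survey develops.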
1102.4769
|
Data Base Mappings and Monads: (Co)Induction
|
cs.DB cs.LO math.CT
|
In this paper we present the semantics of database mappings in the
relational DB category based on the power-view monad T and monadic algebras.
The objects in this category are database instances (a database instance is
a set of n-ary relations, i.e., a set of relational tables as in standard
RDBs). The morphisms in the DB category are used to express the semantics
of view-based Global and Local as View (GLAV) mappings between relational
databases, for example those used in data integration systems. Such morphisms
in the DB category are not functions but complex tree structures based on a
set of complex query computations between two database instances. Thus the DB
category, as a base category for the semantics of databases and mappings
between them, differs from the Set category used dominantly for such issues,
and needs a full investigation of its properties. In this paper we present
further contributions to an intensive exploration of the properties and
semantics of this category, based on the power-view monad T and the Kleisli
category for databases. We stress some universal-algebra considerations based
on monads and the relationships between the DB category and the standard Set
category. Finally, we investigate the general algebraic and induction
properties of databases in this category, and define the initial monadic
algebras for database instances.
|
1102.4771
|
Efficient evaluation of polynomials over finite fields
|
cs.IT math.IT math.NT
|
A method is described which allows one to efficiently evaluate a polynomial
in a (possibly trivial) extension of the finite field of its coefficients. Its
complexity is shown to be lower than that of standard techniques when the
degree of the polynomial is large with respect to the base field. Applications
to the syndrome computation in the decoding of cyclic codes, Reed-Solomon codes
in particular, are highlighted.
|
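For reference, the standard technique such methods are compared against is Horner's rule, which over a prime field GF(p) reads as follows; this is the baseline, not the paper's algorithm:

```python
def horner_mod(coeffs, x, p):
    """Evaluate f(x) = sum(coeffs[i] * x**i) over GF(p) with Horner's
    rule: n multiplications and n additions for a degree-n polynomial."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

# f(t) = 3 + 2t + t^3 over GF(7): f(2) = 3 + 4 + 8 = 15, i.e. 1 mod 7
value = horner_mod([3, 2, 0, 1], 2, 7)
```

Syndrome computation in cyclic-code decoding is exactly repeated evaluation of the received polynomial at field elements, which is why beating this baseline at large degree matters there.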
1102.4772
|
Polynomial evaluation over finite fields: new algorithms and complexity
bounds
|
cs.IT math.IT math.NT
|
An efficient evaluation method is described for polynomials in finite fields.
Its complexity is shown to be lower than that of standard techniques when the
degree of the polynomial is large enough. Applications to the syndrome
computation in the decoding of Reed-Solomon codes are highlighted.
|
1102.4773
|
Performance Analysis of 3-Dimensional Turbo Codes
|
cs.IT math.IT
|
In this work, we consider the minimum distance properties and convergence
thresholds of 3-dimensional turbo codes (3D-TCs), recently introduced by Berrou
et al. Here, we consider binary 3D-TCs, while the original work of Berrou et
al. considered double-binary codes. In the first part of the paper, the minimum
distance properties are analyzed from an ensemble perspective, both in the
finite-length regime and in the asymptotic case of large block lengths. In
particular, we analyze the asymptotic weight distribution of 3D-TCs and show
numerically that their typical minimum distance dmin may, depending on the
specific parameters, asymptotically grow linearly with the block length, i.e.,
the 3D-TC ensemble is asymptotically good for some parameters. In the second
part of the paper, we derive some useful upper bounds on the dmin when using
quadratic permutation polynomial (QPP) interleavers with a quadratic inverse.
Furthermore, we give examples of interleaver lengths where an upper bound
appears to be tight. The best codes (in terms of estimated dmin) obtained by
randomly searching for good pairs of QPPs for use in the 3D-TC are compared to
a probabilistic lower bound on the dmin when selecting codes from the 3D-TC
ensemble uniformly at random. This comparison shows that the use of designed
QPP interleavers can improve the dmin significantly. For instance, we have
found a (6144,2040) 3D-TC with an estimated dmin of 147, while the
probabilistic lower bound is 69. Higher rates are obtained by puncturing
nonsystematic bits, and optimized periodic puncturing patterns for rates 1/2,
2/3, and 4/5 are found by computer search. Finally, we give iterative decoding
thresholds, computed from an extrinsic information transfer chart analysis, and
present simulation results on the additive white Gaussian noise channel to
compare the error rate performance to that of conventional turbo codes.
|
1102.4794
|
Information Loss in Static Nonlinearities
|
cs.IT math.IT nlin.SI
|
In this work, conditional entropy is used to quantify the information loss
induced by passing a continuous random variable through a memoryless nonlinear
input-output system. We derive an expression for the information loss depending
on the input density and the nonlinearity and show that the result is strongly
related to the non-injectivity of the considered system. Tight upper bounds are
presented, which can be evaluated with less difficulty than a direct evaluation
of the information loss, which involves the logarithm of a sum. Application of
our results is illustrated on a set of examples.
|
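A discrete analogue of the information loss studied above is easy to compute exactly: for Y = g(X) with discrete X, the loss H(X|Y) equals H(X) - H(Y), and a symmetric input passed through the non-injective square map loses exactly one bit (the sign). The paper treats continuous inputs; this sketch is the discrete toy case only:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Empirical Shannon entropy (bits) of a list of outcomes."""
    counts = Counter(values)
    n = sum(counts.values())
    return -sum(c / n * log2(c / n) for c in counts.values())

# X uniform on {-4,...,-1, 1,...,4}; Y = X^2 merges +-x, erasing the sign
xs = [x for x in range(-4, 5) if x != 0]
ys = [x * x for x in xs]
loss = entropy(xs) - entropy(ys)   # = H(X|Y), since Y is a function of X
```

Each output value has two equiprobable preimages, so the conditional entropy is exactly 1 bit, mirroring the non-injectivity result stated above.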
1102.4803
|
Detection of objects in noisy images and site percolation on square
lattices
|
math.ST cs.CV math.PR stat.AP stat.ME stat.TH
|
We propose a novel probabilistic method for detection of objects in noisy
images. The method uses results from percolation and random graph theories. We
present an algorithm that allows the detection of objects of unknown shapes in
the presence of random noise. Our procedure substantially differs from
wavelet-based algorithms. The algorithm has linear complexity and exponential
accuracy and is appropriate for real-time systems. We prove results on
consistency and algorithmic complexity of our procedure.
|
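The percolation-based detection idea can be sketched on a toy square lattice: threshold the noisy image, then measure the largest 4-connected cluster; below the site-percolation threshold (approximately 0.593) pure-noise clusters stay small, so an anomalously large cluster signals an object. All sizes and probabilities below are illustrative assumptions, not the paper's calibrated procedure:

```python
import random
from collections import deque

def largest_cluster(img):
    """Size of the largest 4-connected cluster of 1-pixels, i.e. the
    largest site-percolation cluster on the square lattice."""
    h, w = len(img), len(img[0])
    seen, best = set(), 0
    for sx in range(h):
        for sy in range(w):
            if img[sx][sy] and (sx, sy) not in seen:
                q, size = deque([(sx, sy)]), 0
                seen.add((sx, sy))
                while q:                      # BFS over one cluster
                    x, y = q.popleft()
                    size += 1
                    for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                        if 0 <= nx < h and 0 <= ny < w and img[nx][ny] \
                                and (nx, ny) not in seen:
                            seen.add((nx, ny))
                            q.append((nx, ny))
                best = max(best, size)
    return best

def noisy_image(with_object, p_noise=0.15, p_obj=0.95, n=40, seed=0):
    """Binary image with subcritical Bernoulli noise and, optionally, a
    densely occupied 10x10 square object."""
    rng = random.Random(seed)
    img = [[1 if rng.random() < p_noise else 0 for _ in range(n)]
           for _ in range(n)]
    if with_object:
        for x in range(5, 15):
            for y in range(5, 15):
                img[x][y] = 1 if rng.random() < p_obj else img[x][y]
    return img

size_with = largest_cluster(noisy_image(True))
size_without = largest_cluster(noisy_image(False))
```

One BFS pass visits each pixel at most once, which is the linear complexity the abstract claims for the overall procedure.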
1102.4807
|
Noisy matrix decomposition via convex relaxation: Optimal rates in high
dimensions
|
stat.ML cs.IT cs.LG math.IT
|
We analyze a class of estimators based on convex relaxation for solving
high-dimensional matrix decomposition problems. The observations are noisy
realizations of a linear transformation $\mathfrak{X}$ of the sum of an
(approximately) low-rank matrix $\Theta^\star$ with a second matrix
$\Gamma^\star$ endowed with a complementary form of low-dimensional structure;
this set-up includes many statistical models of interest, including factor
analysis, multi-task regression, and robust covariance estimation. We derive a
general theorem that bounds the Frobenius norm error for an estimate of the
pair $(\Theta^\star, \Gamma^\star)$ obtained by solving a convex optimization
problem that combines the nuclear norm with a general decomposable regularizer.
Our results utilize a "spikiness" condition that is related to but milder than
singular vector incoherence. We specialize our general result to two cases that
have been studied in past work: low rank plus an entrywise sparse matrix, and
low rank plus a columnwise sparse matrix. For both models, our theory yields
non-asymptotic Frobenius error bounds for both deterministic and stochastic
noise matrices, and applies to matrices $\Theta^\star$ that can be exactly or
approximately low rank, and matrices $\Gamma^\star$ that can be exactly or
approximately sparse. Moreover, for the case of stochastic noise matrices and
the identity observation operator, we establish matching lower bounds on the
minimax error. The sharpness of our predictions is confirmed by numerical
simulations.
|
1102.4810
|
Distributed SNR Estimation using Constant Modulus Signaling over
Gaussian Multiple-Access Channels
|
cs.IT math.IT
|
A sensor network is used for distributed joint mean and variance estimation in
a single time snapshot. Sensors observe a signal embedded in noise; these
observations are phase modulated using a constant-modulus scheme and
transmitted over a Gaussian multiple-access channel to a fusion center, where
the mean and variance are estimated jointly using an asymptotically
minimum-variance estimator, which is shown to decouple into simple individual
estimators of the mean and the variance. The constant-modulus phase modulation
scheme ensures a
fixed transmit power, robust estimation across several sensing noise
distributions, as well as an SNR estimate that requires a single set of
transmissions from the sensors to the fusion center, unlike the
amplify-and-forward approach. The performance of the estimators of the mean and
variance is evaluated in terms of asymptotic variance, which is used to
evaluate the performance of the SNR estimator in the case of Gaussian, Laplace
and Cauchy sensing noise distributions. For each sensing noise distribution,
the optimal phase transmission parameters are also determined. The asymptotic
relative efficiency of the mean and variance estimators is evaluated. It is
shown that among the noise distributions considered, the estimators are
asymptotically efficient only when the noise distribution is Gaussian.
Simulation results corroborate analytical results.
|
1102.4812
|
Octal Bent Generalized Boolean Functions
|
math.CO cs.IT math.IT
|
In this paper we characterize (octal) bent generalized Boolean functions
defined on $\BBZ_2^n$ with values in $\BBZ_8$. Moreover, we propose several
constructions of such generalized bent functions for both $n$ even and $n$ odd.
|
1102.4816
|
Computationally efficient algorithms for statistical image processing.
Implementation in R
|
stat.CO cs.CV stat.AP stat.ME stat.ML
|
In the series of our earlier papers on the subject, we proposed a novel
statistical hypothesis testing method for detection of objects in noisy images.
The method uses results from percolation theory and random graph theory. We
developed algorithms that allowed us to detect objects of unknown shapes in the
presence of nonparametric noise of unknown level and unknown distribution.
No boundary shape constraints were imposed on the objects, only a weak bulk
condition for the object's interior was required. Our algorithms have linear
complexity and exponential accuracy. In the present paper, we describe an
implementation of our nonparametric hypothesis testing method. We provide a
program that can be used for statistical experiments in image processing. This
program is written in the statistical programming language R.
|
1102.4825
|
Computing linear functions by linear coding over networks
|
cs.IT math.AC math.IT
|
We consider the scenario in which a set of sources generate messages in a
network and a receiver node demands an arbitrary linear function of these
messages. We formulate an algebraic test to determine whether an arbitrary
network can compute linear functions using linear codes. We identify a class of
linear functions that can be computed using linear codes in every network that
satisfies a natural cut-based condition. Conversely, for another class of
linear functions, we show that the cut-based condition does not guarantee the
existence of a linear coding solution. For linear functions over the binary
field, the two classes are complements of each other.
|
1102.4865
|
Power-Bandwidth Efficiency and Capacity of Wireless Feedback
Communication Systems
|
cs.IT math.IT
|
The paper is devoted to the analysis of problems arising in the optimisation
and improvement of the power-bandwidth efficiency of digital feedback
communication systems (FCS). It is shown that, unlike digital systems, adaptive
FCS with analogue forward transmission allow full optimisation and derivation
of an optimal transmission-reception algorithm that brings their efficiency
close to the Shannon boundary. Differences between the forward channel capacity
and the capacity of an adaptive FCS as a communication unit, as well as their
influence on the power-bandwidth efficiency of transmission, are considered.
|
1102.4868
|
Verifiable and computable performance analysis of sparsity recovery
|
cs.IT math.IT math.NA
|
In this paper, we develop verifiable and computable performance analysis of
sparsity recovery. We define a family of goodness measures for arbitrary
sensing matrices as a set of optimization problems, and design algorithms with
a theoretical global convergence guarantee to compute these goodness measures.
The proposed algorithms solve a series of second-order cone programs, or linear
programs. As a by-product, we implement an efficient algorithm to verify a
sufficient condition for exact sparsity recovery in the noise-free case. We
derive performance bounds on the recovery errors in terms of these goodness
measures. We also analytically demonstrate that the developed goodness measures
are non-degenerate for a large class of random sensing matrices, as long as the
number of measurements is relatively large. Numerical experiments show that,
compared with the restricted isometry based performance bounds, our error
bounds apply to a wider range of problems and are tighter, when the sparsity
levels of the signals are relatively low.
|
1102.4873
|
Weighted Radial Variation for Node Feature Classification
|
physics.data-an cs.CV
|
Connections created from a node-edge matrix have been traditionally difficult
to visualize and analyze because of the number of flows to be rendered in a
limited feature or cartographic space. Because analyzing connectivity patterns
is useful for understanding the complex dynamics of human and information flow
that connect non-adjacent space, techniques that allow for visual data mining
or static representations of system dynamics are a growing field of research.
Here, we create a Weighted Radial Variation (WRV) technique to classify a set
of nodes based on the configuration of their radially-emanating vector flows.
Each entity's vector is syncopated in terms of cardinality, direction, length,
and flow magnitude. The WRV process unravels each star-like entity's individual
flow vectors on a 0-360{\deg} spectrum, to form a unique signal whose
distribution depends on the flow presence at each step around the entity, and
is further characterized by flow distance and magnitude. The signals are
processed with an unsupervised classification method that clusters entities
with similar signatures in order to provide a typology for each node in the
system of spatial flows. We use a case study of U.S. county-to-county human
incoming and outgoing migration data to test our method.
|
1102.4876
|
Network connectivity during mergers and growth: optimizing the addition
of a module
|
physics.soc-ph cond-mat.dis-nn cs.SI
|
The principal eigenvalue $\lambda$ of a network's adjacency matrix often
determines dynamics on the network (e.g., in synchronization and spreading
processes) and some of its structural properties (e.g., robustness against
failure or attack) and is therefore a good indicator for how ``strongly'' a
network is connected. We study how $\lambda$ is modified by the addition of a
module, or community, which has broad applications, ranging from those
involving a single modification (e.g., introduction of a drug into a biological
process) to those involving repeated additions (e.g., power-grid and transit
development). We describe how to optimally connect the module to the network to
either maximize or minimize the shift in $\lambda$, noting several applications
of directing dynamics on networks.
|
1102.4878
|
Robustness of networks against propagating attacks under vaccination
strategies
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We study the effect of vaccination on the robustness of networks against
propagating attacks that obey the susceptible-infected-removed model. By
extending the generating function formalism developed by Newman (2005), we
analytically determine the robustness of networks as a function of the
vaccination parameters. We consider the random defense where nodes are
vaccinated randomly and the degree-based defense where hubs are preferentially
vaccinated. We show that when vaccines are inefficient, the random graph is
more robust against propagating attacks than the scale-free network. When
vaccines are relatively efficient, the scale-free network with the degree-based
defense is more robust than the random graph with the random defense and the
scale-free network with the random defense.
|
1102.4904
|
Reverse Engineering of Molecular Networks from a Common Combinatorial
Approach
|
q-bio.MN cs.CE q-bio.QM
|
The understanding of molecular cell biology requires insight into the
structure and dynamics of networks that are made up of thousands of interacting
molecules of DNA, RNA, proteins, metabolites, and other components. One of the
central goals of systems biology is the unraveling of the as yet poorly
characterized complex web of interactions among these components. This work is
made harder by the fact that new species and interactions are continuously
discovered in experimental work, necessitating the development of adaptive and
fast algorithms for network construction and updating. Thus, the
"reverse-engineering" of networks from data has emerged as one of the central
concern of systems biology research.
A variety of reverse-engineering methods have been developed, based on tools
from statistics, machine learning, and other mathematical domains. In order to
effectively use these methods, it is essential to develop an understanding of
the fundamental characteristics of these algorithms. With that in mind, this
chapter is dedicated to the reverse-engineering of biological systems.
Specifically, we focus our attention on a particular class of methods for
reverse-engineering, namely those that rely algorithmically upon the so-called
"hitting-set" problem, which is a classical combinatorial and computer science
problem. Each of these methods utilizes a different algorithm in order to
obtain an exact or an approximate solution of the hitting set problem. We will
explore the ultimate impact that the alternative algorithms have on the
inference of published in silico biological networks.
|
1102.4922
|
Counting Solutions of Constraint Satisfiability Problems: Exact Phase
Transitions and Approximate Algorithm
|
cs.AI cs.CC
|
The study of phase transition phenomena of NP-complete problems plays an
important role in understanding the nature of hard problems. In this paper, we
follow this line of research by considering the problem of counting solutions
of Constraint Satisfaction Problems (#CSP). We consider a random model, the RB
model. We prove that the phase transition of #CSP does exist as the number of
variables approaches infinity, and we precisely locate the critical values
where the phase transitions occur. Preliminary experimental results also show
that the critical point coincides with the theoretical derivation. Moreover, we
propose an approximate algorithm to estimate the expected number of solutions
of a given CSP instance of the RB model.
|
1102.4923
|
Further Results on Geometric Properties of a Family of Relative
Entropies
|
cs.IT math.IT
|
This paper extends some geometric properties of a one-parameter family of
relative entropies. These arise as redundancies when cumulants of compressed
lengths are considered instead of expected compressed lengths. These parametric
relative entropies are a generalization of the Kullback-Leibler divergence.
They satisfy the Pythagorean property and behave like squared distances. This
property, which was known for finite alphabet spaces, is now extended to
general measure spaces. Existence of projections onto convex and certain closed
sets is also established. Our results may have applications in the R\'enyi
entropy maximization rule of statistical physics.
|
1102.4924
|
New Worst-Case Upper Bound for #XSAT
|
cs.AI
|
An algorithm running in O(1.1995^n) time is presented for counting models of
exact satisfiability formulae (#XSAT). This is faster than the previously best
algorithm, which runs in O(1.2190^n). To improve the efficiency of the
algorithm, a new principle, the common-literals principle, is introduced to
simplify formulae. This allows us to eliminate more common literals. In
addition, we are the first to inject resolution principles into solving the
#XSAT problem, which further improves the efficiency of the algorithm.
|
1102.4925
|
Worst-Case Upper Bound for (1, 2)-QSAT
|
cs.AI cs.CC
|
A rigorous theoretical analysis of algorithms for a subclass of QSAT, namely
(1, 2)-QSAT, has been proposed in the literature. (1, 2)-QSAT, first
introduced in SAT'08, can be seen as quantified extended 2-CNF formulas. Until
now, to the best of our knowledge, no algorithm with a worst-case upper bound
for (1, 2)-QSAT has been presented. In this paper we therefore present an
exact algorithm to solve (1, 2)-QSAT. By analyzing the algorithm, we obtain a
worst-case upper bound of O(1.4142^m), where m is the number of clauses.
|
1102.4926
|
New Worst-Case Upper Bound for X3SAT
|
cs.AI cs.CC
|
Rigorous theoretical analyses of algorithms for exact 3-satisfiability
(X3SAT) have been proposed in the literature. As far as we know, previous
algorithms for solving X3SAT have been analyzed only with the number of
variables as the parameter. However, the time complexity for solving X3SAT
instances depends not only on the number of variables, but also on the number
of clauses. It is therefore worthwhile to study the time complexity from the
other point of view, i.e. the number of clauses. In this paper, we present
algorithms for solving X3SAT with rigorous complexity analyses using the
number of clauses as the parameter. By analyzing the algorithms, we obtain a
new worst-case upper bound of O(1.15855^m), where m is the number of clauses.
|
1102.4930
|
Short-Message Quantize-Forward Network Coding
|
cs.IT math.IT
|
Recent work for single-relay channels shows that quantize-forward (QF) with
long-message encoding achieves the same reliable rates as compress-forward (CF)
with short-message encoding. It is shown that short-message QF with backward or
pipelined (sliding-window) decoding also achieves the same rates. Similarly,
for many relays and sources, short-message QF with backward decoding achieves
the same rates as long-message QF. Several practical advantages of
short-message encoding are pointed out, e.g., reduced delay and simpler
modulation. Furthermore, short-message encoding lets relays use decode-forward
(DF) if their channel quality is good, thereby enabling multi-input,
multi-output (MIMO) gains that are not possible with long-message encoding.
Finally, one may combine the advantages of long- and short-message encoding by
hashing a long message to short messages.
|
1102.4954
|
Minimizing the sum of many rational functions
|
math.OC cs.SY
|
We consider the problem of globally minimizing the sum of many rational
functions over a given compact semialgebraic set. The number of terms can be
large (10 to 100), the degree of each term should be small (up to 10), and the
number of variables can be large (10 to 100) provided some kind of sparsity is
present. We describe a formulation of the rational optimization problem as a
generalized moment problem and its hierarchy of convex semidefinite
relaxations. Under some conditions we prove that the sequence of optimal values
converges to the globally optimal value. We show how public-domain software can
be used to model and solve such problems.
|
1102.4967
|
Achievable rates for transmission of discrete constellations over the
Gaussian MAC channel
|
cs.IT math.IT
|
In this paper we consider the achievable rate region of the Gaussian Multiple
Access Channel (MAC) when suboptimal transmission schemes are employed.
Focusing on the two-user MAC and assuming uncoded Pulse Amplitude Modulation
(PAM), we derive a rate region that is a pentagon, and propose a strategy with
which it can be achieved. We also compare the region with outer bounds and with
orthogonal transmission.
|
1102.4975
|
Close or connected? Distance and connectivity effects on transport in
networks
|
cond-mat.stat-mech cs.SI physics.soc-ph
|
We develop an analytical approach which provides the dependence of the mean
first-passage time (MFPT) for random walks on complex networks both on the
target connectivity and on the source-target distance. Our approach puts
forward two strongly different behaviors depending on the type, compact or
non-compact, of the random walk. In the case of non-compact exploration, we
show that the MFPT scales linearly with the inverse connectivity of the
target, and is largely independent of the starting point. On the contrary, in
the compact
case the MFPT is controlled by the source-target distance, and we find that
unexpectedly the target connectivity becomes irrelevant for remote targets.
|
1102.5030
|
Demonstration of Spectrum Sensing with Blindly Learned Feature
|
cs.IT math.IT
|
Spectrum sensing is essential in cognitive radio. Defining the leading
\textit{eigenvector} as the feature, we introduce a blind feature learning
algorithm (FLA) and a feature template matching (FTM) algorithm that uses the
learned feature for spectrum sensing. We implement both algorithms on the
Lyrtech software-defined radio platform. A hardware experiment is performed to
verify that the feature can be learned blindly. We compare FTM with a blind
detector in hardware, and the results show that the detection performance of
FTM is about 3 dB better.
|
1102.5046
|
An In-Depth Analysis of Stochastic Kronecker Graphs
|
cs.SI cs.DM physics.soc-ph
|
Graph analysis is playing an increasingly important role in science and
industry. Due to numerous limitations in sharing real-world graphs, models for
generating massive graphs are critical for developing better algorithms. In
this paper, we analyze the stochastic Kronecker graph model (SKG), which is the
foundation of the Graph500 supercomputer benchmark due to its favorable
properties and easy parallelization. Our goal is to provide a deeper
understanding of the parameters and properties of this model so that its
functionality as a benchmark is increased. We develop a rigorous mathematical
analysis that shows this model cannot generate a power-law distribution or even
a lognormal distribution. However, we formalize an enhanced version of the SKG
model that uses random noise for smoothing. We prove both in theory and in
practice that this enhancement leads to a lognormal distribution. Additionally,
we provide a precise analysis of isolated vertices, showing that the graphs
that are produced by SKG might be quite different than intended. For example,
between 50% and 75% of the vertices in the Graph500 benchmarks will be
isolated. Finally, we show that this model tends to produce extremely small
core numbers (compared to most social networks and other real graphs) for
common parameter choices.
|
1102.5063
|
Topology Discovery of Sparse Random Graphs With Few Participants
|
cs.SI physics.soc-ph stat.ME
|
We consider the task of topology discovery of sparse random graphs using
end-to-end random measurements (e.g., delay) between a subset of nodes,
referred to as the participants. The rest of the nodes are hidden, and do not
provide any information for topology discovery. We consider topology discovery
under two routing models: (a) the participants exchange messages along the
shortest paths and obtain end-to-end measurements, and (b) additionally, the
participants exchange messages along the second shortest path. For scenario
(a), our proposed algorithm results in a sub-linear edit-distance guarantee
using a sub-linear number of uniformly selected participants. For scenario (b),
we obtain a much stronger result, and show that we can achieve consistent
reconstruction when a sub-linear number of uniformly selected nodes
participate. This implies that accurate discovery of sparse random graphs is
tractable using an extremely small number of participants. We finally obtain a
lower bound on the number of participants required by any algorithm to
reconstruct the original random graph up to a given edit distance. We also
demonstrate that while consistent discovery is tractable for sparse random
graphs using a small number of participants, in general, there are graphs which
cannot be discovered by any algorithm even with a significant number of
participants, and with the availability of end-to-end information along all the
paths between the participants.
|
1102.5079
|
Measurement Matrix Design for Compressive Sensing Based MIMO Radar
|
cs.IT math.IT
|
In colocated multiple-input multiple-output (MIMO) radar using compressive
sensing (CS), a receive node compresses its received signal via a linear
transformation, referred to as measurement matrix. The samples are subsequently
forwarded to a fusion center, where an L1-optimization problem is formulated
and solved for target information. CS-based MIMO radar exploits the target
sparsity in the angle-Doppler-range space and thus achieves the high
localization performance of traditional MIMO radar but with many fewer
measurements. The measurement matrix is vital for CS recovery performance. This
paper considers the design of measurement matrices that achieve an optimality
criterion that depends on the coherence of the sensing matrix (CSM) and/or
signal-to-interference ratio (SIR). The first approach minimizes a performance
penalty that is a linear combination of CSM and the inverse SIR. The second one
imposes a structure on the measurement matrix and determines the parameters
involved so that the SIR is enhanced. Depending on the transmit waveforms, the
second approach can significantly improve SIR, while maintaining CSM comparable
to that of the Gaussian random measurement matrix (GRMM). Simulations indicate
that the proposed measurement matrices can improve detection accuracy as
compared to a GRMM.
|
1102.5085
|
Robustness and modular structure in networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
Complex networks have recently attracted much interest due to their
prevalence in nature and our daily lives [1, 2]. A critical property of a
network is its resilience to random breakdown and failure [3-6], typically
studied as a percolation problem [7-9] or by modeling cascading failures
[10-12]. Many complex systems, from power grids and the Internet to the brain
and society [13-15], can be modeled using modular networks comprised of small,
densely connected groups of nodes [16, 17]. These modules often overlap, with
network elements belonging to multiple modules [18, 19]. Yet existing work on
robustness has not considered the role of overlapping, modular structure. Here
we study the robustness of these systems to the failure of elements. We show
analytically and empirically that it is possible for the modules themselves to
become uncoupled or non-overlapping well before the network disintegrates. If
overlapping modular organization plays a role in overall functionality,
networks may be far more vulnerable than predicted by conventional percolation
theory.
|
1102.5087
|
Spatially Coupled LDPC Codes for Decode-and-Forward in Erasure Relay
Channel
|
cs.IT math.IT
|
We consider spatially-coupled protograph-based LDPC codes for the
three-terminal erasure relay channel. It is observed that the BP threshold
value, i.e. the maximal erasure probability of the channel for which the
decoding error probability converges to zero, of spatially-coupled codes, in
particular the spatially-coupled MacKay-Neal code, is close to the theoretical
limit for the relay channel. Empirical results suggest that spatially-coupled
protograph-based LDPC codes have great potential to achieve the theoretical
limit of a general relay channel.
|
1102.5112
|
Achievable Rates for Channels with Deletions and Insertions
|
cs.IT math.IT
|
This paper considers a binary channel with deletions and insertions, where
each input bit is transformed in one of the following ways: it is deleted with
probability d, or an extra bit is added after it with probability i, or it is
transmitted unmodified with probability 1-d-i. A computable lower bound on the
capacity of this channel is derived. The transformation of the input sequence
by the channel may be viewed in terms of runs as follows: some runs of the
input sequence get shorter/longer, some runs get deleted, and some new runs are
added. It is difficult for the decoder to synchronize the channel output
sequence to the transmitted codeword mainly due to deleted runs and new
inserted runs.
The main idea is a mutual information decomposition in terms of the rate
achieved by a sub-optimal decoder that determines the positions of the deleted
and inserted runs in addition to decoding the transmitted codeword. The mutual
information between the channel input and output sequences is expressed as the
sum of the rate achieved by this decoder and the rate loss due to its
sub-optimality. Obtaining computable lower bounds on each of these quantities
yields a lower bound on the capacity. The bounds proposed in this paper provide
the first characterization of achievable rates for channels with general
insertions, and for channels with both deletions and insertions. For the
special case of the deletion channel, the proposed bound improves on the
previous best lower bound for deletion probabilities up to 0.3.
|
1102.5126
|
Jump-Diffusion Risk-Sensitive Asset Management II: Jump-Diffusion Factor
Model
|
q-fin.PM cs.SY math.OC q-fin.CP
|
In this article we extend earlier work on the jump-diffusion risk-sensitive
asset management problem [SIAM J. Fin. Math. (2011) 22-54] by allowing jumps in
both the factor process and the asset prices, as well as stochastic volatility
and investment constraints. In this case, the HJB equation is a partial
integro-differential equation (PIDE). By combining viscosity solutions with a
change of notation, a policy improvement argument and classical results on
parabolic PDEs we prove that the HJB PIDE admits a unique smooth solution. A
verification theorem concludes the resolution of this problem.
|
1102.5138
|
Low-Complexity Near-Optimal Codes for Gaussian Relay Networks
|
cs.IT cs.NI math.IT
|
We consider the problem of information flow over Gaussian relay networks.
Similar to the recent work by Avestimehr \emph{et al.} [1], we propose network
codes that achieve up to a constant gap from the capacity of such networks.
However, our proposed codes are also computationally tractable. Our main
technique is to use the codes of Avestimehr \emph{et al.} as inner codes in a
concatenated coding scheme.
|
1102.5185
|
Universal Higher Order Grammar
|
cs.CL cs.AI
|
We examine the class of languages that can be defined entirely in terms of
provability in an extension of the sorted type theory (Ty_n) by embedding the
logic of phonologies, without introduction of special types for syntactic
entities. This class is proven to precisely coincide with the class of
logically closed languages that may be thought of as functions from expressions
to sets of logically equivalent Ty_n terms. For a specific sub-class of
logically closed languages that are described by finite sets of rules or rule
schemata, we find effective procedures for building a compact Ty_n
representation, involving a finite number of axioms or axiom schemata. The
proposed formalism is characterized by some useful features unavailable in a
two-component architecture of a language model. A further specialization and
extension of the formalism with a context type enable effective account of
intensional and dynamic semantics.
|
1102.5190
|
Specifying Data Bases Management Systems by Using RM-ODP Engineering
Language
|
cs.DB
|
Distributed systems can be very large and complex. The various considerations
that influence their design can result in a substantial specification, which
requires a structured framework that has to be managed successfully. The
purpose of RM-ODP is to define such a framework. The Reference Model for Open
Distributed Processing (RM-ODP) provides a framework within which support for
distribution, inter-working and portability can be integrated. It defines an
object model, architectural concepts and an architecture for the development
of ODP systems in terms of five viewpoints, which include an information
viewpoint. Since the usage of database management systems (DBMS) in complex
networks is increasing considerably, we are interested, in our work, in giving
DBMS specifications through the use of the three schemas (static, dynamic,
invariant). The present paper is organized as follows. After a literature
review, we describe the subset of concepts considered in this work, named the
database management system (DBMS) object model. In the third section, we focus
on the engineering language and the DBMS structure, describing essentially
DBMS objects. Finally, we present DBMS engineering specifications and make the
connection between models and their instances. This introduces the basic form
of the semantic approach we describe here.
|
1102.5204
|
Bilayer LDPC Convolutional Codes for Half-Duplex Relay Channels
|
cs.IT math.IT
|
In this paper we present regular bilayer LDPC convolutional codes for
half-duplex relay channels. For the binary erasure relay channel, we prove that
the proposed code construction achieves the capacities for the source-relay
link and the source-destination link provided that the channel conditions are
known when designing the code. Meanwhile, this code enables the highest
transmission rate with decode-and-forward relaying. In addition, its regular
degree distributions can easily be computed from the channel parameters, which
significantly simplifies the code optimization. Numerical results are provided
for both binary erasure channels (BEC) and AWGN channels. In BECs, we can
observe that the gaps between the decoding thresholds and the Shannon limits
are impressively small. In AWGN channels, the bilayer LDPC convolutional code
clearly outperforms its block code counterpart in terms of bit error rate.
|
1102.5220
|
Coexistence of Interacting Opinions in a Generalized Sznajd Model
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
The Sznajd model is a sociophysics model that mimics the propagation of
opinions in a closed society, where the interactions favour groups of agreeing
people. It is based on the Ising and Potts ferromagnetic models and although
the original model used only linear chains, it has since been adapted to
general networks. This model has a very rich transient, that has been used to
model several aspects of elections, but its stationary states are always
consensus states. In order to model more complex behaviours we have, in a
recent work, introduced the idea of biases and prejudices to the Sznajd model,
by generalizing the bounded confidence rule that is common to many continuous
opinion models. In that work we have found that the mean-field version of this
model (corresponding to a complete network) allows for stationary states where
non-interacting opinions survive, but never for the coexistence of interacting
opinions. In the present work, we provide networks that allow for the
coexistence of interacting opinions. Moreover, we show that the model does not
become inactive, that is, the opinions keep changing, even in the stationary
regime. We also provide results that give some insights on how this behaviour
approaches the mean-field behaviour, as the networks are changed.
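As background, the basic Sznajd update on a ring can be sketched in a few lines; the opinion values, ring size, and step count below are illustrative, and the generalization studied in the paper adds bounded-confidence biases on top of this rule:

```python
import random

def sznajd_step(s):
    """One update of the basic Sznajd rule on a ring: a pair of agreeing
    neighbours (i, i+1) imposes its opinion on the outer sites i-1 and i+2."""
    n = len(s)
    i = random.randrange(n)
    j = (i + 1) % n
    if s[i] == s[j]:
        s[(i - 1) % n] = s[i]
        s[(i + 2) % n] = s[i]

random.seed(0)
state = [random.choice([-1, 1]) for _ in range(50)]
for _ in range(20000):
    sznajd_step(state)
# Consensus (all +1 or all -1) is the only absorbing state of this basic rule.
```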
|
1102.5225
|
Let Us Dance Just a Little Bit More --- On the Information Capacity of
the Human Motor System
|
cs.IT cs.HC math.IT physics.bio-ph q-bio.NC
|
Fitts' law is a fundamental tool in measuring the capacity of the human motor
system. However, it is, by definition, limited to aimed movements toward
spatially expanded targets. We revisit its information-theoretic basis with the
goal of generalizing it to unconstrained, trained movements such as dance and
sports. The proposed new measure is based on a subject's ability to accurately
reproduce a complex movement pattern. We demonstrate our framework using
motion-capture data from professional dance performances.
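For reference, Fitts' law relates movement time to target distance D and width W through an index of difficulty measured in bits; a minimal sketch (using the Shannon formulation, with purely illustrative device constants a and b) might look like:

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time in seconds; a and b are illustrative
    device/subject constants normally fitted from experimental data."""
    return a + b * index_of_difficulty(distance, width)

easy = index_of_difficulty(100, 50)  # log2(3), about 1.58 bits
hard = index_of_difficulty(400, 10)  # a farther, smaller target carries more bits
```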
|
1102.5253
|
On the Szeg\"o-Asymptotics for Doubly-Dispersive Gaussian Channels
|
cs.IT math.IT
|
We consider the time-continuous doubly-dispersive channel with additive
Gaussian noise and establish a capacity formula for the case where the channel
correlation operator is represented by a symbol which is periodic in time and
fulfills some further integrability and smoothness conditions. The key to this
result is a new Szeg\"o formula for certain pseudo-differential operators. The
formula justifies the water-filling principle along time and frequency in terms
of the time-continuous time-varying transfer function (the symbol).
|
1102.5275
|
Further Results on Quadratic Permutation Polynomial-Based Interleavers
for Turbo Codes
|
cs.IT math.IT
|
An interleaver is a critical component for the channel coding performance of
turbo codes. Algebraic constructions are of particular interest because they
admit analytical designs and simple, practical hardware implementation. Also,
the recently proposed quadratic permutation polynomial (QPP) based interleavers
by Sun and Takeshita (IEEE Trans. Inf. Theory, Jan. 2005) provide excellent
performance for short-to-medium block lengths, and have been selected for the
3GPP LTE standard. In this work, we derive some upper bounds on the best
achievable minimum distance dmin of QPP-based conventional binary turbo codes
(with tailbiting termination, or dual termination when the interleaver length N
is sufficiently large) that are tight for larger block sizes. In particular, we
show that the minimum distance is at most 2(2^{\nu +1}+9), independent of the
interleaver length, when the QPP has a QPP inverse, where {\nu} is the degree
of the primitive feedback and monic feedforward polynomials. However, allowing
the QPP to have a larger degree inverse may give strictly larger minimum
distances (and lower multiplicities). In particular, we provide several QPPs
with an inverse degree of at least three for some of the 3GPP LTE interleaver
lengths that, with the 3GPP LTE constituent encoders, give a dmin strictly
larger than 50. For instance, we have found a QPP for N=6016 which gives an
estimated dmin of 57. Furthermore, we provide the exact minimum distance and
the corresponding multiplicity for all 3GPP LTE turbo codes (with dual
termination) which shows that the best minimum distance is 51. Finally, we
compute the best achievable minimum distance with QPP interleavers for all 3GPP
LTE interleaver lengths N <= 4096, and compare the minimum distance with the
one we get when using the 3GPP LTE polynomials.
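As an illustration of the QPP construction itself (not of the distance bounds above), the interleaver pi(x) = (f1*x + f2*x^2) mod N can be checked to be a bijection in a few lines; the coefficients below follow the standard sufficient condition (f1 coprime to N, every prime factor of N dividing f2) and are chosen for illustration:

```python
def qpp_interleaver(N, f1, f2):
    """Quadratic permutation polynomial interleaver:
    pi(x) = (f1*x + f2*x^2) mod N for x = 0, ..., N-1."""
    return [(f1 * x + f2 * x * x) % N for x in range(N)]

# N = 40 with (f1, f2) = (3, 10): f1 is coprime to N and every prime
# factor of N (2 and 5) divides f2, so pi is a permutation of 0..N-1.
pi = qpp_interleaver(40, 3, 10)
assert sorted(pi) == list(range(40))
```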
|
1102.5288
|
Sparse Bayesian Methods for Low-Rank Matrix Estimation
|
stat.ML cs.LG cs.SY math.OC stat.AP
|
Recovery of low-rank matrices has recently seen significant activity in many
areas of science and engineering, motivated by recent theoretical results for
exact reconstruction guarantees and interesting practical applications. A
number of methods have been developed for this recovery problem. However, a
principled method for choosing the unknown target rank is generally not
provided. In this paper, we present novel recovery algorithms for estimating
low-rank matrices in matrix completion and robust principal component analysis
based on sparse Bayesian learning (SBL) principles. Starting from a matrix
factorization formulation and enforcing the low-rank constraint in the
estimates as a sparsity constraint, we develop an approach that is very
effective in determining the correct rank while providing high recovery
performance. We provide connections with existing methods in other similar
problems and empirical results and comparisons with current state-of-the-art
methods that illustrate the effectiveness of this approach.
|
1102.5314
|
Jointly Optimal Channel and Power Assignment for Dual-Hop Multi-channel
Multi-user Relaying
|
cs.IT cs.PF math.IT
|
We consider the problem of jointly optimizing channel pairing, channel-user
assignment, and power allocation, to maximize the weighted sum-rate, in a
single-relay cooperative system with multiple channels and multiple users.
Common relaying strategies are considered, and transmission power constraints
are imposed on both individual transmitters and the aggregate over all
transmitters. The joint optimization problem naturally leads to a mixed-integer
program. Despite the general expectation that such problems are intractable, we
construct an efficient algorithm to find an optimal solution, which incurs
computational complexity that is polynomial in the number of channels and the
number of users. We further demonstrate through numerical experiments that the
jointly optimal solution can significantly improve system performance over its
suboptimal alternatives.
|
1102.5335
|
Block Companion Singer Cycles, Primitive Recursive Vector Sequences, and
Coprime Polynomial Pairs over Finite Fields
|
math.CO cs.IT math.IT
|
We discuss a conjecture concerning the enumeration of nonsingular matrices
over a finite field that are block companion and whose order is the maximum
possible in the corresponding general linear group. A special case is proved
using some recent results on the probability that a pair of polynomials with
coefficients in a finite field is coprime. Connections with an older problem of
Niederreiter about the number of splitting subspaces of a given dimension are
outlined, and an asymptotic version of the conjectural formula is established.
Some applications to the enumeration of nonsingular Toeplitz matrices of a
given size over a finite field are also discussed.
|
1102.5337
|
Variable Length Coding over the Two-User Multiple-Access Channel
|
cs.IT math.IT
|
For discrete memoryless multiple-access channels, we propose a general
definition of variable length codes with a measure of the transmission rates at
the receiver side. This gives a receiver perspective on the multiple-access
channel coding problem and allows us to characterize the region of achievable
rates when the receiver is able to decode each transmitted message at a
different instant of time. We show an outer bound on this region and derive a
simple coding scheme that can achieve, in particular settings, all rates within
the region delimited by the outer bound. In addition, we propose a random
variable length coding scheme that achieves the direct part of the block code
capacity region of a multiple-access channel without requiring any agreement
between the transmitters.
|
1102.5357
|
Physical-Layer MIMO Relaying
|
cs.IT math.IT
|
The physical-layer network coding (PNC) approach provides improved
performance in many scenarios over "traditional" relaying techniques or network
coding. This work addresses the generalization of PNC to wireless scenarios
where network nodes have multiple antennas. We use a recent matrix
decomposition which, by linear pre- and post-processing, simultaneously
transforms both channel matrices to triangular forms whose diagonal entries,
corresponding to both channels, are equal. This decomposition, in conjunction
with precoding, makes it possible to convert any two-input multiple-access
channel (MAC) into parallel MACs, over which single-antenna PNC
may be used. The technique is demonstrated using the two-way relay channel with
multiple antennas. For this case it is shown that, in the high signal-to-noise
regime, the scheme approaches the cut-set bound, thus establishing the
asymptotic network capacity.
|
1102.5361
|
Irreversible k-threshold and majority conversion processes on complete
multipartite graphs and graph products
|
math.CO cs.DM cs.SI
|
In graph theoretical models of the spread of disease through populations, the
spread of opinion through social networks, and the spread of faults through
distributed computer networks, vertices are in two states, either black or
white, and these states are dynamically updated at discrete time steps
according to the rules of the particular conversion process used in the model.
This paper considers the irreversible k-threshold and majority conversion
processes. In an irreversible k-threshold (resp., majority) conversion process,
a vertex is permanently colored black in a certain time period if at least k
(resp., at least half) of its neighbors were black in the previous time period.
A k-conversion set (resp., dynamic monopoly) is a set of vertices which, if
initially colored black, will result in all vertices eventually being colored
black under a k-threshold (resp., majority) conversion process. We answer
several open problems by presenting bounds and some exact values of the minimum
number of vertices in k-conversion sets and dynamic monopolies of complete
multipartite graphs, as well as of Cartesian and tensor products of two graphs.
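The conversion process described above is easy to state operationally; a minimal sketch (the graph, seed set, and threshold are chosen purely for illustration):

```python
def k_threshold_spread(adj, black, k):
    """Irreversible k-threshold process: a white vertex turns (permanently)
    black once at least k of its neighbours were black in the previous
    time period. Returns the final set of black vertices."""
    black = set(black)
    while True:
        newly = {v for v in adj
                 if v not in black
                 and sum(1 for u in adj[v] if u in black) >= k}
        if not newly:
            return black
        black |= newly

# On the 4-cycle, the two opposite corners form a 2-conversion set:
C4 = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
assert k_threshold_spread(C4, {0, 2}, 2) == {0, 1, 2, 3}
```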
|
1102.5364
|
On Outage Probability and Diversity-Multiplexing Tradeoff in MIMO Relay
Channels
|
cs.IT math.IT
|
Fading MIMO relay channels are studied analytically, when the source and
destination are equipped with multiple antennas and the relays have a single
one. Compact closed-form expressions are obtained for the outage probability
under i.i.d. and correlated Rayleigh-fading links. Low-outage approximations
are derived, which reveal a number of insights, including the impact of
correlation, of the number of antennas, of relay noise and of relaying
protocol. The effect of correlation is shown to be negligible, unless the
channel becomes almost fully correlated. The SNR loss of relay fading channels
compared to the AWGN channel is quantified. The SNR-asymptotic
diversity-multiplexing tradeoff (DMT) is obtained for a broad class of fading
distributions, including, as special cases, Rayleigh, Rice, Nakagami, Weibull,
which may be non-identical, spatially correlated and/or non-zero mean. The DMT
is shown to depend not on a particular fading distribution, but rather on its
polynomial behavior near zero, and is the same for the simple
"amplify-and-forward" protocol and more complicated "decode-and-forward" one
with capacity achieving codes, i.e. the full processing capability at the relay
does not help to improve the DMT. There is however a significant difference
between the SNR-asymptotic DMT and the finite-SNR outage performance: while the
former is not improved by using an extra antenna on either side, the latter can
be significantly improved and, in particular, an extra antenna can be
traded-off for a full processing capability at the relay. The results are
extended to the multi-relay channels with selection relaying and typical outage
events are identified.
|
1102.5365
|
Diversity-Multiplexing Tradeoff in the Low-SNR Regime
|
cs.IT math.IT
|
An extension of the popular diversity-multiplexing tradeoff framework to the
low-SNR (or wideband) regime is proposed. The concept of diversity gain is
shown to be redundant in this regime since the outage probability is
SNR-independent and depends on the multiplexing gain and the channel power gain
statistics only. The outage probability under the DMT framework is obtained in
an explicit, closed form for a broad class of channels. The low and high-SNR
regime boundaries are explicitly determined for the scalar Rayleigh-fading
channel, indicating a significant limitation of the SNR-asymptotic DMT when the
multiplexing gain is small.
|
1102.5381
|
Blind Adaptive Subcarrier Combining Technique for MC-CDMA Receiver in
Mobile Rayleigh Channel
|
cs.IT math.IT
|
A new subcarrier combining technique is proposed for the MC-CDMA receiver in
mobile Rayleigh fading channels. It exploits the structure formed by repeating
spreading sequences of users on different subcarriers to simultaneously
suppress multiple access interference (MAI) and provide implicit channel
tracking without any knowledge of the channel amplitudes or training sequences.
This is achieved by adaptively weighting each subcarrier in each symbol period
by employing a simple gradient descent algorithm to meet the constant modulus
(CM) criterion with a judicious selection of step size. Improved BER and user
capacity performance are shown, at a similar complexity of order O(N), compared
with conventional maximum ratio combining and equal gain combining techniques,
even under high channel Doppler rates.
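The per-subcarrier weight adaptation can be sketched as a standard constant-modulus gradient step (the step size, dimensions, and fixed channel snapshot below are illustrative; the receiver described above applies such an update on each subcarrier in each symbol period):

```python
import numpy as np

def cma_update(w, x, mu=1e-3):
    """One constant-modulus gradient step: minimise (|y|^2 - 1)^2 for the
    combiner output y = w^H x; mu is a hand-picked step size."""
    y = np.vdot(w, x)          # combiner output for this symbol
    e = abs(y) ** 2 - 1.0      # deviation from the unit modulus
    return w - mu * e * y.conjugate() * x

# Toy demo with a fixed received vector: |w^H x| is driven toward 1.
rng = np.random.default_rng(0)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
for _ in range(5000):
    w = cma_update(w, x)
```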
|
1102.5385
|
Back and Forth Between Rules and SE-Models (Extended Version)
|
cs.AI
|
Rules in logic programming encode information about mutual interdependencies
between literals that is not captured by any of the commonly used semantics.
This information becomes essential as soon as a program needs to be modified or
further manipulated.
We argue that, in these cases, a program should not be viewed solely as the
set of its models. Instead, it should be viewed and manipulated as the set of
sets of models of each rule inside it. With this in mind, we investigate and
highlight relations between the SE-model semantics and individual rules. We
identify a set of representatives of rule equivalence classes induced by
SE-models, and so pinpoint the exact expressivity of this semantics with
respect to a single rule. We also characterise the class of sets of
SE-interpretations representable by a single rule. Finally, we discuss the
introduction of two notions of equivalence, both stronger than strong
equivalence [1] and weaker than strong update equivalence [2], which seem more
suitable whenever the dependency information found in rules is of interest.
|
1102.5386
|
Linear Programming based Detectors for Two-Dimensional Intersymbol
Interference Channels
|
cs.IT math.IT
|
We present and study linear programming based detectors for two-dimensional
intersymbol interference channels. Interesting instances of two-dimensional
intersymbol interference channels are magnetic storage, optical storage and
Wyner's cellular network model.
We show that the optimal maximum a posteriori detection in such channels
lends itself to a natural linear programming based sub-optimal detector. We
call this the Pairwise linear program detector. Our experiments show that the
Pairwise linear program detector performs poorly. We then propose two methods
to strengthen our detector. These detectors are based on systematically
enhancing the Pairwise linear program. The first one, the Block linear program
detector adds higher order potential functions in an {\em exhaustive} manner,
as constraints, to the Pairwise linear program detector. We show by experiments
that the Block linear program detector has performance close to the optimal
detector. We then develop another detector by
{\em adaptively} adding frustrated cycles to the Pairwise linear program
detector. Empirically, this detector also has performance close to the optimal
one and turns out to be less complex than the Block linear program detector.
|
1102.5388
|
Energy Efficiency and Goodput Analysis in Two-Way Wireless Relay
Networks
|
cs.IT math.IT
|
In this paper, we study two-way relay networks (TWRNs) in which two source
nodes exchange their information via a relay node indirectly in Rayleigh fading
channels. Both Amplify-and-Forward (AF) and Decode-and-Forward (DF) techniques
have been analyzed in the TWRN employing a Markov chain model through which the
network operation is described and investigated in depth. Automatic
Repeat-reQuest (ARQ) retransmission has been applied to guarantee the
successful packet delivery. The bit energy consumption and goodput expressions
have been derived as functions of transmission rate in a given AF or DF TWRN.
Numerical results are used to identify the optimal transmission rates where the
bit energy consumption is minimized or the goodput is maximized. The network
performances are compared in terms of energy and transmission efficiency in AF
and DF modes.
|
1102.5389
|
Program-Size Versus Time Complexity, Speed-Up and Slowdown Phenomena in
Small Turing Machines
|
cs.CC cs.IT math.IT
|
The aim of this paper is to undertake an experimental investigation of the
trade-offs between program-size and time computational complexity. The
investigation includes an exhaustive exploration and systematic study of the
functions computed by the set of all 2-color Turing machines with 2, 3 and 4
states--denoted by (n,2) with n the number of states--with particular attention
to the runtimes and space usages when the machines have access to larger
resources (more states). We report that the average runtime of Turing machines
computing a function almost surely increases as a function of the number of
states, indicating that machines not terminating (almost) immediately tend to
occupy all the resources at hand. We calculated all time complexity classes to
which the algorithms computing the functions found in both (2,2) and (3,2)
belong, and made a comparison among these classes. For a selection of
functions the comparison was extended to (4,2). Our study revealed various
structures in the micro-cosmos of small Turing machines. Most notably we
observed "phase-transitions" in the halting-probability distribution that we
explain. Moreover, it is observed that short initial segments fully define a
function computed by a Turing machine.
|
1102.5396
|
Deformed Statistics Free Energy Model for Source Separation using
Unsupervised Learning
|
cond-mat.stat-mech cs.IT cs.LG math.IT
|
A generalized-statistics variational principle for source separation is
formulated by recourse to Tsallis' entropy subjected to the additive duality
and employing constraints described by normal averages. The variational
principle is amalgamated with Hopfield-like learning rules resulting in an
unsupervised learning model. The update rules are formulated with the aid of
q-deformed calculus. Numerical examples exemplify the efficacy of this model.
|
1102.5400
|
Power Allocation for Cognitive Wireless Mesh Networks by Applying
Multi-agent Q-learning Approach
|
cs.IT math.IT
|
As the scarce spectrum resource is becoming over-crowded, cognitive radios
(CRs) indicate great flexibility to improve the spectrum efficiency by
opportunistically accessing the authorized frequency bands. One of the critical
challenges for operating such radios in a network is how to efficiently
allocate transmission powers and frequency resource among the secondary users
(SUs) while satisfying the quality-of-service (QoS) constraints of the primary
users (PUs). In this paper, we focus on the non-cooperative power allocation
problem in cognitive wireless mesh networks (CogMesh) formed by a number of
clusters with the consideration of energy efficiency. Due to the SUs' selfish
and spontaneous properties, the problem is modeled as a stochastic learning
process. We first extend the single-agent Q-learning to a multi-user context,
and then propose a conjecture-based multi-agent Q-learning algorithm to achieve
the optimal transmission strategies with only private and incomplete
information. An intelligent SU performs Q-function updates based on the
conjecture over the other SUs' stochastic behaviors. This learning algorithm
provably converges under certain restrictions that arise during the learning
procedure. Simulation experiments are used to verify the performance of our
algorithm and demonstrate its effectiveness in improving the energy efficiency.
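For context, the single-agent tabular Q-learning update that the paper extends can be sketched as follows (state and action names, rates, and the reward are placeholders; the conjecture-based multi-agent variant replaces the max with an expectation over the conjectured behaviour of the other SUs):

```python
def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    """Standard single-agent Q-learning update on a dict-based table:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_b Q(s',b) - Q(s,a))."""
    best_next = max(Q.get((s_next, b), 0.0) for b in actions)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
actions = ['low_power', 'high_power']  # placeholder action set
q_update(Q, s=0, a='low_power', r=1.0, s_next=0, actions=actions)
# Starting from an all-zero table, one update gives alpha * r = 0.1.
```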
|
1102.5401
|
Minimax state estimation for linear descriptor systems
|
math.OC cs.SY
|
Author's Summary of the dissertation for the degree of the Candidate of
Science (physics and mathematics). The aim of the dissertation is to develop a
generalized Kalman Duality concept applicable for linear unbounded
non-invertible operators and introduce the minimax state estimation theory and
algorithms for linear differential-algebraic equations. In particular, the
dissertation pursues the following goals: - develop generalized duality concept
for the minimax state estimation theory for DAEs with unknown but bounded model
error and random observation noise with unknown but bounded correlation
operator; - derive the minimax state estimation theory for linear DAEs with
unknown but bounded model error and random observation noise with unknown but
bounded correlation operator; - describe how the DAE model propagates uncertain
parameters; - estimate the worst-case error; - construct fast estimation
algorithms in the form of filters; - develop a tool for model validation, that
is to assess how good the model describes observed phenomena.
The dissertation contains the following new results: - generalized version of
the Kalman duality principle is proposed allowing to handle unbounded linear
model operators with non-trivial null-space; - new definitions of the minimax
estimates for DAEs based on the generalized Kalman duality principle are
proposed; - theorems of existence for minimax estimates are proved; - new
minimax state estimation algorithms (in the form of filter and in the
variational form) for DAE are proposed.
|
1102.5407
|
Random Networks with given Rich-club Coefficient
|
physics.soc-ph cs.SI
|
In complex networks it is common to model a network or generate a surrogate
network based on the conservation of the network's degree distribution. We
provide an alternative network model based on the conservation of connection
density within a set of nodes. This density is measured by the rich-club
coefficient. We present a method to generate surrogate networks with a given
rich-club coefficient. We show that by choosing a suitable local linking term,
the generated random networks can reproduce the degree distribution and the
mixing pattern of real networks. The method is easy to implement and produces
good models of real networks.
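The rich-club coefficient used above as the conserved quantity can be computed directly; a minimal sketch on a toy adjacency list (the example graph is illustrative):

```python
def rich_club_coefficient(adj, k):
    """phi(k): density of links among the nodes whose degree exceeds k,
    i.e. 2 * E_k / (N_k * (N_k - 1)) for the induced 'rich' subgraph."""
    rich = {v for v in adj if len(adj[v]) > k}
    n = len(rich)
    if n < 2:
        return 0.0
    # Each undirected link among rich nodes is counted from both endpoints.
    links = sum(1 for v in rich for u in adj[v] if u in rich) / 2
    return 2.0 * links / (n * (n - 1))

# Toy graph: nodes 0, 1, 2 (degree > 1) form a fully connected rich club.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
assert rich_club_coefficient(adj, 1) == 1.0
```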
|
1102.5418
|
Kolmogorov complexity as a language
|
cs.IT math.IT math.LO
|
The notion of Kolmogorov complexity (=the minimal length of a program that
generates some object) is often useful as a kind of language that allows us to
reformulate some notions and therefore provide new intuition. In this survey we
provide (with minimal comments) many different examples where notions and
statements that involve Kolmogorov complexity are compared with their
counterparts not involving complexity.
|
1102.5420
|
On the effect of the path length and transitivity of small-world
networks on epidemic dynamics
|
cs.SI math.DS physics.soc-ph
|
We show how one can trace in a systematic way the coarse-grained solutions of
individual-based stochastic epidemic models evolving on heterogeneous complex
networks with respect to their topological characteristics. In particular, we
have developed algorithms that allow the tuning of the transitivity (clustering
coefficient) and the average path length, allowing the investigation of the
"pure" impacts of the two characteristics on the emergent behavior of detailed
epidemic models. The framework could be used to shed more light into the
influence of weak and strong social ties on epidemic spread within small-world
network structures, and ultimately to provide novel systematic computational
modeling and exploration of better contagion control strategies.
|
1102.5442
|
Blind Adaptive Successive Interference Cancellation for Multicarrier
DS-CDMA
|
cs.IT math.IT
|
A new adaptive receiver design for the Multicarrier (MC) DS-CDMA is proposed
employing successive interference cancellation (SIC) architecture. One of the
main problems limiting the performance of SIC in MC DS-CDMA is the imperfect
estimation of multiple access interference (MAI), and hence, the limited
frequency diversity gain achieved in multipath fading channels. In this paper,
we design a blind adaptive SIC with new multiple access interference
suppression capability implemented within despreading process to improve both
detection and cancellation processes. Furthermore, dynamic scaling factors
derived from the despreader weights are used for interference cancellation
process. This method applied on each subcarrier is followed by maximum ratio or
equal gain combining to fully exploit the frequency diversity inherent in the
multicarrier CDMA systems. It is shown that this way of MAI estimation on
individual subcarrier provides significantly improved performance for a MC
DS-CDMA system compared to that with conventional matched filter (MF) and SIC
techniques at a little added complexity. Performance evaluation under severe
nearfar, fading correlation and system loading conditions are carried out to
affirm the gain of the proposed adaptive receiver design approach.
|
1102.5448
|
Continuous Multiclass Labeling Approaches and Algorithms
|
cs.CV math.OC
|
We study convex relaxations of the image labeling problem on a continuous
domain with regularizers based on metric interaction potentials. The generic
framework ensures existence of minimizers and covers a wide range of
relaxations of the originally combinatorial problem. We focus on two specific
relaxations that differ in flexibility and simplicity -- one can be used to
tightly relax any metric interaction potential, while the other one only covers
Euclidean metrics but requires less computational effort. For solving the
nonsmooth discretized problem, we propose a globally convergent
Douglas-Rachford scheme, and show that a sequence of dual iterates can be
recovered in order to provide a posteriori optimality bounds. In a quantitative
comparison to two other first-order methods, the approach shows competitive
performance on synthetic and real-world images. By combining the method with
an improved binarization technique for nonstandard potentials, we were able to
routinely recover discrete solutions within 1%--5% of the global optimum for
the combinatorial image labeling problem.
|
1102.5451
|
Reduction of fuzzy automata by means of fuzzy quasi-orders
|
cs.FL cs.AI
|
In our recent paper we have established close relationships between state
reduction of a fuzzy recognizer and resolution of a particular system of fuzzy
relation equations. In that paper we have also studied reductions by means of
those solutions which are fuzzy equivalences. In this paper we will see that in
some cases better reductions can be obtained using the solutions of this system
that are fuzzy quasi-orders. Generally, fuzzy quasi-orders and fuzzy
equivalences are equally good in the state reduction, but we show that right
and left invariant fuzzy quasi-orders give better reductions than right and
left invariant fuzzy equivalences. We also show that alternate reductions by
means of fuzzy quasi-orders give better results than alternate reductions by
means of fuzzy equivalences. Furthermore we study a more general type of fuzzy
quasi-orders, weakly right and left invariant ones, and we show that they are
closely related to determinization of fuzzy recognizers. We also demonstrate
some applications of weakly left invariant fuzzy quasi-orders in conflict
analysis of fuzzy discrete event systems.
|
1102.5452
|
Bisimulations for fuzzy automata
|
cs.FL cs.AI
|
Bisimulations have been widely used in many areas of computer science to
model equivalence between various systems, and to reduce the number of states
of these systems, whereas uniform fuzzy relations have recently been introduced
as a means to model the fuzzy equivalence between elements of two possible
different sets. Here we use the conjunction of these two concepts as a powerful
tool in the study of equivalence between fuzzy automata. We prove that a
uniform fuzzy relation between fuzzy automata $\cal A$ and $\cal B$ is a
forward bisimulation if and only if its kernel and co-kernel are forward
bisimulation fuzzy equivalences on $\cal A$ and $\cal B$ and there is a special
isomorphism between factor fuzzy automata with respect to these fuzzy
equivalences. As a consequence we get that fuzzy automata $\cal A$ and $\cal B$
are UFB-equivalent, i.e., there is a uniform forward bisimulation between them,
if and only if there is a special isomorphism between the factor fuzzy automata
of $\cal A$ and $\cal B$ with respect to their greatest forward bisimulation
fuzzy equivalences. This result reduces the problem of testing UFB-equivalence
to the problem of testing isomorphism of fuzzy automata, which is closely
related to the well-known graph isomorphism problem. We prove some similar
results for backward-forward bisimulations, and we point to fundamental
differences. Because of the duality with the studied concepts, backward and
forward-backward bisimulations are not considered separately. Finally, we give
a comprehensive overview of various concepts on deterministic,
nondeterministic, fuzzy, and weighted automata, which are related to
bisimulations.
|
1102.5458
|
Improving Image Search based on User Created Communities
|
cs.IR
|
Tag-based retrieval of multimedia content is a difficult problem, not only
because of the shorter length of tags associated with images and videos, but
also due to mismatch in the terminologies used by searcher and content creator.
To alleviate this problem, we propose a simple concept-driven probabilistic
model for improving text-based rich-media search. While our approach is similar
to existing topic-based retrieval and cluster-based language modeling work,
there are two important differences: (1) our proposed model considers not only
the query-generation likelihood from cluster, but explicitly accounts for the
overall "popularity" of the cluster or underlying concept, and (2) we explore
the possibility of inferring the likely concept relevant to a piece of
rich-media content through the user-created communities to which it belongs.
We implement two methods of concept extraction: a traditional cluster based
approach, and the proposed community based approach. We evaluate these two
techniques for how effectively they capture the intended meaning of a term from
the content creator and searcher, and their overall value in improving image
search. Our results show that concept-driven search, though simple, clearly
outperforms plain search. Among the two techniques for concept-driven search,
community-based approach is more successful, as the concepts generated from
user communities are found to be more intuitive and appealing.
|
1102.5461
|
Distributed Opportunistic Channel Access in Wireless Relay Networks
|
cs.IT math.IT
|
In this paper, the problem of distributed opportunistic channel access in
wireless relaying is investigated. A relay network with multiple
source-destination pairs and multiple relays is considered. All the source
nodes contend through a random access procedure. A winner source node may give
up its transmission opportunity if its link quality is poor. In this research,
we apply the optimal stopping theory to analyze when a winner node should give
up its transmission opportunity. By assuming the winner node has information of
channel gains of links from itself to relays and from relays to its
destination, the existence and uniqueness of an optimal stopping rule are
rigorously proved. It is also found that the optimal stopping rule is a
pure-threshold strategy. The case in which the winner node does not know the
channel gains of the links from the relays to its destination is also studied.
Two stopping problems exist: one in the main layer (for channel access of
source nodes) and the other in the sub-layer (for channel access of relay
nodes). An intuitive stopping rule, in which the sub-layer and the main layer
each maximize their own throughput, is shown to be a semi-pure-threshold
strategy. This intuitive stopping rule turns out to be non-optimal. An optimal
stopping rule is then derived theoretically. Our research reveals that
multi-user (including multi-source and multi-relay) diversity and time
diversity can be fully utilized in a relay network by our proposed strategies.
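The pure-threshold structure of an optimal stopping rule can be illustrated with a much-simplified toy model (not the paper's exact formulation): a node observes i.i.d. channel rates, each extra contention round costs a fixed overhead c, and maximizing the expected net reward gives the Bellman fixed point V = E[max(X, V)] - c, so the node should transmit exactly when the observed rate exceeds V.

```python
# Toy pure-threshold stopping rule: solve V = sum_x p(x) * max(x, V) - cost
# by fixed-point iteration; "transmit iff rate >= V" is then optimal.

def optimal_threshold(rates, probs, cost, iters=200):
    """Fixed-point iteration on V = E[max(X, V)] - cost for a discrete X."""
    v = 0.0
    for _ in range(iters):
        v = sum(p * max(x, v) for x, p in zip(rates, probs)) - cost
    return v

# Invented channel: rate 1, 2 or 5 with equal probability, contention cost 0.5.
rates, probs = [1.0, 2.0, 5.0], [1 / 3, 1 / 3, 1 / 3]
v = optimal_threshold(rates, probs, cost=0.5)
print(round(v, 3))  # 3.5: give up on rates 1 and 2, transmit on rate 5
```

The single number v fully describes the rule, which is what "pure-threshold strategy" means in this setting.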
|
1102.5462
|
Summary Based Structures with Improved Sublinear Recovery for Compressed
Sensing
|
cs.IT math.IT
|
We introduce a new class of measurement matrices for compressed sensing,
using low order summaries over binary sequences of a given length. We prove
recovery guarantees for three reconstruction algorithms using the proposed
measurements, including $\ell_1$ minimization and two combinatorial methods. In
particular, one of the algorithms recovers $k$-sparse vectors of length $N$ in
sublinear time $\text{poly}(k\log{N})$ and requires
$\Omega(k\log{N}\log\log{N})$ measurements. The empirical oversampling
constant of the algorithm is significantly better than that of existing
sublinear recovery algorithms such as Chaining Pursuit and Sudocodes. In
particular, for $10^3\leq N\leq 10^8$ and $k=100$, the oversampling factor is
between 3 and 8. We provide
preliminary insight into how the proposed constructions and the fast recovery
scheme can be used in a number of practical applications, such as market
basket analysis and real-time compressed sensing implementations.
|
1102.5482
|
A Note on the Compaction of Long Training Sequences for Universal
Classification - a Non-Probabilistic Approach
|
cs.IT math.IT
|
One of the central problems in the classification of individual test
sequences (e.g. genetic analysis), is that of checking for the similarity of
sample test sequences as compared with a set of much longer training sequences.
This is done by a set of classifiers for test sequences of length N, where each
of the classifiers is trained by the training sequences so as to minimize the
classification error rate when fed with each of the training sequences.
It should be noted that the storage of long training sequences is considered
a serious bottleneck in next-generation sequencing for genome analysis.
Some popular classification algorithms adopt a probabilistic approach,
assuming that the sequences are realizations of some variable-length Markov
process or a hidden Markov model (HMM), thus enabling the embedding of the
training data into a variable-length suffix tree whose size is usually linear
in $N$, the length of the test sequence.
Although it is not assumed here that the sequences are realizations of
probabilistic processes (an assumption that does not seem to be fully
justified when dealing with biological data), it is demonstrated that
"feature-based" classifiers, in which particular substrings (called "features"
or markers) are sought in a set of "big data" training sequences, may be based
on a universal compaction of the training data contained in a set of $t$
(long) individual training sequences onto a suffix tree with no more than O(N)
leaves, regardless of how long the training sequences are, at only a vanishing
increase in the classification error rate.
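A minimal sketch of the feature-based idea, with invented sequences and a hypothetical feature length L: only the substrings occurring in the length-N test sequence matter, and there are at most N - L + 1 of them, so each long training sequence can be compacted to a summary whose size depends on N alone.

```python
# Toy feature-based classifier: compact each long training sequence to the
# counts of just the test sequence's length-L substrings ("features"),
# a summary of size O(N) regardless of training length.

def features(seq, L):
    return {seq[i:i + L] for i in range(len(seq) - L + 1)}

def compact(training_seq, test_feats, L):
    """O(N)-size summary: counts of the test features in a training sequence."""
    counts = {f: 0 for f in test_feats}
    for i in range(len(training_seq) - L + 1):
        w = training_seq[i:i + L]
        if w in counts:
            counts[w] += 1
    return counts

def score(test_seq, summary, L):
    """Fraction of the test sequence's features present in the training data."""
    feats = features(test_seq, L)
    return sum(1 for f in feats if summary.get(f, 0) > 0) / len(feats)

test = "ACGTACGGA"
feats = features(test, 3)
train_a = "ACGTACGTACGT" * 50   # invented long training sequence, class A
train_b = "TTTTGGGGCCCC" * 50   # invented long training sequence, class B
sa = score(test, compact(train_a, feats, 3), 3)
sb = score(test, compact(train_b, feats, 3), 3)
print(sa > sb)  # True: the test sequence shares far more features with class A
```

Once the summaries are built, the original training sequences can be discarded; classification only consults the compacted feature counts.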
|
1102.5496
|
Efficient regularized isotonic regression with application to gene--gene
interaction search
|
stat.ME cs.SY math.OC stat.AP
|
Isotonic regression is a nonparametric approach, widely studied from both
theoretical and practical perspectives, for fitting monotonic models to data.
However, this approach encounters computational and statistical
overfitting issues in higher dimensions. To address both concerns, we present
an algorithm, which we term Isotonic Recursive Partitioning (IRP), for isotonic
regression based on recursively partitioning the covariate space through
solution of progressively smaller "best cut" subproblems. This creates a
regularized sequence of isotonic models of increasing model complexity that
converges to the global isotonic regression solution. The models along the
sequence are often more accurate than the unregularized isotonic regression
model because of the complexity control they offer. We quantify this complexity
control through estimation of degrees of freedom along the path. The success
of the regularized models in prediction and IRP's favorable computational
properties are demonstrated through a series of simulated and real-data
experiments. We
discuss application of IRP to the problem of searching for gene--gene
interactions and epistasis, and demonstrate it on data from genome-wide
association studies of three common diseases.
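In one dimension, the global isotonic regression solution that IRP converges to is given by the classic pool-adjacent-violators algorithm; a minimal sketch of that baseline (not the IRP algorithm itself):

```python
# Pool adjacent violators (PAV): the least-squares monotone-nondecreasing fit.
# Adjacent blocks whose means violate monotonicity are merged and replaced by
# their pooled mean.

def isotonic_fit(y):
    """Least-squares monotone-nondecreasing fit to the sequence y."""
    blocks = [[v, 1] for v in y]                 # [block mean, block size]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0]:      # violation: pool the blocks
            m1, n1 = blocks[i]
            m2, n2 = blocks[i + 1]
            blocks[i:i + 2] = [[(m1 * n1 + m2 * n2) / (n1 + n2), n1 + n2]]
            i = max(i - 1, 0)                    # pooling may violate backwards
        else:
            i += 1
    fit = []
    for mean, size in blocks:
        fit.extend([mean] * size)
    return fit

print(isotonic_fit([1.0, 3.0, 2.0, 4.0]))  # [1.0, 2.5, 2.5, 4.0]
```

The out-of-order pair (3, 2) is pooled to its mean 2.5, giving the closest monotone sequence in the least-squares sense.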
|
1102.5499
|
Information filtering via preferential diffusion
|
physics.data-an cs.IR
|
Recommender systems have shown great potential to address the information
overload problem, namely to help users find interesting and relevant
objects within a huge information space. Some physical dynamics, including heat
conduction process and mass or energy diffusion on networks, have recently
found applications in personalized recommendation. Most previous studies
focus overwhelmingly on recommendation accuracy as the only important factor,
while overlooking the significance of diversity and novelty, which indeed
provide the vitality of the system. In this paper, we propose a recommendation
algorithm based on a preferential diffusion process on the user-object
bipartite network. Numerical analyses on two benchmark datasets, MovieLens and
Netflix,
indicate that our method outperforms the state-of-the-art methods.
Specifically, it can not only provide more accurate recommendations, but also
generate more diverse and novel recommendations by accurately recommending
unpopular objects.
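A toy sketch of diffusion-based recommendation on a user-object bipartite network. The exponent `eps` below is an assumed stand-in for the preferential tuning (with eps = 0 this reduces to plain mass diffusion), and all the data are invented:

```python
# Mass diffusion on a bipartite network: unit resource on the target user's
# objects spreads object -> user -> object; uncollected objects are ranked by
# the resource they receive.  eps > 0 biases redistribution toward low-degree
# (unpopular) objects.

def recommend(adj, target, eps=0.0):
    """adj: dict user -> set of collected objects; returns ranked new objects."""
    objects = sorted({o for objs in adj.values() for o in objs})
    obj_deg = {o: sum(o in adj[u] for u in adj) for o in objects}
    # Step 1: unit resource on each object the target user has collected.
    res_obj = {o: float(o in adj[target]) for o in objects}
    # Step 2: objects spread resource equally to the users who collected them.
    res_user = {u: sum(res_obj[o] / obj_deg[o] for o in adj[u]) for u in adj}
    # Step 3: users redistribute, biased toward low-degree objects when eps > 0.
    final = {o: 0.0 for o in objects}
    for u in adj:
        norm = sum(obj_deg[o] ** (-eps) for o in adj[u])
        for o in adj[u]:
            final[o] += res_user[u] * obj_deg[o] ** (-eps) / norm
    new = [o for o in objects if o not in adj[target]]
    return sorted(new, key=lambda o: final[o], reverse=True)

adj = {"alice": {"A", "B"}, "bob": {"B", "C"}, "carol": {"A", "C", "D"}}
print(recommend(adj, "alice"))  # ['C', 'D']
```

Raising eps shifts resource toward objects with few collectors, which is how this family of methods trades a little accuracy for diversity and novelty.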
|
1102.5509
|
Probabilistic analysis of the human transcriptome with side information
|
stat.ML cs.CE q-bio.GN q-bio.MN q-bio.QM stat.AP stat.ME
|
Understanding functional organization of genetic information is a major
challenge in modern biology. Following the initial publication of the human
genome sequence in 2001, advances in high-throughput measurement technologies
and efficient sharing of research material through community databases have
opened up new views to the study of living organisms and the structure of life.
In this thesis, novel computational strategies have been developed to
investigate a key functional layer of genetic information, the human
transcriptome, which regulates the function of living cells through protein
synthesis. The key contributions of the thesis are general exploratory tools
for high-throughput data analysis that have provided new insights into
cell-biological networks, cancer mechanisms, and other aspects of genome
function.
A central challenge in functional genomics is that high-dimensional genomic
observations are associated with high levels of complex and largely unknown
sources of variation. By combining statistical evidence across multiple
measurement sources with the wealth of background information in genomic data
repositories, it has been possible to resolve some of the uncertainties
associated with individual observations and to identify functional mechanisms
that could not be detected from individual measurement sources. Statistical
learning
and probabilistic models provide a natural framework for such modeling tasks.
Open source implementations of the key methodological contributions have been
released to facilitate further adoption of the developed methods by the
research community.
|
1102.5511
|
A Fast Algorithm for the Discrete Core/Periphery Bipartitioning Problem
|
physics.soc-ph cs.DS cs.SI
|
Various methods have been proposed in the literature to determine an optimal
partitioning of the set of actors in a network into core and periphery subsets.
However, these methods either work only for relatively small input sizes, or do
not guarantee an optimal answer. In this paper, we propose a new algorithm to
solve this problem. This algorithm is efficient and exact, allowing the optimal
partitioning for networks of several thousand actors to be computed in under a
second. We also show that the optimal core can be characterized as a set
containing the actors with the highest degrees in the original network.
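The degree characterization suggests a simple brute-force baseline (not the paper's fast algorithm): sort actors by degree and score every prefix against the ideal discrete core/periphery pattern, in which a tie is present exactly when a core node is involved. A toy sketch with invented data:

```python
# Brute-force discrete core/periphery bipartitioning over degree-ordered
# prefixes: score each candidate core by how many actor pairs match the ideal
# pattern (tie present iff at least one endpoint is in the core).
from itertools import combinations

def best_core(edges, nodes):
    """Return the degree-ordered prefix that best matches the ideal pattern."""
    deg = {n: 0 for n in nodes}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    order = sorted(nodes, key=lambda n: deg[n], reverse=True)
    edge_set = {frozenset(e) for e in edges}
    best_set, best_score = set(), -1
    for k in range(1, len(nodes) + 1):
        core = set(order[:k])
        score = 0
        for u, v in combinations(nodes, 2):
            present = frozenset((u, v)) in edge_set
            ideal = (u in core) or (v in core)   # ideal core/periphery tie
            score += (present == ideal)
        if score > best_score:
            best_set, best_score = core, score
    return best_set

nodes = ["a", "b", "c", "d"]
edges = [("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"), ("b", "d")]
print(sorted(best_core(edges, nodes)))  # ['a', 'b']
```

Because only degree-ordered prefixes need to be considered, there are just n candidate cores rather than 2^n; the paper's contribution is evaluating them far faster than this quadratic scoring loop.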
|
1102.5535
|
Full Rate Collaborative Diversity Scheme for Multiple Access Fading
Channels
|
cs.IT math.IT
|
User cooperation is a well-known approach to achieving diversity without
multiple antennas, albeit at the cost of an inevitable loss of rate, mostly
due to the need for additional channels for relaying. A new collaborative
diversity scheme is proposed here for multiple access fading channels to
attain full rate
with near maximum diversity. This is achieved by allowing two users and their
corresponding relays to transmit/forward data on the same channel by exploiting
unique spatial signatures of their fading channels. The base station jointly
detects the co-channel users' data using a maximum-likelihood search over a
small set of possible data combinations. Full data rate is achieved, with a
significant diversity gain close to that of the two-antenna Alamouti scheme.
|
1102.5549
|
Instant Replay: Investigating Statistical Analysis in Sports
|
stat.AP cs.AI physics.data-an stat.ML
|
Technology has had an unquestionable impact on the way people watch sports.
Along with this technological evolution has come a higher standard to ensure a
good viewing experience for the casual sports fan. It can be argued that the
pervasiveness of statistical analysis in sports serves to satiate the fan's
desire
for detailed sports statistics. The goal of statistical analysis in sports is a
simple one: to eliminate subjective analysis. In this paper, we review previous
work that attempts to analyze various aspects in sports by using ideas from
Markov Chains, Bayesian Inference and Markov Chain Monte Carlo (MCMC) methods.
The unifying goal of these works is to achieve an accurate representation of
the player's ability, the sport, or the environmental effects on the player's
performance. With the prevalence of cheap computation, it is possible that
using techniques from Artificial Intelligence could improve the results of
statistical analysis in sports. This is best illustrated when evaluating
football using Neuro-Dynamic Programming, a Control Theory paradigm heavily
based on the theory of stochastic processes. The results from this method
suggest that statistical analysis in sports may benefit from ideas from the
areas of Control Theory or Machine Learning.
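As a toy illustration of the Markov chain viewpoint mentioned above (all states and transition probabilities are invented):

```python
# Model a possession as a Markov chain and compute the probability of
# eventually scoring from each state via the fixed point
# p(s) = sum_s' P(s -> s') * p(s'), with absorbing states goal/turnover.

trans = {
    "own_half": {"opp_half": 0.5, "own_half": 0.3, "turnover": 0.2},
    "opp_half": {"shot": 0.4, "own_half": 0.3, "turnover": 0.3},
    "shot":     {"goal": 0.3, "turnover": 0.7},
}
p = {"goal": 1.0, "turnover": 0.0, "own_half": 0.0, "opp_half": 0.0, "shot": 0.0}
for _ in range(500):                 # iterate to the fixed point
    for s, nxt in trans.items():
        p[s] = sum(q * p[t] for t, q in nxt.items())
print(round(p["own_half"], 3))       # chance of eventually scoring from own half
```

Fitting such transition probabilities from play-by-play data, rather than inventing them, is where Bayesian inference and MCMC enter the works reviewed here.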
|