| id | title | categories | abstract |
|---|---|---|---|
1212.5877 | Blinking Molecule Tracking | cs.CV cs.DM | We discuss a method for tracking individual molecules which globally
optimizes the likelihood of the connections between molecule positions, quickly
and with high reliability, even for high spot densities and blinking molecules. Our
method works with cost functions which can be freely chosen to combine costs
for distances between spots in space and time and which can account for the
reliability of positioning a molecule. To this end, we describe a top-down
polyhedral approach to the problem of tracking many individual molecules. This
immediately yields an effective implementation using standard linear
programming solvers. Our method can be applied to 2D and 3D tracking.
|
1212.5882 | The Kernel-SME Filter for Multiple Target Tracking | cs.SY | We present a novel method called Kernel-SME filter for tracking multiple
targets when the association of the measurements to the targets is unknown. The
method is a further development of the Symmetric Measurement Equation (SME)
filter, which removes the data association uncertainty of the original
measurement equation with the help of a symmetric transformation. The
underlying idea of the Kernel-SME filter is to construct a symmetric
transformation by means of mapping the measurements to a Gaussian mixture. This
transformation is scalable to a large number of targets and allows for deriving
a Gaussian state estimator that has a cubic time complexity in the number of
targets.
|
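As a toy illustration of the symmetric-transformation idea behind the Kernel-SME filter described above, the sketch below maps a set of scalar measurements to a Gaussian mixture evaluated on a fixed grid; the grid, kernel width, and scalar setting are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def sme_transform(measurements, grid, sigma=1.0):
    """Map a set of measurements to a Gaussian mixture evaluated on fixed
    grid points: the result is invariant to the ordering of the measurements,
    which removes the data association ambiguity."""
    m = np.asarray(measurements)[:, None]      # shape (n_measurements, 1)
    g = np.asarray(grid)[None, :]              # shape (1, n_grid_points)
    return np.exp(-(m - g) ** 2 / (2 * sigma ** 2)).sum(axis=0)

grid = np.linspace(-5, 5, 7)
a = sme_transform([1.0, -2.0, 3.0], grid)
b = sme_transform([3.0, 1.0, -2.0], grid)      # same measurements, permuted
```

Because the mixture sums over measurements, any permutation of them yields the same transformed vector, which is the symmetry the filter exploits.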
1212.5921 | Distributed optimization of deeply nested systems | cs.LG cs.NE math.OC stat.ML | In science and engineering, intelligent processing of complex signals such as
images, sound or language is often performed by a parameterized hierarchy of
nonlinear processing layers, sometimes biologically inspired. Hierarchical
systems (or, more generally, nested systems) offer a way to generate complex
mappings using simple stages. Each layer performs a different operation and
achieves an ever more sophisticated representation of the input, as, for
example, in a deep artificial neural network, an object recognition cascade in
computer vision, or a speech processing front-end. Joint estimation of the
parameters of all the layers and selection of an optimal architecture is widely
considered to be a difficult numerical nonconvex optimization problem,
difficult to parallelize for execution in a distributed computation
environment, and requiring significant human expert effort, which leads to
suboptimal systems in practice. We describe a general mathematical strategy to
learn the parameters and, to some extent, the architecture of nested systems,
called the method of auxiliary coordinates (MAC). This replaces the original
problem involving a deeply nested function with a constrained problem involving
a different function in an augmented space without nesting. The constrained
problem may be solved with penalty-based methods using alternating optimization
over the parameters and the auxiliary coordinates. MAC has provable
convergence, is easy to implement reusing existing algorithms for single
layers, can be parallelized trivially and massively, applies even when
parameter derivatives are not available or not desirable, and is competitive
with state-of-the-art nonlinear optimizers even in the serial computation
setting, often providing reasonable models within a few iterations.
|
1212.5932 | Fully scalable online-preprocessing algorithm for short oligonucleotide
microarray atlases | q-bio.QM cs.CE cs.LG q-bio.GN stat.AP stat.ML | Accumulation of standardized data collections is opening up novel
opportunities for holistic characterization of genome function. The limited
scalability of current preprocessing techniques has, however, formed a
bottleneck for full utilization of contemporary microarray collections. While
short oligonucleotide arrays constitute a major source of genome-wide profiling
data, scalable probe-level preprocessing algorithms have been available only
for a few measurement platforms, based on pre-calculated model parameters from
restricted reference training sets. To overcome these key limitations, we
introduce a fully scalable online-learning algorithm that provides tools to
process large microarray atlases including tens of thousands of arrays. Unlike
the alternatives, the proposed algorithm scales up in linear time with respect
to sample size and is readily applicable to all short oligonucleotide
platforms. This is the only available preprocessing algorithm that can learn
probe-level parameters based on sequential hyperparameter updates at small,
consecutive batches of data, thus circumventing the extensive memory
requirements of the standard approaches and opening up novel opportunities to
take full advantage of contemporary microarray data collections. Moreover,
using the most comprehensive data collections to estimate probe-level effects
can assist in pinpointing individual probes affected by various biases and
provide new tools to guide array design and quality control. The implementation
is freely available in R/Bioconductor at
http://www.bioconductor.org/packages/devel/bioc/html/RPA.html
|
1212.5943 | Modeling page-view dynamics on Wikipedia | cs.CY cs.SI physics.data-an physics.soc-ph | We introduce a model for predicting page-view dynamics of promoted content.
The regularity of the content promotion process on Wikipedia provides excellent
experimental conditions which favour detailed modelling. We show that the
popularity of an article featured on Wikipedia's main page decays exponentially
in time if the circadian cycles of the users are taken into account. Our model
can be explained as the result of individual Poisson processes and is validated
through empirical measurements. It provides a simpler explanation for the
evolution of content popularity than previous studies.
|
1212.5969 | The Strength of Varying Tie Strength | physics.soc-ph cs.SI | The "Strength of Weak Ties" argument (Granovetter 1973) says that the most
valuable information is best collected through bridging ties with social
circles other than one's own, and that those ties tend to be weak. Aral and Van
Alstyne (2011) added that to access complex information, actors instead need
strong ties ("high bandwidth"). I integrate and generalize these insights by
pointing at actors' benefits and costs. Weak ties are well-suited for
relatively simple information at low costs, whereas for complex information,
the best outcomes are expected for those actors who vary their bandwidths along
with the value of the information accessed. To support my claim I use all patents
in the USA (two million) over the period 1975-1999.
|
1212.5981 | Core organization of directed complex networks | cond-mat.dis-nn cs.SI math-ph math.MP physics.soc-ph | The recursive removal of leaves (dead end vertices) and their neighbors from
an undirected network results, when this pruning algorithm stops, in a
so-called core of the network. This specific subgraph should be distinguished
from $k$-cores, which are principally different subgraphs in networks. If the
vertex mean degree of a network is sufficiently large, the core is a giant
cluster containing a finite fraction of vertices. We find that generalization
of this pruning algorithm to directed networks provides a significantly more
complex picture of cores. By implementing a rate equation approach to this
pruning procedure for directed uncorrelated networks, we identify a set of
cores progressively embedded into each other in a network and describe their
birth points and structure.
|
1212.6009 | Distributed Sparse Signal Recovery For Sensor Networks | cs.IT math.IT | We propose a distributed algorithm for sparse signal recovery in sensor
networks based on Iterative Hard Thresholding (IHT). Every agent has a set of
measurements of a signal x, and the objective is for the agents to recover x
from their collective measurements at a minimal communication cost and with low
computational complexity. A naive distributed implementation of IHT would
require global communication of every agent's full state in each iteration. We
find that we can dramatically reduce this communication cost by leveraging
solutions to the distributed top-K problem in the database literature.
Evaluations show that our algorithm requires up to three orders of magnitude
less total bandwidth than the best-known distributed basis pursuit method.
|
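The naive (centralized) Iterative Hard Thresholding loop that the abstract above starts from can be sketched as follows; the Gaussian measurement matrix, step size, and sparsity level are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def iht(y, A, k, n_iter=100):
    """Iterative Hard Thresholding: gradient step on ||y - Ax||^2,
    then keep only the k largest-magnitude entries."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring monotone descent
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + mu * A.T @ (y - A @ x)     # gradient step
        keep = np.argsort(np.abs(x))[-k:]  # support of the k largest entries
        mask = np.zeros_like(x)
        mask[keep] = 1.0
        x *= mask                          # hard threshold
    return x

# Toy example: a 3-sparse signal measured by a random Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true
x_hat = iht(y, A, k=3)
```

In a sensor network, each agent would hold only a slice of A and y; the paper's contribution lies in reducing the communication needed for the top-k selection step across agents, which this centralized sketch performs locally with `argsort`.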
1212.6018 | Exponentially Weighted Moving Average Charts for Detecting Concept Drift | stat.ML cs.LG stat.AP | Classifying streaming data requires the development of methods which are
computationally efficient and able to cope with changes in the underlying
distribution of the stream, a phenomenon known in the literature as concept
drift. We propose a new method for detecting concept drift which uses an
Exponentially Weighted Moving Average (EWMA) chart to monitor the
misclassification rate of a streaming classifier. Our approach is modular and
can hence be run in parallel with any underlying classifier to provide an
additional layer of concept drift detection. Moreover, our method is
computationally efficient, with O(1) overhead, and works in a fully online manner
with no need to store data points in memory. Unlike many existing approaches to
concept drift detection, our method allows the rate of false positive
detections to be controlled and kept constant over time.
|
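A minimal sketch of an EWMA chart monitoring a classifier's 0/1 error stream, in the spirit of the abstract above; the smoothing factor, control limit, and drift rule here are illustrative choices, not the paper's calibrated ones:

```python
def make_ewma_detector(lam=0.1, L=3.0):
    """EWMA chart on a Bernoulli error stream: Z_t = (1-lam)*Z_{t-1} + lam*X_t.
    Flags drift when Z_t exceeds the running error-rate estimate by L sigmas."""
    state = {"z": 0.0, "p": 0.0, "n": 0}

    def update(error):  # error is 0 (correct) or 1 (misclassified)
        s = state
        s["n"] += 1
        s["p"] += (error - s["p"]) / s["n"]          # running mean error rate
        s["z"] = (1 - lam) * s["z"] + lam * error    # EWMA of the error stream
        # steady-state EWMA standard deviation for a Bernoulli(p) stream
        sigma = (lam / (2 - lam) * s["p"] * (1 - s["p"])) ** 0.5
        return s["z"] > s["p"] + L * sigma           # True => drift flagged
    return update

detect = make_ewma_detector()
stream = [0] * 200 + [1] * 50   # error rate jumps: simulated concept drift
flags = [detect(x) for x in stream]
```

The detector stores only three scalars, which illustrates the O(1) overhead and fully online operation claimed in the abstract.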
1212.6027 | Belief propagation for optimal edge cover in the random complete graph | math.PR cs.DM cs.IT math.IT | We apply the objective method of Aldous to the problem of finding the
minimum-cost edge cover of the complete graph with random independent and
identically distributed edge costs. The limit, as the number of vertices goes
to infinity, of the expected minimum cost for this problem is known via a
combinatorial approach of Hessler and W\"{a}stlund. We provide a proof of this
result using the machinery of the objective method and local weak convergence,
which was used to prove the $\zeta(2)$ limit of the random assignment problem.
A proof via the objective method is useful because it provides us with more
information on the nature of the edges incident on a typical root in the
minimum-cost edge cover. We further show that a belief propagation algorithm
converges asymptotically to the optimal solution. This can be applied in a
computational linguistics problem of semantic projection. The belief
propagation algorithm yields a near-optimal solution with lower complexity
than the best known algorithms designed for optimality in worst-case settings.
|
1212.6030 | Bounds on the state vector growth rate in stochastic dynamical systems | math.OC cs.SY | A stochastic dynamical system represented through a linear vector equation in
idempotent algebra is considered. We propose simple bounds on the mean growth
rate of the system state vector, and give an analysis of the absolute error of
the bounds. As an illustration, numerical results of evaluating the bounds for a
test system are also presented.
|
1212.6031 | Tangent Bundle Manifold Learning via Grassmann&Stiefel Eigenmaps | cs.LG | One of the ultimate goals of Manifold Learning (ML) is to reconstruct an
unknown nonlinear low-dimensional manifold embedded in a high-dimensional
observation space by a given set of data points from the manifold. We derive a
local lower bound for the maximum reconstruction error in a small neighborhood
of an arbitrary point. The lower bound is defined in terms of the distance
between tangent spaces to the original manifold and the estimated manifold at
the considered point and the reconstructed point, respectively. We propose an
extension of ML, called Tangent Bundle ML, in which proximity is required not
only between the original manifold and its estimator but also between their
tangent spaces. We present a new algorithm that solves this problem and also
yields a new solution to the standard ML problem.
|
1212.6050 | Applying Social Network Analysis to Analyze a Web-Based Community | cs.SI cs.CY | This paper deals with a well-known website (Book-Crossing) from
two angles. The first angle focuses on the direct relations between users and
books. Many things can be inferred from this part of the analysis, such as who is
more interested in book reading than others and why? Which books are most
popular and which users are most active and why? The task requires the use of
certain social network analysis measures (e.g. degree centrality). What does it
mean when two users like the same book? Is it the same when other two users
have one thousand books in common? Who is more likely to be a friend of whom
and why? Are there specific people in the community who are more qualified to
establish large circles of social relations? These questions (and others) are
answered through the second part of the analysis, which probes the potential
social relations between users in this community.
Although these relationships do not exist explicitly, they can be inferred with
the help of affiliation network analysis and techniques such as m-slice.
|
1212.6051 | Automatic approach for generating ETL operators | cs.DB | This article addresses the generation of ETL
(Extract-Transform-Load) operators for supplying a Data Warehouse from a
relational data source. As a first step, we add new rules to those proposed by
the authors of [1]; these rules deal with the combination of ETL operators. In
a second step, we propose an automatic approach based on model transformations
to generate the ETL operations needed for loading a data warehouse. This
approach offers the possibility to set some designer requirements for loading.
|
1212.6054 | New design of Robotics Remote lab | cs.RO | The Robotic Remote Laboratory (RRL) controls robot labs via the Internet and
runs the robot experiment in an easy and advanced way. If we want to enhance
the RRL system, we must study the requirements of the robot experiment in
depth. One of the key requirements of the robot experiment is the control
algorithm, which includes all the important activities that affect the robot;
one of them relates to the path or obstacles. Our goal is to produce a new
design of the RRL that includes a new treatment of the control algorithm: the
path-related activity of the control algorithm is isolated in a separate
algorithm, i.e., the path planning algorithm is designed independently of the
original control algorithm. This aim is achieved by using light to produce a
"light obstacle". Applying the light obstacle requires hardware (a light
control server and light arms) and software (the path planning algorithm). The
NXT 2.0 robot senses the light obstacle with its light sensor. The new design
has two servers, one for the path (the light control server) and the other for
the remaining activities of the control algorithm (the robot control server).
The website of the new design includes three main parts (Lab Reservation, Open
Lab, Download Simulation). We propose a set of scenarios for organizing the
reservation of the remote lab. Additionally, we developed appropriate software
to simulate the robot and to practice with it before using the remote lab.
|
1212.6058 | High Quality Image Interpolation via Local Autoregressive and Nonlocal
3-D Sparse Regularization | cs.MM cs.CV | In this paper, we propose a novel image interpolation algorithm, which is
formulated by combining both the local autoregressive (AR) model and the
nonlocal adaptive 3-D sparse model as regularized constraints under the
regularization framework. Estimating the high-resolution image by the local AR
regularization differs from conventional AR models, which compute weighted
interpolation coefficients without considering the rough structural similarity
between the low-resolution (LR) and high-resolution (HR) images. The nonlocal
adaptive 3-D sparse model is then formulated to regularize the interpolated HR
image, which provides a way to correct pixels affected by the numerical
instability of the AR model. In addition, a new
Split-Bregman based iterative algorithm is developed to solve the above
optimization problem iteratively. Experimental results demonstrate that the
proposed algorithm achieves significant performance improvements over
traditional algorithms in terms of both objective quality and visual perception.
|
1212.6069 | Evaluation of Lyapunov exponent in generalized linear dynamical models
of queueing networks | math.OC cs.SY | The problem of evaluation of Lyapunov exponent in queueing network analysis
is considered based on models and methods of idempotent algebra. General
conditions for the Lyapunov exponent to exist in generalized linear
stochastic dynamic systems are given, and examples of evaluating the exponent
for systems with matrices of particular types are presented. A method that
allows one to compute the exponent is proposed, based on an appropriate
decomposition of the system matrix. A general approach to modeling a wide
class of queueing networks is taken to obtain models in the form of
stochastic dynamic systems. It is shown how to find the mean service cycle time
for the networks through the evaluation of Lyapunov exponent for their
associated dynamic systems. As an illustration, the mean service time is
evaluated for some systems including open and closed tandem queues with finite
and infinite buffers, fork-join networks, and systems with round-robin routing.
|
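The max-plus (idempotent algebra) dynamics underlying abstracts like the one above can be illustrated with a deterministic toy system: iterating x(k+1) = A ⊗ x(k) and measuring the growth of the state vector approximates the mean cycle time. The matrix entries below are illustrative, and a real model would use random matrices as the papers describe:

```python
NEG_INF = float("-inf")  # the max-plus "zero" element, marking absent arcs

def maxplus_matvec(A, x):
    """Max-plus matrix-vector product: (A ⊗ x)_i = max_j (A[i][j] + x[j])."""
    return [max(a + b for a, b in zip(row, x)) for row in A]

# Two-node system; entries are service/transfer times (illustrative numbers).
# Cycle mean of the loop 1 -> 2 -> 1 is (7 + 2) / 2 = 4.5, the largest in the
# graph, so the growth rate of the state vector converges to 4.5.
A = [[3.0, 7.0],
     [2.0, 4.0]]

x = [0.0, 0.0]
for _ in range(200):          # iterate x(k+1) = A ⊗ x(k)
    x = maxplus_matvec(A, x)
growth = max(x) / 200         # ≈ mean cycle time (Lyapunov exponent)
```

For a constant matrix the growth rate equals the max-plus eigenvalue (the maximum cycle mean); in the stochastic setting of these papers the matrix is random at each step and the analogous limit is the Lyapunov exponent.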
1212.6074 | On the Diversity-Multiplexing Tradeoff of Unconstrained Multiple-Access
Channels | cs.IT math.IT | In this work the optimal diversity-multiplexing tradeoff (DMT) is
investigated for the multiple-input multiple-output fading multiple-access
channels with no power constraints (infinite constellations). For K users
(K>1), M transmit antennas for each user, and N receive antennas, infinite
constellations in general and lattices in particular are shown to attain the
optimal DMT of finite constellations when N is greater than or equal to
(K+1)M-1, i.e., the user-limited regime. On the other hand, for N<(K+1)M-1 it is
shown that infinite constellations cannot attain the optimal DMT. This is in
contrast to the point-to-point case in which infinite constellations are DMT
optimal for any M and N. In general, this work shows that when the network is
heavily loaded, i.e. K>max(1,(N-M+1)/M), taking into account the shaping region
in the decoding process plays a crucial role in pursuing the optimal DMT. By
investigating the cases where infinite constellations are optimal and
suboptimal, this work also gives a geometrical interpretation to the DMT of
infinite constellations in multiple-access channels.
|
1212.6079 | Evaluation of the Lyapunov exponent for generalized linear second-order
exponential systems | math.OC cs.SY math.PR | We consider generalized linear stochastic dynamical systems with second-order
state transition matrices. The entries of the matrix are assumed to be either
independent and exponentially distributed or equal to zero. We give an overview
of new results on evaluation of asymptotic growth rate of the system state
vector, which is called the Lyapunov exponent of the system.
|
1212.6086 | A Method to determine Partial Weight Enumerator for Linear Block Codes | cs.IT math.IT | In this paper we present a fast and efficient method to find partial weight
enumerator (PWE) for binary linear block codes by using the error impulse
technique and Monte Carlo method. This PWE can be used to compute an upper
bound of the error probability for the soft decision maximum likelihood decoder
(MLD). As an application of this method, we give partial weight enumerators and
the analytical performance of the BCH(130,66), BCH(103,47) and BCH(111,55)
shortened codes; the first code is obtained by shortening the binary primitive
BCH(255,191,17) code and the other two codes are obtained by shortening the
binary primitive BCH(127,71,19) code. The weight distributions of these three
codes were, to the best of our knowledge, previously unknown.
|
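The kind of upper bound a partial weight enumerator feeds into can be sketched as the standard union bound for soft-decision ML decoding over a BPSK/AWGN channel; the weight enumerator used below is a made-up illustration, not one of the paper's computed PWEs:

```python
import math

def union_bound(partial_weight_enum, rate, ebno_db):
    """Union upper bound on ML word-error probability over BPSK/AWGN:
    P_e <= sum_w A_w * Q(sqrt(2 * w * R * Eb/N0)),
    where A_w is the number of codewords of Hamming weight w."""
    ebno = 10.0 ** (ebno_db / 10.0)
    def Q(x):  # Gaussian tail function
        return 0.5 * math.erfc(x / math.sqrt(2.0))
    return sum(a_w * Q(math.sqrt(2.0 * w * rate * ebno))
               for w, a_w in partial_weight_enum.items())

# Hypothetical partial enumerator {weight: multiplicity} for a rate-1/2 code:
pwe = {22: 15, 24: 120, 26: 700}
bound_2db = union_bound(pwe, rate=0.5, ebno_db=2.0)
bound_6db = union_bound(pwe, rate=0.5, ebno_db=6.0)
```

Only the low-weight terms matter at moderate-to-high SNR, which is why a partial enumerator (rather than the full, hard-to-compute one) suffices for a tight bound.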
1212.6094 | Large Scale Strongly Supervised Ensemble Metric Learning, with
Applications to Face Verification and Retrieval | cs.CV | Learning Mahalanobis distance metrics in a high-dimensional feature space is
very difficult, especially when structural sparsity and low rank are enforced to
improve computational efficiency in the testing phase. This paper addresses both
aspects by an ensemble metric learning approach that consists of sparse block
diagonal metric ensembling and joint metric learning as two consecutive
steps. The former step pursues a highly sparse block diagonal metric by
selecting effective feature groups while the latter one further exploits
correlations between selected feature groups to obtain an accurate and low rank
metric. Our algorithm considers all pairwise or triplet constraints generated
from training samples with explicit class labels, and possesses good
scalability with respect to increasing feature dimensionality and growing data
volumes. Its applications to face verification and retrieval outperform
existing state-of-the-art methods in accuracy while retaining high efficiency.
|
1212.6098 | Evaluation of the mean cycle time in stochastic discrete event dynamic
systems | math.OC cs.SY math.PR | We consider stochastic discrete event dynamic systems that have time
evolution represented with two-dimensional state vectors through a vector
equation that is linear in terms of an idempotent semiring. The state
transitions are governed by second-order random matrices that are assumed to be
independent and identically distributed. The problem of interest is to evaluate
the mean growth rate of the state vector, which is also referred to as the mean
cycle time of the system, under various assumptions on the matrix entries. We
give an overview of early results including a solution for systems determined
by matrices with independent entries having a common exponential distribution.
It is shown how to extend the result to the cases when the entries have
different exponential distributions and when some of the entries are replaced
by zero. Finally, the mean cycle time is calculated for systems with matrices
that have one random entry, whereas the other entries in the matrices can be
arbitrary nonnegative and zero constants. The random entry is always assumed
to have an exponential distribution, except for one case of a matrix with a
zero row, where the particular form of the matrix makes it possible to obtain a solution
that does not rely on exponential distribution assumptions.
|
1212.6110 | Hyperplane Arrangements and Locality-Sensitive Hashing with Lift | cs.LG cs.IR stat.ML | Locality-sensitive hashing converts high-dimensional feature vectors, such as
images and speech, into bit arrays and allows high-speed similarity calculation
with the Hamming distance. There is a hashing scheme that maps feature vectors
to bit arrays depending on the signs of the inner products between feature
vectors and the normal vectors of hyperplanes placed in the feature space. This
hashing can be seen as a discretization of the feature space by hyperplanes. If
labels for data are given, one can determine the hyperplanes by using learning
algorithms. However, many proposed learning methods do not consider the
hyperplanes' offsets. Not doing so decreases the number of partitioned regions,
and the correlation between Hamming distances and Euclidean distances becomes
small. In this paper, we propose a lift map that converts learning algorithms
without the offsets to the ones that take into account the offsets. With this
method, the learning methods without offsets yield discretizations of the
space as if they took the offsets into account. To evaluate the proposed
method, we used several high-dimensional feature data sets and studied the
relationship between the statistical characteristics of the data, the number of
hyperplanes, and the effect of the proposed method.
|
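The lift-map idea in the abstract above can be illustrated directly: appending a constant 1 to each feature vector turns hyperplanes with offsets into offset-free hyperplanes through the origin in the lifted space. The random normals and offsets below are illustrative, not a learned arrangement:

```python
import numpy as np

def hash_bits(X, W, b):
    """Sign-of-inner-product hashing: bit i of x is [w_i . x + b_i >= 0]."""
    return (X @ W.T + b >= 0).astype(int)

def lift(X):
    """Lift map: append a constant 1 so offset-free hyperplanes through the
    origin in the lifted space act as offset hyperplanes in the original."""
    return np.hstack([X, np.ones((X.shape[0], 1))])

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 8))    # 5 feature vectors in 8 dimensions
W = rng.standard_normal((16, 8))   # 16 hyperplane normal vectors
b = rng.standard_normal(16)        # hyperplane offsets

# Hashing lifted vectors with lifted normals [W | b] reproduces offset hashing,
# since lift(X) @ [W | b].T == X @ W.T + b.
W_lift = np.hstack([W, b[:, None]])
codes_a = hash_bits(X, W, b)
codes_b = (lift(X) @ W_lift.T >= 0).astype(int)
```

This is why an offset-free learning algorithm, applied in the lifted space, behaves as if it had learned the offsets.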
1212.6147 | Finding Nemo: Searching and Resolving Identities of Users Across Online
Social Networks | cs.SI | An online user joins multiple social networks in order to enjoy different
services. On each joined social network, she creates an identity and
constitutes its three major dimensions, namely profile, content and connection
network. She largely governs her identity formulation on any social network and
therefore can manipulate multiple aspects of it. With no global identifier to
mark her presence uniquely in the online domain, her online identities remain
unlinked, isolated and difficult to search. Earlier research has explored the
above mentioned dimensions, to search and link her multiple identities with an
assumption that the considered dimensions have been least disturbed across her
identities. However, the majority of these approaches are restricted to
exploiting one or two dimensions. We make a first attempt to deploy an integrated
system (Finding Nemo) which uses all the three dimensions of an identity to
search for a user on multiple social networks. The system exploits a known
identity on one social network to search for her identities on other social
networks. We test our system on two of the most popular and distinct social
networks, Twitter and Facebook. We show that the integrated system gives better
accuracy than the individual algorithms, and we report our experimental
findings in detail.
|
1212.6167 | Transfer Learning Using Logistic Regression in Credit Scoring | cs.LG cs.CE | Credit scoring risk management is a fast-growing field due to consumers'
credit requests. Credit requests, of new and existing customers, are often
evaluated by classical discrimination rules based on customer information.
However, these kinds of strategies have serious limits and do not take into
account the difference in characteristics between current customers and future
ones. The aim of this paper is to measure creditworthiness for non-customer
borrowers and to model potential risk given a heterogeneous population formed
by borrowers who are customers of the bank and others who are not. We build on
previous work on generalized Gaussian discrimination and transpose it into the
logistic model to bring out efficient discrimination rules for the non-customer
subpopulation.
Therefore we obtain several simple models of connection between parameters of
both logistic models associated respectively to the two subpopulations. The
German credit data set is selected to experiment and to compare these models.
Experimental results show that the use of links between the two subpopulations
improves the classification accuracy for new loan applicants.
|
1212.6177 | How Much of the Web Is Archived? | cs.DL cs.IR | Although the Internet Archive's Wayback Machine is the largest and most
well-known web archive, a number of public web archives have emerged in the
last several years. With varying resources, audiences and
collection development policies, these archives have varying levels of overlap
with each other. While individual archives can be measured in terms of number
of URIs, number of copies per URI, and intersection with other archives, to
date there has been no answer to the question "How much of the Web is
archived?" We study the question by approximating the Web using sample URIs
from DMOZ, Delicious, Bitly, and search engine indexes, and counting the number
of copies of the sample URIs that exist in various public web archives. Each
sample set provides its own bias. The results from our sample sets indicate
that 35%-90% of the Web has at least one archived copy, 17%-49% has between
2-5 copies, 1%-8% has 6-10 copies, and 8%-63% has more than 10 copies
in public web archives. The number of URI copies varies as a function of time,
but no more than 31.3% of URIs are archived more than once per month.
|
1212.6193 | Learning Joint Query Interpretation and Response Ranking | cs.IR | Thanks to information extraction and semantic Web efforts, search on
unstructured text is increasingly refined using semantic annotations and
structured knowledge bases. However, most users cannot become familiar with the
schema of knowledge bases and ask structured queries. Interpreting free-format
queries into a more structured representation is of much current interest. The
dominant paradigm is to segment or partition query tokens by purpose
(references to types, entities, attribute names, attribute values, relations)
and then launch the interpreted query on structured knowledge bases. Given that
structured knowledge extraction is never complete, here we use a data
representation that retains the unstructured text corpus, along with structured
annotations (mentions of entities and relationships) on it. We propose two new,
natural formulations for joint query interpretation and response ranking that
exploit bidirectional flow of information between the knowledge base and the
corpus. One, inspired by probabilistic language models, computes expected
response scores over the uncertainties of query interpretation. The other is
based on max-margin discriminative learning, with latent variables representing
those uncertainties. In the context of typed entity search, both formulations
bridge a considerable part of the accuracy gap between a generic query that
does not constrain the type at all, and the upper bound where the "perfect"
target entity type of each query is provided by humans. Our formulations are
also superior to a two-stage approach of first choosing a target type using
recent query type prediction techniques, and then launching a type-restricted
entity search query.
|
1212.6207 | Irrespective Priority-Based Regular Properties of High-Intensity Virtual
Environments | cs.AI | Thinking has much to do with encoding and with the Theory of Information.
It is a natural process and, at the same time, a complex thing to investigate.
Understanding how our mind works has always been a challenge, and we are trying
to find universal models for it. Many approaches have been considered so far,
and the goal is to find a consistent, noncontradictory view that is flexible
enough, in all dimensions, to represent various kinds of processes and
environments, matters of different nature and diverse objects. Developing such
a model is the aim of this article.
|
1212.6209 | Efficient Multiple Object Tracking Using Mutually Repulsive Active
Membranes | q-bio.QM cs.CV physics.bio-ph | Studies of social and group behavior in interacting organisms require
high-throughput analysis of the motion of a large number of individual
subjects. Computer vision techniques offer solutions to specific tracking
problems, and allow automated and efficient tracking with minimal human
intervention. In this work, we adopt the open active contour model to track the
trajectories of moving objects at high density. We add repulsive interactions
between open contours to the original model, treat the trajectories as an
extrusion in the temporal dimension, and show applications to two tracking
problems. The walking behavior of Drosophila is studied at different population
densities and gender compositions. We demonstrate that individual male flies have
distinct walking signatures, and that the social interaction between flies in a
mixed gender arena is gender specific. We also apply our model to studies of
trajectories of gliding Myxococcus xanthus bacteria at high density. We examine
the individual gliding behavioral statistics in terms of the gliding speed
distribution. Using these two examples at very distinctive spatial scales, we
illustrate the use of our algorithm on tracking both short rigid bodies
(Drosophila) and long flexible objects (Myxococcus xanthus). Our repulsive
active membrane model reaches error rates better than $5\times 10^{-6}$ per fly
per second for Drosophila tracking and comparable results for Myxococcus
xanthus.
|
1212.6216 | Generating Motion Patterns Using Evolutionary Computation in Digital
Soccer | cs.AI cs.RO | Dribbling an opponent player in digital soccer environment is an important
practical problem in motion planning. It has special complexities which can be
generalized to most important problems in other similar Multi Agent Systems. In
this paper, we propose a hybrid computational geometry and evolutionary
computation approach for generating motion trajectories to avoid a mobile
obstacle. In this case the opponent agent is not only an obstacle but also one
who tries to hinder the dribbling procedure. One characteristic of this
approach is that it reduces the processing cost of the online stage by
transferring it to the offline stage, which improves the agents' performance.
The approach breaks the problem into an offline and an online stage. In the
offline stage, the goal is to find the desired trajectory using evolutionary
computation and to save it as a trajectory plan. A trajectory plan consists of
nodes that approximate the information of each trajectory. In the online stage,
linear interpolation along with Delaunay triangulation in the xy-plane is
applied to the trajectory plan to retrieve the desired action.
|
1212.6225 | Joint Sensing and Power Allocation in Nonconvex Cognitive Radio Games:
Quasi-Nash Equilibria | cs.IT math.IT | In this paper, we propose a novel class of Nash problems for Cognitive Radio
(CR) networks composed of multiple primary users (PUs) and secondary users
(SUs) wherein each SU (player) competes against the others to maximize his own
opportunistic throughput by choosing jointly the sensing duration, the
detection thresholds, and the vector power allocation over a multichannel link.
In addition to power budget constraints, several (deterministic or
probabilistic) interference constraints can be accommodated in the proposed
general formulation, such as constraints on the maximum individual/aggregate
(probabilistic) interference tolerable from the PUs. To keep the optimization
as decentralized as possible, global interference constraints, when present,
are imposed via pricing; the prices are thus additional variables to be
optimized. The resulting players' optimization problems are nonconvex and there
are price clearance conditions associated with the nonconvex global
interference constraints to be satisfied by the equilibria of the game, which
make the analysis of the proposed game a challenging task; none of the
classical results in the game theory literature can be successfully applied. To deal with
the nonconvexity of the game, we introduce a relaxed equilibrium concept, the
Quasi-Nash Equilibrium (QNE), and study its main properties, performance, and
connection with local Nash equilibria. Quite interestingly, the proposed game
theoretical formulations yield a considerable performance improvement with
respect to current centralized and decentralized designs of CR systems, which
validates the concept of QNE.
|
1212.6235 | Real and Complex Monotone Communication Games | cs.GT cs.IT math.IT | Noncooperative game-theoretic tools have been increasingly used to study many
important resource allocation problems in communications, networking, smart
grids, and portfolio optimization. In this paper, we consider a general class
of convex Nash Equilibrium Problems (NEPs), where each player aims to solve an
arbitrary smooth convex optimization problem. Differently from most current
works, we do not assume any specific structure for the players' problems, and
we allow the optimization variables of the players to be matrices in the
complex domain. Our main contribution is the design of a novel class of
distributed (asynchronous) best-response algorithms suitable for solving the
proposed NEPs, even in the presence of multiple solutions. The new methods,
whose convergence analysis is based on Variational Inequality (VI) techniques,
can select, among all the equilibria of a game, those that optimize a given
performance criterion, at the cost of limited signaling among the players. This
is a major departure from existing best-response algorithms, whose convergence
conditions imply the uniqueness of the NE. Some of our results hinge on the use
of VI problems directly in the complex domain; the study of this new kind of
VI also represents a noteworthy innovative contribution. We then apply the
developed methods to solve some new generalizations of SISO and MIMO games in
cognitive radios and femtocell systems, showing a considerable performance
improvement over classical pure noncooperative schemes.
|
1212.6246 | Gaussian Process Regression with Heteroscedastic or Non-Gaussian
Residuals | stat.ML cs.LG | Gaussian Process (GP) regression models typically assume that residuals are
Gaussian and have the same variance for all observations. However, applications
with input-dependent noise (heteroscedastic residuals) frequently arise in
practice, as do applications in which the residuals do not have a Gaussian
distribution. In this paper, we propose a GP Regression model with a latent
variable that serves as an additional unobserved covariate for the regression.
This model (which we call GPLC) allows for heteroscedasticity since it allows
the function to have a changing partial derivative with respect to this
unobserved covariate. With a suitable covariance function, our GPLC model can
handle (a) Gaussian residuals with input-dependent variance, or (b)
non-Gaussian residuals with input-dependent variance, or (c) Gaussian residuals
with constant variance. We compare our model, using synthetic datasets, with a
model proposed by Goldberg, Williams and Bishop (1998), which we refer to as
GPLV, which only deals with case (a), as well as a standard GP model which can
handle only case (c). Markov Chain Monte Carlo methods are developed for both
models. Experiments show that when the data is heteroscedastic, both GPLC and
GPLV give better results (smaller mean squared error and negative
log-probability density) than standard GP regression. In addition, when the
residuals are Gaussian, our GPLC model is generally nearly as good as GPLV,
while when the residuals are non-Gaussian, our GPLC model is better than GPLV.
|
1212.6273 | Human-Recognizable Robotic Gestures | cs.RO cs.AI cs.HC | For robots to be accommodated in human spaces and in humans' daily activities,
robots should be able to understand messages from the human conversation
partner. In the same light, humans must also understand the messages that are
being communicated by robots, including the non-verbal ones. We conducted a
web-based video study wherein participants gave interpretations on the iconic
gestures and emblems that were produced by an anthropomorphic robot. Out of the
15 gestures presented, we found 6 robotic gestures that can be accurately
recognized by the human observer. These were nodding, clapping, hugging,
expressing anger, walking, and flying. We reviewed the meanings of these
gestures in the literature on human and animal behavior. We conclude by
discussing the possible implications of these gestures for the design of social
robots that are aimed to have engaging interactions with humans.
|
1212.6276 | Echo State Queueing Network: a new reservoir computing learning tool | cs.NE cs.AI cs.LG | In the last decade, a new computational paradigm was introduced in the field
of Machine Learning, under the name of Reservoir Computing (RC). RC models are
neural networks with a recurrent part (the reservoir) that does not
participate in the learning process, while the rest of the system contains no
recurrence (no neural circuit). This approach has grown rapidly due to
its success in solving learning tasks and other computational applications.
Some success was also observed with another recently proposed neural network
designed using Queueing Theory, the Random Neural Network (RandNN). Both
approaches have good properties and identified drawbacks. In this paper, we
propose a new RC model called Echo State Queueing Network (ESQN), where we use
ideas coming from RandNNs for the design of the reservoir. ESQNs are ESNs
whose reservoir has new dynamics inspired by recurrent RandNNs. The
paper positions ESQNs in the global Machine Learning area, and provides
examples of their use and performance. We show on widely used benchmarks that
ESQNs are very accurate tools, and we illustrate how they compare with standard
ESNs.
|
1212.6298 | Design of Intelligent Agents Based System for Commodity Market
Simulation with JADE | cs.MA cs.AI | A market for potato as an industrial-scale commodity engages several
types of actors: farmers, middlemen, and industries. A multi-agent system has
been built to simulate these actors as agent entities, based on manually given
parameters within a simulation scenario file. Each type of agent has its own
fuzzy logic representing the actual actors' knowledge, used to interpret
values and take appropriate decisions during the simulation. The system
simulates market activities with programmed behaviors and then produces the
results as spreadsheet and chart files. These results consist of each agent's
yearly finance and commodity data. The system also predicts the next value of
each of these outputs.
|
1212.6303 | A brief experience on journey through hardware developments for image
processing and its applications on Cryptography | cs.AR cs.CR cs.CV | Embedded applications in the image and video
processing, communication, and cryptography domains have been occupying a
growing share of current research. The improvement of pictorial information
for better human perception, such as deblurring and de-noising in fields like
satellite imaging and medical imaging, is a renewed research thrust.
Specifically, we elaborate on our experience of the significance of computer
vision as one of the domains where hardware-implemented algorithms perform far
better than those implemented in software. So far, embedded design engineers
have successfully implemented their designs by means of Application Specific
Integrated Circuits (ASICs) and/or Digital Signal Processors (DSPs); however,
with the advancement of VLSI technology, a very powerful hardware device, the
Field Programmable Gate Array (FPGA), was developed. Combining the key
advantages of ASICs and DSPs with the possibility of reprogramming, FPGAs are
very attractive devices for rapid prototyping. Communication of image and
video data among multiple FPGAs raises the need for secure transmission,
making the relevance of cryptography unavoidable. This paper shows how the
Xilinx hardware development platform, as well as Mathworks Matlab, can be used
to develop hardware-based computer vision algorithms and a corresponding
crypto transmission channel between multiple FPGA platforms from a
system-level approach, making it favourable for developing a hardware-software
co-design environment.
|
1212.6316 | On-line relational SOM for dissimilarity data | stat.ML cs.LG | In some applications and in order to address real world situations better,
data may be more complex than simple vectors. In some examples, they can be
known through their pairwise dissimilarities only. Several variants of the Self
Organizing Map algorithm were introduced to generalize the original algorithm
to this framework. Whereas median SOM is based on a rough representation of the
prototypes, relational SOM allows representing these prototypes by a virtual
combination of all elements in the data set. However, this latter approach
suffers from two main drawbacks. First, its complexity can be large. Second,
only a batch version of this algorithm has been studied so far, and it often
provides results with a poor topographic organization. In this article, an
on-line version of relational SOM is described and justified. The algorithm is
tested on several datasets, including categorical data and graphs, and compared
with the batch version and with other SOM algorithms for non vector data.
|
1212.6323 | Localized Algorithm of Community Detection on Large-Scale Decentralized
Social Networks | cs.SI physics.soc-ph stat.ML | Despite the overwhelming success of the existing Social Networking Services
(SNS), their centralized ownership and control have led to serious concerns in
user privacy, censorship vulnerability and operational robustness of these
services. To overcome these limitations, Distributed Social Networks (DSN) have
recently been proposed and implemented. Under these new DSN architectures, no
single party possesses the full knowledge of the entire social network. While
this approach solves the above problems, the lack of global knowledge for the
DSN nodes makes it much more challenging to support some common but critical
SNS services like friends discovery and community detection. In this paper, we
tackle the problem of community detection for a given user under the constraint
of limited local topology information as imposed by common DSN architectures.
By considering the Personalized Page Rank (PPR) approach as an ink spilling
process, we justify its applicability for decentralized community detection
using limited local topology information. Our proposed PPR-based solution has a
wide range of applications such as friends recommendation, targeted
advertisement, automated social relationship labeling and sybil defense. Using
data collected from a large-scale SNS in practice, we demonstrate our adapted
version of PPR can significantly outperform the basic PR as well as two other
commonly used heuristics. The inclusion of a few manually labeled friends in
the Escape Vector (EV) can boost the performance considerably (64.97% relative
improvement in terms of Area Under the ROC Curve (AUC)).
|
1212.6325 | Existence of Oscillations in Cyclic Gene Regulatory Networks with Time
Delay | cs.SY math.OC q-bio.MN | This paper is concerned with conditions for the existence of oscillations in
gene regulatory networks with negative cyclic feedback, where time delays in
transcription, translation and translocation processes are explicitly considered.
The primary goal of this paper is to propose systematic analysis tools that are
useful for a broad class of cyclic gene regulatory networks, and to provide
novel biological insights. To this end, we adopt a simplified model that is
suitable for capturing the essence of a large class of gene regulatory
networks. It is first shown that local instability of the unique equilibrium
state results in oscillations based on a Poincare-Bendixson type theorem. Then,
a graphical existence condition, which is equivalent to the local instability
of a unique equilibrium, is derived. Based on the graphical condition, the
existence condition is analytically presented in terms of biochemical
parameters. This allows us to find the dimensionless parameters that primarily
affect the existence of oscillations, and to provide biological insights. The
analytic conditions and biological insights are illustrated with two existing
biochemical networks, Repressilator and the Hes7 gene regulatory networks.
|
1212.6331 | Modeling collective human mobility: Understanding exponential law of
intra-urban movement | physics.soc-ph cs.SI | It is very important to understand urban mobility patterns because most trips
are concentrated in urban areas. In this paper, a new model is proposed for
collective human mobility in urban areas. The model can be applied to predict
individual flows not only within cities but also across countries or a larger
range. Based on the model, it can be concluded that the exponential law of the
distance distribution is attributable to the exponentially decreasing average
density of human travel demands. Since the distribution of human travel
demands depends only on urban planning, population distribution, regional
functions and so on, this illustrates that these inherent properties of cities
are the impetus driving collective human movements.
|
1212.6371 | The Weight Distribution of a Class of Cyclic Codes Related to Hermitian
Forms Graphs | cs.IT math.CO math.IT | The determination of weight distribution of cyclic codes involves evaluation
of Gauss sums and exponential sums. Despite some cases where a neat
expression is available, the computation is generally rather complicated. In
this note, we determine the weight distribution of a class of reducible cyclic
codes whose dual codes may have arbitrarily many zeros. This goal is achieved
by building an unexpected connection between the corresponding exponential sums
and the spectrums of Hermitian forms graphs.
|
1212.6383 | Heuristics Miners for Streaming Event Data | cs.DB | More and more business activities are performed using information systems.
These systems produce such huge amounts of event data that existing systems are
unable to store and process them. Moreover, few processes are in steady-state
and due to changing circumstances processes evolve and systems need to adapt
continuously. Since conventional process discovery algorithms have been defined
for batch processing, it is difficult to apply them in such evolving
environments. Existing algorithms cannot cope with streaming event data and
tend to generate unreliable and obsolete results.
In this paper, we discuss the peculiarities of dealing with streaming event
data in the context of process mining. Subsequently, we present a general
framework for defining process mining algorithms in settings where it is
impossible to store all events over an extended period or where processes
evolve while being analyzed. We show how the Heuristics Miner, one of the most
effective process discovery algorithms for practical applications, can be
modified using this framework. Different stream-aware versions of the
Heuristics Miner are defined and implemented in ProM. Moreover, experimental
results on artificial and real logs are reported.
|
1212.6388 | Trajectory tracking control of kites with system delay | cs.SY math.OC | A previously published algorithm for trajectory tracking control of tethered
wings, i.e. kites, is updated in light of recent experimental evidence. The
algorithm is, furthermore, analyzed in the framework of delay differential
equations. It is shown how the presence of system delay influences the
stability of the control system, and a methodology is derived for gain
selection using the Lambert W function. The validity of the methodology is
demonstrated with simulation results. The analysis sheds light on previously
poorly understood stability problems.
|
1212.6437 | Joint Sensing and Power Allocation in Nonconvex Cognitive Radio Games:
Nash Equilibria and Distributed Algorithms | cs.IT math.IT | In this paper, we propose a novel class of Nash problems for Cognitive Radio
(CR) networks, modeled as Gaussian frequency-selective interference channels,
wherein each secondary user (SU) competes against the others to maximize his
own opportunistic throughput by choosing jointly the sensing duration, the
detection thresholds, and the vector power allocation. The proposed general
formulation makes it possible to accommodate several (transmit) power and
(deterministic/probabilistic) interference constraints, such as constraints on
the maximum individual and/or aggregate (probabilistic) interference tolerable
at the primary receivers. To keep the optimization as decentralized as
possible, global (coupling) interference constraints are imposed by penalizing
each SU with a set of time-varying prices based upon his contribution to the
total interference; the prices are thus additional variables to optimize. The
resulting players' optimization problems are nonconvex; moreover, there are
possibly price clearing conditions associated with the global constraints to be
satisfied by the solution. All this makes the analysis of the proposed games a
challenging task; none of the classical results in the game theory literature can
be successfully applied. The main contribution of this paper is to develop a
novel optimization-based theory for studying the proposed nonconvex games; we
provide a comprehensive analysis of the existence and uniqueness of a standard
Nash equilibrium, devise alternative best-response based algorithms, and
establish their convergence.
|
1212.6456 | A universal assortativity measure for network analysis | physics.soc-ph cs.SI physics.data-an | Characterizing the connectivity tendency of a network is a fundamental
problem in network science. The traditional and well-known assortativity
coefficient is calculated on a per-network basis, which is of little use for
characterizing the partial connection tendencies within a network. This paper
proposes a universal assortativity coefficient (UAC), which is based on the
unambiguous definition of
each individual edge's contribution to the global assortativity coefficient
(GAC). It is able to reveal the connection tendency of microscopic, mesoscopic,
macroscopic structures and any given part of a network. Applying UAC to real
world networks, we find that, contrary to the popular expectation, most
networks (notably the AS-level Internet topology) have markedly more
assortative edges/nodes than disassortative ones despite their global
disassortativity. Consequently, networks can be categorized along two
dimensions--single global assortativity and local assortativity statistics.
Detailed anatomy of the AS-level Internet topology further illustrates how UAC
can be used to decipher the hidden patterns of connection tendencies on
different scales.
|
1212.6465 | Quantized Iterative Message Passing Decoders with Low Error Floor for
LDPC Codes | cs.IT math.IT | The error floor phenomenon observed with LDPC codes and their graph-based,
iterative, message-passing (MP) decoders is commonly attributed to the
existence of error-prone substructures -- variously referred to as near
codewords, trapping sets, absorbing sets, or pseudocodewords -- in a Tanner
graph representation of the code. Many approaches have been proposed to lower
the error floor by designing new LDPC codes with fewer such substructures or by
modifying the decoding algorithm. Using a theoretical analysis of iterative MP
decoding in an idealized trapping set scenario, we show that a contributor to
the error floors observed in the literature may be the imprecise implementation
of decoding algorithms and, in particular, the message quantization rules used.
We then propose a new quantization method -- (q+1)-bit quasi-uniform
quantization -- that efficiently increases the dynamic range of messages,
thereby overcoming a limitation of conventional quantization schemes. Finally,
we use the quasi-uniform quantizer to decode several LDPC codes that suffer
from high error floors with traditional fixed-point decoder implementations.
The performance simulation results provide evidence that the proposed
quantization scheme can, for a wide variety of codes, significantly lower error
floors with minimal increase in decoder complexity.
|
1212.6478 | The degrees of freedom of the Group Lasso for a General Design | cs.IT math.IT | In this paper, we are concerned with regression problems where covariates can
be grouped in nonoverlapping blocks, and where only a few of them are assumed
to be active. In such a situation, the group Lasso is an attractive method
for variable selection since it promotes sparsity of the groups. We study the
sensitivity of any group Lasso solution to the observations and provide its
precise local parameterization. When the noise is Gaussian, this allows us to
derive an unbiased estimator of the degrees of freedom of the group Lasso. This
result holds true for any fixed design, no matter whether it is under- or
overdetermined. With these results at hand, various model selection criteria,
such as the Stein Unbiased Risk Estimator (SURE), are readily available which
can provide an objectively guided choice of the optimal group Lasso fit.
|
1212.6519 | Dialectics of Knowledge Representation in a Granular Rough Set Theory | cs.AI cs.LO | The concepts of rough and definite objects are relatively more determinate
than those of granules and granulation in general rough set theory (RST) [1].
Representation of rough objects can however depend on the dialectical relation
between granulation and definiteness. In this research, we make this precise in
the context of RST over proto-transitive approximation spaces. This approach
can be directly extended to many other types of RST. These are used for
formulating an extended concept of knowledge interpretation (KI) (relative to
the situation for classical RST), and the problem of knowledge representation (KR)
is solved. These will be of direct interest in granular KR in RST as developed
by the present author [2] and of rough objects in general. In [3], these have
already been used for five different semantics by the present author. This is
an extended version of [4] with key examples and more results.
|
1212.6521 | A Frequency-Domain Encoding for Neuroevolution | cs.AI | Neuroevolution has yet to scale up to complex reinforcement learning tasks
that require large networks. Networks with many inputs (e.g. raw video) imply a
very high dimensional search space if encoded directly. Indirect methods use a
more compact genotype representation that is transformed into networks of
potentially arbitrary size. In this paper, we present an indirect method where
networks are encoded by a set of Fourier coefficients which are transformed
into network weight matrices via an inverse Fourier-type transform. Because
there often exist network solutions whose weight matrices contain regularity
(i.e. adjacent weights are correlated), the number of coefficients required to
represent these networks in the frequency domain is much smaller than the
number of weights (in the same way that natural images can be compressed by
ignoring high-frequency components). This "compressed" encoding is compared to
the direct approach where search is conducted in the weight space on the
high-dimensional octopus arm task. The results show that representing networks
in the frequency domain can reduce the search-space dimensionality by as much
as two orders of magnitude, both accelerating convergence and yielding more
general solutions.
|
1212.6526 | High-SNR Asymptotics of Mutual Information for Discrete Constellations
with Applications to BICM | cs.IT math.IT | Asymptotic expressions of the mutual information between any discrete input
and the corresponding output of the scalar additive white Gaussian noise
channel are presented in the limit as the signal-to-noise ratio (SNR) tends to
infinity. Asymptotic expressions of the symbol-error probability (SEP) and the
minimum mean-square error (MMSE) achieved by estimating the channel input given
the channel output are also developed. It is shown that for any input
distribution, the conditional entropy of the channel input given the output,
MMSE and SEP have an asymptotic behavior proportional to the Gaussian
Q-function. The argument of the Q-function depends only on the minimum
Euclidean distance (MED) of the constellation and the SNR, and the
proportionality constants are functions of the MED and the probabilities of the
pairs of constellation points at MED. The developed expressions are then
generalized to study the high-SNR behavior of the generalized mutual
information (GMI) for bit-interleaved coded modulation (BICM). By means of
these asymptotic expressions, the long-standing conjecture that Gray codes are
the binary labelings that maximize the BICM-GMI at high SNR is proven. It is
further shown that for any equally spaced constellation whose size is a power
of two, there always exists an anti-Gray code giving the lowest BICM-GMI at
high SNR.
|
1212.6527 | Discovering Basic Emotion Sets via Semantic Clustering on a Twitter
Corpus | cs.AI cs.CL | A plethora of words are used to describe the spectrum of human emotions, but
how many emotions are there really, and how do they interact? Over the past few
decades, several theories of emotion have been proposed, each based around the
existence of a set of 'basic emotions', and each supported by an extensive
variety of research including studies in facial expression, ethology, neurology
and physiology. Here we present research based on a theory that people transmit
their understanding of emotions through the language they use surrounding
emotion keywords. Using a labelled corpus of over 21,000 tweets, six of the
basic emotion sets proposed in existing literature were analysed using Latent
Semantic Clustering (LSC), evaluating the distinctiveness of the semantic
meaning attached to the emotional label. We hypothesise that the more distinct
the language used to express a certain emotion, the more distinct the
perception (including proprioception) of that emotion, and thus the more
'basic' it is. This allows us to select the dimensions best representing the entire
spectrum of emotion. We find that Ekman's set, arguably the most frequently
used for classifying emotions, is in fact the most semantically distinct
overall. Next, taking all analysed (that is, previously proposed) emotion terms
into account, we determine the optimal semantically irreducible basic emotion
set using an iterative LSC algorithm. Our newly-derived set (Accepting,
Ashamed, Contempt, Interested, Joyful, Pleased, Sleepy, Stressed) generates a
6.1% increase in distinctiveness over Ekman's set (Angry, Disgusted, Joyful,
Sad, Scared). We also demonstrate how using LSC data can help visualise
emotions. We introduce the concept of an Emotion Profile and briefly analyse
compound emotions both visually and mathematically.
|
1212.6550 | Alternating Directions Dual Decomposition | cs.AI | We propose AD3, a new algorithm for approximate maximum a posteriori (MAP)
inference on factor graphs based on the alternating directions method of
multipliers. Like dual decomposition algorithms, AD3 uses worker nodes to
iteratively solve local subproblems and a controller node to combine these
local solutions into a global update. The key characteristic of AD3 is that
each local subproblem has a quadratic regularizer, leading to a faster
consensus than subgradient-based dual decomposition, both theoretically and in
practice. We provide closed-form solutions for these AD3 subproblems for binary
pairwise factors and factors imposing first-order logic constraints. For
arbitrary factors (large or combinatorial), we introduce an active set method
which requires only an oracle for computing a local MAP configuration, making
AD3 applicable to a wide range of problems. Experiments on synthetic and
real-world problems show that AD3 compares favorably with the state-of-the-art.
|
1212.6556 | Quantitative Timed Simulation Functions and Refinement Metrics for Timed
Systems (Full Version) | cs.SY cs.GT | We introduce quantitative timed refinement and timed simulation (directed)
metrics, incorporating zenoness checks, for timed systems. These metrics
assign positive real numbers between zero and infinity which quantify the
\emph{timing mismatches} between two timed systems, amongst non-zeno runs. We
quantify timing mismatches in three ways: (1) the maximal timing mismatch that
can arise, (2) the "steady-state" maximal timing mismatches, where initial
transient timing mismatches are ignored; and (3) the (long-run) average timing
mismatches amongst two systems. These three kinds of mismatches constitute
three important types of timing differences. Our event times are the
\emph{global times}, measured from the start of the system execution, not just
the time durations of individual steps. We present algorithms over timed
automata for computing the three quantitative simulation distances to within
any desired degree of accuracy. In order to compute the values of the
quantitative simulation distances, we use a game theoretic formulation. We
introduce two new kinds of objectives for two player games on finite-state game
graphs: (1) eventual debit-sum level objectives, and (2) average debit-sum
level objectives. We present algorithms for computing the optimal values for
these objectives in graph games, and then use these algorithms to compute the
values of the timed simulation distances over timed automata.
|
1212.6574 | Proceedings First International Workshop on Formal Techniques for
Safety-Critical Systems | cs.LO cs.SE cs.SY | This volume contains the proceedings of the First International Workshop on
Formal Techniques for Safety-Critical Systems (FTSCS 2012), held in Kyoto on
November 12, 2012, as a satellite event of the ICFEM conference.
The aim of this workshop is to bring together researchers and engineers
interested in the application of (semi-)formal methods to improve the quality
of safety-critical computer systems. FTSCS is particularly interested in
industrial applications of formal methods. Topics include:
- the use of formal methods for safety-critical and QoS-critical systems,
including avionics, automotive, and medical systems;
- methods, techniques and tools to support automated analysis, certification,
debugging, etc.;
- analysis methods that address the limitations of formal methods in industry;
- formal analysis support for modeling languages used in industry, such as
AADL, Ptolemy, SysML, SCADE, Modelica, etc.; and
- code generation from validated models.
The workshop received 25 submissions; 21 of these were regular papers and 4
were tool/work-in-progress/position papers. Each submission was reviewed by
three referees; based on the reviews and extensive discussions, the program
committee selected nine regular papers, which are included in this volume. Our
program also included an invited talk by Ralf Huuck.
|
1212.6592 | Social Teaching: Being Informative vs. Being Right in Sequential
Decision Making | cs.IT math.IT | We show that it can be suboptimal for Bayesian decision-making agents
employing social learning to use correct prior probabilities as their initial
beliefs. We consider sequential Bayesian binary hypothesis testing where each
individual agent makes a binary decision based on an initial belief, a private
signal, and the decisions of all earlier-acting agents---with the actions of
precedent agents causing updates of the initial belief. Each agent acts to
minimize Bayes risk, with all agents sharing the same Bayes costs for Type I
(false alarm) and Type II (missed detection) errors. The effect of the set of
initial beliefs on the decision-making performance of the last agent is
studied. The last agent makes the best decision when the initial beliefs are
inaccurate. When the private signals are described by Gaussian likelihoods, the
optimal initial beliefs are not haphazard but rather follow a systematic
pattern: the earlier-acting agents should act as if the prior probability is
larger than it is in reality when the true prior probability is small, and vice
versa. We interpret this as being open minded toward the unlikely hypothesis.
The early-acting agents face a trade-off between making a correct decision and
being maximally informative to the later-acting agents.
|
1212.6602 | Multidimensional Analytic Signals and the Bedrosian Identity | cs.IT math.CA math.IT | The analytic signal method via the Hilbert transform is a key tool in signal
analysis and processing, especially in time-frequency analysis. Imaging and
other applications to multidimensional signals call for extension of the method
to higher dimensions. We justify the usage of partial Hilbert transforms to
define multidimensional analytic signals from both engineering and mathematical
perspectives. The important associated Bedrosian identity $T(fg)=fTg$ for
partial Hilbert transforms $T$ is then studied. Characterizations and several
necessity theorems are established. We also make use of the identity to
construct basis functions for the time-frequency analysis.
|
1212.6626 | Blind Adaptive Interference Suppression Based on Set-Membership
Constrained Constant-Modulus Algorithms with Time-Varying Bounds | cs.IT math.IT | This work presents blind constrained constant modulus (CCM) adaptive
algorithms based on the set-membership filtering (SMF) concept and incorporates
dynamic bounds for interference suppression applications. We develop
stochastic gradient and recursive least squares type algorithms based on the
CCM design criterion in accordance with the specifications of the SMF concept.
We also propose a blind framework that includes channel and amplitude
estimators that take into account parameter estimation dependency, multiple
access interference (MAI) and inter-symbol interference (ISI) to address the
important issue of bound specification in multiuser communications. A
convergence and tracking analysis of the proposed algorithms is carried out
along with the development of analytical expressions to predict their
performance. Simulations for a number of scenarios of interest with a DS-CDMA
system show that the proposed algorithms outperform previously reported
techniques with a smaller number of parameter updates and a reduced risk of
overbounding or underbounding.
|
1212.6627 | Exploring Relay Cooperation Scheme for Load-Balance Control in Two-hop
Secure Communication System | cs.IT cs.CR cs.NI math.IT | This work considers load-balance control among the relays under the secure
transmission protocol via relay cooperation in two-hop wireless networks
without the information of both eavesdropper channels and locations. The
available two-hop secure transmission protocols in physical layer secrecy
framework cannot provide a flexible load-balance control, which may
significantly limit their application scopes. This paper proposes a secure
transmission protocol for the case where the path-loss is identical between all
pairs of nodes, in which the relay is randomly selected from the first $k$ preferable
assistant relays. This protocol enables load-balance among relays to be
flexibly controlled by a proper setting of the parameter $k$, and covers the
available works as special cases, such as optimal relay selection ($k=1$) and
random relay selection ($k = n$, where $n$ is the number of system nodes). A
theoretical analysis is further provided to determine the
maximum number of eavesdroppers one network can tolerate by applying the
proposed protocol to ensure a desired performance in terms of the secrecy
outage probability and transmission outage probability.
|
1212.6636 | A Dichotomy on the Complexity of Consistent Query Answering for Atoms
with Simple Keys | cs.DB | We study the problem of consistent query answering under primary key
violations. In this setting, the relations in a database violate the key
constraints and we are interested in maximal subsets of the database that
satisfy the constraints, which we call repairs. For a boolean query Q, the
problem CERTAINTY(Q) asks whether every such repair satisfies the query or not;
the problem is known to be always in coNP for conjunctive queries. However,
there are queries for which it can be solved in polynomial time. It has been
conjectured that there exists a dichotomy on the complexity of CERTAINTY(Q) for
conjunctive queries: it is either in PTIME or coNP-complete. In this paper, we
prove that the conjecture is indeed true for the case of conjunctive queries
without self-joins, where each atom has as a key either a single attribute
(simple key) or all attributes of the atom.
|
1212.6640 | Exploring mutexes, the Oracle RDBMS retrial spinlocks | cs.DB cs.DC cs.PF | Spinlocks are widely used in database engines for process synchronization.
KGX mutexes are new retrial spinlocks that appeared in contemporary Oracle
versions for submicrosecond synchronization. Mutex contention is frequently observed
in highly concurrent OLTP environments.
This work explores how Oracle mutexes operate, spin, and sleep. It develops
a predictive mathematical model and discusses parameters and statistics related
to mutex performance tuning, as well as results of contention experiments.
|
1212.6643 | Nonanticipative Rate Distortion Function and Filtering Theory: A weak
Convergence Approach | cs.IT cs.SY math.IT | In this paper the relation between nonanticipative rate distortion function
(RDF) and Bayesian filtering theory is further investigated on general Polish
spaces. The relation is established via an optimization on the space of
conditional distributions of the so-called directed information subject to
fidelity constraints. Existence of the optimal reproduction distribution of the
nonanticipative RDF is shown using the topology of weak convergence of
probability measures. Subsequently, we use the solution of the nonanticipative
RDF to present the realization of a multidimensional partially observable
source over a scalar Gaussian channel. We show that linear encoders are
optimal, establishing joint source-channel coding in real-time.
|
1212.6646 | Blind Adaptive MIMO Receivers for Space-Time Block-Coded DS-CDMA Systems
in Multipath Channels Using the Constant Modulus Criterion | cs.IT math.IT | We propose blind adaptive multi-input multi-output (MIMO) linear receivers
for DS-CDMA systems using multiple transmit antennas and space-time block codes
(STBC) in multipath channels. A space-time code-constrained constant modulus
(CCM) design criterion based on constrained optimization techniques is
considered and recursive least squares (RLS) adaptive algorithms are developed
for estimating the parameters of the linear receivers. A blind space-time
channel estimation method for MIMO DS-CDMA systems with STBC based on a
subspace approach is also proposed along with an efficient RLS algorithm.
Simulations for a downlink scenario assess the proposed algorithms in several
situations against existing methods.
|
1212.6659 | Focus of Attention for Linear Predictors | stat.ML cs.AI cs.LG | We present a method to stop the evaluation of a prediction process when the
result of the full evaluation is obvious. This trait is highly desirable in
prediction tasks where a predictor evaluates all its features for every example
in large datasets. We observe that some examples are easier to classify than
others, a phenomenon which is characterized by the event when most of the
features agree on the class of an example. By stopping the feature evaluation
when encountering an easy-to-classify example, the predictor can achieve
substantial gains in computation. Our method provides a natural attention
mechanism for linear predictors where the predictor concentrates most of its
computation on hard-to-classify examples and quickly discards easy-to-classify
ones. By modifying a linear prediction algorithm such as an SVM or AdaBoost to
include our attentive method we prove that the average number of features
computed is O(sqrt(n log 1/sqrt(delta))) where n is the original number of
features, and delta is the error rate incurred due to early stopping. We
demonstrate the effectiveness of Attentive Prediction on MNIST, Real-sim,
Gisette, and synthetic datasets.
|
1212.6663 | Blind Multilinear Identification | cs.IT math.IT | We discuss a technique that allows blind recovery of signals or blind
identification of mixtures in instances where such recovery or identification
were previously thought to be impossible: (i) closely located or highly
correlated sources in antenna array processing, (ii) highly correlated
spreading codes in CDMA radio communication, (iii) nearly dependent spectra in
fluorescent spectroscopy. This has important implications --- in the case of
antenna array processing, it allows for joint localization and extraction of
multiple sources from the measurement of a noisy mixture recorded on multiple
sensors in an entirely deterministic manner. In the case of CDMA, it allows the
possibility of having a number of users larger than the spreading gain. In the
case of fluorescent spectroscopy, it allows for detection of nearly identical
chemical constituents. The proposed technique involves the solution of a
bounded coherence low-rank multilinear approximation problem. We show that
bounded coherence allows us to establish existence and uniqueness of the
recovered solution. We will provide some statistical motivation for the
approximation problem and discuss greedy approximation bounds. To provide the
theoretical underpinnings for this technique, we develop a corresponding theory
of sparse separable decompositions of functions, including notions of rank and
nuclear norm that specialize to the usual ones for matrices and operators but
also apply to hypermatrices and tensors.
|
1212.6686 | Outage Performance of AF-based Time Division Broadcasting Protocol in
the Presence of Co-channel Interference | cs.IT math.IT | In this paper, we investigate the outage performance of time division
broadcasting (TDBC) protocol in independent but non-identical Rayleigh
flat-fading channels, where all nodes are interfered by a finite number of
co-channel interferers. We assume that the relay operates in the
amplify-and-forward mode. A tight lower bound as well as the asymptotic
expression of the outage probability is obtained in closed-form. Through both
theoretic analyses and simulation results, we show that the achievable
diversity of TDBC protocol is zero in the interference-limited scenario.
Moreover, we study the impacts of interference power, number of interferers and
relay placement on the outage probability. Finally, the correctness of our
analytic results is validated via computer simulations.
|
1212.6734 | Pushing the Limits of LTE: A Survey on Research Enhancing the Standard | cs.IT math.IT | Cellular networks are an essential part of today's communication
infrastructure. The ever-increasing demand for higher data-rates calls for a
close cooperation between researchers and industry/standardization experts
which hardly exists in practice. In this article we give an overview of our
efforts in trying to bridge this gap. Our research group provides a
standard-compliant open-source simulation platform for 3GPP LTE that enables
reproducible research in a well-defined environment. We demonstrate that much
innovative research under the confined framework of a real-world standard is
still possible, sometimes even encouraged. With exemplary samples of our
research work, we investigate the potential of several important research
areas under typical practical conditions.
|
1212.6745 | Two-Dimensional Kolmogorov Complexity and Validation of the Coding
Theorem Method by Compressibility | cs.CC cs.IT math.IT | We propose a measure based upon the fundamental theoretical concept in
algorithmic information theory that provides a natural approach to the problem
of evaluating $n$-dimensional complexity by using an $n$-dimensional
deterministic Turing machine. The technique is interesting because it provides
a natural algorithmic process for symmetry breaking, generating complex
$n$-dimensional structures from perfectly symmetric and fully deterministic
computational rules and producing a distribution of patterns as described by
algorithmic probability. Algorithmic probability also elegantly connects the
frequency of occurrence of a pattern with its algorithmic complexity, hence
effectively providing estimations to the complexity of the generated patterns.
Experiments to validate estimations of algorithmic complexity based on these
concepts are presented, showing that the measure is stable in the face of some
changes in computational formalism and that results are in agreement with the
results obtained using lossless compression algorithms when both methods
overlap in their range of applicability. We then use the output frequency of
the set of 2-dimensional Turing machines to classify the algorithmic complexity
of the space-time evolutions of Elementary Cellular Automata.
|
1212.6806 | Leveraging Sociological Models for Predictive Analytics | cs.SI physics.soc-ph | There is considerable interest in developing techniques for predicting human
behavior, for instance to enable emerging contentious situations to be forecast
or the nature of ongoing but hidden activities to be inferred. A promising
approach to this problem is to identify and collect appropriate empirical data
and then apply machine learning methods to these data to generate the
predictions. This paper shows the performance of such learning algorithms often
can be improved substantially by leveraging sociological models in their
development and implementation. In particular, we demonstrate that
sociologically-grounded learning algorithms outperform gold-standard methods in
three important and challenging tasks: 1.) inferring the (unobserved) nature of
relationships in adversarial social networks, 2.) predicting whether nascent
social diffusion events will go viral, and 3.) anticipating and defending
future actions of opponents in adversarial settings. Significantly, the new
algorithms perform well even when there is limited data available for their
training and execution.
|
1212.6808 | Early Warning Analysis for Social Diffusion Events | cs.SI physics.soc-ph | There is considerable interest in developing predictive capabilities for
social diffusion processes, for instance to permit early identification of
emerging contentious situations, rapid detection of disease outbreaks, or
accurate forecasting of the ultimate reach of potentially viral ideas or
behaviors. This paper proposes a new approach to this predictive analytics
problem, in which analysis of meso-scale network dynamics is leveraged to
generate useful predictions for complex social phenomena. We begin by deriving
a stochastic hybrid dynamical systems (S-HDS) model for diffusion processes
taking place over social networks with realistic topologies; this modeling
approach is inspired by recent work in biology demonstrating that S-HDS offer a
useful mathematical formalism with which to represent complex, multi-scale
biological network dynamics. We then perform formal stochastic reachability
analysis with this S-HDS model and conclude that the outcomes of social
diffusion processes may depend crucially upon the way the early dynamics of the
process interacts with the underlying network's community structure and
core-periphery structure. This theoretical finding provides the foundations for
developing a machine learning algorithm that enables accurate early warning
analysis for social diffusion events. The utility of the warning algorithm, and
the power of network-based predictive metrics, are demonstrated through an
empirical investigation of the propagation of political memes over social media
networks. Additionally, we illustrate the potential of the approach for
security informatics applications through case studies involving early warning
analysis of large-scale protest events and politically-motivated cyber
attacks.
|
1212.6810 | Web Analytics for Security Informatics | cs.SI physics.soc-ph | An enormous volume of security-relevant information is present on the Web,
for instance in the content produced each day by millions of bloggers
worldwide, but discovering and making sense of these data is very challenging.
This paper considers the problem of exploring and analyzing the Web to realize
three fundamental objectives: 1.) security relevant information discovery; 2.)
target situational awareness, typically by making (near) real-time inferences
concerning events and activities from available observations; and 3.)
predictive analysis, to include providing early warning for crises and forming
predictions regarding likely outcomes of emerging issues and contemplated
interventions. The proposed approach involves collecting and integrating three
types of Web data, textual, relational, and temporal, to perform assessments
and generate insights that would be difficult or impossible to obtain using
standard methods. We demonstrate the efficacy of the framework by summarizing a
number of successful real-world deployments of the methodology.
|
1212.6817 | Stability Analysis Of Delayed System Using Bodes Integral | cs.SY | The PID controller parameters can be adjusted in such a manner that the
desired frequency response is obtained, with the results found using Bode's
integral formula in order to adjust the slope of the Nyquist curve in a desired
manner. The same idea is applied to plants with time delay. A new approach is
also presented: the delay term is approximated as a transfer function using a
Pade approximation, and the Bode integral is then used to determine the
controller parameters. Both methodologies are demonstrated with MATLAB
simulations of representative plants and accompanying PID controllers, and a
proper comparison of the two methodologies is made. The PID controller
parameters are also tuned using a real-coded Genetic Algorithm (GA), and a
comparison is made between the three methods.
|
1212.6837 | Autonomously Learning to Visually Detect Where Manipulation Will Succeed | cs.RO cs.AI cs.CV | Visual features can help predict if a manipulation behavior will succeed at a
given location. For example, the success of a behavior that flips light
switches depends on the location of the switch. Within this paper, we present
methods that enable a mobile manipulator to autonomously learn a function that
takes an RGB image and a registered 3D point cloud as input and returns a 3D
location at which a manipulation behavior is likely to succeed. Given a pair of
manipulation behaviors that can change the state of the world between two sets
(e.g., light switch up and light switch down), classifiers that detect when
each behavior has been successful, and an initial hint as to where one of the
behaviors will be successful, the robot autonomously trains a pair of support
vector machine (SVM) classifiers by trying out the behaviors at locations in
the world and observing the results. When an image feature vector associated
with a 3D location is provided as input to one of the SVMs, the SVM predicts if
the associated manipulation behavior will be successful at the 3D location. To
evaluate our approach, we performed experiments with a PR2 robot from Willow
Garage in a simulated home using behaviors that flip a light switch, push a
rocker-type light switch, and operate a drawer. By using active learning, the
robot efficiently learned SVMs that enabled it to consistently succeed at these
tasks. After training, the robot also continued to learn in order to adapt in
the event of failure.
|
1212.6846 | Maximizing a Nonnegative, Monotone, Submodular Function Constrained to
Matchings | cs.DS cs.AI cs.CC cs.LG stat.ML | Submodular functions have many applications. Matchings have many
applications. The bitext word alignment problem can be modeled as the problem
of maximizing a nonnegative, monotone, submodular function constrained to
matchings in a complete bipartite graph where each vertex corresponds to a word
in the two input sentences and each edge represents a potential word-to-word
translation. We propose a more general problem of maximizing a nonnegative,
monotone, submodular function defined on the edge set of a complete graph
constrained to matchings; we call this problem the CSM-Matching problem.
CSM-Matching also generalizes the maximum-weight matching problem, which has a
polynomial-time algorithm; however, we show that it is NP-hard to approximate
CSM-Matching within a factor of e/(e-1) by reducing the max k-cover problem to
it. Our main result is a simple, greedy, 3-approximation algorithm for
CSM-Matching. Then we reduce CSM-Matching to maximizing a nonnegative,
monotone, submodular function over two matroids, i.e., CSM-2-Matroids.
CSM-2-Matroids has a (2+epsilon)-approximation algorithm, called LSV2. We show
that we can find a (4+epsilon)-approximate solution to CSM-Matching using LSV2.
We extend this approach to similar problems.
|
1212.6855 | Multi-Directional Flow as Touch-Stone to Assess Models of Pedestrian
Dynamics | physics.soc-ph cs.MA | For simulation models of pedestrian dynamics there are always the issues of
calibration and validation. These are usually done by comparing measured
properties of the dynamics found in observation, experiments and simulation in
certain scenarios. For this, the scenarios first need to be sensitive to
parameter changes of a particular model or - if models are compared -
differences between models. Second, it is helpful if the exhibited differences
can be expressed in quantities which are as simple as possible, ideally a
single number. Such a scenario is proposed in this contribution, together with
evaluation measures. In an example evaluation of a particular model it is shown
that the proposed evaluation measures are very sensitive to parameter changes
and therefore efficiently summarize the effects of parameter changes and
differences between models, sometimes in a single number. It is
shown how the symmetry which exists in the achiral geometry of the proposed
example scenario is broken in particular simulation runs exhibiting chiral
dynamics, while in the statistics of 1,000 simulation runs there is a symmetry
between left- and right-chiral dynamics. In the course of the symmetry breaking
differences between models and parameter settings are amplified which is the
origin of the high sensitivity of the scenario against parameter changes.
|
1212.6856 | Emergence of Equilibria from Individual Strategies in Online Content
Diffusion | cs.GT cs.NI cs.SI | Social scientists have observed that human behavior in society can often be
modeled as corresponding to a threshold-type policy. A new behavior would
propagate by a procedure in which an individual adopts the new behavior if the
fraction of his neighbors or friends having adopted the new behavior exceeds
some threshold. In this paper we study the question of whether the emergence of
threshold policies may be modeled as a result of some rational process which
would describe the behavior of non-cooperative rational members of some social
network. We focus on situations in which individuals take the decision whether
to access or not some content, based on the number of views that the content
has. Our analysis aims at understanding not only the behavior of individuals,
but also the way in which information about the quality of a given content can
be deduced from view counts when only part of the viewers that access the
content are informed about its quality. In this paper we present a game
formulation for the behavior of individuals using a mean-field model: the number
of individuals is approximated by a continuum of atomless players and for which
the Wardrop equilibrium is the solution concept. We derive conditions on the
problem's parameters that indeed result in the emergence of threshold
equilibrium policies, but we also identify some parameter regimes in which
other structures are obtained for the equilibrium behavior of individuals.
|
1212.6857 | A Trichotomy for Regular Simple Path Queries on Graphs | cs.DB cs.DM | Regular path queries (RPQs) select nodes connected by some path in a graph.
The edge labels of such a path have to form a word that matches a given regular
expression. We investigate the evaluation of RPQs with an additional constraint
that prevents multiple traversals of the same nodes. Those regular simple path
queries (RSPQs) find several applications in practice, yet they quickly become
intractable, even for basic languages such as (aa)* or a*ba*.
In this paper, we establish a comprehensive classification of regular
languages with respect to the complexity of the corresponding regular simple
path query problem. More precisely, we identify the fragment that is maximal in
the following sense: regular simple path queries can be evaluated in polynomial
time for every regular language L that belongs to this fragment and evaluation
is NP-complete for languages outside this fragment. We thus fully characterize
the frontier between tractability and intractability for RSPQs, and we refine
our results to show the following trichotomy: evaluation of RSPQs is either in
AC0, NL-complete, or NP-complete in data complexity, depending on the regular
language L. The fragment identified also admits a simple characterization in
terms of regular expressions.
Finally, we also discuss the complexity of the following decision problem:
decide, given a language L, whether finding a regular simple path for L is
tractable. We consider several alternative representations of L: DFAs, NFAs or
regular expressions, and prove that this problem is NL-complete for the first
representation and PSPACE-complete for the other two. As a conclusion we extend
our results from edge-labeled graphs to vertex-labeled graphs and vertex-edge
labeled graphs.
|
1212.6922 | Training a Functional Link Neural Network Using an Artificial Bee Colony
for Solving a Classification Problems | cs.NE cs.LG | Artificial Neural Networks have emerged as an important tool for
classification and have been widely used to classify a non-linear separable
pattern. The most popular artificial neural networks model is a Multilayer
Perceptron (MLP) as it is able to perform classification task with significant
success. However, the complexity of the MLP structure, together with problems
such as local minima trapping, overfitting, and weight interference, has made
neural network training difficult. Thus, an easy way to avoid these problems is to
remove the hidden layers. This paper presents the ability of Functional Link
Neural Network (FLNN) to overcome the complexity structure of MLP by using
single layer architecture and propose an Artificial Bee Colony (ABC)
optimization for training the FLNN. The proposed technique is expected to
provide a better learning scheme for a classifier in order to obtain more
accurate classification results.
|
1212.6930 | Private Broadcasting over Independent Parallel Channels | cs.IT math.IT | We study private broadcasting of two messages to two groups of receivers over
independent parallel channels. One group consists of an arbitrary number of
receivers interested in a common message, whereas the other group has only one
receiver. Each message must be kept confidential from the receiver(s) in the
other group. Each of the sub-channels is degraded, but the order of receivers
on each channel can be different. While corner points of the capacity region
were characterized in earlier works, we establish the capacity region and show
the optimality of a superposition strategy. For the case of parallel Gaussian
channels, we show that a Gaussian input distribution is optimal. We also
discuss an extension of our setup to broadcasting over a block-fading channel
and demonstrate significant performance gains using the proposed scheme over a
baseline time-sharing scheme.
|
1212.6933 | On Automation and Medical Image Interpretation, With Applications for
Laryngeal Imaging | cs.CV | Indeed, these are exciting times. We are in the heart of a digital
renaissance. Automation and computer technology allow engineers and scientists
to fabricate processes that improve quality of life. We anticipate much
growth in medical image interpretation and understanding, due to the influx of
computer technologies. This work should serve as a guide to introduce the
reader to core themes in theoretical computer science, as well as imaging
applications for understanding vocal-fold vibrations. In this work, we motivate
the use of automation and review some mathematical models of computation. We
present a proof that a classical problem in image analysis cannot be
automated by means of algorithms. Furthermore, we discuss some applications for
processing medical images of the vocal folds, and some of the
exhilarating directions the art of automation will take vocal-fold image
interpretation and quite possibly other areas of biomedical image analysis.
|
1212.6952 | On Minimizing Data-read and Download for Storage-Node Recovery | cs.IT math.IT | We consider the problem of efficient recovery of the data stored in any
individual node of a distributed storage system, from the rest of the nodes.
Applications include handling failures and degraded reads. We measure
efficiency in terms of the amount of data-read and the download required. To
minimize the download, we focus on the minimum bandwidth setting of the
'regenerating codes' model for distributed storage. Under this model, the
system has a total of n nodes, and the data stored in any node must be
(efficiently) recoverable from any d of the other (n-1) nodes. Lower bounds on
the two metrics under this model were derived previously; it has also been
shown that these bounds are achievable for the amount of data-read and download
when d=n-1, and for the amount of download alone when d<n-1.
In this paper, we complete this picture by proving the converse result: when
d<n-1, these lower bounds are strictly loose with respect to the amount of
data-read required. The proof is information-theoretic, and hence applies to
non-linear codes as well. We also show that under two (practical) relaxations
of the problem setting, these lower bounds can be met for both read and
download simultaneously.
|
1212.6958 | Fast Solutions to Projective Monotone Linear Complementarity Problems | cs.LG math.OC | We present a new interior-point potential-reduction algorithm for solving
monotone linear complementarity problems (LCPs) that have a particular special
structure: their matrix $M\in{\mathbb R}^{n\times n}$ can be decomposed as
$M=\Phi U + \Pi_0$, where the rank of $\Phi$ is $k<n$, and $\Pi_0$ denotes
Euclidean projection onto the nullspace of $\Phi^\top$. We call such LCPs
projective. Our algorithm solves a monotone projective LCP to relative accuracy
$\epsilon$ in $O(\sqrt n \ln(1/\epsilon))$ iterations, with each iteration
requiring $O(nk^2)$ flops. This complexity compares favorably with
interior-point algorithms for general monotone LCPs: these algorithms also
require $O(\sqrt n \ln(1/\epsilon))$ iterations, but each iteration needs to
solve an $n\times n$ system of linear equations, a much higher cost than our
algorithm when $k\ll n$. Our algorithm works even though the solution to a
projective LCP is not restricted to lie in any low-rank subspace.
|
1301.0006 | Predictive Non-equilibrium Social Science | cs.SI physics.soc-ph | Non-Equilibrium Social Science (NESS) emphasizes dynamical phenomena, for
instance the way political movements emerge or competing organizations
interact. This paper argues that predictive analysis is an essential element of
NESS, occupying a central role in its scientific inquiry and representing a key
activity of practitioners in domains such as economics, public policy, and
national security. We begin by clarifying the distinction between models which
are useful for prediction and the much more common explanatory models studied
in the social sciences. We then investigate a challenging real-world predictive
analysis case study, and find evidence that the poor performance of standard
prediction methods does not indicate an absence of human predictability but
instead reflects (1.) incorrect assumptions concerning the predictive utility
of explanatory models, (2.) misunderstanding regarding which features of social
dynamics actually possess predictive power, and (3.) practical difficulties
exploiting predictive representations.
|
1301.0014 | Propelinear 1-perfect codes from quadratic functions | cs.IT cs.DM math.CO math.IT | Perfect codes obtained by the Vasil'ev--Sch\"onheim construction from a
linear base code and quadratic switching functions are transitive and,
moreover, propelinear. This gives at least $\exp(cN^2)$ propelinear $1$-perfect
codes of length $N$ over an arbitrary finite field, while an upper bound on the
number of transitive codes is $\exp(C(N\ln N)^2)$. Keywords: perfect code,
propelinear code, transitive code, automorphism group, Boolean function.
|
1301.0015 | Bethe Bounds and Approximating the Global Optimum | cs.LG stat.ML | Inference in general Markov random fields (MRFs) is NP-hard, though
identifying the maximum a posteriori (MAP) configuration of pairwise MRFs with
submodular cost functions is efficiently solvable using graph cuts. Marginal
inference, however, even for this restricted class, is in #P. We prove new
formulations of derivatives of the Bethe free energy, provide bounds on the
derivatives and bracket the locations of stationary points, introducing a new
technique called Bethe bound propagation. Several results apply to pairwise
models whether associative or not. Applying these to discretized
pseudo-marginals in the associative case we present a polynomial time
approximation scheme for global optimization provided the maximum degree is
$O(\log n)$, and discuss several extensions.
|
1301.0026 | Bounding Lossy Compression using Lossless Codes at Reduced Precision | cs.MM cs.IT math.IT | An alternative approach to two-part 'critical compression' is presented.
Whereas previous results were based on summing a lossless code at reduced
precision with a lossy-compressed error or noise term, the present approach
uses a similar lossless code at reduced precision to establish absolute bounds
which constrain an arbitrary lossy data compression algorithm applied to the
original data.
|
1301.0039 | Generating Property-Directed Potential Invariants By Backward Analysis | cs.LO cs.CE cs.SE | This paper addresses the issue of lemma generation in a k-induction-based
formal analysis of transition systems, in the linear real/integer arithmetic
fragment. A backward analysis, powered by quantifier elimination, is used to
output preimages of the negation of the proof objective, viewed as unauthorized
states, or gray states. Two heuristics are proposed to take advantage of this
source of information. First, a thorough exploration of the possible
partitionings of the gray state space discovers new relations between state
variables, representing potential invariants. Second, an inexact exploration
regroups and over-approximates disjoint areas of the gray state space, also to
discover new relations between state variables. k-induction is used to isolate
the invariants and check if they strengthen the proof objective. These
heuristics can be used on the first preimage of the backward exploration, and
each time a new one is output, refining the information on the gray states. In
our context of critical avionics embedded systems, we show that our approach is
able to outperform other academic or commercial tools on examples of interest
in our application field. The method is introduced and motivated through two
main examples, one of which was provided by Rockwell Collins, in a
collaborative formal verification framework.
|
1301.0043 | A Framework for Analysing Driver Interactions with Semi-Autonomous
Vehicles | cs.HC cs.RO cs.SY | Semi-autonomous vehicles are increasingly serving critical functions in
various settings from mining to logistics to defence. A key characteristic of
such systems is the presence of the human (driver) in the control loop. To
ensure safety, the driver needs to be aware both of the autonomous aspects of
the vehicle and of the automated features of the vehicle built to enable safer
control. In this paper we propose a framework to combine empirical models
describing human behaviour with the environment and system models. We then
analyse, via model checking, interaction between the models for desired safety
properties. The aim is to analyse the design for safe vehicle-driver
interaction. We demonstrate the applicability of our approach using a case
study involving semi-autonomous vehicles in which driver fatigue is a factor
critical to a safe journey.
|
1301.0047 | On Distributed Online Classification in the Midst of Concept Drifts | math.OC cs.DC cs.LG cs.SI physics.soc-ph | In this work, we analyze the generalization ability of distributed online
learning algorithms under stationary and non-stationary environments. We derive
bounds for the excess-risk attained by each node in a connected network of
learners and study the performance advantage that diffusion strategies have
over individual non-cooperative processing. We conduct extensive simulations to
illustrate the results.
|
1301.0048 | Generating High-Order Threshold Functions with Multiple Thresholds | cs.NE | In this paper, we consider situations in which a given logical function is
realized by a multithreshold threshold function. In such situations, constant
functions can be easily obtained from multithreshold threshold functions, and
therefore, we can show that it becomes possible to optimize a class of
high-order neural networks. We begin by proposing a generating method for
threshold functions in which we use a vector that determines the boundary
between the linearly separable function and the high-order threshold function.
By applying this method to high-order threshold functions, we show that
functions with the same weight as, but a different threshold than, a threshold
function generated by the generation process can be easily obtained. We also
show that the order of the entire network can be extended while maintaining the
structure of given functions.
|
1301.0068 | Optimal Assembly for High Throughput Shotgun Sequencing | q-bio.GN cs.DS cs.IT math.IT q-bio.QM | We present a framework for the design of optimal assembly algorithms for
shotgun sequencing under the criterion of complete reconstruction. We derive a
lower bound on the read length and the coverage depth required for
reconstruction in terms of the repeat statistics of the genome. Building on
earlier works, we design a de Bruijn graph based assembly algorithm which can
achieve very close to the lower bound for repeat statistics of a wide range of
sequenced genomes, including the GAGE datasets. The results are based on a set
of necessary and sufficient conditions on the DNA sequence and the reads for
reconstruction. The conditions can be viewed as the shotgun sequencing analogue
of Ukkonen-Pevzner's necessary and sufficient conditions for Sequencing by
Hybridization.
|
1301.0079 | Zero-Delay and Causal Single-User and Multi-User Lossy Source Coding
with Decoder Side Information | cs.IT math.IT | We consider zero-delay single-user and multi-user source coding with average
distortion constraint and decoder side information. The zero-delay constraint
translates into causal (sequential) encoder and decoder pairs as well as the
use of instantaneous codes. For the single-user setting, we show that optimal
performance is attained by time sharing at most two scalar encoder-decoder
pairs that use zero-error side information codes. Side information lookahead
is shown to be useless in this setting. We show that it is the restriction to
causal encoding functions, and not the sequential decoders or instantaneous
codes, that causes the performance degradation compared to unrestricted
systems. Furthermore, we show that even without delay constraints, if either
the encoder or the decoder is restricted a priori to be scalar, the performance loss
cannot be compensated by the other component, which can be scalar as well
without further loss. Finally, we show that the multi-terminal source coding
problem can be solved in the zero-delay regime and the rate-distortion region
is given.
|
1301.0080 | How to Understand LMMSE Transceiver Design for MIMO Systems From
Quadratic Matrix Programming | cs.IT math.IT | In this paper, a unified linear minimum mean-square-error (LMMSE) transceiver
design framework is investigated, which is suitable for a wide range of
wireless systems. The unified design is based on an elegant and powerful
mathematical programming technique termed quadratic matrix programming
(QMP). Based on QMP, it can be observed that for different wireless systems,
there are certain common characteristics which can be exploited to design LMMSE
transceivers e.g., the quadratic forms. It is also discovered that evolving
from a point-to-point MIMO system to various advanced wireless systems such as
multi-cell coordinated systems, multi-user MIMO systems, MIMO cognitive radio
systems, amplify-and-forward MIMO relaying systems and so on, the quadratic
nature is always preserved and the LMMSE transceiver designs can always be
carried out by iteratively solving a number of QMP problems. A comprehensive
framework for solving QMP problems is also given. The work presented in this
paper is likely a first step toward transceiver design for future,
ever-changing wireless systems.
|
1301.0082 | CloudSVM : Training an SVM Classifier in Cloud Computing Systems | cs.LG cs.DC | In conventional methods, distributed support vector machine (SVM) algorithms
are trained over pre-configured intranet/internet environments to find an
optimal classifier. These methods are very complicated and costly for large
datasets. Hence, we propose a method, referred to as the Cloud SVM training
mechanism (CloudSVM), in a cloud computing environment with the MapReduce
technique for distributed machine learning applications. Accordingly, (i) the
SVM algorithm is trained in distributed cloud storage servers that work
concurrently; (ii) all support vectors from every trained cloud node are
merged; and (iii) these two steps are iterated until the SVM converges to the
optimal classifier function. Large-scale data sets cannot be trained with the
SVM algorithm on a single computer. The results of this study are important
for the training of large-scale data sets for machine learning applications.
We prove that iterative training of the split data set in a cloud computing
environment using SVM converges to a global optimal classifier in a finite
number of iterations.
|
1301.0087 | Opportunistic DF-AF Selection Relaying with Optimal Relay Selection in
Nakagami-m Fading Environments | cs.IT math.IT | An opportunistic DF-AF selection relaying scheme with maximal received
signal-to-noise ratio (SNR) at the destination is investigated in this paper.
The outage probability of the opportunistic DF-AF selection relaying scheme
over Nakagami-m fading channels is analyzed, and a closed-form solution is
obtained. We perform asymptotic analysis of the outage probability in high SNR
domain. The coding gain and the diversity order are obtained. For the purpose
of comparison, the asymptotic analysis of opportunistic AF scheme in Nakagami-m
fading channels is also performed by using the Squeeze Theorem. In addition, we
prove that compared with the opportunistic DF scheme and opportunistic AF
scheme, the opportunistic DF-AF selection relaying scheme has better outage
performance.
|
1301.0091 | On the Robust Optimal Stopping Problem | math.PR cs.SY math.OC q-fin.PR | We study a robust optimal stopping problem with respect to a set $\mathcal{P}$
of mutually singular probabilities. This can be interpreted as a zero-sum
controller-stopper game in which the stopper is trying to maximize its payoff
while an adverse player wants to minimize this payoff by choosing an evaluation
criterion from $\mathcal{P}$. We show that the \emph{upper Snell envelope}
$\overline{Z}$ of the reward process $Y$ is a supermartingale with respect to
an appropriately defined nonlinear expectation $\underline{\mathcal{E}}$, and
$\overline{Z}$ is further an $\underline{\mathcal{E}}$-martingale up to the
first time $\tau^*$ when $\overline{Z}$ meets $Y$.
Consequently, $\tau^*$ is the optimal stopping time for the robust optimal
stopping problem and the corresponding zero-sum game has a value. Although the
result seems similar to the one obtained in classical optimal stopping
theory, the mutual singularity of the probabilities and the game aspect of the
problem give rise to major technical hurdles, which we circumvent using some
new methods.
|
1301.0093 | Robustness of Sparse Recovery via $F$-minimization: A Topological
Viewpoint | cs.IT math.IT | A recent trend in compressed sensing is to consider non-convex optimization
techniques for sparse recovery. The important case of $F$-minimization has
become of particular interest, for which the exact reconstruction condition
(ERC) in the noiseless setting can be precisely characterized by the null space
property (NSP). However, little work has been done concerning its robust
reconstruction condition (RRC) in the noisy setting. We look at the null space
of the measurement matrix as a point on the Grassmann manifold, and then study
the relation between the ERC and RRC sets, denoted as $\Omega_J$ and
$\Omega_J^r$, respectively. It is shown that $\Omega_J^r$ is the interior of
$\Omega_J$, from which a previous result of the equivalence of ERC and RRC for
$\ell_p$-minimization follows easily as a special case. Moreover, when $F$ is
non-decreasing, it is shown that
$\overline{\Omega}_J\setminus\mathrm{int}(\Omega_J)$ is a set of measure zero and
of the first category. As a consequence, the probabilities of ERC and RRC are
the same if the measurement matrix $\mathbf{A}$ is randomly generated according
to a continuous distribution. Quantitatively, if the null space
$\mathcal{N}(\mathbf{A})$ lies in the "$d$-interior" of $\Omega_J$, then RRC will be
satisfied with the robustness constant
$C=\frac{2+2d}{d\sigma_{\min}(\mathbf{A}^{\top})}$; and conversely if RRC holds
with $C=\frac{2-2d}{d\sigma_{\max}(\mathbf{A}^{\top})}$, then
$\mathcal{N}(\mathbf{A})$ must lie in the $d$-interior of $\Omega_J$. We also present several rules for
comparing the performances of different cost functions. Finally, these results
are capitalized to derive achievable tradeoffs between the measurement rate and
robustness with the aid of Gordon's escape through the mesh theorem or a
connection between NSP and the restricted eigenvalue condition.
|
1301.0094 | Joint Iterative Power Allocation and Linear Interference Suppression
Algorithms in Cooperative DS-CDMA Networks | cs.IT math.IT | This work presents joint iterative power allocation and interference
suppression algorithms for spread spectrum networks which employ multiple hops
and the amplify-and-forward cooperation strategy for both the uplink and the
downlink. We propose a joint constrained optimization framework that considers
the allocation of power levels across the relays subject to individual and
global power constraints and the design of linear receivers for interference
suppression. We derive constrained linear minimum mean-squared error (MMSE)
expressions for the parameter vectors that determine the optimal power levels
across the relays and the linear receivers. In order to solve the proposed
optimization problems, we develop cost-effective algorithms for adaptive joint
power allocation, and estimation of the parameters of the receiver and the
channels. An analysis of the optimization problem is carried out and shows that
the problem can have its convexity enforced by an appropriate choice of the
power constraint parameter, which allows the algorithms to avoid problems with
local minima. A study of the complexity and the requirements for feedback
channels of the proposed algorithms is also included for completeness.
Simulation results show that the proposed algorithms obtain significant gains
in performance and capacity over existing non-cooperative and cooperative
schemes.
|
1301.0097 | Set-Membership Adaptive Algorithms based on Time-Varying Error Bounds
for Interference Suppression | cs.IT math.IT | This work presents set-membership adaptive algorithms based on time-varying
error bounds for CDMA interference suppression. We introduce a modified family
of set-membership adaptive algorithms for parameter estimation with
time-varying error bounds. The algorithms considered include modified versions
of the set-membership normalized least mean squares (SM-NLMS), the affine
projection (SM-AP) and the bounding ellipsoidal adaptive constrained (BEACON)
recursive least-squares technique. The important issue of error bound
specification is addressed in a new framework that takes into account parameter
estimation dependency, multi-access and inter-symbol interference for DS-CDMA
communications. An algorithm for tracking and estimating the interference power
is proposed and analyzed. This algorithm is then incorporated into the proposed
time-varying error bound mechanisms. Computer simulations show that the
proposed algorithms are capable of outperforming previously reported techniques
with a significantly lower number of parameter updates and a reduced risk of
overbounding or underbounding.
|
1301.0104 | Policy Evaluation with Variance Related Risk Criteria in Markov Decision
Processes | cs.LG stat.ML | In this paper we extend temporal difference policy evaluation algorithms to
performance criteria that include the variance of the cumulative reward. Such
criteria are useful for risk management, and are important in domains such as
finance and process control. We propose both TD(0) and LSTD(lambda) variants
with linear function approximation, prove their convergence, and demonstrate
their utility in a 4-dimensional continuous state space problem.
|