| id | title | categories | abstract |
|---|---|---|---|
1311.3355 | HINO: a BFO-aligned ontology representing human molecular interactions
and pathways | cs.AI cs.DB q-bio.MN | Many database resources, such as Reactome, collect manually annotated
reactions, interactions, and pathways from peer-reviewed publications. The
interactors (e.g., a protein), interactions, and pathways in these data
resources are often represented as instances using BioPAX, a standard
pathway data exchange format. However, these interactions are better
represented as classes (or universals) since they always occur given
appropriate conditions. This study aims to represent various human interaction
pathways and networks as classes via a formal ontology aligned with the Basic
Formal Ontology (BFO). Towards this goal, the Human Interaction Network
Ontology (HINO) was generated by extending the BFO-aligned Interaction Network
Ontology (INO). All human pathways and associated processes and interactors
listed in Reactome and represented in BioPAX were first converted to ontology
classes by aligning them under INO. Related terms and associated relations and
hierarchies from external ontologies (e.g., CHEBI and GO) were also retrieved
and imported into HINO. HINO ontology terms were resolved in the linked
ontology data server Ontobee. The RDF triples, stored in an RDF triple store,
are queryable via SPARQL. Such an ontology system supports
advanced pathway data integration and applications.
|
1311.3365 | Deriving the Qubit from Entropy Principles | quant-ph cs.IT math-ph math.IT math.MP | The Heisenberg uncertainty principle is one of the most famous features of
quantum mechanics. However, the non-determinism implied by the Heisenberg
uncertainty principle --- together with other prominent aspects of quantum
mechanics such as superposition, entanglement, and nonlocality --- poses deep
puzzles about the underlying physical reality, even while these same features
are at the heart of exciting developments such as quantum cryptography,
algorithms, and computing. These puzzles might be resolved if the mathematical
structure of quantum mechanics were built up from physically interpretable
axioms, but it is not. We propose three physically-based axioms which together
characterize the simplest quantum system, namely the qubit. Our starting point
is the class of all no-signaling theories. Each such theory can be regarded as
a family of empirical models, and we proceed to associate entropies, i.e.,
measures of information, with these models. To do this, we move to phase space
and impose the condition that entropies are real-valued. This requirement,
which we call the Information Reality Principle, arises because in order to
represent all no-signaling theories (including quantum mechanics itself) in
phase space, it is necessary to allow negative probabilities (Wigner [1932]).
Our second and third principles take two important features of quantum
mechanics and turn them into deliberately chosen physical axioms. One axiom is
an Uncertainty Principle, stated in terms of entropy. The other axiom is an
Unbiasedness Principle, which requires that whenever there is complete
certainty about the outcome of a measurement in one of three mutually
orthogonal directions, there must be maximal uncertainty about the outcomes in
each of the two other directions.
|
1311.3368 | Anytime Belief Propagation Using Sparse Domains | stat.ML cs.AI cs.LG | Belief Propagation has been widely used for marginal inference; however, it is
slow on problems with large-domain variables and high-order factors. Previous
work provides useful approximations to facilitate inference on such models, but
lacks important anytime properties such as: 1) providing accurate and
consistent marginals when stopped early, 2) improving the approximation when
run longer, and 3) converging to the fixed point of BP. To this end, we propose
a message passing algorithm that works on sparse (partially instantiated)
domains, and converges to consistent marginals using dynamic message
scheduling. The algorithm grows the sparse domains incrementally, selecting the
next value to add using prioritization schemes based on the gradients of the
marginal inference objective. Our experiments demonstrate local anytime
consistency and fast convergence, providing significant speedups over BP to
obtain low-error marginals: up to 25 times on grid models, and up to 6 times on
a real-world natural language processing task.
|
1311.3387 | Performance of General STCs over Spatially Correlated MIMO
Single-keyhole Channels | cs.IT math.IT | For MIMO Rayleigh channels, it has been shown that transmitter correlations
always degrade the performance of general space-time codes (STCs) in high SNR
regimes. In this correspondence, however, we show that when MIMO channels
experience single-keyhole conditions, the effect of spatial correlations
between transmission antennas is more sophisticated for general STCs: when
$M>N$ (i.e., the number of transmission antennas is greater than the number of
receiving antennas), depending on how the correlation matrix $\mathbf{P}$
beamforms the code word difference matrix $\mathbf{\Delta}$, the PEP
performance of general STCs can be either degraded or improved in high SNR
regimes. We provide a new measure, which is based on the eigenvalues of
$\mathbf{\Delta}$ and the numbers of transmission and receiving antennas, to
examine whether there exist certain correlation matrices that can improve the
performance of general STCs in high SNR regimes. Previous studies on the effect
of spatial correlations over single-keyhole channels only concentrated on
orthogonal STCs, while our study here is for general STCs and can also be used
to explain previous findings for orthogonal STCs.
|
1311.3391 | A Class of Six-weight Cyclic Codes and Their Weight Distribution | math.NT cs.IT math.IT | In this paper, a family of six-weight cyclic codes over GF(p) whose duals
have two zeros is presented, where p is an odd prime, and the weight
distribution of these cyclic codes is determined.
|
1311.3394 | Integrated Expert Recommendation Model for Online Communities | cs.SI cs.IR | Online communities have become vital places for Web 2.0 users to share
knowledge and experiences. Recently, finding expert users in a community has
become an important research issue. This paper proposes a novel cascaded model
for expert recommendation that uses aggregated knowledge extracted from large
volumes of content together with social network features. A vector space model
is used to compute the relevance of published content with respect to a
specific query, while the PageRank algorithm is applied to rank candidate
experts. The experimental results show that the proposed model is an effective
recommender, ensuring that the recommended experts are both highly relevant to
the specific queries and highly influential in the corresponding areas.
|
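The cascade described in the abstract above, vector-space relevance followed by PageRank-based authority ranking, can be sketched roughly as follows. The toy term-frequency vectors, the combination weight `alpha`, and the helper names (`cosine`, `pagerank`, `recommend`) are illustrative assumptions, not the paper's actual model.

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse term-frequency vectors (dicts).
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def pagerank(links, d=0.85, iters=50):
    # Basic PageRank over a dict {node: [outgoing neighbours]}.
    nodes = list(links)
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        pr = {u: (1 - d) / n
                 + d * sum(pr[v] / len(links[v]) for v in nodes if u in links[v])
              for u in nodes}
    return pr

def recommend(query, docs, links, alpha=0.5):
    # Cascade: content relevance first, then authority-based re-ranking.
    pr = pagerank(links)
    scores = {u: alpha * cosine(query, doc) + (1 - alpha) * pr[u]
              for u, doc in docs.items()}
    return sorted(scores, key=scores.get, reverse=True)

docs = {"A": {"svm": 2, "kernel": 1}, "B": {"cnn": 3}}
links = {"A": ["B"], "B": ["A"]}
print(recommend({"svm": 1}, docs, links))  # -> ['A', 'B']: A is relevant, authority is equal
```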
1311.3405 | The STONE Transform: Multi-Resolution Image Enhancement and Real-Time
Compressive Video | cs.CV | Compressed sensing enables the reconstruction of high-resolution signals from
under-sampled data. While compressive methods simplify data acquisition, they
require the solution of difficult recovery problems to make use of the
resulting measurements. This article presents a new sensing framework that
combines the advantages of both conventional and compressive sensing. Using the
proposed STONE transform, measurements can be reconstructed instantly at
Nyquist rates at any power-of-two resolution. The same data can then be
"enhanced" to higher resolutions using compressive methods that leverage
sparsity to "beat" the Nyquist limit. The availability of a fast direct
reconstruction enables compressive measurements to be processed on small
embedded devices. We demonstrate this by constructing a real-time compressive
video camera.
|
1311.3416 | Quantum synchronizable codes from finite geometries | quant-ph cs.IT math.IT | Quantum synchronizable error-correcting codes are special quantum
error-correcting codes that are designed to correct both the effect of quantum
noise on qubits and misalignment in block synchronization. It is known that in
principle such a code can be constructed through a combination of a classical
linear code and its subcode if the two are both cyclic and dual-containing.
However, finding such classical codes that lead to promising quantum
synchronizable error-correcting codes is not a trivial task. In fact, although
there are two families of classical codes that are proved to produce quantum
synchronizable codes with good minimum distances and highest possible tolerance
against misalignment, their code lengths have been restricted to primes and
Mersenne numbers. In this paper, examining the incidence vectors of projective
spaces over the finite fields of characteristic $2$, we give quantum
synchronizable codes from cyclic codes whose lengths are not primes or Mersenne
numbers. These projective geometric codes achieve good performance in quantum
error correction and possess the best possible ability to recover
synchronization, thereby enriching the variety of good quantum synchronizable
codes. We also extend the current knowledge of cyclic codes in classical coding
theory by explicitly giving generator polynomials of the finite geometric codes
and completely characterizing the minimum weight nonzero codewords. In addition
to the codes based on projective spaces, we carry out a similar analysis on the
well-known cyclic codes from Euclidean spaces that are known to be majority
logic decodable and determine their exact minimum distances.
|
1311.3428 | Low-complexity End-to-End Performance Optimization in MIMO Full-Duplex
Relay Systems | cs.IT math.IT | In this paper, we deal with the deployment of full-duplex relaying in
amplify-and-forward (AF) cooperative networks with multiple-antenna terminals.
In contrast to previous studies, which focus on the spatial mitigation of the
loopback interference (LI) at the relay node, a joint precoding/decoding design
that maximizes the end-to-end (e2e) performance is investigated. The proposed
precoding incorporates rank-1 zero-forcing (ZF) LI suppression at the relay
node and is derived in closed-form by solving appropriate optimization
problems. In order to further reduce system complexity, the antenna selection
(AS) problem for full-duplex AF cooperative systems is discussed. We
investigate different AS schemes to select a single transmit antenna at both
the source and the relay, as well as a single receive antenna at both the relay
and the destination. To facilitate comparison, exact outage probability
expressions and asymptotic approximations of the proposed AS schemes are
provided. In order to overcome zero-diversity effects associated with the AS
operation, a simple power allocation scheme at the relay node is also
investigated and its optimal value is analytically derived. Numerical and
simulation results show that the joint ZF-based precoding significantly
improves e2e performance, while AS schemes are efficient solutions for
scenarios with strict computational constraints.
|
1311.3429 | Hydrodynamic surrogate models for bio-inspired micro-swimming robots | physics.flu-dyn cs.RO | Research on untethered micro-swimming robots is growing fast owing to their
potential impact on minimally invasive medical procedures. Candidate propulsion
mechanisms of robots are based on flagellar mechanisms of microorganisms such
as rotating rigid helices and traveling plane-waves on flexible rods and
parameterized by wavelength, amplitude, and frequency. For design and control
of swimming robots, accurate real-time models are necessary to compute
trajectories, velocities and hydrodynamic forces acting on robots. Resistive
force theory (RFT) provides an excellent framework for the development of
real-time six degrees-of-freedom surrogate models for design optimization and
control. However, the accuracy of RFT-based models depends strongly on
hydrodynamic interactions. Here, we introduce interaction coefficients that
only multiply body resistance coefficients with no modification to local
resistance coefficients on the tail. Interaction coefficients are obtained for
a single specimen of Vibrio alginolyticus reported in the literature, and used in the
RFT model for comparisons of the forward-swimming component of the resultant
velocities and body rotation rates against other specimens. Furthermore, CFD
simulations are used to obtain forward and lateral velocities and body rotation
rates of bio-inspired swimmers with helical tails and traveling-plane waves for
a range of amplitudes and wavelengths. Interaction coefficients are obtained
from the CFD simulation for the helical tail with the specified amplitude and
wavelength and used in the RFT model for comparisons of velocities and body
rotation rates for other designs. Comparisons indicate that hydrodynamic models
that employ interaction coefficients prove to be viable surrogates for
computationally intensive three-dimensional time-dependent CFD models.
|
1311.3475 | Social Influence and the Collective Dynamics of Opinion Formation | physics.soc-ph cs.SI nlin.AO | Social influence is the process by which individuals adapt their opinion,
revise their beliefs, or change their behavior as a result of social
interactions with other people. In our strongly interconnected society, social
influence plays a prominent role in many self-organized phenomena such as
herding in cultural markets, the spread of ideas and innovations, and the
amplification of fears during epidemics. Yet, the mechanisms of opinion
formation remain poorly understood, and existing physics-based models lack
systematic empirical validation. Here, we report two controlled experiments
showing how participants answering factual questions revise their initial
judgments after being exposed to the opinion and confidence level of others.
Based on the observation of 59 experimental subjects exposed to peer-opinion
for 15 different items, we draw an influence map that describes the strength of
peer influence during interactions. A simple process model derived from our
observations demonstrates how opinions in a group of interacting people can
converge or split over repeated interactions. In particular, we identify two
major attractors of opinion: (i) the expert effect, induced by the presence of
a highly confident individual in the group, and (ii) the majority effect,
caused by the presence of a critical mass of laypeople sharing similar
opinions. Additional simulations reveal the existence of a tipping point at
which one attractor will dominate over the other, driving collective opinion in
a given direction. These findings have implications for understanding the
mechanisms of public opinion formation and managing conflicting situations in
which self-confident and better informed minorities challenge the views of a
large uninformed majority.
|
1311.3485 | A New Algorithm for Distributed Nonparametric Sequential Detection | cs.IT math.IT | We consider the nonparametric sequential hypothesis testing problem when the
distribution under the null hypothesis is fully known but the alternate
hypothesis corresponds to some other unknown distribution with some loose
constraints. We propose a simple algorithm to address the problem. These
problems are primarily motivated from wireless sensor networks and spectrum
sensing in Cognitive Radios. A decentralized version utilizing spatial
diversity is also proposed. Its performance is analysed and asymptotic
properties are proved. The simulated and analysed performance of the algorithm
is compared with an earlier algorithm addressing the same problem with similar
assumptions. We also modify the algorithm for optimizing performance when
information about the prior probabilities of occurrence of the two hypotheses
is known.
|
1311.3494 | Fundamental Limits of Online and Distributed Algorithms for Statistical
Learning and Estimation | cs.LG stat.ML | Many machine learning approaches are characterized by information constraints
on how they interact with the training data. These include memory and
sequential access constraints (e.g. fast first-order methods to solve
stochastic optimization problems); communication constraints (e.g. distributed
learning); partial access to the underlying data (e.g. missing features and
multi-armed bandits), and more. However, we currently have little understanding
of how such information constraints fundamentally affect achievable performance,
independent of the learning problem semantics. For example, are there learning
problems where any algorithm which has small memory footprint (or can use any
bounded number of bits from each example, or has certain communication
constraints) will perform worse than what is possible without such constraints?
In this paper, we describe how a single set of results implies positive answers
to the above, for several different settings.
|
1311.3508 | Demographic and Structural Characteristics to Rationalize Link Formation
in Online Social Networks | cs.SI | Recent years have seen tremendous growth of many online social networks such
as Facebook, LinkedIn and MySpace. People connect to each other through these
networks forming large social communities providing researchers rich datasets
to understand, model and predict social interactions and behaviors. New
contacts in these networks can be formed either due to an individual's
demographic profile such as age group, gender, geographic location or due to
network's structural dynamics such as triadic closure and preferential
attachment, or a combination of both demographic and structural
characteristics.
A number of network generation models have been proposed in the last decade
to explain the structure, evolution and processes taking place in different
types of networks, and notably social networks. Network generation models
studied in the literature primarily consider structural properties, and in some
cases an individual's demographic profile in the formation of new social
contacts. These models do not present a mechanism to combine both structural
and demographic characteristics for the formation of new links. In this paper,
we propose a new network generation algorithm which incorporates both these
characteristics to model the growth of a network. We use different publicly
available Facebook datasets as benchmarks to demonstrate the correctness of the
proposed network generation model.
|
1311.3515 | Model predictive control of voltage profiles in MV networks with
distributed generation | cs.SY | The Model Predictive Control (MPC) approach is used in this paper to control
the voltage profiles in MV networks with distributed generation. The proposed
algorithm lies at the intermediate level of a three-layer hierarchical
structure. At the upper level a static Optimal Power Flow (OPF) manager
computes the required voltage profiles to be transmitted to the MPC level,
while at the lower level local Automatic Voltage Regulators (AVR), one for each
Distributed Generator (DG), track the reactive power reference values computed
by MPC. The control algorithm is based on an impulse response model of the
system, easily obtained by means of a detailed simulator of the network, and
makes it possible to cope with constraints on the voltage profiles and/or on the reactive
power flows along the network. If these constraints cannot be satisfied by
acting on the available DGs, the algorithm acts on the On-Load Tap Changing
(OLTC) transformer. A radial rural network with two feeders, eight DGs, and
thirty-one loads is used as case study. The model of the network is implemented
in DIgSILENT PowerFactory, while the control algorithm runs in Matlab. A number
of simulation results are reported to illustrate the main characteristics and
limitations of the proposed approach.
|
1311.3527 | A new information dimension of complex networks | cs.SI physics.soc-ph | Fractal and self-similarity properties are observed in many real complex
networks. However, the classical information dimension of complex networks is
not practical for real complex networks. In this paper, a new information
dimension to characterize the dimension of complex networks is proposed. The
difference of information for each box in the box-covering algorithm of complex
networks is considered by this measure. The proposed method is applied to
calculate the fractal dimensions of some real networks. Our results show that
the proposed method is efficient for estimating the fractal dimension of
complex networks.
|
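The classical information dimension the abstract builds on can be illustrated with a short sketch: compute the Shannon information of a box covering at several box sizes and fit the slope of I(l) against ln l. The hand-made partitions below are placeholders, not the output of a real box-covering algorithm, and this shows the classical measure rather than the paper's modified one.

```python
import math

def box_entropy(box_sizes, n_total):
    # Shannon information I(l) = -sum p_i ln p_i, where p_i is the fraction
    # of nodes covered by box i at a given box size l.
    return -sum((s / n_total) * math.log(s / n_total) for s in box_sizes)

def information_dimension(partitions, n_total):
    # Fit I(l) ~ -d * ln(l) by least squares over (ln l, I(l)) pairs;
    # `partitions` maps box size l -> list of box occupancy counts.
    xs = [math.log(l) for l in partitions]
    ys = [box_entropy(boxes, n_total) for boxes in partitions.values()]
    xbar = sum(xs) / len(xs)
    ybar = sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope

# A uniform 8-node chain: halving the box count when doubling the box size
# gives I(l) = ln(8/l), i.e. information dimension 1.
partitions = {1: [1] * 8, 2: [2] * 4, 4: [4] * 2}
print(information_dimension(partitions, 8))  # -> 1.0 (up to float error)
```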
1311.3533 | The Hot Bit I: The Szilard-Landauer Correspondence | cs.IT math-ph math.IT math.MP | We present a precise formulation of a correspondence between information and
thermodynamics that was first observed by Szilard, and later studied by
Landauer. The correspondence identifies available free energy with relative
entropy, and provides a dictionary between information and thermodynamics. We
precisely state and prove this correspondence. The paper should be broadly
accessible since we assume no prior knowledge of information theory, developing
it axiomatically, and we assume almost no thermodynamic background.
|
1311.3534 | Enhancing the Energy Efficiency of Radio Base Stations | cs.IT math.IT | This thesis is concerned with the energy efficiency of cellular networks. It
studies the dominant power consumer in future cellular networks, the Long Term
Evolution radio base stations (BS), and proposes mechanisms that enhance the BS
energy efficiency by reducing its power consumption under target rate
constraints. These mechanisms trade spare capacity for power saving.
|
1311.3566 | Improving The Scalability By Contact Information Compression In Routing | cs.IT cs.NI math.IT | Limited scalability and delivery performance motivate the development of
scalable routing based on contact information compression. Previous work
provided a consistent analysis of the performance of DTN hierarchical routing
(DHR), showing that routing performance decreases as the source-to-destination
distance increases. This paper focuses on improving scalability and delivery
through a contact information compression algorithm, and also addresses the
problem of power-aware routing to increase the lifetime of the overall network.
By implementing the contact information compression (CIC) algorithm, the
estimated shortest path (ESP) is detected dynamically. Scalability and delivery
are further improved by multipath multicasting, which delivers the information
to a collection of targets concurrently in a single transmission from the
source.
|
1311.3596 | Simulation-based optimization of transportation costs in high pressure
gas grid | cs.SY | Design, architecture and deployment details of a decision support system
engineered to minimize operating costs of compressor stations in a gas network
are presented. The system employs standard simulation software for pipelines,
combined with a well-known optimization routine for finding optimal station
control profiles in a repetitive way. A list of custom improvements is
presented that make the system capable and robust enough to perform the
optimization tasks. The implementation process is described in detail, covering the
case of handling extra optimality criteria postulated by the user. Benefits
from using the system and lessons learned are presented in the conclusions
section.
|
1311.3598 | A Statistical Model of Information Evaporation of Perfectly Reflecting
Black Holes | quant-ph cs.IT gr-qc hep-th math.IT | We provide a statistical communication model for the phenomenon of quantum
information evaporation from black holes. A black hole behaves as a reflecting
quantum channel in a very special regime, which allows a receiver to
perfectly recover the absorbed quantum information. The quantum channel of a
perfectly reflecting (PR) black hole is the probabilistically weighted sum of
infinitely many qubit cloning channels. In this work, we reveal the statistical
communication background of the information evaporation process of PR black
holes. We show that the density of the cloned quantum particles as a function of
the PR black hole's mass approximates a Chi-square distribution, while the
stimulated emission process is characterized by zero-mean, circular symmetric
complex Gaussian random variables. The results lead to the existence of
Rayleigh random distributed coefficients in the probability density evolution,
which confirms the presence of Rayleigh fading (a special type of random
fluctuation) in the statistical communication model of black hole information
evaporation.
|
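The distributional claims at the end of the abstract rest on a standard fact: the magnitude of a zero-mean circularly symmetric complex Gaussian variable is Rayleigh distributed, and its squared magnitude is chi-square with 2 degrees of freedom. This can be checked numerically; the sample size and unit component variance below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Zero-mean circularly symmetric complex Gaussian samples: independent
# real and imaginary parts, each N(0, 1).
z = rng.normal(0.0, 1.0, n) + 1j * rng.normal(0.0, 1.0, n)

# |z| is Rayleigh(sigma=1), so E|z| = sqrt(pi/2) ~ 1.2533;
# |z|^2 is chi-square with 2 degrees of freedom, so E|z|^2 = 2.
r = np.abs(z)
print(abs(r.mean() - np.sqrt(np.pi / 2)) < 0.01)  # True
print(abs((r ** 2).mean() - 2.0) < 0.05)          # True
```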
1311.3613 | A Bayesian approach to sparse channel estimation in OFDM systems | cs.IT math.IT | In this work, we address the problem of estimating sparse communication
channels in OFDM systems in the presence of carrier frequency offset (CFO) and
unknown noise variance. To this end, we consider a convex optimization problem,
including a probability function, accounting for the sparse nature of the
communication channel. We use the Expectation-Maximization (EM) algorithm to
solve the corresponding Maximum A Posteriori (MAP) estimation problem. We show
that, by concentrating the cost function in one variable, namely the CFO, the
channel estimate can be obtained in closed form within the EM framework in the
maximization step. We present an example where we estimate the communication
channel, the CFO, the symbol, the noise variance, and the parameter defining
the prior distribution of the estimates. We compare the bit error rate
performance of our proposed MAP approach against Maximum Likelihood.
|
1311.3618 | Describing Textures in the Wild | cs.CV | Patterns and textures are defining characteristics of many natural objects: a
shirt can be striped, the wings of a butterfly can be veined, and the skin of
an animal can be scaly. Aiming at supporting this analytical dimension in image
understanding, we address the challenging problem of describing textures with
semantic attributes. We identify a rich vocabulary of forty-seven texture terms
and use them to describe a large dataset of patterns collected in the wild. The
resulting Describable Textures Dataset (DTD) is the basis for seeking the best
texture representation for recognizing describable texture attributes in
images. We port from object recognition to texture recognition the Improved
Fisher Vector (IFV) and show that, surprisingly, it outperforms specialized
texture descriptors not only on our problem, but also in established material
recognition datasets. We also show that the describable attributes are
excellent texture descriptors, transferring between datasets and tasks; in
particular, combined with IFV, they significantly outperform the
state-of-the-art by more than 8 percent on both FMD and KTHTIPS-2b benchmarks.
We also demonstrate that they produce intuitive descriptions of materials and
Internet images.
|
1311.3633 | A coordination model for ultra-large scale systems of systems | cs.SY | Ultra-large multi-agent systems are becoming increasingly popular due to
the rapid decline of individual production costs and their potential to speed up
the solving of complex problems. Examples include nano-robots, systems of
nano-satellites for dangerous meteorite detection, and cultures of stem cells
for organ regeneration or nerve repair. The topics associated with these
systems are usually dealt with within the theories of intelligent swarms or
biologically inspired computation systems. Stochastic models play an important
role, and they are based on various formulations of statistical mechanics.
In these cases, the main assumption is that the swarm elements have a simple
behaviour and that some average properties can be deduced for the entire swarm.
In contrast, complex systems in areas like aeronautics are formed by elements
with sophisticated behaviour, which are even autonomous. In situations like
this, a new approach to swarm coordination is necessary. We present a
stochastic model where the swarm elements are communicating autonomous systems,
the coordination is separated from the component autonomous activity and the
entire swarm can be abstracted away as a piecewise deterministic Markov
process, which constitutes one of the most popular models in stochastic control.
Keywords: ultra large multi-agent systems, system of systems, autonomous
systems, stochastic hybrid systems.
|
1311.3646 | Online Coded Caching | cs.IT cs.NI math.IT | We consider a basic content distribution scenario consisting of a single
origin server connected through a shared bottleneck link to a number of users
each equipped with a cache of finite memory. The users issue a sequence of
content requests from a set of popular files, and the goal is to operate the
caches as well as the server such that these requests are satisfied with the
minimum number of bits sent over the shared link. Assuming a basic Markov model
for renewing the set of popular files, we characterize approximately the
optimal long-term average rate of the shared link. We further prove that the
optimal online scheme has approximately the same performance as the optimal
offline scheme, in which the cache contents can be updated based on the entire
set of popular files before each new request. To support these theoretical
results, we propose an online coded caching scheme termed coded least-recently
sent (LRS) and simulate it for a demand time series derived from the dataset
made available by Netflix for the Netflix Prize. For this time series, we show
that the proposed coded LRS algorithm significantly outperforms the popular
least-recently used (LRU) caching algorithm.
|
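As a point of reference for the comparison above, the conventional least-recently-used (LRU) policy that coded LRS is benchmarked against can be simulated in a few lines; counting misses stands in for the traffic the server must send over the shared link. This is a generic LRU simulator, not the paper's coded caching scheme.

```python
from collections import OrderedDict

def lru_misses(requests, cache_size):
    # Simulate a least-recently-used cache and count the misses, i.e. the
    # requests the origin server must serve over the shared link.
    cache = OrderedDict()
    misses = 0
    for f in requests:
        if f in cache:
            cache.move_to_end(f)           # mark as most recently used
        else:
            misses += 1
            cache[f] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used
    return misses

print(lru_misses(["a", "b", "a", "c", "b", "a"], 2))  # -> 5
```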
1311.3651 | Smoothed Analysis of Tensor Decompositions | cs.DS cs.LG stat.ML | Low rank tensor decompositions are a powerful tool for learning generative
models, and uniqueness results give them a significant advantage over matrix
decomposition methods. However, tensors pose significant algorithmic challenges
and tensor analogs of much of the matrix algebra toolkit are unlikely to exist
because of hardness results. Efficient decomposition in the overcomplete case
(where rank exceeds dimension) is particularly challenging. We introduce a
smoothed analysis model for studying these questions and develop an efficient
algorithm for tensor decomposition in the highly overcomplete case (rank
polynomial in the dimension). In this setting, we show that our algorithm is
robust to inverse polynomial error -- a crucial property for applications in
learning since we are only allowed a polynomial number of samples. While
algorithms are known for exact tensor decomposition in some overcomplete
settings, our main contribution is in analyzing their stability in the
framework of smoothed analysis.
Our main technical contribution is to show that tensor products of perturbed
vectors are linearly independent in a robust sense (i.e. the associated matrix
has singular values that are at least an inverse polynomial). This key result
paves the way for applying tensor methods to learning problems in the smoothed
setting. In particular, we use it to obtain results for learning multi-view
models and mixtures of axis-aligned Gaussians where there are many more
"components" than dimensions. The assumption here is that the model is not
adversarially chosen, formalized by a perturbation of model parameters. We
believe this is an appealing way to analyze realistic instances of learning
problems, since this framework allows us to overcome many of the usual
limitations of using tensor methods.
|
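The key technical claim, that tensor products of perturbed vectors are linearly independent in a robust sense (the associated matrix has its smallest singular value bounded away from zero), can be probed numerically. The dimension, number of components, and perturbation size below are arbitrary illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 5, 8  # dimension d, number of components k (overcomplete: k > d)

# Perturbed vectors: arbitrary base vectors plus small Gaussian noise.
base = rng.standard_normal((k, d))
perturbed = base + 0.01 * rng.standard_normal((k, d))

# Columns are the flattened order-2 tensor products x_i (x) x_i in R^{d^2}.
M = np.column_stack([np.outer(x, x).ravel() for x in perturbed])

# Robust linear independence: smallest singular value bounded away from 0.
sigma_min = np.linalg.svd(M, compute_uv=False).min()
print(sigma_min > 0)  # True for generic perturbed vectors
```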
1311.3669 | Scalable Influence Estimation in Continuous-Time Diffusion Networks | cs.SI cs.LG | If a piece of information is released from a media site, can it spread, in 1
month, to a million web pages? This influence estimation problem is very
challenging since both the time-sensitive nature of the problem and the issue
of scalability need to be addressed simultaneously. In this paper, we propose a
randomized algorithm for influence estimation in continuous-time diffusion
networks. Our algorithm can estimate the influence of every node in a network
with |V| nodes and |E| edges to an accuracy of $\varepsilon$ using
$n=O(1/\varepsilon^2)$ randomizations and, up to logarithmic factors,
O(n|E|+n|V|) computations. When used as a subroutine in a greedy influence
maximization algorithm, our proposed method is guaranteed to find a set of
nodes with an influence of at least (1-1/e)OPT-2$\varepsilon$, where OPT is the
optimal value. Experiments on both synthetic and real-world data show that the
proposed method can easily scale up to networks of millions of nodes while
significantly improving over the previous state of the art in terms of the accuracy
of the estimated influence and the quality of the selected nodes in maximizing
the influence.
|
1311.3674 | Diversity and Social Network Structure in Collective Decision Making:
Evolutionary Perspectives with Agent-Based Simulations | cs.MA cs.NE cs.SI physics.soc-ph | Collective, especially group-based, managerial decision making is crucial in
organizations. Using an evolutionary theoretic approach to collective decision
making, agent-based simulations were conducted to investigate how human
collective decision making would be affected by the agents' diversity in
problem understanding and/or behavior in discussion, as well as by their social
network structure. Simulation results indicated that groups with consistent
problem understanding tended to produce higher utility values of ideas and
displayed better decision convergence, but only if there was no group-level
bias in collective problem understanding. Simulation results also indicated the
importance of balance between selection-oriented (i.e., exploitative) and
variation-oriented (i.e., explorative) behaviors in discussion to achieve
quality final decisions. Expanding the group size and introducing non-trivial
social network structure generally improved the quality of ideas at the cost of
decision convergence. Simulations with different social network topologies
revealed that collective decision making on small-world networks with high local
clustering tended to achieve the highest decision quality more often than on random
or scale-free networks. Implications of this evolutionary theory and simulation
approach for future managerial research on collective, group, and multi-level
decision making are discussed.
|
1311.3715 | Recognizing Image Style | cs.CV | The style of an image plays a significant role in how it is viewed, but style
has received little attention in computer vision research. We describe an
approach to predicting the style of images, and perform a thorough evaluation of
different image features for these tasks. We find that features learned in a
multi-layer network generally perform best -- even when trained with object
class (not style) labels. Our large-scale learning methods result in the best
published performance on an existing dataset of aesthetic ratings and
photographic style annotations. We present two novel datasets: 80K Flickr
photographs annotated with 20 curated style labels, and 85K paintings annotated
with 25 style/genre labels. Our approach shows excellent classification
performance on both datasets. We use the learned classifiers to extend
traditional tag-based image search to consider stylistic constraints, and
demonstrate cross-dataset understanding of style.
|
1311.3732 | Exploiting Direct and Indirect Information for Friend Suggestion in
ZingMe | cs.SI cs.IR physics.soc-ph | Friend suggestion is a fundamental problem in social networks with the goal
of assisting users in creating more relationships, thereby enhancing users'
interest in the social network. This problem is often considered to
be the link prediction problem in the network. ZingMe is one of the largest
social networks in Vietnam. In this paper, we analyze the current approach for
the friend suggestion problem in ZingMe, showing its limitations and
disadvantages. We propose a new efficient approach for friend suggestion that
uses information from the network structure, attributes and interactions of
users to create resources for the evaluation of friend connection amongst
users. Friend connection is evaluated exploiting both direct communication
between the users and information from other ones in the network. The proposed
approach has been implemented in a new system version of ZingMe. We conducted
experiments, exploiting a dataset derived from the users' real use of ZingMe,
to compare the newly proposed approach to the current approach and some
well-known ones for the accuracy of friend suggestion. The experimental results
show that the newly proposed approach outperforms the current one, with an
increase of 7% to 98% on average in friend suggestion accuracy. The
proposed approach also outperforms the other approaches for users who have a
small number of friends, with improvements from 20% to 85% on average. In this paper, we also
discuss a number of open issues and possible improvements for the proposed
approach.
|
1311.3735 | Ensemble Relational Learning based on Selective Propositionalization | cs.LG cs.AI | Dealing with structured data requires expressive representation
formalisms that, however, raise the problem of coping with the computational
complexity of the machine learning process. Furthermore, real-world domains
require tools able to manage their typical uncertainty. Many statistical
relational learning approaches try to deal with these problems by combining the
construction of relevant relational features with a probabilistic tool. When
the combination is static (static propositionalization), the constructed
features are considered as boolean features and used offline as input to a
statistical learner; while, when the combination is dynamic (dynamic
propositionalization), the feature construction and probabilistic tool are
combined into a single process. In this paper we propose a selective
propositionalization method that searches for the optimal set of relational features
to be used by a probabilistic learner in order to minimize a loss function. The
new propositionalization approach has been combined with the random subspace
ensemble method. Experiments on real-world datasets show the validity of the
proposed method.
|
1311.3764 | Modeling systemic risks in financial markets | q-fin.RM cs.SI physics.soc-ph | We survey systemic risks to financial markets and present a high-level
description of an algorithm that measures systemic risk in terms of coupled
networks.
|
1311.3772 | Impact of system state dynamics on PMU placement in the electric power
grid | cs.SY | The goal of this paper is to study the impact of the dynamic nature of bus
voltage magnitudes and phase angles, which constitute the state of the power
system, on the phasor measurement unit (PMU) placement problem. To facilitate
this study, the placement problem is addressed from the perspective of the
electrical structure which, unlike existing work on PMU placement, accounts for
the sensitivity between power injections and nodal phase angle differences
between various buses in the power network. A linear dynamic model captures the
time evolution of system states, and a simple procedure is devised to estimate
the state transition function at each time instant. The placement problem is
formulated as a series (time steps) of binary integer programs, with the goal
to obtain the minimum number of PMUs at each time step for complete network
observability in the absence of zero injection measurements. Experiments are
conducted on several standard IEEE test bus systems. The main thesis of this
study is that, owing to the dynamic nature of the system states, for optimal
power system operation the best one could do is to install a PMU on each bus of
the given network, though this is undesirable from an economic standpoint.
|
1311.3773 | Non-Convex Compressed Sensing Using Partial Support Information | cs.IT math.IT math.OC | In this paper we address the recovery conditions of weighted $\ell_p$
minimization for signal reconstruction from compressed sensing measurements
when partial support information is available. We show that weighted $\ell_p$
minimization with $0<p<1$ is stable and robust under weaker sufficient
conditions compared to weighted $\ell_1$ minimization. Moreover, the sufficient
recovery conditions of weighted $\ell_p$ are weaker than those of regular
$\ell_p$ minimization if at least $50\%$ of the support estimate is accurate. We
also review some algorithms which exist to solve the non-convex $\ell_p$
problem and illustrate our results with numerical experiments.
|
1311.3779 | On Pole Placement and Invariant Subspaces | cs.SY | The classical eigenvalue assignment problem is revisited in this note. We
derive an analytic expression for pole placement which represents a slight
generalization of the celebrated Bass-Gura and Ackermann formulae, and also is
closely related to the modal procedure of Simon and Mitter.
|
1311.3800 | Structural Weights in Ontology Matching | cs.AI cs.IR | Ontology matching finds correspondences between similar entities of different
ontologies. Two ontologies may be similar in some aspects, such as structure,
semantics, etc. Most ontology matching systems integrate multiple matchers to
extract all the similarities that two ontologies may have. Thus, we face a
major problem to aggregate different similarities. Some matching systems use
experimental weights for aggregation of similarities among different matchers
while others use machine learning approaches and optimization algorithms to
find optimal weights to assign to different matchers. However, both approaches
have their own deficiencies. In this paper, we will point out the problems and
shortcomings of current similarity aggregation strategies. Then, we propose a
new strategy, which enables us to utilize the structural information of
ontologies to get weights of matchers, for the similarity aggregation task. For
achieving this goal, we create a new Ontology Matching system which uses
three available matchers, namely GMO, ISub and VDoc. We have tested our
similarity aggregation strategy on the OAEI 2012 data set. Experimental results
show significant improvements in accuracies of several cases, especially in
matching the classes of ontologies. We will compare the performance of our
similarity aggregation strategy with other well-known strategies.
|
1311.3808 | Periodicity Extraction using Superposition of Distance Matching Function
and One-dimensional Haar Wavelet Transform | cs.CV | Periodicity of a texture is one of the important visual characteristics and
is often used as a measure for textural discrimination at the structural level.
Knowledge about periodicity of a texture is very essential in the field of
texture synthesis and texture compression, and also in the design of frieze
patterns and wallpapers. In this paper, we propose a method of periodicity extraction from
noisy images based on superposition of distance matching function (DMF) and
wavelet decomposition without de-noising the test images. Overall DMFs are
subjected to single-level Haar wavelet decomposition to obtain approximate and
detailed coefficients. Extracted coefficients help in determination of
periodicities in row and column directions. We illustrate the usefulness and
the effectiveness of the proposed method in a texture synthesis application.
|
1311.3826 | Weak Singular Hybrid Automata | cs.FL cs.CC cs.SY | The framework of hybrid automata, introduced by Alur, Courcoubetis,
Henzinger, and Ho, provides a formal modeling and analysis environment to
analyze the interaction between the discrete and the continuous parts of
cyber-physical systems. Hybrid automata can be considered as generalizations of
finite state automata augmented with a finite set of real-valued variables
whose dynamics in each state is governed by a system of ordinary differential
equations. Moreover, the discrete transitions of hybrid automata are guarded by
constraints over the values of these real-valued variables, and enable
discontinuous jumps in the evolution of these variables. Singular hybrid
automata are a subclass of hybrid automata where dynamics is specified by
state-dependent constant vectors. Henzinger, Kopke, Puri, and Varaiya showed
that for even very restricted subclasses of singular hybrid automata, the
fundamental verification questions, like reachability and schedulability, are
undecidable. In this paper we present \emph{weak singular hybrid automata}
(WSHA), a previously unexplored subclass of singular hybrid automata, and show
the decidability (and the exact complexity) of various verification questions
for this class including reachability (NP-Complete) and LTL model-checking
(PSPACE-Complete). We further show that extending WSHA with a single
unrestricted clock or with unrestricted variable updates leads to
undecidability of the reachability problem.
|
1311.3829 | Planning based on classification by induction graph | cs.AI | In Artificial Intelligence, planning refers to an area of research that
proposes to develop systems that can automatically generate a result set, in
the form of an integrated decision-making system, through a formal procedure
known as a plan. Instead of resorting to scheduling algorithms to generate
plans, we propose to use automatic learning by decision trees to
optimize time. In this paper, we propose to build a classification model by
induction graph from a learning sample containing plans that have an associated
set of descriptors whose values change depending on each plan. This model will
then be used to classify new cases by assigning the appropriate plan.
|
1311.3837 | SBML for optimizing decision support's tools | cs.CE | Many theoretical works and tools in the epidemiological field reflect the
growing emphasis on decision-making tools by both the public health and
scientific communities. Indeed, in the epidemiological field,
modeling tools are proving to be a very important aid to decision making.
However, the variety, the large volume of data and the nature of epidemics lead
us to seek solutions to alleviate the heavy burden imposed on both experts and
developers. In this paper, we present a new approach: the translation of an
epidemic model realized in Bio-PEPA into a narrative language using the basics
of the SBML language. Our goal is to allow, on the one hand, epidemiologists to
verify and validate the model and, on the other hand, developers to optimize the model in
order to achieve a better model of decision making. We also present some
preliminary results and some suggestions to improve the simulated model.
|
1311.3840 | Mixing Energy Models in Genetic Algorithms for On-Lattice Protein
Structure Prediction | cs.CE cs.NE | Protein structure prediction (PSP) is computationally a very challenging
problem. The challenge largely comes from the fact that the energy function
that needs to be minimised in order to obtain the native structure of a given
protein is not clearly known. A high resolution 20x20 energy model could better
capture the behaviour of the actual energy function than a low resolution
energy model such as hydrophobic polar. However, the fine grained details of
the high resolution interaction energy matrix are often not very informative
for guiding the search. In contrast, a low resolution energy model could
effectively bias the search towards certain promising directions. In this
paper, we develop a genetic algorithm that mainly uses a high resolution energy
model for protein structure evaluation but uses a low resolution HP energy
model in focussing the search towards exploring structures that have
hydrophobic cores. We experimentally show that this mixing of energy models
leads to significantly lower-energy structures compared to state-of-the-art
results.
|
1311.3859 | Mapping cognitive ontologies to and from the brain | stat.ML cs.LG q-bio.NC | Imaging neuroscience links brain activation maps to behavior and cognition
via correlational studies. Due to the nature of the individual experiments,
based on eliciting neural response from a small number of stimuli, this link is
incomplete, and unidirectional from the causal point of view. To come to
conclusions on the function implied by the activation of brain regions, it is
necessary to combine a wide exploration of the various brain functions and some
inversion of the statistical inference. Here we introduce a methodology for
accumulating knowledge towards a bidirectional link between observed brain
activity and the corresponding function. We rely on a large corpus of imaging
studies and a predictive engine. Technically, the challenges are to find
commonality between the studies without denaturing the richness of the corpus.
The key elements that we contribute are labeling the tasks performed with a
cognitive ontology, and modeling the long tail of rare paradigms in the corpus.
To our knowledge, our approach is the first demonstration of predicting the
cognitive content of completely new brain images. To that end, we propose a
method that predicts the experimental paradigms across different studies.
|
1311.3868 | On the automorphism groups of binary linear codes | cs.IT math.CO math.IT | Let C be a binary linear code and suppose that its automorphism group
contains a non-trivial subgroup G. What can we say about C knowing G? In this
paper we collect some answers to this question in the cases G=C_p, G=C_2p and
G=D_2p (p an odd prime), with particular regard to the case in which C is
self-dual. Furthermore we generalize some methods used in other papers on this
subject. Finally we give a short survey on the problem of determining the
automorphism group of a putative self-dual [72,36,16] code, in order to show
where these methods can be applied.
|
1311.3877 | Optimal Networks from Error Correcting Codes | cs.IT cs.NI math.CO math.IT | To address growth challenges facing large Data Centers and supercomputing
clusters, a new construction is presented for scalable, high-throughput,
low-latency networks. The resulting networks require 1.5-5 times fewer switches,
2-6 times fewer cables, have 1.2-2 times lower latency and correspondingly
lower congestion and packet losses than the best present or proposed networks
providing the same number of ports at the same total bisection. These advantage
ratios increase with network size. The key new ingredient is the exact
equivalence discovered between the problem of maximizing network bisection for
large classes of practically interesting Cayley graphs and the problem of
maximizing codeword distance for linear error-correcting codes. The resulting
translation recipe converts existing optimal error-correcting codes into
optimal-throughput networks.
|
1311.3879 | Answering SPARQL queries modulo RDF Schema with paths | cs.DB | SPARQL is the standard query language for RDF graphs. In its strict
instantiation, it only offers querying according to the RDF semantics and would
thus ignore the semantics of data expressed with respect to (RDF) schemas or
(OWL) ontologies. Several extensions to SPARQL have been proposed to query RDF
data modulo RDFS, i.e., interpreting the query with RDFS semantics and/or
considering external ontologies. We introduce a general framework which allows
for expressing query answering modulo a particular semantics in a homogeneous
way. In this paper, we discuss extensions of SPARQL that use regular
expressions to navigate RDF graphs and may be used to answer queries
considering RDFS semantics. We also consider their embedding as extensions of
SPARQL. These SPARQL extensions are interpreted within the proposed framework
and their drawbacks are presented. In particular, we show that the PSPARQL
query language, a strict extension of SPARQL offering transitive closure,
allows for answering SPARQL queries modulo RDFS graphs with the same complexity
as SPARQL through a simple transformation of the queries. We also consider
languages which, in addition to paths, provide constraints. In particular, we
present and compare nSPARQL and our proposal CPSPARQL. We show that CPSPARQL is
expressive enough to answer full SPARQL queries modulo RDFS. Finally, we
compare the expressiveness and complexity of both nSPARQL and the corresponding
fragment of CPSPARQL, which we call cpSPARQL. We show that both languages have
the same complexity, though cpSPARQL, being a proper extension of SPARQL graph
patterns, is more expressive than nSPARQL.
|
1311.3882 | Sampling Content Distributed Over Graphs | cs.SI physics.soc-ph | Despite recent efforts to estimate topology characteristics of large graphs
(i.e., online social networks and peer-to-peer networks), little attention has
been given to developing a formal methodology to characterize the vast amount of
content distributed over these networks. Due to the large scale nature of these
networks, exhaustive enumeration of this content is computationally
prohibitive. In this paper, we show how one can obtain content properties by
sampling only a small fraction of vertices. We first show that when sampling is
naively applied, this can produce a huge bias in content statistics (i.e.,
average number of content duplications). To remove this bias, one may use
maximum likelihood estimation to estimate content characteristics. However our
experimental results show that one needs to sample most vertices in the graph
to obtain accurate statistics using such a method. To address this challenge,
we propose two efficient estimators: special copy estimator (SCE) and weighted
copy estimator (WCE) to measure content characteristics using available
information in sampled contents. SCE uses the special content copy indicator to
compute the estimate, while WCE derives the estimate based on meta-information
in sampled vertices. We perform experiments to show WCE and SCE are cost
effective and also ``{\em asymptotically unbiased}''. Our methodology provides
a new tool for researchers to efficiently query content distributed in large
scale networks.
|
1311.3887 | Relating different quantum generalizations of the conditional Renyi
entropy | quant-ph cs.IT math.IT | Recently a new quantum generalization of the Renyi divergence and the
corresponding conditional Renyi entropies was proposed. Here we report on a
surprising relation between conditional Renyi entropies based on this new
generalization and conditional Renyi entropies based on the quantum relative
Renyi entropy that was used in previous literature. Our result generalizes the
well-known duality relation H(A|B) + H(A|C) = 0 of the conditional von Neumann
entropy for tripartite pure states to Renyi entropies of two different kinds.
As a direct application, we prove a collection of inequalities that relate
different conditional Renyi entropies and derive a new entropic uncertainty
relation.
|
1311.3900 | The Stabilizing Role of Global Alliances in the Dynamics of Coalition
Forming | physics.soc-ph cs.SI | Coalition forming is investigated among countries, which are coupled with
short range interactions, under the influence of external fields produced by
the existence of global alliances. The model rests on the natural model of
coalition forming inspired from Statistical Physics, where instabilities are a
consequence of decentralized maximization of the individual benefits of actors
within their long horizon of rationality as the ability to envision a way
through intermediate losing states, to a better configuration. The effects of
those external incentives on the interactions between countries and the
eventual stabilization of coalitions are studied. The results shed new light
on the understanding of the complex phenomena of stabilization and
fragmentation in the coalition dynamics and on the possibility to design stable
coalitions. In addition to the formal implementation of the model, the
phenomenon is illustrated through some historical cases of conflicts in Western
Europe.
|
1311.3918 | Sum Secrecy Rate in Full-Duplex Wiretap Channel with Imperfect CSI | cs.IT math.IT | In this paper, we consider the achievable sum secrecy rate in full-duplex
wiretap channel in the presence of an eavesdropper and imperfect channel state
information (CSI). We assume that the users participating in full-duplex
communication and the eavesdropper have single antenna each. The users have
individual transmit power constraints. They also transmit jamming signals to
improve the secrecy rates. We obtain the achievable perfect secrecy rate region
by maximizing the sum secrecy rate. We also obtain the corresponding optimum
powers of the message signals and the jamming signals. Numerical results that
show the impact of imperfect CSI on the achievable secrecy rate region are
presented.
|
1311.3959 | Clustering Markov Decision Processes For Continual Transfer | cs.AI cs.LG | We present algorithms to effectively represent a set of Markov decision
processes (MDPs), whose optimal policies have already been learned, by a
smaller source subset for lifelong, policy-reuse-based transfer learning in
reinforcement learning. This is necessary when the number of previous tasks is
large and the cost of measuring similarity counteracts the benefit of transfer.
The source subset forms an `$\epsilon$-net' over the original set of MDPs, in
the sense that for each previous MDP $M_p$, there is a source $M^s$ whose
optimal policy has $<\epsilon$ regret in $M_p$. Our contributions are as
follows. We present EXP-3-Transfer, a principled policy-reuse algorithm that
optimally reuses a given source policy set when learning for a new MDP. We
present a framework to cluster the previous MDPs to extract a source subset.
The framework consists of (i) a distance $d_V$ over MDPs to measure
policy-based similarity between MDPs; (ii) a cost function $g(\cdot)$ that uses
$d_V$ to measure how good a particular clustering is for generating useful
source tasks for EXP-3-Transfer and (iii) a provably convergent algorithm,
MHAV, for finding the optimal clustering. We validate our algorithms through
experiments in a surveillance domain.
|
1311.3961 | HEVAL: Yet Another Human Evaluation Metric | cs.CL | Machine translation evaluation is a very important activity in machine
translation development. Automatic evaluation metrics proposed in literature
are inadequate as they require one or more human reference translations to
compare them with output produced by machine translation. This does not always
give accurate results as a text can have several different translations. Human
evaluation metrics, on the other hand, lack inter-annotator agreement and
repeatability. In this paper we have proposed a new human evaluation metric
which addresses these issues. Moreover this metric also provides solid grounds
for making sound assumptions about the quality of the text produced by machine
translation.
|
1311.3979 | Precision improvement of MEMS gyros for indoor mobile robots with
horizontal motion inspired by methods of TRIZ | cs.RO cs.SY | In the paper, the problem of precision improvement for the MEMS gyrosensors
on indoor robots with horizontal motion is solved by methods of TRIZ ("the
theory of inventive problem solving").
|
1311.3982 | Inferring Multilateral Relations from Dynamic Pairwise Interactions | cs.AI cs.SI | Correlations between anomalous activity patterns can yield pertinent
information about complex social processes: a significant deviation from normal
behavior, exhibited simultaneously by multiple pairs of actors, provides
evidence for some underlying relationship involving those pairs---i.e., a
multilateral relation. We introduce a new nonparametric Bayesian latent
variable model that explicitly captures correlations between anomalous
interaction counts and uses these shared deviations from normal activity
patterns to identify and characterize multilateral relations. We showcase our
model's capabilities using the newly curated Global Database of Events,
Location, and Tone, a dataset that has seen considerable interest in the social
sciences and the popular press, but which is largely unexplored by the
machine learning community. We provide a detailed analysis of the latent
structure inferred by our model and show that the multilateral relations
correspond to major international events and long-term international
relationships. These findings lead us to recommend our model for any
data-driven analysis of interaction networks where dynamic interactions over
the edges provide evidence for latent social structure.
|
1311.3984 | Improving the performance of algorithms to find communities in networks | physics.soc-ph cs.SI | Many algorithms to detect communities in networks typically work without any
information on the cluster structure to be found, as one has no a priori
knowledge of it, in general. Not surprisingly, knowing some features of the
unknown partition could help its identification, yielding an improvement of the
performance of the method. Here we show that, if the number of clusters were
known beforehand, standard methods, like modularity optimization, would
considerably gain in accuracy, mitigating the severe resolution bias that
undermines the reliability of the results of the original unconstrained
version. The number of clusters can be inferred from the spectra of the
recently introduced non-backtracking and flow matrices, even in benchmark
graphs with realistic community structure. The limit of such a two-step procedure
is the overhead of the computation of the spectra.
|
1311.3987 | Big Data and Cross-Document Coreference Resolution: Current State and
Future Opportunities | cs.CL cs.DC cs.IR | Information Extraction (IE) is the task of automatically extracting
structured information from unstructured/semi-structured machine-readable
documents. Among various IE tasks, extracting actionable intelligence from
ever-increasing amount of data depends critically upon Cross-Document
Coreference Resolution (CDCR) - the task of identifying entity mentions across
multiple documents that refer to the same underlying entity. Recently, document
datasets on the order of peta-/terabytes have raised many challenges for
performing effective CDCR such as scaling to large numbers of mentions and
limited representational power. The problem of analysing such datasets is
called "big data". The aim of this paper is to provide readers with an
understanding of the central concepts, subtasks, and the current
state of the art in the CDCR process. We provide an assessment of existing
tools/techniques for CDCR subtasks and highlight big data challenges in each of
them to help readers identify important and outstanding issues for further
investigation. Finally, we provide concluding remarks and discuss possible
directions for future work.
|
1311.3995 | Compressed Sensing for Energy-Efficient Wireless Telemonitoring:
Challenges and Opportunities | cs.IT math.IT stat.ML | As a lossy compression framework, compressed sensing has drawn much attention
in wireless telemonitoring of biosignals due to its ability to reduce energy
consumption and make possible the design of low-power devices. However, the
non-sparseness of biosignals presents a major challenge to compressed sensing.
This study proposes and evaluates a spatio-temporal sparse Bayesian learning
algorithm, which has the desired ability to recover such non-sparse biosignals.
It exploits both temporal correlation in each individual biosignal and
inter-channel correlation among biosignals from different channels. The
proposed algorithm was used for compressed sensing of multichannel
electroencephalographic (EEG) signals for estimating vehicle drivers'
drowsiness. Results showed that the drowsiness estimation was almost unaffected
even if raw EEG signals (containing various artifacts) were compressed by 90%.
|
1311.4013 | Percolation on the institute-enterprise R&D collaboration networks | physics.soc-ph cs.SI | Realistic network-like systems are usually composed of multiple networks with
interacting relations such as school-enterprise research and development
collaboration networks. Here we study the percolation properties of one
special kind of R&D collaboration network, namely institute-enterprise R&D
collaboration networks (IERDCNs). We introduce two actual IERDCNs to show their
structural properties, and present a mathematical framework based on generating
functions for analyzing an interacting network with any connection probability.
Then we derive the percolation threshold and the calculation of structural
parameters in the sub-critical and supercritical regimes. We compare the
predictions of our mathematical framework and calculations to data for two real
R&D collaboration networks and a number of simulations, and we find that they
are in remarkable agreement with the data. We show applications of the
framework to electronics R&D collaboration networks.
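The single-network building block of such a generating-function analysis can be sketched as follows; this is the standard configuration-model result for one network, not the paper's coupled two-network framework:

```python
import numpy as np

def bond_percolation_threshold(degrees):
    """Bond percolation threshold of a configuration-model network,
    from the standard generating-function result
    p_c = <k> / (<k^2> - <k>), given the degree sequence."""
    k = np.asarray(degrees, dtype=float)
    mean_k = k.mean()
    mean_k2 = (k ** 2).mean()
    return mean_k / (mean_k2 - mean_k)
```

For a 3-regular network this gives p_c = 3 / (9 - 3) = 0.5; heavier-tailed degree sequences push the threshold lower.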
|
1311.4015 | A Three-class ROC for Evaluating Doubletalk Detectors in Acoustic Echo
Cancellation | cs.IT math.IT | A doubletalk detector (DTD) is essential to keep the adaptive filter from
diverging in the presence of near-end speech in acoustic echo cancellation
(AEC), and a receiver operating characteristic (ROC) has been used to
characterize DTD performance. However, the traditional ROC for evaluating
DTDs used a static, time-invariant room acoustic impulse response and could
not evaluate DTDs that distinguish echo path changes from doubletalk. We
solve these problems by extending the traditional binary-detection ROC to
three classes, and simulations show the effectiveness of the proposed method.
|
1311.4029 | Blind Deconvolution with Non-local Sparsity Reweighting | cs.CV | Blind deconvolution has made significant progress in the past decade. Most
successful algorithms are classified either as Variational or Maximum
a-Posteriori ($MAP$). In spite of the superior theoretical justification of
variational techniques, carefully constructed $MAP$ algorithms have proven
equally effective in practice. In this paper, we show that all successful $MAP$
and variational algorithms share a common framework, relying on the following
key principles: sparsity promotion in the gradient domain, $l_2$ regularization
for kernel estimation, and the use of convex (often quadratic) cost functions.
Our observations lead to a unified understanding of the principles required for
successful blind deconvolution. We incorporate these principles into a novel
algorithm that improves significantly upon the state of the art.
|
1311.4033 | A Comparative Study of Histogram Equalization Based Image Enhancement
Techniques for Brightness Preservation and Contrast Enhancement | cs.CV | Histogram equalization is a contrast enhancement technique in image
processing that uses the histogram of the image. However, histogram
equalization alone is often not the best method for contrast enhancement,
because the mean brightness of the output image differs significantly from
that of the input image. Several extensions of histogram equalization have
been proposed to overcome this brightness-preservation challenge.
Brightness-preserving bi-histogram equalization (BBHE) and dualistic
sub-image histogram equalization (DSIHE) divide the image histogram into
two parts, based on the input mean and median respectively, and then
equalize each sub-histogram independently. This paper provides a review of
popular histogram equalization techniques and an experimental study based
on the absolute mean brightness error (AMBE), peak signal-to-noise ratio
(PSNR), structural similarity index (SSI), and entropy.
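A minimal sketch of the BBHE step described above (DSIHE is identical except that it splits at the median); the uint8 grayscale layout and the per-range remapping details are my assumptions, not taken from the paper:

```python
import numpy as np

def bbhe(img):
    """Brightness-preserving bi-histogram equalization (BBHE) sketch:
    split the histogram at the input mean, then equalize each
    sub-histogram independently within its own intensity range."""
    img = np.asarray(img, dtype=np.uint8)
    m = int(img.mean())  # split point (DSIHE would use the median)
    out = np.empty_like(img)
    for lo, hi, mask in [(0, m, img <= m), (m + 1, 255, img > m)]:
        if hi < lo or not mask.any():
            continue
        vals = img[mask]
        hist = np.bincount(vals, minlength=256)[lo:hi + 1]
        cdf = hist.cumsum() / hist.sum()
        # map each level into [lo, hi] via its sub-histogram CDF
        out[mask] = (lo + cdf[vals - lo] * (hi - lo)).astype(np.uint8)
    return out
```

Because each half is stretched only within its own range, the output mean stays near the input mean, which is exactly the brightness-preservation property that AMBE measures.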
|
1311.4040 | Enhanced XML Validation using SRML | cs.DB | Data validation is becoming more and more important with the ever-growing
amount of data being consumed and transmitted by systems over the Internet. It
is important to ensure that the data being sent is valid as it may contain
entry errors, which may be consumed by different systems causing further
errors. XML has become the defacto standard for data transfer. The XML Schema
Definition language (XSD) was created to help XML structural validation and
provide a schema for data type restrictions, however it does not allow for more
complex situations. In this article we introduce a way to provide rule based
XML validation and correction through the extension and improvement of our SRML
metalanguage. We also explore the option of applying it in a database as a
trigger for CRUD operations allowing more granular dataset validation on an
atomic level allowing for more complex dataset record validation rules.
|
1311.4056 | A generalized evidence distance | cs.AI cs.IT math.IT | Dempster-Shafer theory of evidence (D-S theory) is widely used in uncertain
information processing. The basic probability assignment (BPA) is a key
element of D-S theory, and how to measure the distance between two BPAs is
an open issue. In this paper, a new method to measure the distance between
two BPAs is proposed. The proposed method is a generalization of the
existing evidence distance. Numerical examples illustrate that the proposed
method can overcome the shortcomings of existing methods.
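For concreteness, the evidence distance usually taken as the baseline in this line of work is Jousselme's distance, which weights the difference between two BPAs by the Jaccard similarity of their focal elements. A minimal sketch, assuming non-empty focal elements:

```python
import numpy as np

def jousselme_distance(m1, m2):
    """Jousselme distance between two BPAs given as dicts mapping
    frozenset focal elements (assumed non-empty) to mass values:
    d = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)),
    where D[A, B] = |A & B| / |A | B| (Jaccard similarity)."""
    focals = sorted(set(m1) | set(m2), key=lambda s: (len(s), sorted(s)))
    v1 = np.array([m1.get(A, 0.0) for A in focals])
    v2 = np.array([m2.get(A, 0.0) for A in focals])
    D = np.array([[len(A & B) / len(A | B) for B in focals] for A in focals])
    diff = v1 - v2
    return float(np.sqrt(0.5 * diff @ D @ diff))
```

Two BPAs concentrating all mass on disjoint singletons are at the maximal distance 1, while identical BPAs are at distance 0.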
|
1311.4064 | Methods for Integrating Knowledge with the Three-Weight Optimization
Algorithm for Hybrid Cognitive Processing | cs.AI | In this paper we consider optimization as an approach for quickly and
flexibly developing hybrid cognitive capabilities that are efficient, scalable,
and can exploit knowledge to improve solution speed and quality. In this
context, we focus on the Three-Weight Algorithm, which aims to solve general
optimization problems. We propose novel methods by which to integrate knowledge
with this algorithm to improve expressiveness, efficiency, and scaling, and
demonstrate these techniques on two example problems (Sudoku and circle
packing).
|
1311.4082 | Can a biologically-plausible hierarchy effectively replace face
detection, alignment, and recognition pipelines? | cs.CV | The standard approach to unconstrained face recognition in natural
photographs is via a detection, alignment, recognition pipeline. While that
approach has achieved impressive results, there are several reasons to be
dissatisfied with it, among them its lack of biological plausibility. A
recent theory of invariant recognition by feedforward hierarchical networks,
like HMAX, other convolutional networks, or possibly the ventral stream,
implies an alternative approach to unconstrained face recognition. This
approach accomplishes detection and alignment implicitly by storing
transformations of training images (called templates) rather than explicitly
detecting and aligning faces at test time. Here we propose a particular
locality-sensitive hashing based voting scheme which we call "consensus of
collisions" and show that it can be used to approximate the full 3-layer
hierarchy implied by the theory. The resulting end-to-end system for
unconstrained face recognition operates on photographs of faces taken under
natural conditions, e.g., Labeled Faces in the Wild (LFW), without aligning or
cropping them, as is normally done. It achieves a drastic improvement in the
state of the art on this end-to-end task, reaching the same level of
performance as the best systems operating on aligned, closely cropped images
(no outside training data). It also performs well on two newer datasets,
similar to LFW, but more difficult: LFW-jittered (new here) and SUFR-W.
|
1311.4086 | A hybrid decision support system : application on healthcare | cs.AI cs.LG | Many knowledge-based systems, especially expert systems for medical decision
support, have been developed. However, such systems are typically based on
production rules and cannot learn or evolve except by updating those rules.
In addition, taking several criteria into account induces an exorbitant
number of rules to be injected into the system, and it becomes difficult to
translate medical knowledge or a decision-support task into a simple rule.
Moreover, reasoning based on generic cases has become classic and can even
reduce the range of possible solutions. To remedy this, we propose an
approach based on multi-criteria decision making guided by case-based
reasoning (CBR).
|
1311.4088 | The Optimization of Running Queries in Relational Databases Using
ANT-Colony Algorithm | cs.DB cs.NE | Query optimization is a cost-sensitive process, and the number of join
permutations grows exponentially with the number of tables associated with
a query. On one hand, in comparison with the other operators in a
relational database, the join operator is the most difficult and
complicated one to optimize for reduced runtime; accordingly, various
algorithms have so far been proposed to solve this problem. On the other
hand, the success of any database management system (DBMS) depends on
exploiting the query model. In the current paper, a heuristic ant-colony
algorithm is proposed to solve this problem and improve the runtime of the
join operation. Experiments and the observed results reveal the efficiency
of this algorithm compared to similar algorithms.
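A generic ant-colony sketch of the join-order search: ants build join orders guided by pheromone, better orders deposit more pheromone, and evaporation keeps the search from freezing. The pheromone model (position, table) and the cost callback are placeholders; the abstract does not specify the paper's exact design:

```python
import random

def aco_join_order(tables, cost, n_ants=20, n_iter=50, rho=0.5, seed=0):
    """Ant-colony search over join orders.  cost(order) returns the
    estimated cost of joining the tables in that order; lower is better."""
    rng = random.Random(seed)
    n = len(tables)
    tau = [[1.0] * n for _ in range(n)]  # pheromone on (position, table)
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ant in range(n_ants):
            remaining = list(range(n))
            order = []
            for pos in range(n):
                t = rng.choices(remaining,
                                weights=[tau[pos][i] for i in remaining])[0]
                remaining.remove(t)
                order.append(t)
            c = cost([tables[i] for i in order])
            if c < best_cost:
                best, best_cost = order, c
            for pos, t in enumerate(order):  # reinforce by quality
                tau[pos][t] += 1.0 / (1.0 + c)
        tau = [[rho * v for v in row] for row in tau]  # evaporation
    return [tables[i] for i in best], best_cost
```

In a real optimizer the cost callback would come from the DBMS's cardinality estimates; here any function of the order works.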
|
1311.4096 | Distributed Data Storage Systems with Opportunistic Repair | cs.IT math.IT | The reliability of erasure-coded distributed storage systems, as measured by
the mean time to data loss (MTTDL), depends on the repair bandwidth of the
code. Repair-efficient codes provide reliability values several orders of
magnitude better than conventional erasure codes. Current state-of-the-art
codes fix the number of helper nodes (nodes participating in repair) a priori.
In practice, however, it is desirable to allow the number of helper nodes to be
adaptively determined by the network traffic conditions. In this work, we
propose an opportunistic repair framework to address this issue. It is shown
that there exists a threshold on the storage overhead, below which such an
opportunistic approach does not lose any efficiency from the optimal
storage-repair-bandwidth tradeoff; i.e. it is possible to construct a code
simultaneously optimal for different numbers of helper nodes. We further
examine the benefits of such opportunistic codes, and derive the MTTDL
improvement for two repair models: one with limited total repair bandwidth and
the other with limited individual-node repair bandwidth. In both settings, we
show orders of magnitude improvement in MTTDL. Finally, the proposed framework
is examined in a network setting where a significant improvement in MTTDL is
observed.
|
1311.4111 | Dynamic Resource Allocation for Multiple-Antenna Wireless Power Transfer | cs.IT math.IT | We consider a point-to-point multiple-input-single-output (MISO) system where
a receiver harvests energy from a wireless power transmitter to power itself
for various applications. The transmitter performs energy beamforming
using instantaneous channel state information (CSI). The CSI is estimated
at the receiver by training via a preamble and fed back to the transmitter.
The channel estimate is more accurate when a longer preamble is used, but less time
is left for wireless power transfer before the channel changes. To maximize the
harvested energy, in this paper, we address the key challenge of balancing the
time resource used for channel estimation and wireless power transfer (WPT),
and also investigate the allocation of energy resource used for wireless power
transfer. First, we consider the general scenario where the preamble length is
allowed to vary dynamically. Taking into account the effects of imperfect CSI,
the optimal preamble length is obtained online by solving a dynamic programming
(DP) problem. The solution is shown to be a threshold-type policy that depends
only on the channel estimate power. Next, we consider the scenario in which the
preamble length is fixed. The optimal preamble length is optimized offline.
Furthermore, we derive the optimal power allocation schemes for both scenarios.
For the scenario of dynamic-length preamble, the power is allocated according
to both the optimal preamble length and the channel estimate power; while for
the scenario of fixed-length preamble, the power is allocated according to only
the channel estimate power. The analysis results are validated by numerical
simulations. Encouragingly, with optimal power allocation, the harvested energy
by using optimized fixed-length preamble is almost the same as the harvested
energy by employing dynamic-length preamble, hence allowing a low-complexity
WPT system to be implemented in practice.
|
1311.4115 | A Proof Of The Block Model Threshold Conjecture | math.PR cs.SI | We study a random graph model named the "block model" in statistics and the
"planted partition model" in theoretical computer science. In its simplest
form, this is a random graph with two equal-sized clusters, with a
between-class edge probability of $q$ and a within-class edge probability of
$p$.
A striking conjecture of Decelle, Krzakala, Moore and Zdeborov\'a, based on
deep, non-rigorous ideas from statistical physics, gave a precise prediction
for the algorithmic threshold of clustering in the sparse planted partition
model. In particular, if $p = a/n$ and $q = b/n$, and we write $s=(a-b)/2$
and $d=(a+b)/2$, then Decelle et al.\ conjectured that it is possible to
efficiently cluster in a way correlated with the true partition if $s^2 > d$
and impossible if $s^2 < d$. By comparison, the best-known rigorous result
is that of Coja-Oghlan, who showed that clustering is possible if
$s^2 > C d \ln d$ for some sufficiently large $C$.
In a previous work, we proved that it is indeed information-theoretically
impossible to cluster if $s^2 < d$, and furthermore that it is
information-theoretically impossible even to estimate the model parameters
from the graph when $s^2 < d$. Here we complete the proof of the conjecture
by providing an efficient algorithm for clustering in a way that is
correlated with the true partition when $s^2 > d$. A different independent
proof of the same result was recently obtained by Laurent Massoulie.
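The threshold itself is simple to evaluate; a one-line check of the condition stated above:

```python
def sbm_detectable(a, b):
    """Two-cluster sparse block model with edge probabilities a/n within
    classes and b/n between classes.  Clustering correlated with the true
    partition is efficiently possible iff s^2 > d, where s = (a - b)/2
    and d = (a + b)/2."""
    s = (a - b) / 2.0
    d = (a + b) / 2.0
    return s * s > d
```

For example, a = 10, b = 2 gives s = 4 and d = 6, so the partition is detectable; a = 5, b = 4 is below the threshold.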
|
1311.4121 | Application of Rough Set Theory in Data Mining | cs.DB | Rough set theory is a new method that deals with vagueness and uncertainty
emphasized in decision making. Data mining is a discipline that has an
important contribution to data analysis, discovery of new meaningful knowledge,
and autonomous decision making. The rough set theory offers a viable approach
for decision rule extraction from data.This paper, introduces the fundamental
concepts of rough set theory and other aspects of data mining, a discussion of
data representation with rough set theory including pairs of attribute-value
blocks, information tables reducts, indiscernibility relation and decision
tables. Additionally, the rough set approach to lower and upper approximations
and certain possible rule sets concepts are introduced. Finally, some
description about applications of the data mining system with rough set theory
is included.
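A minimal sketch of the indiscernibility relation and the resulting lower and upper approximations; the data layout (a dict mapping each object to its attribute-value tuple) is an illustrative choice:

```python
def approximations(universe, attribute_values, target):
    """Rough-set lower and upper approximations of a target set.

    universe: iterable of objects; attribute_values: dict mapping each
    object to its tuple of attribute values; target: set of objects.
    Objects with identical attribute tuples are indiscernible."""
    classes = {}
    for x in universe:
        classes.setdefault(attribute_values[x], set()).add(x)
    lower, upper = set(), set()
    for block in classes.values():
        if block <= target:
            lower |= block      # block certainly belongs to the target
        if block & target:
            upper |= block      # block possibly belongs to the target
    return lower, upper
```

Objects in the lower approximation certainly belong to the target concept; the upper approximation adds every indiscernibility class that merely overlaps it, and the difference between the two is the boundary region.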
|
1311.4126 | Temporal prediction of epidemic patterns in community networks | physics.soc-ph cond-mat.stat-mech cs.SI | Most previous studies of epidemic dynamics on complex networks suppose that
the disease will eventually stabilize at either a disease-free state or an
endemic one. In reality, however, some epidemics always exhibit sporadic and
recurrent behaviour in one region because of the invasion from an endemic
population elsewhere. In this paper we address this issue and study a
susceptible-infected-susceptible epidemiological model on a network consisting
of two communities, where the disease is endemic in one community but
alternates between outbreaks and extinctions in the other. We provide a
detailed characterization of the temporal dynamics of epidemic patterns in the
latter community. In particular, we investigate the time duration of both
outbreak and extinction, and the time interval between two consecutive
inter-community infections, as well as their frequency distributions. Based on
the mean-field theory, we theoretically analyze these three timescales and
their dependence on the average node degree of each community, the transmission
parameters, and the number of intercommunity links, which are in good agreement
with simulations, except when the probability of overlaps between successive
outbreaks is too large. These findings aid us in better understanding the
bursty nature of disease spreading in a local community and thereby suggest
effective time-dependent control strategies.
|
1311.4150 | Towards Big Topic Modeling | cs.LG cs.DC cs.IR stat.ML | To solve the big topic modeling problem, we need to reduce both time and
space complexities of batch latent Dirichlet allocation (LDA) algorithms.
Although parallel LDA algorithms on the multi-processor architecture have low
time and space complexities, their communication costs among processors often
scale linearly with the vocabulary size and the number of topics, leading to a
serious scalability problem. To reduce the communication complexity among
processors for a better scalability, we propose a novel communication-efficient
parallel topic modeling architecture based on power law, which consumes orders
of magnitude less communication time when the number of topics is large. We
combine the proposed communication-efficient parallel architecture with the
online belief propagation (OBP) algorithm referred to as POBP for big topic
modeling tasks. Extensive empirical results confirm that POBP has the following
advantages for solving the big topic modeling problem: 1) high accuracy,
2) communication efficiency, 3) fast speed, and 4) constant memory usage when
compared with recent state-of-the-art parallel LDA algorithms on the
multi-processor architecture.
|
1311.4151 | Lattice-cell : Hybrid approach for text categorization | cs.IR | In this paper, we propose a new text categorization framework based on
concept lattices and cellular automata. In this framework, the concept
structure is modeled by a Cellular Automaton for Symbolic Induction (CASI).
Our objective is to reduce the categorization time caused by the concept
lattice. We examine the performance of the proposed approach experimentally
and compare it with other algorithms such as Naive Bayes and k-nearest
neighbors. The results show a performance improvement while reducing the
categorization time.
|
1311.4158 | Unsupervised Learning of Invariant Representations in Hierarchical
Architectures | cs.CV cs.LG | The present phase of Machine Learning is characterized by supervised learning
algorithms relying on large sets of labeled examples ($n \to \infty$). The next
phase is likely to focus on algorithms capable of learning from very few
labeled examples ($n \to 1$), like humans seem able to do. We propose an
approach to this problem and describe the underlying theory, based on the
unsupervised, automatic learning of a ``good'' representation for supervised
learning, characterized by small sample complexity ($n$). We consider the case
of visual object recognition though the theory applies to other domains. The
starting point is the conjecture, proved in specific cases, that image
representations which are invariant to translations, scaling and other
transformations can considerably reduce the sample complexity of learning. We
prove that an invariant and unique (discriminative) signature can be computed
for each image patch, $I$, in terms of empirical distributions of the
dot-products between $I$ and a set of templates stored during unsupervised
learning. A module performing filtering and pooling, like the simple and
complex cells described by Hubel and Wiesel, can compute such estimates.
Hierarchical architectures consisting of these basic Hubel-Wiesel modules
inherit their properties of invariance, stability, and discriminability
while capturing
the compositional organization of the visual world in terms of wholes and
parts. The theory extends existing deep learning convolutional architectures
for image and speech recognition. It also suggests that the main computational
goal of the ventral stream of visual cortex is to provide a hierarchical
representation of new objects/images which is invariant to transformations,
stable, and discriminative for recognition---and that this representation may
be continuously learned in an unsupervised way during development and visual
experience.
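A sketch of the signature computation described above: pool the dot products between a patch and each stored template orbit into an empirical histogram. The histogram pooling, bin range, and normalization are illustrative choices consistent with, but not prescribed by, the abstract:

```python
import numpy as np

def invariant_signature(patch, template_orbits, n_bins=10):
    """For each template orbit (the stored transformed versions of one
    template), compute the dot products with the normalized patch
    (simple-cell responses) and pool them into a histogram
    (complex-cell pooling).  Concatenating over templates gives the
    signature."""
    x = np.ravel(patch).astype(float)
    x /= np.linalg.norm(x) + 1e-12
    sig = []
    for orbit in template_orbits:  # orbit: array (n_transforms, ...)
        t = orbit.reshape(len(orbit), -1).astype(float)
        t /= np.linalg.norm(t, axis=1, keepdims=True) + 1e-12
        dots = t @ x
        hist, _ = np.histogram(dots, bins=n_bins, range=(-1.0, 1.0))
        sig.append(hist / len(dots))
    return np.concatenate(sig)
```

If the orbit really contains every transformed version of the template, the set of dot products, and hence the histogram, is unchanged when the patch itself is transformed, which is the source of the invariance.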
|
1311.4163 | Interactive Distributed Detection: Architecture and Performance Analysis | cs.IT math.IT math.OC math.ST stat.AP stat.TH | This paper studies the impact of interactive fusion on detection performance
in tandem fusion networks with conditionally independent observations. Within
the Neyman-Pearson framework, two distinct regimes are considered: the fixed
sample size test and the large sample test. For the former, it is established
that interactive distributed detection may strictly outperform the one-way
tandem fusion structure. However, for the large sample regime, it is shown that
interactive fusion has no improvement on the asymptotic performance
characterized by the Kullback-Leibler (KL) distance compared with the simple
one-way tandem fusion. The results are then extended to interactive fusion
systems where the fusion center and the sensor may undergo multiple steps of
memoryless interactions or that involve multiple peripheral sensors, as well as
to interactive fusion with soft sensor outputs.
|
1311.4166 | A Visibility Graph Averaging Aggregation Operator | cs.AI | The problem of aggregation is of considerable importance in many
disciplines. In this paper, a new type of operator, called the visibility
graph averaging (VGA) aggregation operator, is proposed. The proposed
operator is based on the visibility graph, which can convert a time series
into a graph; the weights are obtained according to the importance of the
data in the visibility graph. Finally, the VGA operator is applied in an
analysis of the TAIEX database to illustrate that it is practical. Compared
with the classic aggregation operators, its advantage is that it not only
aggregates the data but also preserves the time information, and its
determination of the weights is more reasonable.
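The visibility graph construction behind the operator, with a degree-based weighting as one plausible reading of "the importance of the data in the visibility graph" (the abstract does not give the exact weighting scheme):

```python
def visibility_graph(series):
    """Natural visibility graph of a time series: nodes are the sample
    indices; i and j are linked iff every intermediate sample lies
    strictly below the line joining (i, y_i) and (j, y_j).  O(n^2)."""
    n = len(series)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            visible = all(
                series[k] < series[i]
                + (series[j] - series[i]) * (k - i) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.add((i, j))
    return edges

def vga_weights(series):
    """Degree-based weights: samples that 'see' more of the series
    (higher visibility-graph degree) get larger weight."""
    edges = visibility_graph(series)
    deg = [0] * len(series)
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    total = sum(deg)
    return [d / total for d in deg]
```

The aggregate is then the weighted average sum_i w_i y_i, so a locally prominent sample, which is visible to many others, contributes more than one hidden in a trough.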
|
1311.4180 | Towards a New Science of a Clinical Data Intelligence | cs.CY cs.AI | In this paper we define Clinical Data Intelligence as the analysis of data
generated in the clinical routine with the goal of improving patient care. We
define a science of a Clinical Data Intelligence as a data analysis that
permits the derivation of scientific, i.e., generalizable and reliable results.
We argue that a science of a Clinical Data Intelligence is sensible in the
context of a Big Data analysis, i.e., with data from many patients and with
complete patient information. We discuss that Clinical Data Intelligence
requires the joint efforts of knowledge engineering, information extraction
(from textual and other unstructured data), and statistics and statistical
machine learning. We describe some of our main results as conjectures and
relate them to a recently funded research project involving two major German
university hospitals.
|
1311.4211 | Network communities within and across borders | physics.soc-ph cs.SI | We investigate the impact of borders on the topology of spatially embedded
networks. Indeed, territorial subdivisions and geographical borders
significantly hamper the geographical span of networks, thus playing a key
role in the formation of network communities. This is especially important
in scientific and technological policy-making, highlighting the interplay
between the pressure for internationalization towards a global innovation
system and the administrative borders imposed by national and regional
institutions. In this study we introduce an outreach index to quantify the
impact of borders on the community structure and apply it to the case of the
European and US patent co-inventors networks. We find that (a) the US
connectivity decays as a power of distance, whereas we observe a faster
exponential decay for Europe; (b) European network communities essentially
correspond to nations and contiguous regions while US communities span multiple
states across the whole country without any characteristic geographic scale. We
confirm our findings by means of a set of simulations aimed at exploring the
relationship between different patterns of cross-border community structures
and the outreach index.
|
1311.4224 | On the Mixed H2/H-infinity Loop Shaping Trade-offs in Fractional Order
Control of the AVR System | cs.SY math.OC | This paper looks at frequency domain design of a fractional order (FO) PID
controller for an Automatic Voltage Regulator (AVR) system. Various performance
criteria of the AVR system are formulated as system norms and then coupled
with an evolutionary multi-objective optimization (MOO) algorithm to yield
Pareto optimal design trade-offs. The conflicting performance measures consist
of the mixed H2/H-infinity designs for objectives like set-point tracking, load
disturbance and noise rejection, controller effort and as such are an
exhaustive study of various conflicting design objectives. A fuzzy logic based
mechanism is used to identify the best compromise solution on the Pareto
fronts. The advantages and disadvantages of using a FOPID controller over the
conventional PID controller, which are popular for industrial use, are
enunciated from the presented simulations. The relevance and impact of FO
controller design from the perspective of the dynamics of AVR control loop is
also discussed.
|
1311.4235 | On the definition of a general learning system with user-defined
operators | cs.LG | In this paper, we push forward the idea of machine learning systems whose
operators can be modified and fine-tuned for each problem. This allows us to
propose a learning paradigm where users can write (or adapt) their operators,
according to the problem, data representation and the way the information
should be navigated. To achieve this goal, data instances, background
knowledge, rules, programs and operators are all written in the same functional
language, Erlang. Since changing the operators affects how the search
space needs to be explored, heuristics are learnt as a result of a decision
process based on
reinforcement learning where each action is defined as a choice of operator and
rule. As a result, the architecture can be seen as a 'system for writing
machine learning systems' or as a way to explore new operators, in which
policy reuse (a kind of transfer learning) is allowed. States and actions
are represented
in a Q matrix which is actually a table, from which a supervised model is
learnt. This makes it possible to have a more flexible mapping between old and
new problems, since we work with an abstraction of rules and actions. We
include some examples showing reuse and the application of the system gErl to
IQ problems. In order to evaluate gErl, we will test it against some structured
problems: a selection of IQ test tasks and some experiments on some structured
prediction problems (list patterns).
|
1311.4252 | Contour polygonal approximation using shortest path in networks | physics.comp-ph cs.CV | Contour polygonal approximation is a simplified representation of a contour
by line segments, so that the main characteristics of the contour remain in a
small number of line segments. This paper presents a novel method for polygonal
approximation based on the Complex Networks theory. We convert each point of
the contour into a vertex, so that we model a regular network. Then we
transform this network into a Small-World Complex Network by applying some
transformations over its edges. By analyzing network properties, especially
the geodesic path, we compute the polygonal approximation. The paper presents
the main characteristics of the method, as well as its functionality. We
evaluate the proposed method using benchmark contours, and compare its results
with other polygonal approximation methods.
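A deterministic sketch of the geodesic-path idea: link two contour points when the chord between them stays within a tolerance of all skipped points, then take the fewest-segments (shortest) path from the first point to the last. The explicit tolerance is my simplification of the paper's small-world edge transformations:

```python
import math
from collections import deque

def polygonal_approximation(points, tol):
    """Min-segment polygonal approximation via a shortest path in a
    graph whose edge (i, j) exists when the chord from point i to point
    j stays within `tol` of every intermediate contour point."""
    n = len(points)

    def seg_ok(i, j):
        (x1, y1), (x2, y2) = points[i], points[j]
        L = math.hypot(x2 - x1, y2 - y1)
        for k in range(i + 1, j):
            xk, yk = points[k]
            # perpendicular distance from point k to chord i-j
            d = abs((x2 - x1) * (y1 - yk) - (x1 - xk) * (y2 - y1)) / (L or 1.0)
            if d > tol:
                return False
        return True

    # BFS from 0 to n-1 over allowed chords -> fewest segments
    prev = {0: None}
    q = deque([0])
    while q:
        i = q.popleft()
        if i == n - 1:
            break
        for j in range(i + 1, n):
            if j not in prev and seg_ok(i, j):
                prev[j] = i
                q.append(j)
    path, i = [], n - 1
    while i is not None:
        path.append(i)
        i = prev[i]
    return [points[i] for i in reversed(path)]
```

On a right-angle contour this keeps only the corner: [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)] reduces to [(0, 0), (2, 0), (2, 2)] at tol = 0.1.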
|
1311.4276 | Data Mining of Online Genealogy Datasets for Revealing Lifespan Patterns
in Human Population | cs.SI q-bio.PE stat.AP | Online genealogy datasets contain extensive information about millions of
people and their past and present family connections. This vast amount of data
can assist in identifying various patterns in human population. In this study,
we present methods and algorithms which can assist in identifying variations in
lifespan distributions of human population in the past centuries, in detecting
social and genetic features which correlate with human lifespan, and in
constructing predictive models of human lifespan based on various features
which can easily be extracted from genealogy datasets.
We have evaluated the presented methods and algorithms on a large online
genealogy dataset with over a million profiles and over 9 million connections,
all of which were collected from the WikiTree website. Our findings indicate
that significant but small positive correlations exist between the parents'
lifespan and their children's lifespan. Additionally, we found slightly higher
and significant correlations between the lifespans of spouses. We also
discovered a very small positive and significant correlation between longevity
and reproductive success in males, and a small and significant negative
correlation between longevity and reproductive success in females. Moreover,
our machine learning algorithms achieved better-than-random classification
results in predicting which people who outlive the age of 50 will also
outlive the age of 80.
We believe that this study will be the first of many studies which utilize
the wealth of data on human populations, existing in online genealogy datasets,
to better understand factors which influence human lifespan. Understanding
these factors can assist scientists in providing solutions for successful
aging.
|
1311.4294 | Exponential Approximation of Bandlimited Functions from Average
Oversampling | cs.IT math.IT | Weighted average sampling is more practical and numerically more stable than
sampling at single points as in the classical Shannon sampling framework. Using
the frame theory, one can completely reconstruct a bandlimited function from
its suitably-chosen average sample data. When only finitely many sample data
are available, truncating the complete reconstruction series with the standard
dual frame results in very slow convergence. We present in this note a method
of reconstructing a bandlimited function from finite average oversampling with
an exponentially-decaying approximation error.
|
1311.4296 | Reflection methods for user-friendly submodular optimization | cs.LG cs.NA cs.RO math.OC | Recently, it has become evident that submodularity naturally captures widely
occurring concepts in machine learning, signal processing and computer vision.
Consequently, there is need for efficient optimization procedures for
submodular functions, especially for minimization problems. While general
submodular minimization is challenging, we propose a new method that exploits
existing decomposability of submodular functions. In contrast to previous
approaches, our method is neither approximate, nor impractical, nor does it
need any cumbersome parameter tuning. Moreover, it is easy to implement and
parallelize. A key component of our method is a formulation of the discrete
submodular minimization problem as a continuous best approximation problem that
is solved through a sequence of reflections, and its solution can be easily
thresholded to obtain an optimal discrete solution. This method solves both the
continuous and discrete formulations of the problem, and therefore has
applications in learning, inference, and reconstruction. In our experiments, we
illustrate the benefits of our method on two image segmentation tasks.
|
1311.4306 | Distributed bounded-error state estimation for partitioned systems based
on practical robust positive invariance | cs.SY math.OC | We propose a partition-based state estimator for linear discrete-time
systems composed of coupled subsystems affected by bounded disturbances. The
architecture is distributed in the sense that each subsystem is equipped with a
local state estimator that exploits suitable pieces of information from parent
subsystems. Moreover, unlike methods based on moving horizon estimation, our
approach does not require the on-line solution of optimization problems. Our
state-estimation scheme, which is based on the notion of practical robust
positive invariance developed in Rakovic 2011, also guarantees satisfaction of
constraints on local estimation errors, and it can be updated with limited
computational effort when subsystems are added or removed.
|
1311.4310 | Achievable Rate Region of the Bidirectional Buffer-Aided Relay Channel
with Block Fading | cs.IT math.IT | The bidirectional relay channel, in which two users communicate with each
other through a relay node, is a simple but fundamental and practical network
architecture. In this paper, we consider the block fading bidirectional relay
channel and propose efficient transmission strategies that exploit the block
fading property of the channel. Thereby, we consider a decode-and-forward relay
and assume that a direct link between the two users is not present. Our aim is
to characterize the long-term achievable rate region and to develop protocols
which achieve all points of the obtained rate region. Specifically, in the
bidirectional relay channel, there exist six possible transmission modes: four
point-to-point modes (user 1-to-relay, user 2-to-relay, relay-to-user 1,
relay-to-user 2), a multiple-access mode (both users to the relay), and a
broadcast mode (the relay to both users). Most existing protocols assume a
fixed schedule for using a subset of the aforementioned transmission modes.
Motivated by this limitation, we develop protocols which are not restricted to
adhere to a predefined schedule for using the transmission modes. In fact,
based on the instantaneous channel state information (CSI) of the involved
links, the proposed protocol selects the optimal transmission mode in each time
slot to maximize the long-term achievable rate region. Thereby, we consider two
different types of transmit power constraints: 1) a joint long-term power
constraint for all nodes, and 2) a fixed transmit power for each node.
Furthermore, to enable the use of a non-predefined schedule for transmission
mode selection, the relay has to be equipped with two buffers for storage of
the information received from both users. As data buffering increases the
end-to-end delay, we consider both delay-unconstrained and delay-constrained
transmission in the paper.
|
1311.4319 | Ranking Algorithms by Performance | cs.AI cs.LG | A common way of doing algorithm selection is to train a machine learning
model and predict the best algorithm from a portfolio to solve a particular
problem. While this method has been highly successful, choosing only a single
algorithm has inherent limitations -- if the choice was bad, no remedial action
can be taken and parallelism cannot be exploited, to name but a few problems.
In this paper, we investigate how to predict the ranking of the portfolio
algorithms on a particular problem. This information can be used to choose the
single best algorithm, but also to allocate resources to the algorithms
according to their rank. We evaluate a range of approaches to predict the
ranking of a set of algorithms on a problem. We furthermore introduce a
framework for categorizing ranking predictions that allows one to judge the
expressiveness of the predictive output. Our experimental evaluation
demonstrates on a range of data sets from the literature that it is beneficial
to consider the relationship between algorithms when predicting rankings. We
furthermore show that relatively naive approaches already deliver rankings of
good quality.
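As an illustration of one of the simpler ranking-prediction schemes that a framework like the one above can categorize, pairwise "which algorithm beats which" predictions can be aggregated into a ranking by counting predicted wins (Borda-style). The scheme, algorithm names, and predictions below are illustrative assumptions, not the paper's experimental setup.

```python
def rank_from_pairwise(algorithms, beats):
    """Aggregate pairwise 'a beats b' predictions into a ranking by
    counting predicted wins. `beats` maps (a, b) -> True if a is
    predicted to outperform b on the given problem instance."""
    wins = {a: 0 for a in algorithms}
    for a in algorithms:
        for b in algorithms:
            if a != b and beats.get((a, b), False):
                wins[a] += 1
    # Most predicted wins first
    return sorted(algorithms, key=lambda a: -wins[a])

algos = ["sat4j", "minisat", "glucose"]
preds = {("glucose", "minisat"): True, ("glucose", "sat4j"): True,
         ("minisat", "sat4j"): True}
ranking = rank_from_pairwise(algos, preds)
```

This is one of the weakest prediction outputs in the expressiveness sense discussed above: it discards the margins between algorithms and keeps only the order.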
|
1311.4336 | Temporal scaling in information propagation | cs.SI physics.soc-ph | For the study of information propagation, one fundamental problem is
uncovering universal laws governing the dynamics of information propagation.
This problem, from the microscopic perspective, is formulated as estimating the
propagation probability that a piece of information propagates from one
individual to another. Such a propagation probability generally depends on two
major classes of factors: the intrinsic attractiveness of information and the
interactions between individuals. Despite the fact that the temporal effect of
attractiveness is widely studied, temporal laws underlying individual
interactions remain unclear, causing inaccurate prediction of information
propagation on evolving social networks. In this report, we empirically study
the dynamics of information propagation, using the dataset from a
population-scale social media website. We discover a temporal scaling in
information propagation: the probability a message propagates between two
individuals decays with the length of time latency since their latest
interaction, obeying a power-law rule. Leveraging the scaling law, we further
propose a temporal model to estimate future propagation probabilities between
individuals, reducing the error rate of information propagation prediction from
6.7% to 2.6% and bringing a 9.7% gain in incremental customers for viral marketing.
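The reported power law suggests a simple estimator: fit the exponent of p(dt) proportional to dt^(-alpha) by linear regression in log-log space. The sketch below uses synthetic data and assumed variable names; it is not the paper's pipeline.

```python
import numpy as np

def fit_power_law(latencies, probs):
    """Fit p(dt) = c * dt**(-alpha) by least squares in log-log space.

    Returns (c, alpha). Assumes all inputs are strictly positive.
    """
    log_t = np.log(latencies)
    log_p = np.log(probs)
    # Linear model: log p = log c - alpha * log dt
    slope, intercept = np.polyfit(log_t, log_p, 1)
    return np.exp(intercept), -slope

def propagation_prob(dt, c, alpha):
    """Estimated probability that a message propagates after latency dt."""
    return c * dt ** (-alpha)

# Synthetic example: an exact power law with alpha = 1.5
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
p = 0.5 * t ** (-1.5)
c, alpha = fit_power_law(t, p)
```

On real interaction logs the probabilities would be empirical propagation frequencies binned by time latency since the latest interaction, as described above.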
|
1311.4362 | Sparse Identification of Posynomial Models | cs.SY | Posynomials are nonnegative combinations of monomials with possibly
fractional and both positive and negative exponents. Posynomial models are
widely used in various engineering design endeavors, such as circuits,
aerospace and structural design, mainly due to the fact that design problems
cast in terms of posynomial objectives and constraints can be solved
efficiently by means of a convex optimization technique known as geometric
programming (GP). However, while quite a vast literature exists on GP-based
design, very few contributions can yet be found on the problem of identifying
posynomial models from experimental data. Posynomial identification amounts to
determining not only the coefficients of the combination, but also the
exponents in the monomials, which renders the identification problem
numerically hard. In this draft, we propose an approach to the identification
of multivariate posynomial models, based on the expansion on a given
large-scale basis of monomials. The model is then identified by seeking
coefficients of the combination that minimize a mixed objective, composed by a
term representing the fitting error and a term inducing sparsity in the
representation, which results in a problem formulation of the ``square-root
LASSO'' type, with nonnegativity constraints on the variables. We propose to
solve the problem via a sequential coordinate-descent scheme, which is suitable
for large-scale implementations.
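A minimal sketch of the identification step, assuming a given candidate basis of exponent vectors: minimize ||y - Phi c||_2 + lambda * sum(c) over c >= 0 (the l1 norm reduces to a sum under nonnegativity). The paper proposes a sequential coordinate-descent scheme; for brevity this sketch hands the convex problem to a generic bounded quasi-Newton solver with a small smoothing term on the residual norm, and the basis and lambda values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def monomial_features(X, exponents):
    """Phi[i, j] = prod_k X[i, k] ** exponents[j, k] (X must be positive)."""
    return np.exp(np.log(X) @ exponents.T)

def sqrt_lasso_posynomial(X, y, exponents, lam=0.1):
    """Nonnegative coefficients c minimizing ||y - Phi c||_2 + lam * sum(c)."""
    Phi = monomial_features(X, exponents)
    n_basis = exponents.shape[0]

    def objective(c):
        r = y - Phi @ c
        # Tiny epsilon smooths the norm at zero residual for the solver
        return np.sqrt(r @ r + 1e-12) + lam * c.sum()

    res = minimize(objective, x0=np.ones(n_basis),
                   bounds=[(0, None)] * n_basis, method="L-BFGS-B")
    return res.x

# Toy example: y = 2 * x1 * x2**-0.5, plus one spurious candidate monomial
rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(50, 2))
y = 2.0 * X[:, 0] * X[:, 1] ** -0.5
basis = np.array([[1.0, -0.5], [0.5, 1.0]])  # true + spurious exponent vectors
c = sqrt_lasso_posynomial(X, y, basis, lam=0.01)
```

The square-root of the fitting term, rather than its square, is what makes the formulation of the "square-root LASSO" type mentioned above.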
|
1311.4369 | Distributed Widely Linear Complex Kalman Filtering | cs.SY cs.IT math.IT | We introduce cooperative sequential state space estimation in the domain of
augmented complex statistics, whereby nodes in a network collaborate locally to
estimate noncircular complex signals. For rigour, a distributed augmented
(widely linear) complex Kalman filter (D-ACKF) suited to the generality of
complex signals is introduced, allowing for unified treatment of both proper
(rotation invariant) and improper (rotation dependent) signal distributions.
Its duality with the bivariate real-valued distributed Kalman filter, along
with several issues of implementation are also illuminated. The analysis and
simulations show that unlike existing distributed Kalman filter solutions, the
D-ACKF caters for both the improper data and the correlations between nodal
observation noises, thus providing enhanced performance in real-world
scenarios.
|
1311.4419 | Perception and Steering Control in Paired Bat Flight | cs.SY cs.RO physics.bio-ph | Animals within groups need to coordinate their reactions to perceived
environmental features and to each other in order to safely move from one point
to another. This paper extends our previously published work on the flight
patterns of Myotis velifer that have been observed in a habitat near Johnson
City, Texas. Each evening, these bats emerge from a cave in sequences of small
groups that typically contain no more than three or four individuals, and they
thus provide ideal subjects for studying leader-follower behaviors. By
analyzing the flight paths of a group of M. velifer, the data show that the
flight behavior of a follower bat is influenced by the flight behavior of a
leader bat in a way that is not well explained by existing pursuit laws, such
as classical pursuit, constant bearing and motion camouflage. Thus we propose
an alternative steering law based on virtual loom, a concept we introduce to
capture the geometrical configuration of the leader-follower pair. It is shown
that this law may be integrated with our previously proposed vision-enabled
steering laws to synthesize trajectories, the statistics of which fit with
those of the bats in our data set. The results suggest that bats use perceived
information of both the environment and their neighbors for navigation.
|
1311.4420 | CAVDM: Cellular Automata Based Video Cloud Mining Framework for
Information Retrieval | cs.IR | The Cloud Mining technique can be applied to various types of documents. While
acquisition and storage of video data are easy tasks, retrieval of information
from video data is challenging, so video Cloud Mining plays an important role
in efficient video data management for information retrieval. This paper
proposes a Cellular Automata based framework for video Cloud Mining to extract
information from video data. This includes developing a technique for shot
detection; key frame analysis is then used to compare the frames of each shot
to each other and define the relationships between shots. A Cellular Automata
based hierarchical clustering technique is adopted to group similar shots and
detect a particular event on demand.
|
1311.4431 | Mathematical Foundations for Information Theory in Diffusion-Based
Molecular Communications | cs.IT math.IT q-bio.MN | Molecular communication emerges as a promising communication paradigm for
nanotechnology. However, solid mathematical foundations for
information-theoretic analysis of molecular communication have not yet been
built. In particular, no one has ever proven that the channel coding theorem
applies for molecular communication, and no relationship between information
rate capacity (maximum mutual information) and code rate capacity (supremum
achievable code rate) has been established. In this paper, we focus on a major
subclass of molecular communication - the diffusion-based molecular
communication. We provide solid mathematical foundations for information theory
in diffusion-based molecular communication by creating a general
diffusion-based molecular channel model in measure-theoretic form and prove its
channel coding theorems. Various equivalence relationships between statistical
and operational definitions of channel capacity are also established, including
the most classic information rate capacity and code rate capacity. As
byproducts, we have shown that the diffusion-based molecular channel has
"asymptotically decreasing input memory and anticipation" and is "d-continuous".
Other properties of diffusion-based molecular channel such as stationarity or
ergodicity are also proven.
|
1311.4439 | 60 GHz Wireless Link Within Metal Enclosures: Channel Measurements and
System Analysis | cs.IT math.IT | Wireless channel measurement results for 60 GHz within a closed metal cabinet
are provided. A metal cabinet is chosen to emulate the environment within a
mechatronic system, since such systems generally have metal enclosures. A frequency domain
sounding technique is used to measure the wireless channel for different
volumes of the metal enclosure, considering both line-of-sight (LOS) and
non-line-of-sight (NLOS) scenarios. Large-scale and small-scale characteristics
of the wireless channel are extracted in order to build a comprehensive channel
model. In contrast to conventional indoor channels at 60 GHz, the channel in
the metal enclosure is highly reflective resulting in a rich scattering
environment with a significantly large root-mean-square (RMS) delay spread.
Based on the obtained measurement results, the bit error rate (BER) performance
is evaluated for a wideband orthogonal frequency division multiplexing (OFDM)
system.
|
1311.4460 | Information slows down hierarchy growth | physics.soc-ph cs.SI | We consider models of growing multi-level systems wherein the growth process
is driven by rules of tournament selection. A system can be conceived as an
evolving tree with a new node being attached to a contestant node at the best
hierarchy level (a level nearest to the tree root). The proposed evolution
reflects limited information on system properties available to new nodes. It
can also be expressed in terms of population dynamics. Two models are
considered: a constant tournament (CT) model wherein the number of tournament
participants is constant throughout system evolution, and a proportional
tournament (PT) model where this number increases proportionally to the growing
size of the system itself. The results of analytical calculations based on a
rate equation fit well to numerical simulations for both models. In the CT
model all hierarchy levels emerge but the birth time of a consecutive hierarchy
level increases exponentially or faster for each new level. The number of nodes
at the first hierarchy level grows logarithmically in time, while the size of
the last, "worst" hierarchy level oscillates quasi log-periodically. In the PT
model the occupations of the first two hierarchy levels increase linearly but
worse hierarchy levels either do not emerge at all or appear only by chance in
an early stage of system evolution and then stop growing. The results allow us
to conclude that the information available to each new node in tournament
dynamics restrains the emergence of new hierarchy levels, and that it is the
absolute amount of information, not the relative amount, that governs this
behavior.
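A minimal simulation of the constant-tournament (CT) model as described: each new node samples a fixed number k of existing nodes and attaches to the contestant at the best (lowest) hierarchy level. Names and parameter values are illustrative, not from the paper.

```python
import random

def grow_ct_tree(n_nodes, k, seed=0):
    """Simulate the constant-tournament (CT) growth model.

    Each new node samples k existing nodes (with replacement) and attaches
    to the contestant at the best hierarchy level, i.e. the smallest depth
    (nearest to the root). Returns the list of node depths; the root has
    depth 0.
    """
    rng = random.Random(seed)
    depths = [0]  # root node
    for _ in range(n_nodes - 1):
        contestants = [rng.randrange(len(depths)) for _ in range(k)]
        winner = min(contestants, key=lambda i: depths[i])
        depths.append(depths[winner] + 1)
    return depths

depths = grow_ct_tree(n_nodes=1000, k=3)
```

Counting how many nodes sit at each depth over time would reproduce the qualitative behavior described above, e.g. the slow (logarithmic) growth of the first hierarchy level.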
|
1311.4468 | Stochastic processes and feedback-linearisation for online
identification and Bayesian adaptive control of fully-actuated mechanical
systems | cs.LG cs.SY physics.data-an stat.ML | This work proposes a new method for simultaneous probabilistic identification
and control of an observable, fully-actuated mechanical system. Identification
is achieved by conditioning stochastic process priors on observations of
configurations and noisy estimates of configuration derivatives. In contrast to
previous work that has used stochastic processes for identification, we
leverage the structural knowledge afforded by Lagrangian mechanics and learn
the drift and control input matrix functions of the control-affine system
separately. We utilise feedback-linearisation to reduce, in expectation, the
uncertain nonlinear control problem to one that is easy to regulate in a
desired manner. Thereby, our method combines the flexibility of nonparametric
Bayesian learning with epistemological guarantees on the expected closed-loop
trajectory. We illustrate our method in the context of torque-actuated pendula
where the dynamics are learned with a combination of normal and log-normal
processes.
|
1311.4472 | A Component Lasso | stat.ML cs.LG | We propose a new sparse regression method called the component lasso, based
on a simple idea. The method uses the connected-components structure of the
sample covariance matrix to split the problem into smaller ones. It then solves
the subproblems separately, obtaining a coefficient vector for each one. Then,
it uses non-negative least squares to recombine the different vectors into a
single solution. This step is useful in selecting and reweighting components
that are correlated with the response. Simulated and real data examples show
that the component lasso can outperform standard regression methods such as the
lasso and elastic net, achieving a lower mean squared error as well as better
support recovery.
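The steps above can be sketched as follows, assuming scikit-learn and SciPy are available; the covariance threshold, lasso penalty, and synthetic data are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components
from sklearn.linear_model import Lasso

def component_lasso(X, y, threshold=0.5, alpha=0.01):
    """Component lasso sketch: (1) split features into the connected
    components of the thresholded sample covariance, (2) fit a lasso on
    each component separately, (3) recombine the per-component fits with
    non-negative least squares."""
    n, p = X.shape
    adjacency = np.abs(np.cov(X, rowvar=False)) > threshold
    n_comp, labels = connected_components(csr_matrix(adjacency), directed=False)

    betas = np.zeros((p, n_comp))
    fits = np.zeros((n, n_comp))
    for comp in range(n_comp):
        idx = np.flatnonzero(labels == comp)
        coef = Lasso(alpha=alpha).fit(X[:, idx], y).coef_
        betas[idx, comp] = coef
        fits[:, comp] = X[:, idx] @ coef

    weights, _ = nnls(fits, y)  # reweight the component fits, weights >= 0
    return betas @ weights

# Two blocks of correlated features; only features 0 and 3 carry signal
rng = np.random.default_rng(0)
n = 500
z1, z2 = rng.normal(size=(n, 1)), rng.normal(size=(n, 1))
X = np.hstack([z1 + rng.normal(size=(n, 3)), z2 + rng.normal(size=(n, 3))])
beta_true = np.array([1.5, 0.0, 0.0, -2.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)
beta_hat = component_lasso(X, y)
```

The non-negative least squares step is the reweighting mentioned above: components whose fits correlate with the response receive larger weights, while uninformative components can be zeroed out entirely.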
|
1311.4486 | Discriminative Density-ratio Estimation | cs.LG | The covariate shift is a challenging problem in supervised learning that
results from the discrepancy between the training and test distributions. An
effective approach which recently drew considerable attention in the research
community is to reweight the training samples to minimize that discrepancy.
Specifically, many methods are based on developing Density-ratio (DR) estimation
techniques that apply to both regression and classification problems. Although
these methods work well for regression problems, their performance on
classification problems is not satisfactory. This is due to a key observation
that these methods focus on matching the sample marginal distributions without
paying attention to preserving the separation between classes in the reweighted
space. In this paper, we propose a novel method for Discriminative
Density-ratio (DDR) estimation that addresses the aforementioned problem and
aims at estimating the density-ratio of joint distributions in a class-wise
manner. The proposed algorithm is an iterative procedure that alternates
between estimating the class information for the test data and estimating a new
density ratio for each class. To incorporate the estimated class information of
the test data, a soft matching technique is proposed. In addition, we employ an
effective criterion which adopts mutual information as an indicator to stop the
iterative procedure while resulting in a decision boundary that lies in a
sparse region. Experiments on synthetic and benchmark datasets demonstrate the
superiority of the proposed method in terms of both accuracy and robustness.
|
1311.4527 | A message-passing algorithm for multi-agent trajectory planning | cs.AI cs.DC cs.MA cs.RO cs.SY | We describe a novel approach for computing collision-free \emph{global}
trajectories for $p$ agents with specified initial and final configurations,
based on an improved version of the alternating direction method of multipliers
(ADMM). Compared with existing methods, our approach is naturally
parallelizable and allows for incorporating different cost functionals with
only minor adjustments. We apply our method to classical challenging instances
and observe that its computational requirements scale well with $p$ for several
cost functionals. We also show that a specialization of our algorithm can be
used for {\em local} motion planning by solving the problem of joint
optimization in velocity space.
|
1311.4529 | Incremental Discovery of Prominent Situational Facts | cs.DB | We study the novel problem of finding new, prominent situational facts, which
are emerging statements about objects that stand out within certain contexts.
Many such facts are newsworthy---e.g., an athlete's outstanding performance in
a game, or a viral video's impressive popularity. Effective and efficient
identification of these facts assists journalists in reporting, one of the main
goals of computational journalism. Technically, we consider an ever-growing
table of objects with dimension and measure attributes. A situational fact is a
"contextual" skyline tuple that stands out against historical tuples in a
context, specified by a conjunctive constraint involving dimension attributes,
when a set of measure attributes are compared. New tuples are constantly added
to the table, reflecting events happening in the real world. Our goal is to
discover constraint-measure pairs that qualify a new tuple as a contextual
skyline tuple, and discover them quickly before the event becomes yesterday's
news. A brute-force approach requires exhaustive comparison with every tuple,
under every constraint, and in every measure subspace. We design algorithms in
response to these challenges using three corresponding ideas---tuple reduction,
constraint pruning, and sharing computation across measure subspaces. We also
adopt a simple prominence measure to rank the discovered facts when they are
numerous. Experiments over two real datasets validate the effectiveness and
efficiency of our techniques.
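The brute-force dominance check at the core of the problem can be sketched as follows; the dict-based tuples, larger-is-better measures, and toy data are assumptions for illustration, not the paper's datasets or algorithms.

```python
def dominates(a, b, measures):
    """True if tuple a dominates tuple b: a is at least as good as b on
    every measure and strictly better on at least one (larger is better)."""
    ge = all(a[m] >= b[m] for m in measures)
    gt = any(a[m] > b[m] for m in measures)
    return ge and gt

def is_contextual_skyline(new_tuple, table, constraint, measures):
    """Brute-force check: new_tuple is a contextual skyline tuple if no
    historical tuple in the context (rows satisfying the conjunctive
    constraint on dimension attributes) dominates it on the measures."""
    context = [t for t in table
               if all(t[d] == v for d, v in constraint.items())]
    return not any(dominates(t, new_tuple, measures) for t in context)

# Toy example: one dimension attribute (team), two measures
history = [
    {"team": "A", "points": 30, "assists": 10},
    {"team": "A", "points": 25, "assists": 12},
    {"team": "B", "points": 40, "assists": 5},
]
new_row = {"team": "A", "points": 28, "assists": 11}
```

The paper's contribution is avoiding exactly this exhaustive loop over every tuple, constraint, and measure subspace, via tuple reduction, constraint pruning, and shared computation.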
|