| id | title | categories | abstract |
|---|---|---|---|
1309.7669 | Quantum Tomography From Few Full-Rank Observables | math-ph cs.IT math.IT math.MP math.PR | We establish that the PhaseLift algorithm recovers pure states from a
constant number of full-rank observables with high probability.
|
1309.7676 | An upper bound on prototype set size for condensed nearest neighbor | cs.LG stat.ML | The condensed nearest neighbor (CNN) algorithm is a heuristic for reducing
the number of prototypical points stored by a nearest neighbor classifier,
while keeping the classification rule given by the reduced prototypical set
consistent with the full set. I present an upper bound on the number of
prototypical points accumulated by CNN. The bound originates in a bound on the
number of times the decision rule is updated during training in the multiclass
perceptron algorithm, and thus is independent of training set size.
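The condensation heuristic to which the bound applies can be sketched in a few lines; this is a minimal generic CNN pass under a 1-NN rule with Euclidean distance (an illustration of the algorithm, not of the paper's bound or its multiclass-perceptron argument; the toy data are an assumption):

```python
import numpy as np

def condense(X, y):
    """One CNN-style condensation: greedily promote every training point that
    the current prototype set misclassifies under the 1-NN rule, repeating
    until the prototypes are consistent with the full training set."""
    proto = [0]  # seed with the first training point
    changed = True
    while changed:
        changed = False
        for i in range(len(X)):
            P = X[proto]
            nearest = proto[int(np.argmin(np.linalg.norm(P - X[i], axis=1)))]
            if y[nearest] != y[i]:  # misclassified -> promote to prototype
                proto.append(i)
                changed = True
    return proto

# Two well-separated classes: condensation keeps only a couple of prototypes.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
proto = condense(X, y)
print(len(proto), "prototypes out of", len(X), "points")
```

The returned prototype set is, by construction, consistent with the full set, which is the property the abstract's bound is about.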
|
1309.7686 | Recent developments in research on catalytic reaction networks | cs.CE q-bio.MN | Over recent years, analyses performed on a stochastic model of catalytic
reaction networks have provided some indications of why wet-lab experiments
hardly ever exhibit the phase transition typically predicted by theoretical
models for the emergence of collectively self-replicating sets of molecules
(also known as autocatalytic sets, ACSs), a phenomenon that is often observed
in nature and that is supposed to have played a major role in the emergence of
primitive forms of life. The model in question has revealed that the emerging
ACSs are characterized by a general dynamical fragility, which might explain
the difficulty of observing them in lab experiments. In this work, the main
results of the various analyses are reviewed, with particular regard to the
factors able to affect the generic properties of catalytic reaction networks,
concerning not only the probability of ACSs being observed, but also the
overall activity of the system in terms of the production of new species,
reactions, and matter.
|
1309.7687 | Chemical communication between synthetic and natural cells: a possible
experimental design | cs.CE q-bio.MN | The bottom-up construction of synthetic cells is one of the most intriguing
and interesting research arenas in synthetic biology. Synthetic cells are built
by encapsulating biomolecules inside lipid vesicles (liposomes), allowing the
synthesis of one or more functional proteins. Thanks to the in situ synthesized
proteins, synthetic cells become able to perform several biomolecular
functions, which can be exploited for a large variety of applications. This
paves the way to several advanced uses of synthetic cells in basic science and
biotechnology, thanks to their versatility, modularity, biocompatibility, and
programmability. In the previous WIVACE (2012) we presented the
state-of-the-art of semi-synthetic minimal cell (SSMC) technology and
introduced, for the first time, the idea of chemical communication between
synthetic cells and natural cells. The development of a proper synthetic
communication protocol should be seen as a tool for the nascent field of
bio/chemical-based Information and Communication Technologies (bio-chem-ICTs)
and ultimately aimed at building soft-wet-micro-robots. In this contribution
(WIVACE, 2013) we present a blueprint for realizing this project, and show some
preliminary experimental results. We first discuss how our research goal
(based on the natural capability of biological systems to manipulate chemical
signals) fits into the current scientific and technological
contexts. Then, we briefly comment on the experimental approaches from the
viewpoints of (i) synthetic cell construction and (ii) bioengineering of
microorganisms, providing up-to-date results from our laboratory. Finally, we
briefly discuss how autopoiesis can be used as a theoretical framework for
defining synthetic minimal life and minimal cognition, and as a bridge between
synthetic biology and artificial intelligence.
|
1309.7688 | Evolution and development of complex computational systems using the
paradigm of metabolic computing in Epigenetic Tracking | cs.CE q-bio.MN | Epigenetic Tracking (ET) is an Artificial Embryology system which allows for
the evolution and development of large complex structures built from artificial
cells. In terms of the number of cells, the complexity of the bodies generated
with ET is comparable with the complexity of biological organisms. We have
previously used ET to simulate the growth of multicellular bodies with
arbitrary 3-dimensional shapes which perform computation using the paradigm of
"metabolic computing". In this paper we investigate the memory capacity of such
computational structures and analyse the trade-off between shape and
computation. We now plan to build on these foundations to create a
biologically-inspired model in which the encoding of the phenotype is efficient
(in terms of the compactness of the genome) and evolvable in tasks involving
non-trivial computation, robust to damage and capable of self-maintenance and
self-repair.
|
1309.7689 | Application of a Semi-automatic Algorithm for Identification of
Molecular Components in SBML Models | cs.CE q-bio.MN | Reactions forming a pathway can be rewritten by making explicit the different
molecular components involved in them. A molecular component represents a
biological entity (e.g. a protein) in all its states (free, bound, degraded,
etc.). In this paper we show the application of a component identification
algorithm to a number of real-world models to experimentally validate the
approach. Component identification allows subpathways to be computed, giving a
better understanding of how the pathway functions.
|
1309.7690 | A Hybrid Monte Carlo Ant Colony Optimization Approach for Protein
Structure Prediction in the HP Model | cs.NE cs.CE | The hydrophobic-polar (HP) model has been widely studied in the field of
protein structure prediction (PSP) both for theoretical purposes and as a
benchmark for new optimization strategies. In this work we introduce a new
heuristic based on Ant Colony Optimization (ACO) and Markov Chain Monte Carlo
(MCMC) that we call Hybrid Monte Carlo Ant Colony Optimization (HMCACO). We
describe this method and compare the results obtained on well-known HP
instances in the 3-dimensional cubic lattice to those obtained with standard
ACO and Simulated Annealing (SA). All methods were implemented using an
unconstrained neighborhood and a modified objective function to prevent the
creation of overlapping walks. Results show that our method performs better
than the other heuristics on all benchmark instances.
|
1309.7691 | A model of protocell based on the introduction of a semi-permeable
membrane in a stochastic model of catalytic reaction networks | cs.CE q-bio.MN | In this work we introduce some preliminary analyses on the role of a
semi-permeable membrane in the dynamics of a stochastic model of catalytic
reaction sets (CRSs) of molecules. The results of the simulations performed on
ensembles of randomly generated reaction schemes highlight remarkable
differences between this very simple protocell description model and the
classical case of the continuous stirred-tank reactor (CSTR). In particular, in
the CSTR case, distinct simulations with the same reaction scheme reach the
same dynamical equilibrium, whereas, in the protocell case, simulations with
identical reaction schemes can reach very different dynamical states, despite
starting from the same initial conditions.
|
1309.7692 | A Model of Colonic Crypts using SBML Spatial | cs.CE q-bio.MN | The Spatial Processes package enables an explicit definition of a spatial
environment on top of SBML's normal dynamic modeling capabilities. The
possibility of an explicit representation of spatial dynamics increases the
representation power of SBML. In this work we used those new SBML features to
define an extensive model of colonic crypts composed of the main cellular types
(from stem cells to fully differentiated cells), alongside their spatial
dynamics.
|
1309.7693 | Analysis of the spatial and dynamical properties of a multiscale model
of intestinal crypts | cs.CE cs.CG cs.DM q-bio.CB | Preliminary analyses of a multiscale model of intestinal crypt dynamics
are presented here. The model combines a morphological model, based on the
Cellular Potts Model (CPM), and a gene regulatory network model, based on Noisy
Random Boolean Networks (NRBNs). Simulations suggest that the stochastic
differentiation process is by itself sufficient to ensure general homeostasis
in the asymptotic states, as indicated by several measures.
|
1309.7694 | Self Organizing Maps to efficiently cluster and functionally interpret
protein conformational ensembles | cs.CE q-bio.BM | An approach that combines Self-Organizing Maps, hierarchical clustering and
network components analysis is presented, aimed at comparing protein
conformational ensembles obtained from multiple Molecular Dynamics simulations.
As a first result, the original ensembles can be summarized using only the
representative conformations of the clusters obtained. In addition, the network
components analysis allows one to discover and interpret the dynamic behavior
of the conformations won by each neuron. The results show the ability of this
approach to efficiently derive a functional interpretation of the protein
dynamics described by the original conformational ensemble, highlighting its
potential as a support for protein engineering.
|
1309.7695 | GPU-powered Simulation Methodologies for Biological Systems | cs.CE cs.DC | The study of biological systems has witnessed a pervasive cross-fertilization
between experimental investigation and computational methods. This has given
rise to new methodologies able to tackle the complexity of biological systems
in a quantitative manner. Computer algorithms allow the dynamics of the
corresponding biological system to be faithfully reproduced and, at the price
of a large number of simulations, the system's functioning can be investigated
extensively across a wide spectrum of natural conditions. To enable multiple
analyses in parallel using cheap, widespread and highly efficient multi-core
devices, we developed GPU-powered simulation algorithms for stochastic,
deterministic and hybrid modeling approaches, so that users with no knowledge
of GPU hardware and programming can also easily access the computing power of
graphics engines.
|
1309.7696 | An ensemble approach to the study of the emergence of metabolic and
proliferative disorders via Flux Balance Analysis | cs.CE q-bio.MN | An extensive rewiring of cell metabolism supports enhanced proliferation in
cancer cells. We propose a systems level approach to describe this phenomenon
based on Flux Balance Analysis (FBA). The approach does not require an explicit
cell biomass formation reaction to be maximized, but takes into account an
ensemble of alternative flux distributions that match the cancer metabolic
rewiring (CMR) phenotype description. The underlying concept is that analysis
of the common/distinguishing properties of the ensemble can provide indications
of how CMR is achieved and sustained, and thus of how it can be controlled.
|
1309.7697 | Semi-structured data extraction and modelling: the WIA Project | cs.SE cs.CY cs.NE | Over the last decades, the amount of data of all kinds available
electronically has increased dramatically. Data are accessible through a range
of interfaces including Web browsers, database query languages,
application-specific interfaces, built on top of a number of different data
exchange formats. These data span from unstructured to highly structured.
Very often, some of them have structure even if the structure is
implicit, and not as rigid or regular as that found in standard database
systems. Spreadsheet documents are prototypical in this respect. Spreadsheets
are a lightweight technology able to supply companies with easy-to-build
business management and business intelligence applications, and business people
have largely adopted spreadsheets as smart vehicles for generating and sharing
data files. Indeed, the more spreadsheets grow in complexity (e.g., their use
in product development plans and quoting), the more their arrangement,
maintenance, and analysis appear as a knowledge-driven activity. The
algorithmic approach to the problem of automatic data structure extraction from
spreadsheet documents (i.e., grid-structured and free topological-related data)
emerges from the WIA project: Worksheets Intelligent Analyser. The
WIA-algorithm shows how to provide a description of spreadsheet contents in
terms of higher-level abstractions or conceptualisations. In particular, the
WIA-algorithm targets the extraction of (i) the calculus workflow implemented
in the spreadsheet formulas and (ii) the logical role played by the data which
take part in the calculus. The aim of the resulting conceptualisations is to
provide spreadsheets with abstract representations useful for further model
refinements and optimizations through evolutionary algorithm computations.
|
1309.7698 | Signed Networks, Triadic Interactions and the Evolution of Cooperation | cs.SI cs.GT cs.NE physics.soc-ph | We outline a model to study the evolution of cooperation in a population of
agents playing the prisoner's dilemma in signed networks. We highlight that if
only dyadic interactions are taken into account, cooperation never evolves.
However, when triadic considerations are introduced, a window of opportunity
for emergence of cooperation as a stable behaviour emerges.
|
1309.7702 | Impact of local information in growing networks | cs.SI physics.soc-ph | We present a new model of the evolutionary dynamics and the growth of on-line
social networks. The model emulates people's strategies for acquiring
information in social networks, emphasising the local subjective view of an
individual and what kind of information the individual can acquire when
arriving in a new social context. The model proceeds through two phases: (a) a
discovery phase, in which the individual becomes aware of the surrounding world
and (b) an elaboration phase, in which the individual locally elaborates the
information through a cognitively-inspired algorithm. Networks generated by the
model reproduce the main features of both theoretical and real-world networks,
such as a high clustering coefficient, low characteristic path length, strong
division into communities, and variability of degree distributions.
|
1309.7712 | Downlink Training Techniques for FDD Massive MIMO Systems: Open-Loop and
Closed-Loop Training with Memory | cs.IT math.IT | The concept of deploying a large number of antennas at the base station,
often called massive multiple-input multiple-output (MIMO), has drawn
considerable interest because of its potential ability to revolutionize current
wireless communication systems. Most literature on massive MIMO systems assumes
time division duplexing (TDD), although frequency division duplexing (FDD)
dominates current cellular systems. Due to the large number of transmit
antennas at the base station, currently standardized approaches would require a
large percentage of the precious downlink and uplink resources in FDD massive
MIMO be used for training signal transmissions and channel state information
(CSI) feedback. To reduce the overhead of the downlink training phase, we
propose practical open-loop and closed-loop training frameworks in this paper.
We assume the base station and the user share a common set of training signals
in advance. In open-loop training, the base station transmits training signals
in a round-robin manner, and the user successively estimates the current
channel using long-term channel statistics such as temporal and spatial
correlations and previous channel estimates. In closed-loop training, the user
feeds back the best training signal to be sent in the future based on channel
prediction and the previously received training signals. With a small amount of
feedback from the user to the base station, closed-loop training offers better
performance in the data communication phase, especially when the
signal-to-noise ratio is low, the number of transmit antennas is large, or
prior channel estimates are not accurate at the beginning of the communication
setup, all of which would be mostly beneficial for massive MIMO systems.
|
1309.7723 | On the Effect of Data Contamination on Track Purity | cs.IT math.IT | This paper is concerned with performance analysis for data association in a
target tracking environment. Effects of misassociation are considered in a
simple (linear) multiscan framework so as to provide closed-form expressions of
the probability of correct association. In this paper, we focus on the
development of explicit approximations of this probability. Via rigorous
calculations, the effect of dimensioning parameters (number of scans, false
measurement positions or densities) is analyzed for various models of the
false measurements. Remarkably, it is possible to derive very simple
expressions of the probability of correct association which are independent of
the scenario kinematic parameters.
|
1309.7731 | Convex Structured Controller Design | cs.SY math.OC | We consider the problem of synthesizing optimal linear feedback policies
subject to arbitrary convex constraints on the feedback matrix. This is known
to be a hard problem in the usual formulations ($\mathcal{H}_2$, $\mathcal{H}_\infty$, LQR) and
previous works have focused on characterizing classes of structural constraints
that allow efficient solution through convex optimization or dynamic
programming techniques. In this paper, we propose a new control objective and
show that this formulation makes the problem of computing optimal linear
feedback matrices convex under arbitrary convex constraints on the feedback
matrix. This allows us to solve problems in decentralized control (sparsity in
the feedback matrices), control with delays and variable impedance control.
Although the control objective is nonstandard, we present theoretical and
empirical evidence that it agrees well with standard notions of control. We
also present an extension to nonlinear control affine systems. We present
numerical experiments validating our approach.
|
1309.7734 | Some New Results on the Cross Correlation of $m$-sequences | cs.IT math.IT | The determination of the cross correlation between an $m$-sequence and its
decimated sequence has been a long-standing research problem. Considering a
ternary $m$-sequence of period $3^{3r}-1$, we determine the cross correlation
distribution for decimations $d=3^{r}+2$ and $d=3^{2r}+2$, where $\gcd(r,3)=1$.
Meanwhile, for a binary $m$-sequence of period $2^{2lm}-1$, we make an initial
investigation for the decimation $d=\frac{2^{2lm}-1}{2^{m}+1}+2^{s}$, where $l
\ge 2$ is even and $0 \le s \le 2m-1$. It is shown that the cross correlation
takes at least four values. Furthermore, we confirm the validity of two famous
conjectures due to Sarwate et al. and Helleseth in this case.
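For readers unfamiliar with the objects involved, an m-sequence and its two-level autocorrelation can be demonstrated with a toy binary LFSR; the 4-stage register below implements the recurrence a_n = a_{n-1} + a_{n-4} over GF(2) (primitive polynomial x^4 + x^3 + 1), parameters far smaller than the ternary and binary families the paper treats:

```python
def lfsr_msequence(nbits=4, taps=(0, 3)):
    """One period of a binary m-sequence from a Fibonacci LFSR whose
    feedback is the XOR of the tapped state bits (0-indexed, 0 = newest)."""
    state = [1] + [0] * (nbits - 1)   # any nonzero seed gives the same cycle
    out = []
    for _ in range(2 ** nbits - 1):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out

seq = lfsr_msequence()
print(len(seq), sum(seq))             # period 15, balanced: 8 ones vs 7 zeros

# Two-valued autocorrelation: every nonzero cyclic shift correlates to -1.
s = [1 - 2 * b for b in seq]          # map {0,1} -> {+1,-1}
corr = [sum(s[i] * s[(i + t) % 15] for i in range(15)) for t in range(1, 15)]
print(set(corr))
```

Cross-correlation with a decimated sequence, the paper's actual object of study, is computed the same way but between `s` and the sequence sampled at stride d.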
|
1309.7735 | Long-Term Profit-Maximizing Incentive for Crowd Sensing in Mobile Social
Networks | cs.SI cs.GT cs.NI | Crowd sensing is a new paradigm that leverages pervasive sensor-equipped
mobile devices to provide sensing services like forensic analysis, documenting
public spaces, and collaboratively constructing statistical models. Extensive
user participation is indispensable for achieving good service quality.
Nowadays, most existing mechanisms focus on guaranteeing good service
quality based on instantaneous extensive user participation for crowd sensing
applications. Little attention has been dedicated to maximizing the long-term
service quality of crowd sensing applications, due to users' asymmetric
interests, preferences, selfish behaviors, etc. To fill these gaps, in this
paper we derive a closed-form expression for the marginal sensing data quality
based on monopoly aggregation in economics. Furthermore, we design
marginal-quality-based incentive mechanisms for long-term crowd sensing
applications, not only to enhance extensive user participation by maximizing
the expected total profits of mobile users, but also to stimulate mobile users
to produce high-quality contents by applying the marginal quality. Finally,
simulation results show that our mechanisms outperform existing solutions.
|
1309.7750 | An Extensive Experimental Study on the Cluster-based Reference Set
Reduction for speeding-up the k-NN Classifier | cs.LG | The k-Nearest Neighbor (k-NN) classification algorithm is one of the most
widely-used lazy classifiers because of its simplicity and ease of
implementation. It is considered to be an effective classifier and has many
applications. However, its major drawback is that when sequential search is
used to find the neighbors, it involves high computational cost. Speeding-up
k-NN search is still an active research field. Hwang and Cho have recently
proposed an adaptive cluster-based method for fast Nearest Neighbor searching.
The effectiveness of this method is based on the adjustment of three
parameters. However, the authors evaluated their method by setting specific
parameter values and using only one dataset. In this paper, an extensive
experimental study of this method is presented. The results, which are based on
five real life datasets, illustrate that if the parameters of the method are
carefully defined, one can achieve even better classification performance.
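The general idea behind cluster-based reference set reduction — scan only the cluster nearest the query instead of the whole training set — can be sketched generically (this is not Hwang and Cho's specific adaptive method or its three parameters; the blob data and centroid-based clusters are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reference set: two well-separated Gaussian blobs.
X0 = rng.normal(0.0, 0.5, size=(50, 2))
X1 = rng.normal(5.0, 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Cluster step (here one center per blob stands in for the clustering).
centers = np.array([X0.mean(axis=0), X1.mean(axis=0)])
members = [np.arange(50), np.arange(50, 100)]

def knn_pruned(q, k=3):
    """Classify q by k-NN, scanning only the cluster nearest to q."""
    c = int(np.argmin(np.linalg.norm(centers - q, axis=1)))
    idx = members[c]
    d = np.linalg.norm(X[idx] - q, axis=1)
    nn = idx[np.argsort(d)[:k]]
    return int(np.argmax(np.bincount(y[nn])))

print(knn_pruned(np.array([0.2, -0.1])), knn_pruned(np.array([4.8, 5.2])))
```

Each query now touches roughly half the reference set; the tuning the abstract discusses is about when such pruning is safe.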
|
1309.7776 | A new large class of functions not APN infinitely often | cs.IT math.IT | In this paper, we show that there is no vectorial Boolean function of degree
4e, with e satisfying certain conditions, which is APN over infinitely many
extensions of its field of definition. This is a new step in the proof of the
conjecture of Aubry, McGuire and Rodier.
|
1309.7804 | On statistics, computation and scalability | stat.ML cs.LG math.ST stat.TH | How should statistical procedures be designed so as to be scalable
computationally to the massive datasets that are increasingly the norm? When
coupled with the requirement that an answer to an inferential question be
delivered within a certain time budget, this question has significant
repercussions for the field of statistics. With the goal of identifying
"time-data tradeoffs," we investigate some of the statistical consequences of
computational perspectives on scalability, in particular divide-and-conquer
methodology and hierarchies of convex relaxations.
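As a minimal illustration of the divide-and-conquer methodology mentioned above: split the data into blocks, estimate per block, and combine. For a linear statistic such as the mean the combination is exact; for richer estimators this is exactly where time-data tradeoffs appear (a toy sketch, not the paper's analysis):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(loc=3.0, scale=1.0, size=1_000_000)

# Divide: ten equal blocks, each of which could live on a separate machine.
blocks = np.split(data, 10)

# Conquer: per-block means, combined by averaging.
block_means = [b.mean() for b in blocks]
combined = float(np.mean(block_means))

full = float(data.mean())
print(abs(combined - full))  # equal up to floating-point error
```

For non-linear estimators (quantiles, M-estimators), the combined answer only approximates the full-data answer, and the approximation error is part of the tradeoff.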
|
1309.7817 | Performance Analysis of Massive MIMO for Cell-Boundary Users | cs.IT math.IT | In this paper, we consider massive multiple-input multiple-output (MIMO)
systems for both downlink and uplink scenarios, where three radio units (RUs)
connected via one digital unit (DU) support multiple user equipments (UEs) at
the cell-boundary through the same radio resource, i.e., the same
time-frequency slot. For downlink transmitter options, the study considers
zero-forcing (ZF) and maximum ratio transmission (MRT), while for uplink
receiver options it considers ZF and maximum ratio combining (MRC). For the sum
rate of each of these, we derive simple closed-form formulas. In the simple but
practically relevant case where uniform power is allocated to all downlink data
streams, we observe that, for the downlink, vector normalization is better for
ZF while matrix normalization is better for MRT. For a given antenna and user
configuration, we also derive analytically the signal-to-noise-ratio (SNR)
level below which MRC should be used instead of ZF. Numerical simulations
confirm our analytical results.
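The two downlink precoders compared above can be written in a few lines: zero-forcing inverts the channel (nulling inter-user interference) while maximum ratio transmission simply matches to it. The sketch uses vector (per-column) normalization, one of the two options the abstract contrasts; the antenna/user sizes are illustrative, not the paper's RU/DU setup:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 64, 4                 # base-station antennas, single-antenna users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing: right pseudo-inverse, so H @ W_zf is (numerically) diagonal.
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W_zf /= np.linalg.norm(W_zf, axis=0)      # vector (per-user) normalization

# Maximum ratio transmission: conjugate match, interference ignored.
W_mrt = H.conj().T
W_mrt /= np.linalg.norm(W_mrt, axis=0)

G_zf, G_mrt = H @ W_zf, H @ W_mrt         # effective user-by-user gains

off = lambda G: float(np.abs(G - np.diag(np.diag(G))).max())
print("ZF residual interference:", off(G_zf))
print("MRT residual interference:", off(G_mrt))
```

ZF's off-diagonal gains vanish while MRT's do not, which is why the preferred scheme depends on the SNR regime the abstract analyzes.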
|
1309.7823 | The role of detachment of in-links in scale-free networks | math.PR cs.IT math.IT | Real-world networks may exhibit a detachment phenomenon, determined by the
cancelling of previously existing connections. We discuss a tractable extension
of the Yule model to account for this feature. Analytical results are derived and
discussed both asymptotically and for a finite number of links. Comparison with
the original model is performed in the supercritical case. The first-order
asymptotic tail behavior of the two models is similar but differences arise in
the second-order term. We explicitly refer to World Wide Web modeling, and we
show the agreement of the proposed model with very recent data. However, other
possible network applications are also mentioned.
|
1309.7824 | Linear Regression from Strategic Data Sources | cs.GT cs.LG math.ST stat.TH | Linear regression is a fundamental building block of statistical data
analysis. It amounts to estimating the parameters of a linear model that maps
input features to corresponding outputs. In the classical setting where the
precision of each data point is fixed, the famous Aitken/Gauss-Markov theorem
in statistics states that generalized least squares (GLS) is a so-called "Best
Linear Unbiased Estimator" (BLUE). In modern data science, however, one often
faces strategic data sources, namely, individuals who incur a cost for
providing high-precision data.
In this paper, we study a setting in which features are public but
individuals choose the precision of the outputs they reveal to an analyst. We
assume that the analyst performs linear regression on this dataset, and
individuals benefit from the outcome of this estimation. We model this scenario
as a game where individuals minimize a cost comprising two components: (a) an
(agent-specific) disclosure cost for providing high-precision data; and (b) a
(global) estimation cost representing the inaccuracy in the linear model
estimate. In this game, the linear model estimate is a public good that
benefits all individuals. We establish that this game has a unique non-trivial
Nash equilibrium. We study the efficiency of this equilibrium and we prove
tight bounds on the price of stability for a large class of disclosure and
estimation costs. Finally, we study the estimator accuracy achieved at
equilibrium. We show that, in general, Aitken's theorem does not hold under
strategic data sources, though it does hold if individuals have identical
disclosure costs (up to a multiplicative factor). When individuals have
non-identical costs, we derive a bound on the improvement of the equilibrium
estimation cost that can be achieved by deviating from GLS, under mild
assumptions on the disclosure cost functions.
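The Aitken/GLS estimator referenced above weights each observation by its precision. A small simulation of the classical fixed-precision setting (an illustration of the baseline, not of the strategic game) shows the closed form beta_hat = (X'WX)^{-1} X'Wy with W the inverse noise covariance:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d = 500, 2
X = rng.standard_normal((n, d))
beta = np.array([2.0, -1.0])

# Heteroscedastic noise with known per-point variances.
var = rng.uniform(0.1, 4.0, size=n)
y = X @ beta + rng.standard_normal(n) * np.sqrt(var)

# GLS (the BLUE, by the Aitken theorem): weight each point by 1/variance.
W = np.diag(1.0 / var)
beta_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# OLS ignores the precisions.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

print(beta_gls, beta_ols)
```

In the strategic setting of the paper, the variances themselves are chosen by the individuals, which is what breaks the optimality of this weighting.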
|
1309.7841 | Asynchronous Gossip for Averaging and Spectral Ranking | cs.DC cs.IT cs.SY math.IT math.OC | We consider two variants of the classical gossip algorithm. The first variant
is a version of asynchronous stochastic approximation. We highlight a
fundamental difficulty associated with the classical asynchronous gossip
scheme, viz., that it may not converge to a desired average, and suggest an
alternative scheme based on reinforcement learning that has guaranteed
convergence to the desired average. We then discuss a potential application to
a wireless network setting with simultaneous link activation constraints. The
second variant is a gossip algorithm for distributed computation of the
Perron-Frobenius eigenvector of a nonnegative matrix. While the first variant
draws upon a reinforcement learning algorithm for an average cost controlled
Markov decision problem, the second variant draws upon a reinforcement learning
algorithm for risk-sensitive control. We then discuss potential applications of
the second variant to ranking schemes, reputation networks, and principal
component analysis.
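The baseline synchronous gossip iteration, against which the asynchronous variants above are contrasted, is easy to state: nodes repeatedly average with their neighbors, and with a doubly stochastic mixing matrix every value converges to the global average. A toy sketch on a ring (illustrative only; the paper's point is precisely that the asynchronous version can fail to do this):

```python
import numpy as np

n = 8
x = np.arange(n, dtype=float)        # initial values 0..7, average 3.5

# Doubly stochastic mixing matrix for a ring: average with both neighbors.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

for _ in range(500):
    x = W @ x                        # one synchronous gossip round

print(x)                             # every entry close to 3.5
```

The second variant of the paper replaces this averaging map with one whose fixed point is the Perron-Frobenius eigenvector of a nonnegative matrix.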
|
1309.7842 | Difference Balanced Functions and Their Generalized Difference Sets | math.CO cs.IT math.IT | Difference balanced functions from $F_{q^n}^*$ to $F_q$ are closely related
to combinatorial designs and naturally define $p$-ary sequences with the ideal
two-level autocorrelation. In the literature, all known functions of this kind
have the $d$-homogeneous property, and it was conjectured by Gong
and Song that difference balanced functions must be $d$-homogeneous. First we
characterize difference balanced functions by generalized difference sets with
respect to two exceptional subgroups. We then derive several necessary and
sufficient conditions for $d$-homogeneous difference balanced functions. In
particular, we reveal an unexpected equivalence between the $d$-homogeneous
property and multipliers of generalized difference sets. By determining these
multipliers, we prove the Gong-Song conjecture for $q$ prime. Furthermore, we
show that every difference balanced function must be balanced or an affine
shift of a balanced function.
|
1309.7843 | Energy Efficient Telemonitoring of Physiological Signals via Compressed
Sensing: A Fast Algorithm and Power Consumption Evaluation | cs.IT math.IT | Wireless telemonitoring of physiological signals is an important topic in
eHealth. In order to reduce on-chip energy consumption and extend sensor life,
recorded signals are usually compressed before transmission. In this paper, we
adopt compressed sensing (CS) as a low-power compression framework, and propose
a fast block sparse Bayesian learning (BSBL) algorithm to reconstruct original
signals. Experiments on real-world fetal ECG signals and epilepsy EEG signals
showed that the proposed algorithm has good balance between speed and data
reconstruction fidelity when compared to state-of-the-art CS algorithms.
Further, we implemented the CS-based compression procedure and a low-power
compression procedure based on a wavelet transform on a Field-Programmable Gate
Array (FPGA), showing that CS-based compression can largely save energy and
other on-chip computing resources.
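The reconstruction step of a CS telemonitoring pipeline can be illustrated with a generic greedy solver; the sketch below uses orthogonal matching pursuit on a synthetic sparse signal, standing in for the paper's BSBL algorithm (which additionally exploits block structure); all sizes and indices are assumptions for illustration:

```python
import numpy as np

def omp(A, b, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit on the chosen support."""
    r, support = b.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 60, 120, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]          # k-sparse "signal"
b = A @ x_true                                   # compressed measurements

x_hat = omp(A, b, k)
print(np.linalg.norm(x_hat - x_true))
```

On-chip, only the cheap multiplication b = A x is performed; the expensive solver runs at the receiver, which is the source of the energy savings.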
|
1309.7901 | Prefactor Reduction of the Guruswami-Sudan Interpolation Step | cs.IT math.IT | The concept of prefactors is considered in order to decrease the complexity
of the Guruswami-Sudan interpolation step for generalized Reed-Solomon codes.
It is shown that the well-known re-encoding projection due to Koetter et al.
leads to one type of such prefactors. The new type of Sierpinski prefactors is
introduced. The latter are based on the fact that many binomial coefficients in
the Hasse derivative associated with the Guruswami-Sudan interpolation step are
zero modulo the base field characteristic. It is shown that both types of
prefactors can be combined and how arbitrary prefactors can be used to derive a
reduced Guruswami-Sudan interpolation step.
|
1309.7910 | A Simple Proof of Maxwell Saturation for Coupled Scalar Recursions | cs.IT math.IT | Low-density parity-check (LDPC) convolutional codes (or spatially-coupled
codes) were recently shown to approach capacity on the binary erasure channel
(BEC) and binary-input memoryless symmetric channels. The mechanism behind this
spectacular performance is now called threshold saturation via spatial
coupling. This new phenomenon is characterized by the belief-propagation
threshold of the spatially-coupled ensemble increasing to an intrinsic noise
threshold defined by the uncoupled system. In this paper, we present a simple
proof of threshold saturation that applies to a wide class of coupled scalar
recursions. Our approach is based on constructing potential functions for both
the coupled and uncoupled recursions. Our results actually show that the fixed
point of the coupled recursion is essentially determined by the minimum of the
uncoupled potential function and we refer to this phenomenon as Maxwell
saturation. A variety of examples are considered including the
density-evolution equations for: irregular LDPC codes on the BEC, irregular
low-density generator matrix codes on the BEC, a class of generalized LDPC
codes with BCH component codes, the joint iterative decoding of LDPC codes on
intersymbol-interference channels with erasure noise, and the compressed
sensing of random vectors with i.i.d. components.
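The uncoupled recursion behind the first example above, density evolution for a regular LDPC ensemble on the BEC, is a one-line scalar fixed-point iteration x_{l+1} = eps(1-(1-x_l)^{dc-1})^{dv-1}. The sketch bisects for the belief-propagation threshold of the (3,6) ensemble, which lands near the well-known value of about 0.4294:

```python
def converges_to_zero(eps, dv=3, dc=6, iters=2000, tol=1e-10):
    """Scalar BEC density evolution for a (dv, dc)-regular LDPC ensemble."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

lo, hi = 0.0, 1.0
for _ in range(40):              # bisection on the channel erasure probability
    mid = (lo + hi) / 2
    if converges_to_zero(mid):
        lo = mid
    else:
        hi = mid

print(lo)                        # close to 0.4294 for the (3,6) ensemble
```

Threshold saturation means the spatially-coupled version of this recursion succeeds all the way up to the larger, intrinsic (MAP-like) threshold instead.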
|
1309.7912 | An Image-Based Fluid Surface Pattern Model | cs.CV | This work aims at generating a model of the ocean surface and its dynamics
from one or more video cameras. The idea is to model wave patterns from video
as a first step towards a larger system of photogrammetric monitoring of marine
conditions for use in offshore oil drilling platforms. The first part of the
proposed approach consists in reducing the dimensionality of sensor data made
up of the many pixels of each frame of the input video streams. This enables
finding a concise number of most relevant parameters to model the temporal
dataset, yielding an efficient data-driven model of the evolution of the
observed surface. The second part proposes stochastic modeling to better
capture the patterns embedded in the data. One can then draw samples from the
final model, which are expected to simulate the behavior of previously observed
flow, in order to determine conditions that match new observations. In this
paper we focus on proposing and discussing the overall approach and on
comparing two different techniques for dimensionality reduction in the first
stage: principal component analysis and diffusion maps. Work is underway on the
second stage of constructing better stochastic models of fluid surface dynamics
as proposed here.
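The first stage, dimensionality reduction of flattened video frames, can be sketched with plain PCA via the SVD; the frame size and the rank-3 toy "surface" below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def pca_reduce(frames, k):
    """Reduce a (T, H*W) stack of flattened video frames to k temporal
    coefficients per frame via PCA (SVD of the mean-centered data)."""
    X = frames - frames.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    coeffs = U[:, :k] * S[:k]   # temporal coefficients, shape (T, k)
    basis = Vt[:k]              # spatial modes, shape (k, H*W)
    return coeffs, basis

# toy data: 100 frames of a 16x16 "surface" driven by 3 temporal modes
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
modes = rng.standard_normal((3, 256))
frames = np.sin(np.outer(t, [1, 2, 3])) @ modes
coeffs, basis = pca_reduce(frames, 3)
recon = coeffs @ basis + frames.mean(axis=0)
print(np.allclose(recon, frames, atol=1e-8))  # rank-3 data -> exact
```

The few temporal coefficients per frame are then what the second (stochastic-modeling) stage would operate on.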
|
1309.7919 | Critical Transitions In a Model of a Genetic Regulatory System | nlin.CD cs.CE cs.CG math.AP q-bio.GN q-bio.QM | We consider a model for substrate-depletion oscillations in genetic systems,
based on a stochastic differential equation with a slowly evolving external
signal. We show the existence of critical transitions in the system. We apply
two methods to numerically test the synthetic time series generated by the
system for early indicators of critical transitions: a detrended fluctuation
analysis method, and a novel method based on topological data analysis
(persistence diagrams).
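As an illustration of the first early-indicator method, a minimal detrended fluctuation analysis fits in a few lines (the window scales and series length below are arbitrary choices):

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: scaling exponent alpha from the
    slope of log F(n) versus log n, with linear detrending per window."""
    y = np.cumsum(x - np.mean(x))        # integrated profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        rms = []
        for s in segs:
            c = np.polyfit(t, s, 1)      # least-squares linear trend
            rms.append(np.mean((s - np.polyval(c, t)) ** 2))
        F.append(np.sqrt(np.mean(rms)))
    a, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return a

rng = np.random.default_rng(1)
alpha = dfa(rng.standard_normal(4000), scales=[16, 32, 64, 128, 256])
print(round(alpha, 2))  # close to 0.5 for white noise
```

An increase of alpha toward 1 in sliding windows of the synthetic series would serve as the early-warning indicator.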
|
1309.7935 | Maximizing Utility Among Selfish Users in Social Groups | cs.NI cs.DS cs.SI | We consider the problem of a social group of users trying to obtain a
"universe" of files, first from a server and then via exchange amongst
themselves. We consider the selfish file-exchange paradigm of give-and-take,
whereby two users can exchange files only if each has something unique to offer
the other. We are interested in maximizing the number of users who can obtain
the universe through a schedule of file-exchanges. We first present a practical
paradigm of file acquisition. We then present an algorithm which ensures that
at least half the users obtain the universe with high probability for $n$ files
and $m=O(\log n)$ users when $n\rightarrow\infty$, thereby showing an
approximation ratio of 2. Extending these ideas, we show a $1+\epsilon_1$ -
approximation algorithm for $m=O(n)$, $\epsilon_1>0$ and a $(1+z)/2
+\epsilon_2$ - approximation algorithm for $m=O(n^z)$, $z>1$, $\epsilon_2>0$.
Finally, we show that for any $m=O(e^{o(n)})$, there exists a schedule of file
exchanges which ensures that at least half the users obtain the universe.
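The give-and-take constraint can be stated compactly; the sketch below assumes, purely for illustration, that a permitted exchange transfers every file, so both parties end with the union of their collections:

```python
def give_and_take(a, b):
    """One give-and-take step between users a and b (sets of files):
    an exchange is allowed only if each holds a file the other lacks.
    Assumption for this sketch: an exchange transfers all files."""
    if a - b and b - a:
        union = a | b
        return union, union
    return a, b

u1, u2 = give_and_take({1, 2}, {2, 3})
print(u1 == u2 == {1, 2, 3})  # True: each had something unique to offer
```

A schedule is then a sequence of such pairwise steps, and the objective is to maximize how many users end up holding the full universe.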
|
1309.7937 | Stationary Cycling Induced by Switched Functional Electrical Stimulation
Control | cs.SY | Functional electrical stimulation (FES) is used to activate the dysfunctional
lower limb muscles of individuals with neuromuscular disorders to produce
cycling as a means of exercise and rehabilitation. However, FES-cycling is
still metabolically inefficient and yields low power output at the cycle crank
compared to able-bodied cycling. Previous literature suggests that these
problems are symptomatic of poor muscle control and non-physiological muscle
fiber recruitment. The latter is a known problem with FES in general, and the
former motivates investigation of better control methods for FES-cycling. In
this paper, a stimulation pattern for quadriceps femoris-only FES-cycling is
derived based on the effectiveness of knee joint torque in producing forward
pedaling. In addition, a switched sliding-mode controller is designed for the
uncertain, nonlinear cycle-rider system with autonomous state-dependent
switching. The switched controller yields ultimately bounded tracking of a
desired trajectory in the presence of an unknown, time-varying, bounded
disturbance, provided a reverse dwell-time condition is satisfied by
appropriate choice of the control gains and a sufficient desired cadence.
Stability is derived through Lyapunov methods for switched systems, and
experimental results demonstrate the performance of the switched control system
under typical cycling conditions.
|
1309.7958 | A Statistical Learning Based System for Fake Website Detection | cs.CY cs.LG | Existing fake website detection systems are unable to effectively detect fake
websites. In this study, we advocate the development of fake website detection
systems that employ classification methods grounded in statistical learning
theory (SLT). Experimental results reveal that a prototype system developed
using SLT-based methods outperforms seven existing fake website detection
systems on a test bed encompassing 900 real and fake websites.
|
1309.7959 | Exploration and Exploitation in Visuomotor Prediction of Autonomous
Agents | cs.LG cs.CV math.DS | This paper discusses various techniques that let an agent autonomously
learn to predict the effects of its own actions on its sensor data, and
evaluates their usefulness when applied to visual sensors. An Extreme Learning Machine is used
for visuomotor prediction, while various autonomous control techniques that can
aid the prediction process by balancing exploration and exploitation are
discussed and tested in a simple system: a camera moving over a 2D greyscale
image.
|
1309.7960 | A Classification of Configuration Spaces of Planar Robot Arms with
Application to a Continuous Inverse Kinematics Problem | math.DG cs.RO | Using results on the topology of moduli space of polygons [Jaggi, 92;
Kapovich and Millson, 94], it can be shown that for a planar robot arm with $n$
segments there are some values of the base-length, $z$, at which the
configuration space of the constrained arm (arm with its end effector fixed)
has two disconnected components, while at other values the constrained
configuration space has one connected component. We first review some of these
known results.
Then the main design problem addressed in this paper is the construction of
pairs of continuous inverse kinematics for arbitrary robot arms, with the
property that the two inverse kinematics agree when the constrained
configuration space has a single connected component, but they give distinct
configurations (one in each connected component) when the configuration space
of the constrained arm has two components. This design is made possible by a
fundamental theoretical contribution in this paper -- a classification of
configuration spaces of robot arms such that the type of path that the system
(robot arm) takes through certain critical values of the forward kinematics
function is completely determined by the class to which the configuration space
of the arm belongs. This classification result makes the aforesaid design
problem tractable, making it sufficient to design a pair of inverse kinematics
for each class of configuration spaces (three of them in total).
We discuss the motivation for this work, which comes from a more extensive
problem of motion planning for the end effector of a robot arm requiring us to
continuously sample one configuration from each connected component of the
constrained configuration spaces.
We demonstrate the low complexity of the presented algorithm through a
Javascript + HTML5 based implementation available at
http://hans.math.upenn.edu/~subhrabh/nowiki/robot_arm_JS-HTML5/arm.html
|
1309.7964 | A General Formula for the Mismatch Capacity | cs.IT math.IT | The fundamental limits of channels with mismatched decoding are addressed. A
general formula is established for the mismatch capacity of a general channel,
defined as a sequence of conditional distributions with a general sequence of
decoding metrics. We deduce an identity between the Verd\'{u}-Han general
channel capacity formula and the mismatch capacity formula applied to the
maximum likelihood decoding metric. Further, several upper bounds on the capacity are
provided, and a simpler expression for a lower bound is derived for the case of
a non-negative decoding metric. The general formula is specialized to the case
of finite input and output alphabet channels with a type-dependent metric. The
closely related problem of threshold mismatched decoding is also studied, and a
general expression for the threshold mismatch capacity is obtained. As an
example of threshold mismatch capacity, we state a general expression for the
erasures-only capacity of the finite input and output alphabet channel. We
observe that for every channel there exists a (matched) threshold decoder which
is capacity achieving. Additionally, necessary and sufficient conditions are
stated for a channel to have a strong converse. Csisz\'{a}r and Narayan's
conjecture is proved for bounded metrics, providing a positive answer to the
open problem introduced in [1], i.e., that the "product-space" improvement of
the lower random coding bound, $C_q^{(\infty)}(W)$, is indeed the mismatch
capacity of the discrete memoryless channel $W$. We conclude by presenting an
identity between the threshold capacity and $C_q^{(\infty)}(W)$ in the DMC
case.
|
1309.7971 | Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial
Intelligence (2013) | cs.AI | This is the Proceedings of the Twenty-Ninth Conference on Uncertainty in
Artificial Intelligence, which was held in Bellevue, WA, on August 11-15, 2013.
|
1309.7982 | On the Feature Discovery for App Usage Prediction in Smartphones | cs.LG | With the increasing number of mobile Apps being developed, Apps are now closely
integrated into daily life. In this paper, we develop a framework to predict
the mobile Apps that are most likely to be used given the current device status
of a smartphone. Such an App usage prediction framework is a crucial
prerequisite for fast App launching, intelligent user experience, and power
management of smartphones. By analyzing real App usage log data, we discover
two kinds of features: The Explicit Feature (EF) from sensing readings of
built-in sensors, and the Implicit Feature (IF) from App usage relations. The
IF feature is derived by constructing the proposed App Usage Graph (abbreviated
as AUG) that models App usage transitions. In light of AUG, we are able to
discover usage relations among Apps. Since users may have different usage
behaviors on their smartphones, we further propose one personalized feature
selection algorithm. We explore minimum description length (MDL) from the
training data and select those features which need less length to describe the
training data. The personalized feature selection can successfully reduce the
log size and the prediction time. Finally, we adopt the kNN classification
model to predict App usage. Note that only the features selected by the
proposed personalized feature selection algorithm need to be kept, which in
turn reduces the prediction time and avoids the curse of dimensionality when
using the kNN classifier. We conduct a comprehensive
experimental study based on a real mobile App usage dataset. The results
demonstrate the effectiveness of the proposed framework and show the predictive
capability for App usage prediction.
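The final classification step can be sketched with a plain kNN rule over the selected features; the toy device-status features and App labels below are invented for illustration:

```python
import numpy as np
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Predict the next App as the majority label among the k nearest
    feature vectors in the usage log (Euclidean distance)."""
    d = np.linalg.norm(train_X - x, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(train_y[i] for i in nearest).most_common(1)[0][0]

# toy log: feature = (hour-of-day / 24, battery level), label = App used
X = np.array([[0.30, 0.9], [0.35, 0.8], [0.90, 0.2], [0.85, 0.3]])
y = ["mail", "mail", "game", "game"]
print(knn_predict(X, y, np.array([0.32, 0.85])))  # -> mail
```

Restricting `train_X` to the personally selected features is what keeps both the log size and the per-query distance computation small.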
|
1310.0005 | Message passing optimization of Harmonic Influence Centrality | math.OC cs.SI cs.SY | This paper proposes a new measure of node centrality in social networks, the
Harmonic Influence Centrality, which emerges naturally in the study of social
influence over networks. Using an intuitive analogy between social and
electrical networks, we introduce a distributed message passing algorithm to
compute the Harmonic Influence Centrality of each node. Although its design is
based on theoretical results which assume the network to have no cycle, the
algorithm can also be successfully applied on general graphs.
|
1310.0036 | Personal Identification from Lip-Print Features using a Statistical
Model | cs.CV | This paper presents a novel approach towards identification of human beings
from the statistical analysis of their lip prints. Lip features are extracted
by studying the spatial orientations of the grooves present in lip prints of
individuals using standard edge detection techniques. Horizontal, vertical and
diagonal groove features are analysed using connected-component analysis to
generate the region-specific edge datasets. Comparing test and reference
sample datasets against a threshold value that defines a match yields
satisfactory results. FAR, FRR and ROC metrics have been used to gauge the
performance of the algorithm for real-world deployment in unimodal and
multimodal biometric verification systems.
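The edge-extraction and connected-component step can be sketched with standard tools (here `scipy.ndimage`, with a synthetic image and an arbitrary threshold standing in for real lip prints):

```python
import numpy as np
from scipy import ndimage

def groove_components(img, frac=0.5):
    """Threshold the Sobel gradient magnitude and label 4-connected
    components (candidate groove regions)."""
    mag = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    labels, n = ndimage.label(mag > frac * mag.max())
    return labels, n

# synthetic "print" with two well-separated blobs
img = np.zeros((20, 20))
img[3:8, 3:8] = 1.0
img[12:17, 12:17] = 1.0
_, n = groove_components(img)
print(n)  # two separated blobs -> two edge components
```

Orientation-specific groove features would then be obtained by repeating the labeling on directional (horizontal, vertical, diagonal) gradient responses separately.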
|
1310.0046 | Spectra of random graphs with community structure and arbitrary degrees | cs.SI cond-mat.stat-mech physics.soc-ph | Using methods from random matrix theory, researchers have recently calculated
the full spectra of random networks with arbitrary degrees and with community
structure. Both reveal interesting spectral features, including deviations from
the Wigner semicircle distribution and phase transitions in the spectra of
community structured networks. In this paper we generalize both calculations,
giving a prescription for calculating the spectrum of a network with both
community structure and an arbitrary degree distribution. In general the
spectrum has two parts, a continuous spectral band, which can depart strongly
from the classic semicircle form, and a set of outlying eigenvalues that
indicate the presence of communities.
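The qualitative picture, a continuous bulk plus outlying community eigenvalues, is easy to reproduce numerically on a two-community stochastic block model (the sizes and edge probabilities below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p_in, p_out = 1000, 0.05, 0.005
g = np.repeat([0, 1], n // 2)                       # two equal communities
P = np.where(g[:, None] == g[None, :], p_in, p_out)
A = np.triu(rng.random((n, n)) < P, 1)
A = (A + A.T).astype(float)                         # symmetric adjacency

ev = np.sort(np.linalg.eigvalsh(A))
c = A.sum() / n                                     # mean degree
# the continuous band sits roughly within [-2 sqrt(c), 2 sqrt(c)];
# the two largest eigenvalues (degree and community) lie outside it
print(ev[-1] > ev[-2] > 2 * np.sqrt(c))
```

With a non-Poisson degree distribution the band departs from the semicircle form, which is the regime the generalized calculation in the paper addresses.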
|
1310.0054 | Towards Optimal Secure Distributed Storage Systems with Exact Repair | cs.IT math.IT | Distributed storage systems in the presence of a wiretapper are considered. A
distributed storage system (DSS) is parameterized by three parameters (n, k, d),
in which a file stored across n distributed nodes can be recovered from any k
out of n nodes. If a node fails, any d out of (n-1) nodes help in the repair of
the failed node. For such an (n,k,d)-DSS, two types of wiretapping scenarios are
investigated: (a) Type-I (node) adversary which can wiretap the data stored on
any l<k nodes; and a more severe (b) Type-II (repair data) adversary which can
wiretap the contents of the repair data that is used to repair a set of l
failed nodes over time. The focus of this work is on the practically relevant
setting of exact repair regeneration in which the repair process must replace a
failed node by its exact replica. We make new progress on several non-trivial
instances of this problem which prior to this work have been open. The main
contribution of this paper is the optimal characterization of the secure
storage-vs-exact-repair-bandwidth tradeoff region of a (n,k,d)-DSS, with n<=4
and any l<k in the presence of both Type-I and Type-II adversaries. While the
problem remains open for a general (n,k,d)-DSS with n>4, we present extensions
of these results to a (n, n-1,n-1)-DSS, in presence of a Type-II adversary that
can observe the repair data of any l=(n-2) nodes. The key technical
contribution of this work is in developing novel information theoretic converse
proofs for the Type-II adversarial scenario. From our results, we show that in
the presence of Type-II attacks, the only efficient point in the
storage-vs-exact-repair-bandwidth tradeoff is the MBR (minimum bandwidth
regenerating) point. This is in sharp contrast to the case of a Type-I attack
in which the storage-vs-exact-repair-bandwidth tradeoff allows a spectrum of
operating points beyond the MBR point.
|
1310.0058 | Some issues with Quasi-Steady State Model in Long-term Stability | cs.SY | The Quasi Steady-State (QSS) model of long-term dynamics relies on the idea
of time-scale decomposition. Assuming that the fast variables are infinitely
fast and are stable in the long-term, the QSS model replaces the differential
equations of transient dynamics by their equilibrium equations to reduce
complexity and increase computation efficiency. Although the idea of QSS model
is intuitive, its theoretical foundation has not yet been developed. In this
paper, several counterexamples in which the QSS model fails to provide a
correct approximation of the complete dynamic model of a power system are
presented, and the reasons for the failure are explained from the viewpoint of
nonlinear analysis.
|
1310.0063 | Online Approximate Optimal Station Keeping of an Autonomous Underwater
Vehicle | cs.SY cs.RO math.OC | Online approximation of an optimal station keeping strategy for a fully
actuated six degrees-of-freedom autonomous underwater vehicle is considered.
The developed controller is an approximation of the solution to a two player
zero-sum game where the controller is the minimizing player and an external
disturbance is the maximizing player. The solution is approximated using a
reinforcement learning-based actor-critic framework. The result guarantees
uniformly ultimately bounded (UUB) convergence of the states and UUB
convergence of the approximated policies to the optimal polices without the
requirement of persistence of excitation.
|
1310.0064 | Online Approximate Optimal Path-Following for a Kinematic Unicycle | cs.SY math.OC | Online approximation of an infinite horizon optimal path-following strategy
for a kinematic unicycle is considered. The solution to the optimal control
problem is approximated using an approximate dynamic programming technique that
uses concurrent-learning-based adaptive update laws to estimate the unknown
value function. The developed controller overcomes challenges with the
approximation of the infinite horizon value function using an auxiliary
function that describes the motion of a virtual target on the desired path. The
developed controller guarantees uniformly ultimately bounded (UUB) convergence
of the vehicle to a desired path while maintaining a desired speed profile and
UUB convergence of the approximate policy to the optimal policy. Simulation
results are included to demonstrate the controller's performance.
|
1310.0068 | Automatic estimation of the regularization parameter in 2-D focusing
gravity inversion: an application to the Safo manganese mine in northwest of
Iran | cs.CE | We investigate the use of Tikhonov regularization with the minimum support
stabilizer for underdetermined 2-D inversion of gravity data. This stabilizer
produces models with non-smooth properties, which is useful for identifying
geologic structures with sharp boundaries. A very important aspect of using
Tikhonov regularization is the choice of the regularization parameter, which
controls the trade-off between the data fidelity and the stabilizing
functional. The L-curve and generalized cross validation techniques, which only
require the relative sizes of the uncertainties in the observations, are
considered. Both criteria are applied in an iterative process in which at each
iteration a value for the regularization parameter is estimated. Suitable values
for the regularization parameter are successfully determined in both cases for
synthetic but practically relevant examples. Whenever the geologic situation
permits, it is easier and more efficient to model the subsurface with a 2-D
algorithm, rather than to apply a full 3-D approach. Then, because the problem
is not large it is appropriate to use the generalized singular value
decomposition for solving the problem efficiently. The method is applied on a
profile of gravity data acquired over the Safo mining camp in Maku-Iran, which
is well known for manganese ores. The presented results demonstrate success in
reconstructing the geometry and density distribution of the subsurface source.
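For standard-form Tikhonov regularization, the GCV criterion has a closed form in the SVD basis. The sketch below illustrates only that criterion on an invented toy problem (discrete integration with noisy data); it does not reproduce the paper's minimum-support stabilizer, GSVD formulation, or gravity kernel:

```python
import numpy as np

def gcv_tikhonov(A, b, lams):
    """Standard-form Tikhonov via the SVD: pick the lambda minimizing
    the GCV score, using filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best = None
    for lam in lams:
        f = s ** 2 / (s ** 2 + lam ** 2)
        gcv = np.sum(((1 - f) * beta) ** 2) / (len(b) - f.sum()) ** 2
        if best is None or gcv < best[0]:
            best = (gcv, lam, Vt.T @ (f * beta / s))
    return best[1], best[2]

# toy mildly ill-posed problem: discrete integration of a sine, noisy data
n = 60
t = np.linspace(0.0, 1.0, n)
A = np.tril(np.ones((n, n))) / n
x_true = np.sin(2 * np.pi * t)
rng = np.random.default_rng(4)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

lam, x = gcv_tikhonov(A, b, np.logspace(-6, 0, 80))
rel = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel < 0.5)  # the GCV-chosen lambda gives a sensible reconstruction
```

In the iterative scheme described in the abstract, such a parameter estimate would be recomputed at every iteration of the (re-weighted) inversion.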
|
1310.0097 | Analysis of Amoeba Active Contours | cs.CV | Subject of this paper is the theoretical analysis of structure-adaptive
median filter algorithms that approximate curvature-based PDEs for image
filtering and segmentation. These so-called morphological amoeba filters are
based on a concept introduced by Lerallut et al. They achieve similar results
as the well-known geodesic active contour and self-snakes PDEs. In the present
work, the PDE approximated by amoeba active contours is derived for a general
geometric situation and general amoeba metric. This PDE is structurally similar
but not identical to the geodesic active contour equation. It reproduces the
previous PDE approximation results for amoeba median filters as special cases.
Furthermore, modifications of the basic amoeba active contour algorithm are
analysed that are related to the morphological force terms frequently used with
geodesic active contours. Experiments demonstrate the basic behaviour of amoeba
active contours and its similarity to geodesic active contours.
|
1310.0101 | Robust Adaptive Beamforming Algorithms Based on the Constrained Constant
Modulus Criterion | cs.IT math.IT | We present a robust adaptive beamforming algorithm based on the worst-case
criterion and the constrained constant modulus approach, which exploits the
constant modulus property of the desired signal. Similarly to the existing
worst-case beamformer with the minimum variance design, the problem can be
reformulated as a second-order cone (SOC) program and solved with interior
point methods. An analysis of the optimization problem is carried out and
conditions are obtained for enforcing its convexity and for adjusting its
parameters. Furthermore, low-complexity robust adaptive beamforming algorithms
based on the modified conjugate gradient (MCG) and an alternating optimization
strategy are proposed. The proposed low-complexity algorithms can compute the
existing worst-case constrained minimum variance (WC-CMV) and the proposed
worst-case constrained constant modulus (WC-CCM) designs with a quadratic cost
in the number of parameters. Simulations show that the proposed WC-CCM
algorithm performs better than existing robust beamforming algorithms.
Moreover, the numerical results also show that the performances of the proposed
low-complexity algorithms are equivalent or better than that of existing robust
algorithms, whereas the complexity is more than an order of magnitude lower.
|
1310.0110 | An information measure for comparing top $k$ lists | cs.IT cs.LG math.IT | Comparing the top $k$ elements between two or more ranked results is a common
task in many contexts and settings. A few measures have been proposed to
compare top $k$ lists with attractive mathematical properties, but they face a
number of pitfalls and shortcomings in practice. This work introduces a new
measure to compare any two top $k$ lists based on the information these
lists convey. Our method investigates the compressibility of the lists, and the
length of the message to losslessly encode them gives a natural and robust
measure of their variability. This information-theoretic measure objectively
reconciles all the main considerations that arise when measuring
(dis-)similarity between lists: the extent of their non-overlapping elements in
each of the lists; the amount of disarray among overlapping elements between
the lists; the measurement of displacement of actual ranks of their overlapping
elements.
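A standard proxy for such compression-based comparison is the normalized compression distance with an off-the-shelf compressor; this illustrates the idea but is not the paper's exact encoding:

```python
import zlib

def ncd(a, b):
    """Normalized compression distance between two ranked lists,
    serialized as comma-joined strings (Cilibrasi-Vitanyi style)."""
    x = ",".join(map(str, a)).encode()
    y = ",".join(map(str, b)).encode()
    c = lambda s: len(zlib.compress(s, 9))
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

same = list(range(50))
disjoint = list(range(100, 150))
print(ncd(same, same) < ncd(same, disjoint))  # identical lists are closer
```

A dedicated encoding of the two lists would additionally account for rank displacement among overlapping elements, as the measure in the paper does.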
|
1310.0120 | Covering sets for limited-magnitude errors | cs.IT math.IT math.NT | For a set
$\cM=\{-\mu,-\mu+1,\ldots, \lambda\}\setminus\{0\}$ with non-negative
integers $\lambda,\mu<q$ not both 0, a subset $\cS$ of the residue class ring
$\Z_q$ modulo an integer $q\ge 1$ is called a $(\lambda,\mu;q)$-\emph{covering
set} if $$ \cM \cS=\{ms \bmod q : m\in \cM,\ s\in \cS\}=\Z_q. $$ Small covering
sets play an important role in codes correcting limited-magnitude errors. We
give an explicit construction of a $(\lambda,\mu;q)$-covering set $\cS$ which
is of the size $q^{1 + o(1)}\max\{\lambda,\mu\}^{-1/2}$ for almost all integers
$q\ge 1$ and of optimal size $p\max\{\lambda,\mu\}^{-1}$ if $q=p$ is prime.
Furthermore, using a bound on the fourth moment of character sums of Cochrane
and Shi we prove the bound $$\omega_{\lambda,\mu}(q)\le
q^{1+o(1)}\max\{\lambda,\mu\}^{-1/2},$$ for any integer $q\ge 1$, however the
proof of this bound is not constructive.
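The covering condition is easy to check directly, and for tiny parameters the smallest covering set can even be found by brute force (purely illustrative; the paper's constructions are explicit and asymptotic):

```python
from itertools import combinations

def is_covering_set(S, lam, mu, q):
    """True iff M*S = Z_q for M = {-mu, ..., lam} minus {0}."""
    M = [m for m in range(-mu, lam + 1) if m != 0]
    return {(m * s) % q for m in M for s in S} == set(range(q))

def smallest_covering(lam, mu, q):
    """Exhaustive search; feasible only for very small q."""
    for k in range(1, q + 1):
        for S in combinations(range(q), k):
            if is_covering_set(S, lam, mu, q):
                return set(S)

S = smallest_covering(2, 0, 7)
print(sorted(S))  # note 0 must belong: for prime q, 0 = m*s forces s = 0
```

Here lam = 2, mu = 0 gives M = {1, 2}, and the search confirms that five residues are needed to cover Z_7.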
|
1310.0129 | The squashed entanglement of a quantum channel | quant-ph cs.IT math.IT | This paper defines the squashed entanglement of a quantum channel as the
maximum squashed entanglement that can be registered by a sender and receiver
at the input and output of a quantum channel, respectively. A new subadditivity
inequality for the original squashed entanglement measure of Christandl and
Winter leads to the conclusion that the squashed entanglement of a quantum
channel is an additive function of a tensor product of any two quantum
channels. More importantly, this new subadditivity inequality, along with prior
results of Christandl, Winter, et al., establishes the squashed entanglement of
a quantum channel as an upper bound on the quantum communication capacity of
any channel assisted by unlimited forward and backward classical communication.
A similar proof establishes this quantity as an upper bound on the private
capacity of a quantum channel assisted by unlimited forward and backward public
classical communication. This latter result is relevant as a limitation on
rates achievable in quantum key distribution. As an important application, we
determine that these capacities can never exceed log((1+eta)/(1-eta)) for a
pure-loss bosonic channel for which a fraction eta of the input photons make it
to the output on average. The best known lower bound on these capacities is
equal to log(1/(1-eta)). Thus, in the high-loss regime for which eta << 1, this
new upper bound demonstrates that the protocols corresponding to the above
lower bound are nearly optimal.
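The gap between the stated upper bound log((1+eta)/(1-eta)) and the known lower bound log(1/(1-eta)) is easy to evaluate numerically; in the high-loss regime both scale linearly in eta and their ratio approaches 2:

```python
import math

def upper(eta):
    """Squashed-entanglement upper bound (bits per channel use)."""
    return math.log2((1 + eta) / (1 - eta))

def lower(eta):
    """Best known achievable rate (bits per channel use)."""
    return math.log2(1 / (1 - eta))

for eta in (0.5, 0.1, 0.01, 0.001):
    print(eta, round(upper(eta) / lower(eta), 3))
# the ratio tends to 2 as eta -> 0: both bounds scale linearly in eta
```

This is the quantitative sense in which the known protocols are nearly optimal for highly lossy channels.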
|
1310.0132 | The 4-error linear complexity distribution for $2^n$-periodic binary
sequences | cs.CR cs.IT math.IT | By using the sieve method of combinatorics, we study $k$-error linear
complexity distribution of $2^n$-periodic binary sequences based on Games-Chan
algorithm. For $k=4,5$, the complete counting functions on the $k$-error linear
complexity of $2^n$-periodic balanced binary sequences (with linear complexity
less than $2^n$) are presented. As a consequence of the result, the complete
counting functions on the 4-error linear complexity of $2^n$-periodic binary
sequences (with linear complexity $2^n$ or less than $2^n$) are obvious.
Generally, the complete counting functions on the $k$-error linear complexity
of $2^n$-periodic binary sequences can be obtained with a similar approach.
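The Games-Chan algorithm on which the counting is based computes the linear complexity of a $2^n$-periodic binary sequence by repeated halving; a minimal sketch:

```python
def games_chan(s):
    """Linear complexity of one period (length 2^n) of a binary
    sequence, via the Games-Chan halving recursion."""
    s = list(s)
    c = 0
    while len(s) > 1:
        half = len(s) // 2
        L, R = s[:half], s[half:]
        if L == R:
            s = L                               # complexity unchanged
        else:
            c += half                           # add half the length
            s = [a ^ b for a, b in zip(L, R)]   # recurse on L xor R
    return c + s[0]

print(games_chan([1, 0, 1, 0]), games_chan([1, 1, 1, 1]))  # 2 1
```

The $k$-error linear complexity is then the minimum of this quantity over all sequences within Hamming distance $k$ of one period, which is what the sieve-based counting in the paper enumerates.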
|
1310.0133 | Online Performance Optimization of a DC Motor Driving a Variable Pitch
Propeller | math.OC cs.SY | A practical online optimization scheme is developed for performance
optimization of an electrical aircraft propulsion system. The goal is to
minimize the power extraction of the propulsion system for any given thrust
value. The online optimizer computes the optimum pitch angle of a variable
pitch propeller by minimizing the power of the system for a command thrust
value. This algorithm is tested on a DC motor driving a variable pitch
propeller; the experimental hardware setup of the DC motor along with its
variable pitch propeller is also described. Experimental results show the
efficiency and practicality of the proposed online optimization scheme.
Outstanding issues are sketched.
|
1310.0141 | Hopping over Big Data: Accelerating Ad-hoc OLAP Queries with Grasshopper
Algorithms | cs.DB | This paper presents a family of algorithms for fast subset filtering within
ordered sets of integers representing composite keys. Applications include
significant acceleration of (ad-hoc) analytic queries against a data warehouse
without any additional indexing. The algorithms work for point, range and set
restrictions on multiple attributes, in any combination, and are inherently
multidimensional. The main idea consists in intelligent combination of
sequential crawling with jumps over large portions of irrelevant keys. The way
to combine them is adaptive to characteristics of the underlying data store.
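The crawl-plus-jump idea can be sketched on a sorted list of integer composite keys; this shows only the main idea, with an invented key encoding, not the paper's adaptive algorithms:

```python
from bisect import bisect_left

def grasshopper_filter(keys, a_vals, b_val, width=100):
    """Scan a sorted list of composite keys a*width + b, returning those
    with a in a_vals and b == b_val, hopping over irrelevant key ranges
    with binary search instead of crawling one key at a time."""
    out, i, n = [], 0, len(keys)
    for a in sorted(a_vals):
        target = a * width + b_val
        i = bisect_left(keys, target, i)   # jump over irrelevant keys
        if i < n and keys[i] == target:
            out.append(target)
    return out

keys = sorted(a * 100 + b for a in range(50) for b in (3, 5, 9))
print(grasshopper_filter(keys, {7, 11, 42}, 5))  # [705, 1105, 4205]
```

The choice between jumping and sequential crawling would, in the adaptive versions, depend on how dense the matching keys are in the underlying store.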
|
1310.0145 | Optimal Routing and Scheduling of Charge for Electric Vehicles: Case
Study | cs.SY cs.MA | In Colombia, there is increasing interest in improving public
transportation. One of the proposed strategies is the use of battery
electric vehicles (BEVs). One of the new challenges is the BEV routing
problem, which is subject to the traditional issues of routing problems
and must also consider the particularities of autonomy, charging and battery
degradation of BEVs. In this work, a scheme that coordinates the routing,
scheduling of charge and operating costs of BEVs is proposed. The simplified
operating costs have been modeled considering both charging fees and battery
degradation. A case study is presented, in order to illustrate the proposed
methodology. The given case considers an airport shuttle service scenario, in
which energy consumption of the BEVs is estimated based on experimentally
measured driving patterns.
|
1310.0154 | Incoherence-Optimal Matrix Completion | cs.IT cs.LG math.IT stat.ML | This paper considers the matrix completion problem. We show that it is not
necessary to assume joint incoherence, which is a standard but unintuitive and
restrictive condition that is imposed by previous studies. This leads to a
sample complexity bound that is order-wise optimal with respect to the
incoherence parameter (as well as to the rank $r$ and the matrix dimension $n$
up to a log factor). As a consequence, we improve the sample complexity of
recovering a semidefinite matrix from $O(nr^{2}\log^{2}n)$ to $O(nr\log^{2}n)$,
and the highest allowable rank from $\Theta(\sqrt{n}/\log n)$ to
$\Theta(n/\log^{2}n)$. The key step in the proof is to obtain new bounds on the
$\ell_{\infty,2}$-norm, defined as the maximum of the row and column norms of a
matrix. To illustrate the applicability of our techniques, we discuss
extensions to SVD projection, structured matrix completion and semi-supervised
clustering, for which we provide order-wise improvements over existing results.
Finally, we turn to the closely-related problem of low-rank-plus-sparse matrix
decomposition. We show that the joint incoherence condition is unavoidable here
for polynomial-time algorithms conditioned on the Planted Clique conjecture.
This means it is intractable in general to separate a rank-$\omega(\sqrt{n})$
positive semidefinite matrix and a sparse matrix. Interestingly, our results
show that the standard and joint incoherence conditions are associated
respectively with the information (statistical) and computational aspects of
the matrix decomposition problem.
|
1310.0163 | The elliptic model for social fluxes | physics.soc-ph cs.SI | In this paper, a model (called the elliptic model) is proposed to estimate
the number of social ties between two locations using population data in a
similar manner to how transportation research deals with trips. To overcome the
asymmetry of transportation models, the new model considers that the number of
relationships between two locations is inversely proportional to the population
in the ellipse whose foci are in these two locations. The elliptic model is
evaluated by considering the anonymous communications patterns of 25 million
users from three different countries, where a location has been assigned to
each user based on their most used phone tower or billing zip code. With this
information, spatial social networks are built at three levels of resolution:
tower, city and region for each of the three countries. The elliptic model
achieves a similar performance when predicting communication fluxes as
transportation models do when predicting trips. This shows that human
relationships are influenced at least as much by geography as is human
mobility.
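A direct reading of the model can be sketched as follows; the proportionality constant and the choice of ellipse size (sum of distances to the foci at most twice the focal distance) are illustrative assumptions:

```python
import numpy as np

def elliptic_ties(i, j, coords, pop, k=1.0):
    """Estimated ties between locations i and j: inversely proportional
    to the population inside the ellipse whose foci are the two
    locations (ellipse size here is an assumed, illustrative choice)."""
    d = lambda a, b: float(np.linalg.norm(coords[a] - coords[b]))
    major = 2.0 * d(i, j)   # sum-of-distances cutoff defining the ellipse
    inside = [p for p in range(len(pop)) if d(p, i) + d(p, j) <= major]
    return k / sum(pop[p] for p in inside)

# two pairs at equal distance; the pair in the densely populated area is
# predicted to share fewer ties, since more people live "between" them
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.2],
                   [10.0, 10.0], [11.0, 10.0]])
pop = [100, 100, 500, 50, 50]
print(elliptic_ties(3, 4, coords, pop) > elliptic_ties(0, 1, coords, pop))
```

Unlike gravity-style trip models, this quantity is symmetric in the two locations by construction, which is the asymmetry problem the model is designed to avoid.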
|
1310.0171 | Object Detection Using Keygraphs | cs.CV | We propose a new framework for object detection based on a generalization of
the keypoint correspondence framework. This framework is based on replacing
keypoints with keygraphs, i.e., isomorphic directed graphs whose vertices are
keypoints, in order to explore relative and structural information. Unlike
similar works in the literature, we deal directly with graphs in the entire
pipeline: we search for graph correspondences instead of searching for
individual point correspondences and then building graph correspondences from
them afterwards. We also estimate the pose from graph correspondences instead
of falling back to point correspondences through a voting table. The
contributions of this paper are the proposed framework and an implementation
that properly handles its inherent issues of loss of locality and combinatorial
explosion, showing its viability for real-time applications. In particular, we
introduce the novel concept of keytuples to solve a running time issue. The
accuracy of the implementation is shown by results of over 800 experiments with
a well-known database of images. The speed is illustrated by real-time tracking
with two different cameras in ordinary hardware.
|
1310.0201 | Cross-Recurrence Quantification Analysis of Categorical and Continuous
Time Series: an R package | cs.CL stat.AP | This paper describes the R package crqa to perform cross-recurrence
quantification analysis of two time series of either a categorical or
continuous nature. Streams of behavioral information, from eye movements to
linguistic elements, unfold over time. When two people interact, such as in
conversation, they often adapt to each other, leading these behavioral levels
to exhibit recurrent states. In dialogue, for example, interlocutors adapt to
each other by exchanging interactive cues: smiles, nods, gestures, choice of
words, and so on. In order for us to capture closely the goings-on of dynamic
interaction, and uncover the extent of coupling between two individuals, we
need to quantify how much recurrence is taking place at these levels. Methods
available in crqa would allow researchers in cognitive science to pose such
questions as how much are two people recurrent at some level of analysis, what
is the characteristic lag time for one person to maximally match another, or
whether one person is leading another. First, we set the theoretical ground to
understand the difference between 'correlation' and 'co-visitation' when
comparing two time series, using an aggregative or cross-recurrence approach.
Then, we describe more formally the principles of cross-recurrence, and show
with the current package how to carry out analyses applying them. We end the
paper by comparing the computational efficiency and consistency of results of
the crqa R package with the benchmark MATLAB toolbox crptoolbox. We show
perfect agreement between the two libraries on both counts.
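The core object behind the analyses described above, a cross-recurrence plot, can be sketched minimally in Python (the crqa R package itself adds delay embedding, categorical modes, and many more measures on top of this idea):

```python
import numpy as np

def cross_recurrence(x, y, radius):
    """Minimal sketch of a cross-recurrence plot for two 1-D series:
    CR[i, j] = 1 when x[i] and y[j] are within `radius` of each other."""
    x = np.asarray(x, float)[:, None]
    y = np.asarray(y, float)[None, :]
    return (np.abs(x - y) <= radius).astype(int)

def recurrence_rate(cr):
    """Fraction of recurrent points, the simplest CRQA measure."""
    return cr.mean()
```

Diagonals of the plot away from the main diagonal correspond to one series matching the other at a lag, which is how the characteristic lag between two people can be read off.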
|
1310.0229 | Evolutionary Algorithm for Graph Anonymization | cs.DB cs.SI | In recent years there has been a significant increase in the use of graphs as
a tool for representing information. When publishing this information,
especially in the case of social graphs, it is essential to apply an
anonymization process to the data in order to preserve users' privacy. In this
paper we present an algorithm for graph anonymization, called Evolutionary
Algorithm for Graph Anonymization (EAGA), based on edge modifications to
preserve the k-anonymity model.
|
1310.0234 | Group Sparse Beamforming for Green Cloud-RAN | cs.IT math.IT | A cloud radio access network (Cloud-RAN) is a network architecture that holds
the promise of meeting the explosive growth of mobile data traffic. In this
architecture, all the baseband signal processing is shifted to a single
baseband unit (BBU) pool, which enables efficient resource allocation and
interference management. Meanwhile, conventional powerful base stations can be
replaced by low-cost low-power remote radio heads (RRHs), producing a green and
low-cost infrastructure. However, as all the RRHs need to be connected to the
BBU pool through optical transport links, the transport network power
consumption becomes significant. In this paper, we propose a new framework to
design a green Cloud-RAN, which is formulated as a joint RRH selection and
power minimization beamforming problem. To efficiently solve this problem, we
first propose a greedy selection algorithm, which is shown to provide
near-optimal performance. To further reduce the complexity, a novel group sparse
beamforming method is proposed by inducing the group-sparsity of beamformers
using the weighted $\ell_1/\ell_2$-norm minimization, where the group sparsity
pattern indicates those RRHs that can be switched off. Simulation results will
show that the proposed algorithms significantly reduce the network power
consumption and demonstrate the importance of considering the transport link
power consumption.
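The group-sparsity-inducing norm in the abstract is easy to state in code. The sketch below only evaluates the weighted ℓ1/ℓ2 norm and reads off zero groups; the actual beamforming optimization (weights, power constraints, SINR targets) from the paper is not reproduced here.

```python
import numpy as np

def weighted_l1_l2(beamformers, weights):
    """Weighted l1/l2 mixed norm used to induce group sparsity:
    sum_g w_g * ||v_g||_2, where v_g collects all beamforming
    coefficients of RRH g. Groups whose l2 norm is driven to zero
    correspond to RRHs that can be switched off."""
    return sum(w * np.linalg.norm(v) for v, w in zip(beamformers, weights))

def inactive_rrhs(beamformers, tol=1e-6):
    """Indices of RRHs whose beamformer group is (numerically) zero."""
    return [g for g, v in enumerate(beamformers)
            if np.linalg.norm(v) <= tol]
```

The ℓ2 norm within a group keeps a whole RRH's coefficients together, while the ℓ1 sum across groups pushes entire groups to zero, unlike a plain ℓ1 norm, which zeroes individual coefficients.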
|
1310.0250 | Use of Solr and Xapian in the Invenio document repository software | cs.IR cs.DL | Invenio is a free comprehensive web-based document repository and digital
library software suite originally developed at CERN. It can serve a variety of
use cases from an institutional repository or digital library to a web journal.
In order to fully use full-text documents for efficient search and ranking,
Solr was integrated into Invenio through a generic bridge. Solr indexes
extracted full-texts and most relevant metadata. Consequently, Invenio takes
advantage of Solr's efficient search and word similarity ranking capabilities.
In this paper, we first give an overview of Invenio, its capabilities and
features. We then present our open source Solr integration as well as
scalability challenges that arose for an Invenio-based multi-million record
repository: the CERN Document Server. We also compare our Solr adapter to an
alternative Xapian adapter using the same generic bridge. Both integrations are
distributed with the Invenio package and ready to be used by the institutions
using or adopting Invenio.
|
1310.0282 | Uncovering patterns of inter-urban trip and spatial interaction from
social media check-in data | cs.SI physics.soc-ph | The article revisits spatial interaction and distance decay from the
perspective of human mobility patterns and spatially-embedded networks based on
an empirical data set. We extract nationwide inter-urban movements in China
from a check-in data set that covers half a million individuals and 370 cities to
analyze the underlying patterns of trips and spatial interactions. By fitting
the gravity model, we find that the observed spatial interactions are governed
by a power-law distance decay effect. The obtained gravity model also
reproduces the exponential trip displacement distribution well. However, due to the
ecological fallacy issue, the movement of an individual may not obey the same
distance decay effect. We also construct a spatial network where the edge
weights denote the interaction strengths. The communities detected from the
network are spatially connected and roughly consistent with province
boundaries. We attribute this pattern to different distance decay parameters
between intra-province and inter-province trips.
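The fitted model is the standard gravity form with power-law decay. As a minimal sketch (the exponent beta, the constant k, and any population scaling the paper actually estimated are not reproduced here; the values below are illustrative):

```python
def gravity_flow(pop_i, pop_j, distance, beta, k=1.0):
    """Gravity model with power-law distance decay:
    T_ij = k * P_i * P_j / d_ij**beta, where P_i, P_j are the two
    cities' populations and d_ij the distance between them."""
    return k * pop_i * pop_j / distance ** beta
```

A larger beta means interaction strength falls off faster with distance, which is the knob the abstract's intra- versus inter-province comparison turns on.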
|
1310.0291 | Mismatched Quantum Filtering and Entropic Information | quant-ph cs.IT math.IT | Quantum filtering is a signal processing technique that estimates the
posterior state of a quantum system under continuous measurements and has
become a standard tool in quantum information processing, with applications in
quantum state preparation, quantum metrology, and quantum control. If the
filter assumes a nominal model that differs from reality, however, the
estimation accuracy is bound to suffer. Here I derive identities that relate
the excess error caused by quantum filter mismatch to the relative entropy
between the true and nominal observation probability measures, with one
identity for Gaussian measurements, such as optical homodyne detection, and
another for Poissonian measurements, such as photon counting. These identities
generalize recent seminal results in classical information theory and provide
new operational meanings to relative entropy, mutual information, and channel
capacity in the context of quantum experiments.
|
1310.0296 | Tracking Control for FES-Cycling based on Force Direction Efficiency
with Antagonistic Bi-Articular Muscles | cs.SY | A functional electrical stimulation (FES)-based tracking controller is
developed to enable cycling based on a strategy to yield force direction
efficiency by exploiting antagonistic bi-articular muscles. Given the input
redundancy naturally occurring among multiple muscle groups, the force
direction at the pedal is explicitly determined as a means to improve the
efficiency of cycling. A model of a stationary cycle and rider is developed as
a closed-chain mechanism. A strategy is then developed to switch between muscle
groups for improved efficiency based on the force direction of each muscle
group. Stability of the developed controller is analyzed through Lyapunov-based
methods.
|
1310.0302 | Surface Registration Using Genetic Algorithm in Reduced Search Space | cs.CV | Surface registration is a technique that is used in various areas such as
object recognition and 3D model reconstruction. The problem of surface
registration can be posed as an optimization problem of seeking a rigid motion
between two different views. Genetic algorithms can be used to solve this
optimization problem, both for obtaining a robust parameter estimation and for
its fine-tuning. The main drawback of genetic algorithms is that they are
time-consuming, which makes them unsuitable for online applications. Modern
acquisition systems enable solutions that immediately provide the rotational
angles between the different views, thus reducing the dimension of the
optimization problem. The paper analyzes a genetic algorithm implemented under
the condition that the rotation matrix is known, and compares these results
with those obtained when this information is not available.
|
1310.0305 | Filtering for More Accurate Dense Tissue Segmentation in Digitized
Mammograms | cs.CV | Breast tissue segmentation into dense and fat tissue is important for
determining the breast density in mammograms. Knowing the breast density is
important in both diagnostic and computer-aided detection applications. There
are many different ways to express breast density, and a good-quality
segmentation should enable accurate classification regardless of the
classification rule used. Knowing the correct breast density, and tracking
changes in it, could hint at a process developing within a patient. Mammograms
generally suffer from overlapping tissues, which can result in inaccurate
detection of tissue types. Fibroglandular tissue attenuates X-rays rather
strongly and therefore appears brighter in the resulting image, but overlapping
fibrous tissue and blood vessels can easily be mistaken for fibroglandular
tissue by automatic segmentation algorithms. Small blood vessels and
microcalcifications also appear as bright objects with intensities similar to
dense tissue, but they have properties that make it possible to suppress them
in the final result. In this paper we separate dense and fat tissue by
suppressing the scattered structures that do not represent glandular or dense
tissue, in order to divide mammograms more accurately into the two major tissue
types. To suppress blood vessels and microcalcifications we use Gabor filters
of different sizes and orientations, combined with morphological operations on
the filtered, contrast-enhanced image.
|
1310.0306 | Flexible Visual Quality Inspection in Discrete Manufacturing | cs.CV | Most visual quality inspections in discrete manufacturing are composed of
length, surface, angle or intensity measurements. Those are implemented as
end-user configurable inspection tools that should not require an image
processing expert to set up. Currently available software solutions providing
such capability use a flowchart-based programming environment, but do not fully
address inspection flowchart robustness and can require redefining the
flowchart when a small variation is introduced. In this paper we propose an
acquire-register-analyze image processing pattern designed for discrete
manufacturing that aims to increase the robustness of the inspection flowchart
by consistently addressing variations in product position, orientation and
size. The proposed pattern is transparent to the end-user and simplifies the
flowchart. We describe a developed software solution that is a practical
implementation of the proposed pattern. We give an example of its real-life use
in industrial production of electric components.
|
1310.0307 | Using the Random Sprays Retinex Algorithm for Global Illumination
Estimation | cs.CV | In this paper the use of the Random Sprays Retinex (RSR) algorithm for global
illumination estimation is proposed and its feasibility tested. Like other
algorithms based on the Retinex model, RSR also provides local illumination
estimation and brightness adjustment for each pixel and it is faster than other
path-wise Retinex algorithms. Since the assumption of uniform illumination
holds in many cases, the mean of the local RSR illumination estimations can
serve as a global illumination estimation for images with (assumed) uniform
illumination, which also allows the accuracy to be measured easily. We
therefore propose a method for global illumination estimation based on local
RSR results. To the best of our knowledge, this is the first time the RSR
algorithm has been used to obtain a global illumination estimation. For our
tests we use a publicly available color constancy image database. The results
are presented and discussed, and it turns out that the
proposed method outperforms many existing unsupervised color constancy
algorithms. The source code is available at
http://www.fer.unizg.hr/ipg/resources/color_constancy/.
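Under the uniform-illumination assumption, the global estimate described above is simply the mean of RSR's per-pixel local estimates. A minimal numpy sketch (the H x W x 3 array shape and the angular-error accuracy measure are standard conventions in color constancy, not specifics of this paper):

```python
import numpy as np

def global_illumination(local_estimates):
    """Global illumination estimate as the per-channel mean of the
    per-pixel local estimates (an H x W x 3 array)."""
    return np.asarray(local_estimates, float).reshape(-1, 3).mean(axis=0)

def angular_error_deg(estimate, ground_truth):
    """Standard color-constancy accuracy measure: the angle between the
    estimated and true illumination vectors, in degrees."""
    e = np.asarray(estimate, float)
    g = np.asarray(ground_truth, float)
    cos = np.dot(e, g) / (np.linalg.norm(e) * np.linalg.norm(g))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

Because the angular error ignores vector magnitude, only the chromaticity of the estimate matters, which is why averaging per-pixel estimates is a meaningful global estimator.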
|
1310.0308 | Combining Spatio-Temporal Appearance Descriptors and Optical Flow for
Human Action Recognition in Video Data | cs.CV | This paper proposes combining spatio-temporal appearance (STA) descriptors
with optical flow for human action recognition. The STA descriptors are local
histogram-based descriptors of space-time, suitable for building a partial
representation of arbitrary spatio-temporal phenomena. Because of the
possibility of iterative refinement, they are interesting in the context of
online human action recognition. We investigate the use of dense optical flow
as the image function of the STA descriptor for human action recognition, using
two different algorithms for computing the flow: the Farneb\"ack algorithm and
the TVL1 algorithm. We provide a detailed analysis of the influence of the
optical flow algorithm parameters on the produced optical flow fields. An extensive
experimental validation of optical flow-based STA descriptors in human action
recognition is performed on the KTH human action dataset. The encouraging
experimental results suggest the potential of our approach in online human
action recognition.
|
1310.0310 | A Novel Georeferenced Dataset for Stereo Visual Odometry | cs.CV | In this work, we present a novel dataset for assessing the accuracy of stereo
visual odometry. The dataset has been acquired by a small-baseline stereo rig
mounted on the top of a moving car. The groundtruth is supplied by a consumer
grade GPS device without IMU. Synchronization and alignment between GPS
readings and stereo frames are recovered after the acquisition. We show that
the attained groundtruth accuracy allows useful conclusions to be drawn in
practice. The presented experiments address the influence of camera calibration,
baseline distance and zero-disparity features on the achieved reconstruction
performance.
|
1310.0311 | Multiclass Road Sign Detection using Multiplicative Kernel | cs.CV | We consider the problem of multiclass road sign detection using a
classification function with a multiplicative kernel composed of two kernels.
We show that problems of detection and within-foreground classification can be
jointly solved by using one kernel to measure object-background differences and
another one to account for within-class variations. The main idea behind this
approach is that road signs from different foreground variations can share
features that discriminate them from backgrounds. The classification function
training is accomplished using SVM, thus feature sharing is obtained through
support vector sharing. Training yields a family of linear detectors, where
each detector corresponds to a specific foreground training sample. The
redundancy among detectors is alleviated using k-medoids clustering. Finally,
we report detection and classification results on a set of road sign images
obtained from a camera on a moving vehicle.
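The detector-pruning step mentioned above relies on k-medoids clustering over a detector-to-detector distance matrix. A minimal sketch (the paper does not specify its exact k-medoids variant; this is the common alternating assign/update scheme):

```python
import numpy as np

def k_medoids(dist, k, n_iter=100, seed=0):
    """Minimal k-medoids on a precomputed distance matrix, as could be
    used to prune redundant per-sample detectors. Returns medoid
    indices and cluster assignments."""
    rng = np.random.default_rng(seed)
    n = dist.shape[0]
    medoids = rng.choice(n, size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(dist[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.where(labels == c)[0]
            if len(members):
                # new medoid: member minimizing total within-cluster distance
                within = dist[np.ix_(members, members)].sum(axis=1)
                new[c] = members[np.argmin(within)]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(dist[:, medoids], axis=1)
```

Unlike k-means, the cluster representatives here are actual data points (medoids), so each surviving representative is still a concrete linear detector.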
|
1310.0312 | The importance of stimulus noise analysis for self-motion studies | cs.SY q-bio.NC | Motion simulators are widely employed in basic and applied research to study
the neural mechanisms of perception and action under inertial stimulations. In
these studies, uncontrolled simulator-introduced noise inevitably leads to a
mismatch between the reproduced motion and the trajectories meticulously
designed by the experimenter, possibly resulting in undesired motion cues to
the investigated system. An understanding of the simulator response to
different motion commands is therefore a crucial yet often underestimated step
towards the interpretation of experimental results. In this work, we developed
analysis methods based on signal processing techniques to quantify the noise in
the actual motion, and its deterministic and stochastic components. Our methods
allow comparisons between commanded and actual motion as well as between
different actual motion profiles. A specific practical example from one of our
studies is used to illustrate the methodologies and their relevance, but this
does not detract from their general applicability. Analyses of the simulator
inertial recordings show direction-dependent noise and nonlinearity related to
the command amplitude. The Signal-to-Noise Ratio is one order of magnitude
higher for the larger motion amplitudes we tested, compared to the smaller
motion amplitudes. Deterministic and stochastic noise components are of similar
magnitude for the weaker motions, whereas for stronger motions the
deterministic component dominates the stochastic component. The effect of
simulator noise on animal/human motion sensitivity is discussed. We conclude
that accurate analyses of a simulator motion are a crucial prerequisite for the
investigation of uncertainty in self-motion perception.
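One common way to read the abstract's deterministic/stochastic split, sketched here as an assumption rather than the paper's exact method: over repeated trials of the same motion command, the repeatable deviation of the mean trial from the command is the deterministic component, and the trial-to-trial residual is the stochastic one.

```python
import numpy as np

def decompose_noise(commanded, trials):
    """`trials` is an (n_repetitions x n_samples) array of recorded
    motion for one repeated command. Deterministic component: the
    repeatable deviation of the mean trial from the command.
    Stochastic component: the trial-to-trial residual."""
    trials = np.asarray(trials, float)
    mean_trial = trials.mean(axis=0)
    deterministic = mean_trial - np.asarray(commanded, float)
    stochastic = trials - mean_trial
    return deterministic, stochastic

def snr_db(commanded, trials):
    """Signal-to-noise ratio (dB): command power over total noise power."""
    det, sto = decompose_noise(commanded, trials)
    noise_power = np.mean(det**2) + np.mean(sto**2)
    return 10.0 * np.log10(np.mean(np.asarray(commanded, float)**2)
                           / noise_power)
```

With this decomposition, comparing the power of the two components directly reproduces the kind of statement made above (deterministic dominating stochastic for strong motions, or the two being comparable for weak ones).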
|
1310.0314 | Global Localization Based on 3D Planar Surface Segments | cs.CV | Global localization of a mobile robot using planar surface segments extracted
from depth images is considered. The robot's environment is represented by a
topological map consisting of local models, each representing a particular
location modeled by a set of planar surface segments. The discussed
localization approach segments a depth image acquired by a 3D camera into
planar surface segments which are then matched to model surface segments. The
robot pose is estimated by the Extended Kalman Filter using surface segment
pairs as measurements. The reliability and accuracy of the considered approach
are experimentally evaluated using a mobile robot equipped with a Microsoft
Kinect sensor.
|
1310.0315 | Computer Vision Systems in Road Vehicles: A Review | cs.CV | The number of road vehicles has increased significantly in recent decades. This
trend has been accompanied by a build-up of road infrastructure and the
development of various control systems to increase road traffic safety, road
capacity and travel comfort. Significant progress has been made in traffic
safety, and today's systems increasingly include cameras and computer vision
methods. Cameras are used as part of the road infrastructure or in vehicles. In
this paper a review of computer vision systems in vehicles from the standpoint
of traffic engineering is given. Safety problems of road vehicles are
presented, the current state of the art in in-vehicle vision systems is
described, and open problems with future research directions are discussed.
|
1310.0316 | Classifying Traffic Scenes Using The GIST Image Descriptor | cs.CV | This paper investigates classification of traffic scenes in a very low
bandwidth scenario, where an image should be coded by a small number of
features. We introduce a novel dataset, called the FM1 dataset, consisting of
5615 images of eight different traffic scenes: open highway, open road,
settlement, tunnel, tunnel exit, toll booth, heavy traffic and the overpass. We
evaluate the suitability of the GIST descriptor as a representation of these
images, first by exploring the descriptor space using PCA and k-means
clustering, and then by using an SVM classifier and recording its 10-fold
cross-validation performance on the introduced FM1 dataset. The obtained
recognition rates are very encouraging, indicating that the use of the GIST
descriptor alone could be sufficiently descriptive even when very high
performance is required.
|
1310.0317 | An Overview and Evaluation of Various Face and Eyes Detection Algorithms
for Driver Fatigue Monitoring Systems | cs.CV | In this work various methods and algorithms for face and eye detection are
examined in order to decide which of them are applicable for use in a driver
fatigue monitoring system. For face detection the standard
Viola-Jones face detector has shown the best results, while the method of
finding the eye centers by means of gradients has proven most appropriate for
eye detection. The latter method also has the potential to retrieve behavioral
parameters needed for estimating the level of driver fatigue.
This possibility will be examined in future work.
|
1310.0319 | Second Croatian Computer Vision Workshop (CCVW 2013) | cs.CV | Proceedings of the Second Croatian Computer Vision Workshop (CCVW 2013,
http://www.fer.unizg.hr/crv/ccvw2013) held September 19, 2013, in Zagreb,
Croatia. The workshop was organized by the Center of Excellence for Computer Vision
of the University of Zagreb.
|
1310.0322 | Optical Flow on Evolving Surfaces with Space and Time Regularisation | math.OC cs.CV | We extend the concept of optical flow with spatiotemporal regularisation to a
dynamic non-Euclidean setting. Optical flow is traditionally computed from a
sequence of flat images. The purpose of this paper is to introduce variational
motion estimation for images that are defined on an evolving surface.
Volumetric microscopy images depicting a live zebrafish embryo serve as both
biological motivation and test data.
|
1310.0337 | A Class of Binomial Permutation Polynomials | math.NT cs.IT math.CO math.IT | In this note, a criterion for a class of binomials to be permutation
polynomials is proposed. As a consequence, many classes of binomial permutation
polynomials and monomial complete permutation polynomials are obtained. The
exponents in these monomials are of Niho type.
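For small prime fields, the permutation property itself can be checked by brute force. The sketch below is a toy verifier over GF(p) for prime p only; the paper's criterion covers general finite fields without any enumeration.

```python
def is_permutation_binomial(a, b, c, p):
    """Brute-force check that f(x) = x**a + c*x**b permutes the prime
    field GF(p): f is a permutation polynomial iff it hits all p values."""
    values = {(pow(x, a, p) + c * pow(x, b, p)) % p for x in range(p)}
    return len(values) == p
```

For instance, the monomial case c = 0 recovers the classical fact that x^a permutes GF(p) exactly when gcd(a, p - 1) = 1.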
|
1310.0354 | Deep and Wide Multiscale Recursive Networks for Robust Image Labeling | cs.CV cs.LG | Feedforward multilayer networks trained by supervised learning have recently
demonstrated state of the art performance on image labeling problems such as
boundary prediction and scene parsing. As even very low error rates can limit
practical usage of such systems, methods that perform closer to human accuracy
remain desirable. In this work, we propose a new type of network with the
following properties that address what we hypothesize to be limiting aspects of
existing methods: (1) a `wide' structure with thousands of features, (2) a
large field of view, (3) recursive iterations that exploit statistical
dependencies in label space, and (4) a parallelizable architecture that can be
trained in a fraction of the time compared to benchmark multilayer
convolutional networks. For the specific image labeling problem of boundary
prediction, we also introduce a novel example weighting algorithm that improves
segmentation accuracy. Experiments in the challenging domain of connectomic
reconstruction of neural circuitry from 3D electron microscopy data show that
these "Deep And Wide Multiscale Recursive" (DAWMR) networks lead to new levels
of image labeling performance. The highest performing architecture has twelve
layers, interwoven supervised and unsupervised stages, and uses an input field
of view of 157,464 voxels ($54^3$) to make a prediction at each image location.
We present an associated open source software package that enables the simple
and flexible creation of DAWMR networks.
|
1310.0365 | The complex-valued encoding for decision-making based on aliasing data | cs.CV | A complex-valued channel encoding for multidimensional data is proposed. The
basic approach consists of overlapping complex nonlinear mappings. Its
development leads to a sparse representation of multi-channel data, increasing
their dimensionality and the distance between the images.
|
1310.0371 | Decentralized formation control with connectivity maintenance and
collision avoidance under limited and intermittent sensing | cs.SY math.OC | A decentralized switched controller is developed for dynamic agents to
perform global formation configuration convergence while maintaining network
connectivity and avoiding collision within agents and between stationary
obstacles, using only local feedback under limited and intermittent sensing.
Due to the intermittent sensing, constant position feedback may not be
available for agents all the time. Intermittent sensing can also lead to a
disconnected network or collisions between agents. Using a navigation function
framework, a decentralized switched controller is developed to navigate the
agents to the desired positions while ensuring network maintenance and
collision avoidance.
|
1310.0375 | Network Reconstruction from Intrinsic Noise | cs.SY math.OC | This paper considers the problem of inferring an unknown network of dynamical
systems driven by unknown, intrinsic, noise inputs. Equivalently we seek to
identify direct causal dependencies among manifest variables only from
observations of these variables. For linear, time-invariant systems of minimal
order, we characterise under what conditions this problem is well posed. We
first show that if the transfer matrix from the inputs to manifest states is
minimum phase, this problem has a unique solution irrespective of the network
topology. This is equivalent to there being only one valid spectral factor (up
to a choice of signs of the inputs) of the output spectral density.
If the assumption of phase-minimality is relaxed, we show that the problem is
characterised by a single Algebraic Riccati Equation (ARE), of dimension
determined by the number of latent states. The number of solutions to this ARE
is an upper bound on the number of solutions for the network. We give necessary
and sufficient conditions for any two dynamical networks to have equal output
spectral density, which can be used to construct all equivalent networks.
Extensive simulations quantify the number of solutions for a range of problem
sizes. For a slightly simpler case, we also provide an algorithm to construct
all equivalent networks from the output spectral density.
|
1310.0395 | Protein Threading Based on Nonlinear Integer Programming | cs.DS cs.CE | Protein threading is a method of computational protein structure prediction
used for protein sequences which have the same fold as proteins of known
structures but do not have homologous proteins with known structure. The most
popular algorithm is based on linear integer programming. In this paper, we
consider methods based on nonlinear integer programming. Indeed, the existing
linear integer programming formulation is a direct linearization of the
original quadratic integer programming formulation. We then develop
corresponding efficient algorithms.
|
1310.0402 | Incentive Design for Direct Load Control Programs | cs.SY | We study the problem of optimal incentive design for voluntary participation
of electricity customers in a Direct Load Scheduling (DLS) program, a new form
of Direct Load Control (DLC) based on a three way communication protocol
between customers, embedded controls in flexible appliances, and the central
entity in charge of the program. Participation decisions are made in real-time
on an event-based basis, with every customer that needs to use a flexible
appliance considering whether to join the program given current incentives.
Customers have different interpretations of the level of risk associated with
committing to hand over control of the consumption schedule of their devices
to an operator, and these risk levels are only privately known. The
operator maximizes his expected profit of operating the DLS program by posting
the right participation incentives for different appliance types, in a publicly
available and dynamically updated table. Customers are then faced with the
dynamic decision making problem of whether to take the incentives and
participate or not. We define an optimization framework to determine the
profit-maximizing incentives for the operator. In doing so, we also investigate
the utility that the operator expects to gain from recruiting different types
of devices. These utilities also provide an upper-bound on the benefits that
can be attained from any type of demand response program.
|
1310.0432 | Online Learning of Dynamic Parameters in Social Networks | math.OC cs.LG cs.SI stat.ML | This paper addresses the problem of online learning in a dynamic setting. We
consider a social network in which each individual observes a private signal
about the underlying state of the world and communicates with her neighbors at
each time period. Unlike many existing approaches, the underlying state is
dynamic, and evolves according to a geometric random walk. We view the scenario
as an optimization problem where agents aim to learn the true state while
suffering the smallest possible loss. Based on the decomposition of the global
loss function, we introduce two update mechanisms, each of which generates an
estimate of the true state. We establish a tight bound on the rate of change of
the underlying state, under which individuals can track the parameter with a
bounded variance. Then, we characterize explicit expressions for the
steady-state mean-square deviation (MSD) of the estimates from the truth, per
individual. We observe that only one of the estimators recovers the optimal
MSD, which underscores the impact of the objective function decomposition on
the learning quality. Finally, we provide an upper bound on the regret of the
proposed methods, measured as an average of errors in estimating the parameter
in a finite time.
|
1310.0446 | A maximum entropy model for opinions in social groups | physics.soc-ph cs.SI stat.AP | We study how the opinions of a group of individuals determine their spatial
distribution and connectivity, through an agent-based model. The interaction
between agents is described by a Potts-like Hamiltonian in which agents are
allowed to move freely without an underlying lattice (the average network
topology connecting them is determined from the parameters). This kind of model
was derived using maximum entropy statistical inference under fixed expectation
values of certain probabilities that (we propose) are relevant to social
organization. Control parameters emerge as Lagrange multipliers of the maximum
entropy problem, and they can be associated with the level of consequence
between the personal beliefs and external opinions, and the tendency to
socialize with peers of similar or opposing views. These parameters define a
phase diagram for the social system, which we studied using Monte Carlo
Metropolis simulations. Our model exhibits both first- and second-order phase
transitions, depending on the ratio between the internal consequence and the
interaction with others. We have found a critical value for the level of
internal consequence, below which the personal beliefs of the agents seem to be
irrelevant.
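A toy Metropolis sketch of a Potts-like opinion energy with an internal-belief field. The free spatial motion and the maximum-entropy derivation of the control parameters are omitted, and all constants (J, h, beta) are assumptions for illustration:

```python
import math
import random

def metropolis_opinions(n=30, q=3, J=1.0, h=0.5, beta=2.0,
                        steps=5000, seed=1):
    """Metropolis sampling of a mean-field Potts-like opinion model:
    agreeing with peers lowers energy by J, agreeing with one's own
    fixed internal belief lowers it by h."""
    rng = random.Random(seed)
    belief = [rng.randrange(q) for _ in range(n)]
    s = belief[:]                 # opinions start at the private beliefs
    def local_energy(i, si):
        # mean-field peer term plus an internal-belief field
        peers = sum(1 for j in range(n) if j != i and s[j] == si)
        return -J * peers / n - h * (si == belief[i])
    for _ in range(steps):
        i = rng.randrange(n)
        proposal = rng.randrange(q)
        dE = local_energy(i, proposal) - local_energy(i, s[i])
        # standard Metropolis acceptance rule
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            s[i] = proposal
    return s
```

Sweeping the ratio J/h in such a sketch is the crude analogue of moving through the paper's phase diagram.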
|
1310.0505 | Modeling Information Diffusion in Online Social Networks with Partial
Differential Equations | cs.SI physics.soc-ph | Online social networks such as Twitter and Facebook have gained tremendous
popularity for information exchange. The availability of unprecedented amounts
of digital data has accelerated research on information diffusion in online
social networks. However, the mechanism of information spreading in online
social networks remains elusive due to the complexity of social interactions
and rapid change of online social networks. Much of prior work on information
diffusion over online social networks has been based on empirical and statistical
approaches. The majority of dynamical models arising from information diffusion
over online social networks involve ordinary differential equations which only
depend on time. In a number of recent papers, the authors propose to use
partial differential equations (PDEs) to characterize temporal and spatial
patterns of information diffusion over online social networks. Built on
intuitive cyber-distances such as friendship hops in online social networks,
the reaction-diffusion equations take into account influences from various
external out-of-network sources, such as the mainstream media, and provide a
new analytic framework to study the interplay of structural and topical
influences on information diffusion over online social networks. In this
survey, we discuss a number of PDE-based models that are validated with real
datasets collected from popular online social networks such as Digg and
Twitter. Some new developments including the conservation law of information
flow in online social networks and information propagation speeds based on
traveling wave solutions are presented to solidify the foundation of the PDE
models and highlight the new opportunities and challenges for mathematicians as
well as computer scientists and researchers in online social networks.
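A minimal explicit finite-difference sketch of one such reaction-diffusion model, u_t = d*u_xx + r*u*(1-u), with x interpreted as friendship-hop distance from the source. The coefficients are illustrative assumptions, not values fitted to Digg or Twitter data as in the surveyed papers:

```python
def diffuse(nx=20, nt=200, d=0.1, r=0.5, dx=1.0, dt=0.1):
    """Explicit Euler scheme for u_t = d*u_xx + r*u*(1-u) on a 1-D
    grid of friendship hops; information is injected at hop 0 and a
    logistic reaction term models local adoption."""
    u = [0.0] * nx
    u[0] = 1.0                          # information injected at hop 0
    for _ in range(nt):
        new = u[:]
        for i in range(1, nx - 1):
            lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / dx**2
            new[i] = u[i] + dt * (d * lap + r * u[i] * (1 - u[i]))
        new[0] = 1.0                    # source kept saturated
        new[-1] = new[-2]               # no-flux far boundary
        u = new
    return u
```

The resulting profile is a front advancing outward in hop distance, the qualitative shape behind the traveling-wave propagation speeds mentioned above.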
|
1310.0509 | Summary Statistics for Partitionings and Feature Allocations | cs.LG stat.ML | Infinite mixture models are commonly used for clustering. One can sample from
the posterior of mixture assignments by Monte Carlo methods or find its maximum
a posteriori solution by optimization. However, in some problems the posterior
is diffuse and it is hard to interpret the sampled partitionings. In this
paper, we introduce novel statistics based on block sizes for representing
sample sets of partitionings and feature allocations. We develop an
element-based definition of entropy to quantify segmentation among their
elements. Then we propose a simple algorithm called entropy agglomeration (EA)
to summarize and visualize this information. Experiments on various infinite
mixture posteriors as well as a feature allocation dataset demonstrate that the
proposed statistics are useful in practice.
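For intuition, the classical block-size entropy of a single partitioning can be computed as below; the paper's element-based entropy for feature allocations is a refinement of this idea, so treat this as background rather than the proposed statistic:

```python
import math

def partition_entropy(blocks):
    """Shannon entropy of a partitioning's block-size profile:
    H = -sum over blocks b of (|b|/n) * log(|b|/n),
    where n is the total number of elements."""
    n = sum(len(b) for b in blocks)
    return -sum((len(b) / n) * math.log(len(b) / n) for b in blocks)
```

A single all-encompassing block gives entropy 0, while finer partitionings give larger values, which is the kind of information entropy agglomeration summarizes across a sample of partitionings.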
|
1310.0522 | EVOC: A Computer Model of the Evolution of Culture | cs.MA cs.NE | EVOC is a computer model of the EVOlution of Culture. It consists of neural
network based agents that invent ideas for actions, and imitate neighbors'
actions. EVOC replicates using a different fitness function the results
obtained with an earlier model (MAV), including (1) an increase in mean fitness
of actions, and (2) an increase and then decrease in the diversity of actions.
Diversity of actions is positively correlated with number of needs, population
size and density, and with the erosion of borders between populations. Slowly
eroding borders maximize diversity, fostering specialization followed by
sharing of fit actions. Square (as opposed to toroidal) worlds also exhibit
higher diversity. Introducing a leader that broadcasts its actions throughout
the population increases the fitness of actions but reduces diversity; these
effects diminish the more leaders there are. Low density populations have less
fit ideas but broadcasting diminishes this effect.
|
1310.0530 | On the group-theoretic structure of lifted filter banks | cs.IT math.IT | The polyphase-with-advance matrix representations of whole-sample symmetric
(WS) unimodular filter banks form a multiplicative matrix Laurent polynomial
group. Elements of this group can always be factored into lifting matrices with
half-sample symmetric (HS) off-diagonal lifting filters; such linear phase
lifting factorizations are specified in the ISO/IEC JPEG 2000 image coding
standard. Half-sample symmetric unimodular filter banks do not form a group,
but such filter banks can be partially factored into a cascade of whole-sample
antisymmetric (WA) lifting matrices starting from a concentric, equal-length HS
base filter bank. An algebraic framework called a group lifting structure has
been introduced to formalize the group-theoretic aspects of matrix lifting
factorizations. Despite their pronounced differences, it has been shown that
the group lifting structures for both the WS and HS classes satisfy a polyphase
order-increasing property that implies uniqueness ("modulo rescaling") of
irreducible group lifting factorizations in both group lifting structures.
These unique factorization results can in turn be used to characterize the
group-theoretic structure of the groups generated by the WS and HS group
lifting structures.
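As a concrete instance of such a lifting factorization, the reversible 5/3 filter bank from JPEG 2000 can be written as a predict step followed by an update step. This sketch uses floating point (the standard's integer rounding is omitted) and assumes even-length input with a simple clamped boundary:

```python
def lift_53(x):
    """5/3 analysis via lifting: predict odd samples from even
    neighbors with the half-sample symmetric filter (-1/2, -1/2),
    then update even samples with (1/4, 1/4)."""
    even, odd = x[0::2], x[1::2]
    # predict: detail d[i] = odd[i] - (even[i] + even[i+1]) / 2
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) / 2
         for i in range(len(odd))]
    # update: approximation a[i] = even[i] + (d[i-1] + d[i]) / 4
    a = [even[i] + (d[max(i - 1, 0)] + d[i]) / 4
         for i in range(len(even))]
    return a, d

def unlift_53(a, d):
    """Invert by undoing the lifting steps in reverse order."""
    even = [a[i] - (d[max(i - 1, 0)] + d[i]) / 4 for i in range(len(a))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) / 2
           for i in range(len(d))]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out
```

Perfect reconstruction is automatic: each lifting step is inverted exactly by subtracting the same quantity it added, which is the structural property the group-theoretic analysis formalizes.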
|
1310.0547 | Growth of scale-free networks under heterogeneous control | physics.soc-ph cs.SI | Real-life networks often encounter vertex dysfunctions, which are usually
followed by recoveries after appropriate maintenances. In this paper we present
our research on a model of scale-free networks whose vertices are regularly
removed and put back. Both the frequency and length of time of the
disappearance of each vertex depend on the degree of the vertex, creating a
heterogeneous control over the network. Our simulation results show
interesting growth patterns for this kind of network. We also find that the
scale-free property of the degree distribution is maintained in the proposed
heterogeneously controlled networks. However, the overall growth rate of the
networks in our model can be remarkably reduced if the inactive periods of the
vertices are kept long.
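A toy version of degree-dependent vertex control grafted onto preferential attachment. The outage model here (independent per-step failures, more frequent for low-degree vertices) is a crude stand-in for the paper's frequency-and-duration control, and all parameters are assumptions:

```python
import random

def grow_with_outages(T=200, m=2, c=0.3, seed=0):
    """Preferential-attachment growth in which a new vertex can only
    attach to currently active vertices; low-degree vertices go
    inactive more often, so growth stalls when too few are up."""
    rng = random.Random(seed)
    deg = [1, 1]                 # seed graph: a single edge
    active = [True, True]
    for _ in range(T):
        # degree-dependent control: failure probability c / (1 + degree)
        for v in range(len(deg)):
            active[v] = rng.random() > c / (1 + deg[v])
        pool = [v for v in range(len(deg)) if active[v]]
        if len(pool) < m:
            continue             # growth stalls while too few vertices are up
        # new vertex attaches preferentially among active vertices
        targets = set()
        while len(targets) < m:
            targets.add(rng.choices(pool, weights=[deg[v] for v in pool])[0])
        for v in targets:
            deg[v] += 1
        deg.append(m)
        active.append(True)
    return deg
```

Longer or more frequent outages shrink the attachment pool and slow growth, the qualitative effect the abstract reports.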
|
1310.0557 | Near-Capacity Adaptive Analog Fountain Codes for Wireless Channels | cs.IT math.IT | In this paper, we propose a capacity-approaching analog fountain code (AFC)
for wireless channels. In AFC, the number of generated coded symbols is
potentially limitless. In contrast to the conventional binary rateless codes,
each coded symbol in AFC is a real-valued symbol, generated as a weighted sum
of $d$ randomly selected information bits, where $d$ and the weight
coefficients are randomly selected from predefined probability mass functions.
The coded symbols are then directly transmitted through wireless channels. We
analyze the error probability of AFC and design the weight set to minimize the
error probability. Simulation results show that AFC achieves the capacity of
the Gaussian channel over a wide range of signal-to-noise ratios (SNR).
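The coded-symbol generation described above can be sketched as follows, assuming a fixed degree d and uniform choice from a given weight set; the paper instead draws both from designed probability mass functions and optimizes the weight set:

```python
import random

def afc_encode(bits, num_symbols, d, weights, seed=0):
    """Each real-valued coded symbol is a weighted sum of d randomly
    selected information bits; the number of symbols is unbounded in
    principle (rateless), bounded here only for the sketch."""
    rng = random.Random(seed)
    symbols = []
    for _ in range(num_symbols):
        idx = rng.sample(range(len(bits)), d)    # pick d distinct bits
        ws = [rng.choice(weights) for _ in idx]  # pick their weights
        symbols.append(sum(w * bits[i] for w, i in zip(ws, idx)))
    return symbols
```

In the actual scheme these real-valued symbols are transmitted directly over the wireless channel, and the receiver decodes from however many symbols it has collected.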
|
1310.0573 | Improving the Quality of MT Output using Novel Name Entity Translation
Scheme | cs.CL | This paper presents a novel approach to machine translation that incorporates a
state-of-the-art named entity translation scheme. Improper translation of named
entities degrades the quality of machine-translated output. In this work, named
entities are transliterated using a statistical rule-based approach. This
paper describes the translation and transliteration of named entities from
English to Punjabi. We have experimented on four types of named entities:
proper names, location names, organization names, and miscellaneous.
Various rules for syllabification have been constructed. Transliteration of
named entities is accomplished by probability calculation: N-gram
probabilities for the extracted syllables have been computed using the
statistical machine translation toolkit MOSES.
|
1310.0575 | Development of Marathi Part of Speech Tagger Using Statistical Approach | cs.CL | Part-of-speech (POS) tagging is the process of assigning each word in a text
its part of speech. A basic version of POS tagging is the identification of
words as nouns, verbs, adjectives, etc. POS tagging is a prominent tool for
processing natural languages, and one of the simplest and most consistent
statistical models for many NLP applications. It is an initial stage of text
analysis in tasks such as information retrieval, machine translation,
text-to-speech synthesis, and information extraction. In POS tagging we assign
a part-of-speech tag to each word in a sentence. Various approaches have been
proposed to implement POS taggers. In this paper we present a Marathi
part-of-speech tagger. Marathi is a morphologically rich language spoken by
the native people of Maharashtra. The general approach used for developing the
tagger is statistical, using Unigram, Bigram, Trigram and HMM methods. We
present all the algorithms with suitable examples, and also introduce a tag
set that can be used for tagging Marathi text. We describe the development of
the taggers and compare the accuracy of their output. The four Marathi POS
taggers, viz. Unigram, Bigram, Trigram and HMM, give accuracies of 77.38%,
90.30%, 91.46% and 93.82% respectively.
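The simplest of the four taggers, the unigram baseline, can be sketched as follows. Toy English data stands in for Marathi here, and the bigram/trigram/HMM variants would additionally condition on surrounding tags:

```python
from collections import Counter, defaultdict

def train_unigram(tagged):
    """Unigram tagger: map each word to its most frequent tag in
    the training data, ignoring all context."""
    counts = defaultdict(Counter)
    for word, tag_ in tagged:
        counts[word][tag_] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(model, words, default="NOUN"):
    """Tag a sentence; unseen words fall back to a default tag."""
    return [model.get(w, default) for w in words]
```

Even this context-free baseline reaches 77.38% in the paper's experiments; the accuracy gap to the HMM tagger (93.82%) comes entirely from modeling tag context.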
|