| id | title | categories | abstract |
|---|---|---|---|
1308.5133 | Performance Measurement Under Increasing Environmental Uncertainty In
The Context of Interval Type-2 Fuzzy Logic Based Robotic Sailing | cs.RO cs.NE cs.SY | Performance measurement of robotic controllers based on fuzzy logic,
operating under uncertainty, is a subject area which has been somewhat ignored
in the current literature. In this paper, standard measures such as RMSE are
shown to be inappropriate for use under conditions where the environmental
uncertainty changes significantly between experiments. An overview of current
methods which have been applied by other authors is presented, followed by a
design of a more sophisticated method of comparison. This method is then
applied to a robotic control problem to observe its outcome compared with a
single measure. Results show that the described technique provides a more
robust method of performance comparison than less complex methods, allowing
better comparisons to be drawn.
|
1308.5136 | Extending Similarity Measures of Interval Type-2 Fuzzy Sets to General
Type-2 Fuzzy Sets | cs.AI | Similarity measures provide one of the core tools that enable reasoning about
fuzzy sets. While many types of similarity measures exist for type-1 and
interval type-2 fuzzy sets, there are very few similarity measures that enable
the comparison of general type-2 fuzzy sets. In this paper, we introduce a
general method for extending existing interval type-2 similarity measures to
similarity measures for general type-2 fuzzy sets. Specifically, we show how
similarity measures for interval type-2 fuzzy sets can be employed in
conjunction with the zSlices based general type-2 representation for fuzzy sets
to provide measures of similarity which preserve all the common properties
(i.e. reflexivity, symmetry, transitivity and overlapping) of the original
interval type-2 similarity measure. We demonstrate examples of such extended
fuzzy measures and provide comparisons between (different types of) interval
and general type-2 fuzzy measures.
|
1308.5137 | Measuring the Directional Distance Between Fuzzy Sets | cs.AI | The measure of distance between two fuzzy sets is a fundamental tool within
fuzzy set theory. However, current distance measures within the literature do
not account for the direction of change between fuzzy sets; a useful concept in
a variety of applications, such as Computing With Words. In this paper, we
highlight this utility and introduce a distance measure which takes the
direction between sets into account. We provide details of its application for
normal and non-normal, as well as convex and non-convex fuzzy sets. We
demonstrate the new distance measure using real data from the MovieLens dataset
and establish the benefits of measuring the direction between fuzzy sets.
|
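The idea of a direction-aware distance in the abstract above can be illustrated with a minimal sketch: for discrete type-1 fuzzy sets, a signed difference of centroids already captures the direction of change between sets. This is only an illustration under simplifying assumptions (discretized, convex sets; the set names and values are invented here); the paper's measure is more general and also handles non-normal and non-convex sets.

```python
def centroid(fs):
    """Centroid of a discrete fuzzy set given as {x: membership degree}."""
    return sum(x * mu for x, mu in fs.items()) / sum(fs.values())

def directional_distance(a, b):
    """Signed distance: positive when b lies to the right of a."""
    return centroid(b) - centroid(a)

# Toy fuzzy sets standing in for words like "low" and "high".
low  = {1: 0.2, 2: 1.0, 3: 0.2}
high = {6: 0.2, 7: 1.0, 8: 0.2}
print(directional_distance(low, high))  # positive: "high" lies to the right
print(directional_distance(high, low))  # negative: direction is reversed
```

A symmetric distance would return the same value in both calls; the sign is exactly the extra information a directional measure carries.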
1308.5138 | Artificial Immune Systems (INTROS 2) | cs.NE cs.ET | The biological immune system is a robust, complex, adaptive system that
defends the body from foreign pathogens. It is able to categorize all cells (or
molecules) within the body as self or non-self substances. It does this with
the help of a distributed task force that has the intelligence to take action
from a local and also a global perspective using its network of chemical
messengers for communication. There are two major branches of the immune
system. The innate immune system is an unchanging mechanism that detects and
destroys certain invading organisms, whilst the adaptive immune system responds
to previously unknown foreign cells and builds a response to them that can
remain in the body over a long period of time. This remarkable information
processing biological system has caught the attention of computer science in
recent years.
A novel computational intelligence technique, inspired by immunology, has
emerged, called Artificial Immune Systems. Several concepts from the immune
system have been extracted and applied to the solution of real-world science and
engineering problems. In this tutorial, we briefly describe the immune system
metaphors that are relevant to existing Artificial Immune Systems methods. We
will then show illustrative real-world problems suitable for Artificial Immune
Systems and give a step-by-step algorithm walkthrough for one such problem. A
comparison of the Artificial Immune Systems to other well-known algorithms,
areas for future work, tips & tricks and a list of resources will round this
tutorial off. It should be noted that as Artificial Immune Systems is still a
young and evolving field, there is not yet a fixed algorithm template and hence
actual implementations might differ somewhat from time to time and from those
examples given here.
|
1308.5144 | Detect adverse drug reactions for drug Pioglitazone | cs.CE | In this study we propose a novel method to detect adverse drug reactions
(ADRs) using a feature matrix and feature selection. A feature matrix, which
characterizes the medical events before and after patients take a drug, is
created from the THIN database. Student's t-test is used as the feature
selection method to detect significant features among thousands of medical
events, and the ADRs corresponding to those significant features are
identified. Experiments are performed on the drug Pioglitazone. Compared to
other computerized methods, our proposed method achieves good performance.
|
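As a hedged illustration of the feature-selection step described above, the sketch below computes Welch's t-statistic by hand and keeps the features whose before/after difference is large. The event names and counts are invented toy data, not THIN records, and a fixed threshold on |t| stands in for a proper p-value cutoff.

```python
import math

def t_statistic(before, after):
    """Welch's t-statistic for two independent samples."""
    n1, n2 = len(before), len(after)
    m1, m2 = sum(before) / n1, sum(after) / n2
    v1 = sum((x - m1) ** 2 for x in before) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in after) / (n2 - 1)
    return (m2 - m1) / math.sqrt(v1 / n1 + v2 / n2)

def significant_features(feature_matrix, threshold=2.0):
    """Keep medical-event features whose before/after difference is large."""
    return [name for name, (before, after) in feature_matrix.items()
            if abs(t_statistic(before, after)) > threshold]

# Toy feature matrix: event -> (counts before the drug, counts after).
events = {
    "oedema":   ([1, 0, 1, 0, 1], [4, 5, 4, 6, 5]),  # clearly shifted
    "headache": ([2, 3, 2, 3, 2], [3, 2, 3, 2, 3]),  # essentially unchanged
}
print(significant_features(events))  # ['oedema']
```

In practice the threshold would be derived from the t-distribution at a chosen significance level rather than hard-coded.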
1308.5146 | Compressive Multiplexing of Correlated Signals | cs.IT math.IT stat.AP | We present a general architecture for the acquisition of ensembles of
correlated signals. The signals are multiplexed onto a single line by mixing
each one against a different code and then adding them together, and the
resulting signal is sampled at a high rate. We show that if the $M$ signals,
each bandlimited to $W/2$ Hz, can be approximated by a superposition of $R < M$
underlying signals, then the ensemble can be recovered by sampling at a rate
within a logarithmic factor of $RW$ (as compared to the Nyquist rate of $MW$).
This sampling theorem shows that the correlation structure of the signal
ensemble can be exploited in the acquisition process even though it is unknown
a priori.
The reconstruction of the ensemble is recast as a low-rank matrix recovery
problem from linear measurements. The architectures we are considering impose a
certain type of structure on the linear operators. Although our results depend
on the mixing forms being random, this imposed structure results in a very
different type of random projection than those analyzed in the low-rank
recovery literature to date.
|
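The front-end mixing step of the architecture above can be sketched in a few lines: each signal is multiplied pointwise against its own random ±1 code and the results are summed onto a single line. This is only the acquisition side under illustrative assumptions (discrete-time signals of equal length, Rademacher codes); the recovery side, recast in the paper as low-rank matrix recovery, is not shown.

```python
import random

def multiplex(signals, seed=0):
    """Mix each signal against a random +/-1 chipping code, sum onto one line.

    All signals are assumed to have equal length.
    """
    rng = random.Random(seed)
    codes = [[rng.choice((-1, 1)) for _ in signal] for signal in signals]
    mixed = [sum(code[t] * signal[t] for code, signal in zip(codes, signals))
             for t in range(len(signals[0]))]
    return mixed, codes

# Two toy signals multiplexed onto a single sampled line.
signals = [[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]]
line, codes = multiplex(signals)
print(len(line))  # 4: one line, same length as each input
```

Each output sample is a code-weighted superposition of all inputs at that instant, which is what lets the correlation structure of the ensemble be exploited downstream.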
1308.5149 | Sub-Nyquist Sampling for Power Spectrum Sensing in Cognitive Radios: A
Unified Approach | cs.IT math.IT | In light of the ever-increasing demand for new spectral bands and the
underutilization of those already allocated, the concept of Cognitive Radio
(CR) has emerged. Opportunistic users could exploit temporarily vacant bands
after detecting the absence of activity of their owners. One of the crucial
tasks in the CR cycle is therefore spectrum sensing and detection which has to
be precise and efficient. Yet, CRs typically deal with wideband signals whose
Nyquist rates are very high. In this paper, we propose to reconstruct the power
spectrum of such signals from sub-Nyquist samples, rather than the signal
itself as done in previous work, in order to perform detection. We consider
both sparse and non-sparse signals, as well as blind and non-blind detection
in the sparse case. For each of these scenarios, we derive the minimal sampling
rate allowing perfect reconstruction of the signal's power spectrum in a
noise-free environment and provide power spectrum recovery techniques that
achieve those rates. The analysis is performed for two different signal models
considered in the literature, which we refer to as the analog and digital
models, and shows that both lead to similar results. Simulations demonstrate
power spectrum recovery at the minimal rate in noise-free settings and show the
impact of several parameters on the detector performance, including
signal-to-noise ratio (SNR), sensing time and sampling rate.
|
1308.5168 | Is Somebody Watching Your Facebook Newsfeed? | cs.SI | With the popularity of Social Networking Services (SNS), more and more
sensitive information is stored online and associated with SNS accounts. The
obvious value of SNS accounts motivates the usage stealing problem --
unauthorized, stealthy use of SNS accounts on the devices owned/used by account
owners without any technology hacks. For example, anxious parents may use their
kids' SNS accounts to inspect the kids' social status; husbands/wives may use
their spouses' SNS accounts to spot possible affairs. Usage stealing could
happen anywhere in any form, and seriously invades the privacy of account
owners. However, there is no currently known defense against such usage
stealing. To an SNS operator (e.g., Facebook Inc.), usage stealing is hard to
detect using traditional methods because such attackers come from the same IP
addresses/devices, use the same credentials, and share the same accounts as the
owners do.
In this paper, we propose a novel continuous authentication approach that
analyzes user browsing behavior to detect SNS usage stealing incidents. We use
Facebook as a case study and show that it is possible to detect such incidents
by analyzing SNS browsing behavior. Our experiment results show that our
proposal can achieve higher than 80% detection accuracy within 2 minutes, and
higher than 90% detection accuracy after 7 minutes of observation time.
|
1308.5169 | Degree Correlation in Scale-Free Graphs | cond-mat.stat-mech cs.SI physics.data-an physics.soc-ph | We obtain closed form expressions for the expected conditional degree
distribution and the joint degree distribution of the linear preferential
attachment model for network growth in the steady state. We consider the
multiple-destination preferential attachment growth model, where incoming nodes
at each timestep attach to $\beta$ existing nodes, selected by
degree-proportional probabilities. By the conditional degree distribution
$p(\ell| k)$, we mean the degree distribution of nodes that are connected to a
node of degree $k$. By the joint degree distribution $p(k,\ell)$, we mean the
proportion of links that connect nodes of degrees $k$ and $\ell$. In addition
to this growth model, we consider the shifted-linear preferential growth model
and solve for the same quantities, as well as a closed form expression for its
steady-state degree distribution.
|
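A hedged simulation sketch of the multiple-destination preferential attachment model described above: each incoming node attaches to β existing nodes chosen with degree-proportional probability (via the standard repeated-endpoints trick), and the joint degree counts over links give an empirical picture of p(k, ℓ). The seed graph, β = 2, and graph size are illustrative choices; the simulation only approximates the steady-state quantities the paper derives in closed form.

```python
import random
from collections import Counter

def preferential_attachment(n, beta=2, seed=1):
    """Grow a graph: each new node attaches to beta distinct existing nodes,
    selected with degree-proportional probability (repeated-endpoints trick)."""
    rng = random.Random(seed)
    edges = [(i, j) for i in range(beta + 1) for j in range(i)]  # seed clique
    endpoints = [v for e in edges for v in e]  # node v appears deg(v) times
    for new in range(beta + 1, n):
        targets = set()
        while len(targets) < beta:
            targets.add(rng.choice(endpoints))  # degree-proportional draw
        for t in targets:
            edges.append((new, t))
            endpoints += [new, t]
    return edges

def joint_degree_counts(edges):
    """Counts of (k, l) degree pairs over links: the empirical p(k, l)."""
    deg = Counter(v for e in edges for v in e)
    pairs = Counter()
    for u, v in edges:
        k, l = sorted((deg[u], deg[v]))
        pairs[(k, l)] += 1
    return pairs

edges = preferential_attachment(500)
pairs = joint_degree_counts(edges)
```

Normalizing `pairs` by the total number of links yields the empirical joint degree distribution, which can then be compared against the closed-form expressions.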
1308.5190 | Contraction of online response to major events | physics.soc-ph cs.SI | Quantifying regularities in behavioral dynamics is of crucial interest for
understanding collective social events such as panics or political revolutions.
With the widespread use of digital communication media it has become possible
to study massive data streams of user-created content in which individuals
express their sentiments, often towards a specific topic. Here we investigate
messages from various online media created in response to major, collectively
followed events such as sport tournaments, presidential elections or a large
snow storm. We relate content length and message rate, and find a systematic
correlation during events which can be described by a power law relation - the
higher the excitation the shorter the messages. We show that on the one hand
this effect can be observed in the behavior of most regular users, and on the
other hand is accentuated by the engagement of additional user demographics who
only post during phases of high collective activity. Further, we identify the
distributions of content lengths as lognormals in line with statistical
linguistics, and suggest a phenomenological law for the systematic dependence
of the message rate on the lognormal mean parameter. Our measurements have
practical implications for the design of micro-blogging and messaging services.
In the case of the existing service Twitter, we show that the imposed limit of
140 characters per message currently forces a substantial fraction of tweets to
be truncated by their users, which can make them dissatisfying to compose.
|
1308.5200 | Manopt, a Matlab toolbox for optimization on manifolds | cs.MS cs.LG math.OC stat.ML | Optimization on manifolds is a rapidly developing branch of nonlinear
optimization. Its focus is on problems where the smooth geometry of the search
space can be leveraged to design efficient numerical algorithms. In particular,
optimization on manifolds is well-suited to deal with rank and orthogonality
constraints. Such structured constraints appear pervasively in machine learning
applications, including low-rank matrix completion, sensor network
localization, camera network registration, independent component analysis,
metric learning, dimensionality reduction and so on. The Manopt toolbox,
available at www.manopt.org, is a user-friendly, documented piece of software
dedicated to simplifying experimentation with state-of-the-art Riemannian
optimization algorithms. We aim in particular to reach practitioners outside
our field.
|
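Manopt itself is a Matlab toolbox; to keep the examples in this document in a single language, here is a hedged pure-Python sketch of the core idea it implements: Riemannian gradient ascent on a manifold. The example maximizes the Rayleigh quotient x^T A x on the unit sphere (whose maximizer is the dominant eigenvector) by projecting the Euclidean gradient onto the tangent space and retracting back to the sphere. The step size and iteration count are illustrative assumptions, not Manopt defaults.

```python
import math

def sphere_rgd(A, x0, steps=200, lr=0.1):
    """Riemannian gradient ascent of x^T A x on the unit sphere."""
    x = x0[:]
    n = len(x)
    for _ in range(steps):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        xAx = sum(x[i] * Ax[i] for i in range(n))
        # Project Euclidean gradient 2*Ax onto the tangent space at x.
        grad = [2 * (Ax[i] - xAx * x[i]) for i in range(n)]
        x = [x[i] + lr * grad[i] for i in range(n)]
        norm = math.sqrt(sum(v * v for v in x))  # retraction: renormalize
        x = [v / norm for v in x]
    return x, xAx

A = [[2.0, 0.0], [0.0, 1.0]]        # dominant eigenvector (1, 0), value 2
x, val = sphere_rgd(A, [0.6, 0.8])  # start on the sphere
```

The rank and orthogonality constraints mentioned in the abstract are handled the same way in Manopt, with the sphere replaced by Stiefel, Grassmann, or fixed-rank manifolds.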
1308.5202 | Throughput of Cognitive Radio Systems with Finite Blocklength Codes | cs.IT math.IT | In this paper, throughput achieved in cognitive radio channels with finite
blocklength codes under buffer limitations is studied. Cognitive users first
determine the activity of the primary users through channel sensing and then
initiate data transmission at a power level that depends on the channel sensing
decisions. It is assumed that finite blocklength codes are employed in the data
transmission phase. Hence, errors can occur in reception and retransmissions
can be required. Primary users' activities are modeled as a two-state Markov
chain and an eight-state Markov chain is constructed in order to model the
cognitive radio channel. Channel state information (CSI) is assumed to be
perfectly known by either the secondary receiver only or both the secondary
transmitter and receiver. In the absence of CSI at the transmitter, fixed-rate
transmission is performed whereas under perfect CSI knowledge, for a given
target error probability, the transmitter varies the rate according to the
channel conditions. Under these assumptions, throughput in the presence of
buffer constraints is determined by characterizing the maximum constant arrival
rates that can be supported by the cognitive radio channel while satisfying
certain limits on buffer violation probabilities. Tradeoffs between throughput,
buffer constraints, coding blocklength, and sensing duration for both
fixed-rate and variable-rate transmissions are analyzed numerically. The
relations between average error probability, sensing threshold and sensing
duration are studied in the case of variable-rate transmissions.
|
1308.5211 | Two-layer Locally Repairable Codes for Distributed Storage Systems | cs.IT math.IT | In this paper, we propose locally repairable codes (LRCs) with optimal
minimum distance for distributed storage systems (DSS). A two-layer encoding
structure is employed to ensure data reconstruction and the designated repair
locality. The data is first encoded in the first layer by any existing maximum
distance separable (MDS) codes, and then the encoded symbols are divided into
non-overlapping groups and encoded by an MDS array code in the second layer.
The encoding in the second layer provides enough redundancy for local repair,
while the overall code performs recovery of the data based on redundancy from
both layers. Our codes can be constructed over a finite field with size growing
linearly with the total number of nodes in the DSS, and facilitate efficient
degraded reads.
|
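The local-repair property of the second layer can be illustrated with a deliberately simplified sketch: symbols are divided into non-overlapping groups and each group gets a single XOR parity, so any one erased symbol is recovered from its own group alone. This is an assumption-laden stand-in, not the paper's construction, which uses an MDS code in the first layer and an MDS array code in the second.

```python
def xor_all(values):
    """XOR of an iterable of integer symbols."""
    out = 0
    for v in values:
        out ^= v
    return out

def encode_groups(symbols, group_size):
    """Second-layer sketch: split symbols into groups and append an XOR
    parity to each, so a single erasure is repairable within its group."""
    groups = [symbols[i:i + group_size]
              for i in range(0, len(symbols), group_size)]
    return [g + [xor_all(g)] for g in groups]

def repair(group, lost_index):
    """Local repair: XOR the surviving symbols of the group."""
    return xor_all(v for i, v in enumerate(group) if i != lost_index)

stored = encode_groups([5, 9, 12, 7, 3, 8], group_size=3)
g = stored[0]        # [5, 9, 12, parity]
print(repair(g, 1))  # 9, recovered from the rest of the group
```

The point the abstract makes is that repair touches only one group (low repair locality), while the overall code still tolerates heavier failure patterns through the first-layer redundancy.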
1308.5239 | On Locally Decodable Source Coding | cs.IT math.IT | Locally decodable channel codes form a special class of error-correcting
codes with the property that the decoder is able to reconstruct any bit of the
input message from querying only a few bits of a noisy codeword. It is well
known that such codes require significantly more redundancy (in particular have
vanishing rate) compared to their non-local counterparts. In this paper, we
define a dual problem, i.e. locally decodable source codes (LDSC). We consider
both the almost lossless (block error) and lossy (bit error) cases. In the
almost lossless case, we show that optimal compression (to entropy) is possible
with O(log n) queries to the compressed string by the decompressor. We also
show the following converse bounds: 1) linear LDSCs cannot achieve any rate
below one with a bounded number of queries, 2) the rate of any source code with
a linear decoder (not necessarily local) is one, 3) with 2 queries, no code
construction can have a rate below one. In the lossy case, we show that any
rate above the rate-distortion function is achievable with a bounded number of
queries, and that the rate-distortion function itself is achievable with any
scaling number of queries. We provide an
achievability bound in the finite block-length regime and compare it with the
existing bounds in succinct data structures literature.
|
1308.5249 | A Note on Sparsification by Frames | cs.IT math.IT | The purpose of this note is to establish a new generalized
Dictionary-Restricted Isometry Property (D-RIP) sparsity bound constant for
compressed sensing.
For fulfilling the D-RIP, the constant $\delta_k$ is used in the definition:
$(1 -\delta_k)\|D v\|_2^2 \le \|\Phi D v\|_2^2 \le (1 + \delta_k)\|D v\|_2^2$.
We prove that signals with a $k$-sparse $D$-representation can be reconstructed
if $\delta_{2k} < \frac{2}{3}$.
The approach in this note can be extended to obtain other D-RIP bounds (i.e.,
$\delta_{tk}$).
|
1308.5269 | A comparative analysis of methods for estimating axon diameter using DWI | cs.NE | The importance of studying the brain microstructure is described, and the
existing state-of-the-art non-invasive methods for investigating the brain
microstructure using Diffusion Weighted Magnetic Resonance Imaging (DWI) are
reviewed. Cramer-Rao Lower Bound (CRLB) analysis is then described and utilised
to assess the minimum estimation error and uncertainty level of different
Diffusion Weighted Magnetic Resonance (DWMR) signal decay models. The analyses
are performed under the best-case scenario, in which we assume that the models
appropriately represent the measured phenomena; this includes studying the
sensitivity of the estimates to the measurement and model parameters. It is
demonstrated that none of the existing models can achieve a reasonable minimum
uncertainty level under a typical measurement setup. Finally, the practical
obstacles to achieving higher performance in clinical and experimental
environments are studied and their effects on the feasibility of the methods
are discussed.
|
1308.5273 | CrowdGrader: Crowdsourcing the Evaluation of Homework Assignments | cs.SI cs.IR | Crowdsourcing offers a practical method for ranking and scoring large amounts
of items. To investigate the algorithms and incentives that can be used in
crowdsourcing quality evaluations, we built CrowdGrader, a tool that lets
students submit and collaboratively grade solutions to homework assignments. We
present the algorithms and techniques used in CrowdGrader, and we describe our
results and experience in using the tool for several computer-science
assignments.
CrowdGrader combines the student-provided grades into a consensus grade for
each submission using a novel crowdsourcing algorithm that relies on a
reputation system. The algorithm iteratively refines inter-dependent estimates
of the consensus grades, and of the grading accuracy of each student. On
synthetic data, the algorithm performs better than alternatives not based on
reputation. On our preliminary experimental data, the performance seems
dependent on the nature of review errors, with errors that can be ascribed to
the reviewer being more tractable than those arising from random external
events. To provide an incentive for reviewers, the grade each student receives
in an assignment is a combination of the consensus grade received by their
submissions, and of a reviewing grade capturing their reviewing effort and
accuracy. This incentive worked well in practice.
|
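The alternating estimation described above (consensus grades from accuracy-weighted reviews, reviewer accuracies from deviation against the consensus) can be sketched as follows. This is a minimal illustration of the idea, not CrowdGrader's actual algorithm: the weighting rule, iteration count, and all names and grades are invented here.

```python
def consensus_grades(reviews, iterations=20):
    """reviews: {submission: {reviewer: grade}}. Alternately re-estimate
    consensus grades (accuracy-weighted means) and reviewer accuracies
    (inverse mean squared deviation from the current consensus)."""
    reviewers = {r for grades in reviews.values() for r in grades}
    weight = {r: 1.0 for r in reviewers}
    consensus = {}
    for _ in range(iterations):
        for sub, grades in reviews.items():
            total = sum(weight[r] for r in grades)
            consensus[sub] = sum(weight[r] * g
                                 for r, g in grades.items()) / total
        for r in reviewers:
            errs = [(g - consensus[s]) ** 2
                    for s, grades in reviews.items()
                    for rv, g in grades.items() if rv == r]
            weight[r] = 1.0 / (sum(errs) / len(errs) + 1e-6)
    return consensus, weight

# "dave" is a systematic outlier and should end up with low weight.
reviews = {
    "hw1_alice": {"bob": 8, "carol": 8, "dave": 3},
    "hw1_eve":   {"bob": 6, "carol": 6, "dave": 10},
}
consensus, weight = consensus_grades(reviews)
```

After a few iterations the consensus grades settle near the values the accurate reviewers gave, and the outlier's influence is discounted automatically.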
1308.5275 | The Lovasz-Bregman Divergence and connections to rank aggregation,
clustering, and web ranking | cs.LG cs.IR stat.ML | We extend the recently introduced theory of Lovasz-Bregman (LB) divergences
(Iyer & Bilmes, 2012) in several ways. We show that they represent a distortion
between a 'score' and an 'ordering', thus providing a new view of rank
aggregation and order based clustering with interesting connections to web
ranking. We show how the LB divergences have a number of properties akin to
many permutation based metrics, and in fact have as special cases forms very
similar to the Kendall-$\tau$ metric. We also show how the LB divergences
subsume a number of commonly used ranking measures in information retrieval,
like the NDCG and AUC. Unlike the traditional permutation based metrics,
however, the LB divergence naturally captures a notion of "confidence" in the
orderings, thus providing a new representation to applications involving
aggregating scores as opposed to just orderings. We show how a number of
recently used web ranking models are forms of Lovasz-Bregman rank aggregation
and also observe that a natural form of the Mallows model using the LB divergence
has been used as conditional ranking models for the 'Learning to Rank' problem.
|
1308.5281 | Ensemble of Distributed Learners for Online Classification of Dynamic
Data Streams | cs.LG | We present an efficient distributed online learning scheme to classify data
captured from distributed, heterogeneous, and dynamic data sources. Our scheme
consists of multiple distributed local learners that analyze different streams
of data that are correlated to a common event that needs to be classified. Each
learner uses a local classifier to make a local prediction. The local
predictions are then collected by each learner and combined using a weighted
majority rule to output the final prediction. We propose a novel online
ensemble learning algorithm to update the aggregation rule in order to adapt to
the underlying data dynamics. We rigorously determine a bound for the worst
case misclassification probability of our algorithm which depends on the
misclassification probabilities of the best static aggregation rule, and of the
best local classifier. Importantly, the worst case misclassification
probability of our algorithm tends asymptotically to 0 if the misclassification
probability of the best static aggregation rule or the misclassification
probability of the best local classifier tend to 0. Then we extend our
algorithm to address challenges specific to the distributed implementation and
we prove new bounds that apply to these settings. Finally, we test our scheme
by performing an evaluation study on several data sets. When applied to data
sets widely used by the literature dealing with dynamic data streams and
concept drift, our scheme exhibits performance gains ranging from 34% to 71%
with respect to state of the art solutions.
|
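The weighted-majority aggregation with online updates described above can be sketched with the classic multiplicative-weights rule: each local learner votes, votes are combined by weight, and a learner's weight is multiplied by a factor β < 1 whenever it errs. The β value and the toy prediction streams below are illustrative assumptions, not the paper's algorithm or data.

```python
def weighted_majority(predictions, labels, beta=0.5):
    """Online weighted majority over local learners' binary predictions.
    predictions[t][i] is learner i's vote at time t; labels[t] is the truth."""
    n = len(predictions[0])
    w = [1.0] * n
    mistakes = 0
    for votes, y in zip(predictions, labels):
        yes = sum(wi for wi, v in zip(w, votes) if v == 1)
        no = sum(wi for wi, v in zip(w, votes) if v == 0)
        final = 1 if yes >= no else 0  # weighted-majority prediction
        if final != y:
            mistakes += 1
        # Multiplicatively penalize every learner that voted incorrectly.
        w = [wi * (beta if v != y else 1.0) for wi, v in zip(w, votes)]
    return w, mistakes

# Learner 0 is always right, learner 1 always wrong, learner 2 votes 1 always.
preds  = [[1, 0, 1], [0, 1, 1], [1, 0, 1], [0, 1, 1]]
labels = [1, 0, 1, 0]
w, mistakes = weighted_majority(preds, labels)
```

The bad learner's weight decays geometrically, which is the mechanism behind worst-case mistake bounds of the kind the abstract proves.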
1308.5286 | R-Score: Reputation-based Scoring of Research Groups | cs.DL cs.IR | To manage the problem of having a higher demand for resources than
availability of funds, research funding agencies usually rank the major
research groups in their area of knowledge. This ranking relies on a careful
analysis of the research groups in terms of their size, number of PhDs
graduated, research results and their impact, among other variables. While
research results are not the only variable to consider, they are frequently
given special attention because of the notoriety they confer to the researchers
and the programs they are affiliated with. Here we introduce a new metric
for quantifying publication output, called R-Score (for reputation-based
score), which can be used in support of the ranking of research groups or
programs.
novelty is that the metric depends solely on the listings of the publications
of the members of a group, with no dependency on citation counts. R-Score has
some interesting properties: (a) it does not require access to the contents of
published material, (b) it can be curated to produce highly accurate results,
and (c) it can be naturally used to compare publication output of research
groups (e.g., graduate programs) within the same country, geographical area, or
across the world. An experiment comparing the publication output of 25 CS
graduate programs from Brazil suggests that R-Score can be quite useful for
providing early insights into the publication patterns of the various research
groups one wants to compare.
|
1308.5304 | Enhancing Secrecy with Multi-Antenna Transmission in Wireless Ad Hoc
Networks | cs.IT math.IT | We study physical-layer security in wireless ad hoc networks and investigate
two types of multi-antenna transmission schemes for providing secrecy
enhancements. To establish secure transmission against malicious eavesdroppers,
we consider the generation of artificial noise with either sectoring or
beamforming. For both approaches, we provide a statistical characterization and
tradeoff analysis of the outage performance of the legitimate communication and
the eavesdropping links. We then investigate the networkwide secrecy throughput
performance of both schemes in terms of the secrecy transmission capacity, and
study the optimal power allocation between the information signal and the
artificial noise. Our analysis indicates that, under transmit power
optimization, the beamforming scheme outperforms the sectoring scheme, except
for the case where the number of transmit antennas is sufficiently large. Our
study also reveals some interesting differences between the optimal power
allocation for the sectoring and beamforming schemes.
|
1308.5315 | Edge-detection applied to moving sand dunes on Mars | cs.CV | Here we discuss the application of an edge-detection filter, the Sobel filter
of GIMP, to the recently discovered motion of some sand dunes on Mars. The
filter allows a good comparison between a 2007 HiRISE image and a 1999 image
recorded by the Mars Global Surveyor of the dunes in the Nili Patera caldera,
thereby measuring the motion of the dunes over a longer period of time than
that previously investigated.
|
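The Sobel operator used above convolves an image with two small kernels to estimate horizontal and vertical intensity gradients, whose magnitude highlights edges such as a dune margin against sand. A minimal pure-Python sketch on a toy brightness step (GIMP applies the same operator to real raster images; the toy grid below is an invented example):

```python
import math

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def sobel(image):
    """Gradient magnitude via the Sobel operator (interior pixels only)."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out

# A vertical brightness step, like a bright dune edge against dark terrain.
img = [[0, 0, 9, 9]] * 4
edges = sobel(img)
```

The response peaks along the step and vanishes in flat regions, which is why differencing Sobel maps from two epochs makes the displaced dune edges easy to compare.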
1308.5317 | Peer Pressure Shapes Consensus, Leadership, and Innovations in Social
Groups | physics.soc-ph cs.SI | What is the effect of the combined direct and indirect social influences,
i.e., peer pressure (PP), on a social group's collective decisions? We present
a model that captures PP as a function of the socio-cultural distance between
individuals in a social group. Using this model and empirical data from 15
real-world social networks, we found that the PP level determines how fast a
social group reaches
consensus. More importantly, the levels of PP determine the leaders who can
achieve full control of their social groups. PP can overcome barriers imposed
upon a consensus by the existence of tightly connected communities with local
leaders or the existence of leaders with poor cohesiveness of opinions. A
moderate level of PP is also necessary to explain the rate at which innovations
diffuse through a variety of social groups.
|
1308.5321 | Evolution Theory of Self-Evolving Autonomous Problem Solving Systems | cs.AI | The present study gives a mathematical framework for self-evolution within
autonomous problem solving systems. Special attention is given to universal
abstraction and its generation by net block homomorphism, and consequently to
multiple-order solving systems and the overall decidability of the set of
solutions. Via an overlapping presentation of nets, a new abstraction relation
among nets is formulated, alongside a consequent alphabetical net block
renetting
system proportional to normal forms of renetting systems regarding the
operational power. A new structure in self-evolving problem solving is
established via saturation by groups of equivalence relations and iterative
closures of generated quotient transducer algebras over the whole evolution.
|
1308.5329 | Monitoring with uncertainty | cs.LO cs.LG cs.SY | We discuss the problem of runtime verification of an instrumented program
that fails to emit, and hence to monitor, some events. These gaps can occur when a
monitoring overhead control mechanism is introduced to disable the monitor of
an application with real-time constraints. We show how to use statistical
models to learn the application behavior and to "fill in" the introduced gaps.
Finally, we present and discuss some techniques developed in the last three
years to estimate the probability that a property of interest is violated in
the presence of an incomplete trace.
|
1308.5330 | Combinatorial Abstractions of Dynamical Systems | cs.SY cs.LO | Formal verification has been successfully developed in computer science for
verifying combinatorial classes of models and specifications. In like manner,
formal verification methods have been developed for dynamical systems. However,
the verification of system properties, such as safety, is based on reachability
calculations, which are the sources of insurmountable complexity. This talk
addresses indirect verification methods, which are based on abstracting the
dynamical systems by models of reduced complexity and preserving central
properties of the original systems.
|
1308.5331 | Networked Embedded Control Systems: from Modelling to Implementation | cs.SY | Networked Embedded Control Systems are distributed control systems where the
communication among plants, sensors, actuators and controllers occurs in a
shared network. They have been the subject of intensive study in the last few
years. In this paper we survey our contribution to this research topic.
|
1308.5332 | An Integrated Framework for Diagnosis and Prognosis of Hybrid Systems | cs.SY cs.AI cs.SE | Complex systems are naturally hybrid: their dynamic behavior is both
continuous and discrete. For these systems, maintenance and repair are an
increasing part of the total cost of the final product. Efficient diagnosis and
prognosis techniques have to be adopted to detect, isolate and anticipate
faults. This paper presents an original integrated theoretical framework for
diagnosis and prognosis of hybrid systems. The formalism used for hybrid
diagnosis is enriched in order to be able to follow the evolution of an aging
law for each fault of the system. The paper presents a methodology for
interleaving diagnosis and prognosis in a hybrid framework.
|
1308.5333 | Completeness of Lyapunov Abstraction | cs.SY | In this work, we continue our study on discrete abstractions of dynamical
systems. To this end, we use a family of partitioning functions to generate an
abstraction. The intersection of sub-level sets of the partitioning functions
defines cells, which are regarded as discrete objects. The union of cells makes
up the state space of the dynamical systems. Our construction gives rise to a
combinatorial object - a timed automaton. We examine sound and complete
abstractions. An abstraction is said to be sound when the flow of the time
automata covers the flow lines of the dynamical systems. If the dynamics of the
dynamical system and the time automaton are equivalent, the abstraction is
complete.
The commonly accepted paradigm for partitioning functions is that they ought
to be transversal to the studied vector field. We show that there is no
complete partitioning with transversal functions, even for particular dynamical
systems whose critical sets are isolated critical points. Therefore, we allow
the directional derivative along the vector field to be non-positive in this
work. This considerably complicates the abstraction technique. For
understanding dynamical systems, it is vital to study stable and unstable
manifolds and their intersections. These objects appear naturally in this work.
Indeed, we show that for an abstraction to be complete, the set of critical
points of an abstraction function must contain either the stable or unstable
manifold of the dynamical system.
|
1308.5334 | Approximated Symbolic Computations over Hybrid Automata | cs.SY cs.FL cs.LO | Hybrid automata are a natural framework for modeling and analyzing systems
which exhibit a mixed discrete continuous behaviour. However, the standard
operational semantics defined over such models implicitly assume perfect
knowledge of the real systems and infinite precision measurements. Such
assumptions are not only unrealistic, but often lead to the construction of
misleading models. For these reasons we believe that it is necessary to
introduce more flexible semantics able to cope with noise, partial
information, and finite-precision instruments. In particular, in this paper we
integrate, in a single framework based on approximated semantics, different
over- and under-approximation techniques for hybrid automata. Our framework
allows us to compare, mix, and generalize such techniques, obtaining different
approximated reachability algorithms.
|
1308.5335 | World Automata: a compositional approach to model implicit communication
in hierarchical Hybrid Systems | cs.FL cs.MA | We propose an extension of Hybrid I/O Automata (HIOAs) to model agent systems
and their implicit communication through perturbation of the environment, such
as the localization of objects or the diffusion and detection of radio signals.
The new object, called a World Automaton (WA), is built in such a way as to
preserve as much as possible of the compositional properties of HIOAs and their
underlying theory.
From the formal point of view we enrich classical HIOAs with a set of world
variables whose values are functions both of time and space. World variables
are treated similarly to local variables of HIOAs, except in parallel
composition, where the perturbations produced by world variables are summed. In
this way, we obtain a structure able to model both agents and environments,
thus inducing a hierarchy in the model and leading to the introduction of a new
operator. This operator, called inplacement, is needed to represent the
possibility for an object (WA) to live inside another object/environment (WA).
|
1308.5336 | HyLTL: a temporal logic for model checking hybrid systems | cs.LO cs.FL cs.SY | The model-checking problem for hybrid systems is a well known challenge in
the scientific community. Most of the existing approaches and tools are limited
to safety properties only, or operate by transforming the hybrid system to be
verified into a discrete one, thus losing information on the continuous
dynamics of the system. In this paper we present a logic for specifying complex
properties of hybrid systems called HyLTL, and we show how it is possible to
solve the model checking problem by translating the formula into an equivalent
hybrid automaton. In this way the problem is reduced to a reachability problem
on hybrid automata that can be solved by using existing tools.
|
1308.5338 | A stochastic hybrid model of a biological filter | cs.LG cs.CE q-bio.MN | We present a hybrid model of a biological filter, a genetic circuit which
removes fast fluctuations in the cell's internal representation of the
extracellular environment. The model takes the classic feed-forward loop (FFL)
motif
and represents it as a network of continuous protein concentrations and binary,
unobserved gene promoter states. We address the problem of statistical
inference and parameter learning for this class of models from partial,
discrete time observations. We show that the hybrid representation leads to an
efficient algorithm for approximate statistical inference in this circuit, and
show its effectiveness on a simulated data set.
|
1308.5339 | A Simple Stochastic Differential Equation with Discontinuous Drift | cs.SY math.NA | In this paper we study solutions to stochastic differential equations (SDEs)
with discontinuous drift. We apply two approaches, the Euler-Maruyama method
and the Fokker-Planck equation, and show that a candidate density function based
on the Euler-Maruyama method approximates a candidate density function based on
the stationary Fokker-Planck equation. Furthermore, we introduce a smooth
function which approximates the discontinuous drift and apply the
Euler-Maruyama method and the Fokker-Planck equation with this input. The point
of departure for this work is a particular SDE with discontinuous drift.
|
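As an illustrative aside, the Euler-Maruyama method mentioned in this abstract can be sketched as follows. The drift, noise level, and parameters below are hypothetical assumptions for a generic SDE with a sign-type discontinuous drift, not the particular SDE studied in the paper:

```python
import numpy as np

def euler_maruyama(drift, sigma, x0, T=10.0, n_steps=10_000, seed=0):
    """Simulate one path of dX = drift(X) dt + sigma dW with Euler-Maruyama."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + drift(x[k]) * dt + sigma * dW
    return x

# A hypothetical discontinuous drift: -sign(x), pushing the state toward 0.
path = euler_maruyama(lambda x: -np.sign(x), sigma=0.5, x0=2.0)
```

A smoothed drift (e.g. `-np.tanh(x / eps)`) can be substituted for `-np.sign(x)` to mimic the smooth-approximation step the abstract describes.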
1308.5354 | Convex Optimization Approaches for Blind Sensor Calibration using
Sparsity | cs.IT math.IT | We investigate a compressive sensing framework in which the sensors introduce
a distortion to the measurements in the form of unknown gains. We focus on
blind calibration, using measurements performed on multiple unknown (but
sparse) signals, and formulate the joint recovery of the gains and the sparse
signals as a convex optimization problem. We divide this problem into three
subproblems with different conditions on the gains, specifically (i) gains with
different
amplitude and the same phase, (ii) gains with the same amplitude and different
phase and (iii) gains with different amplitude and phase. In order to solve the
first case, we propose an extension to the basis pursuit optimization which can
estimate the unknown gains along with the unknown sparse signals. For the
second case, we formulate a quadratic approach that eliminates the unknown
phase shifts and retrieves the unknown sparse signals. An alternative form of
this approach is also formulated to reduce complexity and memory requirements
and provide scalability with respect to the number of input signals. Finally
for the third case, we propose a formulation that combines the earlier two
approaches to solve the problem. The performance of the proposed algorithms is
investigated extensively through numerical simulations, which demonstrate that
simultaneous signal recovery and calibration is possible with convex methods
when sufficiently many (unknown, but sparse) calibrating signals are provided.
|
1308.5373 | Five Families of Three-Weight Ternary Cyclic Codes and Their Duals | cs.IT math.IT | As a subclass of linear codes, cyclic codes have applications in consumer
electronics, data storage systems, and communication systems as they have
efficient encoding and decoding algorithms. In this paper, five families of
three-weight ternary cyclic codes whose duals have two zeros are presented. The
weight distributions of the five families of cyclic codes are settled. The
duals of two families of the cyclic codes are optimal.
|
1308.5374 | Dynamic Reasoning Systems | cs.AI cs.LO | A {\it dynamic reasoning system} (DRS) is an adaptation of a conventional
formal logical system that explicitly portrays reasoning as a temporal
activity, with each extralogical input to the system and each inference rule
application being viewed as occurring at a distinct time step. Every DRS
incorporates some well-defined logic together with a controller that serves to
guide the reasoning process in response to user inputs. Logics are generic,
whereas controllers are application-specific. Every controller does,
nonetheless, provide an algorithm for nonmonotonic belief revision. The general
notion of a DRS comprises a framework within which one can formulate the logic
and algorithms for a given application and prove that the algorithms are
correct, i.e., that they serve to (i) derive all salient information and (ii)
preserve the consistency of the belief set. This paper illustrates the idea
with ordinary first-order predicate calculus, suitably modified for the present
purpose, and two examples. The second example revisits some classic
nonmonotonic reasoning puzzles (Opus the Penguin, Nixon Diamond) and shows how
these can be resolved in the context of a DRS, using an expanded version of
first-order logic that incorporates typed predicate symbols. All concepts are
rigorously defined and effectively computable, thereby providing the foundation
for a future software implementation.
|
1308.5380 | Effects of Crowding Perception on Self-organized Pedestrian Flows Using
Adaptive Agent-based Model | cs.MA | Pedestrian behavior has much more complicated characteristics in a dense
crowd and thus attracts the widespread interest of scientists and engineers.
However, even successful modeling approaches such as pedestrian models based on
particle systems still do not fully consider the perceptive mechanism
underlying collective pedestrian behavior. This paper extends a behavioral
heuristics-based pedestrian model to an adaptive agent-based model, which
explicitly considers the crowding effect of neighboring individuals and
perception anisotropy in the representation of a pedestrian's visual
information. Adaptive agents with crowding perception are constructed to
investigate the complex, self-organized collective dynamics of pedestrian
motion.
The proposed model simulates self-organized pedestrian flows in good
quantitative agreement with empirical data. The self-organized phenomena include
lane formation in bidirectional flow and the fundamental diagrams of
unidirectional flow. Simulation results show that the emergence of lane
formation in bidirectional flow is well reproduced. Further investigation shows
that increasing the view distance significantly reduces the number of lanes,
increases lane width, and stabilizes the self-organized lanes.
paper also discusses phase transitions of fundamental diagrams of pedestrian
crowds with unidirectional flow. It is found that heterogeneity in how
pedestrians perceive crowding has a remarkable impact on flow quality,
resulting in the buildup of congestion and rapidly decreasing the efficiency of
pedestrian flows. This also indicates that the concept of heterogeneity may be
used to explain the instability of phase transitions.
|
1308.5421 | Measuring Privacy Leakage for IDS Rules | cs.CR cs.IT math.IT | This paper proposes a measurement approach for estimating the privacy leakage
from Intrusion Detection System (IDS) alarms. Quantitative information flow
analysis is used to build a theoretical model of privacy leakage from IDS
rules, based on information entropy. This theoretical model is subsequently
verified empirically both based on simulations and in an experimental study.
The analysis shows that the metric is able to distinguish between IDS rules
that have no or low expected privacy leakage and IDS rules with a significant
risk of leaking sensitive information, for example on user behaviour. The
analysis is based on measurements of the number of IDS alarms, data length and
data entropy for relevant parts of IDS rules (for example, the payload). This
is a promising approach that opens the way for privacy benchmarking of Managed
Security Service providers.
|
1308.5423 | A Literature Review: Stemming Algorithms for Indian Languages | cs.CL | Stemming is the process of extracting the root word from a given inflected
word. It also plays a significant role in numerous applications of Natural
Language Processing (NLP). The stemming problem has been addressed in many
contexts and by researchers in many disciplines. This expository paper presents
a survey of some of the latest developments in stemming algorithms in data
mining, and also presents solutions for various Indian-language stemming
algorithms along with their results.
|
1308.5434 | Multilevel Topological Interference Management | cs.IT math.IT | The robust principles of treating interference as noise (TIN) when it is
sufficiently weak, and avoiding it when it is not, form the background for this
work. Combining TIN with the topological interference management (TIM)
framework that identifies optimal interference avoidance schemes, a baseline
TIM-TIN approach is proposed which decomposes a network into TIN and TIM
components, allocates the signal power levels to each user in the TIN
component, allocates signal vector space dimensions to each user in the TIM
component, and guarantees that the product of the two is an achievable number
of signal dimensions available to each user in the original network.
|
1308.5447 | On Conditions for Uniqueness in Sparse Phase Retrieval | cs.IT math.IT | The phase retrieval problem has a long history and is an important problem in
many areas of optics. Theoretical understanding of phase retrieval is still
limited and fundamental questions such as uniqueness and stability of the
recovered solution are not yet fully understood. This paper provides several
additions to the theoretical understanding of sparse phase retrieval. In
particular we show that if the measurement ensemble can be chosen freely, as
few as 4k-1 phaseless measurements suffice to guarantee uniqueness of a
k-sparse M-dimensional real solution. We also prove that 2(k^2-k+1) Fourier
magnitude measurements are sufficient under rather general conditions.
|
1308.5465 | Stability of Phase Retrievable Frames | math.FA cs.CV stat.ML | In this paper we study the property of phase retrievability by redundant
systems of vectors under perturbations of the frame set. Specifically, we show
that if a set $\fc$ of $m$ vectors in the complex Hilbert space of dimension n
allows for vector reconstruction from magnitudes of its coefficients, then
there is a perturbation bound $\rho$ so that any frame set within $\rho$ from
$\fc$ has the same property. In particular, this proves that the recent
construction in \cite{BH13} is stable under perturbations. By the same token,
we reduce the critical cardinality conjectured in \cite{BCMN13a} to proving a
stability result for non-phase-retrievable frames.
|
1308.5470 | Ashes 2013 - A network theory analysis of Cricket strategies | physics.soc-ph cs.SI physics.pop-ph | We demonstrate in this paper the use of tools of complex network theory to
describe the strategy of Australia and England in the recently concluded Ashes
2013 Test series. Using partnership data made available by cricinfo during the
Ashes 2013 Test series, we generate batting partnership network (BPN) for each
team, in which nodes correspond to batsmen and links represent runs scored in
partnerships between batsmen. The resulting networks display a visual summary
of the pattern of run-scoring by each team, which helps us identify potential
weaknesses in a batting order. We use different centrality scores to quantify
the performance, relative importance, and effect of removing a player from the
team. We observe that England is an extremely well-connected team, in which
lower-order batsmen consistently contributed significantly to the team score.
In contrast, Australia showed dependence on their top-order batsmen.
|
1308.5480 | Flaglets for studying the large-scale structure of the Universe | cs.IT astro-ph.CO astro-ph.IM math.IT | Pressing questions in cosmology such as the nature of dark matter and dark
energy can be addressed using large galaxy surveys, which measure the
positions, properties and redshifts of galaxies in order to map the large-scale
structure of the Universe. We review the Fourier-Laguerre transform, a novel
transform in 3D spherical coordinates which is based on spherical harmonics
combined with damped Laguerre polynomials and appropriate for analysing galaxy
surveys. We also recall the construction of flaglets, 3D wavelets obtained
through a tiling of the Fourier-Laguerre space, which can be used to extract
scale-dependent, spatially localised features on the ball. We exploit a
sampling theorem to obtain exact Fourier-Laguerre and flaglet transforms, such
that band-limited signals can be analysed and reconstructed at floating-point
accuracy on a finite number of voxels on the ball. We present a potential
application of the flaglet transform for finding voids in galaxy surveys and
studying the large-scale structure of the Universe.
|
1308.5499 | Linear models and linear mixed effects models in R with linguistic
applications | cs.CL | This text is a conceptual introduction to mixed effects modeling with
linguistic applications, using the R programming environment. The reader is
introduced to linear modeling and assumptions, as well as to mixed
effects/multilevel modeling, including a discussion of random intercepts,
random slopes and likelihood ratio tests. The example used throughout the text
focuses on the phonetic analysis of voice pitch data.
|
1308.5513 | The Metabolism and Growth of Web Forums | physics.soc-ph cs.CY cs.SI | We view web forums as virtual living organisms feeding on user's attention
and investigate how these organisms grow at the expense of collective
attention. We find that the "body mass" ($PV$) and "energy consumption" ($UV$)
of the studied forums exhibit the allometric growth property, i.e., $PV_t \sim
UV_t ^ \theta$. This implies that within a forum, the network transporting
attention flow between threads has a time-invariant structure, despite the
continuous change of the nodes (threads) and edges (clickstreams). The
observed time-invariant topology allows us to explain the dynamics of networks
by the behavior of threads. In particular, we describe the clickstream
dissipation on threads using the function $D_i \sim T_i ^ \gamma$, in which
$T_i$ is the clickstreams to node $i$ and $D_i$ is the clickstream dissipated
from $i$. It turns out that $\gamma$, an indicator for dissipation efficiency,
is negatively correlated with $\theta$ and $1/\gamma$ sets the lower boundary
for $\theta$. Our findings have practical consequences. For example, $\theta$
can be used as a measure of the "stickiness" of forums, because it quantifies
the stable ability of forums to convert $UV$ into $PV$, i.e., to keep users
"locked in" to the forum. Meanwhile, the correlation between $\gamma$ and
$\theta$ provides a convenient method to evaluate the "stickiness" of forums.
Finally,
we discuss an optimized "body mass" of forums at around $10^5$ that minimizes
$\gamma$ and maximizes $\theta$.
|
1308.5546 | Sparse and Non-Negative BSS for Noisy Data | stat.ML cs.LG | Non-negative blind source separation (BSS) has raised interest in various
fields of research, as testified by the wide literature on the topic of
non-negative matrix factorization (NMF). In this context, it is fundamental
that the sources to be estimated present some diversity in order to be
efficiently retrieved. Sparsity is known to enhance such contrast between the
sources while producing very robust approaches, especially to noise. In this
paper we introduce a new algorithm in order to tackle the blind separation of
non-negative sparse sources from noisy measurements. We first show that
sparsity and non-negativity constraints have to be carefully applied on the
sought-after solution. In fact, improperly constrained solutions are unlikely
to be stable and are therefore sub-optimal. The proposed algorithm, named nGMCA
(non-negative Generalized Morphological Component Analysis), makes use of
proximal calculus techniques to provide properly constrained solutions. The
performance of nGMCA compared to other state-of-the-art algorithms is
demonstrated by numerical experiments encompassing a wide variety of settings,
with negligible parameter tuning. In particular, nGMCA is shown to provide
robustness to noise and performs well on synthetic mixtures of real NMR
spectra.
|
1308.5571 | Cooperative Network Coded ARQ Strategies for Two Way Relay Channel | cs.IT math.IT | In this paper, novel cooperative automatic repeat request (ARQ) methods with
network coding are proposed for the two-way relay network. Upon a failed
transmission of a packet, the network enters cooperation phase, where the
retransmission of the packets is aided by the relay node. The proposed approach
integrates network coding into cooperative ARQ, aiming to improve the network
throughput by reducing the number of retransmissions. For successive
retransmission, three different methods for choosing the retransmitting node
are considered. The throughputs of the methods are analyzed and compared. The
analysis is based on a binary Markov channel model which takes the temporal
correlation of the channel coefficients into account. Analytical results show
that the proposed use of network coding results in throughput performance
superior to traditional ARQ and cooperative ARQ without network coding. It is
also observed that correlation can have a significant effect on the performance
of the proposed cooperative network-coded ARQ approach. In particular, the
proposed approach is advantageous for slow to moderately fast fading channels.
|
1308.5576 | A Comparison of Algorithms for Learning Hidden Variables in Normal
Graphs | stat.ML cs.IT cs.SY math.IT | A Bayesian factor graph reduced to normal form consists of the
interconnection of diverter units (or equal-constraint units) and
Single-Input/Single-Output (SISO) blocks. In this framework localized
adaptation rules are explicitly derived from a constrained maximum likelihood
(ML) formulation and from a minimum KL-divergence criterion using KKT
conditions. The learning algorithms are compared with two other updating
equations based on a Viterbi-like and on a variational approximation
respectively. The performance of the various algorithms is verified on synthetic
data sets for various architectures. The objective of this paper is to provide
the programmer with explicit algorithms for rapid deployment of Bayesian graphs
in applications.
|
1308.5585 | Rewriting XPath Queries using View Intersections: Tractability versus
Completeness | cs.DB | The standard approach for optimization of XPath queries by rewriting using
views techniques consists in navigating inside a view's output, thus allowing
the usage of only one view in the rewritten query. Algorithms for richer
classes of XPath rewritings, using intersection or joins on node identifiers,
have been proposed, but they either lack completeness guarantees, or require
additional information about the data. We identify the tightest restrictions
under which an XPath query can be rewritten in polynomial time using an intersection
of views and propose an algorithm that works for any documents or type of
identifiers. As a side-effect, we analyze the complexity of the related problem
of deciding if an XPath with intersection can be equivalently rewritten as one
without intersection or union. We extend our formal study of the view-based
rewriting problem for XPath by describing also (i) algorithms for more complex
rewrite plans, with no limitations on the number of intersection and navigation
steps inside view outputs they employ, and (ii) adaptations of our techniques
to deal with XML documents without persistent node Ids, in the presence of XML
keys. Complementing our computational complexity study, we describe a
proof-of-concept implementation of our techniques and possible choices that may
speed up execution in practice, regarding how rewrite plans are built, tested
and executed. We also give a thorough experimental evaluation of these
techniques, focusing on scalability and the running time improvements achieved
by the execution of view-based plans.
|
1308.5597 | Sparse Channel Estimation by Factor Graphs | cs.IT cs.SY math.IT | The problem of estimating a sparse channel, i.e. a channel with a few
non-zero taps, appears in various areas of communications. Recently, we have
developed an algorithm based on iterative alternating minimization which
iteratively detects the location and the value of the taps. This algorithm
involves an approximate Maximum A Posteriori (MAP) probability scheme for
detection of the location of taps, while a least square method is used for
estimating the values at each iteration. In this work, based on the method of
factor graphs and message passing algorithms, we will compute an exact solution
for the MAP estimation problem. Indeed, we first find a factor graph model of
this problem, and then perform the well-known min-sum algorithm on the edges of
this graph. Consequently, we will find an exact estimator for the MAP problem
whose complexity grows linearly with respect to the channel memory. By
substituting this estimator in the mentioned alternating minimization method,
we will propose an estimator that will nearly achieve the Cramer-Rao bound of
the genie-aided estimation of sparse channels (estimation based on knowing the
location of non-zero taps of the channel), while it can perform faster than
most of the proposed algorithms in literature.
|
1308.5614 | Quantum Noise Filtering via Cross-Correlations | quant-ph cs.IT math.IT | Motivated by successful classical models for noise reduction, we suggest a
quantum technique for filtering noise out of quantum states. The purpose of
this paper is twofold: presenting a simple construction of quantum
cross-correlations between two wave-functions, and presenting a scheme for a
quantum noise filtering. We follow a well-known scheme in classical
communication theory that attenuates random noise, and show that one can build
a quantum analog by using non-trace-preserving operators. By this we introduce
a classically motivated signal processing scheme to quantum information theory,
which can help reduce quantum noise, particularly phase-flip noise.
|
1308.5661 | Detection of copy-move forgery in digital images based on DCT | cs.CV cs.CR | With rapid advances in digital information processing systems, and more
specifically in digital image processing software, there is a widespread
development of advanced tools and techniques for digital image forgery. One of
the techniques most commonly used is the Copy-move forgery which proceeds by
copying a part of an image and pasting it into the same image, in order to
maliciously hide an object or a region. In this paper, we propose a method to
detect this specific kind of counterfeit. Firstly, the color image is converted
from the RGB color space to the YCbCr color space, and the R, G, B and
Y-components are split into fixed-size overlapping blocks. Features are then
extracted from the R, G and B-component image blocks on the one hand, and from
the DCT representation of the R, G, B and Y-component image blocks on the
other. The feature
vectors obtained are then lexicographically sorted to make similar image blocks
neighbors and duplicated image blocks are identified using Euclidean distance
as similarity criterion. Experimental results showed that the proposed method
can detect the duplicated regions even when there is more than one copy-move
forged area in the image, and even in cases of slight rotation, JPEG
compression, shifting, scaling, blurring and noise addition.
|
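The block-matching pipeline this abstract describes (overlapping blocks, DCT features, lexicographic sorting, Euclidean-distance comparison of sorted neighbors) can be sketched roughly as below. The block size, number of coefficients, tolerance, and minimum-shift filter are illustrative assumptions, and the sketch works on a single grayscale channel rather than the paper's R, G, B and Y components:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0, :] = np.sqrt(1.0 / n)
    return C

def block_features(img, b=8, n_coef=9):
    """Slide a b x b window over a grayscale image and keep the first few
    DCT coefficients as a feature vector (a simplification of zigzag order)."""
    C = dct_matrix(b)
    feats, positions = [], []
    h, w = img.shape
    for i in range(h - b + 1):
        for j in range(w - b + 1):
            block = img[i:i + b, j:j + b].astype(float)
            d = C @ block @ C.T          # 2D DCT-II of the block
            feats.append(d.flatten()[:n_coef])
            positions.append((i, j))
    return np.array(feats), positions

def find_duplicates(img, b=8, tol=1e-6, min_shift=8):
    """Lexicographically sort feature rows, then compare adjacent rows by
    Euclidean distance; report pairs of matching blocks that are far apart."""
    feats, pos = block_features(img, b)
    order = np.lexsort(feats.T[::-1])    # sort feature rows lexicographically
    pairs = []
    for a, c in zip(order[:-1], order[1:]):
        if np.linalg.norm(feats[a] - feats[c]) < tol:
            (i1, j1), (i2, j2) = pos[a], pos[c]
            if abs(i1 - i2) + abs(j1 - j2) >= min_shift:
                pairs.append((pos[a], pos[c]))
    return pairs

# Toy example: paste an 8x8 patch of a synthetic image elsewhere.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(24, 24))
img[12:20, 12:20] = img[0:8, 0:8]        # simulated copy-move forgery
matches = find_duplicates(img)
```

The `min_shift` filter mirrors the usual trick of discarding matches between physically adjacent (hence naturally similar) blocks.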
1308.5673 | Nonlocal linear compression of two-photon time interval distribution | quant-ph cs.IT math.IT | We propose a linear compression technique for the time interval distribution
of photon pairs. Using a partially frequency-entangled two-photon (TP) state
with appropriate mean time width, the compressed TP time interval width can be
kept in the minimum limit set by the phase modulation, and is independent of
its initial width. As a result of this effect, an ultra-narrow TP time-interval
distribution can be compressed with relatively slow phase modulators,
decreasing the damage from the phase instability arising in the phase
modulation process.
|
1308.5678 | Modeling the Dynamics of Infectious Diseases in Different Scale-Free
Networks with the Same Degree Distribution | q-bio.PE cs.SI physics.soc-ph | The transmission dynamics of some infectious diseases is related to the
contact structure between individuals in a network. We used five algorithms to
generate contact networks with different topological structure but with the
same scale-free degree distribution. We simulated the spread of acute and
chronic infectious diseases on these networks, using SI (Susceptible -
Infected) and SIS (Susceptible - Infected - Susceptible) epidemic models. In
the simulations, our objective was to observe the effects of the topological
structure of the networks on the dynamics and prevalence of the simulated
diseases. We found that the dynamics of spread of an infectious disease on
different networks with the same degree distribution may be considerably
different.
|
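The kind of simulation this abstract describes can be sketched generically as follows. This is a minimal discrete-time SIS simulation on one preferential-attachment graph; the parameters, the graph generator, and the update rule are illustrative assumptions, not the authors' five generation algorithms or their exact epidemic models:

```python
import numpy as np

def barabasi_albert(n, m, seed=0):
    """Grow a scale-free graph by preferential attachment (adjacency sets)."""
    rng = np.random.default_rng(seed)
    adj = {i: set() for i in range(n)}
    targets = list(range(m))          # start from m seed nodes
    repeated = []                     # node list weighted by degree
    for v in range(m, n):
        for t in set(targets):
            adj[v].add(t)
            adj[t].add(v)
        repeated.extend(targets)
        repeated.extend([v] * m)
        targets = [repeated[rng.integers(len(repeated))] for _ in range(m)]
    return adj

def simulate_sis(adj, beta=0.3, gamma=0.1, steps=200, seed=0):
    """Discrete-time SIS: each infected neighbor transmits w.p. beta,
    each infected node recovers w.p. gamma. Returns prevalence per step."""
    rng = np.random.default_rng(seed)
    n = len(adj)
    infected = np.zeros(n, dtype=bool)
    infected[rng.choice(n, size=max(1, n // 100), replace=False)] = True
    prevalence = []
    for _ in range(steps):
        new = infected.copy()
        for v in range(n):
            if infected[v]:
                if rng.random() < gamma:
                    new[v] = False
            else:
                k = sum(infected[u] for u in adj[v])
                if k and rng.random() < 1 - (1 - beta) ** k:
                    new[v] = True
        infected = new
        prevalence.append(infected.mean())
    return prevalence

adj = barabasi_albert(n=500, m=3)
prev = simulate_sis(adj)
```

Swapping in other generators with the same degree sequence, as the paper does, would only change how `adj` is built; the epidemic loop is unchanged.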
1308.5703 | A Principled Approach to Bridging the Gap between Graph Data and their
Schemas | cs.DB | Although RDF graphs have schema information associated with them, in practice
it is very common to find cases in which data do not fully conform to their
schema. A prominent example of this is DBpedia, which is RDF data extracted
from Wikipedia, a publicly editable source of information. In such situations,
it becomes interesting to study the structural properties of the actual data,
because the schema gives an incomplete description of the organization of a
dataset. In this paper we have approached the study of the structuredness of an
RDF graph in a principled way: we propose a framework for specifying
structuredness functions, which gauge the degree to which an RDF graph conforms
to a schema. In particular, we first define a formal language for specifying
structuredness functions with expressions we call rules. This language allows a
user or a database administrator to state a rule to which an RDF graph may
fully or partially conform. Then we consider the issue of discovering a
refinement of a sort (type) by partitioning the dataset into subsets whose
structuredness is over a specified threshold. In particular, we prove that the
natural decision problem associated with this refinement problem is NP-complete,
and we provide a natural translation of this problem into Integer Linear
Programming (ILP). Finally, we test this ILP solution with two real world
datasets, DBpedia Persons and WordNet Nouns, and four different, intuitive
rules, which gauge the structuredness in different ways. The rules give
meaningful refinements of the datasets, showing that our language can be a
powerful tool for understanding the structure of RDF data.
|
1308.5706 | On the computation of directional scale-discretized wavelet transforms
on the sphere | cs.IT astro-ph.IM math.IT | We review scale-discretized wavelets on the sphere, which are directional and
allow one to probe oriented structure in data defined on the sphere.
Furthermore, scale-discretized wavelets allow in practice the exact synthesis
of a signal from its wavelet coefficients. We present exact and efficient
algorithms to compute the scale-discretized wavelet transform of band-limited
signals on the sphere. These algorithms are implemented in the publicly
available S2DW code. We release a new version of S2DW that is parallelized and
contains additional code optimizations. Note that scale-discretized wavelets
can be viewed as a directional generalization of needlets. Finally, we outline
future improvements to the algorithms presented, which can be achieved by
exploiting a new sampling theorem on the sphere developed recently by some of
the authors.
|
1308.5724 | Proceedings Second International Workshop on Hybrid Systems and Biology | cs.CE cs.LO cs.SY | This volume contains the proceedings of the Second International Workshop
Hybrid Systems and Biology (HSB 2013) held in Taormina (Italy), on September
2nd, 2013. The workshop is affiliated with the 12th European Conference on
Artificial Life (ECAL 2013).
Systems biology aims at providing a system-level understanding of biological
systems by unveiling their structure, dynamics and control methods. Due to the
intrinsic multi-scale nature of these systems in space, in organization levels
and in time, it is extremely difficult to model them in a uniform way, e.g., by
means of differential equations or discrete stochastic processes. Furthermore,
such models are often not easily amenable to formal analysis, and their
simulations at the organ or even at the cell levels are frequently impractical.
Indeed, an important open problem is finding appropriate computational models
that scale well for both simulation and formal analysis of biological
processes. Hybrid modeling techniques, combining discrete and continuous
processes, are gaining more and more attention in such a context, and they have
been successfully applied to capture the behavior of many complex biological
systems, ranging from genetic networks and biochemical reactions to signaling
pathways, cardiac tissue electrophysiology, and tumor genesis. This workshop
aims at bringing together researchers in computer science, mathematics, and
life sciences, interested in the opportunities and the challenges of hybrid
modeling applied to systems biology.
The workshop programme included the keynote presentation of Alessandro
Astolfi (Imperial College of London, UK) on Immune response enhancement via
hybrid control. Furthermore, 8 papers were selected out of 13 submissions by
the Program Committee of HSB 2013. The papers in this volume address the hybrid
modeling of a number of important biological processes (the iron homeostasis
network, the mammalian cell cycle, vascular endothelial growth factor (VEGF),
and a genetic regulatory network in the mammalian sclera) and the formalisms
and techniques for specifying and validating properties of biological systems
(such as robustness and oscillations).
|
1308.5728 | Notes on Coherent Feedback Control for Linear Quantum Systems | quant-ph cs.SY math.OC | This paper considers some formulations and possible approaches to the
coherent LQG and $H^\infty$ quantum control problems. Some new results for
these problems are presented in the case of annihilation-operator-only quantum
systems, showing that in this case the optimal controllers are trivial.
|
1308.5737 | Further Results on Permutation Polynomials over Finite Fields | cs.IT math.IT | Permutation polynomials are an interesting subject of mathematics and have
applications in other areas of mathematics and engineering. In this paper, we
develop general theorems on permutation polynomials over finite fields. As a
demonstration of the theorems, we present a number of classes of explicit
permutation polynomials on $\mathbb{F}_q$.
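The paper's general theorems are not reproduced here, but the defining property is easy to state in code. A minimal sketch (helper name is ours, prime fields only) that checks the permutation property by brute force:

```python
def is_permutation_polynomial(f, p):
    """Check whether x -> f(x) permutes the prime field GF(p).

    Brute force: f permutes GF(p) iff its value set has exactly p elements.
    (Illustration only; the paper works over general finite fields.)
    """
    return len({f(x) % p for x in range(p)}) == p

# Classical special case: the monomial x^k permutes GF(p) iff gcd(k, p-1) = 1.
print(is_permutation_polynomial(lambda x: x**3, 5))  # gcd(3, 4) = 1 -> True
print(is_permutation_polynomial(lambda x: x**3, 7))  # gcd(3, 6) = 3 -> False
```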
|
1308.5786 | Real-time dynamic spectrum management for multi-user multi-carrier
communication systems | cs.IT math.IT | Dynamic spectrum management is recognized as a key technique to tackle
interference in multi-user multi-carrier communication systems and networks.
However, existing dynamic spectrum management algorithms may not be suitable
when the available computation time and compute power are limited, i.e., when a
very fast responsiveness is required. In this paper, we present a new paradigm,
theory and algorithm for real-time dynamic spectrum management (RT-DSM) under
tight real-time constraints. Specifically, a RT-DSM algorithm can be stopped at
any point in time while guaranteeing a feasible and improved solution. This is
enabled by the introduction of a novel difference-of-variables (DoV)
transformation and problem reformulation, for which a primal coordinate ascent
approach is proposed with exact line search via a logarithmically scaled grid
search. The concrete proposed algorithm is referred to as iterative power
difference balancing (IPDB). Simulations for different realistic wireline and
wireless interference limited systems demonstrate its good performance, low
complexity and wide applicability under different configurations.
|
1308.5793 | Channel Upgradation for Non-Binary Input Alphabets and MACs | cs.IT math.IT | Consider a single-user or multiple-access channel with a large output
alphabet. A method to approximate the channel by an upgraded version having a
smaller output alphabet is presented and analyzed. The original channel is not
necessarily symmetric and does not necessarily have a binary input alphabet.
Also, the input distribution is not necessarily uniform. The approximation
method is instrumental when constructing capacity achieving polar codes for an
asymmetric channel with a non-binary input alphabet. Other settings in which
the method is instrumental are the wiretap setting as well as the lossy source
coding setting.
|
1308.5807 | Multi-Objective Particle Swarm Optimization for Facility Location
Problem in Wireless Mesh Networks | cs.NI cs.NE | Wireless mesh networks have seen real progress due to their low
implementation cost. They represent one of the Next Generation Network
technologies and can serve as home, enterprise, and university networks. In this paper, we
propose and discuss a new multi-objective model for nodes deployment
optimization in Multi-Radio Multi-Channel Wireless Mesh Networks. We exploit
the trade-off between network cost and the overall network performance. This
optimization problem is solved simultaneously by using a meta-heuristic method
that returns a non-dominated set of near optimal solutions. A comparative study
was conducted to evaluate the efficiency of the proposed model.
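The non-dominated set the meta-heuristic returns is defined by Pareto dominance. A minimal sketch of that filtering step (the objective vectors below are hypothetical, not taken from the paper's model):

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Return the non-dominated subset of a list of objective vectors."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Hypothetical (network cost, inverse performance) pairs:
front = non_dominated([(1, 9), (3, 4), (2, 7), (4, 5), (5, 1)])
# (4, 5) is dominated by (3, 4) and is dropped; the rest form the front.
```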
|
1308.5809 | Spectrum optimization in multi-user multi-carrier systems with iterative
convex and nonconvex approximation methods | cs.IT math.IT | Several practical multi-user multi-carrier communication systems are
characterized by a multi-carrier interference channel system model where the
interference is treated as noise. For these systems, spectrum optimization is a
promising means to mitigate interference. This however corresponds to a
challenging nonconvex optimization problem. Existing iterative convex
approximation (ICA) methods consist of solving a series of improving convex
approximations and are typically implemented in a per-user iterative approach.
However, they do not take this typical iterative implementation into account in
their design. This paper proposes a novel class of iterative approximation
methods that focuses explicitly on the per-user iterative implementation, which
makes it possible to relax the problem significantly, dropping joint convexity and even
convexity requirements for the approximations. A systematic design framework is
proposed to construct instances of this novel class, where several new
iterative approximation methods are developed with improved per-user convex and
nonconvex approximations that are both tighter and simpler to solve (in
closed-form). As a result, these novel methods display a much faster
convergence speed and require a significantly lower computational cost.
Furthermore, a majority of the proposed methods can tackle the issue of getting
stuck in bad locally optimal solutions, and hence improve solution quality
compared to existing ICA methods.
|
1308.5820 | Design of a non-linear power system stabiliser using the concept of the
feedback linearisation based on the back-stepping technique | cs.SY | This study proposes a feedback linearisation approach based on the
back-stepping method, with a simple implementation and a unique design process,
to design a non-linear controller with the goal of improving both steady-state and transient
stability. The proposed method is designed based on a standard third-order
model of synchronous generator. A comparison based on simulation is then
performed between the proposed method and two conventional control schemes
(i.e. conventional power system stabiliser and direct feedback linearisation).
The simulation results demonstrate that fast response, robustness, damping,
steady-state and transient stability as well as voltage regulation are all
achieved satisfactorily.
|
1308.5835 | Backhaul-Aware Interference Management in the Uplink of Wireless Small
Cell Networks | cs.NI cs.GT cs.LG | The design of distributed mechanisms for interference management is one of
the key challenges in emerging wireless small cell networks whose backhaul is
capacity limited and heterogeneous (wired, wireless and a mix thereof). In this
paper, a novel, backhaul-aware approach to interference management in wireless
small cell networks is proposed. The proposed approach enables macrocell user
equipments (MUEs) to optimize their uplink performance, by exploiting the
presence of neighboring small cell base stations. The problem is formulated as
a noncooperative game among the MUEs that seek to optimize their delay-rate
tradeoff, given the conditions of both the radio access network and the --
possibly heterogeneous -- backhaul. To solve this game, a novel, distributed
learning algorithm is proposed, by which the MUEs autonomously choose their
optimal uplink transmission strategies, given a limited amount of available
information. The convergence of the proposed algorithm is shown and its
properties are studied. Simulation results show that, under various types of
backhauls, the proposed approach yields significant performance gains, in terms
of both average throughput and delay for the MUEs, when compared to existing
benchmark algorithms.
|
1308.5846 | A Domain Decomposition Approach to Implementing Fault Slip in
Finite-Element Models of Quasi-static and Dynamic Crustal Deformation | physics.geo-ph cs.CE cs.MS | We employ a domain decomposition approach with Lagrange multipliers to
implement fault slip in a finite-element code, PyLith, for use in both
quasi-static and dynamic crustal deformation applications. This integrated
approach to solving both quasi-static and dynamic simulations leverages common
finite-element data structures and implementations of various boundary
conditions, discretization schemes, and bulk and fault rheologies. We have
developed a custom preconditioner for the Lagrange multiplier portion of the
system of equations that provides excellent scalability with problem size
compared to conventional additive Schwarz methods. We demonstrate application
of this approach using benchmarks for both quasi-static viscoelastic
deformation and dynamic spontaneous rupture propagation that verify the
numerical implementation in PyLith.
|
1308.5865 | A Survey and Taxonomy of Graph Sampling | cs.SI math.PR stat.ME | Graph sampling is a technique to pick a subset of vertices and/or edges from
an original graph. It has a wide spectrum of applications, e.g., surveying hidden
populations in sociology [54], visualizing social graphs [29], scaling down the
Internet AS graph [27], graph sparsification [8], etc. In some scenarios, the whole
graph is known and the purpose of sampling is to obtain a smaller graph. In
other scenarios, the graph is unknown and sampling is regarded as a way to
explore the graph. Commonly used techniques are Vertex Sampling, Edge Sampling
and Traversal Based Sampling. We provide a taxonomy of different graph sampling
objectives and graph sampling approaches. The relations between these
approaches are formally argued and a general framework to bridge theoretical
analysis and practical implementation is provided. Although smaller in
size, sampled graphs may be similar to the original graphs in some way. We are
particularly interested in which graph properties are preserved by a given
sampling procedure. If some properties are preserved, we can estimate them on
the sampled graphs, which gives a way to construct efficient estimators. If an
algorithm relies on the preserved properties, we can expect it to give
similar output on the original and sampled graphs. This leads to a systematic way
to accelerate a class of graph algorithms. In this survey, we discuss both
classical textbook-type properties and some advanced properties. The landscape
is tabulated, revealing many gaps in this field. Some
theoretical studies are collected in this survey and simple extensions are
made. Most previous numerical evaluations have been ad hoc, each considering
different types of graphs, different sets of properties, and different
sampling algorithms. A systematic and neutral evaluation is needed to shed
light on further graph sampling studies.
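The basic techniques the survey classifies can be sketched in a few lines. A minimal illustration of Vertex Sampling and Traversal Based Sampling on an adjacency-dict graph (helper names are ours, not the survey's):

```python
import random

def induced_subgraph(adj, nodes):
    """Restrict an adjacency dict to a set of kept vertices."""
    nodes = set(nodes)
    return {u: {v for v in adj[u] if v in nodes} for u in nodes}

def vertex_sample(adj, k, rng):
    """Vertex Sampling: pick k vertices uniformly, keep the induced subgraph."""
    return induced_subgraph(adj, rng.sample(sorted(adj), k))

def bfs_sample(adj, seed, k):
    """Traversal Based Sampling: explore from a seed until k vertices are visited."""
    visited, frontier = [], [seed]
    while frontier and len(visited) < k:
        u = frontier.pop(0)
        if u not in visited:
            visited.append(u)
            frontier.extend(sorted(adj[u]))
    return induced_subgraph(adj, visited)

# Toy undirected graph: the path 0-1-2-3-4.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
sub = bfs_sample(adj, seed=0, k=3)  # explores vertices 0, 1, 2
```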
|
1308.5876 | Hierarchized block wise image approximation by greedy pursuit strategies | cs.CV | An approach for effective implementation of greedy selection methodologies,
to approximate an image partitioned into blocks, is proposed. The method is
specially designed for approximating partitions on a transformed image. It
evolves by selecting, at each iteration step, i) the elements for approximating
each of the blocks partitioning the image and ii) the hierarchized sequence in
which the blocks are approximated to reach the required global condition on
sparsity.
|
1308.5884 | Smooth Max-Information as One-Shot Generalization for Mutual Information | quant-ph cs.IT math.IT | We study formal properties of smooth max-information, a generalization of von
Neumann mutual information derived from the max-relative entropy. Recent work
suggests that it is a useful quantity in one-shot channel coding, quantum rate
distortion theory and the physics of quantum many-body systems.
Max-information can be defined in multiple ways. We demonstrate that
different smoothed definitions are essentially equivalent (up to logarithmic
terms in the smoothing parameters). These equivalence relations allow us to
derive new chain rules for the max-information in terms of min- and
max-entropies, thus extending the smooth entropy formalism to mutual
information.
|
1308.5885 | On the weight distributions of several classes of cyclic codes from APN
monomials | cs.IT math.IT | Let $m\geq 3$ be an odd integer and $p$ be an odd prime.
In this paper, many classes of three-weight cyclic codes over
$\mathbb{F}_{p}$ are presented via an examination of the condition for the
cyclic codes $\mathcal{C}_{(1,d)}$ and $\mathcal{C}_{(1,e)}$, which have
parity-check polynomials $m_1(x)m_d(x)$ and $m_1(x)m_e(x)$ respectively, to
have the same weight distribution, where $m_i(x)$ is the minimal polynomial of
$\pi^{-i}$ over $\mathbb{F}_{p}$ for a primitive element $\pi$ of
$\mathbb{F}_{p^m}$. Furthermore, for $p\equiv 3 \pmod{4}$ and positive integers $e$ such
that there exist integers $k$ with $\gcd(m,k)=1$ and $\tau\in\{0,1,\cdots,
m-1\}$ satisfying $(p^k+1)\cdot e\equiv 2 p^{\tau}\pmod{p^m-1}$, the value
distributions of the two exponential sums $T(a,b)=\sum\limits_{x\in
\mathbb{F}_{p^m}}\omega^{\mathrm{Tr}(ax+bx^e)}$ and $S(a,b,c)=\sum\limits_{x\in
\mathbb{F}_{p^m}}\omega^{\mathrm{Tr}(ax+bx^e+cx^s)}$, where $s=(p^m-1)/2$, are
settled. As an application, the value distribution of $S(a,b,c)$ is utilized to
investigate the weight distribution of the cyclic codes $\mathcal{C}_{(1,e,s)}$
with parity-check polynomial $m_1(x)m_e(x)m_s(x)$. In the case of $p=3$ and
even $e$ satisfying the above condition, the duals of the cyclic codes
$\mathcal{C}_{(1,e,s)}$ have the optimal minimum distance.
|
1308.5906 | Biological effects and equivalent doses in radiotherapy: a software
solution | cs.CE physics.med-ph | The limits of TDF (time, dose, and fractionation) and linear quadratic models
have been known for a long time. Medical physicists and physicians are required
to provide fast and reliable interpretations regarding the delivered doses or
any future prescriptions relating to treatment changes. We therefore propose a
calculation interface under the GNU license to be used for equivalent doses,
biological doses, and normal tissue complication probability (Lyman model). The
methodology used draws from several sources: the linear-quadratic-linear model
of Astrahan, the repopulation effects of Dale, and the prediction of
multi-fractionated treatments of Thames. The results are obtained from an
algorithm that minimizes an ad-hoc cost function, and then compared to the
equivalent dose computed using standard calculators in seven French
radiotherapy centers.
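For orientation, the basic linear-quadratic quantities behind such a calculator reduce to one-line formulas. The sketch below is the textbook LQ model only, without the Astrahan linear-quadratic-linear tail or Dale's repopulation term that the interface implements:

```python
def bed(n, d, alpha_beta):
    """Biologically effective dose for n fractions of d Gy (basic LQ model,
    no repopulation term): BED = n * d * (1 + d / (alpha/beta))."""
    return n * d * (1 + d / alpha_beta)

def eqd2(n, d, alpha_beta):
    """Equivalent total dose delivered in 2 Gy fractions."""
    return bed(n, d, alpha_beta) / (1 + 2 / alpha_beta)

# 20 fractions of 3 Gy with alpha/beta = 10 Gy (a typical tumor value):
print(bed(20, 3, 10))   # 78.0 Gy
print(eqd2(20, 3, 10))  # 65.0 Gy
```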
|
1308.5923 | Quantum network exploration with a faulty sense of direction | quant-ph cs.MA | We develop a model which can be used to analyse the scenario of exploring
a quantum network with a distracted sense of direction. Using this model we
analyse the behaviour of quantum mobile agents operating with non-adaptive and
adaptive strategies which can be employed in this scenario. We introduce the
notion of node visiting suitable for analysing quantum superpositions of states
by distinguishing between visiting and attaining a position. We show that
without a proper model of adaptiveness, it is not possible for the party
representing the distraction in the sense of direction to obtain results
analogous to the classical case. Moreover, with additional control resources
the total number of attained positions is maintained if the number of visited
positions is strictly limited.
|
1308.5933 | Framework Model for Database Replication within the Availability Zones | cs.DB | This paper presents a proposed model for database replication in private
cloud availability regions, an enhancement of the SQL Server AlwaysOn Layers
of Protection Model presented by Microsoft in 2012. The enhancement
concentrates on database replication for private cloud availability regions
through the use of primary and secondary servers. The processes of the
proposed model when a client sends a write/read request to the server, at the
synchronous and semi-synchronous replication levels, are described in detail,
as are the processes when the client sends a write/read request to the primary
server. All types of automatic failover situations are presented in this
thesis. Using the proposed model increases performance, because each secondary
server is open for read/write, allowing clients to connect to a nearby
secondary and reducing the load on each server. Keywords: Availability
Regions, Cloud Computing, Database Replication, SQL Server AlwaysOn,
Synchronization.
|
1308.5938 | Theoretic Shaping Bounds for Single Letter Constraints and Mismatched
Decoding | cs.IT math.IT | Shaping gain is attained in schemes where a shaped subcode is chosen from a
larger codebook by a codeword selection process. This includes the popular
method of Trellis Shaping (TS), originally proposed by Forney for average power
reduction. The decoding process of such schemes is mismatched, since it is
aware of only the large codebook. This study models such schemes by a random
code construction and derives achievable bounds on the transmission rate under
matched and mismatched decoding. For matched decoding the bound is obtained
using a modified asymptotic equipartition property (AEP) theorem derived to
suit this particular code construction. For mismatched decoding, relying on the
large codebook performance is generally wrong, since the performance of the
non-typical codewords within the large codebook may differ substantially from
the typical ones. Hence, we present two novel lower bounds on the capacity
under mismatched decoding. The first is based upon Gallager's random coding
exponent, whereas the second is based on a modified version of the
joint-typicality decoder.
|
1308.5964 | Automated, Credible Autocoding of An Unmanned Aggressive Maneuvering Car
Controller | cs.SY | This article describes the application of a credible autocoding framework for
control systems towards a nonlinear car controller example. The framework
generates code, along with guarantees of high level functional properties about
the code that can be independently verified. These high-level functional
properties not only serve as a certificate of good system behavior but also
can be used to guarantee the absence of runtime errors. In one of our previous
works, we have constructed a prototype autocoder with proofs that demonstrates
this framework in a fully automatic fashion for linear and quasi-nonlinear
controllers. With the nonlinear car example, we propose to further extend the
prototype's dataflow annotation language environment with several new
annotation symbols to enable the expression of general predicates and dynamical
systems. We demonstrate manually how the new extensions to the prototype
autocoder work on the car controller using the output language Matlab. Finally,
we discuss the requirements and scalability issues of the automatic analysis
and verification of the documented output code.
|
1308.6003 | Improving Sparse Associative Memories by Escaping from Bogus Fixed
Points | cs.NE cs.IT math.IT | The Gripon-Berrou neural network (GBNN) is a recently invented recurrent
neural network embracing an LDPC-like sparse encoding setup which makes it
extremely resilient to noise and errors. A natural use of GBNN is as an
associative memory. There are two activation rules for the neuron dynamics,
namely sum-of-sum and sum-of-max. The latter outperforms the former in terms of
retrieval rate by a huge margin. In prior discussions and experiments, it is
believed that although sum-of-sum may lead the network to oscillate, sum-of-max
always converges to an ensemble of neuron cliques corresponding to previously
stored patterns. However, this is not entirely correct. In fact, sum-of-max
often converges to bogus fixed points where the ensemble only comprises a small
subset of the converged state. By taking advantage of this overlooked fact, we
can greatly improve the retrieval rate. We discuss this particular issue and
propose a number of heuristics to push sum-of-max beyond these bogus fixed
points. To tackle the problem directly and completely, a novel post-processing
algorithm is also developed and customized to the structure of GBNN.
Experimental results show that the new algorithm achieves a huge performance
boost in terms of both retrieval rate and run-time, compared to the standard
sum-of-max and all the other heuristics.
|
1308.6007 | Tree Codes and a Conjecture on Exponential Sums | cs.CC cs.IT math.IT math.NT | We propose a new conjecture on some exponential sums. These particular sums
have not apparently been considered in the literature. Subject to the
conjecture we obtain the first effective construction of asymptotically good
tree codes. The available numerical evidence is consistent with the conjecture
and is sufficient to certify codes for significant-length communications.
|
1308.6038 | On sparse interpolation and the design of deterministic interpolation
points | math.NA cs.IT math.IT | In this paper, we build up a framework for sparse interpolation. We first
investigate the theoretical limit of the number of unisolvent points for sparse
interpolation under a general setting and try to answer some basic questions of
this topic. We also explore the relation between classical interpolation and
sparse interpolation. Second, we consider the design of the interpolation points
for the $s$-sparse functions in high dimensional Chebyshev bases, for which the
possible applications include uncertainty quantification, numerically solving
stochastic or parametric PDEs and compressed sensing. Unlike the traditional
random sampling method, we present in this paper a deterministic method to
produce the interpolation points, and show its performance with $\ell_1$
minimization by analyzing the mutual incoherence of the interpolation matrix.
Numerical experiments show that the deterministic points perform similarly
to random points.
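Mutual incoherence, used above to analyze the interpolation matrix, measures the worst-case correlation between distinct columns. A minimal pure-Python sketch of that quantity:

```python
from math import sqrt

def mutual_coherence(A):
    """Mutual coherence of a matrix given as a list of rows: the largest
    normalized inner product between two distinct columns."""
    m, n = len(A), len(A[0])
    cols = [[A[i][j] for i in range(m)] for j in range(n)]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    def norm(u):
        return sqrt(dot(u, u))

    return max(abs(dot(cols[i], cols[j])) / (norm(cols[i]) * norm(cols[j]))
               for i in range(n) for j in range(i + 1, n))

# Orthogonal columns give coherence 0; repeated columns give coherence 1.
print(mutual_coherence([[1, 0], [0, 1]]))  # 0.0
print(mutual_coherence([[1, 1], [0, 0]]))  # 1.0
```

Low coherence is what makes $\ell_1$ minimization recover sparse coefficients reliably, which is why the deterministic point designs aim to keep it small.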
|
1308.6056 | Brain MRI Segmentation with Fast and Globally Convex Multiphase Active
Contours | cs.CV | Multiphase active contour based models are useful in identifying multiple
regions with different characteristics such as the mean values of regions. This
is relevant in brain magnetic resonance images (MRIs), allowing the
differentiation of white matter against gray matter. We consider a well defined
globally convex formulation of Vese and Chan multiphase active contour model
for segmenting brain MRI images. A well-established theory and an efficient
dual minimization scheme are thoroughly described which guarantees optimal
solutions and provides stable segmentations. Moreover, under the dual
minimization implementation our model perfectly describes disjoint regions by
avoiding local minima solutions. Experimental results indicate that the
proposed approach provides better accuracy than other related multiphase active
contour algorithms even under severe noise, intensity inhomogeneities, and
partial volume effects.
|
1308.6062 | Structures and Transformations for Model Reduction of Linear Quantum
Stochastic Systems | quant-ph cs.SY math.OC | The purpose of this paper is to develop a model reduction theory for linear
quantum stochastic systems that are commonly encountered in quantum optics and
related fields, modeling devices such as optical cavities and optical
parametric amplifiers, as well as quantum networks composed of such devices.
Results are derived on subsystem truncation of such systems and it is shown
that this truncation preserves the physical realizability property of linear
quantum stochastic systems. It is also shown that the property of complete
passivity of linear quantum stochastic systems is preserved under subsystem
truncation. A necessary and sufficient condition for the existence of a
balanced realization of a linear quantum stochastic system under symplectic
transformations is derived. Such a condition turns out to be very restrictive
and will not be satisfied by generic linear quantum stochastic systems, thus
necessary and sufficient conditions for relaxed notions of simultaneous
diagonalization of the controllability and observability Gramians of linear
quantum stochastic systems under symplectic transformations are also obtained.
The notion of a quasi-balanced realization is introduced and it is shown that
all asymptotically stable completely passive linear quantum stochastic systems
have a quasi-balanced realization. Moreover, an explicit bound for the
subsystem truncation error on a quasi-balanceable linear quantum stochastic
system is provided. The results are applied in an example of model reduction in
the context of low-pass optical filtering of coherent light using a network of
optical cavities.
|
1308.6074 | Exploration and retrieval of whole-metagenome sequencing samples | q-bio.GN cs.CE cs.IR | Over the recent years, the field of whole metagenome shotgun sequencing has
witnessed significant growth due to the high-throughput sequencing technologies
that allow genomic samples to be sequenced more cheaply, faster, and with better coverage
than before. This technical advancement has initiated the trend of sequencing
multiple samples in different conditions or environments to explore the
similarities and dissimilarities of the microbial communities. Examples include
the human microbiome project and various studies of the human intestinal tract.
With the availability of ever larger databases of such measurements, finding
samples similar to a given query sample is becoming a central operation. In
this paper, we develop a content-based exploration and retrieval method for
whole metagenome sequencing samples. We apply a distributed string mining
framework to efficiently extract all informative sequence $k$-mers from a pool
of metagenomic samples and use them to measure the dissimilarity between two
samples. We evaluate the performance of the proposed approach on two human gut
metagenome data sets as well as human microbiome project metagenomic samples.
We observe significant enrichment for diseased gut samples in results of
queries with another diseased sample and very high accuracy in discriminating
between different body sites even though the method is unsupervised. A software
implementation of the DSM framework is available at
https://github.com/HIITMetagenomics/dsm-framework
|
1308.6075 | Measuring the dimension of partially embedded networks | physics.soc-ph cs.SI physics.data-an | Scaling phenomena have been intensively studied during the past decade in the
context of complex networks. As part of these works, novel methods have
recently appeared to measure the dimension of abstract and spatially embedded
networks. In this paper we propose a new dimension measurement method for
networks, which does not require global knowledge of the embedding of the
nodes; instead, it exploits link-wise information (link lengths, link delays or
other physical quantities). Our method can be regarded as a generalization of
the spectral dimension, that grasps the network's large-scale structure through
local observations made by a random walker while traversing the links. We apply
the presented method to synthetic and real-world networks, including road maps,
the Internet infrastructure and the Gowalla geosocial network. We analyze the
theoretically and empirically designated case when the length distribution of
the links has the form P(r) ~ 1/r. We show that while previous dimension
concepts are not applicable in this case, the new dimension measure still
exhibits scaling with two distinct scaling regimes. Our observations suggest
that the link length distribution is not sufficient in itself to entirely
control the dimensionality of complex networks, and we show that the proposed
measure provides information that complements other known measures.
|
1308.6086 | Distributed Compressed Sensing For Static and Time-Varying Networks | cs.IT math.IT | We consider the problem of in-network compressed sensing from distributed
measurements. Every agent has a set of measurements of a signal $x$, and the
objective is for the agents to recover $x$ from their collective measurements
using only communication with neighbors in the network. Our distributed
approach to this problem is based on the centralized Iterative Hard
Thresholding algorithm (IHT). We first present a distributed IHT algorithm for
static networks that leverages standard tools from distributed computing to
execute in-network computations with minimized bandwidth consumption. Next, we
address distributed signal recovery in networks with time-varying topologies.
The network dynamics necessarily introduce inaccuracies to our in-network
computations. To accommodate these inaccuracies, we show how centralized IHT
can be extended to include inexact computations while still providing the same
recovery guarantees as the original IHT algorithm. We then leverage these new
theoretical results to develop a distributed version of IHT for time-varying
networks. Evaluations show that our distributed algorithms for both static and
time-varying networks outperform previously proposed solutions in time and
bandwidth by several orders of magnitude.
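The centralized building block that the distributed algorithms extend is a single IHT iteration: a gradient step on the least-squares objective followed by hard thresholding. A minimal sketch (centralized only; the per-agent row partitioning and in-network sums are omitted):

```python
def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero out the rest."""
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:s])
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def iht_step(x, A, y, s, mu):
    """One centralized IHT iteration: gradient step on ||y - Ax||^2, then
    threshold. A is a list of rows; in the distributed setting each agent
    holds some rows of A and the sums below become in-network aggregations."""
    residual = [sum(a * xi for a, xi in zip(row, x)) - yk
                for row, yk in zip(A, y)]
    grad = [sum(A[k][j] * residual[k] for k in range(len(A)))
            for j in range(len(x))]
    return hard_threshold([xj - mu * gj for xj, gj in zip(x, grad)], s)

print(hard_threshold([3.0, -5.0, 1.0, 0.5], 2))  # [3.0, -5.0, 0.0, 0.0]
```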
|
1308.6111 | Finer filtration for matrix-valued cocycle based on Oseledec's
multiplicative ergodic theorem | math.DS cs.SY | In this paper, we improve the classical multiplicative ergodic theorem.
|
1308.6118 | Using tf-idf as an edge weighting scheme in user-object bipartite
networks | cs.SI cs.IR physics.soc-ph | Bipartite user-object networks are becoming increasingly popular in
representing user interaction data in a web or e-commerce environment. They
have certain characteristics and challenges that differentiate them from other
bipartite networks. This paper analyzes the properties of five real-world
user-object networks. In all cases we found a heavy-tailed object degree
distribution with popular objects connecting together a large part of the users
causing significant edge inflation in the projected users network. We propose a
novel edge weighting strategy based on tf-idf and show that the new scheme
improves both the density and the quality of the community structure in the
projections. The improvement is also noticed when comparing to partially random
networks.
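The proposed weighting can be sketched as plain tf-idf over user-object interactions, where a popular object's edges are deflated. This is an illustration of the general idea; the paper's exact normalization may differ:

```python
from math import log

def tfidf_edge_weights(interactions):
    """Weight each user-object edge by tf-idf: the interaction count (tf)
    damped by the object's popularity across all users (idf).

    `interactions` maps user -> {object: count}.
    """
    n_users = len(interactions)
    df = {}  # document frequency: number of users linked to each object
    for objs in interactions.values():
        for o in objs:
            df[o] = df.get(o, 0) + 1
    return {(u, o): tf * log(n_users / df[o])
            for u, objs in interactions.items() for o, tf in objs.items()}

w = tfidf_edge_weights({"u1": {"a": 2, "b": 1}, "u2": {"a": 1}})
# Object "a" is linked by every user, so its edges get weight 2*log(1) = 0,
# which is exactly the deflation of popular objects described above.
```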
|
1308.6149 | The Extreme Right Filter Bubble | cs.SI cs.CY physics.soc-ph | Due to its status as the most popular video sharing platform, YouTube plays
an important role in the online strategy of extreme right groups, where it is
often used to host associated content such as music and other propaganda. In
this paper, we develop a categorization suitable for the analysis of extreme
right channels found on YouTube. By combining this with an NMF-based topic
modelling method, we categorize channels originating from links propagated by
extreme right Twitter accounts. This method is also used to categorize related
channels, which are determined using results returned by YouTube's related
video service. We identify the existence of a "filter bubble", whereby users
who access an extreme right YouTube video are highly likely to be recommended
further extreme right content.
|
1308.6175 | Connections Between Construction D and Related Constructions of Lattices | cs.IT math.IT | Most practical constructions of lattice codes with high coding gains are
multilevel constructions where each level corresponds to an underlying code
component. Construction D, Construction D$'$, and Forney's code formula are
classical constructions that produce such lattices explicitly from a family of
nested binary linear codes. In this paper, we investigate these three closely
related constructions along with the recently developed Construction A$'$ of
lattices from codes over the polynomial ring $\mathbb{F}_2[u]/u^a$. We show
that Construction by Code Formula produces a lattice packing if and only if the
nested codes being used are closed under Schur product, thus proving the
similarity of Construction D and Construction by Code Formula when applied to
Reed-Muller codes. In addition, we relate Construction by Code Formula to
Construction A$'$ by finding a correspondence between nested binary codes and
codes over $\mathbb{F}_2[u]/u^a$. This proves that any lattice constructible
using Construction by Code Formula is also constructible using Construction
A$'$. Finally, we show that Construction A$'$ produces a lattice if and only if
the corresponding code over $\mathbb{F}_2[u]/u^a$ is closed under shifted Schur
product.
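The closure condition in the abstract (nested codes closed under the Schur, i.e. componentwise, product) can be checked by brute force for small binary linear codes. The sketch below is illustrative only and enumerates all codewords from a generator set.

```python
from itertools import product

def codewords(generators, n):
    """All codewords of the binary linear code spanned by `generators`
    (each a length-n tuple of bits): every GF(2) linear combination."""
    words = set()
    for coeffs in product([0, 1], repeat=len(generators)):
        w = tuple(sum(c * g[i] for c, g in zip(coeffs, generators)) % 2
                  for i in range(n))
        words.add(w)
    return words

def schur(a, b):
    """Componentwise (Schur) product of two binary words."""
    return tuple(x & y for x, y in zip(a, b))

def closed_under_schur(generators, n):
    """True iff the code contains the Schur product of every codeword pair."""
    C = codewords(generators, n)
    return all(schur(a, b) in C for a in C for b in C)

# The length-4 repetition code {0000, 1111} is closed under Schur product.
assert closed_under_schur([(1, 1, 1, 1)], 4)
# The even-weight code of length 3 is not: (1,1,0) * (0,1,1) = (0,1,0),
# which has odd weight and so lies outside the code.
assert not closed_under_schur([(1, 1, 0), (0, 1, 1)], 3)
```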
|
1308.6181 | Bayesian Conditional Gaussian Network Classifiers with Applications to
Mass Spectra Classification | cs.LG stat.ML | Classifiers based on probabilistic graphical models are very effective. In
continuous domains, maximum likelihood is usually used to assess the
predictions of those classifiers. When data is scarce, this can easily lead to
overfitting. In any probabilistic setting, Bayesian averaging (BA) provides
theoretically optimal predictions and is known to be robust to overfitting. In
this work we introduce Bayesian Conditional Gaussian Network Classifiers, which
efficiently perform exact Bayesian averaging over the parameters. We evaluate
the proposed classifiers against the maximum likelihood alternatives proposed
so far over standard UCI datasets, concluding that performing BA improves the
quality of the assessed probabilities (conditional log likelihood) whilst
maintaining the error rate.
Overfitting is more likely to occur in domains where the number of data items
is small and the number of variables is large. These two conditions are met in
the realm of bioinformatics, where the early diagnosis of cancer from mass
spectra is a relevant task. We provide an application of our classification
framework to that problem, comparing it with the standard maximum likelihood
alternative, where the improvement of quality in the assessed probabilities is
confirmed.
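The contrast between maximum-likelihood plug-in predictions and Bayesian averaging can be seen in a toy one-dimensional analogue (not the paper's classifier): a Gaussian with unknown mean, known variance, and a conjugate normal prior. Averaging over the posterior inflates the predictive variance by the posterior variance of the mean, which is what guards against overfitting when data is scarce. All names and defaults below are illustrative assumptions.

```python
import math

def ml_predictive(data, sigma2):
    """Maximum-likelihood plug-in: a Gaussian with the sample mean and the
    known observation variance sigma2."""
    mu_hat = sum(data) / len(data)
    return mu_hat, sigma2

def bayes_predictive(data, sigma2, mu0=0.0, tau2=1.0):
    """Posterior predictive under a conjugate Normal(mu0, tau2) prior on the
    mean: exact Bayesian averaging over the parameter widens the predictive
    distribution by the posterior variance of the mean."""
    n = len(data)
    post_var = 1.0 / (1.0 / tau2 + n / sigma2)
    post_mean = post_var * (mu0 / tau2 + sum(data) / sigma2)
    return post_mean, sigma2 + post_var

data = [0.9, 1.1, 1.0]
ml_mean, ml_var = ml_predictive(data, sigma2=1.0)
ba_mean, ba_var = bayes_predictive(data, sigma2=1.0)
# With only three data points the Bayesian predictive is strictly wider than
# the plug-in, reflecting the remaining parameter uncertainty.
assert ba_var > ml_var
```

As the sample size grows, `post_var` shrinks toward zero and the two predictions coincide, matching the intuition that Bayesian averaging matters most in small-sample regimes such as mass-spectra classification.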
|
1308.6206 | The Partner Units Configuration Problem: Completing the Picture | cs.AI cs.CC | The partner units problem (PUP) is an acknowledged hard benchmark problem for
the Logic Programming community with various industrial application fields like
surveillance, electrical engineering, computer networks or railway safety
systems. However, its computational complexity has so far remained largely
unclear. In this paper we provide all the missing complexity results, making
the PUP better
exploitable for benchmark testing. Furthermore, we present QuickPup, a
heuristic search algorithm for PUP instances which outperforms all
state-of-the-art solving approaches and which is already in use in real world
industrial configuration environments.
|
1308.6207 | Decoding color codes by projection onto surface codes | quant-ph cs.IT math.IT | We propose a new strategy to decode color codes, which is based on the
projection of the error onto three surface codes. This provides a method to
transform every decoding algorithm of surface codes into a decoding algorithm
of color codes. Applying this idea to a family of hexagonal color codes, with
the perfect matching decoding algorithm for the three corresponding surface
codes, we find a phase error threshold of approximately 8.7%. Finally, our
approach enables us to establish a general lower bound on the error threshold
of a family of color codes depending on the threshold of the three
corresponding surface codes. These results are based on a chain complex
interpretation of surface codes and color codes.
|
1308.6220 | Simulated annealing: in mathematical global optimization computation,
hybrid with local or global search, and practical applications in
crystallography and molecular modelling | math.OC cs.CE physics.comp-ph | Simulated annealing (SA) was inspired by annealing in metallurgy, a
technique involving heating and controlled cooling of a material to increase
the size of its crystals and reduce their defects, both attributes of the
material that depend on its thermodynamic free energy. In this paper, we first
study SA in detail with regard to its practical implementation. Hybridizing
pure SA with local (or global) search optimization methods then allows us to
design several effective and efficient global search optimization methods. To
keep the original sense of SA, we clarify our understanding of SA in the
crystallography and molecular modelling fields through studies of prion
amyloid fibrils.
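The core SA loop (random perturbation, Metropolis acceptance, geometric cooling) can be sketched in a few lines. This is a generic textbook formulation with illustrative defaults, not the hybrid method the paper develops; the test function is an arbitrary multimodal example.

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Minimise f by SA: propose a random perturbation, always accept
    improvements, accept worsenings with probability exp(-delta/T), and
    geometrically cool the temperature T (the 'controlled cooling')."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = x + rng.uniform(-step, step)      # random neighbour
        fy = f(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy                     # Metropolis acceptance
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling                          # cooling schedule
    return best, fbest

# A multimodal test function with its global minimum near x = -0.3.
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
x, fx = simulated_annealing(f, x0=4.0)
# Starting far from the optimum, SA escapes the intermediate local minima
# created by the sine term and ends in a low-lying basin near the origin.
```

Hybridization, as discussed in the abstract, would replace or follow the pure perturbation step with a local search from the accepted point.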
|
1308.6242 | NRC-Canada: Building the State-of-the-Art in Sentiment Analysis of
Tweets | cs.CL | In this paper, we describe how we created two state-of-the-art SVM
classifiers, one to detect the sentiment of messages such as tweets and SMS
(message-level task) and one to detect the sentiment of a term within a
message (term-level task). Our submissions stood first in both tasks on
tweets, obtaining an F-score of 69.02 in the message-level task and 88.93 in
the term-level task. We implemented a variety of surface-form, semantic, and
sentiment features. We also generated two large word-sentiment association
lexicons, one from tweets with sentiment-word hashtags, and one from tweets
with emoticons. In the message-level task, the lexicon-based features provided
a gain of 5 F-score points over all others. Both of our systems can be
replicated using freely available resources.
|
1308.6250 | Circumnavigation of an Unknown Target Using UAVs with Range and Range
Rate Measurements | cs.SY cs.RO math.OC | This paper presents two control algorithms enabling a UAV to circumnavigate
an unknown target using range and range rate (i.e., the derivative of range)
measurements. Given a prescribed orbit radius, both control algorithms (i) tend
to drive the UAV toward the tangent of prescribed orbit when the UAV is outside
or on the orbit, and (ii) apply zero control input if the UAV is inside the
desired orbit. The algorithms differ in that the first is smooth and
unsaturated while the second is non-smooth and saturated. By
analyzing properties associated with the bearing angle of the UAV relative to
the target and through proper design of Lyapunov functions, it is shown that
both algorithms produce the desired orbit for an arbitrary initial state. Three
examples are provided as a proof of concept.
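The qualitative rule in the abstract, zero input inside the desired orbit and steering toward the orbit tangent otherwise, can be sketched as a simple range-rate feedback. This is a hedged illustration of the described behaviour, not the paper's actual control law; the function name, gain, and tangent-based target rate are all assumptions.

```python
import math

def turn_command(d, d_dot, rd, speed, k=1.0):
    """Illustrative control sketch using only range d and range rate d_dot,
    the two measurements the paper assumes.

    Inside the desired orbit of radius rd the input is zero; outside or on
    it, the command drives the measured range rate toward the value it would
    have on the tangent to the orbit, d_dot* = -speed * cos(asin(rd / d)).
    """
    if d < rd:
        return 0.0  # inside the orbit: apply zero control input
    desired_rate = -speed * math.cos(math.asin(min(rd / d, 1.0)))
    return k * (d_dot - desired_rate)

# Inside the orbit: coast with zero input.
assert turn_command(d=50.0, d_dot=-5.0, rd=100.0, speed=20.0) == 0.0
# Outside the orbit and not closing fast enough: a non-zero corrective
# command pushes the UAV toward the tangent of the prescribed orbit.
assert turn_command(d=200.0, d_dot=0.0, rd=100.0, speed=20.0) != 0.0
```

Exactly on the orbit (`d == rd`) the tangent is perpendicular to the line of sight, so the target range rate is zero and the command simply opposes any residual range rate.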
|
1308.6273 | New Algorithms for Learning Incoherent and Overcomplete Dictionaries | cs.DS cs.LG stat.ML | In sparse recovery we are given a matrix $A$ (the dictionary) and a vector of
the form $A X$ where $X$ is sparse, and the goal is to recover $X$. This is a
central notion in signal processing, statistics and machine learning. But in
applications such as sparse coding, edge detection, compression and super
resolution, the dictionary $A$ is unknown and has to be learned from random
examples of the form $Y = AX$ where $X$ is drawn from an appropriate
distribution --- this is the dictionary learning problem. In most settings, $A$
is overcomplete: it has more columns than rows. This paper presents a
polynomial-time algorithm for learning overcomplete dictionaries; the only
previously known algorithm with provable guarantees is the recent work of
Spielman, Wang and Wright who gave an algorithm for the full-rank case, which
is rarely the case in applications. Our algorithm applies to incoherent
dictionaries which have been a central object of study since they were
introduced in seminal work of Donoho and Huo. In particular, a dictionary is
$\mu$-incoherent if each pair of columns has inner product at most $\mu /
\sqrt{n}$.
The algorithm makes natural stochastic assumptions about the unknown sparse
vector $X$, which can contain $k \leq c \min(\sqrt{n}/\mu \log n, m^{1/2
-\eta})$ non-zero entries (for any $\eta > 0$). This is close to the best $k$
allowable by the best sparse recovery algorithms even if one knows the
dictionary $A$ exactly. Moreover, both the running time and sample complexity
depend on $\log 1/\epsilon$, where $\epsilon$ is the target accuracy, and so
our algorithms converge very quickly to the true dictionary. Our algorithm can
also tolerate substantial amounts of noise provided it is incoherent with
respect to the dictionary (e.g., Gaussian). In the noisy setting, our running
time and sample complexity depend polynomially on $1/\epsilon$, and this is
necessary.
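The $\mu$-incoherence condition stated in the abstract (every pair of unit columns has inner product at most $\mu/\sqrt{n}$) is easy to check directly. The sketch below is a plain checker for that definition, with a hypothetical column-list representation of the dictionary.

```python
import math

def coherence(dictionary):
    """Largest absolute inner product between distinct unit-normalised
    columns of the dictionary (given as a list of columns)."""
    def normalise(col):
        norm = math.sqrt(sum(x * x for x in col))
        return [x / norm for x in col]
    cols = [normalise(c) for c in dictionary]
    return max(abs(sum(a * b for a, b in zip(cols[i], cols[j])))
               for i in range(len(cols)) for j in range(i + 1, len(cols)))

def is_mu_incoherent(dictionary, mu):
    """Check the paper's condition: every pair of columns has inner product
    at most mu / sqrt(n), where n is the column dimension."""
    n = len(dictionary[0])
    return coherence(dictionary) <= mu / math.sqrt(n)

# An orthonormal basis has coherence 0, hence is mu-incoherent for any mu > 0.
basis = [[1.0, 0.0], [0.0, 1.0]]
assert is_mu_incoherent(basis, mu=0.1)
# Two nearly parallel columns are highly coherent and fail the condition.
skewed = [[1.0, 0.0], [1.0, 0.1]]
assert not is_mu_incoherent(skewed, mu=0.5)
```

Note that an overcomplete dictionary (more columns than rows) can never have coherence zero, which is why the sparsity bound in the abstract degrades as $\mu$ grows.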
|
1308.6276 | Fast community detection using local neighbourhood search | physics.soc-ph cs.SI | Communities play a crucial role in describing and analysing modern networks.
However, the size of those networks has grown tremendously with the increase of
computational power and data storage. While various methods have been developed
to extract community structures, their computational cost or the difficulty to
parallelize existing algorithms make partitioning real networks into
communities a challenging problem. In this paper, we propose to alter an
efficient algorithm, the Louvain method, such that communities are defined as
the connected components of a tree-like assignment graph. Within this
framework, we precisely describe the different steps of our algorithm and
demonstrate its highly parallelizable nature. We then show that despite its
simplicity, our algorithm has a partitioning quality similar to the original
method on benchmark graphs and even outperforms other algorithms. We also show
that, even on a single processor, our method is much faster and allows the
analysis of very large networks.
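The quantity the Louvain method greedily optimises is Newman-Girvan modularity, which can be computed directly for a given partition. The sketch below evaluates modularity only; it is not the paper's parallel assignment-graph variant, and the edge-list representation is an illustrative assumption.

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q = (edges inside communities as a fraction
    of all edges) minus (the expected such fraction under the configuration
    model, i.e. preserving degrees but rewiring at random).

    edges: list of undirected (u, v) pairs; community: dict node -> label.
    """
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    # Fraction of edges falling inside a community.
    inside = sum(1 for u, v in edges if community[u] == community[v]) / m
    # Expected fraction: sum over communities of (total degree / 2m)^2.
    tot = {}
    for node, k in degree.items():
        c = community[node]
        tot[c] = tot.get(c, 0) + k
    expected = sum((t / (2 * m)) ** 2 for t in tot.values())
    return inside - expected

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(edges, {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"})
lumped = modularity(edges, {n: "a" for n in range(6)})
assert good > lumped  # the two-community split scores strictly higher
```

Putting every node in one community always yields Q = 0, so any partition with positive modularity, like the two-triangle split above, is preferred by the optimisation.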
|
1308.6292 | Verification of Semantically-Enhanced Artifact Systems (Extended
Version) | cs.AI | Artifact-Centric systems have emerged in the last years as a suitable
framework to model business-relevant entities, by combining their static and
dynamic aspects. In particular, the Guard-Stage-Milestone (GSM) approach has
been recently proposed to model artifacts and their lifecycle in a declarative
way. In this paper, we enhance GSM with a Semantic Layer, constituted by a
full-fledged OWL 2 QL ontology linked to the artifact information models
through mapping specifications. The ontology provides a conceptual view of the
domain under study, and allows one to understand the evolution of the artifact
system at a higher level of abstraction. In this setting, we present a
technique to specify temporal properties expressed over the Semantic Layer, and
verify them according to the evolution in the underlying GSM model. This
technique has been implemented in a tool that exploits state-of-the-art
ontology-based data access technologies to manipulate the temporal properties
according to the ontology and the mappings, and that relies on the GSMC model
checker for verification.
|
1308.6295 | Robustness of community structure to node removal | physics.soc-ph cs.SI | The identification of modular structures is essential for characterizing real
networks formed by a mesoscopic level of organization where clusters contain
nodes with a high internal degree of connectivity. Many methods have been
developed to unveil community structures, but only a few studies have probed
their suitability in incomplete networks. Here we assess the accuracy of
community detection techniques in incomplete networks generated in sampling
processes. We show that the walktrap and fast greedy algorithms are highly
accurate for detecting the modular structure of incomplete complex networks
even if many of their nodes are removed. Furthermore, we implemented an
approach that improved the time performance of the walktrap and fast greedy
algorithms, while retaining the accuracy rate in identifying the community
membership of nodes. Taken together, our results show that this new approach
can be applied to speed up virtually any community detection method in dense
complex networks, as is the case for similarity networks.
|
1308.6297 | Crowdsourcing a Word-Emotion Association Lexicon | cs.CL | Even though considerable attention has been given to the polarity of words
(positive and negative) and the creation of large polarity lexicons, research
in emotion analysis has had to rely on limited and small emotion lexicons. In
this paper we show how the combined strength and wisdom of the crowds can be
used to generate a large, high-quality, word-emotion and word-polarity
association lexicon quickly and inexpensively. We enumerate the challenges in
emotion annotation in a crowdsourcing scenario and propose solutions to address
them. Most notably, in addition to questions about emotions associated with
terms, we show how the inclusion of a word choice question can discourage
malicious data entry, help identify instances where the annotator may not be
familiar with the target term (allowing us to reject such annotations), and
help obtain annotations at sense level (rather than at word level). We
conducted experiments on how to formulate the emotion-annotation questions, and
show that asking if a term is associated with an emotion leads to markedly
higher inter-annotator agreement than that obtained by asking if a term evokes
an emotion.
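The word-choice quality check described above, rejecting an annotation when the annotator fails a question about the target term's meaning, amounts to a simple gold-question filter. The field names below are hypothetical, not the paper's actual crowdsourcing schema.

```python
def filter_annotations(annotations):
    """Keep only annotations whose word-choice question was answered
    correctly. Each annotation is a dict with hypothetical field names.

    The word-choice question asks the annotator to pick a word close in
    meaning to the target term; a wrong answer suggests the annotator is
    careless, malicious, or unfamiliar with the term, so the whole
    annotation is rejected.
    """
    return [a for a in annotations
            if a["word_choice_answer"] == a["word_choice_gold"]]

batch = [
    {"term": "abandon", "emotion": "sadness",
     "word_choice_answer": "desert", "word_choice_gold": "desert"},
    {"term": "abandon", "emotion": "joy",      # fails the word-choice check
     "word_choice_answer": "banana", "word_choice_gold": "desert"},
]
kept = filter_annotations(batch)
assert len(kept) == 1 and kept[0]["emotion"] == "sadness"
```

Because the word-choice question is tied to a particular sense of the term, passing it also anchors the surviving emotion judgments at sense level rather than word level, as the abstract notes.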
|