| id | title | categories | abstract |
|---|---|---|---|
1306.3378 | Approximate Consensus Multi-Agent Control Under Stochastic Environment
with Application to Load Balancing | cs.SY | The paper is devoted to the approximate consensus problem for networks of
nonlinear agents with switching topology, noisy and delayed measurements. In
contrast to the existing stochastic approximation-based control algorithms
(protocols), a local voting protocol with nonvanishing step size is proposed.
Nonvanishing (e.g., constant) step size protocols make it possible to achieve a
better convergence rate (by choosing proper step sizes) when coping with
time-varying loads and agent states. The price to pay is that mean square
convergence is replaced by approximate convergence. To analyze the dynamics of
the closed-loop system, the so-called method of averaged models is used, which
reduces the analysis complexity of the closed-loop system. In this paper, upper
bounds on the mean square distance between the initial system and its
approximate averaged model are proposed. The proposed upper bounds are used to
obtain conditions for approximate consensus achievement.
The method is applied to the load balancing problem in stochastic dynamic
networks with incomplete information about the current states of agents and
with changing set of communication links. The load balancing problem is
formulated as a consensus problem in a noisy model with switching topology. The
conditions to achieve the optimal level of load balancing (in the sense that if
no new task arrives, all agents will finish at the same time) are obtained.
The performance of the system is evaluated analytically and by simulation. It
is shown that the performance of the adaptive multi-agent strategy with the
redistribution of tasks among "connected" neighbors is significantly better
than the performance without redistribution. The obtained results are important
for control of production networks, multiprocessor, sensor or multicomputer
networks, etc.
|
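The local voting idea above can be sketched as follows: each agent repeatedly steps toward its neighbors' observed states with a constant (nonvanishing) step size. This is a minimal illustrative sketch, not the paper's exact protocol; the ring topology, step size, and noise model are assumptions.

```python
import random

def local_voting_step(x, neighbors, gamma, noise=0.0):
    """One step of a local voting protocol with a constant step size.

    Each agent moves toward its neighbors' (noisy) observed states;
    with a nonvanishing gamma and noise > 0 the network reaches only
    an approximate consensus.
    """
    y = [xi + random.gauss(0.0, noise) for xi in x]  # noisy measurements
    return [
        xi + gamma * sum(y[j] - y[i] for j in neighbors[i])
        for i, xi in enumerate(x)
    ]

# Ring of 4 agents with unequal initial loads, noiseless for clarity.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [8.0, 2.0, 6.0, 0.0]
for _ in range(200):
    x = local_voting_step(x, neighbors, gamma=0.1)
spread = max(x) - min(x)  # close to 0: approximate consensus on the mean 4.0
```

The symmetric updates preserve the total load, so in the load-balancing reading all agents converge toward equal shares of the initial work.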
1306.3398 | Significant Scales in Community Structure | physics.soc-ph cs.DM cs.SI | Many complex networks show signs of modular structure, uncovered by community
detection. Although many methods succeed in revealing various partitions, it
remains difficult to detect at what scale some partition is significant. This
problem is most apparent in multi-resolution methods. Here we introduce an
efficient method for scanning for resolutions in one such method. Additionally,
we introduce the notion of "significance" of a partition, based on subgraph
probabilities. Significance is independent of the exact method used, so it
could also be applied in other methods, and can be interpreted as the gain in
encoding a graph by making use of a partition. Using significance, we can
determine "good" resolution parameters, which we demonstrate on benchmark
networks. Moreover, optimizing significance itself also shows excellent
performance. We demonstrate our method on voting data from the European
Parliament. Our analysis suggests the European Parliament has become
increasingly ideologically divided and that nationality plays no role.
|
1306.3409 | Constrained fractional set programs and their application in local
clustering and community detection | stat.ML cs.LG math.OC | The (constrained) minimization of a ratio of set functions is a problem
frequently occurring in clustering and community detection. As these
optimization problems are typically NP-hard, one uses convex or spectral
relaxations in practice. While these relaxations can be solved globally
optimally, they are often too loose and thus lead to results far away from the
optimum. In this paper we show that every constrained minimization problem of a
ratio of non-negative set functions allows a tight relaxation into an
unconstrained continuous optimization problem. This result leads to a flexible
framework for solving constrained problems in network analysis. While a
globally optimal solution for the resulting non-convex problem cannot be
guaranteed, we outperform the loose convex or spectral relaxations by a large
margin on constrained local clustering problems.
|
1306.3415 | Live-wire 3D medical images segmentation | cs.CV | This report describes the design, implementation, evaluation and original
enhancements to the Live-Wire method for 2D and 3D image segmentation.
Live-Wire 2D employs a semi-automatic paradigm; the user is asked to select a
few boundary points of the object to segment, to steer the process in the right
direction, while the result is displayed in real time. In our implementation
segmentation is extended to three dimensions by performing this process on a
slice-by-slice basis. The user's time and involvement are further reduced by
allowing them to specify object contours in planes orthogonal to the slices. If
these planes are chosen strategically, Live-Wire 3D can perform 2D segmentation
in the plane of each slice automatically. This report also proposes two
improvements to the original method, path heating and a new graph edge feature
function based on variance of path properties along the boundary. We show that
these improvements lead to up to a 33% reduction in user interaction, and
improved delineation in presence of strong interfering edges.
|
1306.3422 | Spontaneous centralization of control in a network of company ownerships | physics.soc-ph cs.SI q-fin.GN | We introduce a model for the adaptive evolution of a network of company
ownerships. In a recent work it has been shown that the empirical global
network of corporate control is marked by a central, tightly connected "core"
made of a small number of large companies which control a significant part of
the global economy. Here we show how a simple, adaptive "rich get richer"
dynamics can account for this characteristic, which incorporates the increased
buying power of more influential companies, and in turn results in even higher
control. We conclude that this kind of centralized structure can emerge without
it being an explicit goal of these companies, or as a result of a
well-organized strategy.
|
1306.3432 | Supporting Lemmas for RISE-based Control Methods | cs.SY math.OC | A class of continuous controllers termed Robust Integral of the Signum of the
Error (RISE) has been published over the last decade as a means to yield
asymptotic convergence of the tracking error for classes of nonlinear systems
that are subject to exogenous disturbances and/or modeling uncertainties. The
development of this class of controllers relies on a property related to the
integral of the signum of an error signal. A proof for this property is not
available in previous literature. The stability of some RISE controllers is
analyzed using differential inclusions. Such results rely on the hypothesis
that a set of points is Lebesgue negligible. This paper states and proves two
lemmas related to these properties.
|
1306.3440 | Comparison of OFDM and SC-DFE Capacities Without Channel Knowledge at
the Transmitter | cs.IT cs.SY math.IT | This letter provides a capacity analysis between OFDM and the ideal SC-DFE
when no channel knowledge is available at the transmitter. Through some
algebraic manipulation of the OFDM and SC-DFE capacities and using the
concavity property of the manipulated capacity function and Jensen's
inequality, we are able to prove that the SC-DFE capacity is always superior to
that of an OFDM scheme for 4- and 16-QAM for any given channel. For
higher-order modulations, however, the results indicate that OFDM may only
surpass the ideal SC-DFE capacity by a small amount in some specific scenarios.
|
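The letter's argument rests on concavity plus Jensen's inequality: for a concave f, the average of f over the subchannels is at most f of the average. A small numeric illustration with the concave, capacity-style function log2(1 + snr) as a hypothetical stand-in (not the paper's exact manipulated capacity function):

```python
import math

def capacity_like(snr):
    """Concave, capacity-style function log2(1 + snr)."""
    return math.log2(1.0 + snr)

# Per-subchannel SNRs of a hypothetical frequency-selective channel.
snrs = [0.5, 1.0, 4.0, 10.0]
mean_snr = sum(snrs) / len(snrs)

avg_of_f = sum(capacity_like(s) for s in snrs) / len(snrs)  # per-subcarrier average
f_of_avg = capacity_like(mean_snr)                          # f at the mean SNR

# Jensen: for concave f, average of f(x_i) <= f(average of x_i).
assert avg_of_f <= f_of_avg
```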
1306.3474 | Classifying Single-Trial EEG during Motor Imagery with a Small Training
Set | cs.LG cs.HC stat.ML | Before a motor imagery based brain-computer interface (BCI)
adopting machine learning techniques can be operated, a cumbersome training
procedure is unavoidable. The development of a practical BCI posed the challenge
of classifying single-trial EEG with a small training set. In this letter, we
addressed this problem by employing a series of signal processing and machine
learning approaches to alleviate overfitting and obtained test accuracy similar
to training accuracy on the datasets from BCI Competition III and our own
experiments.
|
1306.3476 | Hyperparameter Optimization and Boosting for Classifying Facial
Expressions: How good can a "Null" Model be? | cs.CV cs.LG stat.ML | One of the goals of the ICML workshop on representation and learning is to
establish benchmark scores for a new data set of labeled facial expressions.
This paper presents the performance of a "Null" model consisting of
convolutions with random weights, PCA, pooling, normalization, and a linear
readout. Our approach focused on hyperparameter optimization rather than novel
model components. On the Facial Expression Recognition Challenge held by the
Kaggle website, our hyperparameter optimization approach achieved a score of
60% accuracy on the test data. This paper also introduces a new ensemble
construction variant that combines hyperparameter optimization with the
construction of ensembles. This algorithm constructed an ensemble of four
models that scored 65.5% accuracy. These scores rank 12th and 5th respectively
among the 56 challenge participants. It is worth noting that our approach was
developed prior to the release of the data set, and applied without
modification; our strong competition performance suggests that the TPE
hyperparameter optimization algorithm and domain expertise encoded in our Null
model can generalize to new image classification data sets.
|
1306.3478 | Symplectic spreads, planar functions and mutually unbiased bases | math.CO cs.IT math.IT | In this paper we give explicit descriptions of complete sets of mutually
unbiased bases (MUBs) and orthogonal decompositions of special Lie algebras
$sl_n(\mathbb{C})$ obtained from commutative and symplectic semifields, and
from some other non-semifield symplectic spreads. Relations between various
constructions are also studied. We show that the automorphism group of a
complete set of MUBs is isomorphic to the automorphism group of the
corresponding orthogonal decomposition of the Lie algebra $sl_n(\mathbb{C})$.
In the case of symplectic spreads this automorphism group is determined by the
automorphism group of the spread. By using the new notion of pseudo-planar
functions over fields of characteristic two we give new explicit constructions
of complete sets of MUBs.
|
1306.3484 | An Information Theoretic Study of Timing Side Channels in Two-user
Schedulers | cs.IT cs.CR math.IT | Timing side channels in two-user schedulers are studied. When two users share
a scheduler, one user may learn the other user's behavior from patterns of
service timings. We measure the information leakage of the resulting timing
side channel in schedulers serving a legitimate user and a malicious attacker,
using a privacy metric defined as the Shannon equivocation of the user's job
density. We show that the commonly used first-come-first-serve (FCFS) scheduler
provides no privacy, as the attacker is able to learn the user's job pattern
completely. Furthermore, we introduce a scheduling policy, the
accumulate-and-serve scheduler, which serves jobs from the user and attacker
in batches after buffering them. The information leakage in this scheduler is
mitigated at the price of service delays, and the maximum privacy is achievable
when large delays are added.
|
1306.3517 | Different Approaches to Community Evolution Prediction in Blogosphere | cs.SI physics.soc-ph | Predicting the future direction of community evolution is a problem with high
theoretical and practical significance. It makes it possible to determine which
characteristics of communities matter for their future behaviour. Knowledge
about the probable future of a community aids decisions about investing in
contact with its members and carrying out actions to achieve a key position in
it. It also helps to determine effective ways of forming opinions or to protect
group participants against such activities. In this paper, a new approach to
group identification and prediction of future events is presented, together
with a comparison to an existing method. The experiments performed demonstrate
the high quality of the prediction results. Comparison to previous studies
shows that using many
measures to describe the group profile, and in consequence as a classifier
input, can improve predictions.
|
1306.3524 | Analysis of data in the form of graphs | physics.data-an cs.SI physics.soc-ph | We discuss the problem of extending data mining approaches to cases in which
data points arise in the form of individual graphs. Being able to find the
intrinsic low-dimensionality in ensembles of graphs can be useful in a variety
of modeling contexts, especially when coarse-graining the detailed graph
information is of interest. One of the main challenges in mining graph data is
the definition of a suitable pairwise similarity metric in the space of graphs.
We explore two practical solutions to this problem: one based on
finding subgraph densities, and one using spectral information. The approach is
illustrated on three test data sets (ensembles of graphs); two of these are
obtained from standard graph generating algorithms, while the graphs in the
third example are sampled as dynamic snapshots from an evolving network
simulation.
|
1306.3525 | Approximation Algorithms for Bayesian Multi-Armed Bandit Problems | cs.DS cs.LG | In this paper, we consider several finite-horizon Bayesian multi-armed bandit
problems with side constraints which are computationally intractable (NP-Hard)
and for which no optimal (or near optimal) algorithms are known to exist with
sub-exponential running time. All of these problems violate the standard
exchange property, which assumes that the reward from the play of an arm is not
contingent upon when the arm is played. Not only are index policies suboptimal
in these contexts, there has been little analysis of such policies in these
problem settings. We show that if we consider near-optimal policies, in the
sense of approximation algorithms, then there exist (near) index policies.
Conceptually, if we can find policies that satisfy an approximate version of
the exchange property, namely, that the reward from the play of an arm depends
on when the arm is played to within a constant factor, then we have an avenue
towards solving these problems. However such an approximate version of the
idling bandit property does not hold on a per-play basis, but is shown to hold
in a global sense. Clearly, such a property is not necessarily true of
arbitrary single arm policies and finding such single arm policies is
nontrivial. We show that by restricting the state spaces of arms we can find
single arm policies and that these single arm policies can be combined into
global (near) index policies where the approximate version of the exchange
property is true in expectation. The number of different bandit problems that
can be addressed by this technique already demonstrates its wide applicability.
|
1306.3529 | Scalable Successive-Cancellation Hardware Decoder for Polar Codes | cs.IT math.IT | Polar codes, discovered by Ar{\i}kan, are the first error-correcting codes
with an explicit construction to provably achieve channel capacity,
asymptotically. However, their error-correction performance at finite lengths
tends to be lower than existing capacity-approaching schemes. Using the
successive-cancellation algorithm, polar decoders can be designed for very long
codes, with low hardware complexity, leveraging the regular structure of such
codes. We present an architecture and an implementation of a scalable hardware
decoder based on this algorithm. This design is shown to scale to code lengths
of up to N = 2^20 on an Altera Stratix IV FPGA, limited almost exclusively by
the amount of available SRAM.
|
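The "regular structure" the hardware decoder exploits comes from Arıkan's recursive transform x = u F^{⊗n}, F = [[1,0],[1,1]] over GF(2). A minimal sketch of the encoder-side butterfly (the successive-cancellation decoder's scheduling mirrors the same recursion; the block length and input here are arbitrary examples, not the paper's design):

```python
def polar_encode(u):
    """Arikan polar transform over GF(2): x = u * F^{(tensor n)} with
    F = [[1, 0], [1, 1]], computed by the radix-2 butterfly whose
    regularity scalable hardware decoders exploit."""
    n = len(u)
    assert n & (n - 1) == 0, "length must be a power of two"
    x = list(u)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]  # upper branch absorbs the lower one
        step *= 2
    return x

codeword = polar_encode([1, 0, 1, 1])  # → [1, 1, 0, 1]
```

Over GF(2) the transform is an involution (F² = I), so encoding the codeword again recovers the input, which is a convenient self-check.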
1306.3532 | Fast Marching Tree: a Fast Marching Sampling-Based Method for Optimal
Motion Planning in Many Dimensions | cs.RO | In this paper we present a novel probabilistic sampling-based motion planning
algorithm called the Fast Marching Tree algorithm (FMT*). The algorithm is
specifically aimed at solving complex motion planning problems in
high-dimensional configuration spaces. This algorithm is proven to be
asymptotically optimal and is shown to converge to an optimal solution faster
than its state-of-the-art counterparts, chiefly PRM* and RRT*. The FMT*
algorithm performs a "lazy" dynamic programming recursion on a predetermined
number of probabilistically-drawn samples to grow a tree of paths, which moves
steadily outward in cost-to-arrive space. As a departure from previous analysis
approaches that are based on the notion of almost sure convergence, the FMT*
algorithm is analyzed under the notion of convergence in probability: the extra
mathematical flexibility of this approach allows for convergence rate
bounds--the first in the field of optimal sampling-based motion planning.
Specifically, for a certain selection of tuning parameters and configuration
spaces, we obtain a convergence rate bound of order $O(n^{-1/d+\rho})$, where
$n$ is the number of sampled points, $d$ is the dimension of the configuration
space, and $\rho$ is an arbitrarily small constant. We go on to demonstrate
asymptotic optimality for a number of variations on FMT*, namely when the
configuration space is sampled non-uniformly, when the cost is not arc length,
and when connections are made based on the number of nearest neighbors instead
of a fixed connection radius. Numerical experiments over a range of dimensions
and obstacle configurations confirm our theoretical and heuristic arguments by
showing that FMT*, for a given execution time, returns substantially better
solutions than either PRM* or RRT*, especially in high-dimensional
configuration spaces and in scenarios where collision-checking is expensive.
|
1306.3542 | Encoding Petri Nets in Answer Set Programming for Simulation Based
Reasoning | cs.AI | One of our long term research goals is to develop systems to answer realistic
questions (e.g., some mentioned in textbooks) about biological pathways that a
biologist may ask. To answer such questions we need formalisms that can model
pathways, simulate their execution, model intervention to those pathways, and
compare simulations under different circumstances. We found Petri Nets to be
the starting point of a suitable formalism for the modeling and simulation
needs. However, we need to make extensions to the Petri Net model and also
reason with multiple simulation runs and parallel state evolutions. Towards
that end, an Answer Set Programming (ASP) implementation of Petri Nets allows
us to do both. In this paper we show how ASP can be used to encode basic Petri
Nets in an intuitive manner. We then show how we can modify this encoding to
model several Petri Net extensions by making small changes. We then highlight
some of the reasoning capabilities that we will use to accomplish our ultimate
research goal.
|
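For readers unfamiliar with the firing semantics being encoded, a basic Petri Net step can be sketched as follows (in Python rather than ASP, purely as an illustration; the place and token counts are hypothetical):

```python
def enabled(marking, pre):
    """A transition is enabled if every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire one enabled transition: consume the pre-set tokens and
    produce the post-set tokens, returning the new marking."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Toy pathway fragment: transition t consumes one token each from
# places 'a' and 'b' and produces one token in 'c'.
m0 = {"a": 1, "b": 2, "c": 0}
m1 = fire(m0, pre={"a": 1, "b": 1}, post={"c": 1})
# m1 == {"a": 0, "b": 1, "c": 1}
```

An ASP encoding expresses the same enabledness and token-update rules declaratively, which is what lets the solver enumerate alternative firings and parallel state evolutions.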
1306.3543 | The Open Connectome Project Data Cluster: Scalable Analysis and Vision
for High-Throughput Neuroscience | cs.DC cs.CE q-bio.NC | We describe a scalable database cluster for the spatial analysis and
annotation of high-throughput brain imaging data, initially for 3-d electron
microscopy image stacks, but for time-series and multi-channel data as well.
The system was designed primarily for workloads that build connectomes---neural
connectivity maps of the brain---using the parallel execution of computer
vision algorithms on high-performance compute clusters. These services and
open-science data sets are publicly available at http://openconnecto.me.
The system design inherits much from NoSQL scale-out and data-intensive
computing architectures. We distribute data to cluster nodes by partitioning a
spatial index. We direct I/O to different systems---reads to parallel disk
arrays and writes to solid-state storage---to avoid I/O interference and
maximize throughput. All programming interfaces are RESTful Web services, which
are simple and stateless, improving scalability and usability. We include a
performance evaluation of the production system, highlighting the effectiveness
of spatial data organization.
|
1306.3548 | Encoding Higher Level Extensions of Petri Nets in Answer Set Programming | cs.AI | Answering realistic questions about biological systems and pathways, similar
to the ones used by textbooks to test students' understanding, is one of our
long-term research goals. Often these questions require simulation-based
reasoning. To answer such questions, we need
formalisms to build pathway models, add extensions, simulate, and reason with
them. We chose Petri Nets and Answer Set Programming (ASP) as suitable
formalisms, since Petri Net models are similar to biological pathway diagrams;
and ASP provides easy extension and strong reasoning abilities. We found that
certain aspects of biological pathways, such as locations and substance types,
cannot be represented succinctly using regular Petri Nets. As a result, we need
higher level constructs like colored tokens. In this paper, we show how Petri
Nets with colored tokens can be encoded in ASP in an intuitive manner, how
additional Petri Net extensions can be added by making small code changes, and
how this work furthers our long term research goals. Our approach can be
adapted to other domains with similar modeling needs.
|
1306.3551 | Proceedings of the 2nd Workshop on Robots in Clutter: Preparing robots
for the real world (Berlin, 2013) | cs.RO | This volume represents the proceedings of the 2nd Workshop on Robots in
Clutter: Preparing robots for the real world, held June 27, 2013, at the
Robotics: Science and Systems conference in Berlin, Germany.
|
1306.3558 | Outlying Property Detection with Numerical Attributes | cs.LG cs.DB stat.ML | The outlying property detection problem is the problem of discovering the
properties distinguishing a given object, known in advance to be an outlier in
a database, from the other database objects. In this paper, we analyze the
problem within a context where numerical attributes are taken into account,
which represents a relevant case left open in the literature. We introduce a
measure to quantify the degree of outlierness of an object, which is associated
with the relative likelihood of its values compared to the relative likelihood
of other objects in the database. As a major contribution, we present an
efficient algorithm to compute the outlierness relative to significant subsets
of the data. The latter subsets are characterized in a "rule-based" fashion,
and hence provide the basis for explaining the outlierness.
|
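As a simplified stand-in for the likelihood-based measure described above (the paper's exact definition differs), one can score an object by how unlikely its value is, under a density fitted to the database, relative to the other objects:

```python
import math

def gaussian_logpdf(x, mu, sigma):
    """Log-density of a normal distribution."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) \
           - (x - mu) ** 2 / (2 * sigma ** 2)

def outlierness(values, idx):
    """Score how unlikely values[idx] is relative to the rest of the
    database, under a Gaussian fitted to all values (a simplified
    stand-in for a likelihood-based outlierness measure)."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    lp = [gaussian_logpdf(v, mu, sigma) for v in values]
    others = [l for i, l in enumerate(lp) if i != idx]
    return sum(others) / len(others) - lp[idx]  # higher = more outlying

db = [10.0, 11.0, 9.5, 10.5, 30.0]
scores = [outlierness(db, i) for i in range(len(db))]
# the object with value 30.0 receives the largest outlierness score
```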
1306.3560 | iCub World: Friendly Robots Help Building Good Vision Data-Sets | cs.CV | In this paper we present and start analyzing the iCub World data-set, an
object recognition data-set acquired using a Human-Robot Interaction (HRI)
scheme and the iCub humanoid robot platform. Our setup allows for rapid
acquisition and annotation of data with corresponding ground truth. While more
constrained in its scope -- the iCub world is essentially a robotics research
lab -- we demonstrate how the proposed data-set poses challenges to current
recognition systems. The iCubWorld data-set is publicly available. The data-set
can be downloaded from: http://www.iit.it/en/projects/data-sets.html.
|
1306.3576 | Multiplex PageRank | physics.soc-ph cond-mat.stat-mech cs.SI | Many complex systems can be described as multiplex networks in which the same
nodes can interact with one another in different layers, thus forming a set of
interacting and co-evolving networks. Examples of such multiplex systems are
social networks where people are involved in different types of relationships
and interact through various forms of communication media. The ranking of nodes
in multiplex networks is one of the most pressing and challenging tasks that
research on complex networks is currently facing. When pairs of nodes can be
connected through multiple links and in multiple layers, the ranking of nodes
should necessarily reflect the importance of nodes in one layer as well as
their importance in other interdependent layers. In this paper, we draw on the
idea of biased random walks to define the Multiplex PageRank centrality measure
in which the effects of the interplay between networks on the centrality of
nodes are directly taken into account. In particular, depending on the
intensity of the interaction between layers, we define the Additive,
Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show
how each version reflects the extent to which the importance of a node in one
layer affects the importance the node can gain in another layer. We discuss
these measures and apply them to an online multiplex social network. Findings
indicate that taking the multiplex nature of the network into account helps
uncover the emergence of rankings of nodes that differ from the rankings
obtained from one single layer. Results provide support in favor of the
salience of multiplex centrality measures, like Multiplex PageRank, for
assessing the prominence of nodes embedded in multiple interacting networks,
and for shedding a new light on structural properties that would otherwise
remain undetected if each of the interacting networks were analyzed in
isolation.
|
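One way to read the Additive variant (an illustrative interpretation, not the paper's exact equations): centrality earned in one layer is fed into the teleportation vector of the random walk on another layer. A sketch with hypothetical 3-node layers:

```python
def pagerank(adj, personalization=None, d=0.85, iters=200):
    """Power-iteration PageRank on a directed adjacency list, with an
    optional personalized teleportation vector."""
    n = len(adj)
    tele = personalization or [1.0 / n] * n
    s = sum(tele)
    tele = [t / s for t in tele]
    r = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - d) * tele[i] for i in range(n)]
        for i, outs in enumerate(adj):
            if outs:
                share = d * r[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:  # dangling node: spread mass via teleportation
                for j in range(n):
                    new[j] += d * r[i] * tele[j]
        r = new
    return r

# Two layers over the same 3 nodes (hypothetical example).
layer_a = [[1], [2], [0]]   # a directed cycle: all nodes equal in layer A
layer_b = [[2], [2], []]    # node 2 collects links in layer B
x = pagerank(layer_a)
# Additive-style multiplex ranking: layer-A centrality biases the
# teleportation of the layer-B walk.
multiplex = pagerank(layer_b, personalization=x)
```

Even in this toy case, the multiplex score of a node depends on both layers, which is the qualitative point of the measures above.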
1306.3584 | Recurrent Convolutional Neural Networks for Discourse Compositionality | cs.CL | The compositionality of meaning extends beyond the single sentence. Just as
words combine to form the meaning of sentences, so do sentences combine to form
the meaning of paragraphs, dialogues and general discourse. We introduce both a
sentence model and a discourse model corresponding to the two levels of
compositionality. The sentence model adopts convolution as the central
operation for composing semantic vectors and is based on a novel hierarchical
convolutional neural network. The discourse model extends the sentence model
and is based on a recurrent neural network that is conditioned in a novel way
both on the current sentence and on the current speaker. The discourse model is
able to capture both the sequentiality of sentences and the interaction between
different speakers. Without feature engineering or pretraining and with simple
greedy decoding, the discourse model coupled to the sentence model obtains
state of the art performance on a dialogue act classification experiment.
|
1306.3604 | OFDM Synthetic Aperture Radar Imaging with Sufficient Cyclic Prefix | cs.IT math.IT | The existing linear frequency modulated (LFM) (or step frequency) and random
noise synthetic aperture radar (SAR) systems may correspond to the frequency
hopping (FH) and direct sequence (DS) spread spectrum systems in the past
second and third generation wireless communications. Similar to the current and
future wireless communications generations, in this paper, we propose OFDM SAR
imaging, where a sufficient cyclic prefix (CP) is added to each OFDM pulse. The
sufficient CP insertion converts an inter-symbol interference (ISI) channel
from multipath into multiple ISI-free subchannels, the key idea in wireless
communications systems; analogously, it provides an inter-range-cell
interference (IRCI) free (high range resolution) SAR image in a SAR system. The
sufficient CP insertion along with our newly proposed SAR imaging algorithm
particularly for the OFDM signals also differentiates this paper from all the
existing studies in the literature on OFDM radar signal processing. Simulation
results are presented to illustrate the high range resolution performance of
our proposed CP based OFDM SAR imaging algorithm.
|
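The ISI-removal argument can be checked numerically: with a cyclic prefix at least as long as the channel memory, the received block equals a circular convolution of the transmitted block with the channel, which the DFT diagonalizes into ISI-free subchannels. The channel taps and data below are arbitrary illustrations, not values from the paper:

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * i * k / n)
                for k in range(n)) for i in range(n)]

def transmit(block, h, cp_len):
    """Prepend a cyclic prefix, pass through an FIR multipath channel
    (linear convolution), then strip the prefix at the receiver."""
    tx = block[-cp_len:] + block if cp_len else list(block)
    rx = [sum(h[m] * tx[k - m] for m in range(len(h)) if k - m >= 0)
          for k in range(len(tx))]
    return rx[cp_len:cp_len + len(block)]   # CP removal

h = [1.0, 0.5, 0.25]                        # hypothetical 3-tap channel
block = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5, 0.0, 2.5]
rx = transmit(block, h, cp_len=len(h) - 1)  # sufficient CP

# With sufficient CP the block sees a *circular* convolution, so the
# DFT diagonalizes the channel: RX[i] = H[i] * X[i] on each subcarrier.
H = dft(h + [0.0] * (len(block) - len(h)))
X = dft(block)
RX = dft(rx)
assert all(abs(RX[i] - H[i] * X[i]) < 1e-9 for i in range(len(block)))
```

Dropping the prefix (cp_len=0) breaks the per-subcarrier relation, which is exactly the inter-symbol (and, in the SAR reading, inter-range-cell) interference the CP is inserted to avoid.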
1306.3609 | Volume Ratio, Sparsity, and Minimaxity under Unitarily Invariant Norms | math.ST cs.IT math.IT stat.TH | The current paper presents a novel machinery for studying non-asymptotic
minimax estimation of high-dimensional matrices, which yields tight minimax
rates for a large collection of loss functions in a variety of problems.
Based on the convex geometry of finite-dimensional Banach spaces, we first
develop a volume ratio approach for determining minimax estimation rates of
unconstrained normal mean matrices under all squared unitarily invariant norm
losses. In addition, we establish the minimax rates for estimating mean
matrices with submatrix sparsity, where the sparsity constraint introduces an
additional term in the rate whose dependence on the norm differs completely
from the rate of the unconstrained problem. Moreover, the approach is
applicable to the matrix completion problem under the low-rank constraint.
The new method also extends beyond the normal mean model. In particular, it
yields tight rates in covariance matrix estimation and Poisson rate matrix
estimation problems for all unitarily invariant norms.
|
1306.3610 | Thresholds of Spatially Coupled Systems via Lyapunov's Method | cs.IT math.IT | The threshold, or saturation phenomenon of spatially coupled systems is
revisited in the light of Lyapunov's theory of dynamical systems. It is shown
that an application of Lyapunov's direct method can be used to quantitatively
describe the threshold phenomenon, prove convergence, and compute threshold
values. This provides a general proof methodology for the various systems
recently studied. Examples of spatially coupled systems are given and their
thresholds are computed.
|
1306.3618 | Theoretical Bounds in Minimax Decentralized Hypothesis Testing | cs.IT math.IT | Minimax decentralized detection is studied under two scenarios: with and
without a fusion center when the source of uncertainty is the Bayesian prior.
When there is no fusion center, the constraints in the network design are
determined. Both for a single decision maker and multiple decision makers, the
maximum loss in detection performance due to minimax decision making is
obtained. In the presence of a fusion center, the maximum loss of detection
performance between networks with and without a fusion center is derived
assuming that both networks are minimax robust. The results are finally
generalized.
|
1306.3622 | A Differential Feedback Scheme Exploiting the Temporal and Spectral
Correlation | cs.IT math.IT | Channel state information (CSI) provided by limited feedback channel can be
utilized to increase the system throughput. However, in multiple input multiple
output (MIMO) systems, the signaling overhead realizing this CSI feedback can
be quite large, while the capacity of the uplink feedback channel is typically
limited. Hence, it is crucial to reduce the amount of feedback bits. Prior work
on limited feedback compression commonly adopted the block fading channel model
where only the temporal or spectral correlation of the wireless channel is considered.
In this paper, we propose a differential feedback scheme with full use of the
temporal and spectral correlations to reduce the feedback load. Then, the
minimal differential feedback rate over MIMO doubly selective fading channel is
investigated. Finally, the analysis is verified by simulations.
|
1306.3679 | Fractional Order Fuzzy Control of Nuclear Reactor Power with
Thermal-Hydraulic Effects in the Presence of Random Network Induced Delay and
Sensor Noise having Long Range Dependence | math.OC cs.SY | Nonlinear state space modeling of a nuclear reactor has been done for the
purpose of controlling its global power in load following mode. The nonlinear
state space model has been linearized at different percentage of reactor powers
and a novel fractional order (FO) fuzzy proportional integral derivative (PID)
controller is designed using real coded Genetic Algorithm (GA) to control the
reactor power level at various operating conditions. The effectiveness of using
the fuzzy FOPID controller over conventional fuzzy PID controllers has been
shown with numerical simulations. The controllers tuned on the highest-power
models are shown to work well at other operating conditions as well, unlike
designs based on the lowest-power model, and are hence robust with respect to
changes in the nuclear reactor's operating power level. This paper also analyzes the
degradation of nuclear reactor power signal due to network induced random
delays in shared communication network and due to sensor noise while being
fed-back to the Reactor Regulating System (RRS). The effect of long range
dependence (LRD) which is a practical consideration for the stochastic
processes like network induced delay and sensor noise has been tackled by
optimum tuning of FO fuzzy PID controllers using GA, while also taking the
operating point shift into consideration.
|
1306.3680 | Optimum Weight Selection Based LQR Formulation for the Design of
Fractional Order PI{\lambda}D{\mu} Controllers to Handle a Class of
Fractional Order Systems | math.OC cs.SY | A weighted summation of Integral of Time Multiplied Absolute Error (ITAE) and
Integral of Squared Controller Output (ISCO) minimization based time domain
optimal tuning of fractional-order (FO) PID or PI{\lambda}D{\mu} controller is
proposed in this paper with a Linear Quadratic Regulator (LQR) based technique
that minimizes the change in trajectories of the state variables and the
control signal. A class of fractional order systems having a single non-integer
order element, which show highly sluggish and oscillatory open loop responses,
has been tuned with an LQR based FOPID controller. The proposed controller
design methodology is compared with the existing time domain optimal tuning
techniques with respect to change in the trajectory of state variables,
tracking performance for change in set-point, magnitude of control signal and
also the capability of load disturbance suppression. A real coded genetic
algorithm (GA) has been used for the optimal choice of weighting matrices while
designing the quadratic regulator by minimizing the time domain integral
performance index. Credible simulation studies have been presented to justify
the proposition.
|
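The LQR design described in the abstract above fixes the weighting matrices Q and R (chosen there by a genetic algorithm) and then solves a Riccati equation for the optimal state-feedback gain. As a minimal, self-contained illustration of that inner step (the plant and weights below are invented for illustration, not taken from the paper), the discrete-time gain can be obtained by fixed-point iteration of the algebraic Riccati equation:

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via fixed-point iteration of
    P = A'PA - A'PB (R + B'PB)^-1 B'PA + Q."""
    P = Q.copy()
    for _ in range(iters):
        BtPA = B.T @ P @ A
        P_next = A.T @ P @ A - BtPA.T @ np.linalg.solve(R + B.T @ P @ B, BtPA) + Q
        if np.allclose(P_next, P, atol=1e-12):
            P = P_next
            break
        P = P_next
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # control law u = -K x
    return K, P

# Toy discretized double-integrator plant (illustrative values only).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)          # state weight -- the quantity a GA would tune
R = np.array([[1.0]])  # control weight
K, P = dlqr(A, B, Q, R)
# Closed loop A - B K should be stable (spectral radius < 1).
print(float(np.max(np.abs(np.linalg.eigvals(A - B @ K)))))
```

In the paper's setting a GA would wrap this routine, proposing candidate (Q, R) pairs and scoring the resulting closed-loop time-domain performance.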
1306.3682 | Frequency Domain Design of Fractional Order PID Controller for AVR
System Using Chaotic Multi-objective Optimization | math.OC cs.SY | A fractional order (FO) PID or FOPID controller is designed for an Automatic
Voltage Regulator (AVR) system with the consideration of contradictory
performance objectives. An improved evolutionary Non-dominated Sorting Genetic
Algorithm (NSGA-II), augmented with a chaotic Henon map, is used for the
multi-objective optimization based design procedure. The Henon map as the
random number generator outperforms the original NSGA-II algorithm and its
Logistic map assisted version for obtaining a better design trade-off with an
FOPID controller. The Pareto fronts showing the trade-offs between the
different design objectives have also been shown for both the FOPID controller
and the conventional PID controller to enunciate the relative merits and
demerits of each. The design is done in the frequency domain, and hence the
stability and robustness of the design are automatically guaranteed, unlike
other time domain optimization based controller design methods.
|
1306.3683 | Performance Comparison of Optimal Fractional Order Hybrid Fuzzy PID
Controllers for Handling Oscillatory Fractional Order Processes with Dead
Time | math.OC cs.SY | Fuzzy logic based PID controllers have been studied in this paper,
considering several combinations of hybrid controllers by grouping the
proportional, integral and derivative actions with fuzzy inferencing in
different forms. Fractional order (FO) rate of error signal and FO integral of
control signal have been used in the design of a family of decomposed hybrid FO
fuzzy PID controllers. The input and output scaling factors (SF) along with the
integro-differential operators are tuned with real coded genetic algorithm (GA)
to produce optimum closed loop performance by simultaneous consideration of the
control loop error index and the control signal. Three different classes of
fractional order oscillatory processes with various levels of relative
dominance between time constant and time delay have been used to test the
comparative merits of the proposed family of hybrid fractional order fuzzy PID
controllers. Performance comparison of the different FO fuzzy PID controller
structures has been done in terms of optimal set-point tracking, load
disturbance rejection, and minimal variation of the manipulated variable (i.e.,
a smaller actuator requirement). In addition, multi-objective Non-dominated Sorting
Genetic Algorithm (NSGA-II) has been used to study the Pareto optimal
trade-offs between the set point tracking and control signal, and the set point
tracking and load disturbance performance for each of the controller structures
in handling the three different types of processes.
|
1306.3684 | Design of Hybrid Regrouping PSO-GA based Sub-optimal Networked Control
System with Random Packet Losses | math.OC cs.SY | In this paper, a new approach has been presented to design sub-optimal state
feedback regulators over Networked Control Systems (NCS) with random packet
losses. The optimal regulator gains, producing guaranteed stability are
designed with the nominal discrete time model of a plant using Lyapunov
technique which produces a set of Bilinear Matrix Inequalities (BMIs). In
order to reduce the computational complexity of the BMIs, a Genetic Algorithm
(GA) based approach coupled with the standard interior point methods for LMIs
has been adopted. A Regrouping Particle Swarm Optimization (RegPSO) based
method is then employed to optimally choose the weighting matrices for the
state feedback regulator design that gets passed through the GA based stability
checking criteria, i.e., the BMIs. This hybrid optimization methodology put
forward in this paper not only reduces the computational difficulty of the
feasibility checking condition for optimum stabilizing gain selection but also
minimizes other time domain performance criteria like expected value of the
set-point tracking error with optimum weight selection based LQR design for the
nominal system.
|
1306.3685 | Continuous Order Identification of PHWR Models Under Step-back for the
Design of Hyper-damped Power Tracking Controller with Enhanced Reactor Safety | math.OC cs.SY | In this paper, discrete time higher integer order linear transfer function
models have been identified first for a 500 MWe Pressurized Heavy Water Reactor
(PHWR) which has highly nonlinear dynamical nature. Linear discrete time models
of the nonlinear nuclear reactor have been identified around eight different
operating points (power reduction or step-back conditions) with least square
estimator (LSE) and its four variants. From the synthetic frequency domain data
of these identified discrete time models, fractional order (FO) models with
sampled continuous order distribution are identified for the nuclear reactor.
This enables design of continuous order Proportional-Integral-Derivative (PID)
like compensators in the complex w-plane for global power tracking at a wide
range of operating conditions. Modeling of the PHWR is attempted with various
levels of discrete commensurate-orders and the achievable accuracies are also
elucidated along with the hidden issues, regarding modeling and controller
design. Credible simulation studies are presented to show the effectiveness of
the proposed reactor modeling and power level controller design. The controller
pushes the reactor poles into higher Riemann sheets and thus makes the closed
loop system hyper-damped which ensures safer reactor operation at varying
dc-gain while making the power tracking temporal response slightly sluggish;
but ensuring greater safety margin.
|
1306.3692 | An open diachronic corpus of historical Spanish: annotation criteria and
automatic modernisation of spelling | cs.CL cs.DL | The IMPACT-es diachronic corpus of historical Spanish compiles over one
hundred books --containing approximately 8 million words-- in addition to a
complementary lexicon which links more than 10 thousand lemmas with
attestations of the different variants found in the documents. This textual
corpus and the accompanying lexicon have been released under an open license
(Creative Commons by-nc-sa) in order to permit their intensive exploitation in
linguistic research. Approximately 7% of the words in the corpus (a selection
aimed at enhancing the coverage of the most frequent word forms) have been
annotated with their lemma, part of speech, and modern equivalent. This paper
describes the annotation criteria followed and the standards, based on the Text
Encoding Initiative recommendations, used to represent the texts in digital
form. As an illustration of the possible synergies between diachronic textual
resources and linguistic research, we describe the application of statistical
machine translation techniques to infer probabilistic context-sensitive rules
for the automatic modernisation of spelling. The automatic modernisation with
this type of statistical methods leads to very low character error rates when
the output is compared with the supervised modern version of the text.
|
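The abstract above reports character error rates (CER) when comparing the automatically modernised text against its supervised modern version. As a small illustration of how CER is conventionally computed, namely edit distance normalised by reference length (the example string pair below is hypothetical, not drawn from the corpus):

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance (insertions, deletions, substitutions) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def cer(hypothesis: str, reference: str) -> float:
    """Character error rate: edit distance normalised by reference length."""
    return levenshtein(hypothesis, reference) / len(reference)

# Hypothetical old-spelling vs. modernised pair (not from the corpus).
print(cer("dixo que auia", "dijo que había"))
```

A low CER then means the statistical modernisation rules leave only a few character-level edits between the system output and the reference.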
1306.3693 | Message Passing Algorithms for Phase Noise Tracking Using Tikhonov
Mixtures | cs.IT math.IT | In this work, a new low complexity iterative algorithm for decoding data
transmitted over strong phase noise channels is presented. The algorithm is
based on the Sum & Product Algorithm (SPA) with phase noise messages modeled as
Tikhonov mixtures. Since mixture based Bayesian inference such as the SPA creates
an exponential increase in mixture order for consecutive messages, mixture
reduction is necessary. We propose a low complexity mixture reduction algorithm
which finds a reduced order mixture whose dissimilarity metric is
mathematically proven to be upper bounded by a given threshold. As part of the
mixture reduction, a new method for optimal clustering provides the closest
circular distribution, in the Kullback-Leibler sense, to any circular mixture. We
further show a method for limiting the number of tracked components and additional
complexity reduction approaches. We show simulation results and complexity
analysis for the proposed algorithm and show better performance than other
state of the art low complexity algorithms. We show that the Tikhonov mixture
approximation of SPA messages is equivalent to the tracking of multiple phase
trajectories, or can also be viewed as multiple smart phase locked loops (PLLs).
When the number of components is limited to one, the result is similar to a
smart PLL.
|
1306.3710 | Symmetric Two-User MIMO BC and IC with Evolving Feedback | cs.IT math.IT | Extending recent findings on the two-user MISO broadcast channel (BC) with
imperfect and delayed channel state information at the transmitter (CSIT), the
work here explores the performance of the two-user MIMO BC and the two-user
MIMO interference channel (MIMO IC), in the presence of feedback with evolving
quality and timeliness. Under standard assumptions, and in the presence of M
antennas per transmitter and N antennas per receiver, the work derives the DoF
region, which is optimal for a large regime of sufficiently good (but
potentially imperfect) delayed CSIT. This region concisely captures the effect
of having predicted, current and delayed-CSIT, as well as concisely captures
the effect of the quality of CSIT offered at any time, about any channel. In
addition to the progress towards describing the limits of using such imperfect
and delayed feedback in MIMO settings, the work offers different insights that
include the fact that, an increasing number of receive antennas can allow for
reduced quality feedback, as well as that no CSIT is needed for the direct
links in the IC.
|
1306.3717 | Performance Analysis for Physical Layer Security in Multi-Antenna
Downlink Networks with Limited CSI Feedback | cs.IT math.IT | Channel state information (CSI) at the transmitter is of importance to the
performance of physical layer security based on multi-antenna networks.
Specifically, CSI is not only beneficial to improve the capacity of the
legitimate channel, but also can be used to degrade the performance of the
eavesdropper channel. Thus, the secrecy rate increases accordingly. This letter
focuses on the quantitative analysis of the ergodic secrecy sum-rate in terms
of the amount of CSI feedback from the legitimate users in multiuser
multi-antenna downlink networks. Furthermore, the asymptotic characteristics of
the ergodic secrecy sum-rate in two extreme cases are investigated in some
detail. Finally, our theoretical claims are confirmed by the numerical results.
|
1306.3721 | Online Alternating Direction Method (longer version) | cs.LG math.OC | Online optimization has emerged as powerful tool in large scale optimization.
In this pa- per, we introduce efficient online optimization algorithms based on
the alternating direction method (ADM), which can solve online convex
optimization under linear constraints where the objective could be non-smooth.
We introduce new proof techniques for ADM in the batch setting, which yield an
O(1/T) convergence rate for ADM and form the basis for regret analysis in
the online setting. We consider two scenarios in the online setting, based on
whether an additional Bregman divergence is needed or not. In both settings, we
establish regret bounds for both the objective function as well as constraints
violation for general and strongly convex functions. We also consider inexact
ADM updates where certain terms are linearized to yield efficient updates and
show the stochastic convergence rates. In addition, we briefly discuss that
online ADM can be used as a projection-free online learning algorithm in some
scenarios. Preliminary results are presented to illustrate the performance of
the proposed algorithms.
|
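The online ADM of the abstract above builds on the batch alternating direction method. As background, here is a minimal sketch of the classical batch ADMM updates for the lasso, a standard non-smooth problem rewritten with a linear constraint x = z (this is textbook batch ADMM, not the paper's online algorithm, and all problem sizes are illustrative):

```python
import numpy as np

def soft_threshold(v, k):
    """Proximal operator of k * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def lasso_admm(A, b, lam=0.01, rho=1.0, iters=300):
    """Batch ADMM for min_x 0.5||Ax - b||^2 + lam||z||_1  s.t. x = z."""
    n = A.shape[1]
    z, u = np.zeros(n), np.zeros(n)
    AtA, Atb = A.T @ A, A.T @ b
    L = np.linalg.cholesky(AtA + rho * np.eye(n))   # factor once, reuse
    for _ in range(iters):
        # x-update: ridge-like quadratic subproblem via cached Cholesky factor
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft_threshold(x + u, lam / rho)        # z-update: l1 prox
        u = u + x - z                               # scaled dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true                                      # noiseless data
x_hat = lasso_admm(A, b)
print(float(np.linalg.norm(x_hat - x_true)))        # small recovery error
```

The online variants in the paper replace the batch x-update with per-round updates on streaming losses; the alternating structure and the l1 proximal step are the part carried over.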
1306.3729 | Spectral Experts for Estimating Mixtures of Linear Regressions | cs.LG stat.ML | Discriminative latent-variable models are typically learned using EM or
gradient-based optimization, which suffer from local optima. In this paper, we
develop a new computationally efficient and provably consistent estimator for a
mixture of linear regressions, a simple instance of a discriminative
latent-variable model. Our approach relies on a low-rank linear regression to
recover a symmetric tensor, which can be factorized into the parameters using a
tensor power method. We prove rates of convergence for our estimator and
provide an empirical evaluation illustrating its strengths relative to local
optimization (EM).
|
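The estimator in the abstract above factorizes a symmetric tensor with the tensor power method. A minimal sketch of that factorization step on a synthetic orthogonally decomposable tensor follows (the low-rank regression step that produces the tensor is omitted, and all dimensions and weights are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 5, 2
# Orthonormal components and positive weights (synthetic, illustrative).
Qm, _ = np.linalg.qr(rng.standard_normal((d, d)))
comps, w = Qm[:, :k], np.array([2.0, 1.0])
# Symmetric third-order tensor T = sum_i w_i a_i (x) a_i (x) a_i.
T = np.einsum('i,ai,bi,ci->abc', w, comps, comps, comps)

def tensor_power(T, iters=100):
    """Recover one robust eigenpair of an odeco tensor by power iteration:
    u <- T(I, u, u) / ||T(I, u, u)||."""
    u = rng.standard_normal(T.shape[0])
    u /= np.linalg.norm(u)
    for _ in range(iters):
        u = np.einsum('abc,b,c->a', T, u, u)
        u /= np.linalg.norm(u)
    lam = float(np.einsum('abc,a,b,c->', T, u, u, u))
    return lam, u

lam, u = tensor_power(T)
# u should align with one of the true components, with lam its weight.
print(lam, float(max(abs(u @ comps[:, 0]), abs(u @ comps[:, 1]))))
```

In the full method one repeats this with deflation (subtracting lam * u (x) u (x) u) to pull out all k components.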
1306.3738 | A coevolving model based on preferential triadic closure for social
media networks | cs.SI nlin.AO physics.soc-ph | The dynamical origin of complex networks, i.e., the underlying principles
governing network evolution, is a crucial issue in network study. In this
paper, by analyzing the temporal data of Flickr and
Epinions--two typical social media networks, we found that the dynamical
pattern in neighborhood, especially the formation of triadic links, plays a
dominant role in the evolution of networks. We thus proposed a coevolving
dynamical model for such networks, in which the evolution is only driven by the
local dynamics--the preferential triadic closure. Numerical experiments
verified that the model can reproduce global properties which are qualitatively
consistent with the empirical observations.
|
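The abstract above attributes network evolution to preferential triadic closure. A toy simulation in that spirit (a generic triadic-closure growth model with made-up parameters, not the authors' calibrated model) shows how preferentially closing friend-of-friend links inflates clustering well above what random attachment alone would give:

```python
import random
from itertools import combinations

random.seed(42)

def grow(n_nodes=300, p_triad=0.8, m_links=2):
    """Toy growth model: each new node attaches once at random, then adds
    further links preferentially by closing triangles (friend-of-friend)."""
    adj = {0: {1}, 1: {0}}
    for v in range(2, n_nodes):
        first = random.randrange(v)              # random initial attachment
        adj[v] = {first}
        adj[first].add(v)
        for _ in range(m_links - 1):
            fof = set().union(*(adj[u] for u in adj[v])) - adj[v] - {v}
            if fof and random.random() < p_triad:
                w = random.choice(sorted(fof))   # triadic closure
            else:
                w = random.randrange(v)          # fallback: random link
            adj[v].add(w)
            adj[w].add(v)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total, count = 0.0, 0
    for v, nb in adj.items():
        if len(nb) < 2:
            continue
        links = sum(1 for a, b in combinations(nb, 2) if b in adj[a])
        total += 2.0 * links / (len(nb) * (len(nb) - 1))
        count += 1
    return total / count

g = grow()
print(round(clustering(g), 3))   # far above the ~2m/n level of a random graph
```

Empirically validating such a model against Flickr/Epinions data would then amount to comparing statistics like this clustering coefficient (and triangle formation rates) between simulation and the temporal records.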
1306.3764 | Bounding ground state energy of Hopfield models | math.OC cs.IT math.IT | In this paper we look at a class of random optimization problems that arise
in the forms typically known as Hopfield models. We consider two scenarios, which
we term the positive Hopfield form and the negative Hopfield form. For both of
these scenarios we define the binary optimization problems that essentially
emulate what would typically be known as the ground state energy of these
models. We then present a simple mechanism that can be used to create a set of
rigorous theoretical bounds for these energies. In addition to purely
theoretical bounds, we also present a couple of fast optimization algorithms
that can also be used to provide solid (albeit a bit weaker) algorithmic bounds
for the ground state energies.
|
1306.3769 | Multi-scale analysis of the European airspace using network community
detection | physics.soc-ph cs.SI | We show that the European airspace can be represented as a multi-scale
traffic network whose nodes are airports, sectors, or navigation points and
links are defined and weighted according to the traffic of flights between the
nodes. By using a unique database of the air traffic in the European airspace,
we investigate the architecture of these networks with a special emphasis on
their community structure. We propose that unsupervised network community
detection algorithms can be used to monitor the current use of the airspaces
and improve it by guiding the design of new ones. Specifically, we compare the
performance of three community detection algorithms, also by using a null model
which takes into account the spatial distance between nodes, and we discuss
their ability to find communities that could be used to define new control
units of the airspace.
|
1306.3770 | Lifting $\ell_1$-optimization strong and sectional thresholds | cs.IT math.IT math.OC | In this paper we revisit under-determined linear systems of equations with
sparse solutions. As is well known, these systems are among core mathematical
problems of a very popular compressed sensing field. The popularity of the
field as well as a substantial academic interest in linear systems with sparse
solutions are in a significant part due to seminal results
\cite{CRT,DonohoPol}. Namely, working in a statistical scenario,
\cite{CRT,DonohoPol} provided substantial mathematical progress in
characterizing relation between the dimensions of the systems and the sparsity
of unknown vectors recoverable through a particular polynomial technique called
$\ell_1$-minimization. In our own series of work
\cite{StojnicCSetam09,StojnicUpper10,StojnicEquiv10} we also provided a
collection of mathematical results related to these problems. While Donoho's
work \cite{DonohoPol,DonohoUnsigned} established (and our own work
\cite{StojnicCSetam09,StojnicUpper10,StojnicEquiv10} reaffirmed) the typical or
the so-called \emph{weak threshold} behavior of $\ell_1$-minimization, many
important questions remain unanswered. Among the most important ones are those
that relate to non-typical or the so-called \emph{strong threshold} behavior.
These questions are usually combinatorial in nature and known techniques come
up short of providing the exact answers. In this paper we provide a powerful
mechanism that can be used to attack the "tough" scenario, i.e. the
\emph{strong threshold} (and a similar form called \emph{sectional
threshold}) of $\ell_1$-minimization.
|
1306.3774 | Under-determined linear systems and $\ell_q$-optimization thresholds | cs.IT math.IT math.OC | Recent studies of under-determined linear systems of equations with sparse
solutions showed a great practical and theoretical efficiency of a particular
technique called $\ell_1$-optimization. Seminal works \cite{CRT,DOnoho06CS}
rigorously confirmed it for the first time. Namely, \cite{CRT,DOnoho06CS}
showed, in a statistical context, that $\ell_1$ technique can recover sparse
solutions of under-determined systems even when the sparsity is linearly
proportional to the dimension of the system. A followup \cite{DonohoPol} then
precisely characterized such a linearity through a geometric approach and a
series of work\cite{StojnicCSetam09,StojnicUpper10,StojnicEquiv10} reaffirmed
statements of \cite{DonohoPol} through a purely probabilistic approach. A
theoretically interesting alternative to $\ell_1$ is a more general version
called $\ell_q$ (with an essentially arbitrary $q$). While $\ell_1$ is
typically considered as a first available convex relaxation of sparsity norm
$\ell_0$, $\ell_q,0\leq q\leq 1$, albeit non-convex, should technically be a
tighter relaxation of $\ell_0$. Even though developing polynomial (or close to
polynomial) algorithms for non-convex problems is still in its initial
phases, one may wonder what would be the limits of an $\ell_q,0\leq q\leq 1$,
relaxation even if at some point one can develop algorithms that could handle
its non-convexity. A collection of answers to this and a few related questions
is precisely what we present in this paper.
|
1306.3778 | Upper-bounding $\ell_1$-optimization sectional thresholds | cs.IT math.IT math.OC | In this paper we look at a particular problem related to under-determined
linear systems of equations with sparse solutions. $\ell_1$-minimization is a
fairly successful polynomial technique that can in certain statistical
scenarios find sparse enough solutions of such systems. Barriers of $\ell_1$
performance are typically referred to as its thresholds. Depending if one is
interested in a typical or worst case behavior one then distinguishes between
the \emph{weak} thresholds that relate to a typical behavior on one side and
the \emph{sectional} and \emph{strong} thresholds that relate to the worst case
behavior on the other side. Starting with seminal works
\cite{CRT,DonohoPol,DOnoho06CS}, substantial progress has been achieved in
theoretical characterization of $\ell_1$-minimization statistical thresholds.
More precisely, \cite{CRT,DOnoho06CS} presented for the first time linear lower
bounds on all of these thresholds. Donoho's work \cite{DonohoPol} (and our own
\cite{StojnicCSetam09,StojnicUpper10}) went a bit further and essentially
settled the $\ell_1$'s \emph{weak} thresholds. At the same time they also
provided fairly good lower bounds on the values on the \emph{sectional} and
\emph{strong} thresholds. In this paper, we revisit the \emph{sectional}
thresholds and present a simple mechanism that can be used to create solid
upper bounds as well. The method we present relies on seemingly simple but
substantial progress we made in studying Hopfield models in
\cite{StojnicHopBnds10}.
|
1306.3779 | Bounds on restricted isometry constants of random matrices | math.OC cs.IT math.IT math.PR | In this paper we look at isometry properties of random matrices. During the
last decade these properties gained a lot of attention in a field called
compressed sensing, in the first place due to their initial use in \cite{CRT,CT}.
Namely, in \cite{CRT,CT} these quantities were used as a critical tool in
providing a rigorous analysis of $\ell_1$ optimization's ability to solve an
under-determined system of linear equations with sparse solutions. In such a
framework a particular type of isometry, called restricted isometry, plays a
key role. One then typically introduces a couple of quantities, called upper
and lower restricted isometry constants to characterize the isometry properties
of random matrices. Those constants are then usually viewed as mathematical
objects of interest, and their precise characterization is desirable. The
first estimates of these quantities within compressed sensing were given in
\cite{CRT,CT}. As the need for precisely estimating them grew further, finer
improvements of these initial estimates were obtained in, e.g.,
\cite{BCTsharp09,BT10}. These are typically obtained through a combination of
union-bounding strategy and powerful tail estimates of extreme eigenvalues of
Wishart (Gaussian) matrices (see, e.g. \cite{Edelman88}). In this paper we
attempt to circumvent such an approach and provide an alternative way to obtain
similar estimates.
|
1306.3786 | Analysis of Multi-Cell Downlink Cooperation with a Constrained Spatial
Model | cs.IT math.IT | Multi-cell cooperation (MCC) mitigates intercell interference and improves
throughput at the cell edge. This paper considers a cooperative downlink,
whereby cell-edge mobiles are served by multiple cooperative base stations. The
cooperating base stations transmit identical signals over paths with
non-identical path losses, and the receiving mobile performs diversity
combining. The analysis in this paper is driven by a new expression for the
conditional outage probability when signals arriving over different paths are
combined in the presence of noise and interference, where the conditioning is
with respect to the network topology and shadowing. The channel model accounts
for path loss, shadowing, and Nakagami fading, and the Nakagami fading
parameters do not need to be identical for all paths. To study performance over
a wide class of network topologies, a random spatial model is adopted, and
performance is found by statistically characterizing the rates provided on the
downlinks. To model realistic networks, the model requires a minimum separation
among base stations. Having adopted a realistic model and an accurate analysis,
the paper proceeds to determine performance under several resource-allocation
policies and provides insight regarding how the cell edge should be defined.
|
1306.3791 | A gambling interpretation of some quantum information-theoretic
quantities | quant-ph cs.IT math.IT | It is known that repeated gambling over the outcomes of independent and
identically distributed (i.i.d.) random variables gives rise to an alternate
operational meaning of entropies in the classical case in terms of the doubling
rates. We give a quantum extension of this approach for gambling over the
measurement outcomes of tensor product states. Under certain parameters of the
gambling setup, one can give operational meaning to von Neumann entropies. We
discuss two variants of gambling when a helper is available and it is shown
that the difference in their doubling rates is the quantum discord. Lastly, a
quantum extension of Kelly's gambling setup in the classical case gives a
doubling rate that is upper bounded by the Holevo information.
|
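The classical backdrop for the abstract above is Kelly's result that proportional betting on a horse race with uniform fair odds achieves the doubling rate W* = log2(m) - H(p), tying the growth rate of wealth to the entropy of the outcome distribution. A quick numerical check of that identity (the race probabilities below are arbitrary):

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(pi * log2(pi) for pi in p if pi > 0)

def doubling_rate(p, bets, odds):
    """Expected log2 wealth growth W(b, p) = sum_i p_i log2(b_i o_i)."""
    return sum(pi * log2(bi * oi) for pi, bi, oi in zip(p, bets, odds))

# Horse race with uniform fair odds o_i = m; Kelly bets proportionally: b = p.
p = [0.5, 0.25, 0.125, 0.125]
m = len(p)
odds = [m] * m
W_star = doubling_rate(p, p, odds)
# Classical identity: W* = log2(m) - H(p).
print(W_star, log2(m) - entropy(p))
```

The quantum extension in the paper replaces the i.i.d. outcomes with measurement outcomes of tensor product states, so von Neumann entropies take the place of H(p) in such doubling-rate expressions.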
1306.3801 | Towards a better compressed sensing | cs.IT math.IT math.OC | In this paper we look at a well known linear inverse problem that is one of
the mathematical cornerstones of the compressed sensing field. In seminal works
\cite{CRT,DOnoho06CS} $\ell_1$ optimization and its success when used for
recovering sparse solutions of linear inverse problems was considered.
Moreover, \cite{CRT,DOnoho06CS} established for the first time in a statistical
context that an unknown vector of linear sparsity can be recovered as a known
existing solution of an under-determined linear system through $\ell_1$
optimization. In \cite{DonohoPol,DonohoUnsigned} (and later in
\cite{StojnicCSetam09,StojnicUpper10}) the precise values of the linear
proportionality were established as well. While the typical $\ell_1$
optimization behavior has been essentially settled through the work of
\cite{DonohoPol,DonohoUnsigned,StojnicCSetam09,StojnicUpper10}, we in this
paper look at possible upgrades of $\ell_1$ optimization. Namely, we look at a
couple of algorithms that turn out to be capable of recovering a substantially
higher sparsity than the $\ell_1$. However, these algorithms assume a bit of
"feedback" to be able to work at full strength. This in turn then translates
the original problem of improving upon $\ell_1$ to designing algorithms that
would be able to provide output needed to feed the $\ell_1$ upgrades considered
in this papers.
|
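The abstract above contrasts $\ell_1$ optimization with algorithms that can recover higher sparsity. For concreteness, here is a standard greedy baseline for the same sparse-recovery problem, orthogonal matching pursuit (this is not one of the paper's proposed upgrades, and the problem sizes below are arbitrary):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily add the column of A most
    correlated with the residual, then refit by least squares."""
    r, support = y.copy(), []
    x_s = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(3)
m, n, k = 100, 200, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.choice([-1.0, 1.0], size=k)  # sparse +/-1 signal
y = A @ x_true                                 # noiseless measurements
x_hat = omp(A, y, k)
print(float(np.linalg.norm(x_hat - x_true)))
```

In the noiseless under-determined regime shown here (m < n), both $\ell_1$ minimization and greedy methods like OMP typically recover the sparse vector exactly; the thresholds studied in this line of work quantify how large the sparsity can grow before such recovery breaks down.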
1306.3808 | Characteristic exponents of complex networks | physics.soc-ph cs.SI | We present a novel way to characterize the structure of complex networks by
studying the statistical properties of the trajectories of random walks over
them. We consider time series corresponding to different properties of the
nodes visited by the walkers. We show that the analysis of the fluctuations of
these time series allows one to define a set of characteristic exponents which
capture the local and global organization of a network. This approach provides
a way of solving two classical problems in network science, namely the
systematic classification of networks, and the identification of the salient
properties of growing networks. The results contribute to the construction of a
unifying framework for the investigation of the structure and dynamics of
complex systems.
|
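The approach in the abstract above extracts exponents from fluctuations of node-property time series generated by random walks. A bare-bones version of that pipeline (random graph, walk, degree series, window-sum fluctuations; all parameters invented for illustration, and without the log-log fit that would yield the exponent itself):

```python
import random

random.seed(7)

def random_graph(n=200, p=0.05):
    """Erdos-Renyi graph as an adjacency list."""
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def walk_degree_series(adj, steps=5000):
    """Degrees of the nodes visited by a simple random walk."""
    v, series = 0, []
    for _ in range(steps):
        v = random.choice(adj[v]) if adj[v] else random.randrange(len(adj))
        series.append(len(adj[v]))
    return series

def fluctuation(series, window):
    """Std. dev. of non-overlapping window sums; its growth with the
    window size is what a characteristic exponent would summarize."""
    sums = [sum(series[i:i + window])
            for i in range(0, len(series) - window, window)]
    mu = sum(sums) / len(sums)
    return (sum((s - mu) ** 2 for s in sums) / len(sums)) ** 0.5

adj = random_graph()
s = walk_degree_series(adj)
print(round(fluctuation(s, 8), 2), round(fluctuation(s, 64), 2))
```

Repeating this for several window sizes and fitting the slope of log F(window) against log window gives one exponent; doing so for different node properties and networks yields the exponent sets used for classification.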
1306.3828 | Non-Uniform Blind Deblurring with a Spatially-Adaptive Sparse Prior | cs.CV | Typical blur from camera shake often deviates from the standard uniform
convolutional script, in part because of problematic rotations which create
greater blurring away from some unknown center point. Consequently, successful
blind deconvolution requires the estimation of a spatially-varying or
non-uniform blur operator. Using ideas from Bayesian inference and convex
analysis, this paper derives a non-uniform blind deblurring algorithm with
several desirable, yet previously-unexplored attributes. The underlying
objective function includes a spatially adaptive penalty which couples the
latent sharp image, non-uniform blur operator, and noise level together. This
coupling allows the penalty to automatically adjust its shape based on the
estimated degree of local blur and image structure such that regions with large
blur or few prominent edges are discounted. Remaining regions with modest blur
and revealing edges therefore dominate the overall estimation process without
explicitly incorporating structure-selection heuristics. The algorithm can be
implemented using a majorization-minimization strategy that is virtually
parameter free. Detailed theoretical analysis and empirical validation on real
images serve to validate the proposed method.
|
1306.3830 | What do leaders know? | physics.soc-ph cs.SI | The ability of a society to make the right decisions on relevant matters
relies on its capability to properly aggregate the noisy information spread
across the individuals it is made of. In this paper we study the information
aggregation performance of a stylized model of a society whose most influential
individuals - the leaders - are highly connected among themselves and
uninformed. Agents update their state of knowledge in a Bayesian manner by
listening to their neighbors. We find analytical and numerical evidence of a
transition, as a function of the noise level in the information initially
available to agents, from a regime where information is correctly aggregated to
one where the population reaches consensus on the wrong outcome with finite
probability. Furthermore, information aggregation depends in a non-trivial
manner on the relative size of the clique of leaders, with the limit of a
vanishingly small clique being singular.
|
1306.3839 | TwitterCrowds: Techniques for Exploring Topic and Sentiment in
Microblogging Data | cs.SI physics.soc-ph | Analysts and social scientists in the humanities and industry require
techniques to help visualize large quantities of microblogging data. Methods
for the automated analysis of large scale social media data (on the order of
tens of millions of tweets) are widely available, but few visualization
techniques exist to support interactive exploration of the results. In this
paper, we present extended descriptions of ThemeCrowds and SentireCrowds, two
tag-based visualization techniques for this data. We subsequently introduce a
new list equivalent for both of these techniques and present a number of case
studies showing them in operation. Finally, we present a formal user study to
evaluate the effectiveness of these list interface equivalents when comparing
them to ThemeCrowds and SentireCrowds. We find that discovering topics
associated with areas of strong positive or negative sentiment is faster when
using a list interface. In terms of user preference, multilevel tag clouds were
found to be more enjoyable to use. Despite both interfaces being usable for all
tested tasks, we have evidence to support that list interfaces can be more
efficient for tasks when an appropriate ordering is known beforehand.
|
1306.3855 | Two-View Matching with View Synthesis Revisited | cs.CV | Wide-baseline matching focussing on problems with extreme viewpoint change is
considered. We introduce the use of view synthesis with affine-covariant
detectors to solve such problems and show that matching with the Hessian-Affine
or MSER detectors outperforms the state-of-the-art ASIFT.
To minimise the loss of speed caused by view synthesis, we propose the
Matching On Demand with view Synthesis algorithm (MODS) that uses progressively
more synthesized images and more (time-consuming) detectors until reliable
estimation of geometry is possible. We show experimentally that the MODS
algorithm solves problems beyond the state-of-the-art and yet is comparable in
speed to standard wide-baseline matchers on simpler problems.
Minor contributions include an improved method for tentative correspondence
selection, applicable both with and without view synthesis, and a view
synthesis setup that greatly improves MSER robustness to blur and scale change
while increasing its running time by only 10%.
|
1306.3856 | From Text to Bank Interrelation Maps | q-fin.RM cs.SI physics.soc-ph | In the wake of the ongoing global financial crisis, interdependencies among
banks have come into focus in trying to assess systemic risk. To date, such
analysis has largely been based on numerical data. By contrast, this study
attempts to gain further insight into bank interconnections by tapping into
financial discussion. Co-mentions of bank names are turned into a network,
which can be visualized and analyzed quantitatively, in order to illustrate
characteristics of individual banks and the network as a whole. The approach
allows for the study of temporal dynamics of the network, to highlight changing
patterns of discussion that reflect real-world events, the current financial
crisis in particular. For instance, it depicts how connections from distressed
banks to other banks and supervisory authorities have emerged and faded over
time, as well as how global shifts in network structure coincide with severe
crisis episodes. The usage of textual data holds an additional advantage in the
possibility of gaining a more qualitative understanding of an observed
interrelation, through its context. We illustrate our approach using a case
study on Finnish banks and financial institutions. The data set comprises 3.9M
posts from online, financial and business-related discussion, during the years
2004 to 2012. Future research includes analyzing European news articles with a
broader perspective, and a focus on improving semantic description of
relations.
|
1306.3860 | Cluster coloring of the Self-Organizing Map: An information
visualization perspective | cs.LG cs.HC | This paper takes an information visualization perspective to visual
representations in the general SOM paradigm. This involves viewing SOM-based
visualizations through the eyes of Bertin's and Tufte's theories on data
graphics. The regular grid shape of the Self-Organizing Map (SOM), while being
a virtue for linking visualizations to it, restricts representation of cluster
structures. From the viewpoint of information visualization, this paper
provides a general, yet simple, solution to projection-based coloring of the
SOM that reveals structures. First, the proposed color space is easy to
construct and customize to the purpose of use, while aiming at being
perceptually correct and informative through two separable dimensions. Second,
the coloring method is not dependent on any specific method of projection, but
is rather modular to fit any objective function suitable for the task at hand.
The cluster coloring is illustrated on two datasets: the iris data, and welfare
and poverty indicators.
|
1306.3874 | Classifying and Visualizing Motion Capture Sequences using Deep Neural
Networks | cs.CV | Gesture recognition using motion capture data and depth sensors has
recently drawn more attention in vision recognition. Currently, most systems
only classify datasets with a couple of dozen different actions. Moreover,
feature extraction from the data is often computationally complex. In this paper,
we propose a novel system to recognize the actions from skeleton data with
simple, but effective, features using deep neural networks. Features are
extracted for each frame based on the relative positions of joints (PO),
temporal differences (TD), and normalized trajectories of motion (NT). Given
these features a hybrid multi-layer perceptron is trained, which simultaneously
classifies and reconstructs input data. We use a deep autoencoder to visualize the
learnt features, and the experiments show that deep neural networks can capture
more discriminative information than, for instance, principal component
analysis can. We test our system on a public database with 65 classes and more
than 2,000 motion sequences. We obtain an accuracy above 95%, which is, to our
knowledge, the state-of-the-art result for such a large dataset.
|
1306.3882 | Chaining Test Cases for Reactive System Testing (extended version) | cs.SE cs.SY | Testing of synchronous reactive systems is challenging because long input
sequences are often needed to drive them into a state at which a desired
feature can be tested. This is particularly problematic in on-target testing,
where a system is tested in its real-life application environment and the time
required for resetting is high. This paper presents an approach to discovering
a test case chain---a single software execution that covers a group of test
goals and minimises overall test execution time. Our technique targets the
scenario in which test goals for the requirements are given as safety
properties. We give conditions for the existence and minimality of a single
test case chain and minimise the number of test chains if a single test chain
is infeasible. We report experimental results with a prototype tool for C code
generated from Simulink models and compare it to state-of-the-art test suite
generators.
|
1306.3884 | The Rise and Fall of Semantic Rule Updates Based on SE-Models | cs.AI | Logic programs under the stable model semantics, or answer-set programs,
provide an expressive rule-based knowledge representation framework, featuring
a formal, declarative and well-understood semantics. However, handling the
evolution of rule bases is still a largely open problem. The AGM framework for
belief change was shown to give inappropriate results when directly applied to
logic programs under a non-monotonic semantics such as the stable models. The
approaches to address this issue, developed so far, proposed update semantics
based on manipulating the syntactic structure of programs and rules.
More recently, AGM revision has been successfully applied to a significantly
more expressive semantic characterisation of logic programs based on SE-models.
This is an important step, as it changes the focus from the evolution of a
syntactic representation of a rule base to the evolution of its semantic
content.
In this paper, we borrow results from the area of belief update to tackle the
problem of updating (instead of revising) answer-set programs. We prove a
representation theorem which makes it possible to constructively define any
operator satisfying a set of postulates derived from Katsuno and Mendelzon's
postulates for belief update. We define a specific operator based on this
theorem, examine its computational complexity and compare the behaviour of this
operator with syntactic rule update semantics from the literature. Perhaps
surprisingly, we uncover a serious drawback of all rule update operators based
on Katsuno and Mendelzon's approach to update and on SE-models.
|
1306.3888 | The SP theory of intelligence: an overview | cs.AI | This article is an overview of the "SP theory of intelligence". The theory
aims to simplify and integrate concepts across artificial intelligence,
mainstream computing and human perception and cognition, with information
compression as a unifying theme. It is conceived as a brain-like system that
receives 'New' information and stores some or all of it in compressed form as
'Old' information. It is realised in the form of a computer model -- a first
version of the SP machine. The concept of "multiple alignment" is a powerful
central idea. Using heuristic techniques, the system builds multiple alignments
that are 'good' in terms of information compression. For each multiple
alignment, probabilities may be calculated. These provide the basis for
calculating the probabilities of inferences. The system learns new structures
from partial matches between patterns. Using heuristic techniques, the system
searches for sets of structures that are 'good' in terms of information
compression. These are normally ones that people judge to be 'natural', in
accordance with the 'DONSVIC' principle -- the discovery of natural structures
via information compression. The SP theory may be applied in several areas
including 'computing', aspects of mathematics and logic, representation of
knowledge, natural language processing, pattern recognition, several kinds of
reasoning, information storage and retrieval, planning and problem solving,
information compression, neuroscience, and human perception and cognition.
Examples include the parsing and production of language including discontinuous
dependencies in syntax, pattern recognition at multiple levels of abstraction
and its integration with part-whole relations, nonmonotonic reasoning and
reasoning with default values, reasoning in Bayesian networks including
'explaining away', causal diagnosis, and the solving of a geometric analogy
problem.
|
1306.3890 | Big data and the SP theory of intelligence | cs.AI | This article is about how the "SP theory of intelligence" and its realisation
in the "SP machine" may, with advantage, be applied to the management and
analysis of big data. The SP system -- introduced in the article and fully
described elsewhere -- may help to overcome the problem of variety in big data:
it has potential as "a universal framework for the representation and
processing of diverse kinds of knowledge" (UFK), helping to reduce the
diversity of formalisms and formats for knowledge and the different ways in
which they are processed. It has strengths in the unsupervised learning or
discovery of structure in data, in pattern recognition, in the parsing and
production of natural language, in several kinds of reasoning, and more. It
lends itself to the analysis of streaming data, helping to overcome the problem
of velocity in big data. Central in the workings of the system is lossless
compression of information: making big data smaller and reducing problems of
storage and management. There is potential for substantial economies in the
transmission of data, for big cuts in the use of energy in computing, for
faster processing, and for smaller and lighter computers. The system provides a
handle on the problem of veracity in big data, with potential to assist in the
management of errors and uncertainties in data. It lends itself to the
visualisation of knowledge structures and inferential processes. A
high-parallel, open-source version of the SP machine would provide a means for
researchers everywhere to explore what can be done with the system and to
create new versions of it.
|
1306.3895 | On-line PCA with Optimal Regrets | cs.LG | We carefully investigate the on-line version of PCA, where in each trial a
learning algorithm plays a k-dimensional subspace, and suffers the compression
loss on the next instance when projected into the chosen subspace. In this
setting, we analyze two popular on-line algorithms, Gradient Descent (GD) and
Exponentiated Gradient (EG). We show that both algorithms are essentially
optimal in the worst-case. This comes as a surprise, since EG is known to
perform sub-optimally when the instances are sparse. This different behavior of
EG for PCA is mainly related to the non-negativity of the loss in this case,
which makes the PCA setting qualitatively different from other settings studied
in the literature. Furthermore, we show that when considering regret bounds as
a function of a loss budget, EG remains optimal and strictly outperforms GD.
Next, we study an extension of the PCA setting, in which Nature is allowed
to play dense instances, which are positive matrices with bounded largest
eigenvalue. Again we can show that EG is optimal and strictly better than GD in
this setting.
|
1306.3896 | Improving the efficiency of the LDPC code-based McEliece cryptosystem
through irregular codes | cs.IT cs.CR math.IT | We consider the framework of the McEliece cryptosystem based on LDPC codes,
which is a promising post-quantum alternative to classical public key
cryptosystems. The use of LDPC codes in this context makes it possible to achieve good
security levels with very compact keys, which is an important advantage over
the classical McEliece cryptosystem based on Goppa codes. However, only regular
LDPC codes have been considered up to now, while some further improvement can
be achieved by using irregular LDPC codes, which are known to achieve better
error correction performance than regular LDPC codes. This is shown in this
paper for the first time, to our knowledge. The possible use of irregular
transformation matrices is also investigated, which further increases the
efficiency of the system, especially in regard to the public key size.
|
1306.3899 | Generalized rank weights: a duality statement | cs.IT math.IT | We consider linear codes over a fixed finite field extension of an
arbitrary finite field. Gabidulin introduced rank metric codes by endowing
linear codes over the extension field with a rank weight over the base field
and studied their basic properties in analogy with linear codes and the
classical Hamming distance. Inspired by the characterization of wiretap II
codes in terms of generalized Hamming weights by Wei, Kurihara et al. defined
some generalized rank weights and showed their relevance for secure network
coding. In this paper, we derive a statement for generalized rank weights of
the dual code, completely analogous to Wei's for generalized Hamming
weights and we characterize the equality case of the r-generalized Singleton
bound for the generalized rank weights, in terms of the rank weight of the dual
code.
|
1306.3905 | Stability of Multi-Task Kernel Regression Algorithms | cs.LG stat.ML | We study the stability properties of nonlinear multi-task regression in
reproducing Hilbert spaces with operator-valued kernels. Such kernels, a.k.a.
multi-task kernels, are appropriate for learning problems with nonscalar
outputs like multi-task learning and structured output prediction. We show
that multi-task kernel regression algorithms are uniformly stable in the
general case of infinite-dimensional output spaces. We then derive, under a
mild assumption on the kernel, generalization bounds for such algorithms, and
we show their consistency even with non-Hilbert-Schmidt operator-valued kernels.
We demonstrate how to apply the results to various multi-task kernel regression
methods such as vector-valued SVR and functional ridge regression.
|
1306.3917 | On Finding the Largest Mean Among Many | stat.ML cs.LG | Sampling from distributions to find the one with the largest mean arises in a
broad range of applications, and it can be mathematically modeled as a
multi-armed bandit problem in which each distribution is associated with an
arm. This paper studies the sample complexity of identifying the best arm
(largest mean) in a multi-armed bandit problem. Motivated by large-scale
applications, we are especially interested in identifying situations where the
total number of samples that are necessary and sufficient to find the best arm
scale linearly with the number of arms. We present a single-parameter
multi-armed bandit model that spans the range from linear to superlinear sample
complexity. We also give a new algorithm for best arm identification, called
PRISM, with linear sample complexity for a wide range of mean distributions.
The algorithm, like most exploration procedures for multi-armed bandits, is
adaptive in the sense that the next arms to sample are selected based on
previous samples. We compare the sample complexity of adaptive procedures with
simpler non-adaptive procedures using new lower bounds. For many problem
instances, the increased sample complexity required by non-adaptive procedures
is a polynomial factor of the number of arms.
|
1306.3920 | Discriminating word senses with tourist walks in complex networks | cs.CL cs.SI physics.soc-ph | Patterns of topological arrangement are widely used by both animal and human
brains in the learning process. Nevertheless, automatic learning techniques
frequently overlook these patterns. In this paper, we apply a learning
technique based on the structural organization of the data in the attribute
space to the problem of discriminating the senses of 10 polysemous words. Using
two types of characterization of meanings, namely semantic and topological
approaches, we observed significant accuracy rates in identifying the
suitable meanings with both techniques. Most importantly, we found that the
characterization based on the deterministic tourist walk improves the
disambiguation process compared with the discrimination achieved with
traditional complex network measurements such as assortativity and clustering
coefficient. To our knowledge, this is the first time that such a deterministic
walk has been applied to this kind of problem. Therefore, our finding
suggests that the tourist walk characterization may be useful in other related
applications.
|
1306.3946 | Multi-view in Lensless Compressive Imaging | cs.IT cs.CV math.IT | Multi-view images are acquired by a lensless compressive imaging
architecture, which consists of an aperture assembly and multiple sensors. The
aperture assembly consists of a two dimensional array of aperture elements
whose transmittance can be individually controlled to implement a compressive
sensing matrix. For each transmittance pattern of the aperture assembly, each
of the sensors takes a measurement. The measurement vectors from the multiple
sensors represent multi-view images of the same scene. We present a theoretical
framework for multi-view reconstruction and experimental results on enhancing
image quality using multiple views.
|
1306.3953 | The Appliance Pervasive of Internet of Things in Healthcare Systems | cs.SY | Information systems are the foundation of new sources of productivity, of new
medical organizational forms, and of the construction of a global economy.
IoT-based healthcare systems play a significant role in ICT and contribute to
the growth of medical information systems, which underpin recent medical and
economic development strategies. However, to take advantage of IoT, it is
essential that medical enterprises and the community trust IoT systems in
terms of performance, security, privacy, reliability and return-on-investment,
which are open challenges for current IoT systems. To enhance healthcare
systems, the tracking, tracing and monitoring of patients and medical objects
are essential. However, due to inadequate healthcare situations, medical
environments, medical technologies and the unique requirements of some
healthcare applications, the available tools cannot meet these needs
accurately. The tracking, tracing and monitoring of patients and of healthcare
actors' activities in healthcare systems are challenging research directions
for IoT researchers. State-of-the-art IoT-based healthcare systems should be
developed that ensure the safety of patients and other healthcare activities.
In this manuscript, we elaborate on the essential role of IoT in healthcare
systems; the immense prospects of the Internet of Things in healthcare; the
extent to which the use of IoT differs among healthcare components; and,
finally, the contribution of IoT in bridging useful research and present
realistic applications. IoT and a few other modern technologies are still at a
foundational stage, particularly in the healthcare system.
|
1306.3954 | Subgroups of direct products closely approximated by direct sums | math.GN cs.IT math.GR math.IT | Let $I$ be an infinite set, $\{G_i:i\in I\}$ be a family of (topological)
groups and $G=\prod_{i\in I} G_i$ be its direct product. For $J\subseteq I$,
$p_{J}: G\to \prod_{j\in J} G_j$ denotes the projection. We say that a subgroup
$H$ of $G$ is: (i) \emph{uniformly controllable} in $G$ provided that for every
finite set $J\subseteq I$ there exists a finite set $K\subseteq I$ such that
$p_{J}(H)=p_{J}(H\cap\bigoplus_{i\in K} G_i)$; (ii) \emph{controllable} in $G$
provided that $p_{J}(H)=p_{J}(H\cap\bigoplus_{i\in I} G_i)$ for every finite
set $J\subseteq I$; (iii) \emph{weakly controllable} in $G$ if $H\cap
\bigoplus_{i\in I} G_i$ is dense in $H$, when $G$ is equipped with the
Tychonoff product topology. One easily proves that (i)$\to$(ii)$\to$(iii). We
thoroughly investigate the question as to when these two arrows can be
reversed. We prove that the first arrow can be reversed when $H$ is compact,
but the second arrow cannot be reversed even when $H$ is compact. Both arrows
can be reversed if all groups $G_i$ are finite. When $G_i=A$ for all $i\in I$,
where $A$ is an abelian group, we show that the first arrow can be reversed for
{\em all} subgroups $H$ of $G$ if and only if $A$ is finitely generated.
Connections with coding theory are highlighted.
|
1306.3955 | The Number of Terms and Documents for Pseudo-Relevant Feedback for
Ad-hoc Information Retrieval | cs.IR | In an Information Retrieval System (IRS), Automatic Relevance Feedback (ARF)
is a query reformulation technique that modifies the initial query without
user intervention. It is applied mainly through the addition of terms coming
from external resources, such as ontologies, or from the results of the
current search. In this context we are mainly interested in the local
analysis technique for ARF in an ad-hoc IRS on Arabic documents. In this
article, we examine the impact of varying the two parameters involved in this
technique, namely the number of documents "D" and the number of terms "T", on
the performance of an Arabic IRS. The experimentation, carried out on an
Arabic text corpus, enables us to deduce that there are queries which are not
easily improvable with query reformulation. In addition, the success of ARF is
due mainly to the selection of a sufficient number of documents D and to the
extraction of a very reduced set of relevant terms T for retrieval.
|
1306.3975 | Lifting/lowering Hopfield models ground state energies | math.OC cs.IT math.IT | In our recent work \cite{StojnicHopBnds10} we looked at a class of random
optimization problems that arise in the forms typically known as Hopfield
models. We viewed two scenarios which we termed as the positive Hopfield form
and the negative Hopfield form. For both of these scenarios we defined the
binary optimization problems whose optimal values essentially emulate what
would typically be known as the ground state energy of these models. We then
presented a simple mechanism that can be used to create a set of rigorous
theoretical bounds for these energies. In this paper we create a far more
powerful set of mechanisms that can substantially improve the simple bounds
given in \cite{StojnicHopBnds10}. In fact, the mechanisms we create in this
paper are the first set of results showing that convexity-type bounds can be
substantially improved for this type of combinatorial problem.
|
1306.3976 | Lifting $\ell_q$-optimization thresholds | cs.IT math.IT math.OC | In this paper we look at a connection between the $\ell_q,0\leq q\leq 1$,
optimization and under-determined linear systems of equations with sparse
solutions. The case $q=1$, or in other words $\ell_1$ optimization, and its
connection with linear systems has been thoroughly studied in the last several
decades; in fact, especially so during the last decade after the seminal works
\cite{CRT,DOnoho06CS} appeared. While the current understanding of the
$\ell_1$ optimization--linear systems connection is fairly complete, much less
is known about the case of general $\ell_q,0<q<1$, optimization. In our recent work
\cite{StojnicLqThrBnds10} we provided a study in this direction. As a result we
were able to obtain a collection of lower bounds on various $\ell_q,0\leq q\leq
1$, optimization thresholds. In this paper, we provide a substantial conceptual
improvement of the methodology presented in \cite{StojnicLqThrBnds10}.
Moreover, the practical results in terms of achievable thresholds are also
encouraging. As is usually the case with these and similar problems, the
methodology we developed emphasizes their combinatorial nature and attempts
to handle it. Although our results' main contributions should be on a
conceptual level, they already give a very strong suggestion that $\ell_q$
optimization can in fact provide better performance than $\ell_1$, a fact
long believed to be true due to the tighter relaxation it provides of the
original $\ell_0$ sparsity-oriented problem formulation. As such, they give a
solid boost to further exploration of the design of algorithms that would be
able to handle $\ell_q,0<q<1$, optimization in a reasonable (if not
polynomial) time.
|
1306.3977 | Compressed sensing of block-sparse positive vectors | cs.IT math.IT math.OC | In this paper we revisit one of the classical problems of compressed sensing.
Namely, we consider linear under-determined systems with sparse solutions. A
substantial success in mathematical characterization of an $\ell_1$
optimization technique typically used for solving such systems has been
achieved during the last decade. Seminal works \cite{CRT,DOnoho06CS} showed
that the $\ell_1$ can recover a so-called linear sparsity (i.e. solve systems
even when the solution has a sparsity linearly proportional to the length of
the unknown vector). Later considerations \cite{DonohoPol,DonohoUnsigned} (as
well as our own ones \cite{StojnicCSetam09,StojnicUpper10}) provided the
precise characterization of this linearity. In this paper we consider the
so-called structured version of the above sparsity driven problem. Namely, we
view a special case of sparse solutions, the so-called block-sparse solutions.
Typically one employs $\ell_2/\ell_1$-optimization as a variant of the standard
$\ell_1$ to handle the block-sparse case of sparse solution systems. We
considered systems with block-sparse solutions in a series of works
\cite{StojnicCSetamBlock09,StojnicUpperBlock10,StojnicICASSP09block,StojnicJSTSP09}
where we were able to provide precise performance characterizations of the
$\ell_2/\ell_1$-optimization similar to those obtained for the standard
$\ell_1$ optimization in \cite{StojnicCSetam09,StojnicUpper10}. Here we look at
a similar class of systems where on top of being block-sparse the unknown
vectors are also known to have components of the same sign. In this paper we
slightly adjust $\ell_2/\ell_1$-optimization to account for the known signs and
provide a precise performance characterization of such an adjustment.
|
1306.4009 | On the Asymptotic Performance of Bit-Wise Decoders for Coded Modulation | cs.IT math.IT | Two decoder structures for coded modulation over the Gaussian and flat fading
channels are studied: the maximum likelihood symbol-wise decoder, and the
(suboptimal) bit-wise decoder based on the bit-interleaved coded modulation
paradigm. We consider a 16-ary quadrature amplitude constellation labeled by a
Gray labeling. It is shown that the asymptotic loss in terms of pairwise error
probability caused by the bit-wise decoder is, for any two codewords, bounded
by 1.25 dB. The analysis also shows that for the Gaussian channel the
asymptotic loss is zero for a wide range of linear codes, including all
rate-1/2 convolutional codes.
|
1306.4036 | Distributed Inference with M-ary Quantized Data in the Presence of
Byzantine Attacks | cs.IT cs.CR math.IT stat.AP | The problem of distributed inference with M-ary quantized data at the sensors
is investigated in the presence of Byzantine attacks. We assume that the
attacker does not have knowledge about either the true state of the phenomenon
of interest, or the quantization thresholds used at the sensors. Therefore, the
Byzantine nodes attack the inference network by modifying the symbol
corresponding to the quantized data to one of the other M symbols in the
quantization alphabet-set and transmitting the false symbol to the fusion
center (FC). In this paper, we find the optimal Byzantine attack that blinds
any distributed inference network. As the quantization alphabet size increases,
a tremendous improvement in the security performance of the distributed
inference network is observed.
We also investigate the problem of distributed inference in the presence of
resource-constrained Byzantine attacks. In particular, we focus our attention
on two problems: distributed detection and distributed estimation, when the
Byzantine attacker employs a highly-symmetric attack. For both the problems, we
find the optimal attack strategies employed by the attacker to maximally
degrade the performance of the inference network. A reputation-based scheme for
identifying malicious nodes is also presented as the network's strategy to
mitigate the impact of Byzantine threats on the inference performance of the
distributed sensor network.
|
1306.4040 | An Algorithm to Find Optimal Attack Paths in Nondeterministic Scenarios | cs.CR cs.AI | As penetration testing frameworks have evolved and have become more complex,
the problem of automatically controlling the pentesting tool has become an
important question. This can be naturally addressed as an attack planning
problem. Previous approaches to this problem were based on modeling the actions
and assets in the PDDL language, and using off-the-shelf AI tools to generate
attack plans. These approaches however are limited. In particular, the planning
is classical (the actions are deterministic) and thus not able to handle the
uncertainty involved in this form of attack planning.
We herein contribute a planning model that does capture the uncertainty about
the results of the actions, which is modeled as a probability of success of
each action. We present efficient planning algorithms, specifically designed
for this problem, that achieve industrial-scale runtime performance (able to
solve scenarios with several hundred hosts and exploits). These algorithms take
into account the probability of success of the actions and their expected cost
(for example in terms of execution time, or network traffic generated).
We thus show that probabilistic attack planning can be solved efficiently for
the scenarios that arise when assessing the security of large networks. Two
"primitives" are presented, which are used as building blocks in a framework
separating the overall problem into two levels of abstraction. We also present
the experimental results obtained with our implementation, and conclude with
some ideas for further work.
|
1306.4044 | Attack Planning in the Real World | cs.CR cs.AI | Assessing network security is a complex and difficult task. Attack graphs
have been proposed as a tool to help network administrators understand the
potential weaknesses of their network. However, a problem has not yet been
addressed by previous work on this subject; namely, how to actually execute and
validate the attack paths resulting from the analysis of the attack graph. In
this paper we present a complete PDDL representation of an attack model, and an
implementation that integrates a planner into a penetration testing tool. This
allows one to automatically generate attack paths for penetration testing
scenarios, and to validate these attacks by executing the corresponding actions
-including exploits- against the real target network. We present an algorithm
for transforming the information present in the penetration testing tool to the
planning domain, and show how the scalability issues of attack graphs can be
solved using current planners. We include an analysis of the performance of our
solution, showing how our model scales to medium-sized networks and the number
of actions available in current penetration testing tools.
|
1306.4064 | A surrogate for networks -- How scale-free is my scale-free network? | physics.soc-ph cs.SI nlin.AO physics.data-an | Complex networks are now being studied in a wide range of disciplines across
science and technology. In this paper we propose a method by which one can
probe the properties of experimentally obtained network data. Rather than just
measuring properties of a network inferred from data, we aim to ask how typical
is that network? What properties of the observed network are typical of all
such scale-free networks, and which are peculiar? To do this we propose a
series of methods that can be used to generate statistically likely complex
networks which are both similar to the observed data and also consistent with
an underlying null-hypothesis -- for example a particular degree distribution.
There is a direct analogy between the approach we propose here and the
surrogate data methods applied to nonlinear time series data.
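The surrogate idea can be illustrated with the standard double-edge-swap construction (a generic sketch, not necessarily the paper's exact method): repeatedly rewiring pairs of edges yields a randomized network with exactly the same degree sequence as the observed one.

```python
import random

def surrogate(edges, n_tries=2000, seed=1):
    """Degree-preserving randomization of an undirected simple graph
    by repeated double-edge swaps: (a,b),(c,d) -> (a,d),(c,b)."""
    norm = lambda u, v: (u, v) if u <= v else (v, u)
    es = {norm(u, v) for u, v in edges}
    rng = random.Random(seed)
    for _ in range(n_tries):
        (a, b), (c, d) = rng.sample(sorted(es), 2)
        if len({a, b, c, d}) < 4:
            continue                      # avoid self-loops
        e1, e2 = norm(a, d), norm(c, b)
        if e1 in es or e2 in es:
            continue                      # avoid multi-edges
        es -= {(a, b), (c, d)}            # the swap keeps every degree fixed
        es |= {e1, e2}
    return sorted(es)
```

Ensembles of such surrogates give the null distribution against which properties of the observed network can be tested.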
|
1306.4066 | MYE: Missing Year Estimation in Academic Social Networks | cs.DL cs.SI | In bibliometrics studies, a common challenge is how to deal with incorrect or
incomplete data. However, given a large volume of data, there often exist
certain relationships between the data items that can allow us to recover
missing data items and correct erroneous data. In this paper, we study a
particular problem of this sort - estimating the missing year information
associated with publications (and hence authors' years of active publication).
We first propose a simple algorithm that only makes use of the "direct"
information, such as paper citation/reference relationships or paper-author
relationships. The result of this simple algorithm is used as a benchmark for
comparison. Our goal is to develop algorithms that increase both the coverage
(the percentage of missing year papers recovered) and accuracy (mean absolute
error of the estimated year to the real year). We propose some advanced
algorithms that extend inference by information propagation. For each
algorithm, we propose three versions according to the given academic social
network type: a) Homogeneous (only contains paper citation links), b) Bipartite
(only contains paper-author relations), and, c) Heterogeneous (both paper
citation and paper-author relations). We carry out experiments on three
public data sets (MSR Libra, DBLP and APS), evaluated using K-fold
cross-validation. We show that the advanced algorithms can
improve both coverage and accuracy.
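A hedged sketch of the "direct information" idea: assuming a paper's year is bounded below by the papers it cites and above by the papers citing it, bounds can be propagated through the citation network. The function and its interface are illustrative, not the paper's algorithms.

```python
def estimate_years(known, cites, n_iter=10):
    """known: {paper: year}; cites: {paper: set of papers it references}.
    Missing years are estimated from citation direction: a paper is no
    older than its references and no newer than the papers citing it."""
    cited_by = {}
    for p, refs in cites.items():
        for r in refs:
            cited_by.setdefault(r, set()).add(p)
    est = dict(known)
    for _ in range(n_iter):          # propagate estimates through the graph
        for p in cites:
            if p in known:
                continue
            los = [est[r] for r in cites.get(p, ()) if r in est]    # lower bounds
            his = [est[q] for q in cited_by.get(p, ()) if q in est]  # upper bounds
            if los and his:
                est[p] = (max(los) + min(his)) / 2
            elif los:
                est[p] = max(los)
            elif his:
                est[p] = min(his)
    return est
```

Iterating lets newly estimated years serve as bounds for further papers, which is the information-propagation effect the advanced algorithms exploit.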
|
1306.4069 | An Efficient Distributed Data Extraction Method for Mining Sensor
Networks Data | cs.DB cs.NI | A wide range of Sensor Networks (SNs) are deployed in real-world applications
which generate large amounts of raw sensory data. Data mining to extract
useful knowledge from these applications is an emerging research area due to
its crucial importance, but it remains a challenge to discover knowledge
efficiently from sensor network data. In this paper we propose a Distributed
Data Extraction (DDE) method to extract data from sensor networks by applying
rule-based clustering and association rule mining techniques. A significant
amount of the sensor readings sent from the sensors to the data processing
point(s) may be lost or corrupted. DDE also estimates these missing values
from the available sensor readings instead of requesting the sensor nodes to
resend the lost readings. DDE further applies data reduction, which reduces
the data size while transmitting to the sink. Results show that our proposed
approach exhibits the best data accuracy and efficient data extraction in
terms of the entire network's energy consumption.
|
1306.4071 | A Microcontroller Based Device to Reduce Phantom Power | cs.SY | In this paper we concern ourselves with the problem of minimizing the standby
power consumption of household appliances. We propose a remote-controlled
device through which the amount of standby power consumed by the electrical
appliances connected to it can be reduced. The device provides the option of
controlling each connected appliance individually or all of them together
when required. It has a number of plug points, each of which can be
controlled through the remote, and also provides for switching off all the
points at once.
|
1306.4079 | A Novel Block-DCT and PCA Based Image Perceptual Hashing Algorithm | cs.CV | Image perceptual hashing finds applications in content indexing, large-scale
image database management, certification and authentication and digital
watermarking. We propose a Block-DCT and PCA based image perceptual hash in
this article and explore the algorithm in the application of tamper detection.
The main idea of the algorithm is to integrate color histogram and DCT
coefficients of image blocks as perceptual features, then to compress the
perceptual features into an intermediate feature with PCA, and to threshold
it to create a robust hash.
The robustness and discrimination properties of the proposed algorithm are
evaluated in detail. Our algorithms first construct a secondary image, derived
from input image by pseudo-randomly extracting features that approximately
capture semi-global geometric characteristics. From the secondary image (which
does not perceptually resemble the input), we further extract the final
features which can be used as a hash value (and can be further suitably
quantized). In this paper, we use spectral matrix invariants as embodied by
Singular Value Decomposition. Surprisingly, formation of the secondary image
turns out to be quite important since it not only introduces further robustness,
but also enhances the security properties. Indeed, our experiments reveal that
our hashing algorithms extract most of the geometric information from the
images and hence are robust to severe perturbations (e.g. up to 50% cropping by
area with 20-degree rotations) on images while avoiding misclassification.
Experimental results show that the proposed image perceptual hash algorithm can
effectively address the tamper detection problem with advantageous robustness
and discrimination.
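A greatly simplified illustration of block-based perceptual hashing: block means thresholded against the global mean stand in for the paper's Block-DCT and PCA pipeline, but show the same idea of a compact, comparison-friendly fingerprint.

```python
def block_hash(img, block=4):
    """img: 2D list of grayscale values. Each block's mean intensity is
    thresholded against the global mean of block means, giving one bit
    per block; similar images yield hashes with small Hamming distance."""
    h, w = len(img), len(img[0])
    means = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            s = sum(img[by + dy][bx + dx]
                    for dy in range(block) for dx in range(block))
            means.append(s / (block * block))
    gmean = sum(means) / len(means)
    return ''.join('1' if m >= gmean else '0' for m in means)

def hamming(h1, h2):
    """Number of differing hash bits; used as the perceptual distance."""
    return sum(a != b for a, b in zip(h1, h2))
```

Tamper detection then reduces to comparing the stored and recomputed hashes block by block; the paper's DCT/PCA features make this far more robust than raw block means.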
|
1306.4080 | Parallel Coordinate Descent Newton Method for Efficient
$\ell_1$-Regularized Minimization | cs.LG cs.NA | The recent years have witnessed advances in parallel algorithms for large
scale optimization problems. Notwithstanding demonstrated success, existing
algorithms that parallelize over features are usually limited by divergence
issues under high parallelism or require data preprocessing to alleviate these
problems. In this work, we propose a Parallel Coordinate Descent Newton
algorithm using multidimensional approximate Newton steps (PCDN), where the
off-diagonal elements of the Hessian are set to zero to enable parallelization.
It randomly partitions the feature set into $b$ bundles/subsets of size
$P$, and sequentially processes each bundle by first computing the descent
directions for each feature in parallel and then conducting $P$-dimensional
line search to obtain the step size. We show that: (1) PCDN is guaranteed to
converge globally despite increasing parallelism; (2) PCDN converges to the
specified accuracy $\epsilon$ within a bounded number of iterations
$T_\epsilon$, and $T_\epsilon$ decreases with increasing parallelism (bundle
size $P$). Using the implementation technique of maintaining intermediate
quantities, we minimize the data transfer and synchronization cost of the
$P$-dimensional line search. For concreteness, the proposed PCDN algorithm is
applied to $\ell_1$-regularized logistic regression and $\ell_2$-loss SVM.
Experimental evaluations on six benchmark datasets show that the proposed PCDN
algorithm exploits parallelism well and outperforms the state-of-the-art
methods in speed without losing accuracy.
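A toy sketch of the bundling idea on $\ell_1$-regularized least squares. This is not the paper's PCDN: the per-coordinate updates here are exact coordinate minimizers rather than Newton steps, and the $P$-dimensional line search is replaced by crude $1/P$ damping, which still guarantees descent for convex objectives.

```python
import random

def soft(z, t):
    """Soft-thresholding operator for the l1 penalty."""
    return (z - t) if z > t else (z + t) if z < -t else 0.0

def bundled_cd(X, y, lam, P=2, n_epochs=50, seed=0):
    """Minimize 0.5*||y - X w||^2 + lam*||w||_1 by processing random
    feature bundles: coordinate steps inside a bundle are computed
    independently (as if in parallel), then jointly damped by 1/P."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    rng = random.Random(seed)
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(d)]
    for _ in range(n_epochs):
        idx = list(range(d)); rng.shuffle(idx)
        for s in range(0, d, P):              # one bundle of up to P features
            bundle = idx[s:s + P]
            r = [y[i] - sum(X[i][j] * w[j] for j in range(d)) for i in range(n)]
            deltas = {}
            for j in bundle:                  # independent per-coordinate steps
                g = sum(X[i][j] * r[i] for i in range(n))
                deltas[j] = soft(w[j] * col_sq[j] + g, lam) / col_sq[j] - w[j]
            step = 1.0 / max(1, len(bundle))  # damping instead of line search
            for j, dj in deltas.items():
                w[j] += step * dj
    return w
```

The paper's contribution is precisely to replace such worst-case damping with a $P$-dimensional line search, recovering fast convergence at high parallelism.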
|
1306.4092 | Application of particle swarm optimization for enhanced cyclic steam
stimulation in an offshore heavy oil reservoir | cs.CE | Three different variations of PSO algorithms, i.e. Canonical, Gaussian
Bare-bone and L\'evy Bare-bone PSO, are tested to optimize the ultimate oil
recovery of a large heavy oil reservoir. The performance of these algorithms
was compared in terms of convergence behaviour and the final optimization
results. It is found that, in general, all three types of PSO methods are able
to improve the objective function. The best objective function is found by
using the Canonical PSO, while the other two methods give similar results. The
Gaussian Bare-bone PSO may pick positions that are far away from the optimal
solution. The L\'evy Bare-bone PSO has similar convergence behaviour as the
Canonical PSO. For the specific optimization problem investigated in this
study, it is found that the temperature of the injection steam, CO2 composition
in the injection gas, and the gas injection rates have a bigger impact on the
objective function, while the steam injection rate and the liquid production
rate have less impact.
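For reference, a minimal canonical PSO. The reservoir simulator used as the objective in the study is replaced here by a toy sphere function; parameters follow commonly used convergent settings, not necessarily those of the paper.

```python
import random

def pso(f, dim, n_particles=20, n_iters=100, seed=0,
        w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Canonical PSO minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]                 # each particle's best position
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]   # swarm's best position
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], fx
                if fx < gbest_f:
                    gbest, gbest_f = xs[i][:], fx
    return gbest, gbest_f

best, val = pso(lambda x: sum(xi * xi for xi in x), dim=3)
```

The Bare-bone variants replace the velocity update with sampling around pbest/gbest (Gaussian or Lévy), which is what the abstract compares against this canonical form.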
|
1306.4134 | Dialogue System: A Brief Review | cs.CL | A dialogue system is a system which interacts with humans in natural
language. At present many universities are developing dialogue systems in
their regional languages. This paper discusses dialogue systems, their
components, challenges and evaluation, and serves as a brief reference for
researchers seeking information on dialogue systems.
|
1306.4136 | Dynamical interplay between awareness and epidemic spreading in
multiplex networks | physics.soc-ph cond-mat.stat-mech cs.SI | We present an analysis of the interrelation between two processes on top of
multiplex networks: the spreading of an epidemic, and the information
awareness that prevents its infection. This scenario is representative of an
epidemic process spreading on a network of persistent real contacts, and a
cyclic information awareness process diffusing in the network of virtual social
contacts between the same individuals. The topology corresponds to a multiplex
network where two diffusive processes are interacting affecting each other. The
analysis using a Microscopic Markov Chain Approach (MMCA) reveals the phase
diagram of the epidemic incidence and allows us to capture the evolution
of the epidemic threshold depending on the topological structure of the
multiplex and the interrelation with the awareness process. Interestingly,
the critical point for the onset of the epidemics has a meta-critical value,
defined by the awareness dynamics and the topology of the virtual network,
from which the onset is delayed and the epidemic incidence decreases.
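A toy agent-based counterpart of the coupled dynamics (a crude simulation, not the MMCA analysis): an SIS epidemic on the physical layer whose infection probability is reduced for aware nodes, coupled to awareness diffusing and fading on the virtual layer. All parameter names and values are illustrative.

```python
import random

def simulate(phys, virt, n, beta=0.3, mu=0.2, lam=0.4, delta=0.3,
             gamma=0.5, n_steps=200, seed=0):
    """phys, virt: adjacency lists of the two layers over the same n nodes.
    Returns the final fraction of infected nodes."""
    rng = random.Random(seed)
    inf = [i == 0 for i in range(n)]      # node 0 starts infected
    aware = inf[:]                        # infected nodes are aware
    for _ in range(n_steps):
        new_inf, new_aw = inf[:], aware[:]
        for i in range(n):
            if inf[i]:
                if rng.random() < mu:     # recovery
                    new_inf[i] = False
            else:
                b = beta * (gamma if aware[i] else 1.0)  # awareness protects
                for j in phys[i]:
                    if inf[j] and rng.random() < b:
                        new_inf[i] = True
                        break
            if aware[i]:
                if not inf[i] and rng.random() < delta:  # awareness fades
                    new_aw[i] = False
            else:
                for j in virt[i]:         # awareness spreads on virtual layer
                    if aware[j] and rng.random() < lam:
                        new_aw[i] = True
                        break
        inf, aware = new_inf, new_aw
        for i in range(n):
            aware[i] = aware[i] or inf[i]  # infection implies awareness
    return sum(inf) / n
```

Sweeping `beta` in such a simulation exhibits the threshold behaviour that the MMCA characterizes analytically, including its dependence on the awareness parameters.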
|
1306.4139 | Punjabi Language Interface to Database: a brief review | cs.CL cs.HC | Unlike most user-computer interfaces, a natural language interface allows
users to communicate fluently with a computer system with very little
preparation. Databases are often hard for users to work with because of
their rigid interfaces. A good NLIDB allows a user to enter commands
and ask questions in native language and then after interpreting respond to the
user in native language. For a large number of applications requiring
interaction between humans and the computer systems, it would be convenient to
provide an end-user-friendly interface. A Punjabi language interface to
database would prove fruitful to the native people of Punjab, as it makes it
easy for them to use various e-governance applications like Punjab Sewa,
Suwidha, Online Public Utility Forms, Online Grievance Cell, Land Records
Management System, legacy matters, e-District, agriculture, etc. Punjabi is
the mother tongue of more
than 110 million people all around the world. According to available
information, Punjabi ranks 10th from top out of a total of 6,900 languages
recognized internationally by the United Nations. This paper covers a brief
overview of the Natural language interface to database, its different
components, its advantages, disadvantages, approaches and techniques used. The
paper ends with the work done on Punjabi language interface to database and
future enhancements that can be done.
|
1306.4144 | Optimal Relay Placement for Capacity and Performance Improvement using a
Fluid Model for Heterogeneous Wireless Networks | cs.NI cs.IT math.IT | In this paper, we address the problem of optimal relay placement in a
cellular network assuming network densification, with the aim of maximizing
cell capacity. In our model, a fraction of radio resources is dedicated to the
base-station (BS)/relay nodes (RN) communication. In the remaining resources,
BS and RN transmit simultaneously to users. During this phase, the network is
densified in the sense that the transmitter density, and hence the network
capacity, is increased. Intra- and inter-cell interference is taken into
account in simple Signal to Interference plus Noise Ratio (SINR) formulas
derived from a fluid model for heterogeneous networks. Optimization can then
be quickly
performed using Simulated Annealing. Performance results show that cell
capacity is boosted thanks to densification despite a degradation of the signal
quality. Bounds are also provided on the fraction of resources dedicated to the
BS-RN link.
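A generic simulated-annealing loop of the kind mentioned above (the paper's objective, cell capacity from the fluid-model SINR formulas, is abstracted behind whatever function `f` is supplied; all parameters are illustrative):

```python
import math, random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995,
                        n_iters=2000, seed=0):
    """Minimize f over continuous variables (e.g. relay coordinates)
    by accepting worse moves with probability exp(-increase / T)."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    best, best_f = x[:], fx
    t = t0
    for _ in range(n_iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]  # local perturbation
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < best_f:
                best, best_f = x[:], fx
        t *= cooling                     # geometric cooling schedule
    return best, best_f
```

For relay placement, `x` would hold the RN coordinates and `f` would return the negative cell capacity; the fluid-model SINR formulas make each evaluation cheap, which is what makes this search fast.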
|
1306.4149 | Exploring the limits of community detection strategies in complex
networks | physics.soc-ph cond-mat.stat-mech cs.SI | The characterization of network community structure has profound implications
in several scientific areas. Therefore, testing the algorithms developed to
establish the optimal division of a network into communities is a fundamental
problem in the field. We performed here a highly detailed evaluation of
community detection algorithms, which has two main novelties: 1) using complex
closed benchmarks, which provide precise ways to assess whether the solutions
generated by the algorithms are optimal; and, 2) a novel type of analysis,
based on hierarchically clustering the solutions suggested by multiple
community detection algorithms, which allows easy visualization of how
different those solutions are. Surprise, a global parameter that evaluates the quality of
a partition, confirms the power of these analyses. We show that none of the
community detection algorithms tested provide consistently optimal results in
all networks and that Surprise maximization, obtained by combining multiple
algorithms, obtains quasi-optimal performances in these difficult benchmarks.
|
1306.4152 | Bioclimating Modelling: A Machine Learning Perspective | cs.LG stat.ML | Many machine learning (ML) approaches are widely used to generate bioclimatic
models for predicting the geographic range of organisms as a function of
climate. Applications such as predicting range shifts in organisms and the
range of invasive species under climate change are important in
understanding the impact of climate change. However, the success of machine
learning-based
approaches depends on a number of factors. While it can be safely said that no
particular ML technique can be effective in all applications and success of a
technique is predominantly dependent on the application or the type of the
problem, it is useful to understand their behaviour to ensure informed choice
of techniques. This paper presents a comprehensive review of machine
learning-based bioclimatic model generation and analyses the factors
influencing success of such models. Considering the wide use of statistical
techniques, in our discussion we also include conventional statistical
techniques used in bioclimatic modelling.
|
1306.4166 | Second-Order Asymptotics of Conversions of Distributions and Entangled
States Based on Rayleigh-Normal Probability Distributions | quant-ph cs.IT math.IT | We discuss the asymptotic behavior of conversions between two independent and
identical distributions up to the second-order conversion rate when the
conversion is produced by a deterministic function from the input probability
space to the output probability space. To derive the second-order conversion
rate, we introduce new probability distributions named Rayleigh-normal
distributions. The family of Rayleigh-normal distributions includes a Rayleigh
distribution and coincides with the standard normal distribution in the limit
case. Using this family of probability distributions, we represent the
asymptotic second-order rates for the distribution conversion. As an
application, we also consider the asymptotic behavior of conversions between
the multiple copies of two pure entangled states in quantum systems when only
local operations and classical communications (LOCC) are allowed. This problem
contains entanglement concentration, entanglement dilution and a kind of
cloning problem with LOCC restriction as special cases.
|
1306.4193 | Gravity Effects on Information Filtering and Network Evolving | physics.soc-ph cs.IR cs.SI | In this paper, based on the gravity principle of classical physics, we
propose a tunable gravity-based model, which considers tag usage pattern to
weigh both the mass and distance of network nodes. We then apply this model in
solving the problems of information filtering and network evolving.
Experimental results on two real-world data sets, \emph{Del.icio.us} and
\emph{MovieLens}, show that it can not only enhance the algorithmic
performance, but can also better characterize the properties of real networks.
This work may shed some light on an in-depth understanding of the effect of
the gravity model.
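The general gravity form underlying such models can be sketched as follows. The paper's tag-based definitions of node mass and distance are abstracted behind `mass` and `dist`, which are placeholders here, as is the tunable exponent `alpha`:

```python
def gravity_score(mass, dist, i, j, alpha=2.0):
    """Gravity-style attraction between nodes i and j:
    mass_i * mass_j / distance(i, j)**alpha."""
    return mass[i] * mass[j] / dist(i, j) ** alpha

def rank_candidates(mass, dist, i, candidates, alpha=2.0):
    """Rank candidate nodes for i (e.g. items to recommend or links to
    predict) by decreasing gravity score."""
    return sorted(candidates,
                  key=lambda j: gravity_score(mass, dist, i, j, alpha),
                  reverse=True)
```

Tuning `alpha` trades off the pull of "massive" (popular) nodes against proximity, which is the lever the model uses for both information filtering and network evolution.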
|
1306.4230 | On the Broadcast Latency in Finite Cooperative Wireless Networks | cs.IT math.IT | The aim of this paper is to study the effect of cooperation on system delay,
quantified as the number of retransmissions required to deliver a broadcast
message to all intended receivers. Unlike existing works on broadcast
scenarios, where distance between nodes is not explicitly considered, we
examine the joint effect of small scale fading and propagation path loss. Also,
we study cooperation in application to finite networks, i.e. when the number of
cooperating nodes is small. Stochastic geometry and order statistics are used
to develop analytical models that tightly match the simulation results for
the non-cooperative scenario and provide a lower bound for the delay in a cooperative
setting. We demonstrate that even for a simple flooding scenario, cooperative
broadcast achieves significantly lower system delay.
|
1306.4303 | Distributed conjugate gradient strategies for parameter estimation over
sensor networks | cs.IT math.IT | This paper presents distributed adaptive algorithms based on the conjugate
gradient (CG) method for distributed networks. Both incremental and diffusion
adaptive solutions are considered. The distributed conventional CG and
modified CG (MCG) algorithms have improved performance in terms of mean
square error as compared with least-mean square (LMS)-based algorithms, and
performance that is close to recursive least-squares (RLS) algorithms. The
resulting algorithms are distributed, cooperative and able to respond in real
time to changes in the environment.
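For reference, plain (centralized) conjugate gradient for solving $Ax = b$ with symmetric positive-definite $A$; the paper distributes updates of this kind across sensor nodes in incremental and diffusion modes, which is not reproduced here.

```python
def conjugate_gradient(A, b, n_iters=50, tol=1e-10):
    """Solve A x = b (A symmetric positive definite) by CG."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual b - A x (x = 0 initially)
    p = r[:]                       # search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(n_iters):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        # new direction is A-conjugate to the previous ones
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```

In the adaptive setting, `A` and `b` correspond to (local estimates of) the input correlation matrix and cross-correlation vector, updated as data stream in.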
|
1306.4345 | An Overview of the Research on Texture Based Plant Leaf Classification | cs.CV | Plant classification has broad application prospects in agriculture and
medicine, and is especially significant to biodiversity research. As
plants are vitally important for environmental protection, it is more important
to identify and classify them accurately. Plant leaf classification is a
technique where leaf is classified based on its different morphological
features. The goal of this paper is to provide an overview of different aspects
of texture-based plant leaf classification and related topics. We conclude
with the most efficient method, i.e. the one that gives better performance
compared to the other methods.
|
1306.4350 | Joint Unitary Triangularization for Gaussian Multi-User MIMO Networks | cs.IT math.IT | The problem of transmitting a common message to multiple users over the
Gaussian multiple-input multiple-output broadcast channel is considered, where
each user is equipped with an arbitrary number of antennas. A closed-loop
scenario is assumed, for which a practical capacity-approaching scheme is
developed. By applying judiciously chosen unitary operations at the transmit
and receive nodes, the channel matrices are triangularized so that the
resulting matrices have equal diagonals, up to a possible multiplicative scalar
factor. This, along with the utilization of successive interference
cancellation, reduces the coding and decoding tasks to those of coding and
decoding over the single-antenna additive white Gaussian noise channel. Over
the resulting effective channel, any off-the-shelf code may be used. For the
two-user case, it was recently shown that such joint unitary triangularization
is always possible. In this paper, it is shown that for more than two users, it
is necessary to carry out the unitary linear processing jointly over multiple
channel uses, i.e., space-time processing is employed. It is further shown that
exact triangularization, where all resulting diagonals are equal, is still not
always possible, and appropriate conditions for its existence are
established for certain cases. When exact triangularization is not possible, an
asymptotic construction is proposed, that achieves the desired property of
equal diagonals up to edge effects that can be made arbitrarily small, at the
price of processing a sufficiently large number of channel uses together.
|
1306.4355 | Blind Calibration in Compressed Sensing using Message Passing Algorithms | cs.IT cond-mat.stat-mech math.IT | Compressed sensing (CS) is a concept that allows one to acquire compressible
signals with a small number of measurements. As such it is very attractive for
hardware implementations. Therefore, correct calibration of the hardware is a
central issue. In this paper we study the so-called blind calibration, i.e.
when the training signals that are available to perform the calibration are
sparse but unknown. We extend the approximate message passing (AMP) algorithm
used in CS to the case of blind calibration. In the calibration-AMP, both the
gains on the sensors and the elements of the signals are treated as unknowns.
Our algorithm is also applicable to settings in which the sensors distort the
measurements in other ways than multiplication by a gain, unlike previously
suggested blind calibration algorithms based on convex relaxations. We study
numerically the phase diagram of the blind calibration problem, and show that
even in cases where convex relaxation is possible, our algorithm requires a
smaller number of measurements and/or signals in order to perform well.
|