| id | title | categories | abstract |
|---|---|---|---|
1305.7393 | Performance of Single User vs. Multiuser Modulation in Wireless
Multicarrier (MC) Communications | cs.IT math.IT | The main objective of this paper is to compare block transmission system
performance analytically for Code Divisional Multiple Access (CDMA) modulations
in Generalized Multicarrier environment against linear modulation techniques
for both single-user and multiuser cases. The effectiveness of GMC-CDMA for
multiple users will also be judged for different Direct Sequence CDMA (DS-CDMA)
schemes such as MC-CDMA, MC-DS-CDMA and MC-SS-CDMA. The analytical comparison will be in terms of
computing probability of bit error for frequency selective and slow flat fading
channels for different modulation techniques. The Bit Error Rate should be a
good indication of the performance. The tolerance characteristics of
DS-CDMA in frequency-selective channels and MC-CDMA in flat fading channels
will be shown analytically and improved capacity and Bit Error Rate performance
will be derived for Block Spread Multiuser Multicarrier by relying on block
symbol spreading and Fast Fourier Transform (FFT) operations. GMC-CDMA should give
guaranteed symbol recovery regardless of channel limitations.
|
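The BER comparison described in this abstract rests on standard closed-form error probabilities. As a minimal illustration (our own sketch, not the paper's derivation), the BPSK bit error rate in AWGN and its average over Rayleigh slow flat fading can be computed as:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def ber_bpsk_awgn(ebn0):
    """BPSK bit error probability in AWGN: Q(sqrt(2 * Eb/N0))."""
    return q_function(math.sqrt(2 * ebn0))

def ber_bpsk_rayleigh(ebn0):
    """BPSK bit error probability averaged over Rayleigh flat fading."""
    return 0.5 * (1 - math.sqrt(ebn0 / (1 + ebn0)))

ebn0 = 10 ** (10 / 10)               # 10 dB in linear scale
print(ber_bpsk_awgn(ebn0))           # ~ 3.9e-6
print(ber_bpsk_rayleigh(ebn0))       # ~ 2.3e-2
```

The gap of several orders of magnitude at the same Eb/N0 is exactly why fading-channel performance is the benchmark of interest.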
1305.7395 | PageRank model of opinion formation on Ulam networks | nlin.CD cs.SI physics.soc-ph | We consider a PageRank model of opinion formation on Ulam networks, generated
by the intermittency map and the typical Chirikov map. The Ulam networks
generated by these maps have certain similarities with such scale-free networks
as the World Wide Web (WWW), showing an algebraic decay of the PageRank
probability. We find that the opinion formation process on Ulam networks has
certain similarities to, but also distinct features from, the WWW. We
attribute these distinctions to internal differences in network structure of
the Ulam and WWW networks. We also analyze the process of opinion formation in
the frame of a generalized Sznajd model, which protects the opinion of small
communities.
|
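The ranking step underlying this model is ordinary damped PageRank; the Ulam-network construction from the intermittency and Chirikov maps is specific to the paper, but the iteration itself can be sketched generically (the toy graph below is ours):

```python
def pagerank(links, damping=0.85, iters=100):
    """Power-iteration PageRank. `links[i]` lists the nodes that page i
    links to; dangling pages spread their rank uniformly."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1 - damping) / n] * n
        for i, outs in enumerate(links):
            if outs:
                share = damping * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:  # dangling page: distribute uniformly
                for j in range(n):
                    new[j] += damping * rank[i] / n
        rank = new
    return rank

# Toy directed graph: 0 -> {1, 2}, 1 -> {2}, 2 -> {0}.
print(pagerank([[1, 2], [2], [0]]))
```

An algebraic decay of these probabilities over node rank is the scale-free signature the abstract refers to.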
1305.7416 | The Dendritic Cell Algorithm for Intrusion Detection | cs.CR cs.NE | As one of the solutions to intrusion detection problems, Artificial Immune
Systems (AIS) have shown their advantages. Unlike genetic algorithms, there is
no one archetypal AIS; instead there are four major paradigms. Among them, the
Dendritic Cell Algorithm (DCA) has produced promising results in various
applications. The aim of this chapter is to demonstrate the potential for the
DCA as a suitable candidate for intrusion detection problems. We review some of
the commonly used AIS paradigms for intrusion detection problems and
demonstrate the advantages of one particular algorithm, the DCA. In order to
clearly describe the algorithm, the background to its development and a formal
definition are given. In addition, improvements to the original DCA are
presented and their implications are discussed, including previous work done on
an online analysis component with segmentation and ongoing work on automated
data preprocessing. Based on preliminary results, both improvements appear to
be promising for online anomaly-based intrusion detection.
|
1305.7422 | Evaluating Different Cost-Benefit Analysis Methods for Port Security
Operations | cs.CE | Service industries, such as ports, are attentive to their standards, a smooth
service flow and economic viability. Cost benefit analysis has proven itself as
a useful tool to support this type of decision making; it has been used by
businesses and governmental agencies for many years. In this book chapter we
demonstrate different modelling methods that are used for estimating input
factors required for conducting cost benefit analysis based on a single case
study. These methods are: scenario analysis, decision trees, Monte-Carlo
simulation modelling and discrete event simulation modelling. Our aims are, on
the one hand, to guide the analyst through the modelling processes and, on the
other hand, to demonstrate what additional decision support information can be
obtained from applying each of these modelling methods.
|
1305.7424 | Investigating the effectiveness of Variance Reduction Techniques in
Manufacturing, Call Center and Cross-docking Discrete Event Simulation Models | cs.CE | Variance reduction techniques have been shown to be a
useful tool for reducing variance in simulation studies. However, their
application and success in the past have been mainly domain specific, with
relatively few guidelines as to their general applicability, in particular
for novices in this area. To facilitate their use, this study aims to
investigate the robustness of individual techniques across a set of scenarios
from different domains. Experimental results show that Control Variates is the
only technique which achieves a reduction in variance across all domains.
Furthermore, applied individually, Antithetic Variates and Control Variates
perform particularly well in the Cross-docking scenarios, which was previously
unknown.
|
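As a concrete illustration of one of the techniques compared above, antithetic variates pairs each uniform draw U with 1-U; for a monotone integrand the two halves are negatively correlated, shrinking the estimator's variance at the same sample budget. The toy target E[exp(U)] below is our own example, not one of the study's scenarios:

```python
import math
import random
import statistics

def crude_mc(n, rng):
    """Plain Monte Carlo estimate of E[exp(U)], U ~ Uniform(0, 1)."""
    return sum(math.exp(rng.random()) for _ in range(n)) / n

def antithetic_mc(n_pairs, rng):
    """Antithetic variates: average exp(U) with exp(1 - U) in each pair;
    the negative correlation shrinks the estimator's variance."""
    total = 0.0
    for _ in range(n_pairs):
        u = rng.random()
        total += 0.5 * (math.exp(u) + math.exp(1 - u))
    return total / n_pairs

rng = random.Random(0)
# Equal sample budget: 200 draws per estimate, replicated 200 times.
crude_runs = [crude_mc(200, rng) for _ in range(200)]
anti_runs = [antithetic_mc(100, rng) for _ in range(200)]
print(statistics.variance(anti_runs) < statistics.variance(crude_runs))  # True
```

Control variates works analogously but subtracts a correlated quantity with known mean instead of pairing draws.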
1305.7430 | Dynamical Systems to Monitor Complex Networks in Continuous Time | cs.SI math.DS physics.soc-ph | In many settings it is appropriate to treat the evolution of pairwise
interactions over continuous time. We show that new Katz-style centrality
measures can be derived in this context via solutions to a nonautonomous ODE
driven by the network dynamics. This allows us to identify and track, at any
resolution, the most influential nodes in terms of broadcasting and receiving
information through time dependent links. In addition to the classical notion
of attenuation across edges used in the static Katz centrality measure, the ODE
also allows for attenuation over time, so that real time "running measures" can
be computed. With regard to computational efficiency, we explain why it is
cheaper to track good receivers of information than good broadcasters. We
illustrate the new measures on a large scale voice call network, where key
features are discovered that are not evident from snapshots or aggregates.
|
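A discrete-snapshot sketch of such a running, time-attenuated Katz-style measure (in the style of the Grindrod-Higham matrix iteration; the parameter values and toy snapshots below are ours, not the paper's) could look like:

```python
import numpy as np

def running_communicability(adj_seq, a=0.3, decay=0.5):
    """Running measure over network snapshots (our discretization of the
    continuous-time idea): S_k = (I + decay * S_{k-1}) @ inv(I - a * A_k) - I,
    which requires a < 1 / spectral_radius(A_k) for every snapshot."""
    n = adj_seq[0].shape[0]
    I = np.eye(n)
    S = np.zeros((n, n))
    for A in adj_seq:
        S = (I + decay * S) @ np.linalg.inv(I - a * A) - I
    return S.sum(axis=1), S.sum(axis=0)  # broadcast and receive scores

# Node 0 messages node 1 first, then node 1 messages node 2: only the
# time-respecting path 0 -> 1 -> 2 should register.
A1 = np.array([[0., 1., 0.], [0., 0., 0.], [0., 0., 0.]])
A2 = np.array([[0., 0., 0.], [0., 0., 1.], [0., 0., 0.]])
broadcast, receive = running_communicability([A1, A2])
```

The row and column sums play the roles of the broadcast and receive centralities mentioned in the abstract; `decay` is the over-time attenuation added to the usual Katz over-edges attenuation `a`.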
1305.7432 | Real-world Transfer of Evolved Artificial Immune System Behaviours
between Small and Large Scale Robotic Platforms | cs.NE cs.RO | In mobile robotics, a solid test for adaptation is the ability of a control
system to function not only in a diverse number of physical environments, but
also on a number of different robotic platforms. This paper demonstrates that a
set of behaviours evolved in simulation on a miniature robot (epuck) can be
transferred to a much larger-scale platform (Pioneer), both in simulation and
in the real world. The chosen architecture uses artificial evolution of epuck
behaviours to obtain a genetic sequence, which is then employed to seed an
idiotypic, artificial immune system (AIS) on the Pioneers. Despite numerous
hardware and software differences between the platforms, navigation and
target-finding experiments show that the evolved behaviours transfer very well
to the larger robot when the idiotypic AIS technique is used. In contrast,
transferability is poor when reinforcement learning alone is used, which
validates the adaptability of the chosen architecture.
|
1305.7434 | Motif Detection Inspired by Immune Memory (JORS) | cs.NE | The search for patterns or motifs in data represents an area of key interest
to many researchers. In this paper we present the Motif Tracking Algorithm, a
novel immune inspired pattern identification tool that is able to identify
variable length unknown motifs which repeat within time series data. The
algorithm searches from a neutral perspective that is independent of the data
being analysed and the underlying motifs. In this paper we test the flexibility
of the motif tracking algorithm by applying it to the search for patterns in
two industrial data sets. The algorithm is able to identify a population of
meaningful motifs in both cases, and the value of these motifs is discussed.
|
1305.7437 | Modelling Electricity Consumption in Office Buildings: An Agent Based
Approach | cs.CE cs.AI | In this paper, we develop an agent-based model which integrates four
important elements, i.e. organisational energy management policies/regulations,
energy management technologies, electric appliances and equipment, and human
behaviour, to simulate the electricity consumption in office buildings. Based
on a case study, we use this model to test the effectiveness of different
electricity management strategies, and solve practical office electricity
consumption problems. This paper theoretically contributes to an integration of
the four elements involved in the complex organisational issue of office
electricity consumption, and practically contributes to an application of an
agent-based approach for office building electricity consumption study.
|
1305.7438 | Heterogeneity Involved Network-based Algorithm Leads to Accurate and
Personalized Recommendations | physics.soc-ph cs.IR cs.SI | Heterogeneity of both the source and target objects is taken into account in
a network-based algorithm for the directional resource transformation between
objects. Based on a biased heat conduction recommendation method (BHC) which
considers the heterogeneity of the target object, we propose a heterogeneous
heat conduction algorithm (HHC), by further taking the source object degree as
the weight of diffusion. Tested on three real datasets, the Netflix, RYM and
MovieLens, the HHC algorithm is found to present a better recommendation in
both the accuracy and personalization than two excellent algorithms, i.e., the
original BHC and a hybrid algorithm of heat conduction and mass diffusion
(HHM), while not requiring any other accessorial information or parameter.
Moreover, the HHC even improves the recommendation accuracy on cold objects
(the so-called cold start problem), effectively relieving the recommendation
bias on objects with different levels of popularity.
|
1305.7440 | A Simple Generative Model of Collective Online Behaviour | physics.soc-ph cs.SI | Human activities increasingly take place in online environments, providing
novel opportunities for relating individual behaviours to population-level
outcomes. In this paper, we introduce a simple generative model for the
collective behaviour of millions of social networking site users who are
deciding between different software applications. Our model incorporates two
distinct components: one is associated with recent decisions of users, and the
other reflects the cumulative popularity of each application. Importantly,
although various combinations of the two mechanisms yield long-time behaviour
that is consistent with data, the only models that reproduce the observed
temporal dynamics are those that strongly emphasize the recent popularity of
applications over their cumulative popularity. This demonstrates---even when
using purely observational data without experimental design---that temporal
data-driven modelling can effectively distinguish between competing microscopic
mechanisms, allowing us to uncover new aspects of collective online behaviour.
|
1305.7445 | Eigenvector centrality of nodes in multiplex networks | physics.soc-ph cs.SI | We extend the concept of eigenvector centrality to multiplex networks, and
introduce several alternative parameters that quantify the importance of nodes
in a multi-layered networked system, including the definition of vectorial-type
centralities. In addition, we rigorously show that, under reasonable
conditions, such centrality measures exist and are unique. Computer experiments
and simulations demonstrate that the proposed measures provide substantially
different results when applied to the same multiplex structure, and highlight
the non-trivial relationships between the different measures of centrality
introduced.
|
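One concrete way to realize such a multiplex centrality (our construction for illustration; the paper develops several variants, including vectorial ones) is to take the leading eigenvector of the supra-adjacency matrix, with a coupling `omega` linking each node's copies across layers:

```python
import numpy as np

def multiplex_eigvec_centrality(layers, omega=1.0, iters=500):
    """Leading eigenvector of the supra-adjacency matrix: intra-layer
    adjacency blocks on the diagonal, omega * I coupling node copies
    across layers. Scores are aggregated per node over layers."""
    L, n = len(layers), layers[0].shape[0]
    supra = np.zeros((L * n, L * n))
    for a in range(L):
        supra[a*n:(a+1)*n, a*n:(a+1)*n] = layers[a]
        for b in range(L):
            if b != a:
                supra[a*n:(a+1)*n, b*n:(b+1)*n] = omega * np.eye(n)
    v = np.ones(L * n)
    for _ in range(iters):
        v = supra @ v + v   # shift by I keeps the power iteration primitive
        v /= np.linalg.norm(v)
    return v.reshape(L, n).sum(axis=0)  # aggregate copies of each node
```

Existence and uniqueness of the limit correspond to the Perron-Frobenius conditions that the paper makes rigorous.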
1305.7454 | Privileged Information for Data Clustering | cs.LG stat.ML | Many machine learning algorithms assume that all input samples are
independently and identically distributed from some common distribution on
either the input space X, in the case of unsupervised learning, or the input
and output space X x Y in the case of supervised and semi-supervised learning.
In recent years the relaxation of this assumption has been explored
and the importance of incorporating additional information within machine
learning algorithms has become more apparent. Traditionally such fusion of
information was the domain of semi-supervised learning. More recently the
inclusion of knowledge from separate hypothetical spaces has been proposed by
Vapnik as part of the supervised setting. In this work we are interested in
exploring Vapnik's idea of master-class learning and the associated learning
using privileged information, however within the unsupervised setting. Adoption
of the advanced supervised learning paradigm for the unsupervised setting
instigates investigation into the difference between privileged and technical
data. By means of our proposed aRi-MAX method, the stability of the KMeans
algorithm is improved and the best clustering solution is identified on an
artificial dataset. Subsequently an information-theoretic, dot-product-based
algorithm called P-Dot is proposed. This method has the ability to utilize a
wide variety of clustering techniques, individually or in combination, while
fusing privileged and technical data for improved clustering. Application of
the P-Dot method to the task of digit recognition confirms our findings in a
real-world scenario.
|
1305.7458 | Validation of a Microsimulation of the Port of Dover | cs.CE cs.AI physics.soc-ph | Modelling and simulating the traffic of heavily used but secure environments
such as seaports and airports is of increasing importance. Errors made when
simulating these environments can have long standing economic, social and
environmental implications. This paper discusses issues and problems that may
arise when designing a simulation strategy. Data for the Port is presented,
methods for lightweight vehicle assessment that can be used to calibrate and
validate simulations are also discussed along with a diagnosis of
overcalibration issues. We show that decisions about where the intelligence
lies in a system have important repercussions for the reliability of system
statistics. Finally, conclusions are drawn about how microsimulations can be
moved forward as a robust planning tool for the 21st century.
|
1305.7465 | Wavelet feature extraction and genetic algorithm for biomarker detection
in colorectal cancer data | cs.NE cs.CE | Biomarkers which predict patients' survival can play an important role in
medical diagnosis and treatment. How to select the significant biomarkers from
hundreds of protein markers is a key step in survival analysis. In this paper a
novel method is proposed to detect the prognostic biomarkers of survival in
colorectal cancer patients using wavelet analysis, genetic algorithm, and Bayes
classifier. One dimensional discrete wavelet transform (DWT) is normally used
to reduce the dimensionality of biomedical data. In this study one dimensional
continuous wavelet transform (CWT) was proposed to extract the features of
colorectal cancer data. One dimensional CWT has no ability to reduce the
dimensionality of data, but it captures features missed by the DWT and is thus
complementary to it. A genetic algorithm was performed on the extracted wavelet
coefficients to select the optimized features, using a Bayes classifier to build
its fitness function. The corresponding protein markers were located based on
the position of optimized features. Kaplan-Meier curve and Cox regression model
were used to evaluate the performance of selected biomarkers. Experiments were
conducted on colorectal cancer dataset and several significant biomarkers were
detected. A new protein biomarker, CD46, was found to be significantly
associated with survival time.
|
1305.7471 | Investigating Mathematical Models of Immuno-Interactions with
Early-Stage Cancer under an Agent-Based Modelling Perspective | cs.CE cs.AI | Many advances in research regarding immuno-interactions with cancer were
developed with the help of ordinary differential equation (ODE) models. These
models, however, are not effectively capable of representing problems involving
individual localisation, memory and emerging properties, which are common
characteristics of cells and molecules of the immune system. Agent-based
modelling and simulation is an alternative paradigm to ODE models that
overcomes these limitations. In this paper we investigate the potential
contribution of agent-based modelling and simulation when compared to ODE
modelling and simulation. We seek answers to the following questions: Is it
possible to obtain an equivalent agent-based model from the ODE formulation? Do
the outcomes differ? Are there any benefits of using one method compared to the
other? To answer these questions, we have considered three case studies using
established mathematical models of immune interactions with early-stage cancer.
These case studies were re-conceptualised under an agent-based perspective and
the simulation results were then compared with those from the ODE models. Our
results show that it is possible to obtain equivalent agent-based models (i.e.
implementing the same mechanisms); the simulation output of both types of
models however might differ depending on the attributes of the system to be
modelled. In some cases, additional insight from using agent-based modelling
was obtained. Overall, we can confirm that agent-based modelling is a useful
addition to the tool set of immunologists, as it has extra features that allow
for simulations with characteristics that are closer to the biological
phenomena.
|
1305.7476 | Theoretical formulation and analysis of the deterministic dendritic cell
algorithm | cs.NE cs.DS | As one of the emerging algorithms in the field of Artificial Immune Systems
(AIS), the Dendritic Cell Algorithm (DCA) has been successfully applied to a
number of challenging real-world problems. However, one criticism is the lack
of a formal definition, which could result in ambiguity for understanding the
algorithm. Moreover, previous investigations have mainly focused on its
empirical aspects. Therefore, it is necessary to provide a formal definition of
the algorithm, as well as to perform runtime analyses to reveal its theoretical
aspects. In this paper, we define the deterministic version of the DCA, named
the dDCA, using set theory and mathematical functions. Runtime analyses of the
standard algorithm and the one with additional segmentation are performed. Our
analysis suggests that the standard dDCA has a runtime complexity of O(n^2) for
the worst-case scenario, where n is the number of input data instances. The
introduction of segmentation changes the algorithm's worst case runtime
complexity to O(max(nN, nz)), for DC population size N with size of each
segment z. Finally, two runtime variables of the algorithm are formulated based
on the input data, to understand its runtime behaviour as guidelines for
further development.
|
1305.7477 | On model selection consistency of regularized M-estimators | math.ST cs.LG math.OC stat.ME stat.ML stat.TH | Regularized M-estimators are used in diverse areas of science and engineering
to fit high-dimensional models with some low-dimensional structure. Usually the
low-dimensional structure is encoded by the presence of the (unknown)
parameters in some low-dimensional model subspace. In such settings, it is
desirable for estimates of the model parameters to be \emph{model selection
consistent}: the estimates also fall in the model subspace. We develop a
general framework for establishing consistency and model selection consistency
of regularized M-estimators and show how it applies to some special cases of
interest in statistical learning. Our analysis identifies two key properties of
regularized M-estimators, referred to as geometric decomposability and
irrepresentability, that ensure the estimators are consistent and model
selection consistent.
|
1305.7480 | Path diversity improves the identification of influential spreaders | physics.soc-ph cs.SI | Identifying influential spreaders in complex networks is a crucial problem
which relates to wide applications. Many methods based on the global
information such as $k$-shell and PageRank have been applied to rank spreaders.
However, most related previous works overwhelmingly focus on the number of
paths for propagation, while whether the paths are diverse enough is usually
overlooked. Generally, the spreading ability of a node might not be strong if
its propagation depends on one or two paths while the other paths are dead
ends. In this Letter, we introduce the concept of path diversity and find that
it can largely improve the ranking accuracy. We further propose a local method
combining the information of path number and path diversity to identify
influential nodes in complex networks. This method is shown to outperform many
well-known methods in both undirected and directed networks. Moreover, the
efficiency of our method makes it possible to be applied to very large systems.
|
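The Letter's exact definition of path diversity is not reproduced here, but the intuition it describes, that a node whose propagation funnels through one or two neighbours while the rest are dead ends should rank low, can be captured by a hypothetical entropy-based local proxy (all names and formula choices below are our illustration):

```python
import math

def structural_diversity(adj, node):
    """Hypothetical local proxy for path diversity: normalized Shannon
    entropy of how the node's two-step reach is spread over its
    neighbours. Funneling through one neighbour scores low even when
    the degree is high."""
    nbrs = adj[node]
    if len(nbrs) < 2:
        return 0.0
    # onward options through each neighbour (+1 avoids zero probabilities)
    reach = [len(adj[v] - {node}) + 1 for v in nbrs]
    total = sum(reach)
    h = -sum((r / total) * math.log(r / total) for r in reach)
    return h / math.log(len(nbrs))  # normalize to [0, 1]
```

A local combination of neighbour count and such a diversity score is the kind of ranking signal the abstract proposes.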
1305.7484 | Technical Report: Convex Optimization of Nonlinear Feedback Controllers
via Occupation Measures | cs.RO cs.SY math.OC | In this paper, we present an approach for designing feedback controllers for
polynomial systems that maximize the size of the time-limited backwards
reachable set (BRS). We rely on the notion of occupation measures to pose the
synthesis problem as an infinite dimensional linear program (LP) and provide
finite dimensional approximations of this LP in terms of semidefinite programs
(SDPs). The solution to each SDP yields a polynomial control policy and an
outer approximation of the largest achievable BRS. In contrast to traditional
Lyapunov based approaches which are non-convex and require feasible
initialization, our approach is convex and does not require any form of
initialization. The resulting time-varying controllers and approximated
reachable sets are well-suited for use in a trajectory library or feedback
motion planning algorithm. We demonstrate the efficacy and scalability of our
approach on five nonlinear systems.
|
1306.0029 | A Hierarchical Modulation for Upgrading Digital Broadcast Systems | cs.IT math.IT | A hierarchical modulation scheme is proposed to upgrade an existing digital
broadcast system, such as satellite TV, or satellite radio, by adding more data
in its transmission. The hierarchical modulation consists of a basic
constellation, which is the same as in the original system, and a secondary
constellation, which carries the additional data for the upgraded system. The
upgraded system with the hierarchical modulation is backward compatible in the
sense that receivers that have been deployed in the original system can
continue receiving data in the basic constellation. New receivers can be
designed to receive data carried in the secondary constellation, as well as
those in the basic constellation. Analysis will be performed to show the
tradeoff between the bit rate of the data in the secondary constellation and
the penalty to the performance of receiving the basic constellation.
|
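The backward-compatibility argument can be made concrete with a toy 4/16-QAM layering (our illustrative mapping, not the paper's exact constellation): the basic QPSK bits pick the quadrant, the secondary bits nudge the point within it, and a legacy receiver that only looks at the signs still recovers the basic layer.

```python
def hier_mod(basic_bits, extra_bits, d1=2.0, d2=0.5):
    """Hierarchical 4/16-QAM sketch: basic QPSK bits select the quadrant
    at offset d1; secondary bits shift the point within it by d2 (< d1)."""
    symbols = []
    for (b0, b1), (e0, e1) in zip(basic_bits, extra_bits):
        i = (d1 if b0 else -d1) + (d2 if e0 else -d2)
        q = (d1 if b1 else -d1) + (d2 if e1 else -d2)
        symbols.append(complex(i, q))
    return symbols

def legacy_demod(symbols):
    """Original-system receiver: decodes only the quadrant (basic layer),
    so it keeps working after the secondary layer is added."""
    return [(s.real > 0, s.imag > 0) for s in symbols]
```

The performance penalty the abstract analyzes comes from d2 moving the basic-layer points closer to the decision boundaries.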
1306.0034 | Providing Local Content in a Hybrid Single Frequency Network using
Hierarchical Modulation | cs.IT math.IT | A hierarchical modulation method is proposed for providing local content in a
hybrid satellite and terrestrial single frequency network such as DVB-SH. The
hierarchical modulation is used to transmit both global and local content in
terrestrial transmitters. The global content is transmitted with the high priority
layer of the hierarchical modulation, and the local content is modulated with
the low priority layer of the hierarchical modulation. The satellite transmits
global content only. The performance of the hierarchical system for both global
and local content is analyzed.
|
1306.0036 | Optimal Decentralized State-Feedback Control with Sparsity and Delays | cs.SY math.OC | This work presents the solution to a class of decentralized linear quadratic
state-feedback control problems, in which the plant and controller must satisfy
the same combination of delay and sparsity constraints. Using a novel
decomposition of the noise history, the control problem is split into
independent subproblems that are solved using dynamic programming. The approach
presented herein both unifies and generalizes many existing results.
|
1306.0037 | Digital predistortion for power amplifiers using separable functions | cs.IT math.IT | This paper is concerned with digital predistortion for linearization of RF
high power amplifiers (HPAs). It has two objectives. First, we establish a
theoretical framework for a generic predistorter system, and show that if a
postdistorter exists, then it is also a predistorter, and therefore, the
predistorter and postdistorter are equivalent. This justifies the indirect
learning methods for a large class of HPAs. Secondly, we establish a systematic
and general structure for a predistorter that is capable of compensating
nonlinearity for a large variety of HPAs. This systematic structure is derived
using approximation by separable functions, and avoids selection of
predistorters based on the assumption of HPA models traditionally done in the
literature.
|
1306.0054 | Analysis and Evaluation of the Link and Content Based Focused
Treasure-Crawler | cs.IR cs.DL | Indexing the Web is becoming a laborious task for search engines as the Web
exponentially grows in size and distribution. Presently, the most effective
known approach to overcome this problem is the use of focused crawlers. A
focused crawler applies a proper algorithm in order to detect the pages on the
Web that relate to its topic of interest. For this purpose we propose a custom
method that uses specific HTML elements of a page to predict the topical focus
of all the pages that have an unvisited link within the current page. These
recognized on-topic pages have to be sorted later based on their relevance to
the main topic of the crawler for further actual downloads. In the
Treasure-Crawler, we use a hierarchical structure called the T-Graph which is
an exemplary guide to assign appropriate priority score to each unvisited link.
These URLs will later be downloaded based on this priority. This paper outlines
the architectural design and embodies the implementation, test results and
performance evaluation of the Treasure-Crawler system. The Treasure-Crawler is
evaluated in terms of information retrieval criteria such as recall and
precision, both with values close to 0.5. Such an outcome confirms the
significance of the proposed approach.
|
1306.0090 | Harmony search algorithm for the container storage problem | cs.NE | Recently a new metaheuristic called harmony search was developed. It mimics
the behavior of musicians improvising to find a better state of harmony. In
this paper, this algorithm is described and applied to solve the container
storage problem in the harbor. The objective of this problem is to determine a
valid container arrangement, which meets customers' delivery deadlines, reduces
the number of container rehandlings and minimizes the ship idle time. In this
paper, an adaptation of the harmony search algorithm to the container storage
problem is detailed and some experimental results are presented and discussed.
The proposed approach was compared to a genetic algorithm previously applied to
the same problem and recorded good results.
|
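The improvisation loop of harmony search is compact enough to sketch in full. The continuous toy setup below (sphere function, our parameter choices) shows the three standard moves, memory consideration, pitch adjustment, and random improvisation; the paper's contribution is the discrete container-storage encoding, which is not reproduced here.

```python
import random

def harmony_search(f, dim, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.1,
                   iters=5000, seed=1):
    """Plain continuous harmony search minimizing f over [lo, hi]^dim."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    costs = [f(x) for x in memory]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:          # consider a stored harmony...
                val = rng.choice(memory)[d]
                if rng.random() < par:       # ...with optional pitch adjustment
                    val += rng.uniform(-bw, bw)
            else:                            # or improvise a fresh value
                val = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, val)))
        cost = f(new)
        worst = max(range(hms), key=costs.__getitem__)
        if cost < costs[worst]:              # replace the worst harmony
            memory[worst], costs[worst] = new, cost
    best = min(range(hms), key=costs.__getitem__)
    return memory[best], costs[best]

x, c = harmony_search(lambda v: sum(t * t for t in v), dim=3, bounds=(-5, 5))
```

For the container problem, a candidate harmony would instead encode a storage arrangement and `f` would score deadlines, rehandlings and ship idle time.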
1306.0094 | Analysis of Mismatched Estimation Errors Using Gradients of Partition
Functions | cs.IT cond-mat.stat-mech math.IT | We consider the problem of signal estimation (denoising) from a
statistical-mechanical perspective, in continuation to a recent work on the
analysis of mean-square error (MSE) estimation using a direct relationship
between optimum estimation and certain partition functions. The paper consists
of essentially two parts. In the first part, using the aforementioned
relationship, we derive single-letter expressions of the mismatched MSE of a
codeword (from a randomly selected code), corrupted by a Gaussian vector
channel. In the second part, we provide several examples to demonstrate phase
transitions in the behavior of the MSE. These examples enable us to understand
more deeply and to gather intuition regarding the roles of the real and the
mismatched probability measures in creating these phase transitions.
|
1306.0095 | Universal Induction with Varying Sets of Combinators | cs.AI | Universal induction is a crucial issue in AGI. Its practical applicability
can be achieved by the choice of a reference machine or representation of
algorithms matched to the environment. This machine should be updatable for
solving subsequent tasks more efficiently. We study this problem on an example
of combinatory logic as the very simple Turing-complete reference machine,
which enables modifying program representations by introducing different sets
of primitive combinators. A genetic programming system is used to search for
combinator expressions, which are easily decomposed into sub-expressions being
recombined in crossover. Our experiments show that low-complexity induction or
prediction tasks can be solved by the developed system (much more efficiently
than using brute force); useful combinators can be revealed and included into
the representation simplifying more difficult tasks. However, optimal sets of
combinators depend on the specific task, so the reference machine should be
adaptively chosen in coordination with the search engine.
|
1306.0112 | Deciphering the global organization of clustering in real complex
networks | physics.soc-ph cond-mat.dis-nn cs.SI | We uncover the global organization of clustering in real complex networks. As
it happens with other fundamental properties of networks such as the degree
distribution, we find that real networks are neither completely random nor
ordered with respect to clustering, although they tend to be closer to
maximally random architectures. We reach this conclusion by comparing the
global structure of clustering in real networks with that in maximally random
and in maximally ordered clustered graphs. The former are produced with an
exponential random graph model that maintains correlations among adjacent edges
at the minimum needed to conform with the expected clustering spectrum; the
latter with a random model that arranges triangles in cliques, inducing highly
ordered structures. To compare the global organization of clustering in real
and model networks, we compute $m$-core landscapes, where the $m$-core is
defined, akin to the $k$-core, as the maximal subgraph with edges participating
at least in $m$ triangles. This property defines a set of nested subgraphs
that, contrary to $k$-cores, is able to distinguish between hierarchical and
modular architectures. To visualize the $m$-core decomposition we developed the
LaNet-vi 3.0 tool.
|
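The $m$-core defined above, the maximal subgraph whose every edge participates in at least $m$ triangles, can be computed by a peeling procedure analogous to $k$-core decomposition. A minimal sketch for undirected graphs (our implementation, exploiting the fact that the triangles through edge (u, v) are the common neighbours of u and v):

```python
def m_core(edge_list, m):
    """Peel edges participating in fewer than m triangles until every
    surviving edge lies in at least m of them."""
    edges = {frozenset(e) for e in edge_list}
    changed = True
    while changed:
        changed = False
        adj = {}
        for e in edges:
            u, v = tuple(e)
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        survivors = set()
        for e in edges:
            u, v = tuple(e)
            # common neighbours of u and v = triangles through edge (u, v)
            if len(adj[u] & adj[v]) >= m:
                survivors.add(e)
            else:
                changed = True
        edges = survivors
    return edges
```

Sweeping m from 0 upward yields the nested subgraphs whose landscapes the paper compares across real and model networks.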
1306.0125 | Understanding ACT-R - an Outsider's Perspective | cs.LG | The ACT-R theory of cognition developed by John Anderson and colleagues
endeavors to explain how humans recall chunks of information and how they solve
problems. ACT-R also serves as a theoretical basis for "cognitive tutors",
i.e., automatic tutoring systems that help students learn mathematics, computer
programming, and other subjects. The official ACT-R definition is distributed
across a large body of literature spanning many articles and monographs, and
hence it is difficult for an "outsider" to learn the most important aspects of
the theory. This paper aims to provide a tutorial to the core components of the
ACT-R theory.
|
1306.0128 | Towards Detection of Bottlenecks in Modular Systems | cs.AI cs.SY | The paper describes some basic approaches to detection of bottlenecks in
composite (modular) systems. The following basic system bottlenecks detection
problems are examined: (1) traditional quality management approaches (Pareto
chart based method, multicriteria analysis as selection of Pareto-efficient
points, and/or multicriteria ranking), (2) selection of critical system
elements (critical components/modules, critical component interconnection), (3)
selection of interconnected system components as composite system faults (via
clique-based fusion), (4) critical elements (e.g., nodes) in networks, and (5)
predictive detection of system bottlenecks (detection of system components
based on forecasting of their parameters). Here, heuristic solving schemes are
used. Numerical examples illustrate the approaches.
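One ingredient listed above, selection of Pareto-efficient points, can be sketched directly. The dominance criterion and the "larger score means more critical" convention here are illustrative assumptions, not the paper's exact formulation:

```python
def pareto_efficient(points):
    """Return indices of points not dominated by any other point
    (dominated = another point is >= on every criterion and > on one)."""
    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p)
                       for j, q in enumerate(points) if j != i)]
```

Components scored on several "criticality" criteria can then be filtered to the non-dominated set before multicriteria ranking.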
|
1306.0139 | Image Inpainting by Kriging Interpolation Technique | cs.CV | Image inpainting is the art of predicting damaged regions of an image. The
manual way of image inpainting is time consuming. Therefore, there must be an
automatic digital method for image inpainting that recovers the image from the
damaged regions. In this paper, a novel statistical image inpainting algorithm
based on Kriging interpolation technique was proposed. Kriging technique
automatically fills the damaged region in an image using the information
available from its surrounding regions in such a way that it uses the spatial
correlation structure of points inside the k-by-k block. Kriging has the
ability to face the challenge of keeping the structure and texture information
as the size of the damaged region grows. Experimental results showed that
Kriging achieves a high PSNR value when recovering a variety of test images from
scratches and text as damaged regions.
|
1306.0150 | Simulation of Molecular Signaling in Blood Vessels: Software Design and
Application to Atherogenesis | cs.CE cond-mat.soft q-bio.QM | This paper presents a software platform, named BiNS2, able to simulate
diffusion-based molecular communications with drift inside blood vessels. The
contribution of the paper is twofold. First a detailed description of the
simulator is given, under the software engineering point of view, by
highlighting the innovations and optimizations introduced. Their introduction
into the previous version of the BiNS simulator was needed to provide
functions for simulating molecular signaling and communication potentials
inside bounded spaces. The second contribution consists of the analysis,
carried out by using BiNS2, of a specific communication process happening
inside blood vessels, the atherogenesis, which is the initial phase of the
formation of atherosclerotic plaques, due to the abnormal signaling between
platelets and endothelium. From a communication point of view, platelets act as
mobile transmitters, endothelial cells are fixed receivers, sticky to the
vessel walls, and the transmitted signal is made of bursts of molecules emitted
by platelets. The simulator allows evaluating the channel latency and the
footprint on the vessel wall of the transmitted signal as a function of the
transmitter distance from the vessels wall, the signal strength, and the
receiver sensitivity.
|
1306.0152 | An Analysis of the Connections Between Layers of Deep Neural Networks | cs.CV | We present an analysis of different techniques for selecting the connection
between layers of deep neural networks. Traditional deep neural networks use
random connection tables between layers to keep the number of connections
small and tune to different image features. This kind of connection performs
adequately in supervised deep networks because their values are refined during
the training. On the other hand, in unsupervised learning, one cannot rely on
back-propagation techniques to learn the connections between layers. In this
work, we tested four different techniques for connecting the first layer of the
network to the second layer on the CIFAR and SVHN datasets and showed that the
accuracy can be improved up to 3% depending on the technique used. We also
showed that learning the connections based on the co-occurrences of the
features does not confer an advantage over a random connection table in small
networks. This work is helpful to improve the efficiency of connections between
the layers of unsupervised deep neural networks.
|
1306.0155 | Dynamic Ad Allocation: Bandits with Budgets | cs.LG cs.DS | We consider an application of multi-armed bandits to internet advertising
(specifically, to dynamic ad allocation in the pay-per-click model, with
uncertainty on the click probabilities). We focus on an important practical
issue: advertisers are constrained in how much money they can spend on
their ad campaigns. This issue has not been considered in the prior work on
bandit-based approaches for ad allocation, to the best of our knowledge.
We define a simple, stylized model where an algorithm picks one ad to display
in each round, and each ad has a \emph{budget}: the maximal amount of money
that can be spent on this ad. This model admits a natural variant of UCB1, a
well-known algorithm for multi-armed bandits with stochastic rewards. We derive
strong provable guarantees for this algorithm.
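A hedged sketch of the stylized model: a UCB1-style index over ads, with an ad retired once its remaining budget cannot cover another click. The per-click cost model and the index are illustrative assumptions; the paper's exact algorithm and guarantees are not reproduced here:

```python
import math
import random

def budgeted_ucb1(click_probs, budgets, cost_per_click, rounds, seed=0):
    """Simulate UCB1 over ads, where an ad is eligible only while its
    budget can still pay for one more click. Returns per-ad spend."""
    rng = random.Random(seed)
    n = len(click_probs)
    plays, clicks, spend = [0] * n, [0] * n, [0.0] * n
    for t in range(1, rounds + 1):
        alive = [i for i in range(n)
                 if spend[i] + cost_per_click[i] <= budgets[i]]
        if not alive:
            break
        untried = [i for i in alive if plays[i] == 0]
        if untried:
            i = untried[0]                    # play each alive arm once
        else:                                 # then use the UCB1 index
            i = max(alive, key=lambda a: clicks[a] / plays[a]
                    + math.sqrt(2 * math.log(t) / plays[a]))
        plays[i] += 1
        if rng.random() < click_probs[i]:     # ad i is clicked
            clicks[i] += 1
            spend[i] += cost_per_click[i]
    return spend
```

By construction no ad ever spends beyond its budget, which is the constraint the abstract emphasizes.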
|
1306.0158 | Virality Prediction and Community Structure in Social Networks | cs.SI cs.CY physics.data-an physics.soc-ph | How does network structure affect diffusion? Recent studies suggest that the
answer depends on the type of contagion. Complex contagions, unlike infectious
diseases (simple contagions), are affected by social reinforcement and
homophily. Hence, the spread within highly clustered communities is enhanced,
while diffusion across communities is hampered. A common hypothesis is that
memes and behaviors are complex contagions. We show that, while most memes
indeed behave like complex contagions, a few viral memes spread across many
communities, like diseases. We demonstrate that the future popularity of a meme
can be predicted by quantifying its early spreading pattern in terms of
community concentration. The more communities a meme permeates, the more viral
it is. We present a practical method to translate data about community
structure into predictive knowledge about what information will spread widely.
This connection may lead to significant advances in computational social
science, social media analytics, and marketing applications.
|
1306.0160 | Phase Retrieval using Alternating Minimization | stat.ML cs.IT cs.LG math.IT | Phase retrieval problems involve solving linear equations, but with missing
sign (or phase, for complex numbers) information. More than four decades after
it was first proposed, the seminal error reduction algorithm of (Gerchberg and
Saxton 1972) and (Fienup 1982) is still the popular choice for solving many
variants of this problem. The algorithm is based on alternating minimization;
i.e. it alternates between estimating the missing phase information, and the
candidate solution. Despite its wide usage in practice, no global convergence
guarantees for this algorithm are known. In this paper, we show that a
(resampling) variant of this approach converges geometrically to the solution
of one such problem -- finding a vector $\mathbf{x}$ from
$\mathbf{y},\mathbf{A}$, where $\mathbf{y} =
\left|\mathbf{A}^{\top}\mathbf{x}\right|$ and $|\mathbf{z}|$ denotes a vector
of element-wise magnitudes of $\mathbf{z}$ -- under the assumption that
$\mathbf{A}$ is Gaussian.
Empirically, we demonstrate that alternating minimization performs similarly to
recently proposed convex techniques for this problem (which are based on
"lifting" to a convex matrix problem) in sample complexity and robustness to
noise. However, it is much more efficient and can scale to large problems.
Analytically, for a resampling version of alternating minimization, we show
geometric convergence to the solution, and sample complexity that is off by log
factors from obvious lower bounds. We also establish close to optimal scaling
for the case when the unknown vector is sparse. Our work represents the first
theoretical guarantee for alternating minimization (albeit with resampling) for
any variant of phase retrieval problems in the non-convex setting.
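For the real-valued case $\mathbf{y} = |\mathbf{A}\mathbf{x}|$, the alternation described above can be sketched as: estimate the missing signs from the current iterate, then solve the induced least-squares problem. This is the plain variant with random restarts, not the resampled variant the paper analyzes:

```python
import random

def _solve(M, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

def altmin_phase(A, y, iters=50, restarts=10):
    """Alternating minimization for y = |A x| (real signs): keep the
    restart achieving the smallest measurement residual."""
    m, n = len(A), len(A[0])
    # normal-equations matrix A^T A is fixed across iterations
    AtA = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
           for i in range(n)]
    best, best_res = None, float("inf")
    for seed in range(restarts):
        rng = random.Random(seed)
        x = [rng.gauss(0, 1) for _ in range(n)]
        for _ in range(iters):
            # step 1: fill in the missing signs from the current estimate
            b = [(1.0 if sum(a * v for a, v in zip(row, x)) >= 0 else -1.0)
                 * yi for row, yi in zip(A, y)]
            # step 2: least squares via (A^T A) x = A^T b
            Atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(n)]
            x = _solve(AtA, Atb)
        res = sum((abs(sum(a * v for a, v in zip(row, x))) - yi) ** 2
                  for row, yi in zip(A, y))
        if res < best_res:
            best, best_res = x, res
    return best
```

On Gaussian measurement matrices with enough measurements, the recovered vector matches the truth up to a global sign, which is all that phase retrieval can determine.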
|
1306.0162 | Cellular-Based Statistical Model for Mobile Dispersion | cs.IT math.IT | While analyzing mobile systems we often approximate the actual coverage
surface and assume an ideal cell shape. In a multi-cellular network, because of
its tessellating nature, a hexagon is preferred over a circular geometry.
Despite this reality, perhaps due to the inherent simplicity, only a model for
circular-based random spreading is available. However, if used, this results in an
unfair terminal distribution for non-circular contours. Therefore, in this
paper we specifically derived an unbiased node density model for a hexagon. We
then extended the principle and established stochastic ways to handle sectored
cells. Next, based on these mathematical findings, we created a generic
modeling tool that can support a complex network with varying position,
capacity, size, user density, and sectoring capability. Last, simulation was
used to verify the theoretical analysis.
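As a point of comparison for the derivations above, uniformly scattering nodes over a hexagonal cell can also be done by rejection sampling from the bounding box. This generic baseline is an illustrative assumption, not the paper's derived density model:

```python
import random

def hexagon_points(n, R=1.0, seed=0):
    """Uniformly scatter n nodes in a regular hexagon of circumradius R
    (flat-top, centred at the origin) by rejection from its bounding box."""
    rng = random.Random(seed)
    h = R * 3 ** 0.5 / 2                # apothem (half-height)
    pts = []
    while len(pts) < n:
        x = rng.uniform(-R, R)
        y = rng.uniform(-h, h)
        # inside test: below the apothem and inside the slanted edges
        if abs(y) <= h and abs(y) <= 3 ** 0.5 * (R - abs(x)):
            pts.append((x, y))
    return pts
```

Sectored cells could be emulated by filtering the accepted points by angle, in the spirit of the superposition approach the abstract mentions.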
|
1306.0165 | CRUC: Cold-start Recommendations Using Collaborative Filtering in
Internet of Things | cs.IR cs.NI | The Internet of Things (IoT) aims at interconnecting everyday objects
(including both things and users) and then using this connection information to
provide customized user services. However, IoT does not work in its initial
stages without adequate acquisition of user preferences. This is caused by the
cold-start problem, a situation where only a few users are interconnected.
To this end, we propose CRUC scheme - Cold-start Recommendations Using
Collaborative Filtering in IoT, involving formulation, filtering and prediction
steps. Extensive experiments over real cases and simulation have been performed
to evaluate the performance of CRUC scheme. Experimental results show that CRUC
efficiently solves the cold-start problem in IoT.
|
1306.0178 | Using a bag of Words for Automatic Medical Image Annotation with a
Latent Semantic | cs.IR cs.CV | We present in this paper a new approach for the automatic annotation of
medical images, using the approach of "bag-of-words" to represent the visual
content of the medical image combined with text descriptors based approach
tf.idf and reduced by latent semantic to extract the co-occurrence between
terms and visual terms. A medical report is composed of a text describing a
medical image. First, we index the text and extract all
relevant terms using a thesaurus containing MeSH medical concepts. In a second
phase, the medical image is indexed while recovering areas of interest which
are invariant to change in scale, light and tilt. To annotate a new medical
image, we use the approach of "bag-of-words" to recover the feature vector.
Indeed, we use the vector space model to retrieve similar medical images from
the training database. The calculation of the relevance value of an image to
the query image is based on the cosine function. We conclude with an experiment
carried out on five types of radiological imaging to evaluate the performance
of our system of medical annotation. The results showed that our approach works
better with more images from the radiology of the skull.
|
1306.0186 | RNADE: The real-valued neural autoregressive density-estimator | stat.ML cs.LG | We introduce RNADE, a new model for joint density estimation of real-valued
vectors. Our model calculates the density of a datapoint as the product of
one-dimensional conditionals modeled using mixture density networks with shared
parameters. RNADE learns a distributed representation of the data, while having
a tractable expression for the calculation of densities. A tractable likelihood
allows direct comparison with other methods and training by standard
gradient-based optimizers. We compare the performance of RNADE on several
datasets of heterogeneous and perceptual data, finding it outperforms mixture
models in all but one case.
|
1306.0193 | A Trust-based Recruitment Framework for Multi-hop Social Participatory
Sensing | cs.SI physics.soc-ph | The idea of social participatory sensing provides a substrate to benefit from
friendship relations in recruiting a critical mass of participants willing to
participate in a sensing campaign. However, the selection of suitable participants
who are trustable and provide high quality contributions is challenging. In
this paper, we propose a recruitment framework for social participatory
sensing. Our framework leverages multi-hop friendship relations to identify and
select suitable and trustworthy participants among friends or friends of
friends, and finds the most trustable paths to them. The framework also
includes a suggestion component which provides a cluster of suggested friends
along with the path to them, which can be further used for recruitment or
friendship establishment. Simulation results demonstrate the efficacy of our
proposed recruitment framework in terms of selecting a large number of
well-suited participants and providing contributions with high overall trust,
in comparison with one-hop recruitment architecture.
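One building block of such a framework, finding the most trustable path through friends of friends, reduces to Dijkstra under $-\log$ trust weights when the trust of a path is the product of edge trust values in $(0, 1]$. That multiplicative model is an assumption for this sketch, not necessarily the paper's:

```python
import heapq
import math

def most_trustable_path(trust, src, dst):
    """trust: dict {(u, v): value in (0, 1]} over undirected friend edges.
    Returns (path trust, path) maximizing the product of edge trusts."""
    adj = {}
    for (u, v), t in trust.items():
        adj.setdefault(u, []).append((v, t))
        adj.setdefault(v, []).append((u, t))
    # Dijkstra on -log(trust): minimizing the sum maximizes the product
    heap = [(0.0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == dst:
            return math.exp(-cost), path
        for nxt, t in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost - math.log(t), nxt, path + [nxt]))
    return 0.0, None
```

Note that a two-hop path of strong ties can beat a direct weak tie, which is exactly why multi-hop recruitment can outperform the one-hop architecture.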
|
1306.0194 | Genetic algorithms and solid state NMR pulse sequences | cs.CE physics.ins-det | The use of genetic algorithms for the optimisation of magic angle spinning
NMR pulse sequences is discussed. The discussion uses as an example the
optimisation of the C7 dipolar recoupling pulse sequence, aiming to achieve
improved efficiency for spin systems characterised by large chemical shielding
anisotropies and/or small dipolar coupling interactions. The optimised pulse
sequence is found to be robust over a wide range of parameters, requires only
minimal a priori knowledge of the spin system for experimental implementations
with buildup rates being solely determined by the magnitude of the dipolar
coupling interaction, but is found to be less broadbanded than the original C7
pulse sequence. The optimised pulse sequence breaks the synchronicity between
r.f. pulses and sample spinning.
|
1306.0196 | Cumulative Effect in Information Diffusion: A Comprehensive Empirical
Study on Microblogging Network | cs.SI physics.soc-ph | Cumulative effect in social contagions underlies many studies on the spread
of innovation, behaviors, and influence. However, few large-scale empirical
studies are conducted to validate the existence of cumulative effect in the
information diffusion on social networks. In this paper, using the
population-scale dataset from the largest Chinese microblogging website, we
conduct a comprehensive study on the cumulative effect in information
diffusion. We base our study on the diffusion network of each message, where
nodes are the involved users and links are the following relationships among
them. We find that multiple exposures to the same message indeed increase the
possibility of forwarding it. However, additional exposures cannot further
improve the chance of forwarding when the number of exposures crosses its peak
at two. This finding questions the cumulative effect hypothesis in information
diffusion. Furthermore, to clarify the forwarding preference among users, we
investigate both the structural motif of the diffusion network and the temporal
pattern of information diffusion process among users. The patterns provide
vital insight for understanding the variation of message popularity and explain
the characteristics of diffusion networks.
|
1306.0225 | Convergence Analysis and Parallel Computing Implementation for the
Multiagent Coordination Optimization Algorithm | math.OC cs.NE math.DS | In this report, a novel variation of Particle Swarm Optimization (PSO)
algorithm, called Multiagent Coordination Optimization (MCO), is implemented in
a parallel computing way for practical use by introducing MATLAB built-in
function "parfor" into MCO. Then we rigorously analyze the global convergence
of MCO by means of semistability theory. Besides sharing global optimal
solutions with the PSO algorithm, the MCO algorithm integrates cooperative
swarm behavior of multiple agents into the update formula by sharing velocity
and position information between neighbors to improve its performance.
Numerical evaluation of the parallel MCO algorithm is provided in the report by
running the proposed algorithm on supercomputers in the High Performance
Computing Center at Texas Tech University. In particular, the optimal value and
consuming time are compared with PSO and serial MCO by solving several
benchmark functions in the literature, respectively. Based on the simulation
results, the parallel MCO not only performs well compared with PSO on many
nonlinear, nonconvex optimization problems, but is also highly efficient in
terms of computational time.
|
1306.0233 | Scale-Free Networks with the Same Degree Distribution: Different
Structural Properties | cs.SI cond-mat.dis-nn cond-mat.stat-mech math.DS physics.soc-ph | We have analysed some structural properties of scale-free networks with the
same degree distribution. Departing from a degree distribution obtained from
the Barab\'asi-Albert (BA) algorithm, networks were generated using four
additional algorithms (Molloy-Reed, Kalisky, and two new models
named A and B) besides the BA algorithm itself. For each network, we have
calculated the following structural measures: average degree of the nearest
neighbours, central point dominance, clustering coefficient, the Pearson
correlation coefficient, and global efficiency. We found that different
networks with the same degree distribution may have distinct structural
properties. In particular, model B generates decentralized networks with a
larger number of components, a smaller giant component size, and a low global
efficiency when compared to the other algorithms, especially compared to the
centralized BA networks that have all vertices in a single component, with a
medium to high global efficiency. The other three models generate networks with
intermediate characteristics between B and BA models. A consequence of this
finding is that the dynamics of different phenomena on these networks may
differ considerably.
|
1306.0237 | Guided Random Forest in the RRF Package | cs.LG | Random Forest (RF) is a powerful supervised learner and has been popularly
used in many applications such as bioinformatics.
In this work we propose the guided random forest (GRF) for feature selection.
Similar to a feature selection method called guided regularized random forest
(GRRF), GRF is built using the importance scores from an ordinary RF. However,
the trees in GRRF are built sequentially, are highly correlated and do not
allow for parallel computing, while the trees in GRF are built independently
and can be implemented in parallel. Experiments on 10 high-dimensional gene
data sets show that, with a fixed parameter value (without tuning the
parameter), RF applied to features selected by GRF outperforms RF applied to
all features on 9 data sets and 7 of them have significant differences at the
0.05 level. Therefore, both accuracy and interpretability are significantly
improved. GRF selects more features than GRRF but leads to better
classification accuracy. Note in this work the guided random forest is guided
by the importance scores from an ordinary random forest, however, it can also
be guided by other methods such as human insights (by specifying $\lambda_i$).
GRF can be used in "RRF" v1.4 (and later versions), a package that also
includes the regularized random forest methods.
|
1306.0239 | Deep Learning using Linear Support Vector Machines | cs.LG stat.ML | Recently, fully-connected and convolutional neural networks have been trained
to achieve state-of-the-art performance on a wide variety of tasks such as
speech recognition, image classification, natural language processing, and
bioinformatics. For classification tasks, most of these "deep learning" models
employ the softmax activation function for prediction and minimize
cross-entropy loss. In this paper, we demonstrate a small but consistent
advantage of replacing the softmax layer with a linear support vector machine.
Learning minimizes a margin-based loss instead of the cross-entropy loss. While
there have been various combinations of neural nets and SVMs in prior art, our
results using L2-SVMs show that simply replacing softmax with linear SVMs
gives significant gains on popular deep learning datasets MNIST, CIFAR-10, and
the ICML 2013 Representation Learning Workshop's face expression recognition
challenge.
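The loss swap described above can be stated in a few lines: a one-vs-rest squared hinge (L2-SVM) loss over the network's class scores in place of softmax cross-entropy. A minimal sketch on given scores, with no network or training involved:

```python
def l2_svm_loss(scores, label):
    """One-vs-rest squared hinge (L2-SVM) loss for one sample.
    scores: list of class scores; label: index of the true class."""
    loss = 0.0
    for k, s in enumerate(scores):
        t = 1.0 if k == label else -1.0     # +1/-1 target per class
        margin = max(0.0, 1.0 - t * s)      # hinge
        loss += margin * margin             # squared (L2-SVM) variant
    return loss
```

Unlike cross-entropy, the loss is exactly zero once every class score clears its margin, so confidently correct samples stop contributing gradient.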
|
1306.0257 | Spatially distributed social complex networks | physics.soc-ph cond-mat.stat-mech cs.SI nlin.AO | We propose a bare-bones stochastic model that takes into account both the
geographical distribution of people within a country and their complex network
of connections. The model, which is designed to give rise to a scale-free
network of social connections and to visually resemble the geographical spread
seen in satellite pictures of the Earth at night, gives rise to a power-law
distribution for the ranking of cities by population size (but for the largest
cities) and reflects the notion that highly connected individuals tend to live
in highly populated areas. It also yields some interesting insights regarding
Gibrat's law for the rates of city growth (by population size), in partial
support of the findings in a recent analysis of real data [Rozenfeld et al.,
Proc. Natl. Acad. Sci. U.S.A. 105, 18702 (2008)]. The model produces a
nontrivial relation between city population and city population density and a
superlinear relationship between social connectivity and city population, both
of which seem quite in line with real data.
|
1306.0260 | A Distributed Algorithm for Solving Positive Definite Linear Equations
over Networks with Membership Dynamics | cs.SY cs.DC | This paper considers the problem of solving a symmetric positive definite
system of linear equations over a network of agents with arbitrary asynchronous
interactions and membership dynamics. The latter implies that each agent is
allowed to join and leave the network at any time, for infinitely many times,
and lose all its memory upon leaving. We develop Subset Equalizing (SE), a
distributed asynchronous algorithm for solving such a problem. To design and
analyze SE, we introduce a novel time-varying Lyapunov-like function, defined
on a state space with changing dimension, and a generalized concept of network
connectivity, capable of handling such interactions and membership dynamics.
Based on them, we establish the boundedness, asymptotic convergence, and
exponential convergence of SE, along with a bound on its convergence rate.
Finally, through extensive simulation, we show that SE is effective in a
volatile agent network and that a special case of SE, termed Groupwise
Equalizing, is significantly more bandwidth/energy efficient than two existing
algorithms in multi-hop wireless networks.
|
1306.0264 | Epidemic-like Proximity-based Traffic Offloading | cs.IT cs.SI math.IT | Cellular networks are overloaded due to the mobile traffic surge, and mobile
social networks (MSNets) carrying information flow can help reduce cellular
traffic load. If geographically-nearby users directly adopt WiFi or Bluetooth
technology (i.e., leveraging proximity-based communication) for information
spreading in MSNets, a portion of mobile traffic can be offloaded from cellular
networks. For many delay-tolerant applications, it is beneficial for traffic
offloading to pick some seed users as information sources, which help further
spread the information to others in an epidemic-like manner using
proximity-based communication. In this paper, we develop a theoretical
framework to study the issue of choosing only k seed users so as to maximize
the mobile traffic offloaded from cellular networks via proximity-based
communication. We introduce a gossip-style social cascade (GSC) model to model
the information diffusion process, which captures the epidemic-like nature of
proximity-based communication and characterizes users' social participation as
well. For static networks as a special-case study and mobile networks, we
establish an equivalent view and a temporal mapping of the information
diffusion process, respectively, leveraging virtual coupon collectors. We
further prove the submodularity in the information diffusion and propose a
greedy algorithm to choose the seed users for proximity-based traffic
offloading, yielding a solution within about 63% of the optimal value to the
traffic offloading maximization (TOM) problem. Experiments are carried out to
study the offloading performance of our approach, illustrating that
proximity-based communication can offload cellular traffic by over 60% with a
small number of seed users and the greedy algorithm significantly outperforms
the heuristic and random algorithms.
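The roughly 63% figure quoted above is the classic $(1 - 1/e)$ bound for greedy maximization of a monotone submodular function. A generic sketch, with a coverage-style objective standing in for the paper's expected offloaded traffic:

```python
def greedy_seeds(candidates, k, coverage):
    """coverage: dict user -> set of users they would reach.
    Pick k seeds greedily by marginal gain in total reach."""
    chosen, reached = [], set()
    for _ in range(k):
        best = max((u for u in candidates if u not in chosen),
                   key=lambda u: len(coverage[u] - reached),
                   default=None)
        if best is None or not (coverage[best] - reached):
            break                      # no remaining marginal gain
        chosen.append(best)
        reached |= coverage[best]
    return chosen, reached
```

Because the reach function is monotone and submodular, this greedy choice is guaranteed to achieve at least $1 - 1/e \approx 63\%$ of the optimal coverage.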
|
1306.0271 | KERT: Automatic Extraction and Ranking of Topical Keyphrases from
Content-Representative Document Titles | cs.LG cs.IR | We introduce KERT (Keyphrase Extraction and Ranking by Topic), a framework
for topical keyphrase generation and ranking. By shifting from the
unigram-centric traditional methods of unsupervised keyphrase extraction to a
phrase-centric approach, we are able to directly compare and rank phrases of
different lengths. We construct a topical keyphrase ranking function which
implements the four criteria that represent high quality topical keyphrases
(coverage, purity, phraseness, and completeness). The effectiveness of our
approach is demonstrated on two collections of content-representative titles in
the domains of Computer Science and Physics.
|
1306.0282 | An efficient method for evaluating BEM singular integrals on curved
elements with application in acoustic analysis | cs.CE math.NA | The polar coordinate transformation (PCT) method has been extensively used to
treat various singular integrals in the boundary element method (BEM). However,
the resultant integrands of the PCT tend to become nearly singular when (1) the
aspect ratio of the element is large or (2) the field point is close to the
element boundary; thus a large number of quadrature points are needed to
achieve a relatively high accuracy. In this paper, the first problem is
circumvented by using a conformal transformation so that the geometry of the
curved physical element is preserved in the transformed domain. The second
problem is alleviated by using a sigmoidal transformation, which makes the
quadrature points more concentrated around the near singularity.
By combining the proposed two transformations with the Guiggiani's method in
[M. Guiggiani, \emph{et al}.
A general algorithm for the numerical solution of hypersingular boundary
integral equations.
\emph{ASME Journal of Applied Mechanics}, 59(1992), 604-614], one obtains an
efficient and robust numerical method for computing the weakly-, strongly- and
hyper-singular integrals in high-order BEM with curved elements. Numerical
integration results show that, compared with the original PCT, the present
method can reduce the number of quadrature points considerably, for given
accuracy. For further verification, the method is incorporated into a 2-order
Nystr\"om BEM code for solving acoustic Burton-Miller boundary integral
equation. It is shown that the method can retain the convergence rate of the
BEM with much less quadrature points than the existing PCT. The method is
implemented in C language and freely available.
|
1306.0291 | Revisiting Circular-Based Random Node Simulation | cs.IT math.IT | In literature, a stochastic model for spreading nodes in a cellular cell is
available. Despite its existence, the current method does not offer any
versatility in dealing with sectored layers. Of course, this needed
adaptability could be created synthetically through heuristic means. However,
due to selective sampling, such practice dissolves the true randomness sought.
Hence, in this paper, a universal exact scattering model is derived. Also, as
an alternative to exhaustive simulation, a generic closed-form path-loss
predictor between a node and a BS is obtained. Further, using these results, an
algorithm based on the superposition principle is proposed. This will ensure
greater emulation flexibility, and attain a heterogeneous spatial density.
|
1306.0308 | Probabilistic Solutions to Differential Equations and their Application
to Riemannian Statistics | stat.ML cs.LG math.NA | We study a probabilistic numerical method for the solution of both boundary
and initial value problems that returns a joint Gaussian process posterior over
the solution. Such methods have concrete value in the statistics on Riemannian
manifolds, where non-analytic ordinary differential equations are involved in
virtually all computations. The probabilistic formulation permits marginalising
the uncertainty of the numerical solution such that statistics are less
sensitive to inaccuracies. This leads to new Riemannian algorithms for mean
value computations and principal geodesic analysis. Marginalisation also means
results can be less precise than point estimates, enabling a noticeable
speed-up over the state of the art. Our approach is an argument for a wider
point that uncertainty caused by numerical calculations should be tracked
throughout the pipeline of machine learning algorithms.
|
1306.0322 | Correlation of Automorphism Group Size and Topological Properties with
Program-size Complexity Evaluations of Graphs and Complex Networks | cs.IT cs.CC cs.CG math.IT q-bio.MN | We show that numerical approximations of Kolmogorov complexity (K) applied to
graph adjacency matrices capture some group-theoretic and topological
properties of graphs and empirical networks ranging from metabolic to social
networks. That K and the size of the group of automorphisms of a graph are
correlated opens up interesting connections to problems in computational
geometry, and thus connects several measures and concepts from complexity
science. We show that approximations of K characterise synthetic and natural
networks by their generating mechanisms, assigning lower algorithmic randomness
to complex network models (Watts-Strogatz and Barabasi-Albert networks) and
high Kolmogorov complexity to (random) Erdos-Renyi graphs. We derive these
results via two different Kolmogorov complexity approximation methods applied
to the adjacency matrices of the graphs and networks. The methods used are the
traditional lossless compression approach to Kolmogorov complexity, and a
normalised version of a Block Decomposition Method (BDM) measure, based on
algorithmic probability theory.
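The lossless-compression approach mentioned above can be illustrated crudely: compress the adjacency matrix's upper triangle with zlib and use the compressed size as a complexity proxy, expecting a highly regular graph to compress far better than an Erdos-Renyi one. This stands in for, but is not, the paper's normalised BDM measure:

```python
import random
import zlib

def adjacency_bits(n, edge):
    """edge(i, j) -> bool; serialize the upper triangle as '0'/'1' text."""
    return "".join("1" if edge(i, j) else "0"
                   for i in range(n) for j in range(i + 1, n))

def compressed_size(bits):
    """Compressed length in bytes, a crude upper-bound proxy for K."""
    return len(zlib.compress(bits.encode(), 9))

rng = random.Random(0)
n = 120
# a maximally regular graph: a simple path on n nodes
path = compressed_size(adjacency_bits(n, lambda i, j: j - i == 1))
# an Erdos-Renyi graph with p = 0.5 (near-incompressible)
er = compressed_size(adjacency_bits(n, lambda i, j: rng.random() < 0.5))
```

Consistent with the abstract, the structured graph receives a much smaller complexity estimate than the random one.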
|
1306.0340 | Majority-vote model on Opinion-Dependent Networks | physics.soc-ph cs.SI | We study a nonequilibrium model with up-down symmetry and a noise parameter
$q$, known as the majority-vote model of M.J. Oliveira (1992), on
opinion-dependent (Stauffer-Hohnisch-Pittnauer) networks. By Monte Carlo simulations and
finite-size scaling relations the critical exponents $\beta/\nu$, $\gamma/\nu$,
and $1/\nu$ and points $q_{c}$ and $U^*$ are obtained. After extensive
simulations, we obtain $\beta/\nu=0.230(3)$, $\gamma/\nu=0.535(2)$, and
$1/\nu=0.475(8)$. The calculated values of the critical noise parameter and
Binder cumulant are $q_{c}=0.166(3)$ and $U^*=0.288(3)$. Within the error bars,
the exponents obey the relation $2\beta/\nu+\gamma/\nu=1$ and the results
presented here demonstrate that the majority-vote model belongs to a different
universality class than the equilibrium Ising model on
Stauffer-Hohnisch-Pittnauer networks, but to the same class as majority-vote
models on some other networks.
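The update rule of the majority-vote model can be sketched directly: with probability $1-q$ a spin adopts its neighbourhood majority, with probability $q$ it opposes it (ties broken at random). The opinion-dependent SHP network rewiring is not reproduced in this sketch:

```python
import random

def mv_step(spins, neighbors, q, rng):
    """One random-sequential Monte Carlo sweep of the majority-vote
    model. spins: list of +1/-1; neighbors: adjacency lists."""
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        s = sum(spins[j] for j in neighbors[i])
        maj = (1 if s > 0 else -1) if s != 0 else rng.choice([-1, 1])
        # adopt the majority with prob 1-q, oppose it with prob q
        spins[i] = maj if rng.random() > q else -maj
    return spins
```

At $q = 0$ the fully ordered state is absorbing, while raising $q$ past $q_c$ drives the system into the disordered phase the abstract locates via finite-size scaling.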
|
1306.0386 | Improved and Generalized Upper Bounds on the Complexity of Policy
Iteration | math.OC cs.AI cs.DM cs.RO | Given a Markov Decision Process (MDP) with $n$ states and a total number $m$
of actions, we study the number of iterations needed by Policy Iteration (PI)
algorithms to converge to the optimal $\gamma$-discounted policy. We consider
two variations of PI: Howard's PI, which changes the actions in all states with a
positive advantage, and Simplex-PI, which only changes the action in the state
with maximal advantage. We show that Howard's PI terminates after at most
$O\left(\frac{m}{1-\gamma}\log\left(\frac{1}{1-\gamma}\right)\right)$ iterations,
improving by a factor $O(\log n)$ a result by Hansen et al., while Simplex-PI
terminates after at most
$O\left(\frac{nm}{1-\gamma}\log\left(\frac{1}{1-\gamma}\right)\right)$ iterations,
improving by a factor $O(\log n)$ a result by Ye. Under some structural
properties of the MDP, we then consider bounds that are independent of the
discount factor $\gamma$: the quantities of interest are bounds $\tau_t$ and
$\tau_r$---uniform over all states and policies---respectively on the
\emph{expected time spent in transient states} and \emph{the inverse of the
frequency of visits in recurrent states}, given that the process starts from the
uniform distribution. Indeed, we show that Simplex-PI terminates after at most
$\tilde O\left(n^3 m^2 \tau_t \tau_r \right)$ iterations. This extends a
recent result for deterministic MDPs by Post & Ye, in which $\tau_t \le 1$ and
$\tau_r \le n$; in particular, it shows that Simplex-PI is strongly polynomial
for a much larger class of MDPs. We explain why similar results seem hard to
derive for Howard's PI. Finally, under the additional (restrictive) assumption
that the state space is partitioned into two sets of states that are
respectively transient and recurrent for all policies, we show that both
Howard's PI and Simplex-PI terminate after at most
$\tilde O(m(n^2\tau_t+n\tau_r))$ iterations.
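For concreteness, Howard's PI alternates exact policy evaluation with a simultaneous greedy switch in every state; a minimal sketch follows (a standard textbook implementation, not the paper's analysis; the two-state MDP at the end is a made-up example).

```python
import numpy as np

def howard_pi(P, R, gamma=0.9, max_iter=1000):
    """Howard's Policy Iteration.
    P[a] is the n x n transition matrix of action a, R[a] its n-vector of
    rewards. Each iteration evaluates the current policy exactly, then
    switches to a greedy action in all states simultaneously."""
    m, n = len(P), P[0].shape[0]
    policy = np.zeros(n, dtype=int)
    v = np.zeros(n)
    for _ in range(max_iter):
        # Exact policy evaluation: solve (I - gamma * P_pi) v = r_pi
        P_pi = np.array([P[policy[s]][s] for s in range(n)])
        r_pi = np.array([R[policy[s]][s] for s in range(n)])
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        # Greedy improvement in every state with a positive advantage
        q = np.array([R[a] + gamma * P[a] @ v for a in range(m)])  # m x n
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return policy, v

# Toy 2-state MDP: action 0 stays put, action 1 moves to the other state;
# only staying in state 1 yields reward.
P = [np.eye(2), np.array([[0.0, 1.0], [1.0, 0.0]])]
R = [np.array([0.0, 1.0]), np.zeros(2)]
policy, v = howard_pi(P, R)  # optimal: move from state 0, stay in state 1
```

Simplex-PI differs only in the improvement step: instead of switching all states at once, it switches the single state with maximal advantage.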
|
1306.0393 | Learning from networked examples in a k-partite graph | cs.LG stat.ML | Many machine learning algorithms are based on the assumption that training
examples are drawn independently. However, this assumption does not hold
anymore when learning from a networked sample where two or more training
examples may share common features. We propose an efficient weighting method
for learning from networked examples and derive a sample error bound that
improves on previous work.
|
1306.0404 | Iterative Grassmannian Optimization for Robust Image Alignment | cs.CV math.OC stat.ML | Robust high-dimensional data processing has witnessed an exciting development
in recent years, as theoretical results have shown that it is possible using
convex programming to optimize data fit to a low-rank component plus a sparse
outlier component. This problem is also known as Robust PCA, and it has found
application in many areas of computer vision. In image and video processing and
face recognition, the opportunity to process massive image databases is
emerging as people upload photo and video data online in unprecedented volumes.
However, data quality and consistency is not controlled in any way, and the
massiveness of the data poses a serious computational challenge. In this paper
we present t-GRASTA, or "Transformed GRASTA (Grassmannian Robust Adaptive
Subspace Tracking Algorithm)". t-GRASTA iteratively performs incremental
gradient descent constrained to the Grassmann manifold of subspaces in order to
simultaneously estimate a decomposition of a collection of images into a
low-rank subspace, a sparse part of occlusions and foreground objects, and a
transformation such as rotation or translation of the image. We show that
t-GRASTA is 4 $\times$ faster than state-of-the-art algorithms, has half the
memory requirement, and can achieve alignment for face images as well as
jittered camera surveillance images.
|
1306.0424 | A data-driven analysis to question epidemic models for citation cascades
on the blogosphere | cs.SI physics.soc-ph | Citation cascades in blog networks are often considered as traces of
information spreading on this social medium. In this work, we question this
point of view using both a structural and semantic analysis of five months
activity of the most representative blogs of the French-speaking
community. Statistical measures reveal that our dataset shares many features
with those that can be found in the literature, suggesting the existence of an
identical underlying process. However, a closer analysis of the post content
indicates that the popular epidemic-like descriptions of cascades are
misleading in this context. A basic model, taking into account only the
behavior of bloggers and their restricted social network, accounts for several
important statistical features of the data. These arguments support the idea
that the primary goal of citations may not be information spreading on the
blogosphere.
|
1306.0442 | Evolutionary Approach for the Containers Bin-Packing Problem | cs.NE | This paper deals with the resolution of combinatorial optimization problems,
particularly those concerning maritime transport scheduling. We are
interested in the management of platforms in a river port and more
specifically in container organisation operations, with a view to minimizing
the number of container rehandlings. In doing so, we meet customers' delivery
deadlines and reduce ship stoppage time. In this paper, we propose a genetic
algorithm to solve this problem and present some experiments and results.
|
1306.0493 | Graph Metrics for Temporal Networks | physics.soc-ph cs.SI | Temporal networks, i.e., networks in which the interactions among a set of
elementary units change over time, can be modelled in terms of time-varying
graphs, which are time-ordered sequences of graphs over a set of nodes. In such
graphs, the concepts of node adjacency and reachability crucially depend on the
exact temporal ordering of the links. Consequently, all the concepts and
metrics proposed and used for the characterisation of static complex networks
have to be redefined or appropriately extended to time-varying graphs, in order
to take into account the effects of time ordering on causality. In this chapter
we discuss how to represent temporal networks and we review the definitions of
walks, paths, connectedness and connected components valid for graphs in which
the links fluctuate over time. We then focus on temporal node-node distance,
and we discuss how to characterise link persistence and the temporal
small-world behaviour in this class of networks. Finally, we discuss the
extension of classic centrality measures, including closeness, betweenness and
spectral centrality, to the case of time-varying graphs, and we review the work
on temporal motifs analysis and the definition of modularity for temporal
graphs.
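As a small illustration of how reachability depends on link ordering, the following sketch (illustrative only; the event-list format is an assumption, not notation from the chapter) computes earliest-arrival times over time-respecting walks, where a walk may only traverse contacts in non-decreasing temporal order.

```python
def temporal_reachability(events, source, t0=0):
    """Earliest-arrival times from `source` over time-respecting walks.
    `events` is a list of (t, u, v) undirected contacts; a contact at time t
    can be traversed only if the walker reached its endpoint no later than t."""
    arrival = {source: t0}
    for t, u, v in sorted(events):          # scan contacts in temporal order
        if u in arrival and arrival[u] <= t:
            arrival[v] = min(arrival.get(v, float("inf")), t)
        if v in arrival and arrival[v] <= t:
            arrival[u] = min(arrival.get(u, float("inf")), t)
    return arrival

# The c-d contact happens *before* anything can reach c, so d stays unreachable
events = [(1, "a", "b"), (2, "b", "c"), (0, "c", "d")]
reach = temporal_reachability(events, "a")
```

On the corresponding static (time-aggregated) graph, d would be reachable from a, which is exactly the effect of time ordering on causality that the chapter emphasizes.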
|
1306.0502 | Learning-Based Adaptive Transmission for Limited Feedback Multiuser
MIMO-OFDM | cs.IT math.IT | Performing link adaptation in a multiantenna and multiuser system is
challenging because of the coupling between precoding, user selection, spatial
mode selection and use of limited feedback about the channel. The problem is
exacerbated by the difficulty of selecting the proper modulation and coding
scheme when using orthogonal frequency division multiplexing (OFDM). This paper
presents a data-driven approach to link adaptation for multiuser multiple input
multiple output (MIMO) OFDM systems. A machine learning classifier is used to
select the modulation and coding scheme, taking as input the SNR values in the
different subcarriers and spatial streams. A new approximation is developed to
estimate the unknown interuser interference due to the use of limited feedback.
This approximation allows SNR information to be obtained at the transmitter with a
minimum communication overhead. A greedy algorithm is used to perform spatial
mode and user selection with affordable complexity, without resorting to an
exhaustive search. The proposed adaptation is studied in the context of the
IEEE 802.11ac standard, and is shown to schedule users and adjust the
transmission parameters to the channel conditions as well as to the rate of the
feedback channel.
|
1306.0514 | Riemannian metrics for neural networks II: recurrent networks and
learning symbolic data sequences | cs.NE cs.LG | Recurrent neural networks are powerful models for sequential data, able to
represent complex dependencies in the sequence that simpler models such as
hidden Markov models cannot handle. Yet they are notoriously hard to train.
Here we introduce a training procedure using a gradient ascent in a Riemannian
metric: this produces an algorithm independent of design choices such as the
encoding of parameters and unit activities. This metric gradient ascent is
designed to have an algorithmic cost close to backpropagation through time for
sparsely connected networks. We use this procedure on gated leaky neural
networks (GLNNs), a variant of recurrent neural networks with an architecture
inspired by finite automata and an evolution equation inspired by
continuous-time networks. GLNNs trained with a Riemannian gradient are
demonstrated to effectively capture a variety of structures in synthetic
problems: basic block nesting as in context-free grammars (an important feature
of natural languages, but difficult to learn), intersections of multiple
independent Markov-type relations, or long-distance relationships such as the
distant-XOR problem. This method does not require adjusting the network
structure or initial parameters: the network used is a sparse random graph and
the initialization is identical for all problems considered.
|
1306.0519 | Random Walks on Multiplex Networks | physics.soc-ph cond-mat.dis-nn cs.SI | Multiplex networks are receiving increasing interest because they allow one to
model relationships between networked agents on several layers simultaneously.
In this supplementary material for the paper "Navigability of interconnected
networks under random failures", we extend well-known random walks to
multiplexes and we introduce a new type of walk that can exist only in
multiplexes. We derive exact expressions for vertex occupation time and the
coverage. Finally, we show how the efficiency in exploring the multiplex
critically depends on the underlying topology of layers, the weight of their
inter-connections, and the strategy adopted by the walker.
|
1306.0530 | Hybrid Coding: An Interface for Joint Source-Channel Coding and Network
Communication | cs.IT math.IT | A new approach to joint source-channel coding is presented in the context of
communicating correlated sources over multiple access channels. Similar to the
separation architecture, the joint source-channel coding system architecture in
this approach is modular, whereby the source encoding and channel decoding
operations are decoupled. However, unlike the separation architecture, the same
codeword is used for both source coding and channel coding, which allows the
resulting hybrid coding scheme to achieve the performance of the best known
joint source-channel coding schemes. Applications of the proposed architecture
to relay communication are also discussed.
|
1306.0539 | On the Performance Bounds of some Policy Search Dynamic Programming
Algorithms | cs.AI cs.LG | We consider the infinite-horizon discounted optimal control problem
formalized by Markov Decision Processes. We focus on Policy Search algorithms,
that compute an approximately optimal policy by following the standard Policy
Iteration (PI) scheme via an $\epsilon$-approximate greedy operator (Kakade and Langford,
2002; Lazaric et al., 2010). We describe existing and a few new performance
bounds for Direct Policy Iteration (DPI) (Lagoudakis and Parr, 2003; Fern et
al., 2006; Lazaric et al., 2010) and Conservative Policy Iteration (CPI)
(Kakade and Langford, 2002). By paying particular attention to the
concentrability constants involved in such guarantees, we notably argue that
the guarantee of CPI is much better than that of DPI, but this comes at the
cost of a relative increase of time complexity that is exponential in
$\frac{1}{\epsilon}$. We then describe an algorithm, Non-Stationary Direct Policy
Iteration (NSDPI), that can either be seen as 1) a variation of Policy Search
by Dynamic Programming by Bagnell et al. (2003) to the infinite horizon
situation or 2) a simplified version of the Non-Stationary PI with growing
period of Scherrer and Lesner (2012). We provide an analysis of this algorithm,
that shows in particular that it enjoys the best of both worlds: its
performance guarantee is similar to that of CPI, but within a time complexity
similar to that of DPI.
|
1306.0541 | Identifying Pairs in Simulated Bio-Medical Time-Series | cs.LG cs.CE | The paper presents a time-series-based classification approach to identify
similarities in pairs of simulated human-generated patterns. An example of a
pattern is a time-series representing a heart rate during a specific
time-range, wherein the time-series is a sequence of data points that represent
the changes in the heart rate values. A bio-medical simulator system was
developed to acquire a collection of 7,871 price patterns of financial
instruments. The financial instruments traded in real-time on three American
stock exchanges, NASDAQ, NYSE, and AMEX, simulate bio-medical measurements. The
system simulates a human in which each price pattern represents one bio-medical
sensor. Data provided during trading hours from the stock exchanges allowed
real-time classification. Classification is based on new machine learning
techniques: self-labeling, which allows the application of supervised learning
methods to unlabeled time-series, and similarity ranking, which is applied with
a decision tree learning algorithm to classify time-series regardless of their
type and quantity.
|
1306.0543 | Predicting Parameters in Deep Learning | cs.LG cs.NE stat.ML | We demonstrate that there is significant redundancy in the parameterization
of several deep learning models. Given only a few weight values for each
feature it is possible to accurately predict the remaining values. Moreover, we
show that not only can the parameter values be predicted, but many of them need
not be learned at all. We train several different architectures by learning
only a small number of weights and predicting the rest. In the best case we are
able to predict more than 95% of the weights of a network without any drop in
accuracy.
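The redundancy claim can be illustrated with a toy low-rank weight matrix (a stand-in for the structure the paper exploits, not the authors' method, which learns a dictionary from data; sizes, rank, and the few fully known rows are all illustrative assumptions): observing a handful of values per row suffices to predict the rest exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 50, 40, 3
W = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))  # redundant "weights"

# Build a small dictionary from a few fully known rows (they span W's row space)
basis = np.linalg.svd(W[:5], full_matrices=False)[2][:r]        # r x m

# For any other row, observe just 8 of its 40 values ...
cols = rng.choice(m, size=8, replace=False)
row = W[20]
coef, *_ = np.linalg.lstsq(basis[:, cols].T, row[cols], rcond=None)
predicted = coef @ basis                                         # ... predict all 40
```

Because the observed entries pin down the row's coordinates in the rank-3 dictionary, the remaining 32 values carry no extra information and need not be stored or learned.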
|
1306.0549 | Waveform Design for Secure SISO Transmissions and Multicasting | cs.CR cs.IT math.IT | Wireless physical-layer security is an emerging field of research aiming at
preventing eavesdropping in an open wireless medium. In this paper, we propose
a novel waveform design approach to minimize the likelihood that a message
transmitted between trusted single-antenna nodes is intercepted by an
eavesdropper. In particular, with knowledge first of the eavesdropper's channel
state information (CSI), we find the optimum waveform and transmit energy that
minimize the signal-to-interference-plus-noise ratio (SINR) at the output of
the eavesdropper's maximum-SINR linear filter, while at the same time provide
the intended receiver with a required pre-specified SINR at the output of its
own max-SINR filter. Next, if prior knowledge of the eavesdropper's CSI is
unavailable, we design a waveform that maximizes the amount of energy available
for generating disturbance to eavesdroppers, termed artificial noise (AN),
while the SINR of the intended receiver is maintained at the pre-specified
level. The extensions of the secure waveform design problem to multiple
intended receivers are also investigated and semidefinite relaxation (SDR) -an
approximation technique based on convex optimization- is utilized to solve the
arising NP-hard design problems. Extensive simulation studies confirm our
analytical performance predictions and illustrate the benefits of the designed
waveforms on securing single-input single-output (SISO) transmissions and
multicasting.
|
1306.0585 | Iterative Decoding and Turbo Equalization: The Z-Crease Phenomenon | cs.IT math.IT nlin.CD | Iterative probabilistic inference, popularly dubbed the soft-iterative
paradigm, has found great use in a wide range of communication applications,
including turbo decoding and turbo equalization. The classic approach to
analyzing such iterative schemes inevitably uses statistical and
information-theoretic tools that bear an ensemble-average flavor. This paper
considers the per-block error rate performance and analyzes it using nonlinear
dynamical theory. By modeling the iterative processor as a nonlinear dynamical
system, we report a universal "Z-crease phenomenon:" the zig-zag or up-and-down
fluctuation -- rather than the monotonic decrease -- of the per-block errors,
as the number of iterations increases. Using the turbo decoder as an example, we
also report several interesting motion phenomena that were not previously
reported, and that appear to correspond well with the notion of "pseudo
codewords" and "stopping/trapping sets." We further propose a heuristic
stopping criterion to control Z-crease and identify the best iteration. Our
stopping criterion is most useful for controlling the worst-case per-block
errors, and helps to significantly reduce the average-iteration numbers.
|
1306.0587 | Analog Turbo Codes: Turning Chaos to Reliability | cs.IT math.IT | Analog error correction codes, by relaxing the source space and the codeword
space from discrete fields to continuous fields, present a generalization of
digital codes. While linear codes are sufficient for digital codes, they are
not for analog codes, and hence nonlinear mappings must be employed to fully
harness the power of analog codes. This paper demonstrates new ways of building
effective (nonlinear) analog codes from a special class of nonlinear,
fast-diverging functions known as the chaotic functions. It is shown that the
"butterfly effect" of the chaotic functions matches elegantly with the distance
expansion condition required for error correction, and that the useful idea in
digital turbo codes can be exploited to construct efficient turbo-like chaotic
analog codes. Simulations show that the new analog codes can perform on par
with, or better than, their digital counterparts when transmitting analog
sources.
|
1306.0604 | Distributed k-Means and k-Median Clustering on General Topologies | cs.LG cs.DC stat.ML | This paper provides new algorithms for distributed clustering for two popular
center-based objectives, k-median and k-means. These algorithms have provable
guarantees and improve communication complexity over existing approaches.
Following a classic approach in clustering by \cite{har2004coresets}, we reduce
the problem of finding a clustering with low cost to the problem of finding a
coreset of small size. We provide a distributed method for constructing a
global coreset which improves over the previous methods by reducing the
communication complexity, and which works over general communication
topologies. Experimental results on large scale data sets show that this
approach outperforms other coreset-based distributed clustering algorithms.
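The communication pattern can be sketched in a stripped-down form (far simpler than the paper's coreset construction, which also ships importance-weighted sample points and carries provable guarantees): each site condenses its data into k weighted centers, and only those small summaries travel to the coordinator.

```python
import numpy as np

def local_summary(points, k, steps=25, seed=0):
    """Condense one site's points into k centers weighted by cluster size
    (a crude local summary, used here only to show the communication saving)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(steps):  # plain Lloyd iterations
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(0)
    return centers, np.bincount(labels, minlength=k)

rng = np.random.default_rng(1)
# Two sites, each holding points drawn around the same two cluster centres
site_a = np.concatenate([rng.normal(0, 0.1, (100, 2)), rng.normal(5, 0.1, (100, 2))])
site_b = np.concatenate([rng.normal(0, 0.1, (80, 2)), rng.normal(5, 0.1, (80, 2))])

# Each site ships only k centers plus k weights instead of all its points
summaries = [local_summary(s, k=2) for s in (site_a, site_b)]
total_weight = sum(w.sum() for _, w in summaries)  # weights still count every point
```

The coordinator then runs weighted k-means (or k-median) on the union of summaries; communication is O(k) numbers per site instead of the full dataset, which is the complexity axis the paper improves along.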
|
1306.0618 | Prediction with Missing Data via Bayesian Additive Regression Trees | stat.ML cs.LG | We present a method for incorporating missing data in non-parametric
statistical learning without the need for imputation. We focus on a tree-based
method, Bayesian Additive Regression Trees (BART), enhanced with "Missingness
Incorporated in Attributes," a recently proposed approach that incorporates
missingness into decision trees (Twala, 2008). This procedure takes advantage
of the partitioning mechanisms found in tree-based models. Simulations on
generated models and real data indicate that our proposed method can forecast
well on complicated missing-at-random and not-missing-at-random models as well
as models where missingness itself influences the response. Our procedure has
higher predictive performance and is more stable than competitors in many
cases. We also illustrate BART's abilities to incorporate missingness into
uncertainty intervals and to detect the influence of missingness on the model
fit.
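The "Missingness Incorporated in Attributes" idea at the heart of this procedure can be shown with a single split (a toy stand-in, not BART itself): missing values are routed to whichever side of the split lowers training error, so missingness itself becomes usable signal.

```python
def mia_split(x, y, threshold):
    """Evaluate the split 'x <= threshold' when x may contain None.
    Missing values are tried on both sides ('Missingness Incorporated in
    Attributes'); returns (best_sse, missing_goes_left)."""
    def sse(vals):  # sum of squared errors around the group mean
        if not vals:
            return 0.0
        mu = sum(vals) / len(vals)
        return sum((v - mu) ** 2 for v in vals)

    best = None
    for missing_left in (True, False):
        left, right = [], []
        for xi, yi in zip(x, y):
            side = left if (missing_left if xi is None else xi <= threshold) else right
            side.append(yi)
        err = sse(left) + sse(right)
        if best is None or err < best[0]:
            best = (err, missing_left)
    return best

# Here missingness itself predicts a high response, so missing rows go right
x = [1.0, 2.0, None, 10.0, None]
y = [0.0, 0.0, 1.0, 1.0, 1.0]
err, missing_left = mia_split(x, y, threshold=5.0)
```

When the response depends on whether a value is missing (the not-missing-at-random case), the chosen routing direction captures that dependence without any imputation.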
|
1306.0626 | Provable Inductive Matrix Completion | cs.LG cs.IT math.IT stat.ML | Consider a movie recommendation system where apart from the ratings
information, side information such as user's age or movie's genre is also
available. Unlike standard matrix completion, in this setting one should be
able to predict inductively on new users/movies. In this paper, we study the
problem of inductive matrix completion in the exact recovery setting. That is,
we assume that the ratings matrix is generated by applying feature vectors to a
low-rank matrix and the goal is to recover the underlying matrix.
Furthermore, we generalize the problem to that of low-rank matrix estimation
using rank-1 measurements. We study this generic problem and provide conditions
that the set of measurements should satisfy so that the alternating
minimization method (which otherwise is a non-convex method with no convergence
guarantees) is able to recover the {\em exact} underlying low-rank matrix.
In addition to inductive matrix completion, we show that two other low-rank
estimation problems can be studied in our framework: a) general low-rank matrix
sensing using rank-1 measurements, and b) multi-label regression with missing
labels. For both the problems, we provide novel and interesting bounds on the
number of measurements required by alternating minimization to provably
converge to the {\em exact} low-rank matrix. In particular, our analysis for
the general low-rank matrix sensing problem significantly improves on the
storage and computational cost required by RIP-based matrix
sensing methods \cite{RechtFP2007}. Finally, we provide empirical validation of
our approach and demonstrate that alternating minimization is able to recover
the true matrix for the above mentioned problems using a small number of
measurements.
|
1306.0646 | Information sharing and sorting in a community | physics.soc-ph cond-mat.stat-mech cs.SI | We present the results of a detailed numerical study of a model for the sharing
and sorting of information in a community consisting of a large number of
agents. The information gathering takes place in a sequence of mutual bipartite
interactions where randomly selected pairs of agents communicate with each
other to enhance their knowledge and sort out the common information. Although
our model is less restricted than the well-established naming game, the
numerical results strongly indicate that the whole set of exponents
characterizing this model differs from those of the naming game, and they
assume non-trivial values. Finally, it appears that, in analogy to the emergence
of clusters in the phenomenon of percolation, one can define clusters of agents
here having the same information. We have studied in detail the growth of the
largest cluster in this article and performed its finite-size scaling analysis.
|
1306.0662 | Predictability of Event Occurrences in Timed Systems | cs.SY cs.FL cs.LO math.OC | We address the problem of predicting events' occurrences in partially
observable timed systems modelled by timed automata. Our contribution is
fourfold: 1) we give a definition of bounded predictability, namely
k-predictability, that takes into account the minimum delay between the
prediction and the actual event's occurrence; 2) we show that 0-predictability
is equivalent to the original notion of predictability of S. Genc and S.
Lafortune; 3) we provide a necessary and sufficient condition for
k-predictability (which is very similar to k-diagnosability) and give a simple
algorithm to check k-predictability; 4) we address the problem of
predictability of events' occurrences in timed automata and show that the
problem is PSPACE-complete.
|
1306.0665 | Narrative based Postdictive Reasoning for Cognitive Robotics | cs.AI cs.RO | Making sense of incomplete and conflicting narrative knowledge in the
presence of abnormalities, unobservable processes, and other real world
considerations is a challenge and crucial requirement for cognitive robotics
systems. An added challenge, even when suitably specialised action languages
and reasoning systems exist, is practical integration and application within
large-scale robot control frameworks.
In the backdrop of an autonomous wheelchair robot control task, we report on
application-driven work to realise postdiction triggered abnormality detection
and re-planning for real-time robot control: (a) Narrative-based knowledge
about the environment is obtained via a larger smart environment framework; and
(b) abnormalities are postdicted from stable-models of an answer-set program
corresponding to the robot's epistemic model. The overall reasoning is
performed in the context of an approximate epistemic action theory based
planner implemented via a translation to answer-set programming.
|
1306.0682 | Modified CRB for Location and Velocity Estimation using Signals of
Opportunity | cs.IT math.IT | We consider the problem of localizing two sensors using signals of
opportunity from beacons with known positions. Beacons and sensors have
asynchronous local clocks or oscillators with unknown clock skews and offsets.
We model clock skews as random, and analyze the biases introduced by clock
asynchronism in the received signals. By deriving the equivalent Fisher
information matrix for the modified Bayesian Cram\'er-Rao lower bound (CRLB) of
sensor position and velocity estimation, we quantify the errors caused by clock
asynchronism.
|
1306.0686 | Online Learning under Delayed Feedback | cs.LG cs.AI stat.ML | Online learning with delayed feedback has received increasing attention
recently due to its several applications in distributed, web-based learning
problems. In this paper we provide a systematic study of the topic, and analyze
the effect of delay on the regret of online learning algorithms. Somewhat
surprisingly, it turns out that delay increases the regret in a multiplicative
way in adversarial problems, and in an additive way in stochastic problems. We
give meta-algorithms that transform, in a black-box fashion, algorithms
developed for the non-delayed case into ones that can handle the presence of
delays in the feedback loop. Modifications of the well-known UCB algorithm are
also developed for the bandit problem with delayed feedback, with the advantage
over the meta-algorithms that they can be implemented with lower complexity.
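One classical black-box transformation of the kind described can be sketched as follows (under the simplifying assumption that feedback arrives exactly d rounds late; the act/update interface is a hypothetical one, not from the paper): run d+1 independent copies of the base algorithm round-robin, so each copy always has its own past feedback before acting again.

```python
class RoundRobinDelayed:
    """Black-box reduction for feedback delayed by exactly d rounds:
    cycle through d + 1 independent copies of the base learner, so by the
    time a copy acts again, the feedback for its previous action is in."""

    def __init__(self, base_factory, d):
        self.copies = [base_factory() for _ in range(d + 1)]
        self.t = 0

    def act(self):
        learner = self.copies[self.t % len(self.copies)]
        self.t += 1
        return learner.act()

    def update(self, round_index, feedback):
        # Feedback for a round is routed back to the copy that played it.
        self.copies[round_index % len(self.copies)].update(feedback)


class Accumulator:
    """Trivial stand-in base learner: plays the sum of feedback seen so far."""
    def __init__(self):
        self.total = 0
    def act(self):
        return self.total
    def update(self, feedback):
        self.total += feedback


meta = RoundRobinDelayed(Accumulator, d=2)
first_three = [meta.act() for _ in range(3)]  # three fresh copies, all play 0
meta.update(0, 5)                             # delayed feedback for round 0
fourth = meta.act()                           # copy 0 acts again, now informed
```

Each copy experiences an undelayed problem over a 1/(d+1) fraction of the rounds, which is one way the additive (stochastic) versus multiplicative (adversarial) regret effects described above can arise.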
|
1306.0694 | Iterated Tabu Search Algorithm for Packing Unequal Circles in a Circle | math.OC cs.AI | This paper presents an Iterated Tabu Search algorithm (denoted by ITS-PUCC)
for solving the problem of Packing Unequal Circles in a Circle. The algorithm
exploits the continuous and combinatorial nature of the unequal circles packing
problem. It uses a continuous local optimization method to generate locally
optimal packings. Meanwhile, it builds a neighborhood structure on the set of
local minima via two appropriate perturbation moves and integrates two
combinatorial optimization methods, Tabu Search and Iterated Local Search, to
systematically search for good local minima. Computational experiments on two
sets of widely-used test instances prove its effectiveness and efficiency. For
the first set of 46 instances coming from the famous circle packing contest and
the second set of 24 instances widely used in the literature, the algorithm is
able to discover respectively 14 and 16 better solutions than the previous
best-known records.
|
1306.0710 | On the Optimum Cyclic Subcode Chains of $\mathcal{RM}(2,m)^*$ for
Increasing Message Length | cs.IT math.IT | The distance profiles of linear block codes can be employed to design
variational coding schemes for encoding messages of varying length and
achieving lower decoding error probability via a large minimum Hamming
distance, e.g., in the design of the TFCI in CDMA and in research on the
second-order Reed-Muller code $\mathcal{RM}(2,m)$.
Considering convenience for encoding, we focus on the distance profiles with
respect to cyclic subcode chains (DPCs) of cyclic codes over $GF(q)$ with
length $n$ such that $\mbox{gcd}(n,q) = 1$. In this paper the optimum DPCs and
the corresponding optimum cyclic subcode chains are investigated on the
punctured second-order Reed-Muller code $\mathcal{RM}(2,m)^*$ for increasing
message length, where two standards on the optimums are studied according to
the rhythm of increase.
|
1306.0712 | Resource Allocation for Secure Communication in Systems with Wireless
Information and Power Transfer | cs.IT math.IT | This paper considers secure communication in a multiuser multiple-input
single-output (MISO) downlink system with simultaneous wireless information and
power transfer. We study the design of resource allocation algorithms
minimizing the total transmit power for the case when the receivers are able to
harvest energy from the radio frequency. In particular, the algorithm design is
formulated as a non-convex optimization problem which takes into account
artificial noise generation to combat potential eavesdroppers, a minimum
required signal-to-interference-plus-noise ratio (SINR) at the desired
receiver, maximum tolerable SINRs at the potential eavesdroppers, and a minimum
required power delivered to the receivers. We adopt a semidefinite programming
(SDP) relaxation approach to obtain an upper bound solution for the considered
problem. The tightness of the upper bound is revealed by examining a sufficient
condition for the global optimal solution. Inspired by the sufficient
condition, we propose two suboptimal resource allocation schemes enhancing
secure communication and facilitating efficient energy harvesting. Simulation
results demonstrate a close-to-optimal performance achieved by the proposed
suboptimal schemes and significant transmit power savings by optimization of
the artificial noise generation.
|
1306.0715 | Random Walks on Stochastic Temporal Networks | physics.soc-ph cs.SI | In the study of dynamical processes on networks, there has been intense focus
on network structure -- i.e., the arrangement of edges and their associated
weights -- but the effects of the temporal patterns of edges remain poorly
understood. In this chapter, we develop a mathematical framework for random
walks on temporal networks using an approach that provides a compromise between
abstract but unrealistic models and data-driven but non-mathematical
approaches. To do this, we introduce a stochastic model for temporal networks
in which we summarize the temporal and structural organization of a system
using a matrix of waiting-time distributions. We show that random walks on
stochastic temporal networks can be described exactly by an
integro-differential master equation and derive an analytical expression for
its asymptotic steady state. We also discuss how our work might be useful to
help build centrality measures for temporal networks.
|
1306.0733 | Fast Gradient-Based Inference with Continuous Latent Variable Models in
Auxiliary Form | cs.LG stat.ML | We propose a technique for increasing the efficiency of gradient-based
inference and learning in Bayesian networks with multiple layers of continuous
latent variables. We show that, in many cases, it is possible to express such
models in an auxiliary form, where continuous latent variables are
conditionally deterministic given their parents and a set of independent
auxiliary variables. Variables of models in this auxiliary form have much
larger Markov blankets, leading to significant speedups in gradient-based
inference, e.g. rapid mixing Hybrid Monte Carlo and efficient gradient-based
optimization. The relative efficiency is confirmed in experiments.
|
1306.0751 | First-Order Decomposition Trees | cs.AI | Lifting attempts to speed up probabilistic inference by exploiting symmetries
in the model. Exact lifted inference methods, like their propositional
counterparts, work by recursively decomposing the model and the problem. In the
propositional case, there exist formal structures, such as decomposition trees
(dtrees), that represent such a decomposition and allow us to determine the
complexity of inference a priori. However, there is currently no equivalent
structure nor analogous complexity results for lifted inference. In this paper,
we introduce FO-dtrees, which upgrade propositional dtrees to the first-order
level. We show how these trees can characterize a lifted inference solution for
a probabilistic logical model (in terms of a sequence of lifted operations),
and make a theoretical analysis of the complexity of lifted inference in terms
of the novel notion of lifted width for the tree.
|
1306.0772 | Equivalence and comparison of heterogeneous cellular networks | cs.NI cs.IT math.IT math.PR | We consider a general heterogeneous network in which, besides general
propagation effects (shadowing and/or fading), individual base stations can
have different emitting powers and be subject to different parameters of
Hata-like path-loss models (path-loss exponent and constant) due to, for
example, varying antenna heights. We assume also that the stations may have
varying parameters of, for example, the link layer performance (SINR threshold,
etc). By studying the propagation processes of signals received by the typical
user from all antennas marked by the corresponding antenna parameters, we show
that seemingly different heterogeneous networks based on Poisson point
processes can be equivalent from the point of view of a typical user. These
networks can be replaced with a model where all the previously varying
propagation parameters (including path-loss exponents) are set to constants,
with the only trade-off being the introduction of an isotropic base station
density. This allows one to perform analytic comparisons of different network
models via their isotropic representations. In the case of a constant path-loss
exponent, the isotropic representation simplifies to a homogeneous modification
of the constant intensity of the original network, thus generalizing a previous
result showing that the propagation processes only depend on one moment of the
emitted power and propagation effects. We give examples and applications to
motivate these results and highlight an interesting observation regarding
random path-loss exponents.
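As a rough illustration of the constant-exponent case, the equivalent homogeneous intensity can be computed from one moment of the emitted powers; the moment formula below (lambda' = lambda * E[P^(2/beta)] in two dimensions) is an assumption based on the stated invariance result, not a quote from the paper:

```python
def isotropic_intensity(lam, power_samples, beta):
    """Equivalent homogeneous intensity for a 2-D Poisson network whose
    base stations emit i.i.d. random powers P, under a constant path-loss
    exponent beta: lambda' = lambda * E[P^(2/beta)] (assumed moment
    formula; the propagation process seen by the typical user is then
    unchanged when all powers are set to 1)."""
    moments = [p ** (2.0 / beta) for p in power_samples]
    return lam * sum(moments) / len(moments)

# equal mix of 1 W and 16 W stations, beta = 4 -> E[P^(1/2)] = (1 + 4)/2
lam_iso = isotropic_intensity(1.0, [1.0, 16.0], beta=4.0)
```

The point of the isotropic representation is exactly this kind of reduction: heterogeneity in powers becomes a single rescaled density.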
|
1306.0785 | Robust multirobot coordination using priority encoded homotopic
constraints | cs.RO | We study the problem of coordinating multiple robots along fixed geometric
paths. Our contribution is threefold. First we formalize the intuitive concept
of priorities as a binary relation induced by a feasible coordination solution,
without excluding the case of robots following each other on the same geometric
path. Then we prove that two paths in the coordination space are continuously
deformable into each other if and only if they induce the \emph{same priority
graph}, that is, the priority graph uniquely encodes homotopy classes of
coordination solutions. Finally, we give a simple control law allowing robots
to safely navigate within homotopy classes \emph{under kinodynamic
constraints}, even in the presence of unexpected events such as a sudden robot
deceleration without notice. The freedom within a homotopy class allows robots
to deviate substantially from any pre-planned trajectory without ever colliding
or having to re-plan the assigned priorities.
|
1306.0808 | The Role of Trends in Evolving Networks | physics.soc-ph cond-mat.stat-mech cs.SI | Modeling complex networks has been the focus of much research for over a
decade. Preferential attachment (PA) is considered a common explanation to the
self organization of evolving networks, suggesting that new nodes prefer to
attach to more popular nodes. The PA model results in broad degree
distributions, found in many networks, but cannot explain other common
properties, such as the growth of late-arriving nodes and clustering (community
structure). Here we show that when the tendency of networks to adhere to trends
is incorporated into the PA model, it can produce networks with such
properties. Namely, in trending networks, newly arriving nodes may become
central at random, forming new clusters. In particular, we show that when the
network is young it is more susceptible to trends, but even older networks may
have trendy new nodes that become central in their structure. Alternatively,
networks can be seen as composed of two parts: static, governed by a power law
degree distribution, and a dynamic part governed by trends, as we show on Wiki
pages. Our results also show that the arrival of trending new nodes not only
creates new clusters, but also has an effect on the relative importance and
centrality of all other nodes in the network. This can explain a variety of
real world networks in economics, social and online networks, and cultural
networks. Product popularity, formed by the network of people's opinions,
exhibits these properties. Some lines of products are increasingly susceptible
to trends and hence to shifts in popularity, while others are less trendy and
hence more stable. We believe that our findings significantly advance our
understanding of real networks.
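A toy simulation of trend-aware preferential attachment can show how late arrivals become central; the `trend_prob` parameter and the five-node recency window below are illustrative choices, not the paper's model:

```python
import random

def grow_trending_network(n, trend_prob, seed=0):
    """Preferential attachment with trends: each new node attaches
    preferentially by degree, except that with probability trend_prob it
    attaches to a 'trendy' node (a recent arrival chosen at random),
    which lets late arrivals become central and seed new clusters."""
    rng = random.Random(seed)
    degree = [1, 1]   # start from a single edge 0 -- 1
    stubs = [0, 1]    # degree-proportional sampling list for PA
    for new in range(2, n):
        if rng.random() < trend_prob:
            target = rng.randrange(max(0, new - 5), new)  # recent node
        else:
            target = rng.choice(stubs)                    # classic PA step
        degree.append(1)
        degree[target] += 1
        stubs.extend([new, target])
    return degree

deg = grow_trending_network(500, trend_prob=0.3)
```

Setting `trend_prob=0` recovers pure preferential attachment, so the one parameter interpolates between the static power-law part and the dynamic trending part.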
|
1306.0811 | A Gang of Bandits | cs.LG cs.SI stat.ML | Multi-armed bandit problems are receiving a great deal of attention because
they adequately formalize the exploration-exploitation trade-offs arising in
several industrially relevant applications, such as online advertisement and,
more generally, recommendation systems. In many cases, however, these
applications have a strong social component, whose integration in the bandit
algorithm could lead to a dramatic performance increase. For instance, we may
want to serve content to a group of users by taking advantage of an underlying
network of social relationships among them. In this paper, we introduce novel
algorithmic approaches to the solution of such networked bandit problems. More
specifically, we design and analyze a global strategy which allocates a bandit
algorithm to each network node (user) and allows it to "share" signals
(contexts and payoffs) with the neighboring nodes. We then derive two more
scalable variants of this strategy based on different ways of clustering the
graph nodes. We experimentally compare the algorithm and its variants to
state-of-the-art methods for contextual bandits that do not use the relational
information. Our experiments, carried out on synthetic and real-world datasets,
show a marked increase in prediction performance obtained by exploiting the
network structure.
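A minimal sketch of the networked-bandit idea, with epsilon-greedy learners standing in for the paper's strategy; each node shares its observed payoffs with its neighbors (the graph, arm means, and parameters below are illustrative):

```python
import random

def networked_bandits(graph, true_means, rounds, eps=0.1, seed=0):
    """One epsilon-greedy bandit per network node; every observed payoff
    is 'shared' with the node's neighbors, a simplified stand-in for the
    context/payoff-sharing strategy."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    counts = {u: [0] * n_arms for u in graph}
    sums = {u: [0.0] * n_arms for u in graph}
    total, pulls = 0.0, 0
    for _ in range(rounds):
        for u in graph:
            if rng.random() < eps:
                arm = rng.randrange(n_arms)       # explore
            else:                                 # exploit shared estimates
                est = [sums[u][a] / counts[u][a] if counts[u][a] else 0.0
                       for a in range(n_arms)]
                arm = max(range(n_arms), key=est.__getitem__)
            reward = rng.gauss(true_means[arm], 0.1)
            total += reward
            pulls += 1
            for v in [u] + graph[u]:              # share the signal
                counts[v][arm] += 1
                sums[v][arm] += reward
    return total / pulls

graph = {0: [1], 1: [0, 2], 2: [1]}               # small user network
avg = networked_bandits(graph, true_means=[0.2, 0.8], rounds=300)
```

Because one node's exploration informs its neighbors, the group locks onto the good arm faster than isolated learners would.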
|
1306.0813 | Social Media and Information Overload: Survey Results | cs.SI cs.CY physics.soc-ph | A UK-based online questionnaire investigating aspects of usage of
user-generated media (UGM), such as Facebook, LinkedIn and Twitter, attracted
587 participants. Results show a high degree of engagement with social
networking media such as Facebook, and a significant engagement with other
media such as professional media, microblogs and blogs. Participants who
experience information overload are those who engage less frequently with the
media, rather than those who have fewer posts to read. Professional users show
different behaviours from social users. Microbloggers complain of information
overload to the greatest extent. Two thirds of Twitter-users have felt that
they receive too many posts, and over half of Twitter-users have felt the need
for a tool to filter out the irrelevant posts. Generally speaking, participants
express satisfaction with the media, though a significant minority express a
range of concerns including information overload and privacy.
|
1306.0816 | A Critical Assessment of Cost-Based Nash Methods for Demand Scheduling
in Smart Grids | cs.GT cs.CE | Demand-side management (DSM) is becoming an increasingly important component
of the envisioned smart grid. The ability to improve the efficiency of energy
use in the power system by altering demand is widely viewed as being not merely
promising but in fact essential. However, while the advantages of DSM are
clear, arriving at an efficient implementation has so far proven to be less
straightforward. There have recently been many proposals put forth in the
literature to tackle the demand scheduling aspect of DSM. One particular
approach based on a game-theoretic treatment of the day-ahead load-scheduling
problem has recently gained tremendous popularity in the DSM literature. In
this letter, an assessment of this approach is conducted, and its main result
is challenged.
|
1306.0832 | Large-signal stability conditions for semi-quasi-Z-source inverters:
switched and averaged models | cs.SY | The recently introduced semi-quasi-Z-source inverter can be interpreted as
a DC-DC converter whose input-output voltage gain may take any value between
minus infinity and 1 depending on the applied duty cycle. In order to generate
a sinusoidal voltage waveform at the output of this converter, a time-varying
duty cycle needs to be applied. Application of a time-varying duty cycle that
produces large-signal behavior requires careful consideration of stability
issues. This paper provides stability results for both the large-signal
averaged and the switched models of the semi-quasi-Z-source inverter operating
in continuous conduction mode. We show that if the load is linear and purely
resistive then the boundedness and ultimate boundedness of the state
trajectories are guaranteed provided some reasonable operating conditions are
ensured. These conditions amount to keeping the duty cycle away from the
extreme values 0 or 1 (averaged and switched models), and limiting the maximum
PWM switching period (switched model). The results obtained can be used to give
theoretical justification to the inverter operation strategy recently proposed
by Cao et al. in [1].
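For illustration, a small helper for the duty-cycle-to-gain map; the expression G(D) = (1 - 2D)/(1 - D) is an assumption taken from the semi-quasi-Z-source literature, and it matches the stated range (from minus infinity to 1):

```python
def voltage_gain(duty):
    """Input-output voltage gain of the converter as a function of the
    duty cycle D (expression assumed from the semi-quasi-Z-source
    literature): G(D) = (1 - 2D) / (1 - D), which equals 1 at D = 0 and
    sweeps toward minus infinity as D -> 1."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return (1.0 - 2.0 * duty) / (1.0 - duty)

# keeping D away from the extreme values keeps the gain (and, per the
# stability result, the state trajectories) bounded
gains = [voltage_gain(d) for d in (0.0, 0.5, 0.9)]
```

This also makes concrete why a time-varying duty cycle is needed for a sinusoidal output: the output polarity flips as D crosses 0.5.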
|
1306.0842 | Kernel Mean Estimation and Stein's Effect | stat.ML cs.LG math.ST stat.TH | A mean function in reproducing kernel Hilbert space, or a kernel mean, is an
important part of many applications ranging from kernel principal component
analysis to Hilbert-space embedding of distributions. Given finite samples, an
empirical average is the standard estimate for the true kernel mean. We show
that this estimator can be improved via a well-known phenomenon in statistics
called Stein's phenomenon. Our theoretical analysis reveals the existence of a
wide class of estimators that are better than the standard one. Focusing on a
subset of this class, we propose efficient shrinkage
estimators for the kernel mean. Empirical evaluations on several benchmark
applications clearly demonstrate that the proposed estimators outperform the
standard kernel mean estimator.
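The shrinkage idea can be illustrated with the simplest possible estimator, which shrinks the empirical kernel mean toward the zero function in the RKHS; the paper's estimators are more refined than this sketch, and the kernel, data, and shrinkage parameter below are illustrative:

```python
import math
import random

def rbf(x, y, gamma=1.0):
    """Gaussian RBF kernel on the real line."""
    return math.exp(-gamma * (x - y) ** 2)

def kernel_mean(sample, x):
    """Standard empirical kernel mean, evaluated at the point x."""
    return sum(rbf(xi, x) for xi in sample) / len(sample)

def shrinkage_kernel_mean(sample, x, lam):
    """James-Stein-style shrinkage toward the zero function in the RKHS:
    mu_lam = (1 - lam) * mu_hat. Only illustrates the shrinkage idea."""
    return (1.0 - lam) * kernel_mean(sample, x)

rng = random.Random(0)
sample = [rng.gauss(0.0, 1.0) for _ in range(50)]
m = kernel_mean(sample, 0.0)
ms = shrinkage_kernel_mean(sample, 0.0, lam=0.1)
```

Stein's effect is precisely that a small, well-chosen amount of such shrinkage can reduce the estimator's risk relative to the plain empirical average.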
|
1306.0865 | Joint Signal and Channel State Information Compression for the Backhaul
of Uplink Network MIMO Systems | cs.IT math.IT | In network MIMO cellular systems, subsets of base stations (BSs), or remote
radio heads, are connected via backhaul links to central units (CUs) that
perform joint encoding in the downlink and joint decoding in the uplink.
Focusing on the uplink, an effective solution for the communication between BSs
and the corresponding CU on the backhaul links is based on compressing and
forwarding the baseband received signal from each BS. In the presence of
ergodic fading, communicating the channel state information (CSI) from the BSs
to the CU may require a sizable part of the backhaul capacity. In a prior work,
this aspect was studied by assuming a Compress-Forward-Estimate (CFE) approach,
whereby the BSs compress the training signal and CSI estimation takes place at
the CU. In this work, instead, an Estimate-Compress-Forward (ECF) approach is
investigated, whereby the BSs perform CSI estimation and forward a compressed
version of the CSI to the CU. This choice is motivated by the information
theoretic optimality of separate estimation and compression. Various ECF
strategies are proposed that perform either separate or joint compression of
estimated CSI and received signal. Moreover, the proposed strategies are
combined with distributed source coding when considering multiple BSs.
"Semi-coherent" strategies are also proposed that do not convey any CSI or
training information on the backhaul links. Via numerical results, it is shown
that a proper design of ECF strategies based on joint received signal and
estimated CSI compression or of semi-coherent schemes leads to substantial
performance gains compared to more conventional approaches based on
non-coherent transmission or the CFE approach.
|
1306.0886 | $\propto$SVM for learning with label proportions | cs.LG stat.ML | We study the problem of learning with label proportions in which the training
data is provided in groups and only the proportion of each class in each group
is known. We propose a new method called proportion-SVM, or $\propto$SVM, which
explicitly models the latent unknown instance labels together with the known
group label proportions in a large-margin framework. Unlike the existing works,
our approach avoids making restrictive assumptions about the data. The
$\propto$SVM model leads to a non-convex integer programming problem. In order
to solve it efficiently, we propose two algorithms: one based on simple
alternating optimization and the other based on a convex relaxation. Extensive
experiments on standard datasets show that $\propto$SVM outperforms the
state-of-the-art, especially for larger group sizes.
|
1306.0896 | Finding Numerical Solutions of Diophantine Equations using Ant Colony
Optimization | cs.NE cs.ET | The paper attempts to find numerical solutions of Diophantine equations, a
challenging problem as there are no general methods to find solutions of such
equations. It uses the metaphor of foraging habits of real ants. The ant colony
optimization based procedure starts with randomly assigned locations to a fixed
number of artificial ants. Depending upon the quality of these positions, ants
deposit pheromone at the nodes. A successor node is selected from the
topological neighborhood of each of the nodes based on this stochastic
pheromone deposit. If an ant bumps into an already encountered node, the
pheromone is updated correspondingly. A suitably defined pheromone evaporation
strategy guarantees that premature convergence does not take place. The
experimental results, which are compared with those of other machine
intelligence techniques, validate the effectiveness of the proposed method.
|
1306.0897 | Urban ozone concentration forecasting with artificial neural network in
Corsica | cs.NE | Atmospheric pollutants concentration forecasting is an important issue in air
quality monitoring. Qualitair Corse, the organization responsible for
monitoring air quality in the Corsica (France) region, needs to develop a
short-term prediction model to carry out its mission of informing the
public. Various deterministic models exist for meso-scale or local forecasting,
but they require large variable sets and a good knowledge of atmospheric
processes, and can be inaccurate because of local climatic or geographical
particularities, as observed in Corsica, a mountainous island located in the
Mediterranean Sea. As a result, we focus in this study on statistical models,
and particularly Artificial Neural Networks (ANN) that have shown good results
in the prediction of ozone concentration at horizon h+1 with data measured
locally. The purpose of this study is to build a predictor for ozone and PM10
at horizon d+1 in Corsica in order to anticipate pollution peak formation and
to take appropriate prevention measures. Specific meteorological conditions are
known to lead to particular pollution events in Corsica (e.g. Saharan dust
events). Therefore, several ANN
models will be used, for meteorological conditions clustering and for
operational forecasting.
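As a toy stand-in for such an ANN predictor, here is a one-hidden-layer network trained by stochastic gradient descent to forecast the next value of a synthetic periodic series (real ozone/PM10 measurements and meteorological inputs are of course not reproduced here):

```python
import math
import random

def train_mlp(series, lags=3, hidden=4, epochs=300, lr=0.05, seed=0):
    """Tiny one-hidden-layer tanh network trained by SGD to predict the
    next value of a series from its last `lags` values."""
    rng = random.Random(seed)
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(lags)]
          for _ in range(hidden)]
    b1 = [0.0] * hidden
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    data = [(series[i:i + lags], series[i + lags])
            for i in range(len(series) - lags)]

    def forward(x):
        h = [math.tanh(sum(wij * xj for wij, xj in zip(w1[i], x)) + b1[i])
             for i in range(hidden)]
        return h, sum(w2[i] * h[i] for i in range(hidden)) + b2

    for _ in range(epochs):
        for x, y in data:
            h, pred = forward(x)
            err = pred - y
            for i in range(hidden):          # backpropagate the error
                gh = err * w2[i] * (1.0 - h[i] ** 2)
                w2[i] -= lr * err * h[i]
                b1[i] -= lr * gh
                for j in range(lags):
                    w1[i][j] -= lr * gh * x[j]
            b2 -= lr * err

    return lambda x: forward(x)[1]

series = [math.sin(0.3 * t) for t in range(60)]   # synthetic signal
predict = train_mlp(series)
```

An operational version would add exogenous meteorological inputs and a clustering stage, as the abstract describes, but the lagged-input structure is the same.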
|
1306.0924 | Graph theory enables drug repurposing. How a mathematical model can
drive the discovery of hidden Mechanisms of Action | q-bio.QM cs.CE | We introduce a methodology to efficiently exploit biomedical knowledge
expressed in natural language for repurposing existing drugs towards diseases
for which they were not initially intended. Leveraging developments in
Computational Linguistics and Graph Theory, we build a graph
representation of knowledge, which is automatically analysed to discover hidden
relations between any drug and any disease: these relations are specific paths
among the biomedical entities of the graph, representing possible Modes of
Action for any given pharmacological compound. These paths are ranked according
to their relevance, exploiting a measure induced by a stochastic process
defined on the graph. Here we show, providing real-world examples, how the
method successfully retrieves known pathophysiological Modes of Action and
finds new ones by meaningfully selecting and aggregating contributions from
known bio-molecular interactions. Applications of this methodology are
presented, and prove the efficacy of the method for selecting drugs as
treatment options for rare diseases.
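A toy version of the path-ranking step on a small hypothetical knowledge graph, using uniform random-walk probabilities as a stand-in for the stochastic-process measure (all node names are invented for illustration):

```python
def rank_paths(graph, source, target, max_len=3):
    """Rank simple paths from a drug node to a disease node by the
    probability that a uniform random walker follows them (product of
    per-step transition probabilities)."""
    ranked = []

    def walk(node, path, prob):
        if node == target:
            ranked.append((prob, path))
            return
        if len(path) > max_len:
            return
        for nxt in graph.get(node, []):
            if nxt not in path:                 # keep paths simple
                walk(nxt, path + [nxt], prob / len(graph[node]))

    walk(source, [source], 1.0)
    return sorted(ranked, reverse=True)

# hypothetical mini knowledge graph: drug -> proteins -> disease
g = {"drugA": ["P1", "P2"], "P1": ["disease"],
     "P2": ["P3", "P4"], "P3": ["disease"], "P4": []}
paths = rank_paths(g, "drugA", "disease")
```

Each ranked path is a candidate Mode of Action: a chain of biomedical entities linking the compound to the disease, ordered by the walk-induced relevance measure.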
|