| id | title | categories | abstract |
|---|---|---|---|
1307.1073 | Modelling Reactive and Proactive Behaviour in Simulation: A Case Study
in a University Organisation | cs.CE | Simulation is a well established what-if scenario analysis tool in
Operational Research (OR). While traditionally Discrete Event Simulation (DES)
and System Dynamics Simulation (SDS) are the predominant simulation techniques
in OR, a new simulation technique, namely Agent-Based Simulation (ABS), has
emerged and is gaining more attention. In our research we focus on discrete
simulation methods (i.e. DES and ABS). The contribution made by this paper is
the comparison of DES and combined DES/ABS for modelling human reactive
behaviour and different levels of detail of human proactive behaviour in
service systems. The
results of our experiments show that the level of proactiveness considered in
the model has a big impact on the simulation output. However, there is little
difference between the results from the DES and the combined DES/ABS
simulation models. Therefore, for service systems of the type we investigated,
we would suggest using DES as the preferred analysis tool.
|
1307.1078 | Investigating the Detection of Adverse Drug Events in a UK General
Practice Electronic Health-Care Database | cs.CE cs.LG | Data-mining techniques have frequently been developed for spontaneous
reporting databases. These techniques aim to find adverse drug events
accurately and efficiently. Spontaneous reporting databases are prone to
missing information, under-reporting and incorrect entries. This often results
in a detection lag or prevents the detection of some adverse drug events. These
limitations do not occur in electronic health-care databases. In this paper,
existing methods developed for spontaneous reporting databases are implemented
on both a spontaneous reporting database and a general practice electronic
health-care database and compared. The results suggest that the application of
existing methods to the general practice database may help find signals that
have gone undetected when using the spontaneous reporting system database. In
addition, the general practice database provides far more supplementary
information that, if incorporated in the analysis, could provide a wealth of
information for identifying adverse events more accurately.
|
1307.1079 | Application of a clustering framework to UK domestic electricity data | cs.CE cs.LG | This paper takes an approach to clustering domestic electricity load profiles
that has been successfully used with data from Portugal and applies it to UK
data. Clustering techniques are applied and it is found that the preferred
technique in the Portuguese work (a two-stage process combining Self-Organising
Maps and k-means) is not appropriate for the UK data. The work shows that up to
nine clusters of households can be identified with the differences in usage
profiles being visually striking. This demonstrates the appropriateness of
breaking the electricity usage patterns down to more detail than the two load
profiles currently published by the electricity industry. The paper details
initial results using data collected in Milton Keynes around 1990. Further work
will concentrate on building accurate and meaningful clusters
of similar electricity users in order to better direct demand side management
initiatives to the most relevant target customers.
|
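The second stage of the SOM + k-means pipeline the abstract above refers to is plain k-means. As an illustration of that stage in isolation, here is a minimal sketch (not the paper's code; the "morning peak" and "evening peak" load profiles are invented) using Lloyd's algorithm with a deterministic farthest-first initialisation:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Plain k-means (Lloyd's algorithm) with farthest-first initialisation."""
    centres = [X[0]]
    for _ in range(1, k):
        # next centre: the point farthest from all centres chosen so far
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centres], axis=0)
        centres.append(X[np.argmax(d)])
    centres = np.array(centres)
    for _ in range(n_iter):
        # assign each profile to its nearest centre
        labels = np.argmin(((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2), axis=1)
        # move each centre to the mean of its assigned profiles
        for c in range(k):
            if np.any(labels == c):
                centres[c] = X[labels == c].mean(axis=0)
    return centres, labels

# two synthetic household profiles: "morning peak" vs "evening peak" (hypothetical data)
rng = np.random.default_rng(1)
morning = np.tile([3.0, 1.0, 1.0, 1.0], (6, 1)) + rng.normal(0, 0.1, (6, 4))
evening = np.tile([1.0, 1.0, 1.0, 3.0], (6, 1)) + rng.normal(0, 0.1, (6, 4))
X = np.vstack([morning, evening])
centres, labels = kmeans(X, k=2)
```

On well-separated profiles like these, the two recovered clusters correspond to the two usage patterns; real half-hourly data would have many more dimensions and less obvious separation.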
1307.1101 | Mixed-Timescale Precoding and Cache Control in Cached MIMO Interference
Network | cs.IT math.IT | Consider media streaming in MIMO interference networks whereby multiple base
stations (BSs) simultaneously deliver media to their associated users using
fixed data rates. The performance is fundamentally limited by the cross-link
interference. We propose a cache-induced opportunistic cooperative MIMO (CoMP)
for interference mitigation. By caching a portion of the media files, the BSs
opportunistically employ CoMP to transform the cross-link interference into
spatial multiplexing gain. We study a mixed-timescale optimization of MIMO
precoding and cache control to minimize the transmit power under the rate
constraint. The cache control is to create more CoMP opportunities and is
adaptive to the long-term popularity of the media files. The precoding is to
guarantee the rate requirement and is adaptive to the channel state information
and cache state at the BSs. The joint stochastic optimization problem is
decomposed into a short-term precoding and a long-term cache control problem.
We propose a precoding algorithm which converges to a stationary point of the
short-term problem. Based on this, we exploit the hidden convexity of the
long-term problem and propose a low complexity and robust solution using
stochastic subgradient. The solution has significant gains over various
baselines and does not require explicit knowledge of the media popularity.
|
1307.1166 | A Novel Robust Method to Add Watermarks to Bitmap Images by Fading
Technique | cs.CV cs.MM | Digital watermarking is one of the essential fields in image security and
copyright protection. The technique proposed in this paper is based on the
principle of protecting images by hiding an invisible watermark in the image.
The technique starts by merging the cover image and the watermark image with
suitable ratios, e.g., 99% of the cover image is merged with 1% of the
watermark image. Technically, the fading process is irreversible, but with the
proposed technique the probability of reconstructing the original watermark
image is high. There is no difference between the original and the watermarked
image perceptible to the human eye. The experimental results show that the
proposed technique is able to hide images that have the same size as the cover
image. Three performance measures were used to evaluate the proposed
technique: MSE, PSNR, and SSIM. All three measures yield excellent values.
|
1307.1170 | A Formal Sociologic Study of Free Will | cs.SI | We make a formal sociologic study of the concept of free will. By using the
language of mathematics and logic, we define what we call everlasting
societies. Everlasting societies never age: persons never age, and the goods of
the society are indestructible. The infinite history of an everlasting society
unfolds by following deterministic and probabilistic laws that do their best to
satisfy the free will of all the persons of the society.
We define three possible kinds of histories for everlasting societies:
primitive histories, good histories, and golden histories. In primitive
histories, persons are inherently selfish, and they use their free will to
obtain the personal ownerships of all the goods of the society. In good
histories, persons are inherently good, and they use their free will to
distribute the goods of the society. In good histories, a person is not only
able to desire the personal ownership of goods, but is also able to desire that
a good be owned by another person. In golden histories, free will is bound by
the ethic of reciprocity, which states that "you should wish upon others as you
would like others to wish upon yourself". In golden societies, the ethic of
reciprocity becomes a law that partially binds free will and must be abided by
at all times. In other words, the verb "should" becomes the verb "must".
|
1307.1179 | Future Web Growth and its Consequences for Web Search Architectures | cs.IR | Introduction: Before embarking on the design of any computer system it is
first necessary to assess the magnitude of the problem. In the case of a web
search engine this assessment amounts to determining the current size of the
web, the growth rate of the web, and the quantity of computing resource
necessary to search it, and projecting the historical growth of this into the
future. Method: The over-20-year history of the web makes it possible to make
short-term projections of future growth. The longer history of hard disk drives
(and smartphone memory cards) makes it possible to make short-term hardware
projections. Analysis: Historical data on Internet uptake and hardware growth
is extrapolated. Results: It is predicted that within a decade the storage
capacity of a single hard drive will exceed the size of the index of the web at
that time. Within another decade it will be possible to store the entire
searchable text on the same hard drive. Within another decade the entire
searchable web (including images) will also fit. Conclusion: This result raises
questions about the future architecture of search engines. Several new models
are proposed. In one model the user's computer is an active part of the
distributed search architecture. Users search a pre-loaded snapshot (back-file)
of the web on their local device, which frees up the online data centre for
searching just the difference between the snapshot and the current time.
Advantageously this also makes it possible to search when the user is
disconnected from the Internet. In another model all changes to all files are
broadcast to all users (forming a star-like network) and no data centre is
needed.
|
1307.1192 | AdaBoost and Forward Stagewise Regression are First-Order Convex
Optimization Methods | stat.ML cs.LG math.OC | Boosting methods are highly popular and effective supervised learning methods
which combine weak learners into a single accurate model with good statistical
performance. In this paper, we analyze two well-known boosting methods,
AdaBoost and Incremental Forward Stagewise Regression (FS$_\varepsilon$), by
establishing their precise connections to the Mirror Descent algorithm, which
is a first-order method in convex optimization. As a consequence of these
connections we obtain novel computational guarantees for these boosting
methods. In particular, we characterize convergence bounds of AdaBoost, related
to both the margin and log-exponential loss function, for any step-size
sequence. Furthermore, this paper presents, for the first time, precise
computational complexity results for FS$_\varepsilon$.
|
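For readers unfamiliar with the multiplicative reweighting at the heart of AdaBoost (the exponential-weights behaviour that underlies its connection to Mirror Descent), here is a minimal sketch with axis-aligned decision stumps. The data and implementation details are illustrative, not from the paper:

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=20):
    """Minimal AdaBoost with axis-aligned decision stumps (labels in {-1, +1})."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)              # distribution over training samples
    ensemble = []                        # (alpha, feature, threshold, sign)
    for _ in range(n_rounds):
        best = None
        for j in range(d):               # exhaustive search over stumps
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = np.clip(err, 1e-12, 1 - 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)   # step size from weighted error
        w *= np.exp(-alpha * y * pred)          # multiplicatively up-weight mistakes
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

# toy 1-D data separable by a single threshold (illustrative only)
X = np.arange(10, dtype=float).reshape(-1, 1)
y = np.where(X[:, 0] >= 5, 1, -1)
model = adaboost_stumps(X, y, n_rounds=5)
```

The weight update `w *= exp(-alpha * y * pred)` is exactly the exponentiated-gradient step that the paper's mirror-descent analysis makes precise.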
1307.1212 | Handover adaptation for dynamic load balancing in 3GPP Long Term
Evolution systems | cs.NI cs.RO | The Long-Term Evolution (LTE) of the 3GPP (3rd Generation Partnership
Project) radio access network is in an early stage of specification. Self-tuning
and self-optimisation algorithms are currently studied with the aim of
enriching the LTE standard. This paper investigates auto-tuning of the LTE
mobility algorithm. The auto-tuning is carried out by adapting handover parameters of
each base station according to its radio load and the load of its adjacent
cells. The auto-tuning alleviates cell congestion and balances the traffic and
the load between cells by handing off mobiles close to the cell border from the
congested cell to its neighbouring cells. Simulation results show that the
auto-tuning process brings an important gain in both call admission rate and
user throughput.
|
1307.1252 | The Complexity of Fully Proportional Representation for Single-Crossing
Electorates | cs.GT cs.MA | We study the complexity of winner determination in single-crossing elections
under two classic fully proportional representation
rules---Chamberlin--Courant's rule and Monroe's rule. Winner determination for
these rules is known to be NP-hard for unrestricted preferences. We show that
for single-crossing preferences this problem admits a polynomial-time algorithm
for Chamberlin--Courant's rule, but remains NP-hard for Monroe's rule. Our
algorithm for Chamberlin--Courant's rule can be modified to work for elections
with bounded single-crossing width. To circumvent the hardness result for
Monroe's rule, we consider single-crossing elections that satisfy an additional
constraint, namely, ones where each candidate is ranked first by at least one
voter (such elections are called narcissistic). For single-crossing
narcissistic elections, we provide an efficient algorithm for the egalitarian
version of Monroe's rule.
|
1307.1253 | Network robustness of multiplex networks with interlayer degree
correlations | physics.soc-ph cond-mat.stat-mech cs.SI | We study the robustness properties of multiplex networks consisting of
multiple layers of distinct types of links, focusing on the role of
correlations between the degrees of a node in different layers. We use the
generating function formalism to address various notions of network robustness
relevant to multiplex networks, such as the resilience of ordinary and mutual
connectivity under random or targeted node removals, as well as
biconnectivity. We found that correlated coupling can affect the structural
robustness of multiplex networks in diverse ways. For example, for
maximally-correlated duplex networks, all pairs of nodes in the giant component
are connected via at least two independent paths and network structure is
highly resilient to random failure. In contrast, anti-correlated duplex
networks are on one hand robust against targeted attack on high-degree nodes,
but on the other hand they can be vulnerable to random failure.
|
1307.1275 | Constructing Hierarchical Image-tags Bimodal Representations for Word
Tags Alternative Choice | cs.LG cs.NE | This paper describes our solution to the multi-modal learning challenge of
ICML. This solution comprises constructing three-level representations in three
consecutive stages and choosing correct tag words with a data-specific
strategy. Firstly, we use typical methods to obtain level-1 representations.
Each image is represented using MPEG-7 and gist descriptors with additional
features released by the contest organizers, and the corresponding word tags
are represented by a bag-of-words model with a dictionary of 4000 words.
Secondly, we learn the level-2 representations using two stacked RBMs for each
modality. Thirdly, we propose a bimodal auto-encoder to learn the
similarities/dissimilarities between image-tag pairs as level-3
representations. Finally, during the test phase, based on one observation of
the dataset, we come up with a data-specific strategy to choose the correct tag
words, leading to a leap in overall performance. Our final average accuracy on
the private test set is 100%, which ranked first in this challenge.
|
1307.1277 | Evidence and plausibility in neighborhood structures | math.LO cs.AI cs.LO | The intuitive notion of evidence has both semantic and syntactic features. In
this paper, we develop an {\em evidence logic} for epistemic agents faced with
possibly contradictory evidence from different sources. The logic is based on a
neighborhood semantics, where a neighborhood $N$ indicates that the agent has
reason to believe that the true state of the world lies in $N$. Further notions
of relative plausibility between worlds and beliefs based on the latter
ordering are then defined in terms of this evidence structure, yielding our
intended models for evidence-based beliefs. In addition, we also consider a
second more general flavor, where belief and plausibility are modeled using
additional primitive relations, and we prove a representation theorem showing
that each such general model is a $p$-morphic image of an intended one. This
semantics invites a number of natural special cases, depending on how uniform
we make the evidence sets, and how coherent their total structure. We give a
structural study of the resulting `uniform' and `flat' models. Our main results
are sound and complete axiomatizations for the logics of all four major model
classes with respect to the modal language of evidence, belief and safe belief.
We conclude with an outlook toward logics for the dynamics of changing
evidence, and the resulting language extensions and connections with logics of
plausibility change.
|
1307.1289 | Further results on dissimilarity spaces for hyperspectral images RF-CBIR | cs.IR cs.CV | Content-Based Image Retrieval (CBIR) systems are powerful search tools in
image databases that have been little applied to hyperspectral images.
Relevance feedback (RF) is an iterative process that uses machine learning
techniques and the user's feedback to improve CBIR system performance. We
sought to expand previous research on hyperspectral CBIR systems built on
dissimilarity functions defined either on spectral and spatial features
extracted by spectral unmixing techniques, or on dictionaries extracted by
dictionary-based compressors. These dissimilarity functions were not suitable
for direct application in common machine learning techniques. We propose to use
a RF general approach based on dissimilarity spaces which is more appropriate
for the application of machine learning algorithms to the hyperspectral
RF-CBIR. We validate the proposed RF method for hyperspectral CBIR systems over
a real hyperspectral dataset.
|
1307.1303 | Submodularity of a Set Label Disagreement Function | cs.CV | A set label disagreement function is defined over the number of variables
that deviate from the dominant label. The dominant label is the value assumed
by the largest number of variables within a set of binary variables. The
submodularity of a certain family of set label disagreement functions is
discussed in this manuscript. Such disagreement functions could be utilized as
a cost function in combinatorial optimization approaches for problems defined
over hypergraphs.
|
1307.1307 | Fourier-Laguerre transform, convolution and wavelets on the ball | cs.IT astro-ph.IM math.IT | We review the Fourier-Laguerre transform, an alternative harmonic analysis on
the three-dimensional ball to the usual Fourier-Bessel transform. The
Fourier-Laguerre transform exhibits an exact quadrature rule and thus leads to
a sampling theorem on the ball. We study the definition of convolution on the
ball in this context, showing explicitly how translation on the radial line may
be viewed as convolution with a shifted Dirac delta function. We review the
exact Fourier-Laguerre wavelet transform on the ball, coined flaglets, and show
that flaglets constitute a tight frame.
|
1307.1354 | Modeling and Predicting the Growth and Death of Membership-based
Websites | physics.soc-ph cs.SI | Driven by outstanding success stories of Internet startups such as Facebook
and The Huffington Post, recent studies have thoroughly described their growth.
These highly visible online success stories, however, overshadow an untold
number of similar ventures that fail. The study of website popularity is
ultimately incomplete without general mechanisms that can describe both
successes and failures. In this work we present six years of the daily number
of users (DAU) of twenty-two membership-based websites - encompassing online
social networks, grassroots movements, online forums, and membership-only
Internet stores - well balanced between successes and failures. We then propose
a combination of reaction-diffusion-decay processes whose resulting equations
seem not only to describe well the observed DAU time series but also provide
means to roughly predict their evolution. This model allows an approximate
automatic DAU-based classification of websites into self-sustainable vs.
unsustainable, and of whether the startup's growth is mostly driven by marketing
and media campaigns or by word-of-mouth adoption.
|
1307.1360 | On sparsity averaging | cs.IT astro-ph.IM math.IT | Recent developments in Carrillo et al. (2012) and Carrillo et al. (2013)
introduced a novel regularization method for compressive imaging in the context
of compressed sensing with coherent redundant dictionaries. The approach relies
on the observation that natural images exhibit strong average sparsity over
multiple coherent frames. The associated reconstruction algorithm, based on an
analysis prior and a reweighted $\ell_1$ scheme, is dubbed Sparsity Averaging
Reweighted Analysis (SARA). We review these advances and extend associated
simulations establishing the superiority of SARA to regularization methods
based on sparsity in a single frame, for a generic spread spectrum acquisition
and for a Fourier acquisition of particular interest in radio astronomy.
|
1307.1370 | Matching Known Patients to Health Records in Washington State Data | cs.CY cs.DB | The State of Washington sells patient-level health data for $50. This
publicly available dataset has virtually all hospitalizations occurring in the
State in a given year, including patient demographics, diagnoses, procedures,
attending physician, hospital, a summary of charges, and how the bill was paid.
It does not contain patient names or addresses (only ZIPs). Newspaper stories
printed in the State for the same year that contain the word "hospitalized"
often include a patient's name and residential information and explain why the
person was hospitalized, such as vehicle accident or assault. News information
uniquely and exactly matched medical records in the State database for 35 of
the 81 cases (or 43 percent) found in 2011, thereby putting names to patient
records. A news reporter verified matches by contacting patients. Employers,
financial organizations and others know the same kind of information as is
reported in news stories, making it just as easy for them to identify the
medical records of employees, debtors, and others.
|
1307.1372 | Clustering of Complex Networks and Community Detection Using Group
Search Optimization | cs.NE cs.DS | Group Search Optimizer (GSO) is a very new and highly regarded algorithm in
the field of evolutionary computing. It is a robust and efficient algorithm
inspired by animal searching behaviour. This paper describes an application of
GSO to the clustering of networks. We tested GSO against five standard
benchmark datasets, and the GSO algorithm proved very competitive in terms of
accuracy and convergence speed.
|
1307.1380 | The Application of a Data Mining Framework to Energy Usage Profiling in
Domestic Residences using UK data | cs.CE cs.LG stat.AP | This paper describes a method for defining representative load profiles for
domestic electricity users in the UK. It considers bottom up and clustering
methods and then details the research plans for implementing and improving
existing framework approaches based on the overall usage profile. The work
focuses on adapting and applying analysis framework approaches to UK energy
data in order to determine the effectiveness of creating a small number (in
single figures) of archetypal users, with the intention of improving on the
current methods of
determining usage profiles. The work is currently in progress and the paper
details initial results using data collected in Milton Keynes around 1990.
Various possible enhancements to the work are considered including a split
based on temperature to reflect the varying UK weather conditions.
|
1307.1385 | Creating Personalised Energy Plans. From Groups to Individuals using
Fuzzy C Means Clustering | cs.CE cs.LG | Changes in the UK electricity market mean that domestic users will be
required to modify their usage behaviour in order that supplies can be
maintained. Clustering allows usage profiles collected at the household level
to be clustered into groups and assigned a stereotypical profile which can be
used to target marketing campaigns. Fuzzy C Means clustering extends this by
allowing each household to be a member of many groups and hence provides the
opportunity to make personalised offers to the household dependent on their
degree of membership of each group. In addition, feedback can be provided on
how users' changing behaviour is moving them towards more "green" or
cost-effective stereotypical usage.
|
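A minimal sketch of Fuzzy C Means as described above (illustrative only; the household usage data here are made up): each iteration recomputes membership-weighted centres and then the standard membership update, so every household ends up with a degree of membership in every group rather than a single hard assignment:

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal Fuzzy C Means. Returns (centres, U) where U[i, k] is the
    degree to which sample i belongs to cluster k (rows sum to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        # membership-weighted cluster centres
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # distance of every sample to every centre
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        # standard FCM membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centres, U_new
        U = U_new
    return centres, U

# two hypothetical groups of usage profiles: low users vs high users
rng = np.random.default_rng(1)
low = np.full((5, 4), 1.0) + rng.normal(0, 0.1, (5, 4))
high = np.full((5, 4), 5.0) + rng.normal(0, 0.1, (5, 4))
X = np.vstack([low, high])
centres, U = fuzzy_c_means(X, n_clusters=2)
```

A household with, say, membership 0.7 in one stereotypical profile and 0.3 in another could then receive offers weighted by those degrees, which is the personalisation opportunity the abstract describes.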
1307.1387 | Examining the Classification Accuracy of TSVMs with Feature Selection
in Comparison with the GLAD Algorithm | cs.LG cs.CE | Gene expression data sets are used to classify and predict patient diagnostic
categories. It is extremely difficult and expensive to obtain labelled gene
expression examples. Moreover, conventional supervised approaches such as
Support Vector Machine (SVM) algorithms cannot function properly when labelled
data (training examples) are insufficient. Therefore, in this
paper, we suggest Transductive Support Vector Machines (TSVMs) as
semi-supervised learning algorithms, learning with both labelled samples data
and unlabelled samples to perform the classification of microarray data. To
prune the superfluous genes and samples we used a feature selection method
called Recursive Feature Elimination (RFE), which is supposed to enhance the
output of classification and avoid the local optimization problem. We examined
the classification prediction accuracy of the TSVM-RFE algorithm in comparison
with the Genetic Learning Across Datasets (GLAD) algorithm, as both are
semi-supervised learning methods. Comparing these two methods, we found that
the TSVM-RFE surpassed both an SVM using RFE and GLAD.
|
1307.1388 | Introducing Memory and Association Mechanism into a Biologically
Inspired Visual Model | cs.AI | A famous biologically inspired hierarchical model, first proposed by
Riesenhuber and Poggio, has been successfully applied to multiple visual
recognition tasks. The model achieves position- and
scale-tolerant recognition, which is a central problem in pattern recognition.
In this paper, based on some other biological experimental results, we
introduce the Memory and Association Mechanisms into the above biologically
inspired model. The main motivations of the work are (a) to mimic the active
memory and association mechanism and add the 'top down' adjustment to the above
biologically inspired hierarchical model and (b) to build up an algorithm which
can save the space and keep a good recognition performance. The new model is
also applied to object recognition processes. The primary experimental results
show that our method is efficient with much less memory requirement.
|
1307.1390 | Systems Dynamics or Agent-Based Modelling for Immune Simulation? | cs.CE cs.MA | In immune system simulation there are two competing simulation approaches:
System Dynamics Simulation (SDS) and Agent-Based Simulation (ABS). In the
literature there is little guidance on how to choose the best approach for a
specific immune problem. Our overall research aim is to develop a framework
that helps researchers with this choice. In this paper we investigate if it is
possible to easily convert simulation models between approaches. With no
explicit guidelines available from the literature, we develop and test our own
set of guidelines for converting SDS models into ABS models in a non-spatial
scenario. We also define guidelines to convert ABS into SDS considering a
non-spatial and a spatial scenario. After running some experiments with the
developed models we found that in all cases there are significant differences
between the results produced by the different simulation methods.
|
1307.1391 | Quiet in Class: Classification, Noise and the Dendritic Cell Algorithm | cs.LG cs.CR | Theoretical analyses of the Dendritic Cell Algorithm (DCA) have yielded
several criticisms about its underlying structure and operation. As a result,
several alterations and fixes have been suggested in the literature to correct
for these findings. A contribution of this work is to investigate the effects
of replacing the classification stage of the DCA (which is known to be flawed)
with a traditional machine learning technique. This work goes on to question
the merits of those unique properties of the DCA that are yet to be thoroughly
analysed. If none of these properties can be found to have a benefit over
traditional approaches, then "fixing" the DCA is arguably less efficient than
simply creating a new algorithm. This work examines the dynamic filtering
property of the DCA and questions the utility of this unique feature for the
anomaly detection problem. It is found that this feature, while advantageous
for noisy, time-ordered classification, is not as useful as a traditional
static filter for processing a synthetic dataset. It is concluded that there
are still unique features of the DCA left to investigate. Areas that may be of
benefit to the Artificial Immune Systems community are suggested.
|
1307.1394 | Detect adverse drug reactions for drug Alendronate | cs.CE cs.LG | Adverse drug reactions (ADRs) are a widely recognized public health issue. In
this study we propose an original approach to detecting ADRs using a feature
matrix and feature selection. The experiments are carried out on the drug
Simvastatin. Major side effects of the drug are detected and better
performance is achieved compared to other computerized methods. The detected
ADRs are based on a computerized method alone; further investigation is needed.
|
1307.1397 | Secure Source Coding with a Public Helper | cs.IT math.IT | We consider secure multi-terminal source coding problems in the presence of a
public helper. Two main scenarios are studied: 1) source coding with a helper
where the coded side information from the helper is eavesdropped by an external
eavesdropper; 2) triangular source coding with a helper where the helper is
considered as a public terminal. We are interested in how the helper can
support the source transmission subject to a constraint on the amount of
information leaked due to its public nature. We characterize the tradeoff
between transmission rate, incurred distortion, and information leakage rate at
the helper/eavesdropper in the form of a rate-distortion-leakage region for
various classes of problems.
|
1307.1408 | An investigation into the relationship between type-2 FOU size and
environmental uncertainty in robotic control | cs.RO cs.AI | It has been suggested that, when faced with large amounts of uncertainty in
situations of automated control, type-2 fuzzy logic based controllers will
out-perform the simpler type-1 varieties due to the latter lacking the
flexibility to adapt accordingly. This paper aims to investigate this problem
in detail in order to analyse when a type-2 controller will improve upon type-1
performance. A robotic sailing boat is subjected to several experiments in
which the uncertainty and difficulty of the sailing problem is increased in
order to observe the effects on measured performance. Improved performance is
observed, but not in every case. The size of the FOU is shown to have a large
effect on performance, with potentially severe performance penalties for
incorrectly sized footprints.
|
1307.1411 | Discovering Sequential Patterns in a UK General Practice Database | cs.LG cs.CE stat.AP | The wealth of computerised medical information becoming readily available
presents the opportunity to examine patterns of illnesses, therapies and
responses. These patterns may be able to predict illnesses that a patient is
likely to develop, allowing the implementation of preventative actions. In this
paper, sequential rule mining is applied to a General Practice database to find
rules involving a patient's age, gender and medical history. By incorporating
these rules into current health-care a patient can be highlighted as
susceptible to a future illness based on past or current illnesses, gender and
year of birth. This knowledge has the ability to greatly improve health-care
and reduce health-care costs.
|
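As a toy illustration of the kind of sequential rule involved above (ordering matters: the antecedent must appear *before* the consequent in a patient's history), here is a sketch of support/confidence counting. The illness codes and histories are invented, not from the database:

```python
def sequential_rule_stats(histories, antecedent, consequent):
    """Support and confidence of the sequential rule antecedent -> consequent:
    among histories containing the antecedent, how often does the consequent
    appear strictly later in the sequence?"""
    has_ante = 0
    has_both = 0
    for events in histories:
        if antecedent not in events:
            continue
        has_ante += 1
        i = events.index(antecedent)
        if consequent in events[i + 1:]:
            has_both += 1
    support = has_both / len(histories)
    confidence = has_both / has_ante if has_ante else 0.0
    return support, confidence

# hypothetical coded illness histories, ordered by date
histories = [
    ["asthma", "eczema", "hay_fever"],
    ["asthma", "hay_fever"],
    ["diabetes", "hypertension"],
    ["hay_fever", "asthma"],  # wrong order: does not satisfy the rule
]
support, confidence = sequential_rule_stats(histories, "asthma", "hay_fever")
```

A full sequential-rule miner would enumerate candidate antecedent/consequent pairs (typically also conditioned on age band and gender, as the abstract suggests) and keep only rules exceeding support and confidence thresholds.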
1307.1437 | Toward Guaranteed Illumination Models for Non-Convex Objects | cs.CV | Illumination variation remains a central challenge in object detection and
recognition. Existing analyses of illumination variation typically pertain to
convex, Lambertian objects, and guarantee quality of approximation in an
average case sense. We show that it is possible to build V(vertex)-description
convex cone models with worst-case performance guarantees, for non-convex
Lambertian objects. Namely, a natural verification test based on the angle to
the constructed cone guarantees to accept any image which is sufficiently
well-approximated by an image of the object under some admissible lighting
condition, and guarantees to reject any image that does not have a sufficiently
good approximation. The cone models are generated by sampling point
illuminations with sufficient density, which follows from a new perturbation
bound for point images in the Lambertian model. As the number of point images
required for guaranteed verification may be large, we introduce a new
formulation for cone preserving dimensionality reduction, which leverages tools
from sparse and low-rank decomposition to reduce the complexity, while
controlling the approximation error with respect to the original cone.
|
1307.1448 | Distributed Detection and Estimation in Wireless Sensor Networks | cs.DC cs.IT math.IT | In this article we consider the problems of distributed detection and
estimation in wireless sensor networks. In the first part, we provide a general
framework aimed to show how an efficient design of a sensor network requires a
joint organization of in-network processing and communication. Then, we recall
the basic features of the consensus algorithm, which is a basic tool for reaching
globally optimal decisions through a distributed approach. The main part of the
paper starts addressing the distributed estimation problem. We show first an
entirely decentralized approach, where observations and estimations are
performed without the intervention of a fusion center. Then, we consider the
case where the estimation is performed at a fusion center, showing how to
allocate quantization bits and transmit powers in the links between the nodes
and the fusion center, in order to accommodate the requirement on the maximum
estimation variance, under a constraint on the global transmit power. We extend
the approach to the detection problem. Also in this case, we consider the
distributed approach, where every node can achieve a globally optimal decision,
and the case where the decision is taken at a central node. In the latter case,
we show how to allocate coding bits and transmit power in order to maximize the
detection probability, under constraints on the false alarm rate and the global
transmit power. Then, we generalize consensus algorithms illustrating a
distributed procedure that converges to the projection of the observation
vector onto a signal subspace. We then address the issue of energy consumption
in sensor networks, thus showing how to optimize the network topology in order
to minimize the energy necessary to achieve a global consensus. Finally, we
address the problem of matching the topology of the network to the graph
describing the statistical dependencies among the observed variables.
|
1307.1461 | Degrees of Freedom of the Rank-deficient Interference Channel with
Feedback | cs.IT math.IT | We investigate the total degrees of freedom (DoF) of the K-user
rank-deficient interference channel with feedback. For the two-user case, we
characterize the total DoF by developing an achievable scheme and deriving a
matching upper bound. For the three-user case, we develop a new achievable
scheme which employs interference alignment to efficiently utilize the
dimension of the received signal space. In addition, we derive an upper bound
for the general K-user case and show the tightness of the bound when the number
of antennas at each node is sufficiently large. As a consequence of these
results, we show that feedback can increase the DoF when the number of antennas
at each node is large enough as compared to the ranks of channel matrices. This
finding is in contrast to the full-rank interference channel where feedback
provides no DoF gain. The gain comes from using feedback to provide alternative
signal paths, thereby effectively increasing the ranks of desired channel
matrices.
|
1307.1466 | Detect adverse drug reactions for the drug Pravastatin | cs.CE | Adverse drug reactions (ADRs) are a widespread public health concern, and are
among the most common reasons for withdrawing drugs from the market. Prescription
event monitoring (PEM) is an important approach to detecting adverse drug
reactions. The main challenge with this method is how to automatically
extract the medical events or side effects from high-throughput medical data,
which are collected from day to day clinical practice. In this study we propose
an original approach to detect the ADRs using feature matrix and feature
selection. The experiments are carried out on the drug Pravastatin. Major side
effects for the drug are detected. As the detected ADRs are based on a
computerized method, further investigation is needed.
|
1307.1482 | Towards Combining HTN Planning and Geometric Task Planning | cs.AI | In this paper we present an interface between a symbolic planner and a
geometric task planner, which is different to a standard trajectory planner in
that the former is able to perform geometric reasoning on abstract
entities---tasks. We believe that this approach facilitates a more principled
interface to symbolic planning, while also leaving more room for the geometric
planner to make independent decisions. We show how the two planners could be
interfaced, and how their planning and backtracking could be interleaved. We
also provide insights for a methodology for using the combined system, and
experimental results to use as a benchmark with future extensions to both the
combined system, as well as to the geometric task planner.
|
1307.1493 | Dropout Training as Adaptive Regularization | stat.ML cs.LG stat.ME | Dropout and other feature noising schemes control overfitting by artificially
corrupting the training data. For generalized linear models, dropout performs a
form of adaptive regularization. Using this viewpoint, we show that the dropout
regularizer is first-order equivalent to an L2 regularizer applied after
scaling the features by an estimate of the inverse diagonal Fisher information
matrix. We also establish a connection to AdaGrad, an online learning
algorithm, and find that a close relative of AdaGrad operates by repeatedly
solving linear dropout-regularized problems. By casting dropout as
regularization, we develop a natural semi-supervised algorithm that uses
unlabeled data to create a better adaptive regularizer. We apply this idea to
document classification tasks, and show that it consistently boosts the
performance of dropout training, improving on state-of-the-art results on the
IMDB reviews dataset.
|
1307.1508 | Multiple-Level Power Allocation Strategy for Secondary Users in
Cognitive Radio Networks | cs.IT math.IT | In this paper, we propose a multiple-level power allocation strategy for the
secondary user (SU) in cognitive radio (CR) networks. Different from the
conventional strategies, where SU either stays silent or transmits with a
constant/binary power depending on the busy/idle status of the primary user
(PU), the proposed strategy allows SU to choose different power levels
according to a carefully designed function of the received energy. The
power level selection is optimized to maximize the achievable rate of SU
under the constraints of average transmit power at SU and average interference
power at PU. Simulation results demonstrate that the proposed strategy can
significantly improve the performance of SU compared to the conventional
strategies.
|
1307.1514 | Network-Coded Multiple Access | cs.NI cs.IT math.IT | This paper proposes and experimentally demonstrates the first wireless local
area network (WLAN) system that jointly exploits physical-layer network coding
(PNC) and multiuser decoding (MUD) to boost system throughput. We refer to this
multiple access mode as Network-Coded Multiple Access (NCMA). Prior studies on
PNC mostly focused on relay networks. NCMA is the first realized multiple
access scheme that establishes the usefulness of PNC in a non-relay setting.
NCMA allows multiple nodes to transmit simultaneously to the access point (AP)
to boost throughput. In the non-relay setting, when two nodes A and B transmit
to the AP simultaneously, the AP aims to obtain both packet A and packet B
rather than their network-coded packet. An interesting question is whether
network coding, specifically PNC which extracts packet (A XOR B), can still be
useful in such a setting. We provide an affirmative answer to this question
with a novel two-layer decoding approach amenable to real-time implementation.
Our USRP prototype indicates that NCMA can boost throughput by 100% in the
medium-high SNR regime (>=10dB). We believe further throughput enhancement is
possible by allowing more than two users to transmit together.
|
1307.1524 | Fundamentals of Heterogeneous Cellular Networks with Energy Harvesting | cs.IT cs.NI math.IT stat.AP | We develop a new tractable model for K-tier heterogeneous cellular networks
(HetNets), where each base station (BS) is powered solely by a self-contained
energy harvesting module. The BSs across tiers differ in terms of the energy
harvesting rate, energy storage capacity, transmit power and deployment
density. Since a BS may not always have enough energy, it may need to be kept
OFF and allowed to recharge while nearby users are served by neighboring BSs
that are ON. We show that the fraction of time a k^{th} tier BS can be kept ON,
termed availability \rho_k, is a fundamental metric of interest. Using tools
from random walk theory, fixed point analysis and stochastic geometry, we
characterize the set of K-tuples (\rho_1, \rho_2, ... \rho_K), termed the
availability region, that is achievable by general uncoordinated operational
strategies, where the decision to toggle the current ON/OFF state of a BS is
taken independently of the other BSs. If the availability vector corresponding
to the optimal system performance, e.g., in terms of rate, lies in this
availability region, there is no performance loss due to the presence of
unreliable energy sources. As a part of our analysis, we model the temporal
dynamics of the energy level at each BS as a birth-death process, derive the
energy utilization rate, and use hitting/stopping time analysis to prove that
there exists a fundamental limit on \rho_k that cannot be surpassed by any
uncoordinated strategy.
|
1307.1537 | Optimal Power Allocation and User Loading for Multiuser MISO Channels
with Regularized Channel Inversion | cs.IT math.IT | We consider a multiuser system where a single transmitter equipped with
multiple antennas (the base station) communicates with multiple users each with
a single antenna. Regularized channel inversion is employed as the precoding
strategy at the base station. Within this scenario we are interested in the
problems of power allocation and user admission control so as to maximize the
system throughput, i.e., which users should we communicate with and what power
should we use for each of the admitted users so as to get the highest sum rate.
This is in general a very difficult problem but we do two things to allow some
progress to be made. Firstly we consider the large system regime where the
number of antennas at the base station is large along with the number of users.
Secondly we cluster the downlink path gains of users into a finite number of
groups. By doing this we are able to show that the optimal power allocation
under an average transmit power constraint follows the well-known water filling
scheme. We also investigate the user admission problem which reduces in the
large system regime to optimization of the user loading in the system.
|
1307.1543 | Finding Information Through Integrated Ad-Hoc Socializing in the Virtual
and Physical World | cs.IR cs.SI | Despite the services of sophisticated search engines like Google, there are a
number of interesting information sources which are useful but largely
inaccessible to current Web users. These information sources are often ad-hoc,
location-specific and only useful for users over short periods of time, or
relate to tacit knowledge of users or implicit knowledge in crowds. The
solution presented in this paper addresses these problems by introducing an
integrated concept of "location" and "presence" across the physical and virtual
worlds enabling ad-hoc socializing of users interested in, or looking for
similar information. While the definition of presence in the physical world is
straightforward - through a spatial location and vicinity at a certain point in
time - its definition in the virtual world is neither obvious nor trivial.
Based on a detailed analysis we provide an integrated spatial model spanning
both worlds which enables us to define presence of users in a unified way. This
integrated model allows us to enable ad-hoc socializing of users browsing the
Web with users in the physical world specific to their joint information needs
and allows us to unlock the untapped information sources mentioned above. We
describe a proof-of-concept implementation of our model and provide an
empirical analysis based on real-world experiments.
|
1307.1561 | A Sub-block Based Image Retrieval Using Modified Integrated Region
Matching | cs.IR cs.CV | This paper proposes a content based image retrieval (CBIR) system using the
local colour and texture features of selected image sub-blocks and global
colour and shape features of the image. The image sub-blocks are roughly
identified by segmenting the image into partitions of different configuration,
finding the edge density in each partition using edge thresholding followed by
morphological dilation. The colour and texture features of the identified
regions are computed from the histograms of the quantized HSV colour space and
Gray Level Co-occurrence Matrix (GLCM), respectively. The colour and texture
feature vectors are computed for each region. The shape features are computed
from the Edge Histogram Descriptor (EHD). A modified Integrated Region Matching
(IRM) algorithm is used for finding the minimum distance between the sub-blocks
of the query and target image. Experimental results show that the proposed
method provides better retrieval results than some of the existing methods.
|
1307.1568 | Using MathML to Represent Units of Measurement for Improved Ontology
Alignment | cs.AI | Ontologies provide a formal description of concepts and their relationships
in a knowledge domain. The goal of ontology alignment is to identify
semantically matching concepts and relationships across independently developed
ontologies that purport to describe the same knowledge. In order to handle the
widest possible class of ontologies, many alignment algorithms rely on
terminological and structural methods, but the often fuzzy nature of concepts
complicates the matching process. However, one area that should provide clear
matching solutions, due to its mathematical nature, is units of measurement.
Several ontologies for units of measurement are available, but there has been
no attempt to align them, notwithstanding the obvious importance for technical
interoperability. We propose a general strategy to map these (and
similar) ontologies by introducing MathML to accurately capture the semantic
description of concepts specified therein. We provide mapping results for three
ontologies, and show that our approach improves on lexical comparisons.
|
1307.1584 | Comparing Data-mining Algorithms Developed for Longitudinal
Observational Databases | cs.LG cs.CE cs.DB | Longitudinal observational databases have become a recent interest in the
post-marketing drug surveillance community due to their ability to present a
new perspective for detecting negative side effects. Algorithms mining
longitudinal observation databases are not restricted by many of the
limitations associated with the more conventional methods that have been
developed for spontaneous reporting system databases. In this paper we
investigate the robustness of four recently developed algorithms that mine
longitudinal observational databases by applying them to The Health Improvement
Network (THIN) for six drugs with well-documented negative side effects.
Our results show that none of the existing algorithms was able to consistently
identify known adverse drug reactions above events related to the cause of the
drug, and no algorithm was superior.
|
1307.1597 | A Beginners Guide to Systems Simulation in Immunology | cs.CE | Some common systems modelling and simulation approaches for immune problems
are Monte Carlo simulations, system dynamics, discrete-event simulation and
agent-based simulation. These methods, however, are still not widely adopted in
immunology research. In addition, to our knowledge, there is little research on
the processes for the development of simulation models for the immune system.
Hence, for this work, we have two contributions to knowledge. The first one is
to show the importance of systems simulation to help immunological research and
to draw the attention of simulation developers to this research field. The
second contribution is the introduction of a quick guide containing the main
steps for modelling and simulation in immunology, together with challenges that
occur during the model development. Further, this paper introduces an example
of a simulation problem, where we test our guidelines.
|
1307.1598 | Extending a Microsimulation of the Port of Dover | cs.CE | Modelling and simulating the traffic of heavily used but secure environments
such as seaports and airports is of increasing importance. This paper discusses
issues and problems that may arise when extending an existing microsimulation
strategy. We also discuss how extensions of these simulations can aid planners
with optimal physical and operational feedback. Conclusions are drawn about how
microsimulations can be moved forward as a robust planning tool for the 21st
century.
|
1307.1599 | Supervised Learning and Anti-learning of Colorectal Cancer Classes and
Survival Rates from Cellular Biology Parameters | cs.LG cs.CE stat.ML | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to learn relationships between attributes (physical and immunological) and the
resulting tumour stage and survival. Results for conventional machine learning
approaches can be considered poor, especially for predicting tumour stages for
the most important types of cancer. This poor performance is further
investigated and compared with a synthetic dataset based on the logical
exclusive-OR function and it is shown that there is a significant level of
'anti-learning' present in all supervised methods used and this can be
explained by the high-dimensional, complex and sparsely representative
dataset. For predicting the stage of cancer from the immunological attributes,
anti-learning approaches outperform a range of popular algorithms.
|
1307.1601 | Biomarker Clustering of Colorectal Cancer Data to Complement Clinical
Classification | cs.LG cs.CE | In this paper, we describe a dataset relating to cellular and physical
conditions of patients who are operated upon to remove colorectal tumours. This
data provides a unique insight into immunological status at the point of tumour
removal, tumour classification and post-operative survival. Attempts are made
to cluster this dataset and important subsets of it in an effort to
characterize the data and validate existing standards for tumour
classification. It is apparent from optimal clustering that existing tumour
classification is largely unrelated to immunological factors within a patient
and that there may be scope for re-evaluating treatment options and survival
estimates based on a combination of tumour physiology and patient
histochemistry.
|
1307.1625 | Robust Causality Check for Sampled Scattering Parameters via a Filtered
Fourier Transform | cs.CE | We introduce a robust numerical technique to verify the causality of sampled
scattering parameters given on a finite bandwidth. The method is based on a
filtered Fourier transform and includes a rigorous estimation of the errors
caused by missing out-of-band samples. Compared to existing techniques, the
method is simpler to implement and provides useful insight into the time-domain
characteristics of the detected violation. Through an application example, we
show its usefulness in improving the accuracy and reliability of macromodeling
techniques used to convert sampled scattering parameters into models for
transient analysis.
|
1307.1630 | Power Allocation Strategies in Energy Harvesting Wireless Cooperative
Networks | cs.IT math.IT | In this paper, a wireless cooperative network is considered, in which
multiple source-destination pairs communicate with each other via an energy
harvesting relay. The focus of this paper is on the relay's strategies to
distribute the harvested energy among the multiple users and their impact on
the system performance. Specifically, a non-cooperative strategy is to use the
energy harvested from the i-th source as the relay transmission power to the
i-th destination, for which asymptotic results show that the outage performance
decays as log(SNR)/SNR. A faster decay rate, 1/SNR, can be achieved by the two
centralized strategies proposed in this paper, of which the water-filling-based
one achieves optimal performance with respect to several criteria, at the price
of high complexity. An auction-based power allocation
scheme is also proposed to achieve a better tradeoff between the system
performance and complexity. Simulation results are provided to confirm the
accuracy of the developed analytical results and facilitate a better
performance comparison.
|
1307.1656 | Contact-based Social Contagion in Multiplex Networks | physics.soc-ph cond-mat.stat-mech cs.SI | We develop a theoretical framework for the study of epidemic-like social
contagion in large scale social systems. We consider the most general setting
in which different communication platforms or categories form multiplex
networks. Specifically, we propose a contact-based information spreading model,
and show that the critical point of the multiplex system associated to the
active phase is determined by the layer whose contact probability matrix has
the largest eigenvalue. The framework is applied to a number of different
situations, including a real multiplex system. Finally, we also show that when
the system through which information is disseminating is inherently multiplex,
working with the graph that results from the aggregation of the different
layers is flawed.
|
1307.1662 | Polyglot: Distributed Word Representations for Multilingual NLP | cs.CL cs.LG | Distributed word representations (word embeddings) have recently contributed
to competitive performance in language modeling and several NLP tasks. In this
work, we train word embeddings for more than 100 languages using their
corresponding Wikipedias. We quantitatively demonstrate the utility of our word
embeddings by using them as the sole features for training a part of speech
tagger for a subset of these languages. We find their performance to be
competitive with near-state-of-the-art methods in English, Danish and Swedish.
Moreover, we investigate the semantic features captured by these embeddings
through the proximity of word groupings. We will release these embeddings
publicly to help researchers in the development and enhancement of multilingual
applications.
|
1307.1674 | Stochastic Optimization of PCA with Capped MSG | stat.ML cs.LG | We study PCA as a stochastic optimization problem and propose a novel
stochastic approximation algorithm which we refer to as "Matrix Stochastic
Gradient" (MSG), as well as a practical variant, Capped MSG. We study the
method both theoretically and empirically.
|
1307.1681 | Extracting the trustworthiest way to service provider in complex online
social networks | cs.SI cs.ET | In complex online social networks, it is crucial for a service consumer to
extract the trustworthiest way to a target service provider from numerous
social trust paths between them. The extraction of the trustworthiest way
(namely, optimal social trust path (OSTP)) with multiple end-to-end quality of
trust (QoT) constraints has been proved to be NP-Complete. Heuristic algorithms
with polynomial and pseudo-polynomial-time complexities are often used to deal
with this challenging problem. However, existing solutions cannot guarantee the
efficiency of the search; that is, they can hardly avoid obtaining partially
optimal solutions during the search process. Quantum annealing uses
delocalization and tunneling to avoid falling into local minima without
sacrificing execution time. It has been shown to be a promising approach to many
optimization problems in the recent literature. In this paper, for the
first time, a QA-based OSTP algorithm (QA_OSTP) is applied to the extraction of
the trustworthiest way. The experimental results show that the QA-based
algorithm performs better than its heuristic counterparts.
|
1307.1690 | An efficient reconciliation algorithm for social networks | cs.DS cs.SI | People today typically use multiple online social networks (Facebook,
Twitter, Google+, LinkedIn, etc.). Each online network represents a subset of
their "real" ego-networks. An interesting and challenging problem is to
reconcile these online networks, that is, to identify all the accounts
belonging to the same individual. Besides providing a richer understanding of
social dynamics, the problem has a number of practical applications. At first
sight, this problem appears algorithmically challenging. Fortunately, a small
fraction of individuals explicitly link their accounts across multiple
networks; our work leverages these connections to identify a very large
fraction of the network.
Our main contributions are to mathematically formalize the problem for the
first time, and to design a simple, local, and efficient parallel algorithm to
solve it. We are able to prove strong theoretical guarantees on the algorithm's
performance on well-established network models (Random Graphs, Preferential
Attachment). We also experimentally confirm the effectiveness of the algorithm
on synthetic and real social network data sets.
|
1307.1718 | Graph-based Approach to Automatic Taxonomy Generation (GraBTax) | cs.IR | We propose a novel graph-based approach for constructing concept hierarchy
from a large text corpus. Our algorithm, GraBTax, incorporates both statistical
co-occurrences and lexical similarity in optimizing the structure of the
taxonomy. To automatically generate topic-dependent taxonomies from a large
text corpus, GraBTax first extracts topical terms and their relationships from
the corpus. The algorithm then constructs a weighted graph representing topics
and their associations. A graph partitioning algorithm is then used to
recursively partition the topic graph into a taxonomy. For evaluation, we apply
GraBTax to articles, primarily computer science, in the CiteSeerX digital
library and search engine. The quality of the resulting concept hierarchy is
assessed by both human judges and comparison with Wikipedia categories.
|
1307.1739 | Anatomical Feature-guided Volumeric Registration of Multimodal Prostate
MRI | cs.CV cs.GR | Radiological imaging of prostate is becoming more popular among researchers
and clinicians in searching for diseases, primarily cancer. Scans might be
acquired at different times, with patient movement between scans, or with
different equipment, resulting in multiple datasets that need to be registered.
To address this issue, we introduce a registration method using anatomical
feature-guided mutual information. Prostate scans of the same patient taken in
three different orientations are first aligned for the accurate detection of
anatomical features in 3D. Then, our pipeline allows for multiple modalities
registration through the use of anatomical features, such as the interior
urethra of prostate and gland utricle, in a bijective way. The novelty of this
approach is the application of anatomical features as the pre-specified
corresponding landmarks for prostate registration. We evaluate the registration
results through both artificial and clinical datasets. Registration accuracy is
evaluated by performing statistical analysis of local intensity differences or
spatial differences of anatomical landmarks between various MR datasets.
Evaluation results demonstrate that our method statistically significantly
improves the quality of registration. Although this strategy is tested for
MRI-guided brachytherapy, the preliminary results from these experiments
suggest that it can also be applied to other settings such as transrectal
ultrasound-guided or CT-guided therapy, where the integration of preoperative
MRI may have a significant impact upon treatment planning and guidance.
|
1307.1746 | Generalized Quasi-Cyclic Codes Over $\mathbb{F}_q+u\mathbb{F}_q$ | cs.IT math.IT | Generalized quasi-cyclic (GQC) codes with arbitrary lengths over the ring
$\mathbb{F}_{q}+u\mathbb{F}_{q}$, where $u^2=0$, $q=p^n$, $n$ a positive
integer and $p$ a prime number, are investigated. By the Chinese Remainder
Theorem, structural properties and the decomposition of GQC codes are given.
For 1-generator GQC codes, minimal generating sets and lower bounds on the
minimum distance are given. As a special class of GQC codes, quasi-cyclic (QC)
codes over $\mathbb{F}_q+u\mathbb{F}_q$ are also discussed briefly in this
paper.
|
1307.1751 | Study and Development of a Data Acquisition & Control (DAQ) System using
TCP/Modbus Protocol | cs.SY cs.HC physics.ins-det | The aim of the project was to develop an HMI (Human-Machine Interface) with
the help of which a person could remotely control and monitor the Vacuum
measurement system. The Vacuum measurement system was constructed using a DAQ
(Data Acquisition & Control) implementation instead of a PLC based
implementation because of the cost and complexity involved in
deployment when only one basic parameter, i.e. vacuum, is required to be
measured. The system is to be installed in the Superconducting Cyclotron
section of VECC. The need for remote monitoring arises as during the operation
of the K500 Superconducting Cyclotron, people are not allowed to enter within a
certain specified range due to effective ion radiation. Using the designed
software, i.e. the HMI, the objective of remote monitoring could be
achieved effortlessly from any area in the safe zone. Moreover, the
software was designed in a way that data could be recorded real time and in an
unmanned way. The hardware is also easy to setup and overcomes the complexity
involved in interfacing a PLC with other hardware. The deployment time is also
quite fast. Lastly, the practical results obtained showed an appreciable degree
of accuracy and user-friendliness of the system.
|
1307.1759 | Approximate dynamic programming using fluid and diffusion approximations
with applications to power management | cs.LG math.OC | Neuro-dynamic programming is a class of powerful techniques for approximating
the solution to dynamic programming equations. In their most computationally
attractive formulations, these techniques provide the approximate solution only
within a prescribed finite-dimensional function class. Thus, the question that
always arises is how should the function class be chosen? The goal of this
paper is to propose an approach using the solutions to associated fluid and
diffusion approximations. In order to illustrate this approach, the paper
focuses on an application to dynamic speed scaling for power management in
computer processors.
|
1307.1769 | Ensemble Methods for Multi-label Classification | stat.ML cs.LG | Ensemble methods have been shown to be an effective tool for solving
multi-label classification tasks. In the RAndom k-labELsets (RAKEL) algorithm,
each member of the ensemble is associated with a small randomly-selected subset
of k labels. Then, a single label classifier is trained according to each
combination of elements in the subset. In this paper we adopt a similar
approach, however, instead of randomly choosing subsets, we select the minimum
required subsets of k labels that cover all labels and meet additional
constraints such as coverage of inter-label correlations. Construction of the
cover is achieved by formulating the subset selection as a minimum set covering
problem (SCP) and solving it by using approximation algorithms. Every cover
needs only to be prepared once by offline algorithms. Once prepared, a cover
may be applied to the classification of any given multi-label dataset whose
properties conform with those of the cover. The contribution of this paper is
two-fold. First, we introduce SCP as a general framework for constructing label
covers while allowing the user to incorporate cover construction constraints.
We demonstrate the effectiveness of this framework by proposing two
construction constraints whose enforcement produces covers that improve the
prediction performance of random selection. Second, we provide theoretical
bounds that quantify the probabilities of random selection to produce covers
that meet the proposed construction criteria. The experimental results indicate
that the proposed methods improve multi-label classification accuracy and
stability compared with the RAKEL algorithm and other state-of-the-art
algorithms.
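The set-cover construction at the heart of the approach can be sketched with the classical greedy approximation (a hedged illustration only: the paper's actual SCP algorithms and additional constraints are richer, and all names below are ours):

```python
from itertools import combinations

def greedy_label_cover(labels, k):
    """Greedy ln(n)-approximation to minimum set cover: repeatedly pick
    the k-labelset covering the most still-uncovered labels."""
    uncovered = set(labels)
    candidates = [set(c) for c in combinations(labels, k)]
    cover = []
    while uncovered:
        best = max(candidates, key=lambda s: len(s & uncovered))
        cover.append(best)
        uncovered -= best
    return cover

cover = greedy_label_cover(range(6), k=3)
assert set().union(*cover) == set(range(6))  # every label is covered
```

Each selected k-labelset would then train one ensemble member, as in RAKEL, but with label coverage guaranteed by construction.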
|
1307.1770 | Improving A*OMP: Theoretical and Empirical Analyses With a Novel Dynamic
Cost Model | cs.IT math.IT | Best-first search has been recently utilized for compressed sensing (CS) by
the A* orthogonal matching pursuit (A*OMP) algorithm. In this work, we
concentrate on theoretical and empirical analyses of A*OMP. We present a
restricted isometry property (RIP) based general condition for exact recovery
of sparse signals via A*OMP. In addition, we develop online guarantees which
promise improved recovery performance with the residue-based termination
instead of the sparsity-based one. We demonstrate the recovery capabilities of
A*OMP with extensive recovery simulations using the adaptive-multiplicative
(AMul) cost model, which effectively compensates for the path length
differences in the search tree. The presented results, involving phase
transitions for different nonzero element distributions as well as recovery
rates and average error, reveal not only the superior recovery accuracy of
A*OMP, but also the improvements with the residue-based termination and the
AMul cost model. Comparison of the run times indicates the speed-up achieved
by the AMul cost model. We also demonstrate a hybrid of OMP and A*OMP to
accelerate the search further. Finally, we run A*OMP on a sparse image to
illustrate its recovery performance for more realistic coefficient distributions.
|
1307.1786 | MacWilliams type identities for some new $m$-spotty weight enumerators
over finite commutative Frobenius rings | cs.IT math.IT | The past few years have seen extensive use of RAM chips with wide I/O data
(e.g. 16, 32, 64 bits) in computer memory systems. These chips are highly
vulnerable to a special type of byte error, called an $m$-spotty byte error,
which can be effectively detected or corrected using byte error-control codes.
The MacWilliams identity provides the relationship between the weight
distribution of a code and that of its dual. This paper introduces $m$-spotty
Hamming weight enumerator, joint $m$-spotty Hamming weight enumerator and split
$m$-spotty Hamming weight enumerator for byte error-control codes over finite
commutative Frobenius rings as well as $m$-spotty Lee weight enumerator over an
infinite family of rings. In addition, MacWilliams type identities are also
derived for these enumerators.
|
1307.1790 | Lifting Structural Tractability to CSP with Global Constraints | cs.AI | A wide range of problems can be modelled as constraint satisfaction problems
(CSPs), that is, a set of constraints that must be satisfied simultaneously.
Constraints can either be represented extensionally, by explicitly listing
allowed combinations of values, or implicitly, by special-purpose algorithms
provided by a solver. Such implicitly represented constraints, known as global
constraints, are widely used; indeed, they are one of the key reasons for the
success of constraint programming in solving real-world problems.
In recent years, a variety of restrictions on the structure of CSP instances
that yield tractable classes have been identified. However, many such
restrictions fail to guarantee tractability for CSPs with global constraints.
In this paper, we investigate the properties of extensionally represented
constraints that these restrictions exploit to achieve tractability, and show
that there are large classes of global constraints that also possess these
properties. This allows us to lift these restrictions to the global case, and
identify new tractable classes of CSPs with global constraints.
|
1307.1827 | Loss minimization and parameter estimation with heavy tails | cs.LG stat.ML | This work studies applications and generalizations of a simple estimation
technique that provides exponential concentration under heavy-tailed
distributions, assuming only bounded low-order moments. We show that the
technique can be used for approximate minimization of smooth and strongly
convex losses, and specifically for least squares linear regression. For
instance, our $d$-dimensional estimator requires just
$\tilde{O}(d\log(1/\delta))$ random samples to obtain a constant factor
approximation to the optimal least squares loss with probability $1-\delta$,
without requiring the covariates or noise to be bounded or subgaussian. We
provide further applications to sparse linear regression and low-rank
covariance matrix estimation with similar allowances on the noise and covariate
distributions. The core technique is a generalization of the median-of-means
estimator to arbitrary metric spaces.
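A hedged one-dimensional sketch of the median-of-means estimator that the paper generalizes (the block count and test data below are illustrative assumptions, not the paper's):

```python
import numpy as np

def median_of_means(x, k):
    """Split the sample into k blocks, average each block, and return
    the median of the block means. Unlike the empirical mean, this
    concentrates exponentially assuming only bounded low-order moments."""
    blocks = np.array_split(np.asarray(x, dtype=float), k)
    return float(np.median([b.mean() for b in blocks]))

rng = np.random.default_rng(0)
sample = rng.normal(loc=1.0, scale=1.0, size=10000)
est = median_of_means(sample, k=20)
```

The same blocking idea is what generalizes to arbitrary metric spaces in the paper, with the scalar median replaced by a suitable generalized median.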
|
1307.1829 | Group performance is maximized by hierarchical competence distribution | physics.soc-ph cs.SI | Groups of people or even robots often face problems they need to solve
together. Examples include collectively searching for resources, choosing when
and where to invest time and effort, and many more. Although a hierarchical
ordering of the relevance of the group members' inputs during collective
decision making is abundant, a quantitative demonstration of its origin and
advantages using a generic approach has not been described yet. Here we
introduce a family of models based on the most general features of group
decision making to show that the optimal distribution of competences is a
highly skewed function with a structured fat tail. Our results have been
obtained by optimizing the groups' compositions through identifying the best
performing distributions for both the competences and for the members'
flexibilities/pliancies. Potential applications include choosing the best
composition for a group intended to solve a given task.
|
1307.1834 | Multiple Vectors Propagation of Epidemics in Complex Networks | physics.soc-ph cs.SI | This letter investigates the epidemic spreading in two-vectors propagation
network (TPN). We propose a detailed theoretical analysis that allows us to
accurately calculate the epidemic threshold and outbreak size. It is found that
the epidemics can spread across the TPN even if two sub-single-vector
propagation networks (SPNs) of TPN are well below their respective epidemic
thresholds. Strong positive degree-degree correlation of nodes in TPN could
lead to a much lower epidemic threshold and a relatively smaller outbreak size.
However, the average similarity between the neighbors from different SPNs of
nodes has no effect on the epidemic threshold and outbreak size.
|
1307.1870 | Crossing the Reality Gap: a Short Introduction to the Transferability
Approach | cs.RO | In robotics, gradient-free optimization algorithms (e.g. evolutionary
algorithms) are often used only in simulation because they require the
evaluation of many candidate solutions. Nevertheless, solutions obtained in
simulation often do not work well on the real device. The transferability
approach aims at crossing this gap between simulation and reality by
\emph{making the optimization algorithm aware of the limits of the simulation}.
In the present paper, we first describe the transferability function, that
maps solution descriptors to a score representing how well a simulator matches
the reality. We then show that this function can be learned using a regression
algorithm and a few experiments with the real devices. Our results are
supported by an extensive study of the reality gap for a simple quadruped robot
whose control parameters are optimized. In particular, we mapped the whole
search space in reality and in simulation to understand the differences between
the fitness landscapes.
|
1307.1872 | Intelligent Hybrid Man-Machine Translation Quality Estimation | cs.CL | Inferring evaluation scores based on human judgments is invaluable compared
to using current evaluation metrics, which are not suitable for real-time
applications, e.g. post-editing. However, these judgments are much more
expensive to collect especially from expert translators, compared to evaluation
based on indicators contrasting source and translation texts. This work
introduces a novel approach for quality estimation by combining learnt
confidence scores from a probabilistic inference model based on human
judgments, with selective linguistic features-based scores, where the proposed
inference model infers the credibility of given human ranks to solve the
scarcity and inconsistency issues of human judgments. Experimental results,
using challenging language-pairs, demonstrate improvement in correlation with
human judgments over traditional evaluation metrics.
|
1307.1879 | On Stochastic Subgradient Mirror-Descent Algorithm with Weighted
Averaging | math.OC cs.SY | This paper considers stochastic subgradient mirror-descent method for solving
constrained convex minimization problems. In particular, a stochastic
subgradient mirror-descent method with weighted iterate-averaging is
investigated and its per-iterate convergence rate is analyzed. The novel part
of the approach is in the choice of weights that are used to construct the
averages. Through the use of these weighted averages, we show that the known
optimal rates can be obtained with simpler algorithms than those currently
existing in the literature. Specifically, by suitably choosing the stepsize
values, one can obtain the rate of the order $1/k$ for strongly convex
functions, and the rate $1/\sqrt{k}$ for general convex functions (not
necessarily differentiable). Furthermore, for the latter case, it is shown that
a stochastic subgradient mirror-descent with iterate averaging converges (along
a subsequence) to an optimal solution, almost surely, even with the stepsize of
the form $1/\sqrt{1+k}$, which was not previously known. The stepsize choices
that achieve the best rates are those proposed by Paul Tseng for acceleration
of proximal gradient methods.
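As a hedged toy illustration (the Euclidean case only, i.e. plain subgradient descent rather than a general mirror map; the target function and constants are ours), stepsize-weighted iterate averaging with the $1/\sqrt{1+k}$ stepsize looks like:

```python
import numpy as np

def weighted_avg_subgradient(grad, x0, steps, c=1.0):
    """Subgradient descent with stepsize a_k = c/sqrt(k+1); returns the
    average of the iterates weighted by the stepsizes (a simple instance
    of the weighted averaging analyzed in the paper)."""
    x = np.asarray(x0, dtype=float)
    num = np.zeros_like(x)
    den = 0.0
    for k in range(steps):
        a = c / np.sqrt(k + 1)
        x = x - a * grad(x)       # subgradient step
        num += a * x              # accumulate weighted iterates
        den += a
    return num / den

# Minimize the nondifferentiable convex f(x) = |x - 3|.
est = weighted_avg_subgradient(lambda x: np.sign(x - 3.0), np.array([0.0]), 2000)
```

The weighted average damps the oscillation of the raw iterates around the minimizer, which is what yields the clean convergence rates.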
|
1307.1890 | Solution of Rectangular Fuzzy Games by Principle of Dominance Using
LR-type Trapezoidal Fuzzy Numbers | cs.AI | Fuzzy Set Theory has been applied in many fields such as Operations Research,
Control Theory, and Management Sciences etc. In particular, an application of
this theory in Managerial Decision Making Problems has a remarkable
significance. In this Paper, we consider a solution of Rectangular Fuzzy game
with pay-off as imprecise numbers instead of crisp numbers viz., interval and
LR-type Trapezoidal Fuzzy Numbers. The solution of such Fuzzy games with pure
strategies by minimax-maximin principle is discussed. The Algebraic Method to
solve Fuzzy games without saddle point by using mixed strategies is also
illustrated. Here, the pay-off matrix is reduced to a smaller pay-off matrix
by the Dominance Method. This fact is illustrated by means of a Numerical Example.
|
1307.1891 | A Comparative study of Transportation Problem under Probabilistic and
Fuzzy Uncertainties | cs.AI | Transportation Problem is an important aspect which has been widely studied
in Operations Research domain. It has been studied to simulate different real
life problems. In particular, application of this Problem in NP-Hard Problems
has a remarkable significance. In this Paper, we present a comparative study of
Transportation Problem through Probabilistic and Fuzzy Uncertainties. Fuzzy
Logic is a computational paradigm that generalizes classical two-valued logic
for reasoning under uncertainty. In order to achieve this, the notion of
membership in a set needs to become a matter of degree. By doing this we
accomplish two things viz., (i) ease of describing human knowledge involving
vague concepts and (ii) enhanced ability to develop cost-effective solution to
real-world problem. The multi-valued nature of Fuzzy Sets allows handling
uncertain and vague information. It is a model-less approach and a clever
disguise of Probability Theory. We give comparative simulation results of both
approaches and discuss the Computational Complexity. To the best of our
knowledge, this is the first work on comparative study of Transportation
Problem using Probabilistic and Fuzzy Uncertainties.
|
1307.1893 | Trapezoidal Fuzzy Numbers for the Transportation Problem | cs.AI | Transportation Problem is an important problem which has been widely studied
in Operations Research domain. It has been often used to simulate different
real life problems. In particular, application of this Problem in NP-Hard
Problems has a remarkable significance. In this Paper, we present the closed,
bounded and non empty feasible region of the transportation problem using fuzzy
trapezoidal numbers which ensures the existence of an optimal solution to the
balanced transportation problem. The multivalued nature of Fuzzy Sets allows
handling of the uncertainty and vagueness involved in the cost values of each
cell in the transportation table. For finding the initial solution of the
transportation problem we use the Fuzzy Vogel Approximation Method and for
determining the optimality of the obtained solution Fuzzy Modified Distribution
Method is used. The fuzzification of the cost of the transportation problem is
discussed with the help of a numerical example. Finally, we discuss the
computational complexity involved in the problem. To the best of our knowledge,
this is the first work on obtaining the solution of the transportation problem
using fuzzy trapezoidal numbers.
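For illustration, here is a common centre-of-area (centroid) defuzzification of a trapezoidal fuzzy number, of the kind used to compare fuzzy costs; this is a hedged sketch and not necessarily the exact ranking the paper employs:

```python
def trapezoid_centroid(a, b, c, d):
    """Centre-of-area defuzzification of a trapezoidal fuzzy number
    (a <= b <= c <= d): membership rises on [a, b], is 1 on [b, c],
    and falls on [c, d]. Returns the centroid of the area under the
    membership function."""
    if a == b == c == d:          # degenerate crisp number
        return float(a)
    num = (d**2 + c**2 + c*d) - (a**2 + b**2 + a*b)
    return num / (3.0 * ((d + c) - (a + b)))

# Symmetric trapezoid: centroid sits in the middle.
assert abs(trapezoid_centroid(0, 1, 2, 3) - 1.5) < 1e-9
```

Ranking fuzzy transportation costs by such a crisp value is one standard way to drive Vogel-style penalty comparisons.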
|
1307.1895 | Discovering Stock Price Prediction Rules of Bombay Stock Exchange Using
Rough Fuzzy Multi Layer Perception Networks | cs.AI | In India financial markets have existed for many years. A functionally
accented, diverse, efficient and flexible financial system is vital to the
national objective of creating a market driven, productive and competitive
economy. Today markets of varying maturity exist in equity, debt, commodities
and foreign exchange. In this work we attempt to generate prediction rules
scheme for stock price movement at the Bombay Stock Exchange using an important
Soft Computing paradigm, viz., the Rough Fuzzy Multi Layer Perceptron. The use of
Computational Intelligence Systems such as Neural Networks, Fuzzy Sets, Genetic
Algorithms, etc. for Stock Market Predictions has been widely established. The
process is to extract knowledge in the form of rules from daily stock
movements. These rules can then be used to guide investors. To increase the
efficiency of the prediction process, Rough Sets is used to discretize the
data. The methodology uses a Genetic Algorithm to obtain a structured network
suitable for both classification and rule extraction. The modular concept,
based on divide and conquer strategy, provides accelerated training and a
compact network suitable for generating a minimum number of rules with high
certainty values. The concept of variable mutation operator is introduced for
preserving the localized structure of the constituting Knowledge Based
sub-networks, while they are integrated and evolved. Rough Set Dependency Rules
are generated directly from the real valued attribute table containing Fuzzy
membership values. The paradigm is thus used to develop a rule extraction
algorithm. The extracted rules are compared with some of the related rule
extraction techniques on the basis of some quantitative performance indices.
The proposed methodology extracts rules which are less in number, are accurate,
have high certainty factor and have low confusion with less computation time.
|
1307.1900 | Fuzzy Integer Linear Programming Mathematical Models for Examination
Timetable Problem | cs.AI | The Examination Timetable Problem (ETP) is an NP-Hard combinatorial optimization problem. It has received tremendous
research attention during the past few years given its wide use in
universities. In this Paper, we develop three mathematical models for NSOU,
Kolkata, India using FILP technique. To deal with impreciseness and vagueness
we model various allocation variables through fuzzy numbers. The solution to
the problem is obtained using a Fuzzy number ranking method. Each feasible
solution has a fuzzy number obtained from the Fuzzy objective function. The
performance of the different FILP techniques is demonstrated on experimental
data generated through extensive simulation from NSOU, Kolkata, India, in
terms of execution times. The proposed FILP models are compared with the
commonly used heuristic, viz. the ILP approach, on experimental data, which
gives an idea about the quality of the heuristic. The techniques are also compared with different
Artificial Intelligence based heuristics for ETP with respect to best and mean
cost as well as execution time measures on Carter benchmark datasets to
illustrate its effectiveness. FILP takes an appreciable amount of time to
generate a satisfactory solution in comparison to other heuristics. The
formulation thus serves as a good benchmark for other heuristics. The
experimental study presented here focuses on producing a methodology that
generalizes well over a spectrum of techniques and generates significant
results for one or more datasets. The performance of the FILP model is finally
compared to the best results cited in the literature for the Carter benchmarks
to assess its potential. The problem can be further reduced by formulating it
with a smaller number of allocation variables without affecting the optimality
of the solution obtained. The FILP model for ETP can also be adapted to solve
other ETPs as well as other combinatorial optimization problems.
|
1307.1903 | Achieving greater Explanatory Power and Forecasting Accuracy with
Non-uniform spread Fuzzy Linear Regression | cs.AI | Fuzzy regression models have been applied to several Operations Research
applications viz., forecasting and prediction. Earlier works on fuzzy
regression analysis obtain crisp regression coefficients for eliminating the
problem of increasing spreads for the estimated fuzzy responses as the
magnitude of the independent variable increases. But they cannot deal with the
problem of non-uniform spreads. In this work, a three-phase approach is
discussed to construct the fuzzy regression model with non-uniform spreads to
deal with this problem. The first phase constructs the membership functions of
the least-squares estimates of regression coefficients based on extension
principle to completely conserve the fuzziness of observations. They are then
defuzzified by the centre of area method to obtain crisp regression
coefficients in the second phase. Finally, the error terms of the method are
determined by setting each estimated spread equal to its corresponding observed
spread. The Takagi-Sugeno inference system is used for improving the accuracy
of forecasts. The simulation example demonstrates the strength of fuzzy linear
regression model in terms of higher explanatory power and forecasting
performance.
|
1307.1905 | A Dynamic Algorithm for the Longest Common Subsequence Problem using Ant
Colony Optimization Technique | cs.AI | We present a dynamic algorithm for solving the Longest Common Subsequence
Problem using Ant Colony Optimization Technique. The Ant Colony Optimization
Technique has been applied to solve many problems in Optimization Theory,
Machine Learning and Telecommunication Networks etc. In particular, application
of this theory in NP-Hard Problems has a remarkable significance. Given two
strings, the traditional technique for finding Longest Common Subsequence is
based on Dynamic Programming which consists of creating a recurrence relation
and filling a table of size $m \times n$, where $m$ and $n$ are the lengths of
the two strings. The proposed algorithm draws an analogy with the behavior of
ant colonies, and this new computational paradigm is known
as Ant System. It is a viable new approach to Stochastic Combinatorial
Optimization. The main characteristics of this model are positive feedback,
distributed computation, and the use of a constructive greedy heuristic. Positive
feedback accounts for rapid discovery of good solutions, distributed
computation avoids premature convergence, and the greedy heuristic helps find
acceptable solutions in a minimum number of stages. We apply the proposed
methodology to Longest Common Subsequence Problem and give the simulation
results. The effectiveness of this approach is demonstrated by efficient
Computational Complexity. To the best of our knowledge, this is the first Ant
Colony Optimization Algorithm for Longest Common Subsequence Problem.
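For reference, the traditional dynamic-programming baseline the abstract contrasts against (the standard textbook recurrence, not the proposed ACO algorithm) can be sketched as:

```python
def lcs_length(a, b):
    """Classic DP for Longest Common Subsequence: fill an
    (m+1) x (n+1) table T where T[i][j] is the LCS length of
    the prefixes a[:i] and b[:j]."""
    m, n = len(a), len(b)
    T = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                T[i][j] = T[i - 1][j - 1] + 1   # extend a common subsequence
            else:
                T[i][j] = max(T[i - 1][j], T[i][j - 1])
    return T[m][n]

assert lcs_length("ABCBDAB", "BDCABA") == 4   # e.g. "BCBA"
```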
|
1307.1927 | Link Based Session Reconstruction: Finding All Maximal Paths | cs.DB | This paper introduces a new method for the session construction problem,
which is the first main step of the web usage mining process. Through
experiments, it is shown that when our new technique is used, it outperforms
previous approaches in web usage mining applications such as next-page
prediction.
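The "all maximal paths" enumeration in the title can be illustrated with a hedged depth-first sketch (the adjacency-list representation and names are our assumptions, not the paper's method):

```python
def all_maximal_paths(adj, start):
    """Enumerate all maximal (non-extendable) simple paths from `start`
    in a link graph given as an adjacency list."""
    paths = []
    def dfs(path):
        extended = False
        for nxt in adj.get(path[-1], []):
            if nxt not in path:          # keep the path simple
                extended = True
                dfs(path + [nxt])
        if not extended:                 # path cannot grow: it is maximal
            paths.append(path)
    dfs([start])
    return paths

# Toy link graph: pages A -> {B, C}, B -> C.
paths = all_maximal_paths({'A': ['B', 'C'], 'B': ['C'], 'C': []}, 'A')
assert paths == [['A', 'B', 'C'], ['A', 'C']]
```

In session reconstruction, each maximal path would be one candidate session consistent with the site's link structure.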
|
1307.1940 | Reinforcing Power Grid Transmission with FACTS Devices | math.OC cs.SY physics.soc-ph | We explore optimization methods for planning the placement, sizing and
operations of Flexible Alternating Current Transmission System (FACTS) devices
installed into the grid to relieve congestion created by load growth or
fluctuations of intermittent renewable generation. We limit our selection of
FACTS devices to those that can be represented by modification of the
inductance of the transmission lines. Our master optimization problem minimizes
the $l_1$ norm of the FACTS-associated inductance correction subject to
constraints enforcing that no line of the system exceeds its thermal limit. We
develop off-line heuristics that reduce this non-convex optimization to a
succession of Linear Programs (LP) where at each step the constraints are
linearized analytically around the current operating point. The algorithm is
accelerated further with a version of the cutting plane method greatly reducing
the number of active constraints during the optimization, while checking
feasibility of the non-active constraints post-factum. This hybrid algorithm
solves a typical single-contingency problem over the MATPOWER Polish grid
model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard
laptop---a speed up that allows the sizing and placement of a family of FACTS
devices to correct a large set of anticipated contingencies. From testing of
multiple examples, we observe that our algorithm finds feasible solutions that
are always sparse, i.e., FACTS devices are placed on only a few lines. The
optimal FACTS are not always placed on the originally congested lines, however
typically the correction(s) is made at line(s) positioned in a relative
proximity of the overload line(s).
|
1307.1944 | READ-EVAL-PRINT in Parallel and Asynchronous Proof-checking | cs.LO cs.AI cs.HC | The LCF tradition of interactive theorem proving, which was started by Milner
in the 1970s, appears to be tied to the classic READ-EVAL-PRINT-LOOP of
sequential and synchronous evaluation of prover commands. We break up this loop
and retrofit the read-eval-print phases into a model of parallel and
asynchronous proof processing. Thus we explain some key concepts of the
Isabelle/Scala approach to prover interaction and integration, and the
Isabelle/jEdit Prover IDE as front-end technology. We hope to open up the
scientific discussion about non-trivial interaction models for ITP systems
again, and help getting other old-school proof assistants on a similar track.
|
1307.1949 | Orthogonal Matching Pursuit with Thresholding and its Application in
Compressive Sensing | cs.IT math.IT | Greed is good. However, the tighter you squeeze, the less you have. In this
paper, a less greedy algorithm for sparse signal reconstruction in compressive
sensing, named orthogonal matching pursuit with thresholding is studied. Using
the global 2-coherence, which provides a "bridge" between the well-known
mutual coherence and the restricted isometry constant, the performance of
orthogonal matching pursuit with thresholding is analyzed and more general
results for sparse signal reconstruction are obtained. It is also shown that
given the same assumption on the coherence index and the restricted isometry
constant as required for orthogonal matching pursuit, the thresholding
variation gives exactly the same reconstruction performance with significantly
less complexity.
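A hedged sketch of plain OMP for context (the thresholding variant studied in the paper accepts the first column whose correlation exceeds a threshold instead of running the full argmax search; the orthonormal-dictionary demo below is our assumption, chosen so that exact recovery is guaranteed):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit all picked columns by
    least squares and update the residual."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # full search (argmax)
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(10, 10)))   # orthonormal dictionary
x0 = np.zeros(10); x0[[1, 4, 7]] = [1.0, -2.0, 1.5]
xhat = omp(Q, Q @ x0, k=3)
```

With an orthonormal dictionary the correlations equal the true coefficients, so both the argmax rule and any sensible threshold rule select the same support.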
|
1307.1954 | B-tests: Low Variance Kernel Two-Sample Tests | cs.LG stat.ML | A family of maximum mean discrepancy (MMD) kernel two-sample tests is
introduced. Members of the test family are called Block-tests or B-tests, since
the test statistic is an average over MMDs computed on subsets of the samples.
The choice of block size allows control over the tradeoff between test power
and computation time. In this respect, the $B$-test family combines favorable
properties of previously proposed MMD two-sample tests: B-tests are more
powerful than a linear time test where blocks are just pairs of samples, yet
they are more computationally efficient than a quadratic time test where a
single large block incorporating all the samples is used to compute a
U-statistic. A further important advantage of the B-tests is their
asymptotically Normal null distribution: this is by contrast with the
U-statistic, which is degenerate under the null hypothesis, and for which
estimates of the null distribution are computationally demanding. Recent
results on kernel selection for hypothesis testing transfer seamlessly to the
B-tests, yielding a means to optimize test power via kernel choice.
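A hedged one-dimensional sketch of the B-test statistic (Gaussian kernel; the bandwidth, block size, and data are illustrative assumptions):

```python
import numpy as np

def rbf(a, b, s=1.0):
    """Gaussian kernel matrix between 1-D samples a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * s * s))

def mmd2_unbiased(x, y, s=1.0):
    """Unbiased quadratic-time MMD^2 estimate on one block."""
    n = len(x)
    kxx = rbf(x, x, s); np.fill_diagonal(kxx, 0.0)
    kyy = rbf(y, y, s); np.fill_diagonal(kyy, 0.0)
    return (kxx.sum() + kyy.sum()) / (n * (n - 1)) - 2.0 * rbf(x, y, s).mean()

def b_test_statistic(x, y, block_size):
    """B-test: average the MMD^2 estimates over disjoint blocks."""
    m = len(x) // block_size
    return float(np.mean([mmd2_unbiased(x[i*block_size:(i+1)*block_size],
                                        y[i*block_size:(i+1)*block_size])
                          for i in range(m)]))

rng = np.random.default_rng(0)
x = rng.normal(0, 1, 400)
same = b_test_statistic(x, rng.normal(0, 1, 400), block_size=20)
diff = b_test_statistic(x, rng.normal(2, 1, 400), block_size=20)
```

The block size trades power (larger blocks) against computation (smaller blocks), exactly the tradeoff described above.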
|
1307.1960 | Modal Analysis with Compressive Measurements | cs.IT math.IT | Structural Health Monitoring (SHM) systems are critical for monitoring aging
infrastructure (such as buildings or bridges) in a cost-effective manner. Such
systems typically involve collections of battery-operated wireless sensors that
sample vibration data over time. After the data is transmitted to a central
node, modal analysis can be used to detect damage in the structure. In this
paper, we propose and study three frameworks for Compressive Sensing (CS) in
SHM systems; these methods are intended to minimize power consumption by
allowing the data to be sampled and/or transmitted more efficiently. At the
central node, all of these frameworks involve a very simple technique for
estimating the structure's mode shapes without requiring a traditional CS
reconstruction of the vibration signals; all that is needed is to compute a
simple Singular Value Decomposition. We provide theoretical justification
(including measurement bounds) for each of these techniques based on the
equations of motion describing a simplified Multiple-Degree-Of-Freedom (MDOF)
system, and we support our proposed techniques using simulations based on
synthetic and real data.
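The SVD step described above can be sketched as follows (a hedged illustration on synthetic two-mode data; the mode shapes, amplitudes, and frequencies are our assumptions):

```python
import numpy as np

def mode_shapes_via_svd(data, num_modes):
    """Estimate mode shapes as the leading left singular vectors of the
    (sensors x time) data matrix -- the simple SVD computation the
    abstract describes in place of a full CS reconstruction."""
    U, _, _ = np.linalg.svd(data, full_matrices=False)
    return U[:, :num_modes]

rng = np.random.default_rng(0)
Phi, _ = np.linalg.qr(rng.normal(size=(8, 2)))    # true orthonormal shapes
t = np.linspace(0.0, 10.0, 1000)
data = 3.0 * np.outer(Phi[:, 0], np.sin(2 * np.pi * 1.0 * t)) \
     + 1.0 * np.outer(Phi[:, 1], np.sin(2 * np.pi * 2.7 * t))
U = mode_shapes_via_svd(data, 2)
```

Because the two modal time histories are nearly orthogonal, the singular vectors align with the true shapes up to sign.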
|
1307.1961 | Optimal Locally Repairable Linear Codes | cs.IT math.IT | Linear erasure codes with local repairability are desirable for distributed
data storage systems. An [n, k, d] code having all-symbol (r,
{\delta})-locality, denoted as (r, {\delta})a, is considered optimal if it also
meets the minimum Hamming distance bound. The existing results on the existence
and the construction of optimal (r, {\delta})a codes are limited to only the
special case of {\delta} = 2, and to only two small regions within this special
case, namely, m = 0 or m >= (v+{\delta}-1) > ({\delta}-1), where m = n mod
(r+{\delta}-1) and v = k mod r. This paper investigates the existence
conditions and presents deterministic constructive algorithms for optimal (r,
{\delta})a codes with general r and {\delta}. First, a structure theorem is
derived for general optimal (r, {\delta})a codes which helps illuminate some of
their structure properties. Next, the entire problem space with arbitrary n, k,
r and {\delta} is divided into eight different cases (regions) with regard to
the specific relations of these parameters. For two cases, it is rigorously
proved that no optimal (r, {\delta})a could exist. For four other cases the
optimal (r, {\delta})a codes are shown to exist, deterministic constructions
are proposed and the lower bound on the required field size for these
algorithms to work is provided. Our new constructive algorithms not only cover
more cases, but for the same cases where previous algorithms exist, the new
constructions require a considerably smaller field, which translates to
potentially lower computational complexity. Our findings substantially enrich
the knowledge on (r, {\delta})a codes, leaving only two cases in which the
existence of optimal codes is yet to be determined.
|
1307.1998 | Using Clustering to extract Personality Information from socio economic
data | cs.LG cs.CE | It has become apparent that models that have been applied widely in
economics, including Machine Learning techniques and Data Mining methods,
should take into consideration principles that derive from the theories of
Personality Psychology in order to discover more comprehensive knowledge
regarding complicated economic behaviours. In this work, we present a method to
extract Behavioural Groups by using simple clustering techniques that can
potentially reveal aspects of the Personalities for their members. We believe
that this is very important because the psychological information regarding the
Personalities of individuals is limited in real world applications and because
it can become a useful tool in improving the traditional models of Knowledge
Economy.
|
1307.2001 | Variance in System Dynamics and Agent Based Modelling Using the SIR
Model of Infectious Disease | cs.CE cs.MA | Classical deterministic simulations of epidemiological processes, such as
those based on System Dynamics, produce a single result based on a fixed set of
input parameters with no variance between simulations. Input parameters are
subsequently modified on these simulations using Monte-Carlo methods, to
understand how changes in the input parameters affect the spread of results for
the simulation. Agent Based simulations are able to produce different output
results on each run based on knowledge of the local interactions of the
underlying agents and without making any changes to the input parameters. In
this paper we compare the influence and effect of variation within these two
distinct simulation paradigms and show that the Agent Based simulation of the
epidemiological SIR (Susceptible, Infectious, and Recovered) model is more
effective at capturing the natural variation within SIR compared to an
equivalent model using System Dynamics with Monte-Carlo simulation. To
demonstrate this effect, the SIR model is implemented using both System
Dynamics (with Monte-Carlo simulation) and Agent Based Modelling based on
previously published empirical data.
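A minimal agent-based SIR sketch illustrating the run-to-run variance discussed above (a hedged toy with parameters of our choosing, not the paper's empirical model):

```python
import random

def abm_sir(n=500, i0=5, beta=0.3, gamma=0.1, steps=500, seed=0):
    """Minimal agent-based SIR: each step, every infectious agent meets
    one uniformly random agent (infecting susceptibles with probability
    beta) and then recovers with probability gamma. Returns the final
    epidemic size (agents ever infected)."""
    S, I, R = 0, 1, 2
    state = [I] * i0 + [S] * (n - i0)
    rng = random.Random(seed)
    for _ in range(steps):
        infectious = [a for a in range(n) if state[a] == I]
        if not infectious:
            break
        for a in infectious:
            contact = rng.randrange(n)
            if state[contact] == S and rng.random() < beta:
                state[contact] = I   # transmits only from the next step on
            if rng.random() < gamma:
                state[a] = R
    return sum(1 for s in state if s != S)

# Different seeds give different outbreak sizes with identical inputs --
# the intrinsic variance a single deterministic SD run cannot produce.
sizes = [abm_sir(seed=s) for s in range(5)]
```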
|
1307.2015 | Full-text Support for Publish/Subscribe Ontology Systems | cs.IR cs.DB | We envision a publish/subscribe ontology system that is able to index
millions of user subscriptions and filter them against ontology data that
arrive in a streaming fashion. In this work, we propose a SPARQL extension
appropriate for a publish/subscribe setting; our extension builds on the
natural semantic graph matching of the language and supports the creation of
full-text subscriptions. Subsequently, we propose a main-memory subscription
indexing algorithm which performs both semantic and full-text matching at low
complexity and minimal filtering time. Thus, when ontology data are published,
matching subscriptions are identified and notifications are forwarded to users.
|
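The filtering step described above, matching published data against indexed full-text subscriptions, can be sketched with a main-memory inverted index (an illustrative sketch only; the real system combines this with SPARQL semantic graph matching, and all names here are assumptions):

```python
from collections import defaultdict

class SubscriptionIndex:
    """Main-memory inverted index mapping keywords to subscription ids."""

    def __init__(self):
        self.index = defaultdict(set)   # keyword -> subscription ids
        self.terms = {}                 # subscription id -> its keyword set

    def subscribe(self, sub_id, keywords):
        self.terms[sub_id] = set(keywords)
        for w in keywords:
            self.index[w].add(sub_id)

    def publish(self, text):
        """Return ids of subscriptions whose keywords all occur in the text."""
        words = set(text.lower().split())
        candidates = set()
        for w in words:
            candidates |= self.index.get(w, set())
        return {s for s in candidates if self.terms[s] <= words}

idx = SubscriptionIndex()
idx.subscribe("s1", {"ontology", "streaming"})
idx.subscribe("s2", {"ontology", "sparql"})
matched = idx.publish("streaming ontology data arrive")
```

Only subscriptions all of whose keywords appear in the published text are notified, so filtering time scales with the candidate set rather than with the total number of subscriptions.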
1307.2084 | Mitigating Epidemics through Mobile Micro-measures | cs.SI cs.CY physics.soc-ph | Epidemics of infectious diseases are among the largest threats to the quality
of life and the economic and social well-being of developing countries. The
arsenal of measures against such epidemics is well-established, but costly and
insufficient to mitigate their impact. In this paper, we argue that mobile
technology adds a powerful weapon to this arsenal, because (a) mobile devices
endow us with the unprecedented ability to measure and model the detailed
behavioral patterns of the affected population, and (b) they enable the
delivery of personalized behavioral recommendations to individuals in real
time. We combine these two ideas and propose several strategies to generate
such recommendations from mobility patterns. The goal of each strategy is a
large reduction in infections, with a small impact on the normal course of
daily life. We evaluate these strategies over the Orange D4D dataset and show
the benefit of mobile micro-measures, even if only a fraction of the population
participates. These preliminary results demonstrate the potential of mobile
technology to complement other measures like vaccination and quarantines
against disease epidemics.
|
1307.2087 | Performance Bounds for Constrained Linear Min-Max Control | math.OC cs.SY | This paper proposes a method to compute lower performance bounds for
discrete-time infinite-horizon min-max control problems with input constraints
and bounded disturbances. Such bounds can be used as a performance metric for
control policies synthesized via suboptimal design techniques. Our approach is
motivated by recent work on performance bounds for stochastic constrained
optimal control problems using relaxations of the Bellman equation. The central
idea of the paper is to find an unconstrained min-max control problem, with
negatively weighted disturbances as in H infinity control, that provides the
tightest possible lower performance bound on the original problem of interest
and whose value function is easily computed. The new method is demonstrated via
a numerical example for a system with box-constrained input.
|
1307.2089 | Certifying non-existence of undesired locally stable equilibria in
formation shape control problems | math.OC cs.SY | A fundamental control problem for autonomous vehicle formations is formation
shape control, in which the agents must maintain a prescribed formation shape
using only information measured or communicated from neighboring agents. While
a large and growing literature has recently emerged on distance-based formation
shape control, global stability properties remain a significant open problem.
Even in four-agent formations, the basic question of whether or not there can
exist locally stable incorrect equilibrium shapes remains open. This paper
shows how this question can be answered for any size formation in principle
using semidefinite programming techniques for semialgebraic problems, involving
solution sets of polynomial equations, inequations, and inequalities.
|
1307.2090 | Spectral properties of the Laplacian of multiplex networks | physics.soc-ph cond-mat.stat-mech cs.SI | One of the more challenging tasks in the understanding of dynamical
properties of models on top of complex networks is to capture the precise role
of multiplex topologies. In a recent paper, Gomez et al. [Phys. Rev. Lett. 110,
028701 (2013)] proposed a framework for the study of diffusion processes in
such networks. Here, we extend the previous framework to deal with general
configurations in several layers of networks, and analyze the behavior of the
spectrum of the Laplacian of the full multiplex. We derive an interesting
decoupling of the problem that allows us to unravel the role played by the
interconnections of the multiplex in the dynamical processes on top of it.
Capitalizing on this decoupling, we perform an asymptotic analysis that allows
us to derive analytical expressions for the full spectrum of eigenvalues. This
spectrum is used to gain insight into physical phenomena on top of multiplexes,
specifically diffusion processes and synchronizability.
|
1307.2104 | Enhanced reconstruction of weighted networks from strengths and degrees | physics.data-an cs.SI physics.soc-ph | Network topology plays a key role in many phenomena, from the spreading of
diseases to that of financial crises. Whenever the whole structure of a network
is unknown, one must resort to reconstruction methods that identify the least
biased ensemble of networks consistent with the partial information available.
A challenging case, frequently encountered due to privacy issues in the
analysis of interbank flows and Big Data, is when there is only local
(node-specific) aggregate information available. For binary networks, the
relevant ensemble is one where the degree (number of links) of each node is
constrained to its observed value. However, for weighted networks the problem
is much more complicated. While the naive approach prescribes constraining the
strengths (total link weights) of all nodes, recent counter-intuitive results
suggest that in weighted networks the degrees are often more informative than
the strengths. This implies that the reconstruction of weighted networks would
be significantly enhanced by the specification of both strengths and degrees, a
computationally hard and bias-prone procedure. Here we solve this problem by
introducing an analytical and unbiased maximum-entropy method that works in the
shortest possible time and does not require the explicit generation of
reconstructed samples. We consider several real-world examples and show that,
while the strengths alone give poor results, the additional knowledge of the
degrees yields accurately reconstructed networks. Information-theoretic
criteria rigorously confirm that the degree sequence, as soon as it is
non-trivial, is irreducible to the strength sequence. Our results have strong
implications for the analysis of motifs and communities and whenever the
reconstructed ensemble is required as a null model to detect higher-order
patterns.
|
1307.2105 | Successive Integer-Forcing and its Sum-Rate Optimality | cs.IT math.IT | Integer-forcing receivers generalize traditional linear receivers for the
multiple-input multiple-output channel by decoding integer-linear combinations
of the transmitted streams, rather than the streams themselves. Previous works
have shown that the additional degree of freedom in choosing the integer
coefficients enables this receiver to approach the performance of
maximum-likelihood decoding in various scenarios. Nonetheless, even for the
optimal choice of integer coefficients, the additive noise at the equalizer's
output is still correlated. In this work we study a variant of integer-forcing,
termed successive integer-forcing, that exploits these noise correlations to
improve performance. This scheme is the integer-forcing counterpart of
successive interference cancellation for traditional linear receivers.
Similarly to the latter, we show that successive integer-forcing is capacity
achieving when it is possible to optimize the rate allocation to the different
streams. In comparison to standard successive interference cancellation
receivers, the successive integer-forcing receiver offers more possibilities
for capacity-achieving rate tuples and, in particular, ones that are more
balanced.
|
1307.2111 | Finding the creatures of habit; Clustering households based on their
flexibility in using electricity | cs.LG cs.CE | Changes in the UK electricity market, particularly with the roll out of smart
meters, will provide greatly increased opportunities for initiatives intended
to change households' electricity usage patterns for the benefit of the overall
system. Users show differences in their regular behaviours and clustering
households into similar groupings based on this variability provides for
efficient targeting of initiatives. Those people who are stuck in a regular
pattern of activity may be the least receptive to an initiative to change
behaviour. A sample of 180 households from the UK are clustered into four
groups as an initial test of the concept and useful, actionable groupings are
found.
|
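Clustering households by how variable their usage is, as described above, can be sketched with plain k-means on a per-household variability score (an illustrative sketch: the synthetic data, 1-D feature, and k=2 are assumptions; the study itself used 180 UK households and four groups):

```python
import random
import statistics

def kmeans_1d(points, k, iters=20):
    """Plain 1-D k-means, initialised at evenly spaced quantiles."""
    spts = sorted(points)
    centers = [spts[(2 * i + 1) * len(spts) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[j].append(p)   # assign to nearest center
        centers = [statistics.mean(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

rng = random.Random(1)
# synthetic "flexibility" scores: creatures of habit (low variability)
# versus flexible households (high variability)
habit = [rng.gauss(0.1, 0.02) for _ in range(90)]
flexible = [rng.gauss(0.8, 0.05) for _ in range(90)]
centers, clusters = kmeans_1d(habit + flexible, k=2)
```

The low-variability cluster identifies the habitual households that, per the abstract, may be the least receptive targets for behaviour-change initiatives.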
1307.2117 | Mixed Compressed Sensing Based on Random Graphs | cs.IT math.IT | Finding a suitable measurement matrix is an important topic in compressed
sensing. Although a random matrix whose entries are drawn independently from a
certain probability distribution can serve as a measurement matrix and recover
signals well, in many cases we would like the measurement matrix to carry some
special structure. In this paper, based on random graph models, we show that
mixed symmetric random matrices, whose diagonal entries obey one distribution
and whose non-diagonal entries obey another, can be used to recover signals
successfully with high probability.
|
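A mixed symmetric random matrix of the kind the abstract describes can be sketched as follows (the particular pairing of a Gaussian diagonal with Rademacher off-diagonal entries is only an assumed example, not the paper's specific construction):

```python
import random

def mixed_symmetric(n, seed=0):
    """Symmetric n x n matrix: diagonal ~ N(0,1), off-diagonal ~ Rademacher."""
    rng = random.Random(seed)
    m = [[0.0] * n for _ in range(n)]
    for i in range(n):
        m[i][i] = rng.gauss(0.0, 1.0)      # diagonal distribution
        for j in range(i + 1, n):
            v = rng.choice([-1.0, 1.0])    # off-diagonal distribution
            m[i][j] = m[j][i] = v          # enforce symmetry
    return m

A = mixed_symmetric(4)
```

Symmetry is the "special structure" imposed here; the two entry distributions can be swapped for others within the same template.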
1307.2118 | A PAC-Bayesian Tutorial with A Dropout Bound | cs.LG | This tutorial gives a concise overview of existing PAC-Bayesian theory
focusing on three generalization bounds. The first is an Occam bound which
handles rules with finite precision parameters and which states that
generalization loss is near training loss when the number of bits needed to
write the rule is small compared to the sample size. The second is a
PAC-Bayesian bound providing a generalization guarantee for posterior
distributions rather than for individual rules. The PAC-Bayesian bound
naturally handles infinite precision rule parameters, $L_2$ regularization,
{\em provides a bound for dropout training}, and defines a natural notion of a
single distinguished PAC-Bayesian posterior distribution. The third bound is a
training-variance bound --- a kind of bias-variance analysis but with bias
replaced by expected training loss. The training-variance bound dominates the
other bounds but is more difficult to interpret. It seems to suggest variance
reduction methods such as bagging and may ultimately provide a more meaningful
analysis of dropout.
|
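The Occam bound mentioned first can be illustrated numerically. The standard Hoeffding-plus-union form, with probability at least 1 - delta, bounds generalization loss by training loss plus sqrt((|h| ln 2 + ln(1/delta)) / (2n)), where |h| is the rule's description length in bits; the losses and sizes below are assumed toy numbers:

```python
import math

def occam_bound(train_loss, bits, n, delta=0.05):
    """Occam bound: generalization loss <= training loss + penalty, where the
    penalty is small when the bits needed to write the rule are few compared
    to the sample size n."""
    penalty = math.sqrt((bits * math.log(2) + math.log(1 / delta)) / (2 * n))
    return train_loss + penalty

short_rule = occam_bound(train_loss=0.10, bits=100, n=10_000)
long_rule = occam_bound(train_loss=0.10, bits=5_000, n=10_000)
```

With equal training loss, the 100-bit rule receives a much tighter guarantee than the 5000-bit one, which is exactly the "near training loss when the number of bits is small compared to the sample size" statement from the abstract.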
1307.2136 | Near-Optimal Encoding for Sigma-Delta Quantization of Finite Frame
Expansions | cs.IT math.IT | In this paper we investigate encoding the bit-stream resulting from coarse
Sigma-Delta quantization of finite frame expansions (i.e., overdetermined
representations) of vectors. We show that for a wide range of finite frames,
including random frames and piecewise smooth frames, there exists a simple
encoding algorithm ---acting only on the Sigma-Delta bit stream--- and an
associated decoding algorithm that together yield an approximation error which
decays exponentially in the number of bits used. The encoding strategy consists
of applying a discrete random operator to the Sigma-Delta bit stream and
assigning a binary codeword to the result. The reconstruction procedure is
essentially linear and equivalent to solving a least squares minimization
problem.
|
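The coarse Sigma-Delta bit stream that the encoding acts on can be produced by the textbook first-order scheme, q_k = sign(u_{k-1} + y_k), u_k = u_{k-1} + y_k - q_k (a standard sketch; the cosine frame coefficients of a scalar are an assumed toy example, not the paper's frames):

```python
import math

def first_order_sigma_delta(y):
    """First-order Sigma-Delta: q_k = sign(u_{k-1}+y_k), u_k = u_{k-1}+y_k-q_k."""
    u, bits = 0.0, []
    for yk in y:
        q = 1.0 if u + yk >= 0 else -1.0
        bits.append(q)
        u = u + yk - q   # state u stays bounded by 1 whenever |y_k| <= 1
    return bits

# Toy overdetermined frame expansion of the scalar x = 0.3:
# coefficients y_k = x * cos(2*pi*k/m) for an m-element frame
m = 64
y = [0.3 * math.cos(2 * math.pi * k / m) for k in range(m)]
bits = first_order_sigma_delta(y)
```

The encoder studied in the paper would then act only on `bits`, applying a discrete random operator and assigning a binary codeword to the result.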
1307.2150 | Transmodal Analysis of Neural Signals | q-bio.NC cs.LG q-bio.QM | Localizing neuronal activity in the brain, both in time and in space, is a
central challenge in advancing the understanding of brain function. Because no
single neuroimaging technique can cover all aspects at once, there is growing
interest in combining signals from multiple modalities
in order to benefit from the advantages of each acquisition method. Due to the
complexity and unknown parameterization of any suggested complete model of BOLD
response in functional magnetic resonance imaging (fMRI), the development of a
reliable ultimate fusion approach remains difficult. But besides the primary
goal of superior temporal and spatial resolution, conjoint analysis of data
from multiple imaging modalities can alternatively be used to segregate neural
information from physiological and acquisition noise. In this paper we suggest
a novel methodology which relies on constructing a quantifiable mapping of data
from one modality (electroencephalography; EEG) into another (fMRI), called
transmodal analysis of neural signals (TRANSfusion). TRANSfusion attempts to
map neural data embedded within the EEG signal into its reflection in fMRI
data. Assessing the mapping performance on unseen data allows us to localize
brain areas where a significant portion of the signal could be reliably
reconstructed, hence the areas whose neural activity is reflected in both EEG
and fMRI data. Subsequent analysis of the learnt model allows us to localize
areas associated with specific frequency bands of EEG, or areas functionally
related (connected or coherent) to any given EEG sensor. We demonstrate the
performance of TRANSfusion on artificial and real data from an auditory
experiment. We further speculate on possible alternative uses: cross-modal data
filtering and EEG-driven interpolation of fMRI signals to obtain arbitrarily
high temporal sampling of BOLD.
|
1307.2189 | On the Topology of the Facebook Page Network | cs.SI physics.soc-ph | The Facebook Page Network (FPN) is a platform for Businesses, Public Figures
and Organizations (BPOs) to connect with individuals and other BPOs in the
digital space. For over a decade scale-free networks have most appropriately
described a variety of seemingly disparate physical, biological and social
real-world systems unified by similar network properties such as
scale-invariance, growth via a preferential attachment mechanism, and a power
law degree distribution P(k) = ck^{-\lambda} where typically 2 < \lambda < 3. In
this paper we show that both the Facebook Page Network and its BPO-BPO
subnetwork suggest power law and scale-free characteristics. We argue that
social media analysts must consider the logarithmic and non-linear properties
of social media audiences of scale.
|
1307.2191 | A Knowledge-based Treatment of Human-Automation Systems | cs.HC cs.AI | In a supervisory control system, the human agent's knowledge of past, current,
and future system behavior is critical for system performance. Being able to
reason about that knowledge in a precise and structured manner is central to
effective system design. In this paper we introduce the application of a
well-established formal approach to reasoning about knowledge to the modeling
and analysis of complex human-automation systems. An intuitive notion of
knowledge in human-automation systems is sketched and then cast as a formal
model. We present a case study in which the approach is used to model and
reason about a classic problem from the human-automation systems literature;
the results of our analysis provide evidence for the validity and value of
reasoning about complex systems in terms of the knowledge of the system agents.
To conclude, we discuss research directions that will extend this approach, and
note several systems in the aviation and human-robot team domains that are of
particular interest.
|