id | title | categories | abstract |
|---|---|---|---|
1402.5730 | Power Efficient and Secure Multiuser Communication Systems with Wireless
Information and Power Transfer | cs.IT math.IT | In this paper, we study resource allocation algorithm design for power
efficient secure communication with simultaneous wireless information and power
transfer (WIPT) in multiuser communication systems. In particular, we focus on
power splitting receivers which are able to harvest energy and decode
information from the received signals. The considered problem is modeled as an
optimization problem which takes into account a minimum required
signal-to-interference-plus-noise ratio (SINR) at multiple desired receivers, a
maximum tolerable data rate at multiple multi-antenna potential eavesdroppers,
and a minimum required power delivered to the receivers. The proposed problem
formulation facilitates the dual use of artificial noise in providing efficient
energy transfer and guaranteeing secure communication. We aim at minimizing the
total transmit power by jointly optimizing transmit beamforming vectors, power
splitting ratios at the desired receivers, and the covariance of the artificial
noise. The resulting non-convex optimization problem is transformed into a
semidefinite program (SDP) and solved via SDP relaxation. We show that the
adopted SDP relaxation is tight and achieves the global optimum of the original
problem. Simulation results illustrate the significant power saving obtained by
the proposed optimal algorithm compared to suboptimal baseline schemes.
|
1402.5731 | Information-Theoretic Bounds for Adaptive Sparse Recovery | cs.IT cs.LG math.IT math.ST stat.TH | We derive an information-theoretic lower bound for sample complexity in
sparse recovery problems where inputs can be chosen sequentially and
adaptively. This lower bound is in terms of a simple mutual information
expression and unifies many different linear and nonlinear observation models.
Using this formula we derive bounds for adaptive compressive sensing (CS),
group testing and 1-bit CS problems. We show that adaptivity cannot decrease
sample complexity in group testing, 1-bit CS and CS with linear sparsity. In
contrast, we show there might be mild performance gains for CS in the sublinear
regime. Our unified analysis also allows characterization of gains due to
adaptivity from a wider perspective on sparse problems.
|
1402.5734 | Permutation trinomials over finite fields with even characteristic | cs.IT math.IT | Permutation polynomials have been a subject of study for a long time and have
applications in many areas of science and engineering. However, only a small
number of specific classes of permutation polynomials are described in the
literature so far. In this paper we present a number of permutation trinomials
over finite fields, which are of different forms.
|
1402.5742 | Secure Logical Schema and Decomposition Algorithm for Proactive Context
Dependent Attribute Based Access Control | cs.DB cs.CR | Traditional database access control mechanisms use role-based methods,
generally with row-based and attribute-based constraints for granularity, and
privacy is achieved mainly by using views. However, if only a set of views
defined according to the policy is made accessible to users, then this set must
be checked against the policy for the whole probable query history. The aim of
this work is to define a proactive decomposition algorithm according to the
attribute based policy rules and build a secure logical schema in which
relations are decomposed into several ones in order to inhibit joins or
inferences that may violate predefined privacy constraints. The attributes
whose association should not be inferred are defined as having a security
dependency among them, and they form a new kind of context-dependent
attribute-based policy rule named a security dependent set. The decomposition algorithm
works on a logical schema with given security dependent sets and aims to
prohibit the inference of the association among the elements of these sets. It
is also proven that the decomposition technique generates a secure logical
schema that is in compliance with the given security dependent set constraints.
|
1402.5743 | Evaluation of node importance in complex networks | cs.SI physics.soc-ph | The assessment of node importance has been a fundamental issue in the
research of complex networks. In this paper, we propose to use the
Shannon-Parry measure (SPM) to evaluate the importance of a node
quantitatively, because SPM is the stationary distribution of the most
unprejudiced random walk on the network. We demonstrate the accuracy and
robustness of SPM compared with several popular methods in the Zachary karate
club network and three toy networks. We apply SPM to analyze city
importance in the China Railways High-speed (CRH) network and obtain reasonable
results. Since SPM can be used effectively in weighted and directed networks, we
believe it is a relevant method to identify key nodes in networks.
|
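A minimal sketch of the idea above, under the standard assumption that the most unprejudiced (maximal-entropy) random walk on a connected undirected graph has a stationary distribution proportional to the squared entries of the Perron eigenvector of the adjacency matrix; this is an illustrative computation, not the authors' code:

```python
import numpy as np

def shannon_parry_measure(A):
    """Stationary distribution of the maximal-entropy random walk on a
    connected undirected graph with (possibly weighted) adjacency matrix A:
    pi[i] = phi[i]**2 / sum(phi**2), where phi is the Perron eigenvector of A."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eigh(A)             # A is symmetric, so eigh applies
    phi = np.abs(eigvecs[:, np.argmax(eigvals)])     # Perron eigenvector, sign-fixed
    return phi ** 2 / np.sum(phi ** 2)

# Toy example: path graph 0-1-2; the middle node is ranked most important.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
print(shannon_parry_measure(A))                      # approximately [0.25, 0.5, 0.25]
```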
1402.5750 | A new inexact iterative hard thresholding algorithm for compressed
sensing | cs.IT math.IT | Compressed sensing (CS) demonstrates that a sparse, or compressible signal
can be acquired using a low rate acquisition process below the Nyquist rate,
which projects the signal onto a small set of vectors incoherent with the
sparsity basis. In this paper, we propose a new framework for the compressed
sensing recovery problem using an iterative approximation method via L0
minimization. Instead of directly solving the unconstrained L0 norm
optimization problem, we use the linearization and proximal points techniques
to approximate the penalty function at each iteration. The proposed algorithm
is very simple and efficient, and is proved to be convergent. Numerical simulations
demonstrate our conclusions and indicate that the algorithm can improve the
reconstruction quality.
|
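For context, the classical (exact) iterative hard thresholding baseline that variants like the one above build on can be sketched in a few lines; the step size, signal sizes and sparsity level below are illustrative assumptions, and this is not the authors' inexact algorithm:

```python
import numpy as np

def iht(A, y, s, n_iter=300, step=None):
    """Plain iterative hard thresholding: x <- H_s(x + step * A^T (y - A x)),
    where H_s keeps the s largest-magnitude entries and zeroes the rest."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # conservative choice, guarantees descent
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)
        small = np.argsort(np.abs(x))[:-s]           # indices of all but the s largest entries
        x[small] = 0.0
    return x

# Toy recovery: 100 Gaussian measurements of a 5-sparse length-256 signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 256)) / np.sqrt(100)
x_true = np.zeros(256)
x_true[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
x_hat = iht(A, A @ x_true, s=5)
print(np.linalg.norm(x_hat - x_true))                # small reconstruction error
```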
1402.5757 | An Integrated e-science Analysis Base for Computation Neuroscience
Experiments and Analysis | cs.SE cs.CE | Recent developments in data management and imaging technologies have
significantly affected diagnostic and extrapolative research in the
understanding of neurodegenerative diseases. However, the impact of these new
technologies is largely dependent on the speed and reliability with which the
medical data can be visualised, analysed and interpreted. The EU's neuGRID for
Users (N4U) is a follow-on project to neuGRID, which aims to provide an
integrated environment to carry out computational neuroscience experiments.
This paper reports on the design and development of the N4U Analysis Base and
related Information Services, which addresses existing research and practical
challenges by offering an integrated medical data analysis environment with the
necessary building blocks for neuroscientists to optimally exploit neuroscience
workflows, large image datasets and algorithms in order to conduct analyses.
The N4U Analysis Base enables such analyses by indexing and interlinking the
neuroimaging and clinical study datasets stored on the N4U Grid infrastructure,
algorithms and scientific workflow definitions along with their associated
provenance information.
|
1402.5758 | Bandits with concave rewards and convex knapsacks | cs.LG | In this paper, we consider a very general model for exploration-exploitation
tradeoff which allows arbitrary concave rewards and convex constraints on the
decisions across time, in addition to the customary limitation on the time
horizon. This model subsumes the classic multi-armed bandit (MAB) model, and
the Bandits with Knapsacks (BwK) model of Badanidiyuru et al.[2013]. We also
consider an extension of this model to allow linear contexts, similar to the
linear contextual extension of the MAB model. We demonstrate that a natural and
simple extension of the UCB family of algorithms for MAB provides a polynomial
time algorithm that has near-optimal regret guarantees for this substantially
more general model, and matches the bounds provided by Badanidiyuru et
al.[2013] for the special case of BwK, which is quite surprising. We also
provide computationally more efficient algorithms by establishing interesting
connections between this problem and other well studied problems/algorithms
such as the Blackwell approachability problem, online convex optimization, and
the Frank-Wolfe technique for convex optimization. We give examples of several
concrete applications, where this more general model of bandits allows for
richer and/or more efficient formulations of the problem.
|
1402.5759 | Asynchronous $l$-Complete Approximations | cs.SY | This paper extends the $l$-complete approximation method developed for time
invariant systems to a larger system class, ensuring that the resulting
approximation can be realized by a finite state machine. To derive the new
abstraction method, called asynchronous $l$-complete approximation, an
asynchronous version of the well-known concepts of state property, memory span
and $l$-completeness is introduced, extending the behavioral systems theory in
a consistent way.
|
1402.5761 | A Technique for Deriving Equational Conditions on the Denavit-Hartenberg
Parameters of 6R Linkages that are Necessary for Movability | cs.RO cs.SC | A closed 6R linkage is generically rigid. Special cases may be mobile. Many
families of mobile 6R linkages have been characterised in terms of the
invariant Denavit-Hartenberg parameters of the linkage. In other words, many
sufficient conditions for mobility are known. In this paper we give, for the
first time, equational conditions on the invariant Denavit-Hartenberg
parameters that are necessary for mobility. The method is based on the theory
of bonds. We illustrate the method by deriving the equational conditions for
various well-known linkages (Bricard's line symmetric linkage, Hooke's linkage,
Dietmaier's linkage, and a recent generalization of Bricard's orthogonal
linkage), starting from their bond diagrams; and by deriving the equations for
another bond diagram, thereby discovering a new mobile 6R linkage.
|
1402.5766 | No more meta-parameter tuning in unsupervised sparse feature learning | cs.LG cs.CV | We propose a meta-parameter free, off-the-shelf, simple and fast unsupervised
feature learning algorithm, which exploits a new way of optimizing for
sparsity. Experiments on STL-10 show that the method presents state-of-the-art
performance and provides discriminative features that generalize well.
|
1402.5773 | Data Management Challenges in Paediatric Information Systems | cs.DB cs.CY | There is a compelling demand for the data integration and exploitation of
heterogeneous biomedical information for improved clinical practice, medical
research, and personalised healthcare across the EU. The area of paediatric
information integration is particularly challenging since the patient's
physiology changes with growth and different aspects of health must be regularly
monitored over extended periods of time. Paediatricians require access to
heterogeneous data sets, often collected in different locations with different
apparatus and over extended timescales. Using a Grid platform originally
developed for physics at CERN and a novel integrated semantic data model the
Health-e-Child project has developed an integrated healthcare platform for
European paediatrics, providing seamless integration of traditional and
emerging sources of biomedical data. The long-term goal of the project was to
provide uninhibited access to universal biomedical knowledge repositories for
personalised and preventive healthcare, large-scale information-based
biomedical research and training, and informed policy making. The project built
a Grid-enabled European network of leading clinical centres that can share and
annotate paediatric data, can validate systems clinically, and diffuse clinical
excellence across Europe by setting up new technologies, clinical workflows,
and standards. The Health-e-Child project highlights data management challenges
for the future of European paediatric healthcare and is the subject of this
chapter.
|
1402.5774 | Information Filtering via Balanced Diffusion on Bipartite Networks | cs.IR | The past decade has witnessed the increasing popularity of recommender systems,
which help users acquire relevant commodities and services from the overwhelming
resources on the Internet. Some simple physical diffusion processes have been used
to design effective recommendation algorithms for user-object bipartite
networks, typically the mass diffusion (MD) and heat conduction (HC) algorithms,
which have complementary advantages in accuracy and diversity, respectively. In
this paper, we investigate the effect of weight assignment in the hybrid of MD
and HC, and find that a new hybrid algorithm of MD and HC with balanced weights
achieves the best recommendation results; we name it the balanced diffusion (BD)
algorithm. Numerical experiments on three benchmark data sets, MovieLens,
Netflix and RateYourMusic (RYM), show that the BD algorithm outperforms existing
diffusion-based methods on three important recommendation metrics: accuracy,
diversity and novelty. Specifically, it not only provides accurate
recommendation results, but also yields higher diversity and novelty by
accurately recommending unpopular objects.
|
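The balanced diffusion algorithm above is a particular weighting of the well-known mass-diffusion / heat-conduction hybrid. The sketch below implements the standard hybrid with a tunable mixing parameter `lam` (lam = 1 is pure MD, lam = 0 is pure HC); the toy rating matrix and the choice lam = 0.5 are illustrative assumptions, not the paper's exact BD weighting:

```python
import numpy as np

def hybrid_scores(A, lam=0.5):
    """Mass-diffusion / heat-conduction hybrid on a user-object bipartite network.

    A[i, a] = 1 if user i collected object a (all degrees assumed positive).
    lam = 1 recovers pure mass diffusion, lam = 0 pure heat conduction;
    intermediate values trade accuracy against diversity.
    Returns a user-by-object score matrix."""
    A = np.asarray(A, dtype=float)
    k_user = A.sum(axis=1)                         # user degrees k_i
    k_obj = A.sum(axis=0)                          # object degrees k_a
    # W[a, b] = (sum_i A[i, a] * A[i, b] / k_i) / (k_a**(1 - lam) * k_b**lam)
    W = (A / k_user[:, None]).T @ A
    W /= np.outer(k_obj ** (1.0 - lam), k_obj ** lam)
    return A @ W.T                                 # score[i, a] = sum_b W[a, b] * A[i, b]

# Toy example: 4 users, 5 objects; recommend the unseen object with highest score.
A = np.array([[1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0],
              [0, 0, 0, 1, 1]], dtype=float)
scores = hybrid_scores(A, lam=0.5)
scores[A > 0] = -np.inf                            # never re-recommend collected objects
print(scores.argmax(axis=1))                       # top recommendation for each user
```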
1402.5781 | Program Transformations for Asynchronous and Batched Query Submission | cs.DB | The performance of database/Web-service backed applications can be
significantly improved by asynchronous submission of queries/requests well
ahead of the point where the results are needed, so that results are likely to
have been fetched already when they are actually needed. However, manually
writing applications to exploit asynchronous query submission is tedious and
error-prone. In this paper we address the issue of automatically transforming a
program written assuming synchronous query submission, to one that exploits
asynchronous query submission. Our program transformation method is based on
data flow analysis and is framed as a set of transformation rules. Our rules
can handle query executions within loops, unlike some of the earlier work in
this area. We also present a novel approach that, at runtime, can combine
multiple asynchronous requests into batches, thereby achieving the benefits of
batching in addition to that of asynchronous submission. We have built a tool
that implements our transformation techniques on Java programs that use JDBC
calls; our tool can be extended to handle Web service calls. We have carried
out a detailed experimental study on several real-life applications, which
shows the effectiveness of the proposed rewrite techniques, both in terms of
their applicability and the performance gains achieved.
|
1402.5784 | Transmission Power Scheduling for Energy Harvesting Sensor in Remote
State Estimation | math.OC cs.SY | We study remote estimation in a wireless sensor network. Instead of using a
conventional battery-powered sensor, a sensor equipped with an energy harvester
which can obtain energy from the external environment is utilized. We formulate
this problem into an infinite time-horizon Markov decision process and provide
the optimal sensor transmission power control strategy. In addition, a
sub-optimal strategy which is easier to implement and requires less computation
is presented. A numerical example is provided to illustrate the implementation
of the sub-optimal policy and evaluation of its estimation performance.
|
1402.5792 | A Novel Scheme for Intelligent Recognition of Pornographic Images | cs.CV | Harmful content on the internet is growing day by day, and this motivates
further research into fast and reliable filtering of obscene and immoral
material. Pornographic image recognition is an important component of every
filtering system. In this paper, a new approach for detecting pornographic
images is introduced. In this approach, two new features are suggested. These
two features, in combination with other simple traditional features, provide a
decent separation between porn and non-porn images. In addition, we applied
fuzzy-integral-based information fusion to combine MLP (Multi-Layer Perceptron)
and NF (Neuro-Fuzzy) outputs. To test the proposed method, the performance of
the system was evaluated on 18,354 images downloaded from the internet. The
attained true-positive rate was 93% with an 8% false-positive rate on the
training dataset, and 87% and 5.5%, respectively, on the test dataset. The
achieved results verify the performance of the proposed system compared to
other related works.
|
1402.5803 | Sparse phase retrieval via group-sparse optimization | cs.IT cs.LG math.IT | This paper deals with sparse phase retrieval, i.e., the problem of estimating
a vector from quadratic measurements under the assumption that few components
are nonzero. In particular, we consider the problem of finding the sparsest
vector consistent with the measurements and reformulate it as a group-sparse
optimization problem with linear constraints. Then, we analyze the convex
relaxation of the latter based on the minimization of a block l1-norm and show
various exact recovery and stability results in the real and complex cases.
Invariance to circular shifts and reflections is also discussed for real
vectors measured via complex matrices.
|
1402.5805 | Automatic Estimation of Live Coffee Leaf Infection based on Image
Processing Techniques | cs.CV | Image segmentation is the most challenging issue in computer vision
applications. One of the main difficulties for crop management in agriculture is
the lack of appropriate methods for detecting leaf damage for pest treatment. In
this paper we propose an automatic method for leaf damage detection and severity
estimation for coffee leaves that avoids defoliation. After enhancing the
contrast of the original image using LUT-based gamma correction, the image is
processed to remove the background, and the extracted leaf is clustered using
Fuzzy c-means segmentation in the V channel of the YUV color space to maximize
leaf damage detection; finally, the severity is estimated as the ratio of
detected damaged pixels to normal leaf pixels. The results of each proposed step
were compared to current research, and the accuracy is evident in both
background removal and damage detection.
|
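The LUT-based gamma correction mentioned in the pre-processing step above is a standard operation and easy to sketch; the gamma value of 2.2 and the random stand-in image are illustrative assumptions:

```python
import numpy as np

def gamma_lut(gamma):
    """256-entry lookup table mapping 8-bit intensities through a gamma curve."""
    levels = np.arange(256) / 255.0
    return np.clip(255.0 * levels ** (1.0 / gamma), 0, 255).astype(np.uint8)

# Applying the LUT to an 8-bit image is a single indexing operation.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # stand-in for a leaf photo
brightened = gamma_lut(2.2)[image]                          # gamma > 1 lifts dark regions
print(brightened)
```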
1402.5830 | A hybrid swarm-based algorithm for single-objective optimization
problems involving high-cost analyses | math.OC cs.AI cs.DC cs.NE | In many technical fields, single-objective optimization procedures in
continuous domains involve expensive numerical simulations. In this context, an
improvement of the Artificial Bee Colony (ABC) algorithm, called the Artificial
super-Bee enhanced Colony (AsBeC), is presented. AsBeC is designed to provide
fast convergence speed, high solution accuracy and robust performance over a
wide range of problems. It implements enhancements of the ABC structure and
hybridizations with interpolation strategies. The latter are inspired by the
quadratic trust region approach for local investigation and by an efficient
global optimizer for separable problems. Each modification and their combined
effects are studied with appropriate metrics on a numerical benchmark, which is
also used for comparing AsBeC with some effective ABC variants and other
derivative-free algorithms. In addition, the presented algorithm is validated
on two recent benchmarks adopted for competitions in international conferences.
Results show remarkable competitiveness and robustness for AsBeC.
|
1402.5836 | Avoiding pathologies in very deep networks | stat.ML cs.LG | Choosing appropriate architectures and regularization strategies for deep
networks is crucial to good predictive performance. To shed light on this
problem, we analyze the analogous problem of constructing useful priors on
compositions of functions. Specifically, we study the deep Gaussian process, a
type of infinitely-wide, deep neural network. We show that in standard
architectures, the representational capacity of the network tends to capture
fewer degrees of freedom as the number of layers increases, retaining only a
single degree of freedom in the limit. We propose an alternate network
architecture which does not suffer from this pathology. We also examine deep
covariance functions, obtained by composing infinitely many feature transforms.
Lastly, we characterize the class of models obtained by performing dropout on
Gaussian processes.
|
1402.5845 | Mathematical Modelling of Energy Wastage in Absence of Levelling and
Sectoring in Wireless Sensor Networks | cs.IT cs.NI math.IT | In this paper, we quantitatively (mathematically) reason about the energy
savings achieved by the Levelling and Sectoring protocol. Due to the energy
constraints on the sensor nodes (in terms of energy supply), energy awareness
has become crucial in the networking protocol stack. The understanding of
routing protocols along with energy awareness in a network would help in energy
optimization with efficient routing. We provide analytical modelling of the
energy wastage in the absence of the Levelling and Sectoring protocol by
considering the network in the form of a binary tree, a nested tree and a Q-ary
tree. The simulation results reflect the energy wastage in the absence of the
Levelling and Sectoring based hybrid protocol.
|
1402.5859 | A Novel Face Recognition Method using Nearest Line Projection | cs.CV | Face recognition is a popular application of pattern recognition methods,
and it faces challenging problems including illumination, expression, and pose.
The most popular way is to learn subspaces of the face images so that they can
be projected to another discriminant space where images of different persons
can be separated. In this paper, a nearest line projection algorithm is
developed to represent the face images for face recognition. Instead of
projecting an image to its nearest image, we try to project it to its nearest
line spanned by two different face images. The subspaces are learned so that the
distance from each face image to its nearest line is minimized. We evaluated the
proposed algorithm on benchmark face image databases, and also compared it to
some other image projection algorithms. The experimental results show that the
proposed algorithm outperforms the other ones.
|
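Nearest-line (feature-line) classification of the kind described above reduces to projecting a query point onto the line through each pair of same-class samples and keeping the closest one. A minimal sketch with toy 2-D features standing in for face descriptors; the learned subspace of the paper is not modeled here:

```python
import numpy as np
from itertools import combinations

def point_to_line_distance(x, a, b):
    """Distance from x to the line through a and b (the 'feature line' a-b)."""
    d = b - a
    t = np.dot(x - a, d) / np.dot(d, d)      # position of the projection along the line
    proj = a + t * d
    return np.linalg.norm(x - proj)

def nearest_line_classify(x, X_train, y_train):
    """Assign x the label of the closest line spanned by two same-class samples."""
    best_label, best_dist = None, np.inf
    for label in np.unique(y_train):
        samples = X_train[y_train == label]
        for i, j in combinations(range(len(samples)), 2):
            dist = point_to_line_distance(x, samples[i], samples[j])
            if dist < best_dist:
                best_label, best_dist = label, dist
    return best_label

# Toy example with 2-D "features" standing in for face descriptors.
X = np.array([[0.0, 0.0], [1.0, 0.1], [0.0, 2.0], [1.0, 2.1]])
y = np.array([0, 0, 1, 1])
print(nearest_line_classify(np.array([0.5, 0.2]), X, y))   # -> 0
```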
1402.5869 | Compound Multiple Access Channel with Confidential Messages | cs.IT math.IT | In this paper, we study the problem of secret communication over a Compound
Multiple Access Channel (MAC). In this channel, we assume that one of the
transmitted messages is confidential: it is decoded only by its corresponding
receiver and kept secret from the other receiver. For this proposed setting
(compound MAC with confidential messages), we derive general inner and outer
bounds on the secrecy capacity region. Also, as examples, we investigate 'Less
noisy' and 'Gaussian' versions of this channel, and extend the results of the
discrete memoryless version to these cases. Moreover, providing numerical
examples for the Gaussian case, we illustrate the comparison between achievable
rate regions of compound MAC and compound MAC with confidential messages.
|
1402.5874 | Predictive Interval Models for Non-parametric Regression | cs.LG stat.ML | Having a regression model, we are interested in finding two-sided intervals
that are guaranteed to contain at least a desired proportion of the conditional
distribution of the response variable given a specific combination of
predictors. We name such intervals predictive intervals. This work presents a
new method to find two-sided predictive intervals for non-parametric least
squares regression without the homoscedasticity assumption. Our predictive
intervals are built by using tolerance intervals on prediction errors in the
query point's neighborhood. We propose a predictive interval model test and
also use it as a constraint in our hyper-parameter tuning algorithm. This
gives an algorithm that finds the smallest reliable predictive intervals for a
given dataset. We also introduce a measure for comparing different interval
prediction methods that yield intervals of different size and coverage. Our
experiments show that our methods are more reliable, effective and precise than
other interval prediction methods.
|
1402.5876 | Manifold Gaussian Processes for Regression | stat.ML cs.LG | Off-the-shelf Gaussian Process (GP) covariance functions encode smoothness
assumptions on the structure of the function to be modeled. To model complex
and non-differentiable functions, these smoothness assumptions are often too
restrictive. One way to alleviate this limitation is to find a different
representation of the data by introducing a feature space. This feature space
is often learned in an unsupervised way, which might lead to data
representations that are not useful for the overall regression task. In this
paper, we propose Manifold Gaussian Processes, a novel supervised method that
jointly learns a transformation of the data into a feature space and a GP
regression from the feature space to the observed space. The Manifold GP is a full
GP and allows learning data representations that are useful for the overall
regression task. As a proof-of-concept, we evaluate our approach on complex
non-smooth functions where standard GPs perform poorly, such as step functions
and robotics tasks with contacts.
|
1402.5878 | Friend Inspector: A Serious Game to Enhance Privacy Awareness in Social
Networks | cs.CY cs.AI | Currently, many users of Social Network Sites are insufficiently aware of who
can see their shared personal items. Nonetheless, most approaches focus on
enhancing privacy in Social Networks through improved privacy settings,
neglecting the fact that privacy awareness is a prerequisite for privacy
control. Social Network users first need to know about privacy issues before
being able to make adjustments. In this paper, we introduce Friend Inspector, a
serious game that allows its users to playfully increase their privacy
awareness on Facebook. Since its launch, Friend Inspector has attracted a
significant number of visitors, emphasising the need for better tools to
understand privacy settings on Social Networks.
|
1402.5881 | Filter Bank Multicarrier for Massive MIMO | cs.IT math.IT | This paper introduces filter bank multicarrier (FBMC) as a potential
candidate in the application of massive MIMO communication. It also points out
the advantages of FBMC over OFDM (orthogonal frequency division multiplexing)
in the application of massive MIMO. The absence of cyclic prefix in FBMC
increases the bandwidth efficiency. In addition, FBMC allows carrier
aggregation straightforwardly. Self-equalization, a property of FBMC in massive
MIMO that is introduced in this paper, has the impact of reducing (i)
complexity, (ii) sensitivity to carrier frequency offset (CFO), (iii)
peak-to-average power ratio (PAPR), and (iv) system latency, while (v) increasing
bandwidth efficiency. The numerical results that corroborate these claims are
presented.
|
1402.5886 | Near Optimal Bayesian Active Learning for Decision Making | cs.LG cs.AI | How should we gather information to make effective decisions? We address
Bayesian active learning and experimental design problems, where we
sequentially select tests to reduce uncertainty about a set of hypotheses.
Instead of minimizing uncertainty per se, we consider a set of overlapping
decision regions of these hypotheses. Our goal is to drive uncertainty into a
single decision region as quickly as possible.
We identify necessary and sufficient conditions for correctly identifying a
decision region that contains all hypotheses consistent with observations. We
develop a novel Hyperedge Cutting (HEC) algorithm for this problem, and prove
that it is competitive with the intractable optimal policy. Our efficient
implementation of the algorithm relies on computing subsets of the complete
homogeneous symmetric polynomials. Finally, we demonstrate its effectiveness on
two practical applications: approximate comparison-based learning and active
localization using a robot manipulator.
|
1402.5902 | On Learning from Label Proportions | stat.ML cs.LG | Learning from Label Proportions (LLP) is a learning setting, where the
training data is provided in groups, or "bags", and only the proportion of each
class in each bag is known. The task is to learn a model to predict the class
labels of the individual instances. LLP has broad applications in political
science, marketing, healthcare, and computer vision. This work answers the
fundamental question, when and why LLP is possible, by introducing a general
framework, Empirical Proportion Risk Minimization (EPRM). EPRM learns an
instance label classifier to match the given label proportions on the training
data. Our result is based on a two-step analysis. First, we provide a VC bound
on the generalization error of the bag proportions. We show that the bag sample
complexity is only mildly sensitive to the bag size. Second, we show that under
some mild assumptions, good bag proportion prediction guarantees good instance
label prediction. The results together provide a formal guarantee that the
individual labels can indeed be learned in the LLP setting. We discuss
applications of the analysis, including justification of LLP algorithms,
learning with population proportions, and a paradigm for learning algorithms
with privacy guarantees. We also demonstrate the feasibility of LLP based on a
case study in real-world setting: predicting income based on census data.
|
1402.5912 | On the Vector Broadcast Channel with Alternating CSIT: A Topological
Perspective | cs.IT math.IT | In many wireless networks, link strengths are affected by many topological
factors such as different distances, shadowing and inter-cell interference,
thus resulting in some links being generally stronger than other links. From an
information theoretic point of view, accounting for such topological aspects
has remained largely unexplored, despite strong indications that such aspects
can crucially affect transceiver and feedback design, as well as the overall
performance.
The work here takes a step in exploring this interplay between topology,
feedback and performance. This is done for the two user broadcast channel with
random fading, in the presence of a simple two-state topological setting of
statistically strong vs. weaker links, and in the presence of a practical
ternary feedback setting of alternating channel state information at the
transmitter (alternating CSIT) where for each channel realization, this CSIT
can be perfect, delayed, or not available.
In this setting, the work derives generalized degrees-of-freedom bounds and
exact expressions, that capture performance as a function of feedback
statistics and topology statistics. The results are based on novel topological
signal management (TSM) schemes that account for topology in order to fully
utilize feedback. This is achieved for different classes of feedback mechanisms
of practical importance, from which we identify specific feedback mechanisms
that are best suited for different topologies. This approach offers further
insight on how to split the effort --- of channel learning and feeding back
CSIT --- for the strong versus the weaker link. Further intuition is
provided on the possible gains from topological spatio-temporal diversity,
where topology changes in time and across users.
|
1402.5923 | A Testbed for Cross-Dataset Analysis | cs.CV | Since its beginning visual recognition research has tried to capture the huge
variability of the visual world in several image collections. The number of
available datasets is still progressively growing together with the amount of
samples per object category. However, this trend does not correspond directly
to an increase in the generalization capabilities of the developed
recognition systems. Each collection tends to have its specific characteristics
and to cover just some aspects of the visual world: these biases often narrow
the effect of the methods defined and tested separately over each image set.
Our work makes a first step towards the analysis of the dataset bias problem on
a large scale. We organize twelve existing databases in a unique corpus and we
present the visual community with a useful feature repository for future
research.
|
1402.5927 | Limitations on Quantum Key Repeaters | quant-ph cs.IT math.IT | A major application of quantum communication is the distribution of entangled
particles for use in quantum key distribution (QKD). Due to noise in the
communication line, QKD is in practice limited to a distance of a few hundred
kilometres, and can only be extended to longer distances by use of a quantum
repeater, a device which performs entanglement distillation and quantum
teleportation. The existence of noisy entangled states that are undistillable
but nevertheless useful for QKD raises the question of the feasibility of a
quantum key repeater, which would work beyond the limits of entanglement
distillation, hence possibly tolerating higher noise levels than existing
protocols. Here we exhibit fundamental limits on such a device in the form of
bounds on the rate at which it may extract secure key. As a consequence, we
give examples of states suitable for QKD but unsuitable for the most general
quantum key repeater protocol.
|
1402.5951 | Navigation Function Based Decentralized Control of A Multi-Agent System
with Network Connectivity Constraints | cs.SY | A wide range of applications require or can benefit from collaborative
behavior of a group of agents. The technical challenge addressed in this
chapter is the development of a decentralized control strategy that enables
each agent to independently navigate to ensure agents achieve a collective goal
while maintaining network connectivity. Specifically, cooperative controllers
are developed for networked agents with limited sensing and network
connectivity constraints. By modeling the interaction among the agents as a
graph, several different approaches to address the problems of preserving
network connectivity are presented, with the focus on a method that utilizes
navigation function frameworks. By modeling network connectivity constraints as
artificial obstacles in navigation functions, a decentralized control strategy
is presented in two particular applications, formation control and rendezvous
for a system of autonomous agents, which ensures global convergence to the
unique minimum of the potential field (i.e., desired formation or desired
destination) while preserving network connectivity. Simulation results are
provided to demonstrate the developed strategy.
|
1402.5979 | A Multiplierless Pruned DCT-like Transformation for Image and Video
Compression that Requires 10 Additions Only | cs.MM cs.CV stat.ME | A multiplierless pruned approximate 8-point discrete cosine transform (DCT)
requiring only 10 additions is introduced. The proposed algorithm was assessed
in image and video compression, showing competitive performance with
state-of-the-art methods. Digital implementation in 45 nm CMOS technology up to
place-and-route level indicates a clock speed of 288 MHz at a 1.1 V supply. The
8x8 block rate is 36 MHz. The DCT approximation was embedded into the HEVC reference
software; resulting video frames, at up to 327 Hz for 8-bit RGB HEVC, presented
negligible image degradation.
|
1402.5988 | Incremental Learning of Event Definitions with Inductive Logic
Programming | cs.LG cs.AI | Event recognition systems rely on properly engineered knowledge bases of
event definitions to infer occurrences of events in time. The manual
development of such knowledge is a tedious and error-prone task, thus
event-based applications may benefit from automated knowledge construction
techniques, such as Inductive Logic Programming (ILP), which combines machine
learning with the declarative and formal semantics of First-Order Logic.
However, learning temporal logical formalisms, which are typically utilized by
logic-based Event Recognition systems, is a challenging task that most ILP
systems cannot fully undertake. In addition, event-based data is usually
massive and collected at different times and under various circumstances.
Ideally, systems that learn from temporal data should be able to operate in an
incremental mode, that is, revise prior constructed knowledge in the face of
new evidence. Most ILP systems are batch learners, in the sense that in order
to account for new evidence they have no alternative but to forget past
knowledge and learn from scratch. Given the increased inherent complexity of
ILP and the volumes of real-life temporal data, this results in algorithms that
scale poorly. In this work we present an incremental method for learning and
revising event-based knowledge, in the form of Event Calculus programs. The
proposed algorithm relies on abductive-inductive learning and comprises a
scalable clause refinement methodology, based on a compressive summarization of
clause coverage in a stream of examples. We present an empirical evaluation of
our approach on real and synthetic data from activity recognition and city
transport applications.
|
1402.5991 | A predictive analytics approach to reducing avoidable hospital
readmission | stat.AP cs.AI | Hospital readmission has become a critical metric of quality and cost of
healthcare. Medicare anticipates that nearly $17 billion is paid out on the 20%
of patients who are readmitted within 30 days of discharge. Although several
interventions such as transition care management and discharge reengineering
have been practiced in recent years, the effectiveness and sustainability
depends on how well they can identify and target patients at high risk of
rehospitalization. Based on the literature, most current risk prediction models
fail to reach an acceptable accuracy level; none of them considers patient's
history of readmission and impacts of patient attribute changes over time; and
they often do not discriminate between planned and unnecessary readmissions.
Tackling such drawbacks, we develop a new readmission metric based on
administrative data that can identify potentially avoidable readmissions from
all other types of readmission. We further propose a tree based classification
method to estimate the predicted probability of readmission that can directly
incorporate a patient's history of readmission and risk factor changes over
time. The proposed methods are validated with 2011-12 Veterans Health
Administration data from inpatients hospitalized for heart failure, acute
myocardial infarction, pneumonia, or chronic obstructive pulmonary disease in
the State of Michigan. Results show improved discrimination power compared to
the literature (c-statistics>80%) and good calibration.
|
1402.5992 | POD/DEIM Reduced-Order Strategies for Efficient Four Dimensional
Variational Data Assimilation | cs.SY math.NA | This work studies reduced order modeling (ROM) approaches to speed up the
solution of variational data assimilation problems with large scale nonlinear
dynamical models. It is shown that a key requirement for a successful reduced
order solution is that reduced order Karush-Kuhn-Tucker conditions accurately
represent their full order counterparts. In particular, accurate reduced order
approximations are needed for the forward and adjoint dynamical models, as well
as for the reduced gradient. New strategies to construct reduced order bases
are developed for Proper Orthogonal Decomposition (POD) ROM data assimilation
using both Galerkin and Petrov-Galerkin projections. For the first time POD,
tensorial POD, and discrete empirical interpolation method (DEIM) are employed
to develop reduced data assimilation systems for a geophysical flow model,
namely, the two dimensional shallow water equations. Numerical experiments
confirm the theoretical framework for Galerkin projection. In the case of
Petrov-Galerkin projection, stabilization strategies must be considered for the
reduced order models. The new reduced order shallow water data assimilation
system provides analyses similar to those produced by the full resolution data
assimilation system in one tenth of the computational time.
|
1402.6010 | Tripartite Graph Clustering for Dynamic Sentiment Analysis on Social
Media | cs.SI cs.CL cs.IR | The growing popularity of social media (e.g., Twitter) allows users to easily
share information with each other and influence others by expressing their own
sentiments on various subjects. In this work, we propose an unsupervised
\emph{tri-clustering} framework, which analyzes both user-level and tweet-level
sentiments through co-clustering of a tripartite graph. A compelling feature of
the proposed framework is that the quality of sentiment clustering of tweets,
users, and features can be mutually improved by joint clustering. We further
investigate the evolution of user-level sentiments and latent feature vectors
in an online framework and devise an efficient online algorithm to sequentially
update the clustering of tweets, users and features with newly arrived data.
The online framework not only provides better quality of both dynamic
user-level and tweet-level sentiment analysis, but also improves the
computational and storage efficiency. We verified the effectiveness and
efficiency of the proposed approaches on the November 2012 California ballot
Twitter data.
|
1402.6013 | Open science in machine learning | cs.LG cs.DL | We present OpenML and mldata, open science platforms that provide easy
access to machine learning data, software and results to encourage further
study and application. They go beyond the more traditional repositories for
data sets and software packages in that they allow researchers to also easily
share the results they obtained in experiments and to compare their solutions
with those of others.
|
1402.6016 | Incremental Redundancy, Fountain Codes and Advanced Topics | cs.IT math.IT | This document is written in order to establish a common base ground on which
the majority of the relevant research about linear fountain codes can be
analyzed and compared. As far as I am concerned, there is no unified approach
that outlines and compares most of the published linear fountain codes in a
single and self-contained framework. This written document has not only
resulted in the review of theoretical fundamentals of efficient coding
techniques for incremental redundancy and linear fountain coding, but also
helped me have a comprehensive reference document and hopefully for many other
graduate students who would like to have some background to pursue a research
career regarding fountain codes and their various applications. Some background
in information, coding, graph and probability theory is expected. Although
various aspects of this topic and many other relevant research are deliberately
left out, I still hope that this document shall serve researchers' need well. I
have also included several exercises to warm up. The presentation style is
usually informal and the presented material is not necessarily rigorous. There
are many spots in the text that are product of my coauthors and myself,
although some of which have not been published yet.
|
1402.6028 | Algorithms for multi-armed bandit problems | cs.AI cs.LG | Although many algorithms for the multi-armed bandit problem are
well-understood theoretically, empirical confirmation of their effectiveness is
generally scarce. This paper presents a thorough empirical study of the most
popular multi-armed bandit algorithms. Three important observations can be made
from our results. Firstly, simple heuristics such as epsilon-greedy and
Boltzmann exploration outperform theoretically sound algorithms on most
settings by a significant margin. Secondly, the performance of most algorithms
varies dramatically with the parameters of the bandit problem. Our study
identifies for each algorithm the settings where it performs well, and the
settings where it performs poorly. Thirdly, the algorithms' performance
relative to each other is affected only by the number of bandit arms and the
variance of the rewards. This finding may guide the design of subsequent
empirical evaluations. In the second part of the paper, we turn our attention
to an important area of application of bandit algorithms: clinical trials.
Although the design of clinical trials has been one of the principal practical
problems motivating research on multi-armed bandits, bandit algorithms have
never been evaluated as potential treatment allocation strategies. Using data
from a real study, we simulate the outcome that a 2001-2002 clinical trial
would have had if bandit algorithms had been used to allocate patients to
treatments. We find that an adaptive trial would have successfully treated at
least 50% more patients, while significantly reducing the number of adverse
effects and increasing patient retention. At the end of the trial, the best
treatment could have still been identified with a high level of statistical
confidence. Our findings demonstrate that bandit algorithms are attractive
alternatives to current adaptive treatment allocation strategies.
|
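The two heuristics singled out above, epsilon-greedy and Boltzmann (softmax) exploration, are easy to state in code. The sketch below runs them on a toy Bernoulli bandit; the arm means, epsilon, temperature and horizon are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

def run_bandit(select_arm, true_means, horizon=10_000, seed=0):
    """Play a Bernoulli bandit for `horizon` rounds with a given arm-selection rule."""
    rng = np.random.default_rng(seed)
    n = len(true_means)
    counts, values = np.zeros(n), np.zeros(n)     # pulls and empirical means per arm
    reward_total = 0.0
    for t in range(horizon):
        arm = select_arm(values, counts, t, rng)
        reward = float(rng.random() < true_means[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # running mean update
        reward_total += reward
    return reward_total

def epsilon_greedy(eps):
    def select(values, counts, t, rng):
        if rng.random() < eps:
            return int(rng.integers(len(values)))  # explore uniformly at random
        return int(np.argmax(values))              # exploit the best empirical mean
    return select

def boltzmann(tau):
    def select(values, counts, t, rng):
        p = np.exp(values / tau)                   # softmax over current estimates
        return int(rng.choice(len(values), p=p / p.sum()))
    return select

means = [0.2, 0.5, 0.7]
print(run_bandit(epsilon_greedy(0.1), means))
print(run_bandit(boltzmann(0.1), means))
```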
1402.6034 | A DCT Approximation for Image Compression | cs.MM cs.CV stat.ME | An orthogonal approximation for the 8-point discrete cosine transform (DCT)
is introduced. The proposed transformation matrix contains only zeros and ones;
multiplications and bit-shift operations are absent. Close spectral behavior
relative to the DCT was adopted as design criterion. The proposed algorithm is
superior to the signed discrete cosine transform. It could also outperform
state-of-the-art algorithms in low and high image compression scenarios,
exhibiting at the same time a comparable computational complexity.
|
1402.6044 | Generalized Nonlinear Robust Energy-to-Peak Filtering for Differential
Algebraic Systems | cs.SY math.OC | The problem of robust nonlinear energy-to-peak filtering for nonlinear
descriptor systems with model uncertainties is addressed. The system is assumed
to have nonlinearities both in the state and output equations as well as
norm-bounded time-varying uncertainties in the realization matrices. A
generalized nonlinear dynamic filtering structure is proposed for such a class
of systems with more degrees of freedom than the conventional static-gain and
dynamic filtering structures. The L2-Linfty filter is synthesized through
semidefinite programming and strict LMIs, in which the energy-to-peak filtering
performance is optimized.
|
1402.6050 | Abiot: A Low cost agile sonic pest control tricopter | cs.RO | In this paper we introduce the concept of an agile electronic pest control
intelligent device for commercial usage and we have evaluated its performance
in comparison with other existing similar technologies. The frequency and
intensity are changed with respect to the target pest; however, human behavior
has been found to be unaffected by the exposure. The unit has been tested in lab
conditions, and field tests have also given encouraging results. The device can
be a standalone unit and hence work at a small scale (e.g., a kitchen garden);
on the other hand, multiple devices acting in coordination with each other give
the desired output on a larger scale.
|
1402.6065 | Multi-Agent Distributed Optimization via Inexact Consensus ADMM | cs.SY math.OC | Multi-agent distributed consensus optimization problems arise in many signal
processing applications. Recently, the alternating direction method of
multipliers (ADMM) has been used for solving this family of problems. The
ADMM-based distributed optimization method is shown to have a faster convergence
rate compared with classic methods based on consensus subgradient, but it can be
computationally expensive, especially for problems with complicated structures
or large dimensions. In this paper, we propose low-complexity algorithms that
can reduce the overall computational cost of consensus ADMM by an order of
magnitude for certain large-scale problems. Central to the proposed algorithms
is the use of an inexact step for each ADMM update, which enables the agents to
perform cheap computation at each iteration. Our convergence analyses show that
the proposed methods converge well under some convexity assumptions. Numerical
results show that the proposed algorithms offer considerably lower
computational complexity than the standard ADMM based distributed optimization
methods.
|
1402.6067 | Regular path queries on graphs with data: A rigid approach | cs.LO cs.DB cs.FL | Regular path queries (RPQs) are a classical navigational query formalism for
graph databases to specify constraints on labeled paths. Recently, RPQs have
been extended by Libkin and Vrgo$\rm \check{c}$ to incorporate data value
comparisons among different nodes on paths, called regular path queries with
data (RDPQ). It has been shown that the evaluation problem of RDPQs is
PSPACE-complete and NLOGSPACE-complete in data complexity. On the other hand,
the containment problem of RDPQs is in general undecidable. In this paper, we
propose a novel approach to extend regular path queries with data value
comparisons, called rigid regular path queries with data (RRDPQ). The main
ingredient of this approach is an automata model called nondeterministic rigid
register automata (NRRA), in which the data value comparisons are \emph{rigid},
in the sense that if the data value in the current position $x$ is compared to
a data value in some other position $y$, then by only using the labels (but not
data values), the position $y$ can be uniquely determined from $x$. We show
that NRRAs are robust in the sense that nondeterministic, deterministic and
two-way variant of NRRAs, as well as an extension of regular expressions, are
all of the same expressivity. We then argue that the expressive power of RRDPQs
is reasonable by demonstrating that for every graph database, there is a
localized transformation of the graph database so that every RDPQ in the
original graph database can be turned into an equivalent RRDPQ over the
transformed one. Finally, we investigate the computational properties of RRDPQs
and conjunctive RRDPQs (CRRDPQ). In particular, we show that the containment of
CRRDPQs (and RRDPQs) can be decided in 2EXPSPACE.
|
1402.6076 | Machine Learning at Scale | cs.LG cs.MS stat.ML | It takes skill to build a meaningful predictive model even with the abundance
of implementations of modern machine learning algorithms and readily available
computing resources. Building a model becomes challenging if hundreds of
terabytes of data need to be processed to produce the training data set. In a
digital advertising technology setting, we are faced with the need to build
thousands of such models that predict user behavior and power advertising
campaigns in a 24/7 chaotic real-time production environment. As data
scientists, we also have to convince other internal departments critical to
implementation success, our management, and our customers that our machine
learning system works. In this paper, we present the details of the design and
implementation of an automated, robust machine learning platform that impacts
billions of advertising impressions monthly. This platform enables us to
continuously optimize thousands of campaigns over hundreds of millions of
users, on multiple continents, against varying performance objectives.
|
1402.6077 | Inductive Logic Boosting | cs.LG cs.AI | Recent years have seen a surge of interest in Probabilistic Logic Programming
(PLP) and Statistical Relational Learning (SRL) models that combine logic with
probabilities. Structure learning of these systems is an intersection area of
Inductive Logic Programming (ILP) and statistical learning (SL). However, ILP
cannot deal with probabilities, and SL cannot model relational hypotheses. The
biggest challenge of integrating these two machine learning frameworks is how
to estimate the probability of a logic clause only from the observation of
grounded logic atoms. Many current methods model a joint probability by
representing a clause as a graphical model with literals as its vertices. Such a
model is too complicated and can only be approximated by pseudo-likelihood.
We propose the Inductive Logic Boosting framework, which transforms the
relational dataset into a feature-based dataset, induces logic rules by boosting
Problog Rule Trees, and relaxes the independence constraint of pseudo-likelihood.
Experimental evaluation on benchmark datasets demonstrates that the AUC-PR and
AUC-ROC values of the ILP-learned rules are higher than those of current
state-of-the-art SRL methods.
|
1402.6083 | Widely-Linear Digital Self-Interference Cancellation in
Direct-Conversion Full-Duplex Transceiver | cs.IT math.IT | This article addresses the modeling and cancellation of self-interference in
full-duplex direct-conversion radio transceivers, operating under practical
imperfect radio frequency (RF) components. Firstly, detailed self-interference
signal modeling is carried out, taking into account the most important RF
imperfections, namely transmitter power amplifier nonlinear distortion as well
as transmitter and receiver IQ mixer amplitude and phase imbalances. The
analysis shows that after realistic antenna isolation and RF cancellation, the
dominant self-interference waveform at receiver digital baseband can be modeled
through a widely-linear transformation of the original transmit data, opposed
to classical purely linear models. Such widely-linear self-interference
waveform is physically stemming from the transmitter and receiver IQ imaging,
and cannot be efficiently suppressed by classical linear digital cancellation.
Motivated by this, novel widely-linear digital self-interference cancellation
processing is then proposed and formulated, combined with efficient parameter
estimation methods. Extensive simulation results demonstrate that the proposed
widely-linear cancellation processing clearly outperforms the existing linear
solutions, hence enabling the use of practical low-cost RF front-ends utilizing
IQ mixing in full-duplex transceivers.
|
1402.6109 | The Complexity of Repairing, Adjusting, and Aggregating of Extensions in
Abstract Argumentation | cs.DS cs.AI | We study the computational complexity of problems that arise in abstract
argumentation in the context of dynamic argumentation, minimal change, and
aggregation. In particular, we consider the following problems, in which an
argumentation framework F and a small positive integer k are always given.
- The Repair problem asks whether a given set of arguments can be modified
into an extension by at most k elementary changes (i.e., the extension is of
distance k from the given set).
- The Adjust problem asks whether a given extension can be modified by at
most k elementary changes into an extension that contains a specified argument.
- The Center problem asks, given two extensions of distance k, whether there
is a "center" extension that is at distance at most (k-1) from
both given extensions.
We study these problems in the framework of parameterized complexity, and
take the distance k as the parameter. Our results cover several different
semantics, including admissible, complete, preferred, semi-stable and stable
semantics.
|
1402.6114 | Node seniority ranking | physics.soc-ph cs.SI | Recent advances in graph theory suggest that it is possible to identify the
oldest nodes of a network using only the graph topology. Here we report on
applications to heterogeneous real world networks. To this end, and in order to
gain new insights, we propose the theoretical framework of the Estrada
communicability. We apply it to two technological networks (an underground, the
diffusion of a software worm in a LAN) and to a third network representing a
cholera outbreak. In spite of errors introduced in the adjacency matrix of
their graphs, the identification of the oldest nodes is feasible, within a
small margin of error, and extremely simple. Applications include the search for
the initial disease spreader (the patient-zero problem), for the origin of rumors
in social networks and of malware in computer networks, for triggering events in
blackouts, and for the recognition of the oldest urban sites.
|
1402.6124 | Differential Privacy in Metric Spaces: Numerical, Categorical and
Functional Data Under the One Roof | cs.DB cs.IT math.IT math.PR | We study Differential Privacy in the abstract setting of Probability on
metric spaces. Numerical, categorical and functional data can be handled in a
uniform manner in this setting. We demonstrate how mechanisms based on data
sanitisation and those that rely on adding noise to query responses fit within
this framework. We prove that once the sanitisation is differentially private,
then so is the query response for any query. We show how to construct
sanitisations for high-dimensional databases using simple 1-dimensional
mechanisms. We also provide lower bounds on the expected error for
differentially private sanitisations in the general metric space setting.
Finally, we consider the question of sufficient sets for differential privacy
and show that any algebra generating the
Borel $\sigma$-algebra is a sufficient set for relaxed differential privacy.
|
1402.6132 | Uncovering the information core in recommender systems | cs.IR | With the rapid growth of the Internet and the overwhelming amount of information
that people are confronted with, recommender systems have been developed to
effectively support users' decision-making processes in online systems. So far,
much attention has been paid to designing new recommendation algorithms and
improving existing ones. However, few works have considered the different
contributions from different users to the performance of a recommender system.
Such studies can help us improve the recommendation efficiency by excluding
irrelevant users. In this paper, we argue that in each online system there
exists a group of core users who carry most of the information for
recommendation. With them, recommender systems can already generate
satisfactory recommendations. Our core user extraction method enables the
recommender systems to achieve 90% of the accuracy by taking only 20% of the
data into account.
|
1402.6133 | Bayesian Sample Size Determination of Vibration Signals in Machine
Learning Approach to Fault Diagnosis of Roller Bearings | stat.ML cs.LG | Sample size determination for a data set is an important statistical process
for analyzing the data to an optimum level of accuracy and using minimum
computational work. This process is applicable in every domain that deals with
large data sets and heavy computational work. This study
uses Bayesian analysis for determination of minimum sample size of vibration
signals to be considered for fault diagnosis of a bearing using pre-defined
parameters such as the inverse standard probability and the acceptable margin
of error. Thus an analytical formula for sample size determination is
introduced. The fault diagnosis of the bearing is done using a machine learning
approach using an entropy-based J48 algorithm. The proposed method will help
researchers involved in fault diagnosis determine the minimum sample size of
data needed for analysis with good statistical stability and precision.
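As a rough illustration of how such a sample-size relation can be applied, the Python sketch below uses the common closed form n = (z * sigma / E)^2, where z is the inverse standard normal probability for the chosen confidence level and E is the acceptable margin of error. This is an assumed stand-in for illustration only, not the paper's exact Bayesian formula.

```python
from math import ceil
from scipy.stats import norm

def sample_size(sigma, margin_of_error, confidence=0.95):
    """Minimum sample size via n = (z * sigma / E)^2.
    z is the inverse standard normal probability for the confidence level;
    the paper's Bayesian derivation may differ in detail."""
    z = norm.ppf(0.5 + confidence / 2)          # inverse standard probability
    return ceil((z * sigma / margin_of_error) ** 2)

# e.g. vibration-signal standard deviation 0.2 and margin of error 0.05 -> 62 signals
print(sample_size(sigma=0.2, margin_of_error=0.05))
```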
|
1402.6138 | Discovering the Network Backbone from Traffic Activity Data | cs.SI | We introduce a new computational problem, the BackboneDiscovery problem,
which encapsulates both functional and structural aspects of network analysis.
While the topology of a typical road network has been available for a long
time (e.g., through maps), it is only recently that fine-granularity functional
(activity and usage) information about the network (like source-destination
traffic information) is being collected and is readily available. The
combination of functional and structural information provides an efficient way
to explore and understand usage patterns of networks and aid in design and
decision making. We propose efficient algorithms for the BackboneDiscovery
problem including a novel use of edge centrality. We observe that for many real
world networks, our algorithm produces a backbone with a small subset of the
edges that support a large percentage of the network activity.
|
1402.6208 | The Anatomy of a Modular System for Media Content Analysis | cs.MA cs.AI cs.DC | Intelligent systems for the annotation of media content are increasingly
being used for the automation of parts of social science research. In this
domain the problem of integrating various Artificial Intelligence (AI)
algorithms into a single intelligent system arises spontaneously. As part of
our ongoing effort in automating media content analysis for the social
sciences, we have built a modular system by combining multiple AI modules into
a flexible framework in which they can cooperate in complex tasks. Our system
combines data gathering, machine translation, topic classification, extraction
and annotation of entities and social networks, as well as many other tasks
that have been perfected over the past years of AI research. Over the last few
years, it has allowed us to realise a series of scientific studies over a vast
range of applications including comparative studies between news outlets and
media content in different countries, modelling of user preferences, and
monitoring public mood. The framework is flexible and allows the design and
implementation of modular agents, where simple modules cooperate in the
annotation of a large dataset without central coordination.
|
1402.6225 | Predicting missing links via significant paths | physics.soc-ph cs.SI physics.data-an | Link prediction plays an important role in understanding intrinsic evolving
mechanisms of networks. With the belief that the likelihood of the existence of
a link between two nodes is strongly related with their similarity, many
methods have been proposed to calculate node similarity based on node
attributes and/or topological structures. Among the large variety of methods that
take into account paths connecting the target pair of nodes, most neglect the
heterogeneity of those paths. Our hypothesis is that a path consisting of
small-degree nodes provides strong evidence of similarity between its two ends;
accordingly, we propose a so-called significant path index
in this Letter to leverage intermediate nodes' degrees in similarity
calculation. Empirical experiments on twelve disparate real networks
demonstrate that the proposed index outperforms the mainstream link prediction
baselines.
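One plausible way to instantiate the idea of down-weighting paths that pass through high-degree intermediaries is sketched below with networkx; the exact weighting and path length used in the Letter may differ, so this is only an illustrative assumption.

```python
import networkx as nx
from itertools import islice

def significant_path_index(G, x, y, cutoff=3, max_paths=1000):
    """Score each simple path between x and y by the product of the inverse
    degrees of its intermediate nodes, so paths through small-degree nodes
    count as stronger evidence of similarity (illustrative weighting)."""
    score = 0.0
    paths = nx.all_simple_paths(G, x, y, cutoff=cutoff)
    for path in islice(paths, max_paths):       # cap the number of paths considered
        w = 1.0
        for node in path[1:-1]:                 # intermediate nodes only
            w /= G.degree(node)
        score += w
    return score

G = nx.karate_club_graph()
print(significant_path_index(G, 0, 33))
```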
|
1402.6238 | Improving Collaborative Filtering based Recommenders using Topic
Modelling | cs.IR cs.CL cs.LG | Standard Collaborative Filtering (CF) algorithms make use of interactions
between users and items in the form of implicit or explicit ratings alone for
generating recommendations. Similarity among users or items is calculated
purely based on rating overlap in this case, without considering explicit
properties of the users or items involved, limiting their applicability in domains
with very sparse rating spaces. In many domains such as movies, news or
electronic commerce recommenders, considerable contextual data in text form
describing item properties is available along with the rating data, which could
be utilized to improve recommendation quality. In this paper, we propose a novel
approach to improve standard CF based recommenders by utilizing latent
Dirichlet allocation (LDA) to learn latent properties of items, expressed in
terms of topic proportions, derived from their textual description. We infer
user's topic preferences or persona in the same latent space, based on her
historical ratings. While computing similarity between users, we make use of a
combined similarity measure involving rating overlap as well as similarity in
the latent topic space. This approach alleviates sparsity problem as it allows
calculation of similarity between users even if they have not rated any items
in common. Our experiments on multiple public datasets indicate that the
proposed hybrid approach significantly outperforms standard user-based and
item-based CF recommenders in terms of classification accuracy metrics such as
precision, recall and f-measure.
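A minimal sketch of the combined similarity idea, assuming the users' LDA topic proportions have already been inferred and using a simple convex combination with weight alpha. Both the combination rule and alpha are illustrative assumptions rather than the paper's exact formula.

```python
import numpy as np

def combined_similarity(r_u, r_v, t_u, t_v, alpha=0.5):
    """Convex combination of rating-overlap cosine similarity and similarity
    in the LDA topic space. r_u, r_v: rating vectors over the item universe
    (0 = unrated); t_u, t_v: users' inferred topic-proportion vectors."""
    def cosine(a, b):
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0
    return alpha * cosine(r_u, r_v) + (1 - alpha) * cosine(t_u, t_v)

# Even with zero rating overlap, the topic term keeps the similarity informative.
r_u, r_v = np.array([5, 0, 0, 0]), np.array([0, 0, 4, 0])
t_u, t_v = np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])
print(combined_similarity(r_u, r_v, t_u, t_v))
```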
|
1402.6239 | Improved Upper and Lower Bound Heuristics for Degree Anonymization in
Social Networks | cs.SI cs.DS | Motivated by a strongly growing interest in anonymizing social network data,
we investigate the NP-hard Degree Anonymization problem: given an undirected
graph, the task is to add a minimum number of edges such that the graph becomes
k-anonymous. That is, for each vertex there have to be at least k-1 other
vertices of exactly the same degree. The model of degree anonymization has been
introduced by Liu and Terzi [ACM SIGMOD'08], who also proposed and evaluated a
two-phase heuristic. We present an enhancement of this heuristic, including new
algorithms for each phase which significantly improve on the previously known
theoretical and practical running times. Moreover, our algorithms are optimized
for large-scale social networks and provide upper and lower bounds for the
optimal solution. Notably, on about 26% of the real-world data we provide
(provably) optimal solutions, whereas in the other cases our upper bounds
significantly improve on known heuristic solutions.
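For intuition, a greedy sketch of the degree-sequence anonymization phase (the first phase of the Liu and Terzi heuristic) is given below; it only estimates the total degree increase that added edges would have to realize, and the paper's improved upper- and lower-bound heuristics are considerably more refined.

```python
import networkx as nx

def anonymization_cost(degrees, k):
    """Greedy sketch: sort degrees in decreasing order, cut the sequence into
    groups of at least k, raise every degree to its group maximum, and return
    the total raise (a rough proxy for the edge additions needed)."""
    d = sorted(degrees, reverse=True)
    cost, i = 0, 0
    while i < len(d):
        j = i + k
        if len(d) - j < k:          # avoid leaving a tail with fewer than k degrees
            j = len(d)
        cost += sum(d[i] - x for x in d[i:j])
        i = j
    return cost

G = nx.karate_club_graph()
print(anonymization_cost([deg for _, deg in G.degree], k=3))
```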
|
1402.6243 | Globally Optimal Cooperation in Dense Cognitive Radio Networks | cs.NI cs.IT math.IT | The problem of calculating the local and global decision thresholds in hard
decisions based cooperative spectrum sensing is well known for its mathematical
intractability. Previous work relied on simple suboptimal counting rules for
decision fusion in order to avoid the exhaustive numerical search required for
obtaining the optimal thresholds. However, these simple rules are not globally
optimal as they do not maximize the overall global detection probability by
jointly selecting local and global thresholds. Instead, they maximize the
detection probability for a specific global threshold. In this paper, a
globally optimal decision fusion rule for Primary User signal detection based
on the Neyman-Pearson (NP) criterion is derived. The algorithm is based on a
novel representation for the global performance metrics in terms of the
regularized incomplete beta function. Based on this mathematical
representation, it is shown that the globally optimal NP hard decision fusion
test can be put in the form of a conventional one dimensional convex
optimization problem. A binary search for the global threshold can be applied
yielding a complexity of O(log2(N)), where N represents the number of
cooperating users. The logarithmic complexity is desirable because we are
concerned with dense networks, in which N is expected to be large. The proposed
optimal scheme outperforms conventional counting rules, such as the OR, AND,
and MAJORITY rules. It is shown via simulations that, although the optimal rule
tends to the simple OR rule when the number of cooperating secondary users is
small, it offers significant SNR gains in dense cognitive radio networks with
a large number of cooperating users.
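A hedged sketch of the key computational ingredients: the global detection and false-alarm probabilities of a k-out-of-N fusion rule expressed through the regularized incomplete beta function, and a binary search over the integer global threshold. The joint optimization of the local sensor thresholds performed in the paper is omitted, and the per-sensor probabilities below are assumed given.

```python
from scipy.special import betainc

def global_prob(p_local, K, N):
    """P(at least K of N sensors vote 1), each voting 1 w.p. p_local, via the
    binomial-tail identity with the regularized incomplete beta function."""
    return betainc(K, N - K + 1, p_local)

def optimal_global_threshold(p_f_local, p_d_local, N, alpha=0.05):
    """Binary search (O(log2 N)) for the smallest integer K whose global
    false-alarm probability is at most alpha; Q_f is decreasing in K."""
    lo, hi = 1, N
    while lo < hi:
        mid = (lo + hi) // 2
        if global_prob(p_f_local, mid, N) <= alpha:
            hi = mid
        else:
            lo = mid + 1
    return lo, global_prob(p_d_local, lo, N)

K, Qd = optimal_global_threshold(p_f_local=0.1, p_d_local=0.8, N=100)
print(K, Qd)
```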
|
1402.6273 | Explaining Snapshots of Network Diffusions: Structural and Hardness
Results | cs.SI physics.soc-ph | Much research has been done on studying the diffusion of ideas or
technologies on social networks including the \textit{Influence Maximization}
problem and many of its variations. Here, we investigate a type of inverse
problem. Given a snapshot of the diffusion process, we seek to understand if
the snapshot is feasible for a given dynamic, i.e., whether there is a limited
number of nodes whose initial adoption can result in the snapshot in finite
time. While similar questions have been considered for epidemic dynamics, here,
we consider this problem for variations of the deterministic Linear Threshold
Model, which is more appropriate for modeling strategic agents. Specifically,
we consider both sequential and simultaneous dynamics when deactivations are
allowed and when they are not. Even though we show hardness results for all
variations we consider, we show that the case of sequential dynamics with
deactivations allowed is significantly harder than all others. In contrast,
sequential dynamics make the problem trivial on cliques, even though its
complexity for simultaneous dynamics is unknown. We complement our hardness
results with structural insights that can help better understand diffusions on
social networks under various dynamics.
|
1402.6278 | Sample Complexity Bounds on Differentially Private Learning via
Communication Complexity | cs.DS cs.CC cs.LG | In this work we analyze the sample complexity of classification by
differentially private algorithms. Differential privacy is a strong and
well-studied notion of privacy introduced by Dwork et al. (2006) that ensures
that the output of an algorithm leaks little information about the data point
provided by any of the participating individuals. Sample complexity of private
PAC and agnostic learning was studied in a number of prior works starting with
(Kasiviswanathan et al., 2008) but a number of basic questions still remain
open, most notably whether learning with privacy requires more samples than
learning without privacy.
We show that the sample complexity of learning with (pure) differential
privacy can be arbitrarily higher than the sample complexity of learning
without the privacy constraint or the sample complexity of learning with
approximate differential privacy. Our second contribution and the main tool is
an equivalence between the sample complexity of (pure) differentially private
learning of a concept class $C$ (or $SCDP(C)$) and the randomized one-way
communication complexity of the evaluation problem for concepts from $C$. Using
this equivalence we prove the following bounds:
1. $SCDP(C) = \Omega(LDim(C))$, where $LDim(C)$ is the Littlestone's (1987)
dimension characterizing the number of mistakes in the online-mistake-bound
learning model. Known bounds on $LDim(C)$ then imply that $SCDP(C)$ can be much
higher than the VC-dimension of $C$.
2. For any $t$, there exists a class $C$ such that $LDim(C)=2$ but $SCDP(C)
\geq t$.
3. For any $t$, there exists a class $C$ such that the sample complexity of
(pure) $\alpha$-differentially private PAC learning is $\Omega(t/\alpha)$ but
the sample complexity of the relaxed $(\alpha,\beta)$-differentially private
PAC learning is $O(\log(1/\beta)/\alpha)$. This resolves an open problem of
Beimel et al. (2013b).
|
1402.6286 | Improved Recovery Guarantees for Phase Retrieval from Coded Diffraction
Patterns | cs.IT math.IT quant-ph | In this work we analyze the problem of phase retrieval from Fourier
measurements with random diffraction patterns. To this end, we consider the
recently introduced PhaseLift algorithm, which expresses the problem in the
language of convex optimization. We provide recovery guarantees which require
O(log^2 d) different diffraction patterns, thus improving on recent results by
Candes et al. [arXiv:1310.3240], which require O(log^4 d) different patterns.
|
1402.6288 | A categorization scheme for socialbot attacks in online social networks | cs.SI physics.soc-ph | Over the past years, online social networks (OSNs) like Facebook and Twitter have become
powerful instruments for communication and networking. Unfortunately, they have
also become a welcome target for socialbot attacks. Therefore, a deep
understanding of the nature of such attacks is important to protect the
ecosystem of OSNs. In this extended abstract we propose a categorization
scheme of socialbot attacks that aims at providing an overview of the state of
the art of techniques in this emerging field. Finally, we demonstrate the
usefulness of our categorization scheme by characterizing recent socialbot
attacks according to our categorization scheme.
|
1402.6289 | Understanding the impact of socialbot attacks in online social networks | cs.SI physics.soc-ph | Online social networks (OSN) like Twitter or Facebook are popular and
powerful since they allow reaching millions of users online. They are also a
popular target for socialbot attacks. Without a deep understanding of the
impact of such attacks, the potential of online social networks as an
instrument for facilitating discourse or democratic processes is in jeopardy.
In this extended abstract we present insights from a live lab experiment in
which social bots aimed at manipulating the social graph of an online social
network, in our case Twitter. We explored the link creation behavior between
targeted human users and our results suggest that socialbots may indeed have
the ability to shape and influence the social graph in online social networks.
However, our results also show that external factors may play an important role
in the creation of social links in OSNs.
|
1402.6294 | Frankl-R\"odl type theorems for codes and permutations | math.CO cs.IT math.IT | We give a new proof of the Frankl-R\"odl theorem on forbidden intersections,
via the probabilistic method of dependent random choice. Our method extends to
codes with forbidden distances, where over large alphabets our bound is
significantly better than that obtained by Frankl and R\"odl. We also apply our
bound to a question of Ellis on sets of permutations with forbidden distances,
and to establish a weak form of a conjecture of Alon, Shpilka and Umans on
sunflowers.
|
1402.6299 | Necessary and sufficient optimality conditions for classical simulations
of quantum communication processes | quant-ph cs.IT math.IT | We consider the process consisting of preparation, transmission through a
quantum channel, and subsequent measurement of quantum states. The
communication complexity of the channel is the minimal amount of classical
communication required for classically simulating it. Recently, we reduced the
computation of this quantity to a convex minimization problem with linear
constraints. Every solution of the constraints provides an upper bound on the
communication complexity. In this paper, we derive the dual maximization
problem of the original one. The feasible points of the dual constraints, which
are inequalities, give lower bounds on the communication complexity, as
illustrated with an example. The optimal values of the two problems turn out to
be equal (zero duality gap). By this property, we provide necessary and
sufficient conditions for optimality in terms of a set of equalities and
inequalities. We use these conditions and two reasonable but unproven
hypotheses to derive the lower bound $n 2^{n-1}$ for a noiseless quantum
channel with capacity equal to $n$ qubits. This lower bound can have
interesting consequences in the context of the recent debate on the reality of
the quantum state.
|
1402.6305 | About Adaptive Coding on Countable Alphabets: Max-Stable Envelope
Classes | cs.IT math.IT math.ST stat.TH | In this paper, we study the problem of lossless universal source coding for
stationary memoryless sources on countably infinite alphabets. This task is
generally not achievable without restricting the class of sources over which
universality is desired. Building on our prior work, we propose natural
families of sources characterized by a common dominating envelope. We
particularly emphasize the notion of adaptivity, which is the ability to
perform as well as an oracle knowing the envelope, without actually knowing it.
This is closely related to the notion of hierarchical universal source coding,
but with the important difference that families of envelope classes are not
discretely indexed and not necessarily nested.
Our contribution is to extend the classes of envelopes over which adaptive
universal source coding is possible, namely by including max-stable
(heavy-tailed) envelopes which are excellent models in many applications, such
as natural language modeling. We derive a minimax lower bound on the redundancy
of any code on such envelope classes, including an oracle that knows the
envelope. We then propose a constructive code that does not use knowledge of
the envelope. The code is computationally efficient and is structured to use an
{E}xpanding {T}hreshold for {A}uto-{C}ensoring, and we therefore dub it the
\textsc{ETAC}-code. We prove that the \textsc{ETAC}-code achieves the lower
bound on the minimax redundancy within a factor logarithmic in the sequence
length, and can be therefore qualified as a near-adaptive code over families of
heavy-tailed envelopes. For finite and light-tailed envelopes the penalty is
even less, and the same code follows closely previous results that explicitly
made the light-tailed assumption. Our technical results are founded on methods
from regular variation theory and concentration of measure.
|
1402.6361 | Oracle-Based Robust Optimization via Online Learning | math.OC cs.LG | Robust optimization is a common framework in optimization under uncertainty
when the problem parameters are not known, but it is rather known that the
parameters belong to some given uncertainty set. In the robust optimization
framework the problem solved is a min-max problem where a solution is judged
according to its performance on the worst possible realization of the
parameters. In many cases, a straightforward solution of the robust
optimization problem of a certain type requires solving an optimization problem
of a more complicated type, and in some cases one that is even NP-hard. For
example, solving a robust conic quadratic program with ellipsoidal uncertainty,
such as those arising in robust SVM, leads in general to a semidefinite program. In this
paper we develop a method for approximately solving a robust optimization
problem using tools from online convex optimization, where in every stage a
standard (non-robust) optimization program is solved. Our algorithms find an
approximate robust solution using a number of calls to an oracle that solves
the original (non-robust) problem that is inversely proportional to the square
of the target accuracy.
|
1402.6366 | LSSVM-ABC Algorithm for Stock Price prediction | cs.CE cs.NE | In this paper, the Artificial Bee Colony (ABC) algorithm, which is inspired by the
behavior of honey bee swarms, is presented. ABC is a stochastic population-based
evolutionary algorithm for problem solving. The ABC algorithm, considered
one of the most recent swarm intelligence techniques, is proposed to optimize the
least square support vector machine (LSSVM) to predict the daily stock prices.
The proposed model is based on the study of stocks historical data, technical
indicators and optimizing LSSVM with ABC algorithm. ABC selects best free
parameters combination for LSSVM to avoid over-fitting and local minima
problems and improve prediction accuracy. LSSVM optimized by the Particle Swarm
Optimization (PSO) algorithm, plain LSSVM, and ANN techniques are used for
comparison with the proposed model. The proposed model is tested on twenty datasets
representing different sectors of the S&P 500 stock market. Results presented in this paper show
that the proposed model has fast convergence speed, and it also achieves better
accuracy than compared techniques in most cases.
|
1402.6383 | Large-margin Learning of Compact Binary Image Encodings | cs.CV | The use of high-dimensional features has become a normal practice in many
computer vision applications. The large dimension of these features is a
limiting factor upon the number of data points which may be effectively stored
and processed, however. We address this problem by developing a novel approach
to learning a compact binary encoding, which exploits both pair-wise proximity
and class-label information on training data set. Exploiting this extra
information allows the development of encodings which, although compact,
outperform the original high-dimensional features in terms of final
classification or retrieval performance. The method is general, in that it is
applicable to both non-parametric and parametric learning methods. This
generality means that the embedded features are suitable for a wide variety of
computer vision tasks, such as image classification and content-based image
retrieval. Experimental results demonstrate that the new compact descriptor
achieves an accuracy comparable to, and in some cases better than, the visual
descriptor in the original space despite being significantly more compact.
Moreover, any convex loss function and convex regularization penalty (e.g., $
\ell_p $ norm with $ p \ge 1 $) can be incorporated into the framework, which
provides future flexibility.
|
1402.6387 | Active spline model: A shape based model-interactive segmentation | cs.CV | Segmentation methods in the literature rarely provide for editing the result after
the algorithm delivers it, and offer no solution when segmentation goes wrong. We
propose to formulate the point distribution model in terms of a
centripetal-parameterized Catmull-Rom spline. Such a fusion brings interactivity
to model-based segmentation, so that editing is better handled. When the delivered
segment is unsatisfactory, the user simply shifts points to vary the curve. We ran
the method on three disparate imaging modalities and achieved an average
overlap of 0.879 for automated lung segmentation on chest radiographs. The edit
afterward improved the average overlap to 0.945, with a minimum of 0.925. The
source code and the demo video are available at http://wp.me/p3vCKy-2S
|
1402.6399 | Formally self-dual linear binary codes from circulant graphs | math.CO cs.IT math.IT | In 2002, Tonchev first constructed some linear binary codes defined by the
adjacency matrices of undirected graphs. Graphs are thus an important tool for
searching for optimal codes. In this paper, we introduce a new method of searching
for (proposed) optimal formally self-dual linear binary codes derived from circulant
graphs.
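For illustration, here is a tiny Python sketch of one common graph-to-code construction: a generator matrix [I | A] over GF(2), with A the adjacency matrix of a circulant graph, followed by a brute-force weight distribution. The paper's actual search procedure and optimality criteria are not reproduced here, so treat the construction as an assumed example.

```python
import numpy as np
from itertools import product
from collections import Counter

def circulant_adjacency(n, connections):
    """Adjacency matrix of the circulant graph C_n(connections)."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for s in connections:
            A[i, (i + s) % n] = A[i, (i - s) % n] = 1
    return A

n = 4
A = circulant_adjacency(n, [1])
G = np.hstack([np.eye(n, dtype=int), A])    # generator matrix [I | A] over GF(2)
weights = Counter(int((np.array(m) @ G % 2).sum()) for m in product([0, 1], repeat=n))
print(dict(weights))                         # weight distribution of the [2n, n] code
```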
|
1402.6404 | On the Algebraic Structure of Linear Trellises | cs.IT cs.DM math.IT | Trellises are crucial graphical representations of codes. While conventional
trellises are well understood, the general theory of (tail-biting) trellises is
still under development. Iterative decoding concretely motivates such theory.
In this paper we first develop a new algebraic framework for a systematic
analysis of linear trellises which enables us to address open foundational
questions. In particular, we present a useful and powerful characterization of
linear trellis isomorphy. We also obtain a new proof of the Factorization
Theorem of Koetter/Vardy and point out unnoticed problems for the group case.
Next, we apply our work to: describe all the elementary trellis
factorizations of linear trellises and consequently to determine all the
minimal linear trellises for a given code; prove that nonmergeable one-to-one
linear trellises are strikingly determined by the edge-label sequences of
certain closed paths; prove self-duality theorems for minimal linear trellises;
analyze quasi-cyclic linear trellises and consequently extend results on
reduced linear trellises to nonreduced ones. To achieve this, we also provide
new insight into mergeability and path connectivity properties of linear
trellises.
Our classification results are important for iterative decoding as we show
that minimal linear trellises can yield different pseudocodewords even if they
have the same graph structure.
|
1402.6407 | Better bitmap performance with Roaring bitmaps | cs.DB | Bitmap indexes are commonly used in databases and search engines. By
exploiting bit-level parallelism, they can significantly accelerate queries.
However, they can use much memory, and thus we might prefer compressed bitmap
indexes. Following Oracle's lead, bitmaps are often compressed using run-length
encoding (RLE). Building on prior work, we introduce the Roaring compressed
bitmap format: it uses packed arrays for compression instead of RLE. We compare
it to two high-performance RLE-based bitmap encoding techniques: WAH (Word
Aligned Hybrid compression scheme) and Concise (Compressed `n' Composable
Integer Set). On synthetic and real data, we find that Roaring bitmaps (1)
often compress significantly better (e.g., 2 times) and (2) are faster than the
compressed alternatives (up to 900 times faster for intersections). Our results
challenge the view that RLE-based bitmap compression is best.
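A toy sketch of the chunking idea behind Roaring follows: 32-bit values are split by their high 16 bits and each chunk gets its own small container. Real Roaring uses sorted 16-bit arrays and 2^16-bit bitmaps (switching at 4096 elements) rather than the Python sets used here, so this is only an illustration of the layout, not of the format itself.

```python
from collections import defaultdict

class TinyRoaring:
    """Toy container-based bitmap: values are bucketed by their high 16 bits;
    each bucket holds the low 16 bits (as a set here, for brevity)."""
    def __init__(self, values=()):
        self.containers = defaultdict(set)
        for v in values:
            self.containers[v >> 16].add(v & 0xFFFF)

    def __and__(self, other):
        out = TinyRoaring()
        # Intersections only touch chunks present in both bitmaps.
        for key in self.containers.keys() & other.containers.keys():
            out.containers[key] = self.containers[key] & other.containers[key]
        return out

    def __iter__(self):
        for key in sorted(self.containers):
            for low in sorted(self.containers[key]):
                yield (key << 16) | low

a = TinyRoaring([1, 2, 100_000, 200_000])
b = TinyRoaring([2, 100_000, 300_000])
print(list(a & b))   # [2, 100000]
```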
|
1402.6416 | Deconstruction of compound objects from image sets | cs.CV | We propose a method to recover the structure of a compound object from
multiple silhouettes. Structure is expressed as a collection of 3D primitives
chosen from a pre-defined library, each with an associated pose. This has
several advantages over a volume or mesh representation both for estimation and
the utility of the recovered model. The main challenge in recovering such a
model is the combinatorial number of possible arrangements of parts. We address
this issue by exploiting the sparse nature of the problem, and show that our
method scales to objects constructed from large libraries of parts.
|
1402.6422 | A Novel User Pairing Scheme for Functional Decode-and-Forward Multi-way
Relay Network | cs.IT math.IT | In this paper, we consider a functional decode and forward (FDF) multi-way
relay network (MWRN) where a common user facilitates each user in the network
to obtain messages from all other users. We propose a novel user pairing
scheme, which is based on the principle of selecting a common user with the
best average channel gain. This allows the user with the best channel
conditions to contribute to the overall system performance. Assuming lattice
code based transmissions, we derive upper bounds on the average common rate and
the average sum rate with the proposed pairing scheme. Considering M-ary
quadrature amplitude modulation with square constellation as a special case of
lattice code transmission, we derive asymptotic average symbol error rate (SER)
of the MWRN. We show that in terms of the achievable rates, the proposed
pairing scheme outperforms the existing pairing schemes under a wide range of
channel scenarios. The proposed pairing scheme also has lower average SER
compared to existing schemes. We show that overall, the MWRN performance with
the proposed pairing scheme is more robust, compared to existing pairing
schemes, especially under worst case channel conditions when majority of users
have poor average channel gains.
|
1402.6428 | Clustering Multidimensional Data with PSO based Algorithm | cs.NE | Data clustering is a recognized data analysis method in data mining, and
K-Means is the best known partitional clustering method, possessing attractive
features. We observe that K-Means and other partitional clustering techniques
suffer from several limitations, such as initial cluster centre selection,
prior knowledge of the number of clusters, the dead unit problem, multiple cluster
membership and premature convergence to local optima. Several optimization
methods have been proposed in the literature to address these
limitations, but Swarm Intelligence (SI) has achieved a remarkable position
in this area. Particle Swarm Optimization (PSO) is the most popular SI
technique and one of the favorite areas of researchers. In this paper, we
present a brief overview of PSO and applicability of its variants to solve
clustering challenges. We also propose an advanced PSO algorithm named
Subtractive Clustering based Boundary Restricted Adaptive Particle Swarm
Optimization (SC-BR-APSO) algorithm for clustering multidimensional data. For
comparison purpose, we have studied and analyzed various algorithms such as
K-Means, PSO, K-Means-PSO, Hybrid Subtractive + PSO, BRAPSO, and proposed
algorithm on nine different datasets. The motivation behind proposing
SC-BR-APSO algorithm is to deal with multidimensional data clustering, with
minimum error rate and maximum convergence rate.
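To make the PSO-for-clustering idea concrete, here is a minimal sketch in which each particle encodes k centroids and the swarm minimizes the within-cluster sum of squared errors. The subtractive-clustering initialisation of SC-BR-APSO is omitted and the boundary restriction is only hinted at by a clipping step, so all parameter choices below are illustrative assumptions.

```python
import numpy as np

def pso_cluster(X, k, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Each particle is a (k, d) array of centroids; fitness is the SSE of
    assigning every point to its nearest centroid."""
    rng = np.random.default_rng(seed)
    lo, hi = X.min(axis=0), X.max(axis=0)
    pos = rng.uniform(lo, hi, size=(n_particles, k, X.shape[1]))
    vel = np.zeros_like(pos)

    def sse(c):
        d = np.linalg.norm(X[:, None, :] - c[None, :, :], axis=2)
        return d.min(axis=1).sum()

    pbest, pbest_val = pos.copy(), np.array([sse(p) for p in pos])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)        # boundary-restricted update
        vals = np.array([sse(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:
            gbest, gbest_val = pbest[g].copy(), pbest_val[g]
    return gbest, gbest_val

X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])
centroids, err = pso_cluster(X, k=3)
print(err)
```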
|
1402.6430 | Coverage and Rate Analysis for Millimeter Wave Cellular Networks | cs.IT cs.NI math.IT | Millimeter wave (mmWave) holds promise as a carrier frequency for fifth
generation cellular networks. Because mmWave signals are sensitive to blockage,
prior models for cellular networks operated in the ultra high frequency (UHF)
band do not apply to analyze mmWave cellular networks directly. Leveraging
concepts from stochastic geometry, this paper proposes a general framework to
evaluate the coverage and rate performance in mmWave cellular networks. Using a
distance-dependent line-of-sight (LOS) probability function, the locations of
the LOS and non-LOS base stations are modeled as two independent
non-homogeneous Poisson point processes, to which different path loss laws are
applied. Based on the proposed framework, expressions for the
signal-to-noise-and-interference ratio (SINR) and rate coverage probability are
derived. The mmWave coverage and rate performance are examined as a function of
the antenna geometry and base station density. The case of dense networks is
further analyzed by applying a simplified system model, in which the LOS region
of a user is approximated as a fixed LOS ball. The results show that dense
mmWave networks can achieve comparable coverage and much higher data rates than
conventional UHF cellular systems, despite the presence of blockages. The
results suggest that the cell size to achieve the optimal SINR scales with the
average size of the area that is LOS to a user.
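A hedged Monte Carlo sketch of the simplified LOS-ball model described above: base stations form a Poisson point process, stations within a fixed radius of the typical user are treated as LOS and the rest as NLOS, each with its own path-loss exponent. Antenna gains, blockage statistics and every numeric parameter below are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, area_r = 1e-4, 2000.0            # BS density per m^2, simulation radius (m)
R_los, a_los, a_nlos = 200.0, 2.0, 4.0
noise, sinr_th, trials = 1e-10, 1.0, 2000

def coverage_probability():
    covered = 0
    for _ in range(trials):
        n_bs = rng.poisson(lam * np.pi * area_r ** 2)
        r = area_r * np.sqrt(rng.random(n_bs))             # uniform PPP radii in a disc
        alpha = np.where(r <= R_los, a_los, a_nlos)        # LOS ball vs NLOS exponents
        power = rng.exponential(1.0, n_bs) * r ** (-alpha) # Rayleigh fading x path loss
        if n_bs == 0:
            continue
        serving = power.argmax()
        sinr = power[serving] / (power.sum() - power[serving] + noise)
        covered += sinr > sinr_th
    return covered / trials

print(coverage_probability())
```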
|
1402.6441 | Collaborative Wireless Energy and Information Transfer in Interference
Channel | cs.IT math.IT | This paper studies the simultaneous wireless information and power transfer
(SWIPT) in a multiuser wireless system, in which distributed transmitters send
independent messages to their respective receivers, and at the same time
cooperatively transmit wireless power to the receivers via energy beamforming.
Accordingly, from the wireless information transmission (WIT) perspective, the
system of interest can be modeled as the classic interference channel, while it
also can be regarded as a distributed multiple-input multiple-output (MIMO)
system for collaborative wireless energy transmission (WET). To enable both
information decoding (ID) and energy harvesting (EH) in SWIPT, we adopt the
low-complexity time switching operation at each receiver to switch between the
ID and EH modes over scheduled time. Based on this hybrid model, we aim to
characterize the achievable rate-energy (R-E) trade-offs in the multiuser SWIPT
system under various transmitter-side collaboration schemes. Specifically, to
facilitate the collaborative energy beamforming, we propose a new signal
splitting scheme at the transmitters, where each transmit signal is generally
composed of an information signal component and an energy signal component for
WIT and WET, respectively. With this new scheme, first, we study the two-user
SWIPT system and derive the optimal mode switching rule at the receivers and
the corresponding transmit signal optimization to achieve various R-E
trade-offs over the fading channel. We also compare the R-E performance of our
proposed scheme with transmit energy beamforming and signal splitting against
two existing schemes with partial or no cooperation of the transmitters, and
show remarkable gains over these baseline schemes. Finally, the general case of
SWIPT systems with more than two users is studied, for which we propose and
compare two practical transmit collaboration schemes.
|
1402.6485 | Solving MaxSAT and #SAT on structured CNF formulas | cs.DS cs.AI cs.CC | In this paper we propose a structural parameter of CNF formulas and use it to
identify instances of weighted MaxSAT and #SAT that can be solved in polynomial
time. Given a CNF formula we say that a set of clauses is precisely satisfiable
if there is some complete assignment satisfying these clauses only. Let the
ps-value of the formula be the number of precisely satisfiable sets of clauses.
Applying the notion of branch decompositions to CNF formulas and using ps-value
as cut function, we define the ps-width of a formula. For a formula given with
a decomposition of polynomial ps-width we show dynamic programming algorithms
solving weighted MaxSAT and #SAT in polynomial time. Combining with results of
'Belmonte and Vatshelle, Graph classes with structured neighborhoods and
algorithmic applications, Theor. Comput. Sci. 511: 54-65 (2013)' we get
polynomial-time algorithms solving weighted MaxSAT and #SAT for some classes of
structured CNF formulas. For example, we get $O(m^2(m + n)s)$ algorithms for
formulas $F$ of $m$ clauses and $n$ variables and size $s$, if $F$ has a linear
ordering of the variables and clauses such that for any variable $x$ occurring
in clause $C$, if $x$ appears before $C$ then any variable between them also
occurs in $C$, and if $C$ appears before $x$ then $x$ occurs also in any clause
between them. Note that the class of incidence graphs of such formulas does not
have bounded clique-width.
|
1402.6489 | On the influence of topological characteristics on robustness of complex
networks | physics.soc-ph cs.SI nlin.AO | In this paper, we explore the relationship between the topological
characteristics of a complex network and its robustness to sustained targeted
attacks. Using synthesised scale-free, small-world and random networks, we look
at a number of network measures, including assortativity, modularity, average
path length, clustering coefficient, rich club profiles and scale-free exponent
(where applicable) of a network, and how each of these influence the robustness
of a network under targeted attacks. We use an established robustness
coefficient to measure topological robustness, and consider sustained targeted
attacks by order of node degree. With respect to scale-free networks, we show
that assortativity, modularity and average path length have a positive
correlation with network robustness, whereas clustering coefficient has a
negative correlation. We did not find any correlation between scale-free
exponent and robustness, or rich-club profiles and robustness. The robustness
of small-world networks, on the other hand, shows substantial positive
correlations with assortativity, modularity, clustering coefficient and average
path length. In comparison, the robustness of Erdos-Renyi random networks did
not have any significant correlation with any of the network properties
considered. A significant observation is that high clustering decreases
topological robustness in scale-free networks, yet it increases topological
robustness in small-world networks. Our results highlight the importance of
topological characteristics in influencing network robustness, and illustrate
design strategies network designers can use to increase the robustness of
scale-free and small-world networks under sustained targeted attacks.
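For concreteness, here is a small networkx sketch of measuring topological robustness under a sustained degree-based attack. The established robustness coefficient used in the paper may be normalized differently; this is an area-under-the-curve style stand-in under that assumption.

```python
import networkx as nx

def targeted_attack_robustness(G):
    """Repeatedly remove the current highest-degree node and accumulate the
    fraction of the original nodes remaining in the largest connected
    component (a simple area-under-the-curve robustness measure)."""
    H = G.copy()
    n = H.number_of_nodes()
    auc = 0.0
    while H.number_of_nodes() > 1:
        node, _ = max(H.degree, key=lambda kv: kv[1])
        H.remove_node(node)
        auc += max(len(c) for c in nx.connected_components(H)) / n
    return auc / n

# Synthetic scale-free vs small-world networks, as in the abstract's setup.
print(targeted_attack_robustness(nx.barabasi_albert_graph(500, 3)))
print(targeted_attack_robustness(nx.watts_strogatz_graph(500, 6, 0.1)))
```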
|
1402.6500 | Social Bootstrapping: How Pinterest and Last.fm Social Communities
Benefit by Borrowing Links from Facebook | cs.SI cs.CY physics.soc-ph | How does one develop a new online community that is highly engaging to each
user and promotes social interaction? A number of websites offer friend-finding
features that help users bootstrap social networks on the website by copying
links from an established network like Facebook or Twitter. This paper
quantifies the extent to which such social bootstrapping is effective in
enhancing a social experience of the website. First, we develop a stylised
analytical model that suggests that copying tends to produce a giant connected
component (i.e., a connected community) quickly and preserves properties such
as reciprocity and clustering, up to a linear multiplicative factor. Second, we
use data from two websites, Pinterest and Last.fm, to empirically compare the
subgraph of links copied from Facebook to links created natively. We find that
the copied subgraph has a giant component, higher reciprocity and clustering,
and confirm that the copied connections see higher social interactions.
However, the need for copying diminishes as users become more active and
influential. Such users tend to create links natively on the website, to users
who are more similar to them than their Facebook friends. Our findings give new
insights into understanding how bootstrapping from established social networks
can help engage new users by enhancing social interactivity.
|
1402.6508 | Considerations about multistep community detection | cs.SI physics.soc-ph | The problem and implications of community detection in networks have attracted
huge attention, owing to its important applications in both natural and social
sciences. A number of algorithms have been developed to solve this problem,
addressing either speed optimization or the quality of the partitions
calculated. In this paper we propose a multi-step procedure bridging the
fastest, but less accurate algorithms (coarse clustering), with the slowest,
most effective ones (refinement). By adopting heuristic ranking of the nodes,
and classifying a fraction of them as `critical', a refinement step can be
restricted to this subset of the network, thus saving computational time.
Preliminary numerical results are discussed, showing improvement of the final
partition.
|
1402.6515 | Performance Analysis of 2*4 MIMO-MC-CDMA in Rayleigh Fading Channel
Using ZF-Decoder | cs.IT cs.NI math.IT | In this paper we analyze, in MATLAB, the performance of a 2*4 MIMO-MC-CDMA
system, which greatly reduces the BER. We combine MIMO with MC-CDMA to
reduce the bit error rate; MC-CDMA is a multi-user, multiple-access
scheme used to increase the data rate of the system. MC-CDMA converts
a single wideband frequency-selective carrier into multiple parallel narrowband
flat-fading sub-carriers to enhance system performance. The MC-CDMA system is
further improved by combining it with a 2*4 MIMO system that uses a ZF (Zero
Forcing) decoder at the receiver to decrease the BER, while a half-rate
convolutionally encoded Alamouti STBC block code provides transmit diversity
through the multiple transmit antennas. The importance of using MIMO-MC-CDMA
with a convolutional code is, firstly, to reduce the complexity of the system,
secondly, to reduce the BER and, lastly, to increase the gain.
We examine system performance under diverse modulation techniques,
namely 8-PSK, 16-QAM, QPSK, 32-QAM, 8-QAM and 64-QAM, in a Rayleigh fading
channel using MATLAB.
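The zero-forcing step itself can be illustrated in a few lines of Python: the ZF decoder applies the pseudo-inverse of the channel matrix to the received vector. Spreading, convolutional coding and the Alamouti STBC are omitted, so this is only a sketch of how ZF inverts a 2-transmit / 4-receive channel under assumed QPSK signalling and noise settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_rx, snr_db = 2, 4, 10

# Rayleigh-fading 4x2 channel and unit-energy QPSK symbols.
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)
constellation = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
x = rng.choice(constellation, size=n_tx)

noise_std = 10 ** (-snr_db / 20)
y = H @ x + noise_std * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx)) / np.sqrt(2)

x_hat = np.linalg.pinv(H) @ y                         # zero-forcing estimate
x_det = (np.sign(x_hat.real) + 1j * np.sign(x_hat.imag)) / np.sqrt(2)
print(np.allclose(x, x_det))                          # True when no symbol errors occur
```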
|
1402.6516 | Modelling the Lexicon in Unsupervised Part of Speech Induction | cs.CL | Automatically inducing the syntactic part-of-speech categories for words in
text is a fundamental task in Computational Linguistics. While the performance
of unsupervised tagging models has been slowly improving, current
state-of-the-art systems make the obviously incorrect assumption that all
tokens of a given word type must share a single part-of-speech tag. This
one-tag-per-type heuristic counters the tendency of Hidden Markov Model based
taggers to over-generate tags for a given word type. However, it is clearly
incompatible with basic syntactic theory. In this paper we extend a
state-of-the-art Pitman-Yor Hidden Markov Model tagger with an explicit model
of the lexicon. In doing so we are able to incorporate a soft bias towards
inducing few tags per type. We develop a particle filter for drawing samples
from the posterior of our model and present empirical results that show that
our model is competitive with and faster than the state-of-the-art without
making any unrealistic restrictions.
|
1402.6519 | Performance Analysis of Interference-Limited Three-Phase Two-Way
Relaying with Direct Channel | cs.IT math.IT | This paper investigates the performance of interference-limited three-phase
two-way relaying with direct channel between two terminals in Rayleigh fading
channels. The outage probability, sum bit error rate (BER) and ergodic sum rate
are analyzed for a general model that both terminals and relay are corrupted by
co-channel interference. We first derive the closed-form expressions of
cumulative distribution function (CDF) for received
signal-to-interference-plus-noise ratio (SINR) at the terminal. Based on the
results for CDF, the lower bounds, approximate expressions as well as the
asymptotic expressions for outage probability and sum BER are derived in
closed-form with different computational complexities and accuracies. The
approximate expression for ergodic sum rate is also presented. With the
theoretical results, we consider the optimal power allocation at the relay and
optimal relay location problems, which aim to minimize the outage and sum BER
performance of the protocol. It is shown that joint optimization of power
and relay location provides the best performance. Simulation results are
presented to study the effect of system parameters and to verify the theoretical
analysis. The results show that the three-phase TWR protocol can outperform the
two-phase TWR protocol in ergodic sum rate when the interference power at the
relay is much larger than that at the terminals. This is in sharp contrast with
the conclusion in interference free scenario. Moreover, we show that an
estimation error on the interference channel will not affect the system
performance significantly, while a very small estimation error on the desired
channels can degrade the performance considerably.
|
1402.6552 | Renewable Energy Prediction using Weather Forecasts for Optimal
Scheduling in HPC Systems | cs.LG | The objective of the GreenPAD project is to use green energy (wind, solar and
biomass) for powering data-centers that are used to run HPC jobs. As a part of
this it is important to predict the Renewable (Wind) energy for efficient
scheduling (executing jobs that require higher energy when there is more green
energy available and vice-versa). For predicting the wind energy we first
analyze the historical data to find a statistical model that gives relation
between wind energy and weather attributes. Then we use this model based on the
weather forecast data to predict the green energy availability in the future.
Using the green energy prediction obtained from the statistical model we are
able to precompute job schedules for maximizing the green energy utilization in
the future. We propose a model which uses live weather data in addition to
machine learning techniques (which can predict future deviations in weather
conditions based on current deviations from the forecast) to make on-the-fly
changes to the precomputed schedule (based on green energy prediction).
For this we first analyze the data using histograms and simple statistical
tools such as correlation. In addition we build (correlation) regression model
for finding the relation between wind energy availability and weather
attributes (temperature, cloud cover, air pressure, wind speed / direction,
precipitation and sunshine). We also analyze different algorithms and machine
learning techniques for optimizing the job schedules for maximizing the green
energy utilization.
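A minimal sketch of the regression step: fit wind-energy output against weather attributes, then apply the fitted model to forecast values to predict future green-energy availability. The synthetic data, attribute choices and the toy cubic power curve below are illustrative assumptions, not the project's actual data or model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
wind_speed = rng.uniform(0, 25, n)            # m/s
pressure = rng.normal(1013, 8, n)             # hPa
temperature = rng.normal(12, 6, n)            # deg C
X = np.column_stack([wind_speed, pressure, temperature])
energy = 0.5e-2 * wind_speed ** 3 + rng.normal(0, 2, n)   # toy cubic power curve

model = LinearRegression().fit(X, energy)      # historical-data fit
forecast = np.array([[14.0, 1010.0, 9.0]])     # tomorrow's forecast attributes
print(model.predict(forecast))                 # predicted green energy available
```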
|
1402.6555 | The effect of interdependence on the percolation of interdependent
networks | physics.soc-ph cs.SI | Two stochastic models are proposed to generate a system composed of two
interdependent scale-free (SF) or Erd\H{o}s-R\'{e}nyi (ER) networks where
interdependent nodes are connected with exponential or power-law relation, as
well as different dependence strength, respectively. Each subnetwork grows
through the addition of new nodes with constant accelerating random attachment
in the first model but with preferential attachment in the second model. Two
subnetworks interact with multi-support and unidirectional dependence links. The
effect of dependence relations and strength between subnetworks are analyzed in
the percolation behavior of fully interdependent networks against random
failure, both theoretically and numerically, and as a result, for both
relations: interdependent SF networks show a second-order percolation phase
transition and increased dependence strength decreases the robustness of the
system, whereas, interdependent ER networks show the opposite results. In
addition, power-law relation between networks yields greater robustness than
exponential one at given dependence strength.
|
1402.6556 | Evolutionary solving of the debts' clearing problem | cs.NE cs.AI | The debts' clearing problem is about clearing all the debts in a group of n
entities (persons, companies etc.) using a minimal number of money transaction
operations. The problem is known to be NP-hard in the strong sense. As for many
intractable problems, techniques from the field of artificial intelligence are
useful in finding solutions close to optimum for large inputs. An evolutionary
algorithm for solving the debts' clearing problem is proposed.
|
1402.6560 | Even more generic solution construction in Valuation-Based Systems | cs.AI | Valuation algebras abstract a large number of formalisms for automated
reasoning and enable the definition of generic inference procedures. Many of
these formalisms provide some notions of solutions. Typical examples are
satisfying assignments in constraint systems, models in logics or solutions to
linear equation systems.
Recently, formal requirements for the presence of solutions and a generic
algorithm for solution construction based on the results of a previously
executed inference scheme have been proposed in the literature. Unfortunately,
the formalization of Pouly and Kohlas relies on a theorem for which we provide
a counter example. In spite of that, the mainline of the theory described is
correct, although some of the necessary conditions to apply some of the
algorithms have to be revised. To fix the theory, we generalize some of their
definitions and provide correct sufficient conditions for the algorithms. As a
result, we get a more general and corrected version of the already existing
theory.
|
1402.6573 | A comparative analysis of the statistical properties of large mobile
phone calling networks | cs.SI physics.soc-ph | Mobile phone calling is one of the most widely used communication methods in
modern society. The records of calls among mobile phone users provide us a
valuable proxy for the understanding of human communication patterns embedded
in social networks. Mobile phone users call each other forming a directed
calling network. If only reciprocal calls are considered, we obtain an
undirected mutual calling network. The preferential communication behavior
between two connected users can be statistically tested and it results in two
Bonferroni networks with statistically validated edges. We perform a
comparative analysis of the statistical properties of these four networks,
which are constructed from the calling records of more than nine million
individuals in Shanghai over a period of 110 days. We find that these networks
share many common structural properties and also exhibit idiosyncratic features
when compared with previously studied large mobile calling networks. The
empirical findings provide us an intriguing picture of a representative large
social network that might shed new light on the modelling of large social
networks.
|
1402.6633 | An Optimal Transmission Strategy for Kalman Filtering over Packet
Dropping Links with Imperfect Acknowledgements | math.OC cs.IT math.IT | This paper presents a novel design methodology for optimal transmission
policies at a smart sensor to remotely estimate the state of a stable linear
stochastic dynamical system. The sensor makes measurements of the process and
forms estimates of the state using a local Kalman filter. The sensor transmits
quantized information over a packet dropping link to the remote receiver. The
receiver sends packet receipt acknowledgments back to the sensor via an
erroneous feedback communication channel which is itself packet dropping. The
key novelty of this formulation is that the smart sensor decides, at each
discrete time instant, whether to transmit a quantized version of either its
local state estimate or its local innovation. The objective is to design
optimal transmission policies in order to minimize a long term average cost
function as a convex combination of the receiver's expected estimation error
covariance and the energy needed to transmit the packets. The optimal
transmission policy is obtained by the use of dynamic programming techniques.
Using the concept of submodularity, the optimality of a threshold policy in the
case of scalar systems with perfect packet receipt acknowledgments is proved.
Suboptimal solutions and their structural results are also discussed. Numerical
results are presented illustrating the performance of the optimal and
suboptimal transmission policies.
|
1402.6636 | Analysis of Multibeam SONAR Data using Dissimilarity Representations | cs.CE stat.ML | This paper considers the problem of low-dimensional visualisation of very
high dimensional information sources for the purpose of situation awareness in
the maritime environment. In response to the requirement for human decision
support aids to reduce information overload (and specifically, data amenable to
inter-point relative similarity measures) appropriate to the below-water
maritime domain, we are investigating a preliminary prototype topographic
visualisation model. The focus of the current paper is on the mathematical
problem of exploiting a relative dissimilarity representation of signals in a
visual informatics mapping model, driven by real-world sonar systems. An
independent source model is used to analyse the sonar beams from which a simple
probabilistic input model to represent uncertainty is mapped to a latent
visualisation space where data uncertainty can be accommodated. Both
Euclidean and non-Euclidean measures are used, and the motivation for the future
use of non-Euclidean measures is given. Concepts are illustrated using a simulated
64 beam weak SNR dataset with realistic sonar targets.
|
1402.6650 | A Novel Method for the Recognition of Isolated Handwritten Arabic
Characters | cs.CV | There are many difficulties facing a handwritten Arabic recognition system
such as unlimited variation in human handwriting, similarities of distinct
character shapes, interconnections of neighbouring characters and their
position in the word. The typical Optical Character Recognition (OCR) systems
are based mainly on three stages, preprocessing, features extraction and
recognition. This paper proposes new methods for handwritten Arabic character
recognition based on novel preprocessing operations, including
different kinds of noise removal, and on different kinds of features, such as
structural, statistical and morphological features extracted from the main body of
the character and also from the secondary components. An evaluation of the accuracy
of the selected features is made. The system was trained and tested with a
back-propagation neural network on the CENPRMI dataset. The proposed algorithm
obtained promising results, as it is able to recognize 88% of our test set
accurately. Compared with other related works, we find that our result is
the highest among published works.
|
1402.6663 | Enaction-Based Artificial Intelligence: Toward Coevolution with Humans
in the Loop | cs.AI nlin.AO | This article deals with the links between the enaction paradigm and
artificial intelligence. Enaction is considered a metaphor for artificial
intelligence, as a number of the notions which it deals with are deemed
incompatible with the phenomenal field of the virtual. After explaining this
stance, we shall review previous works regarding this issue in terms of
artifical life and robotics. We shall focus on the lack of recognition of
co-evolution at the heart of these approaches. We propose to explicitly
integrate the evolution of the environment into our approach in order to refine
the ontogenesis of the artificial system, and to compare it with the enaction
paradigm. The growing complexity of the ontogenetic mechanisms to be activated
can therefore be compensated by an interactive guidance system emanating from
the environment. This proposition does not however resolve that of the
relevance of the meaning created by the machine (sense-making). Such
reflections lead us to integrate human interaction into this environment in
order to construct relevant meaning in terms of participative artificial
intelligence. This raises a number of questions with regards to setting up an
enactive interaction. The article concludes by exploring a number of issues,
thereby enabling us to associate current approaches with the principles of
morphogenesis, guidance, the phenomenology of interactions and the use of
minimal enactive interfaces in setting up experiments which will deal with the
problem of artificial intelligence in a variety of enaction-based ways.
|
1402.6690 | Why Are You More Engaged? Predicting Social Engagement from Word Use | cs.SI cs.CL cs.CY | We present a study to analyze how word use can predict social engagement
behaviors such as replies and retweets in Twitter. We compute psycholinguistic
category scores from word usage, and investigate how people with different
scores exhibited different reply and retweet behaviors on Twitter. We also
found psycholinguistic categories that show significant correlations with such
social engagement behaviors. In addition, we have built predictive models of
replies and retweets from such psycholinguistic category based features. Our
experiments using a real world dataset collected from Twitter validates that
such predictions can be done with reasonable accuracy.
|
1402.6693 | Optimal Energy Allocation for Kalman Filtering over Packet Dropping
Links with Imperfect Acknowledgments and Energy Harvesting Constraints | math.OC cs.IT math.IT | This paper presents a design methodology for optimal transmission energy
allocation at a sensor equipped with energy harvesting technology for remote
state estimation of linear stochastic dynamical systems. In this framework, the
sensor measurements, which are noisy versions of the system states, are sent to
the receiver over a packet dropping communication channel. The packet dropout
probabilities of the channel depend on both the sensor's transmission energies
and time varying wireless fading channel gains. The sensor has access to an
energy harvesting source which is an everlasting but unreliable energy source
compared to conventional batteries with fixed energy storage. The receiver
performs optimal state estimation with random packet dropouts to minimize the
estimation error covariances based on received measurements. The receiver also
sends packet receipt acknowledgments to the sensor via an erroneous feedback
communication channel which is itself packet dropping.
The objective is to design optimal transmission energy allocation at the
energy harvesting sensor to minimize either a finite-time horizon sum or a long
term average (infinite-time horizon) of the trace of the expected estimation
error covariance of the receiver's Kalman filter. These problems are formulated
as Markov decision processes with imperfect state information. The optimal
transmission energy allocation policies are obtained by the use of dynamic
programming techniques. Using the concept of submodularity, the structure of
the optimal transmission energy policies is studied. Suboptimal solutions,
which are far less computationally intensive than the optimal ones, are also
discussed. Numerical simulation results are presented illustrating the
performance of the energy allocation algorithms.
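A minimal sketch of the estimation loop this setup optimizes over (not the paper's MDP policy): a scalar Kalman filter whose measurement update runs only when a packet arrives, with an assumed arrival probability that grows with transmission energy and channel gain; all model parameters are placeholders.

```python
# Sketch only: Kalman filtering over a packet-dropping link.
import numpy as np

a, c, q, r = 0.95, 1.0, 0.1, 0.2   # placeholder system/measurement model
x, xhat, p = 0.0, 0.0, 1.0
rng = np.random.default_rng(2)

def arrival_prob(energy, gain):
    # assumed monotone relation: more energy / better channel -> fewer drops
    return 1.0 - np.exp(-energy * gain)

for k in range(50):
    x = a * x + rng.normal(scale=np.sqrt(q))        # true state
    y = c * x + rng.normal(scale=np.sqrt(r))        # sensor measurement
    gain = rng.exponential(1.0)                     # fading channel gain
    energy = 1.0                                    # fixed energy (policy placeholder)
    received = rng.random() < arrival_prob(energy, gain)

    xhat, p = a * xhat, a * a * p + q               # time update
    if received:                                    # measurement update only on arrival
        kgain = p * c / (c * c * p + r)
        xhat, p = xhat + kgain * (y - c * xhat), (1 - kgain * c) * p

print("final error covariance:", p)
```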
|
1402.6742 | CRISTAL-ISE : Provenance Applied in Industry | cs.DB cs.SE | This paper presents the CRISTAL-iSE project as a framework for the management
of provenance information in industry. The project itself is a research
collaboration between academia and industry. A key factor in the project is the
use of a system known as CRISTAL which is a mature system based on proven
description driven principles. A crucial element in the description driven
approach is that the fact that objects (Items) are described at runtime
enabling managed systems to be both dynamic and flexible. Another factor is the
notion that all Items in CRISTAL are stored and versioned, therefore enabling a
provenance collection system. In this paper a concrete application, called
Agilium, is briefly described, and a future application, CIMAG-RA, is presented
which will harness the power of both CRISTAL and Agilium.
|
1402.6757 | Concise Probability Distributions of Eigenvalues of Real-Valued Wishart
Matrices | cs.IT math.IT | In this paper, we consider the problem of deriving new eigenvalue
distributions of real-valued Wishart matrices, which arise in many scientific
and engineering applications. The distributions are derived using tools from
the theory of skew-symmetric matrices. In particular, we express the multiple
integrals of a determinant, which arise while finding the eigenvalue
distributions, in terms of the Pfaffian of a skew-symmetric matrix. Pfaffians,
being the square roots of determinants of skew-symmetric matrices, are easier
to compute than the conventional expressions that involve zonal polynomials or
beta integrals. We show that the plots of the derived distributions coincide
exactly with the numerically simulated plots.
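For reference, the basic identity that makes Pfaffians cheaper to handle than the corresponding determinantal expressions (a general fact about skew-symmetric matrices, not a result specific to this paper):

```latex
% Pfaffian-determinant relation for a real skew-symmetric matrix A of even
% order 2n, together with the simplest (2x2) case:
\[
  \operatorname{Pf}(A)^{2} = \det(A),
  \qquad
  \operatorname{Pf}\!\begin{pmatrix} 0 & a \\ -a & 0 \end{pmatrix} = a .
\]
```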
|