| id | title | categories | abstract |
|---|---|---|---|
1404.0298 | A Kernel-Based Nonparametric Test for Anomaly Detection over Line
Networks | cs.IT math.IT stat.ML | The nonparametric problem of detecting the existence of an anomalous interval
over a one-dimensional line network is studied. Nodes corresponding to an
anomalous interval (if one exists) receive samples generated by a distribution q,
which is different from the distribution p that generates samples for the other
nodes. If no anomalous interval exists, then all nodes receive samples
generated by p. It is assumed that the distributions p and q are arbitrary and
unknown. To detect whether an anomalous interval exists, a test is
built on mean embeddings of distributions into a reproducing kernel
Hilbert space (RKHS) and the maximum mean discrepancy (MMD) metric. It is
shown that as the network size n goes to infinity, if the minimum length of
candidate anomalous intervals is larger than a threshold which has the order
O(log n), the proposed test is asymptotically successful, i.e., the probability
of detection error approaches zero asymptotically. An efficient algorithm that
performs the test with substantially reduced computational complexity is
proposed, and is shown to be asymptotically successful when the condition on the
minimum length of candidate anomalous intervals is satisfied. Numerical results
are provided, which are consistent with the theoretical results.
|
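The kernel two-sample statistic underlying the test above can be estimated directly from samples. A minimal sketch of the biased MMD estimate (not the paper's interval-scanning test, just the statistic it builds on; the Gaussian kernel and bandwidth are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Gaussian (RBF) kernel matrix between 1-D sample sets a and b.
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased estimate of squared maximum mean discrepancy (MMD)
    # between the empirical distributions of x and y.
    kxx = rbf_kernel(x, x, sigma).mean()
    kyy = rbf_kernel(y, y, sigma).mean()
    kxy = rbf_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
same = mmd2_biased(rng.normal(0, 1, 200), rng.normal(0, 1, 200))
diff = mmd2_biased(rng.normal(0, 1, 200), rng.normal(3, 1, 200))
```

With matched distributions the statistic stays near zero; when q shifts away from p it grows, which is the gap a detection threshold exploits.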
1404.0300 | Followers Are Not Enough: A Question-Oriented Approach to Community
Detection in Online Social Networks | cs.SI physics.soc-ph | Community detection in online social networks is typically based on the
analysis of the explicit connections between users, such as "friends" on
Facebook and "followers" on Twitter. But online users often have hundreds or
even thousands of such connections, and many of these connections do not
correspond to real friendships or more generally to accounts that users
interact with. We claim that community detection in online social networks
should be question-oriented and rely on additional information beyond the
simple structure of the network. The concept of 'community' is very general,
and different questions such as "whom do we interact with?" and "with whom do
we share similar interests?" can lead to the discovery of different social
groups. In this paper we focus on three types of communities beyond structural
communities: activity-based, topic-based, and interaction-based. We analyze a
Twitter dataset using three different weightings of the structural network
meant to highlight these three community types, and then infer the communities
associated with these weightings. We show that the communities obtained in the
three weighted cases are highly different from each other, and from the
communities obtained by considering only the unweighted structural network. Our
results confirm that asking a precise question is an unavoidable first step in
community detection in online social networks, and that different questions can
lead to different insights about the network under study.
|
1404.0320 | A Note on Randomized Element-wise Matrix Sparsification | cs.IT math.IT | Given a matrix A \in R^{m x n}, we present a randomized algorithm that
sparsifies A by retaining some of its elements, sampled according to a
distribution that depends on both the square and the absolute value of the
entries. We combine the ideas of [4, 1] and provide an elementary proof of the
approximation accuracy of our algorithm following [4] without the truncation
step.
|
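A rough sketch of element-wise sampling with rescaling, so that the sparse matrix is an unbiased estimate of A. The exact distribution from the note is not reproduced here; the mixing weight `alpha` is an illustrative assumption:

```python
import numpy as np

def sparsify(A, s, alpha=0.5, seed=None):
    # Keep s sampled entries of A (with replacement), rescaled so the
    # sparse result is an unbiased estimate of A.  The sampling
    # probabilities mix squared and absolute entry values; `alpha` is
    # an illustrative choice, not the distribution from the note.
    rng = np.random.default_rng(seed)
    sq, ab = A ** 2, np.abs(A)
    p = (alpha * sq / sq.sum() + (1.0 - alpha) * ab / ab.sum()).ravel()
    idx = rng.choice(p.size, size=s, p=p)
    S = np.zeros_like(A)
    np.add.at(S.ravel(), idx, A.ravel()[idx] / (s * p[idx]))
    return S

A = np.random.default_rng(0).normal(size=(5, 5))
S = sparsify(A, s=200_000, seed=1)   # large s: S concentrates around A
```

Each draw contributes A_ij / (s p_ij) with probability p_ij, so the expectation of S is exactly A; larger entries are kept with higher probability, which controls the variance.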
1404.0333 | Cross-checking different sources of mobility information | physics.soc-ph cs.CY cs.SI | The pervasive use of new mobile devices has allowed a better characterization
in space and time of human concentrations and mobility in general. Besides its
theoretical interest, describing mobility is of great importance for a number
of practical applications ranging from the forecast of disease spreading to the
design of new spaces in urban environments. While classical data sources, such
as surveys or censuses, have a limited level of geographical resolution (e.g.,
districts, municipalities, counties are typically used) or are restricted to
generic workdays or weekends, the data coming from mobile devices can be
precisely located both in time and space. Most previous works have used a
single data source to study human mobility patterns. Here we perform instead a
cross-check analysis by comparing results obtained with data collected from
three different sources: Twitter, census and cell phones. The analysis is
focused on the urban areas of Barcelona and Madrid, for which data of the three
types is available. We assess the correlation between the datasets on different
aspects: the spatial distribution of people concentration, the temporal
evolution of people density and the mobility patterns of individuals. Our
results show that the three data sources provide comparable information.
Even though the representativeness of Twitter geolocated data is lower than
that of mobile phone and census data, the correlations between the population
density profiles and mobility patterns detected by the three datasets are close
to one in a grid with cells of 2x2 and 1x1 square kilometers. This level of
correlation supports the feasibility of interchanging the three data sources at
the spatio-temporal scales considered.
|
1404.0334 | Active Deformable Part Models | cs.CV cs.LG | This paper presents an active approach for part-based object detection, which
optimizes the order of part filter evaluations and the time at which to stop
and make a prediction. Statistics describing the part responses are learned
from training data and are used to formalize the part scheduling problem as an
offline optimization. Dynamic programming is applied to obtain a policy, which
balances the number of part evaluations with the classification accuracy.
During inference, the policy is used as a look-up table to choose the part
order and the stopping time based on the observed filter responses. The method
is faster than cascade detection with deformable part models (which does not
optimize the part order) with negligible loss in accuracy when evaluated on the
PASCAL VOC 2007 and 2010 datasets.
|
1404.0336 | A Continuous Max-Flow Approach to General Hierarchical Multi-Labeling
Problems | cs.CV | Multi-region segmentation algorithms often have the onus of incorporating
complex anatomical knowledge representing spatial or geometric relationships
between objects, and general-purpose methods of addressing this knowledge in an
optimization-based manner have thus been lacking. This paper presents
Generalized Hierarchical Max-Flow (GHMF) segmentation, which captures simple
anatomical part-whole relationships in the form of an unconstrained hierarchy.
Regularization can then be applied to both parts and wholes independently,
allowing for spatial grouping and clustering of labels in a globally optimal
convex optimization framework. For the purposes of ready integration into a
variety of segmentation tasks, the hierarchies can be specified at run-time,
allowing the segmentation problem to be readily posed and alternatives
explored without undue programming effort or recompilation.
|
1404.0338 | Multi-Robot Control Using Time-Varying Density Functions | math.OC cs.RO | This paper presents an approach to externally influencing a team of robots by
means of time-varying density functions. These density functions represent
rough references for where the robots should be located. To this end, a
continuous-time algorithm is proposed that moves the robots so as to provide
optimal coverage given the density functions as they evolve over time. The
developed algorithm represents an extension to previous coverage algorithms in
that time-varying densities are explicitly taken into account in a provable
manner. A distributed approximation to this algorithm is moreover proposed
whereby the robots only need to access information from adjacent robots.
Simulations and robotic experiments show that the proposed algorithms do indeed
exhibit the desired behaviors in practice as well as in theory.
|
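For the static-density special case, the classical coverage iteration that the abstract extends can be sketched as a Lloyd-style update (a 1-D discretized toy, not the paper's time-varying continuous-time law; the grid, density, and initial positions are illustrative choices):

```python
import numpy as np

def lloyd_step(positions, pts, phi):
    # One Lloyd iteration on a discretized 1-D domain: assign each grid
    # point to its nearest robot, then move each robot to the
    # density-weighted centroid of its region.
    owner = np.argmin(np.abs(pts[None, :] - positions[:, None]), axis=0)
    new = positions.copy()
    for i in range(len(positions)):
        mask = owner == i
        if phi[mask].sum() > 0.0:
            new[i] = np.average(pts[mask], weights=phi[mask])
    return new

pts = np.linspace(0.0, 1.0, 501)            # discretized domain
phi = np.exp(-((pts - 0.8) ** 2) / 0.01)    # density peaked at x = 0.8
robots = np.array([0.1, 0.2, 0.3])          # initial robot positions
for _ in range(50):
    robots = lloyd_step(robots, pts, phi)
# robots migrate toward the density peak near x = 0.8
```

Iterating this update drives the robots toward a configuration that covers the high-density region; the paper's contribution is handling phi that changes over time, with provable guarantees and a distributed approximation.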
1404.0346 | Scaling laws for molecular communication | cs.IT math-ph math.IT math.MP q-bio.QM | In this paper, we investigate information-theoretic scaling laws, independent
of communication strategies, for point-to-point molecular communication,
in which information-encoded molecules are exchanged between nanomachines.
Since the Shannon capacity for this is still an open problem, we first derive
an asymptotic order in a single coordinate, i.e., i) scaling time with constant
number of molecules $m$ and ii) scaling molecules with constant time $t$. For a
single coordinate case, we show that the asymptotic scaling is logarithmic in
either coordinate, i.e., $\Theta(\log t)$ and $\Theta(\log m)$, respectively.
We also study asymptotic behavior of scaling in both time and molecules and
show that, if molecules and time are proportional to each other, then the
asymptotic scaling is linear, i.e., $\Theta(t)=\Theta(m)$.
|
1404.0354 | Performance of the Generalized Quantize-and-Forward Scheme over the
Multiple-Access Relay Channel | cs.IT math.IT | This work focuses on the half-duplex (HD) relaying based on the generalized
quantize-and-forward (GQF) scheme in the slow fading Multiple Access Relay
Channel (MARC) where the relay has no channel state information (CSI) of the
relay-to-destination link. The relay listens to the channel in the first slot of
the transmission block and cooperatively transmits to the destination in the
second slot. In order to investigate the performance of the GQF, the following
steps have been taken: 1) The GQF scheme is applied to establish the achievable
rate regions of the discrete memoryless half-duplex MARC and the corresponding
additive white Gaussian noise channel. The scheme is developed as a
generalization of the Quantize-and-Forward (QF) scheme, using a single-block,
two-slot coding structure. 2) As the general performance measure of the slow
fading channel, the common outage probability and the expected sum rate (total
throughput) of the GQF scheme have been characterized. The numerical examples
show that when the relay has no access to the CSI of the relay-destination
link, the GQF scheme outperforms other relaying schemes, e.g., classic
compress-and-forward (CF), decode-and-forward (DF) and amplify-and-forward
(AF). 3) For a MAC channel with heterogeneous user channels and
quality-of-service (QoS) requirements, individual outage probability and total
throughput of the GQF scheme are also obtained and shown to outperform the
classic CF scheme.
|
1404.0367 | Facilitators on networks reveal the optimal interplay between
information exchange and reciprocity | physics.soc-ph cs.SI q-bio.PE | Reciprocity is firmly established as an important mechanism that promotes
cooperation. An efficient information exchange is likewise important,
especially on structured populations, where interactions between players are
limited. Motivated by these two facts, we explore the role of facilitators in
social dilemmas on networks. Facilitators are here mirrors to their neighbors
-- they cooperate with cooperators and defect with defectors -- but they do not
participate in the exchange of strategies. As such, in addition to introducing
direct reciprocity, they also obstruct information exchange. In well-mixed
populations, facilitators favor the replacement and invasion of defection by
cooperation as long as their number exceeds a critical value. In structured
populations, on the other hand, there exists a delicate balance between the
benefits of reciprocity and the deterioration of information exchange.
Extensive Monte Carlo simulations of social dilemmas on various interaction
networks reveal that there exists an optimal interplay between reciprocity and
information exchange, which sets in only when a small number of facilitators
occupies the main hubs of the scale-free network. The drawbacks of missing
cooperative hubs are more than compensated by reciprocity and, at the same
time, the compromised information exchange is routed via the auxiliary hubs
with only marginal losses in effectiveness. These results indicate that it is not
always optimal for the main hubs to become "leaders of the masses", but
rather to exploit their highly connected state to promote tit-for-tat-like
behavior.
|
1404.0400 | A Deep Representation for Invariance And Music Classification | cs.SD cs.LG stat.ML | Representations in the auditory cortex might be based on mechanisms similar
to those of the visual ventral stream: modules for building invariance to
transformations and multiple layers for compositionality and selectivity. In
this paper we propose the use of such computational modules for extracting
invariant and discriminative audio representations. Building on a theory of
invariance in hierarchical architectures, we propose a novel, mid-level
representation for acoustical signals, using the empirical distributions of
projections on a set of templates and their transformations. Under the
assumption that, by construction, this dictionary of templates is composed from
similar classes, and samples the orbit of variance-inducing signal
transformations (such as shift and scale), the resulting signature is
theoretically guaranteed to be unique, invariant to transformations and stable
to deformations. Modules of projection and pooling can then constitute layers
of deep networks, for learning composite representations. We present the main
theoretical and computational aspects of a framework for unsupervised learning
of invariant audio representations, empirically evaluated on music genre
classification.
|
1404.0404 | EEG Spatial Decoding and Classification with Logit Shrinkage Regularized
Directed Information Assessment (L-SODA) | cs.IT math.IT | There is an increasing interest in studying the neural interaction mechanisms
behind patterns of cognitive brain activity. This paper proposes a new approach
to infer such interaction mechanisms from electroencephalographic (EEG) data
using a new estimator of directed information (DI) called logit shrinkage
optimized directed information assessment (L-SODA). Unlike previous directed
information measures applied to neural decoding, L-SODA uses shrinkage
regularization on multinomial logistic regression to deal with the high
dimensionality of multi-channel EEG signals and the small sizes of many
real-world datasets. It is designed to make few a priori assumptions and can
handle both non-linear and non-Gaussian flows among electrodes. Our L-SODA
estimator of the DI is accompanied by robust statistical confidence intervals
on the true DI that make it especially suitable for hypothesis testing on the
information flow patterns. We evaluate our work in the context of two different
problems where interaction localization is used to determine highly interactive
areas for EEG signals spatially and temporally. First, by mapping the areas
with high DI onto Brodmann areas, we find that these areas are associated with
motor-related functions. We demonstrate that L-SODA
provides better accuracy for neural decoding of EEG signals as compared to
several state-of-the-art approaches on the Brain Computer Interface (BCI) EEG
motor activity dataset. Second, the proposed L-SODA estimator is evaluated on
the CHB-MIT Scalp EEG database. We demonstrate that compared to the
state-of-the-art approaches, the proposed method provides better performance in
detecting the epileptic seizure.
|
1404.0408 | Optimal Multiuser Transmit Beamforming: A Difficult Problem with a
Simple Solution Structure | cs.IT math.IT | Transmit beamforming is a versatile technique for signal transmission from an
array of $N$ antennas to one or multiple users [1]. In wireless communications,
the goal is to increase the signal power at the intended user and reduce
interference to non-intended users. A high signal power is achieved by
transmitting the same data signal from all antennas, but with different
amplitudes and phases, such that the signal components add coherently at the
user. Low interference is accomplished by making the signal components add
destructively at non-intended users. This corresponds mathematically to
designing beamforming vectors (that describe the amplitudes and phases) to have
large inner products with the vectors describing the intended channels and
small inner products with non-intended user channels.
While it is fairly easy to design a beamforming vector that maximizes the
signal power at the intended user, it is difficult to strike a perfect balance
between maximizing the signal power and minimizing the interference leakage. In
fact, the optimization of multiuser transmit beamforming is generally a
nondeterministic polynomial-time (NP) hard problem [2]. Nevertheless, this
lecture shows that the optimal transmit beamforming has a simple structure with
very intuitive properties and interpretations. This structure provides a
theoretical foundation for practical low-complexity beamforming schemes.
(See this lecture note for the complete abstract/introduction)
|
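The two extremes of the balance described above are easy to state concretely: maximum ratio transmission (MRT) maximizes signal power at the intended user, while zero-forcing (ZF) nulls the interference leakage. A toy sketch with real-valued channels (the lecture's optimal structure lies between these two; the dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4
h1, h2 = rng.normal(size=(2, N))     # channels of intended user 1 and user 2

# Maximum ratio transmission (MRT): align with h1 to maximize signal power.
w_mrt = h1 / np.linalg.norm(h1)

# Zero-forcing (ZF): project h1 away from h2 to null interference at user 2.
proj = h2 * (h1 @ h2) / (h2 @ h2)
w_zf = (h1 - proj) / np.linalg.norm(h1 - proj)

sig_mrt, leak_mrt = (h1 @ w_mrt) ** 2, (h2 @ w_mrt) ** 2
sig_zf, leak_zf = (h1 @ w_zf) ** 2, (h2 @ w_zf) ** 2
```

MRT achieves the largest possible inner product with the intended channel but leaks interference (`leak_mrt`), while ZF drives the leakage to zero at the cost of a smaller signal inner product, which is exactly the trade-off the optimal beamformer balances.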
1404.0414 | Proceedings 2nd International Workshop on Strategic Reasoning | cs.GT cs.LO cs.MA | This volume contains the proceedings of the 2nd International Workshop on
Strategic Reasoning 2014 (SR 2014), held in Grenoble (France), April 5-6, 2014.
The SR workshop aims to bring together researchers, possibly with different
backgrounds, working on various aspects of strategic reasoning in computer
science, both from a theoretical and a practical point of view. This year SR
has hosted four invited talks by Thomas A. Henzinger, Wiebe van der Hoek,
Alessio R. Lomuscio, and Wolfgang Thomas. Moreover, the workshop has hosted 14
contributed talks, all selected from the full contributions submitted, each
rigorously evaluated by four reviewers according to its quality and
relevance.
|
1404.0424 | Conjugate Gradient-based Soft-Output Detection and Precoding in Massive
MIMO Systems | cs.IT math.IT | Massive multiple-input multiple-output (MIMO) promises improved spectral
efficiency, coverage, and range, compared to conventional (small-scale) MIMO
wireless systems. Unfortunately, these benefits come at the cost of
significantly increased computational complexity, especially for systems with
realistic antenna configurations. To reduce the complexity of data detection
(in the uplink) and precoding (in the downlink) in massive MIMO systems, we
propose to use conjugate gradient (CG) methods. While precoding using CG is
rather straightforward, soft-output minimum mean-square error (MMSE) detection
requires the computation of the post-equalization
signal-to-interference-and-noise-ratio (SINR). To enable CG for soft-output
detection, we propose a novel way of computing the SINR directly within the CG
algorithm at low complexity. We investigate the performance/complexity
trade-offs associated with CG-based soft-output detection and precoding, and we
compare it to exact and approximate methods. Our results reveal that the
proposed method outperforms existing algorithms for massive MIMO systems with
realistic antenna configurations.
|
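The core numerical step is solving the regularized MMSE system with conjugate gradients instead of an explicit matrix inverse. A minimal real-valued sketch of that solve (the paper's novelty, computing the SINR inside the CG loop, is not reproduced; the dimensions and seed below are illustrative):

```python
import numpy as np

def cg_solve(A, b, iters=20):
    # Plain conjugate gradient for a symmetric positive-definite A:
    # returns an approximate solution of A x = b.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-20:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(2)
B, U = 32, 8                                  # BS antennas, users
H = rng.normal(size=(B, U)) / np.sqrt(B)      # uplink channel matrix
sigma2 = 0.1
x_true = rng.choice([-1.0, 1.0], size=U)      # BPSK symbols
y = H @ x_true + np.sqrt(sigma2) * rng.normal(size=B)

A = H.T @ H + sigma2 * np.eye(U)              # regularized Gram matrix
x_hat = cg_solve(A, H.T @ y)                  # MMSE equalization via CG
```

Each CG iteration costs only matrix-vector products, which is why the approach scales to the large Gram matrices of massive MIMO; for a U-dimensional system, CG converges in at most U iterations in exact arithmetic.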
1404.0425 | Partition Information and its Transmission over Boolean Multi-Access
Channels | cs.IT math.IT | In this paper, we propose a novel partition reservation system to study the
partition information and its transmission over a noise-free Boolean
multi-access channel. The objective of transmission is not message restoration,
but to partition active users into distinct groups so that they can,
subsequently, transmit their messages without collision. We first calculate (by
mutual information) the amount of information needed for the partitioning
without channel effects, and then propose two different coding schemes to
obtain achievable transmission rates over the channel. The first one is the
brute force method, where the codebook design is based on centralized source
coding; the second method uses random coding where the codebook is generated
randomly and optimal Bayesian decoding is employed to reconstruct the
partition. Both methods shed light on the internal structure of the partition
problem. A novel hypergraph formulation is proposed for the random coding
scheme, which intuitively describes the information in terms of a strong
coloring of a hypergraph induced by a sequence of channel operations and
interactions between active users. An extended Fibonacci structure is found for
a simple, but non-trivial, case with two active users. A comparison between
these methods and group testing is conducted to demonstrate the uniqueness of
our problem.
|
1404.0427 | Learning Two-input Linear and Nonlinear Analog Functions with a Simple
Chemical System | q-bio.MN cs.LG | Current biochemical information processing systems behave in a
predetermined manner because all features are defined during the design phase.
To make such unconventional computing systems reusable and programmable for
biomedical applications, adaptation, learning, and self-modification based on
external stimuli would be highly desirable. However, so far, it has been too
challenging to implement these in wet chemistries. In this paper we extend the
chemical perceptron, a model previously proposed by the authors, to function as
an analog instead of a binary system. The new analog asymmetric signal
perceptron learns through feedback and supports Michaelis-Menten kinetics. The
results show that our perceptron is able to learn linear and nonlinear
(quadratic) functions of two inputs. To the best of our knowledge, it is the
first simulated chemical system capable of doing so. The small number of
species and reactions and their simplicity allows for a mapping to an actual
wet implementation using DNA-strand displacement or deoxyribozymes. Our results
are an important step toward actual biochemical systems that can learn and
adapt.
|
1404.0431 | Learning Latent Block Structure in Weighted Networks | stat.ML cs.SI physics.data-an physics.soc-ph | Community detection is an important task in network analysis, in which we aim
to learn a network partition that groups together vertices with similar
community-level connectivity patterns. By finding such groups of vertices with
similar structural roles, we extract a compact representation of the network's
large-scale structure, which can facilitate its scientific interpretation and
the prediction of unknown or future interactions. Popular approaches, including
the stochastic block model, assume edges are unweighted, which limits their
utility by throwing away potentially useful information. We introduce the
'weighted stochastic block model' (WSBM), which generalizes the stochastic
block model to networks with edge weights drawn from any exponential family
distribution. This model learns from both the presence and weight of edges,
allowing it to discover structure that would otherwise be hidden when weights
are discarded or thresholded. We describe a Bayesian variational algorithm for
efficiently approximating this model's posterior distribution over latent block
structures. We then evaluate the WSBM's performance on both edge-existence and
edge-weight prediction tasks for a set of real-world weighted networks. In all
cases, the WSBM performs as well or better than the best alternatives on these
tasks.
|
1404.0434 | New Asymptotic Metrics for Relative Generalized Hamming Weight | cs.IT math.CO math.IT | It was recently shown that RGHW (relative generalized Hamming weight) exactly
expresses the security of linear ramp secret sharing scheme. In this paper we
determine the true value of the asymptotic metric for RGHW previously proposed
by Zhuang et al. in 2013. Then we propose new asymptotic metrics useful for
investigating the optimal performance of linear ramp secret sharing scheme
constructed from a pair of linear codes. We also determine the true values of
the proposed metrics in many cases.
|
1404.0437 | Theory and Application of Shapelets to the Analysis of Surface
Self-assembly Imaging | cs.CV physics.data-an | A method for quantitative analysis of local pattern strength and defects in
surface self-assembly imaging is presented and applied to images of stripe and
hexagonal ordered domains. The presented method uses "shapelet" functions which
were originally developed for quantitative analysis of images of galaxies
($\sim 10^{20}\,\mathrm{m}$). In this work, they are used instead to quantify
the presence of translational order in surface self-assembled films ($\sim
10^{-9}\,\mathrm{m}$) through reformulation into "steerable" filters. The
resulting method is computationally efficient (with respect to the number
of filter evaluations), robust to variation in pattern feature shape, and,
unlike previous approaches, applicable to a wide variety of pattern types.
An application of the method is presented which uses a nearest-neighbour
analysis to distinguish between uniform (defect-free) and non-uniform
(strained, defect-containing) regions within imaged self-assembled domains,
both with striped and hexagonal patterns.
|
1404.0444 | Setting Parameters for Biological Models With ANIMO | q-bio.MN cs.CE | ANIMO (Analysis of Networks with Interactive MOdeling) is a software tool for
modeling biological networks, such as signaling, metabolic, or gene
networks. An ANIMO model is essentially the sum of a network topology and a
number of interaction parameters. The topology describes the interactions
between biological entities in form of a graph, while the parameters determine
the speed of occurrence of such interactions. When a mismatch is observed
between the behavior of an ANIMO model and experimental data, we want to update
the model so that it explains the new data. In general, the topology of a model
can be expanded with new (known or hypothetical) nodes, enabling it to match
experimental data. However, the unrestrained addition of new parts to a model
causes two problems: models can become too complex too fast, to the point of
being intractable, and too many parts marked as "hypothetical" or "not known"
make a model unrealistic. Even though changing the topology is normally the
easier task, these problems push us to try a better parameter fit first, and to
resort to modifying the model topology only as a last resort. In this paper
we show the support added in ANIMO to ease the task of expanding the knowledge
on biological networks, concentrating in particular on the parameter settings.
|
1404.0453 | Cellular Automata and Its Applications in Bioinformatics: A Review | cs.CE cs.LG | This paper provides a survey of the problems in bioinformatics that can be
readily addressed by cellular automata. Although several authors have proposed
cellular-automaton algorithms for particular bioinformatics problems, the
application of cellular automata in bioinformatics remains a largely
unexplored field of research, and no prior work has tried to relate the major
problems in bioinformatics and find a common solution. We conducted an
extensive literature survey, considering papers from various journals and
conferences. This paper offers intuition for relating various problems in
bioinformatics logically and works toward a common framework for addressing
them.
|
1404.0466 | piCholesky: Polynomial Interpolation of Multiple Cholesky Factors for
Efficient Approximate Cross-Validation | cs.LG cs.NA | The dominant cost in solving least-square problems using Newton's method is
often that of factorizing the Hessian matrix over multiple values of the
regularization parameter ($\lambda$). We propose an efficient way to
interpolate the Cholesky factors of the Hessian matrix computed over a small
set of $\lambda$ values. This approximation enables us to optimally minimize
the hold-out error while incurring only a fraction of the cost compared to
exact cross-validation. We provide a formal error bound for our approximation
scheme and present solutions to a set of key implementation challenges that
allow our approach to maximally exploit the compute power of modern
architectures. We present a thorough empirical analysis over multiple datasets
to show the effectiveness of our approach.
|
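The core idea can be caricatured in a few lines: factorize the regularized Hessian at a few $\lambda$ values, fit a polynomial per Cholesky entry, and evaluate it at new $\lambda$ values without refactorizing. A toy illustration, not the authors' implementation; the anchor grid, quadratic degree, and problem sizes are arbitrary assumptions:

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 6))
H = X.T @ X                          # Hessian of the least-squares term

def chol(lam):
    # Exact Cholesky factor of the regularized Hessian.
    return np.linalg.cholesky(H + lam * np.eye(6))

# Fit, entry by entry, a quadratic in lambda through three exact factors.
anchors = np.array([0.1, 1.0, 10.0])
Ls = np.stack([chol(l) for l in anchors])            # shape (3, 6, 6)
coef = P.polyfit(anchors, Ls.reshape(3, -1), deg=2)  # shape (3, 36)

def chol_interp(lam):
    # Approximate factor at a new lambda without refactorizing.
    return P.polyval(lam, coef).reshape(6, 6)
```

Evaluating the fitted polynomial is far cheaper than a fresh factorization, which is the source of the cross-validation speedup the abstract claims; the interpolation is exact at the anchor values by construction.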
1404.0471 | Full-Duplex Wireless-Powered Communication Network with Energy Causality | cs.IT math.IT | In this paper, we consider a wireless communication network with a
full-duplex hybrid access point (HAP) and a set of wireless users with energy
harvesting capabilities. The HAP implements the full-duplex through two
antennas: one for broadcasting wireless energy to users in the downlink and one
for receiving independent information from users via
time-division-multiple-access (TDMA) in the uplink at the same time. All users
can continuously harvest wireless power from the HAP until its transmission
slot, i.e., the energy causality constraint is modeled by assuming that energy
harvested in the future cannot be used for transmission. Hence, a later user's
energy harvesting time is coupled with the transmission times of previous users.
Under this setup, we investigate the sum-throughput maximization (STM) problem
and the total-time minimization (TTM) problem for the proposed multi-user
full-duplex wireless-powered network. The STM problem is proved to be a convex
optimization problem. The optimal solution strategy is then obtained in
closed-form expression, which can be computed with linear complexity. It is
also shown that the sum throughput is non-decreasing in the number of users.
For the TTM problem, by exploiting the properties of the
coupling constraints, we propose a two-step algorithm to obtain an optimal
solution. Then, for each problem, two suboptimal solutions are proposed and
investigated. Finally, the effect of user scheduling on STM and TTM are
investigated through simulations. It is also shown that different user
scheduling strategies should be used for STM and TTM.
|
1404.0530 | Modelling the Self-similarity in Complex Networks Based on Coulomb's Law | cs.SI physics.soc-ph | Recently, the self-similarity of complex networks has attracted much attention.
The fractal dimension of complex networks is an open issue, and hub repulsion
plays an important role in fractal topologies. This paper models the repulsion
among the nodes of a complex network when calculating the fractal dimension of
the network. Coulomb's law is adopted to quantitatively represent the repulsion
between two nodes of the network. A new method to calculate the fractal
dimension of complex networks is proposed. The Sierpinski triangle network and
some real complex networks are investigated. The results are illustrated to
show that the new model of self-similarity of complex networks is reasonable
and efficient.
|
1404.0533 | A Comparative Study of Modern Inference Techniques for Structured
Discrete Energy Minimization Problems | cs.CV | Szeliski et al. published an influential study in 2006 on energy minimization
methods for Markov Random Fields (MRF). This study provided valuable insights
in choosing the best optimization technique for certain classes of problems.
While these insights remain generally useful today, the phenomenal success of
random field models means that the kinds of inference problems that have to be
solved changed significantly. Specifically, the models today often include
higher order interactions, flexible connectivity structures, large
label-spaces of different cardinalities, or learned energy tables. To reflect
these changes, we provide a modernized and enlarged study. We present an
empirical comparison of 32 state-of-the-art optimization techniques on a corpus
of 2,453 energy minimization instances from diverse applications in computer
vision. To ensure reproducibility, we evaluate all methods in the OpenGM 2
framework and report extensive results regarding runtime and solution quality.
Key insights from our study agree with the results of Szeliski et al. for the
types of models they studied. However, on new and challenging types of models
our findings disagree and suggest that polyhedral methods and integer
programming solvers are competitive in terms of runtime and solution quality
over a large range of model types.
|
1404.0540 | Modeling contaminant intrusion in water distribution networks based on D
numbers | cs.AI cs.CE | Efficient modeling of uncertain information plays an important role in
estimating the risk of contaminant intrusion in water distribution networks.
Dempster-Shafer evidence theory is one of the most commonly used methods.
However, the Dempster-Shafer evidence theory has some hypotheses including the
exclusive property of the elements in the frame of discernment, which may not
be consistent with the real world. In this paper, based on a more effective
representation of uncertainty, called D numbers, a new method that allows the
elements in the frame of discernment to be non-exclusive is proposed. To
demonstrate the efficiency of the proposed method, we apply it to the water
distribution networks to estimate the risk of contaminant intrusion.
|
1404.0554 | From ADP to the Brain: Foundations, Roadmap, Challenges and Research
Priorities | cs.NE | This paper defines and discusses Mouse Level Computational Intelligence
(MLCI) as a grand challenge for the coming century. It provides a specific
roadmap to reach that target, citing relevant work and review papers and
discussing the relation to funding priorities in two NSF funding activities:
the ongoing Energy, Power and Adaptive Systems program (EPAS) and the recent
initiative in Cognitive Optimization and Prediction (COPN). It elaborates on
the first step, vector intelligence, a challenge in the development of
universal learning systems, which itself will require considerable new research
to attain. This in turn is a crucial prerequisite to true functional
understanding of how mammal brains achieve such general learning capabilities.
|
1404.0566 | Weyl group orbit functions in image processing | cs.CV | We deal with the Fourier-like analysis of functions on discrete grids in
two-dimensional simplexes using $C$- and $E$-Weyl group orbit functions. For
these cases we present the convolution theorem. We provide an example of an
image-processing application using the $C$-functions and the convolutions for
spatial filtering of the treated image.
|
1404.0576 | A Lyapunov redesign of coordination algorithms for cyberphysical systems | math.OC cs.SY | The objective is to design distributed coordination strategies for a network
of agents in a cyber-physical environment. In particular, we concentrate on the
rendez-vous of agents having double-integrator dynamics with the addition of a
damping term in the velocity dynamics. We start with distributed controllers
that solve the problem in continuous-time, and we then explain how to implement
these using event-based sampling. The idea is to define a triggering rule per
edge using a clock variable which only depends on the local variables. The
triggering laws are designed to compensate for the perturbative term introduced
by the sampling, a technique reminiscent of Lyapunov-based control redesign.
We first present an event-triggered solution which requires continuous
measurement of the relative position and we then explain how to convert it to a
self-triggered policy. The latter only requires the measurements of the
relative position and velocity at the last transmission instants, which is
useful to reduce both the communication and the computation costs. The
strategies guarantee the existence of a uniform minimum time between
any two edge events. The analysis is carried out using an invariance principle
for hybrid systems.
|
1404.0578 | Mental Disorder Recovery Correlated with Centralities and Interactions
on an Online Social Network | cs.SI cs.CY | Recent research has established both a theoretical basis and strong empirical
evidence that effective social behavior plays a beneficial role in the
maintenance of physical and psychological well-being of people. To test whether
social behavior and well-being are also associated in online communities, we
studied the correlations between the recovery of patients with mental disorders
and their behaviors in online social media. As the source of the data related
to the social behavior and progress of mental recovery, we used PatientsLikeMe
(PLM), the world's first open-participation research platform for the
development of patient-centered health outcome measures. We first constructed
an online social network structure based on patient-to-patient ties among 200
patients obtained from PLM. We then characterized patients' online social
activities by measuring the numbers of "posts and views" and "helpful marks"
each patient obtained. The patients' recovery data were obtained from their
self-reported status information that was also available on PLM. We found that
some node properties (in-degree, eigenvector centrality and PageRank) and the
two online social activity measures were significantly correlated with
patients' recovery. Furthermore, we re-collected the patients' recovery data
two months after the first data collection. We found significant correlations
between the patients' social behaviors and the second recovery data, which were
collected two months apart. Our results indicated that social interactions in
online communities such as PLM were significantly associated with the current
and future recoveries of patients with mental disorders.
|
1404.0600 | MBIS: Multivariate Bayesian Image Segmentation Tool | cs.CV | We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering
tool based on the mixture of multivariate normal distributions model. MBIS
supports multi-channel bias field correction based on a B-spline model. A
second methodological novelty is the inclusion of graph-cuts optimization for
the stationary anisotropic hidden Markov random field model. Along with MBIS,
we release an evaluation framework that contains three different experiments on
multi-site data. We first validate the accuracy of segmentation and the
estimated bias field for each channel. MBIS outperforms a widely used
segmentation tool in a cross-comparison evaluation. The second experiment
demonstrates the robustness of results on atlas-free segmentation of two image
sets from scan-rescan protocols on 21 healthy subjects. Multivariate
segmentation is more replicable than the monospectral counterpart on
T1-weighted images. Finally, we provide a third experiment to illustrate how
MBIS can be used in a large-scale study of tissue volume change with increasing
age in 584 healthy subjects. This last result is meaningful as multivariate
segmentation performs robustly without the need for prior knowledge.
|
1404.0606 | Monadic Datalog Containment on Trees | cs.LO cs.CC cs.DB | We show that the query containment problem for monadic datalog on finite
unranked labeled trees can be solved in 2-fold exponential time when (a)
considering unordered trees using the axes child and descendant, and when (b)
considering ordered trees using the axes firstchild, nextsibling, child, and
descendant. When omitting the descendant-axis, we obtain that in both cases the
problem is EXPTIME-complete.
|
1404.0627 | Extraction of Projection Profile, Run-Histogram and Entropy Features
Straight from Run-Length Compressed Text-Documents | cs.CV | Document Image Analysis, like any Digital Image Analysis, requires the
identification and extraction of proper features, which are generally extracted
from uncompressed images, though in reality images are made available in
compressed form for reasons such as transmission and storage efficiency.
However, this implies that the compressed image must first be decompressed,
which demands additional computing resources. This limitation motivates
research into extracting features directly from the compressed image. In this
research, we propose to extract essential features such as projection profile,
run-histogram and entropy for text document analysis directly from run-length
compressed text-documents. The experimentation illustrates that features are
extracted directly from the compressed image without going through the stage of
decompression, thereby reducing the computing time. The feature
values so extracted are exactly identical to those extracted from uncompressed
images.
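A minimal sketch of the idea for one of the proposed features: a row's projection-profile value is simply the sum of its foreground run lengths, so the profile can be read directly off the run-length code without decompression. The RLE convention assumed below (runs alternate background/foreground, starting with background) is an illustrative assumption, not necessarily the paper's exact format.

```python
def row_projection_profile(rle_rows):
    """Horizontal projection profile computed directly from
    run-length-encoded rows, without decompression.

    Each row is a list of run lengths, alternating background/foreground
    and starting with a background run (an assumed RLE convention for
    binary text images).
    """
    profile = []
    for runs in rle_rows:
        # Foreground runs sit at odd indices; summing their lengths
        # gives the foreground-pixel count for the row.
        profile.append(sum(runs[1::2]))
    return profile

# A 3-row toy image: e.g. [2, 5, 3] = 2 white, 5 black, 3 white pixels.
rle = [[2, 5, 3], [0, 10], [10]]
print(row_projection_profile(rle))  # [5, 10, 0]
```

The same pass over the runs also yields the run-histogram, which is why these features cost no more than a scan of the compressed stream.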
|
1404.0640 | Conceptive Artificial Intelligence: Insights from design theory | cs.AI | The current paper offers a perspective on what we term conceptive
intelligence - the capacity of an agent to continuously think of new object
definitions (tasks, problems, physical systems, etc.) and to look for methods
to realize them. The framework, called a Brouwer machine, is inspired by
previous research in design theory and modeling, with its roots in the
constructivist mathematics of intuitionism. The dual constructivist perspective
we describe offers the possibility to create novelty both in terms of the types
of objects and the methods for constructing objects. More generally, the
theoretical work on which Brouwer machines are based is called imaginative
constructivism. Based on the framework and the theory, we discuss many
paradigms and techniques omnipresent in AI research and their merits and
shortcomings for modeling aspects of design, as described by imaginative
constructivism. To demonstrate and explain the type of creative process
expressed by the notion of a Brouwer machine, we compare this concept with a
system using genetic algorithms for scientific law discovery.
|
1404.0649 | A probabilistic estimation and prediction technique for dynamic
continuous social science models: The evolution of the attitude of the Basque
Country population towards ETA as a case study | cs.LG | In this paper, we present a computational technique to deal with uncertainty
in dynamic continuous models in Social Sciences. Considering data from surveys,
the method consists of determining the probability distribution of the survey
output; this allows us to sample data and fit the model to the sampled data
using a goodness-of-fit criterion based on the chi-square test. Taking the
fitted parameters not rejected by the chi-square test, substituting them into
the model, and computing their outputs, we build 95% confidence intervals at
each time instant, capturing the uncertainty of the survey data (probabilistic
estimation). Using the same set of obtained model parameters, we also provide a
prediction over the next few years with 95% confidence intervals (probabilistic
prediction). This technique is applied to a dynamic social model describing the
evolution of the attitude of the Basque Country population towards the
revolutionary organization ETA.
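As a rough illustration of the probabilistic-estimation step, the sketch below samples candidate parameters, keeps those not rejected by a chi-square goodness-of-fit test against the survey data, and builds pointwise 95% bands from the accepted model outputs. The one-parameter growth model and the toy survey values are hypothetical stand-ins, not the paper's actual model or data.

```python
import random

random.seed(0)

def chi_square_stat(observed, expected):
    """Pearson chi-square statistic between observed and expected values."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def model(p, times):
    """Hypothetical one-parameter growth model, a stand-in for the
    social-dynamics model fitted in the paper."""
    return [100 * (1 - (1 - p) ** t) for t in times]

times = [1, 2, 3]
survey = [30, 52, 66]   # toy survey percentages (illustrative)
CRITICAL = 5.99         # chi-square 5% critical value for 2 degrees of freedom

# Probabilistic estimation: sample candidate parameters and keep those
# whose model output is not rejected by the goodness-of-fit test.
accepted = []
for _ in range(2000):
    p = random.uniform(0.1, 0.5)
    out = model(p, times)
    if chi_square_stat(survey, out) < CRITICAL:
        accepted.append(out)

# Pointwise 95% confidence band over the accepted model outputs.
band = []
for t in range(len(times)):
    vals = sorted(o[t] for o in accepted)
    band.append((vals[int(0.025 * len(vals))],
                 vals[int(0.975 * len(vals)) - 1]))
```

Running the model forward beyond the last survey time with the same accepted parameters gives the probabilistic prediction described in the abstract.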
|
1404.0660 | Waterfilling Theorems in the Time-Frequency Plane for the Heat Channel
and a Related Source | cs.IT math.IT | The capacity of the heat channel, a linear time-varying (LTV) filter with
additive white Gaussian noise (AWGN), is characterized by waterfilling in the
time-frequency plane. Similarly, the rate distortion function for a related
nonstationary source is characterized by reverse waterfilling in the
time-frequency plane. The source is formed by the white Gaussian noise response
of the same LTV filter as before. The proofs of both waterfilling theorems rely
on a specific Szego theorem for a positive definite operator associated with
the filter. An essentially self-contained proof of the Szego theorem is given.
The waterfilling theorems compare well with classical results of Gallager and
Berger. In the case of the nonstationary source, it is observed that the role of the
classical power spectral density (PSD) is taken by the Wigner-Ville spectrum
(WVS).
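The time-frequency waterfilling theorem specializes, cell by cell, to the classical waterfilling allocation over parallel Gaussian subchannels. A minimal sketch of that classical allocation follows; the subchannel noise levels are illustrative numbers, not the heat-channel eigenvalues from the paper's operator-theoretic setting.

```python
def waterfill(noise_levels, power):
    """Classical waterfilling over parallel Gaussian subchannels: allocate
    p_i = max(0, mu - n_i) with the water level mu chosen so the powers
    sum to `power`. The subchannels stand in for the time-frequency cells
    of the heat channel (an illustrative simplification)."""
    levels = sorted(noise_levels)
    mu = power + levels[0]          # fallback: only the best subchannel used
    k = len(levels)
    while k > 0:
        # Candidate water level if the k least-noisy subchannels are active.
        mu = (power + sum(levels[:k])) / k
        if mu > levels[k - 1]:      # all k subchannels are really above water
            break
        k -= 1
    return [max(0.0, mu - n) for n in noise_levels]

print(waterfill([1.0, 2.0, 4.0], 3.0))  # [2.0, 1.0, 0.0]
```

The reverse-waterfilling rate distortion function has the same structure with the inequality flipped: distortion is assigned where the spectrum falls below the water level.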
|
1404.0662 | Privacy-Preserving Social Network with Multigrained and Multilevel
Access Control | cs.SI cs.CR | I study two privacy-preserving social network graphs to disclose the types
of relationships of connecting edges and provide flexible multigrained access
control. To create such graphs, my schemes employ the concept of secretaries
and types of relationships. They are significantly more efficient than schemes
that use expensive cryptographic primitives. I also show how these schemes can
be used for multigrained access control with various options. In addition, I
describe how resilient these schemes are to inference of the types of
connecting edges.
|
1404.0672 | Thermodynamic Hypothesis as Social Choice: An Impossibility Theorem for
Protein Folding | cs.CE cs.GT | Protein Folding is concerned with the reasons and mechanism behind a
protein's tertiary structure. The thermodynamic hypothesis of Anfinsen
postulates a universal energy function (UEF) characterizing the tertiary
structure, defined consistently across proteins, in terms of their amino acid
sequence.
We consider the approach of examining multiple protein structure descriptors
in the PDB (Protein Data Bank), and infer individual preferences, biases
favoring particular classes of amino acid interactions in each of them, later
aggregating these individual preferences into a global preference. This 2-step
process would ideally expose intrinsic biases on classes of amino acid
interactions in the UEF itself. The intuition is that any intrinsic biases in
the UEF are expressed within each protein in a specific manner consistent with
its specific amino acid sequence, size, and fold (consistent with Anfinsen's
thermodynamic hypothesis), making a 1-step, holistic aggregation less
desirable.
Our intention is to illustrate how some impossibility results from voting
theory would apply in this setting, being possibly applicable to other protein
folding problems as well. We consider concepts and results from voting theory
and unveil methodological difficulties for the approach mentioned above. With
our observations, we intend to highlight how key theoretical barriers, already
exposed by economists, can be relevant for the development of new methods, new
algorithms, for problems related to protein folding.
|
1404.0695 | Multi-objective Flower Algorithm for Optimization | cs.NE math.OC | Flower pollination algorithm is a new nature-inspired algorithm, based on the
characteristics of flowering plants. In this paper, we extend this flower
algorithm to solve multi-objective optimization problems in engineering. By
using the weighted sum method with random weights, we show that the proposed
multi-objective flower algorithm can accurately find the Pareto fronts for a
set of test functions. We then solve a bi-objective disc brake design problem,
which indeed converges quickly.
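A minimal sketch of the weighted-sum idea: for each random weight, minimise the scalarized objective and collect the resulting points, which trace out the Pareto front. Plain random search stands in here for the flower pollination update (which uses Lévy flights), and the two quadratic objectives are toy examples, not the paper's test functions or the disc brake problem.

```python
import random

random.seed(1)

def f1(x):
    return x ** 2              # toy objective 1 (hypothetical)

def f2(x):
    return (x - 2) ** 2        # toy objective 2 (hypothetical)

def scalarized_search(w, iters=200, lo=-1.0, hi=3.0):
    """Minimise w*f1 + (1-w)*f2 over [lo, hi] by plain random search,
    a simplified stand-in for the flower pollination update."""
    best_x = random.uniform(lo, hi)
    best = w * f1(best_x) + (1 - w) * f2(best_x)
    for _ in range(iters):
        x = random.uniform(lo, hi)
        v = w * f1(x) + (1 - w) * f2(x)
        if v < best:
            best, best_x = v, x
    return w, best_x

# Random weights trace out points along the Pareto front, here x in [0, 2].
front = [scalarized_search(random.random()) for _ in range(20)]
```

For these convex toy objectives the scalarized optimum is exactly x = 2(1 - w), so random weights sweep the whole front; the weighted-sum method can miss non-convex parts of a front, which is why the paper validates it on a set of test functions.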
|
1404.0696 | D-P2P-Sim+: A Novel Distributed Framework for P2P Protocols Performance
Testing | cs.DB | In recent IoT (Internet of Things) and Web 2.0 technologies, a critical
problem arises with respect to storing and processing the large amounts of
collected data. In this paper we develop and evaluate distributed
infrastructures for storing and processing large amounts of such data. We
present a distributed framework that supports customized deployment of a
variety of indexing engines over million-node overlays. The proposed framework
provides an integrated set of tools that allows applications that process
large amounts of data to evaluate and test the performance of various
application protocols for very-large-scale deployments (multi-million nodes,
billions of keys). The key aim is to provide an environment that supports
decisions regarding the choice of protocol in P2P storage systems for a
variety of big data applications. Using
lightweight and efficient collection mechanisms, our system enables real-time
registration of multiple measures, integrating support for real-life parameters
such as node failure models and recovery strategies. Experiments have been
performed at the PlanetLab network and at a typical research laboratory in
order to verify scalability and show maximum re-usability of our setup.
D-P2P-Sim+ framework is publicly available at
http://code.google.com/p/d-p2p-sim/downloads/list.
|
1404.0703 | Joins via Geometric Resolutions: Worst-case and Beyond | cs.DB cs.DS | We present a simple geometric framework for the relational join. Using this
framework, we design an algorithm that achieves the fractional hypertree-width
bound, which generalizes classical and recent worst-case algorithmic results on
computing joins. In addition, we use our framework and the same algorithm to
show a series of what are colloquially known as beyond worst-case results. The
framework allows us to prove results for data stored in B-trees,
multidimensional data structures, and even multiple indices per table. A key
idea in our framework is formalizing the inference one does with an index as a
type of geometric resolution, transforming the algorithmic problem of computing
joins to a geometric problem. Our notion of geometric resolution can be viewed
as a geometric analog of logical resolution. In addition to the geometry and
logic connections, our algorithm can also be thought of as backtracking search
with memoization.
|
1404.0708 | Computational Optimization, Modelling and Simulation: Recent Trends and
Challenges | cs.NE math.OC | Modelling, simulation and optimization form an integrated part of modern
design practice in engineering and industry. Tremendous progress has been
observed for all three components over the last few decades. However, many
challenging issues remain unresolved, and the current trends tend to use
nature-inspired algorithms and surrogate-based techniques for modelling and
optimization. This 4th workshop on Computational Optimization, Modelling and
Simulation (COMS 2013) at ICCS 2013 will further summarize the latest
developments of optimization and modelling and their applications in science,
engineering and industry. In this review paper, we will analyse the recent
trends in modelling and optimization, and their associated challenges. We will
discuss important topics for further research, including parameter-tuning,
large-scale problems, and the gaps between theory and applications.
|
1404.0736 | Exploiting Linear Structure Within Convolutional Networks for Efficient
Evaluation | cs.CV cs.LG | We present techniques for speeding up the test-time evaluation of large
convolutional networks, designed for object recognition tasks. These models
deliver impressive accuracy but each image evaluation requires millions of
floating point operations, making their deployment on smartphones and
Internet-scale clusters problematic. The computation is dominated by the
convolution operations in the lower layers of the model. We exploit the linear
structure present within the convolutional filters to derive approximations
that significantly reduce the required computation. Using large
state-of-the-art models, we demonstrate speedups of
convolutional layers on both CPU and GPU by a factor of 2x, while keeping the
accuracy within 1% of the original model.
|
1404.0751 | Subspace Learning from Extremely Compressed Measurements | stat.ML cs.LG | We consider learning the principal subspace of a large set of vectors from an
extremely small number of compressive measurements of each vector. Our
theoretical results show that even a constant number of measurements per column
suffices to approximate the principal subspace to arbitrary precision, provided
that the number of vectors is large. This result is achieved by a simple
algorithm that computes the eigenvectors of an estimate of the covariance
matrix. The main insight is to exploit an averaging effect that arises from
applying a different random projection to each vector. We provide a number of
simulations confirming our theoretical results.
|
1404.0760 | Information Flow Decomposition in Feedback Systems: General Case Study | cs.IT math.IT | We derive three fundamental decompositions on relevant information quantities
in feedback systems. The feedback systems considered in this paper are
restricted only to be causal in the time domain, and the channels are allowed
to be
subject to arbitrary distribution. These decompositions comprise the well-known
mutual information and the directed information, and indicate a law of
conservation of information flows in the closed-loop network.
|
1404.0766 | Ornstein Isomorphism and Algorithmic Randomness | cs.IT math.IT | In 1970, Donald Ornstein proved a landmark result in dynamical systems, viz.,
two Bernoulli systems with the same entropy are isomorphic except for a measure
0 set. Keane and Smorodinsky gave a finitary proof of this result. They also
indicated how one can generalize the result to mixing Markov Shifts. We adapt
their construction to show that if two computable mixing Markov systems have
the same entropy, then there is a layerwise computable isomorphism defined on
all Martin-Lof random points in the system. Since the set of Martin-Lof random
points forms a measure 1 set, it implies the classical result for such systems.
This result uses several recent developments in computable analysis and
algorithmic randomness. Work by Braverman and Nandakumar, and by Hoyrup
and Rojas, introduced discontinuous functions into the study of algorithmic
randomness. We utilize Hoyrup and Rojas' elegant notion of layerwise computable
functions to produce the test of randomness in our result. Further, we use the
recent result of the effective Shannon-McMillan-Breiman theorem, independently
established by Hochman and Hoyrup to prove the properties of our construction.
We show that the result cannot be improved to include all points in the
systems - only trivial computable isomorphisms exist between systems with the
same entropy.
|
1404.0774 | GPU Accelerated Fractal Image Compression for Medical Imaging in
Parallel Computing Platform | cs.DC cs.CV | In this paper, we implemented both sequential and parallel versions of fractal
image compression algorithms using the CUDA (Compute Unified Device
Architecture) programming model to parallelize the program on a Graphics
Processing Unit for medical images, as they exhibit a high degree of
self-similarity within each image. There are several improvements in the
implementation of the algorithm as well. Fractal image compression is based on
the self-similarity of an image, meaning that the majority of the image's
regions resemble one another. We take this opportunity to implement the
compression algorithm and monitor its effect using both parallel and
sequential implementations. Fractal compression offers a high compression rate
and a dimensionless scheme. The scheme comprises two stages: encoding and
decoding. Encoding is computationally very expensive, whereas decoding is
comparatively cheap. Applying fractal compression to medical images would
allow much higher compression ratios, while fractal magnification, an
inseparable feature of fractal compression, would be very useful for
presenting the reconstructed image in a highly readable form. However, like
all irreversible methods, fractal compression entails information loss, which
is especially troublesome in medical imaging. A very time-consuming encoding
process, which can last several hours, is another bothersome drawback of
fractal compression.
|
1404.0789 | The Least Wrong Model Is Not in the Data | cs.LG | The true process that generated data cannot be determined when multiple
explanations are possible. Prediction requires a model of the probability that
a process, chosen randomly from the set of candidate explanations, generates
some future observation. The best model includes all of the information
contained in the minimal description of the data that is not contained in the
data. It is closely related to the Halting Problem and is logarithmic in the
size of the data. Prediction is difficult because the ideal model is not
computable, and the best computable model is not "findable." However, the error
from any approximation can be bounded by the size of the description using the
model.
|
1404.0832 | Multiple Access Channels with Combined Cooperation and Partial Cribbing | cs.IT math.IT | In this paper we study the multiple access channel (MAC) with combined
cooperation and partial cribbing and characterize its capacity region.
Cooperation means that the two encoders send a message to one another via a
rate-limited link prior to transmission, while partial cribbing means that each
of the two encoders obtains a deterministic function of the other encoder's
output with or without delay. Prior work in this field dealt separately with
cooperation and partial cribbing. However, by combining these two methods we
can achieve significantly higher rates. Remarkably, the capacity region does
not require an additional auxiliary random variable (RV) since the purpose of
both cooperation and partial cribbing is to generate a common message between
the encoders. In the proof we combine methods of block Markov coding, backward
decoding, double rate-splitting, and joint typicality decoding. Furthermore, we
present the Gaussian MAC with combined one-sided cooperation and quantized
cribbing. For this model, we give an achievability scheme that shows how many
cooperation or quantization bits are required in order to achieve a Gaussian
MAC with full cooperation/cribbing capacity region. After establishing our main
results, we consider two cases where only one auxiliary RV is needed. The first
is a rate distortion dual setting for the MAC with a common message, a private
message and combined cooperation and cribbing. The second is a state-dependent
MAC with cooperation, where the state is known at a partially cribbing encoder
and at the decoder. However, there are cases where more than one auxiliary RV
is needed, e.g., when the cooperation and cribbing are not used for the same
purposes. We present a MAC with an action-dependent state, where the action is
based on the cooperation but not on the cribbing. Therefore, in this case more
than one auxiliary RV is needed.
|
1404.0835 | Games for the Strategic Influence of Expectations | cs.GT cs.LO cs.MA | We introduce a new class of games where each player's aim is to randomise her
strategic choices in order to affect the other players' expectations aside from
her own. The way each player intends to exert this influence is expressed
through a Boolean combination of polynomial equalities and inequalities with
rational coefficients. We offer a logical representation of these games as well
as a computational study of the existence of equilibria.
|
1404.0837 | Reasoning about Knowledge and Strategies: Epistemic Strategy Logic | cs.LO cs.AI | In this paper we introduce Epistemic Strategy Logic (ESL), an extension of
Strategy Logic with modal operators for individual knowledge. This enhanced
framework allows us to represent explicitly and to reason about the knowledge
agents have of their own and other agents' strategies. We provide a semantics
to ESL in terms of epistemic concurrent game models, and consider the
corresponding model checking problem. We show that the complexity of model
checking ESL is no worse than that of (non-epistemic) Strategy Logic.
|
1404.0840 | Refining and Delegating Strategic Ability in ATL | cs.LO cs.GT cs.MA | We propose extending Alternating-time Temporal Logic (ATL) by an operator <i
refines-to G> f to express that agent i can distribute its powers to a set of
sub-agents G in a way which satisfies ATL condition f on the strategic ability
of the coalitions they may form, possibly together with other agents. We prove
the decidability of model-checking of formulas whose subformulas with this
operator as the main connective have the form <i_1 refines-to G_1>...<i_m
refines-to G_m> f, with no further occurrences of this operator in f.
|
1404.0841 | A Resolution Prover for Coalition Logic | cs.LO cs.AI | We present a prototype tool for automated reasoning for Coalition Logic, a
non-normal modal logic that can be used for reasoning about cooperative agency.
The theorem prover CLProver is based on recent work on a resolution-based
calculus for Coalition Logic that operates on coalition problems, a normal form
for Coalition Logic. We provide an overview of coalition problems and of the
resolution-based calculus for Coalition Logic. We then give details of the
implementation of CLProver and present the results for a comparison with an
existing tableau-based solver.
|
1404.0845 | Partial Preferences for Mediated Bargaining | cs.GT cs.MA | In this work we generalize standard Decision Theory by assuming that two
outcomes can also be incomparable. Two motivating scenarios show how
incomparability may be helpful to represent those situations where, due to lack
of information, the decision maker would like to maintain different options
alive and defer the final decision. In particular, a new axiomatization is
given which turns out to be a weakening of the classical set of axioms used in
Decision Theory. Preliminary results show how preferences involving complex
distributions are related to judgments on single alternatives.
|
1404.0847 | Execution Time Analysis for Industrial Control Applications | cs.SE cs.SY | Estimating the execution time of software components is often mandatory when
evaluating the non-functional properties of software-intensive systems. This
particularly holds for real-time embedded systems, e.g., in the context of
industrial automation. In practice, however, it is very hard to obtain reliable
execution time estimates which are accurate, but not overly pessimistic with
respect to the typical behavior of the software.
This article proposes two new concepts to ease the use of execution time
analysis for industrial control applications: (1) a method based on recurring
occurrences of code sequences for automatically creating a timing model of a
given processor and (2) an interactive way to integrate execution time analysis
into the development environment, thus making timing analysis results easily
accessible for software developers. The proposed methods are validated by an
industrial case study, which shows that a significant amount of code reuse is
present in a set of representative industrial control applications.
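A hedged sketch of concept (1): group recurring instruction sequences observed in measurement traces and average their measured costs to obtain a crude timing model. The `(instruction, cycles)` trace layout is a hypothetical illustration, not the paper's actual data format or inference method.

```python
from collections import defaultdict

def build_timing_model(traces, seq_len=3):
    """Build a rough per-sequence timing model from execution traces.

    Each trace is a list of (instruction, measured_cycles) pairs; recurring
    instruction sequences of length seq_len are grouped by their opcode
    pattern and their observed total costs averaged.
    """
    samples = defaultdict(list)
    for trace in traces:
        for i in range(len(trace) - seq_len + 1):
            window = trace[i:i + seq_len]
            key = tuple(instr for instr, _ in window)
            samples[key].append(sum(c for _, c in window))
    return {k: sum(v) / len(v) for k, v in samples.items()}

# Two hypothetical traces of the same recurring load/add/store sequence.
traces = [
    [("ld", 3), ("add", 1), ("st", 2)],
    [("ld", 3), ("add", 1), ("st", 4)],
]
print(build_timing_model(traces))  # {('ld', 'add', 'st'): 7.0}
```

The reuse finding in the case study is what makes such sequence-level averaging plausible: the more often a code sequence recurs across applications, the better its timing estimate.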
|
1404.0850 | Application of Ontologies in Identifying Requirements Patterns in Use
Cases | cs.SE cs.CL cs.IR | Use case specifications have successfully been used for requirements
description. They allow joining, in the same modeling space, the expectations
of the stakeholders as well as the needs of the software engineer and analyst
involved in the process. While use cases are not meant to describe a system's
implementation, by formalizing their description we are able to extract
implementation relevant information from them. More specifically, we are
interested in identifying requirements patterns (common requirements with
typical implementation solutions) in support for a requirements based software
development approach. In the paper we propose the transformation of Use Case
descriptions expressed in a Controlled Natural Language into an ontology
expressed in the Web Ontology Language (OWL). OWL's query engines can then be
used to identify requirements patterns expressed as queries over the ontology.
We describe a tool that we have developed to support the approach and provide
an example of usage.
|
1404.0854 | Enabling Automatic Certification of Online Auctions | cs.LO cs.AI | We consider the problem of building up trust in a network of online auctions
by software agents. This requires agents to have a deeper understanding of
auction mechanisms and be able to verify desirable properties of a given
mechanism. We have shown how these mechanisms can be formalised as semantic web
services in OWL-S, a sufficiently expressive machine-readable formalism that
enables software agents to discover, invoke, and execute a web service. We have
also used abstract interpretation to translate the auction's specifications from
OWL-S, based on description logic, to Coq, based on typed lambda calculus, in
order to enable automatic verification of desirable properties of the auction
by the software agents. For this language translation, we have discussed the
syntactic transformation as well as the semantic connections between the
concrete and abstract domains. This work contributes to the implementation of
the vision of agent-mediated e-commerce systems.
|
1404.0864 | Generalized Signal Alignment For Arbitrary MIMO Two-Way Relay Channels | cs.IT math.IT | In this paper, we consider the arbitrary MIMO two-way relay channels, where
there are $K$ source nodes, each equipped with $M_i$ antennas, for
$i=1,2,\cdots,K$, and one relay node, equipped with $N$ antennas. Each source
node can exchange independent messages with arbitrary other source nodes
assisted by the relay. We extend our newly-proposed transmission scheme,
generalized signal alignment (GSA) in [1], to arbitrary MIMO two-way relay
channels when $N>M_i+M_j$, $\forall i \neq j$. The key idea of GSA is to cancel
the interference for each data pair in its specific subspace by two steps. This
is realized by jointly designing the precoding matrices at all source nodes and
the processing matrix at the relay node. Moreover, the aligned subspaces are
orthogonal to each other. By applying the GSA, we show that a necessary
condition on the antenna configuration to achieve the DoF upper bound $\min
\{\sum_{i=1}^K M_i, 2\sum_{i=2}^K M_i,2N\}$ is $N \geq \max\{\sum_{i=1}^K
M_i-M_s-M_t+d_{s,t}\mid \forall s,t\}$. Here, $d_{s,t}$ denotes the DoF of the
message exchanged between source node $s$ and $t$. In the special case when the
arbitrary MIMO two-way relay channel reduces to the $K$-user MIMO Y channel, we
show that the region in which the DoF upper bound is achievable is larger than
that of previous work.
|
1404.0868 | A Novel Genetic Algorithm using Helper Objectives for the 0-1 Knapsack
Problem | cs.NE | The 0-1 knapsack problem is a well-known combinatorial optimisation problem.
Approximation algorithms have been designed for solving it and they return
provably good solutions within polynomial time. On the other hand, genetic
algorithms are well suited for solving the knapsack problem and they find
reasonably good solutions quickly. A naturally arising question is whether
genetic algorithms are able to find solutions as good as approximation
algorithms do. This paper presents a novel multi-objective optimisation genetic
algorithm for solving the 0-1 knapsack problem. Experimental results show that
the new algorithm outperforms its rivals, the greedy algorithm, mixed strategy
genetic algorithm, and greedy algorithm + mixed strategy genetic algorithm.
|
1404.0900 | Influence Maximization: Near-Optimal Time Complexity Meets Practical
Efficiency | cs.SI cs.DB | Given a social network G and a constant k, the influence maximization problem
asks for k nodes in G that (directly and indirectly) influence the largest
number of nodes under a pre-defined diffusion model. This problem finds
important applications in viral marketing, and has been extensively studied in
the literature. Existing algorithms for influence maximization, however, either
trade approximation guarantees for practical efficiency, or vice versa. In
particular, among the algorithms that achieve constant factor approximations
under the prominent independent cascade (IC) model or linear threshold (LT)
model, none can handle a million-node graph without incurring prohibitive
overheads.
This paper presents TIM, an algorithm that aims to bridge the theory and
practice in influence maximization. On the theory side, we show that TIM runs
in O((k+\ell) (n+m) \log n / \epsilon^2) expected time and returns a
(1-1/e-\epsilon)-approximate solution with at least 1 - n^{-\ell} probability.
The time complexity of TIM is near-optimal under the IC model, as it is only a
\log n factor larger than the \Omega(m + n) lower-bound established in previous
work (for fixed k, \ell, and \epsilon). Moreover, TIM supports the triggering
model, which is a general diffusion model that includes both IC and LT as
special cases. On the practice side, TIM incorporates novel heuristics that
significantly improve its empirical efficiency without compromising its
asymptotic performance. We experimentally evaluate TIM with the largest
datasets ever tested in the literature, and show that it outperforms the
state-of-the-art solutions (with approximation guarantees) by up to four orders
of magnitude in terms of running time. In particular, when k = 50, \epsilon =
0.2, and \ell = 1, TIM requires less than one hour on a commodity machine to
process a network with 41.6 million nodes and 1.4 billion edges.
|
1404.0904 | On a correlational clustering of integers | math.NT cs.AI | Correlation clustering is a concept in machine learning. The ultimate goal of
such a clustering is to find a partition with minimal conflicts. In this paper
we investigate a correlation clustering of integers, based upon the greatest
common divisor.
|
1404.0906 | Optimal Power Control for Analog Bidirectional Relaying with Long-Term
Relay Power Constraint | cs.IT math.IT | Wireless systems that carry delay-sensitive information (such as speech
and/or video signals) typically transmit with fixed data rates, but may
occasionally suffer from transmission outages caused by the random nature of
the fading channels. If the transmitter has instantaneous channel state
information (CSI) available, it can compensate for a significant portion of
these outages by utilizing power allocation. In a conventional dual-hop
bidirectional amplify-and-forward (AF) relaying system, the relay already has
instantaneous CSI of both links available, as this is required for relay gain
adjustment. We therefore develop an optimal power allocation strategy for the
relay, which adjusts its instantaneous output power to the minimum level
required to avoid outages, but only if the required output power is below some
cutoff level; otherwise, the relay is silent in order to conserve power and
prolong its lifetime. The proposed scheme is proven to minimize the system
outage probability, subject to an average power constraint at the relay and
fixed output powers at the end nodes.
|
1404.0933 | Bayes and Naive Bayes Classifier | cs.LG | Bayesian classification is both a supervised learning method and a
statistical method for classification. It assumes an underlying probabilistic
model, which allows us to capture uncertainty about the model in a principled
way by determining probabilities of the outcomes. The method is named after
Thomas Bayes (1702-1761), who proposed Bayes' Theorem. Bayesian classification
provides practical learning algorithms in which prior knowledge and observed
data can be combined, and it offers a useful perspective for understanding and
evaluating many learning algorithms. It calculates explicit probabilities for
hypotheses and is robust to noise in the input data. In statistical
classification, the Bayes classifier minimises the probability of
misclassification. A simple case of the Bayes classifier is also known as:
1) Idiot Bayes, 2) Naive Bayes, 3) Simple Bayes.
|
1404.0953 | Implementing Anti-Unification Modulo Equational Theory | cs.LO cs.AI | We present an implementation of E-anti-unification as defined in Heinz
(1995), where tree-grammar descriptions of equivalence classes of terms are
used to compute generalizations modulo equational theories. We discuss several
improvements, including an efficient implementation of variable-restricted
E-anti-unification from Heinz (1995), and give some runtime figures about them.
We present applications in various areas, including lemma generation in
equational inductive proofs, intelligence tests, diverging Knuth-Bendix
completion, strengthening of induction hypotheses, and theory formation about
finite algebras.
|
1404.0964 | Distributed Hypothesis Testing with Social Learning and Symmetric Fusion | cs.IT cs.MA cs.SI math.IT | We study the utility of social learning in a distributed detection model with
agents sharing the same goal: a collective decision that optimizes an agreed
upon criterion. We show that social learning is helpful in some cases but is
provably futile (and thus essentially a distraction) in other cases.
Specifically, we consider Bayesian binary hypothesis testing performed by a
distributed detection and fusion system, where all decision-making agents have
binary votes that carry equal weight. Decision-making agents in the team
sequentially make local decisions based on their own private signals and all
precedent local decisions. It is shown that the optimal decision rule is not
affected by precedent local decisions when all agents observe conditionally
independent and identically distributed private signals. Perfect Bayesian
reasoning will cancel out all effects of social learning. When the agents
observe private signals with different signal-to-noise ratios, social learning
is again futile if the team decision is only approved by unanimity. Otherwise,
social learning can strictly improve the team performance. Furthermore, the
order in which agents make their decisions affects the team decision.
|
1404.0965 | Compressed Sensing Bayes Risk Minimization for Under-determined Systems
via Sphere Detection | cs.IT math.IT | The application of Compressed Sensing is a promising physical layer
technology for the joint activity and data detection of signals. Detecting the
activity pattern correctly has a severe impact on system performance and is
therefore of major concern. In contrast to previous work, in this paper we
optimize joint activity and data detection in under-determined systems by
minimizing the Bayes-Risk for erroneous activity detection. We formulate a new
Compressed Sensing Bayes-Risk detector which allows the error rates of the
activity detection to be influenced dynamically by a parameter that can be
controlled at higher layers. We derive the detector for a general linear system
and show that our detector outperforms classical Compressed Sensing approaches
by investigating an overloaded CDMA system.
|
1404.0969 | A Class of Reducible Cyclic Codes and Their Weight Distribution | cs.IT math.IT | In this paper, a family of reducible cyclic codes over GF(p) whose duals have
four zeros is presented, where p is an odd prime. Furthermore, the weight
distribution of these cyclic codes is determined.
|
1404.0971 | Improved channel estimation for interference cancellation in random
access methods for satellite communications | cs.IT math.IT | In the context of satellite communications, random access (RA) methods can
significantly increase throughput and reduce latency over the network. The
recent RA methods are based on multi-user multiple access transmission at the
same time and frequency combined with interference cancellation and iterative
decoding at the receiver. Generally, it is assumed that perfect knowledge of
the interference is available at the receiver. In practice, the interference
term has to be accurately estimated to avoid performance degradation. Several
estimation techniques have been proposed lately in the case of superimposed
signals. In this paper, we present an improved channel estimation technique
that combines estimation using an autocorrelation based method and the
Expectation-Maximization algorithm, and uses Pilot Symbol Assisted Modulation
to further improve the performance and achieve optimal interference
cancellation.
|
1404.0979 | Kernel-Based Adaptive Online Reconstruction of Coverage Maps With Side
Information | cs.NI cs.LG stat.ML | In this paper, we address the problem of reconstructing coverage maps from
path-loss measurements in cellular networks. We propose and evaluate two
kernel-based adaptive online algorithms as an alternative to typical offline
methods. The proposed algorithms are application-tailored extensions of
powerful iterative methods such as the adaptive projected subgradient method
and a state-of-the-art adaptive multikernel method. Assuming that the moving
trajectories of users are available, it is shown how side information can be
incorporated in the algorithms to improve their convergence performance and the
quality of the estimation. The complexity is significantly reduced by imposing
sparsity-awareness in the sense that the algorithms exploit the compressibility
of the measurement data to reduce the amount of data which is saved and
processed. Finally, we present extensive simulations based on realistic data to
show that our algorithms provide fast, robust estimates of coverage maps in
real-world scenarios. Envisioned applications include path-loss prediction
along trajectories of mobile users as a building block for anticipatory
buffering or traffic offloading.
|
1404.1006 | Contrasting Effects of Strong Ties on SIR and SIS Processes in Temporal
Networks | physics.soc-ph cs.SI | Most real networks are characterized by connectivity patterns that evolve in
time following complex, non-Markovian, dynamics. Here we investigate the impact
of this ubiquitous feature by studying the Susceptible-Infected-Recovered (SIR)
and Susceptible-Infected-Susceptible (SIS) epidemic models on activity driven
networks with and without memory (i.e., Markovian and non-Markovian). We show
that while memory inhibits the spreading process in SIR models, where the
epidemic threshold is moved to larger values, it plays the opposite effect in
the case of the SIS, where the threshold is lowered. The heterogeneity in tie
strengths, and the frequent repetition of connections that it entails, allows
in fact less virulent SIS-like diseases to survive in tightly connected local
clusters that serve as reservoir for the virus. We validate this picture by
evaluating the threshold of both processes in a real temporal network. Our
findings confirm the important role played by non-Markovian network dynamics in
dynamical processes.
|
1404.1009 | You are What you Eat (and Drink): Identifying Cultural Boundaries by
Analyzing Food & Drink Habits in Foursquare | cs.SI cs.CY physics.data-an physics.soc-ph | Food and drink are two of the most basic needs of human beings. However, as
society evolved, food and drink also became a strong cultural aspect, capable
of describing strong differences among people. Traditional methods used to
analyze cross-cultural differences are mainly based on surveys and, for this
reason, can hardly capture a statistically significant sample at a global
scale. In this paper, we propose a new methodology to identify
cultural boundaries and similarities across populations at different scales
based on the analysis of Foursquare check-ins. This approach might be useful
not only for economic purposes, but also to support existing and novel
marketing and social applications. Our methodology consists of the following
steps. First, we map food and drink related check-ins extracted from Foursquare
into users' cultural preferences. Second, we identify particular individual
preferences, such as the taste for a certain type of food or drink, e.g., pizza
or sake, as well as temporal habits, such as the time and day of the week when
an individual goes to a restaurant or a bar. Third, we show how to analyze this
information to assess the cultural distance between two countries, cities or
even areas of a city. Fourth, we apply a simple clustering technique, using
this cultural distance measure, to draw cultural boundaries across countries,
cities and regions.
|
1404.1066 | Parallel Support Vector Machines in Practice | cs.LG | In this paper, we evaluate the performance of various parallel optimization
methods for Kernel Support Vector Machines on multicore CPUs and GPUs. In
particular, we provide the first comparison of algorithms with explicit and
implicit parallelization. Most existing parallel implementations for multi-core
or GPU architectures are based on explicit parallelization of Sequential
Minimal Optimization (SMO)---the programmers identified parallelizable
components and hand-parallelized them, specifically tuned for a particular
architecture. We compare these approaches with each other and with implicitly
parallelized algorithms---where the algorithm is expressed such that most of
the work is done within few iterations with large dense linear algebra
operations. These can be computed with highly-optimized libraries that are
carefully parallelized for a large variety of parallel platforms. We highlight
the advantages and disadvantages of both approaches and compare them on various
benchmark data sets. We find an approximate implicitly parallel algorithm which
is surprisingly efficient, permits a much simpler implementation, and leads to
unprecedented speedups in SVM training.
|
1404.1068 | Multi-User Coverage Probability of Uplink Cellular Systems: a Stochastic
Geometry Approach | cs.IT math.IT | We analyze the coverage probability of multi-user uplink cellular networks
with fractional power control. We use a stochastic geometry approach where the
mobile users are distributed as a Poisson Point Process (PPP), whereas the
serving base station (BS) is placed at the origin. Using conditional thinning,
we are able to calculate the coverage probability of $k$ users which are
allocated a set of orthogonal resources in the cell of interest, obtaining
analytical expressions for this probability considering their respective
distances to the serving BS. These expressions give useful insights on the
interplay between the power control policy, the interference level and the
degree of fairness among different users in the system.
|
1404.1069 | Rewarding evolutionary fitness with links between populations promotes
cooperation | q-bio.PE cs.SI physics.soc-ph | Evolution of cooperation in the prisoner's dilemma and the public goods game
is studied, where initially players belong to two independent structured
populations. Simultaneously with the strategy evolution, players whose current
utility exceeds a threshold are rewarded by an external link to a player
belonging to the other population. Yet as soon as the utility drops below the
threshold, the external link is terminated. The rewarding of current
evolutionary fitness thus introduces a time-varying interdependence between the
two populations. We show that, regardless of the details of the evolutionary
game and the interaction structure, the self-organization of fitness and reward
gives rise to distinguished players that act as strong catalysts of cooperative
behavior. However, there also exist critical utility thresholds beyond which
distinguished players are no longer able to percolate. The interdependence
between the two populations then vanishes, and cooperators are forced to rely
on traditional network reciprocity alone. We thus demonstrate that a simple
strategy-independent form of rewarding may significantly expand the scope of
cooperation on structured populations. The formation of links outside the
immediate community seems particularly applicable in human societies, where an
individual is typically member in many different social networks.
|
1404.1089 | Linear Hamilton Jacobi Bellman Equations in High Dimensions | math.OC cs.SY | The Hamilton Jacobi Bellman Equation (HJB) provides the globally optimal
solution to large classes of control problems. Unfortunately, this generality
comes at a price: the calculation of such solutions is typically intractable
for systems with more than moderate state space size due to the curse of
dimensionality. This work combines recent results on the structure of the HJB,
and its reduction to a linear Partial Differential Equation (PDE), with methods
based on low-rank tensor representations, known as separated representations,
to address the curse of dimensionality. The result is an algorithm to solve
optimal control problems which scales linearly with the number of states in a
system, and is applicable to systems that are nonlinear with stochastic forcing
in finite-horizon, average cost, and first-exit settings. The method is
demonstrated on inverted pendulum, VTOL aircraft, and quadcopter models, with
system dimension two, six, and twelve respectively.
|
1404.1100 | A Tutorial on Principal Component Analysis | cs.LG stat.ML | Principal component analysis (PCA) is a mainstay of modern data analysis - a
black box that is widely used but (sometimes) poorly understood. The goal of
this paper is to dispel the magic behind this black box. This manuscript
focuses on building a solid intuition for how and why principal component
analysis works. This manuscript crystallizes this knowledge by deriving from
simple intuitions, the mathematics behind PCA. This tutorial does not shy away
from explaining the ideas informally, nor does it shy away from the
mathematics. The hope is that by addressing both aspects, readers of all levels
will be able to gain a better understanding of PCA as well as the when, the how
and the why of applying this technique.
|
1404.1107 | Performance of Multiantenna Linear MMSE Receivers in Doubly Stochastic
Networks | cs.IT math.IT | A technique is presented to characterize the
Signal-to-Interference-plus-Noise Ratio (SINR) of a representative link with a
multiantenna linear Minimum-Mean-Square-Error receiver in a wireless network
with transmitting nodes distributed according to a doubly stochastic process,
which is a generalization of the Poisson point process. The cumulative
distribution function of the SINR of the representative link is derived
assuming independent Rayleigh fading between antennas. Several representative
spatial node distributions are considered, including networks with both
deterministic and random clusters, strip networks (used to model roadways,
e.g.), hard-core networks and networks with generalized path-loss models. In
addition, it is shown that if the number of antennas at the representative
receiver is increased linearly with the nominal node density, the
signal-to-interference ratio converges in distribution to a random variable
that is non-zero in general, and a positive constant in certain cases. This
result indicates that to the extent that the system assumptions hold, it is
possible to scale such networks by increasing the number of receiver antennas
linearly with the node density. The results presented here are useful in
characterizing the performance of multiantenna wireless networks in more
general network models than what is currently available.
|
1404.1112 | Duration-Differentiated Services in Electricity | cs.SY | The integration of renewable sources poses challenges at the operational and
economic levels of the power grid. In terms of keeping the balance between
supply and demand, the usual scheme of supply following load may not be
appropriate for large penetration levels of uncertain and intermittent
renewable supply. In this paper, we focus on an alternative scheme in which the
load follows the supply, exploiting the flexibility associated with the demand
side. We consider a model of flexible loads that are to be serviced by
zero-marginal cost renewable power together with conventional generation if
necessary. Each load demands 1 kW for a specified number of time slots within
an operational period. The flexibility of a load resides in the fact that the
service may be delivered over any slots within the operational period. Loads
therefore require flexible energy services that are differentiated by the
demanded duration. We focus on two problems associated with
durations-differentiated loads. The first problem deals with the operational
decisions that a supplier has to make to serve a given set of duration
differentiated loads. The second problem focuses on a market implementation for
duration differentiated services. We give necessary and sufficient conditions
under which the available power can service the loads, and we describe an
algorithm that constructs an appropriate allocation. In the event the available
supply is inadequate, we characterize the minimum amount of power that must be
purchased to service the loads. Next we consider a forward market where
consumers can purchase duration differentiated energy services. We first
characterize social welfare maximizing allocations in this forward market and
then show the existence of an efficient competitive equilibrium.
|
1404.1113 | Interference-Based Optimal Power-Efficient Access Scheme for Cognitive
Radio Networks | cs.IT cs.NI math.IT | In this paper, we propose a new optimization-based access strategy of
multipacket reception (MPR) channel for multiple secondary users (SUs)
accessing the primary user (PU) spectrum opportunistically. We devise an
analytical model that realizes the multipacket access strategy of SUs that
maximizes the throughput of individual backlogged SUs subject to queue
stability of the PU. All the network receiving nodes have MPR capability. We
aim at maximizing the throughput of the individual SUs such that the PU's queue
is maintained stable. Moreover, we are interested in providing an
energy-efficient cognitive scheme. Therefore, we include energy constraints on
the PU and SU average transmitted energy to the optimization problem. Each SU
accesses the medium with certain probability that depends on the PU's activity,
i.e., active or inactive. The numerical results show the advantage in terms of
SU throughput of the proposed scheme over the conventional access scheme, where
the SUs access the channel randomly with fixed power when the PU is sensed to
be idle.
|
1404.1116 | Resolving Multi-path Interference in Time-of-Flight Imaging via
Modulation Frequency Diversity and Sparse Regularization | cs.CV cs.IT math.IT physics.optics | Time-of-flight (ToF) cameras calculate depth maps by reconstructing phase
shifts of amplitude-modulated signals. For broad illumination or transparent
objects, reflections from multiple scene points can illuminate a given pixel,
giving rise to an erroneous depth map. We report here a sparsity regularized
solution that separates K interfering components using multiple modulation
frequency measurements. The method maps ToF imaging to the general framework of
spectral estimation theory and has applications in improving depth profiles and
exploiting multiple scattering.
|
1404.1129 | An Efficient Two-Stage Sparse Representation Method | cs.CV | There are a large number of methods for solving the under-determined linear
inverse problem. Many of them have very high time complexity for large
datasets. We propose a new method called Two-Stage Sparse Representation (TSSR)
to tackle this problem. We decompose the representing space of signals into two
parts, the measurement dictionary and the sparsifying basis. The dictionary is
designed to approximate a sub-Gaussian distribution to exploit its
concentration property. We apply sparse coding to the signals on the dictionary
in the first stage, and obtain the training and testing coefficients
respectively. Then we design the basis to approach an identity matrix in the
second stage, to acquire the Restricted Isometry Property (RIP) and
universality property. The testing coefficients are encoded over the basis and
the final representing coefficients are obtained. We verify that the projection
of testing coefficients onto the basis is a good approximation of the signal
onto the representing space. Since the projection is conducted on a much
sparser space, the runtime is greatly reduced. For concrete realization, we
provide an instance for the proposed TSSR. Experiments on four biometrics
databases show that TSSR is effective and efficient compared with several
classical methods for solving the linear inverse problem.
|
1404.1140 | Scalable Planning and Learning for Multiagent POMDPs: Extended Version | cs.AI cs.LG | Online, sample-based planning algorithms for POMDPs have shown great promise
in scaling to problems with large state spaces, but they become intractable for
large action and observation spaces. This is particularly problematic in
multiagent POMDPs where the action and observation space grows exponentially
with the number of agents. To combat this intractability, we propose a novel
scalable approach based on sample-based planning and factored value functions
that exploits structure present in many multiagent settings. This approach
applies not only in the planning case, but also in the Bayesian reinforcement
learning setting. Experimental results show that we are able to provide high
quality solutions to large multiagent planning and learning problems.
|
1404.1144 | AIS-MACA- Z: MACA based Clonal Classifier for Splicing Site, Protein
Coding and Promoter Region Identification in Eukaryotes | cs.CE cs.LG | Bioinformatics incorporates information regarding biological data storage,
accessing mechanisms and presentation of characteristics within this data. Most
of the problems in bioinformatics can be addressed efficiently by computational
techniques. This paper aims at building a classifier based on Multiple
Attractor Cellular Automata (MACA) which uses fuzzy logic with version Z to
predict splicing site, protein coding and promoter region identification in
eukaryotes. It is strengthened with an artificial immune system technique
(AIS), Clonal algorithm for choosing rules of best fitness. The proposed
classifier can handle DNA sequences of lengths 54,108,162,252,354. This
classifier gives the exact boundaries of both protein and promoter regions with
an average accuracy of 90.6%. This classifier can predict the splicing site
with 97% accuracy. This classifier was tested with 1,97,000 data components
which were taken from Fickett & Tung, EPDnew, and other sequences from a
renowned medical university.
|
1404.1148 | Hadamard Coded Modulation: An Alternative to OFDM for Optical Wireless
Communications | cs.IT math.IT | Orthogonal frequency division multiplexing (OFDM) is a modulation technique
susceptible to source, channel and amplifier nonlinearities because of its high
peak-to-average power ratio (PAPR). The distortion gets worse as the average
power of the OFDM signal increases, since a larger portion of the signal is
affected by nonlinearity. In this paper we introduce Hadamard coded modulation
(HCM) that uses the fast Walsh-Hadamard transform (FWHT) to modulate data as an
alternative technique to OFDM in direct-detection wireless optical systems.
This technique is shown to have a better performance for high average optical
power scenarios because of its small PAPR, and can be used instead of OFDM in
two scenarios: 1) in optical systems that require high average optical powers
such as visible light communications (VLC), and 2) in optical wireless systems
unconstrained by average power, for which HCM achieves lower bit error rate
(BER) compared to OFDM. The power efficiency of HCM can be improved by removing
a part of the signal's DC bias without losing any information. In this way, the
amplitude of the transmitted signal is decreased and the signals become less
susceptible to nonlinearity. Interleaving can be applied on HCM to make the
resulting signals resistant to inter-symbol interference (ISI) effects in
dispersive channels by uniformly distributing the interference over all
symbols.
|
1404.1151 | Recognition of Handwritten MODI Numerals using Hu and Zernike features | cs.CV | Automatic handwritten character recognition has attracted many researchers
all over the world to contribute to the character recognition domain. Shape
identification and feature extraction are very important parts of any character
recognition system, and the success of a method is highly dependent on the
selection of features. Feature extraction is the most important and most
complex step, as it must define the shape of the character as precisely and as
uniquely as possible; success is achieved by using features that are invariant
to position and orientation. Zernike moments describe shape and are rotation
invariant due to their orthogonality property. MODI is an ancient script of
India with a cursive and complex representation of characters. The work
described in this paper presents the efficiency of Zernike moments over Hu's
moments for automatic recognition of handwritten MODI numerals.
|
1404.1168 | Persistence based analysis of consensus protocols for dynamic graph
networks | cs.SY | This article deals with the consensus problem involving agents with
time-varying singularities in the dynamics or communication in undirected graph
networks. Existing results provide control laws which guarantee asymptotic
consensus. These results are based on the analysis of a system switching
between piecewise constant and time-invariant dynamics. This work introduces a
new analysis technique relying upon classical notions of persistence of
excitation to study the convergence properties of the time-varying multi-agent
dynamics. Since the individual edge weights pass through singularities and vary
with time, the closed-loop dynamics consists of a non-autonomous linear system.
Instead of simplifying to a piecewise continuous switched system as in the
literature, smooth variations in edge weights are allowed, albeit assuming an
underlying persistence condition which characterizes sufficient inter-agent
communication to reach consensus. The consensus task is converted to
edge-agreement in order to study a stabilization problem to which classical
persistence based results apply. The new technique allows precise computation
of the rate of convergence to the consensus value.
|
1404.1178 | Reliable Reporting for Massive M2M Communications with Periodic Resource
Pooling | cs.IT cs.NI math.IT | This letter considers a wireless M2M communication scenario with a massive
number of M2M devices. Each device needs to send its reports within a given
deadline and with a certain reliability, e.g., 99.99%. A pool of resources
available to all M2M devices is periodically available for transmission. The
number of transmissions required by an M2M device within the pool is random
for two reasons: the random number of reports that have arrived since the last
reporting opportunity, and retransmission requests due to random channel errors. We
show how to dimension the pool of M2M-dedicated resources in order to guarantee
the desired reliability of the report delivery within the deadline. The fact
that the pool of resources is used by a massive number of devices allows the
dimensioning to be based on the central limit theorem. The results are interpreted
in the context of LTE, but they are applicable to any M2M communication system.
|
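The dimensioning idea sketched in the abstract above, approximating the aggregate number of transmissions by a Gaussian via the central limit theorem, can be illustrated at a toy scale. The per-device mean and variance values in the usage example are hypothetical, not taken from the paper:

```python
import math
from statistics import NormalDist

def pool_size(n_devices, mean_tx, var_tx, reliability):
    """Dimension a shared resource pool via a Gaussian (CLT) approximation.

    The total transmission count of n_devices i.i.d. devices is approximately
    Normal(n*mean_tx, n*var_tx); the pool must cover it with probability
    `reliability` (e.g. 0.9999), so we take the corresponding upper quantile.
    """
    z = NormalDist().inv_cdf(reliability)  # Gaussian quantile for the target
    return math.ceil(n_devices * mean_tx + z * math.sqrt(n_devices * var_tx))
```

For example, `pool_size(10000, 1.2, 0.5, 0.9999)` sizes a pool for 10,000 devices with a (hypothetical) mean of 1.2 and variance of 0.5 transmissions per device; the safety margin over the mean load grows only with the square root of the population, which is the efficiency gain of pooling.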
1404.1183 | Phase retrieval for the Cauchy wavelet transform | math.FA cs.IT math.IT | We consider the phase retrieval problem in which one tries to reconstruct a
function from the modulus of its wavelet transform. We study the unicity and
stability of the reconstruction. In the case where the wavelets are Cauchy
wavelets, we prove that the modulus of the wavelet transform uniquely
determines the function up to a global phase. We show that the reconstruction
operator is continuous but not uniformly continuous. We describe how to
construct pairs of functions which are far away in $L^2$-norm but whose wavelet
transforms are very close, in modulus. The principle is to modulate the wavelet
transform of a fixed initial function by a phase which varies slowly in both
time and frequency. This construction seems to cover all the instabilities that
we observe in practice; we give a partial formal justification to this fact.
Finally, we describe an exact reconstruction algorithm and use it to
numerically confirm our analysis of the stability question.
|
1404.1193 | Cost minimization for fading channels with energy harvesting and
conventional energy | cs.IT math.IT | In this paper, we investigate resource allocation strategies for a
point-to-point wireless communications system with hybrid energy sources
consisting of an energy harvester and a conventional energy source. In
particular, as an incentive to promote the use of renewable energy, we assume
that the renewable energy has a lower cost than the conventional energy. Then,
by assuming that the non-causal information of the energy arrivals and the
channel power gains are available, we minimize the total energy cost of such a
system over $N$ fading slots under a proposed outage constraint together with
the energy harvesting constraints. The outage constraint requires a minimum
fixed number of slots to be reliably decoded, and thus leads to a mixed-integer
programming formulation for the optimization problem. This constraint is
useful, for example, if an outer code is used to recover all the data bits.
Optimal linear time algorithms are obtained for two extreme cases, i.e., the
number of outage slots is $1$ or $N-1$. For the general case, a lower bound
based on the linear programming relaxation, and two suboptimal algorithms are
proposed. It is shown that the proposed suboptimal algorithms exhibit only a
small gap from the lower bound. We then extend the proposed algorithms to the
multi-cycle scenario in which the outage constraint is imposed for each cycle
separately. Finally, we investigate the resource allocation strategies when
only causal information on the energy arrivals and only channel statistics are
available. It is shown that the greedy energy allocation is optimal for this
scenario.
|
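The greedy principle mentioned at the end of the abstract above can be illustrated with a toy sketch: spend the cheaper harvested energy first, subject to causal availability, and draw conventional energy only for the remainder. This is an assumption-laden simplification of the paper's setting (no outage constraint is modeled here), with hypothetical function and variable names:

```python
def greedy_allocation(demand, harvested, battery=0.0):
    """Per-slot greedy split between harvested and conventional energy.

    Harvested energy (assumed cheaper) is used first, respecting causality:
    only energy harvested up to and including the current slot can be spent.
    Returns the conventional energy drawn in each slot.
    """
    conventional = []
    for d, h in zip(demand, harvested):
        battery += h                      # energy arriving this slot
        use_harvested = min(d, battery)   # spend cheap energy first
        battery -= use_harvested
        conventional.append(d - use_harvested)
    return conventional
```

With demand `[2, 2, 2]` and arrivals `[3, 0, 1]`, the first slot is served entirely from the battery and the later deficits are covered conventionally.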
1404.1237 | Operational Rate-Distortion Performance of Single-source and Distributed
Compressed Sensing | cs.IT math.IT | We consider correlated and distributed sources without cooperation at the
encoder. For these sources, we derive the best achievable performance in the
rate-distortion sense of any distributed compressed sensing scheme, under the
constraint of high-rate quantization. Moreover, under this model we derive a
closed-form expression of the rate gain achieved by taking into account the
correlation of the sources at the receiver and a closed-form expression of the
average performance of the oracle receiver for independent and joint
reconstruction. Finally, we show experimentally that the exploitation of the
correlation between the sources performs close to optimal and that the only
penalty is due to the missing knowledge of the sparsity support as in (non
distributed) compressed sensing. Even if the derivation is performed in the
large system regime, where signal and system parameters tend to infinity,
numerical results show that the equations match simulations for parameter
values of practical interest.
|
1404.1269 | Unified Performance Analysis of Mixed Line of Sight RF-FSO Fixed Gain
Dual-Hop Transmission Systems | cs.IT math.IT | In this work, we carry out a unified performance analysis of a dual-hop fixed
gain relay system over asymmetric links composed of both radio-frequency (RF)
and unified free-space optics (FSO) under the effect of pointing errors. The RF
link is modeled by the Nakagami-$m$ fading channel and the FSO link by the
Gamma-Gamma fading channel subject to both types of detection techniques (i.e.
heterodyne detection and intensity modulation with direct detection (IM/DD)).
In particular, we derive new unified closed-form expressions for the cumulative
distribution function, the probability density function, the moment generating
function, and the moments of the end-to-end signal-to-noise ratio of these
systems in terms of the Meijer's G function. Based on these formulas, we offer
exact closed-form expressions for the outage probability, the higher-order
amount of fading, and the average bit-error rate of a variety of binary
modulations in terms of the Meijer's G function. Further, an exact closed-form
expression for the end-to-end ergodic capacity for the Nakagami-$m$-unified FSO
relay links is derived in terms of the bivariate G function. All the given
results are verified via computer-based Monte Carlo simulations.
|
1404.1270 | Semantics and Validation of Shapes Schemas for RDF | cs.DB | We present a formal semantics and proof of soundness for shapes schemas, an
expressive schema language for RDF graphs that is the foundation of Shape
Expressions Language 2.0. It can be used to describe the vocabulary and the
structure of an RDF graph, and to constrain the admissible properties and
values for nodes in that graph. The language defines a typing mechanism called
shapes against which nodes of the graph can be checked. It includes an
algebraic grouping operator, a choice operator and cardinality constraints for
the number of allowed occurrences of a property. Shapes can be combined using
Boolean operators, and can use possibly recursive references to other shapes.
We describe the syntax of the language and define its semantics. The
semantics is proven to be well-defined for schemas that satisfy a reasonable
syntactic restriction, namely stratified use of negation and recursion. We
present two algorithms for the validation of an RDF graph against a shapes
schema. The first algorithm is a direct implementation of the semantics,
whereas the second is a non-trivial improvement. We also briefly give
implementation guidelines.
|
1404.1282 | Hierarchical Dirichlet Scaling Process | cs.LG | We present the \textit{hierarchical Dirichlet scaling process} (HDSP), a
Bayesian nonparametric mixed membership model. The HDSP generalizes the
hierarchical Dirichlet process (HDP) to model the correlation structure between
metadata in the corpus and mixture components. We construct the HDSP based on
the normalized gamma representation of the Dirichlet process, and this
construction allows incorporating a scaling function that controls the
membership probabilities of the mixture components. We develop two scaling
methods to demonstrate that different modeling assumptions can be expressed in
the HDSP. We also derive the corresponding approximate posterior inference
algorithms using variational Bayes. Through experiments on datasets of
newswire, medical journal articles, conference proceedings, and product
reviews, we show that the HDSP achieves better predictive performance than
labeled LDA, partially labeled LDA, and the author-topic model, and better
negative-review classification performance than the supervised topic model and
SVM.
|
1404.1283 | The Kinetic Basis of Self-Organized Pattern Formation | cs.FL cs.SY | In his seminal paper on morphogenesis (1952), Alan Turing demonstrated that
different spatio-temporal patterns can arise due to instability of the
homogeneous state in reaction-diffusion systems, but at least two species are
necessary to produce even the simplest stationary patterns. This paper aims
to propose a novel model of the analog (continuous-state) kinetic automaton and
to show that stationary and dynamic patterns can arise in one-component
networks of kinetic automata. Possible applicability of kinetic networks to
modeling of real-world phenomena is also discussed.
|
1404.1292 | Review of Face Detection Systems Based Artificial Neural Networks
Algorithms | cs.CV cs.NE | Face detection is one of the most relevant applications of image processing
and biometric systems. Artificial neural networks (ANN) have been used in the
field of image processing and pattern recognition. There is a lack of
literature surveys that give an overview of the studies and research related to
the use of ANNs in face detection. Therefore, this work presents a general
review of face detection studies and systems based on different ANN approaches
and algorithms. The strengths and limitations of these studies and systems are
also discussed.
|
1404.1295 | Detecting criminal organizations in mobile phone networks | cs.SI physics.soc-ph | The study of criminal networks using traces from heterogeneous communication
media is acquiring increasing importance in today's society. The usage of
communication media such as phone calls and online social networks leaves
digital traces in the form of metadata that can be used for this type of
analysis. The goal of this work is twofold: first we provide a theoretical
framework for the problem of detecting and characterizing criminal
organizations in networks reconstructed from phone call records. Then, we
introduce an expert system to support law enforcement agencies in the task of
unveiling the underlying structure of criminal networks hidden in communication
data. This platform allows for statistical network analysis, community
detection and visual exploration of mobile phone network data. It allows
forensic investigators to deeply understand hierarchies within criminal
organizations, discovering members who play a central role and provide
connections among sub-groups. Our work concludes by illustrating the adoption
of our computational framework in a real-world criminal investigation.
|
1404.1312 | Lattices over Eisenstein Integers for Compute-and-Forward | cs.IT math.IT | In this paper, we consider the use of lattice codes over Eisenstein integers
for implementing a compute-and-forward protocol in wireless networks when
channel state information is not available at the transmitter. We extend the
compute-and-forward paradigm of Nazer and Gastpar to decoding Eisenstein
integer combinations of transmitted messages at relays by proving the existence
of a sequence of pairs of nested lattices over Eisenstein integers in which the
coarse lattice is good for covering and the fine lattice can achieve the
Poltyrev limit. Using this result, we show that both the outage performance and
error-correcting performance of nested lattice codebooks over Eisenstein
integers surpass those of the lattice codebooks over integers considered by
Nazer and Gastpar, with no additional computational complexity.
|
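The abstract above builds lattices over the ring of Eisenstein integers, i.e. numbers a + bw with w = exp(2*pi*i/3), so that w^2 = -1 - w. As an illustrative sketch of arithmetic in this ring (the class name and methods are hypothetical, not from the paper):

```python
class Eisenstein:
    """Eisenstein integer a + b*w with w = exp(2*pi*i/3), so w**2 = -1 - w.

    The ring Z[w] underlies the nested-lattice construction: relays decode
    Eisenstein-integer combinations of the transmitted codewords.
    """
    def __init__(self, a, b):
        self.a, self.b = a, b

    def __add__(self, o):
        return Eisenstein(self.a + o.a, self.b + o.b)

    def __mul__(self, o):
        # (a + b*w)(c + d*w) = ac + (ad + bc)*w + bd*w^2, with w^2 = -1 - w,
        # giving (ac - bd) + (ad + bc - bd)*w.
        return Eisenstein(self.a * o.a - self.b * o.b,
                          self.a * o.b + self.b * o.a - self.b * o.b)

    def norm(self):
        # Multiplicative norm |a + b*w|^2 = a^2 - a*b + b^2
        return self.a * self.a - self.a * self.b + self.b * self.b

    def __eq__(self, o):
        return self.a == o.a and self.b == o.b
```

The multiplicativity of the norm (norm of a product equals the product of norms) is what makes Z[w] a Euclidean domain and usable for coding constructions.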
1404.1333 | Understanding Machine-learned Density Functionals | physics.chem-ph cs.LG physics.comp-ph stat.ML | Kernel ridge regression is used to approximate the kinetic energy of
non-interacting fermions in a one-dimensional box as a functional of their
density. The properties of different kernels and methods of cross-validation
are explored, and highly accurate energies are achieved. Accurate {\em
constrained optimal densities} are found via a modified Euler-Lagrange
constrained minimization of the total energy. A projected gradient descent
algorithm is derived using local principal component analysis. Additionally, a
sparse grid representation of the density can be used without degrading the
performance of the methods. The implications for machine-learned density
functional approximations are discussed.
|
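The kernel ridge regression setup described in the abstract above can be sketched at a toy scale; this is a generic KRR with a Gaussian (RBF) kernel, not the authors' code, and the `gamma` and `lam` hyperparameters are placeholders that would normally be chosen by cross-validation as the abstract discusses:

```python
import numpy as np

def krr_fit_predict(X, y, X_new, gamma=1.0, lam=1e-6):
    """Kernel ridge regression with a Gaussian (RBF) kernel.

    Rows of X play the role of sampled densities; y is the scalar
    functional value (e.g. kinetic energy) to be learned.
    """
    def rbf(A, B):
        # Pairwise squared distances, then Gaussian kernel
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    K = rbf(X, X)
    # Dual weights: (K + lam*I) alpha = y
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf(X_new, X) @ alpha
```

With a small regularizer the predictor interpolates the training data, so evaluating it back at the training inputs recovers the targets almost exactly; the interesting behavior studied in the paper is off the training set.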
1404.1338 | Structure of Belarusian educational and research web portal of nuclear
knowledge | cs.CY cs.SI physics.soc-ph | The main objectives and instruments for developing the Belarusian educational
and research web portal of nuclear knowledge are discussed. A draft structure
of the portal is presented.
|