id | title | categories | abstract |
|---|---|---|---|
1306.0926 | Self-Iterating Soft Equalizer | cs.IT math.IT | A self-iterating soft equalizer (SISE) consisting of a few relatively weak
constituent equalizers is shown to provide robust performance even in severe
intersymbol interference (ISI) channels that exhibit deep nulls and valleys
within the signal band. Constituent equalizers are allowed to exchange soft
information in the absence of interleavers, using a method designed to
suppress significant correlation among their soft outputs. The
resulting SISE works well as a stand-alone equalizer or as the equalizer
component of a turbo equalization system. The performance advantages over
existing methods are validated with bit-error-rate (BER) simulations and
extrinsic information transfer (EXIT) chart analysis. It is shown that in the
turbo equalization setting, the SISE achieves performance closer to that of the
maximum a posteriori probability equalizer than any other known scheme in very
severe ISI channels.
|
1306.0940 | (More) Efficient Reinforcement Learning via Posterior Sampling | stat.ML cs.LG | Most provably-efficient learning algorithms introduce optimism about
poorly-understood states and actions to encourage exploration. We study an
alternative approach for efficient exploration, posterior sampling for
reinforcement learning (PSRL). This algorithm proceeds in repeated episodes of
known duration. At the start of each episode, PSRL updates a prior distribution
over Markov decision processes and takes one sample from this posterior. PSRL
then follows the policy that is optimal for this sample during the episode. The
algorithm is conceptually simple, computationally efficient and allows an agent
to encode prior knowledge in a natural way. We establish an $\tilde{O}(\tau S
\sqrt{AT})$ bound on the expected regret, where $T$ is time, $\tau$ is the
episode length and $S$ and $A$ are the cardinalities of the state and action
spaces. This bound is one of the first for an algorithm not based on optimism,
and close to the state of the art for any reinforcement learning algorithm. We
show through simulation that PSRL significantly outperforms existing algorithms
with similar regret bounds.
|
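The episode loop the abstract describes (sample an MDP from the posterior, act optimally for that sample, update the posterior) can be sketched in a few lines. The toy two-state MDP, the Dirichlet prior over transitions, and the assumption of known rewards below are all illustrative choices, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

S, A, H, EPISODES = 2, 2, 10, 50          # states, actions, horizon, episodes

# True (unknown) MDP: transition tensor P[s, a, s'] and reward table R[s, a].
P_true = np.array([[[0.9, 0.1], [0.2, 0.8]],
                   [[0.7, 0.3], [0.05, 0.95]]])
R_true = np.array([[0.0, 0.1], [0.2, 1.0]])

# Posterior over dynamics: Dirichlet counts (rewards assumed known for brevity).
dirichlet = np.ones((S, A, S))

def plan(P, R, H):
    """Finite-horizon value iteration; returns the greedy policy per step."""
    V = np.zeros(S)
    policy = np.zeros((H, S), dtype=int)
    for h in reversed(range(H)):
        Q = R + P @ V                      # Q[s,a] = R[s,a] + sum_s' P[s,a,s'] V[s']
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

for ep in range(EPISODES):
    # 1) Sample one MDP from the current posterior.
    P_sampled = np.array([[rng.dirichlet(dirichlet[s, a]) for a in range(A)]
                          for s in range(S)])
    # 2) Follow the policy optimal for the sample during the whole episode.
    policy = plan(P_sampled, R_true, H)
    s = 0
    for h in range(H):
        a = policy[h, s]
        s_next = rng.choice(S, p=P_true[s, a])
        dirichlet[s, a, s_next] += 1       # 3) Update the posterior counts.
        s = s_next
```

Conceptually this is all PSRL is: one posterior draw per episode replaces the optimism bonuses used by most provably-efficient algorithms.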
1306.0944 | An Exact Path-Loss Density Model for Mobiles in a Cellular System | cs.IT math.IT | To emulate the spatial positions of wireless nodes for the purpose of
analysis, we rely on stochastic simulation, and it is customary in mobile
systems to model a base station's radiation coverage by an ideal cell shape.
For cellular analysis, a hexagonal contour is generally preferred because of
its tessellating nature. Despite this, largely owing to its intrinsic
simplicity, only the random dispersion model for a circular shape is known in
the literature. If applied to non-circular contours, however, it yields an
inaccurate node density, specifically at the edges. In this paper, we therefore
present the exact random number generation technique required to scatter nodes
uniformly inside a hexagon. Next, motivated by a system channel perspective, we
argue the need for an exhaustive random mobile dropping process, and hence
derive a generic closed-form expression for the path-loss distribution density
between a base station and a mobile. Finally, simulation is used to reaffirm
the validity of the theoretical analysis, using values from the new IEEE 802.20
standard.
|
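Exact (rejection-free) uniform scattering inside a hexagon, as called for above, can be realized with a standard construction; the six-triangle decomposition below is one such exact method, offered as an illustrative sketch rather than the paper's specific derivation:

```python
import numpy as np

rng = np.random.default_rng(1)

def hexagon_points(n, radius=1.0):
    """Draw n points uniformly inside a regular hexagon (circumradius `radius`).

    The hexagon is split into 6 congruent equilateral triangles meeting at the
    centre; a triangle is chosen uniformly, then a point is drawn uniformly in
    it via the standard reflection trick on barycentric coordinates.
    """
    k = rng.integers(0, 6, size=n)                      # which triangle
    verts = np.array([(np.cos(t), np.sin(t))
                      for t in np.linspace(0, 2 * np.pi, 7)[:-1]]) * radius
    a, b = rng.random(n), rng.random(n)
    flip = a + b > 1                                    # reflect into the triangle
    a[flip], b[flip] = 1 - a[flip], 1 - b[flip]
    return a[:, None] * verts[k] + b[:, None] * verts[(k + 1) % 6]

pts = hexagon_points(10000)
```

Because every draw is accepted, the edge regions of the hexagon get exactly their fair share of nodes, unlike reuse of a circular dispersion model.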
1306.0946 | Closed-Form Path-Loss Predictor for Gaussianly Distributed Nodes | cs.IT cs.NI math.IT | The emulation of wireless nodes' spatial positions is a practice used by
deployment engineers and network planners to analyze the characteristics of a
network. In particular, node geolocation directly impacts factors such as
connectivity, signal fidelity, and service quality. In the literature, in
addition to typical homogeneous scattering, the normal distribution is
frequently used to model the concentration of mobiles in a cellular system.
Moreover, Gaussian dropping is often considered an effective placement method
for airborne sensor deployment. Despite the practicality of this model,
obtaining the network channel loss distribution still relies on exhaustive
Monte Carlo simulation. In this paper, we obviate the need for this
inefficient approach by deriving a generic and exact closed-form expression for
the path-loss distribution density between a base station and a network of
nodes. Simulation is used to reaffirm the validity of the theoretical analysis,
using values from the new IEEE 802.20 standard.
|
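For context, the exhaustive Monte Carlo procedure that such a closed form replaces is easy to state: drop Gaussian nodes, compute their distances to the base station, map them through a path-loss law, and histogram. The log-distance model and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative parameters (not from the paper): log-distance path-loss model
# PL(d) = PL0 + 10*n*log10(d/d0) with exponent n, and nodes dropped with a
# 2-D Gaussian concentration around a base station at the origin.
PL0, D0, N_EXP, SIGMA = 40.0, 1.0, 3.5, 200.0   # dB, m, unitless, m
N_NODES = 100_000

xy = rng.normal(0.0, SIGMA, size=(N_NODES, 2))   # Gaussian node dropping
d = np.linalg.norm(xy, axis=1)                   # distance is then Rayleigh(SIGMA)
pl = PL0 + 10 * N_EXP * np.log10(np.maximum(d, D0) / D0)

# Empirical path-loss density: the Monte Carlo estimate a closed form replaces.
hist, edges = np.histogram(pl, bins=100, density=True)
```

Since the 2-D Gaussian drop makes the distance exactly Rayleigh, this is also a handy sanity check for any derived closed-form density.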
1306.0957 | Dual codes of product semi-linear codes | cs.IT math.IT math.RA | Let $\mathbb{F}_q$ be a finite field with $q$ elements and denote by $\theta
: \mathbb{F}_q\to\mathbb{F}_q$ an automorphism of $\mathbb{F}_q$. In this
paper, we deal with linear codes of $\mathbb{F}_q^n$ invariant under a
semi-linear map $T:\mathbb{F}_q^n\to\mathbb{F}_q^n$ for some $n\geq 2$. In
particular, we study three kinds of their dual codes and some relations among
them, and we focus on codes that are products of module skew codes in the
non-commutative polynomial ring $\mathbb{F}_q[X,\theta]$, a subcase of linear
codes invariant under a semi-linear map $T$. In this setting we also give an
algorithm for encoding, decoding and detecting errors, and we show a method to
construct codes invariant under a fixed $T$.
|
1306.0963 | Inferring Robot Task Plans from Human Team Meetings: A Generative
Modeling Approach with Logic-Based Prior | cs.AI cs.CL cs.RO stat.ML | We aim to reduce the burden of programming and deploying autonomous systems
to work in concert with people in time-critical domains, such as military field
operations and disaster response. Deployment plans for these operations are
frequently negotiated on-the-fly by teams of human planners. A human operator
then translates the agreed upon plan into machine instructions for the robots.
We present an algorithm that reduces this translation burden by inferring the
final plan from a processed form of the human team's planning conversation. Our
approach combines probabilistic generative modeling with logical plan
validation used to compute a highly structured prior over possible plans. This
hybrid approach enables us to overcome the challenge of performing inference
over the large solution space with only a small amount of noisy data from the
team planning session. We validate the algorithm through human subject
experimentation and show we are able to infer a human team's final plan with
83% accuracy on average. We also describe a robot demonstration in which two
people plan and execute a first-response collaborative task with a PR2 robot.
To the best of our knowledge, this is the first work that integrates a logical
planning technique within a generative model to perform plan inference.
|
1306.0968 | BER Analysis of Decision-Feedback Multiple Symbol Detection in
Noncoherent MIMO Ultra-Wideband Systems | cs.IT math.IT | In this paper, we investigate noncoherent multiple-input multiple-output
(MIMO) ultra-wideband (UWB) systems where the signal is encoded by differential
space-time block code (DSTBC). DSTBC enables noncoherent MIMO UWB systems to
achieve diversity gain. However, the traditional noncoherent symbol-by-symbol
differential detection (DD) for DSTBC-UWB suffers from performance degradation
compared with coherent detection. We introduce a noncoherent multiple symbol
detection (MSD) scheme to enhance the performance of DSTBC-UWB systems.
Although the MSD scheme boosts performance as the observation window size
grows, the complexity of the exhaustive search for MSD also increases
exponentially with the window size. To decrease the
computational complexity, the concept of decision-feedback (DF) is introduced
to MSD for DSTBC-UWB in this paper. The resultant DF-MSD yields reasonable
complexity and also solid performance improvement. We provide the bit error
rate (BER) analysis for the proposed DF-MSD. Both theoretical analysis and
simulation results validate the proposed scheme.
|
1306.0969 | Secrecy Wireless Information and Power Transfer with MISO Beamforming | cs.IT math.IT | The dual use of radio signals for simultaneous wireless information and power
transfer (SWIPT) has recently drawn significant attention. To meet the
practical requirement that energy receivers (ERs) operate with much higher
received power than information receivers (IRs), ERs need to be deployed closer
to the transmitter than IRs. However, due to the broadcast nature of wireless
channels, one critical issue is ensuring that the messages sent to IRs are not
eavesdropped by the ERs, which possess better channels from the transmitter. In
this paper, we address this new secrecy communication problem in a multiuser
multiple-input single-output (MISO) SWIPT system where a multi-antenna
transmitter sends information and energy simultaneously to one IR and multiple
ERs, each with a single antenna. By optimizing transmit beamforming vectors and
their power allocation, we maximize the weighted sum-energy transferred to ERs
subject to a secrecy rate constraint for the information sent to the IR. We
solve this non-convex problem optimally by reformulating it into a two-stage
problem. First, we fix the signal-to-interference-plus-noise ratio (SINR) at
the IR and obtain the optimal beamforming solution by applying the technique of
semidefinite relaxation (SDR). Then the original problem is solved by a
one-dimensional search over the optimal SINR value for the IR. Furthermore, two
suboptimal low-complexity beamforming schemes are proposed, and their
achievable (secrecy) rate-energy (R-E) regions are compared against that by the
optimal scheme.
|
1306.0974 | Distributed Bayesian inference for consistent labeling of tracked
objects in non-overlapping camera networks | cs.CV | One of the fundamental requirements for visual surveillance using
non-overlapping camera networks is the correct labeling of tracked objects on
each camera in a consistent way, in the sense that the captured tracklets, or
observations in this paper, of the same object at different cameras should be
assigned with the same label. In this paper, we formulate this task as a
Bayesian inference problem and propose a distributed inference framework in
which the posterior distribution of the labeling variable corresponding to each
observation, conditioned on all historical appearance and spatio-temporal
evidence gathered in the whole network, is calculated based solely on local
information processing at each camera and mutual information exchange between
neighboring cameras. In our framework, the number of objects present in the
monitored region, i.e. the sampling space of the labeling variables, does not need to be
specified beforehand. Instead, it can be determined automatically on the fly.
In addition, we make no assumption about the appearance distribution of a
single object, but instead use similarity scores between appearance pairs,
given by an advanced object re-identification algorithm, as the appearance
likelihood for inference. This feature makes our method flexible and
competitive when observing conditions undergo large changes across camera views. To cope with
the problem of missing detection, which is critical for distributed inference,
we consider an enlarged neighborhood of each camera during inference and use a
mixture model to describe the higher order spatio-temporal constraints. The
robustness of the algorithm against missing detection is improved at the cost
of slightly increased computation and communication burden at each camera node.
Finally, we demonstrate the effectiveness of our method through experiments on
an indoor Office Building dataset and an outdoor Campus Garden dataset.
|
1306.0992 | Any network code comes from an algebraic curve taking osculating spaces | math.AG cs.IT math.IT | In this note we prove that every network code over $\mathbb {F}_q$ may be
realized taking some of the osculating spaces of a smooth projective curve.
|
1306.1023 | Quaternion Fourier Transform on Quaternion Fields and Generalizations | math.RA cs.CV math-ph math.MP | We treat the quaternionic Fourier transform (QFT) applied to quaternion
fields and investigate QFT properties useful for applications. Different forms
of the QFT lead us to different Plancherel theorems. We relate the QFT
computation for quaternion fields to the QFT of real signals. We study the
general linear ($GL$) transformation behavior of the QFT with matrices,
Clifford geometric algebra and with examples. We finally arrive at wide-ranging
non-commutative multivector FT generalizations of the QFT. Examples given are
new volume-time and spacetime algebra Fourier transformations.
|
1306.1031 | LLAMA: Leveraging Learning to Automatically Manage Algorithms | cs.AI | Algorithm portfolio and selection approaches have achieved remarkable
improvements over single solvers. However, the implementation of such systems
is often highly customised and specific to the problem domain. This makes it
difficult for researchers to explore different techniques for their specific
problems. We present LLAMA, a modular and extensible toolkit implemented as an
R package that facilitates the exploration of a range of different portfolio
techniques on any problem domain. It implements the algorithm selection
approaches most commonly used in the literature and leverages the extensive
library of machine learning algorithms and techniques in R. We describe the
current capabilities and limitations of the toolkit and illustrate its usage on
a set of example SAT problems.
|
1306.1034 | ROTUNDE - A Smart Meeting Cinematography Initiative: Tools, Datasets,
and Benchmarks for Cognitive Interpretation and Control | cs.AI cs.CV cs.HC | We construe smart meeting cinematography with a focus on professional
situations such as meetings and seminars, possibly conducted in a distributed
manner across socio-spatially separated groups. The basic objective in smart
meeting cinematography is to interpret professional interactions involving
people, and automatically produce dynamic recordings of discussions, debates,
presentations, etc. in the presence of multiple communication modalities.
Typical modalities include gestures (e.g., raising one's hand for a question,
applause), voice and interruption, electronic apparatus (e.g., pressing a
button), and movement (e.g., standing up, moving around). ROTUNDE, an instance
of the smart meeting cinematography concept, aims to: (a) develop
functionality-driven benchmarks with respect to the interpretation and control
capabilities of human-cinematographers, real-time video editors, surveillance
personnel, and typical human performance in everyday situations; (b) develop
general tools for the commonsense cognitive interpretation of dynamic scenes
from the viewpoint of visuo-spatial cognition centred perceptual
narrativisation. Particular emphasis is placed on declarative representations
and interfacing mechanisms that seamlessly integrate within large-scale
cognitive (interaction) systems and companion technologies consisting of
diverse AI sub-components. For instance, the envisaged tools would provide
general capabilities for high-level commonsense reasoning about space, events,
actions, change, and interaction.
|
1306.1057 | Generic Correlation Increases Noncoherent MIMO Capacity | cs.IT math.IT | We study the high-SNR capacity of MIMO Rayleigh block-fading channels in the
noncoherent setting where neither transmitter nor receiver has a priori channel
state information. We show that when the number of receive antennas is
sufficiently large and the temporal correlation within each block is "generic"
(in the sense used in the interference-alignment literature), the capacity
pre-log is given by T(1-1/N) for T<N, where T denotes the number of transmit
antennas and N denotes the block length. A comparison with the widely used
constant block-fading channel (where the fading is constant within each block)
shows that for a large block length, generic correlation increases the capacity
pre-log by a factor of about four.
|
1306.1066 | Bayesian Differential Privacy through Posterior Sampling | stat.ML cs.LG | Differential privacy formalises privacy-preserving mechanisms that provide
access to a database. We pose the question of whether Bayesian inference itself
can be used directly to provide private access to data, with no modification.
The answer is affirmative: under certain conditions on the prior, sampling from
the posterior distribution can be used to achieve a desired level of privacy
and utility. To do so, we generalise differential privacy to arbitrary dataset
metrics, outcome spaces and distribution families. This allows us to also deal
with non-i.i.d. or non-tabular datasets. We prove bounds on the sensitivity of
the posterior to the data, which gives a measure of robustness. We also show
how to use posterior sampling to provide differentially private responses to
queries, within a decision-theoretic framework. Finally, we provide bounds on
the utility and on the distinguishability of datasets. The latter are
complemented by a novel use of Le Cam's method to obtain lower bounds. All our
general results hold for arbitrary database metrics, including those for the
common definition of differential privacy. For specific choices of the metric,
we give a number of examples satisfying our assumptions.
|
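As a toy illustration of the mechanism the abstract describes, one can answer a query about Bernoulli data by releasing a posterior sample instead of the empirical mean. The Beta-Bernoulli model and the prior strength below are illustrative, and the privacy level actually attained depends on the conditions on the prior established in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def private_release(data, alpha=1.0, beta=1.0, n_samples=1):
    """Answer a query about Bernoulli data by sampling the Beta posterior.

    Rather than releasing the empirical mean, we release a draw from
    Beta(alpha + k, beta + n - k); under suitable conditions on the prior,
    posterior sampling of this kind provides differential privacy.
    """
    k, n = int(np.sum(data)), len(data)
    return rng.beta(alpha + k, beta + n - k, size=n_samples)

data = rng.binomial(1, 0.3, size=1000)          # sensitive Bernoulli dataset
releases = private_release(data, alpha=5.0, beta=5.0, n_samples=2000)
```

The randomness that protects the data is the posterior's own spread: a stronger prior concentrates the posterior less around any one individual's contribution, trading utility for privacy.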
1306.1073 | Web Synchronization Simulations using the ResourceSync Framework | cs.DL cs.DB | Maintenance of multiple, distributed up-to-date copies of collections of
changing Web resources is important in many application contexts and is often
achieved using ad hoc or proprietary synchronization solutions. ResourceSync is
a resource synchronization framework that integrates with the Web architecture
and leverages XML sitemaps. We define a model for the ResourceSync framework as
a basis for understanding its properties. We then describe experiments in which
simulations of a variety of synchronization scenarios illustrate the effects of
model configuration on consistency, latency, and data transfer efficiency.
These results provide insight into which configurations are appropriate for
various application scenarios.
|
1306.1076 | CSMA using the Bethe Approximation: Scheduling and Utility Maximization | cs.NI cs.IT math.IT | CSMA (Carrier Sense Multiple Access), which resolves contentions over
wireless networks in a fully distributed fashion, has recently gained a lot of
attention since it has been proved that appropriate control of CSMA parameters
guarantees optimality in terms of stability (i.e., scheduling) and system-wide
utility (i.e., scheduling and congestion control). Most CSMA-based algorithms
rely on the popular MCMC (Markov Chain Monte Carlo) technique, which enables
one to find optimal CSMA parameters through iterative loops of
`simulation-and-update'. However, such a simulation-based approach often
becomes a major cause of exponentially slow convergence, being poorly adaptive
to flow/topology changes. In this paper, we develop distributed iterative
algorithms which produce approximate solutions with convergence in polynomial
time for both stability and utility maximization problems. In particular, for
the stability problem, the proposed distributed algorithm requires, somewhat
surprisingly, only one iteration among links. Our approach is motivated by the
Bethe approximation (introduced by Yedidia, Freeman and Weiss in 2005) allowing
us to express approximate solutions via a certain non-linear system with
polynomial size. Our polynomial convergence guarantee comes from directly
solving the non-linear system in a distributed manner, rather than multiple
simulation-and-update loops in existing algorithms. We provide numerical
results to show that the algorithm produces highly accurate solutions and
converges much faster than the prior ones.
|
1306.1083 | Discriminative Parameter Estimation for Random Walks Segmentation:
Technical Report | cs.CV cs.LG | The Random Walks (RW) algorithm is one of the most efficient and easy-to-use
probabilistic segmentation methods. By combining contrast terms with prior
terms, it provides accurate segmentations of medical images in a fully
automated manner. However, one of the main drawbacks of using the RW algorithm
is that its parameters have to be hand-tuned. We propose a novel discriminative
learning framework that estimates the parameters using a training dataset. The
main challenge we face is that the training samples are not fully supervised.
Specifically, they provide a hard segmentation of the images, instead of a
probabilistic segmentation. We overcome this challenge by treating the optimal
probabilistic segmentation that is compatible with the given hard segmentation
as a latent variable. This allows us to employ the latent support vector
machine formulation for parameter estimation. We show that our approach
significantly outperforms the baseline methods on a challenging dataset
consisting of real clinical 3D MRI volumes of skeletal muscles.
|
1306.1091 | Deep Generative Stochastic Networks Trainable by Backprop | cs.LG | We introduce a novel training principle for probabilistic models that is an
alternative to maximum likelihood. The proposed Generative Stochastic Networks
(GSN) framework is based on learning the transition operator of a Markov chain
whose stationary distribution estimates the data distribution. The transition
distribution of the Markov chain is conditional on the previous state,
generally involving a small move, so this conditional distribution has fewer
dominant modes, being unimodal in the limit of small moves. Thus, it is easier
to learn because it is easier to approximate its partition function, more like
learning to perform supervised function approximation, with gradients that can
be obtained by backprop. We provide theorems that generalize recent work on the
probabilistic interpretation of denoising autoencoders and obtain along the way
an interesting justification for dependency networks and generalized
pseudolikelihood, along with a definition of an appropriate joint distribution
and sampling mechanism even when the conditionals are not consistent. GSNs can
be used with missing inputs and can be used to sample subsets of variables
given the rest. We validate these theoretical results with experiments on two
image datasets using an architecture that mimics the Deep Boltzmann Machine
Gibbs sampler but allows training to proceed with simple backprop, without the
need for layerwise pretraining.
|
1306.1097 | Algebraic signal sampling, Gibbs phenomenon and Prony-type systems | math.NA cs.IT math.CA math.IT | Systems of Prony type appear in various signal reconstruction problems such
as finite rate of innovation, superresolution and Fourier inversion of
piecewise smooth functions. We propose a novel approach for solving Prony-type
systems, which requires sampling the signal at arithmetic progressions. By
keeping the number of equations small and fixed, we demonstrate that such
"decimation" can lead to practical improvements in the reconstruction accuracy.
As an application, we provide a solution to the so-called Eckhoff's conjecture,
which asked for reconstructing jump positions and magnitudes of a
piecewise-smooth function from its Fourier coefficients with maximal possible
asymptotic accuracy -- thus eliminating the Gibbs phenomenon.
|
1306.1101 | Practical Secrecy using Artificial Noise | cs.IT math.IT | In this paper, we consider the use of artificial noise for secure
communications. We propose the notion of practical secrecy as a new design
criterion based on the behavior of the eavesdropper's error probability $P_E$,
as the signal-to-noise ratio goes to infinity. We then show that the practical
secrecy can be guaranteed by the randomly distributed artificial noise with
specified power. We show that it is possible to achieve practical secrecy even
when the eavesdropper can afford more antennas than the transmitter.
|
1306.1102 | Detectability of communities in heterogeneous networks | physics.soc-ph cond-mat.stat-mech cs.SI | Communities are fundamental entities for the characterization of the
structure of real networks. The standard approach to the identification of
communities in networks is based on the optimization of a quality function
known as "modularity". Although modularity has been at the center of intense
research activity and many methods for its maximization have been proposed, not
much is yet known about the necessary conditions that communities need to
satisfy in order to be detectable with modularity maximization methods. Here,
we develop a simple theory to establish these conditions, and we successfully
apply it to various classes of network models. Our main result is that
heterogeneity in the degree distribution helps modularity to correctly recover
the community structure of a network and that, in the realistic case of
scale-free networks with degree exponent $\gamma < 2.5$, modularity is always
able to detect the presence of communities.
|
1306.1110 | An agent based multi-optional model for the diffusion of innovations | stat.AP cs.SI physics.soc-ph | We propose a model for the diffusion of several products competing in a
common market, based on a generalization of the Ising model of statistical
mechanics (the Potts model). Using an agent-based implementation, we analyze two
problems: (i) a three options case, i.e. to adopt a product A, a product B, or
non-adoption and (ii) a four options case, i.e. the adoption of product A,
product B, both, or none. In the first case we analyze a launching strategy for
one of the two products, which delays its launching with the objective of
competing with improvements. Market shares reached by each product are then
estimated at market saturation. Finally, simulations are carried out with
varying degrees of social network topology, uncertainty, and population
homogeneity.
|
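A minimal agent-based sketch of the three-option case (non-adoption, product A, product B) under Potts-style local dynamics. The ring-lattice network, influence strength, and seeding below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

rng = np.random.default_rng(4)

N, SWEEPS, BETA, K = 200, 50, 1.2, 6   # agents, sweeps, social influence, neighbours
OPTIONS = 3                            # 0 = non-adoption, 1 = product A, 2 = product B

state = np.zeros(N, dtype=int)                   # everyone starts as a non-adopter
state[rng.choice(N, 10, replace=False)] = 1      # seed a few early adopters of A
state[rng.choice(N, 10, replace=False)] = 2      # ...and of B (may overwrite some)

# Ring lattice: each agent is influenced by K nearest neighbours (self excluded).
offsets = np.array([d for d in range(-K // 2, K // 2 + 1) if d != 0])

for _ in range(SWEEPS * N):
    i = rng.integers(N)
    nb = state[(i + offsets) % N]
    # Potts-style local field: an option is more attractive the more
    # neighbours have adopted it (Glauber/softmax update).
    field = BETA * np.bincount(nb, minlength=OPTIONS)
    p = np.exp(field - field.max())
    state[i] = rng.choice(OPTIONS, p=p / p.sum())

shares = np.bincount(state, minlength=OPTIONS) / N   # market shares at the end
```

Delayed launch of product B would correspond to keeping option 2 unavailable for the first few sweeps; varying the network topology or BETA probes the uncertainty and homogeneity effects studied in the abstract.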
1306.1144 | Control Strategies for Mobile Robot With Obstacle Avoidance | cs.RO | Obstacle avoidance is an important task in the field of robotics, since the
goal of an autonomous robot is to reach its destination without collision.
Several algorithms have been proposed for obstacle avoidance, each with its own
drawbacks and benefits. In this survey paper, we discuss different algorithms
for robot navigation with obstacle avoidance. We also compare these algorithms
and describe their characteristics, advantages, and disadvantages, so that an
efficient final algorithm can be selected by fusing the discussed approaches. A
comparison table is provided to support this assessment.
|
1306.1153 | Efficient Single-Source Shortest Path and Distance Queries on Large
Graphs | cs.DB cs.DS | This paper investigates two types of graph queries: {\em single source
distance (SSD)} queries and {\em single source shortest path (SSSP)} queries.
Given a node $v$ in a graph $G$, an SSD query from $v$ asks for the distance
from $v$ to any other node in $G$, while an SSSP query retrieves the shortest
path from $v$ to any other node. These two types of queries are fundamental
building blocks of numerous graph algorithms, and they find important
applications in graph analysis, especially in the computation of graph
measures. Most of the existing solutions for SSD and SSSP queries, however,
require that the input graph fits in the main memory, which renders them
inapplicable for the massive disk-resident graphs commonly used in web and
social applications. The only exceptions are a few techniques that are designed
to be I/O efficient, but they all focus on undirected and/or unweighted graphs,
and they only offer sub-optimal query efficiency.
To address the deficiency of existing work, this paper presents {\em
Highways-on-Disk (HoD)}, a disk-based index that supports both SSD and SSSP
queries on directed and weighted graphs. The key idea of HoD is to augment the
input graph with a set of auxiliary edges, and exploit them during query
processing to reduce I/O and computation costs. We experimentally evaluate HoD
on both directed and undirected real-world graphs with up to billions of nodes
and edges, and we demonstrate that HoD significantly outperforms alternative
solutions in terms of query efficiency.
|
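The in-memory baseline that disk-based indexes like HoD compete against is textbook Dijkstra; a minimal version for directed, non-negatively weighted graphs follows (the adjacency-dict representation is just an illustrative choice):

```python
import heapq

def dijkstra(graph, source):
    """Single-source distances (SSD) and shortest-path tree (SSSP) on a
    directed, non-negatively weighted graph given as {u: [(v, w), ...]}."""
    dist, parent = {source: 0}, {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                        # stale heap entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], parent[v] = nd, u  # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist, parent

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
dist, parent = dijkstra(g, "a")             # dist == {"a": 0, "b": 1, "c": 3}
```

The `dist` map answers the SSD query and following `parent` pointers back from any node reconstructs the SSSP answer; the limitation the paper targets is that this approach assumes the whole graph fits in main memory.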
1306.1154 | Sparse Representation of a Polytope and Recovery of Sparse Signals and
Low-rank Matrices | cs.IT math.IT math.ST stat.ML stat.TH | This paper considers compressed sensing and affine rank minimization in both
noiseless and noisy cases and establishes sharp restricted isometry conditions
for sparse signal and low-rank matrix recovery. The analysis relies on a key
technical tool which represents points in a polytope by convex combinations of
sparse vectors. The technique is elementary yet leads to sharp results.
It is shown that for any given constant $t\ge {4/3}$, in compressed sensing
$\delta_{tk}^A < \sqrt{(t-1)/t}$ guarantees the exact recovery of all $k$
sparse signals in the noiseless case through the constrained $\ell_1$
minimization, and similarly in affine rank minimization
$\delta_{tr}^\mathcal{M}< \sqrt{(t-1)/t}$ ensures the exact reconstruction of
all matrices with rank at most $r$ in the noiseless case via the constrained
nuclear norm minimization. Moreover, for any $\epsilon>0$,
$\delta_{tk}^A<\sqrt{\frac{t-1}{t}}+\epsilon$ is not sufficient to guarantee
the exact recovery of all $k$-sparse signals for large $k$. A similar result
also holds for matrix recovery. In addition, the conditions $\delta_{tk}^A <
\sqrt{(t-1)/t}$ and $\delta_{tr}^\mathcal{M}< \sqrt{(t-1)/t}$ are also shown to
be sufficient respectively for stable recovery of approximately sparse signals
and low-rank matrices in the noisy case.
|
1306.1157 | Linear Network Coding, Linear Index Coding and Representable Discrete
Polymatroids | cs.IT math.IT | Discrete polymatroids are the multi-set analogue of matroids. In this paper,
we explore the connections among linear network coding, linear index coding and
representable discrete polymatroids. We consider vector linear solutions of
networks over a field $\mathbb{F}_q,$ with possibly different message and edge
vector dimensions, which are referred to as linear fractional solutions. We
define a \textit{discrete polymatroidal} network and show that a linear
fractional solution over a field $\mathbb{F}_q,$ exists for a network if and
only if the network is discrete polymatroidal with respect to a discrete
polymatroid representable over $\mathbb{F}_q.$ An algorithm to construct
networks starting from certain class of discrete polymatroids is provided.
Every representation over $\mathbb{F}_q$ for the discrete polymatroid, results
in a linear fractional solution over $\mathbb{F}_q$ for the constructed
network. Next, we consider the index coding problem and show that a linear
solution to an index coding problem exists if and only if there exists a
representable discrete polymatroid satisfying certain conditions which are
determined by the index coding problem considered. El Rouayheb et al. showed
that the problem of finding a multi-linear representation for a matroid can be
reduced to finding a \textit{perfect linear index coding solution} for an index
coding problem obtained from that matroid. We generalize the result of El
Rouayheb et al. by showing that the problem of finding a representation for a
discrete polymatroid can be reduced to finding a perfect linear index coding
solution for an index coding problem obtained from that discrete polymatroid.
|
1306.1185 | Multiclass Total Variation Clustering | stat.ML cs.LG math.OC | Ideas from the image processing literature have recently motivated a new set
of clustering algorithms that rely on the concept of total variation. While
these algorithms perform well for bi-partitioning tasks, their recursive
extensions yield unimpressive results for multiclass clustering tasks. This
paper presents a general framework for multiclass total variation clustering
that does not rely on recursion. The results greatly outperform previous total
variation algorithms and compare well with state-of-the-art NMF approaches.
|
1306.1187 | Decentralized Data Reduction with Quantization Constraints | cs.IT math.IT | A guiding principle for data reduction in statistical inference is the
sufficiency principle. This paper extends the classical sufficiency principle
to decentralized inference, i.e., data reduction needs to be achieved in a
decentralized manner. We examine the notions of local and global sufficient
statistics and the relationship between the two for decentralized inference
under different observation models. We then consider the impacts of
quantization on decentralized data reduction which is often needed when
communications among sensors are subject to finite capacity constraints. The
central question we intend to ask is: if each node in a decentralized inference
system has to summarize its data using a finite number of bits, is it still
optimal to implement data reduction using global sufficient statistics prior to
quantization? We show that the answer is negative using a simple example and
proceed to identify conditions under which sufficiency based data reduction
followed by quantization is indeed optimal. They include the well-known case
when the data at decentralized nodes are conditionally independent as well as a
class of problems with conditionally dependent observations that admit
conditional independence structure through the introduction of an appropriately
chosen hidden variable.
|
1306.1267 | Loop Calculus and Bootstrap-Belief Propagation for Perfect Matchings on
Arbitrary Graphs | cond-mat.stat-mech cs.AI math.PR | This manuscript discusses computation of the Partition Function (PF) and the
Minimum Weight Perfect Matching (MWPM) on arbitrary, non-bipartite graphs. We
present two novel problem formulations - one for computing the PF of a Perfect
Matching (PM) and one for finding MWPMs - that build upon the inter-related
Bethe Free Energy (BFE), Belief Propagation (BP), Loop Calculus (LC), Integer Linear
Programming (ILP) and Linear Programming (LP) frameworks. First, we describe an
extension of the LC framework to the PM problem. The resulting formulas, coined
(fractional) Bootstrap-BP, express the PF of the original model via the BFE of
an alternative PM problem. We then study the zero-temperature version of this
Bootstrap-BP formula for approximately solving the MWPM problem. We do so by
leveraging the Bootstrap-BP formula to construct a sequence of MWPM problems,
where each new problem in the sequence is formed by contracting odd-sized
cycles (or blossoms) from the previous problem. This Bootstrap-and-Contract
procedure converges reliably and generates an empirically tight upper bound for
the MWPM. We conclude by discussing the relationship between our iterative
procedure and the famous Blossom Algorithm of Edmonds '65 and demonstrate the
performance of the Bootstrap-and-Contract approach on a variety of weighted PM
problems.
|
1306.1271 | Predictability of social interactions | cs.SI cs.CY physics.soc-ph stat.AP | The ability to predict social interactions between people has profound
applications including targeted marketing and prediction of information
diffusion and disease propagation. Previous work has shown that the location of
an individual at any given time is highly predictable. This study examines the
predictability of social interactions between people to determine whether
interaction patterns are similarly predictable. I find that the locations and
times of interactions for an individual are highly predictable; however, the
other person the individual interacts with is less predictable. Furthermore, I
show that knowledge of the locations and times of interactions has almost no
effect on the predictability of the other person. Finally I demonstrate that a
simple Markov chain model is able to achieve close to the upper bound in terms
of predicting the next person with whom a given individual will interact.
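The simple Markov-chain predictor described in the abstract can be sketched as follows (an illustrative toy, not the study's actual model or data; all names here are hypothetical): a first-order chain over interaction partners, fit by counting transitions and predicting the most frequent successor of the current partner.

```python
# Toy first-order Markov chain over interaction partners (illustrative
# sketch only; the contact sequence is invented, not the study's data).
from collections import Counter, defaultdict

def fit_markov(sequence):
    """Count partner -> next-partner transitions in a contact sequence."""
    trans = defaultdict(Counter)
    for a, b in zip(sequence, sequence[1:]):
        trans[a][b] += 1
    return trans

def predict_next(trans, current):
    """Most likely next partner after `current`; None if unseen."""
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]

contacts = ["alice", "bob", "alice", "carol", "alice", "bob", "alice", "bob"]
model = fit_markov(contacts)
print(predict_next(model, "alice"))  # "bob" (alice -> bob 3 times, -> carol once)
```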
|
1306.1298 | Multiclass Semi-Supervised Learning on Graphs using Ginzburg-Landau
Functional Minimization | stat.ML cs.LG math.ST physics.data-an stat.TH | We present a graph-based variational algorithm for classification of
high-dimensional data, generalizing the binary diffuse interface model to the
case of multiple classes. Motivated by total variation techniques, the method
involves minimizing an energy functional made up of three terms. The first two
terms promote a stepwise continuous classification function with sharp
transitions between classes, while preserving symmetry among the class labels.
The third term is a data fidelity term, allowing us to incorporate prior
information into the model in a semi-supervised framework. The performance of
the algorithm on synthetic data, as well as on the COIL and MNIST benchmark
datasets, is competitive with state-of-the-art graph-based multiclass
segmentation methods.
|
1306.1300 | Personalized Email Community Detection using Collaborative Similarity
Measure | cs.SI physics.soc-ph | Email service providers have employed many email classification and
prioritization systems over the last decade to improve their services. In order
to assist email services, we propose a personalized email community detection
method to discover the groupings of email users based on their structural and
semantic intimacy. We extract the personalized social graph from a set of
emails by uniquely leveraging each node with communication behavior.
Subsequently, collaborative similarity measure (CSM) based intra-graph
clustering approach detects personalized communities. The empirical analysis
shows the effectiveness of the resultant communities in terms of evaluation
measures, i.e. density, entropy and f-measure. Moreover, email strainer,
dynamic group prediction, and fraudulent account detection are suggested as the
potential applications from both the service provider's and the user's points of view.
|
1306.1301 | Recognition of Indian Sign Language in Live Video | cs.CV | Sign Language Recognition has emerged as one of the important areas of
research in Computer Vision. The difficulty faced by researchers is that
instances of signs vary with both motion and appearance. Thus, in this
paper a novel approach for recognizing various alphabets of Indian Sign
Language is proposed, where continuous video sequences of the signs have been
considered. The proposed system comprises three stages: preprocessing,
feature extraction and classification. The preprocessing stage includes skin
filtering and histogram matching. Eigenvalues and eigenvectors are used for
the feature extraction stage, and finally an eigenvalue-weighted Euclidean
distance is used to recognize the sign. The system deals with bare hands,
thus allowing the user to interact with it in a natural way. We have
considered 24 different alphabets in the video sequences and attained a
success rate of 96.25%.
|
1306.1304 | Towards a Simple Relationship to Estimate the Capacity of Static and
Mobile Wireless Networks | cs.NI cs.IT cs.SY math.IT | Extensive research has been done on studying the capacity of wireless
multi-hop networks. These efforts have led to many sophisticated and customized
analytical studies on the capacity of particular networks. While most of the
analyses are intellectually challenging, they lack universal properties that
can be extended to study the capacity of a different network. In this paper, we
sift through various capacity-impacting parameters and present a simple
relationship that can be used to estimate the capacity of both static and
mobile networks. Specifically, we show that the network capacity is determined
by the average number of simultaneous transmissions, the link capacity and the
average number of transmissions required to deliver a packet to its
destination. Our result is valid for both finite networks and asymptotically
infinite networks. We then use this result to explain and better understand the
insights of some existing results on the capacity of static networks, mobile
networks and hybrid networks and the multicast capacity. The capacity analysis
using the aforementioned relationship often becomes simpler. The relationship
can be used as a powerful tool to estimate the capacity of different networks.
Our work makes important contributions towards developing a generic methodology
for network capacity analysis that is applicable to a variety of different
scenarios.
|
1306.1310 | Electromagnetic Lens-focusing Antenna Enabled Massive MIMO | cs.IT math.IT | Massive multiple-input multiple-output (MIMO) techniques have been recently
advanced to tremendously improve the performance of wireless networks. However,
the use of very large antenna arrays brings new issues, such as the
significantly increased hardware cost and signal processing cost and
complexity. In order to reap the enormous gain of massive MIMO and yet reduce
its cost to an affordable level, this paper proposes a novel system design by
integrating an electromagnetic (EM) lens with the large antenna array, termed
\emph{electromagnetic lens antenna} (ELA). An ELA has the capability of
focusing the power of any incident plane wave passing through the EM lens to a
small subset of the antenna array, while the location of the focal area
depends on the angle of arrival (AoA) of the wave. As compared to
conventional antenna arrays without the EM lens, the proposed system can
substantially reduce the number of required radio frequency (RF) chains at the
receiver and hence, the implementation costs. In this paper, we investigate the
proposed system under a simplified single-user uplink transmission setup, by
characterizing the power distribution of the ELA as well as the resulting
channel model. Furthermore, by assuming antenna selection used at the receiver,
we show the throughput gains of the proposed system over conventional antenna
arrays given the same number of selected antennas.
|
1306.1323 | Verdict Accuracy of Quick Reduct Algorithm using Clustering and
Classification Techniques for Gene Expression Data | cs.LG cs.CE stat.ML | In most gene expression data, the number of training samples is very small
compared to the large number of genes involved in the experiments. However,
among the large number of genes, only a small fraction is effective for
performing a certain task. Furthermore, a small subset of genes is desirable in
developing gene expression based diagnostic tools for delivering reliable and
understandable results. With the gene selection results, the cost of biological
experiments and decision making can be greatly reduced by analyzing only the marker
genes. An important application of gene expression data in functional genomics
is to classify samples according to their gene expression profiles. Feature
selection (FS) is a process which attempts to select more informative features.
It is one of the important steps in knowledge discovery. Conventional
supervised FS methods evaluate various feature subsets using an evaluation
function or metric to select only those features which are related to the
decision classes of the data under consideration. This paper studies a feature
selection method based on rough set theory. Further, the K-Means and Fuzzy
C-Means (FCM) algorithms have been applied to the reduced feature set without
considering class labels. The obtained results are then compared with the
original class labels. A Back Propagation Network (BPN) has also been used for
classification. The performance of K-Means, FCM and BPN is then analyzed
through the confusion matrix. It is found that BPN performs comparatively
well.
|
1306.1326 | Performance analysis of unsupervised feature selection methods | cs.LG | Feature selection (FS) is a process which attempts to select more informative
features. In some cases, too many redundant or irrelevant features may
overpower main features for classification. Feature selection can remedy this
problem and therefore improve the prediction accuracy and reduce the
computational overhead of classification algorithms. The main aim of feature
selection is to determine a minimal feature subset from a problem domain while
retaining a suitably high accuracy in representing the original features. In
this paper, Principal Component Analysis (PCA), Rough PCA, Unsupervised Quick
Reduct (USQR) algorithm and Empirical Distribution Ranking (EDR) approaches are
applied to discover discriminative features that will be the most adequate ones
for classification. Efficiency of the approaches is evaluated using standard
classification metrics.
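As a rough illustration of PCA-style unsupervised feature selection (a generic sketch on synthetic data; this is not the paper's USQR or EDR procedures), original features can be ranked by the magnitude of their loadings on the leading principal components:

```python
# Generic PCA-based unsupervised feature ranking (illustrative sketch;
# not the USQR or EDR algorithms from the paper).
import numpy as np

def pca_feature_ranking(X, n_components=2):
    """Rank original features by |loading| on the top principal components."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = Vt[:n_components].T * s[:n_components]   # features x components
    score = np.abs(loadings).sum(axis=1)                # per-feature relevance
    return np.argsort(-score)                           # most relevant first

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))
informative = latent * np.array([[3.0, -2.0]])          # 2 high-variance features
noise = rng.normal(scale=0.1, size=(100, 3))            # 3 low-variance features
X = np.hstack([informative, noise])
ranking = pca_feature_ranking(X)
print(ranking[:2])  # the informative features, indices 0 and 1
```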
|
1306.1334 | Tuple Value Based Multiplicative Data Perturbation Approach To Preserve
Privacy In Data Stream Mining | cs.DB | Huge volumes of data from domain-specific applications such as the medical,
financial, library, telephone and shopping records of individuals are
regularly generated. Sharing these data has proved to be beneficial for data
mining applications. On one hand, such data is an important asset for
business decision making through analysis. On the other hand, data privacy
concerns may prevent data owners from sharing information for data analysis.
In order to share data while preserving privacy, the data owner must come up
with a solution that achieves the dual goals of privacy preservation and
accuracy of the data mining tasks, namely clustering and classification. An
efficient and effective approach is proposed that aims to protect the
privacy of sensitive information while obtaining data clusters with minimal
information loss.
|
1306.1343 | The User Feedback on SentiWordNet | cs.CL cs.IR | With the release of SentiWordNet 3.0 the related Web interface has been
restyled and improved in order to allow users to submit feedback on the
SentiWordNet entries, in the form of the suggestion of alternative triplets of
values for an entry. This paper reports on the release of the user feedback
collected so far and on the plans for the future.
|
1306.1346 | Rethinking the Secrecy Outage Formulation: A Secure Transmission Design
Perspective | cs.IT math.IT | This letter studies information-theoretic security without knowing the
eavesdropper's channel fading state. We present an alternative secrecy outage
formulation to measure the probability that message transmissions fail to
achieve perfect secrecy. Using this formulation, we design two transmission
schemes that satisfy the given security requirement while achieving good
throughput performance.
|
1306.1350 | Diffusion map for clustering fMRI spatial maps extracted by independent
component analysis | cs.CE cs.LG stat.ML | Functional magnetic resonance imaging (fMRI) produces data about activity
inside the brain, from which spatial maps can be extracted by independent
component analysis (ICA). In datasets, there are n spatial maps that contain p
voxels. The number of voxels is very high compared to the number of analyzed
spatial maps. Clustering of the spatial maps is usually based on correlation
matrices. This usually works well, although such a similarity matrix inherently
can explain only a certain amount of the total variance contained in the
high-dimensional data where n is relatively small but p is large. For
high-dimensional space, it is reasonable to perform dimensionality reduction
before clustering. In this research, we used the recently developed diffusion
map for dimensionality reduction in conjunction with spectral clustering. This
research revealed that the diffusion map based clustering worked as well as the
more traditional methods, and produced more compact clusters when needed.
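The dimensionality-reduction step can be sketched roughly as follows (a generic diffusion-map illustration on synthetic data with assumed parameters; this is not the paper's fMRI pipeline): build a Gaussian affinity, normalize it into a Markov matrix, and embed each point via the leading non-trivial eigenvectors.

```python
# Generic diffusion-map embedding on synthetic data (illustrative sketch;
# data sizes and the kernel width eps are hypothetical choices).
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Return (n, n_components) diffusion coordinates for the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-d2 / eps)                                # Gaussian kernel
    P = K / K.sum(axis=1, keepdims=True)                 # row-stochastic matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # drop the trivial constant eigenvector (eigenvalue 1); scale the rest
    return vecs[:, 1:1 + n_components] * vals[1:1 + n_components] ** t

# two well-separated groups of n=10 synthetic "maps" with p=20 voxels each
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (5, 20)), rng.normal(1.0, 0.1, (5, 20))])
Y = diffusion_map(X, eps=2.0, n_components=2)
# the leading diffusion coordinate separates the two groups by sign, so a
# standard clustering method applied to Y recovers them
assert (Y[:5, 0] > 0).all() != (Y[5:, 0] > 0).all()
```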
|
1306.1356 | Analysis $\ell_1$-recovery with frames and Gaussian measurements | cs.IT math.IT | This paper provides novel results for the recovery of signals from
undersampled measurements based on analysis $\ell_1$-minimization, when the
analysis operator is given by a frame. We provide both so-called uniform and
nonuniform recovery guarantees for cosparse (analysis-sparse) signals using
Gaussian random measurement matrices. The nonuniform result relies on a
recovery condition via tangent cones and the uniform recovery guarantee is
based on an analysis version of the null space property. Examining these
conditions for Gaussian random matrices leads to precise bounds on the number
of measurements required for successful recovery. In the special case of
standard sparsity, our result improves a bound due to Rudelson and Vershynin
concerning the exact reconstruction of sparse signals from Gaussian
measurements with respect to the constant and extends it to stability under
passing to approximately sparse signals and to robustness under noise on the
measurements.
|
1306.1358 | Geometric operations implemented by conformal geometric algebra neural
nodes | cs.CV cs.NE math.RA | Geometric algebra is an optimal framework for calculating with vectors. The
geometric algebra of a space includes elements that represent all of its
subspaces (lines, planes, volumes, ...). Conformal geometric algebra expands
this approach to elementary representations of arbitrary points, point pairs,
lines, circles, planes and spheres. Apart from including curved objects,
conformal geometric algebra has an elegant unified quaternion-like
representation for all proper and improper Euclidean transformations, including
reflections at spheres, general screw transformations and scaling. Expanding
the concepts of real and complex neurons we arrive at the new powerful concept
of conformal geometric algebra neurons. These neurons can easily take the
above-mentioned geometric objects or sets of these objects as inputs and apply a wide
range of geometric transformations via the geometric algebra valued weights.
|
1306.1365 | The verification of virtual community members socio-demographic profile | cs.CY cs.SI physics.soc-ph | This article addresses the problem of developing a method for validating the
socio-demographic profile of web-community members based on an analysis of
their socio-demographic characteristics. The relevance of the paper stems
from the need to identify web-community members by means of
computer-linguistic analysis of their information track (all the information
about a web-community member that is posted on the Internet). A formal model
of the basic socio-demographic characteristics of a virtual community member
is constructed, and an algorithm for verifying these characteristics is
developed.
|
1306.1392 | PyHST2: an hybrid distributed code for high speed tomographic
reconstruction with iterative reconstruction and a priori knowledge
capabilities | math.NA cs.CV | We present the PyHST2 code which is in service at ESRF for phase-contrast and
absorption tomography. This code has been engineered to sustain the high data
flow typical of the third generation synchrotron facilities (10 terabytes per
experiment) by adopting a distributed and pipelined architecture. The code
implements, besides a default filtered backprojection reconstruction, iterative
reconstruction techniques with a-priori knowledge. The latter are used to
improve the reconstruction quality or in order to reduce the required data
volume and reach a given quality goal. The implemented a-priori knowledge
techniques are based on the total variation penalisation and a recently
introduced convex functional which is based on overlapping patches.
We give details of the different methods and their implementations while the
code is distributed under free license.
We provide methods for estimating, in the absence of ground-truth data, the
optimal parameter values for the a-priori techniques.
|
1306.1421 | Bayesian Inference of Natural Rankings in Incomplete Competition
Networks | physics.soc-ph cs.AI cs.SI physics.data-an | Competition between a complex system's constituents and a corresponding
reward mechanism based on it have profound influence on the functioning,
stability, and evolution of the system. But determining the dominance hierarchy
or ranking among the constituent parts from the strongest to the weakest --
essential in determining reward or penalty -- is almost always an ambiguous
task due to the incomplete nature of competition networks. Here we introduce
``Natural Ranking,'' a desirably unambiguous ranking method applicable to a
complete (full) competition network, and formulate an analytical model based on
the Bayesian formula inferring the expected mean and error of the natural
ranking of nodes from an incomplete network. We investigate its potential and
uses in solving issues in ranking by applying it to a real-world competition
network of economic and social importance.
|
1306.1433 | Tight Lower Bound on the Probability of a Binomial Exceeding its
Expectation | cs.LG stat.ML | We give the proof of a tight lower bound on the probability that a binomial
random variable exceeds its expected value. The inequality plays an important
role in a variety of contexts, including the analysis of relative deviation
bounds in learning theory and generalization bounds for unbounded loss
functions.
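As a numeric illustration (our own check; the constant 1/4 and the condition mp >= 1 are assumed here as the form such bounds take, not quoted from the paper's statement), the tail probability P(B >= E[B]) can be computed exactly and compared against 1/4:

```python
# Exact numeric check of the bound's form (illustrative; the constant 1/4
# and the regime m*p >= 1 are assumptions, not the paper's exact theorem).
from math import ceil, comb

def binom_upper_tail(m, p, k):
    """P(B >= k) for B ~ Binomial(m, p), computed exactly."""
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(k, m + 1))

# P(B >= E[B]) stays above 1/4 on these examples with m * p >= 1
for m, p in [(10, 0.3), (50, 0.1), (100, 0.5)]:
    tail = binom_upper_tail(m, p, ceil(m * p))
    print(f"m={m}, p={p}: P(B >= mp) = {tail:.4f}")
    assert tail > 0.25
```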
|
1306.1462 | K-Algorithm A Modified Technique for Noise Removal in Handwritten
Documents | cs.CV | OCR has been an active research area for the last few decades. OCR performs the
recognition of the text in the scanned document image and converts it into
editable form. The OCR process can have several stages like pre-processing,
segmentation, recognition and post processing. The pre-processing stage is a
crucial stage for the success of OCR, which mainly deals with noise removal. In
the present paper, a modified technique for noise removal named as K-Algorithm
has been proposed, which has two stages: filtering and binarization. The
proposed technique shows improved results in comparison to the median
filtering technique.
|
1306.1467 | Highly Scalable, Parallel and Distributed AdaBoost Algorithm using Light
Weight Threads and Web Services on a Network of Multi-Core Machines | cs.DC cs.LG | AdaBoost is an important algorithm in machine learning and is being widely
used in object detection. AdaBoost works by iteratively selecting the best
amongst weak classifiers, and then combines several weak classifiers to obtain
a strong classifier. Even though AdaBoost has proven to be very effective, its
learning execution time can be quite large depending upon the application e.g.,
in face detection, the learning time can be several days. Due to its increasing
use in computer vision applications, the learning time needs to be drastically
reduced so that an adaptive near real time object detection system can be
incorporated. In this paper, we develop a hybrid parallel and distributed
AdaBoost algorithm that exploits the multiple cores in a CPU via lightweight
threads, and also uses multiple machines via a web service software
architecture to achieve high scalability. We present a novel hierarchical web
services based distributed architecture and achieve nearly linear speedup up to
the number of processors available to us. In comparison with the previously
published work, which used a single level master-slave parallel and distributed
implementation [1] and only achieved a speedup of 2.66 on four nodes, we
achieve a speedup of 95.1 on 31 workstations each having a quad-core processor,
resulting in a learning time of only 4.8 seconds per feature.
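The parallelism exploited here comes from the fact that evaluating candidate weak classifiers is independent across features. A minimal single-machine sketch of that idea (hypothetical code, not the authors' implementation; decision stumps stand in for the weak classifiers) farms the per-feature search out to a thread pool:

```python
# Sketch of one parallel boosting round: the best decision stump is found
# by searching features concurrently (illustrative; not the paper's system).
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def stump_error(X, y, w, j):
    """Best weighted error achievable by a threshold stump on feature j."""
    best = (1.0, 0.0, 1)                      # (error, threshold, sign)
    for thr in np.unique(X[:, j]):
        for sign in (1, -1):
            pred = np.where(X[:, j] >= thr, sign, -sign)
            err = w[pred != y].sum()
            if err < best[0]:
                best = (err, thr, sign)
    return (j,) + best

def adaboost_round(X, y, w, pool):
    """One boosting round: pick the lowest-error stump across features."""
    results = pool.map(lambda j: stump_error(X, y, w, j), range(X.shape[1]))
    return min(results, key=lambda r: r[1])   # (feature, err, thr, sign)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = np.where(X[:, 3] > 0, 1, -1)              # label depends only on feature 3
w = np.full(len(y), 1 / len(y))               # uniform boosting weights
with ThreadPoolExecutor(max_workers=4) as pool:
    j, err, thr, sign = adaboost_round(X, y, w, pool)
print(j, err)  # feature 3 is selected with zero weighted error
```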
|
1306.1478 | Agents and owl-s based semantic web service discovery with user
preference support | cs.IR cs.SE | Service-oriented computing (SOC) is an interdisciplinary paradigm that
revolutionizes the very fabric of distributed software development.
Applications that adopt service-oriented architectures (SOA) can evolve
during their lifespan and adapt to changing or unpredictable environments
more easily. SOA is built around the concept of Web services. Although Web
services constitute a revolution in the World Wide Web, they are always
regarded as non-autonomous entities and can be exploited only after their
discovery. With the help of software agents, Web services are becoming more
efficient and more dynamic. The topic of this paper is the development of an
agent-based approach for Web service discovery and selection in which OWL-S
is used to describe Web services, QoS and the service customer's request. We
develop an efficient semantic service matching which takes concept properties
into account to match concepts in the Web service and service customer
request descriptions. Our approach is based on an architecture composed of
four layers: a Web service and request description layer, a functional match
layer, a QoS computing layer and a reputation computing layer.
|
1306.1486 | Strong Structural Controllability and Observability of Linear
Time-Varying Systems | math.OC cs.SY math.CO | In this note we consider continuous-time systems x'(t) = A(t) x(t) + B(t)
u(t), y(t) = C(t) x(t) + D(t) u(t), as well as discrete-time systems x(t+1) =
A(t) x(t) + B(t) u(t), y(t) = C(t) x(t) + D(t) u(t) whose coefficient matrices
A, B, C and D are not exactly known. More precisely, all that is known about
the systems is their nonzero pattern, i.e., the locations of the nonzero
entries in the coefficient matrices. We characterize the patterns that
guarantee controllability and observability, respectively, for all choices of
nonzero time functions at the matrix positions defined by the pattern, which
extends a result by Mayeda and Yamada for time-invariant systems. As it turns
out, the conditions on the patterns for time-invariant and for time-varying
discrete-time systems coincide, provided that the underlying time interval is
sufficiently long. In contrast, the conditions for time-varying continuous-time
systems are more restrictive than in the time-invariant case.
|
1306.1491 | Gaussian Process-Based Decentralized Data Fusion and Active Sensing for
Mobility-on-Demand System | cs.RO cs.DC cs.LG cs.MA | Mobility-on-demand (MoD) systems have recently emerged as a promising
paradigm of one-way vehicle sharing for sustainable personal urban mobility in
densely populated cities. In this paper, we enhance the capability of a MoD
system by deploying robotic shared vehicles that can autonomously cruise the
streets to be hailed by users. A key challenge to managing the MoD system
effectively is that of real-time, fine-grained mobility demand sensing and
prediction. This paper presents a novel decentralized data fusion and active
sensing algorithm for real-time, fine-grained mobility demand sensing and
prediction with a fleet of autonomous robotic vehicles in a MoD system. Our
Gaussian process (GP)-based decentralized data fusion algorithm can achieve a
fine balance between predictive power and time efficiency. We theoretically
guarantee its predictive performance to be equivalent to that of a
sophisticated centralized sparse approximation for the GP model: The
computation of such a sparse approximate GP model can thus be distributed among
the MoD vehicles, hence achieving efficient and scalable demand prediction.
Though our decentralized active sensing strategy is devised to gather the most
informative demand data for demand prediction, it can achieve a dual effect of
fleet rebalancing to service the mobility demands. Empirical evaluation on
real-world mobility demand data shows that our proposed algorithm can achieve a
better balance between predictive accuracy and time efficiency than
state-of-the-art algorithms.
|
1306.1511 | SPATA: A Seeding and Patching Algorithm for Hybrid Transcriptome
Assembly | cs.CE q-bio.GN | Transcriptome assembly from RNA-Seq reads is an active area of bioinformatics
research. The ever-declining cost and the increasing depth of RNA-Seq have
provided unprecedented opportunities to better identify expressed transcripts.
However, the nonlinear transcript structures and the ultra-high throughput of
RNA-Seq reads pose significant algorithmic and computational challenges to the
existing transcriptome assembly approaches, either reference-guided or de novo.
While reference-guided approaches offer good sensitivity, they rely on
alignment results of the splice-aware aligners and are thus unsuitable for
species with incomplete reference genomes. In contrast, de novo approaches do
not depend on the reference genome but face a computationally daunting task
derived from the complexity of the graph built for the whole transcriptome. In
response to these challenges, we present a hybrid approach to exploit an
incomplete reference genome without relying on splice-aware aligners. We have
designed a split-and-align procedure to efficiently localize the reads to
individual genomic loci, which is followed by an accurate de novo assembly to
assemble reads falling into each locus. Using extensive simulation data, we
demonstrate a high accuracy and precision in transcriptome reconstruction by
comparing to selected transcriptome assembly tools. Our method is implemented
in assemblySAM, a GUI software freely available at
http://sammate.sourceforge.net.
|
1306.1520 | Policy Search: Any Local Optimum Enjoys a Global Performance Guarantee | cs.LG cs.AI cs.RO math.OC | Local Policy Search is a popular reinforcement learning approach for handling
large state spaces. Formally, it searches locally in a parameterized policy
space in order to maximize the associated value function averaged over some
predefined distribution. It is probably commonly believed that the best one
can hope in general from such an approach is to get a local optimum of this
criterion. In this article, we show the following surprising result:
\emph{any} (approximate) \emph{local optimum} enjoys a \emph{global
performance guarantee}. We compare this guarantee with the one that is
satisfied by Direct Policy Iteration, an approximate dynamic programming
algorithm that does some form of Policy Search: while the approximation error
of Local Policy Search may generally be bigger (because local search requires
considering a space of stochastic policies), we argue that the
concentrability coefficient that appears in the performance bound is much
nicer. Finally, we discuss several practical and theoretical consequences of
our analysis.
|
1306.1553 | Direct Uncertainty Estimation in Reinforcement Learning | cs.AI | The optimal probabilistic approach to reinforcement learning is
computationally infeasible. Its simplification, which consists in neglecting
the difference between the true environment and its model estimated from a
limited number of observations, causes the exploration-vs-exploitation
problem. Uncertainty can be expressed in terms of a probability distribution
over the space of environment models, and this uncertainty can be propagated
to the action-value function via Bellman iterations, though these are not
sufficiently efficient computationally. We consider the possibility of
directly measuring the uncertainty of the action-value function, and analyze
the sufficiency of this simplified approach.
|
1306.1556 | Diversity Polynomials for the Analysis of Temporal Correlations in
Wireless Networks | cs.IT cs.NI math.IT math.PR | The interference in wireless networks is temporally correlated, since the
node or user locations are correlated over time and the interfering
transmitters are a subset of these nodes. For a wireless network where
(potential) interferers form a Poisson point process and use ALOHA for channel
access, we calculate the joint success and outage probabilities of n
transmissions over a reference link. The results are based on the diversity
polynomial, which captures the temporal interference correlation. The joint
outage probability is used to determine the diversity gain (as the SIR goes to
infinity), and it turns out that there is no diversity gain in simple
retransmission schemes, even with independent Rayleigh fading over all links.
We also determine the complete joint SIR distribution for two transmissions and
the distribution of the local delay, which is the time until a repeated
transmission over the reference link succeeds.
|
1306.1557 | Extending Universal Intelligence Models with Formal Notion of
Representation | cs.AI | Solomonoff induction is known to be universal, but incomputable. Its
approximations, namely, the Minimum Description (or Message) Length (MDL)
principles, are adopted in practice in an efficient but non-universal form.
Recent attempts to bridge this gap have led to the development of the
Representational MDL (RMDL) principle, which originates from a formal
decomposition of the task of induction. In this paper, a possible extension of
the RMDL principle in the context of universal intelligence agents is
considered, for which the introduction of representations is shown to be an
unavoidable meta-heuristic
and a step toward efficient general intelligence. Hierarchical representations
and model optimization with the use of information-theoretic interpretation of
the adaptive resonance are also discussed.
|
1306.1586 | Strong converse for the classical capacity of entanglement-breaking and
Hadamard channels via a sandwiched Renyi relative entropy | quant-ph cs.IT math-ph math.IT math.MP | A strong converse theorem for the classical capacity of a quantum channel
states that the probability of correctly decoding a classical message converges
exponentially fast to zero in the limit of many channel uses if the rate of
communication exceeds the classical capacity of the channel. Along with a
corresponding achievability statement for rates below the capacity, such a
strong converse theorem enhances our understanding of the capacity as a very
sharp dividing line between achievable and unachievable rates of communication.
Here, we show that such a strong converse theorem holds for the classical
capacity of all entanglement-breaking channels and all Hadamard channels (the
complementary channels of the former). These results follow by bounding the
success probability in terms of a "sandwiched" Renyi relative entropy, by
showing that this quantity is subadditive for all entanglement-breaking and
Hadamard channels, and by relating this quantity to the Holevo capacity. Prior
results regarding strong converse theorems for particular covariant channels
emerge as a special case of our results.
|
1306.1591 | Autonomous search for a diffusive source in an unknown environment | cs.AI cs.RO q-bio.NC | The paper presents an approach to olfactory search for a diffusive emitting
source of tracer (e.g. aerosol, gas) in an environment with an unknown map of
randomly placed and shaped obstacles.
The measurements of tracer concentration are sporadic, noisy and carry no
directional information. The search domain is discretised and modelled by a
finite two-dimensional lattice. The links in the lattice represent the
traversable paths for emitted particles and for the searcher. A missing link in
the lattice indicates a blocked path, due to walls or obstacles. The
searcher must simultaneously estimate the source parameters, the map of the
search domain and its own location within the map. The solution is formulated
in the sequential Bayesian framework and implemented as a Rao-Blackwellised
particle filter with information-driven motion control. The numerical results
demonstrate the concept and its performance.
|
1306.1603 | Infrared face recognition: a literature review | cs.CV | Automatic face recognition (AFR) is an area with immense practical potential
which includes a wide range of commercial and law enforcement applications, and
it continues to be one of the most active research areas of computer vision.
Even after over three decades of intense research, the state-of-the-art in AFR
continues to improve, benefiting from advances in a range of different fields
including image processing, pattern recognition, computer graphics and
physiology. However, systems based on visible spectrum images continue to face
challenges in the presence of illumination, pose and expression changes, as
well as facial disguises, all of which can significantly decrease their
accuracy. Amongst various approaches which have been proposed in an attempt to
overcome these limitations, the use of infrared (IR) imaging has emerged as a
particularly promising research direction. This paper presents a comprehensive
and timely review of the literature on this subject.
|
1306.1609 | Vesselness features and the inverse compositional AAM for robust face
recognition using thermal IR | cs.CV | Over the course of the last decade, infrared (IR) and particularly thermal IR
imaging based face recognition has emerged as a promising complement to
conventional, visible spectrum based approaches which continue to struggle when
applied in the real world. While inherently insensitive to visible spectrum
illumination changes, IR images introduce specific challenges of their own,
most notably sensitivity to factors which affect facial heat emission patterns,
e.g. emotional state, ambient temperature, and alcohol intake. In addition,
facial expression and pose changes are more difficult to correct in IR images
because they are less rich in high frequency detail which is an important cue
for fitting any deformable model. We describe a novel method which addresses
these challenges. To normalize for pose and facial expression changes we
generate a synthetic frontal image of a face in a canonical, neutral facial
expression from an image of the face in an arbitrary pose and facial
expression. This is achieved by piecewise affine warping which follows active
appearance model (AAM) fitting. This is the first publication which explores
the use of an AAM on thermal IR images; we propose a pre-processing step which
enhances detail in thermal images, making AAM convergence faster and more
accurate. To overcome the problem of thermal IR image sensitivity to the
pattern of facial temperature emissions we describe a representation based on
reliable anatomical features. In contrast to previous approaches, our
representation is not binary; rather, our method accounts for the reliability
of the extracted features. This makes the proposed representation much more
robust both to pose and scale changes. The effectiveness of the proposed
approach is demonstrated on the largest public database of thermal IR images of
faces on which it achieved 100% identification, significantly outperforming
previous methods.
|
1306.1619 | Statistical Denoising for single molecule fluorescence microscopic
images | cs.CV | Single molecule fluorescence microscopy is a powerful technique for
uncovering detailed information about biological systems, both in vitro and in
vivo. In such experiments, the inherently low signal to noise ratios mean that
accurate algorithms to separate true signal and background noise are essential
to generate meaningful results. To this end, we have developed a new and robust
method to reduce noise in single molecule fluorescence images by using a
Gaussian Markov Random Field (GMRF) prior in a Bayesian framework. Two
different strategies are proposed to build the prior: an intrinsic GMRF, with
a stationary relationship between pixels, and a heterogeneous intrinsic GMRF,
with a differently weighted relationship between pixels classified as molecules
and background. Testing with synthetic and real experimental fluorescence
images demonstrates that the heterogeneous intrinsic GMRF is superior to other
conventional de-noising approaches.
|
1306.1632 | A Generalized Channel Coding Theory for Distributed Communication | cs.IT math.IT | This paper presents generalized channel coding theorems for a time-slotted
distributed communication system where a transmitter-receiver pair is
communicating in parallel with other transmitters. Assume that the channel code
of each transmitter is chosen arbitrarily in each time slot. The coding choice
of a transmitter is denoted by a code index parameter, which is known neither
to other transmitters nor to the receiver. The fundamental performance limitation
of the system is characterized using an achievable region defined in the space
of the code index vectors. As the codeword length is taken to infinity, for all
code index vectors inside the region, the receiver will decode the message
reliably, while for all code index vectors outside the region, the receiver
will report a collision reliably. A generalized system error performance
measure is defined as the weighted sum of probabilities of different types of
communication error events. Assume that the receiver chooses an "operation
region" and intends to decode the message if the code index vector is inside
the operation region. Achievable bounds on the tradeoff between the operation
region and the generalized error performance measure are obtained under the
assumption of a finite codeword length.
|
1306.1650 | OPS-QFTs: A new type of quaternion Fourier transforms based on the
orthogonal planes split with one or two general pure quaternions | math.RA cs.CV | We explain the orthogonal planes split (OPS) of quaternions based on the
arbitrary choice of one or two linearly independent pure unit quaternions
$f,g$. Next we systematically generalize the quaternionic Fourier transform
(QFT) applied to quaternion fields to conform with the OPS determined by $f,g$,
or by only one pure unit quaternion $f$, comment on their geometric meaning,
and establish inverse transformations.
Keywords: Clifford geometric algebra, quaternion geometry, quaternion Fourier
transform, inverse Fourier transform, orthogonal planes split
|
1306.1653 | Non-constant bounded holomorphic functions of hyperbolic numbers -
Candidates for hyperbolic activation functions | cs.NE cs.CV math.RA | The Liouville theorem states that bounded holomorphic complex functions are
necessarily constant. Holomorphic functions fulfill the so-called Cauchy-Riemann
(CR) conditions. The CR conditions mean that a complex $z$-derivative is
independent of the direction. Holomorphic functions are ideal for activation
functions of complex neural networks, but the Liouville theorem makes them
useless. Yet recently the use of hyperbolic numbers led to the construction
of hyperbolic number neural networks. We will describe the Cauchy-Riemann
conditions for hyperbolic numbers and show that there exists a new interesting
type of bounded holomorphic functions of hyperbolic numbers, which are not
constant. We give examples of such functions. They therefore substantially
expand the available candidates for holomorphic activation functions for
hyperbolic number neural networks.
Keywords: Hyperbolic numbers, Liouville theorem, Cauchy-Riemann conditions,
bounded holomorphic functions
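To make the construction concrete, here is a minimal numerical sketch (our own illustration, not necessarily the functions used in the paper) of a bounded, non-constant function that satisfies the hyperbolic Cauchy-Riemann conditions: tanh applied componentwise in the idempotent basis of the hyperbolic (split-complex) numbers, where z = x + jy with j^2 = +1 decomposes as z = (x+y)e+ + (x-y)e-.

```python
import math

def f(x, y):
    # Hyperbolic-holomorphic "tanh": apply tanh in the idempotent basis
    # and map back to components (u, v) of f = u + j*v.
    a, b = math.tanh(x + y), math.tanh(x - y)
    return (a + b) / 2.0, (a - b) / 2.0

def cr_residuals(x, y, h=1e-5):
    # Hyperbolic Cauchy-Riemann conditions: u_x = v_y and u_y = v_x.
    # Checked here by central finite differences.
    ux = (f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h)
    uy = (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)
    vx = (f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h)
    vy = (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)
    return abs(ux - vy), abs(uy - vx)

# f is bounded (|u|, |v| <= 1) yet non-constant -- impossible for a
# bounded holomorphic function of an ordinary complex variable.
r1, r2 = cr_residuals(0.7, -1.3)
```

Since |tanh| < 1, both components stay bounded everywhere, while the function clearly varies, illustrating why the hyperbolic Liouville situation differs from the complex one.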
|
1306.1662 | Receiver Concepts and Resource Allocation for OSC Downlink Transmission | cs.IT math.IT | Voice services over Adaptive Multi-user channels on One Slot (VAMOS) has been
standardized as an extension to the Global System for Mobile Communications
(GSM). The aim of VAMOS is to increase the capacity of GSM, while maintaining
backward compatibility with the legacy system. To this end, the Orthogonal
Sub-channels (OSC) concept is employed, where two Gaussian minimum-shift keying
(GMSK) signals are transmitted in the same time slot and with the same carrier
frequency. To fully exploit the possible capacity gain of OSC, new receiver
concepts are necessary. In contrast to the base station, where multiple
antennas can be employed, the mobile station is typically equipped with only
one receive antenna. Therefore, the downlink receiver design is a very
challenging task. Different concepts for channel estimation, user separation,
and equalization at the receiver of an OSC downlink transmission are introduced
in this paper. Furthermore, the system capacity must be improved by suitable
downlink power and resource allocation algorithms. Making realistic assumptions
on the information available at the base station, an algorithm for joint power
and radio resource allocation is proposed. Simulation results show the
excellent performance of the proposed channel estimation algorithms,
equalization schemes, and joint radio resource and power allocation algorithms
in realistic VAMOS environments.
|
1306.1665 | Single Bit and Reduced Dimension Diffusion Strategies Over Distributed
Networks | cs.SY | We introduce novel diffusion based adaptive estimation strategies for
distributed networks that have significantly less communication load and
achieve comparable performance to the full information exchange configurations.
After local estimates of the desired data are produced at each node, a single
bit of information (or a reduced dimensional data vector) is generated using
certain random projections of the local estimates. This newly generated data is
diffused and then used in neighboring nodes to recover the original full
information. We provide the complete state-space description and the mean
stability analysis of our algorithms.
|
1306.1669 | Quaternionic Fourier-Mellin Transform | math.RA cs.CV | In this contribution we generalize the classical Fourier Mellin transform [S.
Dorrode and F. Ghorbel, Robust and efficient Fourier-Mellin transform
approximations for gray-level image reconstruction and complete invariant
description, Computer Vision and Image Understanding, 83(1) (2001), 57-78, DOI
10.1006/cviu.2001.0922.], which transforms functions $f$ representing, e.g., a
gray level image defined over a compact set of $\mathbb{R}^2$. The quaternionic
Fourier Mellin transform (QFMT) applies to functions $f: \mathbb{R}^2
\rightarrow \mathbb{H}$, for which $|f|$ is summable over $\mathbb{R}_+^*
\times \mathbb{S}^1$ under the measure $d\theta \frac{dr}{r}$. $\mathbb{R}_+^*$
is the multiplicative group of positive and non-zero real numbers. We
investigate the properties of the QFMT similar to the investigation of the
quaternionic Fourier Transform (QFT) in [E. Hitzer, Quaternion Fourier
Transform on Quaternion Fields and Generalizations, Advances in Applied
Clifford Algebras, 17(3) (2007), 497-517.; E. Hitzer, Directional Uncertainty
Principle for Quaternion Fourier Transforms, Advances in Applied Clifford
Algebras, 20(2) (2010), 271-284, online since 08 July 2009.].
|
1306.1676 | Algebraic foundations of split hypercomplex nonlinear adaptive filtering | cs.CV math.RA | A split hypercomplex learning algorithm for the training of nonlinear finite
impulse response adaptive filters for the processing of hypercomplex signals of
any dimension is proposed. The derivation strictly takes into account the laws
of hypercomplex algebra and hypercomplex calculus, some of which have been
neglected in existing learning approaches (e.g. for quaternions). Already in
the case of quaternions we can predict improvements in performance of
hypercomplex processes. The convergence of the proposed algorithms is
rigorously analyzed.
Keywords: Quaternionic adaptive filtering, Hypercomplex adaptive filtering,
Nonlinear adaptive filtering, Hypercomplex Multilayer Perceptron, Clifford
geometric algebra
|
1306.1679 | Clifford Fourier-Mellin transform with two real square roots of -1 in
Cl(p,q), p+q=2 | math.RA cs.CV | We describe a non-commutative generalization of the complex Fourier-Mellin
transform to Clifford algebra valued signal functions over the domain
$\R^{p,q}$ taking values in Cl(p,q), p+q=2.
Keywords: algebra, Fourier transforms; Logic, set theory, and algebra,
Fourier analysis, Integral transforms
|
1306.1689 | Verification of Query Completeness over Processes [Extended Version] | cs.DB | Data completeness is an essential aspect of data quality, and has in turn a
huge impact on the effective management of companies. For example, statistics
are computed and audits are conducted in companies under the implicit, strong
assumption that the analysed data are complete. In this work, we are
interested in studying the problem of completeness of data produced by business
processes, with the aim of automatically assessing whether a given database query
can be answered with complete information in a certain state of the process. We
formalize so-called quality-aware processes that create data in the real world
and store it in the company's information system possibly at a later point.
|
1306.1704 | Geo-Spotting: Mining Online Location-based Services for Optimal Retail
Store Placement | cs.SI cs.CE physics.soc-ph | The problem of identifying the optimal location for a new retail store has
been the focus of past research, especially in the field of land economy, due
to its importance in the success of a business. Traditional approaches to the
problem have factored in demographics, revenue and aggregated human flow
statistics from nearby or remote areas. However, the acquisition of relevant
data is usually expensive. With the growth of location-based social networks,
fine grained data describing user mobility and popularity of places has
recently become attainable.
In this paper we study the predictive power of various machine learning
features on the popularity of retail stores in the city through the use of a
dataset collected from Foursquare in New York. The features we mine are based
on two general signals: geographic, where features are formulated according to
the types and density of nearby places, and user mobility, which includes
transitions between venues or the incoming flow of mobile users from distant
areas. Our evaluation suggests that the best performing features are common
across the three different commercial chains considered in the analysis,
although variations may exist too, as explained by heterogeneities in the way
retail facilities attract users. We also show that performance improves
significantly when combining multiple features in supervised learning
algorithms, suggesting that the retail success of a business may depend on
multiple factors.
|
1306.1716 | Fast greedy algorithm for subspace clustering from corrupted and
incomplete data | cs.LG cs.DS math.NA stat.ML | We describe the Fast Greedy Sparse Subspace Clustering (FGSSC) algorithm
providing an efficient method for clustering data belonging to a few
low-dimensional linear or affine subspaces. The main difference of our
algorithm from predecessors is its ability to work with noisy data having a
high rate of erasures (missed entries with the known coordinates) and errors
(corrupted entries with unknown coordinates). We discuss how to implement
the fast version of the greedy algorithm with maximum efficiency, incorporating
the greedy strategy into the iterations of the basic algorithm.
We provide numerical evidence that, in subspace clustering capability,
the fast greedy algorithm outperforms not only the existing state-of-the-art
SSC algorithm, taken by the authors as the basic algorithm, but also the recent
GSSC algorithm. At the same time, its computational cost is only slightly
higher than that of SSC.
Numerical evidence of the algorithm's significant advantage is presented
for a few synthetic models as well as for the Extended Yale B dataset of facial
images. In particular, the face recognition misclassification rate turned out
to be 6-20 times lower than for the SSC algorithm. We provide also the
numerical evidence that the FGSSC algorithm is able to perform clustering of
corrupted data efficiently even when the sum of subspace dimensions
significantly exceeds the dimension of the ambient space.
|
1306.1723 | Querying over Federated SPARQL Endpoints ---A State of the Art Survey | cs.DB cs.DC | The increasing amount of Linked Data and its inherent distributed nature have
attracted significant attention in recent years, both throughout the research
community and amongst practitioners, to the problem of searching this data.
from traditional distributed databases, different approaches for managing
federation over SPARQL Endpoints have been introduced. SPARQL is the
standardised query language for RDF, the default data model used in Linked Data
deployments and SPARQL Endpoints are a popular access mechanism provided by
many Linked Open Data (LOD) repositories. In this paper, we initially give an
overview of the federation framework infrastructure and then proceed with a
comparison of existing SPARQL federation frameworks. Finally, we highlight
shortcomings in existing frameworks, which we hope will help spawn new research
directions.
|
1306.1730 | A Conceptual Metadata Framework for Spatial Data Warehouse | cs.DB | Metadata represents the information about the data to be stored in Data
Warehouses. It is a mandatory element in building an efficient Data Warehouse.
Metadata helps in data integration, lineage, data quality, and populating
transformed data into the data warehouse. Spatial data warehouses are based on
spatial data mostly collected from Geographical Information Systems (GIS) and
from transactional systems that are specific to an application or enterprise.
Metadata design and deployment is the most critical phase in building a data
warehouse, where it is mandatory to bring spatial information and data modeling
together. In this paper, we present a holistic metadata framework that drives
metadata creation for a spatial data warehouse. Theoretically, the proposed
metadata framework improves the efficiency of accessing data in response to
frequent queries on SDWs. In other words, the proposed framework decreases query
response time, and accurate information, including the spatial information, is
fetched from the Data Warehouse.
|
1306.1743 | Performing Informetric Analysis on Information Retrieval Test
Collections: Preliminary Experiments in the Physics Domain | cs.IR cs.DL | The combination of informetric analysis and information retrieval allows a
twofold application. (1) While informetric analysis is primarily used to gain
insights into a scientific domain, it can be used to build recommendation or
alternative ranking services. They are usually based on methods like
co-occurrence or citation analyses. (2) Information retrieval and its
decades-long tradition of rigorous evaluation using standard document corpora,
predefined topics and relevance judgements can be used as a test bed for
informetric analyses. We show a preliminary experiment on how both domains can
be connected using the iSearch test collection, a standard information
retrieval test collection derived from the open access arXiv.org preprint
server. In this paper the aim is to draw a conclusion about the appropriateness
of iSearch as a test bed for the evaluation of a retrieval or recommendation
system that applies informetric methods to improve retrieval results for the
user. Based on an interview study with physicists, bibliographic coupling and
author-co-citation analysis, important authors for ten different research
questions are identified. The results show that the analysed corpus includes
these authors and their corresponding documents. This study is a first step
towards a combination of retrieval evaluations and the evaluation of
informetric analysis methods.
|
1306.1751 | Toward the Performance vs. Feedback Tradeoff for the Two-User MISO
Broadcast Channel | cs.IT math.IT | For the two-user MISO broadcast channel with imperfect and delayed channel
state information at the transmitter (CSIT), the work explores the tradeoff
between performance on the one hand, and CSIT timeliness and accuracy on the
other hand. The work considers a broad setting where communication takes place
in the presence of a random fading process, and in the presence of a feedback
process that, at any point in time, may provide CSIT estimates - of some
arbitrary accuracy - for any past, current or future channel realization. This
feedback quality may fluctuate in time across all ranges of CSIT accuracy and
timeliness, ranging from perfectly accurate and instantaneously available
estimates, to delayed estimates of minimal accuracy. Under standard
assumptions, the work derives the degrees-of-freedom (DoF) region, which is
tight for a large range of CSIT quality. This derived DoF region concisely
captures the effect of channel correlations, the accuracy of predicted,
current, and delayed-CSIT, and generally captures the effect of the quality of
CSIT offered at any time, about any channel.
The work also introduces novel schemes which - in the context of imperfect
and delayed CSIT - employ encoding and decoding with a phase-Markov structure.
The results hold for a large class of block and non-block fading channel
models, and they unify and extend many prior attempts to capture the effect of
imperfect and delayed feedback. This generality also allows for consideration
of novel pertinent settings, such as the new periodically evolving feedback
setting, where a gradual accumulation of feedback bits progressively improves
CSIT as time progresses across a finite coherence period.
|
1306.1822 | Illumination-invariant face recognition from a single image across
extreme pose using a dual dimension AAM ensemble in the thermal infrared
spectrum | cs.CV | Over the course of the last decade, infrared (IR) and particularly thermal IR
imaging based face recognition has emerged as a promising complement to
conventional, visible spectrum based approaches which continue to struggle when
applied in practice. While inherently insensitive to visible spectrum
illumination changes, IR data introduces specific challenges of its own, most
notably sensitivity to factors which affect facial heat emission patterns, e.g.
emotional state, ambient temperature, and alcohol intake. In addition, facial
expression and pose changes are more difficult to correct in IR images because
they are less rich in high frequency detail which is an important cue for
fitting any deformable model. In this paper we describe a novel method which
addresses these major challenges. Specifically, when comparing two thermal IR
images of faces, we mutually normalize their poses and facial expressions by
using an active appearance model (AAM) to generate synthetic images of the two
faces with a neutral facial expression and in the same view (the average of the
two input views). This is achieved by piecewise affine warping which follows
AAM fitting. A major contribution of our work is the use of an AAM ensemble in
which each AAM is specialized to a particular range of poses and a particular
region of the thermal IR face space. Combined with the contributions from our
previous work which addressed the problem of reliable AAM fitting in the
thermal IR spectrum, and the development of a person-specific representation
robust to transient changes in the pattern of facial temperature emissions, the
proposed ensemble framework accurately matches faces across the full range of
yaw from frontal to profile, even in the presence of scale variation (e.g. due
to the varying distance of a subject from the camera).
|
1306.1840 | Loss-Proportional Subsampling for Subsequent ERM | cs.LG stat.ML | We propose a sampling scheme suitable for reducing a data set prior to
selecting a hypothesis with minimum empirical risk. The sampling only considers
a subset of the ultimate (unknown) hypothesis set, but can nonetheless
guarantee that the final excess risk will compare favorably with utilizing the
entire original data set. We demonstrate the practical benefits of our approach
on a large dataset which we subsample and subsequently fit with boosted trees.
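As a rough illustration of the kind of scheme this abstract describes (our own minimal sketch, not the authors' algorithm), one can keep each example with probability proportional to a pilot loss, capped at one, and reweight survivors by the inverse inclusion probability, so that the subsampled empirical risk remains an unbiased estimate of the full-data risk:

```python
import random

def subsample_by_loss(losses, budget, rng):
    """Keep example i with probability p_i proportional to its (positive)
    pilot loss, capped at 1; weight survivors by 1/p_i so the reweighted
    empirical risk is unbiased for the full-data risk."""
    total = sum(losses)
    probs = [min(1.0, budget * l / total) for l in losses]  # E[#kept] <= budget
    kept = [(i, 1.0 / p) for i, p in enumerate(probs) if rng.random() < p]
    return kept, probs

rng = random.Random(0)
losses = [0.05, 0.1, 0.4, 1.2, 3.0, 0.02]
full_risk = sum(losses) / len(losses)

# Monte Carlo check that the reweighted risk estimator is unbiased:
# averaging it over many draws should recover the full-data risk.
trials, acc = 20000, 0.0
for _ in range(trials):
    kept, _ = subsample_by_loss(losses, budget=3.0, rng=rng)
    acc += sum(w * losses[i] for i, w in kept) / len(losses)
est = acc / trials
```

Note how high-loss examples are kept almost surely while low-loss ones are rarely kept but carry large weights when they are, which is the intuition behind loss-proportional reduction of the data set.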
|
1306.1849 | New Results on Equilibria in Strategic Candidacy | cs.GT cs.AI cs.MA | We consider a voting setting where candidates have preferences about the
outcome of the election and are free to join or leave the election. The
corresponding candidacy game, where candidates choose strategically to
participate or not, has been studied initially by Dutta et al., who showed
that no non-dictatorial voting procedure satisfying unanimity is
candidacy-strategyproof, that is, is such that the joint action where all
candidates enter the election is always a pure strategy Nash equilibrium. Dutta
et al. also showed that for some voting tree procedures, there are candidacy
games with no pure Nash equilibria, and that for the rule that outputs the
sophisticated winner of voting by successive elimination, all games have a pure
Nash equilibrium. No results were known about other voting rules. Here we prove
several such results. For four candidates, the message is, roughly, that most
scoring rules (with the exception of Borda) do not guarantee the existence of a
pure Nash equilibrium but that Condorcet-consistent rules, for an odd number of
voters, do. For five candidates, most rules we study no longer have this
guarantee. Finally, we identify one prominent rule that guarantees the
existence of a pure Nash equilibrium for any number of candidates (and for an
odd number of voters): the Copeland rule. We also show that under mild
assumptions on the voting rule, the existence of strong equilibria cannot be
guaranteed.
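The search for pure Nash equilibria in such candidacy games can be made concrete with a brute-force checker over a toy instance. The sketch below is our own construction (plurality with lexicographic tie-breaking and self-first candidate preferences are assumptions of the sketch, not details from the paper):

```python
from itertools import product

def plurality_winner(entered, voter_prefs):
    # Each voter votes for their most-preferred candidate among those who
    # entered; ties are broken lexicographically (a sketch assumption).
    scores = {c: 0 for c in entered}
    for pref in voter_prefs:
        scores[next(c for c in pref if c in entered)] += 1
    return min(entered, key=lambda c: (-scores[c], c))

def pure_nash_profiles(candidates, voter_prefs, cand_prefs):
    # Enumerate all joint enter/leave actions; keep those where no single
    # candidate can obtain a strictly preferred winner by deviating.
    def winner(action):
        entered = [c for c in candidates if action[c]]
        return plurality_winner(entered, voter_prefs) if entered else None
    equilibria = []
    for bits in product([False, True], repeat=len(candidates)):
        action = dict(zip(candidates, bits))
        w = winner(action)
        def profits(c):
            dev = dict(action)
            dev[c] = not dev[c]
            w2 = winner(dev)
            if w2 is None:
                return False  # deviating to an empty election never profits
            return w is None or cand_prefs[c].index(w2) < cand_prefs[c].index(w)
        if not any(profits(c) for c in candidates):
            equilibria.append(action)
    return equilibria

# Toy instance: three voters all prefer a; each candidate prefers itself.
candidates = ['a', 'b']
voter_prefs = [['a', 'b'], ['a', 'b'], ['a', 'b']]
cand_prefs = {'a': ['a', 'b'], 'b': ['b', 'a']}
eqs = pure_nash_profiles(candidates, voter_prefs, cand_prefs)
```

In this trivial instance the all-candidates-enter profile is a pure Nash equilibrium; the results in the abstract concern when such equilibria are guaranteed for larger candidate sets and specific voting rules.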
|
1306.1850 | Enhancement of a Novel Method for Mutational Disease Prediction using
Bioinformatics Techniques and Backpropagation Algorithm | cs.CE q-bio.QM | The novel method for mutational disease prediction uses bioinformatics tools
and datasets to diagnose malignant mutations, with a powerful artificial neural
network (a backpropagation network) classifying those malignant mutations that
are related to genes (such as BRCA1 and BRCA2) causing a disease (breast
cancer). The method does not merely analyze and process gene sequences to
extract useful information from them; it also considers environmental factors,
which play important roles in determining and calculating some gene features,
in order to view their functional parts and their relations to diseases. This
paper proposes an enhancement of the novel method for diagnosing and predicting
disease from mutations: it considers and introduces several additional features
that capture alterations and changes in the environment as well as in the
genes, compares sequences to gain information about the structure or function
of a query sequence, and proposes a more accurate system for classifying and
dealing with the specific disorder using backpropagation with a mean square
rate of 0.000000001.
Index Terms: Homology sequence, GC content and AT content, Bioinformatics,
Backpropagation Network, BLAST, DNA Sequence, Protein Sequence
|
1306.1851 | A Factor Graph Approach to Joint OFDM Channel Estimation and Decoding in
Impulsive Noise Environments | cs.IT math.IT stat.ML | We propose a novel receiver for orthogonal frequency division multiplexing
(OFDM) transmissions in impulsive noise environments. Impulsive noise arises in
many modern wireless and wireline communication systems, such as Wi-Fi and
powerline communications, due to uncoordinated interference that is much
stronger than thermal noise. We first show that the bit-error-rate optimal
receiver jointly estimates the propagation channel coefficients, the noise
impulses, the finite-alphabet symbols, and the unknown bits. We then propose a
near-optimal yet computationally tractable approach to this joint estimation
problem using loopy belief propagation. In particular, we merge the recently
proposed "generalized approximate message passing" (GAMP) algorithm with the
forward-backward algorithm and soft-input soft-output decoding using a "turbo"
approach. Numerical results indicate that the proposed receiver drastically
outperforms existing receivers under impulsive noise and comes within 1 dB of
the matched-filter bound. Meanwhile, with N tones, the proposed
factor-graph-based receiver has only O(N log N) complexity, and it can be
parallelized.
|
1306.1881 | Artificial Ant Species on Solving Optimization Problems | cs.MA | During the last years, several ant-based techniques have been employed to solve
hard and complex optimization problems. The current paper is a short study of
the influence of artificial ant species on solving optimization problems. We
study artificial Pharaoh ants, artificial Lasius niger ants, and also generic
artificial ants with no species-specific behaviour, as commonly used in Ant
Colony Optimization.
|
1306.1894 | Speckle Reduction with Adaptive Stack Filters | cs.CV | Stack filters are a special case of non-linear filters. They have a good
performance for filtering images with different types of noise while preserving
edges and details. A stack filter decomposes an input image into stacks of
binary images according to a set of thresholds. Each binary image is then
filtered by a Boolean function, which characterizes the filter. Adaptive stack
filters can be computed by training using a prototype (ideal) image and its
corrupted version, leading to optimized filters with respect to a loss
function. In this work we propose the use of training with selected samples for
the estimation of the optimal Boolean function. We study the performance of
adaptive stack filters when they are applied to speckled imagery, in particular
to Synthetic Aperture Radar (SAR) images. This is done by evaluating the
quality of the filtered images through the use of suitable image quality
indexes and by measuring the classification accuracy of the resulting images.
We used SAR images as input, since they are affected by speckle noise that
makes classification a difficult task.
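The threshold decomposition and stacking steps described above can be sketched as follows. This is a generic illustration, not the paper's trained filter: the window size, thresholds, and the binary-median Boolean function are assumptions chosen for simplicity.

```python
import numpy as np

def stack_filter(image, thresholds, boolean_fn, window=3):
    """Threshold-decompose `image` into binary slices, filter each slice
    with a Boolean function over a sliding window, and re-stack by
    summing the filtered slices."""
    pad = window // 2
    out = np.zeros(image.shape, dtype=int)
    for t in thresholds:
        binary = (image >= t).astype(int)          # binary slice at level t
        padded = np.pad(binary, pad, mode="edge")
        for i in range(binary.shape[0]):
            for j in range(binary.shape[1]):
                patch = padded[i:i + window, j:j + window]
                out[i, j] += boolean_fn(patch)     # stacking: sum the filtered slices
    return out

# Binary median: a positive, self-dual Boolean function (an illustrative choice)
def median_fn(patch):
    return int(patch.sum() > patch.size // 2)

# A single bright impulse is removed while flat regions survive
noisy = np.array([[0, 0, 0], [0, 3, 0], [0, 0, 0]])
clean = stack_filter(noisy, thresholds=[1, 2, 3], boolean_fn=median_fn)
```

In an adaptive stack filter, the Boolean function itself would be learned from a prototype image and its corrupted version rather than fixed to the median as here.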
|
1306.1907 | Analytical Coexistence Benchmark for Assessing the Utmost Interference
Tolerated by IEEE 802.20 | cs.IT math.IT | Whether it is crosstalk, harmonics, or in-band operation of wireless
technologies, interference between a reference system and a host of offenders
is virtually unavoidable. In past contributions, a benchmark has been
established and considered for coexistence analysis with a number of
technologies including FWA, UMTS, and WiMAX. However, the previously presented
model does not take into account the mobility factor of the reference node in
addition to a number of interdependent requirements regarding the link
direction, channel state, data rate and system factors; hence limiting its
applicability to the MBWA (IEEE 802.20) standard. Thus, in this correspondence
we analytically derive, over diverse modes, the greatest aggregate interference
level tolerated for high-fidelity transmission tailored specifically for the
MBWA standard. Our results, in the form of benchmark indicators, should be of
particular interest to peers analyzing and researching RF coexistence scenarios
with this new protocol.
|
1306.1913 | Emotional Expression Classification using Time-Series Kernels | cs.CV cs.LG stat.ML | Estimation of facial expressions, as spatio-temporal processes, can take
advantage of kernel methods if one considers facial landmark positions and
their motion in 3D space. We applied support vector classification with kernels
derived from dynamic time-warping similarity measures. We achieved over 99%
accuracy - measured by area under ROC curve - using only the 'motion pattern'
of the PCA compressed representation of the marker point vector, the so-called
shape parameters. Beyond the classification of full motion patterns, several
expressions were recognized with over 90% accuracy in as few as 5-6 frames from
their onset, about 200 milliseconds.
|
1306.1922 | Collaborative 20 Questions for Target Localization | cs.IT math.IT | We consider the problem of 20 questions with noise for multiple players under
the minimum entropy criterion in the setting of stochastic search, with
application to target localization. Each player yields a noisy response to a
binary query governed by a certain error probability. First, we propose a
sequential policy for constructing questions that queries each player in
sequence and refines the posterior of the target location. Second, we consider
a joint policy that asks all players questions in parallel at each time instant
and characterize the structure of the optimal policy for constructing the
sequence of questions. This generalizes the single player probabilistic
bisection method for stochastic search problems. Third, we prove an equivalence
between the two schemes showing that, despite the fact that the sequential
scheme has access to a more refined filtration, the joint scheme performs just
as well on average. Fourth, we establish convergence rates of the mean-square
error (MSE) and derive error exponents. Lastly, we obtain an extension to the
case of unknown error probabilities. This framework provides a mathematical
model for incorporating a human in the loop for active machine learning
systems.
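The single-player probabilistic bisection method that the joint policy generalizes can be sketched as follows; the grid resolution, error probability, and target value are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def bisection_query(posterior, grid, target, eps, rng):
    """One round of probabilistic bisection: ask whether the target lies
    left of the posterior median, receive an answer that is wrong with
    probability eps, and apply the Bayes update."""
    cdf = np.cumsum(posterior)
    m = grid[np.searchsorted(cdf, 0.5)]                  # query at the posterior median
    truth = bool(target <= m)
    answer = truth if rng.random() > eps else not truth  # noisy binary response
    likelihood = np.where((grid <= m) == answer, 1 - eps, eps)
    post = posterior * likelihood
    return post / post.sum()

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 1001)
posterior = np.full(grid.size, 1.0 / grid.size)
for _ in range(60):
    posterior = bisection_query(posterior, grid, target=0.314, eps=0.2, rng=rng)
estimate = grid[np.argmax(posterior)]  # posterior mass concentrates near the target
```

The sequential and joint multi-player policies in the paper refine this same posterior, querying one player at a time or all players in parallel.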
|
1306.1927 | Learning About Meetings | stat.AP cs.CL | Most people participate in meetings almost every day, multiple times a day.
The study of meetings is important, but also challenging, as it requires an
understanding of social signals and complex interpersonal dynamics. Our aim in
this work is to use a data-driven approach to the science of meetings. We
provide tentative evidence that: i) it is possible to automatically detect when
during the meeting a key decision is taking place, from analyzing only the
local dialogue acts, ii) there are common patterns in the way social dialogue
acts are interspersed throughout a meeting, iii) at the time key decisions are
made, the amount of time left in the meeting can be predicted from the amount
of time that has passed, iv) it is often possible to predict whether a proposal
during a meeting will be accepted or rejected based entirely on the language
(the set of persuasive words) used by the speaker.
|
1306.1956 | Rendezvous of Two Robots with Constant Memory | cs.MA cs.CG cs.RO | We study the impact that persistent memory has on the classical rendezvous
problem of two mobile computational entities, called robots, in the plane. It
is well known that, without additional assumptions, rendezvous is impossible if
the entities are oblivious (i.e., have no persistent memory) even if the system
is semi-synchronous (SSynch). It has been recently shown that rendezvous is
possible even if the system is asynchronous (ASynch) if each robot is endowed
with O(1) bits of persistent memory, can transmit O(1) bits in each cycle, and
can remember (i.e., can persistently store) the last received transmission.
This setting is overly powerful.
In this paper we weaken that setting in two different ways: (1) by
maintaining the O(1) bits of persistent memory but removing the communication
capabilities; and (2) by maintaining the O(1) transmission capability and the
ability to remember the last received transmission, but removing the ability of
an agent to remember its previous activities. We call the former setting
finite-state (FState) and the latter finite-communication (FComm). Note that,
even though its use is very different, in both settings, the amount of
persistent memory of a robot is constant.
We investigate the rendezvous problem in these two weaker settings. We model
both settings as a system of robots endowed with visible lights: in FState, a
robot can only see its own light, while in FComm a robot can only see the other
robot's light. We prove, among other things, that finite-state robots can
rendezvous in SSynch, and that finite-communication robots are able to
rendezvous even in ASynch. All proofs are constructive: in each setting, we
present a protocol that allows the two robots to rendezvous in finite time.
|
1306.2003 | Comparing Edge Detection Methods based on Stochastic Entropies and
Distances for PolSAR Imagery | math.ST cs.CV eess.IV stat.TH | Polarimetric synthetic aperture radar (PolSAR) has achieved a prominent
position as a remote imaging method. However, PolSAR images are contaminated by
speckle noise due to the coherent illumination employed during the data
acquisition. This noise provides a granular aspect to the image, making its
processing and analysis (such as in edge detection) hard tasks. This paper
discusses seven methods for edge detection in multilook PolSAR images. In all
methods, the basic idea consists in detecting transition points in the finest
possible strip of data which spans two regions. The edge is contoured using the
transition points and a B-spline curve. Four stochastic distances, two
differences of entropies, and the maximum likelihood criterion were used under
the scaled complex Wishart distribution; the first six stem from the h-phi
class of measures. The performance of the discussed detection methods was
quantified and analyzed by the computational time and probability of correct
edge detection, with respect to the number of looks, the backscatter matrix as
a whole, the SPAN, the covariance and the spatial resolution. The detection
procedures were applied to three real PolSAR images. Results provide evidence
that the methods based on the Bhattacharyya distance and the difference of
Shannon entropies outperform the other techniques.
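The core idea of detecting a transition point along a strip of data can be sketched with a stochastic distance; note this illustration uses univariate Gaussian fits rather than the scaled complex Wishart distribution of the paper, and the margin parameter is an assumption.

```python
import numpy as np

def bhattacharyya_gauss(a, b):
    """Bhattacharyya distance between univariate Gaussian fits of two samples."""
    m1, v1 = a.mean(), a.var() + 1e-12
    m2, v2 = b.mean(), b.var() + 1e-12
    return (0.25 * (m1 - m2) ** 2 / (v1 + v2)
            + 0.5 * np.log((v1 + v2) / (2.0 * np.sqrt(v1 * v2))))

def edge_point(strip, margin=5):
    """Transition point of a 1D strip: the split that maximizes the
    stochastic distance between the two resulting segments."""
    splits = range(margin, len(strip) - margin)
    dists = [bhattacharyya_gauss(strip[:k], strip[k:]) for k in splits]
    return margin + int(np.argmax(dists))
```

In the paper, such transition points found on many strips spanning two regions are then contoured with a B-spline curve to form the edge.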
|
1306.2009 | CSMA over Time-varying Channels: Optimality, Uniqueness and Limited
Backoff Rate | cs.NI cs.IT math.IT | Recent studies on MAC scheduling have shown that carrier sense multiple
access (CSMA) algorithms can be throughput optimal for arbitrary wireless
network topology. However, these results are highly sensitive to the underlying
assumption on 'static' or 'fixed' system conditions. For example, if channel
conditions are time-varying, it is unclear how each node can adjust its CSMA
parameters, so-called backoff and channel holding times, using its local
channel information for the desired high performance. In this paper, we study
'channel-aware' CSMA (A-CSMA) algorithms in time-varying channels, where they
adjust their parameters as some function of the current channel capacity.
First, we show that the achievable rate region of A-CSMA equals the maximum
rate region if and only if the function is exponential. Furthermore, given an
exponential function in A-CSMA, we design updating rules for their parameters,
which achieve throughput optimality for an arbitrary wireless network topology.
They are the first CSMA algorithms in the literature which are proved to be
throughput optimal under time-varying channels. Moreover, we also consider the
case when back-off rates of A-CSMA are highly restricted compared to the speed
of channel variations, and characterize the throughput performance of A-CSMA in
terms of the underlying wireless network topology. Our results not only guide a
high-performance design on MAC scheduling under highly time-varying scenarios,
but also provide new insights on the performance of CSMA algorithms in relation
to their backoff rates and the network topology.
|
1306.2015 | CSI Feedback Reduction for MIMO Interference Alignment | cs.IT math.IT | Interference alignment (IA) is a linear precoding strategy that can achieve
optimal capacity scaling at high SNR in interference networks. Most of the
existing IA designs require full channel state information (CSI) at the
transmitters, which induces a huge CSI signaling cost. Hence it is desirable to
improve the feedback efficiency for IA and in this paper, we propose a novel IA
scheme with a significantly reduced CSI feedback. To quantify the CSI feedback
cost, we introduce a novel metric, namely the feedback dimension. This metric
serves as a first-order measurement of CSI feedback overhead. Due to the
partial CSI feedback constraint, conventional IA schemes cannot be applied and
hence, we develop a novel IA precoder / decorrelator design and establish new
IA feasibility conditions. Via dynamic feedback profile design, the proposed IA
scheme can also achieve a flexible tradeoff between the degree of freedom (DoF)
requirements for data streams, the antenna resources and the CSI feedback cost.
We show by analysis and simulations that the proposed scheme achieves
substantial reductions of CSI feedback overhead under the same DoF requirement
in MIMO interference networks.
|
1306.2019 | Proceedings Fourth International Workshop on Computational Models for
Cell Processes | cs.CE | The fourth international workshop on Computational Models for Cell Processes
(CompMod 2013) took place on June 11, 2013 at the {\AA}bo Akademi University,
Turku, Finland, in conjunction with iFM 2013. The first edition of the workshop
(2008) took place in Turku, Finland, in conjunction with Formal Methods 2008,
the second edition (2009) took place in Eindhoven, the Netherlands, as well in
conjunction with Formal Methods 2009, and the third one took place in Aachen,
Germany, in conjunction with CONCUR 2011. This volume contains the final
versions of all contributions accepted for presentation at the workshop.
The goal of the CompMod workshop series is to bring together researchers in
Computer Science and Mathematics (both discrete and continuous), interested in
the opportunities and the challenges of Systems Biology. The Program Committee
of CompMod 2013 selected 3 papers for presentation at the workshop. In
addition, we had two invited talks and five informal presentations.
The scientific program of the workshop spans an interesting mix of approaches
to systems and even synthetic biology, encompassing several different modeling
approaches, ranging from quantitative to qualitative techniques, from
continuous to discrete mathematics, and from deterministic to stochastic
methods. We thank our invited speakers Daniela Besozzi (Universita degli Studi
di Milano, Milano, Italy) and Juho Rousu (Aalto University, Finland) for
accepting our invitation and for presenting some of their recent results at
CompMod 2013.
The technical contributions address the mathematical modeling of the PDGF
signalling pathway, the canonical labelling of site graphs, rule-based modeling
of polymerization reactions, rule-based modeling as a platform for the analysis
of synthetic self-assembled nano-systems, robustness analysis of stochastic
systems, an algebraic approach to gene assembly in ciliates, and large-scale
text mining of biomedical literature.
|
1306.2025 | Flexibly-bounded Rationality and Marginalization of Irrationality
Theories for Decision Making | cs.AI | In this paper, the theory of flexibly-bounded rationality, which is an
extension of the theory of bounded rationality, is revisited. Rational decision
making involves using information, which is almost always imperfect and
incomplete, together with some intelligent machine (which, if it is a human
being, is inconsistent) to make decisions. In bounded rationality, this
decision is made despite the fact that the information to be used is incomplete
and imperfect and that the human brain is inconsistent; the decision is thus
taken within the bounds of these limitations. In the theory of
flexibly-bounded rationality, advanced information analysis is used, the
correlation machine is applied to complete missing information and artificial
intelligence is used to make more consistent decisions. Therefore
flexibly-bounded rationality expands the bounds within which rationality is
exercised. Because human decision making is essentially irrational, this paper
proposes the theory of marginalization of irrationality in decision making to
deal with the problem of satisficing in the presence of irrationality.
|
1306.2035 | Minimax Theory for High-dimensional Gaussian Mixtures with Sparse Mean
Separation | stat.ML cs.LG math.ST stat.TH | While several papers have investigated computationally and statistically
efficient methods for learning Gaussian mixtures, precise minimax bounds for
their statistical performance as well as fundamental limits in high-dimensional
settings are not well-understood. In this paper, we provide precise information
theoretic bounds on the clustering accuracy and sample complexity of learning a
mixture of two isotropic Gaussians in high dimensions under small mean
separation. If there is a sparse subset of relevant dimensions that determine
the mean separation, then the sample complexity only depends on the number of
relevant dimensions and mean separation, and can be achieved by a simple
computationally efficient procedure. Our results provide the first step of a
theoretical basis for recent methods that combine feature selection and
clustering.
|
1306.2040 | A Numerical Example about the Geometric Approach to the Output
Regulation Problem with Stability for Linear Switching Systems | cs.SY math.OC | This note presents a numerical example worked out in order to illustrate the
solution to the output regulation problem with quadratic stability for linear
switching systems derived in [1].
|
1306.2081 | 3D model retrieval using global and local radial distances | cs.GR cs.CV cs.IR | 3D model retrieval techniques can be classified as histogram-based,
view-based and graph-based approaches. We propose a hybrid shape descriptor
which combines the global and local radial distance features by utilizing the
histogram-based and view-based approaches respectively. We define an
area-weighted global radial distance with respect to the center of the bounding
sphere of the model and encode its distribution into a 2D histogram as the
global radial distance shape descriptor. We then uniformly divide the bounding
cube of a 3D model into a set of small cubes and define their centers as local
centers. Then, we compute the local radial distance of a point based on the
nearest local center. By sparsely sampling a set of views and encoding the
local radial distance feature on the rendered views by color coding, we extract
the local radial distance shape descriptor. Based on these two shape
descriptors, we develop a hybrid radial distance shape descriptor for 3D model
retrieval. Experiment results show that our hybrid shape descriptor outperforms
several typical histogram-based and view-based approaches.
|
1306.2084 | Logistic Tensor Factorization for Multi-Relational Data | stat.ML cs.LG | Tensor factorizations have become increasingly popular approaches for various
learning tasks on structured data. In this work, we extend the RESCAL tensor
factorization, which has shown state-of-the-art results for multi-relational
learning, to account for the binary nature of adjacency tensors. We study the
improvements that can be gained via this approach on various benchmark datasets
and show that the logistic extension can improve the prediction results
significantly.
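The prediction step of the logistic extension can be sketched as follows: the RESCAL bilinear score is squashed through a sigmoid so the reconstruction models a binary adjacency slice. The entity embeddings A and relation matrix R below are illustrative, not trained factors.

```python
import numpy as np

def logistic_rescal_scores(A, R):
    """Link probabilities for one relation slice: the bilinear RESCAL
    score a_i^T R a_j for every entity pair (i, j), passed through a
    logistic sigmoid to account for binary adjacency entries."""
    scores = A @ R @ A.T               # bilinear scores for all entity pairs
    return 1.0 / (1.0 + np.exp(-scores))

# Two toy entities with one-hot embeddings and an antisymmetric relation
A = np.eye(2)
R = np.array([[0.0, 5.0], [-5.0, 0.0]])
P = logistic_rescal_scores(A, R)       # P[0, 1] is near 1, P[1, 0] near 0
```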
|
1306.2086 | Byzantine Fault Tolerant Distributed Quickest Change Detection | math.PR cs.IT cs.SY math.IT math.OC | We introduce and solve the problem of Byzantine fault tolerant distributed
quickest change detection in both continuous and discrete time setups. In this
problem, multiple sensors sequentially observe random signals from the
environment and send their observations to a control center that will determine
whether there is a change in the statistical behavior of the observations. We
assume that the signals are independent and identically distributed across
sensors. An unknown subset of sensors are compromised and will send arbitrarily
modified and even artificially generated signals to the control center. It is
shown that the performance of the so-called CUSUM statistic, which is
optimal when all sensors are honest, will be significantly degraded in the
presence of even a single dishonest sensor. In particular, instead of growing
logarithmically, the detection delay grows linearly with the average run length
(ARL) to false alarm. To mitigate such a performance degradation, we propose a
fully distributed low complexity detection scheme. We show that the proposed
scheme can recover the log scaling. We also propose a centralized group-wise
scheme that can further reduce the detection delay.
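The honest-sensor CUSUM statistic referenced above can be sketched for a Gaussian mean shift; the means, noise variance, threshold, and function name are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def cusum_alarm(samples, pre_mean, post_mean, sigma=1.0, threshold=8.0):
    """Page's CUSUM for a Gaussian mean shift: accumulate the
    log-likelihood ratio, reflect at zero, and alarm on crossing."""
    w = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood ratio of the post-change vs pre-change density
        llr = (post_mean - pre_mean) * (x - (pre_mean + post_mean) / 2.0) / sigma**2
        w = max(0.0, w + llr)
        if w >= threshold:
            return n                   # alarm time (sample index)
    return None                        # no alarm raised
```

A Byzantine sensor can inflate the pre-change drift of this statistic, which is why the delay/ARL trade-off degrades from logarithmic to linear without the mitigation the paper proposes.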
|