| id (stringlengths 9-16) | title (stringlengths 4-278) | categories (stringlengths 5-104) | abstract (stringlengths 6-4.09k) |
|---|---|---|---|
1203.0594
|
Learning DNF Expressions from Fourier Spectrum
|
cs.LG cs.CC cs.DS
|
Since its introduction by Valiant in 1984, PAC learning of DNF expressions
remains one of the central problems in learning theory. We consider this
problem in the setting where the underlying distribution is uniform, or more
generally, a product distribution. Kalai, Samorodnitsky and Teng (2009) showed
that in this setting a DNF expression can be efficiently approximated from its
"heavy" low-degree Fourier coefficients alone. This is in contrast to previous
approaches where boosting was used and thus Fourier coefficients of the target
function modified by various distributions were needed. This property is
crucial for learning of DNF expressions over smoothed product distributions, a
learning model introduced by Kalai et al. (2009) and inspired by the seminal
smoothed analysis model of Spielman and Teng (2001).
We introduce a new approach to learning (or approximating) polynomial
threshold functions, which is based on creating a function with range [-1,1]
that approximately agrees with the unknown function on low-degree Fourier
coefficients. We then describe conditions under which this is sufficient for
learning polynomial threshold functions. Our approach yields a new, simple
algorithm for approximating any polynomial-size DNF expression from its "heavy"
low-degree Fourier coefficients alone. Our algorithm greatly simplifies the
proof of learnability of DNF expressions over smoothed product distributions.
We also describe an application of our algorithm to learning monotone DNF
expressions over product distributions. Building on the work of Servedio
(2001), we give an algorithm that runs in time $\poly((s \cdot
\log{(s/\eps)})^{\log{(s/\eps)}}, n)$, where $s$ is the size of the target DNF
expression and $\eps$ is the accuracy. This improves on the $\poly((s \cdot
\log{(ns/\eps)})^{\log{(s/\eps)} \cdot \log{(1/\eps)}}, n)$ bound of Servedio
(2001).
|
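The algorithm above consumes only the "heavy" low-degree Fourier coefficients of the target. As a minimal illustrative sketch (the function name and sampling scheme are ours, not the paper's), a single coefficient $\hat f(S) = E_x[f(x)\chi_S(x)]$ of a Boolean function $f: \{-1,1\}^n \to \{-1,1\}$ can be estimated under the uniform distribution by Monte Carlo sampling:

```python
import random

def fourier_coeff(f, S, n, samples=20000, seed=0):
    """Monte Carlo estimate of the Fourier coefficient
    f_hat(S) = E_x[f(x) * chi_S(x)] under the uniform
    distribution on {-1,1}^n, where chi_S(x) = prod_{i in S} x_i."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        chi = 1
        for i in S:
            chi *= x[i]          # character chi_S evaluated at x
        total += f(x) * chi
    return total / samples
```

For the parity function on two bits, the coefficient at $S = \{0,1\}$ is exactly 1 and all other coefficients vanish, which the estimator recovers.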
1203.0617
|
Bayesian inference under differential privacy
|
cs.DB
|
Bayesian inference is an important technique throughout statistics. The
essence of Bayesian inference is to derive the posterior belief by updating
the prior belief with learned information, which here is a set of
differentially private answers. Although Bayesian inference can be used in a
variety of applications, it becomes theoretically hard to solve when the
number of differentially private answers is large. To facilitate Bayesian
inference under differential privacy, this paper proposes a systematic
mechanism. The key step of the mechanism is the implementation of Bayesian
updating with the best linear unbiased estimator derived from the Gauss-Markov
theorem. In addition, we apply the proposed inference mechanism to an online
query-answering system, the novelty of which is that the utility for users is
guaranteed by Bayesian inference in the form of a credible interval and
confidence level. Theoretical and experimental analyses demonstrate the
efficiency and effectiveness of both the inference mechanism and the online
query-answering system.
|
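The key updating step can be illustrated with a minimal sketch of the Gauss-Markov best linear unbiased estimator, here assuming independent unbiased noisy answers to the same query with known noise variances (the function name is ours, not the paper's):

```python
def blue_combine(answers, variances):
    """Gauss-Markov BLUE for independent unbiased noisy answers
    to the same query: weights proportional to inverse variance.
    Returns the combined estimate and its (reduced) variance."""
    inv = [1.0 / v for v in variances]
    total_inv = sum(inv)
    weights = [w / total_inv for w in inv]       # sum to 1
    estimate = sum(w * a for w, a in zip(weights, answers))
    return estimate, 1.0 / total_inv
```

Combining two equally noisy answers halves the variance; an answer with three times the noise variance receives a quarter of the weight.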
1203.0631
|
Checking Tests for Read-Once Functions over Arbitrary Bases
|
cs.DM cs.CC cs.LG
|
A Boolean function is called read-once over a basis B if it can be expressed
by a formula over B where no variable appears more than once. A checking test
for a read-once function f over B depending on all its variables is a set of
input vectors distinguishing f from all other read-once functions of the same
variables. We show that every read-once function f over B has a checking test
containing O(n^l) vectors, where n is the number of relevant variables of f and
l is the largest arity of functions in B. For some functions, this bound cannot
be improved by more than a constant factor. The employed technique involves
reconstructing f from its l-variable projections and provides a stronger form
of Kuznetsov's classic theorem on read-once representations.
|
1203.0648
|
Towards Electronic Shopping of Composite Product
|
cs.SE cs.AI math.OC
|
In the paper, frameworks for electronic shopping of composite (modular)
products are described: (a) multicriteria selection (product is considered as a
whole system, it is a traditional approach), (b) combinatorial synthesis
(composition) of the product from its components, (c) aggregation of the
product from several selected products/prototypes. The following product model
is examined: (i) general tree-like structure, (ii) set of system
parts/components (leaf nodes), (iii) design alternatives (DAs) for each
component, (iv) ordinal priorities for DAs, and (v) estimates of compatibility
between DAs for different components. The combinatorial synthesis is realized
as morphological design of a composite (modular) product or an extended
composite product (e.g., product and support services as financial
instruments). Here the solving process is based on Hierarchical Morphological
Multicriteria Design (HMMD): (i) multicriteria selection of alternatives for
system parts, (ii) composing the selected alternatives into a resultant
combination (while taking into account ordinal quality of the alternatives
above and their compatibility). The aggregation framework is based on
consideration of aggregation procedures, for example: (i) addition procedure:
design of a product's substructure or an extended substructure ('kernel') and
addition of elements, and (ii) design procedure: design of the composite
solution based on all elements of product superstructure. Applied numerical
examples (e.g., composite product, extended composite product, product repair
plan, and product trajectory) illustrate the proposed approaches.
|
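The combinatorial synthesis step can be sketched as a brute-force search over one design alternative (DA) per component, with hypothetical priority and compatibility tables (all names below are illustrative; HMMD itself is hierarchical and considerably more elaborate):

```python
from itertools import combinations, product

def compose(das, priority, compat):
    """Choose one DA per component, discard combinations with any
    incompatible DA pair, rank the rest by total ordinal priority
    (lower is better). Returns the best combination and its score."""
    best, best_score = None, None
    for combo in product(*das):
        ok = all(compat.get(frozenset(p), 0) > 0
                 for p in combinations(combo, 2))
        if ok:
            score = sum(priority[d] for d in combo)
            if best is None or score < best_score:
                best, best_score = combo, score
    return best, best_score
```

With two components and one forbidden pair, the search returns the cheapest combination whose every pair is compatible.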
1203.0652
|
A likelihood-based framework for the analysis of discussion threads
|
cs.SI physics.soc-ph
|
Online discussion threads are conversational cascades in the form of posted
messages that can be generally found in social systems that comprise
many-to-many interaction such as blogs, news aggregators or bulletin board
systems. We propose a framework based on generative models of growing trees to
analyse the structure and evolution of discussion threads. We consider the
growth of a discussion to be determined by an interplay between popularity,
novelty and a trend (or bias) to reply to the thread originator. The relevance
of these features is estimated using a full likelihood approach, which allows
us to characterize the habits and communication patterns of a given platform and/or
community.
|
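A minimal simulation of such a generative growing-tree model, with illustrative parameters for popularity (current degree), novelty (exponential decay with age) and a bias toward the thread originator (the parameter names and functional forms are ours, chosen only to show the interplay of the three features):

```python
import random

def grow_thread(n_posts, alpha=1.0, tau=0.75, beta=2.0, seed=0):
    """Simulate a discussion thread as a growing tree. Each new
    post t replies to an existing post j with weight
    popularity (degree) * novelty (tau^(t-j)) * root bias (beta)."""
    rng = random.Random(seed)
    parents = [None]            # post 0 is the thread originator
    degree = [1]
    for t in range(1, n_posts):
        weights = []
        for j in range(t):
            w = alpha * degree[j] * tau ** (t - j)
            if j == 0:
                w *= beta       # extra attractiveness of the originator
            weights.append(w)
        j = rng.choices(range(t), weights=weights)[0]
        parents.append(j)
        degree[j] += 1
        degree.append(1)
    return parents
```

The returned parent array is a valid cascade: every post replies to an earlier one, so fitting the three weights by maximum likelihood on observed threads is a well-posed problem.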
1203.0653
|
Kolmogorov complexity and the asymptotic bound for error-correcting
codes
|
cs.IT math.IT
|
The set of all error-correcting block codes over a fixed alphabet with $q$
letters determines a recursively enumerable set of rational points in the unit
square with coordinates $(R,\delta)$:= (relative transmission rate, relative
minimal distance). Limit points of this set form a closed subset, defined by
$R\le \alpha_q(\delta)$, where $\alpha_q(\delta)$ is a continuous decreasing
function called the asymptotic bound. Its existence was proved by the first-named
author in 1981 ([Man1]), but no approaches to the computation of this function
are known, and in [Man5] it was even suggested that this function might be
uncomputable in the sense of constructive analysis.
In this note we show that the asymptotic bound becomes computable with the
assistance of an oracle producing codes in the order of their growing
Kolmogorov complexity. Moreover, a natural partition function involving
complexity allows us to interpret the asymptotic bound as a curve dividing two
different thermodynamic phases of codes.
|
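The rational points in question are easy to compute for concrete codes. A small sketch (the helper names are ours; the Singleton bound serves only as a sanity check, not as the asymptotic bound itself):

```python
from fractions import Fraction

def code_point(n, k, d):
    """Map an [n, k, d] block code to its point
    (R, delta) = (k/n, d/n) in the unit square."""
    return Fraction(k, n), Fraction(d, n)

def singleton_ok(n, k, d):
    """Every code satisfies the Singleton bound k <= n - d + 1,
    i.e. R <= 1 - delta + 1/n; points violating it cannot occur."""
    return k <= n - d + 1
```

For the [7, 4, 3] Hamming code this gives the point (4/7, 3/7), comfortably inside the Singleton region.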
1203.0656
|
Contribution of Case Based Reasoning (CBR) in the Exploitation of Return
of Experience. Application to Accident Scenarii in Railroad Transport
|
cs.AI cs.HC
|
The study starts from a base of accident scenarios in rail transport (return
of experience), with the aim, first, of developing a tool to share, build and
sustain knowledge on safety and, second, of exploiting the stored knowledge to
prevent the recurrence of accidents and incidents. This tool should ultimately
lead to proposals of prevention and protection measures that minimize the risk
level of a new transport system and thus improve safety. The approach to
achieving this goal relies largely on artificial intelligence techniques, and
in particular on an automatic learning method, to develop a feasibility model
of a software tool based on case-based reasoning (CBR) that exploits the
stored knowledge to create know-how that can help stimulate domain experts in
the tasks of analysis, evaluation and certification of a new system.
|
1203.0683
|
A Method of Moments for Mixture Models and Hidden Markov Models
|
cs.LG stat.ML
|
Mixture models are a fundamental tool in applied statistics and machine
learning for treating data taken from multiple subpopulations. The current
practice for estimating the parameters of such models relies on local search
heuristics (e.g., the EM algorithm) which are prone to failure, and existing
consistent methods are unfavorable due to their high computational and sample
complexity which typically scale exponentially with the number of mixture
components. This work develops an efficient method of moments approach to
parameter estimation for a broad class of high-dimensional mixture models with
many components, including multi-view mixtures of Gaussians (such as mixtures
of axis-aligned Gaussians) and hidden Markov models. The new method leads to
rigorous unsupervised learning results for mixture models that were not
achieved by previous works; and, because of its simplicity, it offers a viable
alternative to EM for practical deployment.
|
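The spirit of a method-of-moments estimator, in contrast to EM-style local search, can be shown on a toy one-dimensional case far simpler than the paper's multi-view setting (the model and function name are ours, purely illustrative):

```python
import math
import random

def estimate_mu(samples):
    """Toy method of moments for the symmetric mixture
    0.5*N(mu, 1) + 0.5*N(-mu, 1): since E[x^2] = mu^2 + 1,
    mu is read off the second empirical moment directly,
    with no iterative local search."""
    m2 = sum(x * x for x in samples) / len(samples)
    return math.sqrt(max(m2 - 1.0, 0.0))
```

Unlike EM, this estimator is a closed-form function of the data and cannot get stuck in a bad local optimum; the paper's contribution is making this idea work for high-dimensional, many-component mixtures.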
1203.0695
|
Cooperative Compute-and-Forward
|
cs.IT math.IT
|
We examine the benefits of user cooperation under compute-and-forward. Much
like in network coding, receivers in a compute-and-forward network recover
finite-field linear combinations of transmitters' messages. Recovery is enabled
by linear codes: transmitters map messages to a linear codebook, and receivers
attempt to decode the incoming superposition of signals to an integer
combination of codewords. However, the achievable computation rates are low if
channel gains do not correspond to a suitable linear combination. In response
to this challenge, we propose a cooperative approach to compute-and-forward. We
devise a lattice-coding approach to block Markov encoding with which we
construct a decode-and-forward style computation strategy. Transmitters
broadcast lattice codewords, decode each other's messages, and then
cooperatively transmit resolution information to aid receivers in decoding the
integer combinations. Using our strategy, we show that cooperation offers a
significant improvement both in the achievable computation rate and in the
diversity-multiplexing tradeoff.
|
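The finite-field linearity that compute-and-forward exploits can be sketched directly: for a linear code, an integer combination of codewords reduced modulo $q$ is the codeword of the same combination of the messages (the generator matrix and parameters below are illustrative, not from the paper):

```python
def encode(G, m, q):
    """Encode row vector m with generator matrix G over Z_q."""
    cols = len(G[0])
    return [sum(mi * G[i][j] for i, mi in enumerate(m)) % q
            for j in range(cols)]

def lincomb(vectors, coeffs, q):
    """Finite-field linear combination of equal-length vectors --
    the object a compute-and-forward receiver decodes."""
    return [sum(a * v[j] for a, v in zip(coeffs, vectors)) % q
            for j in range(len(vectors[0]))]
```

Because encoding commutes with linear combinations, a receiver that decodes the superposition to an integer combination of codewords has implicitly recovered a finite-field combination of the messages.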
1203.0696
|
Dynamic Server Allocation over Time Varying Channels with Switchover
Delay
|
math.OC cs.IT math.IT
|
We consider a dynamic server allocation problem over parallel queues with
randomly varying connectivity and server switchover delay between the queues.
At each time slot the server decides either to stay with the current queue or
switch to another queue based on the current connectivity and the queue length
information. Switchover delay occurs in many telecommunications applications
and is a new modeling component of this problem that has not been previously
addressed. We show that the simultaneous presence of randomly varying
connectivity and switchover delay changes the system stability region and the
structure of optimal policies. In the first part of the paper, we consider a
system of two parallel queues, and develop a novel approach to explicitly
characterize the stability region of the system using state-action frequencies
which are stationary solutions to a Markov Decision Process (MDP) formulation.
We then develop a frame-based dynamic control (FBDC) policy, based on the
state-action frequencies, and show that it is throughput-optimal asymptotically
in the frame length. The FBDC policy is applicable to a broad class of network
control systems and provides a new framework for developing throughput-optimal
network control policies using state-action frequencies. Furthermore, we
develop simple Myopic policies that provably achieve more than 90% of the
stability region. In the second part of the paper, we extend our results to
systems with an arbitrary but finite number of queues.
|
1203.0697
|
Learning High-Dimensional Mixtures of Graphical Models
|
stat.ML cs.AI cs.LG
|
We consider unsupervised estimation of mixtures of discrete graphical models,
where the class variable corresponding to the mixture components is hidden and
each mixture component over the observed variables can have a potentially
different Markov graph structure and parameters. We propose a novel approach
for estimating the mixture components, and our output is a tree-mixture model
which serves as a good approximation to the underlying graphical model mixture.
Our method is efficient when the union graph, which is the union of the Markov
graphs of the mixture components, has sparse vertex separators between any pair
of observed variables. This includes tree mixtures and mixtures of bounded
degree graphs. For such models, we prove that our method correctly recovers the
union graph structure and the tree structures corresponding to
maximum-likelihood tree approximations of the mixture components. The sample
and computational complexities of our method scale as $\poly(p, r)$, for an
$r$-component mixture of $p$-variate graphical models. We further extend our
results to the case when the union graph has sparse local separators between
any pair of observed variables, such as mixtures of locally tree-like graphs,
and the mixture components are in the regime of correlation decay.
|
1203.0699
|
Ambiguous Language and Differences in Beliefs
|
cs.AI cs.GT
|
Standard models of multi-agent modal logic do not capture the fact that
information is often ambiguous, and may be interpreted in different ways by
different agents. We propose a framework that can model this, and consider
different semantics that capture different assumptions about the agents'
beliefs regarding whether or not there is ambiguity. We consider the impact of
ambiguity on a seminal result in economics: Aumann's result saying that agents
with a common prior cannot agree to disagree. This result is known not to hold
if agents do not have a common prior; we show that it also does not hold in the
presence of ambiguity. We then consider the tradeoff between assuming a common
interpretation (i.e., no ambiguity) and a common prior (i.e., shared initial
beliefs).
|
1203.0714
|
Towards an intelligence based conceptual framework for e-maintenance
|
cs.MA
|
Since the concept of e-maintenance was introduced, most works have insisted on
the relevance of the underlying Information and Communication Technologies
infrastructure. Through a review of current e-maintenance conceptual
approaches and realizations, this paper aims to reconsider the predominance of
ICT within e-maintenance projects and literature. The review brings to light
the importance of intelligence as a fundamental dimension of e-maintenance,
one that should be pursued in a holistic, predefined manner rather than
through isolated efforts within ICT-driven approaches. As a contribution
towards an intelligence-based e-maintenance conceptual framework, a proposal
is outlined in this paper to model an e-maintenance system as an intelligent
system. The proposed frame is based on the CogAff architecture for intelligent
agents. Within the proposed frame, particular importance is given to the
environment that the system must continuously be aware of: the plant
environment, the internal and external enterprise environment and the human
environment. In addition to the abilities required for internally coherent
behavior of the system, requirements for supporting maintenance activities are
also mapped within the same frame according to the corresponding levels of
management. A case study is detailed in this paper supporting the
applicability of the proposal in relation to the classification of existing
e-maintenance platforms. However, more work is needed to enhance the
exhaustiveness of the frame so that it can serve as a comparison tool for
existing e-maintenance systems. At the conceptual level, our future work is to
use the proposed frame in an e-maintenance project.
|
1203.0728
|
The maximum number of minimal codewords in an $[n,k]-$code
|
cs.IT math.CO math.IT
|
Upper and lower bounds are derived for the quantity in the title, which is
tabulated for modest values of $n$ and $k.$ An application to graphs with many
cycles is given.
|
1203.0730
|
Achievability proof via output statistics of random binning
|
cs.IT math.IT
|
This paper introduces a new and ubiquitous framework for establishing
achievability results in \emph{network information theory} (NIT) problems. The
framework uses random binning arguments and is based on a duality between
channel and source coding problems. Further, the framework uses pmf
approximation arguments instead of counting and typicality. This allows for
proving coordination and \emph{strong} secrecy problems where certain
statistical conditions on the distribution of random variables need to be
satisfied. These statistical conditions include independence between messages
and the eavesdropper's observations in secrecy problems and closeness to a
certain distribution (usually an i.i.d. distribution) in coordination
problems. One important feature of the framework is that it enables one to add
an eavesdropper and obtain a result on the secrecy rates "for free."
We make a case for the generality of the framework by studying examples in a
variety of settings, including channel coding, lossy source coding, joint
source-channel coding, coordination, strong secrecy, feedback and relaying. In
particular, by investigating the framework for the lossy source coding problem
over the broadcast channel, it is shown that the new framework provides a
simple alternative to the \emph{hybrid} coding scheme. Also, new results on
the secrecy rate region (under the strong secrecy criterion) of the wiretap
broadcast channel and the wiretap relay channel are derived. In a set of
accompanying papers, we have shown the usefulness of the framework in
establishing achievability results for coordination problems, including
interactive channel simulation, coordination via a relay and channel
simulation via another channel.
|
1203.0731
|
Coordination via a relay
|
cs.IT math.IT
|
In this paper, we study the problem of coordinating two nodes which can only
exchange information via a relay at limited rates. The nodes are allowed to do
a two-round interactive two-way communication with the relay, after which they
should be able to generate i.i.d. copies of two random variables with a given
joint distribution within a vanishing total variation distance. We prove inner
and outer bounds on the coordination capacity region for this problem. Our
inner bound is proved using the technique of "output statistics of random
binning" that has recently been developed by Yassaee et al.
|
1203.0744
|
A Report on Multilinear PCA Plus Multilinear LDA to Deal with Tensorial
Data: Visual Classification as An Example
|
cs.CV
|
In practical applications we often have to deal with high-order data; for
example, a grayscale image and a video sequence are intrinsically a 2nd-order
tensor and a 3rd-order tensor, respectively. For clustering or classification
of such high-order data, the conventional way is to vectorize the data
beforehand, as PCA or FDA does, which often induces the curse-of-dimensionality
problem. For this reason, experts have developed many methods to deal with
tensorial data, such as multilinear PCA, multilinear LDA, and so on. In this
paper, we again address the problem of high-order data representation and
recognition, and propose to study the result of merging multilinear PCA and
multilinear LDA into one scenario, which we name \textbf{GDA}, for Generalized
Discriminant Analysis. To evaluate GDA, we perform a series of experiments,
and the experimental results demonstrate that our GDA outperforms a selection
of competing methods such as (2D)$^2$PCA, (2D)$^2$LDA, and MDA.
|
1203.0747
|
A review of EO image information mining
|
cs.IR
|
We analyze the state of the art of content-based retrieval in Earth
observation image archives focusing on complete systems showing promise for
operational implementation. The different paradigms at the basis of the main
system families are introduced. The approaches taken are analyzed, focusing in
particular on the phases after primitive feature extraction. The solutions
envisaged for issues related to feature simplification and synthesis,
indexing, and semantic labeling are reviewed. The methodologies for query
specification and execution are analyzed.
|
1203.0781
|
Posterior Mean Super-Resolution with a Compound Gaussian Markov Random
Field Prior
|
cs.CV
|
This manuscript proposes a posterior mean (PM) super-resolution (SR) method
with a compound Gaussian Markov random field (MRF) prior. SR is a technique to
estimate a spatially high-resolution image from observed multiple
low-resolution images. A compound Gaussian MRF model provides a preferable
prior for natural images that preserves edges. PM is the optimal estimator for
the objective function of peak signal-to-noise ratio (PSNR). This estimator is
numerically determined by using variational Bayes (VB). We then solve the
conjugate prior problem on VB and the exponential-order calculation cost
problem of a compound Gaussian MRF prior with simple Taylor approximations. In
experiments, the proposed method generally outperforms existing methods.
|
1203.0788
|
Evolution of Wikipedia's Category Structure
|
physics.soc-ph cs.DL cs.SI
|
Wikipedia, as a social phenomenon of collaborative knowledge creation, has
been studied extensively from various points of views. The category system of
Wikipedia, introduced in 2004, has attracted relatively little attention. In
this study, we focus on the documentation of knowledge, and the transformation
of this documentation with time. We take Wikipedia as a proxy for knowledge in
general and its category system as an aspect of the structure of this
knowledge. We investigate the evolution of the category structure of the
English Wikipedia from its birth in 2004 to 2008. We treat the category system
as if it were a hierarchical Knowledge Organization System, capturing the changes
in the distributions of the top categories. We investigate how the clustering
of articles, defined by the category system, matches the direct link network
between the articles and show how it changes over time. We find the Wikipedia
category network mostly stable, but with occasional reorganization. We show
that the clustering matches the link structure quite well, except during short
periods preceding the reorganizations.
|
1203.0856
|
Online Discriminative Dictionary Learning for Image Classification Based
on Block-Coordinate Descent Method
|
cs.CV
|
Previous research has demonstrated that the framework of dictionary learning
with sparse coding, in which signals are decomposed as linear combinations of
a few atoms of a learned dictionary, is well suited to reconstruction tasks.
This framework has also been used for discrimination tasks such as image
classification. To achieve better classification performance, experts have
developed several methods to learn a discriminative dictionary in a supervised
manner. However, when the data become extremely large in scale, these methods
are no longer effective, as they are all batch-oriented approaches. For this
reason, we propose a novel online algorithm for discriminative dictionary
learning, dubbed \textbf{ODDL} in this paper. First, we introduce a linear
classifier into the conventional dictionary learning formulation and derive a
discriminative dictionary learning problem. Then, we exploit an online
algorithm to solve the derived problem. Unlike most existing approaches, which
update the dictionary and classifier alternately by iteratively solving
sub-problems, our approach updates them jointly. Meanwhile, it largely
shortens the training time and is particularly suitable for large-scale
classification problems. To evaluate the performance of the proposed ODDL
approach in image recognition, we conduct experiments on three well-known
benchmarks, and the experimental results demonstrate that ODDL is fairly
promising for image classification tasks.
|
1203.0876
|
An MLP based Approach for Recognition of Handwritten `Bangla' Numerals
|
cs.CV cs.AI
|
The work presented here involves the design of a Multi Layer Perceptron (MLP)
based pattern classifier for recognition of handwritten Bangla digits using a
76-element feature vector. Bangla is the second most popular script and
language in the Indian subcontinent and the fifth most popular language in the
world. The feature set developed for representing handwritten Bangla numerals
here includes 24 shadow features, 16 centroid features and 36 longest-run
features. On experimentation with a database of 6000 samples, the technique
yields an average recognition rate of 96.67% evaluated after three-fold cross
validation of results. It is useful for applications related to OCR of
handwritten Bangla digits and can also be extended to OCR of handwritten
characters of the Bangla alphabet.
|
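One family of the 76-element feature set, the longest-run features, can be sketched for a binary digit image (illustrative only; the paper's exact 36-feature construction may partition the image differently):

```python
def longest_run_features(img):
    """For each row and each column of a binary image, the length
    of the longest run of foreground (1) pixels. A sketch of one
    family in the MLP classifier's feature vector."""
    def longest_run(line):
        best = cur = 0
        for v in line:
            cur = cur + 1 if v else 0
            best = max(best, cur)
        return best
    rows = [longest_run(r) for r in img]
    cols = [longest_run(c) for c in zip(*img)]
    return rows + cols
```

The resulting integers (optionally normalized) are concatenated with the shadow and centroid features before being fed to the MLP.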
1203.0882
|
Handwritten Bangla Alphabet Recognition using an MLP Based Classifier
|
cs.CV cs.AI
|
The work presented here involves the design of a Multi Layer Perceptron (MLP)
based classifier for recognition of the handwritten Bangla alphabet using a
76-element feature set. Bangla is the second most popular script and language
in the Indian subcontinent and the fifth most popular language in the world. The
feature set developed for representing handwritten characters of Bangla
alphabet includes 24 shadow features, 16 centroid features and 36 longest-run
features. Recognition performances of the MLP designed to work with this
feature set are experimentally observed as 86.46% and 75.05% on the samples of
the training and the test sets respectively. The work has useful application in
the development of a complete OCR system for handwritten Bangla text.
|
1203.0905
|
Autocalibration with the Minimum Number of Cameras with Known Pixel
Shape
|
cs.CV
|
In 3D reconstruction, the recovery of the calibration parameters of the
cameras is paramount since it provides metric information about the observed
scene, e.g., measures of angles and ratios of distances. Autocalibration
enables the estimation of the camera parameters without using a calibration
device, but by enforcing simple constraints on the camera parameters. In the
absence of information about the internal camera parameters such as the focal
length and the principal point, the knowledge of the camera pixel shape is
usually the only available constraint. Given a projective reconstruction of a
rigid scene, we address the problem of the autocalibration of a minimal set of
cameras with known pixel shape and otherwise arbitrarily varying intrinsic and
extrinsic parameters. We propose an algorithm that only requires 5 cameras (the
theoretical minimum), thus halving the number of cameras required by previous
algorithms based on the same constraint. To this purpose, we introduce as our
basic geometric tool the six-line conic variety (SLCV), consisting of the set
of planes intersecting six given lines of 3D space in points of a conic. We
show that the set of solutions of the Euclidean upgrading problem for three
cameras with known pixel shape can be parameterized in a computationally
efficient way. This parameterization is then used to solve autocalibration from
five or more cameras, reducing the three-dimensional search space to a
two-dimensional one. We provide experiments with real images showing the good
performance of the technique.
|
1203.0924
|
An Efficient Algorithm to Calculate BICM Capacity
|
cs.IT math.IT
|
Bit-interleaved coded modulation (BICM) is a practical approach for reliable
communication over the AWGN channel in the bandwidth limited regime. For a
signal point constellation with 2^m points, BICM labels the signal points with
bit strings of length m and then treats these m bits separately both at the
transmitter and the receiver. BICM capacity is defined as the maximum of a
certain achievable rate. Maximization has to be done over the probability mass
functions (pmf) of the bits. This is a non-convex optimization problem. So far,
the optimal bit pmfs were determined via exhaustive search, which is of
exponential complexity in m. In this work, an algorithm called bit-alternating
convex concave method (Bacm) is developed. This algorithm calculates BICM
capacity with a complexity that scales approximately as m^3. The algorithm
iteratively applies convex optimization techniques. Bacm is used to calculate
BICM capacity of 4-, 8-, 16-, 32-, and 64-PAM in AWGN. For PAM constellations with
more than 8 points, the presented values are the first results known in the
literature.
|
1203.0960
|
Near Capacity Approaching for Large MIMO Systems by Non-Binary LDPC
Codes with MMSE Detection
|
cs.IT math.IT
|
In this paper, we have investigated the application of non-binary LDPC codes
to spatial multiplexing MIMO systems with a large number of low power antennas.
We demonstrate that such large MIMO systems, incorporating a low-complexity
MMSE detector and non-binary LDPC codes, can achieve a low probability of bit
error at near MIMO capacity. The new proposed non-binary LDPC coded system also
performs better than other coded large MIMO systems known in the present
literature. For instance, a non-binary LDPC coded BPSK-MIMO system with 600
transmit/receive antennas performs within 3.4 dB from the capacity while the
best known turbo coded system operates about 9.4 dB away from the capacity.
Based on the simulation results provided in this paper, the proposed non-binary
LDPC coded large MIMO system is capable of supporting ultra high spectral
efficiency at low bit error rate.
|
1203.0970
|
Infinite Shift-invariant Grouped Multi-task Learning for Gaussian
Processes
|
cs.LG astro-ph.IM stat.ML
|
Multi-task learning leverages shared information among data sets to improve
the learning performance of individual tasks. The paper applies this framework
for data where each task is a phase-shifted periodic time series. In
particular, we develop a novel Bayesian nonparametric model capturing a mixture
of Gaussian processes where each task is a sum of a group-specific function and
a component capturing individual variation, in addition to each task being
phase shifted. We develop an efficient \textsc{em} algorithm to learn the
parameters of the model. As a special case we obtain the Gaussian mixture model
and \textsc{em} algorithm for phase-shifted periodic time series. Furthermore,
we extend the proposed model by using a Dirichlet Process prior, thereby
obtaining an infinite mixture model that is capable of automatic model
selection. A Variational Bayesian approach is developed for inference in this
model. Experiments in regression, classification and class discovery
demonstrate the performance of the proposed models using both synthetic data
and real-world time series data from astrophysics. Our methods are particularly
useful when the time series are sparsely and non-synchronously sampled.
|
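The per-task phase shift can be illustrated with a simple circular cross-correlation estimator, a preprocessing-style stand-in for the shifts that the model learns jointly with the group structure (the function is ours, not the paper's EM step, and it assumes densely, synchronously sampled series):

```python
def best_shift(x, y):
    """Estimate the circular shift s for which x advanced by s
    best matches y, by maximizing circular cross-correlation."""
    n = len(x)
    def corr(s):
        # dot product of y with x circularly shifted by s
        return sum(y[i] * x[(i - s) % n] for i in range(n))
    return max(range(n), key=corr)
```

For a sampled sinusoid shifted by 5 positions, the estimator recovers the shift exactly; the paper's point is precisely that such alignment becomes non-trivial under sparse, non-synchronous sampling, motivating the joint Bayesian treatment.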
1203.1005
|
Sparse Subspace Clustering: Algorithm, Theory, and Applications
|
cs.CV cs.IR cs.IT cs.LG math.IT math.OC stat.ML
|
In many real-world problems, we are dealing with collections of
high-dimensional data, such as images, videos, text and web documents, DNA
microarray data, and more. Often, high-dimensional data lie close to
low-dimensional structures corresponding to the several classes or categories
to which the data belong. In this paper, we propose and study an algorithm, called
Sparse Subspace Clustering (SSC), to cluster data points that lie in a union of
low-dimensional subspaces. The key idea is that, among infinitely many possible
representations of a data point in terms of other points, a sparse
representation corresponds to selecting a few points from the same subspace.
This motivates solving a sparse optimization program whose solution is used in
a spectral clustering framework to infer the clustering of data into subspaces.
Since solving the sparse optimization program is in general NP-hard, we
consider a convex relaxation and show that, under appropriate conditions on the
arrangement of subspaces and the distribution of data, the proposed
minimization program succeeds in recovering the desired sparse representations.
The resulting optimization program can be solved efficiently, and the
algorithm can handle data points near the intersections of subspaces. Another
key advantage of the proposed
algorithm with respect to the state of the art is that it can deal with data
nuisances, such as noise, sparse outlying entries, and missing entries,
directly by incorporating the model of the data into the sparse optimization
program. We demonstrate the effectiveness of the proposed algorithm through
experiments on synthetic data as well as the two real-world problems of motion
segmentation and face clustering.
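As a rough, self-contained sketch of the key idea (not the authors' algorithm), the following replaces the sparse optimization program with greedy orthogonal matching pursuit and checks that each point's sparse self-representation selects points from its own subspace:

```python
import numpy as np

def omp_coeffs(Y, idx, k=1):
    """Greedy (OMP) sparse self-representation of column idx of Y in terms
    of the other columns, a stand-in for the l1 program used by SSC."""
    y = Y[:, idx]
    D = np.delete(Y, idx, axis=1)          # all other points as dictionary
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    # map dictionary indices back to original column indices
    return [j if j < idx else j + 1 for j in support]

# Two 1-D subspaces (lines) in R^3, three points on each.
u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 1.0]) / np.sqrt(2)
Y = np.column_stack([2*u, -u, 3*u, v, -2*v, 4*v])
labels = [0, 0, 0, 1, 1, 1]
for i in range(6):
    (j,) = omp_coeffs(Y, i, k=1)
    print(i, '->', j, 'same subspace:', labels[i] == labels[j])
```

In a full pipeline the resulting coefficients would populate an affinity matrix fed to spectral clustering; here we only verify the self-expressiveness property on a toy example.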
|
1203.1007
|
Agnostic System Identification for Model-Based Reinforcement Learning
|
cs.LG cs.AI cs.SY stat.ML
|
A fundamental problem in control is to learn a model of a system from
observations that is useful for controller synthesis. To provide good
performance guarantees, existing methods must assume that the real system is in
the class of models considered during learning. We present an iterative method
with strong guarantees even in the agnostic case where the system is not in the
class. In particular, we show that any no-regret online learning algorithm can
be used to obtain a near-optimal policy, provided some model achieves low
training error and access to a good exploration distribution. Our approach
applies to both discrete and continuous domains. We demonstrate its efficacy
and scalability on a challenging helicopter domain from the literature.
|
1203.1021
|
Development of an Ontology to Assist the Modeling of Accident Scenarii
"Application on Railroad Transport "
|
cs.AI
|
In a world where communication and information sharing are at the heart of
our business, the terminology needs are most pressing. It has become imperative
to identify the terms used and defined in a consensual and coherent way while
preserving linguistic diversity. To streamline and strengthen the process of
acquisition, representation and exploitation of scenarii of train accidents, it
is necessary to harmonize and standardize the terminology used by players in
the security field. The research aims to significantly improve the analytical
activities and operations of the various safety studies by tracing errors to
their system, hardware, software, or human origins. This paper presents the contribution
of ontology to modeling scenarii for rail accidents through a knowledge model
based on a generic ontology and a domain ontology. After a detailed
presentation of the state of the art, this article presents the first results of
the developed model.
|
1203.1069
|
A Symbolic Approach to the Design of Nonlinear Networked Control Systems
|
cs.SY
|
Networked control systems (NCS) are spatially distributed systems where
communication among plants, sensors, actuators and controllers occurs in a
shared communication network. NCS have been studied for the last ten years and
important research results have been obtained. These results are in the area of
stability and stabilizability. However, while important, these results must be
complemented in different areas to be able to design effective NCS. In this
paper we approach the control design of NCS using symbolic (finite) models.
Symbolic models are abstract descriptions of continuous systems where one
symbol corresponds to an "aggregate" of continuous states. We consider a fairly
general multiple-loop network architecture where plants communicate with
digital controllers through a shared, non-ideal, communication network
characterized by variable sampling and transmission intervals, variable
communication delays, quantization errors, packet losses and limited bandwidth.
We first derive a procedure to obtain symbolic models that are proven to
approximate NCS in the sense of alternating approximate bisimulation. We then
use these symbolic models to design symbolic controllers that realize
specifications expressed in terms of automata on infinite strings. An example
is provided where we address the control design of a pair of nonlinear control
systems sharing a common communication network. The closed-loop NCS obtained is
validated through the OMNeT++ network simulation framework.
|
1203.1095
|
Search Combinators
|
cs.AI
|
The ability to model search in a constraint solver can be an essential asset
for solving combinatorial problems. However, existing infrastructure for
defining search heuristics is often inadequate. Either modeling capabilities
are extremely limited or users are faced with a general-purpose programming
language whose features are not tailored towards writing search heuristics. As
a result, major improvements in performance may remain unexplored.
This article introduces search combinators, a lightweight and
solver-independent method that bridges the gap between a conceptually simple
modeling language for search (high-level, functional and naturally
compositional) and an efficient implementation (low-level, imperative and
highly non-modular). By allowing the user to define application-tailored search
strategies from a small set of primitives, search combinators effectively
provide a rich domain-specific language (DSL) for modeling search to the user.
Remarkably, this DSL comes at a low implementation cost to the developer of a
constraint solver.
The article discusses two modular implementation approaches and shows, by
empirical evaluation, that search combinators can be implemented without
overhead compared to a native, direct implementation in a constraint solver.
|
1203.1105
|
Pairwise interaction pattern in the weighted communication network
|
physics.soc-ph cs.SI
|
Although recent studies show that both topological structures and human
dynamics can strongly affect information spreading on social networks, the
complicated interplay of the two significant factors has not yet been clearly
described. In this work, we find a strong pairwise interaction based on
analyzing the weighted network generated by the short message communication
dataset within a Chinese tele-communication provider. The pairwise interaction
bridges the network topological structure and human interaction dynamics, which
can promote local information spreading between pairs of communication partners
and in contrast can also suppress global information (e.g., rumor) cascade and
spreading. In addition, the pairwise interaction is the basic pattern of group
conversations and it can greatly reduce the waiting time of communication
events between a pair of intimate friends. Our findings are also helpful for
communication operators to design novel tariff strategies and optimize their
communication services.
|
1203.1150
|
A New Analysis Method for Simulations Using Node Categorizations
|
cs.SI physics.soc-ph
|
Most research concerning the influence of network structure on phenomena
taking place on the network focuses on relationships between global statistics of
the network structure and characteristic properties of those phenomena, even
though local structure has a significant effect on the dynamics of some
phenomena. In the present paper, we propose a new analysis method for phenomena
on networks based on a categorization of nodes. First, local statistics such as
the average path length and the clustering coefficient for a node are
calculated and assigned to the respective node. Then, the nodes are categorized
using the self-organizing map (SOM) algorithm. Characteristic properties of the
phenomena of interest are visualized for each category of nodes. The validity
of our method is demonstrated using the results of two simulation models. The
proposed method is useful as a research tool for understanding the behavior of
networks, in particular large-scale networks on which existing visualization
techniques do not work well.
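The categorization step can be sketched as follows; the choice of local statistics (degree and clustering coefficient), the tiny one-dimensional SOM, and the toy graph are all illustrative assumptions, not the paper's setup:

```python
import numpy as np

def clustering_coeff(A, i):
    """Local clustering coefficient of node i in adjacency matrix A."""
    nbrs = np.flatnonzero(A[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = A[np.ix_(nbrs, nbrs)].sum() / 2
    return 2.0 * links / (k * (k - 1))

def som_1d(X, units=3, epochs=200, lr=0.5, seed=0):
    """Tiny 1-D self-organizing map: returns a category per row of X."""
    rng = np.random.default_rng(seed)
    W = X[rng.integers(0, len(X), units)].astype(float)
    for t in range(epochs):
        x = X[rng.integers(len(X))]
        b = int(np.argmin(((W - x) ** 2).sum(1)))   # best-matching unit
        for u in range(units):                       # neighborhood update
            h = np.exp(-((u - b) ** 2) / 2.0)
            W[u] += lr * (1 - t / epochs) * h * (x - W[u])
    return np.argmin(((X[:, None, :] - W[None]) ** 2).sum(-1), axis=1)

# toy graph: a triangle attached to a path
A = np.zeros((6, 6), int)
for i, j in [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 5)]:
    A[i, j] = A[j, i] = 1
X = np.array([[A[i].sum(), clustering_coeff(A, i)] for i in range(6)])
cats = som_1d(X)
print(cats)
```

Nodes with identical local statistics (here the two pure triangle nodes) necessarily land in the same category, which is the property the analysis method exploits.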
|
1203.1179
|
An efficient strategy to suppress epidemic explosion in heterogeneous
metapopulation networks
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We propose an efficient strategy to suppress epidemic explosion in
heterogeneous metapopulation networks, wherein each node represents a
subpopulation with any number of individuals and is assigned a curing rate that
is proportional to $k^{\alpha}$ with $k$ the node degree and $\alpha$ an
adjustable parameter. We have performed stochastic simulations of the dynamical
reaction-diffusion processes associated with the
susceptible-infected-susceptible model in scale-free networks. We found that
the epidemic threshold reaches a maximum when the exponent $\alpha$ is tuned to
be $\alpha_{opt}\simeq 1.3$. This nontrivial phenomenon is robust to the change
of the network size and the average degree. In addition, we have carried out a
mean field analysis to further validate our scheme, which also demonstrates
that epidemic explosion follows different routes for $\alpha$ larger or smaller
than $\alpha_{opt}$. Our work suggests that, in order to effectively suppress
epidemic spreading on heterogeneous complex networks, subpopulations with
higher degrees should be allocated resources growing faster than linearly with
the degree $k$.
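A crude node-level mean-field stand-in for the scheme (the paper studies stochastic metapopulation dynamics, which this does not reproduce) illustrates how a curing rate proportional to $k^{\alpha}$ lowers prevalence on a small star graph; the parameter values are assumptions for illustration:

```python
import numpy as np

def sis_prevalence(A, beta, alpha, c=0.2, steps=500):
    """Discrete-time mean-field SIS where node i is cured with probability
    min(1, c * k_i**alpha), mimicking degree-dependent resource allocation."""
    k = A.sum(1)
    delta = np.minimum(1.0, c * k ** alpha)
    rho = np.full(len(A), 0.5)                 # initial infection density
    for _ in range(steps):
        p_inf = 1 - np.prod(1 - beta * A * rho[None, :], axis=1)
        rho = rho * (1 - delta) + (1 - rho) * p_inf
    return rho.mean()

# star graph: one hub of degree 5 plus leaves (a crude heterogeneous network)
n = 6
A = np.zeros((n, n))
A[0, 1:] = A[1:, 0] = 1
flat = sis_prevalence(A, beta=0.4, alpha=0.0)   # uniform curing
hub  = sis_prevalence(A, beta=0.4, alpha=1.3)   # hub gets more resources
print(flat, hub)
```

With the same total parameterization, concentrating curing on the high-degree hub yields a visibly lower stationary prevalence than uniform curing.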
|
1203.1180
|
Incremental Temporal Logic Synthesis of Control Policies for Robots
Interacting with Dynamic Agents
|
cs.RO
|
We consider the synthesis of control policies from temporal logic
specifications for robots that interact with multiple dynamic environment
agents. Each environment agent is modeled by a Markov chain whereas the robot
is modeled by a finite transition system (in the deterministic case) or Markov
decision process (in the stochastic case). Existing results in probabilistic
verification are adapted to solve the synthesis problem. To partially address
the state explosion issue, we propose an incremental approach where only a
small subset of environment agents is incorporated in the synthesis procedure
initially and more agents are successively added until we hit the constraints
on computational resources. Our algorithm runs in an anytime fashion where the
probability that the robot satisfies its specification increases as the
algorithm progresses.
|
1203.1212
|
Codes Satisfying the Chain Condition with a Poset Weights
|
cs.IT math.IT
|
In this paper we extend the concept of generalized Wei weights for
poset-weight codes and show that a linear code C satisfies the chain condition
if the support of C is a totally ordered subposet.
|
1203.1251
|
The collective oscillation period of inter-coupled Goodwin oscillators
|
cs.SY nlin.CD physics.bio-ph q-bio.MN
|
Many biological oscillators are arranged in networks composed of many
inter-coupled cellular oscillators. However, results are still lacking on the
collective oscillation period of inter-coupled gene regulatory oscillators,
which, as has been reported, may be different from the oscillation period of an
autonomous cellular oscillator. Based on the Goodwin oscillator, we analyze the
collective oscillation pattern of coupled cellular oscillator networks. First
we give a condition under which the oscillator network exhibits oscillatory and
synchronized behavior, then we estimate the collective oscillation period based
on a multivariable harmonic balance technique. Analytical results are derived
in terms of biochemical parameters, thus giving insight into the basic
mechanism of biological oscillation and providing guidance in synthetic biology
design. Simulation results are given to confirm the theoretical predictions.
|
1203.1263
|
NLSEmagic: Nonlinear Schr\"odinger Equation Multidimensional
Matlab-based GPU-accelerated Integrators using Compact High-order Schemes
|
cs.MS cs.CE physics.comp-ph
|
We present a simple to use, yet powerful code package called NLSEmagic to
numerically integrate the nonlinear Schr\"odinger equation in one, two, and
three dimensions. NLSEmagic is a high-order finite-difference code package
which utilizes graphic processing unit (GPU) parallel architectures. The codes
running on the GPU are many times faster than their serial counterparts, and
are much cheaper to run than on standard parallel clusters. The codes are
developed with usability and portability in mind, and therefore are written to
interface with MATLAB utilizing custom GPU-enabled C codes with the
MEX-compiler interface. The packages are freely distributed, including user
manuals and set-up files.
|
1203.1276
|
Optimal Control Design under Limited Model Information for Discrete-Time
Linear Systems with Stochastically-Varying Parameters
|
math.OC cs.SY
|
The value of plant model information available in the control design process
is discussed. We design optimal state-feedback controllers for interconnected
discrete-time linear systems with stochastically-varying parameters. The
parameters are assumed to be independently and identically distributed random
variables in time. The design of each controller relies only on (i) exact local
plant model information and (ii) statistical beliefs about the model of the
rest of the system. We consider both finite-horizon and infinite-horizon
quadratic cost functions. The optimal state-feedback controller is derived in
both cases. The optimal controller is shown to be linear in the state and to
depend on the model parameters and their statistics in a particular way.
Furthermore, we study the value of model information in optimal control design
using the performance degradation ratio which is defined as the supremum (over
all possible initial conditions) of the ratio of the cost of the optimal
controller with limited model information scaled by the cost of the optimal
controller with full model information. An upper bound for the performance
degradation ratio is presented for the case of fully-actuated subsystems.
Comparisons are made between designs based on limited, statistical, and full
model information. Throughout the paper, we use a power network example to
illustrate concepts and results.
|
1203.1278
|
Efficient recovery-based error estimation for the smoothed finite
element method for smooth and singular linear elasticity
|
cs.NA cs.CE math.NA
|
An error control technique aimed to assess the quality of smoothed finite
element approximations is presented in this paper. Finite element techniques
based on strain smoothing, which appeared in 2007, were shown to provide
significant advantages compared to conventional finite element approximations. In
particular, a widely cited strength of such methods is improved accuracy for
the same computational cost. Yet, few attempts have been made to directly
assess the quality of the results obtained during the simulation by evaluating
an estimate of the discretization error. Here we propose a recovery type error
estimator based on an enhanced recovery technique. The salient features of the
recovery are: enforcement of local equilibrium and, for singular problems, a
"smooth+singular" decomposition of the recovered stress. We evaluate the
proposed estimator on a number of test cases from linear elastic structural
mechanics and obtain precise error estimations whose effectivities, both at
local and global levels, are improved compared to recovery procedures not
implementing these features.
|
1203.1301
|
Optimal Use of Current and Outdated Channel State Information - Degrees
of Freedom of the MISO BC with Mixed CSIT
|
cs.IT math.IT
|
We consider a multiple-input-single-output (MISO) broadcast channel with
mixed channel state information at the transmitter (CSIT) that consists of
imperfect current CSIT and perfect outdated CSIT. Recent work by Kobayashi et
al. presented a scheme which exploits both imperfect current CSIT and perfect
outdated CSIT and achieves higher degrees of freedom (DoF) than possible with
only imperfect current CSIT or only outdated CSIT individually. In this work,
we further improve the achievable DoF in this setting by incorporating
additional private messages, and provide a tight information theoretic DoF
outer bound, thereby identifying the DoF optimal use of mixed CSIT. The new
result is stronger even in the original setting of only delayed CSIT, because
it allows us to remove the restricting assumption of statistically equivalent
fading for all users.
|
1203.1304
|
Analytical Modeling of Uplink Cellular Networks
|
cs.IT cs.NI math.IT math.PR
|
Cellular uplink analysis has typically been undertaken by either a simple
approach that lumps all interference into a single deterministic or random
parameter in a Wyner-type model, or via complex system level simulations that
often do not provide insight into why various trends are observed. This paper
proposes a novel middle way using point processes that is both accurate and
also results in easy-to-evaluate integral expressions based on the Laplace
transform of the interference. We assume mobiles and base stations are randomly
placed in the network with each mobile pairing up to its closest base station.
Compared to related recent work on downlink analysis, the proposed uplink model
differs in two key features. First, dependence is considered between user and
base station point processes to make sure each base station serves a single
mobile in the given resource block. Second, per-mobile power control is
included, which further couples the transmission of mobiles due to
location-dependent channel inversion. Nevertheless, we succeed in deriving the
coverage (equivalently outage) probability of a typical link in the network.
This model can be used to address a wide variety of system design questions in
the future. In this paper we focus on the implications for power control and
see that partial channel inversion should be used at low
signal-to-interference-plus-noise ratio (SINR), while full power transmission
is optimal at higher SINR.
|
1203.1338
|
Network Structure, Topology and Dynamics in Generalized Models of
Synchronization
|
cond-mat.dis-nn cs.SI nlin.CD physics.soc-ph
|
We explore the interplay of network structure, topology, and dynamic
interactions between nodes using the paradigm of distributed synchronization in
a network of coupled oscillators. As the network evolves to a global steady
state, interconnected oscillators synchronize in stages, revealing the
network's underlying community structure. Traditional models of synchronization assume
that interactions between nodes are mediated by a conservative process, such as
diffusion. However, social and biological processes are often non-conservative.
We propose a new model of synchronization in a network of oscillators coupled
via non-conservative processes. We study the synchronization dynamics of
synthetic and real-world networks and show that different synchronization
models reveal different structures within the same network.
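A traditional, conservatively coupled baseline of the kind the paper generalizes can be sketched with identical Kuramoto oscillators on a toy two-community graph; the graph and parameters are illustrative assumptions, not the authors' model:

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r = 1 means full synchrony."""
    return abs(np.exp(1j * theta).mean())

def kuramoto(A, K=1.0, dt=0.05, steps=2000, seed=1):
    """Identical oscillators with diffusive (conservative) sine coupling."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, len(A))
    r0 = order_parameter(theta)
    for _ in range(steps):
        coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(1)
        theta = theta + dt * K * coupling
    return r0, order_parameter(theta)

# two triangles joined by a single bridge edge (crude two-community graph)
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1
r0, r1 = kuramoto(A)
print(r0, r1)
```

Tracking the order parameter within each triangle separately during the transient is one way the staged, community-revealing synchronization mentioned above shows up.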
|
1203.1349
|
The Evolution of Complex Networks: A New Framework
|
physics.soc-ph cond-mat.stat-mech cs.SI stat.AP
|
We introduce a new framework for the analysis of the dynamics of networks,
based on randomly reinforced urn (RRU) processes, in which the weight of the
edges is determined by a reinforcement mechanism. We rigorously explain the
empirical evidence that in many real networks there is a subset of "dominant
edges" that control a major share of the total weight of the network.
Furthermore, we introduce a new statistical procedure to study the evolution
of networks over time, assessing whether a given instance of the network is
observed at its steady state. Our results are quite general, since they are not based on
a particular probability distribution or functional form of the weights. We
test our model in the context of the International Trade Network, showing the
existence of a core of dominant links and determining its size.
|
1203.1376
|
MIMO Multiple Access Channel with an Arbitrarily Varying Eavesdropper
|
cs.IT math.IT
|
A two-transmitter Gaussian multiple access wiretap channel with multiple
antennas at each of the nodes is investigated. The channel matrices at the
legitimate terminals are fixed and revealed to all the terminals, whereas the
channel matrix of the eavesdropper is arbitrarily varying and only known to the
eavesdropper. The secrecy degrees of freedom (s.d.o.f.) region under a strong
secrecy constraint is characterized. A transmission scheme that orthogonalizes
the transmit signals of the two users at the intended receiver and uses a
single-user wiretap code is shown to be sufficient to achieve the s.d.o.f.
region. The converse involves establishing an upper bound on a
weighted-sum-rate expression. This is accomplished by using induction, where at
each step one combines the secrecy and multiple-access constraints associated
with an adversary eavesdropping a carefully selected group of sub-channels.
|
1203.1378
|
Epidemic Intelligence for the Crowd, by the Crowd (Full Version)
|
cs.SI cs.CY physics.soc-ph
|
Tracking Twitter for public health has shown great potential. However, most
recent work has been focused on correlating Twitter messages to influenza
rates, a disease that exhibits a marked seasonal pattern. In the presence of
sudden outbreaks, how can social media streams be used to strengthen
surveillance capacity? In May 2011, Germany reported an outbreak of
Enterohemorrhagic Escherichia coli (EHEC). It was one of the largest described
outbreaks of EHEC/HUS worldwide and the largest in Germany. In this work, we
study the crowd's behavior in Twitter during the outbreak. In particular, we
report how tracking Twitter helped to detect key user messages that triggered
signal detection alarms before MedISys and other well established early warning
systems. We also introduce a personalized learning to rank approach that
exploits the relationships discovered by: (i) latent semantic topics computed
using Latent Dirichlet Allocation (LDA), and (ii) observing the social tagging
behavior in Twitter, to rank tweets for epidemic intelligence. Our results
provide the grounds for new public health research based on social media.
|
1203.1394
|
Towards a class of complex networks models for conflict dynamics
|
physics.soc-ph cond-mat.stat-mech cs.SI math-ph math.MP
|
Using properties of isospectral flows, we introduce a class of equations
useful for representing the free continuous-time evolution of signed complex
networks. Jammed and balanced states are obtained by introducing a class of
link potentials that break the isospectral invariance of the network.
Applications to conflict dynamics in social and international relations
networks are discussed.
|
1203.1406
|
Communication over Individual Channels -- a general framework
|
cs.IT math.IT
|
We consider the problem of communicating over a channel for which no
mathematical model is specified, and the achievable rates are determined as a
function of the channel input and output sequences known a-posteriori, without
assuming any a-priori relation between them. In a previous paper we have shown
that the empirical mutual information between the input and output sequences is
achievable without specifying the channel model, by using feedback and common
randomness, and we obtained a similar result for real-valued input and output alphabets. In
this paper, we present a unifying framework which includes the two previous
results as particular cases. We characterize the region of rate functions which
are achievable, and show that asymptotically the rate function is equivalent to
a conditional distribution of the channel input given the output. We present a
scheme that achieves these rates with asymptotically vanishing overheads.
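The empirical mutual information between a posteriori known input and output sequences, the quantity referenced above, can be computed directly; this plug-in estimator is a sketch of the rate expression, not the paper's communication scheme:

```python
from collections import Counter
from math import log2

def empirical_mi(xs, ys):
    """Empirical mutual information (in bits) between two equal-length
    sequences, computed from their joint and marginal empirical types."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

x = [0, 1, 0, 1, 0, 1, 0, 1]
print(empirical_mi(x, x))          # identical balanced sequences: 1 bit
print(empirical_mi(x, [0] * 8))    # constant output: 0 bits
```

No channel model enters the computation: only the observed input and output sequences do, which is exactly the individual-channel viewpoint.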
|
1203.1410
|
Improved Method for Searching of Interleavers Using Garello's Method
|
cs.IT math.IT
|
In this paper an improved method for searching for good interleavers from a
certain set is proposed. The first few terms of the distance spectra,
corresponding to a maximum distance of approximately 40, for turbo codes using
these interleavers are determined by means of Garello's method. The method is
applied to find quadratic permutation polynomial (QPP) based interleavers.
Compared to previous methods for finding QPP-based interleavers, the search
complexity is reduced, allowing interleavers of greater length to be found.
This method has been
applied for QPP interleavers with lengths from the LTE (Long Term Evolution)
standard up to 1504. The analyzed classes are those with the largest spread QPP
(LS-QPP), with the D parameter equal to that of LTE interleaver (D_L_T_E-QPP),
and the class consisting of all QPP interleavers for lengths up to 1008. The
distance spectrum optimization is made for all classes. For the class of LS-QPP
interleavers of small lengths, the search led to performance superior or at
least equal to that of the LTE standard. For larger lengths the search in
the class of D_L_T_E-QPP interleavers is preferred. The interleavers from the
entire class of QPPs lead, in general, to weaker FER (Frame Error Rate)
performance.
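A QPP interleaver is simply $\pi(i) = (f_1 i + f_2 i^2) \bmod N$; the sketch below checks the defining permutation property for $N = 40$ with $f_1 = 3$, $f_2 = 10$ (to the best of our understanding the LTE coefficient pair for this length, used here as an assumption):

```python
def qpp_interleaver(f1, f2, N):
    """Quadratic permutation polynomial map pi(i) = (f1*i + f2*i^2) mod N."""
    return [(f1 * i + f2 * i * i) % N for i in range(N)]

pi = qpp_interleaver(3, 10, 40)
print(sorted(pi) == list(range(40)))   # a valid QPP yields a permutation
```

A search over candidate $(f_1, f_2)$ pairs would first apply this permutation check and then rank survivors by their distance spectra, as the abstract describes.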
|
1203.1418
|
A Note on a Conjecture for Balanced Elementary Symmetric Boolean
Functions
|
cs.IT math.IT
|
In 2008, Cusick {\it et al.} conjectured that certain elementary symmetric
Boolean functions of the form $\sigma_{2^{t+1}l-1, 2^t}$ are the only nonlinear
balanced ones, where $t$, $l$ are any positive integers, and
$\sigma_{n,d}=\bigoplus_{1\le i_1<...<i_d\le n}x_{i_1}x_{i_2}...x_{i_d}$ for
positive integers $n$, $1\le d\le n$. In this note, by analyzing the weight of
$\sigma_{n, 2^t}$ and $\sigma_{n, d}$, we prove that ${\rm wt}(\sigma_{n,
d})<2^{n-1}$ holds in most cases, and so does the conjecture. According to the
remainder modulo 4, we also consider the weight of $\sigma_{n, d}$ in two
cases: $n\equiv 3 \pmod 4$ and $n\not\equiv 3 \pmod 4$. Thus, we
can simplify the conjecture. In particular, our results cover most known
results. In order to fully solve the conjecture, we also consider the weight of
$\sigma_{n, 2^t+2^s}$ and give some experimental results on it.
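Since $\sigma_{n,d}$ depends on an input $x$ only through its Hamming weight $w$, equaling $\binom{w}{d} \bmod 2$, the weight ${\rm wt}(\sigma_{n,d})$ can be computed directly; a minimal sketch checking the conjectured balanced family:

```python
from math import comb

def weight_sigma(n, d):
    """Hamming weight of the elementary symmetric Boolean function
    sigma_{n,d}: on an input of Hamming weight w it equals C(w, d) mod 2,
    so sum the counts of inputs at each weight where that binomial is odd."""
    return sum(comb(n, w) for w in range(n + 1) if comb(w, d) % 2 == 1)

# sigma_{2^{t+1} l - 1, 2^t} should be balanced (weight 2^{n-1}); t = 1:
print(weight_sigma(3, 2), 2 ** 2)    # l = 1
print(weight_sigma(7, 2), 2 ** 6)    # l = 2
print(weight_sigma(4, 2))            # n = 4 is outside the family: 10 != 8
```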
|
1203.1426
|
Optimizing spread dynamics on graphs by message passing
|
cond-mat.dis-nn cs.SI math.OC
|
Cascade processes are responsible for many important phenomena in natural and
social sciences. Simple models of irreversible dynamics on graphs, in which
nodes activate depending on the state of their neighbors, have been
successfully applied to describe cascades in a large variety of contexts. Over
the last decades, many efforts have been devoted to understand the typical
behaviour of the cascades arising from initial conditions extracted at random
from some given ensemble. However, the problem of optimizing the trajectory of
the system, i.e. of identifying appropriate initial conditions to maximize (or
minimize) the final number of active nodes, is still considered to be
practically intractable, with the only exception of models that satisfy a sort
of diminishing returns property called submodularity. Submodular models can be
approximately solved by means of greedy strategies, but by definition they lack
cooperative characteristics which are fundamental in many real systems. Here we
introduce an efficient algorithm based on statistical physics for the
optimization of trajectories in cascade processes on graphs. We show that for a
wide class of irreversible dynamics, even in the absence of submodularity, the
spread optimization problem can be solved efficiently on large networks.
Analytic and algorithmic results on random graphs are complemented by the
solution of the spread maximization problem on a real-world network (the
Epinions consumer reviews network).
|
1203.1429
|
Probabilistic Optimal Estimation and Filtering under Uncertainty
|
cs.SY math.OC
|
The classical approach to system identification is based on stochastic
assumptions about the measurement error, and provides estimates that have
random nature. Worst-case identification, on the other hand, only assumes the
knowledge of deterministic error bounds, and establishes guaranteed estimates,
thus being in principle better suited for use in control design. However, a
main limitation of such deterministic bounds lies in their potential
conservatism, leading to estimates of restricted use.
In this paper, we propose a rapprochement between the stochastic and
worst-case paradigms. In particular, based on a probabilistic framework for
linear estimation problems, we derive new computational results. These results
combine elements from information-based complexity with recent developments in
the theory of randomized algorithms. The main idea in this line of research is
to "discard" sets of measure at most \epsilon, where \epsilon is a
probabilistic accuracy, from the set of deterministic estimates. Therefore, we
are decreasing the so-called worst-case radius of information at the expense of
a given probabilistic "risk."
In this setting, we compute a trade-off curve, called violation function,
which shows how the radius of information decreases as a function of the
accuracy. To this end, we construct randomized and deterministic algorithms
which provide approximations of this function. We report extensive simulations
showing numerical comparisons between the stochastic, worst-case and
probabilistic approaches, thus demonstrating the efficacy of the methods
proposed in this paper.
|
1203.1435
|
On a (\beta,q)-generalized Fisher information and inequalities involving
q-Gaussian distributions
|
math-ph cond-mat.stat-mech cs.IT math.IT math.MP
|
In the present paper, we would like to draw attention to a possible
generalized Fisher information that fits well in the formalism of nonextensive
thermostatistics. This generalized Fisher information is defined for densities
on $\mathbb{R}^{n}.$ Just as the maximum R\'enyi or Tsallis entropy subject to
an elliptic moment constraint is a generalized q-Gaussian, we show that the
minimization of the generalized Fisher information also leads to a generalized
q-Gaussian. This yields a generalized Cram\'er-Rao inequality. In addition, we
show that the generalized Fisher information naturally pops up in a simple
inequality that links the generalized entropies, the generalized Fisher
information and an elliptic moment. Finally, we give an extended Stam
inequality. In this series of results, the extremal functions are the
generalized q-Gaussians. Thus, these results complement the classical
characterization of the generalized q-Gaussian and introduce a generalized
Fisher information as a new information measure in nonextensive
thermostatistics.
|
1203.1439
|
Exploring complex networks by means of adaptive walkers
|
nlin.AO cond-mat.dis-nn cs.SI physics.soc-ph
|
Finding efficient algorithms to explore large networks with the aim of
recovering information about their structure is an open problem. Here, we
investigate this challenge by proposing a model in which random walkers with
previously assigned home nodes navigate through the network during a fixed
amount of time. We consider the exploration successful if the walker brings
the gathered information back home; otherwise, no data is retrieved.
Consequently, at each time step, the walkers, with some probability, have the
choice to either go backward approaching their home or go farther away. We show
that there is an optimal solution to this problem in terms of the average
information retrieved and the degree of the home nodes and design an adaptive
strategy based on the behavior of the random walker. Finally, we compare
different strategies that emerge from the model in the context of network
reconstruction. Our results could be useful for the discovery of unknown
connections in large scale networks.
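A minimal simulation of the kind of model described above (not the authors' exact implementation; the graph, the horizon T, and the return probability p_back are illustrative assumptions):

```python
import random

def bfs_dists(adj, src):
    # Hop distances from src to every node (breadth-first search).
    dist, frontier = {src: 0}, [src]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def explore(adj, home, T, p_back, rng):
    # At each step, with probability p_back move toward home, otherwise to a
    # uniformly random neighbor.  The gathered information (visited nodes) is
    # retrieved only if the walker is back home at time T.
    dist = bfs_dists(adj, home)
    pos, visited = home, {home}
    for _ in range(T):
        nbrs = adj[pos]
        closer = [v for v in nbrs if dist[v] < dist[pos]]
        if closer and rng.random() < p_back:
            pos = rng.choice(closer)
        else:
            pos = rng.choice(nbrs)
        visited.add(pos)
    return visited if pos == home else set()

# 4-node cycle; with p_back = 1 the walker alternates out-and-back,
# so for even T it always ends at home with some information.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
info = explore(cycle, home=0, T=4, p_back=1.0, rng=random.Random(1))
```

Varying p_back trades exploration depth against the chance of returning home in time, which is the tension the abstract's optimal solution addresses.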
|
1203.1457
|
PageRank optimization applied to spam detection
|
math.OC cs.IR
|
We give a new link spam detection and PageRank demotion algorithm called
MaxRank. Like TrustRank and AntiTrustRank, it starts with a seed of hand-picked
trusted and spam pages. We define the MaxRank of a page as the frequency of
visit of this page by a random surfer minimizing an average cost per time unit.
On a given page, the random surfer selects a set of hyperlinks and clicks with
uniform probability on any of these hyperlinks. The cost function penalizes
spam pages and hyperlink removals. The goal is to determine a hyperlink
deletion policy that minimizes this score. The MaxRank is interpreted as a
modified PageRank vector, used to sort web pages instead of the usual PageRank
vector. The bias vector of this ergodic control problem, which is unique up to
an additive constant, is a measure of the "spamicity" of each page, used to
detect spam pages. We give a scalable algorithm for MaxRank computation that
allowed us to perform experiments on the WEBSPAM-UK2007 dataset. We
show that our algorithm outperforms both TrustRank and AntiTrustRank for spam
and nonspam page detection.
|
1203.1483
|
Learning Random Kernel Approximations for Object Recognition
|
cs.CV cs.LG
|
Approximations based on random Fourier features have recently emerged as an
efficient and formally consistent methodology to design large-scale kernel
machines. By expressing the kernel as a Fourier expansion, features are
generated based on a finite set of random basis projections, sampled from the
Fourier transform of the kernel, with inner products that are Monte Carlo
approximations of the original kernel. Based on the observation that different
kernel-induced Fourier sampling distributions correspond to different kernel
parameters, we show that an optimization process in the Fourier domain can be
used to identify the different frequency bands that are useful for prediction
on training data. Moreover, the application of group Lasso to random feature
vectors corresponding to a linear combination of multiple kernels leads to
efficient and scalable reformulations of the standard multiple kernel learning
model \cite{Varma09}. In this paper we develop the linear Fourier approximation
methodology for both single and multiple gradient-based kernel learning and
show that it produces fast and accurate predictors on a complex dataset such as
the Visual Object Challenge 2011 (VOC2011).
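As background, the standard random Fourier feature construction for the Gaussian (RBF) kernel, which this line of work builds on, can be sketched as follows (the feature count and bandwidth below are illustrative):

```python
import numpy as np

def rff(X, D, gamma, rng):
    """Random Fourier features z(x) with E[z(x) @ z(y)] approximating the RBF
    kernel exp(-gamma * ||x - y||^2): frequencies are sampled from the kernel's
    Fourier transform, a Gaussian with variance 2 * gamma."""
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(X.shape[1], D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))
Z = rff(X, D=4000, gamma=0.1, rng=rng)
K_mc = Z @ Z.T                                                # Monte Carlo approx.
K = np.exp(-0.1 * ((X[:, None] - X[None, :]) ** 2).sum(-1))   # exact kernel
max_err = np.abs(K_mc - K).max()
```

The sampling distribution of W is what the abstract's optimization in the Fourier domain adjusts, since different distributions correspond to different kernel parameters.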
|
1203.1505
|
Performance of a Distributed Stochastic Approximation Algorithm
|
math.OC cs.DC cs.SY
|
In this paper, a distributed stochastic approximation algorithm is studied.
Applications of such algorithms include decentralized estimation, optimization,
control or computing. The algorithm consists of two steps: a local step, where
each node in a network updates a local estimate using a stochastic
approximation algorithm with decreasing step size, and a gossip step, where a
node computes a local weighted average between its estimates and those of its
neighbors. Convergence of the estimates toward a consensus is established under
weak assumptions. The approach relies on two main ingredients: the existence of
a Lyapunov function for the mean field in the agreement subspace, and a
contraction property of the random matrices of weights in the subspace
orthogonal to the agreement subspace. A second-order analysis of the algorithm
is also performed, in the form of a Central Limit Theorem. The
Polyak-averaged version of the algorithm is also considered.
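A minimal sketch of the two-step structure (a local stochastic-approximation step with decreasing step size, then a gossip averaging step; the quadratic local costs and the fixed doubly stochastic weight matrix are illustrative assumptions):

```python
import numpy as np

def distributed_sa(grads, W, theta0, n_iters):
    """Each node takes a local step with decreasing step size 1/t, then the
    gossip step averages estimates with a doubly stochastic matrix W."""
    theta = np.array(theta0, dtype=float)
    for t in range(1, n_iters + 1):
        theta = theta - (1.0 / t) * np.array([g(x) for g, x in zip(grads, theta)])
        theta = W @ theta
    return theta

# Three nodes minimize (x - a_i)^2 / 2; the consensus value is the mean of a_i.
a = [1.0, 2.0, 3.0]
grads = [lambda x, ai=ai: x - ai for ai in a]
W = np.full((3, 3), 1.0 / 3.0)          # complete-graph averaging weights
theta = distributed_sa(grads, W, [0.0, 0.0, 0.0], n_iters=50)
```

In the general setting the weight matrices are random and only contracting on the subspace orthogonal to consensus, which is the harder case the paper analyzes.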
|
1203.1513
|
Invariant Scattering Convolution Networks
|
cs.CV
|
A wavelet scattering network computes a translation invariant image
representation, which is stable to deformations and preserves high frequency
information for classification. It cascades wavelet transform convolutions with
non-linear modulus and averaging operators. The first network layer outputs
SIFT-type descriptors whereas the next layers provide complementary invariant
information which improves classification. The mathematical analysis of wavelet
scattering networks explains important properties of deep convolution networks
for classification.
A scattering representation of stationary processes incorporates higher order
moments and can thus discriminate textures having the same Fourier power
spectrum. State of the art classification results are obtained for handwritten
digits and texture discrimination, using a Gaussian kernel SVM and a generative
PCA classifier.
|
1203.1515
|
Multiple Change Point Estimation in Stationary Ergodic Time Series
|
stat.ML cs.IT math.IT math.ST stat.TH
|
Given a heterogeneous time-series sample, the objective is to find points in
time (called change points) where the probability distribution generating the
data has changed. The data are assumed to have been generated by arbitrary
unknown stationary ergodic distributions. No modelling, independence or mixing
assumptions are made. A novel, computationally efficient, nonparametric method
is proposed, and is shown to be asymptotically consistent in this general
framework. The theoretical results are complemented with experimental
evaluations.
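For intuition only, a toy single-change-point estimator in this spirit, scoring each candidate split by the distance between the empirical distributions of the two segments (an illustration, not the paper's method or its consistency guarantee):

```python
def change_point(xs, n_bins=8, margin=10):
    """Return the split k maximizing the L1 distance between empirical
    histograms of xs[:k] and xs[k:] (toy illustration)."""
    lo, hi = min(xs), max(xs)
    width = (hi - lo) or 1.0
    def hist(seg):
        h = [0] * n_bins
        for x in seg:
            h[min(int((x - lo) / width * n_bins), n_bins - 1)] += 1
        return [c / len(seg) for c in h]
    best_k, best = None, -1.0
    for k in range(margin, len(xs) - margin):
        d = sum(abs(p - q) for p, q in zip(hist(xs[:k]), hist(xs[k:])))
        if d > best:
            best_k, best = k, d
    return best_k

# Distribution changes at index 100.
xs = [0.0] * 100 + [1.0] * 100
k = change_point(xs)
```

The paper's setting is far more general: multiple change points, dependent data, and only stationarity and ergodicity assumed.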
|
1203.1521
|
Oracle-order Recovery Performance of Greedy Pursuits with Replacement
against General Perturbations
|
cs.IT math.IT
|
Applying the theory of compressive sensing in practice always takes different
kinds of perturbations into consideration. In this paper, the recovery
performance of greedy pursuits with replacement for sparse recovery is analyzed
when both the measurement vector and the sensing matrix are contaminated with
additive perturbations. Specifically, greedy pursuits with replacement include
three algorithms: compressive sampling matching pursuit (CoSaMP), subspace
pursuit (SP), and iterative hard thresholding (IHT), where the support
estimation is evaluated and updated in each iteration. Based on the restricted
isometry property, a unified form of the error bounds of these recovery
algorithms is derived under general perturbations for compressible signals. The
results reveal that the recovery performance is stable against both
perturbations. In addition, these bounds are compared with that of oracle
recovery---the least-squares solution with the locations of the largest entries
in magnitude known a priori. The comparison shows that the error bounds of
these algorithms differ from the lower bound of oracle recovery only in their
coefficients for certain signals and perturbations, which reveals that
oracle-order recovery performance of greedy pursuits with replacement is
guaranteed. Numerical simulations are performed to verify the conclusions.
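Of the three algorithms, IHT has the simplest form; a minimal unperturbed sketch (the dimensions, step-size rule, and iteration count are illustrative):

```python
import numpy as np

def iht(A, y, s, n_iters=500):
    """Iterative hard thresholding: a gradient step on ||y - A x||^2 followed
    by keeping the s largest-magnitude entries, i.e. the support estimate is
    re-evaluated and updated in each iteration."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # conservative step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        x = x + step * A.T @ (y - A @ x)        # gradient step
        small = np.argsort(np.abs(x))[:-s]      # indices outside the top s
        x[small] = 0.0                          # hard thresholding
    return x

rng = np.random.default_rng(0)
m, n, s = 60, 120, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[[5, 40, 77]] = [2.0, -1.5, 1.5]
x_hat = iht(A, A @ x0, s)
```

The paper's analysis adds perturbations to both y and A and bounds the resulting recovery error against the oracle least-squares bound.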
|
1203.1524
|
On the Influence of Informed Agents on Learning and Adaptation over
Networks
|
cs.IT cs.SI math.IT
|
Adaptive networks consist of a collection of agents with adaptation and
learning abilities. The agents interact with each other on a local level and
diffuse information across the network through their collaborations. In this
work, we consider two types of agents: informed agents and uninformed agents.
The former receive new data regularly and perform consultation and in-network
tasks, while the latter do not collect data and only participate in the
consultation tasks. We examine the performance of adaptive networks as a
function of the proportion of informed agents and their distribution in space.
The results reveal some interesting and surprising trade-offs between
convergence rate and mean-square performance. In particular, among other
results, it is shown that the performance of adaptive networks does not
necessarily improve with a larger proportion of informed agents. Instead, it is
established that the larger the proportion of informed agents is, the faster
the convergence rate of the network becomes, albeit at the expense of some
deterioration in mean-square performance. The results further establish that
uninformed agents play an important role in determining the steady-state
performance of the network, and that it is preferable to keep some of the
highly connected agents uninformed. The arguments reveal an important interplay
among three factors: the number and distribution of informed agents in the
network, the convergence rate of the learning process, and the estimation
accuracy in steady-state. Expressions that quantify these relations are
derived, and simulations are included to support the theoretical findings. We
further apply the results to two models that are widely used to represent
behavior over complex networks, namely, the Erdos-Renyi and scale-free models.
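A minimal adapt-then-combine diffusion sketch in which only informed agents take adaptation steps (the network size, combination weights, and noise level are illustrative assumptions):

```python
import numpy as np

def diffusion_lms(w_true, informed, C, mu, n_iters, rng):
    """ATC diffusion: informed agents run an LMS step on fresh data, then all
    agents combine neighbor estimates with weights C (rows summing to one)."""
    N, M = C.shape[0], len(w_true)
    W = np.zeros((N, M))
    for _ in range(n_iters):
        psi = W.copy()
        for k in range(N):
            if informed[k]:
                u = rng.normal(size=M)                     # regression vector
                d = u @ w_true + 0.01 * rng.normal()       # noisy measurement
                psi[k] = W[k] + mu * (d - u @ W[k]) * u    # LMS adaptation
        W = C @ psi                                        # combination step
    return W

rng = np.random.default_rng(0)
w_true = np.array([1.0, -1.0])
C = np.full((5, 5), 0.2)                  # fully connected, uniform weights
informed = [True, True, False, False, False]
W = diffusion_lms(w_true, informed, C, mu=0.05, n_iters=2000, rng=rng)
```

Varying which entries of `informed` are True, and how well-connected those agents are, reproduces the kind of convergence-rate versus steady-state trade-off the abstract describes.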
|
1203.1527
|
Optimum Subcodes of Self-Dual Codes and Their Optimum Distance Profiles
|
cs.IT math.CO math.IT
|
Binary optimal codes often contain optimal or near-optimal subcodes. In this
paper we show that this is true for the family of self-dual codes. One approach
is to compute the optimum distance profiles (ODPs) of linear codes, which was
introduced by Luo et al. (2010). One of our main results is the development
of general algorithms, called the Chain Algorithms, for finding ODPs of linear
codes. Then we determine the ODPs for the Type II codes of lengths up to 24 and
the extremal Type II codes of length 32, and give a partial result on the ODP
of the extended quadratic residue code $q_{48}$ of length 48. We also show that
there does not exist a $[48,k,16]$ subcode of $q_{48}$ for $k \ge 17$, and we
find a first example of a doubly-even self-complementary $[48, 16, 16]$ code.
|
1203.1528
|
A Two-Dimensional Signal Space for Intensity-Modulated Channels
|
cs.IT math.IT
|
A two-dimensional signal space for intensity-modulated channels is
presented. Modulation formats using this signal space are designed to maximize
the minimum distance between signal points while satisfying average and peak
power constraints. The uncoded, high-signal-to-noise ratio, power and spectral
efficiencies are compared to those of the best known formats. The new formats
are simpler than existing subcarrier formats, and are superior if the bandwidth
is measured as 90% in-band power. Existing subcarrier formats are better if the
bandwidth is measured as 99% in-band power.
|
1203.1535
|
Performance Analysis of l_0 Norm Constraint Least Mean Square Algorithm
|
cs.IT cs.PF math.IT
|
As one of the recently proposed algorithms for sparse system identification,
$l_0$ norm constraint Least Mean Square ($l_0$-LMS) algorithm modifies the cost
function of the traditional method with a penalty of tap-weight sparsity. The
performance of $l_0$-LMS is quite attractive compared with its various
precursors. However, there has been no detailed study of its performance. This
paper presents a comprehensive theoretical performance analysis of $l_0$-LMS
for white Gaussian input data, based on reasonable assumptions.
Expressions for steady-state mean square deviation (MSD) are derived and
discussed with respect to algorithm parameters and system sparsity. The
parameter selection rule is established for achieving the best performance.
The instantaneous behavior, approximated with a Taylor series, is also derived.
In addition, the relationship between $l_0$-LMS and several earlier algorithms
is established, together with sufficient conditions for $l_0$-LMS to accelerate
convergence.
Finally, all of the theoretical results are compared with simulations and are
shown to agree well over a wide range of parameter settings.
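The modified cost adds an approximate l0 penalty, sum_i (1 - exp(-beta*|w_i|)), to the LMS cost; a first-order Taylor expansion of its gradient gives the zero-attraction term sketched below (the parameter values and system are illustrative assumptions):

```python
import numpy as np

def l0_lms(us, ds, M, mu, kappa, beta):
    """l0-LMS sketch: the usual LMS update plus a zero-attraction term acting
    on taps with |w_i| <= 1/beta (first-order Taylor approximation of the
    gradient of sum(1 - exp(-beta * |w_i|)))."""
    w = np.zeros(M)
    for u, d in zip(us, ds):
        e = d - u @ w
        w = w + mu * e * u                           # standard LMS step
        small = np.abs(w) <= 1.0 / beta
        w[small] -= kappa * (beta * np.sign(w[small]) - beta ** 2 * w[small])
    return w

# Sparse system with white Gaussian input, as in the analysis setting.
rng = np.random.default_rng(0)
M, n_samples = 16, 4000
h = np.zeros(M)
h[3], h[10] = 1.0, -0.5
us = rng.normal(size=(n_samples, M))
ds = us @ h + 0.01 * rng.normal(size=n_samples)
w = l0_lms(us, ds, M, mu=0.02, kappa=1e-4, beta=5.0)
```

The attraction only acts on taps smaller than 1/beta, so near-zero taps are driven toward zero while large active taps are left unbiased, which is the mechanism behind the sparsity-dependent steady-state MSD the paper derives.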
|
1203.1538
|
Proof of Convergence and Performance Analysis for Sparse Recovery via
Zero-point Attracting Projection
|
cs.IT cs.PF math.IT
|
A recursive algorithm named Zero-point Attracting Projection (ZAP) is
proposed recently for sparse signal reconstruction. Compared with the reference
algorithms, ZAP demonstrates rather good performance in recovery precision and
robustness. However, no theoretical analysis of the algorithm, not even a proof
of its convergence, has been available. In this work, a rigorous proof of the
convergence of ZAP is provided and a condition for convergence is put
forward. Based on the theoretical analysis, it is further proved that ZAP is
non-biased and can approach the sparse solution to any extent, with the proper
choice of step-size. Furthermore, the case of inaccurate measurements in noisy
scenario is also discussed. It is proved that disturbance power linearly
reduces the recovery precision, which is predictable but not preventable. The
reconstruction deviation of $p$-compressible signal is also provided. Finally,
numerical simulations are performed to verify the theoretical analysis.
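The ZAP iteration alternates a zero-attracting gradient step on an approximate l0 norm with an orthogonal projection back onto the solution space {x : Ax = y}; a sketch in which the attractor form, step-size schedule, and problem sizes are illustrative assumptions:

```python
import numpy as np

def zap(A, y, beta=5.0, kappa=0.01, decay=0.995, n_iters=2000):
    """ZAP sketch: gradient step on sum(1 - exp(-beta * |x_i|)) (first-order
    Taylor form), then projection onto the affine space {x : A x = y}."""
    P = A.T @ np.linalg.inv(A @ A.T)      # projection helper
    x = P @ y                             # minimum-l2-norm starting point
    for _ in range(n_iters):
        small = np.abs(x) <= 1.0 / beta
        g = np.zeros_like(x)
        g[small] = beta * np.sign(x[small]) - beta ** 2 * x[small]
        x = x - kappa * g                 # zero-attracting step
        x = x - P @ (A @ x - y)           # project back onto A x = y
        kappa *= decay                    # decreasing step size
    return x

rng = np.random.default_rng(0)
m, n = 50, 100
A = rng.normal(size=(m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[[7, 33, 71]] = [1.5, -2.0, 1.0]
x_init = A.T @ np.linalg.inv(A @ A.T) @ (A @ x0)
x_hat = zap(A, A @ x0)
```

The projection keeps every iterate feasible, so the zero attractor does all the work of moving the solution toward sparsity; the role of the step size here is what the paper's convergence condition and bias analysis make precise.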
|
1203.1548
|
Retrieval of Sparse Solutions of Multiple-Measurement Vectors via
Zero-point Attracting Projection
|
cs.IT math.IT
|
A new sparse signal recovery algorithm for multiple-measurement vectors (MMV)
problem is proposed in this paper. The sparse representation is iteratively
drawn based on the idea of zero-point attracting projection (ZAP). In each
iteration, the solution is first updated along the negative gradient direction
of an approximate $\ell_{2,0}$ norm to encourage sparsity, and then projected
to the solution space to satisfy the under-determined equation. A variable
step-size scheme is further adopted to accelerate the convergence and to
improve the recovery accuracy. Numerical simulations demonstrate that the
performance of the proposed algorithm exceeds that of the reference algorithms
in various respects, including when applied to the Modulated Wideband
Converter, where recovering the MMV problem is crucial to its performance.
|
1203.1569
|
SPARQL for a Web of Linked Data: Semantics and Computability (Extended
Version)
|
cs.DB
|
The World Wide Web currently evolves into a Web of Linked Data where content
providers publish and link data as they have done with hypertext for the last
20 years. While the declarative query language SPARQL is the de facto standard
for querying a priori defined sets of data from the Web, no language exists for
querying the Web of Linked Data itself. However, it seems natural to ask
whether SPARQL is also suitable for such a purpose.
In this paper we formally investigate the applicability of SPARQL as a query
language for Linked Data on the Web. In particular, we study two query models:
1) a full-Web semantics where the scope of a query is the complete set of
Linked Data on the Web and 2) a family of reachability-based semantics which
restrict the scope to data that is reachable by traversing certain data links.
For both models we discuss properties such as monotonicity and computability as
well as the implications of querying a Web that is infinitely large due to data
generating servers.
|
1203.1570
|
In-network Sparsity-regularized Rank Minimization: Algorithms and
Applications
|
cs.MA cs.IT cs.NI math.IT stat.ML
|
Given a limited number of entries from the superposition of a low-rank matrix
plus the product of a known fat compression matrix times a sparse matrix,
recovery of the low-rank and sparse components is a fundamental task subsuming
compressed sensing, matrix completion, and principal components pursuit. This
paper develops algorithms for distributed sparsity-regularized rank
minimization over networks, when the nuclear- and $\ell_1$-norm are used as
surrogates to the rank and nonzero entry counts of the sought matrices,
respectively. While nuclear-norm minimization has well-documented merits when
centralized processing is viable, non-separability of the singular-value sum
challenges its distributed minimization. To overcome this limitation, an
alternative characterization of the nuclear norm is adopted which leads to a
separable, yet non-convex cost minimized via the alternating-direction method
of multipliers. The novel distributed iterations entail reduced-complexity
per-node tasks, and affordable message passing among single-hop neighbors.
Interestingly, upon convergence the distributed (non-convex) estimator provably
attains the global optimum of its centralized counterpart, regardless of
initialization. Several application domains are outlined to highlight the
generality and impact of the proposed framework. These include unveiling
traffic anomalies in backbone networks, predicting networkwide path latencies,
and mapping the RF ambiance using wireless cognitive radios. Simulations with
synthetic and real network data corroborate the convergence of the novel
distributed algorithm, and its centralized performance guarantees.
|
1203.1588
|
Rate Maximization for Half-Duplex Multiple Access with Cooperating
Transmitters
|
cs.IT math.IT
|
We derive the optimal resource allocation of a practical half-duplex scheme
for the Gaussian multiple access channel with transmitter cooperation (MAC-TC).
Based on rate splitting and superposition coding, two users transmit
information to a destination over 3 phases, such that the users partially
exchange their information during the first 2 phases and cooperatively transmit
to the destination during the last one. This scheme is near capacity-achieving
when the inter-user links are stronger than each user-destination link; it also
includes partial decode-forward relaying as a special case. We propose
efficient algorithms to find the optimal resource allocation for maximizing
either the individual or the sum rate and identify the corresponding optimal
scheme for each channel configuration. For fixed phase durations, the power
allocation problem is convex and can be solved analytically based on the KKT
conditions. The optimal phase durations can then be obtained numerically using
simple search methods. Results show that as the inter-user link qualities
increase, the optimal scheme moves from no cooperation to partial then to full
cooperation, in which the users fully exchange their information and
cooperatively send it to the destination. Therefore, in practical systems with
strong inter-user links, simple decode-forward relaying at both users is
rate-optimal.
|
1203.1596
|
Multiple Operator-valued Kernel Learning
|
stat.ML cs.LG
|
Positive definite operator-valued kernels generalize the well-known notion of
reproducing kernels, and are naturally adapted to multi-output learning
situations. This paper addresses the problem of learning a finite linear
combination of infinite-dimensional operator-valued kernels which are suitable
for extending functional data analysis methods to nonlinear contexts. We study
this problem in the case of kernel ridge regression for functional responses
with an lr-norm constraint on the combination coefficients. The resulting
optimization problem is more involved than those of multiple scalar-valued
kernel learning since operator-valued kernels pose more technical and
theoretical issues. We propose a multiple operator-valued kernel learning
algorithm based on solving a system of linear operator equations by using a
block coordinate-descent procedure. We experimentally validate our approach on a
functional regression task in the context of finger movement prediction in
brain-computer interfaces.
|
1203.1643
|
Coding Delay Analysis of Chunked Codes over Line Networks
|
cs.IT math.IT
|
In this paper, we analyze the coding delay and the average coding delay of
Chunked network Codes (CC) over line networks with Bernoulli losses and
deterministic regular or Poisson transmissions. Chunked codes are an attractive
alternative to random linear network codes due to their lower complexity. Our
results, which include upper bounds on the delay and the average delay, are the
first of their kind for CC over networks with such probabilistic traffic.
These results demonstrate that a stand-alone CC or a precoded CC provides a
better tradeoff between computational complexity and convergence speed to the
network capacity under probabilistic traffic than under arbitrary
deterministic traffic. The performance of CC under the latter has already
been studied in the literature.
|
1203.1647
|
A Survey of Prediction Using Social Media
|
cs.SI physics.soc-ph
|
Social media comprises interactive applications and platforms for creating,
sharing and exchange of user-generated contents. The past ten years have
brought huge growth in social media, especially online social networking
services, and it is changing the way we organize and communicate. It
aggregates opinions and feelings of diverse groups of people at low cost.
Mining the attributes and contents of social media gives us an opportunity to
discover social structure characteristics, analyze action patterns
qualitatively and quantitatively, and sometimes the ability to predict future
human-related events. In this paper, we first discuss the domains that can be
predicted with current social media, then survey available predictors and
prediction techniques, and finally discuss challenges and possible future
directions.
|
1203.1685
|
Statistical Function Tagging and Grammatical Relations of Myanmar
Sentences
|
cs.CL
|
This paper describes context-free grammar (CFG) based grammatical relations
for Myanmar sentences, combined with a corpus-based function tagging system. Part
of the challenge of statistical function tagging for Myanmar sentences comes
from the fact that Myanmar has free-phrase-order and a complex morphological
system. Function tagging is a pre-processing step to show grammatical relations
of Myanmar sentences. In the task of function tagging, which tags the function
of Myanmar sentences with correct segmentation, POS (part-of-speech) tagging
and chunking information, we use Naive Bayesian theory to disambiguate the
possible function tags of a word. We apply context free grammar (CFG) to find
out the grammatical relations of the function tags. We also create a functional
annotated tagged corpus for Myanmar and propose the grammar rules for Myanmar
sentences. Experiments show that our analysis achieves a good result with
simple sentences and complex sentences.
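The Naive Bayes disambiguation step can be sketched generically as follows (the features, here a word and a POS tag, and the add-one smoothing are illustrative assumptions, not the paper's exact model):

```python
import math
from collections import defaultdict

def train(examples):
    """Count tag frequencies and per-tag feature frequencies."""
    tag_n = defaultdict(int)
    feat_n = defaultdict(lambda: defaultdict(int))
    for feats, tag in examples:
        tag_n[tag] += 1
        for f in feats:
            feat_n[tag][f] += 1
    return tag_n, feat_n

def most_likely_tag(model, feats):
    """argmax_tag P(tag) * prod_f P(f | tag), with add-one smoothing."""
    tag_n, feat_n = model
    total = sum(tag_n.values())
    best, best_lp = None, -math.inf
    for tag, c in tag_n.items():
        lp = math.log(c / total)
        for f in feats:
            lp += math.log((feat_n[tag][f] + 1) / (c + 2))
        if lp > best_lp:
            best, best_lp = tag, lp
    return best

# Toy examples: (features, function tag); features pair a word with its POS.
data = [((("w", "dog"), ("pos", "n")), "SUBJ")] * 3 \
     + [((("w", "run"), ("pos", "v")), "PRED")] * 3
tag = most_likely_tag(train(data), (("w", "dog"), ("pos", "n")))
```

In the paper's pipeline the disambiguated function tags then feed the CFG rules that produce the grammatical relations.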
|
1203.1687
|
An Analytical Approach to the Adoption of Asymmetric Bidirectional
Firewalls: Need for Regulation?
|
cs.SY math.OC
|
Recent incidents of cybersecurity violations have revealed the importance of
having firewalls and other intrusion detection systems to monitor traffic
entering and leaving access networks. But the adoption of such security
measures is often stymied by `free-riding' effects and `shortsightedness' among
Internet service providers (ISPs). In this work, we develop an analytical
framework that not only accounts for these issues but also incorporates
technological factors, like asymmetries in the performance of bidirectional
firewalls. Results on the equilibrium adoption and stability are presented,
along with detailed analysis on several policy issues related to social
welfare, price of anarchy, and price of shortsightedness.
|
1203.1711
|
Quantization Reference Voltage of the Modulated Wideband Converter
|
cs.IT math.IT
|
The Modulated Wideband Converter (MWC) is a recently proposed
analog-to-digital converter (ADC) based on Compressive Sensing (CS) theory.
Unlike conventional ADCs, its quantization reference voltage, which is
important to the system performance, does not equal the maximum amplitude of
the original analog signal. In this paper, the quantization reference voltage of
the MWC is theoretically analyzed and the conclusion demonstrates that the
reference voltage is proportional to the square root of $q$, which is a
trade-off parameter between sampling rate and number of channels. Further
discussions and simulation results show that the reference voltage is
proportional to the square root of $Nq$ when the signal consists of $N$
narrowband signals.
|
1203.1714
|
Efficient Recovery of Block Sparse Signals via Zero-point Attracting
Projection
|
cs.IT math.IT
|
In this paper, we consider compressed sensing (CS) of block-sparse signals,
i.e., sparse signals that have nonzero coefficients occurring in clusters. An
efficient algorithm, called zero-point attracting projection (ZAP) algorithm,
is extended to the scenario of block CS. The block version of ZAP algorithm
employs an approximate $l_{2,0}$ norm as the cost function, and finds its
minimum in the solution space via iterations. For block sparse signals, an
analysis of the stability of the local minimums of this cost function under the
perturbation of noise reveals an advantage of the proposed algorithm over its
original non-block version in terms of reconstruction error. Finally, numerical
experiments show that the proposed algorithm outperforms other state of the art
methods for the block sparse problem in various respects, especially the
stability under noise.
|
1203.1740
|
An Input-Output Simulation Approach to Controlling Multi-Affine Systems
for Linear Temporal Logic Specifications
|
cs.SY
|
This paper presents an input-output simulation approach to controlling
multi-affine systems for linear temporal logic (LTL) specifications, which
consists of the following steps. First, we partition the state space into
rectangles, each of which satisfies atomic LTL propositions. Then, we study the
control of multi-affine systems on rectangles including the control of driving
all trajectories starting from a rectangle to exit through a facet and the
control of stabilizing the system towards a desired point. With the proposed
controllers, a finitely abstracted transition system is constructed which is
shown to be input-output simulated by the rectangular transition system of the
multi-affine system. Since input-output simulation preserves LTL properties,
the controller synthesis of the multi-affine system for LTL specifications is
achieved by designing a nonblocking supervisor for the abstracted transition
system and by continuously implementing the resulting supervisor for the
original multi-affine system.
|
1203.1743
|
Variable types for meaning assembly: a logical syntax for generic noun
phrases introduced by most
|
math.LO cs.CL cs.LO
|
This paper proposes a way to compute the meanings associated with sentences
with generic noun phrases corresponding to the generalized quantifier most. We
call these generics specimens and they resemble stereotypes or prototypes in
lexical semantics. The meanings are viewed as logical formulae that can
thereafter be interpreted in your favourite models. To do so, we depart
significantly from the dominant Fregean view with a single untyped universe.
Indeed, our proposal adopts type theory with some hints from Hilbert
\epsilon-calculus (Hilbert, 1922; Avigad and Zach, 2008) and from medieval
philosophy, see e.g. de Libera (1993, 1996). Our type theoretic analysis bears
some resemblance to ongoing work in lexical semantics (Asher 2011; Bassac et
al. 2010; Moot, Pr\'evot and Retor\'e 2011). Our model also applies to
classical examples involving a class, or a generic element of this class, which
is not uttered but provided by the context. An outcome of this study is that,
in the minimalism-contextualism debate, see Conrad (2011), if one adopts a type
theoretical view, terms encode the purely semantic meaning component while
their typing is pragmatically determined.
|
1203.1745
|
Bisimilarity Enforcing Supervisory Control for Deterministic
Specifications
|
cs.SY
|
This paper investigates the supervisory control of nondeterministic discrete
event systems to enforce bisimilarity with respect to deterministic
specifications. A notion of synchronous simulation-based controllability is
introduced as a necessary and sufficient condition for the existence of a
bisimilarity enforcing supervisor, and a polynomial algorithm is developed to
verify such a condition. When the existence condition holds, a supervisor
achieving bisimulation equivalence is constructed. Furthermore, when the
existence condition does not hold, two different methods are provided for
synthesizing maximal permissive sub-specifications.
|
1203.1751
|
Remote Sensing and Control for Establishing and Maintaining Digital
Irrigation
|
cs.SY
|
Remotely sensed data from an unknown location is transmitted in real time
over the Internet and gathered on a PC. The data is collected over a
considerable period of time and analyzed on the PC to assess the suitability
and fertility of the land for establishing an electronic plantation in that
area. The analysis also helps in deciding which plants are appropriate for the
identified location. The system performing this task, with appropriate
transducers installed at the remote site, and the methodologies involved in
transmission and data gathering are reported. The second part of the project
deals with gathering data from the remote site and issuing control signals to
appliances at the site, all performed through the Internet. This control
scheme is therefore a duplex system that monitors irrigation activities by
collecting data in one direction and issuing commands in the opposite
direction. The scheme maintains digital irrigation systems effectively and
efficiently, so as to utilize resources optimally and yield enhanced
production. The methodologies involved in extending this two-way communication
of data are presented.
|
1203.1758
|
Coordinated Beamforming with Relaxed Zero Forcing: The Sequential
Orthogonal Projection Combining Method and Rate Control
|
cs.IT math.IT
|
In this paper, coordinated beamforming based on relaxed zero forcing (RZF)
for K transmitter-receiver pair multiple-input single-output (MISO) and
multiple-input multiple-output (MIMO) interference channels is considered. In
the RZF coordinated beamforming, conventional zero-forcing interference leakage
constraints are relaxed so that some predetermined interference leakage to
undesired receivers is allowed in order to increase the beam design space for
larger rates than those of the zero-forcing (ZF) scheme or to make beam design
feasible when ZF is impossible. In the MISO case, it is shown that the
rate-maximizing beam vector under the RZF framework for a given set of
interference leakage levels can be obtained by sequential orthogonal projection
combining (SOPC). Based on this, exact and approximate closed-form solutions
are provided in two-user and three-user cases, respectively, and an efficient
beam design algorithm for RZF coordinated beamforming is provided in general
cases. Furthermore, the rate control problem under the RZF framework is
considered. A centralized approach and a distributed heuristic approach are
proposed to control the position of the designed rate-tuple in the achievable
rate region. Finally, the RZF framework is extended to MIMO interference
channels by deriving a new lower bound on the rate of each user.
|
1203.1765
|
A comparative evaluation of two algorithms of detection of masses on
mammograms
|
cs.CV
|
In this paper, we implement and carry out the comparison of two methods of
computer-aided-detection of masses on mammograms. The two algorithms basically
consist of 3 steps each: segmentation, binarization and noise suppression using
different techniques for each step. A database of 60 images was used to compare
the performance of the two algorithms in terms of general detection efficiency
and conservation of the size and shape of the detected masses.
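As an example of the binarization step, one plausible technique (not necessarily the one used by either algorithm) is Otsu's method, which picks the gray level maximizing between-class variance:

```python
def otsu_threshold(hist):
    """Return the threshold maximizing between-class variance for a
    256-bin grayscale histogram (Otsu's method)."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h                       # pixels at or below level t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * h
        w1 = total - w0
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2    # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Bimodal histogram: dark background around level 10, bright mass around 200.
hist = [0] * 256
hist[10], hist[200] = 500, 100
t = otsu_threshold(hist)
```

Any level between the two modes separates them; the method returns one such level automatically, without a hand-tuned threshold.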
|
1203.1793
|
Using Hausdorff Distance for New Medical Image Annotation
|
cs.IR cs.CV
|
Medical image annotation is, most of the time, a hard and repetitive task.
Collecting old similar annotations and assigning them to new medical images may
not only enhance the annotation process, but also reduce ambiguity caused by
repetitive annotations. The goal of this work is to propose an approach based
on Hausdorff distance able to compute similarity between a new medical image
and previously stored images. The user then chooses one of the similar images,
and the annotations related to the selected image are assigned to the new one.
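A similarity measure along these lines can be sketched as the symmetric Hausdorff distance between two point sets, for instance edge points extracted from the images (the point extraction itself is not shown):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between point sets A (p, d) and B (q, d):
    the max of the two directed distances h(A, B) = max_a min_b ||a - b||."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [3.0, 0.0]])
d = hausdorff(A, B)   # h(A,B) = 1, h(B,A) = 2, so the distance is 2
```

Because it takes a max over worst-case points, the Hausdorff distance is sensitive to outliers, which is a common practical consideration when ranking similar images.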
|
1203.1804
|
Near-Optimal Compressive Binary Search
|
cs.IT math.IT
|
We propose a simple modification to the recently proposed compressive binary
search. The modification removes an unnecessary and suboptimal factor of log
log n from the SNR requirement, making the procedure optimal (up to a small
constant). Simulations show that the new procedure performs significantly
better in practice as well. We also contrast this problem with the more well
known problem of noisy binary search.
|
1203.1820
|
Flow-based reputation: more than just ranking
|
cs.CY cs.SI physics.soc-ph
|
Recent years have seen a growing interest in collaborative systems like
electronic marketplaces and P2P file sharing systems where people are intended
to interact with other people. Those systems, however, are subject to security
and operational risks because of their open and distributed nature. Reputation
systems provide a mechanism to reduce such risks by building trust
relationships among entities and identifying malicious entities. A popular
reputation model is the so called flow-based model. Most existing reputation
systems based on such a model provide only a ranking, without absolute
reputation values; this makes it difficult to determine whether entities are
actually trustworthy or untrustworthy. In addition, those systems ignore a
significant part of the available information; as a consequence, reputation
values may not be accurate. In this paper, we present a flow-based reputation
metric that gives absolute values instead of merely a ranking. Our metric makes
use of all the available information. We study, both analytically and
numerically, the properties of the proposed metric and the effect of attacks on
reputation values.
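As an illustrative sketch only (not the metric proposed in the paper), a
flow-based reputation computation propagates reputation along normalized trust
edges until a fixed point, yielding absolute values rather than a ranking:

```python
def flow_reputation(trust, damping=0.85, iters=100):
    """Illustrative flow-based reputation: each entity's reputation is a base
    value plus reputation flowing in along (normalized) trust edges.
    trust[i][j] is the trust entity i places in entity j."""
    n = len(trust)
    # Normalize each entity's outgoing trust so it distributes a unit of flow.
    out = [sum(row) or 1.0 for row in trust]
    rep = [1.0] * n
    for _ in range(iters):
        rep = [(1 - damping) + damping *
               sum(trust[i][j] / out[i] * rep[i] for i in range(n))
               for j in range(n)]
    return rep

# Entity 2 is trusted by both others and ends up with the highest reputation;
# entity 1 receives no trust and settles at the base value.
trust = [[0, 0, 1],
         [0, 0, 1],
         [1, 0, 0]]
rep = flow_reputation(trust)
```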
|
1203.1823
|
Enhancement Techniques for Local Content Preservation and Contrast
Improvement in Images
|
cs.CV cs.MM
|
Many images do not have uniform brightness, which poses a challenging problem
for image enhancement systems. As histogram equalization has been successfully
used to correct for uniform brightness problems, a histogram equalization
method that utilizes human visual system based thresholding (human vision
thresholding), as well as logarithmic processing techniques, was introduced
later. But these methods are not good at preserving the local content of the
image, which is a major factor for various images such as medical and aerial
images. Therefore, a new method is proposed here. This method is referred to as
"Human vision thresholding with enhancement technique for dark blurred images
for local content preservation". It uses human vision thresholding together
with an existing enhancement method for dark blurred images. Furthermore, a
comparative study with another method for local content preservation is done,
which is further extended to make it suitable for contrast improvement.
Experimental results show that the proposed methods outperform the existing
methods in preserving the local content of standard, medical and aerial images.
|
1203.1833
|
Crowdsourcing Predictors of Behavioral Outcomes
|
cs.CY cs.SI physics.soc-ph
|
Generating models from large data sets -- and determining which subsets of
data to mine -- is becoming increasingly automated. However choosing what data
to collect in the first place requires human intuition or experience, usually
supplied by a domain expert. This paper describes a new approach to machine
science which demonstrates for the first time that non-domain experts can
collectively formulate features, and provide values for those features such
that they are predictive of some behavioral outcome of interest. This was
accomplished by building a web platform in which human groups interact to both
respond to questions likely to help predict a behavioral outcome and pose new
questions to their peers. This results in a dynamically-growing online survey,
but the result of this cooperative behavior also leads to models that can
predict users' outcomes based on their responses to the user-generated survey
questions. Here we describe two web-based experiments that instantiate this
approach: the first site led to models that can predict users' monthly electric
energy consumption; the other led to models that can predict users' body mass
index. As exponential increases in content are often observed in successful
online collaborative communities, the proposed methodology may, in the future,
lead to similar exponential rises in discovery and insight into the causal
factors of behavioral outcomes.
|
1203.1849
|
Enumeration of Splitting Subspaces over Finite Fields
|
math.CO cs.IT math.IT
|
We discuss an elementary, yet unsolved, problem of Niederreiter concerning
the enumeration of a class of subspaces of finite dimensional vector spaces
over finite fields. A short and self-contained account of some recent progress
on this problem is included and some related problems are discussed.
|
1203.1850
|
On Pseudocodewords and Improved Union Bound of Linear Programming
Decoding of HDPC Codes
|
cs.IT math.IT
|
In this paper, we present an improved union bound on the Linear Programming
(LP) decoding performance of binary linear codes transmitted over an additive
white Gaussian noise channel. The bounding technique is based on a second-order
Bonferroni-type inequality from probability theory, and it is minimized by
Prim's minimum spanning tree algorithm. The bound calculation needs the
fundamental cone generators of a given parity-check matrix rather than only
their weight spectrum, but involves relatively low computational complexity. It
is targeted at high-density parity-check codes, where the number of generators
is extremely large and the generators are spread densely in the Euclidean
space. We explore the generator density and make a comparison between different
parity-check matrix representations; that density affects the improvement of
the proposed bound over the conventional LP union bound. The paper also
presents a complete pseudo-weight distribution of the fundamental cone
generators for the BCH[31,21,5] code.
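The minimization step uses Prim's algorithm; a minimal sketch follows. In the
paper the spanning tree is taken over a graph whose edge weights come from
pairwise joint error probabilities; the weight matrix below is purely
illustrative.

```python
import heapq

def prim_mst(n, weight):
    """Prim's algorithm: total weight and edge list of a minimum spanning
    tree of the complete graph on n vertices, with edge weights weight(u, v)."""
    in_tree = [False] * n
    in_tree[0] = True
    heap = [(weight(0, v), 0, v) for v in range(1, n)]
    heapq.heapify(heap)
    total, edges = 0.0, []
    while heap and len(edges) < n - 1:
        w, u, v = heapq.heappop(heap)
        if in_tree[v]:
            continue
        in_tree[v] = True
        total += w
        edges.append((u, v))
        for x in range(n):
            if not in_tree[x]:
                heapq.heappush(heap, (weight(v, x), v, x))
    return total, edges

# Illustrative symmetric weight matrix on 3 vertices.
W = [[0, 1, 4],
     [1, 0, 2],
     [4, 2, 0]]
total, edges = prim_mst(3, lambda u, v: W[u][v])  # -> total 3.0, edges [(0, 1), (1, 2)]
```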
|
1203.1854
|
Local-Optimality Guarantees for Optimal Decoding Based on Paths
|
cs.IT math.CO math.IT
|
This paper presents a unified analysis framework that captures recent
advances in the study of local-optimality characterizations for codes on
graphs. These local-optimality characterizations are based on combinatorial
structures embedded in the Tanner graph of the code. Local-optimality implies
both unique maximum-likelihood (ML) optimality and unique linear-programming
(LP) decoding optimality. Also, an iterative message-passing decoding algorithm
is guaranteed to find the unique locally-optimal codeword, if one exists.
We demonstrate this proof technique by considering a definition of
local-optimality that is based on the simplest combinatorial structures in
Tanner graphs, namely, paths of length $h$. We apply the technique of
local-optimality to a family of Tanner codes. Inverse polynomial bounds in the
code length are proved on the word error probability of LP-decoding for this
family of Tanner codes.
|
1203.1858
|
Distributional Measures of Semantic Distance: A Survey
|
cs.CL
|
The ability to mimic human notions of semantic distance has widespread
applications. Some measures rely only on raw text (distributional measures) and
some rely on knowledge sources such as WordNet. Although extensive studies have
been performed to compare WordNet-based measures with human judgment, the use
of distributional measures as proxies to estimate semantic distance has
received little attention. Even though they have traditionally performed poorly
when compared to WordNet-based measures, they lay claim to certain uniquely
attractive features, such as their applicability in resource-poor languages and
their ability to mimic both semantic similarity and semantic relatedness.
Therefore, this paper presents a detailed study of distributional measures.
Particular attention is paid to flesh out the strengths and limitations of both
WordNet-based and distributional measures, and how distributional measures of
distance can be brought more in line with human notions of semantic distance.
We conclude with a brief discussion of recent work on hybrid measures.
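A minimal sketch of how a distributional measure works: represent each word by
its co-occurrence vector over a toy corpus and compare words by cosine
similarity. The surveyed measures are more sophisticated; this only shows the
raw-text principle.

```python
import math
from collections import Counter

def cooccurrence_vectors(corpus, window=2):
    """Build a co-occurrence vector (Counter) for each word: counts of
    context words within `window` positions in the tokenized corpus."""
    vectors = {}
    for sent in corpus:
        for i, w in enumerate(sent):
            vec = vectors.setdefault(w, Counter())
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    vec[sent[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"],
          ["stocks", "fell", "sharply"]]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" share contexts ("the", "sat"), so their cosine is high;
# "cat" and "stocks" share no contexts, so their cosine is 0.
```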
|
1203.1868
|
Broadcasters and Hidden Influentials in Online Protest Diffusion
|
physics.soc-ph cs.SI
|
This paper explores the growth of online mobilizations using data from the
'indignados' (the 'outraged') movement in Spain, which emerged under the
influence of the revolution in Egypt and as a precursor to the global Occupy
mobilizations. The data tracks Twitter activity around the protests that took
place in May 2011, which led to the formation of camp sites in dozens of cities
all over the country and massive daily demonstrations during the week prior to
the elections of May 22. We reconstruct the network of tens of thousands of
users, and monitor their message activity for a month (25 April 2011 to 25 May
2011). Using both the structure of the network and levels of activity in
message exchange, we identify four types of users and we analyze their role in
the growth of the protest. Drawing from theories of online collective action
and research on information diffusion in networks the paper centers on the
following questions: How does protest information spread in online networks?
How do different actors contribute to that diffusion? How do mainstream media
interact with new media? Do they help amplify protest messages? And what is the
role of less popular but far more frequent users in the growth of online
mobilizations? This paper aims to inform the theoretical debate on whether
digital technologies are changing the logic of collective action, and provide
evidence of how new media facilitates the coordination of offline
mobilizations.
|
1203.1869
|
Degraded Broadcast Diamond Channels with Non-Causal State Information at
the Source
|
cs.IT math.IT
|
A state-dependent degraded broadcast diamond channel is studied where the
source-to-relays cut is modeled with two noiseless, finite-capacity digital
links with a degraded broadcasting structure, while the relays-to-destination
cut is a general multiple access channel controlled by a random state. It is
assumed that the source has non-causal channel state information and the relays
have no state information. Under this model, first, the capacity is
characterized for the case where the destination has state information, i.e.,
has access to the state sequence. It is demonstrated that in this case, a joint
message and state transmission scheme via binning is optimal. Next, the case
where the destination does not have state information, i.e., the case with
state information at the source only, is considered. For this scenario, lower
and upper bounds on the capacity are derived for the general discrete
memoryless model. Achievable rates are then computed for the case in which the
relays-to-destination cut is affected by an additive Gaussian state. Numerical
results are provided that illuminate the performance advantages that can be
accrued by leveraging non-causal state information at the source.
|
1203.1878
|
Outlier detection from ETL Execution trace
|
cs.DB
|
Extract, Transform, Load (ETL) is an integral part of Data Warehousing (DW)
implementation. The commercial tools used for this purpose capture a lot of
execution trace in the form of various log files with a plethora of
information. However, there has hardly been any initiative in which proactive
analysis is done on ETL logs to improve their efficiency. In this paper we
utilize an outlier detection technique to find the processes that vary most
from the group in terms of execution trace. As our experiment was carried out
on actual production processes, we consider any outlier a signal rather than
noise. To identify the input parameters for the outlier detection algorithm we
employ a survey among a developer community with a varied mix of experience and
expertise. We use simple text parsing to extract these features from the logs,
as shortlisted in the survey. Subsequently, we applied a clustering-based
outlier detection technique to the logs. By this process we reduced our domain
of detailed analysis from 500 logs to 44 logs (8 percent). Among the 5 outlier
clusters, 2 are of genuine concern, while the other 3 stand out because of the
huge number of rows involved.
|
1203.1882
|
Multi source feedback based performance appraisal system using Fuzzy
logic decision support system
|
cs.AI
|
In Multi-Source Feedback, or 360-Degree Feedback, data on the performance of
an individual are collected systematically from a number of stakeholders and
are used for improving performance. The 360-degree feedback approach provides a
consistent management philosophy meeting the criteria outlined previously. The
360-degree feedback appraisal process describes a human resource methodology
that is frequently used for both employee appraisal and employee development.
Used in employee performance appraisals, the 360-degree feedback methodology is
differentiated from traditional, top-down appraisal methods in which the
supervisor responsible for the appraisal provides the majority of the data.
Instead, it seeks to use information gained from other sources to provide a
fuller picture of employees' performance. Similarly, when this technique is
used in employee development, it augments employees' perceptions of training
needs with those of the people with whom they interact. The 360-degree feedback
based appraisal is a comprehensive method in which the feedback about an
employee comes from all the sources that come into contact with the employee on
his or her job. The respondents for an employee can be his or her peers,
managers, subordinates, team members, customers, suppliers and vendors; in
short, anyone who comes into contact with the employee. The 360-degree
appraisal has components that include self-appraisal, superior's appraisal,
subordinate's appraisal, student's appraisal and peer's appraisal. The proposed
system is an attempt to implement the 360-degree feedback based appraisal
system in academics, especially engineering colleges.
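A minimal sketch of how a fuzzy decision-support layer could aggregate
multi-source scores; the rater weights and triangular grade membership
functions below are invented for illustration, not taken from the proposed
system.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: peaks at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_appraisal(scores, weights):
    """Aggregate multi-source scores (0-10) into a crisp score and fuzzy
    memberships of performance grades; inputs are keyed by rater group."""
    total_w = sum(weights.values())
    crisp = sum(scores[g] * weights[g] for g in scores) / total_w
    grades = {"poor": (0, 0, 5), "average": (2.5, 5, 7.5), "good": (5, 10, 10)}
    return crisp, {g: triangular(crisp, *abc) for g, abc in grades.items()}

# Hypothetical rater groups and weights for an academic appraisal.
scores = {"self": 8, "superior": 7, "peer": 6, "student": 9}
weights = {"self": 0.1, "superior": 0.4, "peer": 0.25, "student": 0.25}
crisp, membership = fuzzy_appraisal(scores, weights)  # crisp -> 7.35
```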
|
1203.1889
|
Distributional Measures as Proxies for Semantic Relatedness
|
cs.CL
|
The automatic ranking of word pairs as per their semantic relatedness and
ability to mimic human notions of semantic relatedness has widespread
applications. Measures that rely on raw data (distributional measures) and
those that use knowledge-rich ontologies both exist. Although extensive studies
have been performed to compare ontological measures with human judgment, the
distributional measures have primarily been evaluated by indirect means. This
paper is a detailed study of some of the major distributional measures; it
lists their respective merits and limitations. New measures that overcome these
drawbacks, that are more in line with the human notions of semantic
relatedness, are suggested. The paper concludes with an exhaustive comparison
of the distributional and ontology-based measures. Along the way, significant
research problems are identified. Work on these problems may lead to a better
understanding of how semantic relatedness is to be measured.
|
1203.1892
|
Restricted Isometry Property in Quantized Network Coding of Sparse
Messages
|
cs.IT math.IT
|
In this paper, we study joint network coding and distributed source coding of
inter-node dependent messages, with the perspective of compressed sensing.
Specifically, the theoretical guarantees for robust $\ell_1$-min recovery of an
under-determined set of linear network coded sparse messages are investigated.
We discuss the guarantees for $\ell_1$-min decoding of quantized network coded
messages, using the proposed local network coding coefficients in \cite{naba},
based on Restricted Isometry Property (RIP) of the resulting measurement
matrix. Moreover, the relation between tail probability of $\ell_2$-norms and
satisfaction of RIP is derived and used to compare our designed measurement
matrix with an i.i.d. Gaussian measurement matrix. Finally, we present our
numerical evaluations, which show that the proposed design of network coding
coefficients results in a measurement matrix whose RIP behavior is similar to
that of an i.i.d. Gaussian matrix.
|
1203.1952
|
Worst-case Optimal Join Algorithms
|
cs.DB cs.DS math.CO
|
Efficient join processing is one of the most fundamental and well-studied
tasks in database research. In this work, we examine algorithms for natural
join queries over many relations and describe a novel algorithm to process
these queries optimally in terms of worst-case data complexity. Our result
builds on recent work by Atserias, Grohe, and Marx, who gave bounds on the size
of a full conjunctive query in terms of the sizes of the individual relations
in the body of the query. These bounds, however, are not constructive: they
rely on Shearer's entropy inequality which is information-theoretic. Thus, the
previous results leave open the question of whether there exist algorithms
whose running time achieves these optimal bounds. An answer to this question
may be interesting to database practice, as it is known that any algorithm
based on the traditional select-project-join style plans typically employed in
an RDBMS is asymptotically slower than optimal for some queries. We construct an
algorithm whose running time is worst-case optimal for all natural join
queries. Our result may be of independent interest, as our algorithm also
yields a constructive proof of the general fractional cover bound by Atserias,
Grohe, and Marx without using Shearer's inequality. This bound implies two
famous inequalities in geometry: the Loomis-Whitney inequality and the
Bollob\'as-Thomason inequality. Hence, our results algorithmically prove these
inequalities as well. Finally, we discuss how our algorithm can be used to
compute a relaxed notion of joins.
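A minimal illustration of the attribute-at-a-time style behind worst-case
optimal joins, specialized to the triangle query Q(a,b,c) :- R(a,b), S(b,c),
T(a,c). This is a sketch of the idea only, not the paper's general algorithm:
each variable is bound in turn by intersecting the candidate sets from every
relation that mentions it, rather than materializing a pairwise join first.

```python
from collections import defaultdict

def triangle_join(R, S, T):
    """Evaluate Q(a,b,c) :- R(a,b), S(b,c), T(a,c) variable by variable."""
    R_by_a = defaultdict(set)
    for a, b in R:
        R_by_a[a].add(b)
    S_by_b = defaultdict(set)
    for b, c in S:
        S_by_b[b].add(c)
    T_by_a = defaultdict(set)
    for a, c in T:
        T_by_a[a].add(c)
    out = []
    for a in set(R_by_a) & set(T_by_a):    # bind a: R and T mention it
        for b in R_by_a[a] & set(S_by_b):  # bind b: R and S mention it
            for c in S_by_b[b] & T_by_a[a]:  # bind c: S and T mention it
                out.append((a, b, c))
    return out

R = [(1, 2), (1, 3)]
S = [(2, 4), (3, 5)]
T = [(1, 4)]
triangle_join(R, S, T)  # -> [(1, 2, 4)]
```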
|
1203.1985
|
Substructure and Boundary Modeling for Continuous Action Recognition
|
cs.CV
|
This paper introduces a probabilistic graphical model for continuous action
recognition with two novel components: substructure transition model and
discriminative boundary model. The first component encodes the sparse and
global temporal transition prior between action primitives in a state-space model
to handle the large spatial-temporal variations within an action class. The
second component enforces the action duration constraint in a discriminative
way to locate the transition boundaries between actions more accurately. The
two components are integrated into a unified graphical structure to enable
effective training and inference. Our comprehensive experimental results on
both public and in-house datasets show that, with the capability to incorporate
additional information that had not been explicitly or efficiently modeled by
previous methods, our proposed algorithm achieved significantly improved
performance for continuous action recognition.
|