| id | title | categories | abstract |
|---|---|---|---|
0911.4414
|
Designing fuzzy rule based classifier using self-organizing feature map
for analysis of multispectral satellite images
|
cs.CV cs.NE
|
We propose a novel scheme for designing fuzzy rule based classifier. An SOFM
based method is used for generating a set of prototypes which is used to
generate a set of fuzzy rules. Each rule represents a region in the feature
space that we call the context of the rule. The rules are tuned with respect to
their context. We justify that the reasoning scheme may differ across
contexts, leading to context-sensitive inferencing. To realize
context-sensitive inferencing, we use a softmin operator with a tunable
parameter. The
proposed scheme is tested on several multispectral satellite image data sets
and the performance is found to be much better than the results reported in the
literature.
|
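The softmin-based inferencing described above can be sketched concretely. The exponentially weighted form below is one common parameterization of softmin and is assumed here, since the abstract does not give the exact operator:

```python
import math

def softmin(values, q):
    """Softmin with tunable parameter q: a weighted mean of the inputs that
    approaches min(values) as q -> infinity and the plain mean as q -> 0."""
    weights = [math.exp(-q * v) for v in values]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)

# Aggregating per-feature fuzzy memberships into a rule's firing strength:
memberships = [0.9, 0.6, 0.8]
print(softmin(memberships, q=0.0))   # arithmetic mean
print(softmin(memberships, q=50.0))  # close to min = 0.6
```

Tuning q per rule context is what makes the inferencing context sensitive: different regions of the feature space can use different aggregation behaviour, from mean-like to min-like.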
0911.4416
|
Land cover classification using fuzzy rules and aggregation of
contextual information through evidence theory
|
cs.CV cs.NE
|
Land cover classification using multispectral satellite image is a very
challenging task with numerous practical applications. We propose a multi-stage
classifier that involves fuzzy rule extraction from the training data and then
generation of a possibilistic label vector for each pixel using the fuzzy rule
base. To exploit the spatial correlation of land cover types we propose four
different information aggregation methods which use the possibilistic class
label of a pixel and those of its eight spatial neighbors for making the final
classification decision. Three of the aggregation methods use Dempster-Shafer
theory of evidence while the remaining one is modeled after the fuzzy k-NN
rule. The proposed methods are tested with two benchmark seven channel
satellite images and the results are found to be quite satisfactory. They are
also compared with a Markov random field (MRF) model-based contextual
classification method and found to perform consistently better.
|
0911.4432
|
The Role of Feedback in Two-way Secure Communications
|
cs.IT math.IT
|
Most practical communication links are bi-directional. In these models, since
the source node also receives signals, its encoder has the option of computing
its output based on the signals it received in the past. On the other hand,
from a practical point of view, it would also be desirable to identify the
cases where such an encoder design may not improve communication rates. This
question is particularly interesting for the case where the transmitted
messages and the feedback signals are subject to eavesdropping. In this work,
we investigate the question of how much impact the feedback has on the secrecy
capacity by studying two fundamental models. First, we consider the Gaussian
two-way wiretap channel and derive an outer bound for its secrecy capacity
region. We show that the secrecy rate loss can be unbounded when feedback
signals are not utilized, except for a special case that we identify, and thus
conclude that utilizing feedback can be highly beneficial in general. Second,
we consider a half-duplex Gaussian two-way relay channel where the relay node
is also an eavesdropper, and find that the impact of feedback is less
pronounced compared to the previous scenario. Specifically, the loss in secrecy
rate, when ignoring the feedback, is quantified to be less than 0.5 bit per
channel use when the relay power goes to infinity. This achievable rate region
is obtained with simple time sharing along with cooperative jamming, which,
with its simplicity and near optimum performance, is a viable alternative to an
encoder that utilizes feedback signals.
|
0911.4507
|
On Feasibility of Interference Alignment in MIMO Interference Networks
|
cs.IT math.IT
|
We explore the feasibility of interference alignment in signal vector space
-- based only on beamforming -- for K-user MIMO interference channels. Our main
contribution is to relate the feasibility issue to the problem of determining
the solvability of a multivariate polynomial system, considered extensively in
algebraic geometry. It is well known, e.g. from Bezout's theorem, that generic
polynomial systems are solvable if and only if the number of equations does not
exceed the number of variables. Following this intuition, we classify signal
space interference alignment problems as either proper or improper based on the
number of equations and variables. Rigorous connections between feasible and
proper systems are made through Bernshtein's theorem for the case where each
transmitter uses only one beamforming vector. The multi-beam case introduces
dependencies among the coefficients of a polynomial system so that the system
is no longer generic in the sense required by both theorems. In this case, we
show that the connection between feasible and proper systems can be further
strengthened (since the equivalence between feasible and proper systems does
not always hold) by including standard information theoretic outer bounds in
the feasibility analysis.
|
0911.4510
|
Bigraphical models for protein and membrane interactions
|
cs.CE cs.LO q-bio.MN q-bio.QM
|
We present a bigraphical framework suited for modeling biological systems
both at protein level and at membrane level. We characterize formally bigraphs
corresponding to biologically meaningful systems, and bigraphic rewriting rules
representing biologically admissible interactions. At the protein level, these
bigraphic reactive systems correspond exactly to systems of kappa-calculus.
Membrane-level interactions are represented by just two general rules, whose
application can be triggered by protein-level interactions in a well-defined
and precise way. This framework can be used to compare and merge models at
different abstraction levels; in particular, higher-level (e.g. mobility)
activities can be given a formal biological justification in terms of low-level
(i.e., protein) interactions. As examples, we formalize in our framework the
vesiculation and the phagocytosis processes.
|
0911.4511
|
Group-based Query Learning for rapid diagnosis in time-critical
situations
|
stat.ML cs.IT math.IT
|
In query learning, the goal is to identify an unknown object while minimizing
the number of "yes or no" questions (queries) posed about that object. We
consider three extensions of this fundamental problem that are motivated by
practical considerations in real-world, time-critical identification tasks such
as emergency response. First, we consider the problem where the objects are
partitioned into groups, and the goal is to identify only the group to which
the object belongs. Second, we address the situation where the queries are
partitioned into groups, and an algorithm may suggest a group of queries to a
human user, who then selects the actual query. Third, we consider the problem
of query learning in the presence of persistent query noise, and relate it to
group identification. To address these problems we show that a standard
algorithm for query learning, known as the splitting algorithm or generalized
binary search, may be viewed as a generalization of Shannon-Fano coding. We
then extend this result to the group-based settings, leading to new algorithms.
The performance of our algorithms is demonstrated on simulated data and on a
database used by first responders for toxic chemical identification.
|
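The splitting algorithm (generalized binary search) that the paper relates to Shannon-Fano coding can be sketched as follows. The object and query tables are illustrative, not from the paper, and query noise and the group-based extensions are omitted:

```python
def generalized_binary_search(objects, queries, answer):
    """Greedy splitting algorithm: repeatedly ask the query that most evenly
    bisects the set of objects still consistent with the answers so far.
    queries[q][o] is the noise-free yes/no (1/0) answer for object o;
    answer(q) returns the truth for the unknown target object."""
    candidates = set(objects)
    while len(candidates) > 1:
        # Pick the query whose yes/no split of the candidates is most balanced.
        q = min(queries, key=lambda q: abs(
            sum(queries[q][o] for o in candidates) - len(candidates) / 2))
        resp = answer(q)
        candidates = {o for o in candidates if queries[q][o] == resp}
    return candidates.pop()

objects = [0, 1, 2, 3]
queries = {'q0': {0: 0, 1: 0, 2: 1, 3: 1},
           'q1': {0: 0, 1: 1, 2: 0, 3: 1}}
target = 2
print(generalized_binary_search(objects, queries, lambda q: queries[q][target]))  # 2
```

Group identification replaces the singleton stopping condition with "all remaining candidates lie in one group".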
0911.4513
|
A framework for protein and membrane interactions
|
cs.CE cs.LO q-bio.MN q-bio.QM
|
We introduce the BioBeta Framework, a meta-model for both protein-level and
membrane-level interactions of living cells. This formalism aims to provide a
formal setting in which to encode, compare and merge models at different
abstraction levels; in particular, higher-level (e.g. membrane) activities can
be given a formal biological justification in terms of low-level (i.e.,
protein) interactions. A BioBeta specification provides a protein signature
together with a set of protein reactions, in the spirit of the kappa-calculus.
Moreover, the specification describes when a protein configuration triggers one
of the only two membrane interactions allowed, namely "pinch" and "fuse". In
this paper we define the syntax and semantics of BioBeta, analyse its
properties, give it an interpretation as biobigraphical reactive systems, and
discuss its expressivity by comparing with kappa-calculus and modelling
significant examples. Notably, BioBeta has been designed after a bigraphical
metamodel for the same purposes. Hence, each instance of the calculus
corresponds to a bigraphical reactive system, and vice versa (almost).
Therefore, we can inherit the rich theory of bigraphs, such as the automatic
construction of labelled transition systems and behavioural congruences.
|
0911.4521
|
On the equivalence between minimal sufficient statistics, minimal
typical models and initial segments of the Halting sequence
|
cs.CC cs.IT math.IT
|
It is shown that the length of the algorithmic minimal sufficient statistic
of a binary string x, either in a representation of a finite set, computable
semimeasure, or a computable function, has a length larger than the
computational depth of x, and can solve the Halting problem for all programs
with length shorter than the m-depth of x. It is also shown that there are
strings for which the algorithmic minimal sufficient statistics can contain a
substantial amount of information that is not Halting information. The weak
sufficient statistic is introduced, and it is shown that a minimal weak
sufficient statistic for x is equivalent to a minimal typical model of x, and
to the Halting problem for all strings shorter than the BB-depth of x.
|
0911.4522
|
On the Number of Errors Correctable with Codes on Graphs
|
cs.IT cs.DM math.IT
|
We study ensembles of codes on graphs (generalized low-density parity-check,
or LDPC codes) constructed from random graphs and fixed local constrained
codes, and their extension to codes on hypergraphs. It is known that the
average minimum distance of codes in these ensembles grows linearly with the
code length. We show that these codes can correct a linearly growing number of
errors under simple iterative decoding algorithms. In particular, we show that
this property extends to codes constructed by parallel concatenation of Hamming
codes and other codes with small minimum distance. Previously known results
that proved this property for graph codes relied on graph expansion and
required the choice of local codes with large distance relative to their
length.
|
0911.4530
|
MIMO Z-Interference Channels: Capacity Under Strong and Noisy
Interference
|
cs.IT math.IT
|
The capacity regions of multiple-input multiple-output Gaussian
Z-interference channels are established for the very strong interference and
aligned strong interference cases. The sum-rate capacity of such channels is
established under noisy interference. These results generalize known results
for scalar Gaussian Z-interference channels.
|
0911.4640
|
Near-ML Signal Detection in Large-Dimension Linear Vector Channels Using
Reactive Tabu Search
|
cs.IT math.IT
|
Low-complexity near-optimal signal detection in large dimensional
communication systems is a challenge. In this paper, we present a reactive tabu
search (RTS) algorithm, a heuristic based combinatorial optimization technique,
to achieve low-complexity near-maximum likelihood (ML) signal detection in
linear vector channels with large dimensions. Two practically important
large-dimension linear vector channels are considered: i) multiple-input
multiple-output (MIMO) channels with large number (tens) of transmit and
receive antennas, and ii) severely delay-spread MIMO inter-symbol interference
(ISI) channels with large number (tens to hundreds) of multipath components.
These channels are of interest because the former offers the benefit of
increased spectral efficiency (several tens of bps/Hz) and the latter offers
the benefit of high time-diversity orders. Our simulation results show that,
while algorithms including variants of sphere decoding do not scale well for
large dimensions, the proposed RTS algorithm scales well for signal detection
in large dimensions while achieving performance increasingly close to ML as
the number of dimensions grows.
|
0911.4650
|
CanICA: Model-based extraction of reproducible group-level ICA patterns
from fMRI time series
|
cs.CV stat.AP
|
Spatial Independent Component Analysis (ICA) is an increasingly used
data-driven method to analyze functional Magnetic Resonance Imaging (fMRI)
data. To date, it has been used to extract meaningful patterns without prior
information. However, ICA is not robust to mild data variation and remains a
parameter-sensitive algorithm. The validity of the extracted patterns is hard
to establish, as well as the significance of differences between patterns
extracted from different groups of subjects. We start from a generative model
of the fMRI group data to introduce a probabilistic ICA pattern-extraction
algorithm, called CanICA (Canonical ICA). Thanks to an explicit noise model and
canonical correlation analysis, our method is auto-calibrated and identifies
the group-reproducible data subspace before performing ICA. We compare our
method to state-of-the-art multi-subject fMRI ICA methods and show that the
features extracted are more reproducible.
|
0911.4704
|
Cooperative Relaying with State Available Non-Causally at the Relay
|
cs.IT math.IT
|
We consider a three-terminal state-dependent relay channel with the channel
state noncausally available at only the relay. Such a model may be useful for
designing cooperative wireless networks with some terminals equipped with
cognition capabilities, i.e., the relay in our setup. In the discrete
memoryless (DM) case, we establish lower and upper bounds on channel capacity.
The lower bound is obtained by a coding scheme at the relay that uses a
combination of codeword splitting, Gel'fand-Pinsker binning, and
decode-and-forward relaying. The upper bound improves upon that obtained by
assuming that the channel state is available at the source, the relay, and the
destination. For the Gaussian case, we also derive lower and upper bounds on
the capacity. The lower bound is obtained by a coding scheme at the relay that
uses a combination of codeword splitting, generalized dirty paper coding, and
decode-and-forward relaying; the upper bound is also better than that obtained
by assuming that the channel state is available at the source, the relay, and
the destination. In the case of degraded Gaussian channels, the lower bound
meets the upper bound in some special cases, and so the capacity is
obtained for these cases. Furthermore, in the Gaussian case, we also extend the
results to the case in which the relay operates in a half-duplex mode.
|
0911.4727
|
Exchangeability and sets of desirable gambles
|
math.PR cs.AI math.ST stat.TH
|
Sets of desirable gambles constitute a quite general type of uncertainty
model with an interesting geometrical interpretation. We give a general
discussion of such models and their rationality criteria. We study
exchangeability assessments for them, and prove counterparts of de Finetti's
finite and infinite representation theorems. We show that the finite
representation in terms of count vectors has a very nice geometrical
interpretation, and that the representation in terms of frequency vectors is
tied up with multivariate Bernstein (basis) polynomials. We also lay bare the
relationships between the representations of updated exchangeable models, and
discuss conservative inference (natural extension) under exchangeability and
the extension of exchangeable sequences.
|
0911.4752
|
MIMO Radar Using Compressive Sampling
|
cs.IT math.IT
|
A MIMO radar system is proposed for obtaining angle and Doppler information
on potential targets. Transmitters and receivers are nodes of a small scale
wireless network and are assumed to be randomly scattered on a disk. The
transmit nodes transmit uncorrelated waveforms. Each receive node applies
compressive sampling to the received signal to obtain a small number of
samples, which the node subsequently forwards to a fusion center. Assuming that
the targets are sparsely located in the angle-Doppler space, based on the
samples forwarded by the receive nodes the fusion center formulates an
l1-optimization problem, the solution of which yields target angle and Doppler
information. The proposed approach achieves the superior resolution of MIMO
radar with far fewer samples than required by other approaches. This implies
power savings during the communication phase between the receive nodes and the
fusion center. Performance in the presence of a jammer is analyzed for the case
of slowly moving targets. Issues related to forming the basis matrix that spans
the angle-Doppler space, and to selecting a grid for that space, are discussed.
Extensive simulation results are provided to demonstrate the performance of the
proposed approach at different jammer and noise levels.
|
0911.4854
|
A Process Calculus for Molecular Interaction Maps
|
cs.CE cs.LO q-bio.MN
|
We present the MIM calculus, a modeling formalism with a strong biological
basis, which provides biologically-meaningful operators for representing the
interaction capabilities of molecular species. The operators of the calculus
are inspired by the reaction symbols used in Molecular Interaction Maps (MIMs),
a diagrammatic notation used by biologists. Models of the calculus can be
easily derived from MIM diagrams, for which an unambiguous and executable
interpretation is thus obtained. We give a formal definition of the syntax and
semantics of the MIM calculus, and we study properties of the formalism. A case
study is also presented to show the use of the calculus for modeling
biomolecular networks.
|
0911.4863
|
Statistical exponential families: A digest with flash cards
|
cs.LG
|
This document describes concisely the ubiquitous class of exponential family
distributions met in statistics. The first part recalls definitions and
summarizes main properties and duality with Bregman divergences (all proofs are
skipped). The second part lists decompositions and related formulas of common
exponential family distributions. We recall the Fisher-Rao-Riemannian
geometries and the dual affine connection information geometries of statistical
manifolds. We intend to maintain and update this document and catalog by
adding new distribution items.
|
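The canonical decomposition catalogued in the digest, p(x; θ) = exp(θ·t(x) − F(θ) + k(x)), has the standard property that the mean parameter equals the gradient of the log-normalizer F. A small numerical check for the Bernoulli family (a textbook fact, not code from the document):

```python
import math

def bernoulli_log_partition(theta):
    """Log-normalizer F(theta) = log(1 + exp(theta)) of the Bernoulli family
    written in canonical form p(x; theta) = exp(theta * x - F(theta))."""
    return math.log1p(math.exp(theta))

def numeric_grad(f, x, h=1e-6):
    # Central finite difference, accurate enough for this check.
    return (f(x + h) - f(x - h)) / (2 * h)

p = 0.3
theta = math.log(p / (1 - p))  # natural parameter of Bernoulli(p)
print(numeric_grad(bernoulli_log_partition, theta))  # ≈ 0.3, the mean parameter
```

The same gradient relation underlies the Bregman-divergence duality the digest summarizes.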
0911.4874
|
Non-photorealistic image processing: an Impressionist rendering
|
cs.CV
|
The paper describes an image-processing algorithm for non-photorealistic
rendering. The algorithm is based on a random choice of a set of pixels from
those of the original image and their substitution with colour spots. An
iterative procedure is applied to cover the canvas to a desired level. The
resulting effect mimics Impressionist painting and Pointillism.
|
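The random-spot procedure described above reduces to a short loop; the sketch below uses a synthetic gradient image and circular spots, both illustrative choices not specified in the paper:

```python
import numpy as np

def impressionist(image, n_spots=2000, radius=3, seed=0):
    """Cover a blank canvas with circular colour spots: each spot takes the
    colour of a randomly chosen pixel of the original image (minimal sketch)."""
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    canvas = np.zeros_like(image)
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_spots):
        y, x = rng.integers(0, h), rng.integers(0, w)
        spot = (yy - y) ** 2 + (xx - x) ** 2 <= radius ** 2
        canvas[spot] = image[y, x]  # the spot takes the sampled pixel's colour
    return canvas

# A synthetic 64x64 RGB gradient stands in for the original image.
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[..., 0] = np.arange(64)           # red varies along columns
img[..., 1] = np.arange(64)[:, None]  # green varies along rows
img[..., 2] = 128
out = impressionist(img)
print(out.shape)  # (64, 64, 3)
```

Increasing n_spots covers the canvas more completely, corresponding to the "desired level" of coverage in the iterative procedure.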
0911.4880
|
An Estimation Theoretic Approach for Sparsity Pattern Recovery in the
Noisy Setting
|
cs.IT math.IT
|
Compressed sensing deals with the reconstruction of sparse signals using a
small number of linear measurements. One of the main challenges in compressed
sensing is to find the support of a sparse signal. In the literature, several
bounds on the scaling law of the number of measurements for successful support
recovery have been derived where the main focus is on random Gaussian
measurement matrices. In this paper, we investigate the noisy support recovery
problem from an estimation theoretic point of view, where no specific
assumption is made on the underlying measurement matrix. The linear
measurements are perturbed by additive white Gaussian noise. We define the
output of a support estimator to be a set of position values in increasing
order. We set the error between the true and estimated supports as the
$\ell_2$-norm of their difference. On the one hand, this choice allows us to
use the machinery behind the $\ell_2$-norm error metric and on the other hand,
converts the support recovery into a more intuitive and geometrical problem.
First, by using the Hammersley-Chapman-Robbins (HCR) bound, we derive a
fundamental lower bound on the performance of any \emph{unbiased} estimator of
the support set. This lower bound provides us with necessary conditions on the
number of measurements for reliable $\ell_2$-norm support recovery, which we
specifically evaluate for uniform Gaussian measurement matrices. Then, we
analyze the maximum likelihood estimator and derive conditions under which the
HCR bound is achievable. This leads us to the number of measurements for the
optimum decoder which is sufficient for reliable $\ell_2$-norm support
recovery. Using this framework, we specifically evaluate sufficient conditions
for uniform Gaussian measurement matrices.
|
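With supports written as position vectors in increasing order, the $\ell_2$ error between true and estimated supports described above is direct to compute; a tiny sketch (assuming equal-size supports, for illustration):

```python
import math

def support_error(true_support, est_support):
    """l2-norm distance between two supports, each written as a vector of
    positions in increasing order (assumes equal-size supports)."""
    t, e = sorted(true_support), sorted(est_support)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(t, e)))

print(support_error({2, 5, 9}, {2, 6, 9}))  # 1.0 -- one position off by one
```

Casting support recovery as estimation of this ordered position vector is what lets the paper apply standard $\ell_2$ machinery such as the Hammersley-Chapman-Robbins bound.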
0911.4896
|
Diversity Order in ISI Channels with Single-Carrier Frequency-Domain
Equalizers
|
cs.IT math.IT
|
This paper analyzes the diversity gain achieved by single-carrier
frequency-domain equalizer (SC-FDE) in frequency selective channels, and
uncovers the interplay between diversity gain $d$, channel memory length $\nu$,
transmission block length $L$, and the spectral efficiency $R$. We specifically
show that for the class of minimum mean-square error (MMSE) SC-FDE receivers,
for rates $R\leq\log\frac{L}{\nu}$ full diversity of $d=\nu+1$ is achievable,
while for higher rates the diversity is given by $d=\lfloor2^{-R}L\rfloor+1$.
In other words, the achievable diversity gain depends not only on the channel
memory length, but also on the desired spectral efficiency and the transmission
block length. A similar analysis reveals that for zero forcing SC-FDE, the
diversity order is always one irrespective of channel memory length and
spectral efficiency. These results are supported by simulations.
|
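The rate-dependent diversity result stated above can be evaluated directly; the sketch assumes the base-2 logarithm, consistent with rates measured in bits:

```python
import math

def mmse_scfde_diversity(L, nu, R):
    """Diversity order of MMSE SC-FDE per the stated result: full diversity
    nu + 1 when R <= log2(L / nu), otherwise floor(2^(-R) * L) + 1."""
    if R <= math.log2(L / nu):
        return nu + 1
    return math.floor(2 ** (-R) * L) + 1

print(mmse_scfde_diversity(L=64, nu=4, R=3))  # 5 (full diversity: log2(64/4) = 4 >= 3)
print(mmse_scfde_diversity(L=64, nu=4, R=5))  # 3 (= floor(64/32) + 1)
```

By contrast, the zero-forcing SC-FDE diversity is constant at one for any L, nu, and R.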
0911.4910
|
Adaptive information filtering for dynamic recommender systems
|
cs.IR cs.IT math.IT
|
The dynamic environment of the real world calls for adaptive techniques for
information filtering, namely techniques that provide real-time responses to
changes in system data. While many incremental algorithms are designed for this
purpose, they are usually challenged by steadily degrading performance
resulting from cumulative errors over time. In this Letter, we propose two
incremental diffusion-based algorithms for personalized recommendation, which
integrate local and fast updates to achieve approximate results. In addition to
fast responses, the errors of the proposed algorithms do not accumulate over
time, that is to say, global recomputation is unnecessary. This remarkable
advantage is demonstrated by
several metrics on algorithmic accuracy for two movie recommender systems and a
social bookmarking system.
|
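The diffusion-based algorithms the Letter makes incremental belong to the probabilistic-spreading (ProbS) family; a minimal batch version on a toy user-item bipartite graph (the incremental-update machinery itself is not reproduced here, and the data are illustrative):

```python
def probs_scores(user_items, target_user):
    """Probabilistic-spreading (ProbS) diffusion on a user-item bipartite
    graph: resource flows items -> users -> items, normalized by degrees.
    user_items maps each user to the set of items they collected."""
    item_degree = {}
    for items in user_items.values():
        for i in items:
            item_degree[i] = item_degree.get(i, 0) + 1
    # Step 1: each item collected by the target user spreads one unit of
    # resource evenly over the users who collected it.
    user_resource = {}
    for u, items in user_items.items():
        for i in user_items[target_user]:
            if i in items:
                user_resource[u] = user_resource.get(u, 0) + 1 / item_degree[i]
    # Step 2: each user spreads its resource evenly over its own items.
    scores = {}
    for u, r in user_resource.items():
        for i in user_items[u]:
            scores[i] = scores.get(i, 0) + r / len(user_items[u])
    return scores

users = {'a': {'x', 'y'}, 'b': {'y', 'z'}, 'c': {'z'}}
print(probs_scores(users, 'a'))  # item 'z' gets a nonzero score via user 'b'
```

An incremental variant would patch these degree counts and partial sums locally when a new user-item link arrives, rather than recomputing the full diffusion.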
0911.4983
|
Modelling Cell Cycle using Different Levels of Representation
|
cs.CE cs.LO q-bio.CB q-bio.QM
|
Understanding the behaviour of biological systems requires a complex setting
of in vitro and in vivo experiments, which attracts high costs in terms of time
and resources. The use of mathematical models allows researchers to perform
computerised simulations of biological systems, which are called in silico
experiments, to attain important insights and predictions about the system
behaviour with a considerably lower cost. Computer visualisation is an
important part of this approach, since it provides a realistic representation
of the system behaviour. We define a formal methodology to model biological
systems using different levels of representation: a purely formal
representation, which we call molecular level, models the biochemical dynamics
of the system; visualisation-oriented representations, which we call visual
levels, provide views of the biological system at a higher level of
organisation and are equipped with the necessary spatial information to
generate the appropriate visualisation. We choose Spatial CLS, a formal
language belonging to the class of Calculi of Looping Sequences, as the
formalism for modelling all representation levels. We illustrate our approach
using the budding yeast cell cycle as a case study.
|
0911.4984
|
A compartmental model of the cAMP/PKA/MAPK pathway in Bio-PEPA
|
cs.CE cs.LO q-bio.MN q-bio.QM
|
The vast majority of biochemical systems involve the exchange of information
between different compartments, either in the form of transportation or via the
intervention of membrane proteins which are able to transmit stimuli between
bordering compartments. The correct quantitative handling of compartments is,
therefore, extremely important when modelling real biochemical systems. The
Bio-PEPA process algebra is equipped with the capability of explicitly defining
quantitative information such as compartment volumes and membrane surface
areas. Furthermore, the recent development of the Bio-PEPA Eclipse Plug-in
allows us to perform a correct stochastic simulation of multi-compartmental
models.
Here we present a Bio-PEPA compartmental model of the cAMP/PKA/MAPK pathway.
We analyse the system using the Bio-PEPA Eclipse Plug-in and we show the
correctness of our model by comparison with an existing ODE model. Furthermore,
we perform computational experiments in order to investigate certain properties
of the pathway. Specifically, we focus on the system response to the inhibition
and strengthening of feedback loops and to the variation in the activity of key
pathway reactions and we observe how these modifications affect the behaviour
of the pathway. These experiments are useful to understand the control and
regulatory mechanisms of the system.
|
0911.4986
|
New Solutions to the Firing Squad Synchronization Problems for Neural
and Hyperdag P Systems
|
cs.CE cs.DC cs.NE
|
We propose two uniform solutions to an open question: the Firing Squad
Synchronization Problem (FSSP), for hyperdag and symmetric neural P systems,
with anonymous cells. Our solutions take e_c+5 and 6e_c+7 steps, respectively,
where e_c is the eccentricity of the commander cell of the dag or digraph
underlying these P systems. The first and fast solution is based on a novel
proposal, which dynamically extends P systems with mobile channels. The second
solution is substantially longer, but is solely based on classical rules and
static channels. In contrast to the previous solutions, which work for
tree-based P systems, our solutions synchronize to any subset of the underlying
digraph; and do not require membrane polarizations or conditional rules, but
require states, as typically used in hyperdag and neural P systems.
|
0911.4987
|
Drip and Mate Operations Acting in Test Tube Systems and Tissue-like P
systems
|
cs.CE
|
The operations drip and mate considered in (mem)brane computing resemble the
operations cut and recombination well known from DNA computing. We here
consider sets of vesicles with multisets of objects on their outside membrane
interacting by drip and mate in two different setups: in test tube systems, the
vesicles may pass from one tube to another one provided they fulfill specific
constraints; in tissue-like P systems, the vesicles are immediately passed to
specified cells after having undergone a drip or mate operation. In both
variants, computational completeness can be obtained, yet with different
constraints for the drip and mate operations.
|
0911.4988
|
Abstract Interpretation for Probabilistic Termination of Biological
Systems
|
cs.LO cs.CE cs.FL q-bio.QM
|
In a previous paper the authors applied the Abstract Interpretation approach
for approximating the probabilistic semantics of biological systems, modeled
specifically using the Chemical Ground Form calculus. The methodology is based
on the idea of representing a set of experiments, which differ only for the
initial concentrations, by abstracting the multiplicity of reagents present in
a solution, using intervals. In this paper, we refine the approach in order to
address probabilistic termination properties. In more detail, we introduce a
refinement of the abstract LTS semantics and we abstract the probabilistic
semantics using a variant of Interval Markov Chains. The abstract probabilistic
model safely approximates a set of concrete experiments and reports
conservative lower and upper bounds for probabilistic termination.
|
0911.4989
|
Dependencies and Simultaneity in Membrane Systems
|
cs.CE cs.DC cs.LO
|
Membrane system computations proceed in a synchronous fashion: at each step
all the applicable rules are actually applied. Hence each step depends on the
previous one. This coarse view can be refined by looking at the dependencies
among rule occurrences, by recording, for an object, which rule produced it
and, subsequently (in a later step), which rule consumed it. In this paper we
propose a way to look also at the other main ingredient in membrane system
computations, namely the simultaneity in the rule applications. This is
achieved using zero-safe nets, which allow transitions, i.e., rule occurrences,
to be synchronized. Zero-safe nets can be unfolded into occurrence nets in a
classical way, and to this unfolding an event structure can be associated. The
capability of zero-safe nets to capture simultaneity is transferred to the
level of event structures by adding a way to express which events occur
simultaneously.
|
0911.5043
|
A Semantic Similarity Measure for Expressive Description Logics
|
cs.AI cs.LO
|
A totally semantic measure is presented which is able to calculate a
similarity value between concept descriptions, between a concept description
and an individual, or between individuals expressed in an expressive
description logic. It is applicable to symbolic descriptions although it uses a
numeric approach for its computation. Considering that Description Logics stand
as the theoretical framework for ontological knowledge representation and
reasoning, the proposed measure can be effectively used for agglomerative and
divisive clustering tasks applied to the Semantic Web domain.
|
0911.5046
|
Integrating the Probabilistic Models BM25/BM25F into Lucene
|
cs.IR
|
This document describes the BM25 and BM25F implementation using the Lucene
Java Framework. Both models have stood out at TREC by their performance and are
considered state-of-the-art in the IR community. BM25 is applied to retrieval
on plain-text documents, that is, documents that do not contain fields, while
BM25F is applied to documents with structure.
|
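For reference, the classic BM25 scoring formula that the document implements on top of Lucene can be sketched as follows (a minimal textbook version; Lucene's production implementation differs in IDF details and optimizations):

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Classic BM25: for each query term, IDF times a saturated term
    frequency with document-length normalization (minimal reference sketch)."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for t in query_terms:
        df = sum(1 for d in corpus if t in d)  # document frequency
        if df == 0:
            continue
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        tf = doc.count(t)
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["cat", "dog"]]
print(bm25_score(["cat"], docs[0], docs))
```

BM25F extends this by combining weighted per-field term frequencies into a single value before the saturation step, rather than scoring fields independently.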
0911.5067
|
Asynchronous CDMA Systems with Random Spreading-Part II: Design Criteria
|
cs.IT math.IT math.PR
|
Totally asynchronous code-division multiple-access (CDMA) systems are
addressed. In Part I, the fundamental limits of asynchronous CDMA systems are
analyzed in terms of spectral efficiency and SINR at the output of the optimum
linear detector. The focus of Part II is the design of low-complexity
implementations of linear multiuser detectors in systems with many users that
admit a multistage representation, e.g. reduced rank multistage Wiener filters,
polynomial expansion detectors, weighted linear parallel interference
cancellers. The effects of excess bandwidth, chip-pulse shaping, and time delay
distribution on CDMA with suboptimum linear receiver structures are
investigated. Recursive expressions for universal weight design are given. The
performance in terms of SINR is derived in the large-system limit and the
performance improvement over synchronous systems is quantified. The
considerations distinguish between two ways of forming discrete-time
statistics: chip-matched filtering and oversampling.
|
0911.5104
|
A Bayesian Rule for Adaptive Control based on Causal Interventions
|
cs.AI cs.LG
|
Explaining adaptive behavior is a central problem in artificial intelligence
research. Here we formalize adaptive agents as mixture distributions over
sequences of inputs and outputs (I/O). Each distribution of the mixture
constitutes a `possible world', but the agent does not know which of the
possible worlds it is actually facing. The problem is to adapt the I/O stream
in a way that is compatible with the true world. A natural measure of
adaptation can be obtained by the Kullback-Leibler (KL) divergence between the
I/O distribution of the true world and the I/O distribution expected by the
agent that is uncertain about possible worlds. In the case of pure input
streams, the Bayesian mixture provides a well-known solution for this problem.
We show, however, that in the case of I/O streams this solution breaks down,
because outputs are issued by the agent itself and require a different
probabilistic syntax as provided by intervention calculus. Based on this
calculus, we obtain a Bayesian control rule that allows modeling adaptive
behavior with mixture distributions over I/O streams. This rule might allow for
a novel approach to adaptive control based on a minimum KL-principle.
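A toy instance of such a rule on a two-world bandit (all numbers hypothetical, not from the paper): actions are drawn by sampling a world from the posterior and acting optimally in it, while the posterior is conditioned only on observed inputs, never on the agent's own outputs, mirroring the intervention calculus described above:

```python
import math
import random

def bayesian_control_rule(true_world=0, steps=1000, seed=0):
    """Two hypothetical worlds: in world 0, arm 0 pays with prob 0.9 and
    arm 1 with prob 0.1; world 1 is the mirror image. The agent samples a
    world from its posterior, plays that world's best arm, then updates
    the posterior on the observed reward only (the action is an
    intervention, so it carries no likelihood term)."""
    rng = random.Random(seed)
    worlds = [{0: 0.9, 1: 0.1}, {0: 0.1, 1: 0.9}]
    log_post = [0.0, 0.0]
    total_reward = 0
    for _ in range(steps):
        # sample a world from the current posterior
        m = max(log_post)
        w0 = math.exp(log_post[0] - m)
        w1 = math.exp(log_post[1] - m)
        world = 0 if rng.random() < w0 / (w0 + w1) else 1
        action = max(worlds[world], key=worlds[world].get)
        reward = 1 if rng.random() < worlds[true_world][action] else 0
        total_reward += reward
        # condition on the input (reward) given the intervened action
        for i in (0, 1):
            p = worlds[i][action]
            log_post[i] += math.log(p if reward else 1 - p)
    return total_reward / steps
```

In this sketch the posterior concentrates on the true world and the average reward approaches the best arm's payoff, illustrating adaptation as convergence of the mixture to the world actually faced.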
|
0911.5106
|
A conversion between utility and information
|
cs.AI cs.IT math.IT
|
Rewards typically express desirabilities or preferences over a set of
alternatives. Here we propose that rewards can be defined for any probability
distribution based on three desiderata, namely that rewards should be
real-valued, additive and order-preserving, where the latter implies that more
probable events should also be more desirable. Our main result states that
rewards are then uniquely determined by the negative information content. To
analyze stochastic processes, we define the utility of a realization as its
reward rate. Under this interpretation, we show that the expected utility of a
stochastic process is its negative entropy rate. Furthermore, we apply our
results to analyze agent-environment interactions. We show that the expected
utility that will actually be achieved by the agent is given by the negative
cross-entropy from the input-output (I/O) distribution of the coupled
interaction system and the agent's I/O distribution. Thus, our results allow
for an information-theoretic interpretation of the notion of utility and the
characterization of agent-environment interactions in terms of entropy
dynamics.
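The three desiderata pin the reward of an outcome with probability p down to log p (its negative information content), up to the usual affine freedom; a minimal check in code, fixing the natural logarithm:

```python
import math

def reward(p):
    """Reward of an outcome with probability p: its negative information
    content, log p (the base is a free choice; natural log is fixed here)."""
    return math.log(p)

def expected_reward(dist):
    """Expected reward of a distribution: sum p log p = negative entropy."""
    return sum(p * reward(p) for p in dist if p > 0)
```

Additivity (the reward of independent joint outcomes is the sum of rewards) and order preservation are immediate, and expected_reward recovers the negative entropy mentioned in the abstract.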
|
0911.5116
|
Standardization of the formal representation of lexical information for
NLP
|
cs.CL
|
A survey of dictionary models and formats is presented, together with an
overview of corresponding recent standardisation activities.
|
0911.5242
|
The ILIUM forward modelling algorithm for multivariate parameter
estimation and its application to derive stellar parameters from Gaia
spectrophotometry
|
astro-ph.IM astro-ph.GA astro-ph.SR cs.NE stat.ML
|
I introduce an algorithm for estimating parameters from multidimensional data
based on forward modelling. In contrast to many machine learning approaches it
avoids fitting an inverse model and the problems associated with this. The
algorithm makes explicit use of the sensitivities of the data to the
parameters, with the goal of better treating parameters which only have a weak
impact on the data. The forward modelling approach provides uncertainty (full
covariance) estimates in the predicted parameters as well as a goodness-of-fit
for observations. I demonstrate the algorithm, ILIUM, with the estimation of
stellar astrophysical parameters (APs) from simulations of the low resolution
spectrophotometry to be obtained by Gaia. The AP accuracy is competitive with
that obtained by a support vector machine. For example, for zero extinction
stars covering a wide range of metallicity, surface gravity and temperature,
ILIUM can estimate Teff to an accuracy of 0.3% at G=15 and to 4% for (lower
signal-to-noise ratio) spectra at G=20. [Fe/H] and logg can be estimated to
accuracies of 0.1-0.4dex for stars with G<=18.5. If extinction varies a priori
over a wide range (Av=0-10mag), then Teff and Av can be estimated quite
accurately (3-4% and 0.1-0.2mag respectively at G=15), but there is a strong
and ubiquitous degeneracy in these parameters which limits our ability to
estimate either accurately at faint magnitudes. Using the forward model we can
map these degeneracies (in advance), and thus provide a complete probability
distribution over solutions. (Abridged)
|
0911.5300
|
Improving zero-error classical communication with entanglement
|
quant-ph cs.IT math.IT
|
Given one or more uses of a classical channel, only a certain number of
messages can be transmitted with zero probability of error. The study of this
number and its asymptotic behaviour constitutes the field of classical
zero-error information theory, the quantum generalisation of which has started
to develop recently. We show that, given a single use of certain classical
channels, entangled states of a system shared by the sender and receiver can be
used to increase the number of (classical) messages which can be sent with no
chance of error. In particular, we show how to construct such a channel based
on any proof of the Bell-Kochen-Specker theorem. This is a new example of the
use of quantum effects to improve the performance of a classical task. We
investigate the connection between this phenomenon and that of
``pseudo-telepathy'' games. The use of generalised non-signalling correlations
to assist in this task is also considered. In this case, a particularly elegant
theory results and, remarkably, it is sometimes possible to transmit
information with zero-error using a channel with no unassisted zero-error
capacity.
|
0911.5372
|
Maximin affinity learning of image segmentation
|
cs.CV cs.AI cs.LG cs.NE
|
Images can be segmented by first using a classifier to predict an affinity
graph that reflects the degree to which image pixels must be grouped together
and then partitioning the graph to yield a segmentation. Machine learning has
been applied to the affinity classifier to produce affinity graphs that are
good in the sense of minimizing edge misclassification rates. However, this
error measure is only indirectly related to the quality of segmentations
produced by ultimately partitioning the affinity graph. We present the first
machine learning algorithm for training a classifier to produce affinity graphs
that are good in the sense of producing segmentations that directly minimize
the Rand index, a well known segmentation performance measure. The Rand index
measures segmentation performance by quantifying the classification of the
connectivity of image pixel pairs after segmentation. By using the simple graph
partitioning algorithm of finding the connected components of the thresholded
affinity graph, we are able to train an affinity classifier to directly
minimize the Rand index of segmentations resulting from the graph partitioning.
Our learning algorithm corresponds to the learning of maximin affinities
between image pixel pairs, which are predictive of the pixel-pair connectivity.
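The simple partitioner used above, connected components of the thresholded affinity graph, can be sketched with a union-find over pixel indices (a generic illustration, not the authors' code):

```python
def segment(num_pixels, edges, threshold):
    """Segment pixels 0..num_pixels-1: keep edges whose predicted affinity
    is at least `threshold`, then label connected components (union-find).
    `edges` is an iterable of (i, j, affinity) tuples."""
    parent = list(range(num_pixels))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i, j, affinity in edges:
        if affinity >= threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(num_pixels)]
```

The Rand index then scores how many pixel pairs agree in connectivity between this labeling and the ground truth, which is exactly the quantity the maximin affinity learning targets.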
|
0911.5378
|
De la recherche sociale d'information \`a la recherche collaborative
d'information
|
cs.IR
|
In this paper, we explain social information retrieval (SIR) and
collaborative information retrieval (CIR). We see SIR as a way of knowing who
to collaborate with in resolving an information problem while CIR entails the
process of mutual understanding and solving of an information problem among
collaborators. We are interested in the transition from SIR to CIR hence we
developed a communication model to facilitate knowledge sharing during CIR.
|
0911.5385
|
Asynchronous CDMA Systems with Random Spreading-Part I: Fundamental
Limits
|
cs.IT math.IT math.PR
|
Spectral efficiency for asynchronous code division multiple access (CDMA)
with random spreading is calculated in the large system limit allowing for
arbitrary chip waveforms and frequency-flat fading. Signal to interference and
noise ratios (SINRs) for suboptimal receivers, such as the linear minimum mean
square error (MMSE) detectors, are derived. The approach is general and
optionally allows even for statistics obtained by under-sampling the received
signal.
All performance measures are given as a function of the chip waveform and the
delay distribution of the users in the large system limit. It turns out that
synchronizing users on a chip level impairs performance for all chip waveforms
with bandwidth greater than the Nyquist bandwidth, e.g., positive roll-off
factors. For example, with the pulse shaping demanded in the UMTS standard,
user synchronization reduces spectral efficiency by up to 12% at 10 dB normalized
signal-to-noise ratio. The benefits of asynchronism stem from the finding that
the excess bandwidth of chip waveforms actually spans additional dimensions in
signal space, if the users are de-synchronized on the chip-level. The analysis
of linear MMSE detectors shows that the limiting interference effects can be
decoupled both in the user domain and in the frequency domain such that the
concept of the effective interference spectral density arises. This generalizes
and refines Tse and Hanly's concept of effective interference.
In Part II, the analysis is extended to any linear detector that admits a
representation as multistage detector and guidelines for the design of low
complexity multistage detectors with universal weights are provided.
|
0911.5394
|
Covering rough sets based on neighborhoods: An approach without using
neighborhoods
|
cs.AI
|
Rough set theory, a mathematical tool to deal with inexact or uncertain
knowledge in information systems, has originally described the indiscernibility
of elements by equivalence relations. Covering rough sets are a natural
extension of classical rough sets by relaxing the partitions arising from
equivalence relations to coverings. Recently, some topological concepts such as
neighborhood have been applied to covering rough sets. In this paper, we
further investigate the covering rough sets based on neighborhoods by
approximation operations. We show that the upper approximation based on
neighborhoods can be defined equivalently without using neighborhoods. To
analyze the coverings themselves, we introduce unary and composition operations
on coverings. A notion of homomorphism is provided to relate two covering
approximation spaces. We also examine the properties of approximations
preserved by the operations and homomorphisms, respectively.
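A small sketch of the neighborhood operator and the neighborhood-based upper approximation discussed above (notation is illustrative; the paper's equivalent neighborhood-free characterization is not reproduced here):

```python
def neighborhood(x, cover):
    """N(x): the intersection of all blocks of the covering containing x."""
    blocks = [set(b) for b in cover if x in b]
    nbhd = blocks[0]
    for b in blocks[1:]:
        nbhd &= b
    return nbhd

def upper_approximation(X, universe, cover):
    """Neighborhood-based upper approximation: every x whose neighborhood
    meets X."""
    X = set(X)
    return {x for x in universe if neighborhood(x, cover) & X}
```

When the covering is a partition, each neighborhood collapses to the equivalence class of x and the classical Pawlak upper approximation is recovered.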
|
0911.5395
|
An axiomatic approach to the roughness measure of rough sets
|
cs.AI
|
In Pawlak's rough set theory, a set is approximated by a pair of lower and
upper approximations. To measure numerically the roughness of an approximation,
Pawlak introduced a quantitative measure of roughness by using the ratio of the
cardinalities of the lower and upper approximations. Although the roughness
measure is effective, it has the drawback of not being strictly monotonic with
respect to the standard ordering on partitions. Recently, some improvements
have been made by taking into account the granularity of partitions. In this
paper, we approach the roughness measure in an axiomatic way. After
axiomatically defining roughness measure and partition measure, we provide a
unified construction of roughness measure, called strong Pawlak roughness
measure, and then explore the properties of this measure. We show that the
improved roughness measures in the literature are special instances of our
strong Pawlak roughness measure and introduce three more strong Pawlak
roughness measures as well. The advantage of our axiomatic approach is that
some properties of a roughness measure follow immediately as soon as the
measure satisfies the relevant axiomatic definition.
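For concreteness, Pawlak's original roughness measure, the one the axiomatic treatment generalizes, can be computed as:

```python
def pawlak_roughness(partition, X):
    """Pawlak's roughness of X w.r.t. an equivalence partition:
    1 - |lower approximation| / |upper approximation|.
    0 means X is exactly definable; values near 1 mean very rough."""
    X = set(X)
    lower = {x for block in partition if set(block) <= X for x in block}
    upper = {x for block in partition if set(block) & X for x in block}
    if not upper:           # X is empty: treat as exactly definable
        return 0.0
    return 1 - len(lower) / len(upper)
```

The non-monotonicity drawback mentioned above shows up here: refining the partition can leave this ratio unchanged even though the approximation genuinely improves, which motivates the granularity-aware variants and the axiomatic construction.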
|
0911.5404
|
Laser Actuated Presentation System
|
cs.HC cs.CV
|
We present a pattern-sensitive PowerPoint presentation scheme. The
presentation is actuated by simple patterns drawn on the presentation screen
with a laser pointer. A specific pattern corresponds to a particular command
required to operate the presentation. The laser spot on the screen is captured
by an RGB webcam with a red filter mounted, and its location is identified in
the blue layer of each captured frame by estimating the mean position of the
pixels whose intensity is above a given threshold value. The measured
reliability, accuracy, and latency of our system are 90%, 10 pixels (in the
worst case), and 38 ms, respectively.
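The spot-localization step described above (mean position of above-threshold pixels in the blue layer) reduces to a few lines; the threshold value here is a placeholder, not the one used by the authors:

```python
def locate_laser_spot(blue_layer, threshold=200):
    """Estimate the laser-spot position in a captured frame's blue layer
    as the mean (row, col) of pixels brighter than `threshold`.
    `blue_layer` is a 2-D list of intensities; returns None when no pixel
    exceeds the threshold (no spot visible)."""
    coords = [(r, c) for r, row in enumerate(blue_layer)
              for c, v in enumerate(row) if v > threshold]
    if not coords:
        return None
    n = len(coords)
    return (sum(r for r, _ in coords) / n, sum(c for _, c in coords) / n)
```

Tracking this position across frames yields the drawn trajectory, which is then matched against the command patterns.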
|
0911.5459
|
Shortest Two-way Linear Recurrences
|
cs.IT cs.SC math.IT
|
Let $s$ be a finite sequence over a field of length $n$. It is well-known
that if $s$ satisfies a linear recurrence of order $d$ with non-zero constant
term, then the reverse of $s$ also satisfies a recurrence of order $d$ (with
coefficients in reverse order). A recent article by A. Salagean proposed an
algorithm to find such a shortest 'two-way' recurrence -- which may be longer
than a linear recurrence for $s$ of shortest length $\LC_n$.
We give a new and simpler algorithm to compute a shortest two-way linear
recurrence. First we show that the pairs of polynomials we use to construct a
minimal polynomial iteratively are always relatively prime; we also give the
extended multipliers. Then we combine degree lower bounds with a
straightforward rewrite of a published algorithm due to the author to obtain
our simpler algorithm. The increase in shortest length is
$\max\{n+1-2\LC_n,0\}$.
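The one-way shortest length $\LC_n$ appearing in the bound above is the classical linear complexity; over GF(2) it is computed by the Berlekamp-Massey algorithm, sketched here (the two-way construction itself is not reproduced):

```python
def linear_complexity_gf2(s):
    """Berlekamp-Massey over GF(2): the length of the shortest linear
    recurrence (LFSR) generating the bit sequence s."""
    n = len(s)
    c = [1] + [0] * n          # current connection polynomial
    b = [1] + [0] * n          # copy from before the last length change
    L, m = 0, -1
    for i in range(n):
        # discrepancy between s[i] and the current recurrence's prediction
        d = s[i]
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            shift = i - m
            for j in range(n + 1 - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L
```

The sequence 0,0,1 has linear complexity 3 while the all-ones sequence has complexity 1, the standard sanity checks for this routine.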
|
0911.5462
|
Pigment Melanin: Pattern for Iris Recognition
|
cs.CV
|
Recognition of iris based on Visible Light (VL) imaging is a difficult
problem because of the light reflection from the cornea. Nonetheless, pigment
melanin provides a rich feature source in VL, unavailable in Near-Infrared
(NIR) imaging. This is due to biological spectroscopy of eumelanin, a chemical
not stimulated in NIR. In this case, a plausible solution to observe such
patterns may be provided by an adaptive procedure using a variational technique
on the image histogram. To describe the patterns, a shape analysis method is
used to derive feature-code for each subject. An important question is how much
the melanin patterns, extracted from VL, are independent of iris texture in
NIR. With this question in mind, the present investigation proposes fusion of
features extracted from NIR and VL to boost the recognition performance. We
have collected our own database (UTIRIS) consisting of both NIR and VL images
of 158 eyes of 79 individuals. This investigation demonstrates that the
proposed algorithm is highly sensitive to the patterns of chromophores and
improves the iris recognition rate.
|
0911.5487
|
Strong Spatial Mixing for Binary Markov Random Fields
|
cs.IT cs.DM math.IT
|
The Gibbs distribution of binary Markov random fields on a graph that is
sparse on average is considered in this paper. Strong spatial mixing is proved
under the condition that the `external field' is uniformly large or small.
Such a condition on the `external field' is physically meaningful.
|
0911.5508
|
Codes on graphs: Duality and MacWilliams identities
|
cs.IT math.IT
|
A conceptual framework involving partition functions of normal factor graphs
is introduced, paralleling a similar recent development by Al-Bashabsheh and
Mao. The partition functions of dual normal factor graphs are shown to be a
Fourier transform pair, whether or not the graphs have cycles. The original
normal graph duality theorem follows as a corollary. Within this framework,
MacWilliams identities are found for various local and global weight generating
functions of general group or linear codes on graphs; this generalizes and
provides a concise proof of the MacWilliams identity for linear time-invariant
convolutional codes that was recently found by Gluesing-Luerssen and Schneider.
Further MacWilliams identities are developed for terminated convolutional
codes, particularly for tail-biting codes, similar to those studied recently by
Bocharova, Hug, Johannesson and Kudryashov.
|
0911.5509
|
Interference Alignment Under Limited Feedback for MIMO Interference
Channels
|
cs.IT math.IT
|
While interference alignment schemes have been employed to realize the full
multiplexing gain of $K$-user interference channels, the analyses performed so
far have predominantly focused on the case when global channel knowledge is
available at each node of the network. This paper considers the problem where
each receiver knows its channels from all the transmitters and feeds back this
information using a limited number of bits to all other terminals. In
particular, channel quantization over the composite Grassmann manifold is
proposed and analyzed. It is shown, for $K$-user multiple-input,
multiple-output (MIMO) interference channels, that when the transmitters use an
interference alignment strategy as if the quantized channel estimates obtained
via this limited feedback are perfect, the full sum degrees of freedom of the
interference channel can be achieved as long as the feedback bit rate scales
sufficiently fast with the signal-to-noise ratio. Moreover, this is only one
extreme point of a continuous tradeoff between achievable degrees of freedom
region and user feedback rate scalings which are allowed to be non-identical.
It is seen that a slower scaling of feedback rate for any one user leads to
commensurately fewer degrees of freedom for that user alone.
|
0911.5515
|
Finite Dimensional Statistical Inference
|
cs.IT math.IT
|
In this paper, we derive the explicit series expansion of the eigenvalue
distribution of various models, namely the case of non-central Wishart
distributions, as well as correlated zero mean Wishart distributions. The tools
used extend those of the free probability framework, which have been quite
successful for high dimensional statistical inference (when the size of the
matrices tends to infinity), also known as free deconvolution. This
contribution focuses on the finite Gaussian case and proposes algorithmic
methods to compute the moments. Cases where asymptotic results fail to apply
are also discussed.
|
0911.5524
|
LS-CS-residual (LS-CS): Compressive Sensing on Least Squares Residual
|
cs.IT math.IT
|
We consider the problem of recursively and causally reconstructing time
sequences of sparse signals (with unknown and time-varying sparsity patterns)
from a limited number of noisy linear measurements. The sparsity pattern is
assumed to change slowly with time. The idea of our proposed solution,
LS-CS-residual (LS-CS), is to replace compressed sensing (CS) on the
observation by CS on the least squares (LS) residual computed using the
previous estimate of the support. We bound CS-residual error and show that when
the number of available measurements is small, the bound is much smaller than
that on CS error if the sparsity pattern changes slowly enough. We also obtain
conditions for "stability" of LS-CS over time for a signal model that allows
support additions and removals, and that allows coefficients to gradually
increase (decrease) until they reach a constant value (become zero). By
"stability", we mean that the number of misses and extras in the support
estimate remain bounded by time-invariant values (in turn implying a
time-invariant bound on LS-CS error). The concept is meaningful only if the
bounds are small compared to the support size. Numerical experiments backing
our claims are shown.
|
0911.5527
|
A model for randomized resource allocation in decentralized wireless
networks
|
cs.IT math.IT
|
In this paper, we consider a decentralized wireless communication network
with a fixed number $u$ of frequency sub-bands to be shared among $N$
transmitter-receiver pairs. It is assumed that the number of active users is a
random variable with a given probability mass function. Moreover, users are
unaware of each other's codebooks and hence, no multiuser detection is
possible. We propose a randomized Frequency Hopping (FH) scheme in which each
transmitter randomly hops over a subset of $u$ sub-bands from transmission to
transmission. We derive lower and upper bounds on the mutual information of
each user and demonstrate that, for large Signal-to-Noise Ratio (SNR) values,
the two bounds coincide. This observation enables us to compute the sum
multiplexing gain of the system and obtain the optimum hopping strategy for
maximizing this quantity. We compare the performance of the FH system with that
of the Frequency Division (FD) system in terms of several performance measures
and show that (depending on the probability mass function of the number of
active users) the FH system can offer a significant improvement implying a more
efficient usage of the spectrum.
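As a toy model of the hopping scheme (each active user choosing a single one of the $u$ sub-bands uniformly and independently per transmission; the paper's scheme, hopping over subsets, is more general), the collision probability seen by a given user is $1-(1-1/u)^{N-1}$, easily checked by simulation:

```python
import random

def collision_probability(num_users, num_bands, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that user 0's randomly
    chosen sub-band is also chosen by at least one other active user."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        bands = [rng.randrange(num_bands) for _ in range(num_users)]
        if bands[0] in bands[1:]:
            hits += 1
    return hits / trials
```

Averaging such collision terms over the distribution of the number of active users is what drives the comparison between the FH and FD systems.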
|
0911.5548
|
A Decision-Optimization Approach to Quantum Mechanics and Game Theory
|
cs.GT cs.AI
|
The fundamental laws of the quantum world upset the logical foundations of
classical physics. They are completely counter-intuitive, with many bizarre
behaviors. However, this paper shows that they may make sense from the
perspective of a general decision-optimization principle for cooperation. This
principle also offers a generalization of Nash equilibrium, a key concept in
game theory, for better payoffs and stability of game playing.
|
0911.5553
|
Randomized vs. orthogonal spectrum allocation in decentralized networks:
Outage Analysis
|
cs.IT math.IT
|
We address a decentralized wireless communication network with a fixed number
$u$ of frequency sub-bands to be shared among $N$ transmitter-receiver pairs.
It is assumed that the number of users $N$ is a random variable with a given
distribution and the channel gains are quasi-static Rayleigh fading. The
transmitters are assumed to be unaware of the number of active users in the
network as well as the channel gains and not capable of detecting the presence
of other users in a given frequency sub-band. Moreover, the users are unaware
of each other's codebooks and hence, no multiuser detection is possible. We
consider a randomized Frequency Hopping (FH) scheme in which each transmitter
randomly hops over a subset of the $u$ sub-bands from transmission to
transmission. Developing a new upper bound on the differential entropy of a
mixed Gaussian random vector and using entropy power inequality, we offer a
series of lower bounds on the achievable rate of each user. Thereafter, we
obtain lower bounds on the maximum transmission rate per user to ensure a
specified outage probability at a given Signal-to-Noise Ratio (SNR) level. We
demonstrate that the so-called outage capacity can be considerably higher in
the FH scheme than in the Frequency Division (FD) scenario for reasonable
distributions on the number of active users. This guarantees a higher spectral
efficiency in FH compared to FD.
|
0911.5568
|
Acquisition d'informations lexicales \`a partir de corpus
|
cs.CL cs.AI
|
This paper is about automatic acquisition of lexical information from
corpora, especially subcategorization acquisition.
|
0911.5667
|
End-to-End Algebraic Network Coding for Wireless TCP/IP Networks
|
cs.IT cs.NI math.IT
|
The Transmission Control Protocol (TCP) was designed to provide reliable
transport services in wired networks. In such networks, packet losses mainly
occur due to congestion. Hence, TCP was designed to apply congestion avoidance
techniques to cope with packet losses. Nowadays, TCP is also utilized in
wireless networks where, besides congestion, numerous other reasons for packet
losses exist. This results in reduced throughput and increased transmission
round-trip time when the state of the wireless channel is bad. We propose a new
network layer that transparently sits below the transport layer and hides
non-congestion-imposed packet losses from TCP. The network coding in this new layer
is based on the well-known class of Maximum Distance Separable (MDS) codes.
|
0911.5703
|
Hierarchies in Dictionary Definition Space
|
cs.CL cs.LG
|
A dictionary defines words in terms of other words. Definitions can tell you
the meanings of words you don't know, but only if you know the meanings of the
defining words. How many words do you need to know (and which ones) in order to
be able to learn all the rest from definitions? We reduced dictionaries to
their "grounding kernels" (GKs), about 10% of the dictionary, from which all
the other words could be defined. The GK words turned out to have
psycholinguistic correlates: they were learned at an earlier age and were more
concrete than the rest of the dictionary. But one can compress still more: the
GK turns out to have internal structure, with a strongly connected "kernel
core" (KC) and a surrounding layer, from which a hierarchy of definitional
distances can be derived, all the way out to the periphery of the full
dictionary. These definitional distances, too, are correlated with
psycholinguistic variables (age of acquisition, concreteness, imageability,
oral and written frequency) and hence perhaps with the "mental lexicon" in each
of our heads.
|
0911.5708
|
Learning in a Large Function Space: Privacy-Preserving Mechanisms for
SVM Learning
|
cs.LG cs.CR cs.DB
|
Several recent studies in privacy-preserving learning have considered the
trade-off between utility or risk and the level of differential privacy
guaranteed by mechanisms for statistical query processing. In this paper we
study this trade-off in private Support Vector Machine (SVM) learning. We
present two efficient mechanisms, one for the case of finite-dimensional
feature mappings and one for potentially infinite-dimensional feature mappings
with translation-invariant kernels. For the case of translation-invariant
kernels, the proposed mechanism minimizes regularized empirical risk in a
random Reproducing Kernel Hilbert Space whose kernel uniformly approximates the
desired kernel with high probability. This technique, borrowed from large-scale
learning, allows the mechanism to respond with a finite encoding of the
classifier, even when the function class is of infinite VC dimension.
Differential privacy is established using a proof technique from algorithmic
stability. Utility--the mechanism's response function is pointwise
epsilon-close to non-private SVM with probability 1-delta--is proven by
appealing to the smoothness of regularized empirical risk minimization with
respect to small perturbations to the feature mapping. We conclude with a lower
bound on the optimal differential privacy of the SVM. This negative result
states that for any delta, no mechanism can be simultaneously
(epsilon,delta)-useful and beta-differentially private for small epsilon and
small beta.
|
0912.0034
|
Proceedings Third Workshop on Membrane Computing and Biologically
Inspired Process Calculi 2009
|
cs.CE cs.DC cs.FL cs.LO
|
This volume contains the accepted papers at the third Workshop on Membrane
Computing and Biologically Inspired Process Calculi, held in Bologna on 5th
September 2009. The papers are devoted to both membrane computing and
biologically inspired process calculi, as well as to other related formalisms.
The papers in this volume were selected by the programme committee for their
quality and relevance; they define an exciting programme highlighting
interesting problems and stimulating the search for novel ways of describing
related biological phenomena. In addition, we had an invited talk given by Luca
Cardelli on a spatial process algebra for developmental biology. Membrane
systems were introduced as a class of distributed parallel computing devices
inspired by the observation that any biological system is a complex
hierarchical structure, with a flow of materials and information that underlies
its functioning. The emphasis is on the computational properties of the
model, and it makes use of automata, languages, and complexity theoretic tools.
On the other hand, certain calculi such as mobile ambients and brane calculi
work with similar notions (compartments, membranes). These calculi are used to
model and analyze the various biological systems. The workshop on Membrane
Computing and Biologically Inspired Process Calculi brings together researchers
working in these fields to present their recent work and discuss new ideas
concerning the formalisms, their properties and relationships.
|
0912.0071
|
Differentially Private Empirical Risk Minimization
|
cs.LG cs.AI cs.CR cs.DB
|
Privacy-preserving machine learning algorithms are crucial for the
increasingly common setting in which personal data, such as medical or
financial records, are analyzed. We provide general techniques to produce
privacy-preserving approximations of classifiers learned via (regularized)
empirical risk minimization (ERM). These algorithms are private under the
$\epsilon$-differential privacy definition due to Dwork et al. (2006). First we
apply the output perturbation ideas of Dwork et al. (2006), to ERM
classification. Then we propose a new method, objective perturbation, for
privacy-preserving machine learning algorithm design. This method entails
perturbing the objective function before optimizing over classifiers. If the
loss and regularizer satisfy certain convexity and differentiability criteria,
we prove theoretical results showing that our algorithms preserve privacy, and
provide generalization bounds for linear and nonlinear kernels. We further
present a privacy-preserving technique for tuning the parameters in general
machine learning algorithms, thereby providing end-to-end privacy guarantees
for the training process. We apply these results to produce privacy-preserving
analogues of regularized logistic regression and support vector machines. We
obtain encouraging results from evaluating their performance on real
demographic and benchmark data sets. Our results show that both theoretically
and empirically, objective perturbation is superior to the previous
state-of-the-art, output perturbation, in managing the inherent tradeoff
between privacy and learning performance.
|
0912.0086
|
Learning Mixtures of Gaussians using the k-means Algorithm
|
cs.LG
|
One of the most popular algorithms for clustering in Euclidean space is the
$k$-means algorithm; $k$-means is difficult to analyze mathematically, and few
theoretical guarantees are known about it, particularly when the data is {\em
well-clustered}. In this paper, we attempt to fill this gap in the literature
by analyzing the behavior of $k$-means on well-clustered data. In particular,
we study the case when each cluster is distributed as a different Gaussian --
or, in other words, when the input comes from a mixture of Gaussians.
We analyze three aspects of the $k$-means algorithm under this assumption.
First, we show that when the input comes from a mixture of two spherical
Gaussians, a variant of the 2-means algorithm successfully isolates the
subspace containing the means of the mixture components. Second, we show an
exact expression for the convergence of our variant of the 2-means algorithm,
when the input is a very large number of samples from a mixture of spherical
Gaussians. Our analysis does not require any lower bound on the separation
between the mixture components.
Finally, we study the sample requirement of $k$-means; for a mixture of 2
spherical Gaussians, we show an upper bound on the number of samples required
by a variant of 2-means to get close to the true solution. The sample
requirement grows with increasing dimensionality of the data, and decreasing
separation between the means of the Gaussians. To match our upper bound, we
show an information-theoretic lower bound on any algorithm that learns mixtures
of two spherical Gaussians; our lower bound indicates that in the case when the
overlap between the probability masses of the two distributions is small, the
sample requirement of $k$-means is {\em near-optimal}.
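A minimal 2-means run on a mixture of two well-separated spherical Gaussians (plain Lloyd iterations on 1-D data with a deterministic extreme-point initialization; a generic sketch, not the paper's variant):

```python
import random

def two_means(points, iters=25):
    """Plain Lloyd's algorithm with k=2 on 1-D points, initialized at the
    min and max of the data so each well-separated cluster gets a center."""
    centers = [min(points), max(points)]
    for _ in range(iters):
        clusters = ([], [])
        for x in points:
            idx = 0 if abs(x - centers[0]) <= abs(x - centers[1]) else 1
            clusters[idx].append(x)
        centers = [sum(c) / len(c) if c else centers[k]
                   for k, c in enumerate(clusters)]
    return sorted(centers)

# mixture of two spherical (here 1-D) Gaussians with means -5 and +5
rng = random.Random(1)
data = ([rng.gauss(-5, 1) for _ in range(300)]
        + [rng.gauss(5, 1) for _ in range(300)])
centers = two_means(data)
```

With this much separation the fitted centers land near the true component means; the paper's analysis concerns the far harder regimes of small or no separation, high dimension, and sample complexity.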
|
0912.0132
|
Opportunistic Adaptation Knowledge Discovery
|
cs.AI
|
Adaptation has long been considered as the Achilles' heel of case-based
reasoning since it requires some domain-specific knowledge that is difficult to
acquire. In this paper, two strategies are combined in order to reduce the
knowledge engineering cost induced by the adaptation knowledge (CA) acquisition
task: CA is learned from the case base by means of knowledge discovery
techniques, and the CA acquisition sessions are opportunistically triggered,
i.e., at problem-solving time.
|
0912.0224
|
A Multi-stage Probabilistic Algorithm for Dynamic Path-Planning
|
cs.AI cs.RO
|
Probabilistic sampling methods have become very popular to solve single-shot
path planning problems. Rapidly-exploring Random Trees (RRTs) in particular
have been shown to be efficient in solving high dimensional problems. Even
though several RRT variants have been proposed for dynamic replanning, these
methods only perform well in environments with infrequent changes. This paper
addresses the dynamic path planning problem by combining simple techniques in a
multi-stage probabilistic algorithm. This algorithm uses RRTs for initial
planning and informed local search for navigation. We show that this
combination of simple techniques provides better responses to highly dynamic
environments than the RRT extensions.
|
0912.0229
|
Approximate Sparse Recovery: Optimizing Time and Measurements
|
cs.DS cs.IT math.IT
|
An approximate sparse recovery system consists of parameters $k,N$, an
$m$-by-$N$ measurement matrix, $\Phi$, and a decoding algorithm, $\mathcal{D}$.
Given a vector, $x$, the system approximates $x$ by $\widehat x
=\mathcal{D}(\Phi x)$, which must satisfy $\| \widehat x - x\|_2\le C \|x -
x_k\|_2$, where $x_k$ denotes the optimal $k$-term approximation to $x$. For
each vector $x$, the system must succeed with probability at least 3/4. Among
the goals in designing such systems are minimizing the number $m$ of
measurements and the runtime of the decoding algorithm, $\mathcal{D}$.
In this paper, we give a system with $m=O(k \log(N/k))$
measurements--matching a lower bound, up to a constant factor--and decoding
time $O(k\log^c N)$, matching a lower bound up to $\log(N)$ factors.
We also consider the encode time (i.e., the time to multiply $\Phi$ by $x$),
the time to update measurements (i.e., the time to multiply $\Phi$ by a
1-sparse $x$), and the robustness and stability of the algorithm (adding noise
before and after the measurements). Our encode and update times are optimal up
to $\log(N)$ factors.
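The $\ell_2/\ell_2$ guarantee above can be checked numerically with a generic decoder; the sketch below uses a random Gaussian $\Phi$ and Orthogonal Matching Pursuit as a stand-in for $\mathcal{D}$ (the dimensions and the constant $C=3$ are illustrative, not the paper's system):

```python
import numpy as np

rng = np.random.default_rng(1)
N, k, m = 256, 5, 100     # m on the order of k*log(N/k); constants illustrative

# Signal: k large entries plus a small dense tail.
x = 0.01 * rng.normal(size=N)
support = rng.choice(N, k, replace=False)
x[support] += rng.choice([-1.0, 1.0], k) * rng.uniform(1.0, 2.0, k)

Phi = rng.normal(size=(m, N)) / np.sqrt(m)   # random Gaussian measurements
y = Phi @ x

# Orthogonal Matching Pursuit (2k greedy steps): a generic stand-in for D.
S, r = [], y.copy()
for _ in range(2 * k):
    S.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, S], y, rcond=None)
    r = y - Phi[:, S] @ coef
xhat = np.zeros(N)
xhat[S] = coef

# The l2/l2 criterion: ||xhat - x||_2 <= C * ||x - x_k||_2.
xk = np.zeros(N)
top = np.argsort(np.abs(x))[-k:]
xk[top] = x[top]
print(np.linalg.norm(xhat - x) <= 3.0 * np.linalg.norm(x - xk))
```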
|
0912.0238
|
Spectral Ranking
|
cs.IR cs.SI physics.soc-ph
|
We sketch the history of spectral ranking, a general umbrella name for
techniques that apply the theory of linear maps (in particular, eigenvalues and
eigenvectors) to matrices that do not represent geometric transformations, but
rather some kind of relationship between entities. Albeit recently made famous
by the ample press coverage of Google's PageRank algorithm, spectral ranking
was devised more than a century ago, and has been studied in tournament
ranking, psychology, social sciences, bibliometrics, economics and choice
theory. We describe the contributions of previous scholars in precise and modern
mathematical terms: along the way, we show how to express in a general way
damped rankings, such as Katz's index, as dominant eigenvectors of perturbed
matrices, and then use results on the Drazin inverse to go back to the dominant
eigenvectors by a limit process. The result suggests a regularized definition
of spectral ranking that yields for a general matrix a unique vector depending
on a boundary condition.
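The Katz-style damped ranking mentioned above can be written either as a resolvent or as the damped path-counting series it sums; a minimal sketch on a hypothetical 4-node directed graph:

```python
import numpy as np

# Toy directed graph adjacency matrix (rows -> columns); hypothetical example.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

alpha = 0.15          # damping; must satisfy alpha < 1 / spectral radius of A
I = np.eye(4)

# Katz's index as the resolvent applied to the all-ones vector ...
katz = np.linalg.solve(I - alpha * A.T, np.ones(4))

# ... equals the damped path-counting series  sum_t alpha^t (A^T)^t 1.
series = sum(np.linalg.matrix_power(alpha * A.T, t) @ np.ones(4)
             for t in range(60))
print(np.allclose(katz, series, atol=1e-6))   # True
```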
|
0912.0265
|
Mapping the spatiotemporal dynamics of calcium signaling in cellular
neural networks using optical flow
|
cs.CE cs.CV q-bio.NC
|
An optical flow gradient algorithm was applied to spontaneously forming
networks of neurons and glia in culture imaged by fluorescence optical microscopy
in order to map functional calcium signaling with single pixel resolution.
Optical flow estimates the direction and speed of motion of objects in an image
between subsequent frames in a recorded digital sequence of images (i.e. a
movie). Computed vector field outputs by the algorithm were able to track the
spatiotemporal dynamics of calcium signaling patterns. We begin by briefly
reviewing the mathematics of the optical flow algorithm, and then describe how
to solve for the displacement vectors and how to measure their reliability. We
then compare computed flow vectors with manually estimated vectors for the
progression of a calcium signal recorded from representative astrocyte
cultures. Finally, we applied the algorithm to preparations of primary
astrocytes and hippocampal neurons and to the rMC-1 Muller glial cell line in
order to illustrate the capability of the algorithm for capturing different
types of spatiotemporal calcium activity. We discuss the imaging requirements,
parameter selection and threshold selection for reliable measurements, and
offer perspectives on uses of the vector data.
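A minimal single-window sketch of the brightness-constancy least-squares step behind such optical flow estimates (a toy Gaussian blob shifted by one pixel stands in for the fluorescence data; this is not the authors' full gradient algorithm):

```python
import numpy as np

# Synthetic "signal": a Gaussian blob that moves one pixel in x between frames.
y, x = np.mgrid[0:32, 0:32].astype(float)
frame1 = np.exp(-((x - 16)**2 + (y - 16)**2) / 50.0)
frame2 = np.exp(-((x - 17)**2 + (y - 16)**2) / 50.0)

# Brightness constancy, linearized: Ix*u + Iy*v = -It for displacement (u, v).
Iy, Ix = np.gradient(frame1)
It = frame2 - frame1
A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
(u, v), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
print(round(u, 2), round(v, 2))   # displacement estimate, approximately (1, 0)
```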
|
0912.0266
|
Combining a Probabilistic Sampling Technique and Simple Heuristics to
solve the Dynamic Path Planning Problem
|
cs.AI cs.RO
|
Probabilistic sampling methods have become very popular to solve single-shot
path planning problems. Rapidly-exploring Random Trees (RRTs) in particular
have been shown to be very efficient in solving high dimensional problems. Even
though several RRT variants have been proposed to tackle the dynamic replanning
problem, these methods only perform well in environments with infrequent
changes. This paper addresses the dynamic path planning problem by combining
simple techniques in a multi-stage probabilistic algorithm. This algorithm uses
RRTs as an initial solution, informed local search to fix unfeasible paths and
a simple greedy optimizer. The algorithm is capable of recognizing when the
local search is stuck, and subsequently restart the RRT. We show that this
combination of simple techniques provides better responses to a highly dynamic
environment than the dynamic RRT variants.
|
0912.0270
|
Single-Agent On-line Path Planning in Continuous, Unpredictable and
Highly Dynamic Environments
|
cs.AI cs.RO
|
This document is a thesis on the subject of single-agent on-line path
planning in continuous, unpredictable and highly dynamic environments. The
problem is finding and traversing a collision-free path for a holonomic robot,
without kinodynamic restrictions, moving in an environment with several
unpredictably moving obstacles or adversaries. The availability of perfect
information of the environment at all times is assumed.
Several static and dynamic variants of the Rapidly Exploring Random Trees
(RRT) algorithm are explored, as well as an evolutionary algorithm for planning
in dynamic environments called the Evolutionary Planner/Navigator. A
combination of both kinds of algorithms is proposed to overcome shortcomings in
both, and then a combination of a RRT variant for initial planning and informed
local search for navigation, plus a simple greedy heuristic for optimization.
We show that this combination of simple techniques provides better responses to
highly dynamic environments than the RRT extensions.
|
0912.0312
|
The minimal polynomial of sequence obtained from componentwise linear
transformation of linear recurring sequence
|
cs.IT math.IT
|
Let $S=(s_1,s_2,...,s_m,...)$ be a linear recurring sequence with terms in
$GF(q^n)$ and $T$ be a linear transformation of $GF(q^n)$ over $GF(q)$. Denote
$T(S)=(T(s_1),T(s_2),...,T(s_m),...)$. In this paper, we first present counter
examples to show the main result in [A.M. Youssef and G. Gong, On linear
complexity of sequences over $GF(2^n)$, Theoretical Computer Science,
352(2006), 288-292] is not correct in general since Lemma 3 in that paper is
incorrect. Then, we determine the minimal polynomial of $T(S)$ if the canonical
factorization of the minimal polynomial of $S$ without multiple roots is known
and thus present the solution to the problem which was mainly considered in the
above paper but incorrectly solved. Additionally, as a special case, we
determine the minimal polynomial of $T(S)$ if the minimal polynomial of $S$ is
primitive. Finally, we give an upper bound on the linear complexity of $T(S)$
when $T$ exhausts all possible linear transformations of $GF(q^n)$ over
$GF(q)$. This bound is tight in some cases.
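The linear complexity bounded in the final result can be computed for any finite sequence with the Berlekamp-Massey algorithm; a minimal sketch over $GF(2)$ (the $q=2$, $n=1$ base-field case), fed with a hypothetical m-sequence:

```python
def linear_complexity(s):
    """Berlekamp-Massey over GF(2): length of the shortest LFSR generating s."""
    n = len(s)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                        # discrepancy of the current prediction
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):  # c(x) <- c(x) + x^(i-m) b(x)
                c[i - m + j] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

# m-sequence from s[k] = s[k-2] XOR s[k-3] (characteristic poly x^3 + x + 1).
s = [1, 0, 0]
for k in range(3, 16):
    s.append(s[k - 2] ^ s[k - 3])
print(linear_complexity(s))   # → 3
```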
|
0912.0433
|
On the issues of building Information Warehouses
|
cs.HC cs.IR cs.SE
|
While performing knowledge-intensive tasks of professional nature, the
knowledge workers need to access and process large volume of information. Apart
from the quantity, they also require that the information received is of high
quality in terms of authenticity and details. This, in turn, requires that the
information delivered should also include argumentative support, exhibiting the
reasoning process behind their development and provenance to indicate their
lineage. In conventional document-centric practices for information management,
such details are difficult to capture, represent/archive and retrieve/deliver.
To achieve such capability we need to re-think some core issues of information
management from the above requirements perspective. In this paper we develop a
framework for comprehensive representation of information in archive, capturing
informational contents along with their context. We shall call it the
"Information Warehouse (IW)" framework of information archival. The IW is a
significant yet technologically realizable conceptual advancement which can
support efficiently some interesting classes of applications which can be very
useful to the knowledge workers.
|
0912.0549
|
Modular Workflow Engine for Distributed Services using Lightweight Java
Clients
|
cs.SE cs.CE
|
In this article we introduce the concept and the first implementation of a
lightweight client-server-framework as middleware for distributed computing. On
the client side an installation without administrative rights or privileged
ports can turn any computer into a worker node. Only a Java runtime environment
and the JAR files comprising the workflow client are needed. To connect all
clients to the engine one open server port is sufficient. The engine submits
data to the clients and orchestrates their work by workflow descriptions from a
central database. Clients request new task descriptions periodically, thus the
system is robust against network failures. In the basic set-up, data up- and
downloads are handled via HTTP communication with the server. The performance
of the modular system could additionally be improved using dedicated file
servers or distributed network file systems.
We demonstrate the design features of the proposed engine in real-world
applications from mechanical engineering. We have used this system on a compute
cluster in design-of-experiment studies, parameter optimisations and robustness
validations of finite element structures.
|
0912.0572
|
Isometric Multi-Manifolds Learning
|
cs.LG cs.CV
|
Isometric feature mapping (Isomap) is a promising manifold learning method.
However, Isomap fails on data distributed in clusters within a single manifold
or across several manifolds. Much work has been done on extending Isomap to
multi-manifold learning. In this paper, we first propose a new multi-manifold
learning algorithm (M-Isomap) with the help of a general procedure. The new
algorithm preserves intra-manifold geodesics and multiple inter-manifold edges
precisely. Compared with previous methods, this algorithm can isometrically
learn data distributed on several manifolds. Second, the original
multi-cluster manifold learning algorithm, first proposed in \cite{DCIsomap}
and called D-C Isomap, is revised so that it can learn multi-manifold data.
Finally, the features and effectiveness of the proposed multi-manifold
learning algorithms are demonstrated and compared through experiments.
|
0912.0579
|
A Multidatabase System as 4-Tiered Client-Server Distributed
Heterogeneous Database System
|
cs.DB
|
In this paper, we describe a multidatabase system as a 4-tiered Client-Server
DBMS architecture. We discuss its functional components and provide an
overview of its performance characteristics. The first component of the
proposed system is a web-based interface or Graphical User Interface, which
resides on top of the Client Application Program. The second component is a
client application program running in an application server, which resides on
top of the Global Database Management System. The third component is the
Global Database Management System together with the global schema of the
multidatabase system server, which resides on top of the distributed
heterogeneous local component database system servers. The fourth component
consists of the remote heterogeneous local component database system servers.
A transaction submitted from the client interface to the multidatabase system
server through the application server is decomposed into a set of subqueries
executed at the various remote heterogeneous local component database servers;
for information retrieval, the subquery results are composed and the results
are returned to the end users.
|
0912.0581
|
Log-concavity, ultra-log-concavity, and a maximum entropy property of
discrete compound Poisson measures
|
math.CO cs.IT math.IT math.PR
|
Sufficient conditions are developed, under which the compound Poisson
distribution has maximal entropy within a natural class of probability measures
on the nonnegative integers. Recently, one of the authors [O. Johnson, {\em
Stoch. Proc. Appl.}, 2007] used a semigroup approach to show that the Poisson
has maximal entropy among all ultra-log-concave distributions with fixed mean.
We show via a non-trivial extension of this semigroup approach that the natural
analog of the Poisson maximum entropy property remains valid if the compound
Poisson distributions under consideration are log-concave, but that it fails in
general. A parallel maximum entropy result is established for the family of
compound binomial measures. Sufficient conditions for compound distributions to
be log-concave are discussed and applications to combinatorics are examined;
new bounds are derived on the entropy of the cardinality of a random
independent set in a claw-free graph, and a connection is drawn to Mason's
conjecture for matroids. The present results are primarily motivated by the
desire to provide an information-theoretic foundation for compound Poisson
approximation and associated limit theorems, analogous to the corresponding
developments for the central limit theorem and for Poisson approximation. Our
results also demonstrate new links between some probabilistic methods and the
combinatorial notions of log-concavity and ultra-log-concavity, and they add to
the growing body of work exploring the applications of maximum entropy
characterizations to problems in discrete mathematics.
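For concreteness, the compound Poisson pmf can be generated with the standard Panjer recursion $p(n)=\frac{\lambda}{n}\sum_j j\,q_j\,p(n-j)$, after which log-concavity and entropy are direct to check numerically (the parameters below are an illustrative example, not taken from the paper):

```python
import numpy as np

def compound_poisson_pmf(lam, q, N):
    """Panjer recursion for the compound Poisson pmf on {0, ..., N}.

    q[j] is the probability that a single summand equals j (j = 1, 2, ...).
    """
    p = np.zeros(N + 1)
    p[0] = np.exp(-lam)
    for n in range(1, N + 1):
        p[n] = (lam / n) * sum(j * qj * p[n - j]
                               for j, qj in q.items() if j <= n)
    return p

# Hypothetical example: lambda = 2, summand distribution on {1, 2}.
p = compound_poisson_pmf(2.0, {1: 0.7, 2: 0.3}, 60)
entropy = -sum(pi * np.log(pi) for pi in p if pi > 0)

# Log-concavity check: p[n]^2 >= p[n-1] * p[n+1] on the bulk of the support.
mask = p[1:-1] > 1e-12
logconcave = bool(np.all(p[1:-1][mask]**2 >= p[:-2][mask] * p[2:][mask]))
print(round(entropy, 3), logconcave)
```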
|
0912.0597
|
Constructing Optimal Authentication Codes with Perfect Multi-fold
Secrecy
|
cs.CR cs.IT math.IT
|
We establish a construction of optimal authentication codes achieving perfect
multi-fold secrecy by means of combinatorial designs. This continues the
author's work (ISIT 2009) and answers an open question posed therein. As an
application, we present the first infinite class of optimal codes that provide
two-fold security against spoofing attacks and at the same time perfect
two-fold secrecy.
|
0912.0599
|
Conceptual Model for Communication
|
cs.NI cs.IT math.IT
|
A variety of idealized models of communication systems exist, and all may
have something in common. Starting with Shannon's communication model and
ending with the OSI model, this paper presents progressively more advanced
forms of modeling of communication systems by tying communication models
together based on the notion of flow. The basic communication process is
divided into different spheres (sources, channels, and destinations), each
with its own five interior stages: receiving, processing, creating, releasing,
and transferring of information. The flow of information is ontologically
distinguished from the flow of physical signals; accordingly, Shannon's model,
network-based OSI models, and TCP/IP are redesigned.
|
0912.0600
|
Sequential Clustering based Facial Feature Extraction Method for
Automatic Creation of Facial Models from Orthogonal Views
|
cs.CV
|
Multiview 3D face modeling has attracted increasing attention recently and
has become one of the potential avenues in future video systems. We aim to make
more reliable and robust automatic feature extraction and natural 3D feature
construction from 2D features detected on a pair of frontal and profile view
face images. We propose several heuristic algorithms to minimize possible
errors introduced by prevalent nonperfect orthogonal condition and noncoherent
luminance. In our approach, we first extract the 2D features that are visible
to both cameras in both views. Then, we estimate the coordinates of the
features in the hidden profile view based on the visible features extracted in
the two orthogonal views. Finally, based on the coordinates of the extracted
features, we deform a 3D generic model to perform the desired 3D clone
modeling. The present study demonstrates the suitability of the resulting
facial models for practical applications like face recognition and facial
animation.
|
0912.0603
|
Object Oriented Approach for Integration of Heterogeneous Databases in a
Multidatabase System and Local Schemas Modifications Propagation
|
cs.DB
|
One of the challenging problems in the multidatabase systems is to find the
most viable solution to the problem of interoperability of distributed
heterogeneous autonomous local component databases. This has resulted in the
creation of a global schema over set of these local component database schemas
to provide a uniform representation of local schemas. The aim of this paper is
to use object oriented approach to integrate schemas of distributed
heterogeneous autonomous local component database schemas into a global schema.
The resulting global schema provides a uniform interface and high level of
location transparency for retrieval of data from the local component databases.
A set of integration operators are defined to integrate local schemas based on
the semantic relevance of their classes and to provide a model independent
representation of virtual classes of the global schema. The schematic
representation and heterogeneity is also taken into account in the integration
process. Justifications for the object-oriented model are also discussed.
Bottom-up propagation of local schema modifications into the global schema is
also considered, to maintain the global schema as the autonomous local schemas
evolve over time. An example illustrates the applicability of the integration
operators defined.
|
0912.0607
|
Reversible Image Authentication with Tamper Localization Based on
Integer Wavelet Transform
|
cs.CR cs.CV
|
In this paper, a new reversible image authentication technique with tamper
localization based on watermarking in integer wavelet transform is proposed. If
the image authenticity is verified, then the distortion due to embedding the
watermark can be completely removed from the watermarked image. If the image is
tampered, then the tampering positions can also be localized. Two layers of
watermarking are used. The first layer embedded in spatial domain verifies
authenticity and the second layer embedded in transform domain provides
reversibility. This technique utilizes selective LSB embedding and histogram
characteristics of the difference images of the wavelet coefficients and
modifies pixel values slightly to embed the watermark. Experimental results
demonstrate that the proposed scheme can detect any modifications of the
watermarked image.
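The proposed scheme operates on integer wavelet coefficients; the sketch below illustrates the reversibility idea alone, using plain histogram shifting in the spatial domain (a simplified stand-in, not the paper's two-layer wavelet method):

```python
import numpy as np

def embed(img, bits):
    vals, counts = np.unique(img, return_counts=True)
    p = int(vals[np.argmax(counts)])   # peak histogram bin carries the payload
    out = img.copy()
    out[out > p] += 1                  # shift to free the bin p + 1
    it = iter(bits)
    flat = out.ravel()
    for i in range(flat.size):
        if flat[i] == p:
            try:
                b = next(it)
            except StopIteration:
                break
            if b:
                flat[i] = p + 1
    return out, p

def extract(out, p, nbits):
    flat = out.ravel()
    bits = [1 if v == p + 1 else 0 for v in flat if v in (p, p + 1)][:nbits]
    rec = out.copy()
    rec[rec > p] -= 1                  # undo the shift: exact reversibility
    return bits, rec

rng = np.random.default_rng(2)
img = rng.integers(50, 60, size=(16, 16)).astype(np.int64)  # toy "image"
payload = [1, 0, 1, 1, 0, 0, 1, 0]
wm, p = embed(img, payload)
bits, restored = extract(wm, p, len(payload))
print(bits == payload, np.array_equal(restored, img))
```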
|
0912.0717
|
Behavior and performance of the deep belief networks on image
classification
|
cs.NE cs.CV
|
We apply deep belief networks of restricted Boltzmann machines to bags of
words of SIFT features obtained from databases of 13 Scenes, 15 Scenes and
Caltech 256 and study experimentally their behavior and performance. We find
that the final performance in the supervised phase is reached much faster if
the system is pre-trained. Pre-training the system on a larger dataset keeping
the supervised dataset fixed improves the performance (for the 13 Scenes case).
After the unsupervised pre-training, neurons arise that form approximate
explicit representations for several categories (meaning they are mostly active
for this category). The last three facts suggest that unsupervised training
really discovers structure in these data. Pre-training can be done on a
completely different dataset (we use the Corel dataset) and we find that the
supervised phase performs just as well (on the 15 Scenes dataset). This leads
us to conjecture that one can pre-train the system once (e.g. in a factory) and
subsequently apply it to many supervised problems which then learn much faster.
The best performance is obtained with a single hidden layer system, suggesting
that the histogram of SIFT features does not have much high-level structure.
The overall performance is almost equal to, but slightly worse than, that of
the support vector machine and spatial pyramid matching.
|
0912.0756
|
Beamforming in MISO Systems: Empirical Results and EVM-based Analysis
|
cs.IT math.IT
|
We present an analytical, simulation, and experimental-based study of
beamforming Multiple Input Single Output (MISO) systems. We analyze the
performance of beamforming MISO systems taking into account implementation
complexity and effects of imperfect channel estimate, delayed feedback, real
Radio Frequency (RF) hardware, and imperfect timing synchronization. Our
results show that efficient implementation of codebook-based beamforming MISO
systems with good performance is feasible in the presence of channel and
implementation-induced imperfections. As part of our study we develop a
framework for Average Error Vector Magnitude Squared (AEVMS)-based analysis of
beamforming MISO systems which facilitates comparison of analytical,
simulation, and experimental results on the same scale. In addition, AEVMS
allows fair comparison of experimental results obtained from different wireless
testbeds. We derive novel expressions for the AEVMS of beamforming MISO systems
and show how the AEVMS relates to important system characteristics like the
diversity gain, coding gain, and error floor.
|
0912.0758
|
Comparison of Performance Metrics for QPSK and OQPSK Transmission Using
Root Raised Cosine and Raised Cosine Pulse shaping Filters for Applications
in Mobile Communication
|
cs.IT math.IT
|
Quadrature Phase Shift Keying (QPSK) and Offset Quadrature Phase Shift Keying
(OQPSK) are two well accepted modulation techniques used in Code Division
Multiple Access (CDMA) system. The Pulse Shaping Filters play an important role
in digital transmission. The type of Pulse Shaping Filter used, and its
behavior would influence the performance of the communication system. This in
turn, would have an effect on the performance of the mobile communication
system in which the digital communication technique has been employed. In this
paper we present a comparative study of some performance parameters, or
performance metrics, of a digital communication system, such as Error Vector
Magnitude (EVM), Magnitude Error, Phase Error and Bandwidth Efficiency, for a
QPSK transmission system. Root Raised Cosine (RRC) and Raised Cosine (RC) pulse
shaping filters have been used for comparison. The measurement results serve as
a guideline for the system designer in selecting the proper pulse shaping
filter with the appropriate value of the filter roll-off factor (α) in a QPSK
modulated mobile communication system for optimal values of its different
performance metrics.
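The EVM, magnitude error and phase error metrics compared in the paper can be computed from reference and received constellations as follows (the additive impairment model below is purely illustrative, standing in for filter/channel effects):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000

# Ideal unit-energy QPSK reference constellation points.
bits = rng.integers(0, 2, (n, 2))
ref = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# Received symbols: reference plus a generic complex Gaussian impairment.
rx = ref + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

# RMS Error Vector Magnitude, magnitude error, and phase error.
evm_rms = np.sqrt(np.mean(np.abs(rx - ref)**2) / np.mean(np.abs(ref)**2))
mag_err = np.mean(np.abs(np.abs(rx) - np.abs(ref)))
phase_err = np.degrees(np.mean(np.abs(np.angle(rx * np.conj(ref)))))
print(f"EVM = {100 * evm_rms:.1f}%, "
      f"mag err = {mag_err:.3f}, phase err = {phase_err:.2f} deg")
```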
|
0912.0765
|
On the Energy Efficiency of LT Codes in Proactive Wireless Sensor
Networks
|
cs.IT math.IT
|
This paper presents the first in-depth analysis on the energy efficiency of
LT codes with Non Coherent M-ary Frequency Shift Keying (NC-MFSK), known as
green modulation [1], in a proactive Wireless Sensor Network (WSN) over
Rayleigh flat-fading channels with path-loss. We describe the proactive system
model according to a pre-determined time-based process utilized in practical
sensor nodes. The present analysis is based on realistic parameters including
the effect of channel bandwidth used in the IEEE 802.15.4 standard, and the
active mode duration. A comprehensive analysis, supported by some simulation
studies on the probability mass function of the LT code rate and coding gain,
shows that among uncoded NC-MFSK and various classical channel coding schemes,
the optimized LT coded NC-MFSK is the most energy-efficient scheme for distance
$d$ greater than the pre-determined threshold level $d_T$, where the
optimization is performed over coding and modulation parameters. In addition,
although uncoded NC-MFSK outperforms coded schemes for $d < d_T$, the energy
gap between LT coded and uncoded NC-MFSK is negligible for $d < d_T$ compared
to the other coded schemes. These results come from the flexibility of the LT
code to adjust its rate to suit instantaneous channel conditions, and suggest
that LT codes are beneficial in practical low-power WSNs with dynamic position
sensor nodes.
|
0912.0779
|
Training a Large Scale Classifier with the Quantum Adiabatic Algorithm
|
quant-ph cs.LG
|
In a previous publication we proposed discrete global optimization as a
method to train a strong binary classifier constructed as a thresholded sum
over weak classifiers. Our motivation was to cast the training of a classifier
into a format amenable to solution by the quantum adiabatic algorithm. Applying
adiabatic quantum computing (AQC) promises to yield solutions that are superior
to those which can be achieved with classical heuristic solvers. Interestingly
we found that by using heuristic solvers to obtain approximate solutions we
could already gain an advantage over the standard method AdaBoost. In this
communication we generalize the baseline method to large scale classifier
training. By large scale we mean that either the cardinality of the dictionary
of candidate weak classifiers or the number of weak learners used in the strong
classifier exceed the number of variables that can be handled effectively in a
single global optimization. For such situations we propose an iterative and
piecewise approach in which a subset of weak classifiers is selected in each
iteration via global optimization. The strong classifier is then constructed by
concatenating the subsets of weak classifiers. We show in numerical studies
that the generalized method again successfully competes with AdaBoost. We also
provide theoretical arguments as to why the proposed optimization method, which
does not only minimize the empirical loss but also adds L0-norm regularization,
is superior to versions of boosting that only minimize the empirical loss. By
conducting a Quantum Monte Carlo simulation we gather evidence that the quantum
adiabatic algorithm is able to handle a generic training problem efficiently.
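The baseline formulation can be sketched classically: choose a weight vector $w\in\{0,1\}^Q$ minimizing empirical loss plus an L0 penalty, here by exhaustive search over a small hypothetical decision-stump dictionary (a classical stand-in for the heuristic and adiabatic solvers discussed; all data and parameters are illustrative):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(4)

# Toy data and a small dictionary of decision-stump weak classifiers.
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.normal(size=200))
stumps = [(j, t) for j in range(5) for t in (-0.5, 0.0, 0.5)]
H = np.array([np.sign(X[:, j] - t + 1e-12) for j, t in stumps])  # Q x n

lam = 0.01                       # L0 penalty weight (illustrative)
best, best_cost = None, np.inf
for w in product([0, 1], repeat=len(stumps)):  # exhaustive "global optimization"
    w = np.array(w)
    if w.sum() == 0:
        continue
    pred = np.sign(w @ H)        # strong classifier: thresholded sum of stumps
    cost = np.mean(pred != y) + lam * w.sum()  # empirical loss + L0 regularizer
    if cost < best_cost:
        best, best_cost = w, cost
print(int(best.sum()), round(best_cost, 3))
```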
|
0912.0797
|
On Syndrome Decoding for Slepian-Wolf Coding Based on Convolutional and
Turbo Codes
|
cs.IT math.IT
|
In source coding, either with or without side information at the decoder, the
ultimate performance can be achieved by means of random binning. Structured
binning into cosets of performing channel codes has been successfully employed
in practical applications. In this letter it is formally shown that various
convolutional- and turbo-syndrome decoding algorithms proposed in literature
lead in fact to the same estimate. An equivalent implementation is also
delineated by directly tackling syndrome decoding as a maximum a posteriori
probability problem and solving it by means of iterative message-passing. This
solution takes advantage of the exact same structures and algorithms used by
the conventional channel decoder for the code according to which the syndrome
is formed.
|
0912.0821
|
Lexical evolution rates by automated stability measure
|
cs.CL physics.soc-ph
|
Phylogenetic trees can be reconstructed from the matrix which contains the
distances between all pairs of languages in a family. Recently, we proposed a
new method which uses normalized Levenshtein distances among words with same
meaning and averages on all the items of a given list. Decisions about the
number of items in the input lists for language comparison have been debated
since the beginning of glottochronology. The point is that words associated to
some of the meanings have a rapid lexical evolution. Therefore, a large
vocabulary comparison is only apparently more accurate than a smaller one since
many of the words do not carry any useful information. In principle, one should
find the optimal length of the input lists studying the stability of the
different items. In this paper we tackle the problem with an automated
methodology only based on our normalized Levenshtein distance. With this
approach, the program of automated reconstruction of language relationships
is completed.
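The normalized Levenshtein distance and its average over a word list can be sketched in a few lines (the three-item lists are hypothetical toy input, far shorter than real Swadesh-style lists):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def normalized_levenshtein(a, b):
    # Normalize by the length of the longer word.
    return levenshtein(a, b) / max(len(a), len(b))

def language_distance(list1, list2):
    # Average over aligned word lists with the same meanings.
    return sum(normalized_levenshtein(a, b)
               for a, b in zip(list1, list2)) / len(list1)

# Hypothetical three-item lists (English vs. German cognates).
en = ["hand", "water", "night"]
de = ["hand", "wasser", "nacht"]
print(round(language_distance(en, de), 3))   # → 0.244
```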
|
0912.0840
|
Applying an XML Warehouse to Social Network Analysis, Lessons from the
WebStand Project
|
cs.DB cs.CY
|
In this paper we present the state of advancement of the French ANR WebStand
project. The objective of this project is to construct a customizable XML based
warehouse platform to acquire, transform, analyze, store, query and export data
from the web, in particular mailing lists, with the final intension of using
this data to perform sociological studies focused on social groups of World
Wide Web, with a specific emphasis on the temporal aspects of this data. We are
currently using this system to analyze the standardization process of the W3C,
through its social network of standard setters.
|
0912.0868
|
Interference Alignment in Dense Wireless Networks
|
cs.IT math.IT
|
We consider arbitrary dense wireless networks, in which $n$ nodes are placed
in an arbitrary (deterministic) manner on a square region of unit area and
communicate with each other over Gaussian fading channels. We provide inner and
outer bounds for the $n\times n$-dimensional unicast and the $n\times
2^n$-dimensional multicast capacity regions of such a wireless network. These
inner and outer bounds differ only by a factor $O(\log(n))$, yielding a fairly
tight scaling characterization of the entire regions. The communication schemes
achieving the inner bounds use interference alignment as a central technique
and are, at least conceptually, surprisingly simple.
|
0912.0884
|
Measures of lexical distance between languages
|
cs.CL physics.soc-ph
|
The idea of measuring distance between languages seems to have its roots in
the work of the French explorer Dumont D'Urville \cite{Urv}. He collected
comparative word lists of various languages during his voyages aboard the
Astrolabe from 1826 to 1829 and, in his work about the geographical division of
the Pacific, he proposed a method to measure the degree of relation among
languages. The method used by modern glottochronology, developed by Morris
Swadesh in the 1950s, measures distances from the percentage of shared
cognates, which are words with a common historical origin. Recently, we
proposed a new automated method which uses normalized Levenshtein distance
among words with the same meaning and averages on the words contained in a
list. Recently another group of scholars \cite{Bak, Hol} proposed a refinement
of our definition, including a second normalization. In this paper we compare the
information content of our definition with the refined version in order to
decide which of the two can be applied with greater success to resolve
relationships among languages.
|
0912.0893
|
Performance Analysis on Molecular Dynamics Simulation of Protein Using
GROMACS
|
cs.CE q-bio.BM
|
The development of computer technology has brought many applications to
chemistry, not only applications for visualizing molecular structures but also
molecular dynamics simulation. One of them is Gromacs, a molecular dynamics
application developed at Groningen University. This application is
non-commercial and runs on the Linux operating system. The main abilities of
Gromacs are molecular dynamics simulation and energy minimization. In this
paper, the author discusses how Gromacs performs molecular dynamics simulation
of several proteins. In these simulations, Gromacs does not work alone: it
interacts with Pymol and Grace. Pymol is an application for visualizing
molecular structures, and Grace is a Linux application for displaying graphs.
Both applications support the analysis of the molecular dynamics simulations.
|
0912.0913
|
Search for overlapped communities by parallel genetic algorithms
|
cs.IR cs.GL physics.soc-ph
|
In the last decade, the broad scope of complex networks has led to rapid
progress. Within this area, the study of community structures is of particular
interest. The analysis of this type of structure requires a formalization of
the intuitive concept of community and the definition of goodness indices for
the obtained results. Many algorithms have been presented to reach this goal.
A particularly interesting problem is the search for overlapping communities,
for which a solution based on genetic algorithms seems very promising. The
approach discussed in this paper is based on a parallel implementation of a
genetic algorithm and shows the performance benefits of this solution.
|
0912.0936
|
Neural-estimator for the surface emission rate of atmospheric gases
|
cs.NE
|
The emission rate of minority atmospheric gases is inferred by a new approach
based on neural networks. The neural network applied is the multi-layer
perceptron with backpropagation algorithm for learning. The identification of
these surface fluxes is an inverse problem. A comparison between the new
neural-inversion and a regularized inverse solution is performed. The results
obtained from the neural networks are significantly better. In addition, after
training, inversion with the neural networks is faster than the regularized
approaches.
|
0912.0950
|
Fingerprint Verification based on Gabor Filter Enhancement
|
cs.CR cs.CV
|
Human fingerprints are reliable characteristics for personal identification
because they are unique and persistent. A fingerprint pattern consists of
ridges,
valleys and minutiae. In this paper we propose Fingerprint Verification based
on Gabor Filter Enhancement (FVGFE) algorithm for minutiae feature extraction
and post processing based on 9 pixel neighborhood. A global feature extraction
and fingerprints enhancement are based on Hong enhancement method which is
simultaneously able to extract local ridge orientation and ridge frequency. It
is observed that the sensitivity and specificity values are better than those
of existing algorithms.
|
0912.0955
|
Robust Multi biometric Recognition Using Face and Ear Images
|
cs.CR cs.CV
|
This study investigates the use of ear as a biometric for authentication and
shows experimental results obtained on a newly created dataset of 420 images.
Images are passed to a quality module in order to reduce False Rejection Rate.
The Principal Component Analysis (eigen-ear) approach was used, obtaining a
90.7 percent recognition rate. Improved recognition results are obtained when
the ear biometric is fused with the face biometric. The fusion is done at the
decision level, achieving a recognition rate of 96 percent.
|
0912.0962
|
Adaptive Limited Feedback for Sum-Rate Maximizing Beamforming in
Cooperative Multicell Systems
|
cs.IT math.IT
|
Base station cooperation improves the sum-rates that can be achieved in
cellular systems. Conventional cooperation techniques require sharing large
amounts of information over finite-capacity backhaul links and assume that base
stations have full channel state information (CSI) of all the active users in
the system. In this paper, a new limited feedback strategy is proposed for
multicell beamforming where cooperation is restricted to sharing only the CSI
of active users among base stations. The system setup considered is a linear
array of cells based on the Wyner model. Each cell contains single-antenna
users and multi-antenna base stations. Closed-form expressions for the
beamforming vectors that approximately maximize the sum-rates in a multicell
system are first presented, assuming full CSI at the transmitter. For the more
practical case of a finite-bandwidth feedback link, CSI of the desired and
interfering channels is quantized at the receiver before being fed back to the
base station. An upper bound on the mean loss in sum rate due to random vector
quantization is derived. A new feedback-bit allocation strategy, to partition
the available bits between the desired and interfering channels, is developed
to approximately minimize the mean loss in sum-rate due to quantization. The
proposed feedback-bit partitioning algorithm is shown, using simulations, to
yield sum-rates close to those obtained using full CSI at the base stations.
|
0912.0965
|
Explicit Capacity-achieving Codes for Worst-Case Additive Errors
|
cs.IT cs.CC math.CO math.IT
|
For every p in (0,1/2), we give an explicit construction of binary codes of
rate approaching "capacity" 1-H(p) that enable reliable communication in the
presence of worst-case additive errors, caused by a channel oblivious to the
codeword (but not necessarily the message). Formally, we give an efficient
"stochastic" encoding E(\cdot,\cdot) of messages combined with a small number
of auxiliary random bits, such that for every message m and every error vector
e (that could depend on m) that contains at most a fraction p of ones, w.h.p
over the random bits r chosen by the encoder, m can be efficiently recovered
from the corrupted codeword E(m,r) + e by a decoder without knowledge of the
encoder's randomness r.
Our construction for additive errors also yields explicit deterministic codes
of rate approaching 1-H(p) for the "average error" criterion: for every error
vector e of at most p fraction 1's, most messages m can be efficiently
(uniquely) decoded from the corrupted codeword C(m)+e. Note that such codes
cannot be linear, as the bad error patterns for all messages are the same in a
linear code. We also give a new proof of the existence of such codes based on
list decoding and certain algebraic manipulation detection codes. Our proof is
simpler than the previous proofs from the literature on arbitrarily varying
channels.
|
0912.0986
|
Fish recognition based on the combination between robust feature
selection, image segmentation and geometrical parameter techniques using
Artificial Neural Network and Decision Tree
|
cs.CV cs.NE
|
We present in this paper a novel fish classification methodology based on a
combination of robust feature selection, image segmentation and geometrical
parameter techniques using an Artificial Neural Network and a Decision Tree.
Unlike existing works on fish classification, which propose descriptors but
neither analyze their individual impact on the whole classification task nor
combine feature selection, image segmentation and geometrical parameters, we
propose a general feature-extraction scheme using robust feature selection,
image segmentation and geometrical parameters, together with corresponding
weights to be used as a priori information by the classifier. In this sense,
instead of studying techniques for improving the structure of the classifiers
themselves, we treat the classifier as a black box and focus our research on
determining which input information yields robust fish discrimination. The
main contribution of this paper is to enhance the recognition and
classification of fish from digital images, and to develop and implement a
novel fish recognition prototype using global feature extraction, image
segmentation and geometrical parameters. The prototype is able to categorize a
given fish into its cluster, categorize the clustered fish as poisonous or
non-poisonous, and categorize the non-poisonous fish into its family.
|
0912.1005
|
Performance analysis of Non Linear Filtering Algorithms for underwater
images
|
cs.MM cs.CV cs.IR
|
Image filtering algorithms are applied on images to remove the different
types of noise that are either present in the image during capture or
injected into the image during transmission. Underwater images, when captured,
usually have Gaussian noise, speckle noise and salt and pepper noise. In this
work, five different image filtering algorithms are compared for the three
different noise types. The performances of the filters are compared using the
Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE). The modified
spatial median filter gives desirable results in terms of the above two
parameters for all three noise types. Forty underwater images are used in
this study.
|
0912.1007
|
Designing Kernel Scheme for Classifiers Fusion
|
cs.LG cs.NE
|
In this paper, we propose a special fusion method for combining ensembles of
base classifiers using a new neural network in order to improve the overall
classification efficiency. Whereas ensembles are typically designed so that
each classifier is trained independently and decision fusion is performed as a
final procedure, in this method we are interested in making the fusion process
more adaptive and efficient. This new combiner, called Neural Network Kernel
Least Mean Square, attempts to fuse the outputs of the ensemble of
classifiers. The proposed neural network has several special properties, such
as kernel abilities, Least Mean Square features, easy learning over variants
of patterns, and traditional neuron capabilities. Neural Network Kernel Least
Mean Square is a special neuron trained with Kernel Least Mean Square
properties. This new neuron is used as a classifier combiner to fuse the
outputs of base neural network classifiers. The performance of this method is
analyzed and compared with other fusion methods; the analysis shows higher
performance for our new method than for the others.
|
0912.1009
|
Biogeography based Satellite Image Classification
|
cs.CV cs.LG
|
Biogeography is the study of the geographical distribution of biological
organisms. The mindset of the engineer is that we can learn from nature.
Biogeography Based Optimization is a burgeoning nature inspired technique to
find the optimal solution of the problem. Satellite image classification is an
important task because it is the only way we can know about the land cover map
of inaccessible areas. Though satellite images have been classified in the
past using various techniques, researchers continually seek alternative
strategies for satellite image classification so that they may be prepared to
select the most appropriate technique for the feature extraction task at hand.
This paper is focused on classification of the satellite image of a particular
land cover using the theory of Biogeography based Optimization. The original
BBO algorithm does not have the inbuilt property of clustering which is
required during image classification. Hence modifications have been proposed to
the original algorithm and the modified algorithm is used to classify the
satellite image of a given region. The results indicate that highly accurate
land cover features can be extracted effectively when the proposed algorithm is
used.
|
0912.1010
|
Web Document Analysis for Companies Listed in Bursa Malaysia
|
cs.IR
|
This paper discusses research on web document analysis for companies listed
on Bursa Malaysia, the foremost financial and investment center in Malaysia.
The data set used in this research consists of the web documents of companies
listed on the Main Board and Second Board of Bursa Malaysia. This research has
used the Web Resources Extraction System, developed by the research group
mainly to extract information from the web documents involved. Our research
findings show that the level of website usage among the companies on Bursa
Malaysia is still minimal. Furthermore, the research also found that image
files account for 60.02 percent of the files used, making them the most used
file type in website creation.
|
0912.1014
|
An ensemble approach for feature selection of Cyber Attack Dataset
|
cs.CR cs.LG
|
Feature selection is an indispensable preprocessing step when mining huge
datasets that can significantly improve the overall system performance.
Therefore in this paper we focus on a hybrid approach of feature selection.
This method consists of two phases. The filter phase selects the features
with the highest information gain and guides the initialization of the search
process for the wrapper phase, whose output is the final feature subset. The
final feature subsets are passed through a K-nearest neighbor classifier for
classification of
attacks. The effectiveness of this algorithm is demonstrated on DARPA KDDCUP99
cyber attack dataset.
|
0912.1015
|
Short Term Load Forecasting Using Multi Parameter Regression
|
cs.NE cs.CE
|
Short-term load forecasting in this paper uses input data that depend on
parameters such as the load for the current hour and the previous two hours,
the temperature for the current hour and the previous two hours, the wind for
the current hour and the previous two hours, and the cloud cover for the
current hour and the previous two hours. The forecast is of the load demand
for the coming hour, based on the input parameters at that hour. In this paper
we use a multiparameter regression method for forecasting, whose error is
within a tolerable range. Algorithms implementing these forecasting techniques
have been programmed in MATLAB and applied to the case study. Other
methodologies in this area, for which investigations are in progress, are
ANN, fuzzy, and evolutionary algorithms. Adaptive multiparameter regression
for load forecasting will be possible in the near future.
|