| id | title | categories | abstract |
|---|---|---|---|
0803.3946
|
On the `Semantics' of Differential Privacy: A Bayesian Formulation
|
cs.CR cs.DB
|
Differential privacy is a definition of "privacy" for algorithms that
analyze and publish information about statistical databases. It is often
claimed that differential privacy provides guarantees against adversaries with
arbitrary side information. In this paper, we provide a precise formulation of
these guarantees in terms of the inferences drawn by a Bayesian adversary. We
show that this formulation is satisfied by both "vanilla" differential privacy
as well as a relaxation known as (epsilon,delta)-differential privacy. Our
formulation follows the ideas originally due to Dwork and McSherry [Dwork
2006]. This paper is, to our knowledge, the first place such a formulation
appears explicitly. The analysis of the relaxed definition is new to this
paper, and provides some concrete guidance for setting parameters when using
(epsilon,delta)-differential privacy.
|
0803.4026
|
High-dimensional analysis of semidefinite relaxations for sparse
principal components
|
math.ST cs.IT math.IT stat.TH
|
Principal component analysis (PCA) is a classical method for dimensionality
reduction based on extracting the dominant eigenvectors of the sample
covariance matrix. However, PCA is well known to behave poorly in the ``large
$p$, small $n$'' setting, in which the problem dimension $p$ is comparable to
or larger than the sample size $n$. This paper studies PCA in this
high-dimensional regime, but under the additional assumption that the maximal
eigenvector is sparse, say, with at most $k$ nonzero components. We consider a
spiked covariance model in which a base matrix is perturbed by adding a
$k$-sparse maximal eigenvector, and we analyze two computationally tractable
methods for recovering the support set of this maximal eigenvector, as follows:
(a) a simple diagonal thresholding method, which transitions from success to
failure as a function of the rescaled sample size
$\theta_{\mathrm{dia}}(n,p,k)=n/[k^2\log(p-k)]$; and (b) a more sophisticated
semidefinite programming (SDP) relaxation, which succeeds once the rescaled
sample size $\theta_{\mathrm{sdp}}(n,p,k)=n/[k\log(p-k)]$ is larger than a
critical threshold. In addition, we prove that no method, including the best
method which has exponential-time complexity, can succeed in recovering the
support if the order parameter $\theta_{\mathrm{sdp}}(n,p,k)$ is below a
threshold. Our results thus highlight an interesting trade-off between
computational and statistical efficiency in high-dimensional inference.
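The diagonal thresholding method in (a) admits a compact illustration. The sketch below is not code from the paper; the variable names, spike strength, and problem sizes are assumptions. It estimates the support of a $k$-sparse leading eigenvector by keeping the indices of the $k$ largest diagonal entries of the sample covariance matrix:

```python
import numpy as np

def diagonal_thresholding_support(X, k):
    """Estimate the support of a k-sparse leading eigenvector by ranking
    the diagonal entries of the sample covariance matrix of X (n x p)."""
    n, _ = X.shape
    diag = np.einsum('ij,ij->j', X, X) / n   # diagonal of (1/n) X^T X
    return set(np.argsort(diag)[-k:])        # indices of the k largest entries

# Spiked covariance model: identity base matrix plus a rank-one spike
# along a k-sparse eigenvector supported on {0, 1, 2}.
rng = np.random.default_rng(0)
p, k, n = 50, 3, 4000
v = np.zeros(p)
v[:k] = 1.0 / np.sqrt(k)                     # k-sparse maximal eigenvector
cov = np.eye(p) + 5.0 * np.outer(v, v)
X = rng.multivariate_normal(np.zeros(p), cov, size=n)
print(diagonal_thresholding_support(X, k))   # recovers the support {0, 1, 2}
```

At this sample size the rescaled sample size $\theta_{\mathrm{dia}}(n,p,k)=n/[k^2\log(p-k)] \approx 115$ is far above the success threshold, so recovery is easy; shrinking $n$ toward the threshold makes the method fail, matching the transition described above.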
|
0803.4074
|
Reflective visualization and verbalization of unconscious preference
|
cs.AI
|
A new method is presented that can help a person become aware of his or her
unconscious preferences and convey them to others in the form of verbal
explanation. The method combines the concepts of reflection, visualization, and
verbalization. It was tested in an experiment investigating the subjects'
unconscious preferences for various artworks. Two lessons were learned from the
experiment. The first is that verbalizing weak preferences, as compared with
strong preferences, through discussion over preference diagrams helps the
subjects become aware of their unconscious preferences. The second is that
introducing an adjustable factor into the visualization is effective for
adapting to differences among the subjects and for fostering their mutual
understanding.
|
0803.4240
|
Neutral Fitness Landscape in the Cellular Automata Majority Problem
|
cs.NE
|
We study in detail the fitness landscape of a difficult cellular automata
computational task: the majority problem. Our results show why this problem
landscape is so hard to search, and we quantify the large degree of neutrality
found in various ways. We show that a particular subspace of the solution
space, called the "Olympus", is where good solutions concentrate, and give
measures to quantitatively characterize this subspace.
|
0803.4241
|
Evolving Dynamic Change and Exchange of Genotype Encoding in Genetic
Algorithms for Difficult Optimization Problems
|
cs.NE
|
The application of genetic algorithms (GAs) to many optimization problems in
organizations often results in good performance and high quality solutions. For
successful and efficient use of GAs, it is not enough to simply apply simple
GAs (SGAs). In addition, it is necessary to find a proper representation for
the problem and to develop appropriate search operators that fit well to the
properties of the genotype encoding. The representation must at least be able
to encode all possible solutions of an optimization problem, and genetic
operators such as crossover and mutation should be applicable to it. In this
paper, serial alternation strategies between two codings are formulated in the
framework of dynamic change of genotype encoding in GAs for function
optimization. Likewise, a new variant of GAs for difficult optimization
problems denoted {\it Split-and-Merge} GA (SM-GA) is developed using a parallel
implementation of an SGA and evolving a dynamic exchange of individual
representation in the context of the Dual Coding concept. Numerical experiments
show that the evolved SM-GA significantly outperforms an SGA with static single
coding.
|
0803.4248
|
From Cells to Islands: A Unified Model of Cellular Parallel Genetic
Algorithms
|
cs.NE
|
This paper presents the anisotropic selection scheme for cellular Genetic
Algorithms (cGA). This new scheme enhances diversity and allows control of the
selective pressure, two important issues in Genetic Algorithms, especially when
trying to solve difficult optimization problems. Varying the anisotropic degree
of selection allows swapping from a cellular to an island model of parallel
genetic algorithm. Measures of performance and diversity have been carried out
on one well-known problem: the Quadratic Assignment Problem, which is known to
be difficult to optimize. Experiments show that, by tuning the anisotropic
degree, we can find the appropriate trade-off between cGA and island models to
optimize the performance of parallel evolutionary algorithms. This trade-off
can be interpreted as the suitable degree of migration among subpopulations in
a parallel Genetic Algorithm.
|
0803.4253
|
Combinatorial Explorations in Su-Doku
|
cs.AI cs.CC
|
Su-Doku, a popular combinatorial puzzle, provides an excellent testbench for
heuristic explorations. Several interesting questions arise from its
deceptively simple set of rules. How many distinct Su-Doku grids are there? How
to find a solution to a Su-Doku puzzle? Is there a unique solution to a given
Su-Doku puzzle? What is a good estimation of a puzzle's difficulty? What is the
minimum puzzle size (the number of "givens")?
This paper explores how these questions are related to the well-known
alldifferent constraint which emerges in a wide variety of Constraint
Satisfaction Problems (CSP) and compares various algorithmic approaches based
on different formulations of Su-Doku.
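The alldifferent constraint mentioned above can be made concrete with a minimal consistency check over Su-Doku's three kinds of units. This is an illustrative sketch (names and structure are not from the paper):

```python
def all_different(values):
    """alldifferent over the filled cells (0 denotes an empty cell)."""
    filled = [v for v in values if v != 0]
    return len(filled) == len(set(filled))

def valid_sudoku(grid):
    """Check a 9x9 grid against Su-Doku's alldifferent constraints
    on every row, column, and 3x3 box."""
    rows = [list(r) for r in grid]
    cols = [[grid[r][c] for r in range(9)] for c in range(9)]
    boxes = [[grid[3 * br + r][3 * bc + c] for r in range(3) for c in range(3)]
             for br in range(3) for bc in range(3)]
    return all(all_different(unit) for unit in rows + cols + boxes)

grid = [[0] * 9 for _ in range(9)]
grid[0][0], grid[0][5] = 7, 7          # duplicate 7 in the first row
print(valid_sudoku(grid))              # False
```

A full CSP solver would propagate these constraints during search; the check above is only the feasibility test they induce.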
|
0803.4332
|
On Sequential Estimation and Prediction for Discrete Time Series
|
math.PR cs.IT math.IT
|
The problem of extracting as much information as possible from a sequence of
observations of a stationary stochastic process $X_0,X_1,...X_n$ has been
considered by many authors from different points of view. It has long been
known through the work of D. Bailey that no universal estimator for
$\textbf{P}(X_{n+1}|X_0,X_1,...X_n)$ can be found which converges to the true
estimator almost surely. Despite this result, for restricted classes of
processes, or for sequences of estimators along stopping times, universal
estimators can be found. We present here a survey of some of the recent work
that has been done along these lines.
|
0803.4355
|
Grammar-Based Random Walkers in Semantic Networks
|
cs.AI cs.DS
|
Semantic networks qualify the meaning of an edge relating any two vertices.
Determining which vertices are most "central" in a semantic network is
difficult because one relationship type may be deemed subjectively more
important than another. For this reason, research into semantic network metrics
has focused primarily on context-based rankings (i.e. user prescribed
contexts). Moreover, many of the current semantic network metrics rank semantic
associations (i.e. directed paths between two vertices) and not the vertices
themselves. This article presents a framework for calculating semantically
meaningful primary eigenvector-based metrics such as eigenvector centrality and
PageRank in semantic networks using a modified version of the random walker
model of Markov chain analysis. Random walkers, in the context of this article,
are constrained by a grammar, where the grammar is a user defined data
structure that determines the meaning of the final vertex ranking. The ideas in
this article are presented within the context of the Resource Description
Framework (RDF) of the Semantic Web initiative.
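For reference, the unconstrained random-walker computation of PageRank that the article's grammar-constrained walkers generalize can be sketched as a power iteration (illustrative names; not the article's implementation):

```python
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10):
    """Unconstrained random-walker PageRank by power iteration.
    adj[i, j] = 1 if there is an edge from vertex i to vertex j."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Row-stochastic transition matrix; dangling vertices jump uniformly.
    P = np.where(out > 0, adj / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = damping * (r @ P) + (1 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Vertex 1 is linked to by both 0 and 2, so it ranks highest.
adj = np.array([[0, 1, 0],
                [1, 0, 0],
                [0, 1, 0]], dtype=float)
print(pagerank(adj))
```

In the grammar-based setting, the transition matrix P would additionally be filtered by the user-defined grammar so that only semantically permitted edge types contribute to the walk.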
|
0804.0006
|
Embedding in a perfect code
|
math.CO cs.IT math.IT
|
A binary 1-error-correcting code can always be embedded in a 1-perfect code
of some larger length.
|
0804.0036
|
Complexity and algorithms for computing Voronoi cells of lattices
|
math.MG cs.CG cs.IT math.IT math.NT
|
In this paper we are concerned with finding the vertices of the Voronoi cell
of a Euclidean lattice. Given a basis of a lattice, we prove that computing the
number of vertices is a #P-hard problem. On the other hand we describe an
algorithm for this problem which is especially suited for low dimensional (say
dimensions at most 12) and for highly-symmetric lattices. We use our
implementation, which drastically outperforms those of current computer algebra
systems, to find the vertices of Voronoi cells and quantizer constants of some
prominent lattices.
|
0804.0041
|
On the reconstruction of block-sparse signals with an optimal number of
measurements
|
cs.IT cs.NA math.IT
|
Let A be an M by N matrix (M < N) which is an instance of a real random
Gaussian ensemble. In compressed sensing we are interested in finding the
sparsest solution to the system of equations A x = y for a given y. In general,
whenever the sparsity of x is smaller than half the dimension of y then with
overwhelming probability over A the sparsest solution is unique and can be
found by an exhaustive search over x with an exponential time complexity for
any y. The recent work of Candès, Donoho, and Tao shows that minimization of
the L_1 norm of x subject to A x = y results in the sparsest solution provided
the sparsity of x, say K, is smaller than a certain threshold for a given
number of measurements. Specifically, if the dimension of y approaches the
dimension of x, the sparsity of x should be K < 0.239 N. Here, we consider the
case where x is d-block sparse, i.e., x consists of n = N / d blocks where each
block is either a zero vector or a nonzero vector. Instead of L_1-norm
relaxation, we consider the following relaxation: min_x \| X_1 \|_2 + \| X_2
\|_2 + ... + \| X_n \|_2, subject to A x = y, where X_i = (x_{(i-1)d+1},
x_{(i-1)d+2}, ..., x_{i d}) for i = 1, 2, ..., n. Our main result is that as n
-> \infty, the minimization finds the sparsest solution to Ax = y, with
overwhelming probability in A, for any x whose block sparsity is k/n < 1/2 -
O(\epsilon), provided M/N > 1 - 1/d, and d = \Omega(\log(1/\epsilon)/\epsilon).
The relaxation can be solved in polynomial time using semi-definite
programming.
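The objective of the relaxation above, the sum of per-block Euclidean norms, can be computed in isolation as follows (an illustration of the objective only; the paper solves the full constrained program via semidefinite programming):

```python
import numpy as np

def block_l2_sum(x, d):
    """Objective of the block relaxation: the sum of Euclidean norms of
    the n = len(x) / d consecutive length-d blocks of x."""
    blocks = np.asarray(x, dtype=float).reshape(-1, d)
    return float(np.linalg.norm(blocks, axis=1).sum())

x = np.array([3.0, 4.0, 0.0, 0.0, 0.0, 12.0])   # blocks (3,4), (0,0), (0,12)
print(block_l2_sum(x, 2))                        # 5 + 0 + 12 = 17.0
```

Like the plain L_1 norm for scalar sparsity, this mixed norm is small exactly when few blocks are nonzero, which is why minimizing it promotes block-sparse solutions.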
|
0804.0050
|
Outage Probability of the Gaussian MIMO Free-Space Optical Channel with
PPM
|
cs.IT math.IT
|
The free-space optical channel has the potential to facilitate inexpensive,
wireless communication with fiber-like bandwidth under short deployment
timelines. However, atmospheric effects can significantly degrade the
reliability of a free-space optical link. In particular, atmospheric turbulence
causes random fluctuations in the irradiance of the received laser beam,
commonly referred to as scintillation. The scintillation process is slow
compared to the large data rates typical of optical transmission. As such, we
adopt a quasi-static block fading model and study the outage probability of the
channel under the assumption of orthogonal pulse-position modulation. We
investigate the mitigation of scintillation through the use of multiple lasers
and multiple apertures, thereby creating a multiple-input multiple output
(MIMO) channel. Non-ideal photodetection is also assumed, such that the
combined shot noise and thermal noise are modeled as signal-independent
additive white Gaussian noise. Assuming perfect receiver channel state information
(CSI), we compute the signal-to-noise ratio exponents for the cases when the
scintillation is lognormal, exponential and gamma-gamma distributed, which
cover a wide range of atmospheric turbulence conditions. Furthermore, we
illustrate very large gains, in some cases larger than 15 dB, when transmitter
CSI is also available by adapting the transmitted electrical power.
|
0804.0066
|
Binary Decision Diagrams for Affine Approximation
|
cs.LO cs.AI
|
Selman and Kautz's work on ``knowledge compilation'' established how
approximation (strengthening and/or weakening) of a propositional
knowledge-base can be used to speed up query processing, at the expense of
completeness. In this classical approach, querying uses Horn over- and
under-approximations of a given knowledge-base, which is represented as a
propositional formula in conjunctive normal form (CNF). Along with the class of
Horn functions, one could imagine other Boolean function classes that might
serve the same purpose, owing to attractive deduction-computational properties
similar to those of the Horn functions. Indeed, Zanuttini has suggested that
the class of affine Boolean functions could be useful in knowledge compilation
and has presented an affine approximation algorithm. Since CNF is awkward for
presenting affine functions, Zanuttini considers both a sets-of-models
representation and the use of modulo 2 congruence equations. In this paper, we
propose an algorithm based on reduced ordered binary decision diagrams
(ROBDDs). This leads to a representation which is more compact than the sets of
models and, once we have established some useful properties of affine Boolean
functions, a more efficient algorithm.
|
0804.0143
|
Effects of High-Order Co-occurrences on Word Semantic Similarities
|
cs.CL
|
A computational model of the construction of word meaning through exposure to
texts is built in order to simulate the effects of co-occurrence values on word
semantic similarities, paragraph by paragraph. Semantic similarity is here
viewed as association. It turns out that the similarity between two words W1
and W2 strongly increases with a co-occurrence, decreases with the occurrence
of W1 without W2 or W2 without W1, and slightly increases with high-order
co-occurrences. Therefore, operationalizing similarity as a frequency of
co-occurrence probably introduces a bias: first, there are cases in which there
is similarity without co-occurrence and, second, the frequency of co-occurrence
overestimates similarity.
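A minimal sketch of the paragraph-level co-occurrence counting that such a model relies on (illustrative names and tokenization, not the paper's implementation):

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(paragraphs):
    """Count, for each unordered word pair, the number of paragraphs
    in which both words occur at least once."""
    pair_counts = Counter()
    for para in paragraphs:
        words = sorted(set(para.lower().split()))
        pair_counts.update(combinations(words, 2))
    return pair_counts

paras = ["the cat sat", "the cat ran", "the dog ran"]
counts = cooccurrence_counts(paras)
print(counts[("cat", "the")])   # "cat" and "the" share 2 paragraphs
```

The bias discussed above is visible even here: "cat" and "dog" never co-occur in these paragraphs, yet a human would judge them similar, so a frequency-of-co-occurrence measure would miss that similarity.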
|
0804.0188
|
Support Vector Machine Classification with Indefinite Kernels
|
cs.LG cs.AI
|
We propose a method for support vector machine classification using
indefinite kernels. Instead of directly minimizing or stabilizing a nonconvex
loss function, our algorithm simultaneously computes support vectors and a
proxy kernel matrix used in forming the loss. This can be interpreted as a
penalized kernel learning problem where indefinite kernel matrices are treated
as noisy observations of a true Mercer kernel. Our formulation keeps the
problem convex and relatively large problems can be solved efficiently using
the projected gradient or analytic center cutting plane methods. We compare the
performance of our technique with other methods on several classic data sets.
|
0804.0317
|
Parts-of-Speech Tagger Errors Do Not Necessarily Degrade Accuracy in
Extracting Information from Biomedical Text
|
cs.CL cs.IR
|
A recent study reported the development of Muscorian, a generic text processing
tool for extracting protein-protein interactions from text that achieved
comparable performance to biomedical-specific text processing tools. This
result was unexpected since potential errors from a series of text analysis
processes are likely to adversely affect the outcome of the entire process.
Most biomedical entity relationship extraction tools have used a
biomedical-specific part-of-speech (POS) tagger, as errors in POS tagging are
likely to affect subsequent semantic analysis of the text, such as shallow
parsing. This study
aims to evaluate the parts-of-speech (POS) tagging accuracy and attempts to
explore whether a comparable performance is obtained when a generic POS tagger,
MontyTagger, was used in place of MedPost, a tagger trained in biomedical text.
Our results demonstrated that MontyTagger, Muscorian's POS tagger, has a POS
tagging accuracy of 83.1% when tested on biomedical text. Replacing MontyTagger
with MedPost did not result in a significant improvement in entity relationship
extraction from text; precision of 55.6% from MontyTagger versus 56.8% from
MedPost on directional relationships and 86.1% from MontyTagger compared to
81.8% from MedPost on nondirectional relationships. This is unexpected as the
potential for poor POS tagging by MontyTagger is likely to affect the outcome
of the information extraction. An analysis of POS tagging errors demonstrated
that 78.5% of the tagging errors are compensated for by shallow parsing. Thus,
despite 83.1% tagging accuracy, MontyTagger has a functional tagging accuracy
of 94.6%.
|
0804.0318
|
Moore and more and symmetry
|
cs.MA physics.comp-ph
|
In any spatially discrete model of pedestrian motion that uses a regular
lattice as its basis, the question arises of how the symmetry between the
different directions of motion can be restored as far as possible with limited
computational effort. This question is equivalent to asking "How important is
the orientation of the axes of discretization for the result of the
simulation?" An optimization in terms of symmetry can be combined with the
implementation of higher and heterogeneously distributed walking speeds by
representing different walking speeds via different numbers of cells an agent
may move during one round. Therefore, all possible neighborhoods for speeds up
to v = 10 (cells per round) are examined for their deviation from radial
symmetry. Simple criteria are stated that allow finding an optimal neighborhood
for each speed. It is shown that, following these criteria, even the best
mixture of steps in Moore and von Neumann neighborhoods is unable to reproduce
the optimal neighborhood for a speed as low as 4.
|
0804.0337
|
On the Convexity of the MSE Region of Single-Antenna Users
|
cs.IT math.IT
|
We prove convexity of the sum-power constrained mean square error (MSE)
region in case of two single-antenna users communicating with a multi-antenna
base station. Due to the MSE duality this holds both for the vector broadcast
channel and the dual multiple access channel. Increasing the number of users to
more than two, we show by means of a simple counter-example that the resulting
MSE region is not necessarily convex any longer, even under the assumption of
single-antenna users. In conjunction with our former observation that the two
user MSE region is not necessarily convex for two multi-antenna users, this
extends and corrects the hitherto existing notion of the MSE region geometry.
|
0804.0352
|
Permeability Analysis based on information granulation theory
|
cs.NE cs.AI
|
This paper describes the application of information granulation theory to the
analysis of "lugeon data". Using a combination of a Self-Organizing Map (SOM)
and a Neuro-Fuzzy Inference System (NFIS), crisp and fuzzy granules are
obtained. Balancing of crisp granules and sub-fuzzy granules within non-fuzzy
information (initial granulation) is rendered in an open-close iteration. Using
two criteria, "simplicity of rules" and "suitable adaptive threshold error
level", the stability of the algorithm is guaranteed. In another part of the
paper, rough set theory (RST) is employed for approximate analysis. Validation
of the proposed methods on a large data set of in-situ permeability
measurements in rock masses at the Shivashan dam, Iran, is highlighted. The
implementation of the proposed algorithm on the lugeon data set showed that the
suggested method, relating the approximate analysis to the permeability, could
be applied.
|
0804.0353
|
Graphical Estimation of Permeability Using RST&NFIS
|
cs.NE cs.AI
|
This paper pursues some applications of Rough Set Theory (RST) and a
neural-fuzzy model to the analysis of "lugeon data". Using a Self-Organizing
Map (SOM) as a pre-processing step, the data are scaled, and the dominant rules
are then elicited by RST. Based on these rules, variations of permeability at
the different levels of the Shivashan dam, Iran, have been highlighted. Then,
another analysis of the data was carried out using a combination of SOM and an
adaptive Neuro-Fuzzy Inference System (NFIS). Finally, a brief comparison
between the results obtained by RST and SOM-NFIS (briefly SONFIS) is rendered.
|
0804.0385
|
On the Sum-Capacity of Degraded Gaussian Multiaccess Relay Channels
|
cs.IT math.IT
|
The sum-capacity is studied for a K-user degraded Gaussian multiaccess relay
channel (MARC) where the multiaccess signal received at the destination from
the K sources and relay is a degraded version of the signal received at the
relay from all sources, given the transmit signal at the relay. An outer bound
on the capacity region is developed using cutset bounds. An achievable rate
region is obtained for the decode-and-forward (DF) strategy. It is shown that
for every choice of input distribution, the rate regions for the inner (DF) and
outer bounds are given by the intersection of two K-dimensional polymatroids,
one resulting from the multiaccess link at the relay and the other from that at
the destination. Although the inner and outer bound rate regions are not
identical in general, for both cases, a classical result on the intersection of
two polymatroids is used to show that the intersection belongs to either the
set of active cases or inactive cases, where the two bounds on the K-user
sum-rate are active or inactive, respectively. It is shown that DF achieves the
capacity region for a class of degraded Gaussian MARCs in which the relay has a
high SNR link to the destination relative to the multiaccess link from the
sources to the relay. Otherwise, DF is shown to achieve the sum-capacity for an
active class of degraded Gaussian MARCs for which the DF sum-rate is maximized
by a polymatroid intersection belonging to the set of active cases. This class
is shown to include the class of symmetric Gaussian MARCs where all users
transmit at the same power.
|
0804.0441
|
Joint Beamforming for Multiaccess MIMO Systems with Finite Rate Feedback
|
cs.IT math.IT
|
This paper considers multiaccess multiple-input multiple-output (MIMO)
systems with finite rate feedback. The goal is to understand how to efficiently
employ the given finite feedback resource to maximize the sum rate by
characterizing the performance analytically. Towards this, we propose a joint
quantization and feedback strategy: the base station selects the strongest
users, jointly quantizes their strongest eigen-channel vectors and broadcasts a
common feedback to all the users. This joint strategy is different from an
individual strategy, in which quantization and feedback are performed across
users independently, and it improves upon the individual strategy in the same
way that vector quantization improves upon scalar quantization. In our proposed
strategy, the effect of user selection is analyzed by extreme order statistics,
while the effect of joint quantization is quantified by what we term ``the
composite Grassmann manifold''. The achievable sum rate is then estimated by
random matrix theory. Due to its simple implementation and solid performance
analysis, the proposed scheme provides a benchmark for multiaccess MIMO systems
with finite rate feedback.
|
0804.0506
|
Distributed Consensus over Wireless Sensor Networks Affected by
Multipath Fading
|
cs.DC cs.MA
|
The design of sensor networks capable of reaching a consensus on a globally
optimal decision test, without the need for a fusion center, is a problem that
has received considerable attention in recent years. Many consensus
algorithms have been proposed, with convergence conditions depending on the
graph describing the interaction among the nodes. In most works, the graph is
undirected and there are no propagation delays. Only recently, the analysis has
been extended to consensus algorithms incorporating propagation delays. In this
work, we propose a consensus algorithm able to converge to a globally optimal
decision statistic, using a wideband wireless network, governed by a fairly
simple MAC mechanism, where each link is a multipath, frequency-selective,
channel. The main contribution of the paper is to derive necessary and
sufficient conditions on the network topology and sufficient conditions on the
channel transfer functions guaranteeing the exponential convergence of the
consensus algorithm to a globally optimal decision value, for any bounded delay
condition.
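As a simplified, idealized illustration of consensus dynamics (no fading, delays, or MAC layer, unlike the setting of the paper; the weight matrix and graph are assumptions), a linear average-consensus iteration with a doubly stochastic weight matrix:

```python
import numpy as np

def average_consensus(x0, W, iters=200):
    """Linear consensus iteration x <- W x. For a connected graph and a
    doubly stochastic W, x converges to the average of the initial values."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = W @ x
    return x

# Metropolis weights for the 3-node path graph 0 - 1 - 2.
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
print(average_consensus([1.0, 2.0, 6.0], W))   # approaches [3, 3, 3]
```

The paper's contribution is precisely to preserve this kind of convergence when each link is replaced by a multipath frequency-selective channel with bounded delays.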
|
0804.0510
|
Nonparametric Statistical Inference for Ergodic Processes
|
cs.IT math.IT math.ST stat.TH
|
In this work a method for statistical analysis of time series is proposed,
which is used to obtain solutions to some classical problems of mathematical
statistics under the only assumption that the process generating the data is
stationary ergodic. Namely, three problems are considered: goodness-of-fit (or
identity) testing, process classification, and the change point problem. For
each of the problems a test is constructed that is asymptotically accurate for
the case when the data is generated by stationary ergodic processes. The tests
are based on empirical estimates of distributional distance.
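A toy sketch of an empirical distributional-distance estimate of the flavor used by these tests, comparing k-word frequencies of two sample sequences (the weighting and truncation below are assumptions for illustration, not the paper's definition):

```python
from collections import Counter

def empirical_distance(x, y, max_word_len=3):
    """Weighted sum over word lengths k of the total-variation distance
    between the empirical k-word frequencies of the two samples."""
    d = 0.0
    for k in range(1, max_word_len + 1):
        fx = Counter(tuple(x[i:i + k]) for i in range(len(x) - k + 1))
        fy = Counter(tuple(y[i:i + k]) for i in range(len(y) - k + 1))
        nx, ny = sum(fx.values()), sum(fy.values())
        tv = 0.5 * sum(abs(fx[w] / nx - fy[w] / ny) for w in set(fx) | set(fy))
        d += 2.0 ** -k * tv
    return d

print(empirical_distance([0, 1] * 50, [0, 1] * 50))   # identical samples: 0.0
print(empirical_distance([0] * 100, [1] * 100) > 0)   # distinct samples: True
```

For stationary ergodic sources these empirical frequencies converge to the true ones, which is what makes tests built on such distances asymptotically accurate.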
|
0804.0524
|
Bayesian Optimisation Algorithm for Nurse Scheduling
|
cs.NE cs.CE
|
Our research has shown that schedules can be built mimicking a human
scheduler by using a set of rules that involve domain knowledge. This chapter
presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling
problem that chooses suitable scheduling rules from a set for each nurse's
assignment. Based on the idea of using probabilistic models, the BOA builds a
Bayesian network for the set of promising solutions and samples these networks
to generate new candidate solutions. Computational results from 52 real data
instances demonstrate the success of this approach. It is also suggested that
the learning mechanism in the proposed algorithm may be suitable for other
scheduling problems.
|
0804.0528
|
Application of Rough Set Theory to Analysis of Hydrocyclone Operation
|
cs.AI
|
This paper describes the application of rough set theory to the analysis of
hydrocyclone operation. Using a Self-Organizing Map (SOM) as a preprocessing
step, the best crisp granules of the data are obtained. Then, using a
combination of SOM and rough set theory (RST), called SORST, the dominant rules
in the information table obtained from laboratory tests are extracted. Based on
these rules, an approximate estimation of the decision attribute is fulfilled.
Finally, a brief comparison of this method with the SOM-NFIS system (briefly
SONFIS) is highlighted.
|
0804.0539
|
Irregular turbo code design for the binary erasure channel
|
cs.IT math.IT
|
In this paper, the design of irregular turbo codes for the binary erasure
channel is investigated. An analytic expression of the erasure probability of
punctured recursive systematic convolutional codes is derived. This exact
expression will be used to track the density evolution of turbo codes over the
erasure channel, that will allow for the design of capacity-approaching
irregular turbo codes. Next, we propose a graph-optimal interleaver for
irregular turbo codes. Simulation results for different coding rates are shown
at the end.
|
0804.0558
|
Agent-Based Perception of an Environment in an Emergency Situation
|
cs.AI
|
We are interested in the problem of developing multiagent systems for risk
detection and emergency response in an uncertain and partially perceived
environment. The evaluation of the current situation passes through three
stages inside the multiagent system. First, the situation is represented in a
dynamic way. Second, the situation is characterised, and finally it is compared
with other similar known situations. In this paper, we present an information
model of an observed environment, which we have applied to the RoboCupRescue
Simulation System. Information coming from the environment is formatted
according to a taxonomy and using semantic features. The latter are defined
thanks to a fine-grained ontology of the domain and are managed by factual
agents that aim to represent the current situation dynamically.
|
0804.0573
|
An Artificial Immune System as a Recommender System for Web Sites
|
cs.NE cs.AI
|
Artificial Immune Systems have been used successfully to build recommender
systems for film databases. In this research, an attempt is made to extend this
idea to web site recommendation. A collection of more than 1000 individuals'
web profiles (alternatively called preferences / favourites / bookmarks files)
will be used. URLs will be classified using the DMOZ (Directory Mozilla)
database of
the Open Directory Project as our ontology. This will then be used as the data
for the Artificial Immune Systems rather than the actual addresses. The first
attempt will involve using a simple classification code number coupled with the
number of pages within that classification code. However, this implementation
does not make use of the hierarchical tree-like structure of DMOZ.
Consideration will then be given to the construction of a similarity measure
for web profiles that makes use of this hierarchical information to build a
better-informed Artificial Immune System.
|
0804.0580
|
Explicit Learning: an Effort towards Human Scheduling Algorithms
|
cs.NE cs.AI
|
Scheduling problems are generally NP-hard combinatorial problems, and a lot
of research has been done to solve these problems heuristically. However, most
of the previous approaches are problem-specific and research into the
development of a general scheduling algorithm is still in its infancy.
Mimicking the natural evolutionary process of the survival of the fittest,
Genetic Algorithms (GAs) have attracted much attention in solving difficult
scheduling problems in recent years. Some obstacles exist when using GAs: there
is no canonical mechanism to deal with constraints, which are commonly met in
most real-world scheduling problems, and small changes to a solution are
difficult. To overcome both difficulties, indirect approaches have been
presented (in [1] and [2]) for nurse scheduling and driver scheduling, where
GAs are used by mapping the solution space, and separate decoding routines then
build solutions to the original problem.
|
0804.0599
|
Symmetry Breaking for Maximum Satisfiability
|
cs.AI cs.LO
|
Symmetries are intrinsic to many combinatorial problems including Boolean
Satisfiability (SAT) and Constraint Programming (CP). In SAT, the
identification of symmetry breaking predicates (SBPs) is a well-known, often
effective, technique for solving hard problems. The identification of SBPs in
SAT has been the subject of significant improvements in recent years, resulting
in more compact SBPs and more effective algorithms. The identification of SBPs
has also been applied to pseudo-Boolean (PB) constraints, showing that symmetry
breaking can also be an effective technique for PB constraints. This paper
further extends the application of SBPs, showing that SBPs can be identified
and used in Maximum Satisfiability (MaxSAT), as well as in its most well-known
variants, including partial MaxSAT, weighted MaxSAT and weighted partial
MaxSAT. As with SAT and PB, symmetry breaking predicates for MaxSAT and its
variants are shown to be effective for a representative number of problem
domains, making it possible to solve problem instances that current
state-of-the-art MaxSAT solvers could not otherwise solve.
|
0804.0611
|
Channel State Feedback Schemes for Multiuser MIMO-OFDM Downlink
|
cs.IT math.IT
|
Channel state feedback schemes for the MIMO broadcast downlink have been
widely studied in the frequency-flat case. This work focuses on the more
relevant frequency selective case, where some important new aspects emerge. We
consider a MIMO-OFDM broadcast channel and compare achievable ergodic rates
under three channel state feedback schemes: analog feedback, direction
quantized feedback and "time-domain" channel quantized feedback. The first two
schemes are direct extensions of previously proposed schemes. The third scheme
is novel, and it is directly inspired by rate-distortion theory of Gaussian
correlated sources. For each scheme we derive the conditions under which the
system achieves full multiplexing gain. The key difference with respect to the
widely treated frequency-flat case is that in MIMO-OFDM the frequency-domain
channel transfer function is a Gaussian correlated source. The new time-domain
quantization scheme takes advantage of the channel frequency correlation
structure and outperforms the other schemes. Furthermore, it is by far simpler
to implement than complicated spherical vector quantization. In particular, we
observe that no structured codebook design and vector quantization is actually
needed for efficient channel state information feedback.
|
0804.0635
|
Source Coding with Mismatched Distortion Measures
|
cs.IT math.IT
|
We consider the problem of lossy source coding with a mismatched distortion
measure. That is, we investigate what distortion guarantees can be made with
respect to distortion measure $\tilde{\rho}$, for a source code designed such
that it achieves distortion less than $D$ with respect to distortion measure
$\rho$. We find a single-letter characterization of this mismatch distortion
and study properties of this quantity. These results give insight into the
robustness of lossy source coding with respect to modeling errors in the
distortion measure. They also provide guidelines on how to choose a good
tractable approximation of an intractable distortion measure.
|
0804.0686
|
Discrimination of two channels by adaptive methods and its application
to quantum systems
|
quant-ph cs.IT math.IT math.ST stat.TH
|
The optimal exponential error rate for adaptive discrimination of two
channels is discussed. In this problem, adaptive choice of input signal is
allowed. This problem is discussed in various settings. It is proved that
adaptive choice does not improve the exponential error rate in these settings.
These results are applied to quantum state discrimination.
|
0804.0790
|
Outage behavior of slow fading channels with power control using noisy
quantized CSIT
|
cs.IT math.IT
|
The topic of this study is the outage behavior of multiple-antenna slow
fading channels with quantized feedback and partial power control. A fixed-rate
communication system is considered. It is known from the literature that with
error-free feedback, the outage-optimal quantizer for power control has a
circular structure. Moreover, the diversity gain of the system increases
polynomially with the cardinality of the power control codebook. Here, a
similar system is studied, but when the feedback link is error-prone. We prove
that in the high-SNR regime, the optimal quantizer structure with noisy
feedback is still circular and the optimal Voronoi regions are contiguous
non-zero probability intervals. Furthermore, the optimal power control codebook
resembles a channel optimized scalar quantizer (COSQ), i.e., the Voronoi
regions merge with erroneous feedback information. Using a COSQ, the outage
performance of the system is superior to that of a no-feedback scheme. However,
asymptotic analysis shows that the diversity gain of the system is the same as
a no-CSIT scheme if there is a non-zero and non-vanishing feedback error
probability.
|
0804.0813
|
Spatial Interference Cancelation for Mobile Ad Hoc Networks: Perfect CSI
|
cs.IT cs.NI math.IT
|
Interference between nodes directly limits the capacity of mobile ad hoc
networks. This paper focuses on spatial interference cancelation with perfect
channel state information (CSI), and analyzes the corresponding network
capacity. Specifically, by using multiple antennas, zero-forcing beamforming is
applied at each receiver for canceling the strongest interferers. Given spatial
interference cancelation, the network transmission capacity is analyzed in this
paper, which is defined as the maximum transmitting node density under
constraints on outage and the signal-to-interference-noise ratio. Assuming the
Poisson distribution for the locations of network nodes and spatially i.i.d.
Rayleigh fading channels, mathematical tools from stochastic geometry are
applied for deriving scaling laws for transmission capacity. Specifically, for
small target outage probability, transmission capacity is proved to increase
following a power law, where the exponent is the inverse of the antenna-array
size or larger, depending on the path-loss exponent. As shown by simulations,
spatial interference cancelation increases transmission capacity by an order of
magnitude or more even if only one extra antenna is added to each node.
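The zero-forcing step described above can be sketched concretely: the receiver projects its combining vector onto the orthogonal complement of the strongest interferer's channel, which nulls that interferer exactly while retaining desired-signal energy. The sketch below is an illustration only (real-valued channels and the helper name `zf_combiner` are assumptions, not the paper's notation):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def zf_combiner(h, g):
    """Zero-forcing combiner: project the desired channel h onto the
    orthogonal complement of the interferer's channel g, so that the
    combined interference dot(w, g) is exactly nulled."""
    scale = dot(h, g) / dot(g, g)
    return [hi - scale * gi for hi, gi in zip(h, g)]

# Two receive antennas, one desired link h and one strong interferer g.
h = [1.0, 0.5]
g = [0.3, -0.8]
w = zf_combiner(h, g)
print(dot(w, g))       # interferer contribution: 0 up to rounding
print(dot(w, h) > 0)   # desired-signal energy is retained
```

With more antennas, the same projection generalizes to nulling the several strongest interferers at once, which is the regime the capacity scaling laws above describe.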
|
0804.0819
|
Kalman Filtered Compressed Sensing
|
cs.IT math.IT math.ST stat.TH
|
We consider the problem of reconstructing time sequences of spatially sparse
signals (with unknown and time-varying sparsity patterns) from a limited number
of linear "incoherent" measurements, in real-time. The signals are sparse in
some transform domain referred to as the sparsity basis. For a single spatial
signal, the solution is provided by Compressed Sensing (CS). The question that
we address is, for a sequence of sparse signals, can we do better than CS, if
(a) the sparsity pattern of the signal's transform coefficients' vector changes
slowly over time, and (b) a simple prior model on the temporal dynamics of its
current non-zero elements is available. The overall idea of our solution is to
use CS to estimate the support set of the initial signal's transform vector. At
future times, we run a reduced-order Kalman filter on the currently estimated
support and detect new additions to the support set by applying CS to the
Kalman innovations or filtering error (whenever it is "large").
|
0804.0852
|
On the Influence of Selection Operators on Performances in Cellular
Genetic Algorithms
|
cs.AI
|
In this paper, we study the influence of selective pressure on the
performance of cellular genetic algorithms. Cellular genetic algorithms are
genetic algorithms in which the population is embedded on a toroidal grid. This
structure slows down the propagation of the best-so-far individual and helps
keep potentially good solutions in the population. We present two strategies
for reducing selective pressure in order to slow down the propagation of the
best solution even further. We evaluate these strategies on a hard optimization
problem, the quadratic assignment problem, and show that for both strategies
there is a value of the control parameter that gives the best performance. This
optimal value cannot be explained by selective pressure alone, whether measured
by takeover time or by diversity evolution. This study leads us to conclude
that tools beyond selective-pressure measures are needed to explain the
performance of cellular genetic algorithms.
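The slowed propagation on a toroidal grid can be illustrated with a deterministic takeover sketch (an illustration of the takeover-time notion only, not the paper's pressure-reducing strategies; the 8x8 grid and von Neumann neighborhood are assumptions):

```python
# Takeover on a toroidal grid: each cell synchronously copies the best
# fitness in its von Neumann neighborhood (self + 4 wrap-around neighbors).
N = 8
grid = [[0] * N for _ in range(N)]
grid[0][0] = 1  # a single best-so-far individual

generations = 0
while not all(all(row) for row in grid):
    # Synchronous update: the comprehension reads the old grid throughout.
    grid = [
        [
            max(
                grid[i][j],
                grid[(i - 1) % N][j],
                grid[(i + 1) % N][j],
                grid[i][(j - 1) % N],
                grid[i][(j + 1) % N],
            )
            for j in range(N)
        ]
        for i in range(N)
    ]
    generations += 1

print(generations)  # 8: the farthest cell sits at toroidal Manhattan distance 8
```

The best individual advances one Manhattan step per generation, so takeover takes as many generations as the grid diameter; in a panmictic population with elitist selection it would be far faster, which is exactly the selective-pressure difference the abstract discusses.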
|
0804.0924
|
A Unified Semi-Supervised Dimensionality Reduction Framework for
Manifold Learning
|
cs.LG cs.AI
|
We present a general framework of semi-supervised dimensionality reduction
for manifold learning which naturally generalizes existing supervised and
unsupervised learning frameworks that apply spectral decomposition.
Algorithms derived under our framework are able to employ both labeled and
unlabeled examples and are able to handle complex problems where data form
separate clusters of manifolds. Our framework offers simple views, explains
relationships among existing frameworks and provides further extensions which
can improve existing algorithms. Furthermore, a new semi-supervised
kernelization framework called ``KPCA trick'' is proposed to handle non-linear
problems.
|
0804.0980
|
Large MIMO Detection: A Low-Complexity Detector at High Spectral
Efficiencies
|
cs.IT math.IT
|
We consider large MIMO systems, where by `{\em large}' we mean number of
transmit and receive antennas of the order of tens to hundreds. Such large MIMO
systems will be of immense interest because of the very high spectral
efficiencies possible in such systems. We present a low-complexity detector
which achieves uncoded near-exponential diversity performance for hundreds of
antennas (i.e., achieves near SISO AWGN performance in a large MIMO fading
environment) with an average per-bit complexity of just $O(N_tN_r)$, where
$N_t$ and $N_r$ denote the number of transmit and receive antennas,
respectively. With an outer turbo code, the proposed detector achieves good
coded bit error performance as well. For example, in a 600 transmit and 600
receive antennas V-BLAST system with a high spectral efficiency of 200 bps/Hz
(using BPSK and a rate-1/3 turbo code), our simulation results show that the
proposed detector performs within about 4.6 dB of the theoretical
capacity. We also adopt the proposed detector for the low-complexity decoding
of high-rate non-orthogonal space-time block codes (STBC) from division
algebras (DA). For example, we have decoded the $16\times 16$ full-rate
non-orthogonal STBC from DA using the proposed detector and show that it
performs within about 5.5 dB of the capacity using 4-QAM and a rate-3/4
turbo code at a spectral efficiency of 24 bps/Hz. The practical feasibility of
the proposed high-performance low-complexity detector could potentially trigger
wide interest in the implementation of large MIMO systems. In large MC-CDMA
systems with hundreds of users, the proposed detector is shown to achieve near
single-user performance at an average per-bit complexity linear in number of
users, which is quite appealing for its use in practical CDMA systems.
|
0804.0996
|
Woven Graph Codes: Asymptotic Performances and Examples
|
cs.IT math.IT
|
Constructions of woven graph codes based on constituent block and
convolutional codes are studied. It is shown that within the random ensemble of
such codes based on $s$-partite, $s$-uniform hypergraphs, where $s$ depends
only on the code rate, there exist codes satisfying the Varshamov-Gilbert (VG)
and the Costello lower bound on the minimum distance and the free distance,
respectively. A connection between regular bipartite graphs and tailbiting
codes is shown. Some examples of woven graph codes are presented. Among them an
example of a rate $R_{\rm wg}=1/3$ woven graph code with $d_{\rm free}=32$
based on Heawood's bipartite graph and containing $n=7$ constituent rate
$R^{c}=2/3$ convolutional codes with overall constraint lengths $\nu^{c}=5$ is
given. An encoding procedure for woven graph codes with complexity proportional
to the number of constituent codes and their overall constraint length
$\nu^{c}$ is presented.
|
0804.1033
|
A Semi-Automatic Framework to Discover Epistemic Modalities in
Scientific Articles
|
cs.CL cs.LO
|
Documents in scientific journals are often marked by the attitudes and
opinions of the author and/or other persons, who contribute both objective and
subjective statements and arguments. In this respect, the attitude is often
conveyed through linguistic modality. Since, in languages like English, French,
and German, modality is expressed by modal verbs like can, must, may, etc. and
by the subjunctive mood, an occurrence of modality is often attributed to
these verbs alone. This is not correct, as it has been shown that modality is
an instrument of the whole sentence, to which adverbs, modal particles,
punctuation marks, and the intonation of a sentence all contribute. Often, a
combination of all these instruments is necessary to express a modality. In
this work, we are concerned with finding modal verbs in scientific texts as a
pre-step towards the discovery of the attitude of an author. Whereas the input
is an arbitrary text, the output consists of zones representing modalities.
|
0804.1046
|
Discrete schemes for Gaussian curvature and their convergence
|
cs.CV cs.CG cs.GR cs.NA
|
In this paper, several discrete schemes for Gaussian curvature are surveyed.
The convergence property of a modified discrete scheme for the Gaussian
curvature is considered. Furthermore, a new discrete scheme for Gaussian
curvature is presented. We prove that the new scheme converges at a regular
vertex with valence not less than 5. By constructing a counterexample, we also
show that it is impossible to build a discrete scheme for Gaussian curvature
that converges at a regular vertex with valence 4. Finally, asymptotic errors
of several discrete schemes for Gaussian curvature are compared.
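One classical discrete scheme of the kind surveyed here is the Gauss-Bonnet (angle-deficit) scheme: the discrete Gaussian curvature at a mesh vertex is 2*pi minus the sum of the incident triangle angles. The sketch below omits the vertex-area normalization for brevity (an assumption; the normalized variants differ only by a scale factor):

```python
import math

def angle_at(v, a, b):
    """Interior angle at vertex v in triangle (v, a, b)."""
    u = [ai - vi for ai, vi in zip(a, v)]
    w = [bi - vi for bi, vi in zip(b, v)]
    c = sum(x * y for x, y in zip(u, w))
    nu = math.sqrt(sum(x * x for x in u))
    nw = math.sqrt(sum(x * x for x in w))
    return math.acos(c / (nu * nw))

def angle_deficit(v, ring):
    """Gauss-Bonnet style discrete Gaussian curvature at v:
    2*pi minus the angle sum over the incident triangles
    (vertex-area weight omitted)."""
    return 2 * math.pi - sum(angle_at(v, a, b) for a, b in ring)

# Corner of a cube: three mutually orthogonal incident triangles,
# each contributing a right angle, so the deficit is pi/2.
corner = (0.0, 0.0, 0.0)
ring = [((1, 0, 0), (0, 1, 0)), ((0, 1, 0), (0, 0, 1)), ((0, 0, 1), (1, 0, 0))]
print(angle_deficit(corner, ring))  # pi/2: positive curvature concentrated at the corner
```

On a flat one-ring the angles sum to exactly 2*pi and the deficit vanishes, which is the sanity check any convergent scheme must pass.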
|
0804.1083
|
Towards algebraic methods for maximum entropy estimation
|
cs.IT math.IT
|
We show that various formulations (e.g., dual and Kullback-Csiszar
iterations) of the estimation of maximum entropy (ME) models can be transformed
into solving systems of polynomial equations in several variables, for which
one can use celebrated Grobner basis methods. Posing ME estimation as solving
polynomial equations is possible in the cases where the feature functions
(sufficient statistics), which provide the information about the underlying
random variable in the form of expectations, are integer valued.
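A one-variable toy instance of this transformation (illustrative only; the target moment 0.7 is an assumption, and Grobner basis methods are needed in the genuinely multivariate case): for a binary variable x in {0, 1} with integer feature f(x) = x and moment constraint E[f] = 0.7, substituting z = exp(lambda) into the ME model p(x) proportional to exp(lambda * x) turns the moment condition z/(1+z) = 0.7 into the polynomial equation 0.3*z - 0.7 = 0.

```python
import math

# Toy instance of the substitution z = exp(lambda): the exponential family
# moment condition becomes a polynomial equation in z because the feature
# is integer valued.
target = 0.7                 # illustrative moment constraint E[f] = 0.7
z = target / (1 - target)    # root of the linear polynomial 0.3*z - 0.7 = 0
lam = math.log(z)            # recover the natural parameter lambda

# Check: the fitted maximum-entropy model reproduces the moment constraint.
p1 = math.exp(lam) / (1 + math.exp(lam))
print(p1)
```

With several integer-valued features the same substitution yields a polynomial system in several z variables, which is where the Grobner basis machinery the abstract mentions applies.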
|
0804.1117
|
Network Beamforming Using Relays with Perfect Channel Information
|
cs.IT math.IT
|
This paper is on beamforming in wireless relay networks with perfect channel
information at relays, the receiver, and the transmitter if there is a direct
link between the transmitter and receiver. It is assumed that every node in the
network has its own power constraint. A two-step amplify-and-forward protocol
is used, in which the transmitter and relays not only use matched filters to form
a beam at the receiver but also adaptively adjust their transmit powers
according to the channel strength information. For a network with any number of
relays and no direct link, the optimal power control is solved analytically.
The complexity of finding the exact solution is linear in the number of relays.
Our results show that the transmitter should always use its maximal power and
the optimal power used at a relay is not a binary function. It can take any
value between zero and its maximum transmit power. Also, this value depends on
the quality of all other channels in addition to the relay's own channels.
Despite this coupling, distributed strategies are proposed in which, with
the aid of a low-rate broadcast from the receiver, a relay needs only its own
channel information to implement the optimal power control. Simulated
performance shows that network beamforming achieves the maximal diversity and
outperforms other existing schemes. Beamforming in networks with a direct
link is then considered. We show that when the direct link exists during the first
step only, the optimal power control is the same as that of networks with no
direct link. For networks with a direct link during the second step, recursive
numerical algorithms are proposed to solve the power control problem.
Simulation shows that by adjusting the transmitter and relays' powers
adaptively, network performance is significantly improved.
|
0804.1133
|
Prospective Algorithms for Quantum Evolutionary Computation
|
cs.NE
|
This effort examines the intersection of the emerging field of quantum
computing and the more established field of evolutionary computation. The goal
is to understand what benefits quantum computing might offer to computational
intelligence and how computational intelligence paradigms might be implemented
as quantum programs to be run on a future quantum computer. We critically
examine proposed algorithms and methods for implementing computational
intelligence paradigms, primarily focused on heuristic optimization methods
including and related to evolutionary computation, with particular regard for
their potential for eventual implementation on quantum computing hardware.
|
0804.1172
|
Transceiver Design with Low-Precision Analog-to-Digital Conversion : An
Information-Theoretic Perspective
|
cs.IT math.IT
|
Modern communication receiver architectures center around digital signal
processing (DSP), with the bulk of the receiver processing being performed on
digital signals obtained after analog-to-digital conversion (ADC). In this
paper, we explore Shannon-theoretic performance limits when ADC precision is
drastically reduced, from typical values of 8-12 bits used in current
communication transceivers, to 1-3 bits. The goal is to obtain insight on
whether DSP-centric transceiver architectures are feasible as communication
bandwidths scale up, recognizing that high-precision ADC at high sampling rates
is either unavailable, or too costly or power-hungry. Specifically, we evaluate
the communication limits imposed by low-precision ADC for the ideal real
discrete-time Additive White Gaussian Noise (AWGN) channel, under an average
power constraint on the input. For an ADC with K quantization bins (i.e., a
precision of log2 K bits), we show that the Shannon capacity is achievable by a
discrete input distribution with at most K + 1 mass points. For 2-bin (1-bit)
symmetric ADC, this result is tightened to show that binary antipodal signaling
is optimum for any signal-to-noise ratio (SNR). For multi-bit ADC, the capacity
is computed numerically, and the results obtained are used to make the
following encouraging observations regarding system design with low-precision
ADC: (a) even at moderately high SNR of up to 20 dB, 2-3 bit quantization
results in only 10-20% reduction of spectral efficiency, which is acceptable
for large communication bandwidths, (b) standard equiprobable pulse amplitude
modulation with ADC thresholds set to implement maximum likelihood hard
decisions is asymptotically optimum at high SNR, and works well at low to
moderate SNRs as well.
|
0804.1183
|
Hash Property and Fixed-rate Universal Coding Theorems
|
cs.IT math.IT
|
The aim of this paper is to prove the achievability of fixed-rate universal
coding problems by using our previously introduced notion of hash property.
These problems are the fixed-rate lossless universal source coding problem and
the fixed-rate universal channel coding problem. Since an ensemble of sparse
matrices satisfies the hash property requirement, it is proved that we can
construct universal codes by using sparse matrices.
|
0804.1187
|
M\'ethode de calcul du rayonnement acoustique de structures complexes
|
cs.CE
|
In the automotive industry, predicting noise during the design cycle is a
necessary step. Well-known methods exist to address this issue in the
low-frequency domain. Among these, Finite Element Methods, suited to closed
domains, are quite easy to implement, whereas Boundary Element Methods are
better suited to infinite domains but may induce singularity problems. In this
article, the described method, the Substructure Deletion Method (SDM), allows
both methods to be used in their best application domain. A new method is also
presented to solve the SDM exterior problem. Instead of using Boundary Element
Methods, an original use of Finite Elements is made. The efficiency of this new
version of the Substructure Deletion Method is discussed.
|
0804.1193
|
Spreading Signals in the Wideband Limit
|
cs.IT math.IT
|
Wideband communications are impossible with signals that are spread over a
very large band and are transmitted over multipath channels unknown ahead of
time. This work exploits the I-MMSE connection to bound the achievable
data-rate of spreading signals in wideband settings, and to conclude that the
achievable data-rate diminishes as the bandwidth increases due to channel
uncertainty. The result applies to all spreading modulations, i.e. signals that
are evenly spread over the bandwidth available to the communication system,
with SNR smaller than log(W/L)/(W/L), and holds for communications over channels
where the number of paths L is unbounded but sub-linear in the bandwidth W.
|
0804.1244
|
Geometric Data Analysis, From Correspondence Analysis to Structured Data
Analysis (book review)
|
cs.AI
|
Review of: Brigitte Le Roux and Henry Rouanet, Geometric Data Analysis, From
Correspondence Analysis to Structured Data Analysis, Kluwer, Dordrecht, 2004,
xi+475 pp.
|
0804.1266
|
Immune System Approaches to Intrusion Detection - A Review
|
cs.NE cs.CR
|
The use of artificial immune systems in intrusion detection is an appealing
concept for two reasons. Firstly, the human immune system provides the human
body with a high level of protection from invading pathogens, in a robust,
self-organised and distributed manner. Secondly, current techniques used in
computer security are not able to cope with the dynamic and increasingly
complex nature of computer systems and their security. It is hoped that
biologically inspired approaches in this area, including the use of
immune-based systems will be able to meet this challenge. Here we review the
algorithms used, the development of the systems and the outcome of their
implementation. We provide an introduction and analysis of the key developments
within this field, in addition to making suggestions for future research.
|
0804.1281
|
Data Reduction in Intrusion Alert Correlation
|
cs.CR cs.NE
|
Network intrusion detection sensors are usually built around low level models
of network traffic. This means that their output is of a similarly low level
and as a consequence, is difficult to analyze. Intrusion alert correlation is
the task of automating some of this analysis by grouping related alerts
together. Attack graphs provide an intuitive model for such analysis.
Unfortunately alert flooding attacks can still cause a loss of service on
sensors, and when performing attack graph correlation, there can be a large
number of extraneous alerts included in the output graph. This obscures the
fine structure of genuine attacks and makes them more difficult for human
operators to discern. This paper explores modified correlation algorithms which
attempt to minimize the impact of this attack.
|
0804.1302
|
Bolasso: model consistent Lasso estimation through the bootstrap
|
cs.LG math.ST stat.ML stat.TH
|
We consider the least-square linear regression problem with regularization by
the l1-norm, a problem usually referred to as the Lasso. In this paper, we
present a detailed asymptotic analysis of model consistency of the Lasso. For
various decays of the regularization parameter, we compute asymptotic
equivalents of the probability of correct model selection (i.e., variable
selection). For a specific rate decay, we show that the Lasso selects all the
variables that should enter the model with probability tending to one
exponentially fast, while it selects all other variables with strictly positive
probability. We show that this property implies that if we run the Lasso for
several bootstrapped replications of a given sample, then intersecting the
supports of the Lasso bootstrap estimates leads to consistent model selection.
This novel variable selection algorithm, referred to as the Bolasso, is shown
to compare favorably with other linear regression methods on synthetic data and
on datasets from the UCI machine learning repository.
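The Bolasso procedure described above (run the Lasso on bootstrap resamples and intersect the estimated supports) can be sketched in a few lines. The plain ISTA solver, the toy data, and all parameter values below are illustrative assumptions, not the paper's experimental setup:

```python
import random

def lasso_ista(X, y, alpha, iters=500):
    """Minimize (1/2n)||y - Xb||^2 + alpha*||b||_1 by ISTA (proximal gradient)."""
    n, p = len(X), len(X[0])
    # Crude Lipschitz bound for the smooth part: trace of X^T X / n.
    L = sum(X[i][j] ** 2 for i in range(n) for j in range(p)) / n
    beta = [0.0] * p
    for _ in range(iters):
        resid = [y[i] - sum(X[i][j] * beta[j] for j in range(p)) for i in range(n)]
        for j in range(p):
            grad_j = sum(X[i][j] * resid[i] for i in range(n)) / n
            z = beta[j] + grad_j / L
            t = alpha / L  # soft-threshold level
            beta[j] = z - t if z > t else z + t if z < -t else 0.0
    return beta

random.seed(0)
n, coefs = 50, [2.0, -1.5, 0.0]  # feature 2 is irrelevant
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(n)]
y = [sum(c * x for c, x in zip(coefs, row)) + random.gauss(0, 0.05) for row in X]

# Bolasso idea: intersect the Lasso supports over bootstrap resamples.
supports = []
for _ in range(16):
    idx = [random.randrange(n) for _ in range(n)]
    beta = lasso_ista([X[i] for i in idx], [y[i] for i in idx], alpha=0.2)
    supports.append({j for j, b in enumerate(beta) if abs(b) > 1e-8})
support = set.intersection(*supports)
print(sorted(support))
```

The intersection step is what removes the spuriously selected variables: the relevant features survive every resample, while an irrelevant one only needs to be dropped once to be excluded.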
|
0804.1382
|
Interference-Assisted Secret Communication
|
cs.IT cs.CR math.IT
|
Wireless communication is susceptible to adversarial eavesdropping due to the
broadcast nature of the wireless medium. In this paper it is shown how
eavesdropping can be alleviated by exploiting the superposition property of the
wireless medium. A wiretap channel with a helping interferer (WT-HI), in which
a transmitter sends a confidential message to its intended receiver in the
presence of a passive eavesdropper, and with the help of an independent
interferer, is considered. The interferer, which does not know the confidential
message, helps in ensuring the secrecy of the message by sending independent
signals. An achievable secrecy rate for the WT-HI is given. The results show
that interference can be exploited to assist secrecy in wireless
communications. An important example of the Gaussian case, in which the
interferer has a better channel to the intended receiver than to the
eavesdropper, is considered. In this situation, the interferer can send a
(random) codeword at a rate that ensures that it can be decoded and subtracted
from the received signal by the intended receiver but cannot be decoded by the
eavesdropper. Hence, only the eavesdropper is interfered with and the secrecy
level of the confidential message is increased.
|
0804.1409
|
Discovering More Accurate Frequent Web Usage Patterns
|
cs.DB cs.DS
|
Web usage mining is a type of web mining, which exploits data mining
techniques to discover valuable information from navigation behavior of World
Wide Web users. As in classical data mining, data preparation and pattern
discovery are the main issues in web usage mining. The first phase of web usage
mining is the data processing phase, which includes the session reconstruction
operation from server logs. Session reconstruction success directly affects the
quality of the frequent patterns discovered in the next phase. In reactive web
usage mining techniques, the source data is web server logs and the topology of
the web pages served by the web server domain. Other kinds of information
collected during the interactive browsing of web site by user, such as cookies
or web logs containing similar information, are not used. The next phase of web
usage mining is discovering frequent user navigation patterns. In this phase,
pattern discovery methods are applied on the reconstructed sessions obtained in
the first phase in order to discover frequent user patterns. In this paper, we
propose a frequent web usage pattern discovery method that can be applied after
session reconstruction phase. In order to compare the accuracy of the
session reconstruction phase and the pattern discovery phase, we have used an
agent simulator, which models the behavior of web users and generates web user
navigation as well as the log data kept by the web server.
|
0804.1421
|
An $O(\log m)$, deterministic, polynomial-time computable approximation
of Lewis Carroll's scoring rule
|
cs.GT cs.AI cs.MA
|
We provide deterministic, polynomial-time computable voting rules that
approximate Dodgson's and (the ``minimization version'' of) Young's scoring
rules to within a logarithmic factor. Our approximation of Dodgson's rule is
tight up to a constant factor, as Dodgson's rule is $\NP$-hard to approximate
to within some logarithmic factor. The ``maximization version'' of Young's rule
is known to be $\NP$-hard to approximate by any constant factor. Both
approximations are simple, and natural as rules in their own right: Given a
candidate we wish to score, we can regard either its Dodgson or Young score as
the edit distance between a given set of voter preferences and one in which the
candidate to be scored is the Condorcet winner. (The difference between the two
scoring rules is the type of edits allowed.) We regard the marginal cost of a
sequence of edits to be the number of edits divided by the number of reductions
(in the candidate's deficit against any of its opponents in the pairwise race
against that opponent) that the edits yield. Over a series of rounds, our
scoring rules greedily choose a sequence of edits that modify exactly one
voter's preferences and whose marginal cost is no greater than any other such
single-vote-modifying sequence.
|
0804.1441
|
On Kernelization of Supervised Mahalanobis Distance Learners
|
cs.LG cs.AI
|
This paper focuses on the problem of kernelizing an existing supervised
Mahalanobis distance learner. The following features are included in the paper.
Firstly, three popular learners, namely, "neighborhood component analysis",
"large margin nearest neighbors" and "discriminant neighborhood embedding",
which do not have kernel versions are kernelized in order to improve their
classification performances. Secondly, an alternative kernelization framework
called "KPCA trick" is presented. Implementing a learner in the new framework
gains several advantages over the standard framework, e.g. no mathematical
formulas and no reprogramming are required for a kernel implementation, the
framework avoids troublesome problems such as singularity. Thirdly, whereas
representer theorems were merely assumed in previous papers related to ours,
here they are formally proven. The proofs
validate both the kernel trick and the KPCA trick in the context of Mahalanobis
distance learning. Fourthly, unlike previous works, which always apply
brute-force methods to select a kernel, we investigate two approaches which can be
efficiently adopted to construct an appropriate kernel for a given dataset.
Finally, numerical results on various real-world datasets are presented.
|
0804.1448
|
Fast k Nearest Neighbor Search using GPU
|
cs.CV cs.DC
|
The recent improvements of graphics processing units (GPU) offer the
computer vision community a powerful processing platform. Indeed, many
highly parallelizable computer vision problems can be significantly accelerated
using the GPU architecture. Among these problems, the k nearest neighbor search
(KNN) is a well-known one linked with many applications such as
classification and the estimation of statistical properties. The main drawback
of this task lies in its computational burden, as it grows polynomially with
the data size. In this paper, we show that the use of the NVIDIA CUDA API
accelerates the search for the KNN up to a factor of 120.
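The brute-force KNN computation that maps so well onto the GPU is easy to state. Below is a plain-Python CPU reference of the same exhaustive distance-then-select procedure (the CUDA kernel itself is not reproduced here; the function name and the toy points are illustrative):

```python
def knn_brute_force(ref, query, k):
    """Exhaustive k-nearest-neighbor search: compute all squared Euclidean
    distances from `query` to the reference points, then keep the k smallest.
    On a GPU the distance computations run in parallel; here they are a loop."""
    dist = [
        (sum((a - b) ** 2 for a, b in zip(point, query)), i)
        for i, point in enumerate(ref)
    ]
    dist.sort()
    return [i for _, i in dist[:k]]

ref = [(0.0, 0.0), (1.0, 0.0), (5.0, 5.0), (0.5, 0.0)]
print(knn_brute_force(ref, (0.0, 0.0), k=2))  # [0, 3]
```

The distance matrix is embarrassingly parallel (one thread per reference-query pair), which is why a CUDA implementation of exactly this procedure achieves the large speedups reported above.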
|
0804.1490
|
Distributed Space-Time Block Codes for the MIMO Multiple Access Channel
|
cs.IT math.IT
|
In this work, the Multiple Access Channel with multiple transmit antennas is
considered. A construction of a family of distributed space-time codes for this
channel is proposed. No Channel Side Information at the transmitters is
assumed, and users are not allowed to cooperate. It is shown that the proposed
code achieves the Diversity Multiplexing Tradeoff of the channel. As an
example, we consider the two-user MIMO-MAC. Simulation results show the
significant gain offered by the new coding scheme compared to an orthogonal
transmission scheme, e.g. time sharing.
|
0804.1493
|
Distributed Space Time Codes for the Amplify-and-Forward Multiple-Access
Relay Channel
|
cs.IT math.IT
|
In this work, we present a construction of a family of space-time block codes
for a Multi-Access Amplify-and-Forward Relay channel with two users and a
single half-duplex relay. It is assumed that there is no Channel Side
Information at the transmitters and that they are not allowed to cooperate.
Using the Diversity Multiplexing Tradeoff as a tool to evaluate the
performance, we prove that the proposed scheme is optimal in some sense.
Moreover, we provide numerical results which show that the new scheme
outperforms the orthogonal transmission scheme, e.g. time sharing, and offers a
significant gain.
|
0804.1602
|
Multiterminal source coding with complementary delivery
|
cs.IT math.IT
|
A coding problem for correlated information sources is investigated. Messages
emitted from two correlated sources are jointly encoded, and delivered to two
decoders. Each decoder has access to one of the two messages to enable it to
reproduce the other message. The rate-distortion function for the coding
problem and its interesting properties are clarified.
|
0804.1617
|
Optimal Power Control over Fading Cognitive Radio Channels by Exploiting
Primary User CSI
|
cs.IT math.IT
|
This paper is concerned with spectrum sharing cognitive radio networks, where
a secondary user (SU) or cognitive radio link communicates simultaneously over
the same frequency band with an existing primary user (PU) link. It is assumed
that the SU transmitter has the perfect channel state information (CSI) on the
fading channels from SU transmitter to both PU and SU receivers (as usually
assumed in the literature), as well as the fading channel from PU transmitter
to PU receiver (a new assumption). With the additional PU CSI, we study the
optimal power control for the SU over different fading states to maximize the
SU ergodic capacity subject to a newly proposed constraint to protect the PU
transmission, which limits the maximum ergodic capacity loss of the PU resulting
from the SU transmission. It is shown that the proposed SU power-control policy
is superior to the conventional policy under the constraint on the maximum
tolerable interference power/interference temperature at the PU receiver, in
terms of the achievable ergodic capacities of both PU and SU.
|
0804.1653
|
Nonextensive Generalizations of the Jensen-Shannon Divergence
|
cs.IT math.IT math.ST stat.TH
|
Convexity is a key concept in information theory, namely via the many
implications of Jensen's inequality, such as the non-negativity of the
Kullback-Leibler divergence (KLD). Jensen's inequality also underlies the
concept of Jensen-Shannon divergence (JSD), which is a symmetrized and smoothed
version of the KLD. This paper introduces new JSD-type divergences, by
extending its two building blocks: convexity and Shannon's entropy. In
particular, a new concept of q-convexity is introduced and shown to satisfy a
Jensen's q-inequality. Based on this Jensen's q-inequality, the Jensen-Tsallis
q-difference is built, which is a nonextensive generalization of the JSD, based
on Tsallis entropies. Finally, the Jensen-Tsallis q-difference is characterized
in terms of convexity and extrema.
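For concreteness, here is a minimal sketch of the equal-weight Jensen-type construction built from Tsallis entropy (names are ours; the paper's q-difference additionally uses q-dependent weights, which this sketch omits). At q = 1 it reduces to the ordinary Jensen-Shannon divergence in nats:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q(p); recovers Shannon entropy (in nats) as q -> 1."""
    if q == 1.0:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return (1.0 - sum(pi ** q for pi in p if pi > 0)) / (q - 1.0)

def jensen_tsallis_difference(p, r, q):
    """Equal-weight Jensen-type divergence built from Tsallis entropy:
    S_q of the midpoint minus the average of the two S_q values.
    At q = 1 this is the ordinary Jensen-Shannon divergence (nats)."""
    m = [(pi + ri) / 2.0 for pi, ri in zip(p, r)]
    return (tsallis_entropy(m, q)
            - 0.5 * (tsallis_entropy(p, q) + tsallis_entropy(r, q)))
```

For q = 2 one can check by expanding the squares that this equals one quarter of the squared Euclidean distance between the two distributions, so it is nonnegative there as well.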
|
0804.1669
|
Subclose Families, Threshold Graphs, and the Weight Hierarchy of
Grassmann and Schubert Codes
|
math.CO cs.IT math.IT
|
We discuss the problem of determining the complete weight hierarchy of linear
error correcting codes associated to Grassmann varieties and, more generally,
to Schubert varieties in Grassmannians. The problem is partially solved in the
case of Grassmann codes, and one of the solutions uses the combinatorial notion
of a closed family. We propose a generalization of this to what is called a
subclose family. A number of properties of subclose families are proved, and
their connection with the notion of threshold graphs and graphs with maximum sum
of squares of vertex degrees is outlined.
|
0804.1697
|
Lower Bounds on the Rate-Distortion Function of Individual LDGM Codes
|
cs.IT math.IT
|
We consider lossy compression of a binary symmetric source by means of a
low-density generator-matrix code. We derive two lower bounds on the rate
distortion function which are valid for any low-density generator-matrix code
with a given node degree distribution L(x) on the set of generators and for any
encoding algorithm. These bounds show that, due to the sparseness of the code,
the performance is strictly bounded away from the Shannon rate-distortion
function. In this sense, our bounds represent a natural generalization of
Gallager's bound on the maximum rate at which low-density parity-check codes
can be used for reliable transmission. Our bounds are similar in spirit to the
technique recently developed by Dimakis, Wainwright, and Ramchandran, but they
apply to individual codes.
|
0804.1740
|
Pseudo Quasi-3 Designs and their Applications to Coding Theory
|
math.CO cs.IT math.IT
|
We define a pseudo quasi-3 design as a symmetric design with the property
that the derived and residual designs with respect to at least one block are
quasi-symmetric. Quasi-symmetric designs can be used to construct optimal
self-complementary codes. In this article we give a construction of an infinite
family of pseudo quasi-3 designs whose residual designs allow us to construct a
family of codes with a new parameter set that meet the Grey-Rankin bound.
|
0804.1748
|
Noncoherent Capacity of Underspread Fading Channels
|
cs.IT math.IT
|
We derive bounds on the noncoherent capacity of wide-sense stationary
uncorrelated scattering (WSSUS) channels that are selective both in time and
frequency, and are underspread, i.e., the product of the channel's delay spread
and Doppler spread is small. For input signals that are peak constrained in
time and frequency, we obtain upper and lower bounds on capacity that are
explicit in the channel's scattering function, are accurate for a large range
of bandwidths, and allow us to coarsely identify the capacity-optimal bandwidth as a
function of the peak power and the channel's scattering function. We also
obtain a closed-form expression for the first-order Taylor series expansion of
capacity in the limit of large bandwidth, and show that our bounds are tight in
the wideband regime. For input signals that are peak constrained in time only
(and, hence, allowed to be peaky in frequency), we provide upper and lower
bounds on the infinite-bandwidth capacity and find cases when the bounds
coincide and the infinite-bandwidth capacity is characterized exactly. Our
lower bound is closely related to a result by Viterbi (1967).
The analysis in this paper is based on a discrete-time discrete-frequency
approximation of WSSUS time- and frequency-selective channels. This
discretization explicitly takes into account the underspread property, which is
satisfied by virtually all wireless communication channels.
|
0804.1762
|
The Choquet integral for the aggregation of interval scales in
multicriteria decision making
|
cs.DM cs.AI
|
This paper addresses the question of which models fit with information
concerning the preferences of the decision maker over each attribute, and his
preferences about aggregation of criteria (interacting criteria). We show that
the conditions induced by this information, plus some intuitive conditions, lead
to a unique possible aggregation operator: the Choquet integral.
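As a hedged illustration of the aggregation operator the paper singles out, here is the standard discrete Choquet integral with respect to a capacity (a monotone set function mu with mu of the full criteria set equal to 1); the dict encoding of mu is our own choice, and nonnegative scores are assumed:

```python
def choquet_integral(x, mu):
    """Discrete Choquet integral of partial scores x[0..n-1] with respect
    to a capacity mu: a dict mapping frozensets of criteria indices to
    [0, 1], monotone, with mu(full set) = 1. Scores (assumed >= 0) are
    sorted increasingly and each increment is weighted by the capacity
    of the set of criteria whose score is at least that high."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    remaining = set(order)
    total, prev = 0.0, 0.0
    for i in order:
        total += (x[i] - prev) * mu[frozenset(remaining)]
        prev = x[i]
        remaining.discard(i)
    return total
```

With an additive capacity this reduces to a weighted mean; with a capacity equal to 1 only on the full set it reduces to the minimum, which is how interaction between criteria is expressed.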
|
0804.1811
|
Space-Time Codes from Structured Lattices
|
cs.IT math.IT
|
We present constructions of Space-Time (ST) codes based on lattice coset
coding. First, we focus on ST code constructions for the short block-length
case, i.e., when the block-length is equal to or slightly larger than the
number of transmit antennas. We present constructions based on dense lattice
packings and nested lattice (Voronoi) shaping. Our codes achieve the optimal
diversity-multiplexing tradeoff of quasi-static MIMO fading channels for any
fading statistics, and perform very well also at practical, moderate values of
signal to noise ratios (SNR). Then, we extend the construction to the case of
large block lengths, by using trellis coset coding. We provide constructions of
trellis coded modulation (TCM) schemes that are endowed with good packing and
shaping properties. Both short-block and trellis constructions allow for a
reduced complexity decoding algorithm based on minimum mean squared error
generalized decision feedback equalizer (MMSE-GDFE) lattice decoding and a
combination of this with a Viterbi TCM decoder for the TCM case. Beyond the
interesting algebraic structure, we exhibit codes whose performance is among
the state-of-the art considering codes with similar encoding/decoding
complexity.
|
0804.1839
|
Necessary and Sufficient Conditions on Sparsity Pattern Recovery
|
cs.IT math.IT
|
The problem of detecting the sparsity pattern of a k-sparse vector in R^n
from m random noisy measurements is of interest in many areas such as system
identification, denoising, pattern recognition, and compressed sensing. This
paper addresses the scaling of the number of measurements m, with the signal
dimension n and sparsity level k, for asymptotically reliable detection. We
show that a necessary condition for perfect recovery at any given SNR, for all
algorithms regardless of complexity, is m = Omega(k log(n-k))
measurements. Conversely, it is shown that this scaling of Omega(k log(n-k))
measurements is sufficient for a remarkably simple ``maximum correlation''
estimator. Hence this scaling is optimal and does not require more
sophisticated techniques such as lasso or matching pursuit. The constants for
both the necessary and sufficient conditions are precisely defined in terms of
the minimum-to-average ratio of the nonzero components and the SNR. The
necessary condition improves upon previous results for maximum likelihood
estimation. For lasso, it also provides a necessary condition at any SNR and
for low SNR improves upon previous work. The sufficient condition provides the
first asymptotically-reliable detection guarantee at finite SNR.
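The ``maximum correlation'' estimator mentioned above is simple enough to sketch (our own minimal version; the paper's analysis also pins down the constants, which this sketch ignores): estimate the support as the k columns of the measurement matrix most correlated with the observation.

```python
import numpy as np

def max_correlation_support(A, y, k):
    """Return the indices of the k columns of A with the largest
    normalized absolute correlation with y -- the estimated support
    of the k-sparse signal."""
    scores = np.abs(A.T @ y) / np.linalg.norm(A, axis=0)
    return set(np.argsort(scores)[-k:].tolist())
```

Despite its simplicity, at the Omega(k log(n-k)) measurement scaling this thresholded correlation already matches the necessary condition, which is the abstract's point that lasso or matching pursuit is not needed for support recovery at this scaling.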
|
0804.1840
|
Selfish Distributed Compression over Networks: Correlation Induces
Anarchy
|
cs.GT cs.IT math.IT
|
We consider the min-cost multicast problem (under network coding) with
multiple correlated sources where each terminal wants to losslessly reconstruct
all the sources. We study the inefficiency brought forth by the selfish
behavior of the terminals in this scenario by modeling it as a noncooperative
game among the terminals. The degradation in performance due to the lack of
regulation is measured by the {\it Price of Anarchy} (POA), which is defined as
the ratio between the cost of the worst possible \textit{Wardrop equilibrium}
and the socially optimum cost. Our main result is that in contrast with the
case of independent sources, the presence of source correlations can
significantly increase the price of anarchy. Towards establishing this result,
we first characterize the socially optimal flow and rate allocation in terms of
four intuitive conditions. Next, we show that the Wardrop equilibrium is a
socially optimal solution for a different set of (related) cost functions.
Using this, we construct explicit examples that demonstrate that the POA $> 1$
and determine near-tight upper bounds on the POA as well. The main techniques
in our analysis are Lagrangian duality theory and the usage of the
supermodularity of conditional entropy.
|
0804.1845
|
An Optimal Bloom Filter Replacement Based on Matrix Solving
|
cs.DS cs.DB
|
We suggest a method for holding a dictionary data structure, which maps keys
to values, in the spirit of Bloom Filters. The space requirements of the
dictionary we suggest are much smaller than those of a hashtable. We allow
storing n keys, each mapped to a value which is a string of k bits. Our suggested
method requires nk + o(n) bits space to store the dictionary, and O(n) time to
produce the data structure, and allows answering a membership query in O(1)
memory probes. The dictionary size does not depend on the size of the keys.
However, reducing the space requirements of the data structure comes at a
certain cost. Our dictionary has a small probability of a one sided error. When
attempting to obtain the value for a key that is stored in the dictionary we
always get the correct answer. However, when testing for membership of an
element that is not stored in the dictionary, we may get an incorrect answer,
and when requesting the value of such an element we may get a certain random
value. Our method is based on solving equations in GF(2^k) and using several
hash functions. Another significant advantage of our suggested method is that
we do not require using sophisticated hash functions. We only require pairwise
independent hash functions. We also suggest a data structure that requires only
nk bits of space, has O(n^2) preprocessing time, and has O(log n) query time.
However, this data structure requires uniform hash functions. In order to
replace a Bloom Filter of n elements with an error probability of 2^{-k}, we
require nk + o(n) memory bits, O(1) query time, O(n) preprocessing time, and
only pairwise independent hash functions. Even the most advanced previously
known Bloom Filter would require nk + O(n) space and uniform hash functions,
so our method is significantly less space consuming, especially when k is small.
|
0804.1893
|
The F.A.S.T.-Model
|
cs.MA physics.soc-ph
|
A discrete model of pedestrian motion is presented that is implemented in the
Floor field- and Agent-based Simulation Tool (F.A.S.T.), which has already been
applied to a variety of real-life scenarios.
|
0804.1982
|
Linear Time Recognition Algorithms for Topological Invariants in 3D
|
cs.CV
|
In this paper, we design linear time algorithms to recognize and determine
topological invariants such as the genus and homology groups in 3D. These
properties can be used to identify patterns in 3D image recognition. This has a
tremendous number of applications in 3D medical image analysis. Our method is
based on cubical images with direct adjacency, also called (6,26)-connectivity
images in discrete geometry. According to the fact that there are only six
types of local surface points in 3D and a discrete version of the well-known
Gauss-Bonnett Theorem in differential geometry, we first determine the genus of
a closed 2D-connected component (a closed digital surface). Then, we use
Alexander duality to obtain the homology groups of a 3D object in 3D space.
|
0804.2036
|
Towards Physarum robots: computing and manipulating on water surface
|
cs.RO cs.AI
|
Plasmodium of Physarum polycephalum is an ideal biological substrate for
implementing concurrent and parallel computation, including combinatorial
geometry and optimization on graphs. We report results of scoping experiments
on Physarum computing in conditions of minimal friction, on the water surface.
We show that the plasmodium of Physarum is capable of computing basic spanning
trees and manipulating lightweight objects. We speculate that our results
pave the way towards the design and implementation of amorphous biological
robots.
|
0804.2057
|
Comparing and Combining Methods for Automatic Query Expansion
|
cs.IR
|
Query expansion is a well known method to improve the performance of
information retrieval systems. In this work we have tested different approaches
to extract the candidate query terms from the top ranked documents returned by
the first-pass retrieval.
One of them is the cooccurrence approach, based on measures of cooccurrence
of the candidate and the query terms in the retrieved documents. The other one,
the probabilistic approach, is based on the probability distribution of terms
in the collection and in the top ranked set.
We compare the retrieval improvement achieved by expanding the query with
terms obtained with different methods belonging to both approaches. Besides, we
have developed a na\"ive combination of both kinds of methods, with which we
have obtained results that improve those obtained with any of them separately.
This result confirms that the information provided by each approach is of a
different nature and, therefore, can be used in a combined manner.
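A minimal sketch of the cooccurrence idea (our own simplification, using whole documents as cooccurrence windows and a raw count as the score; the paper evaluates more refined measures over the top-ranked set):

```python
from collections import Counter

def cooccurrence_scores(top_docs, query_terms):
    """Score candidate expansion terms by how many query terms they
    co-occur with, summed over the top-ranked documents."""
    qset = set(query_terms)
    scores = Counter()
    for doc in top_docs:
        terms = set(doc.lower().split())
        hits = terms & qset
        if not hits:
            continue
        for t in terms - qset:
            scores[t] += len(hits)
    return scores
```

The highest-scoring candidates would then be appended to the original query for the second-pass retrieval; a probabilistic variant would instead compare each term's frequency in the top-ranked set against its frequency in the whole collection.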
|
0804.2095
|
A Logic Programming Framework for Combinational Circuit Synthesis
|
cs.LO cs.CE cs.DM cs.PL
|
Logic Programming languages and combinational circuit synthesis tools share a
common "combinatorial search over logic formulae" background. This paper
attempts to reconnect the two fields with a fresh look at Prolog encodings for
the combinatorial objects involved in circuit synthesis. While benefiting from
Prolog's fast unification algorithm and built-in backtracking mechanism,
efficiency of our search algorithm is ensured by using parallel bitstring
operations together with logic variable equality propagation, as a mapping
mechanism from primary inputs to the leaves of candidate Leaf-DAGs implementing
a combinational circuit specification. After an exhaustive expressiveness
comparison of various minimal libraries, a surprising first-runner, Strict
Boolean Inequality "<" together with constant function "1" also turns out to
have small transistor-count implementations, competitive to NAND-only or
NOR-only libraries. As a practical outcome, a more realistic circuit
synthesizer is implemented that combines rewriting-based simplification of
(<,1) circuits with exhaustive Leaf-DAG circuit search.
Keywords: logic programming and circuit design, combinatorial object
generation, exact combinational circuit synthesis, universal boolean logic
libraries, symbolic rewriting, minimal transistor-count circuit synthesis
|
0804.2138
|
A constructive proof of the existence of Viterbi processes
|
math.ST cs.IT math.IT math.PR stat.CO stat.ML stat.TH
|
Hidden Markov models (HMMs), in use since the early days of digital
communication, are now also routinely applied in speech recognition, natural
language processing, image analysis, and bioinformatics. In an HMM $(X_i,Y_i)_{i\ge 1}$,
observations $X_1,X_2,...$ are assumed to be conditionally independent given an
``explanatory'' Markov process $Y_1,Y_2,...$, which itself is not observed;
moreover, the conditional distribution of $X_i$ depends solely on $Y_i$.
Central to the theory and applications of HMM is the Viterbi algorithm to find
{\em a maximum a posteriori} (MAP) estimate $q_{1:n}=(q_1,q_2,...,q_n)$ of
$Y_{1:n}$ given observed data $x_{1:n}$. Maximum {\em a posteriori} paths are
also known as Viterbi paths or alignments. Recently, attempts have been made to
study the behavior of Viterbi alignments when $n\to \infty$. Thus, it has been
shown that in some special cases a well-defined limiting Viterbi alignment
exists. While innovative, these attempts have relied on rather strong
assumptions and involved proofs which are existential. This work proves the
existence of infinite Viterbi alignments in a more constructive manner and for
a very general class of HMMs.
|
0804.2155
|
From Qualitative to Quantitative Proofs of Security Properties Using
First-Order Conditional Logic
|
cs.CR cs.AI cs.LO
|
A first-order conditional logic is considered, with semantics given by a
variant of epsilon-semantics, where p -> q means that Pr(q | p) approaches 1
super-polynomially, i.e., faster than any inverse polynomial. This type of
convergence is needed for reasoning about security protocols. A complete
axiomatization is provided for this semantics, and it is shown how a
qualitative proof of the correctness of a security protocol can be
automatically converted to a quantitative proof appropriate for reasoning about
concrete security.
|
0804.2189
|
Impact of Spatial Correlation on the Finite-SNR Diversity-Multiplexing
Tradeoff
|
cs.IT math.IT
|
The impact of spatial correlation on the performance limits of multielement
antenna (MEA) channels is analyzed in terms of the diversity-multiplexing
tradeoff (DMT) at finite signal-to-noise ratio (SNR) values. A lower bound on
the outage probability is first derived. Using this bound, an accurate
finite-SNR estimate of the DMT is then derived. This estimate allows us to gain insight on
the impact of spatial correlation on the DMT at finite SNR. As expected, the
DMT is severely degraded as the spatial correlation increases. Moreover, using
asymptotic analysis, we show that our framework encompasses well-known results
concerning the asymptotic behavior of the DMT.
|
0804.2249
|
The Secrecy Graph and Some of its Properties
|
cs.IT cs.DM math.IT math.PR
|
A new random geometric graph model, the so-called secrecy graph, is
introduced and studied. The graph represents a wireless network and includes
only edges over which secure communication in the presence of eavesdroppers is
possible. The underlying point process models considered are lattices and
Poisson point processes. In the lattice case, analogies to standard bond and
site percolation can be exploited to determine percolation thresholds. In the
Poisson case, the node degrees are determined and percolation is studied using
analytical bounds and simulations. It turns out that a small density of
eavesdroppers already has a drastic impact on the connectivity of the secrecy
graph.
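A common formalization of the secrecy graph (which we assume in this sketch; the paper's definitions may differ in detail) puts a directed edge from u to v whenever v is strictly closer to u than u's nearest eavesdropper. A small sketch computing out-degrees:

```python
import math

def secrecy_out_degrees(nodes, eavesdroppers):
    """Out-degree of each node in the directed secrecy graph: u -> v is
    a secure edge iff v is closer to u than u's nearest eavesdropper."""
    degrees = []
    for u in nodes:
        radius = min((math.dist(u, e) for e in eavesdroppers),
                     default=math.inf)
        degrees.append(sum(1 for v in nodes
                           if v is not u and math.dist(u, v) < radius))
    return degrees
```

Running this on Poisson samples of nodes and eavesdroppers is essentially the simulation setup used to study node degrees and percolation: even a few eavesdroppers shrink many nodes' secrecy radii and disconnect the graph.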
|
0804.2288
|
Parimutuel Betting on Permutations
|
cs.GT cs.CC cs.DS cs.MA
|
We focus on a permutation betting market under parimutuel call auction model
where traders bet on the final ranking of n candidates. We present a
Proportional Betting mechanism for this market. Our mechanism allows the
traders to bet on any subset of the n x n 'candidate-rank' pairs, and rewards
them proportionally to the number of pairs that appear in the final outcome. We
show that the market organizer's decision problem for this mechanism can be
formulated as a convex program of polynomial size. More importantly, the
formulation yields a set of n x n unique marginal prices that are sufficient to
price the bets in this mechanism, and are computable in polynomial-time. The
marginal prices reflect the traders' beliefs about the marginal distributions
over outcomes. We also propose techniques to compute the joint distribution
over n! permutations from these marginal distributions. We show that using a
maximum entropy criterion, we can obtain a concise parametric form (with only n
x n parameters) for the joint distribution which is defined over an
exponentially large state space. We then present an approximation algorithm for
computing the parameters of this distribution. In fact, the algorithm addresses
the generic problem of finding the maximum entropy distribution over
permutations that has a given mean, and may be of independent interest.
|
0804.2346
|
Theory and Applications of Two-dimensional, Null-boundary,
Nine-Neighborhood, Cellular Automata Linear rules
|
cs.DM cs.CC cs.CV
|
This paper deals with the theory and application of two-dimensional,
nine-neighborhood, null-boundary, uniform as well as hybrid Cellular Automata
(2D CA) linear rules in image processing. These rules are classified into nine
groups depending upon the number of neighboring cells that influence the cell
under consideration. All the uniform rules have been found to render multiple
copies of a given image, depending on the groups to which they belong, whereas
hybrid rules are shown to characterize the phenomena of zooming in, zooming
out, thickening and thinning of a given image. Further, using hybrid CA rules,
a new search algorithm, called the Sweepers algorithm, is developed, which is
applicable to simulating many interdisciplinary research areas such as the
migration of organisms towards a single-point destination, Single Attractor and
Multiple Attractor Cellular Automata theory, pattern classification and
clustering, image compression, encryption and decryption, and density
classification.
|
0804.2354
|
Information filtering based on wiki index database
|
cs.IR cs.CL
|
In this paper we present a profile-based approach to information filtering by
an analysis of the content of text documents. The Wikipedia index database is
created and used to automatically generate the user profile from the user
document collection. The problem-oriented Wikipedia subcorpora are created
(using knowledge extracted from the user profile) for each topic of user
interests. The index databases of these subcorpora are applied to filtering
information flow (e.g., mails, news). Thus, the analyzed texts are classified
into several topics explicitly presented in the user profile. The paper
concentrates on the indexing part of the approach. The architecture of an
application implementing the Wikipedia indexing is described. The indexing
method is evaluated using the Russian and Simple English Wikipedia.
|
0804.2401
|
Causal models have no complete axiomatic characterization
|
cs.AI cs.LO
|
Markov networks and Bayesian networks are effective graphic representations
of the dependencies embedded in probabilistic models. It is well known that
independencies captured by Markov networks (called graph-isomorphs) have a
finite axiomatic characterization. This paper, however, shows that
independencies captured by Bayesian networks (called causal models) have no
axiomatization by using even countably many Horn or disjunctive clauses. This
is because a sub-independency model of a causal model may not be causal, while
graph-isomorphs are closed under sub-models.
|
0804.2435
|
On the Expressiveness and Complexity of ATL
|
cs.LO cs.GT cs.MA
|
ATL is a temporal logic geared towards the specification and verification of
properties in multi-agent systems. It allows one to reason about the existence
of strategies for coalitions of agents in order to enforce a given property. In
this paper, we first precisely characterize the complexity of ATL
model-checking over Alternating Transition Systems and Concurrent Game
Structures when the number of agents is not fixed. We prove that it is
\Delta^P_2- and \Delta^P_3-complete, depending on the underlying multi-agent
model (ATS and CGS resp.). We also consider the same problems for some
extensions of ATL. We then consider expressiveness issues. We show how ATS and
CGS are related and provide translations between these models w.r.t.
alternating bisimulation. We also prove that the standard definition of ATL
(built on modalities "Next", "Always" and "Until") cannot express the duals of
its modalities: it is necessary to explicitly add the modality "Release".
|
0804.2469
|
On analytic properties of entropy rate
|
cs.IT math.IT
|
Entropy rate is a real valued functional on the space of discrete random
sources which lacks a closed formula even for subclasses of sources which have
intuitive parameterizations. A good way to overcome this problem is to examine
its analytic properties relative to some reasonable topology. A canonical
choice of a topology is that of the norm of total variation as it immediately
arises with the idea of a discrete random source as a probability measure on
sequence space. It is shown that entropy rate is Lipschitzian relative to this
topology, which, by well-known facts, is close to differentiability. An
application of this theorem leads to a simple and elementary proof of the
existence of entropy rate of random sources with finite evolution dimension.
This class of sources encompasses arbitrary hidden Markov sources and quantum
random walks.
|
0804.2473
|
A Design Framework for Limited Feedback MIMO Systems with Zero-Forcing
DFE
|
cs.IT math.IT
|
We consider the design of multiple-input multiple-output communication
systems with a linear precoder at the transmitter, zero-forcing decision
feedback equalization (ZF-DFE) at the receiver, and a low-rate feedback channel
that enables communication from the receiver to the transmitter. The channel
state information (CSI) available at the receiver is assumed to be perfect, and
based on this information the receiver selects a suitable precoder from a
codebook and feeds back the index of this precoder to the transmitter. Our
approach to the design of the components of this limited feedback scheme is
based on the development, herein, of a unified framework for the joint design
of the precoder and the ZF-DFE under the assumption that perfect CSI is
available at both the transmitter and the receiver. The framework is general
and embraces a wide range of design criteria. This framework enables us to
characterize the statistical distribution of the optimal precoder in a standard
Rayleigh fading environment. Using this distribution, we show that codebooks
constructed from Grassmann packings minimize an upper bound on an average
distortion measure, and hence are natural candidates for the codebook in
limited feedback systems. We also show that for any given codebook the
performance of the proposed limited feedback schemes is an upper bound on the
corresponding schemes with linear zero-forcing receivers. Our simulation
studies show that the proposed limited feedback scheme can provide
significantly better performance at a lower feedback rate than existing schemes
in which the detection order is fed back to the transmitter.
|
0804.2487
|
The ergodic decomposition of asymptotically mean stationary random
sources
|
cs.IT math.IT math.PR
|
It is demonstrated how to represent asymptotically mean stationary (AMS)
random sources with values in standard spaces as mixtures of ergodic AMS
sources. This is an extension of the well-known decomposition of stationary
sources which has facilitated the generalization of prominent source coding
theorems to arbitrary, not necessarily ergodic, stationary sources. Asymptotic
mean stationarity generalizes the definition of stationarity and covers a much
larger variety of real-world examples of random sources of practical interest.
It is sketched how to obtain source coding and related theorems for arbitrary,
not necessarily ergodic, AMS sources, based on the presented ergodic
decomposition.
|
0804.2576
|
Interlace Polynomials: Enumeration, Unimodality, and Connections to
Codes
|
math.CO cs.IT math.IT
|
The interlace polynomial q was introduced by Arratia, Bollobas, and Sorkin.
It encodes many properties of the orbit of a graph under edge local
complementation (ELC). The interlace polynomial Q, introduced by Aigner and van
der Holst, similarly contains information about the orbit of a graph under
local complementation (LC). We have previously classified LC and ELC orbits,
and now give an enumeration of the corresponding interlace polynomials of all
graphs of order up to 12. An enumeration of all circle graphs of order up to 12
is also given. We show that there exist graphs of all orders greater than 9
with interlace polynomials q whose coefficient sequences are non-unimodal,
thereby disproving a conjecture by Arratia et al. We have verified that for
graphs of order up to 12, all polynomials Q have unimodal coefficients. It has
been shown that LC and ELC orbits of graphs correspond to equivalence classes
of certain error-correcting codes and quantum states. We show that the
properties of these codes and quantum states are related to properties of the
associated interlace polynomials.
|
0804.2808
|
Robust Precoder for Multiuser MISO Downlink with SINR Constraints
|
cs.IT math.IT
|
In this paper, we consider linear precoding with SINR constraints for the
downlink of a multiuser MISO (multiple-input single-output) communication
system in the presence of imperfect channel state information (CSI). The base
station is equipped with multiple transmit antennas and each user terminal is
equipped with a single receive antenna. We propose a robust design of linear
precoder which transmits minimum power to provide the required SINR at the user
terminals when the true channel state lies in a region of a given size around
the channel state available at the transmitter. We show that this design
problem can be formulated as a Second Order Cone Program (SOCP) which can be
solved efficiently. We compare the performance of the proposed design with some
of the robust designs reported in the literature. Simulation results show that
the proposed robust design provides better performance with reduced complexity.
|
0804.2844
|
An Analysis of Key Factors for the Success of the Communal Management of
Knowledge
|
cs.HC cs.AI
|
This paper explores the links between Knowledge Management and new
community-based models of the organization from both a theoretical and an
empirical perspective. From a theoretical standpoint, we look at Communities of
Practice (CoPs) and Knowledge Management (KM) and explore the links between the
two as they relate to the use of information systems to manage knowledge. We
begin by reviewing technologically supported approaches to KM and introduce the
idea of "Systemes d'Aide a la Gestion des Connaissances" SAGC (Systems to aid
the Management of Knowledge). Following this we examine the contribution that
communal structures such as CoPs can make to intraorganizational KM and
highlight some of the 'success factors' for this approach to KM that are found in
the literature. From an empirical standpoint, we present the results of a
survey involving the Chief Knowledge Officers (CKOs) of twelve large French
businesses; the objective of this study was to identify the factors that might
influence the success of such approaches. The survey was analysed using
thematic content analysis and the results are presented here with some short
illustrative quotes from the CKOs. Finally, the paper concludes with some brief
reflections on what can be learnt from looking at this problem from these two
perspectives.
|
0804.2940
|
Secret Key Agreement by Soft-decision of Signals in Gaussian Maurer's
Model
|
cs.IT cs.CR math.IT
|
We consider the problem of secret key agreement in Gaussian Maurer's Model.
In Gaussian Maurer's model, legitimate receivers, Alice and Bob, and a
wire-tapper, Eve, receive signals randomly generated by a satellite through
three independent memoryless Gaussian channels respectively. Then Alice and Bob
generate a common secret key from their received signals. In this model, we
propose a protocol for generating a common secret key by using soft decisions
on Alice's and Bob's received signals. We then calculate a lower bound on the
secret key rate of the proposed protocol. Comparison with a protocol that uses
only hard decisions shows that our protocol achieves a higher rate.
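The gain from using soft information can be illustrated with a minimal simulation. This is an illustrative sketch, not the paper's exact protocol: here Alice simply publishes which of her samples are "reliable" (large magnitude), and both sides keep the signs at those positions, which raises the bit-agreement rate over plain hard decisions; the threshold value 1.0 is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
x = rng.choice([-1.0, 1.0], size=n)      # satellite symbols
ya = x + rng.standard_normal(n)          # Alice's noisy observation
yb = x + rng.standard_normal(n)          # Bob's noisy observation

# Hard decision on every sample: both sides just take signs.
hard_agree = np.mean(np.sign(ya) == np.sign(yb))

# Soft decision: Alice publicly reveals her reliable positions
# (|ya| above a threshold); both keep only the signs there.
keep = np.abs(ya) > 1.0
soft_agree = np.mean(np.sign(ya[keep]) == np.sign(yb[keep]))

assert soft_agree > hard_agree           # reliability info helps
```

Revealing only the positions (not the signs) leaks nothing about the key bits themselves, which is why such index advice can be sent over the public channel.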
|
0804.2950
|
An Adaptive-Parity Error-Resilient LZ'77 Compression Algorithm
|
cs.IT math.IT
|
The paper proposes an improved error-resilient Lempel-Ziv'77 (LZ'77)
algorithm employing an adaptive number of parity bits for error protection. It
is a modified version of the recently proposed error-resilient algorithm
LZRS'77, which uses a constant amount of parity over all of the encoded blocks
of data.
The constant amount of parity is bounded by the lowest-redundancy part of the
encoded string, whereas the adaptive parity more efficiently utilizes the
available redundancy of the encoded string, and can be on average much higher.
The proposed algorithm thus provides better error protection of encoded data.
The performance of both algorithms was measured. The comparison showed a
noticeable improvement by use of adaptive parity. The proposed algorithm is
capable of correcting up to a few times as many errors as the original
algorithm, while the compression performance remains practically unchanged.
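The budgeting difference can be sketched in a few lines. This is a toy model, not the LZ'77 machinery itself: each encoded block i is assumed to be able to carry `cap[i]` parity bits inside its redundancy; the constant scheme is bounded by the least-redundant block, while the adaptive scheme uses each block's own capacity. The capacity values are hypothetical.

```python
def constant_parity(cap):
    # LZRS'77-style: every block gets the minimum capacity.
    return [min(cap)] * len(cap)

def adaptive_parity(cap):
    # Adaptive scheme: each block uses its own capacity.
    return list(cap)

cap = [12, 3, 9, 7, 15]          # hypothetical per-block capacities
assert sum(adaptive_parity(cap)) >= sum(constant_parity(cap))
# → adaptive budget: 46 bits vs constant budget: 15 bits here
```

With a higher total parity budget, more Reed-Solomon-style redundancy is available, which is why the adaptive variant corrects several times as many errors at essentially unchanged compression.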
|
0804.2960
|
Eigenvalue based Spectrum Sensing Algorithms for Cognitive Radio
|
cs.IT math.IT
|
Spectrum sensing is a fundamental component of a cognitive radio. In this
paper, we propose new sensing methods based on the eigenvalues of the
covariance matrix of signals received at the secondary users. In particular,
two sensing algorithms are suggested, one is based on the ratio of the maximum
eigenvalue to minimum eigenvalue; the other is based on the ratio of the
average eigenvalue to the minimum eigenvalue. Using recent results from random
matrix theory (RMT), we quantify the distributions of these ratios and derive
the
probabilities of false alarm and probabilities of detection for the proposed
algorithms. We also find the thresholds of the methods for a given probability
of false alarm. The proposed methods overcome the noise uncertainty problem,
and can even perform better than the ideal energy detection when the signals to
be detected are highly correlated. The methods can be used for various signal
detection applications without requiring the knowledge of signal, channel and
noise power. Simulations based on randomly generated signals, wireless
microphone signals and captured ATSC DTV signals are presented to verify the
effectiveness of the proposed methods.
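A minimal sketch of the maximum-to-minimum eigenvalue (MME) statistic, assuming numpy. The threshold values below are placeholders; in practice the threshold is set from the target false-alarm probability using the RMT distributions derived in the paper.

```python
import numpy as np

def mme_statistic(samples):
    """Maximum-to-minimum eigenvalue ratio of the sample covariance.

    samples: (M, N) array -- M antennas/receivers, N time samples.
    """
    R = samples @ samples.conj().T / samples.shape[1]  # sample covariance
    eig = np.linalg.eigvalsh(R)                        # ascending, real
    return eig[-1] / eig[0]

def sense(samples, threshold):
    """Declare the band occupied when the MME ratio exceeds the threshold."""
    return mme_statistic(samples) > threshold

# Toy check: pure noise gives a ratio near 1; a signal component common
# to all antennas inflates the largest eigenvalue.
rng = np.random.default_rng(0)
noise = rng.standard_normal((4, 5000))
sig = 0.5 * rng.standard_normal(5000) + noise   # same signal on every row
assert mme_statistic(sig) > mme_statistic(noise)
```

Because the statistic is a ratio of eigenvalues of the same matrix, any unknown scaling of the noise power cancels, which is the source of the robustness to noise uncertainty mentioned above.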
|
0804.2991
|
Low-Complexity LDPC Codes with Near-Optimum Performance over the BEC
|
cs.IT math.IT
|
Recent works showed how low-density parity-check (LDPC) erasure correcting
codes, under maximum likelihood (ML) decoding, are capable of tightly
approaching the performance of an ideal maximum-distance-separable code on the
binary erasure channel. This result holds down to low error rates, even
for small and moderate block sizes, while keeping the decoding complexity low,
thanks to a class of decoding algorithms which exploits the sparseness of the
parity-check matrix to reduce the complexity of Gaussian elimination (GE). In
this paper the main concepts underlying ML decoding of LDPC codes are recalled.
A performance analysis among various LDPC code classes is then carried out,
including a comparison with fixed-rate Raptor codes. The results show that LDPC
and Raptor codes provide almost identical performance in terms of decoding
failure probability vs. overhead.
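The core of ML erasure decoding — solving for the erased bits via Gaussian elimination over GF(2) — can be sketched as follows. This is a dense-matrix toy without the sparseness-exploiting reductions the paper discusses; `-1` marks an erased position.

```python
import numpy as np

def ml_erasure_decode(H, rx):
    """Recover erased bits on the BEC by GE over GF(2), or None if the
    linear system for the erasures is rank-deficient (decoding failure)."""
    H = np.asarray(H, dtype=np.uint8) % 2
    rx = np.asarray(rx)
    erased = np.flatnonzero(rx < 0)
    known = np.flatnonzero(rx >= 0)
    # Right-hand side: syndrome contribution of the known bits (mod 2).
    b = (H[:, known] @ rx[known] % 2).astype(np.uint8)
    A = H[:, erased].copy()
    row = 0
    for col in range(len(erased)):
        piv = next((r for r in range(row, A.shape[0]) if A[r, col]), None)
        if piv is None:
            return None                      # erased bit undetermined
        A[[row, piv]] = A[[piv, row]]        # bring pivot into place
        b[[row, piv]] = b[[piv, row]]
        for r in range(A.shape[0]):          # eliminate column elsewhere
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        row += 1
    out = rx.copy()
    out[erased] = b[:len(erased)]            # RREF: solution sits in b
    return out

# Hamming(7,4) check: the all-ones word is a codeword; erase two bits.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
assert list(ml_erasure_decode(H, [-1, 1, 1, -1, 1, 1, 1])) == [1] * 7
```

The GE-based algorithms referenced above reach the same ML solution while keeping complexity low by first shrinking this system using the sparsity of the parity-check matrix.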
|
0804.2998
|
OFDM based Distributed Space Time Coding for Asynchronous Relay Networks
|
cs.IT math.IT
|
Recently Li and Xia have proposed a transmission scheme for wireless relay
networks based on the Alamouti space time code and orthogonal frequency
division multiplexing to combat the effect of timing errors at the relay nodes.
This transmission scheme is amazingly simple and achieves a diversity order of
two for any number of relays. Motivated by its simplicity, this scheme is
extended to a more general transmission scheme that can achieve full
cooperative diversity for any number of relays. The conditions on the
distributed space time block code (DSTBC) structure that admit its application
in the proposed transmission scheme are identified and it is pointed out that
the recently proposed full diversity four group decodable DSTBCs from precoded
co-ordinate interleaved orthogonal designs and extended Clifford algebras
satisfy these conditions. It is then shown how differential encoding at the
source can be combined with the proposed transmission scheme to arrive at a new
transmission scheme that can achieve full cooperative diversity in asynchronous
wireless relay networks with no channel information and also no timing error
knowledge at the destination node. Finally, four-group decodable distributed
differential space time block codes applicable in this new transmission scheme
for any number of relays that is a power of two are also provided.
|
0804.3109
|
Partial Cross-Correlation of D-Sequences based CDMA System
|
cs.IT math.IT
|
Like other pseudorandom sequences, decimal sequences may be used in designing
a Code Division Multiple Access (CDMA) system. They appear to be ideally suited
for this since the cross-correlation of d-sequences taken over the LCM of their
periods is zero. But a practical system will, in all likelihood, not satisfy
the condition that the number of chips per bit is equal to the LCM for all
sequences that are assigned to different users. It is essential, therefore, to
determine the partial cross-correlation properties of d-sequences. We have
performed experiments on d-sequences and found that their partial
cross-correlation is lower than that of PN sequences, indicating that
d-sequences can be effective for use in CDMA.
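A d-sequence and a partial cross-correlation over a window of chips can be sketched as follows, assuming the standard construction a_i = (2^i mod q) mod 2 for the binary d-sequence of 1/q (q an odd prime); the primes and window size here are arbitrary examples.

```python
def d_sequence(q, n):
    """First n bits of the binary d-sequence of 1/q: a_i = (2**i mod q) mod 2."""
    return [pow(2, i, q) % 2 for i in range(n)]

def partial_xcorr(a, b, window):
    """Normalized cross-correlation over the first `window` chips,
    with bits mapped 0/1 -> +1/-1."""
    s = sum((1 - 2 * x) * (1 - 2 * y) for x, y in zip(a[:window], b[:window]))
    return s / window

a = d_sequence(19, 64)
b = d_sequence(23, 64)
print(partial_xcorr(a, b, 32))
```

The period of the sequence equals the multiplicative order of 2 modulo q (18 for q = 19), and taking the correlation over a window shorter than the LCM of two users' periods is exactly the "partial" setting the abstract describes.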
|