| id | title | categories | abstract |
|---|---|---|---|
0904.4358
|
Adaptive sampling for linear state estimation
|
math.OC cs.SY math.PR math.ST stat.TH
|
When a sensor has continuous measurements but sends limited messages over a
data network to a supervisor which estimates the state, the available packet
rate fixes the achievable quality of state estimation. When such rate limits
turn stringent, the sensor's messaging policy should be designed anew. What are
the good causal messaging policies? What should message packets contain? What
is the lowest possible distortion in a causal estimate at the supervisor? Is
Delta sampling better than periodic sampling? We answer these questions under
an idealized model of the network and the assumption of perfect measurements at
the sensor. For a scalar, linear diffusion process, we study the problem of
choosing the causal sampling times that will give the lowest aggregate squared
error distortion. We stick to finite horizons and impose a hard upper bound on
the number of allowed samples. We cast the design as a problem of choosing an
optimal sequence of stopping times. We reduce this to a nested sequence of
problems each asking for a single optimal stopping time. Under an unproven but
natural assumption about the least-square estimate at the supervisor, each of
these single stopping problems is of standard form. The optimal stopping times
are random times when the estimation error exceeds designed envelopes. For the
case where the state is a Brownian motion, we give analytically: the shape of
the optimal sampling envelopes, the shape of the envelopes under optimal Delta
sampling, and their performances. Surprisingly, we find that Delta sampling
performs badly. Hence, when the rate constraint is a hard limit on the number
of samples over a finite horizon, we should not use Delta sampling.
|
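The comparison in the abstract above lends itself to a quick numerical experiment. The sketch below (all parameter values and names are illustrative, not from the paper) simulates a standard Brownian motion, lets Delta sampling and periodic sampling each spend the same hard budget of samples, and accumulates the aggregate squared error when the supervisor's causal estimate is the last received value, which is exact for a martingale such as Brownian motion.

```python
import math
import random

def simulate(seed=1, n_steps=20000, budget=10, delta=0.35):
    """Compare Delta (threshold) sampling with periodic sampling of a
    standard Brownian motion on [0, 1] under a hard budget of samples.
    Returns the aggregate squared error distortion of each strategy."""
    rng = random.Random(seed)
    dt = 1.0 / n_steps
    w = 0.0                       # current state of the Brownian motion
    est_delta = est_per = 0.0     # supervisor estimates (last sample value)
    used = 0                      # samples spent by the Delta scheme
    dist_delta = dist_per = 0.0   # accumulated squared-error distortions
    period = n_steps // (budget + 1)
    for i in range(1, n_steps + 1):
        w += rng.gauss(0.0, math.sqrt(dt))
        # Delta sampling: transmit when the error exceeds the envelope
        if used < budget and abs(w - est_delta) >= delta:
            est_delta = w
            used += 1
        # Periodic sampling: transmit on a fixed schedule
        if i % period == 0 and i // period <= budget:
            est_per = w
        dist_delta += (w - est_delta) ** 2 * dt
        dist_per += (w - est_per) ** 2 * dt
    return dist_delta, dist_per
```

Averaging the two distortions over many seeds is the natural next step for reproducing the kind of comparison the abstract reports.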
0904.4449
|
DNA-Inspired Information Concealing
|
cs.IT cs.CR math.IT
|
Protection of the sensitive content is crucial for extensive information
sharing. We present a technique of information concealing, based on
introduction and maintenance of families of repeats. Repeats in DNA constitute
a basic obstacle for its reconstruction by hybridization.
|
0904.4458
|
Learning Character Strings via Mastermind Queries, with a Case Study
Involving mtDNA
|
cs.DS cs.CR cs.IT math.IT
|
We study the degree to which a character string, $Q$, leaks details about
itself any time it engages in comparison protocols with strings provided by a
querier, Bob, even if those protocols are cryptographically guaranteed to
produce no additional information other than the scores that assess the degree
to which $Q$ matches strings offered by Bob. We show that such scenarios allow
Bob to play variants of the game of Mastermind with $Q$ so as to learn the
complete identity of $Q$. We show that there are a number of efficient
implementations for Bob to employ in these Mastermind attacks, depending on
knowledge he has about the structure of $Q$, which show how quickly he can
determine $Q$. Indeed, we show that Bob can discover $Q$ using a number of
rounds of test comparisons that is much smaller than the length of $Q$, under
reasonable assumptions regarding the types of scores that are returned by the
cryptographic protocols and whether he can use knowledge about the distribution
that $Q$ comes from. We also provide the results of a case study we performed
on a database of mitochondrial DNA, showing the vulnerability of existing
real-world DNA data to the Mastermind attack.
|
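The flavor of such a Mastermind attack can be illustrated with the simplest score oracle: a Hamming-style count of matching positions. The naive position-by-position probe below uses far more queries than the paper's efficient attacks, and the function names are ours, but it shows how score-only feedback suffices to reconstruct the hidden string.

```python
def mastermind_recover(score, n, alphabet):
    """Recover a hidden length-n string over `alphabet` using only an
    oracle `score(guess)` that returns the number of positions where
    the guess agrees with the hidden string."""
    guess = [alphabet[0]] * n
    best = score(guess)
    for i in range(n):
        for sym in alphabet[1:]:
            old = guess[i]
            guess[i] = sym
            s = score(guess)
            if s > best:
                best = s          # new symbol matches position i; keep it
            else:
                guess[i] = old    # no improvement; revert
    return "".join(guess)
```

This uses at most n(|alphabet| - 1) + 1 queries; the paper's point is that, under reasonable assumptions on the scores, far fewer rounds suffice.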
0904.4525
|
Number of Measurements in Sparse Signal Recovery
|
cs.IT math.IT
|
We analyze the asymptotic performance of sparse signal recovery from noisy
measurements. In particular, we generalize some of the existing results for the
Gaussian case to subgaussian and other ensembles. An achievable result is
presented for the linear sparsity regime. A converse on the number of required
measurements in the sub-linear regime is also presented, which covers many of
the widely used measurement ensembles. Our converse idea makes use of a
correspondence between compressed sensing ideas and compound channels in
information theory.
|
0904.4526
|
Feasibility Conditions for Interference Alignment
|
cs.IT cs.AR math.IT
|
The degrees of freedom of MIMO interference networks with constant channel
coefficients are not known in general. Determining the feasibility of a linear
interference alignment solution is a key step toward solving this open problem.
Our approach in this paper is to view the alignment problem as a system of
bilinear equations and determine its solvability by comparing the number of
equations and the number of variables. To this end, we divide interference
alignment problems into two classes - proper and improper. An interference
alignment problem is called proper if the number of equations does not exceed
the number of variables. Otherwise, it is called improper. Examples are
presented to support the intuition that for generic channel matrices, proper
systems are almost surely feasible and improper systems are almost surely
infeasible.
|
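The equation-versus-variable count behind the proper/improper distinction can be sketched concretely. Assuming the standard bookkeeping for this setting, a user with an M_t x M_r link carrying d streams contributes d(M_t + M_r - 2d) free variables, and each interfering transmitter-receiver pair (i, j), i != j, contributes d_i * d_j bilinear alignment equations; the checker names below are ours.

```python
def is_proper(users):
    """users: list of (M_t, M_r, d) tuples, one per transmit-receive pair.
    Returns True if the alignment problem is proper (equations <= variables)."""
    # Free variables per user after discarding superfluous ones
    variables = sum(d * (mt + mr - 2 * d) for mt, mr, d in users)
    # One bilinear equation per pair of interfering streams
    equations = sum(users[i][2] * users[j][2]
                    for i in range(len(users))
                    for j in range(len(users))
                    if i != j)
    return equations <= variables
```

For example, three users with 2x2 links and one stream each give 6 equations against 6 variables (proper), while a fourth such user tips the count to 12 equations against 8 variables (improper).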
0904.4527
|
Limits of Learning about a Categorical Latent Variable under Prior
Near-Ignorance
|
cs.LG
|
In this paper, we consider the coherent theory of (epistemic) uncertainty of
Walley, in which beliefs are represented through sets of probability
distributions, and we focus on the problem of modeling prior ignorance about a
categorical random variable. In this setting, it is a known result that a state
of prior ignorance is not compatible with learning. To overcome this problem,
another state of beliefs, called \emph{near-ignorance}, has been proposed.
Near-ignorance resembles ignorance very closely, by satisfying some principles
that can arguably be regarded as necessary in a state of ignorance, and allows
learning to take place. What this paper does is to provide new and substantial
evidence that near-ignorance, too, cannot really be regarded as a way out of the
problem of starting statistical inference in conditions of very weak beliefs.
The key to this result is focusing on a setting characterized by a variable of
interest that is \emph{latent}. We argue that such a setting is by far the most
common case in practice, and we provide, for the case of categorical latent
variables (and general \emph{manifest} variables) a condition that, if
satisfied, prevents learning from taking place under prior near-ignorance. This
condition is shown to be easily satisfied even in the most common statistical
problems. We regard these results as a strong form of evidence against the
possibility of adopting a condition of prior near-ignorance in real statistical
problems.
|
0904.4530
|
Online Maximizing Weighted Throughput In A Fading Channel
|
cs.IT cs.DS math.IT
|
We consider online scheduling of weighted packets with time constraints over a
fading channel. Packets arrive at the transmitter in an online manner. Each
packet has a value and a deadline by which it should be sent. The fade state of
the channel determines the throughput obtained per unit of time and the
channel's quality may change over time. In this paper, we design online
algorithms to maximize weighted throughput, defined as the total value of the
packets sent by their respective deadlines. Competitive ratio is employed to
measure an online algorithm's performance. For this problem and one of its
variants, we present two online algorithms with competitive ratios 2.618 and 2
respectively.
|
0904.4541
|
Evaluation of Marton's Inner Bound for the General Broadcast Channel
|
cs.IT math.IT
|
The best known inner bound on the two-receiver general broadcast channel
without a common message is due to Marton [3]. This result was subsequently
generalized in [p. 391, Problem 10(c) 2] and [4] to broadcast channels with a
common message. However, the latter region is not computable (except in certain
special cases) as no bounds on the cardinality of its auxiliary random
variables exist. Nor is it even clear that the inner bound is a closed set. The
main obstacle in proving cardinality bounds is the fact that the traditional
use of the Carath\'{e}odory theorem, the main known tool for proving
cardinality bounds, does not yield a finite cardinality result. One of the main
contributions of this paper is the introduction of a new tool based on an
identity that relates the second derivative of the Shannon entropy of a
discrete random variable (under a certain perturbation) to the corresponding
Fisher information. In order to go beyond the traditional Carath\'{e}odory type
arguments, we identify certain properties that the auxiliary random variables
corresponding to the extreme points of the inner bound need to satisfy. These
properties are then used to establish cardinality bounds on the auxiliary
random variables of the inner bound, thereby proving the computability of the
region, and its closedness.
Lastly, we establish a conjecture of \cite{NairZizhou} that Marton's inner
bound and the recent outer bound of Nair and El Gamal do not match in general.
|
0904.4542
|
A Generalized Cut-Set Bound
|
cs.IT math.IT
|
In this paper, we generalize the well known cut-set bound to the problem of
lossy transmission of functions of arbitrarily correlated sources over a
discrete memoryless multiterminal network.
|
0904.4587
|
Adaptive Learning with Binary Neurons
|
cs.AI cs.NE
|
An efficient incremental learning algorithm for classification tasks, called
NetLines, well adapted for both binary and real-valued input patterns is
presented. It generates small compact feedforward neural networks with one
hidden layer of binary units and binary output units. A convergence theorem
ensures that solutions with a finite number of hidden units exist for both
binary and real-valued input patterns. An implementation for problems with more
than two classes, valid for any binary classifier, is proposed. The
generalization error and the size of the resulting networks are compared to the
best published results on well-known classification benchmarks. Early stopping
is shown to decrease overfitting, without improving the generalization
performance.
|
0904.4608
|
Temporal data mining for root-cause analysis of machine faults in
automotive assembly lines
|
cs.LG
|
Engine assembly is a complex and heavily automated distributed-control
process, with large amounts of fault data logged every day. We describe an
application of temporal data mining for analyzing fault logs in an engine
assembly plant. The frequent episode discovery framework is a model-free method
that can be used to deduce (temporal) correlations among events from the logs
in an efficient manner. In addition to being theoretically elegant and
computationally efficient, frequent episodes are also easy to interpret in the
form of actionable recommendations. Incorporation of domain-specific information
is critical to successful application of the method for analyzing fault logs in
the manufacturing domain. We show how domain-specific knowledge can be
incorporated using heuristic rules that act as pre-filters and post-filters to
frequent episode discovery. The system described here is currently being used
in one of the engine assembly plants of General Motors and is planned for
adaptation in other plants. To the best of our knowledge, this paper presents
the first real, large-scale application of temporal data mining in the
manufacturing domain. We believe that the ideas presented in this paper can
help practitioners engineer tools for analysis in other similar or related
application domains as well.
|
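The core counting primitive of frequent episode discovery can be sketched in a few lines. The scanner below handles only serial episodes (ordered event sequences) counted without overlap, and omits the expiry windows and filters the full framework supports; names are ours.

```python
def count_nonoverlapped(log, episode):
    """Count non-overlapped occurrences of a serial episode (an ordered
    tuple of event types) in an event log, scanning left to right."""
    count, idx = 0, 0
    for event in log:
        if event == episode[idx]:
            idx += 1
            if idx == len(episode):   # completed one occurrence
                count += 1
                idx = 0               # restart; occurrences do not overlap
    return count
```

An episode is reported as frequent when this count, relative to the log length, exceeds a chosen threshold; heuristic pre- and post-filters like those described above would restrict which episodes are counted at all.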
0904.4708
|
Quality Classifiers for Open Source Software Repositories
|
cs.SE cs.AI
|
Open Source Software (OSS) often relies on large repositories, like
SourceForge, for initial incubation. The OSS repositories offer a large variety
of meta-data providing interesting information about projects and their
success. In this paper we propose a data mining approach for training
classifiers on the OSS meta-data provided by such data repositories. The
classifiers learn to predict the successful continuation of an OSS project. The
`successfulness' of projects is defined in terms of the classifier confidence
with which it predicts that they could be ported to popular OSS projects (such
as FreeBSD, Gentoo Portage).
|
0904.4717
|
Continuous Strategy Replicator Dynamics for Multi--Agent Learning
|
cs.LG cs.AI cs.GT nlin.AO
|
The problem of multi-agent learning and adaptation has attracted a great deal
of attention in recent years. It has been suggested that the dynamics of
multi-agent learning can be studied using replicator equations from population
biology. Most existing studies so far have been limited to discrete strategy
spaces with a small number of available actions. In many cases, however, the
choices available to agents are better characterized by continuous spectra.
This paper suggests a generalization of the replicator framework that allows
one to study the adaptive dynamics of Q-learning agents with continuous strategy
spaces. Instead of probability vectors, agents' strategies are now characterized
by probability measures over continuous variables. As a result, the ordinary
differential equations for the discrete case are replaced by a system of
coupled integro-differential replicator equations that describe the mutual
evolution of individual agent strategies. We derive a set of functional
equations describing the steady state of the replicator dynamics, examine their
solutions for several two-player games, and confirm our analytical results
using simulations.
|
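The discrete-strategy baseline that this work generalizes is the ordinary replicator equation x_i' = x_i * ((Ax)_i - x.Ax). A forward-Euler sketch (payoff matrix and step size are illustrative choices of ours, not from the paper):

```python
def replicator_step(x, payoff, dt=0.01):
    """One forward-Euler step of the discrete-strategy replicator
    equation x_i' = x_i * ((A x)_i - x . A x)."""
    n = len(x)
    fitness = [sum(payoff[i][j] * x[j] for j in range(n)) for i in range(n)]
    avg = sum(x[i] * fitness[i] for i in range(n))   # population-average payoff
    return [x[i] + dt * x[i] * (fitness[i] - avg) for i in range(n)]
```

With the Hawk-Dove payoff [[-1, 2], [0, 1]] (V=2, C=4), iterating this step drives the Hawk share to the mixed equilibrium 1/2; the paper's continuous-strategy version replaces the probability vector x by a probability measure and the sum by an integral.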
0904.4727
|
Characterizations of Stable Model Semantics for Logic Programs with
Arbitrary Constraint Atoms
|
cs.AI cs.LO cs.PL
|
This paper studies the stable model semantics of logic programs with
(abstract) constraint atoms and their properties. We introduce a succinct
abstract representation of these constraint atoms in which a constraint atom is
represented compactly. We show two applications. First, under this
representation of constraint atoms, we generalize the Gelfond-Lifschitz
transformation and apply it to define stable models (also called answer sets)
for logic programs with arbitrary constraint atoms. The resulting semantics
turns out to coincide with the one defined by Son et al., which is based on a
fixpoint approach. One advantage of our approach is that it can be applied, in
a natural way, to define stable models for disjunctive logic programs with
constraint atoms, which may appear in the disjunctive head as well as in the
body of a rule. As a result, our approach to the stable model semantics for
logic programs with constraint atoms generalizes a number of previous
approaches. Second, we show that our abstract representation of constraint
atoms provides a means to characterize dependencies of atoms in a program with
constraint atoms, so that some standard characterizations and properties
relying on these dependencies in the past for logic programs with ordinary
atoms can be extended to logic programs with constraint atoms.
|
0904.4735
|
The Secrecy Capacity Region of the Degraded Vector Gaussian Broadcast
Channel
|
cs.IT math.IT
|
In this paper, we consider a scenario where a source node wishes to broadcast
two confidential messages for two respective receivers via a Gaussian MIMO
broadcast channel. A wire-tapper also receives the transmitted signal via
another MIMO channel. It is assumed that the channels are degraded and the
wire-tapper has the worst channel. We establish the capacity region of this
scenario. Our achievability scheme is a combination of the superposition of
Gaussian codes and randomization within the layers which we will refer to as
Secret Superposition Coding. For the outer bound, we use the notion of enhanced
channel to show that the secret superposition of Gaussian codes is optimal. It
is shown that we only need to enhance the channels of the legitimate receivers,
and the channel of the eavesdropper remains unchanged.
|
0904.4741
|
Belief-Propagation Decoding of Lattices Using Gaussian Mixtures
|
cs.IT math.IT
|
A belief-propagation decoder for low-density lattice codes is given which
represents messages explicitly as a mixture of Gaussian functions. The key
component is an algorithm for approximating a mixture of several Gaussians with
another mixture with a smaller number of Gaussians. This Gaussian mixture
reduction algorithm iteratively reduces the number of Gaussians by minimizing
the distance between the original mixture and an approximation with one fewer
Gaussian.
Error rates and noise thresholds of this decoder are compared with those for
the previously-proposed decoder which discretely quantizes the messages. The
error rates are indistinguishable for dimension 1000 and 10000 lattices, and
the Gaussian-mixture decoder has a 0.2 dB loss for dimension 100 lattices. The
Gaussian-mixture decoder has a loss of about 0.03 dB in the noise threshold,
which is evaluated via Monte Carlo density evolution. Further, the
Gaussian-mixture decoder uses far less storage for the messages.
|
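The pairwise-merge core of such a Gaussian mixture reduction can be sketched as follows. The merge preserves the mixture's first two moments; the merge cost is a cheap weighted mean-distance stand-in of our choosing, not the paper's distance measure.

```python
def merge(w1, m1, v1, w2, m2, v2):
    """Moment-preserving merge of two weighted 1-D Gaussians (w, mean, var)."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return (w, m, v)

def reduce_mixture(mix, target):
    """Greedily merge the cheapest pair of components until only
    `target` components remain. mix: list of (weight, mean, var)."""
    mix = list(mix)
    while len(mix) > target:
        best = None
        for i in range(len(mix)):
            for j in range(i + 1, len(mix)):
                wi, mi, _ = mix[i]
                wj, mj, _ = mix[j]
                # Illustrative cost: weighted squared distance between means
                cost = wi * wj / (wi + wj) * (mi - mj) ** 2
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        merged = merge(*mix[i], *mix[j])
        mix = [c for k, c in enumerate(mix) if k not in (i, j)] + [merged]
    return mix
```

Because each merge preserves the total weight and overall mean, the reduced mixture stays a valid approximation of the original message while shrinking the number of Gaussians one at a time.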
0904.4774
|
Dictionary Identification - Sparse Matrix-Factorisation via
$\ell_1$-Minimisation
|
cs.IT cs.LG math.IT
|
This article treats the problem of learning a dictionary providing sparse
representations for a given signal class, via $\ell_1$-minimisation. The
problem can also be seen as factorising a $\ddim \times \nsig$ matrix $Y=(y_1
... y_\nsig), y_n\in \R^\ddim$ of training signals into a $\ddim \times
\natoms$ dictionary matrix $\dico$ and a $\natoms \times \nsig$ coefficient
matrix $\X=(x_1... x_\nsig), x_n \in \R^\natoms$, which is sparse. The exact
question studied here is when a dictionary coefficient pair $(\dico,\X)$ can be
recovered as local minimum of a (nonconvex) $\ell_1$-criterion with input
$Y=\dico \X$. First, for general dictionaries and coefficient matrices,
algebraic conditions ensuring local identifiability are derived, which are then
specialised to the case when the dictionary is a basis. Finally, assuming a
random Bernoulli-Gaussian sparse model on the coefficient matrix, it is shown
that sufficiently incoherent bases are locally identifiable with high
probability. The perhaps surprising result is that the typically sufficient
number of training samples $\nsig$ grows up to a logarithmic factor only
linearly with the signal dimension, i.e. $\nsig \approx C \natoms \log
\natoms$, in contrast to previous approaches requiring combinatorially many
samples.
|
0904.4789
|
Frequency Domain Hybrid-ARQ Chase Combining for Broadband MIMO CDMA
Systems
|
cs.IT math.IT
|
In this paper, we consider high-speed wireless packet access using code
division multiple access (CDMA) and multiple-input multiple-output (MIMO).
Current wireless standards, such as high speed packet access (HSPA), have
adopted multi-code transmission and hybrid-automatic repeat request (ARQ) as
major technologies for delivering high data rates. The key technique in
hybrid-ARQ, is that erroneous data packets are kept in the receiver to
detect/decode retransmitted ones. This strategy is referred to as packet
combining. In CDMA MIMO-based wireless packet access, multi-code transmission
suffers from severe performance degradation due to the loss of code
orthogonality caused by both interchip interference (ICI) and co-antenna
interference (CAI). This limitation results in large transmission delays when
an ARQ mechanism is used in the link layer. In this paper, we investigate
efficient minimum mean square error (MMSE) frequency domain equalization
(FDE)-based iterative (turbo) packet combining for cyclic prefix (CP)-CDMA MIMO
with Chase-type ARQ. We introduce two turbo packet combining schemes: i) In the
first scheme, namely "chip-level turbo packet combining", MMSE FDE and packet
combining are jointly performed at the chip-level. ii) In the second scheme,
namely "symbol-level turbo packet combining", chip-level MMSE FDE and
despreading are separately carried out for each transmission, then packet
combining is performed at the level of the soft demapper. The computational
complexity and memory requirements of both techniques are quite insensitive to
the ARQ delay, i.e., maximum number of ARQ rounds. The throughput is evaluated
for some representative antenna configurations and load factors to show the
gains offered by the proposed techniques.
|
0904.4836
|
FaceBots: Steps Towards Enhanced Long-Term Human-Robot Interaction by
Utilizing and Publishing Online Social Information
|
cs.RO cs.AI cs.CV
|
Our project aims at supporting the creation of sustainable and meaningful
longer-term human-robot relationships through the creation of embodied robots
with face recognition and natural language dialogue capabilities, which exploit
and publish social information available on the web (Facebook). Our main
underlying experimental hypothesis is that such relationships can be
significantly enhanced if the human and the robot are gradually creating a pool
of shared episodic memories that they can co-refer to (shared memories), and if
they are both embedded in a social web of other humans and robots they both
know and encounter (shared friends). In this paper, we present such a robot,
which, as we will see, achieves two significant novelties.
|
0904.4863
|
A two-stage algorithm for extracting the multiscale backbone of complex
weighted networks
|
physics.soc-ph cs.SI physics.data-an stat.AP
|
The central problem of concern to Serrano, Boguna and Vespignani ("Extracting
the multiscale backbone of complex weighted networks", Proc Natl Acad Sci
106:6483-6488 [2009]) can be effectively and elegantly addressed using a
well-established two-stage algorithm that has been applied to internal
migration flows for numerous nations and several other forms of "transaction
flow data".
|
0904.4900
|
On optimal precoding in linear vector Gaussian channels with arbitrary
input distribution
|
cs.IT math.IT
|
The design of the precoder that maximizes the mutual information in linear
vector Gaussian channels with an arbitrary input distribution is studied.
Precisely, the precoder's optimal left singular vectors and singular values are
derived. The characterization of the right singular vectors is left, in
general, as an open problem whose computational complexity is then studied in
three cases: Gaussian signaling, low SNR, and high SNR. For the Gaussian
signaling case and the low SNR regime, the dependence of the mutual information
on the right singular vectors vanishes, making the optimal precoder design
problem easy to solve. In the high SNR regime, however, the dependence on the
right singular vectors cannot be avoided and we show the difficulty of
computing the optimal precoder through an NP-hardness analysis.
|
0904.4921
|
Renormalization and computation I: motivation and background
|
math.QA cs.IT math.IT
|
In this paper I argue that infinities in the classical computation theory
such as the unsolvability of the Halting Problem can be addressed in the same
way as Feynman divergences in Quantum Field Theory, and that meaningful
versions of renormalization in this context can be devised. Connections with
quantum computation are also touched upon.
|
0904.4926
|
Variable-Rate M-PSK Communications without Channel Amplitude Estimation
|
cs.IT math.IT
|
Channel estimation at the receiver side is essential to adaptive modulation
schemes, prohibiting low complexity systems from using variable rate and/or
variable power transmissions. Towards providing a solution to this problem, we
introduce a variable-rate (VR) M-PSK modulation scheme, for communications over
fading channels, in the absence of channel gain estimation at the receiver. The
choice of the constellation size is based on the signal-plus-noise (S+N)
sampling value rather than on the signal-to-noise ratio (S/N). It is
analytically shown that S+N can serve as an excellent simpler criterion,
alternative to S/N, for determining the modulation order in VR systems. In this
way, low complexity transceivers can use VR transmissions in order to increase
their spectral efficiency under an error performance constraint. As an
application, we utilize the proposed VR modulation scheme in equal gain
combining (EGC) diversity receivers.
|
0905.0024
|
Theoretical Analysis of Cyclic Frequency Domain Noise and Feature
Detection for Cognitive Radio Systems
|
cs.IT math.IT
|
In cognitive radio systems, cyclostationary feature detection plays an
important role in spectrum sensing, especially in low SNR cases. To configure
the detection threshold under a certain noise level and a pre-set miss
detection probability Pf, it is important to derive the theoretical distribution
of the observation variable. In this paper, noise distribution in cyclic
frequency domain has been studied and Generalized Extreme Value (GEV)
distribution is found to be a precise match. Maximum likelihood estimation is
applied to estimate the parameters of GEV. Monte Carlo simulation has been
carried out to show that the simulated ROC curve coincides with the
theoretical ROC curve, which validates the theoretical distribution model.
|
0905.0036
|
On the Secrecy Rate Region for the Interference Channel
|
cs.IT math.IT
|
This paper studies interference channels with security constraints. The
existence of an external eavesdropper in a two-user interference channel is
assumed, where the network users would like to secure their messages from the
external eavesdropper. The cooperative binning and channel prefixing scheme is
proposed for this system model which allows users to cooperatively add
randomness to the channel in order to degrade the observations of the external
eavesdropper. This scheme allows users to add randomness to the channel in two
ways: 1) Users cooperate in their design of the binning codebooks, and 2) Users
cooperatively exploit the channel prefixing technique. As an example, the
channel prefixing technique is exploited in the Gaussian case to transmit a
superposition signal consisting of binning codewords and independently
generated noise samples. Gains obtained from the cooperative binning and
channel prefixing scheme compared to the single user scenario reveal the
positive effect of interference in increasing the network security. Remarkably,
interference can be exploited to cooperatively add randomness into the network
in order to enhance the security.
|
0905.0044
|
ADMiRA: Atomic Decomposition for Minimum Rank Approximation
|
math.NA cs.IT math.IT
|
We address the inverse problem that arises in compressed sensing of a
low-rank matrix. Our approach is to pose the inverse problem as an
approximation problem with a specified target rank of the solution. A simple
search over the target rank then provides the minimum rank solution satisfying
a prescribed data approximation bound. We propose an atomic decomposition that
provides an analogy between parsimonious representations of a sparse vector and
a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for
the vector case are extended to the matrix case through this atomic
decomposition. In particular, we propose an efficient and guaranteed algorithm
named ADMiRA that extends CoSaMP, its analogue for the vector case. The
performance guarantee is given in terms of the rank-restricted isometry
property and bounds both the number of iterations and the error in the
approximate solution for the general case where the solution is approximately
low-rank and the measurements are noisy. With a sparse measurement operator
such as the one arising in the matrix completion problem, the computation in
ADMiRA is linear in the number of measurements. The numerical experiments for
the matrix completion problem show that, although the measurement operator in
this case does not satisfy the rank-restricted isometry property, ADMiRA is a
competitive algorithm for matrix completion.
|
0905.0079
|
Multiple-Bases Belief-Propagation Decoding of High-Density Cyclic Codes
|
cs.IT math.IT
|
We introduce a new method for decoding short and moderate length linear block
codes with dense parity-check matrix representations of cyclic form, termed
multiple-bases belief-propagation (MBBP). The proposed iterative scheme makes
use of the fact that a code has many structurally diverse parity-check
matrices, capable of detecting different error patterns. We show that this
inherent code property leads to decoding algorithms with significantly better
performance when compared to standard BP decoding. Furthermore, we describe how
to choose sets of parity-check matrices of cyclic form amenable for
multiple-bases decoding, based on analytical studies performed for the binary
erasure channel. For several cyclic and extended cyclic codes, the MBBP
decoding performance can be shown to closely follow that of maximum-likelihood
decoders.
|
0905.0192
|
Fuzzy Mnesors
|
cs.AI
|
A fuzzy mnesor space is a semimodule over the positive real numbers. It can
be used as a theoretical framework for fuzzy sets. Hence we can prove a great
number of properties of fuzzy sets without referring to the membership
functions.
|
0905.0197
|
An Application of Proof-Theory in Answer Set Programming
|
cs.AI
|
We apply proof-theoretic techniques in Answer Set Programming. The main
results include: 1. A characterization of continuity properties of the
Gelfond-Lifschitz operator for logic programs. 2. A propositional
characterization of stable models of logic programs (without referring to loop
formulas).
|
0905.0233
|
Robust Principal Component Analysis: Exact Recovery of Corrupted
Low-Rank Matrices
|
cs.IT math.IT
|
This paper has been withdrawn due to a critical error near equation (71).
This error causes the entire argument of the paper to collapse.
Emmanuel Candes of Stanford discovered the error, and has suggested a correct
analysis, which will be reported in a separate publication.
|
0905.0266
|
Gaussian Belief with dynamic data and in dynamic network
|
cs.AI cond-mat.stat-mech cs.IT math.IT physics.soc-ph
|
In this paper we analyse Belief Propagation over a Gaussian model in a
dynamic environment. Recently, this has been proposed as a method to average
local measurement values by a distributed protocol ("Consensus Propagation",
Moallemi & Van Roy, 2006), where the average is available for read-out at every
single node. In the case that the underlying network is constant but the values
to be averaged fluctuate ("dynamic data"), convergence and accuracy are
determined by the spectral properties of an associated Ruelle-Perron-Frobenius
operator. For Gaussian models on Erdos-Renyi graphs, numerical computation
points to a spectral gap remaining in the large-size limit, implying
exceptionally good scalability. In a model where the underlying network also
fluctuates ("dynamic network"), averaging is more effective than in the dynamic
data case. Altogether, this implies very good performance of these methods in
very large systems, and opens a new field of statistical physics of large (and
dynamic) information systems.
|
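As a simple point of comparison for this distributed-averaging setting, plain linear consensus (not the paper's Consensus Propagation, and with illustrative step size and round count) already shows every node of a fixed graph converging to the network average:

```python
def consensus_average(values, edges, rounds=200, step=0.3):
    """Plain linear consensus: each node repeatedly moves toward the
    mean of its neighbours; on a regular connected graph every node
    converges to the network-wide average."""
    x = list(values)
    nbrs = {i: [] for i in range(len(values))}
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(rounds):
        x = [xi + step * sum(x[j] - xi for j in nbrs[i]) / max(len(nbrs[i]), 1)
             for i, xi in enumerate(x)]
    return x
```

Consensus Propagation replaces this fixed-gain update with Gaussian belief-propagation messages, which is what makes the spectral analysis sketched in the abstract (via the associated operator) the right tool for its dynamic-data setting.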
0905.0374
|
Interference Alignment with Limited Feedback
|
cs.IT math.IT
|
We consider single-antenna interference networks where M sources, each with
an average transmit power of P/M, communicate with M destinations over
frequency-selective channels (with L taps each) and each destination has
perfect knowledge of its channels from each of the sources. Assuming that there
exist error-free non-interfering broadcast feedback links from each destination
to all the nodes (i.e., sources and destinations) in the network, we show that
naive interference alignment, in conjunction with vector quantization of the
impulse response coefficients according to the scheme proposed in Mukkavilli et
al., IEEE Trans. IT, 2003, achieves full spatial multiplexing gain of M/2,
provided that the number of feedback bits broadcast by each destination is at
least M(L-1) log P.
|
0905.0385
|
Diversity-Multiplexing tradeoff of the Two-User Interference Channel
|
cs.IT math.IT
|
Diversity-Multiplexing tradeoff (DMT) is a coarse high SNR approximation of
the fundamental tradeoff between data rate and reliability in a slow fading
channel. In this paper, we characterize the fundamental DMT of the two user
single antenna Gaussian interference channel. We show that the class of
multilevel superposition coding schemes universally achieves (for all fading
statistics) the DMT for the two-user interference channel. For the special case
of symmetric DMT, when the two users have identical rate and diversity gain
requirements, we characterize the DMT achieved by the Han-Kobayashi scheme,
which corresponds to two level superposition coding.
|
0905.0397
|
A representation of non-uniformly sampled deterministic and random
signals and their reconstruction using sample values and derivatives
|
cs.IT math.IT
|
Shannon in his 1949 paper suggested the use of derivatives to increase the
W*T product of the sampled signal. Use of derivatives enables improved
reconstruction particularly in the case of non-uniformly sampled signals. An
FM-AM representation for Lagrange/Hermite type interpolation and a
reconstruction technique are discussed. The representation using a product of a
polynomial and exponential of a polynomial is extensible to two dimensions.
When the directly available information is inadequate, estimation of the
signal and its derivative based on the correlation characteristics of Gaussian
filtered noise has been studied. This requires computation of incomplete normal
integrals. Methods for reducing multivariate normal variables include
multistage partitioning, dynamic path integral, and Hermite expansion for
computing the probability integrals necessary for estimating the mean of the
signal and its derivative at points intermediate between zero or threshold
crossings. The signals and their derivatives as measured or estimated are
utilized to reconstruct the signal at a desired sampling rate.
|
0905.0417
|
Two-Level Fingerprinting Codes
|
cs.IT cs.CR math.IT
|
We introduce the notion of two-level fingerprinting and traceability codes.
In this setting, the users are organized in a hierarchical manner by
classifying them into various groups; for instance, by dividing the
distribution area into several geographic regions, and collecting users from
the same region into one group. Two-level fingerprinting and traceability codes
have the following property: As in traditional (one-level) codes, when given an
illegal copy produced by a coalition of users, the decoder identifies one of
the guilty users if the coalition size is less than a certain threshold $t$.
Moreover, even when the coalition is of a larger size $s$ $(> t)$, the decoder
still provides partial information by tracing one of the groups containing a
guilty user.
We establish sufficient conditions for a code to possess the two-level
traceability property. In addition, we also provide constructions for two-level
fingerprinting codes and characterize the corresponding set of achievable
rates.
|
0905.0440
|
Tandem Coding and Cryptography on Wiretap Channels: EXIT Chart Analysis
|
cs.IT cs.CR math.IT
|
Traditional cryptography assumes an eavesdropper receives an error-free copy
of the transmitted ciphertext. Wyner's wiretap channel model recognizes that at
the physical layer both the intended receiver and the passive eavesdropper
inevitably receive an error-prone version of the transmitted message which must
be corrected prior to decryption. This paper considers the implications of
using both channel and cryptographic codes under the wiretap channel model in a
way that enhances the \emph{information-theoretic} security for the friendly
parties by keeping the information transfer to the eavesdropper small. We
consider a secret-key cryptographic system with a linear feedback shift
register (LFSR)-based keystream generator and observe the mutual information
between an LFSR-generated sequence and the received noise-corrupted ciphertext
sequence under a known-plaintext scenario. The effectiveness of a noniterative
fast correlation attack, which reduces the search time in a brute-force attack,
is shown to be correlated with this mutual information. For an iterative fast
correlation attack on this cryptographic system, it is shown that an EXIT chart
and mutual information are very good predictors of decoding success and failure
by a passive eavesdropper.
|
0905.0541
|
Design and Analysis of Successive Decoding with Finite Levels for the
Markov Channel
|
cs.IT math.IT
|
This paper proposes a practical successive decoding scheme with finite levels
for the finite-state Markov channels where there is no a priori state
information at the transmitter or the receiver. The design employs either a
random interleaver or a deterministic interleaver with an irregular pattern and
an optional iterative estimation and decoding procedure within each level. The
interleaver design criteria may be the achievable rate or the extrinsic
information transfer (EXIT) chart, depending on the receiver type. For random
interleavers, the optimization problem is solved efficiently using a
pilot-utility function, while for deterministic interleavers, a good
construction is given using empirical rules. Simulation results demonstrate
that the new successive decoding scheme combined with irregular low-density
parity-check codes can approach the identically and uniformly distributed
(i.u.d.) input capacity on the Markov-fading channel using only a few levels.
|
0905.0564
|
Selective Cooperative Relaying over Time-Varying Channels
|
cs.IT math.IT
|
In selective cooperative relaying only a single relay out of the set of
available relays is activated, hence the available power and bandwidth
resources are efficiently utilized. However, implementing selective cooperative
relaying in time-varying channels may cause frequent relay switchings that
deteriorate the overall performance. In this paper, we study the rate at which
a relay switching occurs in selective cooperative relaying applications in
time-varying fading channels. In particular, we derive closed-form expressions
for the relay switching rate (measured in Hz) for opportunistic relaying (OR)
and distributed switch and stay combining (DSSC). Additionally, expressions for
the average relay activation time for both of the considered schemes are also
provided, reflecting the average time that a selected relay remains active
until a switching occurs. Numerical results show that DSSC yields
considerably lower relay switching rates than OR, along with larger average
relay activation times, rendering it a better candidate for implementation of
relay selection in fast fading environments.
|
0905.0586
|
WinBioinfTools: Bioinformatics Tools for Windows High Performance
Computing Server 2008
|
cs.MS cs.CE q-bio.QM
|
Open-source bioinformatics tools running under MS Windows are rare, and
those running under a Windows HPC cluster are almost non-existent. This is
despite the fact that Windows is the most popular operating system among life
scientists. We therefore introduce WinBioinfTools, a toolkit containing a
number of bioinformatics tools running under Windows High Performance
Computing Server 2008. It is an open-source package to which users and
developers can contribute. We start with three programs from the area of
sequence analysis: 1) CoCoNUT for pairwise genome comparison, 2) parallel
BLAST for biological database search, and 3) parallel global pairwise sequence
alignment. In this report, we focus on technical aspects of how some
components of these tools were ported from the Linux/Unix environment to run
under Windows. We also show the advantages of using the Windows HPC Cluster
2008. We demonstrate by experiments the performance gain achieved when using a
computer cluster rather than a single machine. Furthermore, we compare the
performance of WinBioinfTools on Windows and Linux clusters.
|
0905.0606
|
Quantization for Soft-Output Demodulators in Bit-Interleaved Coded
Modulation Systems
|
cs.IT math.IT
|
We study quantization of log-likelihood ratios (LLR) in bit-interleaved coded
modulation (BICM) systems in terms of an equivalent discrete channel. We
propose to design the quantizer such that the quantizer outputs become
equiprobable. We investigate semi-analytically and numerically the ergodic and
outage capacity over single- and multiple-antenna channels for different
quantizers. Finally, we show bit error rate simulations for BICM systems with
LLR quantization using a rate 1/2 low-density parity-check code.
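The abstract's equiprobable-output idea can be sketched as follows (my illustration, not the paper's design procedure): pick the quantizer thresholds as empirical quantiles of the observed LLRs, so that each output cell is hit equally often. The Gaussian LLR source and the choice of 4 levels are assumptions for the example.

```python
import random

def equiprobable_thresholds(llrs, num_levels):
    """Quantile-based thresholds so each quantizer output is (empirically) equiprobable."""
    xs = sorted(llrs)
    n = len(xs)
    return [xs[(i * n) // num_levels] for i in range(1, num_levels)]

def quantize(x, thresholds):
    """Return the index of the quantizer cell containing x."""
    idx = 0
    for t in thresholds:
        if x >= t:
            idx += 1
    return idx

random.seed(0)
# Hypothetical LLR samples; a real system would use LLRs from the demodulator.
llrs = [random.gauss(0.0, 2.0) for _ in range(10000)]
th = equiprobable_thresholds(llrs, 4)
counts = [0] * 4
for x in llrs:
    counts[quantize(x, th)] += 1
# Each of the 4 cells should contain roughly a quarter of the samples.
assert max(counts) - min(counts) < 0.05 * len(llrs)
```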
|
0905.0619
|
On the Sensitivity of Noncoherent Capacity to the Channel Model
|
cs.IT math.IT
|
The noncoherent capacity of stationary discrete-time fading channels is known
to be very sensitive to the fine details of the channel model. More
specifically, the measure of the set of harmonics where the power spectral
density of the fading process is nonzero determines if capacity grows
logarithmically in SNR or slower than logarithmically. An engineering-relevant
problem is to characterize the SNR value at which this sensitivity starts to
matter.
In this paper, we consider the general class of continuous-time
Rayleigh-fading channels that satisfy the wide-sense stationary
uncorrelated-scattering (WSSUS) assumption and are, in addition, underspread.
For this class of channels, we show that the noncoherent capacity is close to
the AWGN capacity for all SNR values of practical interest, independently of
whether the scattering function is compactly supported or not. As a byproduct
of our analysis, we obtain an information-theoretic pulse-design criterion for
orthogonal frequency-division multiplexing systems.
|
0905.0642
|
Simultaneous support recovery in high dimensions: Benefits and perils of
block $\ell_1/\ell_\infty$-regularization
|
math.ST cs.IT math.IT stat.TH
|
Consider the use of $\ell_{1}/\ell_{\infty}$-regularized regression for joint
estimation of a $p \times r$ matrix of regression coefficients. We
analyze the high-dimensional scaling of $\ell_1/\ell_\infty$-regularized
quadratic programming, considering both consistency in $\ell_\infty$-norm, and
variable selection. We begin by establishing bounds on the $\ell_\infty$-error
as well as sufficient conditions for exact variable selection for fixed and
random designs. Our second set of results applies to $r = 2$ linear regression
problems with standard Gaussian designs whose supports overlap in a fraction
$\alpha \in [0,1]$ of their entries: for this problem class, we prove that the
$\ell_{1}/\ell_{\infty}$-regularized method undergoes a phase transition--that
is, a sharp change from failure to success--characterized by the rescaled
sample size $\theta_{1,\infty}(n, p, s, \alpha) = n/\{(4 - 3 \alpha) s
\log(p-(2- \alpha) s)\}$. An implication of this threshold is that use of
$\ell_1 / \ell_{\infty}$-regularization yields improved statistical efficiency
if the overlap parameter is large enough ($\alpha > 2/3$), but has \emph{worse}
statistical efficiency than a naive Lasso-based approach for moderate to small
overlap ($\alpha < 2/3$). These results indicate that some caution needs to be
exercised in the application of $\ell_1/\ell_\infty$ block regularization: if
the data does not match its structure closely enough, it can impair statistical
performance relative to computationally less expensive schemes.
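As a quick numerical illustration (hypothetical problem sizes $n$, $p$, $s$; natural logarithm assumed), the rescaled sample size from the abstract can be evaluated to see how the threshold behaves on either side of the $\alpha = 2/3$ crossover:

```python
import math

def rescaled_sample_size(n, p, s, alpha):
    # theta_{1,infty}(n, p, s, alpha) = n / {(4 - 3*alpha) * s * log(p - (2 - alpha)*s)}
    return n / ((4 - 3 * alpha) * s * math.log(p - (2 - alpha) * s))

# Hypothetical problem sizes. For fixed n, larger overlap alpha shrinks the
# denominator (4 - 3*alpha), so the same sample size yields a larger rescaled
# value, i.e. better statistical efficiency for large overlap.
p, s, n = 1000, 10, 400
theta_low = rescaled_sample_size(n, p, s, 0.2)   # small overlap
theta_high = rescaled_sample_size(n, p, s, 0.9)  # large overlap
assert theta_high > theta_low > 0
```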
|
0905.0677
|
Feasibility of random basis function approximators for modeling and
control
|
cs.NE cs.AI
|
We discuss the role of random basis function approximators in modeling and
control. We analyze the published work on random basis function approximators
and demonstrate that their favorable error rate of convergence O(1/n) is
guaranteed only with very substantial computational resources. We also discuss
implications of our analysis for applications of neural networks in modeling
and control.
|
0905.0721
|
Diversity-Multiplexing Tradeoff in Fading Interference Channels
|
cs.IT math.IT
|
We analyze two-user single-antenna fading interference channels with perfect
receive channel state information (CSI) and no transmit CSI. We compute the
diversity-multiplexing tradeoff (DMT) region of a fixed-power-split Han and
Kobayashi (HK)-type superposition coding scheme and provide design criteria for
the corresponding superposition codes. We demonstrate that this scheme is
DMT-optimal under moderate, strong, and very strong interference by showing
that it achieves a DMT outer bound that we derive. Further, under very strong
interference, we show that a joint decoder is DMT-optimal and "decouples" the
fading interference channel, i.e., from a DMT perspective, it is possible to
transmit as if the interfering user were not present. In addition, we show
that, under very strong interference, decoding interference while treating the
intended signal as noise, subtracting the result out, and then decoding the
desired signal, a process known as "stripping", achieves the optimal DMT
region. Our proofs are constructive in the sense that code design criteria for
achieving DMT-optimality (in the cases where we can demonstrate it) are
provided.
|
0905.0740
|
A FORTRAN coded regular expression Compiler for IBM 1130 Computing
System
|
cs.CL cs.PL
|
REC (Regular Expression Compiler) is a concise programming language which
allows students to write programs without knowledge of the complicated syntax
of languages like FORTRAN and ALGOL. The language is recursive and contains
only four elements for control. This paper describes an interpreter of REC
written in FORTRAN.
|
0905.0747
|
Self-stabilizing Deterministic Gathering
|
cs.MA
|
In this paper, we investigate the possibility of deterministically solving the
gathering problem (GP) with weak robots (anonymous, autonomous, disoriented,
deaf and dumb, and oblivious). We introduce strong multiplicity detection as
the ability for the robots to detect the exact number of robots located at a
given position. We show that with strong multiplicity detection, there exists a
deterministic self-stabilizing algorithm solving GP for n robots if, and only
if, n is odd.
|
0905.0749
|
Soft Motion Trajectory Planner for Service Manipulator Robot
|
cs.RO
|
Human interaction introduces two main constraints: Safety and Comfort.
Therefore, a service robot manipulator cannot be controlled like an industrial
robotic manipulator, where personnel are isolated from the robot's work
envelope. In this paper, we present a soft motion trajectory planner that
tries to ensure that these constraints are satisfied. This planner can be used
online to establish visual and force control loops suitable in the presence of
humans. The cubic trajectories built by this planner are good candidates as
the output of a manipulation task planner. The obtained system is then
homogeneous from task planning to robot control. The soft motion trajectory
planner limits jerk, acceleration, and velocity in Cartesian space using
quaternions. Experimental results carried out on a Mitsubishi PA10-6CE arm are
presented.
|
0905.0794
|
Constructions of Almost Optimal Resilient Boolean Functions on Large
Even Number of Variables
|
cs.IT cs.CR math.IT
|
In this paper, a technique on constructing nonlinear resilient Boolean
functions is described. By using several sets of disjoint spectra functions on
a small number of variables, an almost optimal resilient function on a large
even number of variables can be constructed. It is shown that given any $m$,
one can construct infinitely many $n$-variable ($n$ even), $m$-resilient
functions with nonlinearity $>2^{n-1}-2^{n/2}$. A large class of previously
unknown highly nonlinear resilient functions is obtained. Then a method to
optimize the degree of the constructed functions is proposed. Finally,
an improved version of the main construction is given.
|
0905.0838
|
What is the Value of Joint Processing of Pilots and Data in Block-Fading
Channels?
|
cs.IT math.IT
|
The spectral efficiency achievable with joint processing of pilot and data
symbol observations is compared with that achievable through the conventional
(separate) approach of first estimating the channel on the basis of the pilot
symbols alone, and subsequently detecting the data symbols. Studied on the
basis of a mutual information lower bound, joint processing is found to provide
a non-negligible advantage relative to separate processing, particularly for
fast fading. It is shown that, regardless of the fading rate, only a very small
number of pilot symbols (at most one per transmit antenna and per channel
coherence interval) should be transmitted if joint processing is allowed.
|
0905.0940
|
A Large-Deviation Analysis of the Maximum-Likelihood Learning of Markov
Tree Structures
|
stat.ML cs.IT math.IT
|
The problem of maximum-likelihood (ML) estimation of discrete tree-structured
distributions is considered. Chow and Liu established that ML-estimation
reduces to the construction of a maximum-weight spanning tree using the
empirical mutual information quantities as the edge weights. Using the theory
of large-deviations, we analyze the exponent associated with the error
probability of the event that the ML-estimate of the Markov tree structure
differs from the true tree structure, given a set of independently drawn
samples. By exploiting the fact that the output of ML-estimation is a tree, we
establish that the error exponent is equal to the exponential rate of decay of
a single dominant crossover event. We prove that in this dominant crossover
event, a non-neighbor node pair replaces a true edge of the distribution that
is along the path of edges in the true tree graph connecting the nodes in the
non-neighbor pair. Using ideas from Euclidean information theory, we then
analyze the scenario of ML-estimation in the very noisy learning regime and
show that the error exponent can be approximated as a ratio, which is
interpreted as the signal-to-noise ratio (SNR) for learning tree distributions.
We show via numerical experiments that in this regime, our SNR approximation is
accurate.
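The Chow-Liu reduction mentioned above (ML estimation of a tree-structured distribution = maximum-weight spanning tree with empirical mutual information edge weights) can be sketched as follows. The three-node Markov chain and its flip probabilities are hypothetical test data of my own, not from the paper:

```python
import math
import random
from collections import Counter
from itertools import combinations

def empirical_mi(samples, i, j):
    """Empirical mutual information (in nats) between coordinates i and j."""
    n = len(samples)
    pij = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    mi = 0.0
    for (a, b), c in pij.items():
        mi += (c / n) * math.log((c / n) / ((pi[a] / n) * (pj[b] / n)))
    return mi

def chow_liu_tree(samples, d):
    """Maximum-weight spanning tree (Kruskal) with empirical MI edge weights."""
    edges = sorted(((empirical_mi(samples, i, j), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Hypothetical Markov chain X0 -> X1 -> X2 with 10% flip probability per edge:
# the ML tree should recover the path 0-1-2, not the weaker 0-2 "crossover" edge.
random.seed(1)
samples = []
for _ in range(5000):
    x0 = random.random() < 0.5
    x1 = x0 if random.random() < 0.9 else not x0
    x2 = x1 if random.random() < 0.9 else not x1
    samples.append((int(x0), int(x1), int(x2)))
tree = chow_liu_tree(samples, 3)
assert sorted(tuple(sorted(e)) for e in tree) == [(0, 1), (1, 2)]
```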
|
0905.1056
|
Bringing Toric Codes to the next dimension
|
math.AG cs.IT math.IT
|
This paper is concerned with the minimum distance computation for higher
dimensional toric codes defined by lattice polytopes. We show that the minimum
distance is multiplicative with respect to taking the product of polytopes, and
behaves in a simple way when one builds a k-dilate of a pyramid over a
polytope. This allows us to construct a large class of examples of higher
dimensional toric codes where we can compute the minimum distance explicitly.
|
0905.1130
|
Statistical Automatic Summarization in Organic Chemistry
|
cs.IR cs.CL
|
We present an oriented numerical summarizer algorithm, applied to producing
automatic summaries of scientific documents in Organic Chemistry. We present
its implementation named Yachs (Yet Another Chemistry Summarizer) that combines
a specific document pre-processing with a sentence scoring method relying on
the statistical properties of documents. We show that Yachs achieves the best
results among several other summarizers on a corpus of Organic Chemistry
articles.
|
0905.1187
|
The Residual Method for Regularizing Ill-Posed Problems
|
math.OC cs.SY
|
Although the \emph{residual method}, or \emph{constrained regularization}, is
frequently used in applications, a detailed study of its properties is still
missing. This sharply contrasts with the progress of the theory of Tikhonov
regularization, where a series of new results for regularization in Banach
spaces has been published in recent years. The present paper intends to
bridge the gap between the existing theories as far as possible. We develop a
stability and convergence theory for the residual method in general topological
spaces. In addition, we prove convergence rates in terms of (generalized)
Bregman distances, which can also be applied to non-convex regularization
functionals. We provide three examples that show the applicability of our
theory. The first example is the regularized solution of linear operator
equations on $L^p$-spaces, where we show that the results of Tikhonov
regularization generalize unchanged to the residual method. As a second
example, we consider the problem of density estimation from a finite number of
sampling points, using the Wasserstein distance as a fidelity term and an
entropy measure as regularization term. It is shown that the densities obtained
in this way depend continuously on the location of the sampled points and that
the underlying density can be recovered as the number of sampling points tends
to infinity. Finally, we apply our theory to compressed sensing. Here, we show
the well-posedness of the method and derive convergence rates both for convex
and non-convex regularization under rather weak conditions.
|
0905.1215
|
Tail Behavior of Sphere-Decoding Complexity in Random Lattices
|
cs.IT cs.CC math.IT math.ST stat.TH
|
We analyze the (computational) complexity distribution of sphere-decoding
(SD) for random infinite lattices. In particular, we show that under fairly
general assumptions on the statistics of the lattice basis matrix, the tail
behavior of the SD complexity distribution is solely determined by the inverse
volume of a fundamental region of the underlying lattice. Particularizing this
result to NxM, N>=M, i.i.d. Gaussian lattice basis matrices, we find that the
corresponding complexity distribution is of Pareto-type with tail exponent
given by N-M+1. We furthermore show that this tail exponent is not improved by
lattice-reduction, which includes layer-sorting as a special case.
|
0905.1235
|
The Modular Audio Recognition Framework (MARF) and its Applications:
Scientific and Software Engineering Notes
|
cs.SD cs.CL cs.CV cs.MM cs.NE
|
MARF is an open-source research platform and a collection of
voice/sound/speech/text and natural language processing (NLP) algorithms
written in Java and arranged into a modular and extensible framework
facilitating addition of new algorithms. MARF can run distributively over the
network and may act as a library in applications or be used as a source for
learning and extension. A few example applications are provided to show how to
use the framework. There is an API reference in the Javadoc format as well as
this set of accompanying notes with the detailed description of the
architectural design, algorithms, and applications. MARF and its applications
are released under a BSD-style license and are hosted at SourceForge.net. This
document provides the details and the insight on the internals of MARF and some
of the mentioned applications.
|
0905.1305
|
On the Distribution of the Sum of Gamma-Gamma Variates and Applications
in RF and Optical Wireless Communications
|
cs.IT math.IT
|
The Gamma-Gamma (GG) distribution has recently attracted the interest within
the research community due to its involvement in various communication systems.
In the context of RF wireless communications, the GG distribution accurately models
the power statistics in composite shadowing/fading channels as well as in
cascade multipath fading channels, while in optical wireless (OW) systems, it
describes the fluctuations of the irradiance of optical signals distorted by
atmospheric turbulence. Although the GG channel model offers analytical
tractability in the analysis of single input single output (SISO) wireless
systems, difficulties arise when studying multiple input multiple output (MIMO)
systems, where the distribution of the sum of independent GG variates is
required. In this paper, we present a novel simple closed-form approximation
for the distribution of the sum of independent, but not necessarily identically
distributed GG variates. It is shown that the probability density function
(PDF) of the GG sum can be efficiently approximated either by the PDF of a
single GG distribution, or by a finite weighted sum of PDFs of GG
distributions. To reveal the importance of the proposed approximation, the
performance of RF wireless systems in the presence of composite fading, as well
as MIMO OW systems impaired by atmospheric turbulence, are investigated.
Numerical results and simulations illustrate the accuracy of the proposed
approach.
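As a hedged illustration of the GG model itself (not of the paper's closed-form approximation): a Gamma-Gamma variate can be simulated as the product of two independent unit-mean Gamma variates, and the mean of the sum checked by Monte Carlo. The shape parameters and sample counts below are arbitrary choices for the sketch:

```python
import random

def gamma_gamma_sample(alpha, beta, rng):
    """One GG variate: product of two independent unit-mean Gamma variates.

    gammavariate(shape, scale) has mean shape*scale, so scale = 1/shape
    makes each factor unit-mean, hence the product is unit-mean too.
    """
    return rng.gammavariate(alpha, 1.0 / alpha) * rng.gammavariate(beta, 1.0 / beta)

rng = random.Random(42)
N, trials = 4, 20000  # N-branch sum, Monte-Carlo trials (arbitrary)
sums = [sum(gamma_gamma_sample(4.0, 2.0, rng) for _ in range(N))
        for _ in range(trials)]
mean = sum(sums) / trials
# Each unit-mean GG variate contributes 1 to the expected sum.
assert abs(mean - N) < 0.1
```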
|
0905.1375
|
Saddle-point Solution of the Fingerprinting Capacity Game Under the
Marking Assumption
|
cs.IT cs.CR math.IT
|
We study a fingerprinting game in which the collusion channel is unknown. The
encoder embeds fingerprints into a host sequence and provides the decoder with
the capability to trace back pirated copies to the colluders.
Fingerprinting capacity has recently been derived as the limit value of a
sequence of maxmin games with mutual information as the payoff function.
However, these games generally do not admit saddle-point solutions and are very
hard to solve numerically. Here under the so-called Boneh-Shaw marking
assumption, we reformulate the capacity as the value of a single two-person
zero-sum game, and show that it is achieved by a saddle-point solution.
If the maximal coalition size is $k$ and the fingerprint alphabet is binary,
we derive equations that can numerically solve the capacity game for arbitrary
$k$. We also provide tight upper and lower bounds on the capacity. Finally, we
discuss the asymptotic behavior of the fingerprinting game for large $k$ and
practical implementation issues.
|
0905.1386
|
Selective-Fading Multiple-Access MIMO Channels: Diversity-Multiplexing
Tradeoff and Dominant Outage Event Regions
|
cs.IT math.IT
|
We establish the optimal diversity-multiplexing (DM) tradeoff for coherent
selective-fading multiple-access MIMO channels and provide corresponding code
design criteria. As a byproduct, on the conceptual level, we find an
interesting relation between the DM tradeoff framework and the notion of
dominant error event regions, first introduced in the AWGN case by Gallager,
IEEE Trans. IT, 1985. This relation allows us to accurately characterize the
error mechanisms in MIMO fading multiple-access channels. In particular, we
find that, for a given rate tuple, the maximum achievable diversity order is
determined by a single outage event that dominates the total error probability
exponentially in SNR. Finally, we examine the distributed space-time code
construction proposed by Badr and Belfiore, Int. Zurich Seminar on Commun.,
2008, using the code design criteria derived in this paper.
|
0905.1424
|
Concept Stability for Constructing Taxonomies of Web-site Users
|
cs.CY cs.AI cs.SI stat.ML
|
Owners of a web-site are often interested in analysis of groups of users of
their site. Information on these groups can help optimize the structure and
contents of the site. In this paper we use an approach based on formal concepts
for constructing taxonomies of user groups. For decreasing the huge amount of
concepts that arise in applications, we employ stability index of a concept,
which describes how a group given by a concept extent differs from other such
groups. We analyze the resulting taxonomies of user groups for three target
websites.
|
0905.1460
|
Design of Learning Based MIMO Cognitive Radio Systems
|
cs.IT math.IT
|
This paper addresses the design issues of the multi-antenna-based cognitive
radio (CR) system that is able to operate concurrently with the licensed
primary radio (PR) system. We propose a practical CR transmission strategy
consisting of three major stages: environment learning, channel training, and
data transmission. In the environment learning stage, the CR transceivers both
listen to the PR transmission and apply blind algorithms to estimate the spaces
that are orthogonal to the channels from the PR. Assuming time-division duplex
(TDD) based transmission for the PR, cognitive beamforming is then designed and
applied at CR transceivers to restrict the interference to/from the PR during
the subsequent channel training and data transmission stages. In the channel
training stage, the CR transmitter sends training signals to the CR receiver,
which applies the linear-minimum-mean-square-error (LMMSE) based estimator to
estimate the effective channel. Considering imperfect estimations in both
learning and training stages, we derive a lower bound on the ergodic capacity
achievable for the CR in the data transmission stage. From this capacity lower
bound, we observe a general learning/training/throughput tradeoff associated
with the proposed scheme, pertinent to transmit power allocation between
training and transmission stages, as well as time allocation among learning,
training, and transmission stages. We characterize the aforementioned tradeoff
by optimizing the associated power and time allocation to maximize the CR
ergodic capacity.
|
0905.1512
|
A Nearly Optimal Construction of Flash Codes
|
cs.IT math.IT
|
Flash memory is a non-volatile computer memory comprised of blocks of cells,
wherein each cell can take on q different values or levels. While increasing
the cell level is easy, reducing the level of a cell can be accomplished only
by erasing an entire block. Since block erasures are highly undesirable, coding
schemes - known as floating codes or flash codes - have been designed in order
to maximize the number of times that information stored in a flash memory can
be written (and re-written) prior to incurring a block erasure. An (n,k,t)_q
flash code C is a coding scheme for storing k information bits in n cells in
such a way that any sequence of up to t writes (where a write is a transition 0
-> 1 or 1 -> 0 in any one of the k bits) can be accommodated without a block
erasure. The total number of available level transitions in n cells is n(q-1),
and the write deficiency of C, defined as \delta(C) = n(q-1) - t, is a measure
of how close the code comes to perfectly utilizing all these transitions. For k
> 6 and large n, the best previously known construction of flash codes achieves
a write deficiency of O(qk^2). On the other hand, the best known lower bound on
write deficiency is \Omega(qk). In this paper, we present a new construction of
flash codes that approaches this lower bound to within a factor logarithmic in
k. To this end, we first improve upon the so-called "indexed" flash codes, due
to Jiang and Bruck, by eliminating the need for index cells in the Jiang-Bruck
construction. Next, we further increase the number of writes by introducing a
new multi-stage (recursive) indexing scheme. We then show that the write
deficiency of the resulting flash codes is O(qk\log k) if q \geq \log_2k, and
at most O(k\log^2 k) otherwise.
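The write-deficiency bookkeeping defined in the abstract is simple enough to express directly; the example values of n, q, and t below are hypothetical, not taken from the constructions in the paper:

```python
def write_deficiency(n, q, t):
    """delta(C) = n*(q-1) - t: level transitions an (n,k,t)_q flash code leaves unused."""
    return n * (q - 1) - t

# Hypothetical example: 8 cells with q = 4 levels give 8*(4-1) = 24 available
# transitions; a code guaranteeing t = 20 writes has deficiency 4.
assert write_deficiency(8, 4, 20) == 4
# A code that uses every transition would have deficiency 0.
assert write_deficiency(1, 2, 1) == 0
```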
|
0905.1537
|
On the Separability of Parallel Gaussian Interference Channels
|
cs.IT math.IT
|
The separability in parallel Gaussian interference channels (PGICs) is
studied in this paper. We generalize the separability results in one-sided
PGICs (OPGICs) by Sung \emph{et al.} to two-sided PGICs (TPGICs). Specifically,
for strong and mixed TPGICs, we show necessary and sufficient conditions for
the separability. For this, we show diagonal covariance matrices are sum-rate
optimal for strong and mixed TPGICs.
|
0905.1543
|
Sum capacity of multi-source linear finite-field relay networks with
fading
|
cs.IT math.IT
|
We study a fading linear finite-field relay network having multiple
source-destination pairs. Because of the interference created by different
unicast sessions, the problem of finding its capacity region is in general
difficult. We observe that, since channels are time-varying, relays can deliver
their received signals by waiting for appropriate channel realizations such
that the destinations can decode their messages without interference. We
propose a block Markov encoding and relaying scheme that exploits such channel
variations. By deriving a general cut-set upper bound and an achievable rate
region, we characterize the sum capacity for some classes of channel
distributions and network topologies. For example, when the channels are
uniformly distributed, the sum capacity is given by the minimum average rank of
the channel matrices constructed by all cuts that separate the entire sources
and destinations. We also describe other cases where the capacity is
characterized.
|
0905.1546
|
Fast and Near-Optimal Matrix Completion via Randomized Basis Pursuit
|
cs.IT cs.LG math.IT
|
Motivated by the philosophy and phenomenal success of compressed sensing, the
problem of reconstructing a matrix from a sampling of its entries has attracted
much attention recently. Such a problem can be viewed as an
information-theoretic variant of the well-studied matrix completion problem,
and the main objective is to design an efficient algorithm that can reconstruct
a matrix by inspecting only a small number of its entries. Although this is an
impossible task in general, Cand\`es and co-authors have recently shown that
under a so-called incoherence assumption, a rank $r$ $n\times n$ matrix can be
reconstructed using semidefinite programming (SDP) after one inspects
$O(nr\log^6n)$ of its entries. In this paper we propose an alternative approach
that is much more efficient and can reconstruct a larger class of matrices by
inspecting a significantly smaller number of the entries. Specifically, we
first introduce a class of so-called stable matrices and show that it includes
all those that satisfy the incoherence assumption. Then, we propose a
randomized basis pursuit (RBP) algorithm and show that it can reconstruct a
stable rank $r$ $n\times n$ matrix after inspecting $O(nr\log n)$ of its
entries. Our sampling bound is only a logarithmic factor away from the
information-theoretic limit and is essentially optimal. Moreover, the runtime
of the RBP algorithm is bounded by $O(nr^2\log n+n^2r)$, which compares very
favorably with the $\Omega(n^4r^2\log^{12}n)$ runtime of the SDP-based
algorithm. Perhaps more importantly, our algorithm will provide an exact
reconstruction of the input matrix in polynomial time. By contrast, the
SDP-based algorithm can only provide an approximate one in polynomial time.
|
0905.1594
|
A Recommender System to Support the Scholarly Communication Process
|
cs.DL cs.IR
|
The number of researchers, articles, journals, conferences, funding
opportunities, and other such scholarly resources continues to grow every year
and at an increasing rate. Many services have emerged to support scholars in
navigating particular aspects of this resource-rich environment. Some
commercial publishers provide recommender and alert services for the articles
and journals in their digital libraries. Similarly, numerous noncommercial
social bookmarking services have emerged for citation sharing. While these
services do provide some support, they lack an understanding of the various
problem-solving scenarios that researchers face daily. Example scenarios
include when a scholar is in search of an article related to
another article of interest, when a scholar is in search of a potential
collaborator for a funding opportunity, when a scholar is in search of an
optimal venue to which to submit their article, and when a scholar, in the role
of an editor, is in search of referees to review an article. All of these
example scenarios can be represented as a problem in information filtering by
means of context-sensitive recommendation. This article presents an overview of
a context-sensitive recommender system to support the scholarly communication
process that is based on the standards and technology set forth by the Semantic
Web initiative.
|
0905.1609
|
Acquisition of morphological families and derivational series from a
machine readable dictionary
|
cs.CL
|
The paper presents a linguistic and computational model aiming at making the
morphological structure of the lexicon emerge from the formal and semantic
regularities of the words it contains. The model is word-based. The proposed
morphological structure consists of (1) binary relations that connect each
headword with words that are morphologically related, and especially with the
members of its morphological family and its derivational series, and of (2) the
analogies that hold between the words. The model has been tested on the lexicon
of French using the TLFi machine readable dictionary.
|
0905.1643
|
Fixed Point and Bregman Iterative Methods for Matrix Rank Minimization
|
math.OC cs.IT math.IT
|
The linearly constrained matrix rank minimization problem is widely
applicable in many fields such as control, signal processing and system
identification. The tightest convex relaxation of this problem is the linearly
constrained nuclear norm minimization. Although the latter can be cast as a
semidefinite programming problem, such an approach is computationally expensive
to solve when the matrices are large. In this paper, we propose fixed point and
Bregman iterative algorithms for solving the nuclear norm minimization problem
and prove convergence of the first of these algorithms. By using a homotopy
approach together with an approximate singular value decomposition procedure,
we get a very fast, robust and powerful algorithm, which we call FPCA (Fixed
Point Continuation with Approximate SVD), that can solve very large matrix rank
minimization problems. Our numerical results on randomly generated and real
matrix completion problems demonstrate that this algorithm is much faster and
provides much better recoverability than semidefinite programming solvers such
as SDPT3. For example, our algorithm can recover 1000 x 1000 matrices of rank
50 with a relative error of 1e-5 in about 3 minutes by sampling only 20 percent
of the elements. We know of no other method that achieves as good
recoverability. Numerical experiments on online recommendation, DNA microarray
data set and image inpainting problems demonstrate the effectiveness of our
algorithms.
|
0905.1745
|
Capacity of a Class of Symmetric SIMO Gaussian Interference Channels
within O(1)
|
cs.IT math.IT
|
The N+1 user, 1 x N single input multiple output (SIMO) Gaussian interference
channel where each transmitter has a single antenna and each receiver has N
antennas is studied. The symmetric capacity within O(1) is characterized for
the symmetric case where all direct links have the same signal-to-noise ratio
(SNR) and all undesired links have the same interference-to-noise ratio (INR).
The gap to the exact capacity is a constant which is independent of SNR and
INR. To get this result, we first generalize the deterministic interference
channel introduced by El Gamal and Costa to model interference channels with
multiple antennas. We derive the capacity region of this deterministic
interference channel. Based on the insights provided by the deterministic
channel, we characterize the generalized degrees of freedom (GDOF) of the
Gaussian case, which directly leads to the O(1) capacity approximation. On the
achievability side, an interesting conclusion is that the GDOF regime in which
treating interference as noise is optimal in the 2 user interference channel
does not appear in the N+1 user, 1 x N SIMO case. On the converse side, new
multi-user outer bounds emerge out of
this work that do not follow directly from the 2 user case. In addition to the
GDOF region, the outer bounds identify a strong interference regime where the
capacity region is established.
|
0905.1751
|
Experiment Study of Entropy Convergence of Ant Colony Optimization
|
cs.NE cs.AI
|
Ant colony optimization (ACO) has been widely applied to combinatorial
optimization. But the convergence theory of ACO under general conditions has
rarely been studied. In this paper, the authors seek evidence that entropy is
related to the convergence of ACO, especially to the estimation of the minimum
iteration number for convergence. Entropy offers a possible new viewpoint for
studying ACO convergence under general conditions. Key Words: Ant Colony
Optimization, Convergence of ACO, Entropy
|
0905.1755
|
Can the Utility of Anonymized Data be used for Privacy Breaches?
|
cs.DB
|
Group based anonymization is the most widely studied approach for privacy
preserving data publishing. This includes k-anonymity, l-diversity, and
t-closeness, to name a few. The goal of this paper is to raise a fundamental
issue on the privacy exposure of the current group based approach. This has
been overlooked in the past. The group based anonymization approach basically
hides each individual record behind a group to preserve data privacy. If not
properly anonymized, patterns can actually be derived from the published data
and be used by the adversary to breach individual privacy. For example, if a
pattern such as "people from certain countries rarely suffer from some
disease" can be derived from the released medical records, then this
information can be used to link other people in an anonymized group to the
disease with higher likelihood. We call the patterns derived from the published data
the foreground knowledge. This is in contrast to the background knowledge that
the adversary may obtain from other channels as studied in some previous work.
Finally, we show by experiments that the attack is realistic in the privacy
benchmark dataset under the traditional group based anonymization approach.
|
0905.1778
|
Encoding of Network Protection Codes Against Link and Node Failures Over
Finite Fields
|
cs.IT cs.CR cs.NI math.IT
|
Link and node failures are two common fundamental problems that affect
operational networks. Hence, protection of communication networks is essential
to increase their reliability, performance, and operations. Much research work
has been done to protect against link and node failures, and to provide
reliable solutions based on pre-defined provisioning or dynamic restoration of the
domain. In this paper we develop network protection strategies against multiple
link failures using network coding and joint capacities. In these strategies,
the source nodes apply network coding for their transmitted data to provide
backup copies for recovery at the receivers' nodes. Such techniques can be
applied to optical, IP, and mesh networks. The encoding operations of
protection codes are defined over finite fields. Furthermore, the normalized
capacity of the communication network is given by $(n-t)/n$ in case of $t$ link
failures. In addition, a bound on the minimum required field size is derived.
|
0905.1883
|
Cascade multiterminal source coding
|
cs.IT math.IT
|
We investigate distributed source coding of two correlated sources X and Y
where messages are passed to a decoder in a cascade fashion. The encoder of X
sends a message at rate R_1 to the encoder of Y. The encoder of Y then sends a
message to the decoder at rate R_2 based both on Y and on the message it
received about X. The decoder's task is to estimate a function of X and Y. For
example, we consider the minimum mean squared-error distortion when encoding
the sum of jointly Gaussian random variables under these constraints. We also
characterize the rates needed to reconstruct a function of X and Y losslessly.
Our general contribution toward understanding the limits of the cascade
multiterminal source coding network is in the form of inner and outer bounds on
the achievable rate region for satisfying a distortion constraint for an
arbitrary distortion function d(x,y,z). The inner bound makes use of a balance
between two encoding tactics--relaying the information about X and
recompressing the information about X jointly with Y. In the Gaussian case, a
threshold is discovered for identifying which of the two extreme strategies
optimizes the inner bound. Relaying outperforms recompressing the sum at the
relay for some rate pairs if the variance of X is greater than the variance of
Y.
|
0905.1906
|
Improved Adaptive Group Testing Algorithms with Applications to Multiple
Access Channels and Dead Sensor Diagnosis
|
cs.DS cs.IT math.IT
|
We study group-testing algorithms for resolving broadcast conflicts on a
multiple access channel (MAC) and for identifying the dead sensors in a mobile
ad hoc wireless network. In group-testing algorithms, we are asked to identify
all the defective items in a set of items when we can test arbitrary subsets of
items. In the standard group-testing problem, the result of a test is
binary--the tested subset either contains defective items or not. In the more
generalized versions we study in this paper, the result of each test is
non-binary. For example, it may indicate whether the number of defective items
contained in the tested subset is zero, one, or at least two. We give adaptive
algorithms that are provably more efficient than previous group testing
algorithms. We also show how our algorithms can be applied to solve conflict
resolution on a MAC and dead sensor diagnosis. Dead sensor diagnosis poses an
interesting challenge compared to MAC resolution, because dead sensors are not
locally detectable, nor are they themselves active participants.
|
0905.1964
|
On Models of Multi-user Gaussian Channels with Fading
|
cs.IT math.IT
|
An analytically tractable model for Gaussian multiuser channels with fading
is studied, and the capacity region of this model is found to be a good
approximation of the capacity region of the original Gaussian network. This
work extends the existing body of work on deterministic models for Gaussian
multiuser channels to include the physical phenomenon of fading. In particular,
it generalizes these results to a unicast, multiple node network setting with
fading.
|
0905.1990
|
Sparse Linear Representation
|
cs.IT math.IT
|
This paper studies the question of how well a signal can be represented by a
sparse linear combination of reference signals from an overcomplete dictionary.
When the dictionary size is exponential in the dimension of signal, then the
exact characterization of the optimal distortion is given as a function of the
dictionary size exponent and the number of reference signals for the linear
representation. Roughly speaking, every signal is sparse if the dictionary size
is exponentially large, no matter how small the exponent is. Furthermore, an
iterative method similar to matching pursuit that successively finds the best
reference signal at each stage gives asymptotically optimal representations.
This method is essentially equivalent to successive refinement for multiple
descriptions and provides a simple alternative proof of the successive
refinability of white Gaussian sources.
|
0905.2004
|
Termination Prediction for General Logic Programs
|
cs.PL cs.AI cs.LO
|
We present a heuristic framework for attacking the undecidable termination
problem of logic programs, as an alternative to current
termination/non-termination proof approaches. We introduce an idea of
termination prediction, which predicts termination of a logic program when
neither a termination nor a non-termination proof is applicable. We
establish a necessary and sufficient characterization of infinite (generalized)
SLDNF-derivations with arbitrary (concrete or moded) queries, and develop an
algorithm that predicts termination of general logic programs with arbitrary
non-floundering queries. We have implemented a termination prediction tool and
obtained quite satisfactory experimental results. Except for five programs
that exceed the experiment time limit, our prediction is 100% correct for all
296 benchmark programs of the Termination Competition 2007, of which eighteen
programs cannot be proved by any of the existing state-of-the-art analyzers
like AProVE07, NTI, Polytool and TALP.
|
0905.2098
|
End-to-End Joint Antenna Selection Strategy and Distributed Compress and
Forward Strategy for Relay Channels
|
cs.IT math.IT
|
Multi-hop relay channels use multiple relay stages, each with multiple relay
nodes, to facilitate communication between a source and destination.
Previously, distributed space-time codes were proposed to maximize the
achievable diversity-multiplexing tradeoff; however, they fail to achieve all
the points of the optimal diversity-multiplexing tradeoff. In the presence of a
low-rate feedback link from the destination to each relay stage and the source,
this paper proposes an end-to-end antenna selection (EEAS) strategy as an
alternative to distributed space-time codes. The EEAS strategy uses a subset of
antennas of each relay stage for transmission of the source signal to the
destination with amplify-and-forward at each relay stage. The subsets are
chosen such that they maximize the end-to-end mutual information at the
destination. The EEAS strategy achieves the corner points of the optimal
diversity-multiplexing tradeoff (corresponding to maximum diversity gain and
maximum multiplexing gain) and achieves better diversity gain at intermediate
values of multiplexing gain, versus the best known distributed space-time
coding strategies. A distributed compress and forward (CF) strategy is also
proposed to achieve all points of the optimal diversity-multiplexing tradeoff
for a two-hop relay channel with multiple relay nodes.
|
0905.2125
|
Experience-driven formation of parts-based representations in a model of
layered visual memory
|
q-bio.NC cs.LG nlin.AO
|
Growing neuropsychological and neurophysiological evidence suggests that the
visual cortex uses parts-based representations to encode, store and retrieve
relevant objects. In such a scheme, objects are represented as a set of
spatially distributed local features, or parts, arranged in stereotypical
fashion. To encode the local appearance and to represent the relations between
the constituent parts, there has to be an appropriate memory structure formed
by previous experience with visual objects. Here, we propose a model of how a
hierarchical memory structure supporting efficient storage and rapid recall of
parts-based representations can be established by an experience-driven process
of self-organization. The process is based on the collaboration of slow
bidirectional synaptic plasticity and homeostatic unit activity regulation,
both running on top of fast activity dynamics with winner-take-all
character modulated by an oscillatory rhythm. These neural mechanisms lay down
the basis for cooperation and competition between the distributed units and
their synaptic connections. Choosing human face recognition as a test task, we
show that, under the condition of open-ended, unsupervised incremental
learning, the system is able to form memory traces for individual faces in a
parts-based fashion. On a lower memory layer the synaptic structure is
developed to represent local facial features and their interrelations, while
the identities of different persons are captured explicitly on a higher layer.
An additional property of the resulting representations is the sparseness of
both the activity during the recall and the synaptic patterns comprising the
memory traces.
|
0905.2159
|
On the Secrecy Rate of Interference Networks using structured codes
|
cs.IT math.IT
|
This paper shows that structured transmission schemes are a good choice for
secret communication over interference networks with an eavesdropper.
Structured transmission is shown to exploit channel asymmetries and thus
perform better than randomly generated codebooks for such channels. For a class
of interference channels, we show that an equivocation sum-rate that is within
two bits of the maximum possible legitimate communication sum-rate is
achievable using lattice codes.
|
0905.2200
|
Towards Chip-on-Chip Neuroscience: Fast Mining of Frequent Episodes
Using Graphics Processors
|
cs.DC cs.DB
|
Computational neuroscience is being revolutionized with the advent of
multi-electrode arrays that provide real-time, dynamic, perspectives into brain
function. Mining event streams from these chips is critical to understanding
the firing patterns of neurons and to gaining insight into the underlying
cellular activity. We present a GPGPU solution to mining spike trains. We focus
on mining frequent episodes which captures coordinated events across time even
in the presence of intervening background/"junk" events. Our algorithmic
contributions are two-fold: MapConcatenate, a new computation-to-core mapping
scheme, and a two-pass elimination approach to quickly find supported episodes
from a large number of candidates. Together, they help realize a real-time
"chip-on-chip" solution to neuroscience data mining, where one chip (the
multi-electrode array) supplies the spike train data and another (the GPGPU)
mines it at a scale unachievable previously. Evaluation on both synthetic and
real datasets demonstrates the potential of our approach.
|
0905.2203
|
Accelerator-Oriented Algorithm Transformation for Temporal Data Mining
|
cs.DC cs.DB
|
Temporal data mining algorithms are becoming increasingly important in many
application domains including computational neuroscience, especially the
analysis of spike train data. While application scientists have been able to
readily gather multi-neuronal datasets, analysis capabilities have lagged
behind, due to both lack of powerful algorithms and inaccessibility to powerful
hardware platforms. The advent of GPU architectures such as Nvidia's GTX 280
offers a cost-effective option to bring these capabilities to the
neuroscientist's desktop. Rather than port existing algorithms onto this
architecture, we advocate the need for algorithm transformation, i.e.,
rethinking the design of the algorithm in a way that need not necessarily
mirror its serial implementation strictly. We present a novel implementation of
a frequent episode discovery algorithm by revisiting "in-the-large" issues such
as problem decomposition as well as "in-the-small" issues such as data layouts
and memory access patterns. This is non-trivial because frequent episode
discovery does not lend itself to GPU-friendly data-parallel mapping
strategies. Applications to many datasets and comparisons to CPU as well as
prior GPU implementations showcase the advantages of our approach.
|
0905.2248
|
Protection against link errors and failures using network coding
|
cs.IT cs.NI math.IT
|
We propose a network-coding based scheme to protect multiple bidirectional
unicast connections against adversarial errors and failures in a network. The
network consists of a set of bidirectional primary path connections that carry
the uncoded traffic. The end nodes of the bidirectional connections are
connected by a set of shared protection paths that provide the redundancy
required for protection. Such protection strategies are employed in the domain
of optical networks for recovery from failures. In this work we consider the
problem of simultaneous protection against adversarial errors and failures.
Suppose that n_e paths are corrupted by the omniscient adversary. Under our
proposed protocol, the errors can be corrected at all the end nodes with 4n_e
protection paths. More generally, if there are n_e adversarial errors and n_f
failures, 4n_e + 2n_f protection paths are sufficient. The number of protection
paths only depends on the number of errors and failures being protected against
and is independent of the number of unicast connections.
|
0905.2297
|
On Optimal Distributed Joint Source-Channel Coding for Correlated
Gaussian Sources over Gaussian Channels
|
cs.IT math.IT
|
We consider the problem of distributed joint source-channel coding of
correlated Gaussian sources over a Gaussian Multiple Access Channel (GMAC).
There may be side information at the decoder and/or at the encoders. First we
specialize a general result (for transmission of correlated sources over a MAC
with side information) to obtain sufficient conditions for reliable
transmission over a Gaussian MAC. Source-channel separation does not hold for
this system. We study and compare three joint source-channel
coding schemes available in literature. We show that each of these schemes is
optimal under different scenarios. One of the schemes, Amplify and Forward (AF)
which simplifies the design of encoders and the decoder, is optimal at low SNR
but not at high SNR. Another scheme is asymptotically optimal at high SNR. The
third coding scheme is optimal for orthogonal Gaussian channels. We also show
that AF is close to the optimal scheme for orthogonal channels even at high
SNR.
|
0905.2311
|
Residus de 2-formes differentielles sur les surfaces algebriques et
applications aux codes correcteurs d'erreurs
|
math.AG cs.IT math.IT
|
The theory of algebraic-geometric codes has been developed in the beginning
of the 80's after a paper of V.D. Goppa. Given a smooth projective algebraic
curve X over a finite field, there are two different constructions of
error-correcting codes. The first one, called "functional", uses some rational
functions on X and the second one, called "differential", involves some
rational 1-forms on this curve. Hundreds of papers are devoted to the study of
such codes.
In addition, a generalization of the functional construction for algebraic
varieties of arbitrary dimension is given by Y. Manin in an article of 1984. A
few papers about such codes have been published, but nothing has been done
concerning a generalization of the differential construction to the
higher-dimensional case.
In this thesis, we propose a differential construction of codes on algebraic
surfaces. Afterwards, we study the properties of these codes and particularly
their relations with functional codes. A rather surprising fact is that a major
difference from the case of curves appears. Indeed, whereas in the case of
curves a differential code is always the orthogonal of a functional one, this
assertion generally fails for surfaces. This last observation motivates the
study of codes which are the orthogonal of some functional code on a surface. Therefore, we
prove that, under some condition on the surface, these codes can be realized as
sums of differential codes. Moreover, we show that some answers to some open
problems "a la Bertini" could give very interesting information on the
parameters of these codes.
|
0905.2341
|
Differential approach for the study of duals of algebraic-geometric
codes on surfaces
|
math.AG cs.IT math.IT math.NT
|
The purpose of the present article is the study of duals of functional codes
on algebraic surfaces. We give a direct geometrical description of them, using
differentials. Even if this geometrical description is less trivial, it can be
regarded as a natural extension to surfaces of the result asserting that the
dual of a functional code on a curve is a differential code. We study the
parameters of such codes and state a lower bound for their minimum distance.
Using this bound, one can study some examples of codes on surfaces, and in
particular surfaces with Picard number 1 like elliptic quadrics or some
particular cubic surfaces. The parameters of some of the studied codes match
those of the best codes known to date.
|
0905.2345
|
The dual minimum distance of arbitrary dimensional algebraic--geometric
codes
|
math.AG cs.IT math.IT
|
In this article, the minimum distance of the dual $C^{\bot}$ of a functional
code $C$ on an arbitrary dimensional variety $X$ over a finite field $\F_q$ is
studied. The approach consists in finding minimal configurations of points on
$X$ which are not in "general position". If $X$ is a curve, the result improves
in some situations the well-known Goppa designed distance.
|
0905.2347
|
Combining Supervised and Unsupervised Learning for GIS Classification
|
cs.LG
|
This paper presents a new hybrid learning algorithm for unsupervised
classification tasks. We combined Fuzzy c-means learning algorithm and a
supervised version of Minimerror to develop a hybrid incremental strategy
allowing unsupervised classifications. We applied this new approach to a
real-world database in order to determine whether the information contained in
the unlabeled features of a Geographic Information System (GIS) allows it to be
classified well. Finally, we compared our results to a classical supervised
classification obtained by a multilayer perceptron.
|
0905.2386
|
Combinatorial information distance
|
cs.DM cs.IT math.IT
|
Let $|A|$ denote the cardinality of a finite set $A$. For any real number $x$
define $t(x)=x$ if $x\geq1$ and 1 otherwise. For any finite sets $A,B$ let
$\delta(A,B)$ $=$ $\log_{2}(t(|B\cap\bar{A}||A|))$. We define a new
combinatorial distance $d(A,B)$ $=$ $\max\{\delta(A,B),\delta(B,A)\}$ which may
be applied to measure the distance between binary strings of different lengths.
The distance is based on a classical combinatorial notion of information
introduced by Kolmogorov. (This appears as Technical Report arXiv:0905.2386v4.
A shorter version appears in the Proc. of Mini-Conference on Applied
Theoretical Computer Science (MATCOS-10), Slovenia, Oct. 13-14, 2010.)
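The distance is fully specified by the formulas above; a minimal sketch in
Python (function and variable names are ours, not the paper's):

```python
import math

def t(x):
    # t(x) = x if x >= 1, and 1 otherwise, so the logarithm below is never negative
    return x if x >= 1 else 1

def delta(A, B):
    # |B ∩ Ā| is the number of elements of B outside A, i.e. |B - A|
    return math.log2(t(len(B - A) * len(A)))

def d(A, B):
    # symmetrized distance d(A, B) = max{delta(A, B), delta(B, A)}
    return max(delta(A, B), delta(B, A))

# identical sets are at distance 0; disjoint sets get a positive distance
print(d({1}, {1}))      # 0.0
print(d({1, 2}, {3}))   # 1.0
```

The clamping by t ensures delta is nonnegative even when one set contains the
other, so d behaves like a distance on finite sets.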
|
0905.2392
|
On Channel Output Feedback in Deterministic Interference Channels
|
cs.IT math.IT
|
In this paper, we study the effect of channel output feedback on the sum
capacity in a two-user symmetric deterministic interference channel. We find
that having a single feedback link from one of the receivers to its own
transmitter results in the same sum capacity as having a total of 4 feedback
links from both the receivers to both the transmitters. Hence, from the sum
capacity point of view, the three additional feedback links are not helpful. We
also consider a half-duplex feedback model where the forward and the feedback
resources are symmetric and timeshared. Surprisingly, we find that there is no
gain in sum-capacity with feedback in a half-duplex feedback model when
interference links have more capacity than direct links.
|
0905.2413
|
Outage Capacity and Optimal Transmission for Dying Channels
|
cs.IT math.IT
|
In wireless networks, communication links may be subject to random fatal
impacts: for example, sensor networks under sudden power losses or cognitive
radio networks with unpredictable primary user spectrum occupancy. Under such
circumstances, it is critical to quantify how fast and reliably the information
can be collected over attacked links. For a single point-to-point channel
subject to a random attack, named as a \emph{dying channel}, we model it as a
block-fading (BF) channel with a finite and random delay constraint. First, we
define the outage capacity as the performance measure, followed by studying the
optimal coding length $K$ such that the outage probability is minimized when
uniform power allocation is assumed. For a given rate target and a coding
length $K$, we then minimize the outage probability over the power allocation
vector $\mathbf{P}_{K}$, and show that this optimization problem can be cast into a
convex optimization problem under some conditions. The optimal solutions for
several special cases are discussed.
Furthermore, we extend the single point-to-point dying channel result to the
parallel multi-channel case where each sub-channel is a dying channel, and
investigate the corresponding asymptotic behavior of the overall outage
probability with two different attack models: the independent-attack case and
the $m$-dependent-attack case. It can be shown that the overall outage
probability diminishes to zero for both cases as the number of sub-channels
increases if the \emph{rate per unit cost} is less than a certain threshold.
The outage exponents are also studied to reveal how fast the outage probability
improves over the number of sub-channels.
|
0905.2416
|
Identifying Influential Bloggers: Time Does Matter
|
cs.IR cs.DL
|
Blogs have recently become one of the most favored services on the Web. Many
users maintain a blog and write posts to express their opinion, experience and
knowledge about a product, an event and every subject of general or specific
interest. More users visit blogs to read these posts and comment on them. This
"participatory journalism" of blogs has such an impact upon the masses that
Keller and Berry argued that through blogging "one American in ten tells the
other nine how to vote, where to eat and what to buy" \cite{keller1}.
Therefore, a significant issue is how to identify such influential bloggers.
This problem is quite new, and the relevant literature lacks sophisticated
solutions; most importantly, existing solutions have not taken temporal
aspects into account when identifying influential bloggers, even though time is
the most critical aspect of the Blogosphere. This article investigates the
issue of identifying influential bloggers by proposing two easily computed
blogger ranking methods, which incorporate temporal aspects of the blogging
activity. Each method is based on a specific metric to score the blogger's
posts. The first metric, termed MEIBI, takes into consideration the number of
the blog post's inlinks and its comments, along with the publication date of
the post. The second metric, MEIBIX, is used to score a blog post according to
the number and age of the blog post's inlinks and its comments. These methods
are evaluated against the state-of-the-art influential blogger identification
method utilizing data collected from a real-world community blog site. The
obtained results attest that the new methods are able to better identify
significant temporal patterns in the blogging behaviour.
|
0905.2422
|
Multilevel Coding over Two-Hop Single-User Networks
|
cs.IT math.IT
|
In this paper, a two-hop network in which information is transmitted from a
source via a relay to a destination is considered. It is assumed that the
channels are static fading with additive white Gaussian noise. All nodes are
equipped with a single antenna and the Channel State Information (CSI) of each
hop is not available at the corresponding transmitter. The relay is assumed to
be simple, i.e., not capable of data buffering over multiple coding blocks,
water-filling over time, or rescheduling. A commonly used design criterion in
such configurations is the maximization of the average received rate at the
destination. We show that using a continuum of multilevel codes at both the
source and the relay, in conjunction with a decode-and-forward strategy at the
relay, is optimal in this setup. In addition, we present a scheme to
optimally allocate the available source and relay powers to different levels of
their corresponding codes. The performance of this scheme is evaluated assuming
Rayleigh fading and compared with the previously known strategies.
|
0905.2423
|
Bounds on sets with few distances
|
math.CO cs.IT math.IT math.MG
|
We derive a new estimate of the size of finite sets of points in metric
spaces with few distances. The following applications are considered:
(1) we improve the Ray-Chaudhuri--Wilson bound of the size of uniform
intersecting families of subsets;
(2) we refine the bound of Delsarte-Goethals-Seidel on the maximum size of
spherical sets with few distances;
(3) we prove a new bound on codes with few distances in the Hamming space,
improving an earlier result of Delsarte.
We also find the size of maximal binary codes and maximal constant-weight
codes of small length with 2 and 3 distances.
|
0905.2429
|
Time Delay Estimation from Low Rate Samples: A Union of Subspaces
Approach
|
cs.IT math.IT
|
Time delay estimation arises in many applications in which a multipath medium
has to be identified from pulses transmitted through the channel. Various
approaches have been proposed in the literature to identify time delays
introduced by multipath environments. However, these methods either operate on
the analog received signal, or require high sampling rates in order to achieve
reasonable time resolution. In this paper, our goal is to develop a unified
approach to time delay estimation from low rate samples of the output of a
multipath channel. Our methods result in perfect recovery of the multipath
delays from samples of the channel output at the lowest possible rate, even in
the presence of overlapping transmitted pulses. This rate depends only on the
number of multipath components and the transmission rate, but not on the
bandwidth of the probing signal. In addition, our development allows for a
variety of different sampling methods. By properly manipulating the low-rate
samples, we show that the time delays can be recovered using the well-known
ESPRIT algorithm. Combining results from sampling theory with those obtained in
the context of direction of arrival estimation methods, we develop necessary
and sufficient conditions on the transmitted pulse and the sampling functions
in order to ensure perfect recovery of the channel parameters at the minimal
possible rate. Our results can be viewed in a broader context, as a sampling
theorem for analog signals defined over an infinite union of subspaces.
|
0905.2435
|
Quantified Multimodal Logics in Simple Type Theory
|
cs.AI cs.LO
|
We present a straightforward embedding of quantified multimodal logic in
simple type theory and prove its soundness and completeness. Modal operators
are replaced by quantification over a type of possible worlds. We present
simple experiments, using existing higher-order theorem provers, to demonstrate
that the embedding allows automated proofs of statements in these logics, as
well as of their meta-properties.
|
0905.2447
|
The Diversity Multiplexing Tradeoff for Interference Networks
|
cs.IT math.IT
|
The diversity-multiplexing tradeoff (DMT) for interference networks, such as
the interference channel, the X channel, the Z interference channel and the Z
channel, is analyzed. In particular, we investigate the impact of
rate-splitting and channel knowledge at the transmitters. We also use the DMT
of the Z channel and the Z interference channel to distill insights into the
"loud neighbor" problem for femto-cell networks.
|
0905.2449
|
The Role of Self-Forensics in Vehicle Crash Investigations and Event
Reconstruction
|
cs.CY cs.AI cs.CR cs.OH
|
This paper further introduces and formalizes a novel concept of
self-forensics for automotive vehicles, specified in the Forensic Lucid
language. We argue that self-forensics, with the forensics taken out of the
cybercrime domain, is applicable to "self-dissection" of intelligent vehicles
and hardware systems for automated incident and anomaly analysis and event
reconstruction by the software with or without the aid of the engineering teams
in a variety of forensic scenarios. We propose a formal design, requirements,
and specification of the self-forensic enabled units (similar to blackboxes) in
vehicles that will help investigation of incidents and also automated reasoning
and verification of theories along with the events reconstruction in a formal
model. We argue that such analysis is beneficial for improving the safety of
passengers and their vehicles, much as the airline industry does for aircraft.
|
0905.2459
|
On Design and Implementation of the Distributed Modular Audio
Recognition Framework: Requirements and Specification Design Document
|
cs.CV cs.DC cs.MM cs.NE cs.SD
|
We present the requirements and design specification of the open-source
Distributed Modular Audio Recognition Framework (DMARF), a distributed
extension of MARF. The distributed version aggregates a number of distributed
technologies (e.g. Java RMI, CORBA, Web Services) in a pluggable and modular
model along with the provision of advanced distributed systems algorithms. We
outline the associated challenges incurred during the design and implementation
as well as overall specification of the project and its advantages and
limitations.
|
0905.2463
|
Generalized Kernel-based Visual Tracking
|
cs.CV cs.MM
|
In this work we generalize plain mean shift (MS) trackers and attempt to
overcome two limitations of standard MS trackers.
It is well known that modeling and maintaining a representation of a target
object is an important component of a successful visual tracker.
However, little work has been done on building a robust template model for
kernel-based MS tracking. In contrast to building a template from a single
frame, we train a robust object representation model from a large amount of
data. Tracking is viewed as a binary classification problem, and a
discriminative classification rule is learned to distinguish between the object
and background. We adopt a support vector machine (SVM) for training. The
tracker is then implemented by maximizing the classification score. An
iterative optimization scheme very similar to MS is derived for this purpose.
|
0905.2473
|
On the Workings of Genetic Algorithms: The Genoclique Fixing Hypothesis
|
cs.NE cs.AI
|
We recently reported that the simple genetic algorithm (SGA) is capable of
performing a remarkable form of sublinear computation which has a
straightforward connection with the general problem of interacting attributes
in data-mining. In this paper we explain how the SGA can leverage this
computational proficiency to perform efficient adaptation on a broad class of
fitness functions. Based on the relative ease with which a practical fitness
function might belong to this broad class, we submit a new hypothesis about the
workings of genetic algorithms. We explain why our hypothesis is superior to
the building block hypothesis, and, by way of empirical validation, we present
the results of an experiment in which the use of a simple mechanism called
clamping dramatically improved the performance of an SGA with uniform crossover
on large, randomly generated instances of the MAX 3-SAT problem.
|