| id | title | categories | abstract |
|---|---|---|---|
0912.4289
|
Turbo Analog Error Correcting Codes Decodable By Linear Programming
|
cs.IT math.IT
|
In this paper we present a new Turbo analog error correcting coding scheme
for real valued signals that are corrupted by impulsive noise. This Turbo code
improves Donoho's deterministic construction by using a probabilistic approach.
More specifically, our construction corrects more errors than the matrices of
Donoho by allowing a vanishingly small probability of error (with the increase
in block size). The problem of decoding the long block code is decoupled into
two sets of parallel Linear Programming problems. This leads to a significant
reduction in decoding complexity as compared to one-step Linear Programming
decoding.
|
0912.4473
|
Learning to Predict Combinatorial Structures
|
cs.LG cs.AI
|
The major challenge in designing a discriminative learning algorithm for
predicting structured data is to address the computational issues arising from
the exponential size of the output space. Existing algorithms make different
assumptions to ensure efficient, polynomial time estimation of model
parameters. For several combinatorial structures, including cycles, partially
ordered sets, permutations and other graph classes, these assumptions do not
hold. In this thesis, we address the problem of designing learning algorithms
for predicting combinatorial structures by introducing two new assumptions: (i)
The first assumption is that a particular counting problem can be solved
efficiently. The consequence is a generalisation of the classical ridge
regression for structured prediction. (ii) The second assumption is that a
particular sampling problem can be solved efficiently. The consequence is a new
technique for designing and analysing probabilistic structured prediction
models. These results can be applied to solve several complex learning problems
including but not limited to multi-label classification, multi-category
hierarchical classification, and label ranking.
|
0912.4546
|
Enhanced Feedback Iterative Decoding of Sparse Quantum Codes
|
quant-ph cs.IT math.IT
|
Decoding sparse quantum codes can be accomplished by syndrome-based decoding
using a belief propagation (BP) algorithm. We significantly improve this
decoding scheme by developing a new feedback adjustment strategy for the
standard BP algorithm. In our feedback procedure, we exploit much of the
information from stabilizers, not just the syndrome but also the values of the
frustrated checks on individual qubits of the code and the channel model.
Furthermore we show that our decoding algorithm is superior to belief
propagation algorithms using only the syndrome in the feedback procedure for
all cases of the depolarizing channel. Our algorithm does not increase the
measurement overhead compared to the previous method, as the extra information
comes for free from the requisite stabilizer measurements.
|
0912.4553
|
Consensus Dynamics in a non-deterministic Naming Game with Shared Memory
|
cs.MA cs.AI
|
In the naming game, individuals or agents exchange pairwise local information
in order to communicate about objects in their common environment. The goal of
the game is to reach a consensus about naming these objects. Originally used to
investigate language formation and self-organizing vocabularies, we extend the
classical naming game with a globally shared memory accessible by all agents.
This shared memory can be interpreted as an external source of knowledge like a
book or an Internet site. The extended naming game models an environment
similar to one that can be found in the context of social bookmarking and
collaborative tagging sites where users tag sites using appropriate labels, but
also mimics an important aspect in the field of human-based image labeling.
Although the extended naming game is non-deterministic in its word selection,
we show that consensus towards a common vocabulary is reached. More
importantly, we show the qualitative and quantitative influence of the external
source of information, i.e. the shared memory, on the consensus dynamics
between the agents.
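
A minimal simulation sketch of the extended game described above, for a single object; the agent count, interaction rule, and the memory-access probability `p_memory` are illustrative assumptions, not parameters from the paper:

```python
import random

def naming_game(n_agents=50, n_rounds=20000, p_memory=0.1, seed=0):
    """Toy naming game for one object, with a globally shared memory.

    Each round a random speaker/hearer pair interacts; with probability
    p_memory the speaker draws a word from the shared memory instead of
    its own vocabulary. Success collapses both vocabularies to the
    agreed word (and writes it to the shared memory); failure makes the
    hearer adopt the word.
    """
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]
    shared = set()
    for _ in range(n_rounds):
        s, h = rng.sample(range(n_agents), 2)
        source = shared if (shared and rng.random() < p_memory) else vocab[s]
        if not source:                       # speaker invents a new word
            word = f"w{rng.randrange(10**6)}"
            vocab[s].add(word)
        else:
            word = rng.choice(sorted(source))
        if word in vocab[h]:                 # success
            vocab[s], vocab[h] = {word}, {word}
            shared.add(word)
        else:                                # failure: hearer learns the word
            vocab[h].add(word)
    return len(set().union(*vocab))          # 1 <=> consensus reached

print("distinct words after simulation:", naming_game())
```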
|
0912.4556
|
On successive refinement of diversity for fading ISI channels
|
cs.IT math.IT
|
Rate and diversity impose a fundamental trade-off in communications. This
trade-off was investigated for flat-fading channels in [15] as well as for
Inter-symbol Interference (ISI) channels in [1]. A different point of view was
explored in [12] where high-rate codes were designed so that they have a
high-diversity code embedded within them. These diversity embedded codes were
investigated for flat fading channels both from an information theoretic
viewpoint [5] and from a coding theory viewpoint in [2]. In this paper we
explore the use of diversity embedded codes for inter-symbol interference
channels. In particular the main result of this paper is that the diversity
multiplexing trade-off for fading MISO/SIMO/SISO ISI channels is indeed
successively refinable. This implies that for fading ISI channels with a single
degree of freedom one can embed a high diversity code within a high rate code
without any performance loss (asymptotically). This is related to a
deterministic structural observation about the asymptotic behavior of the
frequency response of the channel with respect to the fading strength of the
time-domain taps, as well as to a coding scheme that takes advantage of this
observation.
|
0912.4571
|
Fast Alternating Linearization Methods for Minimizing the Sum of Two
Convex Functions
|
math.OC cs.CV math.NA
|
We present in this paper first-order alternating linearization algorithms
based on an alternating direction augmented Lagrangian approach for minimizing
the sum of two convex functions. Our basic methods require at most
$O(1/\epsilon)$ iterations to obtain an $\epsilon$-optimal solution, while our
accelerated (i.e., fast) versions of them require at most
$O(1/\sqrt{\epsilon})$ iterations, with little change in the computational
effort required at each iteration. For both types of methods, we present one
algorithm that requires both functions to be smooth with Lipschitz continuous
gradients and one algorithm that needs only one of the functions to be so.
Algorithms in this paper are Gauss-Seidel type methods, in contrast to the ones
proposed by Goldfarb and Ma in [21] where the algorithms are Jacobi type
methods. Numerical results are reported to support our theoretical conclusions
and demonstrate the practical potential of our algorithms.
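
The $O(1/\epsilon)$ versus $O(1/\sqrt{\epsilon})$ contrast mirrors the classical ISTA/FISTA pair. Below is a sketch of that baseline pair, not the paper's alternating linearization methods, for the instance $f(x) = \frac{1}{2}\|Ax-b\|^2$, $g(x) = \lambda\|x\|_1$:

```python
import numpy as np

def ista(A, b, lam, n_iter=500, accelerated=False):
    """Baseline proximal gradient (ISTA), or FISTA when accelerated=True,
    for min_x 0.5*||Ax - b||^2 + lam*||x||_1. Not the paper's
    alternating linearization methods; same iteration-count contrast."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)             # gradient of the smooth part at y
        z = y - grad / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
        if accelerated:                      # FISTA momentum update
            t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
            y = x_new + ((t - 1) / t_new) * (x_new - x)
            t = t_new
        else:
            y = x_new
        x = x_new
    return x
```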
|
0912.4584
|
A Necessary and Sufficient Condition for Graph Matching Being Equivalent
to the Maximum Weight Clique Problem
|
cs.AI
|
This paper formulates a necessary and sufficient condition for a generic
graph matching problem to be equivalent to the maximum vertex and edge weight
clique problem in a derived association graph. The consequences of this result
are threefold: first, the condition is general enough to cover a broad range of
practical graph matching problems; second, a proof to establish equivalence
between graph matching and clique search reduces to showing that a given graph
matching problem satisfies the proposed condition; and third, the result sets
the scene for generic continuous solutions for a broad range of graph matching
problems. To illustrate the mathematical framework, we apply it to a number of
graph matching problems, including the problem of determining the graph edit
distance.
|
0912.4595
|
On the Optimal Number of Cooperative Base Stations in Network MIMO
|
cs.IT math.IT
|
We consider the multi-cell uplink (network MIMO) where M base-stations (BSs)
communicate simultaneously with M user terminals (UTs). Although the potential
benefit of multi-cell cooperation increases with M, the overhead related to
learning the uplink channels will rapidly dominate the uplink resource. In
other words, there exists a non-trivial tradeoff between the performance gains
of network MIMO and the related overhead in channel estimation for a finite
coherence time. We use a close approximation of the ergodic capacity to study
this tradeoff by taking some realistic aspects into account such as unreliable
backhaul links and different path losses between the BSs and UTs. Our results
provide some insight into practical limitations as well as realistic dimensions
of network MIMO systems.
|
0912.4598
|
Elkan's k-Means for Graphs
|
cs.AI
|
This paper extends k-means algorithms from the Euclidean domain to the domain
of graphs. To recompute the centroids, we apply subgradient methods for solving
the optimization-based formulation of the sample mean of graphs. To accelerate
the k-means algorithm for graphs without trading computational time against
solution quality, we avoid unnecessary graph distance calculations by
exploiting the triangle inequality of the underlying distance metric following
Elkan's k-means algorithm proposed in \cite{Elkan03}. In experiments we show
that the accelerated k-means algorithm is faster than the standard k-means
algorithm for graphs provided there is a cluster structure in the data.
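
A sketch of the Elkan-style pruning test that avoids distance computations; Euclidean vectors stand in for graphs here, since the test only needs the metric properties that graph distance metrics also satisfy:

```python
import numpy as np

def assign_with_pruning(points, centers):
    """One k-means assignment pass with Elkan-style pruning: if
    d(x, c) <= 0.5 * d(c, c'), the triangle inequality implies c'
    cannot beat c, so d(x, c') is never computed. Euclidean distance
    is a stand-in for the graph distance metric of the paper."""
    cc = np.linalg.norm(centers[:, None] - centers[None, :], axis=2)
    labels = np.empty(len(points), dtype=int)
    skipped = 0
    for i, x in enumerate(points):
        best, d_best = 0, np.linalg.norm(x - centers[0])
        for j in range(1, len(centers)):
            if d_best <= 0.5 * cc[best, j]:  # pruned, no distance computed
                skipped += 1
                continue
            d = np.linalg.norm(x - centers[j])
            if d < d_best:
                best, d_best = j, d
        labels[i] = best
    return labels, skipped
```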
|
0912.4637
|
Local and Global Trust Based on the Concept of Promises
|
cs.MA
|
We use the notion of a promise to define local trust between agents
possessing autonomous decision-making. An agent is trustworthy if it is
expected that it will keep a promise. This definition satisfies most
commonplace meanings of trust. Reputation is then an estimation of this
expectation value that is passed on from agent to agent.
Our definition distinguishes types of trust, for different behaviours, and
decouples the concept of agent reliability from the behaviour on which the
judgement is based. We show, however, that trust is fundamentally heuristic, as
it provides insufficient information for agents to make a rational judgement. A
global trustworthiness, or community trust can be defined by a proportional,
self-consistent voting process, as a weighted eigenvector-centrality function
of the promise theoretical graph.
|
0912.4649
|
The use of ideas of Information Theory for studying "language" and
intelligence in ants
|
cs.IT cs.AI math.IT nlin.AO
|
In this review we integrate results of long term experimental study on ant
"language" and intelligence which were fully based on fundamental ideas of
Information Theory, such as the Shannon entropy, the Kolmogorov complexity, and
Shannon's equation connecting the length of a message ($l$) and its
frequency ($p$), i.e., $l = -\log p$ for rational communication systems. This
approach, new for studying biological communication systems, enabled us to
obtain the following important results on ants' communication and intelligence:
i) to reveal "distant homing" in ants, that is, their ability to transfer
information about remote events; ii) to estimate the rate of information
transmission; iii) to reveal that ants are able to grasp regularities and to
use them for "compression" of information; iv) to reveal that ants are able to
transfer to each other the information about the number of objects; v) to
discover that ants can add and subtract small numbers. The obtained results
show that Information Theory is not only a wonderful mathematical theory, but
that many of its results may be considered laws of Nature.
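
A tiny illustration of the $l = -\log p$ relation invoked above; the message set and frequencies are invented for illustration:

```python
import math

# Optimal code length l = -log2(p): frequent messages get short codes,
# as the rational-communication hypothesis predicts for ant signals.
freqs = {"turn-left": 0.5, "turn-right": 0.25, "go-straight": 0.25}
for msg, p in freqs.items():
    print(f"{msg}: p = {p}, optimal length = {-math.log2(p):.1f} bits")
```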
|
0912.4660
|
Finding the Maximizers of the Information Divergence from an Exponential
Family
|
cs.IT math.IT
|
This paper investigates maximizers of the information divergence from an
exponential family $E$. It is shown that the $rI$-projection of a maximizer $P$
to $E$ is a convex combination of $P$ and a probability measure $P_-$ with
disjoint support and the same value of the sufficient statistics $A$. This
observation can be used to transform the original problem of maximizing
$D(\cdot||E)$ over the set of all probability measures into the maximization of
a function $\bar{D}$ over a convex subset of $\ker A$. The global maximizers of
both problems correspond to each other. Furthermore, finding all local
maximizers of $\bar{D}$ yields all local maximizers of $D(\cdot||E)$.
This paper also proposes two algorithms to find the maximizers of $\bar{D}$ and
applies them to two examples, where the maximizers of $D(\cdot||E)$ were not
known before.
|
0912.4742
|
Optimizing Histogram Queries under Differential Privacy
|
cs.DB cs.CR
|
Differential privacy is a robust privacy standard that has been successfully
applied to a range of data analysis tasks. Despite much recent work, optimal
strategies for answering a collection of correlated queries are not known.
We study the problem of devising a set of strategy queries, to be submitted
and answered privately, that will support the answers to a given workload of
queries. We propose a general framework in which query strategies are formed
from linear combinations of counting queries, and we describe an optimal method
for deriving new query answers from the answers to the strategy queries. Using
this framework we characterize the error of strategies geometrically, and we
propose solutions to the problem of finding optimal strategies.
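
A minimal sketch of the two ingredients of such a framework under the standard Laplace mechanism: answer a linear strategy A privately, then derive workload answers by least squares. The identity strategy below is only a baseline; choosing a better A is exactly the optimization problem at stake:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 100, size=8).astype(float)  # hidden histogram counts
W = np.tril(np.ones((8, 8)))                    # workload: all prefix-range queries
A = np.eye(8)                                   # baseline strategy, not an optimized one

eps = 1.0
sensitivity = np.abs(A).sum(axis=0).max()       # L1 sensitivity of the strategy
noisy = A @ x + rng.laplace(scale=sensitivity / eps, size=A.shape[0])

x_hat = np.linalg.pinv(A) @ noisy               # least-squares reconstruction
answers = W @ x_hat                             # derived workload answers
print("mean absolute workload error:", np.abs(answers - W @ x).mean())
```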
|
0912.4872
|
Interpretations of Directed Information in Portfolio Theory, Data
Compression, and Hypothesis Testing
|
cs.IT math.IT
|
We investigate the role of Massey's directed information in portfolio theory,
data compression, and statistics with causality constraints. In particular, we
show that directed information is an upper bound on the increment in growth
rates of optimal portfolios in a stock market due to causal side information.
This upper bound is tight for gambling in a horse race, which is an extreme
case of stock markets. Directed information also characterizes the value of
causal side information in instantaneous compression and quantifies the
benefit of causal inference in joint compression of two stochastic processes.
In hypothesis testing, directed information evaluates the best error exponent
for testing whether a random process $Y$ causally influences another process
$X$ or not. These results give a natural interpretation of directed information
$I(Y^n \to X^n)$ as the amount of information that a random sequence $Y^n =
(Y_1,Y_2,..., Y_n)$ causally provides about another random sequence $X^n =
(X_1,X_2,...,X_n)$. A new measure, \emph{directed lautum information}, is also
introduced and interpreted in portfolio theory, data compression, and
hypothesis testing.
|
0912.4879
|
Similarity in Intension vs. in Extension: At the Crossroads of Computer
Science and Theater
|
cs.AI
|
Traditional staging is based on a formal approach to similarity that leans on
dramaturgical ontologies and instantiation variations. Inspired by interactive
data mining, which suggests different approaches, we give an overview of
research in computer science and theater that uses computers as partners of
the actor to escape the a priori specification of roles.
|
0912.4883
|
On Finding Predictors for Arbitrary Families of Processes
|
cs.LG cs.AI cs.IT math.IT math.ST stat.TH
|
The problem is sequence prediction in the following setting. A sequence
$x_1,...,x_n,...$ of discrete-valued observations is generated according to
some unknown probabilistic law (measure) $\mu$. After observing each outcome,
it is required to give the conditional probabilities of the next observation.
The measure $\mu$ belongs to an arbitrary but known class $C$ of stochastic
process measures. We are interested in predictors $\rho$ whose conditional
probabilities converge (in some sense) to the "true" $\mu$-conditional
probabilities if any $\mu\in C$ is chosen to generate the sequence. The
contribution of this work is in characterizing the families $C$ for which such
predictors exist, and in providing a specific and simple form in which to look
for a solution. We show that if any predictor works, then there exists a
Bayesian predictor, whose prior is discrete, and which works too. We also find
several sufficient and necessary conditions for the existence of a predictor,
in terms of topological characterizations of the family $C$, as well as in
terms of local behaviour of the measures in $C$, which in some cases lead to
procedures for constructing such predictors. It should be emphasized that the
framework is completely general: the stochastic processes considered are not
required to be i.i.d., stationary, or to belong to any parametric or countable
family.
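
A sketch of the discrete Bayesian predictor form referred to above, over a toy finite class $C$ of i.i.d. Bernoulli measures (the class and the uniform prior are illustrative):

```python
# Bayesian mixture over a small, illustrative class C of Bernoulli(theta)
# measures: the predicted conditional probability is the posterior-weighted
# average of the class members' conditionals.
thetas = [0.1, 0.3, 0.5, 0.7, 0.9]          # the class C (illustrative)
prior = [1.0 / len(thetas)] * len(thetas)   # discrete prior

def predict_next(seq, thetas, prior):
    """P(next symbol = 1 | seq) under the Bayesian mixture predictor."""
    posts = []
    for th, pr in zip(thetas, prior):
        lik = 1.0
        for s in seq:                        # likelihood of the observed bits
            lik *= th if s == 1 else (1.0 - th)
        posts.append(pr * lik)
    z = sum(posts)
    return sum(p / z * th for p, th in zip(posts, thetas))

print(predict_next([1, 1, 0, 1, 1, 1], thetas, prior))  # ~0.76: high P(1)
```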
|
0912.4884
|
An Invariance Principle for Polytopes
|
cs.CC cs.CG cs.DM cs.LG math.PR
|
Let $X$ be randomly chosen from $\{-1,1\}^n$, and let $Y$ be randomly chosen
from the standard spherical Gaussian on $\mathbb{R}^n$. For any (possibly
unbounded) polytope $P$ formed by the intersection of $k$ halfspaces, we prove
that $|\Pr[X \in P] - \Pr[Y \in P]| < \log^{8/5} k \cdot \Delta$, where
$\Delta$ is a parameter that is small for polytopes formed by the intersection
of "regular" halfspaces (i.e., halfspaces with low influence). The novelty of
our invariance principle is the polylogarithmic dependence on $k$. Previously,
only bounds that were at least linear in $k$ were known. We give two important
applications of our main result: (1) A polylogarithmic (in $k$) bound on the
Boolean noise sensitivity of intersections of $k$ "regular" halfspaces
(previous work gave bounds linear in $k$). (2) A pseudorandom generator (PRG)
with seed length $O((\log n) \cdot \mathrm{poly}(\log k, 1/\delta))$ that
$\delta$-fools all polytopes with $k$ faces with respect to the Gaussian
distribution. We also obtain PRGs with similar parameters that fool polytopes
formed by intersections of regular halfspaces over the hypercube. Using our
PRG constructions, we obtain the first deterministic quasi-polynomial time
algorithms for approximately counting the number of solutions to a broad class
of integer programs, including dense covering problems and contingency tables.
|
0912.4936
|
Genus Computing for 3D digital objects: algorithm and implementation
|
cs.CV cs.CG
|
This paper deals with computing topological invariants such as connected
components, boundary surface genus, and homology groups. For each input data
set, we have designed or implemented algorithms to calculate connected
components, boundary surfaces and their genus, and homology groups. Due to the
fact that genus calculation dominates the entire task for a 3D object in 3D
space, in this paper, we mainly discuss the calculation of the genus. The new
algorithms designed in this paper will perform:
(1) pathological cases detection and deletion, (2) raster space to point
space (dual space) transformation, (3) the linear time algorithm for boundary
point classification, and (4) genus calculation.
|
0912.4988
|
Sparse Recovery from Combined Fusion Frame Measurements
|
cs.IT math.IT
|
Sparse representations have emerged as a powerful tool in signal and
information processing, culminating in the success of new acquisition and
processing techniques such as Compressed Sensing (CS). Fusion frames are very
rich new signal representation methods that use collections of subspaces
instead of vectors to represent signals. This work combines these exciting
fields to introduce a new sparsity model for fusion frames. Signals that are
sparse under the new model can be compressively sampled and uniquely
reconstructed in ways similar to sparse signals using standard CS. The
combination provides a promising new set of mathematical tools and signal
models useful in a variety of applications. With the new model, a sparse signal
has energy in very few of the subspaces of the fusion frame, although it does
not need to be sparse within each of the subspaces it occupies. This sparsity
model is captured using a mixed l1/l2 norm for fusion frames.
A signal sparse in a fusion frame can be sampled using very few random
projections and exactly reconstructed using a convex optimization that
minimizes this mixed l1/l2 norm. The provided sampling conditions generalize
coherence and RIP conditions used in standard CS theory. It is demonstrated
that they are sufficient to guarantee sparse recovery of any signal sparse in
our model. Moreover, a probabilistic analysis is provided using a stochastic
model on the sparse signal that shows that under very mild conditions the
probability of recovery failure decays exponentially with increasing dimension
of the subspaces.
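
A minimal sketch of the mixed l1/l2 norm on a signal whose coefficients are grouped by subspace; the contiguous-block layout is an assumption made for illustration:

```python
import numpy as np

def mixed_l1_l2(x, subspace_dims):
    """Sum over subspaces of the l2 norm of the coefficients in each:
    small when energy sits in few subspaces (fusion-frame sparsity),
    with no sparsity required inside an occupied subspace.
    Contiguous-block coefficient layout assumed for illustration."""
    norms, start = [], 0
    for d in subspace_dims:
        norms.append(np.linalg.norm(x[start:start + d]))
        start += d
    return sum(norms)

x = np.concatenate([np.zeros(3), [1.0, -2.0, 0.5], np.zeros(3)])
print(mixed_l1_l2(x, [3, 3, 3]))  # only the middle subspace contributes
```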
|
0912.4991
|
Complexity Analysis of Unsaturated Flow in Heterogeneous Media Using a
Complex Network Approach
|
cs.CE physics.geo-ph
|
In this study, we investigate the complexity of two-phase (air/water) flow in
a heterogeneous soil sample using complex network theory, where the porous
medium is assumed non-deformable and subject to time-dependent gas pressure.
Based on different similarity measures (i.e., correlation and Euclidean
metrics) over the patterns that emerge from the evolution of the saturation of
the non-wetting phase in a multi-heterogeneous soil sample, the emergent
complex networks are identified. Understanding the properties of these complex
networks (such as the degree distribution, mean path length, and clustering
coefficient) offers a way to analyze the variation of saturation profile
structures (obtained as finite element solutions of the coupled PDEs), where
the complexity comes from the changing connections and links between the
assumed nodes. The evolution path of the system is also illustrated in the
state space of networks under both the correlation and Euclidean measures. The
analysis shows that in a closed system the constructed complex networks
approach a small-world network, where the mean path length is low and the
clustering coefficient is high. As another result, the evolution of the
macro-states of the system (such as the mean air velocity or pressure) can be
scaled with characteristics of the structural complexity of the saturation.
Finally, we attempt to find a phase transition criterion based on the
variation of the non-wetting phase velocity profiles over a network
constructed with the correlation distance.
|
0912.4995
|
1-State Error-Trellis Decoding of LDPC Convolutional Codes Based on
Circulant Matrices
|
cs.IT math.IT
|
We consider the decoding of convolutional codes using an error trellis
constructed based on a submatrix of a given check matrix. In the proposed
method, the syndrome-subsequence computed using the remaining submatrix is
utilized as auxiliary information for decoding. Then the ML error path is
correctly decoded using the degenerate error trellis. We also show that the
decoding complexity of the proposed method is essentially identical to that of
the conventional one based on the original error trellis. Next, we apply the
method to check matrices with monomial entries proposed by Tanner et al. By
choosing any row of the check matrix as the submatrix for error-trellis
construction, a 1-state error trellis is obtained. Noting the fact that a
likelihood-concentration on the all-zero state and the states with many 0's
occurs in the error trellis, we present a simplified decoding method based on a
1-state error trellis, from which decoding-complexity reduction is realized.
|
0912.5009
|
The MacWilliams Theorem for Four-Dimensional Modulo Metrics
|
cs.IT cs.DM math.IT
|
In this paper, the MacWilliams theorem is stated for codes over a finite field
with four-dimensional modulo metrics.
|
0912.5029
|
Complexity of stochastic branch and bound methods for belief tree search
in Bayesian reinforcement learning
|
cs.LG cs.AI
|
There has been a lot of recent work on Bayesian methods for reinforcement
learning exhibiting near-optimal online performance. The main obstacle facing
such methods is that in most problems of interest, the optimal solution
involves planning in an infinitely large tree. However, it is possible to
obtain stochastic lower and upper bounds on the value of each tree node. This
enables us to use stochastic branch and bound algorithms to search the tree
efficiently. This paper proposes two such algorithms and examines their
complexity in this setting.
|
0912.5043
|
Bit Error Rate is Convex at High SNR
|
cs.IT math.IT
|
Motivated by the widespread use of convex optimization techniques, convexity
properties of bit error rate of the maximum likelihood detector operating in
the AWGN channel are studied for arbitrary constellations and bit mappings,
which may also include coding under maximum-likelihood decoding. Under this
generic setting, the pairwise probability of error and bit error rate are shown
to be convex functions of the SNR in the high SNR regime with
explicitly-determined boundary. The bit error rate is also shown to be a convex
function of the noise power in the low noise/high SNR regime.
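
A numeric illustration for the simplest instance, BPSK, whose BER is $Q(\sqrt{2\,\mathrm{SNR}})$; the positive second differences below are consistent with convexity in SNR (a sanity check, not the paper's proof):

```python
import math

def ber_bpsk(snr):
    """BER of ML-detected BPSK in AWGN: Q(sqrt(2*snr)) = erfc(sqrt(snr))/2."""
    return 0.5 * math.erfc(math.sqrt(snr))

# Numeric second difference of BER in SNR: positive => locally convex.
for snr_db in (0, 5, 10, 15):
    s = 10 ** (snr_db / 10)
    h = 1e-3 * s
    d2 = (ber_bpsk(s + h) - 2 * ber_bpsk(s) + ber_bpsk(s - h)) / h**2
    print(f"SNR = {snr_db:2d} dB: second difference = {d2:.3e}")
```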
|
0912.5055
|
Rateless Codes for Single-Server Streaming to Diverse Users
|
cs.IT math.IT
|
We investigate the performance of rateless codes for single-server streaming
to diverse users, assuming that diversity in users is present not only because
they have different channel conditions, but also because they demand different
amounts of information and have different decoding capabilities. The LT
encoding scheme is employed. While some users accept output symbols of all
degrees and decode using belief propagation, others only collect degree-1
output symbols and run no decoding algorithm. We propose several performance
measures, and optimize the performance of the rateless code used at the server
through the design of the code degree distribution. Optimization problems are
formulated for the asymptotic regime and solved as linear programming problems.
Optimized performance shows great improvement in total bandwidth consumption
over using the conventional ideal soliton distribution, or simply sending
separately encoded streams to different types of user nodes. Simulation
experiments confirm the usability of the optimization results obtained for the
asymptotic regime as a guideline for finite-length code design.
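
A sketch of the baseline LT encoder with the conventional ideal soliton distribution, i.e. the distribution the optimized designs are measured against; the alphabet and sizes are illustrative:

```python
import random

def ideal_soliton(k):
    """Ideal soliton distribution on degrees {1, ..., k}:
    rho(1) = 1/k, rho(d) = 1/(d*(d-1)) for d >= 2.
    This is the baseline the optimized degree distributions improve on."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_symbol(source, dist, rng):
    """Draw a degree from dist, XOR that many distinct source symbols."""
    degree = rng.choices(range(1, len(source) + 1), weights=dist, k=1)[0]
    neighbours = rng.sample(range(len(source)), degree)
    value = 0
    for i in neighbours:
        value ^= source[i]
    return degree, neighbours, value         # degree-1 symbols are plain copies

rng = random.Random(1)
source = [rng.randrange(256) for _ in range(16)]
print(lt_encode_symbol(source, ideal_soliton(len(source)), rng))
```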
|
0912.5073
|
A Rational Decision Maker with Ordinal Utility under Uncertainty:
Optimism and Pessimism
|
cs.AI cs.GT
|
In game theory and artificial intelligence, decision making models often
involve maximizing expected utility, which does not respect ordinal invariance.
In this paper, the author discusses the possibility of preserving ordinal
invariance and still making a rational decision under uncertainty.
|
0912.5079
|
A Lower Bound on the Complexity of Approximating the Entropy of a Markov
Source
|
cs.IT math.IT
|
Suppose that, for any $k \geq 1$, $\epsilon > 0$ and sufficiently large
$\sigma$, we are given a black box that allows us to sample characters from a
$k$th-order Markov source over the alphabet $\{0, ..., \sigma - 1\}$. Even if
we know the source has entropy either 0 or at least $\log (\sigma - k)$, there
is still no algorithm that, with probability bounded away from $1 / 2$, guesses
the entropy correctly after sampling at most $(\sigma - k)^{k / 2 - \epsilon}$
characters.
|
0912.5176
|
On the deletion channel with small deletion probability
|
cs.IT math.IT
|
The deletion channel is the simplest point-to-point communication channel
that models lack of synchronization. Despite significant effort, little is
known about its capacity, and even less about optimal coding schemes. In this
paper we initiate a new systematic approach to this problem, by demonstrating
that capacity can be computed in a series expansion for small deletion
probability. We compute two leading terms of this expansion, and show that
capacity is achieved, up to this order, by i.i.d. uniform random distribution
of the input. We think that this strategy can be useful in a number of capacity
calculations.
|
0912.5187
|
Statistical Complexity in Traveling Densities
|
nlin.PS cs.IT math.IT quant-ph
|
In this work, we analyze the behavior of statistical complexity in several
systems where two identical densities that travel in opposite direction cross
each other. The crossing between two Gaussian, rectangular and triangular
densities is studied in detail. For these three cases, the shape of the total
density presenting an extreme value in complexity is found.
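
One common instance of "statistical complexity" is the LMC measure $C = H \cdot D$ (normalized Shannon entropy times disequilibrium); treating it as the measure in question here is an assumption. A sketch for two crossing Gaussian densities at varying separation:

```python
import numpy as np

def lmc_complexity(p):
    """LMC statistical complexity C = H * D of a discretized density:
    H is the normalized Shannon entropy, D the disequilibrium (squared
    distance from uniform). One common complexity measure; the paper
    may use a different one."""
    p = p / p.sum()
    n = len(p)
    H = -np.sum(p * np.log(p)) / np.log(n)
    D = np.sum((p - 1.0 / n) ** 2)
    return H * D

x = np.linspace(-10, 10, 2001)
for sep in (0.0, 1.0, 2.0, 4.0):   # half the distance between the two peaks
    total = np.exp(-(x - sep) ** 2 / 2) + np.exp(-(x + sep) ** 2 / 2)
    print(f"peak separation {2 * sep}: C = {lmc_complexity(total):.5f}")
```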
|
0912.5193
|
Ranking relations using analogies in biological and information networks
|
stat.ME cs.LG physics.soc-ph q-bio.QM stat.AP
|
Analogical reasoning depends fundamentally on the ability to learn and
generalize about relations between objects. We develop an approach to
relational learning which, given a set of pairs of objects
$\mathbf{S}=\{A^{(1)}:B^{(1)},A^{(2)}:B^{(2)},\ldots,A^{(N)}:B^{(N)}\}$,
measures how well other pairs A:B fit in with the set $\mathbf{S}$. Our work
addresses the following question: is the relation between objects A and B
analogous to those relations found in $\mathbf{S}$? Such questions are
particularly relevant in information retrieval, where an investigator might
want to search for analogous pairs of objects that match the query set of
interest. There are many ways in which objects can be related, making the task
of measuring analogies very challenging. Our approach combines a similarity
measure on function spaces with Bayesian analysis to produce a ranking. It
requires data containing features of the objects of interest and a link matrix
specifying which relationships exist; no further attributes of such
relationships are necessary. We illustrate the potential of our method on text
analysis and information networks. An application on discovering functional
interactions between pairs of proteins is discussed in detail, where we show
that our approach can work in practice even if a small set of protein pairs is
provided.
|
0912.5235
|
Using Multipartite Graphs for Recommendation and Discovery
|
astro-ph.IM cs.DL cs.IR physics.soc-ph
|
The Smithsonian/NASA Astrophysics Data System exists at the nexus of a dense
system of interacting and interlinked information networks. The syntactic and
the semantic content of this multipartite graph structure can be combined to
provide very specific research recommendations to the scientist/user.
|
0912.5241
|
Believe It or Not: Adding Belief Annotations to Databases
|
cs.DB cs.AI
|
We propose a database model that allows users to annotate data with belief
statements. Our motivation comes from scientific database applications where a
community of users is working together to assemble, revise, and curate a shared
data repository. As the community accumulates knowledge and the database
content evolves over time, it may contain conflicting information and members
can disagree on the information it should store. For example, Alice may believe
that a tuple should be in the database, whereas Bob disagrees. He may also
insert the reason why he thinks Alice believes the tuple should be in the
database, and explain what he thinks the correct tuple should be instead.
We propose a formal model for Belief Databases that interprets users'
annotations as belief statements. These annotations can refer both to the base
data and to other annotations. We give a formal semantics based on a fragment
of multi-agent epistemic logic and define a query language over belief
databases. We then prove a key technical result, stating that every belief
database can be encoded as a canonical Kripke structure. We use this structure
to describe a relational representation of belief databases, and give an
algorithm for translating queries over the belief database into standard
relational queries. Finally, we report early experimental results with our
prototype implementation on synthetic data.
|
0912.5287
|
Uniqueness theorem for analytic functions and its application in
denoising problem
|
cs.IT math.IT
|
In various applications the problem of separation of the original signal and
the noise arises. For example, in the identification problem for discrete
linear and causal systems, the original signal consists of the values of
transfer function at some points in the unit disk. In this paper we discuss the
problem of choosing the points in the unit disk for which it is possible to
remove the additive noise with probability one. Since the transfer function is
analytic in the unit disk, this problem is related to the uniqueness
theorems for analytic functions. Here we give a new uniqueness result for
bounded analytic functions and show its applications in the denoising problem.
|
0912.5340
|
Why so? or Why no? Functional Causality for Explaining Query Answers
|
cs.DB cs.AI
|
In this paper, we propose causality as a unified framework to explain query
answers and non-answers, thus generalizing and extending several previously
proposed approaches of provenance and missing query result explanations.
We develop our framework starting from the well-studied definition of actual
causes by Halpern and Pearl. After identifying some undesirable characteristics
of the original definition, we propose functional causes as a refined
definition of causality with several desirable properties. These properties
allow us to apply our notion of causality in a database context and apply it
uniformly to define the causes of query results and their individual
contributions in several ways: (i) we can model both provenance as well as
non-answers, (ii) we can define explanations as either data in the input
relations or relational operations in a query plan, and (iii) we can give
graded degrees of responsibility to individual causes, thus allowing us to rank
causes. In particular, our approach allows us to explain contributions to
relational aggregate functions and to rank causes according to their respective
responsibilities. We give complexity results and describe polynomial algorithms
for evaluating causality in tractable cases. Throughout the paper, we
illustrate the applicability of our framework with several examples.
Overall, we develop in this paper the theoretical foundations of causality
theory in a database context.
|
0912.5353
|
Diversity-Multiplexing-Delay Tradeoffs in MIMO Multihop Networks with
ARQ
|
cs.IT math.IT
|
Tradeoff in diversity, multiplexing, and delay in multihop MIMO relay
networks with ARQ is studied, where the random delay is caused by queueing and
ARQ retransmission. This leads to an optimal ARQ allocation problem with
per-hop delay or end-to-end delay constraint. The optimal ARQ allocation has to
trade off between the ARQ error that the receiver fails to decode in the
allocated maximum ARQ rounds and the packet loss due to queueing delay. These
two error probabilities are characterized using the
diversity-multiplexing-delay tradeoff (DMDT) (without queueing) and the tail
probability of random delay derived using large deviation techniques,
respectively. Then the optimal ARQ allocation problem can be formulated as a
convex optimization problem. We show that the optimal ARQ allocation should
balance the performance of each link as well as avoid significant queueing
delay, which is also demonstrated by numerical examples.
|
0912.5410
|
A survey of statistical network models
|
stat.ME cs.LG physics.soc-ph q-bio.MN stat.ML
|
Networks are ubiquitous in science and have become a focal point for
discussion in everyday life. Formal statistical models for the analysis of
network data have emerged as a major topic of interest in diverse areas of
study, and most of these involve a form of graphical representation.
Probability models on graphs date back to 1959. Along with empirical studies in
social psychology and sociology from the 1960s, these early works generated an
active network community and a substantial literature in the 1970s. This effort
moved into the statistical literature in the late 1970s and 1980s, and the past
decade has seen a burgeoning network literature in statistical physics and
computer science. The growth of the World Wide Web and the emergence of online
networking communities such as Facebook, MySpace, and LinkedIn, and a host of
more specialized professional network communities has intensified interest in
the study of networks and network data. Our goal in this review is to provide
the reader with an entry point to this burgeoning literature. We begin with an
overview of the historical development of statistical network modeling and then
we introduce a number of examples that have been studied in the network
literature. Our subsequent discussion focuses on a number of prominent static
and dynamic network models and their interconnections. We emphasize formal
model descriptions, and pay special attention to the interpretation of
parameters and their estimation. We end with a description of some open
problems and challenges for machine learning and statistics.
|
0912.5426
|
The Hardness and Approximation Algorithms for L-Diversity
|
cs.DB
|
The existing solutions to privacy preserving publication can be classified
into the theoretical and heuristic categories. The former guarantees provably
low information loss, whereas the latter incurs gigantic loss in the worst
case, but is shown empirically to perform well on many real inputs. While
numerous heuristic algorithms have been developed to satisfy advanced privacy
principles such as l-diversity, t-closeness, etc., the theoretical category is
currently limited to k-anonymity which is the earliest principle known to have
severe vulnerability to privacy attacks. Motivated by this, we present the
first theoretical study on l-diversity, a popular principle that is widely
adopted in the literature. First, we show that optimal l-diverse generalization
is NP-hard even when there are only 3 distinct sensitive values in the
microdata. Then, an (l*d)-approximation algorithm is developed, where d is the
dimensionality of the underlying dataset. This is the first known algorithm
with a non-trivial bound on information loss. Extensive experiments with real
datasets validate the effectiveness and efficiency of the proposed solution.
|
0912.5434
|
A Complete Theory of Everything (will be subjective)
|
cs.IT astro-ph.CO math.IT physics.pop-ph
|
Increasingly encompassing models have been suggested for our world. Theories
range from generally accepted to increasingly speculative to apparently bogus.
The progression of theories from ego- to geo- to helio-centric models to
universe and multiverse theories and beyond was accompanied by a dramatic
increase in the sizes of the postulated worlds, with humans being expelled from
their center to ever more remote and random locations. Rather than leading to a
true theory of everything, this trend faces a turning point after which the
predictive power of such theories decreases (actually to zero). Incorporating
the location and other capacities of the observer into such theories avoids
this problem and makes it possible to distinguish meaningful from predictively
meaningless
theories. This also leads to a truly complete theory of everything consisting
of a (conventional objective) theory of everything plus a (novel subjective)
observer process. The observer localization is neither based on the
controversial anthropic principle, nor has it anything to do with the
quantum-mechanical observation process. The suggested principle is extended to
more practical (partial, approximate, probabilistic, parametric) world models
(rather than theories of everything). Finally, I provide a justification of
Ockham's razor, and criticize the anthropic principle, the doomsday argument,
the no free lunch theorem, and the falsifiability dogma.
|
0912.5449
|
Time and Memory Efficient Lempel-Ziv Compression Using Suffix Arrays
|
cs.DS cs.IT math.IT
|
The well-known dictionary-based algorithms of the Lempel-Ziv (LZ) 77 family
are the basis of several universal lossless compression techniques. These
algorithms are asymmetric regarding encoding/decoding time and memory
requirements, with the former being much more demanding. In the past years,
considerable attention has been devoted to the problem of finding efficient
data structures to support these searches, aiming at optimizing the encoders in
terms of speed and memory. Hash tables, binary search trees and suffix trees
have been widely used for this purpose, as they allow fast search at the
expense of memory. Some recent research has focused on suffix arrays (SA), due
to their low memory requirements and linear construction algorithms. Previous
work has shown how the LZ77 decomposition can be computed using a single SA or
an SA with an auxiliary array with the longest common prefix information. The
SA-based algorithms use less memory than the tree-based encoders, allocating
the strictly necessary amount of memory, regardless of the contents of the text
to search/encode. In this paper, we improve on previous work by proposing
faster SA-based algorithms for LZ77 encoding and sub-string search, keeping
their low memory requirements. For some compression settings, on a large set of
benchmark files, our low-memory SA-based encoders are also faster than
tree-based encoders. This provides time and memory efficient LZ77 encoding,
being a possible replacement for trees on well known encoders like LZMA. Our
algorithm is also suited for text classification, because it provides a compact
way to describe text in a bag-of-words representation, as well as a fast
indexing mechanism that makes it possible to quickly find all the sets of words that start
with a given symbol, over a static dictionary.
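
A didactic sketch of the two objects involved: a (naive, quadratic-time) suffix array and a greedy LZ77 decomposition. The paper's point is doing the factor search efficiently through the SA; the brute-force search below only makes the decomposition itself concrete:

```python
def suffix_array(s):
    """Naive suffix array: sort all suffixes. Fine for a sketch; real
    SA-based encoders use linear-time construction algorithms."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def lz77_factorize(s):
    """Greedy LZ77: at each position emit the longest factor occurring
    earlier as (offset, length), or a literal if none exists."""
    factors, i = [], 0
    while i < len(s):
        best_len, best_pos = 0, -1
        for j in range(i):                   # brute force; the SA replaces this
            length = 0
            while i + length < len(s) and j + length < i \
                    and s[j + length] == s[i + length]:
                length += 1
            if length > best_len:
                best_len, best_pos = length, j
        if best_len == 0:
            factors.append(("lit", s[i]))
            i += 1
        else:
            factors.append((i - best_pos, best_len))
            i += best_len
    return factors

print(suffix_array("banana"))          # [5, 3, 1, 0, 4, 2]
print(lz77_factorize("abababcabab"))
```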
|
0912.5456
|
From a Link Semantic to Semantic Links - Building Context in Educational
Hypermedia
|
cs.IR cs.NI
|
Modularization and granulation are key concepts in educational content
management, whereas teaching, learning and understanding require a discourse
within thematic contexts. Even though hyperlinks and semantically typed
references provide the context building blocks of hypermedia systems, elaborate
concepts to derive, manage and propagate such relations between content objects
are not around at present. Based on Semantic Web standards, this paper makes
several contributions to content enrichment. Work starts from harvesting
multimedia annotations in classroom recordings, and proceeds to deriving a
dense educational semantic net between eLearning Objects decorated with
extended LOM relations. Special focus is drawn on the processing of recorded
speech and on an Ontological Evaluation Layer that autonomously derives
meaningful inter-object relations. Further on, a semantic representation of
hyperlinks is developed and elaborated into the concept of semantic link
contexts, an approach to managing a coherent rhetoric of linking. These solutions
have been implemented in the Hypermedia Learning Objects System (hylOs), our
eLearning content management system. hylOs is built upon the more general Media
Information Repository (MIR) and the MIR adaptive context linking environment
(MIRaCLE), its linking extension. MIR is an open system supporting the
standards XML and JNDI. hylOs benefits from configurable information
structures, sophisticated access logic and high-level authoring tools like the
WYSIWYG XML editor and its Instructional Designer.
|
0912.5502
|
Writer Identification Using Inexpensive Signal Processing Techniques
|
cs.CV
|
We propose to use novel and classical audio and text signal-processing
techniques, among others, for "inexpensive" fast writer identification tasks
on scanned hand-written documents, treated "visually". Here "inexpensive"
refers to the efficiency of the identification process in terms of CPU cycles
while preserving decent accuracy for preliminary identification. This is a
comparative study of multiple algorithm combinations in a pattern recognition
pipeline implemented in Java around the open-source Modular Audio Recognition
Framework (MARF), which can do much more than audio. We present our
preliminary experimental findings on such an identification task. We simulate
"visual" identification by "looking" at the hand-written document as a whole
rather than trying to extract fine-grained features from it prior to
classification.
|
0912.5511
|
A general approach to belief change in answer set programming
|
cs.AI
|
We address the problem of belief change in (nonmonotonic) logic programming
under answer set semantics. Unlike previous approaches to belief change in
logic programming, our formal techniques are analogous to those of
distance-based belief revision in propositional logic. In developing our
results, we build upon the model theory of logic programs furnished by SE
models. Since SE models provide a formal, monotonic characterisation of logic
programs, we can adapt techniques from the area of belief revision to belief
change in logic programs. We introduce methods for revising and merging logic
programs, respectively. For the former, we study both subset-based revision as
well as cardinality-based revision, and we show that they satisfy the majority
of the AGM postulates for revision. For merging, we consider operators
following arbitration merging and IC merging, respectively. We also present
encodings for computing the revision as well as the merging of logic programs
within the same logic programming framework, giving rise to a direct
implementation of our approach in terms of off-the-shelf answer set solvers.
These encodings reflect in turn the fact that our change operators do not
increase the complexity of the base formalism.
|
0912.5533
|
Oriented Straight Line Segment Algebra: Qualitative Spatial Reasoning
about Oriented Objects
|
cs.AI
|
Nearly 15 years ago, a set of qualitative spatial relations between oriented
straight line segments (dipoles) was suggested by Schlieder. This work received
substantial interest amongst the qualitative spatial reasoning community.
However, it turned out to be difficult to establish a sound constraint calculus
based on these relations. In this paper, we present the results of a new
investigation into dipole constraint calculi which uses algebraic methods to
derive sound results on the composition of relations and other properties of
dipole calculi. Our results are based on a condensed semantics of the dipole
relations.
In contrast to the points that are normally used, dipoles are extended and
have an intrinsic direction. Both features are important properties of natural
objects. This allows for a straightforward representation of prototypical
reasoning tasks for spatial agents. As an example, we show how to generate
survey knowledge from local observations in a street network. The example
illustrates the fast constraint-based reasoning capabilities of the dipole
calculus. We integrate our results into two reasoning tools which are publicly
available.
|
0912.5537
|
Quantum Reverse Shannon Theorem
|
quant-ph cs.IT math.IT
|
Dual to the usual noisy channel coding problem, where a noisy (classical or
quantum) channel is used to simulate a noiseless one, reverse Shannon theorems
concern the use of noiseless channels to simulate noisy ones, and more
generally the use of one noisy channel to simulate another. For channels of
nonzero capacity, this simulation is always possible, but for it to be
efficient, auxiliary resources of the proper kind and amount are generally
required. In the classical case, shared randomness between sender and receiver
is a sufficient auxiliary resource, regardless of the nature of the source, but
in the quantum case the requisite auxiliary resources for efficient simulation
depend on both the channel being simulated, and the source from which the
channel inputs are coming. For tensor power sources (the quantum generalization
of classical IID sources), entanglement in the form of standard ebits
(maximally entangled pairs of qubits) is sufficient, but for general sources,
which may be arbitrarily correlated or entangled across channel inputs,
additional resources, such as entanglement-embezzling states or backward
communication, are generally needed. Combining existing and new results, we
establish the amounts of communication and auxiliary resources needed in both
the classical and quantum cases, the tradeoffs among them, and the loss of
simulation efficiency when auxiliary resources are absent or insufficient. In
particular we find a new single-letter expression for the excess forward
communication cost of coherent feedback simulations of quantum channels (i.e.
simulations in which the sender retains what would escape into the environment
in an ordinary simulation), on non-tensor-power sources in the presence of
unlimited ebits but no other auxiliary resource. Our results on tensor power
sources establish a strong converse to the entanglement-assisted capacity
theorem.
|
1001.0001
|
On the structure of non-full-rank perfect codes
|
cs.IT math.IT
|
The Krotov combining construction of perfect 1-error-correcting binary codes
from 2000 and a theorem of Heden saying that every non-full-rank perfect
1-error-correcting binary code can be constructed by this combining
construction are generalized to the $q$-ary case. Simply, every non-full-rank
perfect code $C$ is the union of a well-defined family of $\mu$-components
$K_\mu$, where $\mu$ belongs to an "outer" perfect code $C^*$, and these
components are at distance three from each other. Components from distinct
codes can thus freely be combined to obtain new perfect codes. The Phelps
general product construction of perfect binary codes from 1984 is generalized to
obtain $\mu$-components, and new lower bounds on the number of perfect
1-error-correcting $q$-ary codes are presented.
|
1001.0036
|
The Computational Structure of Spike Trains
|
q-bio.NC cs.IT math.IT nlin.AO physics.data-an stat.ML
|
Neurons perform computations, and convey the results of those computations
through the statistical structure of their output spike trains. Here we present
a practical method, grounded in the information-theoretic analysis of
prediction, for inferring a minimal representation of that structure and for
characterizing its complexity. Starting from spike trains, our approach finds
their causal state models (CSMs), the minimal hidden Markov models or
stochastic automata capable of generating statistically identical time series.
We then use these CSMs to objectively quantify both the generalizable structure
and the idiosyncratic randomness of the spike train. Specifically, we show that
the expected algorithmic information content (the information needed to
describe the spike train exactly) can be split into three parts describing (1)
the time-invariant structure (complexity) of the minimal spike-generating
process, which describes the spike train statistically; (2) the randomness
(internal entropy rate) of the minimal spike-generating process; and (3) a
residual pure noise term not described by the minimal spike-generating process.
We use CSMs to approximate each of these quantities. The CSMs are inferred
nonparametrically from the data, making only mild regularity assumptions, via
the causal state splitting reconstruction algorithm. The methods presented here
complement more traditional spike train analyses by describing not only spiking
probability and spike train entropy, but also the complexity of a spike train's
structure. We demonstrate our approach using both simulated spike trains and
experimental data recorded in rat barrel cortex during vibrissa stimulation.
|
1001.0054
|
Cryptographic Implications for Artificially Mediated Games
|
cs.CR cs.AI cs.GT
|
There is currently an intersection in the research of game theory and
cryptography. Generally speaking, there are two aspects to this partnership.
First there is the application of game theory to cryptography. Yet, the purpose
of this paper is to focus on the second aspect, the converse of the first, the
application of cryptography to game theory. Chiefly, there exists a branch of
non-cooperative games which have a correlated equilibrium as their solution.
These equilibria tend to be superior to the conventional Nash equilibria. The
primary condition for a correlated equilibrium is the presence of a mediator
within the game. This is simply a neutral and mutually trusted entity. It is
the role of the mediator to make recommendations in terms of strategy profiles
to all players, who then act (supposedly) on this advice. Each party privately
provides the mediator with the necessary information, and the referee responds
privately with their optimized strategy set. However, there seem to be a
multitude of situations in which no mediator could exist. Thus, games modeling
these sorts of cases could not use these entities as tools for analysis. Yet,
if these equilibria are in the best interest of players, it would be rational
to construct a machine, or protocol, to calculate them. Of course, this machine
would need to satisfy some standard for secure transmission between a player
and itself. The requirement that no third party could detect either the input
or strategy profile would need to be satisfied by this scheme. Here is the
synthesis of cryptography into game theory; analyzing the ability of the
players to construct a protocol which can be used successfully in the place of
a mediator.
|
1001.0063
|
On a Model for Integrated Information
|
cs.AI
|
In this paper we give a thorough presentation of a model proposed by Tononi
et al. for modeling \emph{integrated information}, i.e. how much information is
generated in a system transitioning from one state to the next one by the
causal interaction of its parts and \emph{above and beyond} the information
given by the sum of its parts. We also provide a more general formulation of
such a model, independent of the time chosen for the analysis and of the
uniformity of the probability distribution at the initial time instant.
Finally, we prove that integrated information is null for disconnected systems.
|
1001.0080
|
Non-line-of-sight Node Localization based on Semi-Definite Programming
in Wireless Sensor Networks
|
cs.IT math.IT
|
An unknown-position sensor can be localized if there are three or more
anchors making time-of-arrival (TOA) measurements of a signal from it. However,
the location errors can be very large due to the fact that some of the
measurements are from non-line-of-sight (NLOS) paths. In this paper, we propose
a semi-definite programming (SDP) based node localization algorithm in NLOS
environment for ultra-wideband (UWB) wireless sensor networks. The positions of
sensors can be estimated using the distance estimates from location-aware
anchors as well as other sensors. However, in the absence of LOS paths, e.g.,
in indoor networks, the NLOS range estimates can be significantly biased. As a
result, the NLOS error can significantly decrease the location accuracy, and
it is not easy to efficiently distinguish LOS from NLOS measurements. In this
paper, an algorithm is proposed that achieves high location accuracy without
the need to identify NLOS and LOS measurements.
|
1001.0107
|
Exact Regeneration Codes for Distributed Storage Repair Using
Interference Alignment
|
cs.IT math.IT
|
The high repair cost of (n,k) Maximum Distance Separable (MDS) erasure codes
has recently motivated a new class of codes, called Regenerating Codes, that
optimally trade off storage cost for repair bandwidth. On one end of this
spectrum of Regenerating Codes are Minimum Storage Regenerating (MSR) codes
that can match the minimum storage cost of MDS codes while also significantly
reducing repair bandwidth. In this paper, we describe Exact-MSR codes which
allow for any failed nodes (whether they are systematic or parity nodes) to be
regenerated exactly rather than only functionally or information-equivalently.
We show that Exact-MSR codes come with no loss of optimality with respect to
random-network-coding based MSR codes (matching the cutset-based lower bound on
repair bandwidth) for the cases of: (a) k/n <= 1/2; and (b) k <= 3. Our
constructive approach is based on interference alignment techniques, and,
unlike the previous class of random-network-coding based approaches, we provide
explicit and deterministic coding schemes that require a finite-field size of
at most 2(n-k).
|
1001.0115
|
Developing Artificial Herders Using Jason
|
cs.MA
|
This paper gives an overview of a proposed strategy for the "Cows and
Herders" scenario given in the Multi-Agent Programming Contest 2009. The
strategy is to be implemented using the Jason platform, based on the
agent-oriented programming language Agent-Speak. The paper describes the
agents, their goals and the strategies they should follow. The basis for the
paper and for participating in the contest is a new course given in spring 2009
and our main objective is to show that we are able to implement complex
multi-agent systems with the knowledge gained in an introductory course on
multi-agent systems.
|
1001.0167
|
Position Modulation Code for Rewriting Write-Once Memories
|
cs.IT math.IT
|
A write-once memory (WOM) is a storage medium formed by a number of
"write-once" bit positions (wits), where each wit is initially in a '0' state
and can be changed to a '1' state irreversibly. Examples of write-once memories
include SLC flash memories and optical disks. This paper presents a low
complexity coding scheme for rewriting such write-once memories, which is
applicable to general problem configurations. The proposed scheme is called the
\emph{position modulation code}, as it uses the positions of the zero symbols
to encode some information. The proposed technique can achieve code rates
higher than state-of-the-art practical solutions for some configurations. For
instance, there is a position modulation code that can write 56 bits 10 times
on 278 wits, achieving rate 2.01. In addition, the position modulation code is
shown to achieve a rate at least half of the optimal rate.
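As a quick sanity check on the quoted figures (illustrative arithmetic only, not the paper's construction), the rate of a rewriting code is the total number of bits written over the code's lifetime divided by the number of wits:

```python
# Rate of the example position modulation code quoted above:
# 56 bits written in each of 10 rewrites on 278 write-once bits.
bits_per_write, num_writes, num_wits = 56, 10, 278
rate = bits_per_write * num_writes / num_wits
print(f"rate = {rate:.2f}")  # -> 2.01 bits per wit over the code's lifetime
```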
|
1001.0210
|
Achieving the Secrecy Capacity of Wiretap Channels Using Polar Codes
|
cs.IT cs.CR math.IT
|
Suppose Alice wishes to send messages to Bob through a communication channel
C_1, but her transmissions also reach an eavesdropper Eve through another
channel C_2. The goal is to design a coding scheme that makes it possible for
Alice to communicate both reliably and securely. Reliability is measured in
terms of Bob's probability of error in recovering the message, while security
is measured in terms of Eve's equivocation ratio. Wyner showed that the
situation is characterized by a single constant C_s, called the secrecy
capacity, which has the following meaning: for all $\epsilon > 0$, there exist
coding schemes of rate $R \ge C_s - \epsilon$ that asymptotically achieve both
the reliability and the security objectives. However, his proof of this result
is based upon a nonconstructive random-coding argument. To date, despite a
considerable research effort, the only case where we know how to construct
coding schemes that achieve secrecy capacity is when Eve's channel C_2 is an
erasure channel, or a combinatorial variation thereof.
Polar codes were recently invented by Arikan; they approach the capacity of
symmetric binary-input discrete memoryless channels with low encoding and
decoding complexity. Herein, we use polar codes to construct a coding scheme
that achieves the secrecy capacity for a wide range of wiretap channels. Our
construction works for any instantiation of the wiretap channel model, as long
as both C_1 and C_2 are symmetric and binary-input, and C_2 is degraded with
respect to C_1. Moreover, we show how to modify our construction in order to
provide strong security, in the sense defined by Maurer, while still operating
at a rate that approaches the secrecy capacity. In this case, we cannot
guarantee that the reliability condition will be satisfied unless the main
channel C_1 is noiseless, although we believe it can always be satisfied in
practice.
|
1001.0282
|
Robust Image Watermarking in the Wavelet Domain for Copyright Protection
|
cs.IT cs.CR math.IT
|
In this paper a new approach to image watermarking in wavelet domain is
presented. The idea is to hide the watermark data in blocks of the block
segmented image. Two schemes are presented based on this idea by embedding the
watermark data in the low pass wavelet coefficients of each block. Due to low
computational complexity of the proposed approach, this algorithm can be
implemented in real time. Experimental results demonstrate the
imperceptibility of the proposed method and its high robustness against
various attacks such as filtering, JPEG compression, cropping, noise addition
and geometric distortions.
|
1001.0339
|
Tight oracle bounds for low-rank matrix recovery from a minimal number
of random measurements
|
cs.IT math.IT
|
This paper presents several novel theoretical results regarding the recovery
of a low-rank matrix from just a few measurements consisting of linear
combinations of the matrix entries. We show that properly constrained
nuclear-norm minimization stably recovers a low-rank matrix from a constant
number of noisy measurements per degree of freedom; this seems to be the first
result of this nature. Further, the recovery error from noisy data is within a
constant of three targets: 1) the minimax risk, 2) an oracle error that would
be available if the column space of the matrix were known, and 3) a more
adaptive oracle error which would be available with the knowledge of the column
space corresponding to the part of the matrix that stands above the noise.
Lastly, the error bounds regarding low-rank matrices are extended to provide an
error bound when the matrix has full rank with decaying singular values. The
analysis in this paper is based on the restricted isometry property (RIP)
introduced in [6] for vectors, and in [22] for matrices.
|
1001.0346
|
Protocol design and stability/delay analysis of half-duplex buffered
cognitive relay systems
|
cs.IT math.IT
|
In this paper, we quantify the benefits of employing relay station in
large-coverage cognitive radio systems which opportunistically access the
licensed spectrum of some small-coverage primary systems scattered inside.
Through analytical study, we show that even a simple decode-and-forward (SDF)
relay, which can hold only one packet, offers significant path-loss gain in
terms of the spatial transmission opportunities and link reliability. However,
such a scheme fails to capture the spatial-temporal burstiness of the primary
activities, that is, when either the source-relay (SR) link or
relay-destination (RD) link is blocked by the primary activities, the cognitive
spectrum access has to stop. To overcome this obstacle, we further propose
buffered decode-and-forward (BDF) protocol. By exploiting the infinitely long
buffer at the relay, the blockage time on either SR or RD link is saved for
cognitive spectrum access. The buffer gain is shown analytically to improve the
stability region and average end-to-end delay performance of the cognitive
relay system.
|
1001.0357
|
Orthogonal vs Non-Orthogonal Multiple Access with Finite Input Alphabet
and Finite Bandwidth
|
cs.IT math.IT
|
For a two-user Gaussian multiple access channel (GMAC), frequency division
multiple access (FDMA), a well known orthogonal-multiple-access (O-MA) scheme
has been preferred to non-orthogonal-multiple-access (NO-MA) schemes since FDMA
can achieve the sum-capacity of the channel with only single-user decoding
complexity [\emph{Chapter 14, Elements of Information Theory by Cover and
Thomas}]. In this paper, however, we show that with finite alphabets NO-MA is
better than O-MA for a two-user GMAC. We plot the constellation constrained
(CC) capacity regions of a two-user GMAC with FDMA and time division multiple
access (TDMA) and compare them with the CC capacity regions with trellis coded
multiple access (TCMA), a recently introduced NO-MA scheme. Unlike the Gaussian
alphabets case, it is shown that the CC capacity region with FDMA is strictly
contained inside the CC capacity region with TCMA. In particular, for a given
bandwidth, the gap between the CC capacity regions with TCMA and FDMA is shown
to increase with the increase in the average power constraint. Also, for a
given power constraint, the gap between the CC capacity regions with TCMA and
FDMA is shown to decrease with the increase in the bandwidth. Hence, for finite
alphabets, a NO-MA scheme such as TCMA is better than the well-known O-MA
schemes FDMA and TDMA, which makes NO-MA schemes worth pursuing in practice for
a two-user GMAC.
|
1001.0358
|
Finiteness of rank invariants of multidimensional persistent homology
groups
|
math.AT cs.IT math.IT
|
Rank invariants are a parametrized version of Betti numbers of a space
multi-filtered by a continuous vector-valued function. In this note we give a
sufficient condition for their finiteness. This condition is sharp for spaces
embeddable in R^n.
|
1001.0405
|
Optimal Query Complexity for Reconstructing Hypergraphs
|
cs.LG
|
In this paper we consider the problem of reconstructing a hidden weighted
hypergraph of constant rank using additive queries. We prove the following: Let
$G$ be a weighted hidden hypergraph of constant rank with n vertices and $m$
hyperedges. For any $m$ there exists a non-adaptive algorithm that finds the
edges of the hypergraph and their weights using $$ O(\frac{m\log n}{\log m}) $$
additive queries. This solves the open problem in [S. Choi, J. H. Kim. Optimal
Query Complexity Bounds for Finding Graphs. {\em STOC}, 749--758,~2008].
When the weights of the hypergraph are integers that are less than
$O(poly(n^d/m))$ where $d$ is the rank of the hypergraph (and therefore for
unweighted hypergraphs) there exists a non-adaptive algorithm that finds the
edges of the hypergraph and their weights using $$ O(\frac{m\log \frac{n^d}{m}}{\log
m}) $$ additive queries.
By the information-theoretic bound, the above query complexities are tight.
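For concreteness, a minimal sketch of the additive-query model, under the common reading that a query on a vertex subset S returns the total weight of hyperedges entirely contained in S; the paper's exact query model may differ in details, and the names below are illustrative:

```python
# Hypothetical additive-query oracle for a hidden weighted hypergraph.
# hidden_edges maps frozenset(vertices) -> weight; a query on S returns
# the total weight of hyperedges entirely contained in S (one common
# reading of "additive query"; the paper's exact model may differ).
def additive_query(S, hidden_edges):
    S = set(S)
    return sum(w for e, w in hidden_edges.items() if e <= S)

# Example hidden rank-3 hypergraph on vertices 0..4.
edges = {frozenset({0, 1, 2}): 2.0, frozenset({1, 3}): 1.5,
         frozenset({2, 3, 4}): 0.5}
print(additive_query({0, 1, 2, 3}, edges))  # 3.5: picks up {0,1,2} and {1,3}
```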
|
1001.0440
|
Tutoring System for Dance Learning
|
cs.IR cs.MM
|
Recent advances in hardware sophistication related to graphics display, audio
and video devices made available a large number of multimedia and hypermedia
applications. These multimedia applications need to store and retrieve the
different forms of media like text, hypertext, graphics, still images,
animations, audio and video. Dance is one of the important cultural forms of a
nation, and dance video is one such multimedia type. Archiving and retrieving
the required semantics from these dance media collections is a crucial and
demanding multimedia application. This paper summarizes the different dance
video archival techniques and systems. Keywords: Multimedia, Culture Media,
Metadata archival and retrieval systems, MPEG-7, XML.
|
1001.0591
|
Comparing Distributions and Shapes using the Kernel Distance
|
cs.CG cs.CV cs.LG
|
Starting with a similarity function between objects, it is possible to define
a distance metric on pairs of objects, and more generally on probability
distributions over them. These distance metrics have a deep basis in functional
analysis, measure theory and geometric measure theory, and have a rich
structure that includes an isometric embedding into a (possibly infinite
dimensional) Hilbert space. They have recently been applied to numerous
problems in machine learning and shape analysis.
In this paper, we provide the first algorithmic analysis of these distance
metrics. Our main contributions are as follows: (i) We present fast
approximation algorithms for computing the kernel distance between two point
sets P and Q that runs in near-linear time in the size of P ∪ Q (note that
an explicit calculation would take quadratic time). (ii) We present
polynomial-time algorithms for approximately minimizing the kernel distance
under rigid transformation; they run in time O(n + poly(1/epsilon, log n)).
(iii) We provide several general techniques for reducing complex objects to
convenient sparse representations (specifically to point sets or sets of points
sets) which approximately preserve the kernel distance. In particular, this
allows us to reduce problems of computing the kernel distance between various
types of objects such as curves, surfaces, and distributions to computing the
kernel distance between point sets. These take advantage of the reproducing
kernel Hilbert space and a new relation linking binary range spaces to
continuous range spaces with bounded fat-shattering dimension.
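As a point of reference, here is the direct quadratic-time computation that the paper's near-linear-time algorithms approximate: a minimal sketch, assuming an unnormalized kernel distance under a Gaussian kernel (normalization conventions vary):

```python
import numpy as np

def kappa(P, Q, sigma=1.0):
    """Sum of pairwise Gaussian kernel values between two point sets."""
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)).sum()

def kernel_distance(P, Q, sigma=1.0):
    # D_K(P,Q)^2 = kappa(P,P) + kappa(Q,Q) - 2*kappa(P,Q)
    return np.sqrt(kappa(P, P, sigma) + kappa(Q, Q, sigma) - 2 * kappa(P, Q, sigma))

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 2))
Q = rng.normal(loc=0.5, size=(120, 2))
print(kernel_distance(P, Q))  # explicit O(|P||Q|) evaluation
```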
|
1001.0597
|
Inference of global clusters from locally distributed data
|
stat.ME cs.LG stat.ML
|
We consider the problem of analyzing the heterogeneity of clustering
distributions for multiple groups of observed data, each of which is indexed by
a covariate value, and inferring global clusters arising from observations
aggregated over the covariate domain. We propose a novel Bayesian nonparametric
method reposing on the formalism of spatial modeling and a nested hierarchy of
Dirichlet processes. We provide an analysis of the model properties, relating
and contrasting the notions of local and global clusters. We also provide an
efficient inference algorithm, and demonstrate the utility of our method in
several data examples, including the problem of object tracking and a global
clustering analysis of functional data where the functional identity
information is not available.
|
1001.0700
|
Vandalism Detection in Wikipedia: a Bag-of-Words Classifier Approach
|
cs.LG cs.CY cs.IR
|
A bag-of-words based probabilistic classifier is trained using regularized
logistic regression to detect vandalism in the English Wikipedia. Isotonic
regression is used to calibrate the class membership probabilities. Learning
curve, reliability, ROC, and cost analysis are performed.
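A minimal sketch of such a pipeline using scikit-learn, on toy data standing in for labelled Wikipedia edits; the feature set, regularization strength, and fold count are illustrative, not the paper's settings:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy edits standing in for the real training corpus (1 = vandalism).
edits = ["added sourced paragraph on history", "PAGE IS STUPID LOL",
         "fixed typo in infobox", "aaaaaa delete everything"]
labels = [0, 1, 0, 1]

# Bag-of-words features, L2-regularized logistic regression, and isotonic
# regression to calibrate the class-membership probabilities.
clf = make_pipeline(
    CountVectorizer(),
    CalibratedClassifierCV(LogisticRegression(C=1.0), method="isotonic", cv=2),
)
clf.fit(edits, labels)
print(clf.predict_proba(["LOL STUPID PAGE"])[:, 1])  # calibrated vandalism probability
```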
|
1001.0716
|
Totally Asynchronous Interference Channels
|
cs.IT math.IT
|
This paper addresses an interference channel consisting of $\mathbf{n}$
active users sharing $u$ frequency sub-bands. Users are asynchronous meaning
there exists a mutual delay between their transmitted codes. A stationary model
for interference is considered by assuming the starting point of an
interferer's data is uniformly distributed along the codeword of any user. The
spectrum is divided into private and common bands containing
$v_{\mathrm{p}}$ and $v_{\mathrm{c}}$ frequency sub-bands, respectively. We
consider a scenario where all transmitters are unaware of the number of active
users and the channel gains. The optimum $v_{\mathrm{p}}$ and $v_{\mathrm{c}}$
are obtained such that the so-called outage capacity per user is maximized. If
$\Pr\{\mathbf{n}\leq 2\}=1$, upper and lower bounds on the mutual information
between the input and output of the channel for each user are derived using a
genie-aided technique. The proposed bounds meet each other as the code length
grows to infinity yielding a closed expression for the achievable rates. If
$\Pr\{\mathbf{n}>2\}>0$, all users follow a locally Randomized On-Off signaling
scheme on the common band where each transmitter quits transmitting its
Gaussian signals independently from transmission to transmission. Using a
conditional version of Entropy Power Inequality (EPI) and an upper bound on the
differential entropy of a mixed Gaussian random variable, lower bounds on the
achievable rates of users are developed. Thereafter, the activation probability
on each transmission slot is designed resulting in the largest outage capacity.
|
1001.0723
|
MacWilliams Identities for Terminated Convolutional Codes
|
cs.IT math.IT
|
Shearer and McEliece [1977] showed that there is no MacWilliams identity for
the free distance spectra of orthogonal linear convolutional codes. We show
that on the other hand there does exist a MacWilliams identity between the
generating functions of the weight distributions per unit time of a linear
convolutional code C and its orthogonal code C^\perp, and that this
distribution is as useful as the free distance spectrum for estimating code
performance. These observations are similar to those made recently by
Bocharova, Hug, Johannesson and Kudryashov; however, we focus on terminating by
tail-biting rather than by truncation.
|
1001.0735
|
Named Models in Coalgebraic Hybrid Logic
|
cs.LO cs.AI
|
Hybrid logic extends modal logic with support for reasoning about individual
states, designated by so-called nominals. We study hybrid logic in the broad
context of coalgebraic semantics, where Kripke frames are replaced with
coalgebras for a given functor, thus covering a wide range of reasoning
principles including, e.g., probabilistic, graded, default, or coalitional
operators. Specifically, we establish generic criteria for a given coalgebraic
hybrid logic to admit named canonical models, with ensuing completeness proofs
for pure extensions on the one hand, and for an extended hybrid language with
local binding on the other. We instantiate our framework with a number of
examples. Notably, we prove completeness of graded hybrid logic with local
binding.
|
1001.0746
|
Alternation-Trading Proofs, Linear Programming, and Lower Bounds
|
cs.CC cs.AI
|
A fertile area of recent research has demonstrated concrete polynomial time
lower bounds for solving natural hard problems on restricted computational
models. Among these problems are Satisfiability, Vertex Cover, Hamilton Path,
Mod6-SAT, Majority-of-Majority-SAT, and Tautologies, to name a few. The proofs
of these lower bounds follow a certain proof-by-contradiction strategy that we
call alternation-trading. An important open problem is to determine how
powerful such proofs can possibly be.
We propose a methodology for studying these proofs that makes them amenable
to both formal analysis and automated theorem proving. We prove that the search
for better lower bounds can often be turned into a problem of solving a large
series of linear programming instances. Implementing a small-scale theorem
prover based on this result, we extract new human-readable time lower bounds
for several problems. This framework can also be used to prove concrete
limitations on the current techniques.
|
1001.0793
|
On the Vacationing CEO Problem: Achievable Rates and Outer Bounds
|
cs.IT math.IT
|
This paper studies a class of source coding problems that combines elements
of the CEO problem with the multiple description problem. In this setting,
noisy versions of one remote source are observed by two nodes with encoders
(which is similar to the CEO problem). However, it differs from the CEO problem
in that each node must generate multiple descriptions of the source. This
problem is of interest in multiple scenarios in efficient communication over
networks. In this paper, an achievable region and an outer bound are presented
for this problem, which is shown to be sum rate optimal for a class of
distortion constraints.
|
1001.0820
|
Abstract Answer Set Solvers with Learning
|
cs.AI cs.LO
|
Nieuwenhuis, Oliveras, and Tinelli (2006) showed how to describe enhancements
of the Davis-Putnam-Logemann-Loveland algorithm using transition systems,
instead of pseudocode. We design a similar framework for several algorithms
that generate answer sets for logic programs: Smodels, Smodels-cc, Asp-Sat with
Learning (Cmodels), and a newly designed and implemented algorithm Sup. This
approach to describing answer set solvers makes it easier to prove their
correctness, to compare them, and to design new systems.
|
1001.0827
|
Document Clustering with K-tree
|
cs.IR cs.AI cs.DS
|
This paper describes the approach taken to the XML Mining track at INEX 2008
by a group at the Queensland University of Technology. We introduce the K-tree
clustering algorithm in an Information Retrieval context by adapting it for
document clustering. Many large scale problems exist in document clustering.
K-tree scales well with large inputs due to its low complexity. It offers
promising results both in terms of efficiency and quality. Document
classification was completed using Support Vector Machines.
|
1001.0830
|
K-tree: Large Scale Document Clustering
|
cs.IR cs.AI cs.DS
|
We introduce K-tree in an information retrieval context. It is an efficient
approximation of the k-means clustering algorithm. Unlike k-means it forms a
hierarchy of clusters. It has been extended to address issues with sparse
representations. We compare performance and quality to CLUTO using document
collections. The K-tree has a low time complexity that is suitable for large
document collections. This tree structure allows for efficient disk based
implementations where space requirements exceed that of main memory.
|
1001.0833
|
Random Indexing K-tree
|
cs.IR cs.AI cs.DS
|
Random Indexing (RI) K-tree is the combination of two algorithms for
clustering. Many large scale problems exist in document clustering. RI K-tree
scales well with large inputs due to its low complexity. It also exhibits
features that are useful for managing a changing collection. Furthermore, it
solves previous issues with sparse document vectors when using K-tree. The
algorithms and data structures are defined, explained and motivated. Specific
modifications to K-tree are made for use with RI. Experiments have been
executed to measure quality. The results indicate that RI K-tree improves
document cluster quality over the original K-tree algorithm.
|
1001.0879
|
Linear Probability Forecasting
|
cs.LG
|
Multi-class classification is one of the most important tasks in machine
learning. In this paper we consider two online multi-class classification
problems: classification by a linear model and by a kernelized model. The
quality of predictions is measured by the Brier loss function. We suggest two
computationally efficient algorithms to work with these problems and prove
theoretical guarantees on their losses. We kernelize one of the algorithms and
prove theoretical guarantees on its loss. We perform experiments and compare
our algorithms with logistic regression.
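For reference, the Brier loss used above scores a predicted probability vector against the one-hot encoding of the true class; a minimal sketch:

```python
import numpy as np

def brier_loss(probs, true_class):
    """Squared distance between predicted probabilities and the one-hot target."""
    target = np.zeros_like(probs)
    target[true_class] = 1.0
    return ((probs - target) ** 2).sum()

print(brier_loss(np.array([0.7, 0.2, 0.1]), 0))  # 0.09 + 0.04 + 0.01 = 0.14
```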
|
1001.0887
|
Stable Feature Selection for Biomarker Discovery
|
cs.CE q-bio.QM
|
Feature selection techniques have been used as the workhorse in biomarker
discovery applications for a long time. Surprisingly, the stability of feature
selection with respect to sampling variations has long been under-considered.
Only recently has this issue received more and more attention.
In this article, we review existing stable feature selection methods for
biomarker discovery using a generic hierarchal framework. We have two
objectives: (1) providing an overview of this new yet fast-growing topic for a
convenient reference; (2) categorizing existing methods under an expandable
framework for future research and development.
|
1001.0921
|
Graph Quantization
|
cs.AI
|
Vector quantization (VQ) is a lossy data compression technique from signal
processing, which is restricted to feature vectors and therefore inapplicable
for combinatorial structures. This contribution presents a theoretical
foundation of graph quantization (GQ) that extends VQ to the domain of
attributed graphs. We present the necessary Lloyd-Max conditions for optimality
of a graph quantizer and consistency results for optimal GQ design based on
empirical distortion measures and stochastic optimization. These results
statistically justify existing clustering algorithms in the domain of graphs.
The proposed approach provides a template of how to link structural pattern
recognition methods other than GQ to statistical pattern recognition.
|
1001.0927
|
Accelerating Competitive Learning Graph Quantization
|
cs.CV
|
Vector quantization (VQ) is a lossy data compression technique from signal
processing for which simple competitive learning is one standard method to
quantize patterns from the input space. Extending competitive learning VQ to
the domain of graphs results in competitive learning for quantizing input
graphs. In this contribution, we propose an accelerated version of competitive
learning graph quantization (GQ) without trading computational time against
solution quality. For this, we lift graphs locally to vectors in order to avoid
unnecessary calculations of intractable graph distances. In doing so, the
accelerated version of competitive learning GQ gradually turns, locally, into
competitive learning VQ as the number of iterations increases. Empirical results
show a significant speedup while maintaining comparable solution quality.
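For orientation, a minimal sketch of plain competitive-learning VQ in vector space, the baseline that the accelerated graph version locally reduces to; the learning rate and codebook size are illustrative:

```python
import numpy as np

# Simple competitive learning VQ: for each input, the closest codebook
# vector (the "winner") moves a small step toward that input.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))               # input patterns
codebook = X[rng.choice(len(X), 8)].copy()  # 8 initial prototypes
eta = 0.05                                  # learning rate (illustrative)

for epoch in range(20):
    for x in X:
        winner = np.argmin(((codebook - x) ** 2).sum(axis=1))
        codebook[winner] += eta * (x - codebook[winner])
```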
|
1001.0958
|
Effectively integrating information content and structural relationship
to improve the GO-based similarity measure between proteins
|
cs.CE q-bio.GN
|
The Gene Ontology (GO) provides a knowledge base to effectively describe
proteins. However, measuring similarity between proteins based on GO remains a
challenge. In this paper, we propose a new similarity measure, information
coefficient similarity measure (SimIC), to effectively integrate both the
information content (IC) of GO terms and the structural information of GO
hierarchy to determine the similarity between proteins. Testing on yeast
proteins, our results show that SimIC efficiently addresses the shallow
annotation issue in GO, thus improving the correlations between GO similarities
of yeast proteins and their expression similarities as well as between GO
similarities of yeast proteins and their sequence similarities. Furthermore, we
demonstrate that the proposed SimIC is superior in predicting yeast protein
interactions. We predict 20484 yeast protein-protein interactions (PPIs)
between 2462 proteins based on the high SimIC values of biological process (BP)
and cellular component (CC). Examining the 214 MIPS complexes in our predicted
PPIs shows that all members of 159 MIPS complexes can be found in our PPI
predictions, which is more than those (120/214) found in PPIs predicted by
relative specificity similarity (RSS). Integrating IC and structural
information of GO hierarchy can improve the effectiveness of the semantic
similarity measure of GO terms. The new SimIC can effectively correct the
effect of shallow annotation, and then provide an effective way to measure
similarity between proteins based on Gene Ontology.
|
1001.1009
|
Multi-path Probabilistic Available Bandwidth Estimation through Bayesian
Active Learning
|
cs.NI cs.LG
|
Knowing the largest rate at which data can be sent on an end-to-end path such
that the egress rate is equal to the ingress rate with high probability can be
very practical when choosing transmission rates in video streaming or selecting
peers in peer-to-peer applications. We introduce probabilistic available
bandwidth, which is defined in terms of ingress rates and egress rates of
traffic on a path, rather than in terms of capacity and utilization of the
constituent links of the path like the standard available bandwidth metric. In
this paper, we describe a distributed algorithm, based on a probabilistic
graphical model and Bayesian active learning, for simultaneously estimating the
probabilistic available bandwidth of multiple paths through a network. Our
procedure exploits the fact that each packet train provides information not
only about the path it traverses, but also about any path that shares a link
with the monitored path. Simulations and PlanetLab experiments indicate that
this process can dramatically reduce the number of probes required to generate
accurate estimates.
|
1001.1020
|
An Empirical Evaluation of Four Algorithms for Multi-Class
Classification: Mart, ABC-Mart, Robust LogitBoost, and ABC-LogitBoost
|
cs.LG cs.AI cs.CV
|
This empirical study is mainly devoted to comparing four tree-based boosting
algorithms: mart, abc-mart, robust logitboost, and abc-logitboost, for
multi-class classification on a variety of publicly available datasets. Some of
those datasets have been thoroughly tested in prior studies using a broad range
of classification algorithms including SVM, neural nets, and deep learning.
In terms of the empirical classification errors, our experiment results
demonstrate:
1. Abc-mart considerably improves mart. 2. Abc-logitboost considerably
improves (robust) logitboost. 3. (Robust) logitboost considerably improves mart
on most datasets. 4. Abc-logitboost considerably improves abc-mart on most
datasets. 5. These four boosting algorithms (especially abc-logitboost)
outperform SVM on many datasets. 6. Compared to the best deep learning methods,
these four boosting algorithms (especially abc-logitboost) are competitive.
|
1001.1021
|
The Capacity of Random Linear Coding Networks as Subspace Channels
|
cs.IT math.IT
|
In this paper, we consider noncoherent random linear coding networks (RLCNs)
as a discrete memoryless channel (DMC) whose input and output alphabets consist
of subspaces. This contrasts with previous channel models in the literature
which assume matrices as the channel input and output. No particular
assumptions are made on the network topology or the transfer matrix, except
that the latter may be rank-deficient according to some rank deficiency
probability distribution. We introduce a random vector basis selection
procedure which renders the DMC symmetric. The capacity we derive can be seen
as a lower bound on the capacity of noncoherent RLCNs, where subspace coding
suffices to achieve this bound.
|
1001.1026
|
On Network-Error Correcting Convolutional Codes under the BSC Edge Error
Model
|
cs.IT math.IT
|
Convolutional network-error correcting codes (CNECCs) are known to provide
error correcting capability in acyclic instantaneous networks within the
network coding paradigm under small field size conditions. In this work, we
investigate the performance of CNECCs under the error model of the network
where the edges are assumed to be statistically independent binary symmetric
channels, each with the same probability of error $p_e$ ($0\leq p_e<0.5$). We
obtain bounds on the performance of such CNECCs based on a modified generating
function (the transfer function) of the CNECCs. For a given network, we derive
a mathematical condition on how small $p_e$ should be so that only single edge
network-errors need to be accounted for, thus reducing the complexity of
evaluating the probability of error of any CNECC. Simulations indicate that
convolutional codes are required to possess different properties to achieve
good performance in low $p_e$ and high $p_e$ regimes. For the low $p_e$ regime,
convolutional codes with good distance properties show good performance. For
the high $p_e$ regime, convolutional codes that have a good \textit{slope} (the
minimum normalized cycle weight) are seen to be good. We derive a lower bound
on the slope of any rate $b/c$ convolutional code with a certain degree.
|
1001.1027
|
An Unsupervised Algorithm For Learning Lie Group Transformations
|
cs.CV cs.LG
|
We present several theoretical contributions which allow Lie groups to be fit
to high dimensional datasets. Transformation operators are represented in their
eigen-basis, reducing the computational complexity of parameter estimation to
that of training a linear transformation model. A transformation specific
"blurring" operator is introduced that allows inference to escape local minima
via a smoothing of the transformation space. A penalty on traversed manifold
distance is added which encourages the discovery of sparse, minimal distance,
transformations between states. Both learning and inference are demonstrated
using these methods for the full set of affine transformations on natural image
patches. Transformation operators are then trained on natural video sequences.
It is shown that the learned video transformations provide a better description
of inter-frame differences than the standard motion model based on rigid
translation.
|
1001.1078
|
Stability of multidimensional persistent homology with respect to domain
perturbations
|
math.AT cs.CG cs.IT math.IT
|
Motivated by the problem of dealing with incomplete or imprecise acquisition
of data in computer vision and computer graphics, we extend results concerning
the stability of persistent homology with respect to function perturbations to
results concerning the stability with respect to domain perturbations. Domain
perturbations can be measured in a number of different ways. An important
method to compare domains is the Hausdorff distance. We show that by encoding
sets using the distance function, the multidimensional matching distance
between rank invariants of persistent homology groups is always bounded from
above by the Hausdorff distance between sets. Moreover, we prove that our construction
maintains information about the original set. Other well known methods to
compare sets are considered, such as the symmetric difference distance between
classical sets and the sup-distance between fuzzy sets. Also in these cases we
present results stating that the multidimensional matching distance between
rank invariants of persistent homology groups is bounded from above by these
distances. An experiment showing the potential of our approach concludes the
paper.
|
1001.1079
|
Measuring Latent Causal Structure
|
cs.LG
|
Discovering latent representations of the observed world has become
increasingly more relevant in data analysis. Much of the effort concentrates on
building latent variables which can be used in prediction problems, such as
classification and regression. A related goal of learning latent structure from
data is that of identifying which hidden common causes generate the
observations, such as in applications that require predicting the effect of
policies. This will be the main problem tackled in our contribution: given a
dataset of indicators assumed to be generated by unknown and unmeasured common
causes, we wish to discover which hidden common causes are those, and how they
generate our data. This is possible under the assumption that observed
variables are linear functions of the latent causes with additive noise.
Previous results in the literature present solutions for the case where each
observed variable is a noisy function of a single latent variable. We show how
to extend the existing results for some cases where observed variables measure
more than one latent variable.
|
1001.1106
|
Optimal Thresholds for GMD Decoding with (L+1)/L-extended Bounded
Distance Decoders
|
cs.IT math.IT
|
We investigate threshold-based multi-trial decoding of concatenated codes
with an inner Maximum-Likelihood decoder and an outer error/erasure
(L+1)/L-extended Bounded Distance decoder, i.e. a decoder which corrects e
errors and t erasures if e(L+1)/L + t <= d - 1, where d is the minimum distance
of the outer code and L is a positive integer. This is a generalization of
Forney's GMD decoding, which was considered only for L = 1, i.e. outer Bounded
Minimum Distance decoding. One important example for (L+1)/L-extended Bounded
Distance decoders is decoding of L-Interleaved Reed-Solomon codes. Our main
contribution is a threshold location formula, which makes it possible to optimally erase
unreliable inner decoding results, for a given number of decoding trials and
parameter L. Thereby, the term optimal means that the residual codeword error
probability of the concatenated code is minimized. We give an estimation of
this probability for any number of decoding trials.
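The decoding condition stated above is easy to check directly; a minimal sketch (with L = 1 recovering Forney's classical condition 2e + t <= d - 1):

```python
from fractions import Fraction

def correctable(e, t, d, L):
    """True iff e errors and t erasures satisfy e*(L+1)/L + t <= d - 1."""
    return Fraction(e * (L + 1), L) + t <= d - 1

print(correctable(e=2, t=2, d=7, L=2))  # 2*(3/2) + 2 = 5   <= 6 -> True
print(correctable(e=3, t=2, d=7, L=2))  # 3*(3/2) + 2 = 6.5 >  6 -> False
```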
|
1001.1117
|
Matrix Extension with Symmetry and Its Application to Filter Banks
|
cs.IT cs.NA math.IT math.NA math.RA
|
In this paper, we completely solve the matrix extension problem with symmetry
and provide a step-by-step algorithm to construct such a desired matrix
$\mathsf{P}_e$ from a given matrix $\mathsf{P}$. Furthermore, using a cascade
structure, we obtain a complete representation of any $r\times s$ paraunitary
matrix $\mathsf{P}$ having compatible symmetry, which in turn leads to an
algorithm for deriving a desired matrix $\mathsf{P}_e$ from a given matrix
$\mathsf{P}$. Matrix extension plays an important role in many areas such as
electronic engineering, system sciences, applied mathematics, and pure
mathematics. As an application of our general results on matrix extension with
symmetry, we obtain a satisfactory algorithm for constructing symmetric
paraunitary filter banks and symmetric orthonormal multiwavelets by deriving
high-pass filters with symmetry from any given low-pass filters with symmetry.
Several examples are provided to illustrate the proposed algorithms and results
in this paper.
|
1001.1122
|
Principal manifolds and graphs in practice: from molecular biology to
dynamical systems
|
cs.NE cs.AI
|
We present several applications of non-linear data modeling, using principal
manifolds and principal graphs constructed using the metaphor of elasticity
(elastic principal graph approach). These approaches are generalizations of the
Kohonen's self-organizing maps, a class of artificial neural networks. On
several examples we show advantages of using non-linear objects for data
approximation in comparison to the linear ones. We propose four numerical
criteria for comparing linear and non-linear mappings of datasets into the
spaces of lower dimension. The examples are taken from comparative political
science, from analysis of high-throughput data in molecular biology, from
analysis of dynamical systems.
|
1001.1133
|
Multi-cell MIMO Downlink with Fairness Criteria: the Large System Limit
|
cs.IT math.IT
|
We consider the downlink of a cellular network with multiple cells and
multi-antenna base stations including arbitrary inter-cell cooperation,
realistic distance-dependent pathloss and general "fairness" requirements.
Beyond Monte Carlo simulation, no efficient computation method to evaluate the
ergodic throughput of such systems has been provided so far. We propose an
analytic method based on the combination of the large random matrix theory with
Lagrangian optimization. The proposed method is computationally much more
efficient than Monte Carlo simulation and provides a very accurate
approximation (almost indistinguishable) for the actual finite-dimensional
case, even for a small number of users and base station antennas. Numerical
examples include linear 2-cell and planar three-sectored 7-cell layouts, with
no inter-cell cooperation, sector cooperation, and full cooperation.
|
1001.1143
|
Redundancy in Systems which Entertain a Model of Themselves: Interaction
Information and the Self-organization of Anticipation
|
cs.IR physics.soc-ph
|
Mutual information among three or more dimensions (mu-star = - Q) has been
considered as interaction information. However, Krippendorff (2009a, 2009b) has
shown that this measure cannot be interpreted as a unique property of the
interactions and has proposed an alternative measure of interaction information
based on iterative approximation of maximum entropies. Q can then be considered
as a measure of the difference between interaction information and redundancy
generated in a model entertained by an observer. I argue that this provides us
with a measure of the imprint of a second-order observing system -- a model
entertained by the system itself -- on the underlying information processing.
The second-order system communicates meaning hyper-incursively; an observation
instantiates this meaning-processing within the information processing. The net
results may add to or reduce the prevailing uncertainty. The model is tested
empirically for the case where textual organization can be expected to contain
intellectual organization in terms of distributions of title words, author
names, and cited references.
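For concreteness, Q for three discrete variables can be computed by inclusion-exclusion over marginal entropies; a minimal sketch (sign conventions for mu-star versus Q differ across the literature, which is part of the ambiguity at issue):

```python
import numpy as np

def H(p):
    """Shannon entropy (bits) of a probability array; zero cells are ignored."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def Q(pxyz):
    """H(X)+H(Y)+H(Z) - H(XY) - H(XZ) - H(YZ) + H(XYZ) for a 3-way joint."""
    px, py, pz = pxyz.sum((1, 2)), pxyz.sum((0, 2)), pxyz.sum((0, 1))
    pxy, pxz, pyz = pxyz.sum(2), pxyz.sum(1), pxyz.sum(0)
    return H(px) + H(py) + H(pz) - H(pxy) - H(pxz) - H(pyz) + H(pxyz)

# XOR triple: Z = X xor Y with independent fair bits.
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        p[x, y, x ^ y] = 0.25
print(Q(p))  # -1.0 bit
```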
|
1001.1187
|
Joint Scheduling and ARQ for MU-MIMO Downlink in the Presence of
Inter-Cell Interference
|
cs.IT math.IT
|
User scheduling and multiuser multi-antenna (MU-MIMO) transmission are at the
core of high rate data-oriented downlink schemes of the next-generation of
cellular systems (e.g., LTE-Advanced). Scheduling selects groups of users
according to their channels vector directions and SINR levels. However, when
scheduling is applied independently in each cell, the inter-cell interference
(ICI) power at each user receiver is not known in advance since it changes at
each new scheduling slot depending on the scheduling decisions of all
interfering base stations. In order to cope with this uncertainty, we consider
the joint operation of scheduling, MU-MIMO beamforming and Automatic Repeat
reQuest (ARQ). We develop a game-theoretic framework for this problem and build
on stochastic optimization techniques in order to find optimal scheduling and
ARQ schemes. Particularizing our framework to the case of "outage service
rates", we obtain a scheme based on adaptive variable-rate coding at the
physical layer, combined with ARQ at the Logical Link Control (ARQ-LLC). Then,
we present a novel scheme based on incremental redundancy Hybrid ARQ (HARQ)
that is able to achieve a throughput performance arbitrarily close to the
"genie-aided service rates", with no need for a genie that provides
non-causally the ICI power levels. The novel HARQ scheme is both easier to
implement and superior in performance with respect to the conventional
combination of adaptive variable-rate coding and ARQ-LLC.
|
1001.1197
|
Construction of wiretap codes from ordinary channel codes
|
cs.IT cs.CR math.IT
|
From an arbitrary given channel code over a discrete or Gaussian memoryless
channel, we construct a wiretap code with the strong security. Our construction
can achieve the wiretap capacity under mild assumptions. The key tool is the
new privacy amplification theorem bounding the eavesdropped information in
terms of the Gallager function.
|
1001.1210
|
Pure Parsimony Xor Haplotyping
|
cs.CE cs.DS
|
The haplotype resolution from xor-genotype data has been recently formulated
as a new model for genetic studies. The xor-genotype data is a cheaply
obtainable type of data distinguishing heterozygous from homozygous sites
without identifying the homozygous alleles. In this paper we propose a
formulation based on a well-known model used in haplotype inference: pure
parsimony. We exhibit exact solutions of the problem by providing polynomial
time algorithms for some restricted cases and a fixed-parameter algorithm for
the general case. These results are based on some interesting combinatorial
properties of a graph representation of the solutions. Furthermore, we show
that the problem has a polynomial time k-approximation, where k is the maximum
number of xor-genotypes containing a given SNP. Finally, we propose a heuristic
and produce an experimental analysis showing that it scales to real-world large
instances taken from the HapMap project.
|
1001.1214
|
The Capacity of Finite-State Channels in the High-Noise Regime
|
cs.IT math.IT
|
This paper considers the derivative of the entropy rate of a hidden Markov
process with respect to the observation probabilities. The main result is a
compact formula for the derivative that can be evaluated easily using Monte
Carlo methods. It is applied to the problem of computing the capacity of a
finite-state channel (FSC) and, in the high-noise regime, the formula has a
simple closed-form expression that enables series expansion of the capacity of
a FSC. This expansion is evaluated for a binary-symmetric channel under a (0,1)
run-length limited constraint and an intersymbol-interference channel with
Gaussian noise.
|
1001.1221
|
Boosting k-NN for categorization of natural scenes
|
cs.CV
|
The k-nearest neighbors (k-NN) classification rule has proven extremely
successful in countless many computer vision applications. For example, image
categorization often relies on uniform voting among the nearest prototypes in
the space of descriptors. In spite of its good properties, the classic k-NN
rule suffers from high variance when dealing with sparse prototype datasets in
high dimensions. A few techniques have been proposed to improve k-NN
classification, which rely on either deforming the nearest neighborhood
relationship or modifying the input space. In this paper, we propose a novel
boosting algorithm, called UNN (Universal Nearest Neighbors), which induces
leveraged k-NN, thus generalizing the classic k-NN rule. We redefine the voting
rule as a strong classifier that linearly combines predictions from the k
closest prototypes. Weak classifiers are learned by UNN so as to minimize a
surrogate risk. A major feature of UNN is the ability to learn which prototypes
are the most relevant for a given class, thus allowing for effective data
reduction. Experimental results on the synthetic two-class dataset of Ripley
show that such a filtering strategy is able to reject "noisy" prototypes. We
carried out image categorization experiments on a database containing eight
classes of natural scenes. We show that our method significantly outperforms
the classic k-NN classification, while enabling a significant reduction of the
computational cost by means of data filtering.
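For illustration, the leveraged voting rule itself is compact; a minimal sketch with given leverage coefficients alpha (learning them by minimizing a surrogate risk is the UNN boosting step and is omitted here):

```python
import numpy as np

def leveraged_knn_predict(x, prototypes, labels, alpha, k=3):
    """Sign of a weighted linear vote over the k closest prototypes."""
    d = ((prototypes - x) ** 2).sum(axis=1)
    nn = np.argsort(d)[:k]
    return np.sign((alpha[nn] * labels[nn]).sum())  # labels in {-1, +1}

rng = np.random.default_rng(2)
protos = rng.normal(size=(20, 2))
labels = np.where(protos[:, 0] > 0, 1, -1)
alpha = np.ones(20)  # uniform leveraging reduces to the classic k-NN vote
print(leveraged_knn_predict(np.array([0.5, 0.0]), protos, labels, alpha))
```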
|
1001.1257
|
Decisional Processes with Boolean Neural Network: the Emergence of
Mental Schemes
|
cs.AI
|
Human decisional processes result from the employment of selected quantities
of relevant information, generally synthesized from environmental incoming data
and stored memories. Their main goal is the production of an appropriate and
adaptive response to a cognitive or behavioral task. Different strategies of
response production can be adopted, among which haphazard trials, formation of
mental schemes and heuristics. In this paper, we propose a model of Boolean
neural network that incorporates these strategies by recurring to global
optimization strategies during the learning session. The model also
characterizes the passage from an unstructured/chaotic attractor neural network,
typical of data-driven processes, to a faster, forward-only network representative of
schema-driven processes. Moreover, a simplified version of the Iowa Gambling
Task (IGT) is introduced in order to test the model. Our results match
experimental data and highlight some relevant findings from the psychological
domain.
|
1001.1276
|
A framework to model real-time databases
|
cs.DB
|
Real-time databases deal with time-constrained data and time-constrained
transactions. The design of this kind of databases requires the introduction of
new concepts to support both data structures and the dynamic behaviour of the
database. In this paper, we give an overview about different aspects of
real-time databases and clarify the requirements for their modelling. Then, we
present a framework for real-time database design and describe its fundamental
operations. A case study demonstrates the validity of the structural model and
illustrates SQL queries and Java code generated from the classes of the model
|
1001.1278
|
On Critical Relative Distance of DNA Codes for Additive Stem Similarity
|
cs.IT math.IT q-bio.BM q-bio.GN
|
We consider DNA codes based on the nearest-neighbor (stem) similarity model
which adequately reflects the "hybridization potential" of two DNA sequences.
Our aim is to present a survey of bounds on the rate of DNA codes with respect
to a thermodynamically motivated similarity measure called an additive stem
similarity. These results yield a method to analyze and compare known samples
of the nearest neighbor "thermodynamic weights" associated to stacked pairs
that occurred in DNA secondary structures.
|
1001.1298
|
Coded OFDM by Unique Word Prefix
|
cs.IT math.IT
|
In this paper we propose a novel transmit signal structure and an adjusted
and optimized receiver for OFDM (orthogonal frequency division multiplexing).
Instead of the conventional cyclic prefix we use a deterministic sequence,
which we call unique word (UW), as guard interval. We show how unique words,
which are already well investigated for single carrier systems with frequency
domain equalization (SC/FDE), can also be introduced in OFDM symbols. Since
unique words represent known sequences, they can advantageously be used for
synchronization and channel estimation purposes. Furthermore, the proposed
approach introduces a complex number Reed-Solomon (RS-) code structure within
the sequence of subcarriers. This allows for RS-decoding or to apply a highly
efficient Wiener smoother succeeding a zero forcing stage at the receiver. We
present simulation results in an indoor multipath environment to highlight the
advantageous properties of the proposed scheme.
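A minimal numpy sketch of the transmit-side idea, under simplifying assumptions: a set of redundant subcarriers is solved for so that the tail of the time-domain symbol is zero, and the known unique word then occupies that tail. The subcarrier placement, dimensions, and BPSK data are illustrative, not the paper's design:

```python
import numpy as np

N, Nu = 64, 8                           # FFT size and unique-word length (illustrative)
R = np.arange(0, N, N // Nu)            # redundant-subcarrier indices (illustrative)
D = np.setdiff1d(np.arange(N), R)       # remaining subcarriers carry data

F = np.fft.ifft(np.eye(N))              # IDFT matrix, so that F @ X == ifft(X)
M = F[-Nu:, :]                          # rows producing the last Nu time samples

rng = np.random.default_rng(3)
Xd = rng.choice([-1.0, 1.0], size=len(D)) + 0j               # BPSK data symbols
Xr = np.linalg.lstsq(M[:, R], -M[:, D] @ Xd, rcond=None)[0]  # zero-force the tail

X = np.zeros(N, dtype=complex)
X[D], X[R] = Xd, Xr
x = F @ X
print(np.abs(x[-Nu:]).max())            # ~0: tail is freed for the unique word
x[-Nu:] = np.exp(2j * np.pi * rng.random(Nu))  # insert the known UW (illustrative)
```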
|
1001.1320
|
Distributed scientific communication in the European information
society: Some cases of "Mode 2" fields of research
|
cs.IR cs.DL physics.soc-ph
|
Can self-organization of scientific communication be specified by using
literature-based indicators? In this study, we explore this question by
applying entropy measures to typical "Mode-2" fields of knowledge production.
We hypothesized these scientific systems to be developing from a
self-organization of the interaction between cognitive and institutional
levels: European subsidized research programs aim at creating an institutional
network, while a cognitive reorganization is continuously ongoing at the
scientific field level. The results indicate that the European system develops
towards a stable level of distribution of cited references and title-words
among the European member states. We suggest that this distribution could be
a property of the emerging European system. In order to measure the degree of
specialization with respect to the respective distributions of countries, cited
references and title words, the mutual information among the three frequency
distributions was calculated. The so-called transmission values indicate
that the European system shows increasing levels of differentiation.
|
1001.1373
|
The Serializability of Network Codes
|
cs.IT cs.DS math.IT
|
Network coding theory studies the transmission of information in networks
whose vertices may perform nontrivial encoding and decoding operations on data
as it passes through the network. The main approach to deciding the feasibility
of network coding problems aims to reduce the problem to optimization over a
polytope of entropic vectors subject to constraints imposed by the network
structure. In the case of directed acyclic graphs, these constraints are
completely understood, but for general graphs the problem of enumerating them
remains open: it is not known how to classify the constraints implied by a
property that we call serializability, which refers to the absence of
paradoxical circular dependencies in a network code.
In this work we initiate the first systematic study of the constraints
imposed on a network code by serializability. We find that serializability
cannot be detected solely by evaluating the Shannon entropy of edge sets in the
graph, but nevertheless, we give a polynomial-time algorithm that decides the
serializability of a network code. We define a certificate of
non-serializability, called an information vortex, that plays a role in the
theory of serializability comparable to the role of fractional cuts in
multicommodity flow theory, including a type of min-max relation. Finally, we
study the serializability deficit of a network code, defined as the minimum
number of extra bits that must be sent in order to make it serializable. For
linear codes, we show that it is NP-hard to approximate this parameter within a
constant factor, and we demonstrate some surprising facts about the behavior of
this parameter under parallel composition of codes.
|