id | title | categories | abstract
|---|---|---|---|
1110.3898
|
An Interpolation Procedure for List Decoding Reed--Solomon codes Based
on Generalized Key Equations
|
cs.IT math.IT
|
The key step of syndrome-based decoding of Reed-Solomon codes up to half the
minimum distance is to solve the so-called Key Equation. List decoding
algorithms, capable of decoding beyond half the minimum distance, are based on
interpolation and factorization of multivariate polynomials. This article
provides a link between syndrome-based decoding approaches based on Key
Equations and the interpolation-based list decoding algorithms of Guruswami and
Sudan for Reed-Solomon codes. The original interpolation conditions of
Guruswami and Sudan for Reed-Solomon codes are reformulated in terms of a set
of Key Equations. These equations provide a structured homogeneous linear
system of equations of Block-Hankel form that can be solved by an adaptation of
the Fundamental Iterative Algorithm. For an $(n,k)$ Reed-Solomon code, a
multiplicity $s$ and a list size $\ell$, our algorithm has time complexity
$O(\ell s^4 n^2)$.
|
1110.3907
|
AOSO-LogitBoost: Adaptive One-Vs-One LogitBoost for Multi-Class Problem
|
stat.ML cs.AI cs.CV
|
This paper presents an improvement to model learning when using multi-class
LogitBoost for classification. Motivated by the statistical view, LogitBoost
can be seen as additive tree regression. Two important factors in this setting
are: 1) coupled classifier output due to a sum-to-zero constraint, and 2) the
dense Hessian matrices that arise when computing tree node split gain and node
value fittings. In general, this setting is too complicated for a tractable
model learning algorithm. However, too aggressive simplification of the setting
may lead to degraded performance. For example, the original LogitBoost is
outperformed by ABC-LogitBoost due to the latter's more careful treatment of
the above two factors.
In this paper we propose techniques to address the two main difficulties of
the LogitBoost setting: 1) we adopt a vector tree (i.e., each node value is a
vector) that enforces a sum-to-zero constraint, and 2) we use an adaptive block
coordinate descent that exploits the dense Hessian when computing tree split
gain and node values. Higher classification accuracy and faster convergence
rates are observed for a range of public data sets when compared to both the
original and the ABC-LogitBoost implementations.
|
1110.3917
|
How to Evaluate Dimensionality Reduction? - Improving the Co-ranking
Matrix
|
cs.LG cs.IR
|
The growing number of dimensionality reduction methods available for data
visualization has recently inspired the development of quality assessment
measures, in order to evaluate the resulting low-dimensional representation
independently of a method's inherent criteria. Several (existing) quality
measures can be (re)formulated based on the so-called co-ranking matrix, which
subsumes all rank errors (i.e. differences between the ranking of distances
from every point to all others, comparing the low-dimensional representation to
the original data). The measures are often based on the partitioning of the
co-ranking matrix into 4 submatrices, divided at the K-th row and column,
calculating a weighted combination of the sums of each submatrix. Hence, the
evaluation process typically involves plotting a graph over several (or even
all possible) settings of the parameter K. Considering simple artificial
examples, we argue that this parameter controls two notions at once, that need
not necessarily be combined, and that the rectangular shape of submatrices is
disadvantageous for an intuitive interpretation of the parameter. We argue
that quality measures, as general and flexible evaluation tools, should have
parameters with a direct and intuitive interpretation as to which specific
error types are tolerated or penalized. Therefore, we propose to replace K with
two parameters to control these notions separately, and introduce a differently
shaped weighting on the co-ranking matrix. The two new parameters can then
directly be interpreted as a threshold up to which rank errors are tolerated,
and a threshold up to which the rank-distances are significant for the
evaluation. Moreover, we propose a color representation of local quality to
visually support the evaluation process for a given mapping, where every point
in the mapping is colored according to its local contribution to the overall
quality.
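For concreteness, the co-ranking matrix itself can be computed directly from the two rank matrices (a minimal Python sketch under our own naming; the quality measures discussed above are then weighted sums over this matrix):

```python
import numpy as np

def coranking(X_high, X_low):
    """Co-ranking matrix Q: Q[k-1, l-1] counts point pairs whose neighbour rank
    is k in the original data and l in the low-dimensional embedding."""
    def ranks(X):
        # pairwise distances, then the rank of j among i's neighbours
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        order = D.argsort(axis=1)
        R = np.empty_like(order)
        rows = np.arange(len(X))[:, None]
        R[rows, order] = np.arange(len(X))[None, :]
        return R  # R[i, i] == 0 (self), other ranks are 1..n-1

    Rh, Rl = ranks(X_high), ranks(X_low)
    n = len(X_high)
    Q = np.zeros((n - 1, n - 1), dtype=int)
    for i in range(n):
        for j in range(n):
            if i != j:
                Q[Rh[i, j] - 1, Rl[i, j] - 1] += 1
    return Q
```

A perfect embedding produces a purely diagonal co-ranking matrix; rank errors show up as off-diagonal mass.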
|
1110.3961
|
A Dynamic Framework of Reputation Systems for an Agent Mediated e-market
|
cs.MA cs.AI cs.IT cs.SI math.IT
|
The success of an agent-mediated e-market system lies in the underlying
reputation management system, which improves the quality of services in an
information-asymmetric e-market. Reputation provides an operable metric for
establishing trustworthiness between mutually unknown online entities.
Reputation systems encourage honest behaviour and discourage malicious
behaviour of participating agents in the e-market. A dynamic reputation model
would provide virtually instantaneous knowledge about the changing e-market
environment and would utilise the Internet's capacity for continuous interactivity
for reputation computation. This paper proposes a dynamic reputation framework
using reinforcement learning and fuzzy set theory that ensures judicious use of
information sharing for inter-agent cooperation. The framework is sensitive to
changing e-market parameters, such as the transaction value and the varying
experience of agents, with the purpose of improving the reputation system's
inbuilt defense mechanisms against various attacks, so that the e-market
reaches an equilibrium state and dishonest agents are weeded out of the market.
|
1110.4015
|
The large-scale structure of journal citation networks
|
cs.SI cs.DL physics.soc-ph
|
We analyse the large-scale structure of the journal citation network built
from information contained in the Thomson-Reuters Journal Citation Reports. To
this end, we take advantage of the network science paraphernalia and explore
network properties like density, percolation robustness, average and largest
node distances, reciprocity, incoming and outgoing degree distributions, as
well as assortative mixing by node degrees. We discover that the journal
citation network is a dense, robust, small, and reciprocal world. Furthermore,
the in- and out-degree distributions display long tails, with a few vital
journals and many trivial ones, and the two are strongly positively correlated.
|
1110.4050
|
Joint Scheduling and Resource Allocation in OFDMA Downlink Systems via
ACK/NAK Feedback
|
cs.IT math.IT
|
In this paper, we consider the problem of joint scheduling and resource
allocation in the OFDMA downlink, with the goal of maximizing an expected
long-term goodput-based utility subject to an instantaneous sum-power
constraint, and where the feedback to the base station consists only of
ACK/NAKs from recently scheduled users. We first establish that the optimal
solution is a partially observable Markov decision process (POMDP), which is
impractical to implement. In response, we propose a greedy approach to joint
scheduling and resource allocation that maintains a posterior channel
distribution for every user, and has only polynomial complexity. For
frequency-selective channels with Markov time-variation, we then outline a
recursive method to update the channel posteriors, based on the ACK/NAK
feedback, that is made computationally efficient through the use of particle
filtering. To gauge the performance of our greedy approach relative to that of
the optimal POMDP, we derive a POMDP performance upper-bound. Numerical
experiments show that, for slowly fading channels, the performance of our
greedy scheme is relatively close to the upper bound, and much better than
fixed-power random user scheduling (FP-RUS), despite its relatively low
complexity.
|
1110.4069
|
Transmission of non-linear binary input functions over a CDMA System
|
cs.IT math.IT
|
We study the problem of transmission of binary input non-linear functions
over a network of mobiles based on CDMA. Motivation for this study comes from
the application of using cheap measurement devices installed on personal
cell-phones to monitor environmental parameters such as air pollution,
temperature and noise level. Our model resembles the MAC model of Nazer and
Gastpar except that the encoders are restricted to be CDMA encoders. Unlike the
work of Nazer and Gastpar whose main attention is transmission of linear
functions, we deal with non-linear functions with binary inputs. A main
contribution of this paper is a lower bound on the computational capacity for
this problem. While in the traditional CDMA system the signature matrix of the
CDMA system preferably has independent rows, in our setup the signature matrix
of the CDMA system is viewed as the parity check matrix of a linear code,
reflecting our treatment of the interference.
|
1110.4076
|
Learning in Real-Time Search: A Unifying Framework
|
cs.AI
|
Real-time search methods are suited for tasks in which the agent is
interacting with an initially unknown environment in real time. In such
simultaneous planning and learning problems, the agent has to select its
actions in a limited amount of time, while sensing only a local part of the
environment centered at the agent's current location. Real-time heuristic search
agents select actions via a limited lookahead search, evaluating the frontier
states with a heuristic function. Over repeated experiences, they
refine heuristic values of states to avoid infinite loops and to converge to
better solutions. The prevalence of such settings in autonomous software and
hardware agents has led to an explosion of real-time search algorithms over the
last two decades. Not only is a potential user confronted with a hodgepodge of
algorithms, but also with the choice of control parameters they use. In
this paper we address both problems. The first contribution is an introduction
of a simple three-parameter framework (named LRTS) which extracts the core
ideas behind many existing algorithms. We then prove that LRTA*, epsilon-LRTA*,
SLA*, and gamma-Trap algorithms are special cases of our framework. Thus, they
are unified and extended with additional features. Second, we prove
completeness and convergence of any algorithm covered by the LRTS framework.
Third, we prove several upper-bounds relating the control parameters and
solution quality. Finally, we analyze the influence of the three control
parameters empirically in the realistic scalable domains of real-time
navigation on initially unknown maps from a commercial role-playing game as
well as routing in ad hoc sensor networks.
|
1110.4099
|
The Complexification of Engineering
|
nlin.AO cs.AI
|
This paper deals with the arrow of complexification of engineering. We claim
that the complexification of engineering consists in (a) the shift through
which engineering becomes a science, ceasing to be a (mere) praxis or
profession; and (b) that, having become a science, engineering can be
considered one of the sciences of complexity. In reality, the
complexification of engineering is the
process by which engineering can be studied, achieved and understood in terms
of knowledge, and not of goods and services any longer. Complex engineered
systems and bio-inspired engineering are so far the two expressions of a
complex engineering.
|
1110.4102
|
Using time-delayed mutual information to discover and interpret temporal
correlation structure in complex populations
|
nlin.CD cs.IT math.DS math.IT stat.ME
|
This paper addresses how to calculate and interpret the time-delayed mutual
information for a complex, diversely and sparsely measured, possibly
non-stationary population of time-series of unknown composition and origin. The
primary vehicle used for this analysis is a comparison between the time-delayed
mutual information averaged over the population and the time-delayed mutual
information of an aggregated population (here aggregation implies the
population is conjoined before any statistical estimates are implemented).
Through the use of information-theoretic tools, a sequence of practically
implementable calculations is detailed that allows the average and
aggregate time-delayed mutual information to be interpreted. Moreover, these
calculations can also be used to understand the degree of homo- or
heterogeneity present in the population. To demonstrate that the proposed
methods can be used in nearly any situation, the methods are applied and
demonstrated on the time series of glucose measurements from two different
subpopulations of individuals from the Columbia University Medical Center
electronic health record repository, revealing a picture of the composition of
the population as well as physiological features.
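As a minimal illustration of the central quantity, a histogram plug-in estimate of the time-delayed mutual information of a single series can be sketched as follows (our own simplified sketch; the paper's analysis concerns the population-averaged and aggregated variants of this estimator):

```python
import numpy as np

def tdmi(x, lag, bins=16):
    """Histogram estimate of the time-delayed mutual information
    I(x_t ; x_{t+lag}) of a single time series, in nats."""
    a, b = x[:-lag], x[lag:]
    pxy, _, _ = np.histogram2d(a, b, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x_t
    py = pxy.sum(axis=0, keepdims=True)   # marginal of x_{t+lag}
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A strongly autocorrelated series (e.g. a sine wave) yields a much larger value at small lags than white noise, whose estimate reflects only finite-sample bias.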
|
1110.4123
|
Positive words carry less information than negative words
|
cs.CL cs.IR physics.soc-ph
|
We show that the frequency of word use is not only determined by the word
length \cite{Zipf1935} and the average information content
\cite{Piantadosi2011}, but also by its emotional content. We have analyzed
three established lexica of affective word usage in English, German, and
Spanish, to verify that these lexica have a neutral, unbiased, emotional
content. Taking into account the frequency of word usage, we find that words
with a positive emotional content are more frequently used. This lends support
to the Pollyanna hypothesis \cite{Boucher1969} that there should be a positive bias
in human expression. We also find that negative words contain more information
than positive words, as the informativeness of a word increases uniformly with
its valence decrease. Our findings support earlier conjectures about (i) the
relation between word frequency and information content, and (ii) the impact of
positive emotions on communication and social links.
|
1110.4126
|
Relay Selection and Performance Analysis in Multiple-User Networks
|
cs.IT math.IT
|
This paper investigates the relay selection (RS) problem in networks with
multiple users and multiple common amplify-and-forward (AF) relays. Considering
the overall quality-of-service of the network, we first specify our definition
of optimal RS for multiple-user relay networks. Then an optimal RS (ORS)
algorithm is provided, which is a straightforward extension of an RS scheme in
the literature that maximizes the minimum end-to-end receive signal-to-noise
ratio (SNR) of all users. The complexity of the ORS is quadratic in both the
number of users and the number of relays. Then a suboptimal RS (SRS) scheme is
proposed, which has linear complexity in the number of relays and quadratic
complexity in the number of users. Furthermore, diversity orders of both the
ORS and the proposed SRS are theoretically derived and compared with those of a
naive RS scheme and the single-user RS network. It is shown that the ORS
achieves full diversity, while the diversity order of the SRS decreases with
the number of users. For two-user networks, the outage probabilities and
array gains corresponding to the minimum SNR of the RS schemes are derived in
closed forms. It is proved that the advantage of the SRS over the naive RS
scheme increases as the number of relays in the network increases. Simulation
results are provided to corroborate the analytical results.
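The max-min objective underlying optimal RS can be illustrated by exhaustive search over assignments (a brute-force sketch for tiny networks, assuming each user must be assigned a distinct relay; the paper's ORS attains this objective with only quadratic complexity):

```python
from itertools import permutations

def ors_bruteforce(snr):
    """Assign each user a distinct relay to maximize the minimum end-to-end
    SNR.  snr[u][r] is the end-to-end SNR of user u when served by relay r."""
    users = len(snr)
    relays = len(snr[0])
    best_val, best_assign = -1.0, None
    for p in permutations(range(relays), users):
        worst = min(snr[u][p[u]] for u in range(users))  # bottleneck user
        if worst > best_val:
            best_val, best_assign = worst, p
    return best_assign, best_val
```

For example, with two users and two relays, the assignment that sacrifices a little peak SNR to protect the bottleneck user wins.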
|
1110.4174
|
Clipping Noise Cancellation for OFDM and OFDMA Systems Using Compressed
Sensing
|
cs.IT math.IT
|
In this paper, we propose a clipping noise cancellation scheme using compressed
sensing (CS) for orthogonal frequency division multiplexing (OFDM) systems. In
the proposed scheme, only the data tones with high reliability are exploited in
reconstructing the clipping noise instead of the whole data tones. For
reconstructing the clipping noise using a fraction of the data tones at the
receiver, the CS technique is applied. The proposed scheme is also applicable
to interleaved orthogonal frequency division multiple access (OFDMA) systems
due to the decomposable structure of the fast Fourier transform (FFT). Numerical
analysis shows that the proposed scheme performs well for clipping noise
cancellation of both OFDM and OFDMA systems.
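The CS reconstruction step can be illustrated with a generic greedy sparse-recovery routine (a sketch using orthogonal matching pursuit on a random sensing matrix; the paper instead forms its measurements from the reliable data tones, which is not modeled here):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A x."""
    resid, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ resid))))
        sub = A[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat
```

With enough measurements relative to the sparsity level, the sparse clipping noise is recovered exactly in the noiseless case.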
|
1110.4175
|
The Price of Anarchy (POA) of network coding and routing based on
average pricing mechanism
|
cs.NI cs.GT cs.IT math.IT
|
Congestion pricing is an efficient allocation approach to mediating demand
and supply of network resources. In contrast to previous pricing work using
Affine Marginal Cost (AMC), we focus on studying the game between network
coding and routing flows sharing a single link when users are price
anticipating based on an Average Cost Sharing (ACS) pricing mechanism. We
characterize the worst-case efficiency of the game compared with the
optimum, i.e., the price of anarchy (POA), which is lower bounded by 50% with
routing only. When both network coding and routing are applied, the POA can be
as low as 4/9. Therefore, network coding cannot improve the POA significantly
under the ACS. Moreover, this indicates that, for more efficient use of limited
resources, the sharing users have a higher tendency to choose network coding.
|
1110.4181
|
Injecting External Solutions Into CMA-ES
|
cs.LG
|
This report considers how to inject external candidate solutions into the
CMA-ES algorithm. The injected solutions might stem from a gradient or a Newton
step, a surrogate model optimizer or any other oracle or search mechanism. They
can also be the result of a repair mechanism, for example to render infeasible
solutions feasible. Only small modifications to the CMA-ES are necessary to
turn injection into a reliable and effective method: overly long steps need to
be tightly renormalized. The main objective of this report is to reveal this
simple mechanism. Depending on the source of the injected solutions,
interesting variants of CMA-ES arise. When the best-ever solution is always
(re-)injected, an elitist variant of CMA-ES with weighted multi-recombination
arises. When \emph{all} solutions are injected from an \emph{external} source,
the resulting algorithm might be viewed as \emph{adaptive encoding} with
step-size control. In initial experiments, injected solutions of very good
quality lead to a convergence speed twice as fast as on the (simple) sphere
function without injection. This means that we observe an impressive speed-up
on otherwise difficult to solve functions. Single bad injected solutions on the
other hand do no significant harm.
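The renormalization step can be sketched as clipping the injected step's length in the Mahalanobis metric of the current search distribution (our own naming and interface; in practice the threshold would be tied to the distribution of lengths of regular CMA-ES samples):

```python
import numpy as np

def renormalize_injected(x_inj, mean, sigma, C, max_norm):
    """Clip an injected candidate so its step from the distribution mean,
    measured in the Mahalanobis metric of the covariance C, does not exceed
    max_norm.  This makes the injected point behave like a regular sample."""
    step = (x_inj - mean) / sigma
    # Mahalanobis length of the step under the current covariance C
    length = float(np.sqrt(step @ np.linalg.solve(C, step)))
    if length > max_norm:
        step *= max_norm / length  # tight renormalization of too-long steps
    return mean + sigma * step
```

Solutions already within the trust region pass through unchanged; only aberrantly long injected steps are shortened.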
|
1110.4198
|
A Reliable Effective Terascale Linear Learning System
|
cs.LG stat.ML
|
We present a system and a set of techniques for learning linear predictors
with convex losses on terascale datasets, with trillions of features (the
number of features here refers to the number of non-zero entries in the data
matrix), billions of training examples, and millions of parameters, in an hour
using a cluster of 1000 machines. Individually none of the component techniques
are new, but the careful synthesis required to obtain an efficient
implementation is. The result is, up to our knowledge, the most scalable and
efficient linear learning system reported in the literature (as of 2011 when
our experiments were conducted). We describe and thoroughly evaluate the
components of the system, showing the importance of the various design choices.
|
1110.4248
|
Ideogram Based Chinese Sentiment Word Orientation Computation
|
cs.CL
|
This paper presents a novel algorithm to compute sentiment orientation of
Chinese sentiment word. The algorithm uses ideograms which are a distinguishing
feature of Chinese language. The proposed algorithm can be applied to any
sentiment classification scheme. To compute a word's sentiment orientation
using the proposed algorithm, only the word itself and a precomputed character
ontology are required, rather than a corpus. The influence of three parameters
on the algorithm's performance is analyzed and verified by experiment.
Experiments also show that the proposed algorithm achieves an F-measure of
85.02%, outperforming the existing ideogram-based algorithm.
|
1110.4285
|
Topological Feature Based Classification
|
cs.SI physics.soc-ph
|
There has been a lot of interest in developing algorithms to extract clusters
or communities from networks. This work proposes a method, based on
blockmodelling, for leveraging communities and other topological features for
use in a predictive classification task. Motivated by the issues faced by the
field of community detection and inspired by recent advances in Bayesian topic
modelling, the presented model automatically discovers topological features
relevant to a given classification task. In this way, rather than attempting to
identify some universal best set of clusters for an undefined goal, the aim is
to find the best set of clusters for a particular purpose.
Using this method, topological features can be validated and assessed within
a given context by their predictive performance.
The proposed model differs from other relational and semi-supervised learning
models as it identifies topological features to explain the classification
decision. In a demonstration on a number of real networks, the predictive
capability of the topological features is shown to rival the performance of
content based relational learners. Additionally, the model is shown to
outperform graph-based semi-supervised methods on directed and approximately
bipartite networks.
|
1110.4322
|
An Optimal Algorithm for Linear Bandits
|
cs.LG stat.ML
|
We provide the first algorithm for online bandit linear optimization whose
regret after T rounds is of order sqrt{Td ln N} on any finite class X of N
actions in d dimensions, and of order d*sqrt{T} (up to log factors) when X is
infinite. These bounds are not improvable in general. The basic idea utilizes
tools from convex geometry to construct what is essentially an optimal
exploration basis. We also present an application to a model of linear bandits
with expert advice. Interestingly, these results show that bandit linear
optimization with expert advice in d dimensions is no more difficult (in terms
of the achievable regret) than the online d-armed bandit problem with expert
advice (where EXP4 is optimal).
|
1110.4412
|
Aspiration Learning in Coordination Games
|
cs.GT cs.LG
|
We consider the problem of distributed convergence to efficient outcomes in
coordination games through dynamics based on aspiration learning. Under
aspiration learning, a player continues to play an action as long as the
rewards received exceed a specified aspiration level. Here, the aspiration
level is a fading memory average of past rewards, and these levels also are
subject to occasional random perturbations. A player becomes dissatisfied
whenever a received reward is less than the aspiration level, in which case the
player experiments with a probability proportional to the degree of
dissatisfaction. Our first contribution is the characterization of the
asymptotic behavior of the induced Markov chain of the iterated process in
terms of an equivalent finite-state Markov chain. We then characterize
explicitly the behavior of the proposed aspiration learning in a generalized
version of coordination games, examples of which include network formation and
common-pool games. In particular, we show that in generic coordination games
the frequency at which an efficient action profile is played can be made
arbitrarily large. Although convergence to efficient outcomes is desirable, in
several coordination games, such as common-pool games, attainability of fair
outcomes, i.e., sequences of plays at which players experience highly rewarding
returns with the same frequency, might also be of special interest. To this
end, we demonstrate through analysis and simulations that aspiration learning
also establishes fair outcomes in all symmetric coordination games, including
common-pool games.
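The dynamics described above can be sketched for a 2x2 coordination game (an illustrative toy simulation with our own parameter choices, including the occasional random aspiration perturbations; not the paper's exact specification):

```python
import random

def aspiration_sim(payoff, steps=20000, h=0.05, cap=0.2, pert=0.01, seed=0):
    """Two players repeatedly play a coordination game under aspiration
    learning.  payoff[a][b] is the common reward for action profile (a, b).
    Each player keeps a fading-memory aspiration level; when the received
    reward falls below it, the player experiments (switches action) with
    probability proportional to the dissatisfaction, capped at `cap`.
    Aspirations are occasionally perturbed upward at random.
    Returns the empirical frequency of profile (1, 1)."""
    rng = random.Random(seed)
    acts = [rng.randrange(2), rng.randrange(2)]
    asp = [0.0, 0.0]
    hits = 0
    for _ in range(steps):
        r = payoff[acts[0]][acts[1]]
        for i in range(2):
            if r < asp[i] and rng.random() < min(cap, asp[i] - r):
                acts[i] = 1 - acts[i]            # dissatisfied: experiment
            asp[i] = (1 - h) * asp[i] + h * r    # fading-memory average
            if rng.random() < pert:
                asp[i] += rng.random()           # occasional random perturbation
        if acts == [1, 1]:
            hits += 1
    return hits / steps

# (1, 1) is the efficient profile of this coordination game.
freq = aspiration_sim([[1.0, 0.0], [0.0, 2.0]])
```

The perturbations are what let play escape the inefficient coordinated profile, so the efficient profile is visited a substantial fraction of the time.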
|
1110.4414
|
(1+eps)-approximate Sparse Recovery
|
cs.DS cs.IT math.IT
|
The problem central to sparse recovery and compressive sensing is that of
stable sparse recovery: we want a distribution of matrices A in R^{m\times n}
such that, for any x \in R^n and with probability at least 2/3 over A, there is
an algorithm to recover x* from Ax with
||x* - x||_p <= C min_{k-sparse x'} ||x - x'||_p for some constant C > 1 and
norm p. The measurement complexity of this problem is well understood for
constant C > 1. However, in a variety of applications it is important to obtain
C = 1 + eps for a small eps > 0, and this complexity is not well understood. We
resolve the dependence on eps in the number of measurements required of a
k-sparse recovery algorithm, up to polylogarithmic factors for the central
cases of p = 1 and p = 2. Namely, we give new algorithms and lower bounds that
show the number of measurements required is (1/eps^{p/2})k polylog(n). For p =
2, our bound of (1/eps) k log(n/k) is tight up to constant factors. We also
give matching bounds when the output is required to be k-sparse, in which case
we achieve (1/eps^p) k polylog(n). This shows the distinction between the
complexity of sparse and non-sparse outputs is fundamental.
|
1110.4416
|
Data-dependent kernels in nearly-linear time
|
cs.LG
|
We propose a method to efficiently construct data-dependent kernels which can
make use of large quantities of (unlabeled) data. Our approach approximates
the standard construction of semi-supervised kernels of
Sindhwani et al. 2005. In typical cases these kernels can be computed in
nearly-linear time (in the amount of data), improving on the cubic time of the
standard construction, enabling large scale semi-supervised learning in a
variety of contexts. The methods are validated on semi-supervised and
unsupervised problems on data sets containing up to 64,000 sample points.
|
1110.4441
|
Distributed Storage for Intermittent Energy Sources: Control Design and
Performance Limits
|
cs.SY
|
One of the most important challenges in the integration of renewable energy
sources into the power grid lies in their `intermittent' nature. The power
output of sources like wind and solar varies with time and location due to
factors that cannot be controlled by the provider. Two strategies have been
proposed to hedge against this variability: 1) use energy storage systems to
effectively average the produced power over time; 2) exploit distributed
generation to effectively average production over location. We introduce a
network model to study the optimal use of storage and transmission resources in
the presence of random energy sources. We propose a Linear-Quadratic based
methodology to design control strategies, and we show that these strategies are
asymptotically optimal for some simple network topologies. For these
topologies, the dependence of optimal performance on storage and transmission
capacity is explicitly quantified.
|
1110.4474
|
Robustness of Social Networks: Comparative Results Based on Distance
Distributions
|
cs.SI physics.soc-ph
|
Given a social network, which of its nodes have a stronger impact in
determining its structure? More formally: which node-removal order has the
greatest impact on the network structure? We approach this well-known problem
for the first time in a setting that combines both web graphs and social
networks, using datasets that are orders of magnitude larger than those
appearing in the previous literature, thanks to some recently developed
algorithms and software tools that make it possible to approximate accurately
the number of reachable pairs and the distribution of distances in a graph. Our
experiments highlight deep differences in the structure of social networks and
web graphs, show significant limitations of previous experimental results, and
at the same time reveal clustering by label propagation as a new and very
effective way of locating nodes that are important from a structural viewpoint.
|
1110.4481
|
Learning Hierarchical and Topographic Dictionaries with Structured
Sparsity
|
cs.LG
|
Recent work in signal processing and statistics has focused on defining new
regularization functions, which not only induce sparsity of the solution, but
also take into account the structure of the problem. We present in this paper a
class of convex penalties introduced in the machine learning community, which
take the form of a sum of l_2 and l_infinity-norms over groups of variables.
They extend the classical group-sparsity regularization in the sense that the
groups possibly overlap, allowing more flexibility in the group design. We
review efficient optimization methods to deal with the corresponding inverse
problems, and their application to the problem of learning dictionaries of
natural image patches: On the one hand, dictionary learning has indeed proven
effective for various signal processing tasks. On the other hand, structured
sparsity provides a natural framework for modeling dependencies between
dictionary elements. We thus consider a structured sparse regularization to
learn dictionaries embedded in a particular structure, for instance a tree or a
two-dimensional grid. In the latter case, the results we obtain are similar to
the dictionaries produced by topographic independent component analysis.
|
1110.4499
|
Category-Based Routing in Social Networks: Membership Dimension and the
Small-World Phenomenon (Full)
|
cs.SI cs.DS physics.soc-ph
|
A classic experiment by Milgram shows that individuals can route messages
along short paths in social networks, given only simple categorical information
about recipients (such as "he is a prominent lawyer in Boston" or "she is a
Freshman sociology major at Harvard"). That is, these networks have very short
paths between pairs of nodes (the so-called small-world phenomenon); moreover,
participants are able to route messages along these paths even though each
person is only aware of a small part of the network topology. Some sociologists
conjecture that participants in such scenarios use a greedy routing strategy in
which they forward messages to acquaintances that have more categories in
common with the recipient than they do, and similar strategies have recently
been proposed for routing messages in dynamic ad-hoc networks of mobile
devices. In this paper, we introduce a network property called membership
dimension, which characterizes the cognitive load required to maintain
relationships between participants and categories in a social network. We show
that any connected network has a system of categories that will support greedy
routing, but that these categories can be made to have small membership
dimension if and only if the underlying network exhibits the small-world
phenomenon.
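The greedy category-based strategy can be sketched as follows (a minimal sketch with hypothetical data structures: `adj` maps a node to its acquaintances and `cats` maps a node to its set of category labels):

```python
def greedy_route(adj, cats, src, dst, max_hops=100):
    """Forward a message to the acquaintance sharing the most categories with
    the destination, but only if it shares strictly more than the current
    holder does; otherwise greedy routing is stuck."""
    path, cur = [src], src
    for _ in range(max_hops):
        if cur == dst:
            return path
        common = len(cats[cur] & cats[dst])
        best = max(adj[cur], key=lambda v: len(cats[v] & cats[dst]), default=None)
        if best is None or len(cats[best] & cats[dst]) <= common:
            return None  # no neighbour is categorically closer: stuck
        cur = best
        path.append(cur)
    return None
```

On a path graph whose category sets grow toward the destination, the message is handed strictly "closer" in category space at every hop.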
|
1110.4544
|
Compression-based Similarity
|
cs.IT math.IT
|
First we consider pair-wise distances for literal objects consisting of
finite binary files. These files are taken to contain all of their meaning,
like genomes or books. The distances are based on compression of the objects
concerned, normalized, and can be viewed as similarity distances. Second, we
consider pair-wise distances between names of objects, like "red" or
"christianity." In this case the distances are based on searches of the
Internet. Such a search can be performed by any search engine that returns
aggregate page counts. We can extract a code length from the numbers returned,
use the same formula as before, and derive a similarity or relative semantics
between names for objects. The theory is based on Kolmogorov complexity. We
test both similarities extensively experimentally.
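The first kind of distance can be sketched with an off-the-shelf compressor standing in for an ideal one (a minimal sketch of the normalized compression distance; zlib is our choice of compressor, not the paper's):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: near 0 for very similar objects,
    near 1 for unrelated ones.  zlib stands in for any real compressor
    approximating Kolmogorov complexity."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Two near-identical texts compress well together, so their distance is small; a text and an unrelated byte pattern share nothing the compressor can exploit, so their distance approaches 1.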
|
1110.4613
|
Wiretap Channels: Implications of the More Capable Condition and Cyclic
Shift Symmetry
|
cs.IT cs.CR math.IT
|
Characterization of the rate-equivocation region of a general wiretap channel
involves two auxiliary random variables: U, for rate splitting and V, for
channel prefixing. Evaluation of regions involving auxiliary random variables
is generally difficult. In this paper, we explore specific classes of wiretap
channels for which the expression and evaluation of the rate-equivocation
region are simpler. In particular, we show that when the main channel is more
capable than the eavesdropping channel, V=X is optimal and the boundary of the
rate-equivocation region can be achieved by varying U alone. Conversely, we
show under a mild condition that if the main receiver is not more capable, then
V=X is strictly suboptimal. Next, we focus on the class of cyclic shift
symmetric wiretap channels. We explicitly determine the optimal selections of
rate splitting U and channel prefixing V that achieve the boundary of the
rate-equivocation region. We show that optimal U and V are determined via
cyclic shifts of the solution of an auxiliary optimization problem that
involves only one auxiliary random variable. In addition, we provide a
sufficient condition for cyclic shift symmetric wiretap channels to have U=\phi
as an optimal selection. Finally, we apply our results to the binary-input
cyclic shift symmetric wiretap channels. We solve the corresponding constrained
optimization problem by inspecting each point of the I(X;Y)-I(X;Z) function. We
thoroughly characterize the rate-equivocation regions of the BSC-BEC and
BEC-BSC wiretap channels. In particular, we find that U=\phi is optimal and the
boundary of the rate-equivocation region is achieved by varying V alone for the
BSC-BEC wiretap channel.
|
1110.4624
|
Aladdin: Augmenting Urban Environments with Local Area Linked
Data-Casting
|
cs.SI cs.NI
|
Urban environments are brimming with information sources, yet these are
typically disconnected from related information on the Web. Addressing this
disconnect requires an infrastructure able to disseminate information to a
specific micro-location, to be consumed by interested parties. This paper
proposes Aladdin, an infrastructure for highly localised broadcast of Linked
Data via radio waves. When combined with data retrieved from the Web, Aladdin
can enable a new generation of micro-location-aware mobile applications and
services.
|
1110.4657
|
A Version of Geiringer-like Theorem for Decision Making in the
Environments with Randomness and Incomplete Information
|
cs.AI cs.DM
|
Purpose: In recent years, Monte-Carlo sampling methods, such as Monte Carlo
tree search, have achieved tremendous success in model-free reinforcement
learning. A combination of the so-called upper confidence bound policy, which
preserves the "exploration vs. exploitation" balance when selecting actions for
sample evaluations, with massive computing power to store and dynamically
update a rather large pre-evaluated game tree has led to the development of
software that has beaten the top human player in the game of Go on a 9 by 9
board. Much effort in the current research is devoted to widening the range of
applicability of the Monte-Carlo sampling methodology to partially observable
Markov decision processes with non-immediate payoffs. The main challenge
introduced by randomness and incomplete information is to deal with the action
evaluation at the chance nodes due to drastic differences in the possible
payoffs the same action could lead to. The aim of this article is to establish
a version of a theorem that originated in population genetics and was later
adapted in evolutionary computation theory, one that will lead to novel
Monte-Carlo sampling algorithms that provably increase the AI potential. Due to
space limitations, the actual algorithms themselves will be presented in
sequel papers; however, the current paper provides a solid mathematical
foundation for the development of such algorithms and explains why they are so
promising.
|
1110.4703
|
Proactive Resource Allocation: Harnessing the Diversity and Multicast
Gains
|
cs.IT cs.NI math.IT
|
This paper introduces the novel concept of proactive resource allocation
through which the predictability of user behavior is exploited to balance the
wireless traffic over time, and hence, significantly reduce the bandwidth
required to achieve a given blocking/outage probability. We start with a simple
model in which the smart wireless devices are assumed to predict the arrival of
new requests and submit them to the network T time slots in advance. Using
tools from large deviation theory, we quantify the resulting prediction
diversity gain to establish that the decay rate of the outage event
probabilities increases with the prediction duration T. This model is then
generalized to incorporate the effect of the randomness in the prediction
look-ahead time T. Remarkably, we also show that, in the cognitive networking
scenario, the appropriate use of proactive resource allocation by the primary
users improves the diversity gain of the secondary network at no cost in the
primary network diversity. We also shed light on multicasting with predictable
demands and show that proactive multicast networks can achieve a significantly
higher diversity gain that scales super-linearly with T. Finally, we conclude
with a discussion of the new research questions posed under the
umbrella of the proposed proactive (non-causal) wireless networking framework.
|
1110.4713
|
Kernel Topic Models
|
cs.LG stat.ML
|
Latent Dirichlet Allocation models discrete data as a mixture of discrete
distributions, using Dirichlet beliefs over the mixture weights. We study a
variation of this concept, in which the documents' mixture weight beliefs are
replaced with squashed Gaussian distributions. This allows documents to be
associated with elements of a Hilbert space, admitting kernel topic models
(KTM), modelling temporal, spatial, hierarchical, social and other structure
between documents. The main challenge is efficient approximate inference on the
latent Gaussian. We present an approximate algorithm cast around a Laplace
approximation in a transformed basis. The KTM can also be interpreted as a type
of Gaussian process latent variable model, or as a topic model conditional on
document features, uncovering links between earlier work in these areas.
|
1110.4719
|
A Generalized Arc-Consistency Algorithm for a Class of Counting
Constraints: Revised Edition that Incorporates One Correction
|
cs.AI
|
This paper introduces the SEQ BIN meta-constraint with a polytime algorithm
achieving generalized arc-consistency according to some properties. SEQ BIN
can be used for encoding counting constraints such as CHANGE, SMOOTH or
INCREASING NVALUE. For some of these constraints and some of their variants,
GAC can be enforced with a time and space complexity linear in the sum of
domain sizes, which improves on or equals the best known results in the
literature.
|
1110.4723
|
Influence Blocking Maximization in Social Networks under the Competitive
Linear Threshold Model Technical Report
|
cs.SI physics.soc-ph
|
In many real-world situations, different and often opposite opinions,
innovations, or products are competing with one another for their social
influence in a networked society. In this paper, we study competitive influence
propagation in social networks under the competitive linear threshold (CLT)
model, an extension to the classic linear threshold model. Under the CLT model,
we focus on the problem that one entity tries to block the influence
propagation of its competing entity as much as possible by strategically
selecting a number of seed nodes that could initiate its own influence
propagation. We call this problem the influence blocking maximization (IBM)
problem. We prove that the objective function of IBM in the CLT model is
submodular, and thus a greedy algorithm could achieve 1-1/e approximation
ratio. However, the greedy algorithm requires Monte-Carlo simulations of
competitive influence propagation, which makes the algorithm not efficient. We
design an efficient algorithm CLDAG, which utilizes the properties of the CLT
model, to address this issue. We conduct extensive simulations of CLDAG, the
greedy algorithm, and other baseline algorithms on real-world and synthetic
datasets. Our results show that CLDAG provides accuracy on par with the greedy
algorithm and often better than the other algorithms, while being two orders
of magnitude faster than the greedy algorithm.
|
1110.4784
|
Web search queries can predict stock market volumes
|
q-fin.ST cs.LG physics.soc-ph
|
We live in a computerized and networked society where many of our actions
leave a digital trace and affect other people's actions. This has led to the
emergence of a new data-driven research field: mathematical methods from
computer science, statistical physics and sociometry provide insights into a
wide range of disciplines, ranging from social science to human mobility. A
recent important
discovery is that query volumes (i.e., the number of requests submitted by
users to search engines on the www) can be used to track and, in some cases, to
anticipate the dynamics of social phenomena. Successful examples include
unemployment levels, car and home sales, and the spreading of epidemics. A few
recent works have applied this approach to stock prices and market sentiment.
However, it
remains unclear if trends in financial markets can be anticipated by the
collective wisdom of on-line users on the web. Here we show that trading
volumes of stocks traded in NASDAQ-100 are correlated with the volumes of
queries related to the same stocks. In particular, query volumes anticipate in
many cases peaks of trading by one day or more. Our analysis is carried out on
a unique dataset of queries submitted to an important web search engine, which
enables us to also investigate user behavior. We show that the query volume
dynamics emerges from the collective but seemingly uncoordinated activity of
many users. These findings contribute to the debate on the identification of
early warnings of financial systemic risk, based on the activity of users of
the www.
|
1110.4844
|
Analyzing Answers in Threaded Discussions using a Role-Based Information
Network
|
cs.SI cs.IR
|
Online discussion boards are an important medium for collaboration. The goal
of our work is to understand how messages and individual discussants contribute
to Q&A discussions. We present a novel network model for capturing information
roles of messages and discussants, and show how we identify useful answers to
the initial question. We first classify information seeking or information
providing roles of messages, such as question, answer or acknowledgement. We
also identify user intent in the discussion as an information seeker or a
provider. We capture such role information within a reply-to discussion
network, and identify messages that answer seeker questions and how answers
are acknowledged. Message influences are analyzed using B-centrality measures.
User influences across different threads are combined with message influences.
We use the combined score in identifying the most useful answer in the thread.
The resulting ranks correlate with human provided ranks with an MRR score of
0.67.
|
1110.4851
|
Leveraging User Diversity to Harvest Knowledge on the Social Web
|
cs.IR cs.SI physics.soc-ph
|
Social web users are a very diverse group with varying interests, levels of
expertise, enthusiasm, and expressiveness. As a result, the quality of content
and annotations they create to organize content is also highly variable. While
several approaches have been proposed to mine social annotations, for example,
to learn folksonomies that reflect how people relate narrower concepts to
broader ones, these methods treat all users and the annotations they create
uniformly. We propose a framework to automatically identify experts, i.e.,
knowledgeable users who create high quality annotations, and use their
knowledge to guide folksonomy learning. We evaluate the approach on a large
body of social annotations extracted from the photosharing site Flickr. We show
that using expert knowledge leads to more detailed and accurate folksonomies.
Moreover, we show that including annotations from non-expert, or novice, users
leads to more comprehensive folksonomies than experts' knowledge alone.
|
1110.4925
|
The Similarity between Stochastic Kronecker and Chung-Lu Graph Models
|
cs.SI
|
The analysis of massive graphs is now becoming a very important part of
science and industrial research. This has led to the construction of a large
variety of graph models, each with their own advantages. The Stochastic
Kronecker Graph (SKG) model has been chosen by the Graph500 steering committee
to create supercomputer benchmarks for graph algorithms. The major reasons for
this are its easy parallelization and ability to mirror real data. Although SKG
is easy to implement, there is little understanding of the properties and
behavior of this model.
We show that the parallel variant of the edge-configuration model given by
Chung and Lu (referred to as CL) is notably similar to the SKG model. The graph
properties of an SKG are extremely close to those of a CL graph generated with
the appropriate parameters. Indeed, the final probability matrix used by SKG is
almost identical to that of a CL model. This implies that the graph
distribution represented by SKG is almost the same as that given by a CL model.
We also show that when it comes to fitting real data, CL performs as well as
SKG based on empirical studies of graph properties. CL has the added benefit of
a trivially simple fitting procedure and exactly matching the degree
distribution. Our results suggest that users of the SKG model should consider
the CL model because of its similar properties, simpler structure, and ability
to fit a wider range of degree distributions. At the very least, CL is a good
control model to compare against.
|
1110.4970
|
Studying Satellite Image Quality Based on the Fusion Techniques
|
cs.CV
|
Various and different methods can be used to produce high-resolution
multispectral images from high-resolution panchromatic image (PAN) and
low-resolution multispectral images (MS), mostly on the pixel level. However,
the jury is still out on the benefits of a fused image compared to its original
images. There is also a lack of measures for assessing the objective quality
of the spatial resolution of fusion methods. Therefore, an objective
assessment of the spatial resolution of fused images is required. This study
thus attempts to develop a new objective assessment to evaluate the spatial
quality of pan-sharpened images using several spatial quality metrics. Also, this
paper deals with a comparison of various image fusion techniques based on pixel
and feature fusion techniques.
|
1110.4999
|
Capacity of the Gaussian Relay Channel with Correlated Noises to Within
a Constant Gap
|
cs.IT math.IT
|
This paper studies the relaying strategies and the approximate capacity of
the classic three-node Gaussian relay channel, but where the noises at the
relay and at the destination are correlated. It is shown that the capacity of
such a relay channel can be achieved to within a constant gap of
$\frac{1}{2} \log_2 3 = 0.7925$ bits using a modified version of the noisy
network coding strategy,
where the quantization level at the relay is set in a correlation dependent
way. As a corollary, this result establishes that the conventional
compress-and-forward scheme also achieves to within a constant gap of the
capacity. In contrast, the decode-and-forward and the single-tap
amplify-and-forward relaying strategies can have an infinite gap to capacity in
the regime where the noises at the relay and at the destination are highly
correlated, and the gain of the relay-to-destination link goes to infinity.
|
1110.5000
|
On Noisy Network Coding for a Gaussian Relay Chain Network with
Correlated Noises
|
cs.IT math.IT
|
Noisy network coding, which elegantly combines the conventional
compress-and-forward relaying strategy and ideas from network coding, has
recently drawn much attention for its simplicity and optimality in achieving to
within constant gap of the capacity of the multisource multicast Gaussian
network. The constant-gap result, however, applies only to Gaussian relay
networks with independent noises. This paper investigates the application of
noisy network coding to networks with correlated noises. By focusing on a
four-node Gaussian relay chain network with a particular noise correlation
structure, it is shown that noisy network coding can no longer achieve to
within a constant gap of capacity with the choice of Gaussian inputs and Gaussian
quantization. The cut-set bound of the relay chain network in this particular
case, however, can be achieved to within half a bit by a simple concatenation
of a correlation-aware noisy network coding strategy and a decode-and-forward
scheme.
|
1110.5015
|
Spectral descriptors for deformable shapes
|
cs.CV cs.CG cs.GR math.DG
|
Informative and discriminative feature descriptors play a fundamental role in
deformable shape analysis. For example, they have been successfully employed in
correspondence, registration, and retrieval tasks. In the recent years,
significant attention has been devoted to descriptors obtained from the
spectral decomposition of the Laplace-Beltrami operator associated with the
shape. Notable examples in this family are the heat kernel signature (HKS) and
the wave kernel signature (WKS). Laplacian-based descriptors achieve
state-of-the-art performance in numerous shape analysis tasks; they are
computationally efficient, isometry-invariant by construction, and can
gracefully cope with a variety of transformations. In this paper, we formulate
a generic family of parametric spectral descriptors. We argue that in order to
be optimal for a specific task, the descriptor should take into account the
statistics of the corpus of shapes to which it is applied (the "signal") and
those of the class of transformations to which it is made insensitive (the
"noise"). While such statistics are hard to model axiomatically, they can be
learned from examples. Following the spirit of the Wiener filter in signal
processing, we show a learning scheme for the construction of optimal spectral
descriptors and relate it to Mahalanobis metric learning. The superiority of
the proposed approach is demonstrated on the SHREC'10 benchmark.
|
1110.5045
|
Error Graphs and the Reconstruction of Elements in Groups
|
math.CO cs.IT math.GR math.IT
|
Packing and covering problems for metric spaces, and graphs in particular,
are of essential interest in combinatorics and coding theory. They are
formulated in terms of metric balls of vertices. We consider a new problem in
graph theory which is also based on the consideration of metric balls of
vertices, but which is distinct from the traditional packing and covering
problems. This problem is motivated by applications in information transmission
when redundancy of messages is not sufficient for their exact reconstruction,
and applications in computational biology when one wishes to restore an
evolutionary process. It can be defined as the reconstruction, or
identification, of an unknown vertex in a given graph from a minimal number of
vertices (erroneous or distorted patterns) in a metric ball of a given radius r
around the unknown vertex. For this problem it is required to find minimum
restrictions for such a reconstruction to be possible and also to find
efficient reconstruction algorithms under such minimal restrictions.
In this paper we define error graphs and investigate their basic properties.
A particular class of error graphs occurs when the vertices of the graph are
the elements of a group, and when the path metric is determined by a suitable
set of group elements. These are the undirected Cayley graphs. Of particular
interest is the transposition Cayley graph on the symmetric group which occurs
in connection with the analysis of transpositional mutations in molecular
biology. We obtain a complete solution of the above problems for the
transposition Cayley graph on the symmetric group.
|
1110.5051
|
Wikipedia Edit Number Prediction based on Temporal Dynamics Only
|
cs.LG
|
In this paper, we describe our approach to the Wikipedia Participation
Challenge which aims to predict the number of edits a Wikipedia editor will
make in the next 5 months. The best submission from our team, "zeditor",
achieved 41.7% improvement over WMF's baseline predictive model and the final
rank of 3rd place among 96 teams. An interesting characteristic of our approach
is that only temporal dynamics features (i.e., how the number of edits changes
in recent periods, etc.) are used in a self-supervised learning framework,
which makes it easily generalisable to other application domains.
|
1110.5057
|
Patterns of Emotional Blogging and Emergence of Communities: Agent-Based
Model on Bipartite Networks
|
cs.SI cs.HC physics.soc-ph
|
Background: We study the mechanisms underlying the collective emotional
behavior of bloggers by using agent-based modeling and parameters inferred
from the related empirical data.
Methodology/Principal Findings: A bipartite network of emotional agents and
posts evolves through the addition of agents and their actions on posts. The
emotion state of an agent, quantified by the arousal and the valence,
fluctuates in time due to events on the connected posts, and in the moments of
the agent's action it is transferred to a selected post. We claim that the
indirect communication of emotion in the model's rules, combined with the
action-delay
time and the circadian rhythm extracted from the empirical data, can explain
the genesis of emotional bursts by users on popular Blogs and similar Web
portals. The model also identifies the parameters and how they influence the
course of the dynamics.
Conclusions: The collective behavior is here recognized by the emergence of
communities on the network and the fractal time-series of their emotional
comments, powered by the negative emotion (critique). The evolving agents
communities leave characteristic patterns of the activity in the phase space of
the arousal--valence variables, where each segment represents a common emotion
described in psychology.
|
1110.5063
|
Recovering a Clipped Signal in Sparseland
|
cs.IT math.IT
|
In many data acquisition systems it is common to observe signals whose
amplitudes have been clipped. We present two new algorithms for recovering a
clipped signal by leveraging the model assumption that the underlying signal is
sparse in the frequency domain. Both algorithms employ ideas commonly used in
the field of Compressive Sensing; the first is a modified version of Reweighted
$\ell_1$ minimization, and the second is a modification of a simple greedy
algorithm known as Trivial Pursuit. An empirical investigation shows that both
approaches can recover signals with significant levels of clipping.
|
1110.5091
|
3D Protein Structure Predicted from Sequence
|
q-bio.BM cs.CE physics.bio-ph physics.data-an
|
The evolutionary trajectory of a protein through sequence space is
constrained by function and three-dimensional (3D) structure. Residues in
spatial proximity tend to co-evolve, yet attempts to invert the evolutionary
record to identify these constraints and use them to computationally fold
proteins have so far been unsuccessful. Here, we show that co-variation of
residue pairs, observed in a large protein family, provides sufficient
information to determine 3D protein structure. Using a data-constrained maximum
entropy model of the multiple sequence alignment, we identify pairs of
statistically coupled residue positions which are expected to be close in the
protein fold, termed contacts inferred from evolutionary information (EICs). To
assess the amount of information about the protein fold contained in these
coupled pairs, we evaluate the accuracy of predicted 3D structures for proteins
of 50-260 residues, from 15 diverse protein families, including a G-protein
coupled receptor. These structure predictions are de novo, i.e., they do not
use homology modeling or sequence-similar fragments from known structures. The
resulting low C{\alpha}-RMSD error range of 2.7-5.1{\AA}, over at least 75% of
the protein, indicates the potential for predicting essentially correct 3D
structures for the thousands of protein families that have no known structure,
provided they include a sufficiently large number of divergent sample
sequences. With the current enormous growth in sequence information based on
new sequencing technology, this opens the door to a comprehensive survey of
protein 3D structures, including many not currently accessible to the
experimental methods of structural genomics. This advance has potential
applications in many biological contexts, such as synthetic biology,
identification of functional sites in proteins and interpretation of the
functional impact of genetic variants.
|
1110.5092
|
Geometry of the 3-user MIMO interference channel
|
cs.IT math.IT
|
This paper studies vector space interference alignment for the three-user
MIMO interference channel with no time or frequency diversity. The main result
is a characterization of the feasibility of interference alignment in the
symmetric case where all transmitters have M antennas and all receivers have N
antennas. If N >= M and all users desire d transmit dimensions, then alignment
is feasible if and only if (2r+1)d <= max(rN,(r+1)M) for all nonnegative
integers r. The analogous result holds with M and N switched if M >= N.
It turns out that, just as for the 3-user parallel interference channel
\cite{BT09}, the length of alignment paths captures the essence of the problem.
In fact, for each feasible value of M and N the maximum alignment path length
dictates both the converse and achievability arguments.
One of the implications of our feasibility criterion is that simply counting
equations and comparing to the number of variables does not predict
feasibility. Instead, a more careful investigation of the geometry of the
alignment problem is required. The necessary condition obtained by counting
equations is implied by our new feasibility criterion.
|
1110.5097
|
Absolute Uniqueness of Phase Retrieval with Random Illumination
|
physics.optics cs.CV math-ph math.MP
|
Random illumination is proposed to enforce absolute uniqueness and resolve
all types of ambiguity, trivial or nontrivial, from phase retrieval. Almost
sure irreducibility is proved for any complex-valued object of a full rank
support. While the new irreducibility result can be viewed as a probabilistic
version of the classical result by Bruck, Sodin and Hayes, it provides a novel
perspective and an effective method for phase retrieval.
In particular, almost sure uniqueness, up to a global phase, is proved for
complex-valued objects under general two-point conditions. Under a tight sector
constraint absolute uniqueness is proved to hold with probability exponentially
close to unity as the object sparsity increases. Under a magnitude constraint
with random amplitude illumination, uniqueness modulo global phase is proved to
hold with probability exponentially close to unity as object sparsity
increases. For general complex-valued objects without any constraint, almost
sure uniqueness up to global phase is established with two sets of Fourier
magnitude data under two independent illuminations. Numerical experiments
suggest that random illumination essentially alleviates most, if not all,
numerical problems commonly associated with the standard phasing algorithms.
|
1110.5102
|
Towards Holistic Scene Understanding: Feedback Enabled Cascaded
Classification Models
|
cs.CV cs.AI cs.RO
|
Scene understanding includes many related sub-tasks, such as scene
categorization, depth estimation, object detection, etc. Each of these
sub-tasks is often notoriously hard, and state-of-the-art classifiers already
exist for many of them. These classifiers operate on the same raw image and
provide correlated outputs. It is desirable to have an algorithm that can
capture such correlation without requiring any changes to the inner workings of
any classifier.
We propose Feedback Enabled Cascaded Classification Models (FE-CCM), which
jointly optimize all the sub-tasks while requiring only a `black-box'
interface to the original classifier for each sub-task. We use a two-layer
cascade of classifiers, which are repeated instantiations of the original ones,
with the output of the first layer fed into the second layer as input. Our
training method involves a feedback step that allows later classifiers to
provide earlier classifiers information about which error modes to focus on. We
show that our method significantly improves performance in all the sub-tasks in
the domain of scene understanding, where we consider depth estimation, scene
categorization, event categorization, object detection, geometric labeling and
saliency detection. Our method also improves performance in two robotic
applications: an object-grasping robot and an object-finding robot.
|
1110.5156
|
Smart Cane: Assistive Cane for Visually-impaired People
|
cs.SY
|
This paper reports on a study that helps visually-impaired people to walk
more confidently. The study hypothesizes that a smart cane that alerts
visually-impaired people to obstacles in front could help them walk with fewer
accidents. The aim of the paper is to describe the development of a cane,
named the Smart Cane, that communicates with its users through voice alerts
and vibration. The development work involves coding and physical
installation. A series of tests have been carried out on the Smart Cane and
the results are discussed. This study found that the Smart Cane functions well
as intended, alerting users to obstacles in front of them.
|
1110.5172
|
Which Temporal Formalisms for Representing Knowledge Extracted from Cooking
Recipe Texts?
|
cs.AI
|
The goal of the Taaable project is to create a case-based reasoning system for
the retrieval and adaptation of cooking recipes. Within this framework, we
discuss the temporal aspects of recipes and the means of representing them in
order to adapt their text.
|
1110.5173
|
Ad Hoc Protocols Via Multi Agent Based Tools
|
cs.SI
|
The purpose of this paper is to investigate the behavior of Ad Hoc protocols
in agent-based simulation environments. First, we give a brief introduction to
agents and Ad Hoc networks. We introduce some agent-based simulation tools,
such as NS-2. Then we focus on two protocols: Ad Hoc On-Demand Distance Vector
(AODV) and Destination Sequenced Distance Vector (DSDV). At the end, we
present simulation results and discuss the reasons behind them.
|
1110.5176
|
Demodulating Subsampled Direct Sequence Spread Spectrum Signals using
Compressive Signal Processing
|
cs.IT cs.NI math.IT
|
We show that to lower the sampling rate in a spread spectrum communication
system using Direct Sequence Spread Spectrum (DSSS), compressive signal
processing can be applied to demodulate the received signal. This may lead to a
decrease in the power consumption or the manufacturing price of wireless
receivers using spread spectrum technology. The main novelty of this paper is
the discovery that in spread spectrum systems it is possible to apply
compressive sensing with a much simpler hardware architecture than in other
systems, making the implementation both simpler and more energy efficient. Our
theoretical work is exemplified with a numerical experiment using the IEEE
802.15.4 standard's 2.4 GHz band specification. The numerical results support
our theoretical findings and indicate that compressive sensing may be used
successfully in spread spectrum communication systems. The results obtained
here may also be applicable in other spread spectrum technologies, such as Code
Division Multiple Access (CDMA) systems.
|
1110.5181
|
Paraglide: Interactive Parameter Space Partitioning for Computer
Simulations
|
cs.SY
|
In this paper we introduce paraglide, a visualization system designed for
interactive exploration of parameter spaces of multi-variate simulation models.
To get the right parameter configuration, model developers frequently have to
go back and forth between setting parameters and qualitatively judging the
outcomes of their model. During this process, they build up a grounded
understanding of the parameter effects in order to pick the right setting.
Current state-of-the-art tools and practices, however, fail to provide a
systematic way of exploring these parameter spaces, making informed decisions
about parameter settings a tedious and workload-intensive task. Paraglide
endeavors to overcome this shortcoming by assisting the sampling of the
parameter space and the discovery of qualitatively different model outcomes.
This results in a decomposition of the model parameter space into regions of
distinct behaviour. We developed paraglide in close collaboration with experts
from three different domains, who all were involved in developing new models
for their domain. We first analyzed current practices of six domain experts and
derived a set of design requirements, then engaged in a longitudinal
user-centered design process, and finally conducted three in-depth case studies
underlining the usefulness of our approach.
|
1110.5183
|
Diffusion of Information in Robot Swarms
|
cs.RO
|
This work is devoted to communication approaches that spread information in
robot swarms. These mechanisms are useful for large-scale systems, and also
for cases in which limited communication equipment does not allow the routing
of information packets. We focus on two such approaches, virtual fields and
epidemic algorithms, discuss several aspects of their hardware implementation,
and demonstrate experiments performed with the "Jasmine" microrobots.
|
1110.5186
|
Removing spurious interactions in complex networks
|
physics.soc-ph cs.SI
|
Identifying and removing spurious links in complex networks is a meaningful
problem for many real applications and is crucial for improving the reliability
of network data, which in turn can lead to a better understanding of the highly
interconnected nature of various social, biological and communication systems.
In this work we study the features of different simple spurious link
elimination methods, revealing that they may lead to the distortion of
networks' structural and dynamical properties. Accordingly, we propose a hybrid
method which combines similarity-based index and edge-betweenness centrality.
We show that our method can effectively eliminate the spurious interactions
while leaving the network connected and preserving the network's
functionalities.
|
1110.5222
|
Continuous transition of social efficiencies in the stochastic strategy
Minority Game
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We show that in a variant of the Minority Game problem, the agents can reach
a state of maximum social efficiency, where the fluctuation between the two
choices is minimum, by following a simple stochastic strategy. Imagining a
social scenario in which the agents can only guess the number of excess people
in the majority, we show that as long as the guess value is sufficiently close
to reality, the system can reach a state of full efficiency, i.e., minimum
fluctuation. A continuous transition to a less efficient condition is observed
when the guess value becomes worse. Hence, people can tune their guess of the
excess population to maximize the period spent in the majority state. We also
consider the situation where a finite fraction of agents always decide
completely randomly (random traders), as opposed to the rest of the population
that follow a certain strategy (chartists). For a single random trader the
system becomes fully efficient, with the majority-minority crossover occurring
every two days on average. For just two random traders, all the agents have
equal gain with arbitrarily small fluctuations.
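The guess-driven dynamics can be illustrated with a toy simulation. This is my own minimal reading of such a rule, for illustration only: each agent on the majority side switches with probability g/(2M), where M is the majority size and g is the guessed excess, so that with a perfect guess about g/2 agents cross over each round and the fluctuation collapses.

```python
import random

# Toy stochastic-strategy simulation (illustrative assumption, not
# necessarily the paper's exact rule): majority agents switch with
# probability g / (2*M), where g is the guessed excess.
random.seed(42)
N = 2001                          # odd, so a strict minority always exists
choices = [random.randint(0, 1) for _ in range(N)]

def excess(ch):
    ones = sum(ch)
    return abs(2 * ones - N)      # |majority - minority| (always odd)

history = [excess(choices)]
for _ in range(50):
    ones = sum(choices)
    maj = 1 if 2 * ones > N else 0
    M = max(ones, N - ones)
    g = history[-1]               # perfect guess of the current excess
    p = g / (2 * M)               # expected number of switchers: g/2
    for i in range(N):
        if choices[i] == maj and random.random() < p:
            choices[i] = 1 - choices[i]
    history.append(excess(choices))
print(history[0], "->", history[-1])   # initial vs final excess
```

Under this rule the excess shrinks roughly like the square root of its previous value per round, so the system settles near the minimum fluctuation allowed by parity.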
|
1110.5265
|
On Programs and Genomes
|
q-bio.OT cs.CE q-bio.GN
|
We outline the global control architecture of genomes. A theory of genomic
control information is presented. The concept of a developmental control
network called a cene (for control gene) is introduced. We distinguish
parts-genes from control genes or cenes. Cenes are interpreted and executed by
the cell and, thereby, direct cell actions including communication, growth,
division, differentiation and multi-cellular development. The cenome is the
global developmental control network in the genome. The cenome is also a cene
that consists of interlinked sub-cenes that guide the ontogeny of the organism.
The complexity of organisms is linked to the complexity of the cenome. The
relevance to ontogeny and evolution is mentioned. We introduce the concept of a
universal cell and a universal genome.
|
1110.5280
|
Two-Population Dynamics in a Growing Network Model
|
physics.soc-ph cond-mat.stat-mech cs.SI
|
We introduce a growing network evolution model with nodal attributes. The
model describes the interactions between potentially violent V and non-violent
N agents who have different affinities in establishing connections within their
own population versus between the populations. The model is able to generate
all stable triads observed in real social systems. In the framework of rate
equations theory, we employ the mean-field approximation to derive analytical
expressions of the degree distribution and the local clustering coefficient for
each type of nodes. Analytical derivations agree well with numerical simulation
results. The assortativity of the potentially violent network qualitatively
resembles the connectivity pattern in terrorist networks that was recently
reported. The assortativity of the network driven by aggression shows clearly
different behavior than the assortativity of the networks with connections of
non-aggressive nature in agreement with recent empirical results of an online
social system.
|
1110.5342
|
Dynamic Bit Allocation for Object Tracking in Bandwidth Limited Sensor
Networks
|
stat.AP cs.IT math.IT
|
In this paper, we study the target tracking problem in wireless sensor
networks (WSNs) using quantized sensor measurements under limited bandwidth
availability. At each time step of tracking, the available bandwidth $R$ needs
to be distributed among the $N$ sensors in the WSN for the next time step. The
optimal solution for the bandwidth allocation problem can be obtained by using
a combinatorial search which may become computationally prohibitive for large
$N$ and $R$. Therefore, we develop two new computationally efficient suboptimal
bandwidth distribution algorithms which are based on convex relaxation and
approximate dynamic programming (A-DP). We compare the mean squared error (MSE)
and computational complexity performances of convex relaxation and A-DP with
other existing suboptimal bandwidth distribution schemes based on generalized
Breiman, Friedman, Olshen, and Stone (GBFOS) algorithm and greedy search.
Simulation results show that A-DP, convex optimization and GBFOS yield similar
MSE performance, which is very close to that of the optimal exhaustive search
approach, and that they significantly outperform the greedy search and
nearest-neighbor based bandwidth allocation approaches. Computationally, A-DP
is more efficient than the bandwidth allocation schemes based on convex
relaxation and GBFOS, especially for a large sensor network.
|
1110.5371
|
MyZone: A Next-Generation Online Social Network
|
cs.SI cs.CR cs.DC cs.NI physics.soc-ph
|
This technical report considers the design of a social network that would
address the shortcomings of the current ones, and identifies user privacy,
security, and service availability as strong motivations that push the
architecture of the proposed design to be distributed. We describe our design
in detail and identify the property of resiliency as a key objective for the
overall design philosophy.
We define the system goals, threat model, and trust model as part of the
system model, and discuss the challenges in adapting such distributed
frameworks to become highly available and highly resilient in potentially
hostile environments. We propose a distributed solution to address these
challenges based on a trust-based friendship model for replicating user
profiles and disseminating messages, and examine how this approach builds upon
prior work in distributed Peer-to-Peer (P2P) networks.
|
1110.5383
|
Quilting Stochastic Kronecker Product Graphs to Generate Multiplicative
Attribute Graphs
|
stat.ML cs.LG stat.CO
|
We describe the first sub-quadratic sampling algorithm for the Multiplicative
Attribute Graph Model (MAGM) of Kim and Leskovec (2010). We exploit the close
connection between MAGM and the Kronecker Product Graph Model (KPGM) of
Leskovec et al. (2010), and show that to sample a graph from a MAGM it suffices
to sample a small number of KPGM graphs and \emph{quilt} them together. Under a
restricted set of technical conditions our algorithm runs in $O((\log_2(n))^3
|E|)$ time, where $n$ is the number of nodes and $|E|$ is the number of edges
in the sampled graph. We demonstrate the scalability of our algorithm via
extensive empirical evaluation; we can sample a MAGM graph with 8 million nodes
and 20 billion edges in under 6 hours.
|
1110.5396
|
Joint Channel-Network Coding Strategies for Networks with Low Complexity
Relays
|
cs.IT math.IT
|
We investigate joint network and channel coding schemes for networks when
relay nodes are not capable of performing channel coding operations. Rather,
channel encoding is performed at the source node while channel decoding is done
only at the destination nodes. We examine three different decoding strategies:
independent network-then-channel decoding, serial network and channel decoding,
and joint network and channel decoding. Furthermore, we describe how to
implement such joint network and channel decoding using iteratively decodable
error correction codes. Using simple networks as a model, we derive achievable
rate regions and use simulations to demonstrate the effectiveness of the three
decoders.
|
1110.5404
|
Face Recognition Based on SVM and 2DPCA
|
cs.CV
|
This paper presents a novel approach to the face recognition problem. Our
method combines 2D Principal Component Analysis (2DPCA), one of the prominent
methods for extracting feature vectors, and the Support Vector Machine (SVM),
a powerful discriminative method for classification. Experiments with the
proposed method were conducted on two public data sets, FERET and AT&T; the
results show that the proposed method improves classification rates.
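For concreteness, the 2DPCA feature-extraction step can be sketched as follows. This is a minimal illustration on synthetic data; the SVM stage is replaced by a nearest-neighbor match to keep the sketch dependency-free, so it is not the authors' pipeline.

```python
import numpy as np

# Sketch of 2DPCA: project each image matrix onto the top eigenvectors of
# the image covariance G = E[(A - mean)^T (A - mean)]. Data is synthetic.
def fit_2dpca(images, k=2):
    mean = images.mean(axis=0)
    G = np.zeros((images.shape[2], images.shape[2]))
    for A in images:
        D = A - mean
        G += D.T @ D
    G /= len(images)
    vals, vecs = np.linalg.eigh(G)            # eigenvalues ascending
    return vecs[:, -k:]                       # top-k projection vectors

def features(images, W):
    return np.array([A @ W for A in images])  # (N, h, k) feature matrices

rng = np.random.default_rng(0)
train = rng.random((10, 8, 6))                # 10 synthetic 8x6 "faces"
labels = np.arange(10)
W = fit_2dpca(train, k=2)
Ftr = features(train, W)

probe = train[3] + 0.01 * rng.standard_normal((8, 6))   # noisy copy of #3
Fp = probe @ W
dists = [np.linalg.norm(Fp - f) for f in Ftr]
print(labels[int(np.argmin(dists))])          # should recover identity 3
```

In the paper's setting, the flattened feature matrices would be fed to an SVM classifier instead of the nearest-neighbor match used here.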
|
1110.5447
|
Optimal discovery with probabilistic expert advice
|
math.OC cs.LG
|
We consider an original problem that arises from the issue of security
analysis of a power system and that we name optimal discovery with
probabilistic expert advice. We address it with an algorithm based on the
optimistic paradigm and the Good-Turing missing mass estimator. We show that
this strategy uniformly attains the optimal discovery rate in a macroscopic
limit sense, under some assumptions on the probabilistic experts. We also
provide numerical experiments suggesting that this optimal behavior may still
hold under weaker assumptions.
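The Good-Turing missing-mass estimator at the core of this strategy has a well-known one-line form: the estimated total probability of all never-seen items is the fraction of observations whose value occurred exactly once. A minimal sketch (illustrative only, not the paper's full optimistic algorithm):

```python
from collections import Counter

def good_turing_missing_mass(samples):
    """Estimate the total probability of unseen items: the fraction of
    samples whose value occurred exactly once."""
    counts = Counter(samples)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(samples)

draws = ["a", "b", "a", "c", "d", "a", "b"]
# "c" and "d" each occur once among 7 draws, so the estimate is 2/7
print(good_turing_missing_mass(draws))
```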
|
1110.5450
|
Hand Tracking based on Hierarchical Clustering of Range Data
|
cs.CV
|
Fast and robust hand segmentation and tracking is an essential basis for
gesture recognition and thus an important component for contact-less
human-computer interaction (HCI). Hand gesture recognition based on 2D video
data has been intensively investigated. However, in practical scenarios purely
intensity based approaches suffer from uncontrollable environmental conditions
like cluttered background colors. In this paper we present a real-time hand
segmentation and tracking algorithm using Time-of-Flight (ToF) range cameras
and intensity data. The intensity and range information is fused into one pixel
value, representing its combined intensity-depth homogeneity. The scene is
hierarchically clustered using a GPU based parallel merging algorithm, allowing
a robust identification of both hands even for inhomogeneous backgrounds. After
the detection, both hands are tracked on the CPU. Our tracking algorithm can
cope with the situation that one hand is temporarily covered by the other hand.
|
1110.5609
|
Self-similar scaling of density in complex real-world networks
|
nlin.AO cs.SI physics.soc-ph
|
Despite their diverse origin, networks of large real-world systems reveal a
number of common properties including small-world phenomena, scale-free degree
distributions and modularity. Recently, network self-similarity as a natural
outcome of the evolution of real-world systems has also attracted much
attention within the physics literature. Here we investigate the scaling of
density in complex networks under two classical box-covering renormalizations
(a form of network coarse-graining) and also under different community-based
renormalizations. The analysis of over 50 real-world networks reveals a
power-law scaling of network density with size under an adequate
renormalization technique, irrespective of network type and origin. The
results thus advance a recent discovery of a universal scaling of density
among different real-world networks [Laurienti et al., Physica A 390 (20)
(2011) 3608-3613] and imply the existence of a scale-free density also within
(i.e., among the different self-similar scales of) complex real-world
networks. The latter further improves the comprehension of self-similar
structure in large real-world networks, with several possible applications.
|
1110.5667
|
Inducing Probabilistic Programs by Bayesian Program Merging
|
cs.AI cs.LG
|
This report outlines an approach to learning generative models from data. We
express models as probabilistic programs, which allows us to capture abstract
patterns within the examples. By choosing our language for programs to be an
extension of the algebraic data type of the examples, we can begin with a
program that generates all and only the examples. We then introduce greater
abstraction, and hence generalization, incrementally to the extent that it
improves the posterior probability of the examples given the program. Motivated
by previous approaches to model merging and program induction, we search for
such explanatory abstractions using program transformations. We consider two
types of transformation: Abstraction merges common subexpressions within a
program into new functions (a form of anti-unification). Deargumentation
simplifies functions by reducing the number of arguments. We demonstrate that
this approach finds key patterns in the domain of nested lists, including
parameterized sub-functions and stochastic recursion.
|
1110.5673
|
Heterogeneity shapes groups growth in social online communities
|
physics.soc-ph cs.SI
|
Many complex systems are characterized by broad distributions capturing, for
example, the size of firms, the population of cities or the degree distribution
of complex networks. Typically this feature is explained by means of a
preferential growth mechanism. Although heterogeneity is expected to play a
role in the evolution, it is usually not considered in the modeling, probably
due to a lack of empirical evidence on how it is distributed. We characterize the
intrinsic heterogeneity of groups in an online community and then show that
together with a simple linear growth and an inhomogeneous birth rate it
explains the broad distribution of group members.
|
1110.5688
|
Discussion on "Techniques for Massive-Data Machine Learning in
Astronomy" by A. Gray
|
astro-ph.IM astro-ph.CO cs.LG
|
Astronomy is increasingly encountering two fundamental truths: (1) The field
is faced with the task of extracting useful information from extremely large,
complex, and high dimensional datasets; (2) The techniques of astroinformatics
and astrostatistics are the only way to make this tractable, and bring the
required level of sophistication to the analysis. Thus, an approach which
provides these tools in a way that scales to these datasets is not just
desirable, it is vital. The expertise required spans not just astronomy, but
also computer science, statistics, and informatics. As a computer scientist and
expert in machine learning, Alex's contribution of expertise and a large number
of fast algorithms designed to scale to large datasets, is extremely welcome.
We focus in this discussion on the questions raised by the practical
application of these algorithms to real astronomical datasets. That is, what is
needed to maximally leverage their potential to improve the science return?
This is not a trivial task. While computing and statistical expertise are
required, so is astronomical expertise. Precedent has shown that, to-date, the
collaborations most productive in producing astronomical science results (e.g,
the Sloan Digital Sky Survey), have either involved astronomers expert in
computer science and/or statistics, or astronomers involved in close, long-term
collaborations with experts in those fields. This does not mean that the
astronomers are giving the most important input, but simply that their input is
crucial in guiding the effort in the most fruitful directions, and coping with
the issues raised by real data. Thus, the tools must be useable and
understandable by those whose primary expertise is not computing or statistics,
even though they may have quite extensive knowledge of those fields.
|
1110.5710
|
Results on the Redundancy of Universal Compression for Finite-Length
Sequences
|
cs.IT math.IT
|
In this paper, we investigate the redundancy of universal coding schemes on
smooth parametric sources in the finite-length regime. We derive an upper bound
on the probability of the event that a sequence of length $n$, chosen using
Jeffreys' prior from the family of parametric sources with $d$ unknown
parameters, is compressed with a redundancy smaller than
$(1-\epsilon)\frac{d}{2}\log n$ for any $\epsilon>0$. Our results also confirm
that for large enough $n$ and $d$, the average minimax redundancy provides a
good estimate for the redundancy of most sources. Our result may be used to
evaluate the performance of universal source coding schemes on finite-length
sequences. Additionally, we precisely characterize the minimax redundancy for
two-stage codes. We demonstrate that the two-stage assumption incurs a
negligible redundancy, especially when the number of source parameters is large.
Finally, we show that the redundancy is significant in the compression of small
sequences.
|
1110.5722
|
Annotation of Scientific Summaries for Information Retrieval
|
cs.IR
|
We present a methodology combining surface NLP and Machine Learning
techniques for ranking abstracts and generating summaries based on annotated
corpora. The corpora were annotated with meta-semantic tags indicating the
category of information a sentence is bearing (objective, findings, newthing,
hypothesis, conclusion, future work, related work). The annotated corpus is fed
into an automatic summarizer for query-oriented abstract ranking and multi-
abstract summarization. To adapt the summarizer to these two tasks, two novel
weighting functions were devised in order to take into account the distribution
of the tags in the corpus. The results, although still preliminary, encourage
us to pursue this line of work and to find better ways of building IR systems
that can take semantic annotations in a corpus into account.
|
1110.5741
|
Secure Capacity Region for Erasure Broadcast Channels with Feedback
|
cs.IT math.IT
|
We formulate and study a cryptographic problem relevant to wireless: a
sender, Alice, wants to transmit private messages to two receivers, Bob and
Calvin, using unreliable wireless broadcast transmissions and short public
feedback from Bob and Calvin. We ask, at what rates can we broadcast the
private messages if we also provide (information-theoretic) unconditional
security guarantees that Bob and Calvin do not learn each-other's message? We
characterize the largest transmission rates to the two receivers, for any
protocol that provides unconditional security guarantees. We design a protocol
that operates at any rate-pair within the above region, uses very simple
interactions and operations, and is robust to misbehaving users.
|
1110.5746
|
Private and Quantum Capacities of More Capable and Less Noisy Quantum
Channels
|
quant-ph cs.IT math.IT
|
Two new classes of quantum channels, which we call more capable and less
noisy, are introduced. The more capable class consists of channels such that
the quantum capacities of the complementary channels to the environments are
zero. The less noisy class consists of channels such that the private
capacities of the complementary channels to the environment are zero. For the
more capable class, it is clarified that the private capacity and quantum
capacity coincide. For the less noisy class, it is clarified that the private
capacity and quantum capacity can be single letter characterized.
|
1110.5762
|
Swarmrobot.org - Open-hardware Microrobotic Project for Large-scale
Artificial Swarms
|
cs.RO cs.MA
|
The purpose of this paper is to give an overview of the open-hardware
microrobotic project swarmrobot.org and the Jasmine platform for building
large-scale artificial swarms. The project targets the open development of
cost-effective hardware and software for a quick implementation of swarm
behavior with real robots. Detailed instructions for building the robot, an
open-source simulator, software libraries and multiple publications about
performed experiments are available for download, and are intended to
facilitate the experimental exploration of collective and emergent phenomena,
guided self-organization and swarm robotics.
|
1110.5765
|
Throughput-Distortion Computation Of Generic Matrix Multiplication:
Toward A Computation Channel For Digital Signal Processing Systems
|
cs.MS cs.CE
|
The generic matrix multiply (GEMM) function is the core element of
high-performance linear algebra libraries used in many
computationally-demanding digital signal processing (DSP) systems. We propose
an acceleration technique for GEMM based on dynamically adjusting the
imprecision (distortion) of computation. Our technique employs adaptive scalar
companding and rounding to input matrix blocks followed by two forms of packing
in floating-point that allow for concurrent calculation of multiple results.
Since the adaptive companding process controls the increase of concurrency (via
packing), the increase in processing throughput (and the corresponding increase
in distortion) depends on the input data statistics. To demonstrate this, we
derive the optimal throughput-distortion control framework for GEMM for the
broad class of zero-mean, independent identically distributed, input sources.
Our approach converts matrix multiplication in programmable processors into a
computation channel: when increasing the processing throughput, the output
noise (error) increases due to (i) coarser quantization and (ii) computational
errors caused by exceeding the machine-precision limitations. We show that,
under certain distortion in the GEMM computation, the proposed framework can
significantly surpass 100% of the peak performance of a given processor. The
practical benefits of our proposal are shown in a face recognition system and a
multi-layer perceptron system trained for metadata learning from a large music
feature database.
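The packing idea behind this throughput increase can be illustrated in a few lines. The construction below is a toy of my own (not the paper's adaptive companding scheme): two bounded nonnegative operands share one double, so a single multiplication returns two products at once.

```python
import math

# Toy floating-point packing: encode operands a and b as a*K + b. One
# multiply by c then yields a*c (high part) and b*c (low part), provided
# both products stay below the shift K so they do not overlap.
K = 2 ** 20

def pack(a, b):
    return a * K + b            # exact in a double for small integers

def unpack(s):
    hi = math.floor(s / K)
    lo = s - hi * K
    return hi, lo

a, b, c = 37.0, 54.0, 113.0
s = pack(a, b)
prod = s * c                    # ONE multiply computes both products
ac, bc = unpack(prod)
print(ac, bc)                   # recovers 37*113 = 4181 and 54*113 = 6102
```

Exceeding the bound K (or the double's 53-bit mantissa) corrupts the unpacked results, which is the precision/throughput trade-off the abstract controls via companding.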
|
1110.5813
|
Overlapping Community Detection in Networks: the State of the Art and
Comparative Study
|
cs.SI cs.DS physics.soc-ph
|
This paper reviews the state of the art in overlapping community detection
algorithms, quality measures, and benchmarks. A thorough comparison of
different algorithms (a total of fourteen) is provided. In addition to
community level evaluation, we propose a framework for evaluating algorithms'
ability to detect overlapping nodes, which helps to assess over-detection and
under-detection. After considering community level detection performance
measured by Normalized Mutual Information, the Omega index, and node level
detection performance measured by F-score, we reached the following
conclusions. For low overlapping density networks, SLPA, OSLOM, Game and COPRA
offer better performance than the other tested algorithms. For networks with
high overlapping density and high overlapping diversity, both SLPA and Game
provide relatively stable performance. However, test results also suggest that
detection in such networks is not yet fully resolved. A common
feature observed by various algorithms in real-world networks is the relatively
small fraction of overlapping nodes (typically less than 30%), each of which
belongs to only 2 or 3 communities.
|
1110.5863
|
A Wikipedia Literature Review
|
cs.DL cs.IR
|
This paper was originally designed as a literature review for a doctoral
dissertation focusing on Wikipedia. This exposition gives the structure of
Wikipedia and the latest trends in Wikipedia research.
|
1110.5865
|
Cancer Networks: A general theoretical and computational framework for
understanding cancer
|
q-bio.MN cs.CE cs.MA q-bio.CB q-bio.GN
|
We present a general computational theory of cancer and its developmental
dynamics. The theory is based on a theory of the architecture and function of
developmental control networks which guide the formation of multicellular
organisms. Cancer networks are special cases of developmental control networks.
Cancer results from transformations of normal developmental networks. Our
theory generates a natural classification of all possible cancers based on
their network architecture. Each cancer network has a unique topology and
semantics and developmental dynamics that result in distinct clinical tumor
phenotypes. We apply this new theory to a series of proof-of-concept cases
for all the basic cancer types. These cases have been computationally modeled,
their behavior simulated and mathematically described using a multicellular
systems biology approach. There are fascinating correspondences between the
dynamic developmental phenotype of computationally modeled {\em in silico}
cancers and natural {\em in vivo} cancers. The theory lays the foundation for a
new research paradigm for understanding and investigating cancer. The theory of
cancer networks implies that new diagnostic methods and new treatments to cure
cancer will become possible.
|
1110.5870
|
Universal and efficient compressed sensing by spread spectrum and
application to realistic Fourier imaging techniques
|
cs.IT math.IT
|
We advocate a compressed sensing strategy that consists of multiplying the
signal of interest by a wide bandwidth modulation before projection onto
randomly selected vectors of an orthonormal basis. Firstly, in a digital
setting with random modulation, considering a whole class of sensing bases
including the Fourier basis, we prove that the technique is universal in the
sense that the required number of measurements for accurate recovery is optimal
and independent of the sparsity basis. This universality stems from a drastic
decrease of coherence between the sparsity and the sensing bases, which for a
Fourier sensing basis relates to a spread of the original signal spectrum by
the modulation (hence the name "spread spectrum"). The approach is also
efficient as sensing matrices with fast matrix multiplication algorithms can be
used, in particular in the case of Fourier measurements. Secondly, these
results are confirmed by a numerical analysis of the phase transition of the
l1- minimization problem. Finally, we show that the spread spectrum technique
remains effective in an analog setting with chirp modulation for application to
realistic Fourier imaging. We illustrate these findings in the context of radio
interferometry and magnetic resonance imaging.
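The coherence mechanism can be checked numerically. The sketch below is my own toy experiment, not the paper's analysis: it measures the maximum inner product between a Fourier sensing basis and a Fourier sparsity basis (the worst case for compressed sensing), with and without a random +/-1 pre-modulation.

```python
import numpy as np

# Coherence between sensing rows and sparsity columns, before and after
# random modulation. Fourier-vs-Fourier is maximally coherent (mu ~ 1);
# modulation spreads the spectrum and drives mu down toward ~1/sqrt(n).
rng = np.random.default_rng(0)
n = 256
F = np.fft.fft(np.eye(n)) / np.sqrt(n)        # unitary DFT (sensing basis)
Psi = F.conj().T                              # sparsity basis (Fourier)

def coherence(A, B):
    return np.max(np.abs(A @ B))              # max |<a_k, b_j>|

m = np.diag(rng.choice([-1.0, 1.0], size=n))  # random +/-1 modulation
mu_plain = coherence(F, Psi)                  # ~1: no spreading
mu_mod = coherence(F @ m, Psi)                # much smaller after spreading
print(mu_plain, mu_mod)
```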
|
1110.5890
|
Location-aided Distributed Primary User Identification in a Cognitive
Radio Scenario
|
cs.NI cs.IT math.IT
|
We address a cognitive radio scenario in which a number of secondary users
identify which primary user, if any, is transmitting, in a distributed way and
using limited location information. We propose two fully distributed
algorithms: the first is a direct identification scheme; in the second, a
distributed sub-optimal detection step based on a simplified Neyman-Pearson
energy detector precedes the identification. Both algorithms are studied
analytically in a realistic transmission scenario, and the advantage obtained
by the detection pre-processing step is also verified via simulation. Finally,
we give details of their fully distributed implementation via consensus
averaging algorithms.
|
1110.5892
|
Semi-optimal Practicable Algorithmic Cooling
|
quant-ph cs.ET cs.IT math.IT
|
Algorithmic Cooling (AC) of spins applies entropy manipulation algorithms in
open spin-systems in order to cool spins far beyond Shannon's entropy bound. AC
of nuclear spins was demonstrated experimentally, and may contribute to nuclear
magnetic resonance (NMR) spectroscopy. Several cooling algorithms were
suggested in recent years, including practicable algorithmic cooling (PAC) and
exhaustive AC. Practicable algorithms have simple implementations, yet their
level of cooling is far from optimal; Exhaustive algorithms, on the other hand,
cool much better, and some even reach (asymptotically) an optimal level of
cooling, but they are not practicable. We introduce here semi-optimal
practicable AC (SOPAC), wherein a few cycles (typically 2-6) are performed at
each recursive level. Two classes of SOPAC algorithms are proposed and
analyzed. Both attain cooling levels significantly better than PAC, and are
much more efficient than the exhaustive algorithms. The new algorithms are
shown to bridge the gap between PAC and exhaustive AC. In addition, we
calculated the number of spins required by SOPAC in order to purify qubits for
quantum computation. As few as 12 and 7 spins are required (in an ideal
scenario) to yield a mildly pure spin (60% polarized) from initial
polarizations of 1% and 10%, respectively. In the latter case, about five more
spins are sufficient to produce a highly pure spin (99.99% polarized), which
could be relevant for fault-tolerant quantum computing.
|
1110.5944
|
Communication cost of classically simulating a quantum channel with
subsequent rank-1 projective measurement
|
quant-ph cs.IT math-ph math.IT math.MP
|
A process of preparation, transmission and subsequent projective measurement
of a qubit can be simulated by a classical model with only two bits of
communication and some amount of shared randomness. However no model for n
qubits with a finite amount of classical communication is known at present. A
lower bound for the communication cost can provide useful hints for a
generalization. It is known for example that the amount of communication must
be greater than c 2^n, where c~0.01. The proof uses a quite elaborate theorem
of communication complexity. Using a mathematical conjecture known as the
"double cap conjecture", we strengthen this result by presenting a geometrical
and extremely simple derivation of the lower bound 2^n-1. Only rank-1
projective measurements are involved in the derivation.
|
1110.5945
|
A New Similarity Measure for Non-Local Means Filtering of MRI Images
|
cs.CV
|
The acquisition of MRI images offers a trade-off in terms of acquisition
time, spatial/temporal resolution and signal-to-noise ratio (SNR). Thus, for
instance, increasing the time efficiency of MRI often comes at the expense of
reduced SNR. This, in turn, necessitates the use of post-processing tools for
noise rejection, which makes image de-noising an indispensable component of
computer-assisted diagnosis. In the field of MRI, a multitude of image
de-noising methods have been proposed hitherto. In this paper, the application
of a particular class of de-noising algorithms - known as non-local mean (NLM)
filters - is investigated. Such filters have been recently applied for MRI data
enhancement and they have been shown to provide more accurate results as
compared to many alternative de-noising algorithms. Unfortunately, virtually
all existing methods for NLM filtering have been derived under the assumption
of additive white Gaussian (AWG) noise contamination. Since this assumption is
known to fail at low values of SNR, an alternative formulation of NLM filtering
is required, which would take into consideration the correct Rician statistics
of MRI noise. Accordingly, the contribution of the present paper is two-fold.
First, it points out some principal disadvantages of the earlier methods of NLM
filtering of MRI images and suggests means to rectify them. Second, the paper
introduces a new similarity measure for NLM filtering of MRI Images, which is
derived under bona fide statistical assumptions and results in more accurate
reconstruction of MR scans as compared to alternative NLM approaches. Finally,
the utility and viability of the proposed method is demonstrated through a
series of numerical experiments using both in silico and in vivo MRI data.
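As a hypothetical illustration of the classical Gaussian-noise NLM weighting that this paper sets out to replace, here is a minimal 1-D non-local means filter; the patch radius, smoothing parameter `h`, and test signal are invented for the sketch, and a Rician-aware variant would substitute a different similarity measure:

```python
import math

def nlm_1d(signal, patch_radius=1, h=0.5):
    """Minimal 1-D non-local means filter with Gaussian-weighted
    patch similarity (the classical AWG-noise formulation)."""
    n = len(signal)
    out = []
    for i in range(n):
        num, den = 0.0, 0.0
        for j in range(n):
            # Squared Euclidean distance between patches around i and j
            # (patches are clipped at the signal boundaries).
            d2 = 0.0
            for k in range(-patch_radius, patch_radius + 1):
                pi = min(max(i + k, 0), n - 1)
                pj = min(max(j + k, 0), n - 1)
                d2 += (signal[pi] - signal[pj]) ** 2
            w = math.exp(-d2 / (h * h))  # similarity weight
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

noisy = [1.0, 1.1, 0.9, 1.0, 5.0, 5.1, 4.9, 5.0]
denoised = nlm_1d(noisy)
```

Because each output sample is a convex combination of input samples, the result always stays within the range of the input; the paper's contribution is to replace the Gaussian kernel above with a similarity measure matched to Rician noise statistics.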
|
1110.5962
|
Tracking Traders' Understanding of the Market Using e-Communication Data
|
cs.SI physics.data-an physics.soc-ph
|
Tracking the volume of keywords in Internet searches, message boards, or
Tweets has provided an alternative means of following or predicting associations
between popular interest and disease incidence. Here, we extend that research
by examining the role of e-communications among day traders and their
collective understanding of the market. Our study introduces a general method
that focuses on bundles of words that behave differently from daily
communication routines, and uses original data covering the content of instant
messages among all day traders at a trading firm over a 40-month period.
Analyses show that two word bundles convey traders' understanding of same day
market events and potential next day market events. We find that when market
volatility is high, traders' communications are dominated by same day events,
and when volatility is low, communications are dominated by next day events. We
show that the stronger the traders' attention to either same day or next day
events, the higher their collective trading performance. We conclude that
e-communication among traders is a product of mass collaboration over diverse
viewpoints that embodies unique information about their weak or strong
understanding of the market.
|
1110.5992
|
User preference extraction using dynamic query sliders in conjunction
with UPS-EMO algorithm
|
cs.NE cs.NA
|
One drawback of evolutionary multiobjective optimization algorithms (EMOAs)
has traditionally been the high computational cost of creating an approximation
of the Pareto front: the number of required objective function evaluations
usually grows large. On the other hand, it may be difficult for the decision
maker (DM) to select one of the many produced solutions as the final one,
especially in the case of more than two objectives.
To overcome the above-mentioned drawbacks, a number of EMOAs incorporating the
decision maker's preference information have been proposed. In this case,
objective function evaluations can be saved by generating only the part of
the front the DM is interested in, which also narrows down the pool of
possible selections for the final solution.
Unfortunately, most of the current EMO approaches utilizing preferences are
not very intuitive to use; i.e., they may require tweaking of unintuitive
parameters, and it is not always clear what kind of results one can get with
a given set of parameters. In this study we propose a new approach to visually
inspect produced solutions, and to extract preference information from the DM
to further guide the search. Our approach is based on intuitive use of dynamic
query sliders, which serve as a means to extract preference information and are
part of the graphical user interface implemented for the efficient UPS-EMO
algorithm.
|
1110.6012
|
The automorphism group of a self-dual binary [72,36,16] code does not
contain Z7, Z3xZ3, or D10
|
cs.IT math.IT
|
A computer calculation with Magma shows that there is no extremal self-dual
binary code C of length 72 that has an automorphism group containing D10,
Z3xZ3, or Z7. Combining this with the known results in the literature one
obtains that Aut(C) is either Z5 or has order dividing 24.
|
1110.6027
|
Entropy of the Mixture of Sources and Entropy Dimension
|
cs.IT math.IT
|
We investigate the problem of the entropy of a mixture of sources. We give an
estimate of the entropy and entropy dimension of a convex combination of
measures. The proof is based on our alternative definition of entropy in terms
of measures instead of partitions.
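The abstract's estimate has an elementary discrete analogue: for a convex combination of discrete distributions, the entropy of the mixture is bracketed between the average entropy of the components and that average plus the entropy of the mixing weights. The paper works with general measures and entropy dimension; the weights and distributions below are invented only to check this two-sided bound:

```python
import math

def shannon_entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def mixture(weights, dists):
    """Convex combination sum_i w_i * mu_i of discrete distributions."""
    return [sum(w * d[k] for w, d in zip(weights, dists))
            for k in range(len(dists[0]))]

w = [0.3, 0.7]
mus = [[0.9, 0.1], [0.2, 0.8]]
mix = mixture(w, mus)

# Concavity gives the lower bound; the extra H(w) term bounds the cost
# of forgetting which source produced each symbol.
lower = sum(wi * shannon_entropy(mu) for wi, mu in zip(w, mus))
upper = lower + shannon_entropy(w)
h_mix = shannon_entropy(mix)
```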
|
1110.6061
|
A Matricial Algorithm for Polynomial Refinement
|
cs.IT math.IT
|
In order to have a multiresolution analysis, the scaling function must be
refinable. That is, it must be the linear combination of 2-dilation,
$\mathbb{Z}$-translates of itself. Refinable functions used in connection with
wavelets are typically compactly supported. In 2002, David Larson posed the
question on his REU site: "Are all polynomials (of a single variable) finitely
refinable?" That summer the author proved that the answer is indeed yes, using
basic linear algebra. The result was presented in a number of talks but had not
been written up until now. The purpose of this short note is to record that
particular proof.
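The linear-algebra argument can be sketched numerically: writing p(x) = Σ_k c_k p(2x - k) and matching the coefficient of each power of x yields a small linear system for the c_k. The degree-2 polynomial and the translate set {0, 1, 2} below are invented for illustration, not taken from the note:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def p(x):
    return x * x + x + 1

ks = [0, 1, 2]  # integer translates used in the refinement ansatz
# Coefficients of x^2, x, 1 in p(2x - k) = (2x-k)^2 + (2x-k) + 1:
A = [[4.0 for k in ks],
     [-4.0 * k + 2.0 for k in ks],
     [k * k - k + 1.0 for k in ks]]
b = [1.0, 1.0, 1.0]  # target coefficients of p itself
c = solve3(A, b)
```

Here c comes out as (3/4, -7/8, 3/8), i.e. p(x) = (3/4)p(2x) - (7/8)p(2x-1) + (3/8)p(2x-2); the same construction applies to any single-variable polynomial.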
|
1110.6078
|
On the Mathematical Structure of Balanced Chemical Reaction Networks
Governed by Mass Action Kinetics
|
math.OC cs.SY math.DS physics.chem-ph q-bio.QM
|
Motivated by recent progress on the interplay between graph theory, dynamics,
and systems theory, we revisit the analysis of chemical reaction networks
described by mass action kinetics. For reaction networks possessing a
thermodynamic equilibrium we derive a compact formulation exhibiting at the
same time the structure of the complex graph and the stoichiometry of the
network, and which admits a direct thermodynamical interpretation. This
formulation allows us to easily characterize the set of equilibria and their
stability properties. Furthermore, we develop a framework for interconnection
of chemical reaction networks. Finally we discuss how the established framework
leads to a new approach for model reduction.
|
1110.6084
|
The multi-armed bandit problem with covariates
|
math.ST cs.LG stat.ML stat.TH
|
We consider a multi-armed bandit problem in a setting where each arm produces
a noisy reward realization which depends on an observable random covariate. As
opposed to the traditional static multi-armed bandit problem, this setting
allows for dynamically changing rewards that better describe applications where
side information is available. We adopt a nonparametric model where the
expected rewards are smooth functions of the covariate and where the hardness
of the problem is captured by a margin parameter. To maximize the expected
cumulative reward, we introduce a policy called Adaptively Binned Successive
Elimination (abse) that adaptively decomposes the global problem into suitably
"localized" static bandit problems. This policy constructs an adaptive
partition using a variant of the Successive Elimination (se) policy. Our
results include sharper regret bounds for the se policy in a static bandit
problem and minimax optimal regret bounds for the abse policy in the dynamic
problem.
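The static building block can be sketched as a minimal Successive Elimination loop. The arm means, confidence level, and horizon below are invented, rewards are taken noise-free so the run is reproducible, and the paper's ABSE policy additionally partitions the covariate space adaptively:

```python
import math

def successive_elimination(means, rounds, delta=0.01):
    """Play every surviving arm once per round; drop any arm whose
    upper confidence bound falls below the best lower confidence bound.
    Rewards are the (noise-free) means to keep the demo deterministic."""
    active = list(range(len(means)))
    pulls = {i: 0 for i in active}
    total = {i: 0.0 for i in active}
    for _ in range(rounds):
        for i in active:
            total[i] += means[i]  # stand-in for a noisy reward sample
            pulls[i] += 1
        est = {i: total[i] / pulls[i] for i in active}
        # All active arms have equal pull counts, hence a common radius.
        rad = math.sqrt(2.0 * math.log(1.0 / delta) / pulls[active[0]])
        best = max(est[i] for i in active)
        active = [i for i in active if est[i] + rad >= best - rad]
    return active

survivors = successive_elimination([0.9, 0.5, 0.4], rounds=300)
```

With these gaps the confidence radius shrinks below half the smallest gap after roughly 230 rounds, so only the best arm survives.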
|
1110.6089
|
A Universal 4D Model for Double-Efficient Lossless Data Compressions
|
cs.IT math.CO math.IT
|
This article discusses the theory, model, implementation and performance of a
combinatorial fuzzy-binary and-or (FBAR) algorithm for lossless data
compression (LDC) and decompression (LDD) on 8-bit characters. Combinatorial
pairwise flags are utilized as new zero/nonzero and impure/pure bit-pair
operators, whose combination forms a 4D hypercube to compress a sequence of
bytes. The compressed sequence is stored in a grid file of constant size.
Decompression uses a fixed-size translation table (TT) to access the grid file
during I/O data conversions. Compared to other LDC algorithms, double-efficient
(DE) entropies denoting 50% compressions with reasonable bitrates were
observed. Double-extending the usage of the TT component in code exhibits a
Universal Predictability via its negative growth of entropy for LDCs
> 87.5% compression, quite significant for scaling databases and network
communications. This algorithm is novel in encryption, binary, fuzzy and
information-theoretic methods such as probability. Therefore, information
theorists, computer scientists and engineers may find the algorithm useful for
its logic and applications.
|
1110.6097
|
The Decentralized Structure of Collective Attention on the Web
|
cs.IR cs.SI physics.soc-ph
|
Background: The collective browsing behavior of users gives rise to a flow
network transporting attention between websites. By analyzing the structure of
this network we uncovered a nontrivial scaling regularity concerning the impact
of websites.
Methodology: We constructed three clickstream networks, whose nodes were
websites and edges were formed by the users switching between sites. We
developed an indicator Ci as a measure of the impact of site i and investigated
its correlation with the traffic of the site Ai both on the three networks and
across the language communities within the networks.
Conclusions: We found that the impact of websites increased more slowly than their
traffic. Specifically, there existed a scaling relationship between Ci and Ai
with an exponent gamma smaller than 1. We suggested that this scaling
relationship characterized the decentralized structure of the clickstream
circulation: the World Wide Web is a system that favors small sites in
reassigning the collective attention of users.
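The scaling relationship C_i ∝ A_i^γ is typically estimated as a least-squares slope in log-log space; the synthetic traffic data and exponent below are invented to show the mechanics, not taken from the study:

```python
import math

def fit_scaling_exponent(traffic, impact):
    """Ordinary least-squares slope of log(impact) against log(traffic)."""
    xs = [math.log(a) for a in traffic]
    ys = [math.log(c) for c in impact]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic sites whose impact grows sublinearly with traffic (gamma = 0.8).
A = [10.0 * (i + 1) for i in range(50)]
C = [2.0 * a ** 0.8 for a in A]
gamma = fit_scaling_exponent(A, C)
```

A fitted γ < 1 matches the "favors small sites" reading: impact per unit of traffic declines as traffic grows.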
|
1110.6127
|
Optimal Forwarding in Delay Tolerant Networks with Multiple Destinations
|
cs.NI cs.SY
|
We study the trade-off between delivery delay and energy consumption in a
delay tolerant network in which a message (or a file) has to be delivered to
each of several destinations by epidemic relaying. In addition to the
destinations, there are several other nodes in the network that can assist in
relaying the message. We first assume that, at every instant, all the nodes
know the number of relays carrying the packet and the number of destinations
that have received the packet. We formulate the problem as a controlled
continuous time Markov chain and derive the optimal closed loop control (i.e.,
forwarding policy). However, in practice, the intermittent connectivity in the
network implies that the nodes may not have the required perfect knowledge of
the system state. To address this issue, we obtain an ODE (i.e., a
deterministic fluid) approximation for the optimally controlled Markov chain.
This fluid approximation also yields an asymptotically optimal open loop
policy. Finally, we evaluate the performance of the deterministic policy over
finite networks. Numerical results show that this policy performs close to the
optimal closed loop policy.
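The fluid (ODE) approximation of uncontrolled epidemic relaying can be sketched as a forward-Euler integration of the logistic spreading dynamics dx/dt = βx(N - x); the rate β, network size, and horizon are invented, and the paper's controlled version additionally switches forwarding on and off:

```python
def fluid_epidemic(n_nodes=50, beta=0.1, x0=1.0, horizon=5.0, dt=0.01):
    """Forward-Euler integration of dx/dt = beta * x * (N - x),
    the deterministic fluid limit of epidemic message spreading."""
    xs = [x0]
    steps = int(horizon / dt)
    for _ in range(steps):
        x = xs[-1]
        xs.append(x + dt * beta * x * (n_nodes - x))
    return xs

trajectory = fluid_epidemic()
```

The number of message copies grows monotonically and saturates at the network size, which is why energy-aware policies stop forwarding before saturation.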
|
1110.6128
|
Classical Hierarchical Correlation Quantification on Tripartite Qubit
Mixed State Families
|
quant-ph cs.IT math.IT nlin.CD
|
There are a number of ways to formally define complexity. Most of them relate
to some kind of minimal description of the studied object, in the form of the
minimal resources or minimal effort needed to generate the object itself. This
is usually achieved by detecting and taking advantage of regularities within
the object. In an information-theoretic approach, regularities can commonly be
described by quantifying the amount of correlation playing a role in the
system, be it spatial, temporal, or both. This approach is closely related to
the extent to which the whole cannot be understood as only the sum of its
parts but must also be understood through their interactions, a feature
considered to be most fundamental. Nevertheless, this irreducibility, even in
the basic quantum-informational setting of composite states, is also present
due to the intrinsic structure of the tensor product of Hilbert spaces. In
this approach, the irreducibility is quantified based on the statistics of von
Neumann measurements forming mutually unbiased bases, applied to two different
kinds of tripartite qubit mixed-state families, which hold the two possible
distinct types of entangled states on this space. Results show that this
quantification is sensitive to the different kinds of entanglement present in
those families.
|
1110.6161
|
Sum-Rate Optimal Power Policies for Energy Harvesting Transmitters in an
Interference Channel
|
cs.IT math.IT
|
This paper considers a two-user Gaussian interference channel with energy
harvesting transmitters. Unlike conventional battery-powered wireless nodes,
energy harvesting transmitters have to adapt their transmission to the
availability of energy at a particular instant. In this setting, the optimal
power allocation problem to maximize the sum throughput with a given deadline
is formulated. The convergence of the proposed iterative coordinate descent
method for the problem is proved and the short-term throughput maximizing
offline power allocation policy is found. Examples for interference regions
with known sum capacities are given with directional water-filling
interpretations. Next, stochastic data arrivals are addressed. Finally, online
and/or distributed near-optimal policies are proposed. The performance of the
proposed algorithms is demonstrated through simulations.
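As background for the water-filling interpretation, here is the classical (non-causal, single-user) water-filling allocation over parallel channels; the channel gains and power budget are invented, and the paper's directional variant additionally enforces energy causality across harvesting instants:

```python
def water_filling(gains, power, iters=100):
    """Classical water-filling: p_i = max(0, mu - 1/g_i), with the water
    level mu found by bisection so that sum(p_i) equals the budget."""
    inv = [1.0 / g for g in gains]
    lo, hi = 0.0, max(inv) + power
    for _ in range(iters):
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - v) for v in inv)
        if used > power:
            hi = mu
        else:
            lo = mu
    return [max(0.0, mu - v) for v in inv]

p = water_filling([1.0, 0.5, 0.25], power=2.0)
```

Stronger channels sit lower in the "vessel" and receive more power; here the weakest channel gets none.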
|
1110.6188
|
Ranked Sparse Signal Support Detection
|
cs.IT math.IT
|
This paper considers the problem of detecting the support (sparsity pattern)
of a sparse vector from random noisy measurements. Conditional power of a
component of the sparse vector is defined as the energy conditioned on the
component being nonzero. Analysis of a simplified version of orthogonal
matching pursuit (OMP) called sequential OMP (SequOMP) demonstrates the
importance of knowledge of the rankings of conditional powers. When the simple
SequOMP algorithm is applied to components in nonincreasing order of
conditional power, the detrimental effect of dynamic range on thresholding
performance is eliminated. Furthermore, under the most favorable conditional
powers, the performance of SequOMP approaches maximum likelihood performance at
high signal-to-noise ratio.
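The sequential-thresholding idea (one pass over the components in nonincreasing conditional-power order) can be sketched as follows; the dictionary, signal, and threshold are invented, and the atoms are chosen orthonormal so the residual update reduces to a simple deflation:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def seq_omp(atoms, y, order, threshold):
    """One pass over the atoms in the given order: accept an index when
    the residual correlation exceeds the threshold, then deflate the
    residual. Atoms are assumed orthonormal, so deflation coincides
    with the exact least-squares residual update."""
    r = list(y)
    detected = []
    for j in order:
        c = dot(atoms[j], r)
        if abs(c) > threshold:
            detected.append(j)
            r = [ri - c * aji for ri, aji in zip(r, atoms[j])]
    return detected

s3 = 1.0 / math.sqrt(3.0)
atoms = [[1.0, 0.0, 0.0],  # largest conditional power
         [s3, s3, s3],     # smallest conditional power
         [0.0, 1.0, 0.0]]
y = [2.0, 1.0, 0.0]        # = 2*atoms[0] + 1*atoms[2], noiseless
detected = seq_omp(atoms, y, order=[0, 2, 1], threshold=0.5)
```

Processing the truly active components first lets each detection shrink the residual before weaker components are tested, which is the ranking effect the paper analyzes.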
|
1110.6199
|
Enhancing Binary Images of Non-Binary LDPC Codes
|
cs.IT math.IT
|
We investigate the reasons behind the superior performance of belief
propagation decoding of non-binary LDPC codes over their binary images when the
transmission occurs over the binary erasure channel. We show that although
decoding over the binary image has lower complexity, it has worse performance
owing to its larger number of stopping sets relative to the original non-binary
code. We propose a method to find redundant parity-checks of the binary image
that eliminate these additional stopping sets, so that we achieve performance
comparable to that of the original non-binary LDPC code with lower decoding
complexity.
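The role of stopping sets can be seen with a peeling (iterative erasure) decoder: it repeatedly fills in any erased bit that is the only erasure in some check, and it stalls exactly when the remaining erasures form a stopping set. The toy parity-check matrix below is invented for illustration:

```python
def peel(H, erased):
    """Peeling decoder on the BEC: resolve any check with exactly one
    erased position; return the set of erasures left when it stalls."""
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for row in H:
            hit = [j for j, h in enumerate(row) if h and j in erased]
            if len(hit) == 1:
                erased.discard(hit[0])  # value recoverable from this check
                progress = True
    return erased

H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]

recoverable = peel(H, {0, 1})   # check 1 sees a single erasure: decodes
stuck = peel(H, {0, 1, 2, 3})   # every check sees two erasures: stalls
```

The set {0, 1, 2, 3} is a stopping set of this matrix; adding a redundant parity check that intersects it in exactly one position, as the paper proposes for binary images, would remove the stall.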
|
1110.6200
|
TopicViz: Semantic Navigation of Document Collections
|
cs.HC cs.AI cs.CL
|
When people explore and manage information, they think in terms of topics and
themes. However, the software that supports information exploration sees text
at only the surface level. In this paper we show how topic modeling -- a
technique for identifying latent themes across large collections of documents
-- can support semantic exploration. We present TopicViz, an interactive
environment for information exploration. TopicViz combines traditional search
and citation-graph functionality with a range of novel interactive
visualizations, centered around a force-directed layout that links documents to
the latent themes discovered by the topic model. We describe several use
scenarios in which TopicViz supports rapid sensemaking on large document
collections.
|